How to Convert LONG to Clob PART 2

Hi,

I just want to write a small post today. Some time ago, I shared a way to convert LONG to CLOB directly in a query: https://mustafakalayci.me/2020/04/06/how-to-convert-a-long-column-to-clob-in-a-query/ Even though that method works, it is really slow, and anything slow annoys me!

So, I want to share another method which is much, much faster than the Sys_dburigen function, but THIS IS UNDOCUMENTED and UNSUPPORTED! It is used internally by Oracle, so it works fine, but you must be careful. Since no one uses LONG columns in their applications (right? you don't! please don't!), this kind of code is mostly handy for admins, because the data dictionary still has some LONG columns for backward compatibility.

What we are going to use is a LIBRARY called UTL_XML_LIB. I almost never create objects under the SYS user, so I will create this code under a privileged admin user. First we must grant privileges on the UTL_XML_LIB library and then create an EXTERNAL procedure in PL/SQL.
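The grant itself is a one-liner (I am granting to the "mustafa" user used throughout this post; use your own admin user):

    grant execute on sys.utl_xml_lib to mustafa;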

now create external procedure:
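The declaration is roughly like the sketch below. Be warned: the PARAMETERS mapping is my reconstruction of how Oracle declares it internally and an assumption on my part, so double check it on your own system before relying on it.

    create or replace procedure my_long2clob (
        p_qry   in varchar2,     -- select statement returning the LONG column
        p_rowid in rowid,        -- rowid of the row to read
        p_clob  in out clob)     -- CLOB data comes back here
    is language c
       library sys.utl_xml_lib
       name "kuxLong2Clob"
       with context
       parameters (context, p_qry string, p_rowid ocirowid, p_clob ociloblocator);
    /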

"kuxLong2Clob" is a function which returns void (basically it is a procedure), written in C by Oracle. This procedure takes a select statement for your LONG column and a rowid, and returns the CLOB data as an out parameter. I created the my_long2clob procedure, but to be able to call it from a SQL statement it must be a function, so:
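A sketch of that wrapper function, matching the f_my_long2clob(rowid, table, column) call used later in this post; I am assuming the external procedure applies the rowid itself, so the query only names the column and table:

    create or replace function f_my_long2clob (
        p_rowid      in rowid,
        p_table_name in varchar2,
        p_column     in varchar2) return clob
    is
        v_clob clob;
    begin
        dbms_lob.createtemporary(v_clob, true);
        -- build the select for the LONG column; the rowid is passed separately
        my_long2clob('select ' || p_column || ' from ' || p_table_name, p_rowid, v_clob);
        return v_clob;
    end;
    /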

To test it, I won't create a new table with a LONG column; instead I will use one from the data dictionary (which is what I use this kind of code against anyway). Let's use the SYS.TRIGGER$ table, which is the base table for the DBA_TRIGGERS view; its ACTION# column is a LONG. Of course, we must first grant select privilege on SYS.TRIGGER$ to the mustafa user.

I suppressed the output, but there it is: a really fast LONG to CLOB conversion in SQL.

As a CON, it will only work against tables, not VIEWs! You cannot call it as "f_my_long2clob(rowid, 'DBA_TRIGGERS', 'TRIGGER_BODY')". Since it requires a ROWID, you must refer to the underlying table and its rowid.

Beyond that, if you want to get a LONG as VARCHAR2, you can similarly use the "kpdbLong2Varchar2" procedure in the DBMS_PDB_LIB library. Like UTL_XML_LIB, it is very fast. The only difference is that it returns VARCHAR2 and trims the overflowing part. Where is it in use? Check how DBA_VIEWS produces the TEXT_VC column data 😉

I hope this helps in your fight against LONG columns in your code.

Wish you all healthy, happy days.

Zero Data Loss DISASTER Database for Standard Edition 2

Hi,

As you know, Data Guard is an Enterprise Edition option and the only accepted method to protect your data against a "DISASTER" (DR) is Data Guard. How about Standard Edition 2 (SE2)? Is this possible? Well, I did it, so kind of yes 🙂 with some risk, of course.

Warning: I don't take any responsibility for wrong settings. Wrong actions can damage your database. Please be careful.

If you google Oracle SE2 Data Guard or Manual Log Shipping, you will find many blog posts and documents about it. Even YouTube has videos about this, but almost all of them (at least all I've seen and read) are not "Zero Data Loss". The steps are quite easy:

  1. you have a production database based on SE2
  2. you use another server as your DR server (it could even be the same server if you use the duplicate snapshot method)
  3. copy all your backups
  4. restore the database and keep it in MOUNT mode
  5. copy your archivelogs every X minutes to the DR server
  6. recover the database using rman or sqlplus.

And that is it. This is a valid method. The only GAP here is that you are transferring your archive logs but not your redo logs. This means you will certainly lose some data in case of a disaster, because the changes in the redo logs will be lost (if you cannot reach the server). So, what can we do? The basic answer is copying the redo logs too, but how? Redo logs change constantly. Even if you copy the redo logs at a 1 minute interval, you can still lose 1 minute of data.

To achieve this, you must force the database to write both its redo logs and archive logs onto the DR server. Not copying files at an interval; force the database to write them there. Probably you already understood what I did. To use this kind of method you should (must) have a strong, reliable, stable, fast network connection between your production and disaster servers, because we will send redo logs and archivelogs to the DR server instantly.

I am using 19.15 databases on Oracle Linux 8 (VirtualBox). The same things can be done on other operating systems too; the logic is the same.

Here are the steps that I am about to do:

  1. On the DR server, create a folder to share over the network via NFS.
  2. On the PROD server, mount this shared folder.
  3. On the PROD server, add an archive log destination pointing to this shared folder.
  4. On the PROD server, add redo log members for each group on this shared folder.
  5. On the DR server, create a copy of the prod database via rman (not duplicate).
  6. On the DR server, catalog the archive logs which reside on the shared folder and recover the database.
  7. In a disaster situation, copy the redo logs to their original locations with their original names and recover the database.

that’s it. Let me demonstrate those steps.

I created 2 virtual machines. Both have the Oracle DB software installed, one of them (PROD) has a database, and I will use the other virtual machine as DR.

  1. Oracle home is under /u01/app/oracle/product/19c/dbhome_1
  2. I (always) use OMF (Oracle Managed Files).
  3. data files are under /u01/app/oracle/oradata
  4. recovery area is under /u01/app/oracle/fast_recovery_area
  5. SID of database is cdb
  6. PROD server IP is 192.168.56.101
  7. DR server IP is 192.168.56.104

1- On the DR server, create a folder to share over network via NFS

As the oracle user (the Oracle DB software owner), create the necessary directories.

This is the folder that I will share over the network and mount (map) on the production server.

To share it over the network, as the root user:
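Something along these lines (IP addresses per the setup above; the NFS export options are my usual choices, adjust them to your own standards):

    # on DR (192.168.56.104), as root: export /u01/PROD_LOGS to the PROD server
    echo '/u01/PROD_LOGS 192.168.56.101(rw,sync,no_root_squash)' >> /etc/exports
    systemctl enable --now nfs-server
    exportfs -ra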

2- On the PROD server, mount this shared folder

As the oracle user, create a directory on which to mount the shared directory from DR. /u01/app/oracle/fast_recovery_area/DR_LOGS will be used as the mount point; the directories in it are for OMF.

as root user:
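A sketch of the mount (DR IP and paths as listed above):

    # on PROD, as root: mount the DR share on the local mount point
    mount -t nfs 192.168.56.104:/u01/PROD_LOGS /u01/app/oracle/fast_recovery_area/DR_LOGS

    # optional: make it permanent with an /etc/fstab line like
    # 192.168.56.104:/u01/PROD_LOGS  /u01/app/oracle/fast_recovery_area/DR_LOGS  nfs  defaults,_netdev  0 0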

At this point, if I put a file into the /u01/app/oracle/fast_recovery_area/DR_LOGS directory on the PROD server, it will be written to /u01/PROD_LOGS on the DR server.

3- On the PROD server, Add an archive log destination to the shared folder

By default with OMF, if there is no value for the log_archive_dest_n parameters, Oracle internally uses the log_archive_dest_1 parameter as LOCATION=USE_DB_RECOVERY_FILE_DEST. That way, archivelogs are written under the /u01/app/oracle/fast_recovery_area directory, with a sub directory structure of course (/u01/app/oracle/fast_recovery_area/<SID>/archivelog/YYYY_MM_DD).

But! If you assign a value to any of those parameters, then Oracle will use only those locations to create archivelogs. So I will set 2 log_archive_dest_n parameters: one location on the PROD server and one on the shared folder.

By setting these 2 parameters (log_archive_dest_1 and 2) we make sure that Oracle creates 2 copies at the same time in 2 different locations, and one of them is actually our DR server. As you see, there is one more parameter which is very important: log_archive_min_succeed_dest. By default it is 1, and if you don't set it to 2, then in an access-problem situation (the network might have issues or the DR server could be shut down) Oracle will just write to the first location and skip the inaccessible one. This would mean the necessary archive logs are not written to the DR server. If we want a "zero data loss" DR db, then Oracle must write the archive log to the DR server (you can also use the MANDATORY clause).
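The parameter changes are basically these (run on PROD as sysdba; instead of log_archive_min_succeed_dest you could mark destination 2 as MANDATORY):

    alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both;
    alter system set log_archive_dest_2='LOCATION=/u01/app/oracle/fast_recovery_area/DR_LOGS' scope=both;
    alter system set log_archive_min_succeed_dest=2 scope=both;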

4- On the PROD server, Add redo log members for each group to the shared folder

As with the archive logs, we need to do the same thing for the redo logs too.

As with the log_archive_dest_n parameters, if we set any of the db_create_online_log_dest_n parameters, we need to set all three of them: 2 locations on the PROD server and 1 location on the DR server. This is not actually a mandatory step, but it is good practice. We could just add redo log members to the shared location (the ADD LOGFILE MEMBER commands in the sketch below) and that would be enough, but if one day you need to create another log group and forget to add a member on the shared folder, you could be exposed to data loss! So set it anyway.

Then, add 1 member for each log group on the shared folder. I use standard naming here because I need to be able to tell the group of each file from its name (we will use this later).
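A sketch of those commands (group numbers 1-3 are just my example, check V$LOG for your own groups, and the CDB/onlinelog sub directory must already exist on the share):

    -- online log destinations: 2 on PROD, 1 on the shared folder
    alter system set db_create_online_log_dest_1='/u01/app/oracle/oradata' scope=both;
    alter system set db_create_online_log_dest_2='/u01/app/oracle/fast_recovery_area' scope=both;
    alter system set db_create_online_log_dest_3='/u01/app/oracle/fast_recovery_area/DR_LOGS' scope=both;

    -- one extra member per group on the shared folder, named by group number
    alter database add logfile member '/u01/app/oracle/fast_recovery_area/DR_LOGS/CDB/onlinelog/redo1.log' to group 1;
    alter database add logfile member '/u01/app/oracle/fast_recovery_area/DR_LOGS/CDB/onlinelog/redo2.log' to group 2;
    alter database add logfile member '/u01/app/oracle/fast_recovery_area/DR_LOGS/CDB/onlinelog/redo3.log' to group 3;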

Lastly, I create a pfile to move to the DR server, which we will use to create our DR database.

5- On the DR server, create a copy of prod database via rman

On the PROD server, the database "cdb" is up and running. Let's start:

I don't want to go into all the details of standard backups and restoring a database, so I won't put the outputs here.

On the PROD server, take a backup of the database via RMAN and then copy everything to the DR server via scp.

from shell:
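Roughly like this (use whatever backup strategy you already have; this is only a sketch):

    # on PROD, as the oracle user
    rman target / <<EOF
    backup database plus archivelog;
    backup current controlfile;
    EOF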

Copy all backups and archivelogs, and also the pfile.
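A sketch of the copy, assuming the backups went into the FRA and the pfile was created under /tmp (if your copy brings the FRA online log directory along too, that is what gets cleaned up on the DR side below):

    scp -r /u01/app/oracle/fast_recovery_area/CDB oracle@192.168.56.104:/u01/app/oracle/fast_recovery_area/
    scp /tmp/initcdb.ora oracle@192.168.56.104:/tmp/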

Then go to the DR server, create the necessary directories and restore your database:

These directories will be needed by the database. Restore the database:
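One way to do it, as a sketch; I am assuming a controlfile autobackup came over with the FRA copy (otherwise restore the controlfile from a specific backup piece):

    # on DR, as the oracle user
    mkdir -p /u01/app/oracle/oradata /u01/app/oracle/fast_recovery_area /u01/app/oracle/admin/cdb/adump

    export ORACLE_SID=cdb
    rman target / <<EOF
    startup nomount pfile='/tmp/initcdb.ora';
    restore controlfile from autobackup;
    alter database mount;
    catalog start with '/u01/app/oracle/fast_recovery_area/CDB' noprompt;
    restore database;
    EOF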

At this point we have a copy of the prod database on DR, in MOUNT mode.

One very important thing to do:

DO THIS ON THE DR SERVER ONLY! We are deleting any copied redo logs, because while the database is in recovery mode there must not be any redo logs!

6- On the DR server, catalog archive logs, which resides on shared folder and recover database

From now on, you can create a recover.sql script and call it every N minutes from cron to recover the database regularly. The content of recover.sql is:
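A minimal sketch of such a script (I would run it with "rman target / cmdfile=recover.sql" from cron; /u01/PROD_LOGS is the shared folder, as explained just below):

    # recover.sql - catalog newly arrived archivelogs, then apply them
    catalog start with '/u01/PROD_LOGS' noprompt;
    recover database;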

We must catalog the newly arrived archivelogs first and then recover the database. I want to remind you that the /u01/PROD_LOGS directory is the one we shared over the network and mapped onto the prod server.

every recovery command will end up with an ERROR:

This is expected behavior. Since we don't apply redo logs, Oracle looks for the next archive log file in the sequence but cannot find it. Our recovery is working. After the archive log with sequence 18 arrives, the recover command will raise the same error as above for sequence 19.

7- In a disaster situation, copy redo logs to their original location with original name and recover database

From now on, our archive and redo log files are stored on the DR server too. In a disaster situation, you must open the database applying all logs, including the redo logs. This is the only moment at which we apply the redo logs.

these are the redo log locations on PROD:

/u01/app/oracle/oradata/CDB/onlinelog
/u01/app/oracle/fast_recovery_area/CDB/onlinelog
/u01/app/oracle/fast_recovery_area/DR_LOGS/CDB/onlinelog (this one is the network directory on DR server)

The actual directory the redo files are written to is /u01/PROD_LOGS on the DR server. Those directories must also exist on the DR server (we already created them in the scripts above). We must now copy the redo logs under /u01/PROD_LOGS to their actual locations. I deliberately named the redo logs redoN.log so that I can identify which redo file belongs to which redo group. A very simple sql script and shell script will copy the redo logs to their original locations:

Calling the put_redo_logs.sh file runs a sql script which produces the copy shell commands.

After that, you can run recover.sql one more time; this time RMAN will not raise any error and you will see the "Recovery Completed" message. Now you can open your database and use it.

Now, why did I delete the redo log files that came over with the copy on the DR server? If you apply a redo log in mount mode, the control file will raise some flags and all subsequent recover commands will start from the sequence of the applied redo log. This causes problems after some time, because archivelogs will eventually be deleted by the retention policy. It also means that if you apply a redo log during recovery, then to have a healthy copy of the database again you must restore both the control file and the database all over again.

Pros

Obviously, you have a DR db with fully consistent data.

Cons

This approach carries high risk, because if the database cannot write an archived or redo log to the shared folder, the production database will hang! You might have network issues, the DR server might be restarted or shut down, you might lose the disk drives on the DR server which the shared folder resides on, etc. etc…

I will write another post about those situations and run some tests. If losing a small amount of data for a short time is acceptable, then a refreshable pluggable database would still be a better option. I will also write a post about that.

If you don't have a reliable network, this option could be catastrophic! So try it carefully. I hope this helps in some way.

Wish you all healthy, happy days.

How to Restore Dropped PDB

Hi,

It seems easy, right? Well, not that much, but not that hard either. If you drop a pdb in a cdb, you cannot restore it using the current database, because when you drop the pdb, all information about it is lost. So, what is the solution? You need a point-in-time recovered duplicate database.

I have a cdb named CDB and had a pluggable database named PDB1. This is a 21c database and the installation of CDB is pretty standard.

db_create_file_dest     = /u01/app/oracle/oradata
db_recovery_file_dest = /u01/app/oracle/fast_recovery_area

Let's say I dropped it by mistake and need to recover it:

I created the pdb and created a table in it. I also noted the time, because it will be required while restoring the pdb. Now, let's drop it by mistake:

I am not including the outputs for now. Okay, now I have lost my pdb. To restore it, we must create a duplicate database, recovered until a specific time. Step by step:

1- create a new pfile for the duplicate database (let's call our new db "dup")
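The pfile only needs a handful of parameters; something like this (paths per the settings listed above, the memory and FRA sizes are just my own picks):

    # $ORACLE_HOME/dbs/initdup.ora
    db_name='dup'
    db_create_file_dest='/u01/app/oracle/oradata'
    db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
    db_recovery_file_dest_size=10G
    sga_target=1500M
    enable_pluggable_database=true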

2- create password file:

3- startup the instance

4- use rman and create duplicate database
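One way to run it, as a sketch: connect to the source cdb as target and to the new dup instance as auxiliary, then duplicate until the time noted earlier (the timestamp below is a placeholder, replace it with yours):

    # from the shell (ORACLE_SID=dup, so "auxiliary /" hits the new instance)
    rman target sys@cdb auxiliary /

    # then, inside RMAN:
    duplicate target database to dup
      until time "to_date('2021-11-10 14:00:00','yyyy-mm-dd hh24:mi:ss')";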

You can get different error messages at this point, but they all have enough information to solve the problem. If you get this error:

RMAN-05501: aborting duplication of target database
RMAN-05657: There are multiple database IDs present.

then you must set the DBID before running the duplicate database command.

5- after the duplicate database command completes (which will take a while), your new database called "dup" will be ready and in mount mode.

At this point, if you query the pdbs (containers), a strange name will appear: "_###_UNKNOWN_PDB_#_4" (4 is the pdb number, so it might be different on your system). After opening your database with resetlogs (this is a point in time recovery) it will be converted to the correct name.

Here it is. After this point we have 2 container databases on the server, cdb and dup, so we can simply unplug pdb1 and plug it into the CDB database again.

I use "move" while plugging it into the CDB database because I want its datafiles moved to the db_create_file_dest location of the CDB database.
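The unplug/plug part is basically this (the XML manifest path is my choice):

    -- on DUP: unplug pdb1 and drop it, keeping the datafiles
    alter pluggable database pdb1 close immediate;
    alter pluggable database pdb1 unplug into '/tmp/pdb1.xml';
    drop pluggable database pdb1 keep datafiles;

    -- on CDB: plug it in, moving the datafiles under CDB's db_create_file_dest
    create pluggable database pdb1 using '/tmp/pdb1.xml' move;
    alter pluggable database pdb1 open;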

That's it. I find this solution a little bit tiresome and I believe future releases will have easier solutions, but we can still complete what we need in a few commands.

thanks for reading, I hope this helps.

wish you all healthy days.

CONTAINER_DATA for Common Users

Hello Everyone (if anyone is reading this 🙂 ),

It has been a long year and I am glad it is over. As always, we pin our hopes on a period of time like a new year, a birthday etc., but hope is the only thing we can hold on to. I hope the whole world gets rid of these illnesses called corona, racism, wars (and again etc. etc., the list goes on)… Wish everyone a healthy, peaceful year.

On my personal blog I have mentioned a variety of things, and I realized that I haven't mentioned the container database structure at all. The main reason is that I was lazy about multi-tenant architecture and didn't work on it; yeah, pretty much that was the reason 🙂

I wanted to start with the common user's container_data attribute (or property, I am not sure). As you know, in the multi-tenant architecture there are two types of users: COMMON and LOCAL. Common users are defined in the CDB (not in a pluggable database) and they are known by every current and future pdb. Local users are defined in a pluggable database and can access only the pluggable database they are defined in. By saying "defined", I mean "created". So, local users are not aware of other pluggable databases or the container database (cdb), but common users are aware that they are in a CDB and can connect to the PDBs in that CDB if they have the privileges.

By default, a newly created common user does not see the whole data in container data views or tables. What are "container data views"? As you have noticed (even if you use a non-cdb 12c or above database), there are some new columns on data dictionary and v$ views, like CON_ID. In a multi-tenant database, when a privileged user like SYS queries V$SESSION, that user can see all sessions in all containers (cdb$root and pdbs). Let's see:

I will be running these queries on version 21.3. I have 3 PDBs in my CDB:

These are just dummy PDBs that I use for testing purposes on a VM. Now let's connect as SYS and query some views:

I connected as SYS and I am in the cdb$root container. I queried the V$SESSION and V$DATAFILE views grouped by the con_id column. I am able to see sessions in different PDBs; in my example, con_id 3 and 5 have 1 session each. The V$DATAFILE query shows the count of data files per PDB.

Now, let's create a common user, give it the necessary privileges and query with that user:
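The user creation is nothing special (the password is a placeholder):

    -- in cdb$root
    create user c##my_admin identified by "Password_123" container=all;
    grant dba to c##my_admin container=all;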

I created the C##MY_ADMIN user and granted DBA for all containers. So, this user can connect to any pdb and query any data dictionary view. For example, c##my_admin can connect to pdb1, pdb2 etc. and query v$session in all of them, and it will see all sessions for the current pdb it is in. The problem is that when c##my_admin is connected to the root (cdb$root), it won't be able to see all sessions in the CDB (including PDB sessions), just the ones in cdb$root:

As you see, only the sessions in con_id 1, which is cdb$root, are listed. By the way, con_id 0 is the CDB itself; only background processes are in there and they are listed in v$session all the time.

Again, v$datafile lists only the files in the current container, which is cdb$root. Please note that c##my_admin is a DBA in the CDB, so it can connect to any PDB and query its data; it has the necessary privileges for that. It just doesn't have the "privilege" to query the PDBs' data from the CDB$ROOT container.

That is what CONTAINER_DATA does. It allows a common user to query some or all pdbs' data in cdb$root. You must add the necessary pdbs to the common user's container_data property:
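This is the command I mean (pdb names and con_ids per my environment above):

    -- as a privileged user, connected to cdb$root
    alter user c##my_admin
      set container_data = (cdb$root, pdb1, pdb2)
      container = current;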

This alter must be done by a privileged user in the cdb$root container. Here, I set the container_data of the c##my_admin user to PDB1, PDB2 (con_ids 3 and 4) and CDB$ROOT.

There are a few important points here:

  1. this command must be run in cdb$root.
  2. CDB$ROOT must be in the container_data list, because it is the root and the common user will query the data in it, so it must be able to see it.
  3. CONTAINER=CURRENT is mandatory, because the default is ALL and this command can only be run in cdb$root, which is "current".

So, c##my_admin will be able to see all sessions in v$session for cdb$root, pdb1 and pdb2 (but not PROXY_PDB, whose con_id is 5). Let's see:

So, c##my_admin is able to query other pdbs' data in cdb$root. There is no session in pdb2, by the way; that is why there is no session for con_id 4 in v$session.

Instead of giving access to all container data views, you can specify just the necessary view, like this:
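For example, resetting first and then adding container data only for V_$SESSION:

    alter user c##my_admin set container_data = default container = current;

    alter user c##my_admin
      set container_data = (cdb$root, pdb1, pdb2)
      for sys.v_$session
      container = current;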

Setting container_data to default resets everything. Now I add container_data info just for the V_$SESSION view (which is the base view for v$session; you cannot grant on v$session) and query v$session and v$datafile again with c##my_admin:

Here it is. We can see session rows for other pdbs in v$session but not in v$datafile.

if you want to set all pdbs at once:

You can use the ALL clause for the pdb list. Remember, when you use SET as in "ALTER USER … SET CONTAINER_DATA", it overwrites the previous container_data property. If you want to add a new PDB to the container_data list, you can use ADD instead of SET:
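A sketch of both forms (the ADD example adds PROXY_PDB to the V_$SESSION setting from above):

    alter user c##my_admin set container_data = all container = current;

    alter user c##my_admin
      add container_data = (proxy_pdb)
      for sys.v_$session
      container = current;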

You can query who has access to what using CDB_CONTAINER_DATA data dictionary view.

I hope this is helpful; any comments are appreciated.

Thanks for reading, wish you healthy days.

How to Demonize Apex V Function!

Hi,

The answer to the question in the subject is "using too much V in SQL statements". Let's see why, and what you should do.

First things first, I am NOT an APEX developer. I am a DBA and database developer, but some of my clients use APEX and I deal with many sql statements that use the V function in apex. Since day 1, I have not liked this V function, because developers use it in very different ways and it returns VARCHAR2 but is used for all kinds of comparisons without any conversion functions (that is not the V function's fault, of course).

So, what is the problem here? In my clients' code, the V function is used heavily in sql statements. Some queries that I dealt with led me to investigate the V function, because even when a query has no other performance issue, it still takes too long to complete. When I checked the V function, I found out that it is not DETERMINISTIC (which it shouldn't be, because of the logic of DETERMINISTIC, but I will talk about this later). Let's do some tests, but first I must explain a few things. I hacked the code of V a little bit and added some logging code to demonstrate how it works and its performance issues, but I won't explain how I did it (probably many of you can do it easily) or share its source code. Whenever the V function is called, my logging code writes the parameter name of the V call into a "my_log" table.

I added my log code, set an apex session in my sql developer session and ran these selects:
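They were along these lines (OBJECT_ID is just the item I use in the examples):

    delete from my_log;

    select v('OBJECT_ID'), v('OBJECT_ID'), v('OBJECT_ID'), v('OBJECT_ID') from dual;

    select count(*) from my_log;   -- 4: one call per occurrence in the select list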

I delete the log table first, then call a select statement with four V functions in the select list, run over the DUAL table, and then check the log table: as you see, V is called 4 times. So the same data is read 4 times and 4 plsql function calls occur. This causes too many context switches between SQL and PLSQL, which can reduce your performance dramatically. The Where clause especially is extremely dangerous. Why? Because the V function might run for every row in the table, which is a disaster.

As you can see, this query called the V function 79,647 times. Why? Because there are 79,645 rows in the TMP table. Since I use V in the Where clause, the condition ( object_id = v('OBJECT_ID') ) is executed for every row in the table (79,645 rows) and only 1 row meets the condition, so the select list runs 1 time, and it contains 2 more V calls (the same as the one in the where clause), so in total V is called 79,647 times. I hope you see the problem here. If you use the V function in the sql statements of your application code, it might be called far too many times!

How about an index? yep that will help:

So it ran only 3 times in this execution, because this time the search operation (the where condition) ran on an index, not the table, so the V function was called 1 time for the where clause and 2 times for the select list. This means you should have indexes at least on your condition columns.

Is that enough? Let me show you some pseudo code from a database:

So many identical V calls in one select statement. Creating an index will help with the where clause, but how about the select list? Since every item in the select list causes a function call (even with the same parameters), let's test the performance:

Calling this query 10,000 times (or you can think of it as calling the sql once for 10,000 rows) takes 3.2 seconds (20 identical V calls). If we change the code to this:

this query takes only 0.5 seconds. These timings might seem insignificant, but think about 1,000 different queries that run on your database 1,000 times per day and calculate the time spent just calling the V function. It is a lot! A significant amount of your db time might be wasted.

Let’s improve the code a little bit more:

Even better. If you increase the number of calls, you will see that the timings of the first 2 plsql blocks increase linearly, but the last one (the code just above) gives you almost the same performance. Remember, V is a function, not a variable. Use variables instead of V if you can.
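What I mean by using a variable, as a rough sketch over the same TMP example: read the item once in plsql, then use it as a bind inside the sql.

    declare
        l_object_id number;
        l_cnt       number;
    begin
        l_object_id := to_number(v('OBJECT_ID'));   -- single V call, outside the SQL
        for i in 1 .. 10000 loop
            select count(*) into l_cnt
              from tmp
             where object_id = l_object_id;         -- bind variable, no V calls inside SQL
        end loop;
    end;
    /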

Even if these changes help your system, the main problem is still there: V is not DETERMINISTIC. You might change your code as I suggested above or create more indexes, but many developers also use V in VIEWS, which is another and, for me, a bigger disaster: it is harder to modify the code, you cannot use bind variables, etc… What would happen if V were deterministic? I modified V and made it deterministic; let's see the result with the first test case, without an index:

Pay attention: there is no index, but V is called just 2 times (1 for the where clause and 1 for the select list, even though the select list has 2 copies). It was 79,647 without DETERMINISTIC, if you remember the first result.

So making V deterministic would solve almost every kind of problem, but the issue is that what I am looking for here is not "deterministic". Why? Let me explain what deterministic is first. If a function always returns the same value for the same parameters, then it is a deterministic function, like TO_CHAR or SUBSTR. This allows Oracle to know that if calls with the same parameters are repeated, the result can be cached and the first calculation reused for subsequent calls. That way, if the same function is called with the same parameters in a SQL statement, the function is called only 1 time. But this is not V. The result of V is not deterministic, because the application might change the value of an item, which means a V call with the same parameter can return different results; it just returns a context variable's value.

So it shouldn't be deterministic, but it should be something that acts like deterministic for the duration of a sql statement. I don't think any apex developer wants to see V values change during a select statement; I think they assume the values of V won't change during the sql statement. So, there should be something like DETERMINISTIC_FOR_CALL in the Oracle database. We should be able to define plsql objects as "deterministic" while they are used in a sql statement: their result should be calculated once and reused during that sql call. I think this is necessary for any plsql object that we can call from SQL, because the current structure of plsql kind of violates consistency. A select statement returns rows as they were when the sql call started; even if the rows are changed during the sql call, the result of the statement won't be affected by those changes. PLSQL breaks that: when a function is calculated for every row, a select statement inside the plsql object can return different results, because it is affected by ongoing changes. This is a database feature; the database should provide this functionality, the Apex team has nothing to do about it.

By the way, I am really frustrated about the data type conversions around V. Since V returns VARCHAR2, many developers don't care about type conversions. If you compare a V value with a number, then you should use "TO_NUMBER(V(…))" for example, or at least (as far as I have found) the NV function (the number result of V). This is another issue for me.

For now, I suggest avoiding V in SQL statements as much as possible and using bind variables if you can. Let's say you have some SQL with many V calls with the same parameter; what can you do? You can "force" Oracle to cache them:

There are better ways to write this sql, I just want to demonstrate one. As you see, this query takes 3.26 seconds to complete because of too many V calls. You can rewrite your query, cache the V results first and then use them like this:

I am adding a NO_MERGE hint to prevent the optimizer from merging the queries before the query runs. As you see, this version returns in 0.19 seconds, 15 times faster, because the V functions are called only 2 times (2 different V items are selected, OBJECT_ID and USER_NAME).

Edit:

Another, and maybe better, solution for this problem is using your own V function and creating it as deterministic:
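A sketch of what I mean; just a thin deterministic wrapper around the standard V:

    create or replace function my_v (p_item in varchar2) return varchar2
        deterministic
    is
    begin
        -- same value as V, but declared deterministic so repeated calls with the
        -- same parameter inside one SQL statement can reuse the first result
        return v(p_item);
    end;
    /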

This is faster than the previous join method. The only thing you need to do is replace the V calls with MY_V.

I hope this helps you on your systems. Now, after those tests, I must contact my clients and explain that changing their code will gain them significant db time.

wish you all healthy days.

Hierarchical Blocking Session List

Hi,

Today I needed to list all blocking sessions hierarchically. As you know, all information about sessions (and blocking sessions) is in the gv$session view (or just v$session if you are using a single instance). I wanted to see who is blocking whom, with the blocker at the top. You can write a simple connect by (or with clause) query, but it will list the blockers at the bottom, so I developed it a little further. If I remember correctly there was a small script under rdbms/admin, but it was creating some tables etc. I just wanted to get this with a simple sql statement.

here is the sql:
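This is a sketch of that query; it is written against the gv$ views (use v$session and v$sqlarea on a single instance):

    with sessions as (
           select inst_id, sid, serial#, username,
                  blocking_instance, blocking_session, sql_id
             from gv$session),
         blockers as (
           -- sessions that block at least one other session
           select distinct s.inst_id, s.sid
             from sessions s
             join sessions w
               on w.blocking_instance = s.inst_id
              and w.blocking_session  = s.sid),
         blocker_hierarcy as (
           -- build the tree starting from every blocker
           select rownum sort_order, level lvl,
                  connect_by_root s.blocking_session root_blocking_session,
                  s.*
             from sessions s
            start with (s.inst_id, s.sid) in (select b.inst_id, b.sid from blockers b)
          connect by nocycle prior s.inst_id = s.blocking_instance
                         and prior s.sid     = s.blocking_session)
    select lpad(' ', (h.lvl - 1) * 3) || h.sid as blocking_tree,
           h.inst_id, h.serial#, h.username,
           substr(q.sql_text, 1, 40) as current_sql
      from blocker_hierarcy h
      left join gv$sqlarea q
        on q.inst_id = h.inst_id
       and q.sql_id  = h.sql_id
     where h.root_blocking_session is null   -- drop blockers that are themselves blocked
     order by h.sort_order;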

First, I get all blockers' sids (the blockers data source in the query), then I use them to generate the hierarchy (the blocker_hierarcy data source in the query), and then I remove some blockers from the hierarchy because they are themselves blocked by other sessions and will be listed under them.

For example, let's say sid 10 is blocking 2 different sessions (sids 20 and 30) and sid 30 is in turn blocking another session (sid 40). This query will produce a result like this:

You can see the hierarchy in a visual way, along with the first 40 characters of each session's currently running sql statement.

Hope this will be useful for you.

wish you healthy days.

 

Subquery Caching

Hi,

I want to talk about a very sweet performance "trick". I love it when I do something small and gain a lot 🙂 Today I will demonstrate subquery caching in Oracle and try to explain how it works and its limitations.

You are probably using some plsql functions in your select (or dml) statements. There is always a package that contains functions which calculate stuff or return some constant strings like a "base url" or "environment name" etc. Some of those functions have parameters too, and we call them from SQL. Calling a user defined plsql function from a sql statement is a performance problem, because it causes a lot of context switching. Let's say you have a function that returns a constant string. To make the performance difference visible, I will add a sleep call to the function so the problem is easier to see.
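The function is roughly this (the 1 second sleep stands in for an expensive calculation, and the returned string is just an example "base url"):

    create or replace function sf_base_url return varchar2
    is
    begin
        dbms_lock.sleep(1);                 -- simulate expensive work
        return 'https://mustafakalayci.me'; -- some constant "base url"
    end;
    /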

This is our function. You need an execute grant on the DBMS_LOCK package to create it. Now we will call this function from sql statements; let's create a table and then call the function:

When I call the sf_base_url function in the first select statement (2 rows) it takes 2 seconds, and in the second select statement (with 4 rows) it takes 4 seconds. This proves that our function is called as many times as the number of rows returned by the query. Do you see the problem here? This function always returns the same result, but it is called more than once; calling it just 1 time would be enough. Think about it on your databases: you have a business function that takes 10 seconds to complete and it is called in a sql statement which returns 1000 rows. You will lose 9990 seconds (we must call it at least once, so 10 seconds must be spent) calling your function unnecessarily.

This is where subquery caching comes in. We can tell Oracle to cache the function result and reuse that value. How we do it is a little bit interesting to me 🙂 We just use a subquery to call the function.
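Like this, for example (I am assuming the test table is called t, as later in the post, and selecting t.* since the column list doesn't matter here):

    -- direct call: sf_base_url runs once per row (4 times, ~4 seconds)
    select t.*, sf_base_url as base_url
      from t
     where rownum <= 4;

    -- scalar subquery: the result is cached, the function runs only once (~1 second)
    select t.*, (select sf_base_url from dual) as base_url
      from t
     where rownum <= 4;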

As you see, I changed the sf_base_url call to a subquery selecting the function value from dual. The main query returns 4 rows (rownum <= 4) but it takes only 1 second! Our function is called just 1 time and its result is reused for the subsequent rows. That is such a small change and a very nice performance gain.

So far so good. How about a function with parameters? Let's change the function a little bit and give it 1 parameter, returning the parameter appended to the end of the base url.

Let's call this from sql and pass a column value as the parameter:

it takes 4 seconds as expected. Now, subquery caching:

It also takes 4 seconds. Subquery caching didn't work? Of course it did; it is used, but this time our function is cached together with its parameter. Since every parameter value is different, our function is called for each different parameter and every row has a different my_string column value. We would see the performance gain if we had repeating values in our table. Let's do it:

Now our table has 3 rows of each letter. Let's run the subquery caching version again.

Now it returns 12 rows (3 of each letter) but it still takes 4 seconds, because our function is called only one time per distinct parameter and the repeated parameters are read from the cache. Works like a charm.

Is there a limit to this? How many different parameters can be cached, and how does it work? Let's test.

Now our table has 256 distinct rows, and then I double it by inserting all the rows again. Finally we have 512 rows in the table. Run our subquery cached select statement again.

Interesting result. This table has 256 distinct values and 512 rows in total. Every row is doubled, but the subquery caching and the normal call take almost the same time. Didn't subquery caching work? Actually it worked, but it can only cache a limited number of values. If there are more distinct values than can be cached, new parameter values overwrite old cached values. Our table rows are ordered like this: 1,2,3,4…256,1,2,3,4…256. It cached some values, but newly arriving values eventually overwrote the cache and we couldn't take advantage of subquery caching. Let's order those rows:

t2 has the same data as table t, but its rows were ordered in the first place. When we run the subquery caching example on t2, it takes 270 seconds, almost half of the previous run on table t. Now it was able to use the cached values before they were overwritten.

Edit:

I originally wrote that my tests indicated only 8 different values were cached, not 255, but my further tests show that this is not true. I assume Oracle uses a hash function to store the data, and while hashing our function and parameter some collisions occur; that is why some of the function results are overwritten frequently. This is not something we can change. Of course, consecutive calls with the same value are cached and take advantage of subquery caching.

Thank you very much for reading; any comment is welcome.

wish you all healthy days.

Why should you use PLSQL_OPTIMIZE_LEVEL as 3

Hi,

As you know, the PLSQL_OPTIMIZE_LEVEL parameter is used to optimize code at compile time, and it is really important. By default its value is 2, but you can increase it to 3, and you should. Here is why.

Before I begin, this is an extension of this post: http://mustafakalayci.me/2020/06/26/performance-gain-of-nocopy-parameters/

Many thanks to Stephan Borsodi, who contacted me over LinkedIn and told me that Java developers use database object types to interact with the database. I actually remember that one of the insurance companies that uses Java (I had trained their developer teams years before) has object types in the database. So, Stephan added an object type test script for testing the performance gain of NOCOPY parameters. In his tests there was no performance gain for object type plsql parameters. When I did the test, I saw more than a 20 times performance gain for nocopy parameters, so we discussed the situation, and eventually I realized that the plsql optimize level is really doing good work here.

So what was the difference? I did my tests after setting plsql_optimize_level to 1; when I do that, nocopy parameters for object types are 20 times faster. I won't rewrite the whole test code here, you can find it in my previous post (http://mustafakalayci.me/2020/06/26/performance-gain-of-nocopy-parameters/), but of course he added the object type SP_MAIN and SP_PROCESS procedures to my test package; here is their code:

I must say, his coding is much tidier than mine 🙂

Here are the object type and the array type of that object which Stephan used:

Compile the test package before starting the test; I use 1 as the plsql_optimize_level.
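I do that with session-level settings and a recompile; the package name below is a placeholder for the test package from the previous post:

    alter session set plsql_optimize_level = 1;
    alter package test_nocopy compile;

    -- check what a unit was compiled with
    select name, type, plsql_optimize_level, plsql_code_type
      from user_plsql_object_settings
     where name = 'TEST_NOCOPY';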

then run the test:

So, it is an amazing result: more than 30 times faster with NOCOPY parameters. But why did Stephan's results show no performance difference? I decided to run this code on a different database than my local db, and I made a mistake! I forgot to compile the package with plsql_optimize_level 1; it was 2, which is the default value, and there was no performance gain:

The results were almost the same, and it was so fast that, while I ran the first code 1,000 times, I ran this second one 1,000,000 times and it was still extremely fast compared to the previous one: 1 million parameter calls completed in 0.22 seconds. Then I got curious about plsql_optimize_level 3 and, of course, tested it:

And the result was even better: 5 times faster than plsql_optimize_level 2! Of course this is the purpose of plsql_optimize_level, but this is a really big performance gain for a very, very small change.

Should we stop here? Of course not 🙂 Let's also add the PLSQL_CODE_TYPE parameter. It is INTERPRETED by default; let's make it NATIVE (with plsql_optimize_level 3 too) and see the difference one more time.

Yes, now it is even 2 times faster than using just plsql_optimize_level. So, if you are a Java developer, or a database developer who runs a Java application and uses database object types (or collections of objects, table collections), then you must consider compiling your plsql objects with plsql_optimize_level 3 and plsql_code_type NATIVE. Of course all calls will be faster with plsql_optimize_level 3, but no other data type shows this much (10 times in our example) performance gain; for example, when I ran the same code for VARCHAR2 data types, it was approximately 2 times faster.

This is the type of performance tuning that I like most: with very low effort, gain a lot 🙂 I would like to thank Stephan one more time for improving my code and finding out something beautiful about Java applications and db object types.

wish you all healthy days.

Enable Unified Audit on Linux and Windows

Hi,

As you know, since 12c Oracle has had Unified Auditing, which has many more capabilities than standard auditing: audit policies, audit conditions, top level auditing etc. By default Oracle uses "mixed" mode, which allows you to run unified auditing commands as well as standard auditing. You can check whether you are using pure unified auditing:
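The check is a simple v$option query:

    select value
      from v$option
     where parameter = 'Unified Auditing';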

If the result is FALSE, then you are not using pure unified auditing; you are using either mixed mode or standard auditing. Unfortunately, enabling unified auditing is not as easy as changing a parameter. On Linux:

Shut down all Oracle components (database, listener etc.), then:
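Relink the oracle binary with the uniaud_on target (as the oracle user, with the environment set for this home):

    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME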

Then start the database and the listener.

On Windows OS:

Shut down all Oracle components (database, listener etc.), BUT on Windows you must stop and start the Oracle services; shutting down and starting the database using sqlplus will not work. Open services.msc, find the Oracle database and listener services and stop them.

After stopping the services, rename the %ORACLE_HOME%/bin/orauniaud19.dll.dbl file to %ORACLE_HOME%/bin/orauniaud19.dll. The 19 in the name of the dll is your version; if you are using 12c, then you must rename the orauniaud12.dll.dbl (or orauniaud12.dll.option) file.

Then start the Windows services. You can check the v$option view as I wrote above to see if unified auditing is enabled.

If you are not using unified audit policies by now, you should start using them; they will probably be the only audit feature in the future. They are also much easier than standard auditing, and you have many more options to control which audit rows are created. For example, if you are using an APEX web application (or any other), your application logs on to the database with the same user. If you want to audit only a few of your end users, you can use SYS_CONTEXT values like the client identifier and audit only the sessions which have a specific client identifier. You can also create conditions to audit specific database users. One of the most important things is that the new unified audit table is tamper proof: you cannot run DML on it, so your audit data is safe (unfortunately, you can modify AUD$, the base table of standard auditing). So, don't be late.

wish you healthy days, don’t forget to get your vaccination.

Create a Clone DB from Backup on the Same Server (or remote)

Hi,

I wanted to create a post series about cloning/duplicating a database on the same or a remote server. There are so many options and combinations for this: rman duplicate database with/without a target/source db connection, rman restore using backups, ASM to OS or OS to ASM, same server or remote server, OMF or regular file naming etc…

What I am about to demonstrate is creating a TEST db from a PROD db on the same server, with "no ASM", only the OS file system, using the PROD DB backups (not the rman duplicate database command). Of course, you can use the same method to create a TEST db on a "remote server", just by copying the backups and archivelogs (maybe a copy of the controlfile too) to the same location on the remote server. So, let's get started:

source database SID (let's say this is your PROD db, the source): orcl
duplicate/target database SID (the TEST db which you want to create): dup
method: using backups only (no duplicate database command)
server: same or remote (you can copy your backups to a remote server under the same location as on the prod server and follow the same steps)
file naming: OMF (Oracle Managed Files)
OS: Linux / Windows (the Windows steps are very similar; I will add some notes for Windows)
Database version: 19c (19.3)

While doing this, I will try to explain why we do each step and cover the important files of a database.

1- Parameter File

The parameter file is the first file read while starting a database instance. It defines vital and mandatory parameters for creating an instance, like the database name. So, to create a new database, we first need a parameter file. It could be either an SPFILE (binary version) or a PFILE (text version). Since we cannot easily create a binary file, I will create a text parameter file and then create the binary version from it. There are thousands of parameters in Oracle, but just 3 of them are enough to create an instance. Under the ORACLE_HOME/dbs directory, create an empty file named initdup.ora and put these into it:
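These are the three parameters (why these values is explained just below):

    # $ORACLE_HOME/dbs/initdup.ora
    db_name='orcl'
    db_unique_name='dup'
    db_create_file_dest='/u01/app/oracle/oradata'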

Windows OS Note: on Windows, the directory is ORACLE_HOME/database and the file name is INITDUP.ora by default. You also need to create a Windows service for the Oracle database instance using the "oradim" tool first: "oradim -NEW -SID dup -STARTMODE auto" is enough to achieve this.

what are these 3 parameters?

  1. db_name: the name of the database, which is "orcl" for now. Wait, what? Aren't we creating a new db called "dup", so why is this orcl? Because we will use the orcl database backups to create our database and I don't want to have to point rman at all the backup files explicitly. So we will act as if we are creating a db called "orcl" but change it along the way; "orcl" is used just for finding the backups and restoring/recovering the datafiles.
  2. db_unique_name: this parameter is null by default and, if it is null, its value is db_name. So you can think of it as Oracle using "NVL(db_unique_name, db_name)". As you see, this parameter is set to "dup", which is the name of the new database. During the restore/recover operations, if rman doesn't know that this is a different database, it will restore/recover the source database (orcl). With this parameter we are identifying our unique database.
  3. db_create_file_dest: this is the parameter that makes your database use OMF (Oracle Managed Files), which means you don't need to give any file names. Just say "add a datafile to this tablespace" and Oracle will add a new datafile and name it; we don't need to know the name of a datafile. This parameter also, of course, tells Oracle where the files will be created, and by files I don't mean just datafiles: redo logs and control files too. This is the "main directory" for a database, so the path pattern is like this:
    /u01/app/oracle/oradata/<instance_name>/datafile => for datafiles
    /u01/app/oracle/oradata/<instance_name>/controlfile => for controlfiles
    /u01/app/oracle/oradata/<instance_name>/onlinelog => for redo logs
    In our example, the instance name will be "dup".

Now we have a parameter file, so we can start our database instance. Don't forget that our source database is on the same server, so we must be careful not to mess with the source db. So, I will set the ORACLE_SID environment variable first:
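Set the SID, create the spfile and start the instance in nomount mode:

    export ORACLE_SID=dup
    sqlplus / as sysdba

    SQL> create spfile from pfile;
    SQL> startup nomount;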

Windows OS Note: on Windows you set the ORACLE_SID environment variable using the "set" command, not export: "set ORACLE_SID=dup".

As you can see, we have already started an instance. Of course we don't even have a datafile yet, but it is still an instance. We didn't specify any memory parameters in the pfile, for example, but they all have default values so it will work; of course, if you have a defined configuration you can add those parameters to the pfile (initdup.ora) so the db instance will use them.

I created the SPFILE before starting the instance, so spfiledup.ora was created under the ORACLE_HOME/dbs directory (ORACLE_HOME/database on Windows) and the instance was started with this spfile.

So we have a basic parameter file and our instance is up. Let's continue with step 2.

2- Control File

The control file is the next step while starting a database. It stores very critical information, like where the datafiles, backups and redo logs are. Many database limits are also defined in the controlfile, like the maximum number of data files that can be created in the db (MAXDATAFILES). To restore the control file, I will first back up the control file as a copy from the source db (which is orcl, our prod database). So, on "orcl", we first create the control file copy.
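On orcl, the copy is created with a single command:

    -- connected to the source db (ORACLE_SID=orcl)
    alter database backup controlfile to '/tmp/controlORCL.ora';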

So I created the controlORCL.ora file under /tmp, which is a copy of the current control file of the orcl db. Now let's switch to the dup db and restore it.

The command is "restore controlfile from '/tmp/controlORCL.ora';" and the control file is restored under the /u01/app/oracle/oradata/DUP/controlfile/ directory. As you see, it is under the DUP directory because of the db_unique_name parameter.

We restored our control file, but this is not entirely correct yet, because this control file belongs to the orcl db and it knows this database as "orcl"; we will deal with that later. For now, this control file knows where the backups are, so we can restore the datafiles, but first we must take our database to mount mode.

Now, restore the database, BUT as I mentioned, this control file belongs to the "orcl" db, so if I just ran "restore database" it would restore the datafiles to the original "orcl" datafile locations. You must be careful here; you don't want to shut down your source db. To restore those datafiles to a different location (the dup db datafile location) we will use the "set newname" command:
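The restore, as an RMAN run block:

    run {
      set newname for database to new;
      restore database;
    }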

Since I use OMF, "set newname for database to new;" means: restore the datafiles to wherever the db_create_file_dest parameter points. By default the control file knows where those datafiles are, so if I just restored them they would go under the ORCL db location. "set newname for database to new" makes rman re-read the parameter file and datafile location information and restore the datafiles there. As you see in the output, the datafiles are restored under the DUP location (e.g. channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/DUP/datafile/o1_mf_system_%u_.dbf).

Perfect, now we have datafiles too, but we cannot open the database right away. Even if we were able to open the database, the control file is still the old one and would look for the datafiles under the ORCL directory, not DUP. We must tell the controlfile to use those recently restored datafiles as the primary datafiles, so we must "switch database to copy".

let’s recover database now:

This will apply all the changes made after the backup; it reads the archivelog files and applies them to the datafiles.

Good! Can we open the database now? Well, no 🙂 Why? Because, as I said earlier, this controlfile is still a stranger! It knows this database as "orcl", but it isn't anymore! Since we have everything we want (controlfile, datafiles, recover operation done), we can now change our controlfile. To achieve that, we must re-create the control file and tell it "this is the dup db from now on". Of course, before doing that, we must change the "db_name" parameter; it is still "orcl". We can set it to "dup" now, but this parameter cannot be changed without a restart. The problem is that you cannot change db_name with "alter system" while using an spfile. So the steps are these:

  1. create a pfile
  2. shut down the database
  3. modify the pfile: change the db_name parameter to "dup" and remove the db_unique_name parameter
  4. create the spfile again. ===> This is very important. If you forget this step, when you start the db it will use the old values.
  5. start the db in mount mode.
  6. create the CREATE CONTROLFILE script using the "alter database backup controlfile to trace" command.

Finally, we need to re-create our controlfile to make it completely a dup db controlfile. I created the controlfile script under the /tmp directory; you can open it in any text editor. You will find 2 "create controlfile" commands, one with NORESETLOGS and the other with RESETLOGS. On a new database, to create the redo logs, we must use the RESETLOGS clause, so I will only use the RESETLOGS version from the control_file.txt file here.

As you can see, the controlfile thinks that this is the ORCL database and all the redo logs are also under the ORCL directory ('/u01/app/oracle/oradata/ORCL/onlinelog/o1_mf_1_j6xpd19n_.log',). This is also an important step: if you don't change those ORCLs to DUP, you might damage your source database, which is orcl in my example. Since I use OMF (and I always use OMF, exactly for reasons like this), all I need to do is remove those paths.

Also, since we want to change the database name, we must add the SET clause after the "create controlfile reuse" part. Please note that just changing ORCLs to DUP in this script won't work, because the datafiles also have the database name in their headers. So we must use SET, which means "set the database name"; this will also update the datafile headers. To create the controlfile again, you must be in nomount mode (in mount mode the controlfiles are already in use, so you cannot change them).

Finally, we have a real controlfile for the dup db. From now on, we can open it with the resetlogs option. Creating the controlfile also takes our database into "mount" mode, so there is no need for another "alter database mount" command.

Voilà! We have a new db with a new name, created from another database's backups. You can use this on the same or a remote server; as said before, the only difference is copying the backups to the remote server.

I hope this helps. I will try to write about the other methods too, like duplicate database etc.

Please leave a comment, so I can believe that the 30 daily visitors on my blog are not all bots 🙂

wish you healthy days.

Important Edit: you might ask why we don't create the controlfile before all those operations. There are 2 reasons: creating the control file would lose the locations of the backups and, more importantly, the create controlfile command validates that the datafiles are in the correct locations. So we cannot create the controlfile without having the datafiles.