How to install Oracle Database Options like Spatial, Label Security etc After DB Creation

Hi,

Yes, I am aware that the title is a little long and too informative, but I believe it is necessary because the terms are complicated, at least to me.

When you say "option" it could be many things, like the "advanced compression" option in EE, or the Spatial option, or the Partitioning option, and not all of these are the same kind of "option"; they have different backgrounds and structures. Sometimes they are called "components" too, but in the documentation they are always mentioned as options. I want to write about "options" like Spatial, JServer, Label Security, Data Vault, OLAP… Do you see where I am going with this? These are the "options" in the dbca utility that you choose while installing the database (only if you choose "custom database" as the db type, not general purpose or data warehouse), but I will also talk about the Partitioning and RAT (Real Application Testing) options, because their namespaces intersect with some "options" (ok, I will stop putting double quotes around the word option).

I didn't care much about options until now, and I wanted to learn how to install/uninstall them after db creation and what choices we have. I started to dig into the documentation and realized that it is a little more complicated than I thought. What I focused on were the options in dbca, as I said:

Here they are. So instead of searching the docs and Google first, I used a small trick: I checked the "Generate Script" option on the last page of dbca and looked at the generated scripts. Fortunately, Oracle puts every option's creation into its own script, so it is really easy to find them. But before talking about the scripts, I want to talk about the option types.

Shared Objects & DLLs

Some options consist of just simple scripts, but some of them go deeper! For example, if you want to install the Spatial option (to use geographic coordinates in your db) you just need to run a sql script, but for OLAP you need to "link" some objects into the Oracle kernel and then run some scripts. What does that mean? Some options are not just tables/packages etc.; they need some special C functions in the Oracle kernel, but not everything is in the kernel. Shared objects in Linux and (the more widely known) DLL files in Windows are widely used to add or remove functionality in an application. In the end, Oracle is an application and uses those dynamic libraries. I am not a software engineer, so this part is a little above my pay grade, but I will try to explain. In Linux, if you want to add functionality to your application via shared objects (DLLs on Windows), you must "link" them into your application with the "make" command. For example, to use OLAP it must be linked into your Oracle binaries on Linux; likewise, the necessary DLLs must be available on Windows. Remember, this operation is not per database but per Oracle Home. When you do this, all current and future databases in this Oracle Home will (or will not) be able to use these options.

Of course, it is not that easy 🙂 some options must be linked into the Oracle binaries but don't need any sql script at all. So, if we group the options by their installation type, we have 3 different types:

  1. SQL Scripts (for ex: spatial)
  2. Linking Binaries (for ex: unified auditing)
  3. Linking Binaries & SQL Scripts (for ex: OLAP)

To install Spatial you just need to run the $ORACLE_HOME/md/admin/mdinst.sql script as SYSDBA. That is it. This script will create the MDSYS user and the necessary objects to use the Spatial option.
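For a non-CDB this is as simple as the following sketch (on a container database use catcon.pl, which I show later in this post):

sqlplus / as sysdba
SQL> @?/md/admin/mdinst.sql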

To enable the "unified audit" option in the database you just need to link some binaries, but you don't need to figure everything out yourself, of course; you can check this post of mine: http://mustafakalayci.me/2021/09/08/enable-unified-audit-on-linux-and-windows/

spoiler:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle

And to install OLAP, you must first link the OLAP module and then run the $ORACLE_HOME/olap/admin/olap.sql script as SYSDBA.
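Putting the two steps together, it is something like this (a sketch for Linux; stop the databases and listener first, as I warn below):

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk olap_on ioracle
sqlplus / as sysdba
SQL> @?/olap/admin/olap.sql SYSAUX TEMP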

Linking Binaries & chopt

For the linking operation Oracle provides a simple tool called chopt, short for Change Option. Instead of running the "make" command you can use chopt to enable and disable "some" options in the Oracle kernel; it basically just calls the "make" command itself. It supports very few options:

As you see, you can enable/disable OLAP, Advanced Analytics, Partitioning and RAT. I want to repeat: when you use chopt, you are linking/unlinking some binaries. So, this will add or remove some components from the Oracle kernel, and current and future databases will (or will not) be able to use these options. Usage of chopt is very easy:
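For example (a sketch; run chopt without arguments to see the exact option list of your version):

cd $ORACLE_HOME/bin
chopt enable olap
chopt disable partitioning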

If you run it on Windows, it will just rename a DLL (to enable an option it renames the DLL to the correct name, and to disable it renames it to something else).

chopt has different options from version to version. For example, in older versions there was an option called DM (Data Mining), but in 19c it is under Advanced Analytics, so if you want to enable/disable data mining you must use the "chopt enable/disable oaa" command. Also, you can see that "partitioning" is an option too. If you don't have a Partitioning license, you can disable the partitioning option so that none of the developers can create a partitioned table and cause (maybe) thousands of dollars in license costs.

Only use the options you need! Nothing more. Consider that unnecessary options will increase db upgrade timings too. Also, as in the partitioning example, some options are licensed options, which means you have to pay to use them; if you don't have the license, disable them.

As far as I understand, there is a historical development here. For example, in old versions Spatial was an option that needed to be linked too; sdo_on and sdo_off were used to enable/disable Spatial.

As you see, when you try to turn it off now, it warns us that "sdo_off" is disabled; it is always on nowadays.

Here are some examples of switches to enable/disable options in the Oracle kernel, though some of them (like Label Security) are deprecated:

Product/Component              Enable Switch   Disable Switch
Automatic Storage Management   asm_on          asm_off
Oracle Data Mining             dm_on           dm_off
Database Vault                 dv_on           dv_off
Oracle OLAP                    olap_on         olap_off
Oracle Label Security          lbac_on         lbac_off
Oracle Partitioning            part_on         part_off
Real Application Clusters      rac_on          rac_off
Real Application Testing       rat_on          rat_off

Also, please consider that if you disable an option like OLAP using chopt (or the make command), it will be disabled in dbca too; you won't be able to choose it:

WARNING: before using chopt or make commands, always stop all services using the Oracle Home first (database, listener etc.). For Windows servers, also stop the related Oracle services in Windows Services (like OracleServiceOrcl…).

Dbca Options

These are options that you can install into a database (some might require binary linking); you saw them in the screenshot above. Each of them has some unique features (I won't go into detail about their purposes) and some of them depend on others. So, the basic hierarchy is like this:

  1. Oracle JVM
    1. OLAP
    2. Multimedia
  2. Oracle Text
  3. Spatial
  4. APEX
  5. Label Security
    1. Database Vault
  6. Extension for .NET

To be able to install Multimedia you must have Oracle JVM first, whereas you can install APEX or Label Security without installing anything else. What are the scripts to install them? You should always check the documentation or Oracle Support for this, but as an example I wanted to list them for version 19.13. DO NOT run them blindly! Always check the documentation first; the scripts might have changed! These samples are taken from dbca-generated scripts.

  1. Oracle JVM
    $ORACLE_HOME/javavm/install/initjvm.sql;
    $ORACLE_HOME/xdk/admin/initxml.sql;
    $ORACLE_HOME/xdk/admin/xmlja.sql;
    $ORACLE_HOME/rdbms/admin/catjava.sql;
    connect "SYS"/"&&sysPassword" as SYSDBA
    $ORACLE_HOME/rdbms/admin/catxdbj.sql;
  2. OLAP
    $ORACLE_HOME/olap/admin/olap.sql SYSAUX TEMP;
  3. Multimedia
    $ORACLE_HOME/ord/im/admin/iminst.sql;
  4. Oracle Text
    $ORACLE_HOME/ctx/admin/catctx.sql Xbkfsdcdf1ggh_123 SYSAUX TEMP LOCK;
    alter user CTXSYS account unlock identified by "CTXSYS";
    connect "CTXSYS"/"CTXSYS"
    $ORACLE_HOME/ctx/admin/defaults/dr0defin.sql "AMERICAN";
    connect "SYS"/"&&sysPassword" as SYSDBA
    alter user CTXSYS password expire account lock;
    $ORACLE_HOME/rdbms/admin/dbmsxdbt.sql;
  5. Spatial
    $ORACLE_HOME/md/admin/mdinst.sql;
  6. Apex
    $ORACLE_HOME/apex/catapx.sql Xbkfsdcdf1ggh_123 SYSAUX SYSAUX TEMP /i/ NONE;
  7. Label Security
    $ORACLE_HOME/rdbms/admin/catols.sql;
  8. Database Vault
    $ORACLE_HOME/rdbms/admin/catmac.sql SYSAUX TEMP;
  9. Extension for .NET
    $ORACLE_HOME/rdbms/admin/dbmsclr.plb (in windows only)

How to Install Dbca Options

Well, I pointed out the scripts and you can run them (for the same version, and always check the documentation first), but of course it is not that simple 🙂

This is very important! The installation method depends on whether you are using a container database or a non-container database! If you are not using a container database (the old architecture, which is no longer supported starting with 21c), you can run the scripts connected as a SYSDBA user, but for a container database it is more complicated.

For a container database, most of those scripts must be run in cdb$root and then in all the pdbs, but doing that by hand is hard work and open to mistakes. Instead, use the catcon.pl perl script to automate it. Here is the Spatial installation, as taken from the dbca-generated scripts:
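It is something along these lines (a sketch modeled on the dbca output; the log directory path is an assumption):

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
  -n 1 -d $ORACLE_HOME/md/admin \
  -l /u01/app/oracle/admin/CDB/scripts \
  -b mdinst_catcon_output mdinst.sql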

It seems complicated, but it is mostly the paths that make it long. It simply calls a perl script and passes the original script of the option as a parameter. Always use catcon.pl to install/uninstall an option in a container database.

How to Check Which Options are Installed

This is (as always) not that simple 🙂 Some options are "registered options", so you can list them via the dba_registry data dictionary view:
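For example (a simple sketch):

select comp_id, comp_name, version, status
from dba_registry
order by comp_id;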

These rows do not map one-to-one to options. The OLAP option is listed as 2 rows, "Oracle OLAP API" and "OLAP Analytic Workspace". Also, some components are installed by default, like "Oracle Workspace Manager".

Is that all? Of course not. These are the registered options (components), but there are also options in the Oracle kernel, like unified auditing, which does not depend on an sql script. For all options in the kernel, use the V$OPTION view:
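For example (a sketch):

select parameter, value
from v$option
order by parameter;
-- e.g. a FALSE value for "Unified Auditing" means it is not linked into the kernel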

I just listed some of the rows because it is a long list.

Uninstall an Option

Uninstalling an option is not that easy, because some options create specific objects in different schemas (like SYS), and to uninstall one you must either find every object used by the option, or again use scripts that Oracle Support provides or that exist under the Oracle Home directory.

To uninstall Spatial, for instance, you can simply drop the MDSYS schema, but there might be tables using spatial data types, indexes etc.; you must find and drop them first. Even when it seems easy, there is always a catch. So be careful and always follow the Support documents about it.
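A quick way to find candidate tables is to look for SDO_GEOMETRY columns outside MDSYS (a sketch, not the full uninstall procedure):

select owner, table_name, column_name
from dba_tab_columns
where data_type = 'SDO_GEOMETRY'
  and owner <> 'MDSYS';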

In Conclusion

So, it is a deep topic and there is probably much more to it. I said this before, but again: remove all unused options from your database; they are a burden on your system. So far this is what I have understood and learned… I hope it is useful and helps you understand the db options.

As always, wish you healthy days. NO WAR anywhere; all life on this planet matters.

Tables with Memoptimize for Write (fast ingest)

Hi There,

this is the second part of this post: http://mustafakalayci.me/2022/02/15/tables-with-memoptimize-for-read/

Oracle added a new feature called memoptimized tables, mostly for devices like IoT devices: fast lookup and fast ingest tables. I have already mentioned fast lookup (memoptimize for read) tables in my previous post, and now I want to write about fast ingest tables. As the name suggests, these tables are optimized for inserting new data. Small devices especially (like IoT sensors) tend to send data more than they process or query it, and we have billions of them.

For fast lookup tables (in my previous post), plsql is not an option, but fortunately we can use fast ingest tables in plsql, so my tests will be based on plsql code. As with fast lookup tables, you set the table as a fast ingest table (memoptimize for write), and while inserting you need to use a special hint, "MEMOPTIMIZE_WRITE". This way Oracle understands this is a fast ingest insert and completes it accordingly.

How does it work? Your insert statements (either row by row or bulk) are staged in the LARGE POOL (a memory component of the instance), and after passing a threshold the data is inserted into the table (as if by a bulk insert). This provides a significant performance improvement but, as always, nothing is perfect and there is always a "but". So, this will increase your write speed, "but" your data will not be immediate anymore! You might not see your data just after inserting and committing it, because it is cached in the large pool and written to the table after a while. Secondly, you might lose some data! This is a big "but"! Since the data sits in the large pool for a certain amount of time, anything that can go wrong, like an instance crash, might cause you to lose some of it. You must be careful while using this feature.

So let's test and see the difference. To demonstrate this, I will shut down and start up my database to clear everything in memory, and then connect as the "mustafa" user.

Ok, before starting my test I would like to check my large pool size. The large_pool_size parameter is 0 as usual, because I am using ASMM (sga_target is set), so I need to check the dynamic components view:
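Something like this (a sketch):

select component, current_size/1024/1024 size_mb
from v$sga_dynamic_components
where component = 'large pool';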

Sizes are in megabytes; as you see I have a 96MB large pool.

Let's create two tables, one with memoptimize for write (fast ingest) and one as a standard heap table.
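A sketch of the DDL (the column list is an assumption):

create table test_fast_ingest (
  id  number primary key,
  val varchar2(100)
) memoptimize for write;

create table test_heap (
  id  number primary key,
  val varchar2(100)
);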

Now I will do the inserts row by row and commit at the end:
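The test blocks were roughly like this (a sketch; the row count is an assumption):

begin
  for i in 1 .. 1000000 loop
    insert /*+ MEMOPTIMIZE_WRITE */ into test_fast_ingest values (i, 'x');
  end loop;
  commit;  -- single commit at the end
end;
/

begin
  for i in 1 .. 1000000 loop
    insert into test_heap values (i, 'x');
  end loop;
  commit;
end;
/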

As you see, I use the MEMOPTIMIZE_WRITE hint in the first plsql block, for the test_fast_ingest table. Of course, the timings catch our attention:

fast ingest : 25.44 seconds
standard heap: 25.54 seconds

Huh! Is there a problem? Well, actually no; let's check the large pool size again:

As you see, the large pool has grown; it is now 768MB. As I said, memoptimize for write uses the large pool, and since it was not big enough to do the job, we waited for the large pool's grow operation. Let's run the same script again:

Here it is. The fast ingest table is more than two times faster than the standard heap table. This is a major performance improvement. Let's try committing row by row:

This is a hard job, and even for this job the fast ingest table does great. For row by row commits, the fast ingest time increased about 2.25 times and the heap table time increased 2.6 times. If you have many devices sending many insert statements, a fast ingest table will be able to handle it. Even so, increasing the large pool size would improve the performance further.

I have already mentioned some cons, like inserts not being immediate (even when committed), but there is one more! Let's truncate the table and do some inserts; please remember that the ID column is a PRIMARY KEY.
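A sketch of the test:

truncate table test_fast_ingest;
insert /*+ MEMOPTIMIZE_WRITE */ into test_fast_ingest values (1, 'a');
insert /*+ MEMOPTIMIZE_WRITE */ into test_fast_ingest values (1, 'b');
insert /*+ MEMOPTIMIZE_WRITE */ into test_fast_ingest values (1, 'c');
commit;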

As you see, I inserted three rows that all have the same ID, so there must be a unique constraint violation, right? Well, in my tests there wasn't, probably because the data is not actually inserted yet. It is gathered in the large pool and inserted later, and since it is not my session doing the actual insert, I don't get any unique constraint violation for this action. You must be careful while using this.

Fast ingest tables have more satisfying results than fast lookup tables for me. We can increase the performance at least 2 times here! And it works with bulk inserts too:

I created a base table (tmp_base) with 1M rows and used it to insert the data. As a result, the fast ingest table is 4.2 times faster than the heap table.

I expect memoptimize tables (both read and write) to be developed much more and become more popular. Just be careful using fast ingest because it has significant cons. Again: you might lose data even after committing it!

wish you all healthy days.

Tables with Memoptimize for Read (fast lookup)

Hi,

I would like to share my initial thoughts about memoptimized for read (fast lookup) tables in this post. Fast lookup tables came with 18c and they are here to provide faster access to a table when you are using the primary key with an equality condition. This is basically a single table hash cluster: instead of using the primary key index, Oracle creates a hash index, and searching a hash index should be much faster than a unique index scan.

first, create the test environment.

To use a fast lookup table you must set the memoptimize_pool_size parameter to a number higher than 0. This is a static memory component in the sga that stores the hash index of the table. This memory component is not resizable, which means it won't grow or shrink with automatic shared memory management.
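For example (a sketch; the size is an assumption, and since the parameter is static a restart is needed):

alter system set memoptimize_pool_size = 256M scope=spfile;
shutdown immediate
startup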

create a table for fast lookup:
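A sketch (my real table had more columns; this column list is an assumption):

create table tmp_optimize (
  id   number primary key,
  data varchar2(100)
) segment creation immediate
  memoptimize for read;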

Segment creation must be immediate, otherwise you will get an error! Your table must have a primary key as well; since the whole structure is based on the PK, it is logical to require one. I inserted roughly 146,000 rows into the table.

EDIT: I also tried these tests with 14 million rows, but the results were similar.

This is not enough! You must use the dbms_memoptimize package to start using this table as a fast lookup table.
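Like this (a sketch; the schema name is mine):

exec dbms_memoptimize.populate(schema_name => 'MUSTAFA', table_name => 'TMP_OPTIMIZE');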

let’s see what is happening now:
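I run something like this (a sketch):

set autotrace traceonly statistics
select * from tmp_optimize where id = 42;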

I skipped the first execution's statistics! You will see a small number of consistent gets on the first execution, but consecutive executions will look like the above.

As you see, there are new execution plan steps, "TABLE ACCESS BY INDEX ROWID READ OPTIM" and "INDEX UNIQUE SCAN READ OPTIM"; these steps tell us a hash index is used to retrieve the data, as with key-value pairs. There are no "consistent gets". This is amazing: almost nothing is read for this query, and that should make it so much faster. Is it? Well, I couldn't find that, not as much as I expected at least. Actually, I found different blog posts mentioning fast lookup tables, but none of them did a performance test.

So, I wanted to test the speed, but there is a problem. For now, there are many limitations on fast lookup tables, and one of them is "you cannot use it via plsql". This means I can't create a simple plsql procedure that runs a query 100,000 times and compare timings. To do a test, I wrote some small python code.

Warning: I am no python developer. I just know python enough to solve my small problems; the code I share below probably has many bugs, but I use it only for this test.

here is the python code I use:

After changing the code many times, I settled on this final version. It creates 9 threads, and every thread runs the "select * from tmp_optimize where id = :myid" query 100,000 times, with the myid value cycling between 1 and 10,000 in the loop. Each thread prints the total execution time for its 100,000 runs of the query. Here is the result:

Timings ranged between 16.0 and 16.1 seconds for the fast lookup table. How about a normal table with a normal primary key?

I used the same python code, just changed the table name from "tmp_optimize" to "tmp_not_optimize", and the result is like this:

It is between 17.1 and 17.2 seconds; that is a 6% performance gain (give or take).

I am not sure whether 6% is a good gain or not, because to use fast lookup tables actively we need to set aside a serious amount of memory. Also, there are a few limitations: they cannot be used with partitioned or compressed tables, they cannot be used in plsql (not yet anyway), and only equality conditions on primary keys can take advantage of fast lookup.

During my tests I didn't check wait events. I expect fewer latch waits on the fast lookup table since no consistent gets occur. Maybe there is something I missed; I will do more tests on these tables (I haven't even tested "memoptimize for write" (fast ingest) yet).

wish you all healthy days.

How to Restore Dropped PDB

Hi,

It seems easy, right? Well, not that easy, but not that hard either. If you drop a pdb in a cdb, you cannot restore it using the current database, because when you drop the pdb, all information about it is lost. So, what is the solution? You need a duplicate database, recovered to a point in time.

I have a cdb named CDB which had a pluggable database named PDB1. This is a 21c database and the installation of CDB is pretty standard:

db_create_file_dest   = /u01/app/oracle/oradata
db_recovery_file_dest = /u01/app/oracle/fast_recovery_area

Let's say I dropped it by mistake and need to recover it:

I created the pdb and created a table in it. I also noted the time, because it will be required while restoring the pdb. Now, let's drop it by mistake:
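The whole sequence looked roughly like this (a sketch; names and the password are assumptions):

create pluggable database pdb1 admin user pdbadmin identified by "Password123";
alter pluggable database pdb1 open;
alter session set container = pdb1;
create table pdbadmin.t1 as select * from dual;
select systimestamp from dual;  -- note this time, the restore will use it

alter session set container = cdb$root;
alter pluggable database pdb1 close immediate;
drop pluggable database pdb1 including datafiles;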

I am not including the outputs for now. Okay, now I have lost my pdb. To restore it, we must create a duplicate database, recovered until a specific time. Step by step:

1- create a new pfile for the duplicate database (let's call our db "dup")
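A minimal sketch (sizes are assumptions):

# initdup.ora
db_name=dup
memory_target=1500m
db_create_file_dest=/u01/app/oracle/oradata
db_recovery_file_dest=/u01/app/oracle/fast_recovery_area
enable_pluggable_database=true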

2- create password file:
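For example (the password is a placeholder):

orapwd file=$ORACLE_HOME/dbs/orapwdup password=Password123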

3- startup the instance
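A sketch:

export ORACLE_SID=dup
sqlplus / as sysdba
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initdup.ora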

4- use rman to create the duplicate database
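Something like this (a sketch; the connect string and the timestamp, which is the one noted before the drop, are placeholders):

rman target sys/Password123@cdb auxiliary /

duplicate target database to dup
  pluggable database pdb1
  until time "to_date('2022-03-01 13:20:00','yyyy-mm-dd hh24:mi:ss')";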

You can get different error messages at this point, but they all have enough information to solve the problem. If you get this error:

RMAN-05501: aborting duplication of target database
RMAN-05657: There are multiple database IDs present.

then you must set the dbid before running the duplicate command.
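Like this, inside rman before the duplicate command (the value is an example; get the real one with "select dbid from v$database" on the source CDB):

set dbid 1058412197;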

5- after the duplicate command completes (it will take a while), your new database called "dup" will be ready, in mount mode.

At this point, if you query the pdbs (containers), a strange name will appear, "_###_UNKNOWN_PDB_#_4" (4 is the pdb number, so it might be different on your system). After opening the database with resetlogs (this is a point in time recovery), it will be converted to the correct name.

Here it is. After this point we have 2 container databases on the server, CDB and DUP, so we can simply unplug pdb1 and plug it into the CDB database again.

I use "move" while plugging it into the CDB database because I want its datafiles moved to the db_create_file_dest location of CDB.
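A sketch of those commands:

-- in the DUP database:
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/tmp/pdb1.xml';

-- in the CDB database:
create pluggable database pdb1 using '/tmp/pdb1.xml' move;
alter pluggable database pdb1 open;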

That's it. I find this solution a little tiresome, and I believe future releases will have easier solutions, but still, we can do what we need in a few commands.

thanks for reading, I hope this helps.

wish you all healthy days.

CONTAINER_DATA for Common Users

Hello everyone (if anyone is reading this 🙂),

It has been a long year and I am glad it is over. As always, we pin our hopes on a period of time like a new year, a birthday etc., but hope is the only thing we can hold on to. I hope the whole world gets rid of these illnesses called corona, racism, wars (and again etc. etc., the list goes on)… Wish everyone a healthy, peaceful year.

On my personal blog I have mentioned a variety of things, and I realized that I haven't mentioned the container database structure at all. The main reason for this is that I was lazy about the multi-tenant architecture and didn't work on it; yeah, pretty much that was the reason 🙂

I wanted to start with the common user's container_data attribute (or property, I am not sure). As you know, in the multi-tenant architecture there are two types of users: COMMON and LOCAL. Common users are defined in the CDB (not in a pluggable database) and they are known to every current and future pdb. Local users are defined in a pluggable database and can access only the pluggable database they are defined in. By saying "defined", I mean "created". So, local users are not aware of other pluggable databases or the container database (cdb), but common users are aware that they are in a CDB and can connect to the PDBs of that CDB if they have the privileges.

By default, a newly created common user doesn't see all the data in container data views or tables. What are "container data views"? As you have noticed (even if you use a non-cdb database, 12c and above), there are some new columns in the data dictionary and v$ views, like CON_ID. In a multi-tenant database, when a privileged user like SYS queries V$SESSION, the user can see all sessions in all containers (cdb$root and the pdbs). Let's see:

I will be running these queries on version 21.3. I have 3 PDBs on my CDB:

These are just dummy PDBs that I use for testing purposes on a VM. Now let's connect as SYS and query some views:

I connected as SYS and I am in the cdb$root container. I queried the V$SESSION and V$DATAFILE views grouped by the con_id column. I am able to see sessions in different PDBs; in my example, con_id 3 and 5 have 1 session each. The V$DATAFILE query shows the count of data files per PDB.

Now, let's create a common user, give it the necessary privileges, and query with that user:
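A sketch (the password is a placeholder):

create user c##my_admin identified by "Password123" container=all;
grant dba to c##my_admin container=all;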

I created the C##MY_ADMIN user and granted DBA for all containers. So, this user can connect to any pdb and query any data dictionary view. For example, c##my_admin can connect to pdb1, pdb2 etc. and query v$session in each of them, and can see all sessions for the current pdb the user is in. The problem is that when c##my_admin is connected to the root (cdb$root), he/she won't be able to see all sessions in the CDB (including PDB sessions), just the ones in cdb$root:

As you see, only the sessions in con_id 1, which is cdb$root, are listed. By the way, con_id 0 is the CDB itself; only background processes are there, and they are listed in v$session all the time.

Again, v$datafile lists only the files of the current container, which is cdb$root. Please consider that c##my_admin is a DBA in the CDB, so it can connect to any PDB and query its data; it has the necessary privileges for that. It just doesn't have the "privilege" to query the PDBs' data from within the CDB$ROOT container.

That is what CONTAINER_DATA does: it allows a common user to query some or all PDBs' data in cdb$root. You must add the necessary pdbs to the common user's container_data property:
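Like this (a sketch):

alter user c##my_admin set container_data=(cdb$root, pdb1, pdb2) container=current;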

This alter must be done by a privileged user in the cdb$root container. Here, I set the container_data of the c##my_admin user to PDB1, PDB2 (con_ids 3 and 4) and CDB$ROOT.

There are a few important points here:

  1. this command must be run in cdb$root.
  2. CDB$ROOT must be in the container_data list because it is the root, and the common user will query the data while in it, so it must be able to see it.
  3. CONTAINER=CURRENT is mandatory because the default is ALL, and this command can only be run in cdb$root, which is "current".

So, c##my_admin will be able to see all sessions in v$session for cdb$root, pdb1 and pdb2 (but not PROXY_PDB, whose con_id is 5). Let's see:

So, c##my_admin is able to query other pdbs' data in cdb$root. There is no session in pdb2 by the way; that is why there is no session for con_id 4 in v$session.

Instead of giving access to all container data views, you can specify just the necessary view like this:
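A sketch of both statements:

alter user c##my_admin set container_data=default container=current;
alter user c##my_admin set container_data=(cdb$root, pdb1, pdb2) for sys.v_$session container=current;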

Setting container_data to DEFAULT resets everything. Now, I add container_data info just for the V_$SESSION view (which is the base view of v$session; you cannot grant on v$session itself) and query v$session and v$datafile again with c##my_admin:

Here it is. We can see session rows for other pdbs in v$session, but not the corresponding rows in v$datafile.

if you want to set all pdbs at once:
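Like this:

alter user c##my_admin set container_data=all container=current;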

You can use the ALL clause for the pdb list. Remember, when you use SET as in "ALTER USER … SET CONTAINER_DATA" it overwrites the previous container_data property. If you want to add a new PDB to the container_data list, use ADD instead of SET:
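For example (a sketch, adding the proxy pdb for the v_$session view):

alter user c##my_admin add container_data=(proxy_pdb) for sys.v_$session container=current;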

You can query who has access to what using the CDB_CONTAINER_DATA data dictionary view.

I hope this is helpful; any comments are appreciated.

Thanks for reading, wish you healthy days.

How to Demonize Apex V Function!

Hi,

The answer to the question in the subject is "using too much V in SQL statements". Let's see why, and what you should do.

First things first, I am NOT an APEX developer. I am a DBA and database developer, but some of my clients use APEX and I deal with many sql statements that use the V function. Since day 1, I have not liked this V function, because developers use it in very different ways, and while it returns VARCHAR2 it is used for all kinds of comparisons without any conversion functions (that is not the V function's fault, of course).

So, what is the problem here? In my clients' code, the V function is used heavily in sql statements. Some queries that I dealt with led me to investigate the V function, because even when a query has no obvious performance issue, it still takes too long to complete. When I checked the V function, I found out that it is not DETERMINISTIC (which it shouldn't be, because of the logic of DETERMINISTIC, but I will talk about this later). Let's do some tests, but first I must explain a few things. I hacked the code of V a little bit and added some logging code to demonstrate how it works and the performance issues around it, but I won't explain how I did it (probably many of you can do it easily) or share its source code. Whenever the V function is called, my logging code writes the parameter name of the V call into a "my_log" table.

I added my log code, set up an apex session in my sql developer session and ran these selects:
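They were similar to this sketch (the apex item names are assumptions):

delete my_log;
commit;
select v('APP_USER'), v('APP_SESSION'), v('APP_PAGE_ID'), v('APP_ID') from dual;
select count(*) from my_log;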

I delete the log table first, then run a select statement with four V functions in the select list over the DUAL table, and then check the log table: as you see, V is called 4 times. So the same data is read 4 times and 4 plsql function calls occur. This causes a lot of context switching between SQL and PLSQL, which can reduce your performance dramatically. The where clause especially is extremely dangerous. Why? Because the V function might run for all rows in the table, which would be a disaster.

As you can see, this query called the V function 79,647 times. Why? Because there are 79,645 rows in the TMP table. Since I used V in the where clause, the condition ( object_id = v('OBJECT_ID') ) was executed for every row in the table (79,645 rows); only 1 row meets the condition, so the select list runs 1 time, and it contains 2 more V calls (same as the one in the where clause), so in total V was called 79,647 times. I hope you see the problem here. If you use the V function in the sql statements of your application code, it might be called way too many times!

How about an index? Yep, that will help:

So it ran only 3 times in this execution, because this time the search operation (where condition) ran on an index, not the table, so the V function was called 1 time in the where clause and 2 times in the select list. This means you should have indexes at least on your condition columns.

Is that enough? Let me show you some pseudo code from a database:

So many identical V calls in one select statement. Creating an index will help with the where clause, but how about the select list? Since every item in the select list causes a function call (even with the same parameters), let's test the performance:

Calling this query 10,000 times (or you can think of it as calling the sql once for 10,000 rows) takes 3.2 seconds (20 identical V calls). If we change the code to this:

this query takes only 0.5 seconds. These timings might seem insignificant, but think about 1,000 different queries running on your database 1,000 times per day, and calculate the time spent just on calling the V function. It is a lot! A significant amount of your db time might be getting wasted.

Let’s improve the code a little bit more:

Even better. If you increase the number of calls, you will see that the first 2 plsql codes' timings increase linearly, but the last code (the one just above) gives you almost the same performance. Remember, V is a function, not a variable. Use variables instead of V if you can.

Even if these tricks help your system, the main problem is still there: V is not DETERMINISTIC. You might change your code as I suggested above or create more indexes, but many developers also use V in VIEWS, which is another, and for me a bigger, disaster. It is harder to modify the code, you cannot use bind variables, etc… What would happen if V were deterministic? I modified V and made it deterministic; let's see the result with the first test case, without an index:

Pay attention: there is no index, but V is called just 2 times (1 for the where clause and 1 for the select list, even though the select list has 2 copies). It was 79,647 times without DETERMINISTIC, if you remember the first result.

So making V deterministic would solve almost every kind of problem, but the thing is, what I am looking for here is not exactly deterministic. Why? Let me explain what deterministic is first. If a function always returns the same value for the same parameters, then it is a deterministic function, like TO_CHAR or SUBSTR. This allows Oracle to know that repeated calls with the same parameters can be cached and the first calculated result reused for subsequent calls. This way, if the same function is called with the same parameters within a SQL statement, the function is actually called only once. But this is not V. The result of V is not deterministic, because the application might change the value of an item; a V call with the same parameter can return different results, since it just returns a context variable's value.

So, it shouldn't be deterministic, but it should be something that acts like deterministic for the duration of one sql statement. I don't think any apex developer wants to see V values change during a select statement; I think they assume the values of V won't change while the sql statement runs. So, there should be something like DETERMINISTIC_FOR_CALL in the Oracle database: we should be able to define plsql objects as "deterministic" while they are used within a sql statement, so their result is calculated once and reused during that sql call. I think this is also necessary for any plsql object that we can call from SQL, because the current structure of plsql kind of violates consistency. A select statement returns rows as they were when the sql call started; even if the rows are changed during the sql call, the result of the statement won't be affected by those changes, but PLSQL breaks that. When a function is calculated for every row, a select statement inside the plsql object can return different results, because it is affected by ongoing changes. This would be a database feature; the database should provide this functionality, the Apex team has nothing to do with it.

By the way, I am really frustrated by the data type conversions around V. Since V returns VARCHAR2, many developers do not care about type conversions. If you compare a V value with a number, you should use "TO_NUMBER(V(…))" for example, or at least (as far as I found) the NV function (the number result of V). This is another issue to me.

For now, I suggest avoiding V in SQL statements as much as possible and using bind variables if you can. Let's say you have some SQL with many V calls with the same parameters; what can you do? You can "force" Oracle to cache them:

There are better ways to write this sql, but I just want to demonstrate the idea. As you see, this query takes 3.26 seconds to complete because of too many V calls. You can rewrite the query to cache the V results first and then use them like this:
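A sketch of the rewrite (tmp and the item names are from my earlier example):

select t.*, vals.user_name
from (select /*+ NO_MERGE */ v('OBJECT_ID') object_id, v('USER_NAME') user_name from dual) vals,
     tmp t
where t.object_id = to_number(vals.object_id);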

I added a NO_MERGE hint to prevent the optimizer from merging the queries before the query runs. As you see, this version returns in 0.19 seconds, 15 times faster, because the V function is called only 2 times (2 different V items are selected, OBJECT_ID and USER_NAME).

Edit:

Another, and maybe better, solution for this problem is to use your own V function and create it as deterministic:
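Something like this (a sketch):

create or replace function my_v(p_item in varchar2) return varchar2 deterministic
is
begin
  return v(p_item);  -- same result, but now cacheable within a sql statement
end;
/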

This is faster than the previous join method. The only thing you need to do is change the V calls to MY_V.

I hope this helps you on your systems. Now, after those tests, I must contact my clients and explain how to change their code to gain back significant db time.

wish you all healthy days.

Hierarchical Blocking Session List

Hi,

Today, I needed to list all blocking sessions hierarchically. As you know, all information about sessions (and blocking sessions) is in the gv$session view (or just v$session if you are using a single instance). I wanted to see who is blocking whom, with the blocker at the top. You can write a simple connect by (or with clause) query, but it will list blockers at the bottom, so I developed it a little further. If I remember correctly, there was a small script under rdbms/admin, but it created some tables etc. I just wanted to get this using a simple sql statement.

here is the sql:
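Reconstructed along those lines, it is roughly this (a sketch; the sql text lookup via gv$sqlarea is my shorthand, test before relying on it):

with blockers as (
  -- sessions that are currently blocking someone
  select distinct blocking_instance inst_id, blocking_session sid
  from gv$session
  where blocking_session is not null
),
blocker_hierarcy as (
  select s.inst_id, s.sid, s.username, s.sql_id, level lvl,
         connect_by_root s.sid root_sid, rownum rn
  from gv$session s
  start with (s.inst_id, s.sid) in (select inst_id, sid from blockers)
  connect by prior s.sid = s.blocking_session
         and prior s.inst_id = s.blocking_instance
)
select lpad(' ', (h.lvl - 1) * 4) || h.sid blocking_tree, h.username,
       (select substr(sq.sql_text, 1, 40)
          from gv$sqlarea sq
         where sq.inst_id = h.inst_id and sq.sql_id = h.sql_id) current_sql
from blocker_hierarcy h
-- drop trees whose root blocker is itself blocked; it is already listed under its own blocker
where h.root_sid not in (select sid from gv$session where blocking_session is not null)
order by h.rn;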

First, I get all blockers' sids (the "blockers" data source in the query), then I use them to generate the hierarchy (the "blocker_hierarcy" data source in the query), and then I remove some blockers from the hierarchy because they are themselves blocked by other sessions and will be listed under them.

for example, let’s say sid 10 is blocking 2 different sessions (sid 20 and 30) and also sid 30 is blocking some other session (sid 40). so this query will produce a result like this:
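Visually, something like this (sids only, illustrative):

10
    20
    30
        40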

You can see the hierarchy in a visual way, with the first 40 characters of each session's currently running sql statement.

Hope this will be useful for you.

wish you healthy days.


Subquery Caching

Hi,

I want to talk about a very sweet performance "trick". I love it when I do something small and gain a lot 🙂 Today, I will demonstrate subquery caching in Oracle and try to explain how it works and its limitations.

Probably, you are using some plsql functions in your select (or dml) statements. There is always a package that contains functions that calculate stuff or return constant strings like a "base url" or an "environment name" etc. Some of those functions have parameters too, and we call them from SQL. Calling a user defined plsql function from a sql statement is a performance problem, because it causes a lot of context switching. Let's say you have a function returning a constant string. To see the performance difference, I will add a sleep into the function so the problem is easier to see.
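A sketch of it (the url and the 1 second sleep are assumptions):

create or replace function sf_base_url return varchar2
is
begin
  dbms_lock.sleep(1);  -- simulate a slow business function
  return 'http://mustafa.base.url/';
end;
/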

This is our function. You need a grant on the DBMS_LOCK package to create it. Now we will call this function from sql statements; let's create a table and then call the function:
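Roughly (a sketch; the table name is an assumption):

create table tmp_subq as select level id from dual connect by level <= 100;

select id, sf_base_url from tmp_subq where rownum <= 2;
select id, sf_base_url from tmp_subq where rownum <= 4;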

When I call the sf_base_url function in the first select statement (2 rows) it takes 2 seconds, and in the second select statement (with 4 rows) it takes 4 seconds. This proves that our function is called as many times as the number of rows the query returns. Do you see the problem here? This function always returns the same result, but it is called more than once; actually, calling it just once is enough. Think about it on your databases: you have a business function that takes 10 seconds to complete and it is called in a sql statement that returns 1,000 rows. You will lose 9,990 seconds (we must call it at least once, so 10 seconds must be spent) calling your function unnecessarily.

This is where subquery caching comes in. We can tell Oracle to cache this function result and reuse the value. How we do it is a little bit interesting to me 🙂 we just use a subquery to call the function.
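Like this (a sketch):

select id, (select sf_base_url from dual) base_url
from tmp_subq
where rownum <= 4;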

As you see, I changed the sf_base_url call into a subquery that selects the function value from dual. The main query returns 4 rows (rownum <= 4) but it takes only 1 second! Our function is called just once and its result is reused for subsequent rows. Such a small change, and a very nice performance gain.

So far so good. How about a function with parameters? Let's change the function a little bit and give it 1 parameter, returning our parameter appended to the end of the base url.
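A sketch of the new version:

create or replace function sf_base_url(p_add in varchar2) return varchar2
is
begin
  dbms_lock.sleep(1);
  return 'http://mustafa.base.url/' || p_add;
end;
/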

Let's call this from sql and pass a column value as the parameter:
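Like this (a sketch):

select id, sf_base_url(id) my_string from tmp_subq where rownum <= 4;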

it takes 4 seconds as expected. Now, subquery caching:
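A sketch:

select id, (select sf_base_url(t.id) from dual) my_string
from tmp_subq t
where rownum <= 4;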

It also takes 4 seconds. Subquery caching didn't work? Of course it did; it is used, but this time our function is cached together with its parameter. Since every parameter value is different, our function is called for each different parameter and every row has a different my_string column value. We would see the performance gain if we had repeating values in our table. Let's do it:

Now our table has 3 rows each of A, B and C. Let's run the subquery caching version again.

Now it returns 12 rows (3 of each letter) but it takes 4 seconds, because our function is called only once for each distinct parameter and repeated parameters are read from the cache. Works like a charm.

Is there a limit to this? How many different parameters can be cached, and how does it work? Let's test.
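My setup was along these lines (a sketch; I switch to a fresh table t here, matching the names used below):

create table t as select level id from dual connect by level <= 256;
insert into t select id from t;  -- now 512 rows: 1..256, 1..256
commit;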

Now our table has 256 distinct rows, and then I double it by inserting all the rows again. Finally, we have 512 rows in the table. Run our subquery cached select statement again.

Interesting result. This table has 256 distinct values and 512 rows in total. Every row is doubled, but the subquery caching version and the normal call take almost the same time. Did subquery caching not work? Actually, it worked, but it can only cache a limited number of values. If there are more values than can be cached, new parameter values overwrite old cached ones. Our table rows are ordered like this: 1,2,3,4…256,1,2,3,4…256. It cached some values, but newly arriving rows eventually overwrote the cache and we couldn't take advantage of subquery caching. Let's order those rows:
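A sketch:

create table t2 as select id from t order by id;  -- same data, but ordered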

t2 is the same as table t, but its rows were ordered in the first place. When we run the subquery caching example on t2, it takes 270 seconds, almost half of the previous run on table t. Now it was able to use the cached values before they were overwritten.

Edit:

I originally wrote that my tests indicated only 8 different values were cached, not 255, but my further tests show that this is not true. I assume Oracle uses a hash function to store the data, and while hashing our function and parameter some collisions occur; that is why some of the function results are overwritten frequently. This is not something we can change. Of course, consecutive identical calls are cached and take advantage of subquery caching.

Thank you very much for reading; any comments are welcome.

wish you all healthy days.

Why should you use PLSQL_OPTIMIZE_LEVEL as 3

Hi,

As you know, the PLSQL_OPTIMIZE_LEVEL parameter is used to optimize plsql code at compile time, and it is really important. By default its value is 2, but you can increase it to 3, and you should; here is why.

Before I begin: this is an extension of this post: http://mustafakalayci.me/2020/06/26/performance-gain-of-nocopy-parameters/

Many thanks to Stephan Borsodi, who contacted me over LinkedIn and told me that Java developers use database object types to interact with the database. I actually remember that an insurance company (I had trained their developer teams years before) that uses Java has objects in the database. So, Stephan added an object type test script for testing the performance gain of NOCOPY parameters. In his tests there was no performance gain for object type plsql parameters, while in my test I saw more than a 20 times performance gain for nocopy parameters. We discussed the situation, and eventually I realized that the plsql optimize level does really good work here.

So what was the difference? I did my tests after setting plsql_optimize_level to 1, and with that, nocopy parameters for object types are 20 times faster. I won't rewrite the whole test code here, you can find it in my previous post (http://mustafakalayci.me/2020/06/26/performance-gain-of-nocopy-parameters/), but he added the object type SP_MAIN and SP_PROCESS procedures to my test package; here are their codes:
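A rough sketch of the idea (names and sizes are assumptions; see the previous post for the real harness):

procedure sp_process(prm in out nocopy t_obj_arr) is
begin
  null;  -- the body only receives the parameter; the copy cost is what we measure
end;

procedure sp_process_copy(prm in out t_obj_arr) is
begin
  null;
end;

procedure sp_main is
  v_arr t_obj_arr := t_obj_arr();
begin
  v_arr.extend(100000);
  for i in 1 .. v_arr.count loop
    v_arr(i) := t_obj(i, 'name ' || i);
  end loop;
  for i in 1 .. 1000 loop
    sp_process(v_arr);       -- time this (nocopy)
    sp_process_copy(v_arr);  -- against this (copy)
  end loop;
end;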

I must say that his coding is much tidier than mine 🙂

here is the object type and the array type of that object which Stephan used:
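Something like this (a sketch; the names are assumptions):

create or replace type t_obj as object (id number, name varchar2(100));
/
create or replace type t_obj_arr as table of t_obj;
/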

Compile the test package before starting the test; I use 1 as the plsql_optimize_level:
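Like this (pkg_nocopy_test is a placeholder name for the test package):

alter package pkg_nocopy_test compile plsql_optimize_level = 1;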

then run the test:

So, it is an amazing result: more than 30 times faster with NOCOPY parameters. But why did Stephan's results show no performance difference? I decided to run this code on a different database than my local db, and I made a mistake! I forgot to compile the package with plsql_optimize_level 1; it was 2, the default value, and there was no performance gain:

The results were almost the same, and it was so fast that while I ran the first test 1,000 times, I ran this second one 1,000,000 times and it was still extremely fast: 1 million parameter calls completed in 0.22 seconds. Then I got curious about plsql_optimize_level 3 and tested it, of course:

And the result was even better: 5 times faster than plsql_optimize_level 2! Of course this is the purpose of plsql_optimize_level, but this is a really big performance gain for a very, very small change.

Should we stop here? Of course not 🙂 Let's also add the PLSQL_CODE_TYPE parameter. It is INTERPRETED by default; let's make it NATIVE (with plsql_optimize_level 3 too) and see the difference one more time.
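Again a sketch, with the same placeholder package name:

alter package pkg_nocopy_test compile plsql_code_type = native plsql_optimize_level = 3;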

Yes, now it is even 2 times faster than the plsql_optimize_level-only version. So, if you are a Java developer, or a database developer who runs a Java application and uses database objects (or collections of objects, table collections), then you should consider compiling your plsql objects with plsql_optimize_level 3 and plsql_code_type NATIVE. Of course all calls will be faster with plsql_optimize_level 3, but no other data type showed this much (10 times in our example) performance gain; for example, when I ran the same code for VARCHAR2 data types, it was approximately 2 times faster.

This is the type of performance tuning that I like most: with very low effort, gain a lot 🙂 I would like to thank Stephan one more time for improving my code and finding out something beautiful about Java applications and db objects.

wish you all healthy days.

ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT While Compiling a Huge PLSQL package

Hello everyone,

Lately, I faced an interesting ORA-04036 error at one of my clients' development databases. My notification modules alerted me about the ORA-04036 error, and just as I started to investigate the trace files (and saw one package name frequently), one of the developers sent me an email about the error they got. They were trying to compile a package and got this pga error. It was interesting because they weren't even trying to change the code; they were just trying to compile the package, and after a minute Oracle raised ORA-04036.

So the problem was obviously related to that package, and when I opened the package source code, it was a bit of a shock to me. The package contains 164 procedures/functions and the source code has more than 24,000 lines! This is the biggest package I have ever seen (and I have worked with really big/old/complex systems). At a quick look, unfortunately, many developers had added many things to the package and not all of them are related; they used this package as a container for everything. Many procedures/functions are not even related by code or business logic.

Since this is a development database, the server is really small (14GB memory, 2 cpu cores) and weak. Compiling source code that big causes too much PGA usage and finally raises the error. As you can guess, the first request from the dev team was to increase the memory or remove the pga limit, and my reaction was an absolute no to that as well 🙂 having a package that big is bad from every angle! I will explain the solution (workaround) at the end of my post, but let's talk about plsql objects and their sizes first.

Oracle says (in Oracle University content) that a plsql procedure or function performs well at up to 60-70 lines of code. This might be a little utopian, but true. Packages consist of procedures and functions, and packages should contain only related procedures/functions. Let's say you have some conversion functions to standardize converted strings; you shouldn't put them into an invoicing-related package. They should have their own package and be used from there. From that point of view, a package can contain 30-35 procedures/functions at most. Remember, you should divide your code as much as you can; that way you will be able to manage your code more easily and keep it far away from spaghetti coding. If we say a procedure or function should be at most 100 lines of code, a package shouldn't be more than 3,000-3,500 lines.

Packages are copied into the user's pga area before they run. You might be familiar with the "ORA-04068: existing state of packages has been discarded" error: if your package has global variables and the source code has been changed by another user, you get this error because the source code copied into your pga area is now different and all the code must be reloaded. Even if you use just one procedure or function from the package, all of the package's code is loaded into memory. That is really unnecessary. Keep your packages small.

How did we overcome the error? As I said before, this server is really small and we were already using almost all of its resources. As a workaround, I talked to the developers and told them to split that code into another package. Of course, the development team has some serious deadlines currently, and it is not realistic to ask them to separate the code the way it is supposed to be (create new packages grouped by business logic and call those new packages from the application etc.), but we talked about this job and it must be done at some point; they created the necessary action records for that as well. As the immediate solution, I told them to just create a new package, move the biggest procedures/functions to the new package, and call the new package from the current package's procedures and functions.

PKG_CURRENT
SP_X -- 1000 lines
SP_Y -- 1000 lines
SP_Z -- 1000 lines
SP_Q -- 1000 lines

create new package and move some code into it:

PKG_NEW
SP_X -- 1000 lines
SP_Y -- 1000 lines

and call this new package from the old one so you don’t need to change your application code.

PKG_CURRENT
SP_X => begin pkg_new.sp_x; end; -- 4 lines
SP_Y => begin pkg_new.sp_y; end; -- 4 lines
SP_Z -- 1000 lines
SP_Q -- 1000 lines
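In code, the wrappers would look roughly like this (a sketch; signatures are hypothetical):

create or replace package body pkg_current as
  procedure sp_x(p_param in number) is
  begin
    pkg_new.sp_x(p_param);  -- just forward the call to the new package
  end sp_x;

  procedure sp_y(p_param in number) is
  begin
    pkg_new.sp_y(p_param);
  end sp_y;

  -- sp_z and sp_q keep their original 1000-line bodies here
end pkg_current;
/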

That way the current package becomes much smaller and you don't need to change a thing in the application. They took the action and it worked. Of course this is not the proper fix; as I said, this package should be split into many packages based on business logic and dependencies. They will do that too.

While ending my post here, I would like to remind you of those wise words: keep it simple, stupid.

wish you all healthy days.