3 Ways to Migrate a Non-CDB Database to a PDB

Hi Everyone,

Time flies and everything is changing constantly. Multi-tenant architecture arrived many years ago and, starting with 21c, it is the only option! So, old "non-CDB" databases must be migrated/converted to a PDB (pluggable database). I must admit that I was lazy for many years and didn't work much on learning new things, and now I am literally trying to learn everything. One of my first goals was the multi-tenant architecture, and now that I feel confident enough about it, I will move my clients' databases to multi-tenant. Also, I must say, I love the multi-tenant architecture.

I won't go into detail about what multi-tenant or a PDB is in this post; I will cover only the migration/conversion.

I have been testing the migration of non-CDB databases to PDBs for a long time, and I want to explain the three main methods to do it:

  • Plugging in a noncdb as pdb
  • Creating a pdb from noncdb over a database link (dblink)
  • Autoupgrade

Methods like export/import can be used too, of course, but I don't think they count as a "migration", so I won't talk about them in this post. Let's start then.

Environment

For all three methods the same environment is used: Oracle Linux 8.5 (VirtualBox), Oracle Database 19.14 Enterprise Edition. (I will note the 21c differences within the post.)

Noncdb Database SID        : orcl
Multi-tenant Database SID: cdb

Both databases are in the same home (they could be in different homes; that would change nothing).

I always use OMF, so I won't be using the FILE_NAME_CONVERT clause in my examples. If you don't use OMF (I don't know why you wouldn't), then you must use FILE_NAME_CONVERT and point it to the necessary directories.

I put around 4.5 GB of data in the orcl (non-CDB) database, because the amount of data will affect the completion time of the process.

By the way, I will suppress some outputs to keep this short and emphasize the real results.

Pre Steps

The steps I am about to show should (must, if you ask me) be done whatever method you are using to migrate your database. You must always check whether the non-CDB database can be plugged in as a PDB. To do that, you basically use the simple PDB plug-in method. Of course, in this case we cannot "unplug" our non-CDB database because it is not a PDB, so what do we do?

As you know, you can plug a PDB into and unplug it from a container database in the multi-tenant architecture; this is the coolest option, I think. Just as you plug in a PDB, you can plug in a non-CDB as a PDB. While plugging in a PDB, we need an XML file that describes the PDB's properties and the necessary information about it, like compatibility, NLS info, data file locations etc. To plug in a non-CDB, we need that file as well, and Oracle already provides a package to create it: DBMS_PDB. The DESCRIBE procedure in this package creates the XML file we need.

If you are going to use the PLUG IN method, put your source database (the non-CDB) into a transactionally consistent state, which means open it in READ ONLY mode. The reason is that the XML file also contains SCN information, and if the DB is up and running the SCN will move; when you try to plug in the database you will get an error saying that the data file SCN and the description (XML) file do not match. The other two methods don't require this! I personally prefer it for the DB link method too; I don't want any process doing work while I am converting a non-CDB to a PDB.

For the plug-in method:

I connect to the "orcl" database and create the necessary XML file:
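Something along these lines (a minimal sketch; the file location is just an example):

shutdown immediate;
startup open read only;
begin
  -- the target path of the XML file is illustrative
  dbms_pdb.describe(pdb_descr_file => '/home/oracle/orcl.xml');
end;
/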

Now we have an XML file that describes our non-CDB database.

Then check whether this database is compatible with our container database. Connect to the CDB and check the XML file:
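For example, a check like this (a sketch; the path is the file created above):

set serveroutput on
declare
  compatible boolean;
begin
  compatible := dbms_pdb.check_plug_compatibility(
                  pdb_descr_file => '/home/oracle/orcl.xml',
                  pdb_name       => 'ORCL');
  dbms_output.put_line(case when compatible then 'YES' else 'NO' end);
end;
/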

DBMS_PDB has a function called CHECK_PLUG_COMPATIBILITY. This function reads the XML file and returns true or false. If it is true, you can plug this non-CDB in as a PDB (well, maybe not so fast); if the result is false, you must check what is wrong. Even if the result of this function is true, you must check the "violations". Every CDB has a data dictionary view called PDB_PLUG_IN_VIOLATIONS: after running CHECK_PLUG_COMPATIBILITY, errors/warnings/infos are written to this view and you must check them before plugging in the database.
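You can then look at the violations with a simple query, for example:

select name, cause, type, status, message
from   pdb_plug_in_violations
where  name = 'ORCL'
order  by type, cause;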

If you see any ERROR in this view, you must not proceed until you solve the problem. You might see various issues here, like different database parameters; for example OPEN_CURSORS could be 500 on the non-CDB but 300 on the CDB. Remember, the CDB parameter values will override the previous ones. (I will explain the "PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run" warning later.)

Some important points are the DB OPTIONS. Your databases might have different options installed. Let's say your non-CDB database has the SPATIAL option installed but it is not installed on the CDB; if so, you must install the SPATIAL option before plugging in the database. How do you install options on a database? You can check my previous post about it: https://mustafakalayci.me/2022/03/03/how-to-install-oracle-database-options-like-spatial-label-security-etc-after-db-creation/

As a rule, your non-CDB's COMPATIBLE parameter value must be equal to or lower than the CDB's. If it is lower, you will see a row about it in PDB_PLUG_IN_VIOLATIONS.

If you resolve a problem listed in PDB_PLUG_IN_VIOLATIONS, the row will still be there. Even after solving a "warning" problem (like installing an option), it will be listed as PENDING, not RESOLVED; it only changes after the pluggable database has been opened.

As I said, these steps should be mandatory for you before migrating a non-CDB to a PDB. After checking the violations, you can proceed to the plug-in operation.

1- Plugging in a Non-CDB as a PDB

At this point we should have checked the XML file and solved the violation problems. Please remember, you should have created the XML file while the non-CDB database was in READ ONLY mode!

Now we have the XML file, so we can plug in the non-CDB as a PDB. Before doing that, shut down the source database (the non-CDB).

Then connect to the CDB and create a pluggable database via plug-in:
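A minimal sketch (the XML path matches the file created above; the PDB name is up to you):

-- COPY is the default; MOVE or NOCOPY can be used instead, as discussed below
create pluggable database orcl using '/home/oracle/orcl.xml' copy;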

I don't want to go into the details of CREATE PLUGGABLE DATABASE via plug-in, but as a summary: you can use a different PDB name; instead of orcl you can use orcl_pdb, for example. Also, the default mode for plugging in a PDB is COPY, which means it will copy the data files that orcl.xml points to (they are currently under the /u01/app/oracle/oradata/orcl directory). The plug-in will copy them under the /u01/app/oracle/cdb directory (wherever db_create_file_dest points). This way, the non-CDB database (orcl) will still be available (it is in a shutdown state).

It takes 10 seconds to complete on my laptop. As I said before, there is 4.5 GB of extra dummy data in this database; the total size is around 7.5 GB. The 10 seconds is the time spent copying the data files to the new location. If you use MOVE instead of COPY, it will take much less time.

As you see, it takes only 0.68 seconds. The size of the data does not matter now, because the data files are MOVEd to another location and this is a very fast operation; even if you have 1 TB of data it will be completed in seconds. Also, if you are satisfied with where the data files currently reside, you can use the NOCOPY option too.

If you use the same name as the non-CDB database, you must make sure the non-CDB is closed, because otherwise there would be two services with the same name.

You can use this method even if your databases are on different servers: just create the XML file on the non-CDB, move the data files and the XML file to the new server, edit the XML file (the file locations have probably changed, so point them to wherever you copied the data files) and then run the CREATE PLUGGABLE DATABASE command.

Always check the PDB_PLUG_IN_VIOLATIONS view after creating the PDB, because there might be new issues.

Now we have a new PDB called orcl:

NOTE: Another amazing option we have is that we can create a PDB from a lower-version non-CDB. For example, if you have a 12.2 non-CDB database, just install 19c and follow the same steps. Cool, right? There is only one thing to do: before calling noncdb_to_pdb.sql, run the "dbupgrade -c "ORCL"" command in the shell. dbupgrade is kind of a shortcut for catupgrd.sql; it will upgrade your newly created pluggable database first.

As you see, it is in MOUNTED mode, which means it is closed, but we CANNOT open it yet on a 19c database. For 19c, we must connect to the newly created PDB and run the noncdb_to_pdb.sql script, which is under $ORACLE_HOME/rdbms/admin.

It will take a while. As I said, don't open the PDB; set your container to the new PDB and then run noncdb_to_pdb.sql. The question mark means ORACLE_HOME in sqlplus, so I basically called @/u01/app/oracle/product/19c/dbhome_1/rdbms/admin/noncdb_to_pdb.sql
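In short, something like this (a sketch):

alter session set container = orcl;
@?/rdbms/admin/noncdb_to_pdb.sql
alter pluggable database orcl open;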

21c UPDATE: After the plug-in (CREATE PLUGGABLE DATABASE) command, you don't need to run dbupgrade (if the non-CDB is a lower DB version) or the noncdb_to_pdb.sql script. When you run the ALTER PLUGGABLE DATABASE OPEN command, 21c will run them automatically, but the OPEN command will, of course, take some time to complete.

That is it.

2- Create PDB from Non-CDB via DBLINK

This method is much easier for an admin but takes much more time to complete. The good thing about it is that your source DB can stay open during the operation, and when the PDB creation is completed, it will apply the archived redo of the non-CDB database, so the PDB will be transactionally consistent as of the END of the PDB creation.

Do the PRE STEPS first as described above: create the XML file and check for violations. If everything is OK, you can proceed.

This method just requires a database link (dblink) from the CDB to the non-CDB. The dblink must be created in the CDB$ROOT container. First, create a user on the non-CDB database that will be used by the database link to connect.

Remember, this user must have the RESOURCE and CONNECT roles and the CREATE PLUGGABLE DATABASE privilege. After that, create the database link on the CDB:
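For example (a sketch; the user name, password and TNS alias are illustrative):

-- on the non-CDB (orcl)
create user clone_user identified by clone_pwd;
grant connect, resource, create pluggable database to clone_user;

-- on the CDB, in cdb$root ('ORCL' is an illustrative TNS alias for the non-CDB)
create database link orcl_link connect to clone_user identified by clone_pwd using 'ORCL';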

From now on we can create a PDB from the remote link:
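Roughly like this (the dblink name is the one created above; the PDB name is up to you):

create pluggable database orcl_from_link from non$cdb@orcl_link;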

It completed in 10 seconds; if the source database were bigger it would take much more time. Since a non-CDB has no PDB name, we use NON$CDB.

As I explained in the previous method, for 19c, if you created the PDB from a lower version, run the dbupgrade command; after that, you must set the current container to the new PDB and run noncdb_to_pdb.sql.

If this is 21c, you can simply open the PDB; 21c will do the job, but the OPEN command will take some time.

I am not sure whether this is supported (I haven't checked yet; test it, of course), but I have used this method to create a PDB from a non-CDB on a different server and a different OS, and it worked!

3- Autoupgrade

The new king of upgrades/conversions/migrations 🙂 AutoUpgrade is an amazing tool and supports many options now. In fact, I just read on Mike Dietrich's blog that version 2 was released while I was writing this post ( https://mikedietrichde.com/2022/03/25/autoupgrade-2-0-has-been-released-and-got-reuploaded/ ).

You can use AutoUpgrade to convert a non-CDB database to a PDB. AutoUpgrade will take care of everything, even the pre and post steps; it will even run the fix-up scripts.

I would still do my "PRE STEPS" to check the operation.

AutoUpgrade is a jar file and requires Java to run, but no worries, everything you need is in the ORACLE_HOME. Before taking action, always download the latest version of autoupgrade.jar from https://support.oracle.com/epmos/faces/DocumentDisplay?id=2485457.1 — it is a single jar file; put it under $ORACLE_HOME/rdbms/admin for 19c and above.

AutoUpgrade has different modes and I won't detail all of them here. What we use here are the ANALYZE and DEPLOY modes. ANALYZE mode runs through all the steps while skipping the actual operations and shows you a result; after that you can run it in DEPLOY mode, which does the actual job. AutoUpgrade requires a config file to identify what to do and the necessary info. Here is my sample:
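A minimal sketch of such a config file (paths and names are illustrative, not my exact file; check the AutoUpgrade documentation for the full parameter list):

# paths, SIDs and the PDB name below are examples
global.autoupg_log_dir=/home/oracle/autoupgrade_logs
upg1.sid=orcl
upg1.source_home=/u01/app/oracle/product/19c/dbhome_1
upg1.target_home=/u01/app/oracle/product/19c/dbhome_1
upg1.target_cdb=cdb
upg1.target_pdb_name=my_pdb
upg1.start_time=NOW
upg1.timezone_upg=no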

It is quite self-explanatory: the source is upg1.sid, which is the non-CDB, and upg1.target_cdb is cdb, from which AutoUpgrade understands that this is a non-CDB to PDB conversion. You can give your PDB a new name (my_pdb in my example), define a start time, set whether a time zone upgrade should be performed, etc. I saved this file as /home/oracle/noncdb_to_pdb.cfg.

EDIT: I forgot to mention something. By default, AutoUpgrade will remove your non-CDB! It creates the PDB with the MOVE option, so the data files of the non-CDB are gone. If you want to prevent this, use KEEP_SOURCE_PDB=YES in your config file, like:

upg1.keep_source_pdb=yes

Before running AutoUpgrade: this is an important note. It might already be solved, because the AutoUpgrade team is really fast, but I had some problems because of the glogin.sql file. For details: https://mustafakalayci.me/2022/03/20/666/ — thanks to Mike Dietrich and Joseph Errede, they took care of it really fast.

Just run AutoUpgrade in ANALYZE mode first:
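For example:

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config /home/oracle/noncdb_to_pdb.cfg -mode analyze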

Then check the result and run it in DEPLOY mode. DEPLOY mode starts a CLI (command line interface), so you can watch the steps by typing the "lsj" command:
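The same command, just with the mode changed:

java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config /home/oracle/noncdb_to_pdb.cfg -mode deploy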

When the operation is over, it will automatically exit the CLI and print the log file locations. As you see in my example, it has already run noncdb_to_pdb.sql. Yes, all of those steps are automatic in AutoUpgrade.

here is the end:

here is the html file screen shot:

that’s it.

STOPPED Job While Migrating Non-CDB to PDB via Autoupgrade

Hi,

Lately, I have started to learn about AutoUpgrade, which is the new upgrade tool for almost anything. Many thanks to Mike Dietrich for his amazing blog posts ( https://mikedietrichde.com ). His blog is a real treasure.

I am about to migrate some non-CDBs to PDBs, and before doing that I wanted to learn my options for how to achieve it. I will write a blog post about three ways of migrating a non-CDB to a PDB soon; one of them is, of course, AutoUpgrade, and I have been trying to make it work for the last 10 days! All my efforts ended with a STOPPED AutoUpgrade job at the "compatibility check" step, and it drove me crazy. You should know that over those 10 days I tried tens of different combinations and every one of them failed! Finally (by pure chance) I realized what had happened. Let me explain.

I don't want to go into the details of the AutoUpgrade non-CDB to PDB operation in this post, so I will just share my config file and the command to run AutoUpgrade.

Here is the test case:

Oracle Database EE 19.14 on Oracle Linux 8 (virtual machine on Windows 11 host).

The latest autoupgrade.jar is downloaded from Oracle Support.

Oracle Home is the same for both cdb (sid: cdb) and non-cdb (sid: orcl) databases.

here is my autoupgrade config file:

Then run AutoUpgrade to migrate the orcl DB into the CDB as a PDB:

I have read so many documents and blog posts about this, and every time I ran this command it started the AutoUpgrade command line interface (CLI), and when I queried my job with the lsj command, I got this result:

This is instant. I typed lsj immediately when I reached the CLI and saw that the upgrade job was STOPPED (the status is running but the operation is stopped and the job is not doing anything). So I checked the log files, of course, and here is the error:

So an ArrayIndexOutOfBoundsException is thrown. Also, in the noncdb_to_pdb_orcl.log file I got this:

This made me think there might be a compatibility problem or a violation, but actually there were none! I spent days trying to understand this 🙂

After tens of AutoUpgrade tests and reading hundreds of log files, with some luck and coincidence (and my hard work, of course), something got my attention: in the log files I saw some familiar code of mine! Where did I use that code and why was it in the AutoUpgrade logs? It was my "glogin.sql" code. For those who are not familiar with glogin.sql, it is an initialization script for the sqlplus tool, located in the $ORACLE_HOME/sqlplus/admin directory. I use glogin.sql to set some defaults, like putting the DB name in the prompt or setting the SID as the terminal header so I can always see my current sqlplus SID, etc., but none of those was causing this. The guilty one is:
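The line in question (from my glogin.sql):

set timing on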

Yes! I had "set timing on" in my glogin.sql file, so I always get the execution time of my SQL statements, and somehow AutoUpgrade does not run a "set timing off" command, which causes some unexpected results. As you know, SET TIMING ON appends the execution time to the end of your commands' output; probably some scripts take it as part of a query result. I removed it and re-ran the AutoUpgrade tool; everything worked smoothly and perfectly, and the database was migrated to a PDB.

So, I spent 10 days because of "set timing on", but at least I learned a lot and solved a problem of mine, and I am proud of it 🙂 I hope this helps whoever has "set timing on" in their glogin.sql file.

Wish you healthy days.

Peace at the Home, Peace in the World (Mustafa Kemal ATATÜRK) .

How to install Oracle Database Options like Spatial, Label Security etc After DB Creation

Hi,

Yes, I am aware that the title is a little long and too informative, but I believe it is necessary, because the terminology is confusing, at least to me.

When you say "option" it could be many things, like the Advanced Compression option in EE, or the Spatial option, or the Partitioning option, and not all of these are the same kind of "option"; they have different backgrounds and structures. Sometimes they are called "components" too, but in the documentation they are always mentioned as options. I want to write about "options" like Spatial, JServer, Label Security, Data Vault, OLAP... Do you see where I am going? These are the "options" in the dbca utility that you choose while creating the database (only if you choose "custom database", not general purpose or data warehouse as the DB type), but I will also talk about the Partitioning and RAT (Real Application Testing) options, because their namespaces intersect with some of the "options" (OK, I will stop putting double quotes around the word option).

I didn't care much about options until now, and I wanted to learn how to install/uninstall them after DB creation and what options we have. I started to dig through the documentation and realized that it is a little more complicated than I thought. What I focused on were the options in dbca, as I said:

Here they are. So instead of searching the docs and Google first, I used a small trick: I checked the "Generate Scripts" option on the last page of dbca and looked at the scripts. Fortunately, Oracle puts every option's creation into its own script, so it is really easy to find them. But before talking about the scripts, I want to talk about the option types.

Shared Objects & DLLs

Some options are made of just simple scripts, but some of them go deeper! For example, if you want to install the Spatial option (to use geographic coordinates in your DB) you just need to run a SQL script, but for OLAP you need to "link" some objects into the Oracle kernel and then run some scripts. What does that mean? Some options are not just tables/packages etc.; they need some special C functions in the Oracle kernel, but not everything is in the kernel. Shared objects on Linux and (the better-known) DLL files on Windows are widely used to add or remove functionality from an application. In the end, Oracle is an application and uses those dynamic libraries. I am not a software engineer, so this part is a little above my pay grade, but I will try to explain. On Linux, if you want to add functionality to your application via shared objects (DLLs on Windows), you must "link" them into your application with the "make" command. For example, to use OLAP it must be linked into your Oracle binaries on Linux; likewise, the necessary DLLs must be available on Windows. Remember, this operation is not per database but per Oracle home: when you do this, all current and future databases in this Oracle home will (or will not) be able to use these options.

Of course, it is not that easy 🙂 Some options must be linked into the Oracle binaries but don't need a SQL script. So, if we group the options by their installation type, we have three different types:

  1. SQL Scripts (for ex: spatial)
  2. Linking Binaries (for ex: unified auditing)
  3. Linking Binaries & SQL Scripts (for ex: OLAP)

To install Spatial, you just need to run the $ORACLE_HOME/md/admin/mdinst.sql script as SYSDBA. That is it. This script will create the MDSYS user and the objects necessary to use the Spatial option.

To enable the Unified Auditing option in the database, you just link some binaries, but you don't need to know everything, of course; you can check this post of mine: http://mustafakalayci.me/2021/09/08/enable-unified-audit-on-linux-and-windows/

spoiler:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle

And to install OLAP, you must first link the OLAP module and then run the $ORACLE_HOME/olap/admin/olap.sql script as SYSDBA.
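The linking step is the same kind of make call as above (a sketch; the olap_on target is listed in the switch table below):

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk olap_on ioracle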

Linking Binaries & chopt

For the linking operation, Oracle provides a simple tool called chopt, short for Change Option. Instead of running the "make" command, you can use chopt to enable and disable "some" options in the Oracle kernel; it basically just calls the "make" command itself. It supports very few options:

As you see, you can enable/disable OLAP, Advanced Analytics, Partitioning and RAT. I want to repeat: when you use chopt, you are linking/unlinking some binaries, so this will add or remove some components from the Oracle kernel, and currently available and future databases will (or will not) be able to use these options. Usage of chopt is very easy:
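For example (run it from the Oracle home you want to change, with everything in that home stopped, as the warning below says):

chopt enable olap
chopt disable rat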

If you run it on Windows, it just renames a DLL (to enable an option it renames the DLL to the correct name, and to disable it, renames it to something else).

chopt has different options from version to version. For example, in old versions there was a module called DM (Data Mining), but in 19c it is under Advanced Analytics, so if you want to enable/disable Data Mining you must use the "chopt enable/disable oaa" command. Also, you can see that Partitioning is an option too; if you don't have a Partitioning license, you can disable the partitioning option so that none of your developers can create a partitioned table and cause (maybe) thousands of dollars in cost.

Only use the options you need, nothing more. Consider that unnecessary options will increase DB upgrade times too. Also, as in the Partitioning example, some options are licensed options, which means you have to pay to use them; if you don't have the license, disable them.

As far as I understand, there has been a historical evolution here. For example, in old versions Spatial was an option that needed to be linked too; sdo_on and sdo_off were used to enable/disable Spatial.

As you see, when you try to turn it off now, it warns us that "sdo_off" is disabled; Spatial is always on nowadays.

Here are some switches used to enable/disable options in the Oracle kernel, but some of them (like Label Security) are deprecated:

Product/Component Enable Switch Disable Switch
Automated Storage Management asm_on asm_off
Oracle Data Mining dm_on dm_off
Database Vault dv_on dv_off
Oracle OLAP olap_on olap_off
Oracle Label Security lbac_on lbac_off
Oracle Partitioning part_on part_off
Real Application Cluster rac_on rac_off
Real Application Testing rat_on rat_off

Also, please consider that if you disable an option like OLAP using chopt (or the make command), it will be disabled in dbca too; you won't be able to choose it:

WARNING: before using chopt or make commands, always stop all services running from that Oracle home first (database, listener etc.). On Windows servers, also stop the related Oracle services in Windows Services (like OracleServiceORCL...).

Dbca Options

These are the options that you can install into a database (binary linking might be required); you saw them in the screenshot above. Each of them has some unique features (I won't go into the purposes of the options) and some of them depend on others. So, the basic hierarchy is like this:

  1. Oracle JVM
    1. OLAP
    2. Multimedia
  2. Oracle Text
  3. Spatial
  4. APEX
  5. Label Security
    1. Database Vault
  6. Extension for .NET

To be able to install Multimedia you must have Oracle JVM first, whereas you can install APEX or Label Security without installing anything else. What are the scripts to install them? You should always check the documentation or support for this, but as an example I wanted to list them for version 19.13. DO NOT run them directly! Always check the documentation first; the scripts might have changed! These samples are taken from dbca-generated scripts.

  1. Oracle JVM
    $ORACLE_HOME/javavm/install/initjvm.sql;
    $ORACLE_HOME/xdk/admin/initxml.sql;
    $ORACLE_HOME/xdk/admin/xmlja.sql;
    $ORACLE_HOME/rdbms/admin/catjava.sql;
    connect "SYS"/"&&sysPassword" as SYSDBA
    $ORACLE_HOME/rdbms/admin/catxdbj.sql;
  2. OLAP
    $ORACLE_HOME/olap/admin/olap.sql SYSAUX TEMP;
  3. Multimedia
    $ORACLE_HOME/ord/im/admin/iminst.sql;
  4. Oracle Text
    $ORACLE_HOME/ctx/admin/catctx.sql Xbkfsdcdf1ggh_123 SYSAUX TEMP LOCK;
    alter user CTXSYS account unlock identified by "CTXSYS";
    connect "CTXSYS"/"CTXSYS"
    $ORACLE_HOME/ctx/admin/defaults/dr0defin.sql "AMERICAN";
    connect "SYS"/"&&sysPassword" as SYSDBA
    alter user CTXSYS password expire account lock;
    $ORACLE_HOME/rdbms/admin/dbmsxdbt.sql;
  5. Spatial
    $ORACLE_HOME/md/admin/mdinst.sql;
  6. Apex
    $ORACLE_HOME/apex/catapx.sql Xbkfsdcdf1ggh_123 SYSAUX SYSAUX TEMP /i/ NONE;
  7. Label Security
    $ORACLE_HOME/rdbms/admin/catols.sql;
  8. Database Vault
    $ORACLE_HOME/rdbms/admin/catmac.sql SYSAUX TEMP;
  9. Extension for .NET
    $ORACLE_HOME/rdbms/admin/dbmsclr.plb (in windows only)

How to Install Dbca Options

Well, I pointed out the scripts and you can run them (for the same version, and always check the documentation first), but of course it is not that simple 🙂

This is very important! The installation method depends on whether you are using a container database or a non-container database. If you are not using a container database (the old design, which is no longer supported starting with 21c), you can run the scripts by connecting as a SYSDBA user, but for a container database it is more complicated.

For a container database, most of those scripts must be run in CDB$ROOT and then in all the PDBs, but this is a tedious job and open to mistakes. Instead, use the catcon.pl perl script to automate it. Here is a Spatial sample creation command taken from dbca-generated scripts:
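Roughly like this (a sketch; the log base name and log directory are illustrative, and the exact catcon.pl switches may differ between versions):

# -n worker processes, -d script directory, -b log base name, -l log directory
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -d $ORACLE_HOME/md/admin -b mdinst_output -l /tmp mdinst.sql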

It seems complicated, but it is mostly the paths that make it long: it simply calls a perl script and passes the option's original script as a parameter. Always use catcon.pl to install/uninstall an option in a container database.

How to Check Which Options are Installed

This is (as always) not that simple 🙂 Some options are "registered" options, so you can list them via the DBA_REGISTRY data dictionary view:
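A simple query like this will do:

select comp_id, comp_name, version, status
from   dba_registry
order  by comp_id;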

These rows do not map to the options exactly. The OLAP option is listed as two rows, "Oracle OLAP API" and "OLAP Analytic Workspace". Also, some components are installed by default, like "Oracle Workspace Manager".

Is that all? Of course not. Those are the registered options (components), but there are also options in the Oracle kernel, like Unified Auditing, which do not depend on a SQL script. For all options in the kernel, use the V$OPTION view:
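For example:

select parameter, value
from   v$option
order  by parameter;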

I just listed some of the rows because it is a long list.

Uninstall an Option

Uninstalling an option is not that easy, because some options create specific objects in different schemas (like SYS), and to uninstall one you must either find every object used by the option or, again, use scripts that either Oracle Support provides or that exist under the Oracle home directory.

To uninstall Spatial, for instance, you can simply drop the MDSYS schema, but there might be tables that use spatial data types, indexes etc.; you must find and drop them first. Even if it seems easy, there is always a catch. So, be careful and always follow the support documents about it.

In Conclusion

So, it is a deep topic and there is probably much more to it. I said this before, but again: remove all unused options from your database; they will be a burden on your system. So far this is what I have understood and learned... I hope it is useful and helps you understand the DB options.

As always, wish you healthy days, NO WAR in anywhere, all life in this planet matters.

Tables with Memoptimize for Read (fast lookup)

Hi,

I would like to share my initial thoughts about Memoptimized for Read (fast lookup) tables in this post. Fast lookup tables came with 18c and they are here to provide faster access to a table when you use the primary key with an equality condition. This is basically a single table hash cluster: instead of using the primary key index, Oracle creates a hash index, and searching a hash index should be much faster than a unique index.

First, create the test environment.

To use a fast lookup table, you must set the memoptimize_pool_size parameter to a number higher than 0. This is a static memory component in the SGA used to store the hash index of the table; this memory component is not resizable, which means it won't grow or shrink with automatic shared memory management.
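For example (a sketch; the size is illustrative, and since the parameter is static the instance must be restarted):

alter system set memoptimize_pool_size = 512M scope = spfile;  -- 512M is just an example
shutdown immediate;
startup;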

Create a table for fast lookup:
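Something along these lines (the second column is illustrative; the table name is the one used in the test below):

create table tmp_optimize
(
  id   number primary key,
  data varchar2(100)        -- illustrative payload column
)
segment creation immediate
memoptimize for read;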

Segment creation must be IMMEDIATE, otherwise you will get an error! Your table must have a primary key as well; since the whole structure is based on the PK, it is logical to have one. I inserted roughly 146,000 rows into the table.

EDIT: I also tried these tests with 14 million rows, but the results were similar.

This is not enough! You must use the DBMS_MEMOPTIMIZE package to start using this table as a fast lookup table.
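Roughly like this (a sketch; the schema name is illustrative):

begin
  -- schema name below is illustrative; use the owner of the table
  dbms_memoptimize.populate(schema_name => 'MUSTAFA', table_name => 'TMP_OPTIMIZE');
end;
/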

let’s see what is happening now:

I skipped the first execution's statistics! You will see a small number of consistent gets on the first execution, but consecutive executions will look like the above.

As you see, there are new execution plan steps, "TABLE ACCESS BY INDEX ROWID READ OPTIM" and "INDEX UNIQUE SCAN READ OPTIM"; these steps tell us that a hash index is used to retrieve the data, as with key-value pairs. There are no "consistent gets". This is amazing: almost nothing is read for this query, and that should make it much faster. Does it? Well, I couldn't see that, not as much as I expected at least. Actually, I found different blog posts mentioning fast lookup tables, but none of them had done a performance test.

So, I wanted to test the speed, but there is a problem: for now there are many limitations on fast lookup tables, and one of them is that you cannot use them via PL/SQL. This means I can't create a simple PL/SQL procedure that runs a query 100,000 times and compare timings. To do the test, I wrote a small Python script.

Warning: I am not a Python developer; I just know enough Python to solve my small problems. The code I share below probably has many bugs, but I used it only for this test.

Here is the Python code I used:
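(The original script is not reproduced here; below is a minimal sketch of the same idea — 9 threads, each running the lookup 100,000 times while the id cycles between 1 and 10,000. The driver (python-oracledb), connect string and credentials are assumptions, not the exact code.)

import threading
import time

import oracledb  # assumption: the python-oracledb driver; cx_Oracle works the same way

DSN = "localhost/orcl"            # illustrative connect string
USER, PASSWORD = "test", "test"   # illustrative credentials
RUNS = 100_000

def worker(thread_no):
    # each thread opens its own connection and runs the lookup RUNS times
    conn = oracledb.connect(user=USER, password=PASSWORD, dsn=DSN)
    cur = conn.cursor()
    start = time.time()
    for i in range(RUNS):
        myid = (i % 10_000) + 1   # id cycles between 1 and 10000
        cur.execute("select * from tmp_optimize where id = :myid", myid=myid)
        cur.fetchall()
    print(f"thread {thread_no}: {time.time() - start:.2f} seconds")
    conn.close()

threads = [threading.Thread(target=worker, args=(n,)) for n in range(9)]
for t in threads:
    t.start()
for t in threads:
    t.join()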

After changing the code many times, I used this final version. It creates 9 threads, and every thread runs the query "select * from tmp_optimize where id = :myid" 100,000 times; the myid bind value cycles between 1 and 10,000. Each thread prints the total execution time of its 100,000 runs. Here is the result:

The timing varies between 16.0 and 16.1 seconds for the fast lookup table. How about a normal table with a normal primary key?

I used the same Python code, just changed the table name from "tmp_optimize" to "tmp_not_optimize", and the result looks like this:

It is between 17.1 and 17.2 seconds, so there is about a 6% performance gain (give or take).

I am not sure whether 6% is a good gain or not, because to use fast lookup tables actively we need to set aside a serious amount of memory. Also, there are a few limitations: they cannot be used with partitioned or compressed tables, and they cannot be used in PL/SQL (not yet, anyway). Only equality conditions on the primary key can take advantage of fast lookup.

During my tests I didn't check wait events. I expect fewer latch waits on the fast lookup table, since no consistent gets occur. Maybe there is something I missed, and I will do more tests on these tables (I haven't even tested "memoptimize for write" (fast ingest) yet).

wish you all healthy days.

How to Restore Dropped PDB

Hi,

It seems easy, right? Well, not that easy, but not that hard either. If you drop a PDB in a CDB, you cannot restore it using the current database, because when you drop the PDB, all information about it is lost. So, what is the solution? You need a duplicate database recovered to a point in time.

I have a CDB named CDB and had a pluggable database named PDB1. This is a 21c database and the installation of CDB is pretty standard.

db_create_file_dest     = /u01/app/oracle/oradata
db_recovery_file_dest = /u01/app/oracle/fast_recovery_area

Let's say I dropped it by mistake and need to recover it:

I created the PDB and created a table in it. I also noted the time, because it will be required while restoring the PDB. Now, let's drop it by mistake:

I won't add the outputs for now. OK, now I have lost my PDB. To restore it, we must create a duplicate database recovered until a specific time. Step by step:

1- Create a new pfile for the duplicate database (let's call our new DB "dup").

2- create password file:

3- startup the instance

4- Use RMAN and create the duplicate database (a rough sketch of all four steps is shown below).
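A rough sketch of the whole flow (file names, the password and the timestamp are illustrative; UNTIL TIME needs backup-based duplication, so this assumes backups of cdb exist):

# 1) minimal pfile for the auxiliary instance: $ORACLE_HOME/dbs/initdup.ora
#      db_name=dup
#      db_create_file_dest=/u01/app/oracle/oradata
#      db_recovery_file_dest=/u01/app/oracle/fast_recovery_area
#      db_recovery_file_dest_size=20G
# 2) password file (password is illustrative)
orapwd file=$ORACLE_HOME/dbs/orapwdup password=MyPassword_123
# 3) start the auxiliary instance in nomount
export ORACLE_SID=dup
sqlplus / as sysdba <<< "startup nomount"
# 4) duplicate until the time just before the drop (timestamp is illustrative)
rman target sys/MyPassword_123@cdb auxiliary /
RMAN> duplicate target database to dup
        until time "to_date('2022-04-01 10:00:00','yyyy-mm-dd hh24:mi:ss')";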

You can get different error messages at this point, but they all contain enough information to solve the problem. If you get this error:

RMAN-05501: aborting duplication of target database
RMAN-05657: There are multiple database IDs present.

then you must set the DBID before running the DUPLICATE DATABASE command.

5- After the DUPLICATE DATABASE command completes (which will take a while), your new database called "dup" will be ready and in mount mode.

At this point, if you query the PDBs (containers), a strange name will appear: "_###_UNKNOWN_PDB_#_4" (4 is the PDB number, so it might be different on your system). After opening the database with RESETLOGS (this is a point-in-time recovery), it will be converted to the correct name.

Here it is. After this point we have two container databases on the server, cdb and dup, so we can simply unplug pdb1 from dup and plug it into CDB again.

I use "MOVE" while plugging it into the CDB database, because I want its data files moved to the db_create_file_dest location of the CDB database.
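The unplug/plug itself might look like this (a sketch; the XML path is illustrative):

-- on dup
alter pluggable database pdb1 close;
alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';  -- path is illustrative
-- on cdb
create pluggable database pdb1 using '/home/oracle/pdb1.xml' move;
alter pluggable database pdb1 open;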

That's it. I find this solution a little tiresome and I believe future releases will have easier solutions, but still we can complete what we need with a few commands.

thanks for reading, I hope this helps.

wish you all healthy days.

Hierarchical Blocking Session List

Hi,

Today I needed to list all blocking sessions hierarchically. As you know, all information about sessions (and blocking sessions) is in the gv$session view (or just v$session if you are using a single instance). I wanted to see who is blocking whom, with the blocker at the top. You can write a simple CONNECT BY (or WITH clause) query, but that lists the blockers at the bottom, so I developed it a little further. If I remember correctly there was a small script under rdbms/admin, but it was creating some tables etc.; I just wanted to get this with a simple SQL statement.

Here is the SQL:
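(The original query is not reproduced here, but a sketch built along the same lines — a "blockers" source, a hierarchy on top of it, and only sessions that are not blocked themselves as roots — could look like this; the column choices are illustrative:)

with blockers as (
  -- sessions that are currently blocking someone
  select distinct blocking_instance inst_id, blocking_session sid
  from   gv$session
  where  blocking_session is not null
),
blocker_hierarchy as (
  -- walk down from the top-level blockers (blockers that are not blocked themselves)
  select s.inst_id, s.sid, s.serial#, s.username, s.sql_id, level lvl
  from   gv$session s
  start with (s.inst_id, s.sid) in (select inst_id, sid from blockers)
         and s.blocking_session is null
  connect by s.blocking_instance = prior s.inst_id
         and s.blocking_session  = prior s.sid
)
select lpad(' ', (h.lvl - 1) * 4) || h.sid as sid_tree,
       h.inst_id, h.serial#, h.username,
       substr(a.sql_text, 1, 40) as current_sql
from   blocker_hierarchy h
       left join gv$sqlarea a on a.inst_id = h.inst_id and a.sql_id = h.sql_id;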

First I get all the blockers' SIDs (the "blockers" data source in the query), then I use them to generate the hierarchy (the "blocker_hierarcy" data source in the query), and then I remove some blockers from the hierarchy because they are also blocked by other sessions and will be listed under their own blockers.

For example, let's say SID 10 is blocking two different sessions (SIDs 20 and 30) and SID 30 is also blocking another session (SID 40). The query will produce a result like this:

You can see the hierarchy in a visual way, plus the first 40 characters of each session's currently running SQL statement.

Hope this will be useful for you.

wish you healthy days.


Enable Unified Audit on Linux and Windows

Hi,

As you know, since 12c Oracle has had Unified Auditing, which has many more capabilities than standard auditing: audit policies, audit conditions, top-level auditing etc. By default Oracle uses "mixed" mode, which allows you to run unified auditing commands alongside standard auditing. You can check whether you are using pure unified auditing:
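For example:

select value from v$option where parameter = 'Unified Auditing';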

If the result is FALSE, you are not using pure unified auditing; you are using either mixed mode or standard auditing. Unfortunately, enabling unified auditing is not as easy as changing a parameter. On Linux:

Shut down all Oracle components (database, listener etc.), then relink the binaries:
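cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle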

then start database and listener.

On Windows OS:

Shut down all Oracle components (database, listener etc.), BUT on Windows you must stop and start the Oracle services; shutting down and starting the database using sqlplus will not work. Open services.msc, find the Oracle database and listener services, and stop them.

After stopping the services, rename the %ORACLE_HOME%/bin/orauniaud19.dll.dbl file to %ORACLE_HOME%/bin/orauniaud19.dll. The 19 in the DLL name is your version; if you are using 12c, you must rename the orauniaud12.dll.dbl (or orauniaud12.dll.option) file.

Then start the Windows services. You can check the V$OPTION view as I showed above to see whether unified auditing is enabled.

If you are not using unified audit policies by now, you should start using them; they will probably be the only audit feature in the future. They are also much easier than standard auditing, and you have many more ways to create audit rows. For example, if you are using an APEX web application (or any other), your application will log on to the database with the same user; if you want to audit only a few of your users, you can use SYS_CONTEXT values like the client identifier and audit only the sessions that have specific client identifier info. You can also create conditions to audit specific database users. One of the most important things is that the new unified audit table is tamper-proof: you cannot run DML on it, so your audit data will be safe (unfortunately, you can modify AUD$, the standard auditing base table). So, don't be late.

wish you healthy days, don’t forget to get your vaccination.

Blockchain vs Immutable tables

Hi,

As you know, 21c is here for Linux and some of the new features are (as always) pretty exciting. Two of them are Blockchain and Immutable tables. Let's explain and test them.

First of all, these are new features in 21c, but they have been backported to 19c: as of 19.10 you can use BLOCKCHAIN tables and as of 19.11 IMMUTABLE tables. I will do my tests on 19.12.

If you are familiar with Bitcoin or any other altcoin, then you probably know what a blockchain is. Basically, it is a distributed ledger. Every transaction is a "block", and each block contains information about the transaction, like sender, receiver, amount etc., and also a hash value of the previous transaction (computed with some hash function). That way, every new transaction is added to the end of the "chain", and that is what creates the chain. This ensures that the chain is unbreakable: you cannot tamper with old records, because that would break the chain (the hash values won't match the next blocks).

So, Oracle uses this technology to create a new table type called the Blockchain table. Every row inserted into this table is stored with some additional information, like the current timestamp and, most importantly, the previous row's hash value! The hash operations are done at COMMIT time, not before. That way Oracle guarantees that every row is part of the chain and you cannot tamper with those rows; if you do, the chain is broken and cannot be verified. Also, Oracle won't let you delete or update those rows using any DML statement like DELETE, UPDATE, MERGE etc.

Immutable tables are pretty similar to Blockchain tables: you cannot update them using any DML either. The difference is that rows in an Immutable table are not linked to each other; rows do not store any hash value from previous rows.

Both of them are designed to create "unchangeable", insert-only tables. If you think about it, this is a real necessity, especially for auditing. For example, if you store log information for credit card operations, you don't want anyone to change it, and you want to be sure that those rows are intact and unchanged during an investigation.

Of course, you usually only need to keep table data for a specific amount of time, like 5 years; no one can store it forever, so at some point you must be able to delete old rows. This is something you define while creating those tables. Also, even if I am not able to delete the rows, what would happen if someone dropped the table? This, too, is forbidden, and again you can define a drop policy while creating the table. Let's start with a blockchain table:
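A sketch of such a create statement (the table name and columns are illustrative; the retention clauses are the ones discussed below):

create blockchain table bc_test   -- table name and columns are illustrative
(
  id     number,
  amount number
)
no drop until 31 days idle
no delete until 30 days after insert
hashing using "SHA2_512" version "v1";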

You need the BLOCKCHAIN keyword (or IMMUTABLE for immutable tables) in the CREATE statement.

"NO DROP UNTIL 31 DAYS IDLE" defines when you can drop this table: only if there have been no actions on the table for 31 days. By the way, right after creating the table, if you haven't inserted any rows you can drop it immediately. You can use just NO DROP to disable dropping the table entirely (the only way then is dropping the database).

"NO DELETE UNTIL 30 DAYS AFTER INSERT" defines when you can delete "old" rows; in my example you can delete a row 30 days after its insertion. Again, you can use NO DELETE LOCKED to disable deleting from the table.

The hashing part is the default, and you have to write it to define the blockchain structure.

Let's insert a row and try to delete or update it:
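For example:

insert into bc_test values (1, 100);
commit;

delete from bc_test where id = 1;   -- fails: DML changes are not allowed on a blockchain table
update bc_test set amount = 0;      -- fails as well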

As you can see, modifying it is forbidden. You cannot even delete rows that have passed the retention time you defined (NO DELETE UNTIL...) with a DELETE statement; you must use the DBMS_BLOCKCHAIN_TABLE.DELETE_EXPIRED_ROWS procedure to delete rows beyond the retention.

This is the BLOCKCHAIN table. Of course, there are many more properties, like user signatures etc., but I will leave it here for now. For IMMUTABLE tables, almost everything is the same:
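For example:

create immutable table imm_test   -- table name and columns are illustrative
(
  id     number,
  amount number
)
no drop until 31 days idle
no delete until 30 days after insert;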

Just change the "blockchain" keyword to "immutable" and remove the hash clause. Pretty straightforward. As expected, you cannot delete or update rows; if you want to delete old rows, you must use the DBMS_IMMUTABLE_TABLE.DELETE_EXPIRED_ROWS procedure.

A common feature of both tables is that they cannot be modified by privileged DB users, not even SYS or DB Vault admins; no one in the database can change those rows. The basic difference between the two is that a Blockchain table cryptographically links each new row to the last one, while an Immutable table has nothing like that. What does this provide? Immutable tables cannot be modified via the Oracle Database software, but what about bypassing the DB? If someone modifies a data file using operating system tools, your table data is modified and you cannot be aware of it. Blockchain tables, on the other hand, have a verify function: since every row is linked to the previous one by a hash value, if you modify the data using any tool, like OS tools, verification will fail, because if a row changes then its hash value changes, and that hash value is stored in the next row.

Then why do immutable tables exist, since blockchain tables are more secure? You might guess that creating hash values and storing them in the next row causes some extra work, which means performance overhead.

Let's make a test (with a flaw on purpose):
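A sketch of the kind of test I ran (the row count is illustrative; note that there is deliberately no COMMIT here, which is the flaw mentioned above):

set serveroutput on
declare
  t number;
begin
  t := dbms_utility.get_time;
  -- bulk insert
  insert into bc_test select level, level * 10 from dual connect by level <= 10000;
  dbms_output.put_line('blockchain bulk insert: ' || (dbms_utility.get_time - t) / 100 || ' sec');

  t := dbms_utility.get_time;
  -- row by row insert
  for i in 1 .. 10000 loop
    insert into bc_test values (i, i * 10);
  end loop;
  dbms_output.put_line('blockchain row by row : ' || (dbms_utility.get_time - t) / 100 || ' sec');
end;
/
-- repeat the same block against the immutable table (imm_test) to compare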

I inserted rows using bulk insert and row by row into both tables, and here are the results. Huh! The blockchain table is almost as fast as the immutable table? Actually no! As I said earlier, the hashing operations are done at COMMIT time, not before, on the blockchain table, and in my code there is no commit! That's why we see the same performance. How about adding commits before printing the time:

Here it is! At worst, immutable tables are more than 6 times faster (row-by-row insert) and at best almost 40 times faster than blockchain tables. So, choose the table type carefully; if immutable tables meet the need, use them.

This was just a glance at the new table types. I will do more tests and come back with new posts.

wish you healthy days.

Edit: I didn't mention the COMPATIBLE parameter. It should be at least 19.10 to create a blockchain table and 19.11 for an immutable table.

ORA-00600 [kzan_open_osfile:ksfdread] while querying/purging unified audit

Hi everyone,

Lately, I hit an ORA-600 while querying unified_audit_trail on a 19.8 Oracle database, and the same thing happens if you try to purge the audit data. The exact error message is:

ORA-00600: internal error code, arguments: [kzan_open_osfile:ksfdread], [], [], [], [], [], [], [], [], [], [], []

Normally I check support first, but this time the ORA-600 argument seemed understandable to me: I could see that Oracle had a problem while reading a file, so I checked the alert.log.

P.S.: I changed the DB names to "orcl".

I checked the trace file mentioned in alert.log (/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_5714.trc). After the trace file header information, here is the text belonging to the error:

If you look closely, you can see that Oracle is trying to read the /u01/app/oracle/audit/orcl/.ora_audit_01267.bin file, then gets a file I/O error, and at the very end the error is "no data found". So, there is definitely something about this file. When I checked it, I realized the file is 0 bytes:

I figured Oracle was trying to read this file but, since it is an empty audit file, got a "no data found" error. I deleted the file and all the problems were gone 🙂

hope this is helpful.

wish you healthy days.


X Forwarding for an Oracle DBA

Hi,

It's been a long time. I want to write something about X forwarding. You might sometimes feel more comfortable using a GUI instead of silent installations, and if you are connecting to a Linux server remotely, you must enable X forwarding to use the GUIs of Oracle products like dbca, netca etc.

First of all, use MobaXterm to eliminate many problems. Moba sets some parameters for you and it also shows whether X forwarding is enabled or disabled for the session:

Just using MobaXterm is not enough, of course:

1- Create the .Xauthority file under your Linux user's home directory:

touch ~/.Xauthority

2- Check in the ssh config whether the X11Forwarding parameter is set to yes:

sudo vim /etc/ssh/sshd_config

It must be yes, of course.

3- Install the xauth packages:

sudo yum install xorg-x11-xauth xorg-x11-utils xorg-x11-fonts-* xorg-x11-font-utils xorg-x11-fonts-Type1

(fonts are not mandatory)

4- Create a magic cookie:
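One common way to do it (a sketch; the display number is illustrative):

# ":0" is an illustrative display; "." means the MIT-MAGIC-COOKIE-1 protocol
xauth add $(hostname):0 . $(mcookie)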

That is pretty much it. After these steps, MobaXterm should show X11 forwarding as enabled.

So far so good, but there is a specific scenario. Many companies use VPN accounts now, and you log on to the servers with your VPN account, but the Oracle installation could be under another user, like "oracle", and you might not have the password of the oracle user, since it is also a directory user managed by a different team at the hosting company (or your own company). In that case, you are probably using the "sudo su - oracle" command to switch to the oracle user, and if you do that, X11 forwarding will not work! As a quick hack you can do this:

my vpn user: mustafa

oracle installation user: oracle

Log in as mustafa, then copy the .Xauthority file to the oracle user's home directory and change the owner of the file to oracle:
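For example (the group name is illustrative):

sudo cp /home/mustafa/.Xauthority /home/oracle/.Xauthority
sudo chown oracle:oinstall /home/oracle/.Xauthority   # oinstall is an illustrative group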

You can put these into an sh file and run it; you must run these commands with sudo, of course.

The last step is the $DISPLAY variable: echo $DISPLAY as the mustafa user, and after switching to the oracle user, set the same value:
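For example (the display value is whatever echo printed for the mustafa user; localhost:10.0 is illustrative):

# as mustafa
echo $DISPLAY              # e.g. localhost:10.0
# after sudo su - oracle
export DISPLAY=localhost:10.0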

That's it. You should now be able to use X11 forwarding.

hope this helps.

Note: in my country there are huge fires along almost the entire coastline. Many beautiful towns and villages have burned, people have died, dozens of hills have turned to ash and thousands of animals have died. I hope it ends soon. Climate change is real and it is here; every human should be aware of this.