NON-CDB to PDB Migration Horror

Hello everyone,

Recently I migrated a non-cdb to a pdb for a client database, and it almost became a horror movie for me for over 2 weeks. Before I start, if you want to know how to migrate from a non-cdb to a pdb, you can check one of my previous posts: http://mustafakalayci.me/2022/03/25/3-ways-to-migrate-a-non-cdb-database-to-a-pdb/

Basically this is not a complicated or hard task; especially if you are using the autoupgrade tool, it does everything for you. So, why did this task become a horror movie? First of all, I am not an APEX guy (which I should be), and since there was a component in this problem that was unknown to me, I was led in the wrong direction. What happened? I created a new container database on the server and then used autoupgrade to migrate that non-cdb to a pdb. Pretty straightforward and easy. Then the developer team told me that they could not log in to ORDS! Here is where it started. When they tried to connect to the ORDS URL, it just said “Contact your application administrator. Details about this incident are available via debug id” and ended with some number. After a quick search I found the tables where debug messages are recorded, WWV_FLOW_DEBUG_MESSAGES and WWV_FLOW_DEBUG_MESSAGES2, and checked them for error messages. New rows were being inserted almost constantly (since there are some integrations), mostly error messages like this:
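For reference, this is roughly how I was pulling the latest messages (a sketch; the APEX_220100 schema name is only a placeholder for whatever your APEX version uses, and I am assuming the standard PAGE_VIEW_ID, MESSAGE_TIMESTAMP and MESSAGE columns):

    select page_view_id, message_timestamp, message
      from apex_220100.wwv_flow_debug_messages2  -- and wwv_flow_debug_messages
     order by message_timestamp desc
     fetch first 50 rows only;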

These are different error messages with the same debug id (PAGE_VIEW_ID). The errors point at the HTMLDB_UTIL package, but the package name changed from time to time; the same errors were raised for WWV_APEX_JSON and many others too.

So, first things first, I checked for invalid objects and there weren’t any. I actually found the target procedure in HTMLDB_UTIL which raised the error, and when I ran it in SQL Developer, it worked properly. I also saw a few “ORA-04068: existing state of packages has been discarded” errors too. Well, this should have led me to the solution but…

From that moment, I thought something was wrong in APEX, since these errors were only raised for APEX code, and one of the developer team leads had told me that ORDS must be reinstalled if the DBID has changed! That was a problem, because during the non-cdb to pdb conversion the new pdb gets a new DBID and (as far as I know) it cannot be changed. What did I do? I started to learn how to reinstall ORDS! It was not that complicated and I got it done. I reinstalled ORDS, but the errors kept coming.

Before migrations like this, I always test on virtual machines using the same environment as much as possible. In this case, I didn’t have a RedHat machine and I did the tests on Oracle Linux. As you can guess, there were no errors at all.

In one of my tries, I recompiled the HTMLDB_UTIL package with “alter package … compile” (and the body as well), but the compilation completed with some “warnings”. I opened the source code and re-ran it (create or replace …), and that caused a brand new error which I hadn’t seen before:
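The recompilation itself was just the standard commands, something like this (the APEX schema name, APEX_210200 here, is only a placeholder for your version’s schema):

    alter package apex_210200.htmldb_util compile;
    alter package apex_210200.htmldb_util compile body;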

I checked the libraries to see if this was an existing library, but it wasn’t. I thought there might be a problem with the WWV_FLOW_PAGE package, so I tried to recompile it with the alter package command, but it raised similar errors for some other packages, and this went on and on. Also, whenever I compiled a package in the APEX schema, some other APEX packages became invalid too. Since I was frustrated and really furious, I started to open every package in the error messages and re-run their source code one by one. Finally, it worked! All packages became VALID and the ORDS URL was working. I was able to log in to the APEX builders etc.

I thought that I had solved it, but it didn’t last long. After making a change in an application package which uses some APEX packages, suddenly all the errors I had fought with came back: run-time errors, invalid objects etc.

By the way, I also tried the dblink and DBMS_PDB methods for the migration instead of autoupgrade, and nothing worked.

Then these run-time errors got my attention, I searched for them, and there are a couple of MOS documents. To recompile invalid objects we use the utlrp.sql file, as you know, but there is also another file that does the opposite: utlirp.sql (there is an “i” letter after utl). This script invalidates all objects in the database. So, what the heck, I gave it a try:

First close the pdb and open it in migrate (upgrade) mode:
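A sketch of the steps (MYPDB is a placeholder for your pdb name):

    alter pluggable database mypdb close immediate;
    alter pluggable database mypdb open upgrade;
    alter session set container = mypdb;
    @?/rdbms/admin/utlirp.sql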

Now everything is invalidated, so recompile them:
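Again as a sketch, reopen the pdb normally and run utlrp:

    alter session set container = cdb$root;
    alter pluggable database mypdb close immediate;
    alter pluggable database mypdb open;
    alter session set container = mypdb;
    @?/rdbms/admin/utlrp.sql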

And yes, that’s it. That solved the problem. If I had focused on the run-time errors, I would have found the solution days earlier, but not knowing APEX made me look in the wrong place.

By the way, those errors sometimes come hours after the pdb migration. So, I think I will use utlirp and then utlrp for every non-cdb to pdb migration from now on.

I hope this helps if you encounter such problems, and that you don’t suffer as I did. Thanks for reading.

Wish you all healthy and beautiful days.

Old Obsolete Backup is not Deleted

Hi,

Happy new year (after almost a month). I was quite busy and too lazy to write a new blog post, but today I found the energy.

What I realized recently is a little bit shocking to me, because I had no idea! Maybe this is just ignorance on my part, but I have never read anything about this (and I have read the entire Oracle backup and recovery documentation) and never heard of it. Here is the problem: the “report obsolete” and “delete obsolete” commands don’t report or delete an old backup even if it is beyond the retention period.

At first, I thought I was probably missing something about the retention period, or maybe a datafile was not backed up correctly (which is not much of a possibility). After debugging the RMAN report obsolete command, some debug logs caught my attention, and I think I found the issue, but I have no idea whether this is a bug or a feature. I will demonstrate it with a single non-cdb database instance to keep things simple. This is a 19.16 database in a virtual machine, and it happens on both Linux and Windows. It is an empty database; here is the schema report:
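    RMAN> report schema;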

Let’s create a dummy tablespace first:
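Something like this (I am assuming OMF here, and the size is arbitrary; without OMF you would specify a datafile name):

    SQL> create tablespace tbs_test datafile size 100m;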

The database has no backup at all at this point; let’s take one:
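A plain full backup is enough for this test:

    RMAN> backup database;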

Our new test tablespace TBS_TEST is backed up along with the database. Now, my retention policy is the default, which is REDUNDANCY 1:
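You can check it with:

    RMAN> show retention policy;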

That means only the last backup of everything (database, tablespace, datafile) is enough, and all previous backups are “obsolete”. Now, before taking a second backup, let’s drop TBS_TEST:
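    SQL> drop tablespace tbs_test including contents and datafiles;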

and take a new backup:
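    RMAN> backup database;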

Good, I have two backups now and TBS_TEST is not included in the last one. Since my retention policy is “REDUNDANCY 1”, the previous backup should be “obsolete”. List the backups:
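    RMAN> list backup;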

As you see, the datafile 5 row in the first backup has no Datafile Name anymore because it is dropped. Now, let’s report the obsolete backups:
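    RMAN> report obsolete;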

Huh! Only the autobackup is obsolete, and the report command doesn’t mention the previous backup! Please remember, my retention is REDUNDANCY 1, so only the last backup should be kept, but RMAN is keeping the previous backup, the one with the dropped tablespace/datafile.

If I keep taking new backups and running report obsolete (or delete obsolete), it never reports the first backup, only the ones after it.

I tried this with a 19.3 database and it didn’t happen! It marked the old backup as obsolete immediately. So, as I said, I don’t know if this is my ignorance, a lack of knowledge, just missing something, or a bug, but on different systems (virtual machines, Linux and Windows servers) with 19.16, I see this behavior. Also, it happens whether it is a container database or a non-cdb. You might want to check your systems for backups older than your retention period.

Thanks for reading, wish you all a happy, healthy new year. My next post will be about an ORA-00600 which has been driving me crazy for a few weeks.

EDIT: I should add one more thing: per the retention policy requirements, all archivelogs between the old “obsolete” backup and the current one are deleted! So, this old backup file is completely useless.

APPLY with status WITH ERRORS in the PDB

Yesterday, I was doing some tests on noncdb to pdb conversions, and during one of these tests I got a strange error. By the way, I am doing those tests because of a side effect of the noncdb to pdb conversion; I have already created an SR about it and will write a post here later.

Here is the environment in my test case: Linux 8.6 on VirtualBox, Oracle 19c (19.16 RU), and two databases in the same home, orcl and cdb19. orcl is the noncdb database and cdb19 is the container database. I created both databases via dbca at the same time, and dbca completed successfully (at least I thought so). Both databases were up and running. So I started the noncdb to pdb conversion via the autoupgrade tool (latest version, 22.5). I also tried the db link and unplug/plug methods too.

After starting autoupgrade, it completed successfully, but when I logged in to the database and opened the new pdb, it raised some errors. In PDB_PLUG_IN_VIOLATIONS, I saw the “APPLY with status WITH ERRORS in the PDB” error. For some reason, datapatch didn’t run correctly after the dbca operation for the orcl (noncdb) database. I ran datapatch for the orcl database again, it did some patching, and I re-ran autoupgrade. After that, the operation completed and the pdb was created successfully.
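Running datapatch is simple; with the environment set for the noncdb:

    cd $ORACLE_HOME/OPatch
    ./datapatch -verbose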

By the way, my main problem still exists: this migrated pdb has its own data dictionary (which it shouldn’t, because the data dictionary is in cdb$root). Anyway, if you ever encounter this error, you can try running datapatch again.

Wish you all healthy, happy days.

catcon.pl cheat sheet

Hello,

Since Oracle now uses the container database architecture exclusively, you should be familiar with the “catcon.pl” perl file. Let’s say you have a container database with 10 different PDBs and you need to run a script file on all of those pdbs (maybe even in root and seed too); what you need is catcon.pl.

catcon.pl will automatically execute your script file on all or specified containers. Since it is a perl file, it must be called with the perl binary (perl on Linux, perl.exe on Windows), which you can find under the $ORACLE_HOME/perl/bin directory; catcon.pl itself is under $ORACLE_HOME/rdbms/admin.

So, you have a sql file called myfile.sql under the /home/oracle/scripts directory. Let’s see how we can execute it in all pdbs:
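    $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d /home/oracle/scripts -b MyScript myfile.sql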

That is it. The myfile.sql file will be run in all containers (including root and seed) by the command above. Let’s dissect the command:

$ORACLE_HOME/perl/bin/perl => PERL binary to run catcon.pl
$ORACLE_HOME/rdbms/admin/catcon.pl => catcon.pl path
-d /home/oracle/scripts => directory where script is located.
-b MyScript => Prefix for the log files so you can separate them from other scripts logs.
myfile.sql => actual script name

The actual script file must be the last parameter. It is that easy. In this example, ORACLE_HOME and ORACLE_SID should be set before you run the command; otherwise you cannot connect to the database. This is a local connection, and by default it is a SYSDBA connection, so whatever you run will be run as the SYS user.

To add some more functionality:

  1. -c : list of containers where you want to execute the script:
    ….catcon.pl -c ‘SALES PDB1 TEST’ => execute the script only in the SALES, PDB1 and TEST pdbs.
  2. -C : list of containers where you don’t want to execute the script.
  3. -l : log directory.
  4. -s : spool the output of the script files.
  5. -S : (capital S) don’t execute the script in root and seed. (Edit: this option also doesn’t set the “_oracle_script” parameter to true, so if you are creating a user, it won’t be an Oracle-maintained user.)
  6. -u : specify username/password.
  7. -z : ez-connect string.
  8. -n : number of workers. If you want to execute a script on 100 pdbs, you can increase the number of workers to run the script simultaneously on different pdbs. Default is 1.
  9. -e : set echo on.
  10. -r : run the script in reverse order, first in the pdbs and then in the cdb. Useful while removing an Oracle component like Multimedia (it is desupported).

Can we use catcon.pl to run a script on a remote server? Yes, with a combination of -u and -z:

Run the myfile.sql script under the /home/oracle/scripts directory on the CDBX database (on the mydbserver.mydomain server) and only on the specified pdbs (PDB1…PDB8), using the mustafa user, with 2 worker processes so the script runs on 2 pdbs at a time, writing log files under /home/oracle/logs:
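A sketch of that command (I am assuming the default listener port 1521 and spelling the pdb list out; as far as I know, catcon prompts for the password when it is not given with -u):

    $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
      -u mustafa -z mydbserver.mydomain:1521/CDBX \
      -c 'PDB1 PDB2 PDB3 PDB4 PDB5 PDB6 PDB7 PDB8' \
      -n 2 -l /home/oracle/logs -d /home/oracle/scripts -b MyScript myfile.sql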

catcon.pl has more parameters, but I just wrote about the ones I use most.

thanks for reading.

edit: I added the -r and -e options.

3 Ways to Migrate a Non-CDB Database to a PDB

Hi Everyone,

Time is fast and everything changes constantly. The multi-tenant architecture came many years ago, and starting with 21c, it is the only option! So, old “non-cdb” databases must be migrated/converted to a PDB (pluggable database). I must admit that I was lazy for many years and didn’t work much on learning new things, and now I am literally trying to learn everything. One of my first goals was the multi-tenant architecture, and now, since I feel confident enough about it, I will move my clients’ databases to it. Also, I must say, I love the multi-tenant architecture.

I won’t go into details about what multi-tenant or a pdb is in this post. I will cover only the migration/conversion.

So, I have been testing the migration of noncdb databases to pdbs for a long time, and I want to explain the 3 main methods to do that.

  • Plugging in a noncdb as pdb
  • Creating a pdb from noncdb over a database link (dblink)
  • Autoupgrade

Methods like export/import can be used, of course, but I don’t think that counts as a “migration”, so I won’t talk about them in this post. Let’s start then.

Environment

For all 3 methods the same environment is used: Oracle Linux 8.5 (VirtualBox), Oracle Database 19.14 Enterprise Edition. (I will note the 21c differences in this post.)

Noncdb Database SID        : orcl
Multi-tenant Database SID: cdb

Both databases are in the same home (they could be in different homes; that would change nothing).

I always use OMF, so I won’t be using the FILE_NAME_CONVERT parameter in my examples. If you don’t use OMF (I don’t know why you would do that), then you must use FILE_NAME_CONVERT and point it at the necessary directories.

I put around 4.5 GB of data in the orcl (noncdb) database, because the amount of data affects the completion time of the process.

By the way, I will suppress some outputs to keep it short and emphasize the real results.

Pre Steps

The steps I am about to show should (must, in my opinion) be done whatever method you are using to migrate your database. You must always check whether the noncdb database is applicable as a pdb. To do that, you basically use the simple pdb plug-in method. Of course, in this case we cannot “unplug” our noncdb database because it is not a pdb, so what?

As you know, you can plug in and unplug a PDB to/from a container database in the multi-tenant architecture; this is the coolest option, I think. Just as you plug in a pdb, you can plug in a noncdb as a PDB. While plugging in a pdb, we need an XML file that defines the properties and necessary information about the pdb, like compatibility, NLS info, data file locations etc. To plug in a noncdb, we need that file as well, and Oracle already provides a package to achieve that: DBMS_PDB. The DESCRIBE procedure in this package creates the XML file we need.

If you are going to use the PLUG IN method, put your source database (non-cdb) into a transactionally consistent state, which means open it in READ ONLY mode. The reason is that the XML file contains some SCN information, and if the db is up and running, the SCN will move; when you try to plug the database in, you will get an error that the data file SCNs and the config file (XML file) do not match. The other 2 methods don’t require this! I personally prefer it in the db link method too; I don’t want any process to keep doing something while converting a noncdb to a pdb.

For the plug-in method:

I connect to the “orcl” database and create the necessary xml file:
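A sketch of it (the XML file path is my choice):

    shutdown immediate
    startup open read only
    exec DBMS_PDB.DESCRIBE(pdb_descr_file => '/home/oracle/orcl.xml');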

Now we have an xml file that defines our noncdb database.

Then, check whether this database is compatible with our container database. Connect to the CDB database and check the xml file:
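Roughly like this (the file path matches the one used in DESCRIBE above):

    set serveroutput on
    declare
      l_compatible boolean;
    begin
      l_compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                        pdb_descr_file => '/home/oracle/orcl.xml',
                        pdb_name       => 'ORCL');
      dbms_output.put_line(case when l_compatible then 'compatible' else 'NOT compatible' end);
    end;
    /

    select name, cause, type, message, status
      from pdb_plug_in_violations
     order by time;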

DBMS_PDB has a function called “check_plug_compatibility”. This function reads the XML file and returns true or false. If true, you can plug this noncdb in as a pdb (maybe not so fast). If the result is false, you must check what is wrong. Even if the result of this function is true, you must check the “violations”. Every CDB database has a data dictionary view called PDB_PLUG_IN_VIOLATIONS. After running the “check_plug_compatibility” function, errors/warnings/infos are written to this view, and you must check them before plugging in the database.

If you see any ERROR in this view, you must not proceed until you solve the problem. You might see various issues here, like different database parameters; for example, open_cursors could be 500 on the noncdb but 300 on the cdb. Remember, the cdb parameter values will override the previous ones. (I will explain the “PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run” warning later.)

Some important points are the DB OPTIONS. Your databases might have different options. Let’s say your noncdb database has the SPATIAL option installed but it is not installed on the CDB; if so, then you must install the SPATIAL option before plugging in the database. How do you install options on a database? You can check my previous post about it: https://mustafakalayci.me/2022/03/03/how-to-install-oracle-database-options-like-spatial-label-security-etc-after-db-creation/

As a rule, your non-cdb database’s COMPATIBLE parameter value must be equal to or less than the CDB’s. If the values do not match, you will see a row in PDB_PLUG_IN_VIOLATIONS.

If you resolve a problem listed in PDB_PLUG_IN_VIOLATIONS, it will still be there. Even after solving a “warning” problem (like installing an option), it will be listed as PENDING, not RESOLVED; that requires opening the pluggable database.

As I said, these steps should be mandatory before migrating a non-cdb to a pdb. After checking the violations, you can proceed to the plug-in operation.

1- Plugging in a Noncdb as PDB

At this point we should have checked the xml file and solved the violation problems. Please remember, you should have created the XML file while the noncdb database was in READ ONLY mode!

Now we have the XML file, so we can plug the noncdb in as a pdb. Before doing that, close the source database (noncdb).

Then connect to the CDB and create a pluggable database via plug in:
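In its simplest form (COPY is the default mode anyway, as noted below):

    create pluggable database orcl using '/home/oracle/orcl.xml' copy;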

I don’t want to go into the details of creating a PDB via plug in, but as a summary: you can use a different PDB name; instead of orcl, you could use orcl_pdb for example. Also, the default mode for plugging in a pdb is COPY, which means it will copy the datafiles that orcl.xml points to (they are currently under the /u01/app/oracle/oradata/orcl directory). The PLUG IN method will copy them under the /u01/app/oracle/cdb directory (wherever db_create_file_dest is). This way, the noncdb database (orcl) will still be available (in a shutdown state).

It takes 10 seconds to complete on my laptop. As I said before, this database has 4.5 GB of extra dummy data; the total size is around 7.5 GB. The 10 seconds is the time to copy the datafiles to a new location. If you use MOVE instead of the COPY method, it takes much less time.
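The MOVE variant is the same statement with a different keyword:

    create pluggable database orcl using '/home/oracle/orcl.xml' move;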

As you see, it takes only 0.68 seconds. The size of the data doesn’t matter now, because the data files are MOVEd to another location, and this is a very fast operation; even if you have 1 TB of data, it will be completed in seconds. Also, if you are satisfied with where the datafiles currently reside, you can use the NOCOPY option too.

If you use the same name as the noncdb database, you must make sure that the noncdb is closed, because otherwise there will be 2 services with the same name.

You can use this method even if your databases are on different servers: just create the XML file on the noncdb database, then move the datafiles and the XML file to the new server; modify the XML file, because the file locations might have changed, so wherever you copied the data files, change the locations in the XML file to the new ones, and then run the create pluggable database command.

Always check the PDB_PLUG_IN_VIOLATIONS view after creating the PDB, because there might be new issues.

Now we have a new pdb called orcl:
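A quick check:

    SQL> show pdbs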

NOTE: Another amazing option we have is that we can create a PDB from a lower-version noncdb. For example, if you have a 12.2 noncdb database, just install 19c and follow the same steps. Cool, right? There is only one thing to do: before calling the noncdb_to_pdb.sql file, run the dbupgrade -c “ORCL” command in the shell. dbupgrade is kind of a shortcut for the catupgrd.sql file. This will upgrade your newly created pluggable database first.

As you see, it is in MOUNTED mode, which means it is closed, but we CANNOT open it yet on a 19c database. For 19c, we must connect to the newly created pdb and run the noncdb_to_pdb.sql script, which is under $ORACLE_HOME/rdbms/admin.

It will take a while. As I said, don’t open the pdb; set your container to the new pdb, then run the noncdb_to_pdb.sql file. The question mark means ORACLE_HOME in sqlplus, so I basically called @/u01/app/oracle/product/19c/dbhome_1/rdbms/admin/noncdb_to_pdb.sql
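In sqlplus, connected to the CDB as sysdba:

    alter session set container = orcl;
    @?/rdbms/admin/noncdb_to_pdb.sql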

21c UPDATE: After the plug-in (create pluggable database) command, you don’t need to run dbupgrade (if the noncdb is a lower db version) or the noncdb_to_pdb.sql script. When you run the ALTER PLUGGABLE DATABASE OPEN command, 21c will run them automatically, but the open command will, of course, take some time to complete.

That is it.

2- Create PDB from Non-CDB via DBLINK

This method is much easier for an admin but takes much more time to complete. The good thing about this method is that your source DB can stay open during the operation, and when the pdb creation is completed, it will apply the archivelogs of the noncdb database. So, the PDB will be transactionally consistent as of the END of the pdb creation.

Do the PRE STEPS first, as described above: create the XML file and check for violations. If everything is OK, you can proceed.

This method just requires a database link (dblink) from the CDB to the NONCDB. The dblink must be created in the cdb$root container. First, create a user on the noncdb database which will be used to connect through the dblink.

Remember, this user must have the resource, connect and create pluggable database privileges. After that, create the database link on the CDB:
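A sketch (MIG_USER, the password and the NONCDB_LINK name are my placeholders; the USING clause should point at your noncdb):

    -- on the noncdb (orcl):
    create user mig_user identified by a_password;
    grant connect, resource, create pluggable database to mig_user;

    -- on the cdb, in cdb$root:
    create database link noncdb_link
      connect to mig_user identified by a_password
      using 'orcl';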

From now on, we will create a pdb from the remote link:
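Something like this (the orcl_pdb name and the dblink name are my placeholders):

    create pluggable database orcl_pdb from NON$CDB@noncdb_link;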

It completed in 10 seconds. If the source database were bigger, it would take much more time. Since a non-cdb has no PDB name, we use NON$CDB.

As I explained in the previous method, for 19c, if you created the pdb from a lower version, run the dbupgrade command. After that, you must set the current container to the new pdb and run the noncdb_to_pdb.sql file.

If this is a 21c, you can simply open the pdb; 21c will do the job, but the open command will take some time.

I am not sure if this is supported (I haven’t checked yet, but test it of course); I used this method to create a PDB from a noncdb which was on a different server and a different OS, and it worked!

3- Autoupgrade

The new king of upgrades/conversions/migrations 🙂 Autoupgrade is an amazing tool and supports many options now. I have even just read on Mike Dietrich’s blog that version 2 was released while I was writing this post ( https://mikedietrichde.com/2022/03/25/autoupgrade-2-0-has-been-released-and-got-reuploaded/ ).

You can use autoupgrade to convert a noncdb database to a pdb. Autoupgrade will take care of everything, even the pre steps and post steps. It will even run the fixup scripts.

I would still do my “PRE STEPS” to check the operation.

autoupgrade is a jar file and requires Java to run, but no worries, everything you need is in the ORACLE_HOME. Before taking action, always download the latest version of the autoupgrade.jar file from https://support.oracle.com/epmos/faces/DocumentDisplay?id=2485457.1 ; it is a single jar file, and for 19c and above you put it under $ORACLE_HOME/rdbms/admin.

autoupgrade has different modes, and I won’t detail all of them here. What we use here are the ANALYZE and DEPLOY modes. ANALYZE mode will run all the steps, just skipping the actual operations, and will show you a result. After that, you can run it in DEPLOY mode, which will do the actual job. autoupgrade requires a config file to identify what to do and the necessary info. Here is my sample:
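A sketch of the config, reconstructed from the description below (the log directory and home paths are just my choices):

    global.autoupg_log_dir=/home/oracle/autoupgrade_logs
    upg1.sid=orcl
    upg1.source_home=/u01/app/oracle/product/19c/dbhome_1
    upg1.target_home=/u01/app/oracle/product/19c/dbhome_1
    upg1.target_cdb=cdb
    upg1.target_pdb_name=my_pdb
    upg1.start_time=NOW
    upg1.timezone_upg=yes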

It is quite self-explanatory. The source is upg1.sid, which is the noncdb, and upg1.target_cdb is cdb; from these, autoupgrade understands that this is a noncdb to pdb conversion. You can give your pdb a new name (my_pdb in my example), define a start time, set whether a timezone upgrade will be performed, etc. I saved this file as /home/oracle/noncdb_to_pdb.cfg.

EDIT: I forgot to mention something. By default, autoupgrade will remove your noncdb! It creates the PDB with the MOVE option, so the datafiles of the noncdb are gone. If you want to prevent this, use KEEP_SOURCE_PDB=YES in your config file, like:

upg1.keep_source_pdb=yes

Before Running Autoupgrade: This is an important note. It might already be solved, because the autoupgrade team is really fast, but I had some problems while doing this because of the glogin.sql file. For details: https://mustafakalayci.me/2022/03/20/666/ Thanks to Mike Dietrich and Joseph Errede, who took care of it really fast.

Just run autoupgrade in analyze mode first:
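Using the config file saved above:

    java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config /home/oracle/noncdb_to_pdb.cfg -mode analyze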

Then check the result and run it in DEPLOY mode. Deploy mode will start a CLI (command line interface), so you can watch the steps by typing the “lsj” command:
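The deploy run is the same command with a different mode:

    java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config /home/oracle/noncdb_to_pdb.cfg -mode deploy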

When the operation is over, it will automatically exit the CLI and print the log file locations. As you see in my example, it already ran the noncdb_to_pdb.sql file. Yes, all of those steps are automatic in autoupgrade.

Here is the end:

Here is the HTML file screenshot:

That’s it.

STOPPED Job While Migrating Non-CDB to PDB via Autoupgrade

Hi,

Lately, I started to learn about autoupgrade, which is the new upgrade tool for almost anything. Many thanks to Mike Dietrich for his amazing blog posts ( https://mikedietrichde.com ). His blog is a real treasure.

I am about to migrate some non-cdbs to pdbs, and before doing that I wanted to learn my options for how to achieve it. I will write a blog post about three ways of migrating a non-cdb to a pdb soon. One of them, of course, is autoupgrade, and I spent almost the last 10 days trying to make it work! All my efforts ended with a STOPPED autoupgrade job at the “compatibility check” step, and it drove me crazy. You must know that over those 10 days I tried tens of different combinations, and every one of them failed! Finally (just by a mistake) I realized what had happened. Let me explain:

I don’t want to go into the details of the autoupgrade non-cdb to pdb operation in this post, so I will just share my config file and the command to run autoupgrade.

Here is the test case:

Oracle Database EE 19.14 on Oracle Linux 8 (virtual machine on a Windows 11 host).

The latest autoupgrade.jar is downloaded from Oracle Support.

The Oracle home is the same for both the cdb (SID: cdb) and the non-cdb (SID: orcl) databases.

Here is my autoupgrade config file:
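It looked essentially like this (a reconstructed sketch; the paths are my choices):

    global.autoupg_log_dir=/home/oracle/autoupgrade_logs
    upg1.sid=orcl
    upg1.source_home=/u01/app/oracle/product/19c/dbhome_1
    upg1.target_home=/u01/app/oracle/product/19c/dbhome_1
    upg1.target_cdb=cdb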

Then I ran autoupgrade to migrate the orcl db into the CDB as a pdb:
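Something like this (the config path is wherever you saved the file above; mycfg.cfg is a placeholder):

    java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config /home/oracle/mycfg.cfg -mode deploy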

I had read so many documents and blog posts about it, and every time I ran this command, it started the autoupgrade Command Line Interface (CLI), and when I queried my job with the lsj command, I got this result:

This was instant. I typed lsj immediately when I reached the CLI, and I saw the upgrade job was STOPPED (the status is RUNNING but the operation is stopped and the job is not working). So I checked the log files, of course, and here is the error:

So an ArrayIndexOutOfBoundsException is thrown. Also, in the noncdb_to_pdb_orcl.log file I got this:

This made me think that there might be a compatibility problem or a violation, but actually there was none! I spent days trying to understand this 🙂

After tens of autoupgrade tests, reading hundreds of log files, with some luck and coincidence (and my hard work, of course), something caught my attention: in the log files I saw some familiar code of mine! Where did I use that code, and why was it in the autoupgrade logs? It was my “glogin.sql” file code. For those who are not familiar with glogin.sql, it is an initialization script for the sqlplus tool, in the $ORACLE_HOME/sqlplus/admin directory. I use glogin.sql to set some default values, like putting the db name in the prompt or setting the SID as the terminal header so I can see my current sqlplus SID all the time, etc., but none of those caused this. The guilty one is:
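    set timing on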

Yes! I had “set timing on” in my glogin.sql file, so I always get the execution time of my sql scripts, and somehow autoupgrade does not run a “set timing off” command, which causes some unexpected results. As you know, set timing on adds the execution time to the end of your commands’ output; probably, some scripts take it to be part of a query result. I removed it and re-ran the autoupgrade tool. Everything worked smoothly and perfectly; the database was migrated as a pdb.

So, I spent 10 days because of “set timing on”, but at least I learned a lot and solved a problem of mine; I am proud of it 🙂 I hope this helps whoever uses a glogin.sql file with set timing on in it.

Wish you healthy days.

Peace at the Home, Peace in the World (Mustafa Kemal ATATÜRK) .