Cloud Base Database Service
Mustafa, 2024-04-25 (updated 2024-05-14)

Hi,

Since I relocated to the Netherlands, I haven't been able to find much time to write, but I finally have some spare time. Recently we migrated a client's database environment to the cloud, and to achieve that I started learning cloud at the end of 2023 (I know, I am "a little bit" late). So far so good. Base DB Service is great: you have full control, all the features (based on your license), automatic backups, really good pricing, etc. During the migration I learned a lot, and I think there are a few points to pay attention to. I will share my findings so far; if you plan to migrate to OCI Base DB Service, you should consider them.

1- Time Zone File

I posted about the danger of the "General Purpose Template" before: https://mustafakalayci.me/2023/05/11/dbca-templates-and-dangerous-general-purpose-template/

If you use the General Purpose template during database creation, DBCA does not create a database from scratch; it restores a "seed" database and renames it, because creating a database from scratch takes longer. One of the problems with that is the time zone file version: even if you have a newer time zone file in your Oracle Home software, it won't be used, because the database is restored from an old one. As you can guess, we have the same issue on Base DB Service; I think the Base DB Service database is restored from an old default backup. So when you create a 19c Base DB Service, the default time zone file version is 32, which is pretty old (the latest version is 42 or 43 now). Don't forget to update your time zone file version before using Base DB Service.

2- Invalid Directory Object Path

As I explained above, the Base DB Service database is restored from an old backup, and I am pretty sure that old database's name was ORCL 🙂 Why? Because if you query DBA_DIRECTORIES for the DATA_PUMP_DIR directory on Base DB Service, you will see it is /u01/app/oracle/admin/ORCL/dpdump even if your database name is completely different.
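A quick check-and-fix sketch for the directory path (MYDB below is a placeholder, not a name from the post; use your actual db_unique_name):

```sql
-- Check where DATA_PUMP_DIR currently points (likely the old ORCL path).
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'DATA_PUMP_DIR';

-- Recreate it under your real admin directory.
-- 'MYDB' is a hypothetical db_unique_name; substitute your own.
CREATE OR REPLACE DIRECTORY data_pump_dir
  AS '/u01/app/oracle/admin/MYDB/dpdump';
```

Make sure the path actually exists on the OS before running a Data Pump job against it.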
You should fix it by recreating the directory with the proper db_unique_name path.

3- Single Member per Redo Log Group

This one was a little surprising to me. Every Oracle document says: always use multiplexed redo log files! The redo log is one of the most important parts of your database; if you lose an active/current redo log file, you will lose data for sure. This is 101: you must have multiple redo log members for each redo log group, and preferably they should reside on different disks/volumes. Unfortunately, on Base Database Service there is only one member per redo log group (query V$LOGFILE). Of course, Oracle uses some redundant disk structures here, but in case of a file corruption you should still have multiple redo log members. So I set the db_create_online_log_dest_2 parameter and created new redo log groups so that I could have two copies of each redo log file on different mount points.

EDIT: Here is an important note about the single-member redo log groups: if you add another member to a redo log group and then try to create a "Data Guard association" via the cloud UI, it raises an error, and the error doesn't explain much. I had to review all the steps I had done to identify that it was the new redo log member I had added. Of course you can set up your own Data Guard configuration, but if you are planning to use the cloud interface, you might get this error.

4- Default PDB

Base DB Service comes with a default PDB even if you don't enter a PDB name while creating the service, so if you are not planning to use it, you should drop it.

Those are the important points for me to know. Besides them, if you are planning to migrate to OCI Base DB Service, you should consider these actions too:

Backup your TDE wallet and its password. By default, if you didn't change it at DB service creation, the wallet password is the same as the SYS password, but in time you will change the SYS password, so don't forget to save the TDE password too.

Enable unified audit if you didn't.
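The redo log multiplexing described in point 3 above can be sketched like this (the destination path, group numbers, and size are examples I made up, not values from the post):

```sql
-- Point a second online log destination at a different mount point.
ALTER SYSTEM SET db_create_online_log_dest_2 = '/u03/app/oracle/redo' SCOPE = BOTH;

-- Option A: add a second member to an existing group
-- (the file path and group number are examples).
ALTER DATABASE ADD LOGFILE MEMBER '/u03/app/oracle/redo/redo04b.log' TO GROUP 4;

-- Option B: create new groups; with db_create_online_log_dest_1 and _2 set,
-- Oracle creates one member in each destination automatically.
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 1G;

-- Verify: each group should now show two members on different mounts.
SELECT group#, member FROM v$logfile ORDER BY group#;
```

Remember the EDIT note above: extra members can break the cloud UI's Data Guard association, so do this after (or instead of) using that wizard.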
On 19c it is not enabled by default, so you can disable traditional audit and enable unified audit.

Install the DBMS_CLOUD package. Since you are on the cloud now, you will eventually need it. A bad side is that after a Release Update, like 19.21 to 19.22, it is removed and you need to reinstall it 🙁

Of course there might be many other steps. For example, if you are migrating from a TDE-disabled system, you need to encrypt your tablespaces first; otherwise Base DB Service will not be able to create automatic backups! Or add the necessary patches before you migrate, in case you need them.

Doing a migration is not an easy task, but after a few tests you will have your steps, and the actual migration will be easy. Luckily, my client's CTO is a very experienced and competent person with very good communication skills. Many thanks to him for making this migration really easy for me.

Any comment is welcome as always, and thanks for reading. Wish you all healthy, happy days.
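To catch the DBMS_CLOUD removal mentioned above, a quick post-patch check might look like this (just a sketch; the reinstall itself follows Oracle's DBMS_CLOUD installation instructions):

```sql
-- After applying a Release Update, check whether DBMS_CLOUD
-- still exists and is valid.
SELECT owner, object_name, object_type, status
  FROM dba_objects
 WHERE object_name = 'DBMS_CLOUD';

-- No rows returned, or STATUS = 'INVALID', means the package
-- has to be reinstalled before your cloud-dependent jobs run again.
```

Putting a check like this into your standard post-RU checklist saves you from discovering the missing package at job runtime.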
What is the DB size you migrated to Base DB, and by which method? Another point: it is better to keep the PDB name the same as the source DB before migration. This keeps things easier.
Hi Raja, thanks for your comment. This is the smallest DB we have; it was around 250GB. I have tested many methods (and I am planning to write some posts about them too), but there were some limitations in our case because the source database was Standard Edition 2 and we migrated to Enterprise Edition. So, to minimize the downtime, we used a refreshable clone PDB. It really worked well for us. And since we used a refreshable clone, the initial PDB was of no use to us.