Expanded controlfile section n from nnn to nnn records

Hello everyone,

I have already written about the information the control file stores: https://mustafakalayci.me/2020/08/11/control-file-content-and-the-limits/.

Data in the control file is separated into "SECTIONS" and you can see those section names (types) via the V$CONTROLFILE_RECORD_SECTION view. For example, the database keeps a separate section for "BACKUP SET"s. Every section has a record limit. All of this information can be queried via V$CONTROLFILE_RECORD_SECTION, of course.

RECORDS_TOTAL is the number of records allocated for that section.
RECORDS_USED is the number of records currently in use.

You might encounter an alert log message from time to time like "Expanded controlfile section 11 from 100 to 200". This message means the database needs to store more data in the control file than the currently allocated space allows, so Oracle expands that section. In this sample message, 100 is the current RECORDS_TOTAL (and also RECORDS_USED, since all records are filled), and the section is expanded to 200, so RECORDS_TOTAL becomes 200. How about the section? What is section 11? Unfortunately there is no SECTION_ID column, but V$CONTROLFILE_RECORD_SECTION always returns rows in a specific order, and you can use a select like the one below to learn the section ids:
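A sketch of such a query (since the view returns the sections in a fixed order, ROWNUM can serve as the section id):

select rownum as section_id, type, records_total, records_used
from   v$controlfile_record_section;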

So, 11 is ARCHIVED LOG, which means that in order to store the archived log records currently required (based on your deletion policy in RMAN), the database needed to increase the number of records in the control file.

The database needs to expand those sections whenever the amount of data to be stored increases, and you will see alert log messages like the one above when this occurs.

wish you all good, healthy days.

lsnrctl or new db connection is too slow

Hello everyone,

I know this is not new, but I hadn't seen this problem for a long time and it took me a while to figure it out.

A team leader (who is like an elder brother to me) from the company where I started my first job called me and asked about a problem at one of their customers. The customer complained that database connections were too slow, but there was no problem after the connection was established. First things first, I wanted to check database availability, but because of some PATH problems (this was a Windows server) it took some time. After fixing the environment variables, I finally reached the database via a local sysdba connection: the database was up and running, and that connection was fast. By the way, the database was an 11.2 version, which is quite old.

So, I started to focus on the LISTENER. I first checked the status, but the "lsnrctl status" command was quite slow. Then I stopped and started the listener again, and these two operations were also slow. I checked the log file under $ORACLE_BASE/diag/tnslsnr/<SERVER_NAME>/<LISTENER_NAME>/trace, but everything seemed OK! When I tried to log on via the listener on the database server (using the net service name), it took around 70 seconds.

I started a listener trace (adding tracing parameters and restarting the listener) and tried to make a connection again. As soon as I issued my connection command (sqlplus username/pass@db), the trace file was created. This means the network connection was fast, but when reading the trace file I saw that, after receiving the connection information, the listener was waiting for around 60-70 seconds.
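For reference, listener tracing is typically enabled with parameters along these lines (LISTENER stands for your listener name; the directory and file name are just examples, adjust to your environment):

TRACE_LEVEL_LISTENER = support
TRACE_DIRECTORY_LISTENER = /tmp
TRACE_FILE_LISTENER = listener_slow.trc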

I understood that the network had no issue; the connection arrived immediately, yet db connections and lsnrctl commands were slow. As I said, I had already checked the listener trace directory, because I know that if the listener.log file under the trace directory gets very big it can cause slow listener operations, but it was only a few megabytes. I checked a few more things and finally realized that I had never checked the "alert" directory under diag ($ORACLE_BASE/diag/tnslsnr/<SERVER_NAME>/<LISTENER_NAME>/alert). When I opened it, there were over 1900 log.xml files, which are the XML version of the listener.log file. I deleted them all and everything worked like a charm. The listener was checking all those XML files to decide which XML file to write its log data into. That was the cause, and we were able to solve the problem.

Always check your listener log files and keep them as small as possible.

Wish you all healthy days.

SESSION_CACHED_CURSORS Parameter and How It Can Affect the Database

Hello,

Let's talk about the SESSION_CACHED_CURSORS parameter. It might have a significant effect on your system, so understanding how it works and what it causes is important.

This parameter limits the number of cached cursors per session. As you know, there are two kinds of parse: SOFT and HARD. A hard parse creates the cursor from scratch: checking grants, syntax and semantics, creating execution plans etc. A soft parse uses a previously created cursor. So, if Oracle has already calculated all the information for a query (SQL statement, cursor), it stores it in the shared pool and can use it over and over again. Not calculating everything from scratch is a big performance gain.

Even in a soft parse, Oracle still needs to search through the library cache to find the cursor first. SESSION_CACHED_CURSORS allows you to cache a cursor in your session's PGA (while it still lives in the SGA). This way, if you run a query over and over, it will be found and executed even faster. Basically, SESSION_CACHED_CURSORS provides a kind of "softer" parse. Starting from 11g its default value is 50, and in 21c it is still the same.

You can see cached cursors via the V$OPEN_CURSOR view. Let's open a new sqlplus session and check its cached information. To check the cache information I will use a separate session, because I don't want to affect the main session. Here is the main session:
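Something like this can be used to identify the main session's SID (the actual output is not reproduced here):

select sys_context('userenv', 'sid') as sid from dual;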

So my main session's SID is 1470. From the monitoring session I will check the cached objects of that session:
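The check from the monitoring session is roughly this (1470 is the SID found above):

select cursor_type, sql_text
from   v$open_cursor
where  sid = 1470
order  by cursor_type;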

I won't be able to put the whole output here because it will get much bigger in a minute. A simple explanation of the cursor types:

OPEN is the currently open (last run) SQL statement.
OPEN-PL/SQL is the currently open cursors in PL/SQL. There could be more than one; remember that you can open many cursors.
OPEN-RECURSIVE is the recursive SQL statements run on behalf of your actual statement (for example, Oracle needs a privilege check to see whether you have access to the objects, and it runs a query to find that out).
SESSION CURSOR CACHED is half of what we are looking for here: direct SQL statements which have been run in this session and cached in the session PGA.
PL/SQL CURSOR CACHED is the other big half of what we are looking for here: SQL/PLSQL statements which have been run inside a PL/SQL object called in this session.

There are a few more types but I won't go into detail on all of them right now. If the session runs a SQL statement directly (select, insert, update, delete, merge etc.), it will be cached as SESSION CURSOR CACHED. If the session calls a PL/SQL object, all SQL statements in it will be cached as PL/SQL CURSOR CACHED. Remember, to cache a SQL statement it must be run at least 3 times; only then will it be cached. By "run" I mean the cursor must be opened and closed. If you just run a query 3 times, you will not see it in the cache because the last cursor is still open. Run something else and then it will appear in the cache.

You can see the session cursor cache counts via session statistics:
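A query along these lines shows those statistics for the test session (again, 1470 is the SID of the main session):

select sn.name, st.value
from   v$sesstat st, v$statname sn
where  st.statistic# = sn.statistic#
and    st.sid = 1470
and    sn.name in ('session cursor cache hits', 'session cursor cache count');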

"session cursor cache hits" is how many times a statement has been found in the session cursor cache. If you run a query 100 times repeatedly, after the first 3 runs (it gets cached after 3 runs) the next 97 will be read from the cache and "session cursor cache hits" will increase by 97. "session cursor cache count" is the current number of cached objects (all SQL, PL/SQL and dictionary object queries).

As I said, by default SESSION_CACHED_CURSORS is 50, so my session can cache up to 50 different statements. And if there are more? Oracle uses a Least Recently Used (LRU) algorithm to keep the most (and most recently) used queries. Less recently issued statements are removed from the session cache (not from the library cache).

Let's start some tests and run 3000 different queries 3 times each (so they can be cached). Of course, instead of writing 3000 different queries I will use a simple PL/SQL block; those queries will be cached as PL/SQL CURSOR CACHED. Also, I will create a tmp table as a copy of dba_objects to do my tests:
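The test script was roughly like this (t is my name for the copy of dba_objects; the reason for the dynamic SQL is explained right below):

create table t as select * from dba_objects;

declare
  v_cnt number;
begin
  -- 3000 distinct statements, each run 3 times so they can be cached
  for r in 1 .. 3 loop
    for i in 1 .. 3000 loop
      execute immediate 'select count(*) from t where object_id = ' || i into v_cnt;
    end loop;
  end loop;
end;
/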

Before we continue, you might wonder why I used dynamic SQL. If I used static SQL such as "select count(*) from t where object_id = i", then i would be a bind variable and every query I run would be the same, so we couldn't see the difference.

From the monitoring session, let's check V$OPEN_CURSOR for the main session (a little bit long output):

As you see, 50 different select statements have been cached, with object_id values from 2951 to 3000: our last 50 queries. Also, the OPEN cursor is our initial PL/SQL block.

So what would happen if we increase the cache? Is it a good thing or a bad thing? Well, it depends. It is a good thing because you will be doing "softer" parses and your session will reach repeated queries faster. But since this information is stored in the PGA (session-specific memory area), it will increase PGA usage. If you don't have enough PGA, it could be a pain for your database. The PGA is automatically managed by Oracle; it grows or shrinks based on session needs. So, you might end up with too many PGA resize operations and might not even be able to cache that many queries at all.

Let's check the current PGA size for our main session now, from the monitoring session:
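The PGA check can be done roughly like this:

select round(p.pga_used_mem/1024/1024, 2)  as pga_used_mb,
       round(p.pga_alloc_mem/1024/1024, 2) as pga_alloc_mb
from   v$session s, v$process p
where  s.paddr = p.addr
and    s.sid   = 1470;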

So our main (testing) session is using 3 MB of PGA. Now I will increase session_cached_cursors to 1000 and run the previous 3000 queries again.

Check the PGA size now:

Now it uses 8MB of PGA. The numbers may seem small, but think of it this way: we are using about 2.5 times more PGA than before. If you have 10,000 active sessions, this could be a problem; it is a high usage. If you have enough memory then there is nothing to be afraid of, or at least you should adjust your memory components accordingly. There is no single correct size or ratio, because this highly depends on your application code. You must test the values: if your application reuses the same queries during a business job, you can get away with a smaller session_cached_cursors, but if not, you should increase the size according to your available memory.

How about the good side? Yes, let's measure that. Assume we have a business process that runs 400 different queries on every execution, and we run this process 1000 times a day. How would increasing or decreasing the parameter affect performance? I will start with 100.

It took 29.38 seconds to run all the queries. Now I increase the parameter value to 400 (which is almost enough to store all the queries I run):

24.63 seconds: about 5 seconds less, which means around 17 percent faster. That is not bad, actually quite good. Think of your system running 17 percent faster without doing complex SQL tuning or changing the hardware. If you set session_cached_cursors between 100 and 400, the elapsed time falls between 24 and 29 seconds accordingly, and so does the PGA usage.

Finally, you can also query how much your session has used the session cursor cache:

As you see, currently 399 different queries (SQL statements) have been cached in the session, and those cached queries have been hit 2,011,646 times.

This is quite a nice trick in my opinion, but it must be used cautiously because of the increase in PGA usage.

I hope this helps to increase your database's efficiency.

Wish you all healthy days.

Datafiles marked as unrecoverable in V$DATAFILE even if they are not!

Hello,

It has been a while since I posted. I lost a relative who was very close to me and it wasn't easy. So, let's continue.

I ran orachk on one of my development databases yesterday and the result was very informative. Interestingly, I had a "FAILED" check indicating that some of the datafiles are not recoverable! This is a serious issue, so I checked. The check detail states that the query below shouldn't return any rows:
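The check is based on a query along these lines (a sketch, not the exact ORAchk text):

select file#, unrecoverable_change#, unrecoverable_time
from   v$datafile
where  unrecoverable_time is not null;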

And there were rows returned indeed. This database is not in force logging mode because it is just a development database and the customer agrees with that. Out of curiosity I checked the production database, and some datafiles were returned by the query above too. That concerned me, because the production database is in force logging mode. When I checked the unrecoverable_time column, I realized that the dates were really old, which was odd to me because I take a full backup every day. How could these files be unrecoverable?

I ran the "report unrecoverable" command in RMAN and it showed nothing (for both the production and the development database). That is good, but why were those datafiles marked as unrecoverable and still listed that way? If you do an unrecoverable operation on a table, the datafiles in which that table is stored will be marked as unrecoverable in V$DATAFILE.

Of course, it turned out to be a lack of knowledge on my part about the unrecoverable_time column in v$datafile. Here is the 19c documentation:

https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/V-DATAFILE.html#GUID-7BF7955C-9705-40F4-B2F6-5D7F3A32DD30

UNRECOVERABLE_TIME DATE Timestamp of the last unrecoverable change. This column is updated only if the database is in ARCHIVELOG mode.

This column is only ever "updated". So, if you did an unrecoverable operation like a SQL*Loader direct load on a table, the related datafile is marked as unrecoverable and the unrecoverable_time column is updated in v$datafile. Even after a full backup, it won't be reset back to null! It just keeps the last unrecoverable time information.

A better way to check for genuinely unrecoverable datafiles could be this:
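A sketch of such a check, comparing the unrecoverable change of each datafile with its last backup (the exact query I used may differ):

select d.file#, d.unrecoverable_time, d.unrecoverable_change#
from   v$datafile d
where  d.unrecoverable_change# >
       (select nvl(max(b.checkpoint_change#), 0)
        from   v$backup_datafile b
        where  b.file# = d.file#);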

This also makes me wonder why ORAchk runs the first query at all, since it does not provide exact information about unrecoverable datafiles. Maybe a minor bug, or maybe there is still something I don't know.

Edit: I realized that I didn't mention how to clear the unrecoverable_time and unrecoverable_change# columns. They won't be cleared if you take a full (or any other kind of) backup. This information is stored in the control file and, as I mentioned before, the column is only updated. So, you need to recreate your control file, and this operation requires a shutdown.

Generate the "create controlfile" statement, shut down the database, start up in nomount mode, recreate the control file, recover the database if needed and then open the database. One post step is to catalog the recovery area, because the new control file won't have any recovery file information.

alter database backup controlfile to trace as '/tmp/my_control_file2.ctl' noresetlogs;
shutdown immediate;
startup nomount;
create controlfile ... -- use the noresetlogs option so as not to lose any data
alter database open;

thanks for reading. wish you all healthy, happy days.

How to Convert LONG to Clob PART 2

Hi,

I just want to write a small post today. Some time ago, I shared a way to convert LONG to CLOB directly in a query: https://mustafakalayci.me/2020/04/06/how-to-convert-a-long-column-to-clob-in-a-query/ Even though that method works, it is really slow, and anything slow annoys me!

So, I want to share another method which is much, much faster than the Sys_dburigen function, but THIS IS UNDOCUMENTED and UNSUPPORTED! It is used internally by Oracle, so it works fine, but you must be careful. Since no one uses LONG columns in their applications (right? you don't! please don't!), this kind of code is handy mostly for admins, because for backward compatibility the data dictionary still has some LONG data type columns.

What we are going to use is a LIBRARY called UTL_XML_LIB. I almost never create objects under the SYS user, so I will create this code under a privileged admin user. First we must grant privileges on the UTL_XML_LIB library and then create an EXTERNAL procedure in PL/SQL.
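The grant is a one-liner, run as SYS (mustafa is the admin user used in the rest of this post):

grant execute on sys.utl_xml_lib to mustafa;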

Now create the external procedure:
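A sketch of the call spec; the external name kuxLong2Clob and the library are as described below, but the exact PARAMETERS mapping here is my assumption and may need adjustment for your version:

create or replace procedure my_long2clob (
    p_select_stmt in     varchar2,  -- select returning the LONG column
    p_rowid       in     rowid,     -- rowid of the row to convert
    p_result      in out clob )     -- filled by the C function
as language c
   library sys.utl_xml_lib
   name "kuxLong2Clob"
   with context
   parameters ( context,
                p_select_stmt string,
                p_rowid       ocirowid,
                p_result      ociloblocator );
/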

"kuxLong2Clob" is a C function written by Oracle that returns void (so basically it is a procedure). It takes a select statement for your LONG column plus a rowid, and returns the CLOB data as an out parameter. I created the my_long2clob procedure, but to be able to call it from a SQL statement it must be a function, so:
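The wrapper function, again as a sketch (the name f_my_long2clob matches the call shown later in this post; the concatenation is kept simple here, sanitize the inputs in real code):

create or replace function f_my_long2clob (
    p_rowid       in rowid,
    p_table_name  in varchar2,
    p_column_name in varchar2 ) return clob
as
    v_result clob;
begin
    dbms_lob.createtemporary(v_result, true);
    my_long2clob('select ' || p_column_name || ' from ' || p_table_name,
                 p_rowid, v_result);
    return v_result;
end;
/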

To test it, I won't create a new table with a LONG column; instead I will use one from the data dictionary (which is what I use this kind of code against anyway). Let's use the SYS.TRIGGER$ table, which is the base table of the DBA_TRIGGERS view; its ACTION# column is a LONG data type. Of course, we must first grant select privilege on SYS.TRIGGER$ to the mustafa user.
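The test looks roughly like this (the grant is run as SYS; a few rows are enough to see the result):

grant select on sys.trigger$ to mustafa;

select t.obj#, f_my_long2clob(t.rowid, 'SYS.TRIGGER$', 'ACTION#') as action_clob
from   sys.trigger$ t
where  rownum <= 10;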

I suppressed the output, but there it is: a really fast LONG to CLOB conversion in SQL.

As a con, it will only work against tables, not VIEWs! You cannot use it as "f_my_long2clob(rowid, 'DBA_TRIGGERS', 'TRIGGER_BODY')". Since it requires a ROWID, you must refer to the underlying table and its rowid.

Beyond that, if you want to get a LONG as VARCHAR2, you can similarly use the "kpdbLong2Varchar2" procedure in the DBMS_PDB_LIB library. Like UTL_XML_LIB, it is very fast; the only difference is that it returns VARCHAR2 and trims the overflowing part. Where is it in use? Check how DBA_VIEWS produces the TEXT_VC column data 😉

I hope this helps you fight against LONG columns in your code.

Wish you all healthy, happy days.

Zero Data Loss DISASTER Database for Standard Edition 2

Hi,

As you know, Data Guard is an Enterprise Edition feature and the only accepted method of protecting your data against a "DISASTER" (DR). How about Standard Edition 2 (SE2)? Is this possible? Well, I did it, so kind of yes 🙂 with some risk, of course.

Warning: I don't take any responsibility for wrong settings. Wrong actions can damage your database. Please be careful.

If you google Oracle SE2 Data Guard or Manual Log Shipping, you will find many blog posts and documents about it. Even YouTube has videos about this, but almost all of them (at least all I've seen and read) are not "Zero Data Loss". The steps are quite easy:

  1. you have a production database based on SE2
  2. you use another server as your DR server. (it could be on the same server if you use duplicate snapshot method)
  3. copy all your backup
  4. restore database and keep it in MOUNT mode
  5. copy your archivelogs every X minutes to the DR server
  6. recover database using rman or sqlplus.

And that is it. This is a valid method. The only gap in it is that you transfer your archived logs but not the redo logs. This means you will almost certainly lose some data in case of a disaster, because the changes in the redo logs will be lost (if you cannot reach the server). So, what can we do? The basic answer is to copy the redo logs too, but how? Redo logs change constantly. Even if you copy the redo logs at a 1 minute interval, you can still lose 1 minute of data.

To achieve this, you must force the database to write both redo logs and archived logs onto the DR server. Not copying files at an interval, but forcing the database to write them there. You probably already understand what I did. To use this kind of method, you should (must) have a strong, reliable, stable and fast network connection between your production and disaster servers, because we will send redo logs and archived logs to the DR server instantly.

I am using 19.15 databases on Oracle Linux 8 (VirtualBox). The same things can be done on other operating systems too; the logic is the same.

Here are the steps that I am about to do:

  1. On the DR server, create a folder to share over network via NFS.
  2. On the PROD server, mount this shared folder.
  3. On the PROD server, Add an archive log destination to this shared folder.
  4. On the PROD server, Add redo log members for each group to this shared folder.
  5. On the DR server, create a copy of prod database via rman (not duplicate)
  6. On the DR server, catalog the archive logs which reside on the shared folder and recover the database.
  7. In a disaster situation, copy redo logs to their original location with original name and recover database.

that’s it. Let me demonstrate those steps.

I created 2 virtual machines. They both have the Oracle DB software and one of them (PROD) has a database; I will use the other virtual machine as DR.

  1. Oracle home is under /u01/app/oracle/product/19c/dbhome_1
  2. I (always) use OMF (Oracle Managed Files).
  3. data files are under /u01/app/oracle/oradata
  4. recovery area is under /u01/app/oracle/fast_recovery_area
  5. SID of database is cdb
  6. PROD server ip is 192.168.56.101
  7. DR server ip is 192.168.56.104

1- On the DR server, create a folder to share over network via NFS

As the oracle user (the Oracle DB software owner), create the necessary directories.

This is the folder that I will share over the network and mount (map) on the production server.

To share it over the network, as the root user:
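A minimal NFS export along these lines (the export options are an example; 192.168.56.101 is the PROD server):

# on the DR server, as root
echo "/u01/PROD_LOGS 192.168.56.101(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable --now nfs-server
exportfs -a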

2- On the PROD server, Mount this shared folder on PROD server

As the oracle user, create a directory to use as the mount point for the shared directory on DR. /u01/app/oracle/fast_recovery_area/DR_LOGS will be the mount point; the directories inside it are for OMF.

as root user:
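The mount itself is along these lines (192.168.56.104 is the DR server; add the same entry to /etc/fstab so it survives a reboot):

mount -t nfs 192.168.56.104:/u01/PROD_LOGS /u01/app/oracle/fast_recovery_area/DR_LOGS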

At this point, if I put a file into the /u01/app/oracle/fast_recovery_area/DR_LOGS directory on the PROD server, it is actually written to /u01/PROD_LOGS on the DR server.

3- On the PROD server, Add an archive log destination to the shared folder

By default with OMF, if there is no value for the log_archive_dest_n parameters, Oracle internally uses log_archive_dest_1 as LOCATION=USE_DB_RECOVERY_FILE_DEST. That way, archived logs are written under the /u01/app/oracle/fast_recovery_area directory, with a sub-directory structure of course (/u01/app/oracle/fast_recovery_area/<SID>/archivelog/YYYY_MM_DD).

But! If you assign a value to any of those parameters, only those locations are used to create archived logs. So I will set 2 log_archive_dest_n parameters: one location on the PROD server and one on the shared folder.
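The settings were along these lines (paths are from the environment described above):

alter system set log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both;
alter system set log_archive_dest_2 = 'LOCATION=/u01/app/oracle/fast_recovery_area/DR_LOGS' scope=both;
alter system set log_archive_min_succeed_dest = 2 scope=both;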

By setting these 2 parameters (log_archive_dest_1 and 2) we make sure that Oracle creates 2 copies at the same time in 2 different locations, one of which is actually on our DR server. As you see, there is one more parameter which is very important: log_archive_min_succeed_dest. By default it is 1, and if you don't set it to 2, then in an access-problem situation (the network might have issues or the DR server could be shut down) Oracle will just write to the first location and skip the inaccessible one. That would mean the necessary archived logs are not written to the DR server. If we want a "zero data loss" DR database, Oracle must write the archived log to the DR server (you can also use the MANDATORY clause).

4- On the PROD server, Add redo log members for each group to the shared folder

As with the archived logs, we need to do the same thing for the redo logs too.
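A sketch of those commands, assuming a three-group redo log configuration (adjust group numbers and paths to your database):

alter system set db_create_online_log_dest_1 = '/u01/app/oracle/oradata' scope=both;
alter system set db_create_online_log_dest_2 = '/u01/app/oracle/fast_recovery_area' scope=both;
alter system set db_create_online_log_dest_3 = '/u01/app/oracle/fast_recovery_area/DR_LOGS' scope=both;

alter database add logfile member '/u01/app/oracle/fast_recovery_area/DR_LOGS/CDB/onlinelog/redo1.log' to group 1;
alter database add logfile member '/u01/app/oracle/fast_recovery_area/DR_LOGS/CDB/onlinelog/redo2.log' to group 2;
alter database add logfile member '/u01/app/oracle/fast_recovery_area/DR_LOGS/CDB/onlinelog/redo3.log' to group 3;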

As with the log_archive_dest_n parameters, if we set any of the db_create_online_log_dest_n parameters we need to set all three of them: 2 locations on the PROD server and 1 location on the DR server. This is not actually a mandatory step, but it is good practice. We could just add the redo log members in the shared location (the last 3 commands above) and that would be enough, but if one day you need to create another log group and forget to add a member in the shared folder, you could be exposed to data loss! So set it anyway.

Then, as shown above, add 1 member for each log group in the shared folder. I use standard naming here (redoN.log) because I need to be able to tell which group a file belongs to from its name (we will use this later).

Lastly, I create a pfile to move to the DR server, so we can create our DR database.

5- On the DR server, create a copy of prod database via rman

On the PROD server, the database "cdb" is up and running. Let's start:

I don't want to go into all the details of standard backups and restoring a database, so I won't put the outputs here.

On the PROD server, take a backup of the database via RMAN and then copy everything to the DR server via scp.

from shell:
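Roughly like this (backup commands and target paths are examples; initcdb.ora stands for the pfile created in the previous step):

# on PROD: full backup including archived logs, plus the control file
rman target / <<EOF
backup database plus archivelog;
backup current controlfile;
EOF

# copy the backup pieces and the pfile to the DR server
scp -r /u01/app/oracle/fast_recovery_area/CDB/backupset oracle@192.168.56.104:/u01/app/oracle/fast_recovery_area/CDB/
scp -r /u01/app/oracle/fast_recovery_area/CDB/autobackup oracle@192.168.56.104:/u01/app/oracle/fast_recovery_area/CDB/
scp /tmp/initcdb.ora oracle@192.168.56.104:/tmp/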

Copy all backups, archived logs etc. and also the pfile.

Then go to the DR server, create the necessary directories and restore your database:
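For example (the adump path is my assumption; the rest matches the layout listed at the beginning of this post):

mkdir -p /u01/app/oracle/oradata/CDB
mkdir -p /u01/app/oracle/fast_recovery_area/CDB
mkdir -p /u01/app/oracle/admin/cdb/adump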

These directories will be needed by the database. Restore the database:
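One possible flow (details such as the control file autobackup location depend on your configuration, so treat this as a sketch):

export ORACLE_SID=cdb
rman target / <<EOF
startup nomount pfile='/tmp/initcdb.ora';
restore controlfile from autobackup;
alter database mount;
catalog start with '/u01/app/oracle/fast_recovery_area/CDB' noprompt;
restore database;
EOF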

At this point we have a copy of the prod database on DR, in MOUNT mode.

One very important thing to do

DO THIS ON THE DR SERVER ONLY! We are deleting any copied redo logs, because during recovery there must not be any redo logs!
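Something along these lines (the paths are the redo log locations listed later in this post):

# on the DR server ONLY: remove any online redo logs that came with the copy
rm -f /u01/app/oracle/oradata/CDB/onlinelog/*
rm -f /u01/app/oracle/fast_recovery_area/CDB/onlinelog/*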

6- On the DR server, catalog archive logs, which resides on shared folder and recover database

From now on, you can create a recover.sql script and call it every n minutes from cron to recover the database regularly. The content of recover.sql is:
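A sketch of it (RMAN commands; /u01/PROD_LOGS is the shared directory described above, and the script can be called as "rman target / @recover.sql"):

catalog start with '/u01/PROD_LOGS' noprompt;
recover database;
exit;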

We must catalog the newly arrived archived logs first and then recover the database. I want to remind you that the /u01/PROD_LOGS directory is the one we shared over the network and mapped onto the PROD server.

Every recovery command will end with an ERROR:

This is expected behavior. Since we don't apply the redo logs, Oracle looks for the next archived log in the sequence and can't find it. Our recovery is working. After the archived log with sequence 18 arrives, the recover command will raise the same error for sequence 19.

7- In a disaster situation, copy redo logs to their original location with original name and recover database

From now on, our archived and redo log files are stored on the DR server too. If there is a disaster, you must open the database after applying all logs, including the redo logs. This is the only moment at which we apply the redo logs.

these are the redo log locations on PROD:

/u01/app/oracle/oradata/CDB/onlinelog
/u01/app/oracle/fast_recovery_area/CDB/onlinelog
/u01/app/oracle/fast_recovery_area/DR_LOGS/CDB/onlinelog (this one is the network directory on DR server)

The actual directory the redo files are written to is /u01/PROD_LOGS on the DR server. Those directories must also exist on the DR server (we already created them in the scripts above). We must now copy the redo logs under /u01/PROD_LOGS to their actual locations. I deliberately named the redo logs redoN.log so that I can identify which redo file belongs to which redo group. A very simple SQL script and shell script will copy the redo logs to their original locations:

Calling the put_redo_logs.sh file runs a SQL script which produces the copy shell commands.
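A sketch of that pair (the script names are the ones mentioned above; the WHERE clause skips the member that already lives in the shared folder):

# put_redo_logs.sh - run on the DR server while the database is in MOUNT mode
sqlplus -s / as sysdba <<EOF > /tmp/copy_redo_logs.sh
set pagesize 0 feedback off heading off
select 'cp /u01/PROD_LOGS/CDB/onlinelog/redo' || group# || '.log ' || member
from v\$logfile
where member not like '%DR_LOGS%';
EOF
sh /tmp/copy_redo_logs.sh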

After that, you can run recover.sql one more time; this time RMAN will not raise any error and you will see a "Recovery Completed" message. Now you can open your database and use it.

Now, why did I delete the redo log files that were copied (via scp) to the DR server? If you recover a redo log while in mount mode, the control file raises some flags and all subsequent recover commands will start from the sequence of the applied redo log. This causes problems after a while, because archived logs are eventually deleted due to the retention policy. It also means that if you apply a redo log during recovery, then to have a healthy copy of the database you must restore both the control file and the database all over again.

Pros

Obviously, you have a DR database with fully consistent data.

Cons

This approach carries a high risk, because if the database cannot write an archived or redo log to the shared folder, the production database will hang! You might have network issues, the DR server might be restarted or shut down, you might lose the disk drives on the DR server where the shared folder resides, etc.

I will write another post about those situations and do some tests. If losing a small amount of data for a short time is acceptable, then a refreshable pluggable database would still be a better option; I will also write a post about that.

If you don't have a reliable network, this option could be catastrophic! So try it carefully. I hope this helps in some way.

Wish you all healthy, happy days.

Indexes with NOSEGMENT a.k.a Virtual Index

Hi,

Today, an old trainee of mine contacted me on LinkedIn and we talked about some performance issues with a query. During the conversation I suggested using "virtual indexes" to simulate the index, so I wanted to write about virtual indexes and their benefits in this post.

If you don't work on a large database (or on a small database server), then you probably don't get bored while creating an index, but if you do, index creation can be a pain. Let's say you examined a query and decided that creating an index on some columns could improve performance, but will it? If your table is a large one, creating the index could take a serious amount of time. During that time, your server carries extra workload, and if that index does not work as you assumed, all that time and workload will have been for nothing! Also, you will have generated a lot of archived logs, which could increase your recovery time in case of a failure.

So what can we do? You can use a "virtual index". A virtual index is an index without a segment, which means the index has no data; it doesn't index anything at all! So what good comes out of it? Its creation is almost instantaneous, with no workload on your server, no archivelog generation, no performance impact in any way. So why create it? Because you can check your execution plan, see the cost and other metrics, and decide whether the index should really be created!

This saves you from wasting too much time and gives you an initial idea.

Here is an example:

I have 72,544 rows in my dba_objects view. Using this view, I created a table:
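A sketch of how such a table can be built (big_table is a hypothetical name; the row generator multiplies dba_objects roughly 15,000 times to reach about 1.1 billion rows):

alter session enable parallel dml;
alter session force parallel ddl;

create table big_table parallel 8 as
select o.*
from   dba_objects o,
       (select level l from dual connect by level <= 15000);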

I created a BIG table containing 1.1 billion rows. Those rows are copies of the dba_objects view, and it took 35 minutes to create the table in parallel (I enabled parallel DML and DDL operations to achieve that). The table size is around 185GB.

Now, Let’s create indexes on it:

The unique index took almost 18 minutes to complete and the non-unique one took 10 minutes (no parallel). I must say that my laptop is a monster with a 10th gen i9 (8 cores), 64GB RAM and NVMe SSDs; those timings could be double or triple on a regular DB server. As I said, after all this time, if the indexes aren't useful you just drop them.

Instead of that, we can use the NOSEGMENT clause while creating the index (a virtual index):
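For example (big_table and the column are the hypothetical names from the sketch above):

create index idx_big_virtual on big_table (object_id) nosegment;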

As you see, it is fast, because it just creates an index definition in the Oracle data dictionary. Literally, there is no such index. Check dba_indexes:

How can Oracle use an index that is not actually created? The answer is STATISTICS. To use a virtual index and get a consistent result from it, your statistics must be up to date. Of course, the result will not be exact, but close enough to give us an idea. By default, Oracle will not use virtual indexes in your execution plans, so you must set a hidden parameter; after that, your explained plans will include the virtual indexes:
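The parameter and a quick check look like this (the table and predicate are the hypothetical ones from above):

alter session set "_use_nosegment_indexes" = true;

explain plan for
select * from big_table where object_id = 12345;

select * from table(dbms_xplan.display);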

After setting the hidden parameter _use_nosegment_indexes, the optimizer creates an execution plan considering the virtual indexes, and I can see that the cost of the query has decreased from 6,450,000 to 2. This saves so much time. After checking your execution plans, if you are happy with them, you can actually create the index.

EDIT: There are 2 things that I forgot to mention. After creating a virtual index, run the DBMS_STATS.GENERATE_STATS procedure to generate some statistics for your virtual indexes.
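For example (the owner and index name are the hypothetical ones used above):

exec dbms_stats.generate_stats(ownname => user, objname => 'IDX_BIG_VIRTUAL');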

The second thing is that this can be used only in Enterprise Edition, not Standard Edition. Standard Edition has no deferred segment creation, so you cannot create a data object without a data segment.

To turn a virtual index into a real one, you must drop it and re-create it without the nosegment clause.

Time is the most valuable thing for us. Don’t waste your time carelessly.

Wish you all healthy, beautiful days.

How to install Oracle Database Options like Spatial, Label Security etc After DB Creation

Hi,

Yes, I am aware that the title is a little long and too informative, but I believe it is necessary because, in my opinion, the terms are complicated.

When you say "option" it could mean many things, like the "Advanced Compression" option in EE, or the Spatial option, or the Partitioning option, and not all of these are the same kind of "option"; they have different backgrounds and structures. Sometimes they are called "components" too, but in the documentation they are always mentioned as options. I want to write about "options" like Spatial, JServer, Label Security, Data Vault, OLAP… Do you see where I am going? These are the "options" in the dbca utility that you choose while creating the database (only if you choose "custom database", not general purpose or DW as the db type), but I will also talk about the Partitioning and RAT (Real Application Testing) options because their namespaces intersect with some "options" (ok, I will stop putting double quotes around the word option).

I didn't care much about options until now, and I wanted to learn how to install/uninstall them after db creation and what options we have. I started to dig through the documentation and realized that it is a little more complicated than I thought. What I focused on were the options in dbca, as I said:

Here they are. So, instead of searching the docs and Google first, I used a small trick: I checked the "Generate Scripts" option on the last page of dbca and looked at the scripts. Fortunately, Oracle puts every option's creation into a script, so it is really easy to find them. But before talking about the scripts, I want to talk about the option types.

Shared Objects & DLLs

Some options are made of just simple scripts, but some of them go deeper! For example, if you want to install the Spatial option (to use geographic coordinates in your db), you just need to run an SQL script, but for OLAP you need to "link" some objects into the Oracle kernel and then run some scripts. What does that mean? Some options are not just tables/packages etc.; they need special C functions in the Oracle kernel, but not everything is in the kernel. Shared objects in Linux and (the better known) DLL files in Windows are widely used to add or remove functionality from an application. In the end, Oracle is an application and uses those dynamic libraries. I am not a software engineer, so this part is a little above my pay grade, but I will try to explain. In Linux, if you want to add functionality to your application via shared objects (DLLs on Windows), you must "link" them into your application with the "make" command. For example, to use OLAP it must be linked into your Oracle binaries on Linux; likewise, the necessary DLLs must be available on Windows. Remember, this operation is not per database but per Oracle Home. When you do this, all current and future databases in that Oracle Home will (or will not) be able to use these options.

Of course, it is not that easy 🙂 Some options must be linked into the Oracle binaries but don't need an SQL script. So, if we group the options by their installation types, we have 3 different types:

  1. SQL Scripts (for ex: spatial)
  2. Linking Binaries (for ex: unified auditing)
  3. Linking Binaries & SQL Scripts (for ex: OLAP)

To install Spatial, you just need to run the $ORACLE_HOME/md/admin/mdinst.sql script as a sysdba user. That is it. This script creates the MDSYS user and the necessary objects to use the Spatial option.

To enable the "unified audit" option in the database, you just need to link some binaries, but you don't need to know everything of course. You can check this post of mine: http://mustafakalayci.me/2021/09/08/enable-unified-audit-on-linux-and-windows/

spoiler:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle

And to install OLAP, you must first link the OLAP module and then run the $ORACLE_HOME/olap/admin/olap.sql script as sysdba.

Linking Binaries & chopt

For the linking operation, Oracle provides a simple tool called chopt, short for Change Option. Instead of running the "make" command, you can use chopt to enable and disable "some" options in the Oracle kernel; it basically just calls the "make" command itself. It supports very few options:

As you see, you can enable/disable OLAP, Advanced Analytics, Partitioning and RAT. I want to repeat: when you use chopt, you are linking/unlinking some binaries. So, this will add or remove some components from the Oracle kernel, and currently available and future databases will (or will not) be able to use these options. Usage of chopt is very easy:
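For example (the valid option names in 19c are oaa, olap, partitioning and rat, as listed above):

# syntax: chopt <enable|disable> <option>
chopt enable olap
chopt disable partitioning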

If you run it on Windows, it will just rename a DLL (to enable an option it renames the DLL to the correct name, and to disable it, renames it to something else).

chopt has different options from version to version. For example, in old versions there was a module called DM (Data Mining), but in 19c it is under Advanced Analytics, so if you want to enable/disable Data Mining you must use the "chopt enable/disable oaa" command. Also, you can see that Partitioning is an option too. If you don't have a Partitioning license, you can disable the Partitioning option so that no developer creates a partitioned table and causes (maybe) thousands of dollars of cost.

Only use the options you need, nothing more. Consider that unnecessary options will increase db upgrade times too. Also, as in the Partitioning example, some options are licensed, which means you have to pay to use them. If you don't have the license, disable the option.

As far as I understand, there is a historical development here. For example, in old versions Spatial was an option that needed to be linked too; sdo_on and sdo_off were used to enable/disable Spatial.

As you see, when you try to turn it off now, it warns us that "sdo_off is disabled"; it is always on nowadays.

Here are some example switches to enable/disable options in the Oracle kernel, though some of them (like Label Security) are deprecated:

Product/Component Enable Switch Disable Switch
Automated Storage Management asm_on asm_off
Oracle Data Mining dm_on dm_off
Database Vault dv_on dv_off
Oracle OLAP olap_on olap_off
Oracle Label Security lbac_on lbac_off
Oracle Partitioning part_on part_off
Real Application Cluster rac_on rac_off
Real Application Testing rat_on rat_off

Also, please consider that if you disable an option like OLAP using chopt (or the make command), it will be disabled in dbca too; you won't be able to choose it:

WARNING: before using chopt or make commands, always shut down all services in the Oracle Home first (database, listener etc). On Windows servers, also stop the related Oracle services in Windows Services (like OracleServiceOrcl…).

Dbca Options

These are the options that you can install into a database (some might require binary linking). You saw them in the screenshot above. Each of them has some unique features (I won't go into the purposes of the options in detail) and some of them depend on others. The basic hierarchy is like this:

  1. Oracle JVM
    1. OLAP
    2. Multimedia
  2. Oracle Text
  3. Spatial
  4. APEX
  5. Label Security
    1. Database Vault
  6. Extension for .NET

To be able to install Multimedia you must have Oracle JVM first, but you can install APEX or Label Security without installing anything else. What are the scripts to install them? You should always check the documentation or support for this, but as an example I list them here for version 19.13. DO NOT run them directly! Always check the documentation first; scripts might change! These samples are taken from dbca-generated scripts.

  1. Oracle JVM
    $ORACLE_HOME/javavm/install/initjvm.sql;
    $ORACLE_HOME/xdk/admin/initxml.sql;
    $ORACLE_HOME/xdk/admin/xmlja.sql;
    $ORACLE_HOME/rdbms/admin/catjava.sql;
    connect "SYS"/"&&sysPassword" as SYSDBA
    $ORACLE_HOME/rdbms/admin/catxdbj.sql;
  2. OLAP
    $ORACLE_HOME/olap/admin/olap.sql SYSAUX TEMP;
  3. Multimedia
    $ORACLE_HOME/ord/im/admin/iminst.sql;
  4. Oracle Text
    $ORACLE_HOME/ctx/admin/catctx.sql Xbkfsdcdf1ggh_123 SYSAUX TEMP LOCK;
    alter user CTXSYS account unlock identified by "CTXSYS";
    connect "CTXSYS"/"CTXSYS"
    $ORACLE_HOME/ctx/admin/defaults/dr0defin.sql "AMERICAN";
    connect "SYS"/"&&sysPassword" as SYSDBA
    alter user CTXSYS password expire account lock;
    $ORACLE_HOME/rdbms/admin/dbmsxdbt.sql;
  5. Spatial
    $ORACLE_HOME/md/admin/mdinst.sql;
  6. Apex
    $ORACLE_HOME/apex/catapx.sql Xbkfsdcdf1ggh_123 SYSAUX SYSAUX TEMP /i/ NONE;
  7. Label Security
    $ORACLE_HOME/rdbms/admin/catols.sql;
  8. Database Vault
    $ORACLE_HOME/rdbms/admin/catmac.sql SYSAUX TEMP;
  9. Extension for .NET
    $ORACLE_HOME/rdbms/admin/dbmsclr.plb (in windows only)

How to Install Dbca Options

Well, I pointed out the scripts and you can run them (for the same version, and always check the documentation first), but of course it is not that simple 🙂

This is very important! The installation method depends on whether you are using a container database or a non-container database! If you are not using a container database (the old design, which is no longer supported starting with 21c), you can run the scripts by connecting as a sysdba user, but for a container database it is more complicated.

For a container database, most of those scripts must be run in cdb$root and then in all the pdbs, but this is a hard job and open to mistakes. Instead of this, use the catcon.pl perl script to automate it. Here is the Spatial sample creation script taken from the dbca-generated scripts:
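It looks roughly like this (the -b value is just a base name for the log files; check your version's catcon.pl usage before running it):

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
  -n 1 -d $ORACLE_HOME/md/admin -b mdinst_output mdinst.sql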

It seems complicated, but mostly the paths make it longer. It simply calls a perl script and passes the option's original script as a parameter. Always use catcon.pl to install/uninstall an option in a container database.

How to Check Which Options are Installed

This is (as always) not that simple 🙂 Some options are "registered options", so you can list them via the DBA_REGISTRY data dictionary view:
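For example:

select comp_id, comp_name, version, status
from   dba_registry
order  by comp_name;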

These rows do not map one-to-one to options. The OLAP option is listed as 2 rows, "Oracle OLAP API" and "OLAP Analytic Workspace". Also, some options are installed by default, like "Oracle Workspace Manager".

Is that all? Of course not. These are the registered options (components), but there are also options in the Oracle kernel, like unified auditing, which do not depend on an SQL script. For all options in the kernel, use the V$OPTION view:
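For example:

select parameter, value
from   v$option
order  by parameter;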

I just listed some of the rows because it is a long list.

Uninstall an Option

Uninstalling an option is not that easy, because some options create specific objects in different schemas (like SYS). To uninstall one, you must either find every object used by the option, or again use scripts that Oracle Support provides or that exist under the Oracle Home directory.

To uninstall Spatial, for instance, you can simply drop the MDSYS schema, but there might be tables using spatial data types, indexes etc.; you must find and drop them first. Even if it seems easy, there is always a catch. So be careful and always follow the support documents.

In Conclusion

So, it is a deep topic and there is probably much more to it. I said this before, but again: remove all unused options from your database; they will be a burden on your system. So far this is what I understand and have learned… I hope it is useful and helps you understand the db options.

As always, wish you healthy days, NO WAR in anywhere, all life in this planet matters.

Tables with Memoptimize for Write (fast ingest)

Hi There,

this is the second part of this post: http://mustafakalayci.me/2022/02/15/tables-with-memoptimize-for-read/

Oracle added a new feature called memoptimized tables, mostly for devices like IoT sensors: fast lookup and fast ingest tables. I already wrote about fast lookup (memoptimize for read) tables in my previous post, and now I want to write about fast ingest tables. As the name suggests, fast ingest tables are optimized for inserting new data. Small devices (like IoT sensors) tend to send data more than they process or query it, and we have billions of them.

For fast lookup tables (in my previous post), PL/SQL is not an option, but fortunately we can use fast ingest tables in PL/SQL; my tests will be based on PL/SQL code. As with fast lookup tables, you set the table as a fast ingest table (memoptimize for write), and while inserting you need to use a special hint, "MEMOPTIMIZE_WRITE". That way, Oracle understands this is a fast ingest table and handles the inserts accordingly.

How does it work? Your insert statements (either row by row or bulk) are staged in the LARGE POOL (a memory component in the database), and after a threshold is passed, the data is inserted (as if it were a bulk insert). This provides a significant performance increase, but as always, nothing is perfect and there is always a "but". So, this will increase your write speed, "but" your data will not be immediate anymore! You might not see your data right after inserting and committing it, because it is cached in the large pool and inserted into the table after a while. Secondly, you might lose some data! This is a big "but"! Since the data sits in the large pool for a certain amount of time, anything that can go wrong, like an instance crash, might cause the loss of some of that data. You must be careful while using this feature.

So let's test and see the difference. To demonstrate this, I will shut down and start up my database to remove everything in memory, and then connect as the "mustafa" user.

OK, before starting my test, I would like to check my large pool size first. The large_pool_size parameter is 0, as usual, because I am using ASMM (sga_target is set), so I need to check the dynamic components view:
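The check is simply this:

select component, round(current_size/1024/1024) as size_mb
from   v$sga_dynamic_components
where  component = 'large pool';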

Sizes are in megabytes. As you see, I have 96MB assigned to the large pool.

Let's create two tables: one memoptimized for write (fast ingest) and one standard heap table.
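Something along these lines (test_fast_ingest is the name used below; test_heap and the column list are my choices for this sketch):

create table test_fast_ingest (
  id   number primary key,
  col1 varchar2(100) )
segment creation immediate
memoptimize for write;

create table test_heap (
  id   number primary key,
  col1 varchar2(100) );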

Now I will do the inserts row by row and commit at the end:
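A sketch of the two blocks (1,000,000 rows is an assumption; the important part is the MEMOPTIMIZE_WRITE hint in the first one):

begin
  for i in 1 .. 1000000 loop
    insert /*+ MEMOPTIMIZE_WRITE */ into test_fast_ingest values (i, 'row ' || i);
  end loop;
  commit;
end;
/

begin
  for i in 1 .. 1000000 loop
    insert into test_heap values (i, 'row ' || i);
  end loop;
  commit;
end;
/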

As you see, I use the MEMOPTIMIZE_WRITE hint in the first PL/SQL block, for the test_fast_ingest table. Of course, the timings catch our attention:

fast ingest : 25.44 seconds
standard heap: 25.54 seconds

Huh! Is there a problem? Well, actually no. Let's check the large pool size again:

As you see, the large pool has reached its limit; it is now 768MB. As I said, memoptimize for write uses the large pool, and since it was not big enough to do the job, we waited for the large pool's growth operation. Let's run the same script again:

Here it is: the fast ingest table is more than two times faster than the standard heap table. This is a major performance increase. Let's try committing row by row:

This is a hard job to do, and even here the fast ingest table does great. With row-by-row commits, the fast ingest time increased about 2.25 times and the heap table time increased about 2.6 times. If you have many devices sending many insert statements, a fast ingest table will be able to handle it. Even so, increasing the large pool size would increase the performance further.

I have already mentioned some cons, like inserts not being immediate (even if they are committed), but there is one more con! Let's truncate the table and do some inserts. Please remember that the ID column is the PRIMARY KEY.
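The test is along these lines:

truncate table test_fast_ingest;

insert /*+ MEMOPTIMIZE_WRITE */ into test_fast_ingest values (1, 'first');
insert /*+ MEMOPTIMIZE_WRITE */ into test_fast_ingest values (1, 'second');
insert /*+ MEMOPTIMIZE_WRITE */ into test_fast_ingest values (1, 'third');
commit;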

As you see, I inserted three rows and they all have the same ID, so there should be a unique constraint violation, right? Well, in my tests there wasn't, probably because the data is not inserted yet: it is gathered in the large pool and inserted later, and since it is not my session doing the insert, I don't get any unique constraint violation for this action. You must be careful while using this.

Fast ingest tables have more satisfying results than fast lookup tables for me; we can increase the performance at least 2 times here! And it works with bulk inserts too:

I created a base table (tmp_base) with 1M rows and used it to insert the data. As a result, the fast ingest table is 4.2 times faster than the heap table.

I expect memoptimized tables (both read and write) to be developed much further and become more popular. Just be careful using fast ingest, because it has significant cons. Again: you might lose data even if you committed it!

wish you all healthy days.

Tables with Memoptimize for Read (fast lookup)

Hi,

I would like to share my initial thoughts about memoptimized for read (fast lookup) tables in this post. Fast lookup tables came with 18c and they provide faster access to a table when you query the primary key with an equality condition. This is basically a single table hash cluster. Instead of using the primary key index, Oracle creates a hash index, and searching a hash index should be much faster than a unique index scan.

first, create the test environment.

To use a fast lookup table you must set the memoptimize_pool_size parameter to a value higher than 0. This is a static memory component in the SGA used to store the hash index of the table. This memory component is not resizable, which means it won't grow or shrink with automatic shared memory management.
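It is a static parameter, so it needs a restart to take effect (256M here is just an example size):

alter system set memoptimize_pool_size = 256M scope=spfile;
shutdown immediate
startup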

create a table for fast lookup:
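A minimal example (tmp_optimize is the table name used in the tests below; the column list is an assumption):

create table tmp_optimize (
  id          number primary key,
  object_name varchar2(128) )
segment creation immediate
memoptimize for read;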

Segment creation must be immediate, otherwise you will get an error! Your table must also have a primary key; since the whole structure is based on the PK, it is logical to have one. I inserted roughly 146,000 rows into the table.

EDIT: I also tried these tests with 14 million rows, and the results were similar.

This is not enough! You must use the dbms_memoptimize package to make Oracle treat this table as a fast lookup table.
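The call is roughly this (the schema name is an example):

exec dbms_memoptimize.populate(schema_name => 'MUSTAFA', table_name => 'TMP_OPTIMIZE');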

let’s see what is happening now:

I skipped the first execution statistics! You will see a small number of consistent gets on the first execution, but consecutive executions look like the one above.

As you see, there are new execution plan steps, "TABLE ACCESS BY INDEX ROWID READ OPTIM" and "INDEX UNIQUE SCAN READ OPTIM"; these steps tell us that a hash index is used to retrieve the data, as in a key-value store. There are no "consistent gets". This is amazing: almost nothing is read for this query, and that should make it much faster. Is it? Well, I couldn't see it, at least not as much as I expected. Actually, I found different blog posts mentioning fast lookup tables, but none of them had made a performance test.

So, I wanted to test the speed, but there is a problem. For now there are many limitations on fast lookup tables, and one of them is that you cannot use them via PL/SQL. This means I can't create a simple PL/SQL procedure that runs a query 100,000 times and compare timings. To do the test, I wrote a small Python script.

Warning: I am no Python developer; I just know enough Python to solve my small problems. The code I share below probably has many bugs, but I only use it for this test.

here is the python code I use:
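The original script is not reproduced here; below is a minimal sketch of the same idea (9 threads, 100,000 executions each, ids cycling between 1 and 10000; the connection details and the cx_Oracle driver are assumptions):

import threading
import time
import cx_Oracle  # python-oracledb would work similarly

DSN = "localhost/pdb1"             # example connection details
USER, PASSWORD = "mustafa", "pass"
QUERY = "select * from tmp_optimize where id = :myid"

def worker(thread_no, executions=100000):
    con = cx_Oracle.connect(user=USER, password=PASSWORD, dsn=DSN)
    cur = con.cursor()
    start = time.time()
    for i in range(executions):
        cur.execute(QUERY, myid=(i % 10000) + 1)  # ids cycle between 1 and 10000
        cur.fetchall()
    print("thread %d: %.1f seconds" % (thread_no, time.time() - start))
    con.close()

threads = [threading.Thread(target=worker, args=(n,)) for n in range(9)]
for t in threads:
    t.start()
for t in threads:
    t.join()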

After changing the code many times, I ended up with this final version. It creates 9 threads, and every thread runs the "select * from tmp_optimize where id = :myid" query 100,000 times, with the myid value cycling between 1 and 10000. Each thread prints the total execution time for its 100,000 runs of the query. Here is the result:

The timing varies between 16.0 and 16.1 seconds for the fast lookup table. How about a normal table with a normal primary key?

I used the same Python code, just changing the table name from "tmp_optimize" to "tmp_not_optimize", and the result is like this:

It is between 17.1 and 17.2 seconds; there is about a 6% performance gain (give or take).

I am not sure whether 6% is a good gain or not, because to use fast lookup tables actively we need to set aside a serious amount of memory. Also, there are a few limitations: they cannot be used with partitioned or compressed tables, and cannot be used in PL/SQL (not yet, anyway). Only equality conditions on the primary key can take advantage of fast lookup.

During my tests I didn't check wait events. I expect fewer latch waits on the fast lookup table, since no consistent gets occur. Maybe there is something I missed, and I will do more tests on these tables (I haven't even tested "memoptimize for write" (fast ingest) yet).

wish you all healthy days.