23c is Finally Here!

Hello everyone,

Well, it is just a developer release, so don’t try to install it on your environments 🙂 My Twitter feed is full of 23c tweets these days. I downloaded it immediately and tested the feature I have wanted most for the last 4-5 years: “Schema Level Privileges”. I am so excited about it.

Preliminary Information

If you are a DBA or an experienced developer, you already know about “privileges”. As a reminder: if a user wants to create a table in their own schema, the user must have the “CREATE TABLE” privilege, and that is fine. If a user needs to create a table in another user’s schema, the user must have the “CREATE ANY TABLE” privilege, and with this privilege the user can create tables in every other schema too.

A simple example: your developers need to create/drop/alter tables/packages/indexes in application schemas. My problem starts at that point! Let’s say we have different applications and they have their own schemas (APP_USER1 and APP_USER2). These two application schemas are isolated in every aspect: they don’t use each other’s objects, don’t connect to each other’s schema, and even their development teams are different.

I think you see where this is going. A developer (let’s call him DEVELOPER1) in the APP_USER1 development team must have the “CREATE ANY TABLE” or “SELECT ANY TABLE” privilege to create or select tables in the APP_USER1 schema, but these ANY privileges are too strong. When I grant those privileges, DEVELOPER1 is able to create and select ANY table in any schema in the database (except the SYS schema). So DEVELOPER1 should only be creating/dropping/selecting objects in APP_USER1, but now he can do this in the APP_USER2 schema too (and in all other schemas).

For me that is a big security issue. A developer can access highly sensitive data even if he/she shouldn’t! Until 23c 🙂 Before 23c you could only achieve this with Oracle Database Vault, which is a licensed option (extra cost) for Enterprise Edition.

I posted this idea around 4 years ago: https://forums.oracle.com/ords/apexds/post/grant-all-on-a-specific-schema-instead-of-any-privileges-0577 and finally it has arrived.

In 23c we now have SCHEMA LEVEL PRIVILEGES, which allow granting a privilege on a specific schema.

Quick Demo

let’s create two application schemas (APP_USER1 and APP_USER2) and one developer user (DEVELOPER1).

this developer edition of 23c has a default FREEPDB1 pdb, so I connected to it (well, I set my container), created the users and then granted the privilege:
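Something along these lines (passwords and quotas are illustrative):

alter session set container = FREEPDB1;

create user app_user1 identified by app_user1 quota unlimited on users;
create user app_user2 identified by app_user2 quota unlimited on users;
create user developer1 identified by developer1;

grant create session to developer1;
grant select any table on schema app_user1 to developer1;
grant create any table on schema app_user1 to developer1;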

as you see, we can now use the ON SCHEMA <schema name> structure. One thing you should be careful about: the privilege is not “CREATE TABLE”, it is still CREATE ANY TABLE, since the user needs to create a table in another schema. Check:
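A quick check could look like this (connect string is illustrative; the ORA-01031 for APP_USER2 is the expected behavior):

connect developer1/developer1@//localhost/FREEPDB1

create table app_user1.t1 (c1 number);
-- Table created.

create table app_user2.t1 (c1 number);
-- ORA-01031: insufficient privileges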

here it is. My developer can access only the schemas he/she has been granted privileges on.

There are so many new features (as always) in the new version, but this one is something I have waited years for. I haven’t even finished learning everything in 19c and 21c; Oracle is faster than me 🙂 Hope you enjoy it.

Wish you all healthy, good days.

Not Deploying is also Important!

Hello there,

I mentioned that I would write about a nasty ORA-600 error, and here it is. One day I woke up and one of my databases had started to raise ORA-600 errors:

ORA-00600: internal error code, arguments: [ktfacht1-0], [], [], [], [], [], [], [], [], [], [], []

there were also some “snapshot too old” errors, but don’t confuse them with manual or insufficient undo etc. The application on this database uses Flashback Data Archive, and some of the tables are in FBA (short for FlashBack [data] Archive). Sometimes developers or app users try to query a point even before the flashback retention time and they get a snapshot too old error, so we were used to it. My first response was “naah, probably nothing”. Well, it wasn’t 🙂

Suddenly some developers contacted me about getting snapshot errors on flashback archive tables, and they were getting the error for any point in time, even just 5 minutes ago (such a short range doesn’t even need flashback archive; standard undo retention is enough for that). What was happening? For some tables in flashback data archive we started to get snapshot too old errors, and then the number of affected tables started to increase: first 1, then 4, then 6…

When I checked the trace files for those ORA-600 errors, I found statements like this:

(I deleted actual column list, owner and table_name)

you might be familiar with flashback archive, but if you are not, check this post: http://mustafakalayci.me/2019/03/02/goodbye-log-triggers-welcome-flashback-data-archive/

Preliminary Information

Flashback archive has some tables in the SYS schema (mostly storing context data), and when you add a table into flashback archive, Oracle creates 3 tables in the actual table’s schema:

SYS_FBA_DDL_COLMAP_nnnnn : column mapping table; when you add/remove/modify a column, a new row is inserted here.
SYS_FBA_HIST_nnnnn : the actual table’s historical data.
SYS_FBA_TCRV_nnnnn : historical transaction data.

“nnnnn” is the OBJECT_ID in dba_objects for that table.
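So, to locate the companion tables for a given table, a query like this does the trick (owner and table name are placeholders):

select o.object_id,
       'SYS_FBA_DDL_COLMAP_' || o.object_id as colmap_table,
       'SYS_FBA_HIST_'       || o.object_id as hist_table,
       'SYS_FBA_TCRV_'       || o.object_id as tcrv_table
from   dba_objects o
where  o.owner       = 'APP_OWNER'   -- illustrative owner
and    o.object_name = 'MY_TABLE'    -- illustrative table name
and    o.object_type = 'TABLE';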

when you add a table into flashback archive, Oracle does not create those SYS_FBA tables immediately! They are created after some transactions on the table, whenever flashback data actually needs to be written into them. (That is, I think, a problem by the way; they should be created immediately.)

End of Preliminary Information

When I checked some important tables’ data in their SYS_FBA_HIST_nnnnn tables, I saw that new data was coming in, but when I ran a flashback query I got ORA-00600. When you search for “ktfacht1-0” you get some docs about “virtual columns”, which were not related to our case; there were no virtual columns (or function-based indexes etc.).

Of course I created an SR with severity 1, and as you can guess this was happening only in the PROD environment! Murphy’s laws. After around 2 weeks (yes, it was a little bit long), uploading massive amounts of log/trace files to Oracle support and talking to many different support engineers, they offered a Zoom call. During the call we checked the system and trace files again, and finally they told me that only one table had a problem, and the problem was an empty SYS_FBA_DDL_COLMAP table!

When a table is added to FBA and SYS_FBA_DDL_COLMAP is created, the table’s current columns are written into the colmap table. Somehow that data didn’t exist for this table. Because of that, Oracle could not write the other flashback data (like old row versions and transaction information) into the SYS_FBA tables, and as a result Oracle blocked some rollback segments which also contained other tables’ data (the other tables we could not run flashback queries on).

Support told me that removing this table from flashback archive would solve the problem. Since this table was newly added into FBA and there was no historical data yet, I talked to the dev team, removed the table from FBA, and voilà! Everything went back to normal. That is it. Oracle support couldn’t tell why this happened (and they couldn’t, because actually we did it 🙂 ).
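Removing a table from FBA is a one-liner, by the way (names are illustrative):

alter table app_owner.param_table no flashback archive;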

After talking to support and solving the problem, I started to dig into why it happened. During my checks, I realized that there were some SYS_FBA tables that were not related to any actual table! Empty SYS_FBA_HIST_nnnnn tables were just sitting there. Then I started to check my audit records for DDL operations on the problematic table, and I found it! It was added to FBA on Thursday (the day is important), then the weekend came, and on Wednesday, BAM, we got the ORA-00600s.

after digging really deep I finally solved the mystery. As I said before, SYS_FBA tables are not created immediately; they are created after some transactions. This table is a parameter definition table and does not get many transactions. When we added this table into FBA on THURSDAY, there were no SYS_FBA tables yet. Here is, step by step, how Murphy’s laws were activated:

  1. The table was added into flashback archive on THURSDAY (on all databases: dev, test, acp and prod)!
  2. That weekend there was an “Application PRODUCTION Release”. The dev team uses a third-party tool to get the differences between two databases (ACP and PROD), generate scripts and run them in the PROD database. The release was completed.
  3. The ACP database was copied from PROD a few years ago, so some tables have the same OBJECT_ID, including the problematic table.
  4. On the ACP database some DML statements ran on that table, so the SYS_FBA tables were created on the ACP db.
  5. Since there was no filter in the third-party tool (that diffs the databases), it generated create scripts for all SYS_FBA tables that did not exist in PROD. The tool does not copy data, just structure, so the SYS_FBA tables for that table were created in PROD without any data (that is why SYS_FBA_DDL_COLMAP was empty).
  6. After the weekend, on Wednesday, a new row was inserted into the table. Oracle found that the SYS_FBA tables already existed (even though they were not real SYS_FBA tables), tried to use them, found the data missing, and that eventually caused the ORA-00600.

and that is also why there were SYS_FBA tables unrelated to any actual table (they were not in DBA_FLASHBACK_ARCHIVE_TABLES, or the OBJECT_ID at the end of their name (nnnnn) did not exist in DBA_OBJECTS). The tool had been copying them into the upper databases (dev => test => acp => prod).

After finding the root cause, we defined some filters in the tool so it would not pick up SYS_FBA (and some other Oracle-related) tables, and we removed all the unrelated SYS_FBA tables in the databases. By the way, real SYS_FBA tables cannot be dropped with a DROP TABLE command by default, but as you can guess, these ones could be dropped because they were not actual SYS_FBA tables; they were just user-defined tables.

Thanks for reading if you made it to the bottom 🙂 Deployment is one of the most important parts of application development, so choose your methods wisely.

Wish you good, healthy days.

Unique Columns For V$Sql

Hi,

Until yesterday I was saying that SQL_ID and CHILD_NUMBER are enough to identify a SQL child cursor in V$SQL; today I know I was wrong.

Probably many times you have needed the SQL statement a session is running. To do that you query V$SESSION, find the session, and then use the SQL_ID column to reach V$SQL. If you write a query like this:
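(a sketch; the WHERE clause is illustrative)

select s.sid, s.serial#, s.username, q.sql_text
from   v$session s
join   v$sql    q on q.sql_id = s.sql_id
where  s.username is not null;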

you can add any filter you want here, but you will immediately realize that some session rows are multiplied. Why? Because you didn’t use “CHILD_NUMBER” while joining to V$SQL. V$SQL is not the main data source for unique SQL statements in your database; V$SQLAREA is. So instead of V$SQL you can use V$SQLAREA in this query, but if you need specific information about the exact child cursor the session is currently running, then you must use V$SQL.

Note: I will not go deep into parent-child cursors here. Basically, every SQL statement has child cursors. They are all the same SQL text, but for some reason they have different properties, and your session runs a specific “child cursor”.

so your query should be like this:
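(the same sketch, now joining on the child number too)

select s.sid, s.serial#, s.username, q.child_number, q.sql_text
from   v$session s
join   v$sql    q on  q.sql_id       = s.sql_id
                  and q.child_number = s.sql_child_number
where  s.username is not null;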

and now you have reached the exact SQL child cursor that the session is running, right? Well, I thought so too, but I found out it is not! On one of my clients’ systems, I discovered that this query returns 2 rows from V$SQL, which means for the same SQL_ID and CHILD_NUMBER there are 2 rows in V$SQL. I couldn’t find anything in the docs or on Google (maybe I didn’t search enough) and started to dig into this SQL statement.

here is the necessary information about this sql:

almost everything is the same except ADDRESS and CHILD_ADDRESS. CHILD_ADDRESS is the memory address where the cursor resides, so obviously it must be different. The only meaningful difference is the ADDRESS column. The ADDRESS column in V$SQL is the address of the PARENT cursor, which is in the V$SQLAREA view. So let’s check that view:

only one of them exists there. So some of those child cursors in V$SQL are not (I am not sure of the term) “valid”. They might have been used before (and they were, in my example), but now they are remnants of a past SQL statement. So which child cursor in V$SQL is the one the session is currently running? We need to use the ADDRESS column too. The SQL_ADDRESS column in V$SESSION is the parent address of the SQL statement, so I should add it to my query as well:
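(final form of the sketch)

select s.sid, s.serial#, s.username, q.child_number, q.sql_text
from   v$session s
join   v$sql    q on  q.sql_id       = s.sql_id
                  and q.child_number = s.sql_child_number
                  and q.address      = s.sql_address
where  s.username is not null;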

thanks for reading. comments are welcome. wish you healthy happy days.

How to Check Empty Space in a Securefile LOB

Hi,

PS: please check the last SQL statement in this post for a shorter and faster solution. The first SQL is for showing all information about the LOB.

Yesterday, while checking segment space usage in a client’s database, I saw that some of the LOB segments were absurdly big. So, I wanted to check what is in them and how much free space I could gain.

To do that, I cannot just use the DBMS_LOB.GETLENGTH function, because a CLOB takes much more space than the original data due to character sets (especially if you use the AL32UTF8 charset), so I want to check the currently empty space in the LOB segment itself. If you need that kind of data, DBMS_SPACE is the solution. DBMS_SPACE has a SPACE_USAGE procedure which checks a segment and returns space information, but starting with 12c SECUREFILE is the default LOB type, and for securefiles you cannot use the same SPACE_USAGE overload as the one you use for tables.

SPACE_USAGE has overloaded versions, and the second and third overloads (in order from top to bottom in the DBMS_SPACE package) are for SECUREFILE LOB segments.

here is a sample script I wrote to check the currently empty space in securefile LOBs:
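A condensed sketch of the approach (the real script parses the returned string into separate columns with regexp; the filter and rounding are illustrative, and partitioned LOBs would also need the partition_name parameter):

with
  function lob_space(p_owner varchar2, p_segment varchar2) return varchar2 is
    l_seg_blk number; l_seg_byt number; l_used_blk number; l_used_byt number;
    l_exp_blk number; l_exp_byt number; l_unexp_blk number; l_unexp_byt number;
  begin
    -- securefile overload; segment_type must be 'LOB', not 'LOB SEGMENT'
    dbms_space.space_usage(p_owner, p_segment, 'LOB',
                           l_seg_blk, l_seg_byt, l_used_blk, l_used_byt,
                           l_exp_blk, l_exp_byt, l_unexp_blk, l_unexp_byt);
    return round(l_seg_byt/1024/1024, 2)  || '#' ||
           round(l_used_byt/1024/1024, 2) || '#' ||
           round(l_exp_byt/1024/1024, 2);
  end;
select l.owner, l.table_name, l.column_name, l.segment_name,
       lob_space(l.owner, l.segment_name) as size_used_expired_mb
from   dba_lobs l
where  l.securefile = 'YES'
/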

Caution: I check all LOBs in the database; you might not want to do that, so just modify the SQL and add the necessary filters.

The first WITH FUNCTION block calls the dbms_space.space_usage procedure and returns the data as a string. The second WITH clause (get_info) calls the function and gets the LOB information. By the way, the SEGMENT_TYPE of a LOB segment is “LOB SEGMENT”, but dbms_space.space_usage does not accept that as segment_type; you must pass “LOB” instead. The result is then parsed into columns:

SEGMENT_SIZE_MB : total size of the LOB segment in megabytes.
USED_SIZE_MB : actual used size in megabytes.
EXPIRED_SIZE_MB : expired (versioning) data size in megabytes.

to calculate the empty space, just subtract the used size from the segment size; this is approximately the empty size in your LOB segment. If you have a partitioned table, you will get a result for each partition (like the mustafa.tmp table in the example). I hope this script helps you in your job.

EDIT:

I forgot to mention why LOB segments have empty space in the first place. The biggest reason is delete operations, of course. If you delete a big portion of the table (or LOB segment), the deleted space is considered empty space. So, if you delete from a table, should you shrink the LOB segment or table? NO! That space will be used by the table again. If deleting is a standard operation for you, the empty space in the table/LOB segment will be reused. But let’s say you inserted many rows by mistake or because of a bug in your code and then deleted them: if you have enough disk space for the database, there is nothing to worry about (mostly); the table will get many inserts over time and will eventually use that space. If too much space is reserved in the table/LOB segment, though, and maybe you have disk space issues, then you can reclaim it (by shrinking or moving the table/LOB segment). So, this is not a regular task.

EDIT 2:

the SQL above aims to show all information about the LOB: used space, total space, expired space etc. Probably you will just need to see the empty space in the LOB, so you can use the SQL statement below instead. It works faster because there is no regexp usage or parsing. I also subtracted the “unexpired bytes” from the total size, since that space is still in use.
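(again a sketch, under the same assumptions as the first script)

with
  function lob_empty_mb(p_owner varchar2, p_segment varchar2) return number is
    l_seg_blk number; l_seg_byt number; l_used_blk number; l_used_byt number;
    l_exp_blk number; l_exp_byt number; l_unexp_blk number; l_unexp_byt number;
  begin
    dbms_space.space_usage(p_owner, p_segment, 'LOB',
                           l_seg_blk, l_seg_byt, l_used_blk, l_used_byt,
                           l_exp_blk, l_exp_byt, l_unexp_blk, l_unexp_byt);
    -- empty = total - used - unexpired (unexpired space is still in use)
    return round((l_seg_byt - l_used_byt - l_unexp_byt)/1024/1024, 2);
  end;
select l.owner, l.table_name, l.segment_name,
       lob_empty_mb(l.owner, l.segment_name) as empty_mb
from   dba_lobs l
where  l.securefile = 'YES'
/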

 

 

thanks for reading; your comments are always welcome. wish you healthy days.

catcon.pl cheat sheet

Hello,

Since Oracle now permanently uses the container database architecture, you should be familiar with the “catcon.pl” perl file. Let’s say you have a container database with 10 different PDBs and you need to run a script file on all of those pdbs (maybe even in root and seed too); what you need is catcon.pl.

catcon.pl will automatically execute your script file on all or on specified containers. Since it is a perl file, it must be called with the perl binary (perl on Linux, perl.exe on Windows), which you can find under the $ORACLE_HOME/perl/bin directory; catcon.pl itself is under $ORACLE_HOME/rdbms/admin .

so, you have a sql file called myfile.sql under the /home/oracle/scripts directory. let’s see how we can execute it in all pdbs:
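$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d /home/oracle/scripts -b MyScript myfile.sql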

that is it. The myfile.sql file will be run in all containers (including root and seed) with the command above. Let’s dissect the command:

$ORACLE_HOME/perl/bin/perl => perl binary to run catcon.pl
$ORACLE_HOME/rdbms/admin/catcon.pl => catcon.pl path
-d /home/oracle/scripts => directory where the script is located
-b MyScript => prefix for the log files so you can separate them from other scripts’ logs
myfile.sql => actual script name

the actual script file must be the last parameter. It is that easy. In this example ORACLE_HOME and ORACLE_SID should be set before you run the command; otherwise you cannot connect to the database. This is a local connection, and by default it is a SYSDBA connection, so whatever you run will run under the SYS user.

To add some more functionality:

  1. -c : list of containers where you want to execute the script:
    ….catcon.pl -c ‘SALES PDB1 TEST’ => execute the script only in the SALES, PDB1 and TEST pdbs.
  2. -C : list of containers where you don’t want to execute the script.
  3. -l : log directory.
  4. -s : spool the output of the script files.
  5. -S : (capital S) don’t execute the script in root and seed. (Edit: this option also doesn’t set the “_oracle_script” parameter to true, so if you are creating a user, it won’t be an Oracle-maintained user.)
  6. -u : specify username/password.
  7. -z : ez-connect string.
  8. -n : number of workers. If you want to execute a script on 100 pdbs, you can increase the number of workers to run the script simultaneously on different pdbs. Default is 1.
  9. -e : set echo on.
  10. -r : run the scripts in reverse order, first in pdbs then in cdb root. Useful while removing an Oracle component like Multimedia (it is desupported).

Can we use catcon.pl to run a script on a remote server? Yes, with a combination of -u and -z.

Say we want to run the myfile.sql script under the /home/oracle/scripts directory on the CDBX database (which is on the mydbserver.mydomain server), only in the specified pdbs (PDB1…PDB8), using the mustafa user, with 2 worker processes (so the script runs on 2 pdbs at a time), writing log files under /home/oracle/logs:
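A sketch of that command could look like this (the listener port and password are illustrative, and the full PDB list is abbreviated):

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
  -u mustafa/mypassword \
  -z mydbserver.mydomain:1521/CDBX \
  -c 'PDB1 PDB2 ... PDB8' \
  -n 2 \
  -d /home/oracle/scripts \
  -l /home/oracle/logs \
  -b MyScript myfile.sql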

catcon has more parameters, but I just wrote about the ones I use most.

thanks for reading.

edit: I added the -r and -e options.

How to Convert LONG to Clob PART 2

Hi,

I just want to write a small post today. Some time ago, I shared a way to convert LONG to CLOB directly in a query: https://mustafakalayci.me/2020/04/06/how-to-convert-a-long-column-to-clob-in-a-query/ Even though that method works, it is really slow, and anything slow annoys me!

So, I want to share another method which is much, much faster than the Sys_dburigen function, but THIS IS UNDOCUMENTED and UNSUPPORTED! It is used internally by Oracle, so it works fine, but you must be careful. Since no one uses LONG columns in their applications (right? you don’t! please don’t!), this kind of code is mostly handy for admins, because the data dictionary still has some LONG columns for backward compatibility.

What we are going to use is a LIBRARY called UTL_XML_LIB. I almost never create objects under the SYS user, so I will create this code under a privileged admin user. First we must grant privileges on the UTL_XML_LIB library and then create an EXTERNAL procedure in plsql:
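(run as SYS; MUSTAFA is my illustrative admin user)

grant execute on sys.utl_xml_lib to mustafa;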

now create the external procedure:
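A sketch of such a declaration is below. Be careful: the PARAMETERS clause must match the real C signature of kuxLong2Clob, so treat this as an assumption to verify (for example against how SYS itself uses the library) rather than a drop-in definition.

create or replace procedure my_long2clob(p_qry  in varchar2,
                                         p_rid  in rowid,
                                         p_clob in out clob)
as language c
   name "kuxLong2Clob"
   library sys.utl_xml_lib
   with context
   parameters (context,
               p_qry  string, p_qry indicator,   -- select statement
               p_rid  ocirowid,                  -- rowid of the target row
               p_clob ociloblocator, p_clob indicator);  -- out clob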

“kuxLong2Clob” is a C function, written by Oracle, which returns void (so basically it is a procedure). It takes a select statement for your LONG column plus a rowid, and returns the CLOB data as an out parameter. I created the my_long2clob procedure, but to be able to call it from a SQL statement, it must be a function, so:
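(a sketch of the wrapper; the parameter order follows the usage shown later in this post)

create or replace function f_my_long2clob(p_rowid  in rowid,
                                          p_table  in varchar2,
                                          p_column in varchar2) return clob
is
  l_clob clob;
begin
  dbms_lob.createtemporary(l_clob, true);
  -- build the select for the LONG column; the rowid is passed separately
  my_long2clob('select ' || p_column || ' from ' || p_table, p_rowid, l_clob);
  return l_clob;
end;
/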

to test it, I won’t create a new table with a LONG column; instead I will use one from the data dictionary (which is what I use this kind of code against). Let’s use the SYS.TRIGGER$ table, which is the base table of the DBA_TRIGGERS view; its ACTION# column is a LONG. Of course, we must first grant select on SYS.TRIGGER$ to the mustafa user.
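(illustrative test, continuing with the sketch above)

grant select on sys.trigger$ to mustafa;

select t.rowid,
       f_my_long2clob(t.rowid, 'SYS.TRIGGER$', 'ACTION#') as action_clob
from   sys.trigger$ t;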

I suppressed the output, but there it is: a really fast LONG to CLOB conversion in SQL.

as a CON, it only works against tables, not VIEWs! You cannot use it as “f_my_long2clob(rowid, ‘DBA_TRIGGERS’, ‘TRIGGER_BODY’)”. Since it requires a ROWID, you must refer to the underlying table and its rowid.

beyond that, if you want to get a LONG as VARCHAR2, you can similarly use the “kpdbLong2Varchar2” procedure in the DBMS_PDB_LIB library. Like UTL_XML_LIB, it is very fast; the only difference is that it returns VARCHAR2 and trims the overflowing part. Where is it in use? Check how DBA_VIEWS produces the TEXT_VC column data 😉

I hope this helps you fight the LONG columns in your code.

Wish you all healthy, happy days.

Tables with Memoptimize for Read (fast lookup)

Hi,

I would like to share my initial thoughts about memoptimized-for-read (fast lookup) tables in this post. Fast lookup tables came with 18c, and they provide faster access to a table if you query it by primary key with an equality condition. This is basically a single-table hash cluster: instead of using the primary key index, Oracle creates a hash index, and searching a hash index should be much faster than searching a unique index.

first, create the test environment.

to use a fast lookup table you must set the memoptimize_pool_size parameter to a value greater than 0. This is a static memory component in the SGA that stores the hash index of the table. It is not resizable, which means it won’t grow or shrink with automatic shared memory management.
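(illustrative sizing; a restart is needed since the pool is static)

alter system set memoptimize_pool_size = 512M scope=spfile;
-- restart the instance for the parameter to take effect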

create a table for fast lookup:
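(roughly the table I used; name and columns are illustrative)

create table tmp_optimize (
  id   number primary key,
  data varchar2(100)
) segment creation immediate
  memoptimize for read;

insert into tmp_optimize
select level, 'data ' || level from dual connect by level <= 146000;
commit;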

segment creation must be immediate; otherwise you will get an error! Your table must also have a primary key; since the whole structure is based on the PK, it is logical to have one. I inserted roughly 146,000 rows into the table.

EDIT: I also tried these tests with 14 million rows, but the results were similar.

this is not enough! you must use the dbms_memoptimize package to start using this table as a fast lookup table:
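(populating the hash index for my table; the schema name is illustrative)

begin
  dbms_memoptimize.populate(schema_name => 'MUSTAFA',
                            table_name  => 'TMP_OPTIMIZE');
end;
/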

let’s see what is happening now:
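(the kind of check I ran; output abbreviated)

set autotrace on
select * from tmp_optimize where id = 42;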

I skipped the first execution statistics! You will see a small number of consistent gets on the first execution, but consecutive executions will look like the above.

as you see, there are new execution plan steps: “TABLE ACCESS BY INDEX ROWID READ OPTIM” and “INDEX UNIQUE SCAN READ OPTIM”. These steps tell us that a hash index is used to retrieve the data, as in key-value pairs, and there are no “consistent gets”. This is amazing: almost nothing is read for this query, and that should make it so much faster. Is it? Well, I couldn’t confirm it, not as much as I expected at least. Actually, I found different blog posts mentioning fast lookup tables, but none of them had done a performance test.

So, I wanted to test the speed, but there is a problem: for now, there are many limitations on fast lookup tables, and one of them is that you cannot use them via plsql. This means I can’t create a simple plsql procedure that runs a query 100,000 times and compare timings. To do the test, I wrote a small python script.

Warning: I am no python developer; I just know enough python to solve my small problems. The code I share below probably has many bugs, but I used it only for this test.

here is the python code I use:
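(a reconstruction of the idea; the connect string and credentials are placeholders, and cx_Oracle is my assumption for the driver)

import threading, time
import cx_Oracle

DSN = "localhost/pdb1"  # placeholder connect string

def worker():
    con = cx_Oracle.connect("mustafa", "password", DSN)
    cur = con.cursor()
    start = time.time()
    for i in range(100000):
        # myid cycles between 1 and 10000
        cur.execute("select * from tmp_optimize where id = :myid",
                    myid=(i % 10000) + 1)
        cur.fetchall()
    print("total seconds:", round(time.time() - start, 1))
    con.close()

threads = [threading.Thread(target=worker) for _ in range(9)]
for t in threads:
    t.start()
for t in threads:
    t.join()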

after changing the code many times, I settled on this final version. It creates 9 threads, and every thread runs the “select * from tmp_optimize where id = :myid” query 100,000 times, with the myid value cycling between 1 and 10000 in the loop. Each thread prints the total execution time of its 100,000 runs. Here is the result:

the timing varies between 16.0 and 16.1 seconds for the fast lookup table. How about a normal table with a normal primary key?

I used the same python code, just changing the table name from “tmp_optimize” to “tmp_not_optimize”, and the result looks like this:

it is between 17.1 and 17.2 seconds, so there is a 6% performance gain (give or take).

I am not sure whether 6% is a good gain or not, because to use fast lookup tables actively we need to set aside a serious amount of memory. Also, there are a few limitations: fast lookup cannot be used with partitioned or compressed tables, and cannot be used in plsql (not yet, anyway). Only equality conditions on primary keys can take advantage of fast lookup.

During my tests I didn’t check wait events. I expect fewer latch waits on a fast lookup table since no consistent gets occur. Maybe there is something I missed; I will do more tests on these tables (I haven’t even tested “memoptimize for write” (fast ingest) yet).

wish you all healthy days.

Blockchain vs Immutable tables

Hi,

as you know, 21c is here for Linux, and some of the new features are (as always) pretty exciting. Two of them are Blockchain and Immutable tables. Let’s explain and test them.

first of all, these are new features of 21c, but they are backported to 19c: as of 19.10 you are able to use BLOCKCHAIN tables, and as of 19.11 IMMUTABLE tables. I will run my tests on 19.12.

if you are familiar with Bitcoin or any other altcoin, then you probably know what a blockchain is. Basically, it is a distributed ledger. Every transaction is a “block”, and each block contains information about the transaction (sender, receiver, amount etc.) and also a hash value of the previous transaction (computed with a hash function). Every new transaction is added to the end of the “chain”, and that creates the chain. This ensures that the chain is unbreakable: you cannot tamper with old records because that would break the chain (hash values won’t match the next blocks).

So, Oracle uses this technology in a new table type called a Blockchain table. Every row inserted into this table is stored with some additional information, like the current timestamp and, most importantly, the previous row’s hash value! Hash operations are done at COMMIT time, not before. That way Oracle guarantees that every row is part of the chain and you cannot tamper with those rows; if you do, the chain is broken and cannot be verified. Also, Oracle won’t let you delete or update those rows with any DML statement like delete, update, merge etc.

Immutable tables are pretty similar to Blockchain tables: you cannot update them with any DML either. The difference is that rows in an Immutable table are not linked to each other; a row does not store any hash value from the previous row.

both of them are designed to create “unchangeable”, insert-only tables. If you think about it, this is a real necessity, especially for auditing stuff. For example, if you store log information for credit card operations, you don’t want anyone to change it, and during an investigation you want to be sure those rows are intact and unchanged.

of course, you usually need to store table data only for a specific amount of time, like 5 years. No one can store data forever, so at some point you must be able to delete old rows, and this is something you define while creating these tables. Also, even if I am not able to delete the rows, what would happen if someone dropped the table? This is of course forbidden too, and again you define a drop policy while creating the table. Let’s start with a blockchain table:
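(a sketch matching the clauses discussed below; table and column names are illustrative)

create blockchain table bc_log (
  id       number,
  log_data varchar2(100)
)
no drop until 31 days idle
no delete until 30 days after insert
hashing using "SHA2_512" version "v1";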

you need the BLOCKCHAIN keyword (or IMMUTABLE for immutable tables) in the create statement.

“NO DROP UNTIL 31 DAYS IDLE” defines how you can drop this table: if there is no activity on the table for 31 days, then and only then can you drop it. By the way, right after creating the table, if you haven’t inserted any rows yet, you can drop it immediately. You can also use just NO DROP to disable dropping this table entirely (the only way left is dropping the whole database).

“NO DELETE until 30 days after insert” defines how you can delete “old” rows; in my example a row can be deleted 30 days after its insertion. Again, you can use NO DELETE LOCKED to disable deleting from the table.

the hashing part consists of the default values, and you have to write the clause down to define the blockchain structure.

let’s insert a row and then try to delete or update it:
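(sketch; the exact error text can vary by release)

insert into bc_log values (1, 'first row');
commit;

delete from bc_log where id = 1;
-- ORA-05715: operation not allowed on the blockchain table

update bc_log set log_data = 'x' where id = 1;
-- ORA-05715: operation not allowed on the blockchain table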

as you can see, modifying is forbidden. You cannot even use a DELETE statement on rows that have passed the retention time you defined (NO DELETE UNTIL …); you must use the DBMS_BLOCKCHAIN_TABLE.DELETE_EXPIRED_ROWS procedure to delete rows beyond the retention.
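(a sketch of that call; the parameter names are from my reading of the package and worth verifying)

declare
  l_deleted number;
begin
  dbms_blockchain_table.delete_expired_rows(
      schema_name            => 'MUSTAFA',
      table_name             => 'BC_LOG',
      before_timestamp       => null,   -- null: all rows beyond retention
      number_of_rows_deleted => l_deleted);
  dbms_output.put_line(l_deleted || ' rows deleted');
end;
/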

This is the BLOCKCHAIN table. Of course, there are many more properties, like user signatures etc., but I will leave it here for now. For IMMUTABLE tables, almost everything is the same:
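(the same sketch, immutable version)

create immutable table im_log (
  id       number,
  log_data varchar2(100)
)
no drop until 31 days idle
no delete until 30 days after insert;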

just change the “blockchain” keyword to “immutable” and remove the hashing clause. Pretty straightforward. As expected, you cannot delete or update rows; if you want to delete old rows, you must use the DBMS_IMMUTABLE_TABLE.DELETE_EXPIRED_ROWS procedure.

A common feature of both tables is that they cannot be modified by any database user, not even SYS or db vault admins; no one in the database can change those rows. The basic difference between the two is that a Blockchain table cryptographically links each new row to the last one, while an Immutable table has no such option. What does this provide? Immutable tables cannot be modified via the Oracle Database software, but how about bypassing the DB? If someone modifies a datafile using operating system tools, your table data will be changed and you cannot be aware of it. Blockchain tables, on the other hand, have a verify function: since every row is linked to the previous one by a hash value, if you modify the data with any tool (like OS tools), verification will fail, because when a row changes, its hash value changes, and that hash value is stored in the next row.
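(the verification call, sketched; again, verify the parameter names against the package)

declare
  l_verified number;
begin
  dbms_blockchain_table.verify_rows(
      schema_name             => 'MUSTAFA',
      table_name              => 'BC_LOG',
      number_of_rows_verified => l_verified);
  dbms_output.put_line(l_verified || ' rows verified');
end;
/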

Then why do immutable tables exist at all, if blockchain tables are more secure? You might guess: creating hash values and storing them in the next row is extra work, which means performance overhead.

Let’s make a test (with a flaw on purpose):
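(a sketch of the comparison, condensed to the bulk-insert part; deliberately no commit inside, which is the flaw)

declare
  l_start number;
begin
  l_start := dbms_utility.get_time;
  insert into bc_log select level, 'x' from dual connect by level <= 100000;
  dbms_output.put_line('blockchain bulk : ' ||
                       (dbms_utility.get_time - l_start)/100 || ' sec');

  l_start := dbms_utility.get_time;
  insert into im_log select level, 'x' from dual connect by level <= 100000;
  dbms_output.put_line('immutable bulk  : ' ||
                       (dbms_utility.get_time - l_start)/100 || ' sec');
end;
/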

I inserted rows with a bulk insert and row by row into both tables, and here are the results. Huh! The blockchain table is almost as fast as the immutable table? Actually, no! As I said earlier, hashing operations are done at COMMIT time, not before, in blockchain tables, and in my code there is no commit! That’s why we see the same performance. How about adding commits before printing the timings:

here it is! at worst, immutable tables are more than 6 times faster (row-by-row insert), and at best almost 40 times faster than blockchain tables. So, choose your table type carefully; if immutable tables meet the need, use them.

this was just a glance at the new table types. I will make more tests and come back with new posts.

wish you healthy days.

Edit: I didn’t mention the COMPATIBLE parameter; it should be at least 19.10 to create a blockchain table and 19.11 for an immutable table.

Pragma UDF & Deterministic

Hello everyone,

I wanted to write about something simple that can improve the performance of your sql statements. Unfortunately, many database developers use too many plsql functions in their SQL statements. I say too many because the more you learn about SQL, the less plsql you will use; SQL is a very capable language for processing data. Of course, we don’t design and develop a whole system all the time; we start to work somewhere, use their standards and run old code. Many of my clients have some basic plsql functions which do not interact with SQL, just doing basic calculations or conversions. These plsql functions are pure algorithms which could be run by SQL too. For example:
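(an illustrative version of such a function; the name and VAT rate are made up)

create or replace function f_add_vat(p_amount number) return number is
begin
  return p_amount * 1.18;  -- add 18% VAT
end;
/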

this is a very simple VAT-adding function. As you can see, this calculation could be done in SQL too, but let’s say someone wrote this code 20 years ago and now it is used everywhere in the database (maybe in the application too). So you might not be able to change all of it, but you can increase its performance with DETERMINISTIC or pragma UDF.

let’s make a small example first:
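(illustrative test table)

create table tmp_sales as
select level as id, mod(level, 1000) + 1 as amount
from   dual connect by level <= 800000;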

this will create a table with 800,000 rows. Now I will run a query against this table with and without the function:
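(the comparison, sketched)

select sum(f_add_vat(amount)) from tmp_sales;  -- with the plsql function
select sum(amount * 1.18)     from tmp_sales;  -- pure sql equivalent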

it takes almost 10 times longer to complete the query when you use the plsql function.

The reason is that whenever you use a plsql function inside a query (or vice versa), something called a “context switch” happens. You can think of it like this: SQL statements are run by the SQL engine and PLSQL statements by the PLSQL engine, and whenever one engine meets a statement of the other kind, it has to call the other engine. This is context switching, and you must avoid it as much as you can. But as I said, if the system you work on runs queries like this, you might not be able to change them all; you can at least increase their performance, and that is where pragma UDF and DETERMINISTIC come in.

DETERMINISTIC tells Oracle that this function will return the same result for the same parameters every time. If your plsql object does not interact with SQL objects (like tables) or time-dependent values (like sysdate), it is probably a good candidate for DETERMINISTIC. Also, if you want to index a user-defined plsql function, it must be deterministic. You can add the DETERMINISTIC clause to a function that interacts with SQL objects or sysdate etc., but that will only cause you headaches, so be careful about that.

Pragma UDF is a way to reduce the context switch overhead. It does not reduce the number of context switches, but it makes each one smaller and faster; that way, if you use a plsql function in SQL, the context switch will be lighter.

Let’s make a small test now; my example is a very good candidate for both DETERMINISTIC and pragma UDF:
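(the two extra versions, sketched; names are again made up)

create or replace function f_add_vat_det(p_amount number) return number
  deterministic is
begin
  return p_amount * 1.18;
end;
/

create or replace function f_add_vat_det_udf(p_amount number) return number
  deterministic is
  pragma udf;
begin
  return p_amount * 1.18;
end;
/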

I created 3 versions of my VAT function: the first is the base one, the second has a DETERMINISTIC clause, and the third is both deterministic and pragma UDF. They all do the same work. Let’s compare their performance:

I ran 4 versions: the basic function, the deterministic function, deterministic & pragma UDF, and finally, of course, the pure SQL statement. As you can see, the BASE version is the worst and takes 1.15 seconds on average to complete. Just adding a DETERMINISTIC clause decreases the time spent almost 5 times, adding pragma UDF cuts another 30-35% on top of deterministic, and pure SQL is of course the fastest, 10 times faster than the BASE version. So, if you can use pure SQL, use it! If that is not possible, at least check for DETERMINISTIC candidates and use pragma UDF.

Of course, there is also something called “subquery caching”:
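(the scalar-subquery trick, sketched on the same example)

select sum((select f_add_vat(amount) from dual)) from tmp_sales;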

as you see, subquery caching might be even faster than pragma UDF and DETERMINISTIC, but it depends, and this solution also requires changing the SQL statement.

So you can improve performance with just small changes. Find a plsql function that is very simple and used everywhere in your database (I am pretty sure there is one) and try the suggestions above. By the way, 19c has something even stronger called “SQL Macro”, which I mentioned a bit earlier:

http://mustafakalayci.me/2020/10/28/sql-macro-is-finally-here/

http://mustafakalayci.me/2020/10/30/sql-macro-part2-column_value-nvlparam-column_value/

I mostly covered table SQL macros there, but you can use SQL macros to generate parts of a SQL statement too.

I hope this helps you tune your sql & plsql statements.

wish you all healthy days.

PLSQL and Flashback Data Archive Problems.

Hi,

I have written about flashback data archive before; if you haven’t read those, please check these posts first:

http://mustafakalayci.me/2019/03/02/goodbye-log-triggers-welcome-flashback-data-archive/

http://mustafakalayci.me/2020/09/10/flashback-data-archive-problems/

I’ve already mentioned some problems in my second post, but those are not the kind of problems that will stop your work. The problem I am about to talk about is exactly that kind of problem.

From my perspective, it seems that flashback is designed to be used from SQL: track the history of your tables (even ddl changes) etc. But if you start to use flashback in PLSQL, you will hit some walls!

PLSQL is not a very dynamic language yet. Please don’t get me wrong: I am an Oracle Database Developer and Administrator, I am not interested in any other database, and I use SQL and PLSQL all the time, but there are some limits that sometimes force very awkward workarounds.

I have tested the code below on a 19.8 database.

Flashback Data Archive (FDA) is a very useful option: even if you add, modify or drop a column, you will still be able to see old versions of the table. And that is where my problem starts:

I created a table and added it to a flashback data archive. I added some sleep calls to wait for the necessary internal tables to be created. In the end, at 14:49:42 (my local time), I have a table which contains 2 rows, and a simple procedure called SP_X which takes a DATE parameter and uses this date in a flashback query.
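(a sketch of the setup; the FDA name, columns and data are illustrative, though the column names match what comes up later in the post)

create table tmp_orders (cust_id number, order_date date, amount number);
alter table tmp_orders flashback archive fba1;  -- fba1: illustrative FDA name

insert into tmp_orders values (1, sysdate, 100);
insert into tmp_orders values (2, sysdate, 200);
commit;

-- wait a bit here so the internal SYS_FBA tables get created

create or replace procedure sp_x(p_date in date) is
  cursor c is select * from tmp_orders as of timestamp p_date;
  r c%rowtype;
begin
  open c;
  loop
    fetch c into r;
    exit when c%notfound;
    dbms_output.put_line(r.cust_id);
  end loop;
  close c;
end;
/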

as you can see, this procedure works well. I didn’t demonstrate different data at different times, but that works too.

Let’s drop a column from this table:
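(continuing the sketch)

alter table tmp_orders drop column amount;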

and re-run the procedure:
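exec sp_x(sysdate)
-- now raises an error, even for sysdate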

as you see, even though I am asking for data as of sysdate, I got an error. The problem is that queries with previous dates cause the same error too, starting from now!

even dropping and recreating the procedure does not help. As far as I understand, Oracle cannot determine the exact definition of the query anymore. Adding a column does not cause this problem, but dropping a column does.

you might think of using a FOR loop to fetch the cursor, but you will hit the same error.

by the way, the queries we are trying to run work correctly in SQL:
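select * from tmp_orders as of timestamp sysdate;  -- works fine in SQL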

so, using flashback queries in plsql seems a little bit dangerous. Actually, one of my clients also gets an “ORA-01007: variable not in select list” error in plsql (while the SQL is perfectly fine). So be careful while using flashback in plsql.

EDIT:

during my tests, I renamed the cust_id column and started to get an “ORA-00932: inconsistent datatypes: expected NUMBER got DATE” error. After the cust_id column comes the order_date column, which made me think the column order might be changing, and I realized there is an even bigger problem here!

here is sample code:
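(sketch)

create table tmp_cols (c1 number, c2 number, c3 number);
alter table tmp_cols flashback archive fba1;

insert into tmp_cols values (1, 2, 3);
commit;

select * from tmp_cols;  -- columns come back as C1, C2, C3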

there are columns c1, c2 and c3, and they store matching numbers (1, 2, 3).

the column order is not changed, but when I query the same table with a flashback query:
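select * from tmp_cols as of timestamp sysdate;
-- in my case the columns came back in a different order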

the column order changed, even though I asked for data as of sysdate. In plsql this can cause problems, because you might not be aware of it and the wrong variables might end up storing your data:

as you see, the r_tmp.C20 variable is storing the data of the C3 column, because C3 comes first in the select list when the flashback query is used.

so, be very, very careful about flashback queries in plsql.

wish you healthy days.