Enable Unified Audit on Linux and Windows

Hi,

As you know, since 12c Oracle has offered Unified Auditing, which has many more capabilities than standard auditing: creating policies, audit conditions, top level auditing etc. By default Oracle uses "mixed" mode, which allows you to run unified auditing commands alongside standard auditing. You can check whether pure unified auditing is enabled:
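The check is a simple query against v$option:

select value from v$option where parameter = 'Unified Auditing';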

If the result is FALSE then you are not using pure unified auditing; you are using either mixed mode or standard auditing. Unfortunately, enabling unified auditing is not as easy as changing a parameter. On Linux OS:

shut down all Oracle components (database, listener etc.), then relink the Oracle binary with the unified auditing option.
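A sketch of the documented relink commands (run them as the Oracle software owner with ORACLE_HOME set):

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME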

Then start the database and listener.

On Windows OS:

shut down all Oracle components (database, listener etc.) BUT on Windows you must stop and start the Oracle services; shutting down and starting the database using sqlplus will not work. Open services.msc, find the Oracle database and listener services and stop them.

After stopping the services, rename the %ORACLE_HOME%/bin/orauniaud19.dll.dbl file to %ORACLE_HOME%/bin/orauniaud19.dll. The 19 in the name of the dll is your version; if you are using 12c then you must rename the orauniaud12.dll.dbl (or orauniaud12.dll.option) file.

Then start the Windows services. You can check the v$option view as I wrote above to see if unified auditing is enabled.

If you are not using unified audit policies by now, you should start using them. They will probably be the only audit feature in the future, and they are much easier to work with than standard auditing. You also have many more options to control which audit rows are created. For example, if you are using an APEX web application (or any other), your application logs on to the database with the same user. If you want to audit only a few of your end users, you can use SYS_CONTEXT values like CLIENT_IDENTIFIER and audit only the sessions that carry a specific client identifier. You can also create conditions to audit specific database users. One of the most important things is that the new unified audit table is tamper proof: you cannot run DML on it, so your audit data will be safe (unfortunately you can modify AUD$, the base table of standard auditing). So, don't be late.
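As a sketch of such a conditional policy (policy name, audited object and the client identifier value are made up for the example):

create audit policy sales_app_audit
  actions update on hr.employees, delete on hr.employees
  when 'SYS_CONTEXT(''USERENV'',''CLIENT_IDENTIFIER'') = ''APP_USER_42'''
  evaluate per statement;

audit policy sales_app_audit;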

wish you healthy days, don’t forget to get your vaccination.

Blockchain vs Immutable tables

Hi,

As you know, 21c is here for Linux and some of the new features are (as always) pretty exciting. Two of them are Blockchain and Immutable tables. Let's explain and test.

First of all, these are new features in 21c but they have been backported to 19c. As of 19.10 you are able to use BLOCKCHAIN tables and as of 19.11 IMMUTABLE tables. I will run my tests on 19.12.

If you are familiar with Bitcoin or any other alt coin then you probably know what a blockchain is. Basically it is a distributed ledger. Every transaction is a "block" and each block contains information about the transaction, like sender, receiver, amount etc., and also a hash value of the previous transaction (computed with a hash function). That way every new transaction is added to the end of the "chain", and that is what creates the chain. This ensures that the chain is unbreakable: you cannot tamper with old records, because that would break the chain (hash values won't match the next blocks).

So, Oracle uses this technology to create a new table type called a Blockchain table. Every inserted row in this table is stored with some additional information like the current timestamp and, most importantly, the previous row's hash value! Hash operations are done at COMMIT time, not before. That way Oracle guarantees that every row is part of the chain and you cannot tamper with those rows; if you do, the chain is broken and cannot be verified. Also, Oracle won't let you delete or update those rows using any DML statement like delete, update, merge etc.

Immutable tables are pretty similar to Blockchain tables. You cannot update those tables using any DML, just like blockchain tables. The difference is that rows in an Immutable table are not linked to each other; rows do not store any hash value from previous rows.

Both of them are designed to create "unchangeable", insert-only tables. If you think about it, this is a real necessity, especially for auditing stuff. For example, if you store log information for credit card operations, you don't want anyone to change it and you want to be sure that those rows are intact and unchanged during an investigation.

Of course, you usually need to store table data only for a specific amount of time, like 5 years. No one can store data forever, so at some point you must be able to delete old rows. This is something you define while creating those tables. Also, even if I am not able to delete the rows, what would happen if someone drops the table? This is of course forbidden too, and again you define a drop policy while creating the table. Let's start with a blockchain table:
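Here is a sketch of the create statement I use below (table and column names are made up):

create blockchain table bc_log (
  id       number,
  log_data varchar2(100)
)
no drop until 31 days idle
no delete until 30 days after insert
hashing using "SHA2_512" version "v1";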

You need the BLOCKCHAIN keyword (or IMMUTABLE for immutable tables) in the create statement.

"NO DROP UNTIL 31 DAYS IDLE" defines how you can drop this table. If there are no actions on the table for 31 days, then and only then can you drop it. By the way, right after creating the table, if you haven't inserted any rows yet, you can drop it immediately. You can also use just NO DROP to disable dropping of the table entirely (then the only way is dropping the database).

"NO DELETE UNTIL 30 DAYS AFTER INSERT" defines how you can delete "old" rows. In my example you can delete a row 30 days after its insertion. Again, you can use NO DELETE LOCKED to disable deleting from the table.

The hashing part is the default and you have to write it to define the blockchain structure.

Let's insert a row and try to delete or update it:
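A small sketch of the test (the exact error text depends on your version):

insert into bc_log values (1, 'first row');
commit;

delete from bc_log where id = 1;
update bc_log set log_data = 'changed' where id = 1;
-- both statements are rejected with an ORA error: normal DML cannot
-- modify or remove rows of a blockchain table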

As you can see, modifying is forbidden. You cannot even delete rows that have passed the retention time you defined (NO DELETE UNTIL...) with a delete statement. You must use the DBMS_BLOCKCHAIN_TABLE.DELETE_EXPIRED_ROWS procedure to delete rows beyond the retention.
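A minimal sketch of that call (schema/table names follow the example above; double check the parameter names against the package documentation):

declare
  v_deleted number;
begin
  dbms_blockchain_table.delete_expired_rows(
      schema_name            => user,
      table_name             => 'BC_LOG',
      before_timestamp       => null,
      number_of_rows_deleted => v_deleted);
  dbms_output.put_line(v_deleted || ' rows deleted');
end;
/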

This is the BLOCKCHAIN table. Of course there are many more properties, like user signatures etc., but I will leave it here for now. For IMMUTABLE tables, almost everything is the same:
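The immutable version of the same sketch:

create immutable table imm_log (
  id       number,
  log_data varchar2(100)
)
no drop until 31 days idle
no delete until 30 days after insert;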

Just change the "blockchain" keyword to "immutable" and remove the hashing clause. Pretty straightforward. As expected, you cannot delete or update rows. If you want to delete old rows, you must use the DBMS_IMMUTABLE_TABLE.DELETE_EXPIRED_ROWS procedure.

A common feature of both tables is that they cannot be modified by privileged db users, not even SYS or DB Vault admins. No one in the database can change those rows. The basic difference between the two is that a Blockchain table cryptographically links each new row to the previous one, while an Immutable table has no such option. What does this provide? Immutable tables cannot be modified via the Oracle Database software, but what about bypassing the DB? If someone modifies a datafile using operating system tools, then your table data is modified and you cannot be aware of it. Blockchain tables, on the other hand, have a verify function. Since every row is linked to the previous one with a hash value, if you modify the data with any tool, like OS tools, verification will fail, because if a row changes then its hash value changes, and that hash value is stored in the next row.
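The verification is done with DBMS_BLOCKCHAIN_TABLE.VERIFY_ROWS; a minimal sketch (parameter names as I remember them, so double check them against the documentation):

declare
  v_verified number;
begin
  dbms_blockchain_table.verify_rows(
      schema_name             => user,
      table_name              => 'BC_LOG',
      number_of_rows_verified => v_verified);
  dbms_output.put_line(v_verified || ' rows verified');
end;
/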

Then why do immutable tables exist if blockchain tables are more secure? You might have guessed: creating hash values and storing them in the next row causes some extra work, which means performance overhead.

Let's make a test (with a flaw on purpose):
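A row-by-row sketch of the test (table names as above; note there is deliberately no commit inside the block):

declare
  v_start number;
begin
  v_start := dbms_utility.get_time;
  for i in 1 .. 100000 loop
    insert into bc_log values (i, 'row ' || i);
  end loop;
  dbms_output.put_line('blockchain: ' || (dbms_utility.get_time - v_start) / 100 || ' sec');

  v_start := dbms_utility.get_time;
  for i in 1 .. 100000 loop
    insert into imm_log values (i, 'row ' || i);
  end loop;
  dbms_output.put_line('immutable : ' || (dbms_utility.get_time - v_start) / 100 || ' sec');
end;
/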

I inserted rows using bulk insert and row by row into both tables and here are the results. Huh! The blockchain table is almost as fast as the immutable table? Actually no! As I said earlier, hashing operations are done at commit time, not before, for the blockchain table, and in my code there is no commit! That's why we see the same performance. How about adding commits before printing the time:

Here it is! At worst, immutable tables are more than 6 times faster (row by row insert) and at best almost 40 times faster than blockchain tables. So, choose the table type carefully: if an immutable table meets the need, use it.

This was just a glance at the new table types. I will make more tests and come back with new posts.

wish you healthy days.

Edit: I didn't mention the COMPATIBLE parameter. It should be at least 19.10 to create a blockchain table and 19.11 for an immutable table.

ORA-00600 [kzan_open_osfile:ksfdread] while querying/purging unified audit

Hi everyone,

Lately, I hit an ORA-600 while querying unified_audit_trail on a 19.8 Oracle database, and the same thing happens if you try to purge the audit data. The exact error message is:

ORA-00600: internal error code, arguments: [kzan_open_osfile:ksfdread], [], [], [], [], [], [], [], [], [], [], []

Normally, I check the support notes first, but this time the ORA-600 argument seemed understandable to me. I could see that Oracle has a problem while reading a file, so I checked the alert.log.

p.s.: I changed the db name to "orcl".

I checked the trace file that I found in the alert.log (/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_5714.trc). After the trace file header information, here is the text belonging to the error:

If you look closely, you can see that Oracle is trying to read the /u01/app/oracle/audit/orcl/.ora_audit_01267.bin file, then gets a file I/O error, and at the very end the error is "no data found". So, there is definitely something about this file. When I checked the file, I realized it is 0 bytes:
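Just an ls against the file mentioned in the trace (the size column showed 0):

ls -l /u01/app/oracle/audit/orcl/.ora_audit_01267.bin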

I figured Oracle was trying to read this file but, since it is an empty audit file, it got a "no data found" error. I deleted this file and all the problems were gone 🙂

hope this is helpful.

wish you healthy days.

 

X Forwarding for an Oracle DBA

Hi,

It's been a long time. I want to write something about X Forwarding. You might feel more comfortable using a GUI instead of silent installations sometimes. If you are connecting to a Linux server remotely, then you must enable X Forwarding to get a GUI for Oracle products like dbca, netca etc.

First of all, use MobaXterm to eliminate many problems. Moba sets some parameters for you and it also shows whether X Forwarding is enabled or disabled for that session:

just using MobaXterm is not enough of course:

1- create .Xauthority file under your linux user home directory:

touch ~/.Xauthority

2- check ssh config if X11Forwarding parameter is set to yes or not:

sudo vim /etc/ssh/sshd_config

it must be yes of course.

3- install xauth packages:

sudo yum install xorg-x11-xauth xorg-x11-utils xorg-x11-fonts-* xorg-x11-font-utils xorg-x11-fonts-Type1

(fonts are not mandatory)

4- create a magic cookie:
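One way to create the cookie manually (a sketch; it assumes the mcookie utility from util-linux is available and DISPLAY is already set by your SSH session):

xauth add $DISPLAY . $(mcookie)
xauth list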

That is pretty much it. After these steps, MobaXterm should show X11 Forwarding as enabled.

So far so good, but there is a specific scenario. Many companies use VPN accounts now: you log on to servers with your VPN account, but the Oracle installation may be owned by another user like "oracle", and you might not have the password of the oracle user since it is a directory user managed by a different team at the hosting company (or your own company). In that case you are probably using the "sudo su - oracle" command to switch to the oracle user, and if you do that, X11 Forwarding will not work! As a quick hack you can use this:

my vpn user: mustafa

oracle installation user: oracle

Log in as mustafa, then copy the .Xauthority file to the oracle user's home directory and change the owner of the .Xauthority file to oracle:
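A sketch of those two commands (the home directories and the oinstall group are assumptions, adjust to your environment):

sudo cp /home/mustafa/.Xauthority /home/oracle/.Xauthority
sudo chown oracle:oinstall /home/oracle/.Xauthority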

You can put these into an sh file and run it. You must run these commands with sudo, of course.

The last step is the $DISPLAY variable. Echo $DISPLAY as the mustafa user and, after switching to the oracle user, set the same value:
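Something like this (the display value is just an example, use whatever your own session prints):

# as mustafa
echo $DISPLAY              # e.g. localhost:10.0

# after sudo su - oracle
export DISPLAY=localhost:10.0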

that’s it. you should be able to use X11 Forwarding.

hope this helps.

Note: in my country there are huge fires along almost the entire coastline. Many beautiful towns and villages have burned, people have died, dozens of hills have turned into ash and thousands of animals have died. I hope it ends soon. Climate change is real and it is here; every human should be aware of this.

Create a Clone DB from Backup on the Same Server (or remote)

Hi,

I wanted to create a post series about cloning/duplicating a database on the same or a remote server. There are so many options and combinations for this, like rman duplicate database with/without a target/source db connection, rman restore using backups, ASM to OS or OS to ASM, same server or remote server, OMF or regular file naming etc.

What I am about to demonstrate is creating a TEST db from a PROD db on the same server with no ASM, only OS file system, using the PROD db's backups (not the rman duplicate database command). Of course, you can use the same method to create a TEST db on a remote server, just by copying the backups and archivelogs (maybe a copy of the controlfile too) to the same location on the remote server. So, let's get started:

source database SID (let's say this is your PROD db, the source db): orcl
duplicate/target database SID (the TEST db you want to create): dup
method: using backups only (no duplicate database command)
server: same or remote (you can copy your backups to a remote server under the same location as on the prod server and follow the same steps)
file naming: OMF (Oracle Managed Files)
OS: Linux / Windows (the Windows steps are very similar; I will add some notes for Windows)
Database version: 19c (19.3)

While doing this, I will try to explain why we do each step and the important files of a database.

1- Parameter File

The parameter file is the first file read while starting a database instance. It defines vital and mandatory parameters for creating an instance, like the database name. So, to create a new database, first we need a parameter file. It can be either an SPFILE (binary version) or a PFILE (text version). Since we cannot create a binary file easily, I will create a text parameter file and then create the binary version from it. There are thousands of parameters in Oracle, but just 3 of them are enough to create an instance. Under the ORACLE_HOME/dbs directory, create an empty file named initdup.ora and put these into it:
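The three lines I put in initdup.ora (the values follow the explanation below; the file-dest path is the base path used in item 3):

db_name='orcl'
db_unique_name='dup'
db_create_file_dest='/u01/app/oracle/oradata'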

Windows OS Note: for Windows, the directory is ORACLE_HOME/database and the file name is INITDUP.ora by default. Also, you need to create a Windows service for the Oracle database instance using the "oradim" tool first: "oradim -NEW -SID dup -STARTMODE auto" is enough to achieve this.

what are these 3 parameters?

  1. db_name: name of the database, which is "orcl" for now. Wait, what? Aren't we creating a new db called "dup"; why is this orcl? Because we will use the orcl database backups to create our database and I don't want to point rman at the backup files manually. So we will act as if we are creating a db called "orcl", but we will change the name along the way. The orcl name is used only for finding the backups and restoring/recovering the datafiles.
  2. db_unique_name: this parameter is null by default and, if it is null, its value is db_name. You can think of it as Oracle using "NVL(db_unique_name, db_name)". As you see, this parameter is set to "dup", which is the name of the new database. While doing the restore/recover operations, if rman does not know that this is a different database, it will restore/recover the source database (orcl). With this parameter we identify our unique database.
  3. db_create_file_dest: this is the parameter that makes your database use OMF (Oracle Managed Files), which means you don't need to give any file names. Just say "add a datafile to this tablespace" and Oracle will add a new datafile and name it; we don't need to know the name of a datafile. This parameter, of course, also tells Oracle where the files will be created, and by files I don't mean just datafiles: redo logs and control files too. This is the "main directory" for a database, so the path pattern is like this:
    /u01/app/oracle/oradata/<instance_name>/datafile => for datafiles
    /u01/app/oracle/oradata/<instance_name>/controlfile => for controlfiles
    /u01/app/oracle/oradata/<instance_name>/onlinelog => for redo logs
    In our example, the instance name will be "dup".

Now we have a parameter file, so we can start our database instance. Do not forget that our source database is also on the same server, so we must be careful not to mess with the source db. So, I will set the ORACLE_SID environment variable first:
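A sketch of the commands (I create the spfile from the pfile and then start the instance in nomount mode):

export ORACLE_SID=dup
sqlplus / as sysdba

SQL> create spfile from pfile;
SQL> startup nomount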

Windows OS Note: on Windows you set the ORACLE_SID environment variable using the "set" command, not export: "set ORACLE_SID=dup".

As you can see, we have already started an instance. Of course, we don't even have a datafile yet, but it is still an instance. We didn't specify any memory parameters in the pfile, for example, but they all have default values so it will work; of course, if you have a defined configuration you can add those parameters to the pfile (initdup.ora) and the db instance will use them.

I created the SPFILE before starting the instance, so spfiledup.ora was created under ORACLE_HOME/dbs (ORACLE_HOME/database for Windows) and the instance was started with this spfile.

So we have a basic parameter file and our instance is up. Let's continue with step 2.

2- Control File

The control file is the next step while starting a database. It stores very critical information like where the datafiles, backups and redo logs are. Also, many database limits are defined in the controlfile, like the maximum number of datafiles that can be created in the db (MAXDATAFILES). To restore the control file, I will first create a copy of the control file from the source db (orcl, our prod database). So, on "orcl", we first create the control file copy.
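A sketch of that step, run against the source database:

export ORACLE_SID=orcl
sqlplus / as sysdba

SQL> alter database backup controlfile to '/tmp/controlORCL.ora';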

So I created the controlORCL.ora file under /tmp, which is a copy of the current control file of the orcl db. Now let's switch back to the dup db and restore it.

The command is "restore controlfile from '/tmp/controlORCL.ora';" and the control file is restored under the /u01/app/oracle/oradata/DUP/controlfile/ directory. As you see, it is under the DUP directory because of the db_unique_name parameter.

We restored our control file, but this is not entirely correct, because this control file belongs to the orcl db and it knows this database as "orcl"; we will deal with that later. Now, this control file knows where the backups are, so we can restore the datafiles, but first we must take our database to mount mode.

Now, restore the database, BUT as I mentioned, this control file belongs to the "orcl" db, so if I just run "restore database" it will restore the datafiles to the original "orcl" datafile locations. You must be careful here; you don't want to damage your source db. To restore those datafiles to a different location (the dup db datafile location) we will use the "set newname" command:
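A sketch of the rman run block on the dup instance:

rman target /

RMAN> run {
        set newname for database to new;
        restore database;
      }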

Since I use OMF, "set newname for database to new;" means: restore the datafiles to wherever the db_create_file_dest parameter points. By default, the control file knows where those datafiles are, so if I simply restored them they would go under the ORCL db location. "set newname for database to new" makes rman re-read the parameter file and datafile location information and restore the datafiles there. As you see in the output, the datafiles are restored under the DUP location (ex: channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/DUP/datafile/o1_mf_system_%u_.dbf).

Perfect, now we have datafiles too, but we cannot open the database right away. Even if we were able to open the database, the control file is still the old one and would look for datafiles under the ORCL directory, not DUP. We must tell the controlfile to use the recently restored datafiles as the primary datafiles, so we must "switch database to copy".
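Still in rman:

RMAN> switch database to copy;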

Let's recover the database now:
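That is a single rman command:

RMAN> recover database;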

This will apply all the changes made after the backup. It will read the archivelog files and apply them to the datafiles.

Good! Can we open the database now? Well, no 🙂 Why? Because, as I said earlier, this controlfile is still a stranger: it knows this database as "orcl", but it isn't anymore! Since we have everything we want (controlfile, datafiles, recover operation done), we can now change our controlfile. To achieve that, we must re-create the control file and tell it "this is the dup db from now on". Of course, before doing that, we must change the "db_name" parameter; it is still "orcl". We can set it to "dup" now, but this parameter cannot be changed without a restart, and you cannot change db_name with "alter system" while using an spfile. So the steps are these:

  1. Create a pfile from the spfile.
  2. Shut down the database.
  3. Modify the pfile: change the db_name parameter to "dup" and remove the db_unique_name parameter.
  4. Create the spfile again. ===> This is very important. If you forget this step, when you start the db it will still use the old values.
  5. Start the db in mount mode.
  6. Create the controlfile creation script using the "alter database backup controlfile to trace" command.

Finally, we need to re-create our controlfile to make it completely a dup db controlfile. I created the controlfile script under the tmp directory; you can open it with any text editor. You will find 2 "create controlfile" commands, one with NORESETLOGS and the other with RESETLOGS. On a new database, to create the redo logs, we must use the RESETLOGS clause, so I keep only the RESETLOGS version in the control_file.txt file.

As you can see, the controlfile still thinks that this is the ORCL database and all redo logs are under the ORCL directory ('/u01/app/oracle/oradata/ORCL/onlinelog/o1_mf_1_j6xpd19n_.log'). This is also an important step: if you don't change those ORCL paths, you might damage your source database, which is orcl in my example. Since I use OMF, and I always use OMF exactly for these reasons, all I need to do is remove those paths.

Also, since we want to change the database name, we must add the SET clause after the "create controlfile reuse" part. Please note that just changing ORCL to DUP in this script won't work, because the datafiles also store the database name in their headers. So we must use SET, which means "set the database name"; this will also update the datafile headers. To create the controlfile again, you must be in nomount mode (in mount mode the controlfiles are already in use, so you cannot re-create them).
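As a trimmed sketch of how the edited RESETLOGS command ends up looking (the log sizes, limits, archivelog mode, character set and the datafile name here are placeholders; keep the DUP datafile paths exactly as your trace script lists them):

create controlfile reuse set database "DUP" resetlogs archivelog
    maxlogfiles 16
    maxdatafiles 1024
logfile
  group 1 size 200m,
  group 2 size 200m,
  group 3 size 200m
datafile
  '/u01/app/oracle/oradata/DUP/datafile/o1_mf_system_example_.dbf'
character set AL32UTF8;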

Finally, we have a real controlfile for the dup db. After this, we can open it with the resetlogs option. Creating the controlfile also takes our database into mount mode, so there is no need for another "alter database mount" command.
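So the last command is simply:

SQL> alter database open resetlogs;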

Voilà! We have a new db with a new name, created from another database's backup. You can use this on the same or a remote server; as said before, the only difference is copying the backups to the remote server.

I hope this helps. I will try to write about the other methods too, like duplicate database etc.

Please leave a comment, so I can believe that the 30 daily visitors on my blog are not all bots 🙂

wish you healthy days.

Important Edit: you might ask why we don't re-create the controlfile before all those operations. There are 2 reasons: creating the control file would lose the locations of the backups and, more importantly, the create controlfile command validates that the datafiles are in the listed locations, so we cannot create the controlfile before having the datafiles.

Pragma UDF & Deterministic

Hello everyone,

I wanted to write about something simple that can improve the performance of your SQL statements. Unfortunately, many database developers use too many PL/SQL functions in their SQL statements. I say too many because the more you learn about SQL, the less PL/SQL you will use; SQL is a very capable language for processing data. Of course, we don't design and develop a whole system from scratch all the time. We start to work somewhere, use their standards and run old code. Many of my clients have basic PL/SQL functions which do not interact with SQL at all, just doing simple calculations or conversions. These PL/SQL functions are pure algorithms that could also be run by SQL. For example:
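Something like this (a made-up name and VAT rate, but representative):

create or replace function add_vat(p_price in number) return number is
begin
  return p_price * 1.18;
end;
/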

This is a very simple VAT-adding function. As you can see, this calculation could be done in SQL too, but let's say someone wrote this code 20 years ago and now it is used everywhere in the database (maybe in the application too). You might not be able to change all of those places, but you can improve the function's performance with DETERMINISTIC or PRAGMA UDF.

let’s make a small example first:
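A sketch of the test table (names are made up; the cross join is just an easy way to generate 800.000 rows):

create table sales as
select rownum as id, mod(rownum, 1000) + 1 as price
from   (select 1 from dual connect by level <= 800),
       (select 1 from dual connect by level <= 1000);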

This will create a table with 800.000 rows. Now I will run a query against this table with and without the function:
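The two queries are essentially the same calculation, once in pure SQL and once through the PL/SQL function:

select sum(price * 1.18)   from sales;   -- pure SQL
select sum(add_vat(price)) from sales;   -- with the PL/SQL function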

It takes almost 10 times longer to complete the query when you use a PL/SQL function.

The reason is that whenever you use a PL/SQL function inside a query (or vice versa), something called a "context switch" happens. You can think of it like this: SQL statements are run by the SQL engine and PL/SQL statements are run by the PL/SQL engine, and whenever one of these engines meets a statement of the other kind, it has to call the other engine. This is context switching. You should avoid it as much as you can but, as I said, if the system you work on runs queries like this, you might not be able to change them all; you can, at least, improve the performance. That is where PRAGMA UDF and DETERMINISTIC come in.

DETERMINISTIC tells Oracle that this function will return the same result for the same parameters every time. If your PL/SQL object does not interact with SQL objects like tables or with time-dependent values like sysdate, then it is probably a good candidate for DETERMINISTIC. Also, if you want to index a user-defined PL/SQL function, it must be deterministic. You can add the DETERMINISTIC clause to a function that does interact with SQL objects or sysdate etc., but this will only cause you headaches, so be careful about that.

PRAGMA UDF is a way to reduce the context switch overhead. It does not reduce the number of context switches, but it makes each one smaller and faster. That way, if you use a PL/SQL function in SQL, the context switch will be lighter.

Let's make a small test now; my example is a very good candidate for both DETERMINISTIC and PRAGMA UDF:
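The three versions would look roughly like this (the names are mine, the bodies are identical):

-- 1: base version
create or replace function add_vat(p_price in number) return number is
begin
  return p_price * 1.18;
end;
/
-- 2: deterministic
create or replace function add_vat_det(p_price in number) return number deterministic is
begin
  return p_price * 1.18;
end;
/
-- 3: deterministic + pragma udf
create or replace function add_vat_det_udf(p_price in number) return number deterministic is
  pragma udf;
begin
  return p_price * 1.18;
end;
/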

I created 3 versions of my vat function: the first is the base one, the second has a DETERMINISTIC clause and the third is both deterministic and pragma udf. They all do the same work. Let's compare their performance:

I ran 4 versions: the basic function, the deterministic function, deterministic & pragma udf and finally, of course, the pure SQL statement. As you can see, the BASE version is the worst and takes 1.15 seconds on average to complete. Just adding a DETERMINISTIC clause decreases the time spent almost 5 times, adding PRAGMA UDF brings another 30-35% on top of deterministic, and pure SQL is of course the fastest: 10 times faster than the BASE version. So, if you can use pure SQL, use it! If that is not possible, at least check for deterministic candidates and use pragma udf.

Of course, there is also something called "subquery caching":
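The trick is wrapping the function call in a scalar subquery, so Oracle can cache the result for repeated input values:

select sum((select add_vat(price) from dual)) from sales;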

As you see, subquery caching might be even faster than pragma udf and deterministic, but it depends, and this solution also requires changing the SQL statement.

So you can improve performance with just small changes. Find a PL/SQL function that is very simple and used everywhere in your database (I am pretty sure there is one) and try the suggestions above. By the way, 19c has something even stronger called "SQL Macro", which I mentioned a little bit before:

http://mustafakalayci.me/2020/10/28/sql-macro-is-finally-here/

http://mustafakalayci.me/2020/10/30/sql-macro-part2-column_value-nvlparam-column_value/

I mostly covered table SQL macros, but you can use SQL macros to generate parts of a SQL statement too.

I hope this helps you to tune your sql & plsql statements.

wish you all healthy days.

PLSQL and Flashback Data Archive Problems.

Hi,

I have written about flashback data archive before; if you haven't read those, please check these posts first.

http://mustafakalayci.me/2019/03/02/goodbye-log-triggers-welcome-flashback-data-archive/

http://mustafakalayci.me/2020/09/10/flashback-data-archive-problems/

I already mentioned some problems in my second post, but those are not the kind of problems that will stop your work. The problem I am about to talk about is that kind of problem.

From my perspective, it seems that flashback is designed to be used from SQL: track the history of your tables (even DDL changes) etc. But if you start to use flashback in PL/SQL, you will hit some walls!

PL/SQL is not a very dynamic language yet. Please don't get me wrong: I am an Oracle Database developer and administrator, I am not interested in any other database and I use SQL and PL/SQL all the time, but there are some limits that sometimes force very awkward workarounds.

I have tested the code below on a 19.8 database.

Flashback Data Archive (FDA) is a very useful option. Even if you add, modify or drop a column, you will still be able to see old versions of the table, and that is where my problem starts:

I created a table and added it to a flashback data archive. I added some sleep calls to wait for the necessary internal tables to be created. In the end, at 14:49:42 (my local time) I have a table which contains 2 rows, and a simple procedure called SP_X which gets a DATE parameter and uses it in a flashback query.
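A sketch of that setup (the FDA name, table and column names are my reconstruction, not the exact original code):

create table tmp_orders (cust_id number, order_date date, amount number);
alter table tmp_orders flashback archive fda1;   -- fda1: an existing flashback data archive

insert into tmp_orders values (100, sysdate, 50);
insert into tmp_orders values (200, sysdate, 75);
commit;

exec dbms_session.sleep(60)   -- give the FDA internal tables time to be created

create or replace procedure sp_x(p_date in date) is
  cursor c is select * from tmp_orders as of timestamp p_date;
  r c%rowtype;
begin
  open c;
  loop
    fetch c into r;
    exit when c%notfound;
    dbms_output.put_line(r.cust_id || ' - ' || to_char(r.order_date, 'dd/mm/yyyy hh24:mi'));
  end loop;
  close c;
end sp_x;
/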

As you can see, this procedure works well. I didn't demonstrate different data at different times, but that works too.

Let’s drop a column from this table:
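Using the hypothetical columns from the sketch above:

alter table tmp_orders drop column amount;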

and re run the procedure:
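Re-running it, even with SYSDATE as the timestamp:

exec sp_x(sysdate)
-- now raises an error, even though the procedure asks for data as of right now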

As you see, even though I am asking for data as of sysdate, I get an error. The problem is that, from now on, querying with earlier dates causes the same error too!

Even dropping and recreating the procedure does not help. As far as I understand, Oracle cannot determine the exact shape of the query anymore. Adding a column does not cause this problem, but dropping a column does.

You might think of using a cursor FOR loop instead of fetching the cursor explicitly, but you will hit the same error.

By the way, the queries we are trying to run work correctly in plain SQL:

So, using flashback queries in PL/SQL seems a little bit dangerous. Actually, one of my clients also gets an "ORA-01007: variable not in select list" error in PL/SQL (the SQL is perfectly fine). So be careful while using flashback in PL/SQL.

EDIT:

During my tests, I renamed the cust_id column and started getting an "ORA-00932: inconsistent datatypes: expected NUMBER got DATE" error. The order_date column comes right after cust_id, which made me think the column order might be changing, and I realized there is an even bigger problem here!

here is sample code:
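A reconstruction of the sample (again, not the original code, just the same idea; fda1 as above):

create table tmp_cols (c1 number, c2 number, c3 number);
alter table tmp_cols flashback archive fda1;

insert into tmp_cols values (1, 2, 3);
commit;

select * from tmp_cols;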

There are columns c1, c2, c3 and they hold their own ordinal number (1, 2, 3).

The column order is not changed, but when I used this table in a flashback query:
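The same select with only a flashback clause added:

select * from tmp_cols as of timestamp sysdate;
-- per the test above, the columns can come back in a different order here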

The column order has changed, even though I asked for data as of sysdate. In PL/SQL this can cause real problems, because you might not be aware of it and the wrong variables might end up storing your data:

As you see, the r_tmp.C20 variable is storing the data of the C3 column, because C3 comes first in the select list when the flashback query is used.

So, be very, very careful about flashback queries in PL/SQL.

wish you healthy days.

ORA-24244: invalid host or port for access control list (ACL) assignment

Hello everyone,

So you wanted to migrate your ACLs to another database, or you are trying to add a new ACL entry, and you got ORA-24244. There are, unfortunately, many reasons for that error. I wish Oracle gave us more detail about what exactly is wrong, but I wanted to share what I have seen so far and how to generate the ACL records. What I am about to write is for 19c databases.

  1. First of all, of course, you can get this error by setting an invalid port like -123 or 999999999. Check that first.
  2. Secondly, the port range you use might intersect with another definition. For example, if there is an ACL entry for www.example.com for ports 3000 to 3010, you cannot add a new port range 3010-3015 for the same host, same principal and same privilege: since 3010 is already covered by the first entry, the second entry must start at 3011 at the earliest.
  3. Thirdly, you might be trying to insert an entry for the RESOLVE privilege with a port definition. This is not allowed either; for the resolve privilege, the lower and upper ports must be null.

These are my basic checks. Even the expdp utility with the "include=NETWORK_ACL" option might fail on import because of these 3 issues. Lately I was copying ACLs from a 12c db to a 19c db and, after getting too many errors, I decided to write a simple select statement that generates the ACL append commands. You can either copy, paste and run the result of the statement, or run it in a loop and execute each command with dynamic SQL.
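A much simplified sketch of the idea (the real script also splits intersecting port ranges into single ports and regroups them, which is omitted here; review the generated commands before running them):

select 'begin dbms_network_acl_admin.append_host_ace( host => ''' || host ||
       ''', lower_port => ' || case when upper(privilege) = 'RESOLVE' then 'null'
                                    else nvl(to_char(lower_port), 'null') end ||
       ', upper_port => ' || case when upper(privilege) = 'RESOLVE' then 'null'
                                  else nvl(to_char(upper_port), 'null') end ||
       ', ace => xs$ace_type(privilege_list => xs$name_list(''' || lower(privilege) ||
       '''), principal_name => ''' || principal ||
       ''', principal_type => xs_acl.ptype_db)); end;' || chr(10) || '/'
from   dba_host_aces;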

Please don't judge me for not using match_recognize; I just wrote it the fastest way I could think of. The script first splits the port ranges into single ports, row by row, and then regroups them (some old entries had intersecting port ranges). It also uses NULL ports for resolve privileges. I didn't cover every option, of course. I hope this script helps you.

 

wish you healthy days.

export import ACL & ORA-24244 error during import

Hello everyone,

It's been a while since my last post. So, I wanted to write about exporting and importing ACL privileges. They have always been a pain (at least for me). After 12c, Oracle introduced a new and easy way to copy ACLs from db to db: export & import.

so here is the basic command on your source database:
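Something along these lines (the directory, dump file and log file names are just examples):

expdp system directory=DATA_PUMP_DIR dumpfile=acl.dmp logfile=acl_exp.log full=y include=NETWORK_ACL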

This will export all stored ACL privileges in your database. To import those ACLs into the new db:
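And on the target side:

impdp system directory=DATA_PUMP_DIR dumpfile=acl.dmp logfile=acl_imp.log full=y include=NETWORK_ACL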

Easy, but during the import you can get "ORA-24244: invalid host or port for access control list (ACL) assignment".

I encountered this error while creating a new db for a client. In my case it happened because RESOLVE privileges must not be used with a port definition!

So, somehow the previous db had port definitions for RESOLVE privileges and those entries were not imported. To add them to the new db, I simply used this SQL:
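A simplified sketch of that generator (RESOLVE entries only, ports forced to null; check the output before executing it):

select 'begin dbms_network_acl_admin.append_host_ace( host => ''' || host ||
       ''', ace => xs$ace_type(privilege_list => xs$name_list(''resolve''),' ||
       ' principal_name => ''' || principal ||
       ''', principal_type => xs_acl.ptype_db)); end;' || chr(10) || '/'
from   dba_host_aces
where  upper(privilege) = 'RESOLVE';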

run this select on the source db and execute the output on the target db.

wish you all happy and healthy days.

Which Column has Changed in Every Row?

Hi,

Edit: Thanks to Chris Saxon from the AskTom team, who checked my code and warned me about a problem. In order not to overload the PGA, the fetch_rows procedure fetches 1024 rows at a time, so every row-set chunk is 1024 rows. That means that for the 1025th row the previous row's information is lost, because fetch_rows has ended and is called again. He suggested using the XStore to keep data from the previous run of fetch_rows, and I added it. While doing that I removed some unnecessary lines too. Thanks again Chris.

PS: this is not a Polymorphic Table Function (PTF) explanation or introduction; I just wanted to share a piece of code that I wrote earlier and find very useful (at least for me). If you want to learn more about PTFs, check https://blog.sqlora.com/en/polymorphic-table-functions-example-transposing-columns-to-rows/

PTF is a great tool that enhances SQL. I think I only need a "pointer" definition in PL/SQL after PTF and I would be very happy 🙂 Even in this example, it would be nice if I could use a pointer to the data collections, which would make the code shorter and more readable.

A few months ago, I was debugging a query that returned some unnecessary rows. It came down to an SQL statement with a MINUS set operator that should have eliminated rows, but some rows were not eliminated as expected. I found the rows and tried to figure out which column had different values, so that the row was not eliminated by MINUS, but this table has so many NUMBER columns with many decimal digits that it was really hard to find which column(s) differed.

Since I am too lazy to compare two rows column by column, I created a Polymorphic Table Function (PTF) to find the differences for me, and it worked as I needed. I added a new varchar2 column called DIFFERENT_COLUMNS to the base query (select ... minus select ...) and put the differing column names with their old and new values into that column, so I got all of them at once. After solving my problem, I realized that I can use this code for something even better!

While looking at log data of a table (trigger based, flashback or flashback data archive), I have been asked many times "what changed?", i.e. which column(s) changed? It is not easy and mostly requires checking row by row, column by column. So, my polymorphic table function is the perfect solution for that! Just provide the table or select statement and find all the columns that changed at every step.

Here is my basic code:
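The full package body is too long to retype here, but as a sketch of its outer shape (parameter names are mine; the body implements the DBMS_TF describe/fetch_rows pair, compares consecutive rows with GET_ROW_SET, fills the new column with PUT_COL and keeps the last row of each chunk in the XStore):

create or replace package pkg_find_differences as

  function sf_find_differences(p_tbl           in table,
                               p_excluded_cols in columns default null)
           return table pipelined row polymorphic using pkg_find_differences;

  function describe(p_tbl           in out dbms_tf.table_t,
                    p_excluded_cols in dbms_tf.columns_t default null)
           return dbms_tf.describe_t;

  procedure fetch_rows;

end pkg_find_differences;
/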

This package has its own polymorphic table function (SF_Find_Differences), which takes two parameters. The first is the table or query in which you want to find differences with previous rows; by "previous" I mean the default order of whatever you provide to my PTF, I am not ordering the data in any way. I will explain the second parameter later. Let's make an example:

This will be my sample data (a flashback query). One problem is that while you are looking at historical data, you will mostly be checking the history of one specific row, not all rows, so you must filter rows using the primary key or something similar. Unfortunately, you cannot pass a "subquery" to a PTF, but you can pass a WITH data source.

So, I just created a dummy emp table, then updated the employee with id 100: first first_name, then last_name, then salary and commission_pct together. Here is my query:
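Roughly like this (emp is my copy of HR.EMPLOYEES; this is a reconstruction, not the exact query from the post):

with t as (
  select e.*, versions_starttime, versions_endtime
  from   emp versions between timestamp minvalue and maxvalue e
  where  employee_id = 100
  order  by versions_endtime nulls last
)
select * from pkg_find_differences.sf_find_differences(t);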

I think this is so cool 🙂 If you check the last column, it lists what has changed (which columns). As you can see, I use a WITH clause to define the specific historical data and order it using VERSIONS_ENDTIME. One problem is that I wanted to see when these changes happened (versions_starttime and endtime), but since they are also columns in this query, my DIFFERENT_COLUMNS lists those changing columns too! That is where the second parameter gets involved: it is the EXCLUDED column list, columns left out of the change control. So:
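The same call, now excluding the versions pseudo-columns via the second parameter:

with t as (
  select e.*, versions_starttime, versions_endtime
  from   emp versions between timestamp minvalue and maxvalue e
  where  employee_id = 100
  order  by versions_endtime nulls last
)
select *
from   pkg_find_differences.sf_find_differences(t, columns(versions_starttime, versions_endtime));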

Much clearer. On the second row, Steven becomes Mustafa; on the third row, King becomes KALAYCI; and on the last one, salary and commission_pct change. It is much easier to track or list in your application.

Of course this is not complete code; I only handled varchar2, number, date and timestamp columns, not the others. I also didn't work on "formatting" (like date and timestamp output), which could be done, but it is still pretty nice.

I wanted to share this with the whole community. I hope this helps.

Edit: while checking whether the old value and new value are different, I used IF conditions, but I would have liked to use the SYS_OP_MAP_NONNULL function; unfortunately SYS_OP_MAP_NONNULL is not available in PL/SQL.

Edit: I made a change to the excluded-columns handling: instead of a FOR loop I used the "member of" clause.

Wish you healthy, corona & earthquake free days. Unfortunately, an earthquake hit my country and neighboring countries. Rescue teams are still trying to save people. I hope this never happens anywhere and no one experiences this pain.