Enable Unified Audit on Linux and Windows

Hi,

As you know, since 12c Oracle has offered Unified Auditing, which has many more capabilities than standard auditing: audit policies, audit conditions, top-level auditing etc. By default Oracle uses “mixed” mode, which allows you to run both unified auditing commands and standard auditing. You can check whether you are using pure unified auditing:
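For example, using the V$OPTION view (the same view I refer to again below):

SELECT value FROM v$option WHERE parameter = 'Unified Auditing';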

If the result is FALSE then you are not using pure unified auditing; you are using either mixed mode or standard auditing. Unfortunately, enabling unified auditing is not as easy as changing a parameter. On Linux OS:

Shut down all Oracle components (database, listener etc.) and then relink the oracle binary with unified auditing turned on.
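The documented step here is relinking with the uniaud_on target; as the Oracle software owner, roughly:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME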

Then start the database and listener again.

On Windows OS:

Shut down all Oracle components (database, listener etc.), BUT on Windows you must stop and start the Oracle services themselves; shutting down and starting the database via SQL*Plus will not work. Open services.msc, find the Oracle database and listener services and stop them.

After stopping the services, rename the %ORACLE_HOME%/bin/orauniaud19.dll.dbl file to %ORACLE_HOME%/bin/orauniaud19.dll. The “19” in the DLL name is your version; if you are using 12c then you must rename the orauniaud12.dll.dbl (or orauniaud12.dll.option) file.

Then start the Windows services again. You can check the V$OPTION view as I showed above to see whether unified auditing is enabled.

If you are not using unified audit policies yet, you should start using them; they will probably be the only audit feature in the future. They are also much easier to use than standard auditing, and you have many more options for deciding which audit rows to create. For example, if you are using an APEX web application (or any other), your application will log on to the database with the same user for everyone. If you want to audit only a few of your users, you can use SYS_CONTEXT values like CLIENT_IDENTIFIER and audit only those sessions which carry specific client identifier info. You can also create conditions to audit specific database users. One of the most important things is that the new unified audit table is tamper-proof; you cannot run DML on it, so your audit data will be safe (unfortunately, you can modify AUD$, the base table of standard auditing). So, don’t be late.
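As a small sketch of the CLIENT_IDENTIFIER idea (the policy name and identifier value below are made up for illustration):

-- audit UPDATE/DELETE statements only for sessions tagged with a specific client identifier
CREATE AUDIT POLICY apex_vip_users_pol
  ACTIONS UPDATE, DELETE
  WHEN 'SYS_CONTEXT(''USERENV'', ''CLIENT_IDENTIFIER'') = ''JOHN.DOE'''
  EVALUATE PER STATEMENT;

AUDIT POLICY apex_vip_users_pol;

-- audited actions then show up in the UNIFIED_AUDIT_TRAIL view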

Wish you healthy days, and don’t forget to get your vaccination.

Code Based Access Control & Invoker’s Rights

Hi,

So, you might have already used some “invoker’s rights” functions. As you know, when you create a procedure/package/function and grant EXECUTE on it to other users, whenever another user runs that code, it runs with your privileges. Sometimes you might need those objects to run with “invoker’s rights”; that way you make sure that the code will run only if the invoking user has the necessary privileges. A common example is a procedure which changes a user’s password: if this code didn’t run with the invoker’s privileges, then any user might change any other user’s password, so that kind of procedure is created with “AUTHID CURRENT_USER”, which means invoker’s rights. I will talk about this in another post.

Now, the current problem is: you want to create a function which should run with the invoker’s privileges, but some objects used inside the function shouldn’t be accessible to the invoker.

I have a basic SALES table with some dummy data and a function which returns the total earnings for a given date:
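A minimal sketch of that setup (the column layout and the function name are my assumptions, not the original code):

CREATE TABLE sales (sales_date DATE, amount NUMBER);

-- some dummy data
INSERT INTO sales
  SELECT TRUNC(SYSDATE) - MOD(level, 30), level * 10
  FROM   dual
  CONNECT BY level <= 1000;
COMMIT;

-- invoker's rights: runs with the privileges of the calling user
CREATE OR REPLACE FUNCTION get_total_earnings (p_date IN DATE)
  RETURN NUMBER
  AUTHID CURRENT_USER
AS
  v_total NUMBER;
BEGIN
  SELECT SUM(amount)
  INTO   v_total
  FROM   sales
  WHERE  sales_date = TRUNC(p_date);

  RETURN v_total;
END;
/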

Let’s create a user and grant the EXECUTE privilege:
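Something like this (the password is illustrative):

CREATE USER test1 IDENTIFIED BY test1;
GRANT CREATE SESSION TO test1;
GRANT EXECUTE ON get_total_earnings TO test1;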

The TEST1 user should be able to call the function:
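Using the sketch names from above:

-- connected as TEST1
SELECT get_total_earnings(SYSDATE) FROM dual;
-- ORA-00942: table or view does not exist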

So the TEST1 user is able to call the function, but since TEST1 has no privilege on the SALES table, it gets a “table or view does not exist” error. To overcome the problem you could grant SELECT on SALES to TEST1, but that would make all the sales data accessible to TEST1, and that could be a security breach. I don’t want my users to see my actual data, only what I provide through the function, and only if they have the privilege to call it.

With 12c we got a solution for that: you can attach a ROLE to a FUNCTION, which is called Code Based Access Control (CBAC). So a function itself can hold privileges; that way you don’t need to grant the privileges to users, just to the function.
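With the names used in this example, the setup is just:

CREATE ROLE myrole;
GRANT SELECT ON sales TO myrole;

-- attach the role to the function itself (CBAC)
GRANT myrole TO FUNCTION get_total_earnings;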

As you can see, I created a ROLE and granted it to the function. Please note that you cannot grant privileges directly to a function, only ROLEs.

So my function has a role called MYROLE, and this role contains a SELECT privilege on the SALES table; anyone calling this function no longer needs a SELECT privilege on SALES:
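Trying it again (APP_OWNER below stands for whatever schema owns SALES):

-- connected as TEST1
SELECT get_total_earnings(SYSDATE) FROM dual;   -- works now
SELECT * FROM app_owner.sales;                  -- still fails with ORA-00942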

As you can see, the TEST1 user is able to call the function with its own privileges but cannot select from the table. Keep your database safe.

 

Goodbye Log Triggers, Welcome Flashback Data Archive

Hello,

I would like to talk about Flashback Data Archive (or Flashback Archive – FBA) in 12c. FBA was introduced in 11g, so it is not new, but 12c added very important new features that allow us to use FBA very efficiently and for free. I would like to focus on the new features more than on what it does and how it works, but let’s take a quick look first.

What is FBA?

Basically, FBA is a module that lets you store historical information about the data in your tables. When you enable FBA for a table, Oracle starts to watch the table and stores every change made to it. This is a broad definition, but it is not wrong: all DML changes start to be logged, and physical changes are recorded too. You will be able to flash your table back to before a TRUNCATE or any other table alteration; if you dropped a column, you can take it back, and so on.
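Enabling it takes two statements; a sketch (the archive name, tablespace and retention are my own choices):

-- no OPTIMIZE DATA clause, so the history is not compressed (see below)
CREATE FLASHBACK ARCHIVE fba1
  TABLESPACE users
  RETENTION 1 YEAR;

-- MYTABLE stands for the table you want to track
ALTER TABLE mytable FLASHBACK ARCHIVE fba1;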

What is new in 12c?

First of all, and I believe the most important one: it is now free! In 11g, FBA used compressed tables by default, which required the Advanced Compression license, and that meant additional costs. In 12c compression is optional: by default Oracle does not create those tables compressed, so you don’t have to pay anything unless you want to use the compression option.

Secondly, FBA can now store context information along with the data changes, which is the thing I needed most and the reason I couldn’t use it in 11g. If you have a web application, the application will probably use a common database user and manage application users itself. This prevents you from identifying sessions, because they all belong to the same user; but if you have a good developer team, you can ask them to set some context information like CLIENT_IDENTIFIER. This data can be used to separate sessions and identify the real users, for example. With 12c, FBA is able to store that information with the changes, and when we check the historical data we can see all the context information too.
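Context capture is turned on through the DBMS_FLASHBACK_ARCHIVE package; a sketch:

BEGIN
  -- store the full application context with each archived change
  DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL('ALL');
END;
/
-- later, DBMS_FLASHBACK_ARCHIVE.GET_SYS_CONTEXT can read the stored
-- context (e.g. CLIENT_IDENTIFIER) for a given transaction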

Is FBA better than Log Triggers?

In my opinion, YES! Of course there are many things to check, but I will try to make a demonstration of the performance of FBA.

This is my test case:
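The original scripts are not reproduced here, so the following is a reconstruction from the description; the trigger logs only a few representative columns, while the real test logged every column:

-- base data: DBA_OBJECTS inserted twice, to get a stable input set
CREATE TABLE t_base_data AS SELECT * FROM dba_objects;
INSERT INTO t_base_data SELECT * FROM dba_objects;
COMMIT;

-- the two test tables
CREATE TABLE tmp     AS SELECT * FROM t_base_data WHERE 1 = 0;
CREATE TABLE tmp_fba AS SELECT * FROM t_base_data WHERE 1 = 0;

-- log table filled by the trigger
CREATE TABLE tmp_log (
  object_id   NUMBER,
  owner       VARCHAR2(128),
  object_name VARCHAR2(128),
  dml_type    VARCHAR2(1),
  client_id   VARCHAR2(64),
  os_user     VARCHAR2(64),
  terminal    VARCHAR2(64),
  log_date    DATE
);

CREATE OR REPLACE TRIGGER trg_tmp_log
AFTER INSERT OR UPDATE OR DELETE ON tmp
FOR EACH ROW
BEGIN
  INSERT INTO tmp_log
  VALUES (NVL(:NEW.object_id,   :OLD.object_id),
          NVL(:NEW.owner,       :OLD.owner),
          NVL(:NEW.object_name, :OLD.object_name),
          CASE WHEN INSERTING THEN 'I' WHEN UPDATING THEN 'U' ELSE 'D' END,
          SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER'),
          SYS_CONTEXT('USERENV', 'OS_USER'),
          SYS_CONTEXT('USERENV', 'TERMINAL'),
          SYSDATE);
END;
/

-- track TMP_FBA with the FBA1 archive created earlier (no OPTIMIZE DATA)
ALTER TABLE tmp_fba FLASHBACK ARCHIVE fba1;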

So we have two tables, TMP and TMP_FBA. I created a logging trigger on TMP which writes every DML into the TMP_LOG table with some context information like client identifier, OS user, terminal etc. At this point you can see that my trigger is a FOR EACH ROW trigger, so it writes every change one by one to the log table. Some might use a compound trigger, store the changed rows in a collection and write them to the log table in the AFTER STATEMENT section. This can optimize your logging for bulk DML, but if your DML changes too many rows it can make you consume too much PGA and cause memory problems, so I didn’t use it in my example. By the way, to provide stability I created the T_BASE_DATA table and I will use it to fill my test tables.

My flashback archive is not a compressed one (I didn’t use the “OPTIMIZE DATA” clause), and neither is my TMP_LOG table. I will run some DML and compare the performance. I also want to compare the sizes of the tables; that will give useful information too. First I will insert some data with an “INSERT ... SELECT” statement, then insert the same data row by row using a FOR loop:
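Roughly like this (the timings quoted below are from the original runs, not from this sketch):

SET TIMING ON

-- bulk inserts
INSERT INTO tmp     SELECT * FROM t_base_data;   -- trigger fires per row
INSERT INTO tmp_fba SELECT * FROM t_base_data;
COMMIT;

-- row-by-row inserts
BEGIN
  FOR r IN (SELECT * FROM t_base_data) LOOP
    INSERT INTO tmp VALUES r;
  END LOOP;
END;
/

BEGIN
  FOR r IN (SELECT * FROM t_base_data) LOOP
    INSERT INTO tmp_fba VALUES r;
  END LOOP;
END;
/
COMMIT;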

When we check the timings, we see an unbelievable difference:

Trigger Bulk Insert : 22.13 seconds
FBA Bulk Insert     : 00.07 seconds

Trigger Row By Row Insert : 33.62 seconds
FBA Row By Row Insert     : 05.12 seconds

So for the performance of our INSERT statements, the winner is definitely FBA. If we check the log sizes, our log table, which is populated by the trigger, has reached 112 MB, while the FBA-related objects total 35 MB. One of the best things about FBA is that it does not generate much INSERT log data, because the original data is already in our table. This feature alone saves a lot of space. So we can say that on logging size, FBA is the winner again!

PS: While running my code, note that my only FBA table is TMP_FBA, so while checking the size of the FBA-related objects I used an “object_name LIKE 'SYS_FBA%'” condition. I will explain those objects at the end.
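The size check was roughly this (a sketch, assuming everything lives in one schema):

SELECT SUM(bytes) / 1024 / 1024 AS size_mb
FROM   user_segments
WHERE  segment_name LIKE 'SYS_FBA%';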

Let’s run some UPDATEs:
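The original statements are not shown, but an equivalent pair would be:

UPDATE tmp     SET object_name = LOWER(object_name);
UPDATE tmp_fba SET object_name = LOWER(object_name);
COMMIT;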

Trigger Update : 53.38 seconds
FBA Update     : 02.25 seconds

Trigger Log Size : 96 MB (208 - 112)
FBA Log Size     : 91 MB (126 - 35)

Winner is still FBA.

Delete:
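Again, an equivalent pair:

DELETE FROM tmp;
DELETE FROM tmp_fba;
COMMIT;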

Trigger Delete : 48.24 seconds
FBA Delete     : 01.92 seconds

Trigger Log Size : 104 MB (312 - 208)
FBA Log Size     : 48 MB (174 - 126)

and winner is again FBA!

Everything is awesome, but how can we see our logs in FBA? Where are those logs? Of course we can check the tables that FBA created automatically, but there is a better way to see the logs: Flashback Query.
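For example, a versions query over the whole history (this matches the COL$ example mentioned below):

SELECT versions_starttime,
       versions_operation,
       object_name
FROM   tmp_fba
       VERSIONS BETWEEN TIMESTAMP MINVALUE AND MAXVALUE
WHERE  object_name = 'COL$';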

You don’t even need to find the log table; a flashback query is enough to see the historical data.

In my example I inserted all of DBA_OBJECTS twice into the T_BASE_DATA table, and I used this table to insert twice again into TMP_FBA; that is why you see four COL$ rows.

Finally, if you want to see the FBA tables:
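One way to list them:

SELECT object_name, object_type
FROM   user_objects
WHERE  object_name LIKE 'SYS_FBA%'
ORDER  BY object_name;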

SYS_FBA_DDL_COLMAP_nnnnn is used to store column changes.
SYS_FBA_TCRV_nnnnn is used to store transaction information.
SYS_FBA_HIST_nnnnn is used to store data changes.

There are also two default indexes on those tables.

Why is FBA so much faster?

Trigger logging causes two extra actions: first, calling a trigger, which is a PL/SQL object, and then running another INSERT statement. That means a lot of extra work and a context switch between SQL and PL/SQL for every row.

FBA uses UNDO segments, so basically it does no extra work! Whenever you run a DML statement, Oracle copies all the data you are about to change into undo segments. If you commit, the undo segments become obsolete (unless a SELECT that needs the old data is still actively running), but if you roll back, all the data in the undo segments is copied back into the original table blocks. That is why commit is very fast but rollback is slow. Anyway, FBA reads the undo segments, which means your DML has already generated the undo blocks; FBA just reads and saves them. That’s all.

How about the Security?

One more time, FBA is the winner! You cannot modify the FBA-related tables, and by saying modify I mean any DML or DDL. Even the SYS user cannot drop or delete from the FBA-related tables:
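For example (the history table name is hypothetical, and the exact error text may vary by version):

-- connected as SYS
DELETE FROM sys_fba_hist_73562;
-- ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not
-- allowed on table "SYS_FBA_HIST_73562"

DROP TABLE sys_fba_hist_73562;
-- fails as well; the archive objects are protected from DDL too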

Any user with DROP ANY TABLE or DELETE ANY TABLE privileges can wipe out your trigger-based log table, but not the FBA tables! That brings a huge security advantage. Of course, a user who has the FLASHBACK ARCHIVE ADMINISTER privilege can remove FBA from your table, but this will be an obvious action because the previous data will also be lost!

In Conclusion

Based on the results of my test case, I have decided to convert all my log structures to FBA, but there are a few more tests that I must complete first, like checking PMOs (partition maintenance operations), compression on FBA (since I have an Advanced Compression license) etc.

Thanks for reading.