Goodbye Log Triggers, Welcome Flashback Data Archive
Mustafa, 2019-03-02 (updated 2020-10-25)

Hello, I would like to talk about Flashback Data Archive (or Flashback Archive, FBA) in 12c. FBA was introduced in 11g, so it is not new, but 12c added some very important features that let us use it efficiently and for free. I will focus on the new features rather than on what FBA does and how it works, but let's take a quick look first.

What is FBA?

Basically, FBA is a feature that stores historical information about the data in your table. When you enable FBA for a table, Oracle starts to watch that table and stores every change made to it. That is a broad definition, but it is accurate: all DML changes are logged, and physical changes are recorded as well. You can flash your table back to before a truncate or any other table alteration; if you dropped a column, you can get it back, and so on.

What is new in 12c?

First of all, and I believe this is the most important one: it is now free! In 11g, FBA used compressed tables by default, which requires the Advanced Compression license and therefore additional cost. In 12c, compression is optional. By default Oracle does not create the history tables compressed, so you pay nothing unless you choose to use the compression option.

Secondly, FBA can now store context information along with the data changes. This is the feature I needed most, and its absence was the reason I could not use FBA in 11g. If you have a web application, the application probably connects with a common database user and manages end users itself. That means you cannot tell sessions apart at the database level, because they all share the same user. But if you have a good development team, you can ask them to set context information such as CLIENT_IDENTIFIER; that value can then be used to separate sessions and identify the real users. In 12c, FBA can store this context information with each change, so when we check the historical data we see the context as well.

Is FBA better than Log Triggers?

In my opinion, YES! There are of course many things to check, but I will demonstrate the performance side of FBA.
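One note on the context feature before the test: for the context capture to be useful, the application has to tag its sessions. A minimal sketch of what that could look like (the identifier value "JSMITH" is illustrative; SET_CONTEXT_LEVEL also appears in the test case below):

```sql
-- Capture full session context in the Flashback Data Archive (run once, as a privileged user).
Exec Dbms_Flashback_Archive.Set_Context_Level('ALL');

-- In the application: tag the pooled session with the real end user before running DML.
Begin
  Dbms_Session.Set_Identifier('JSMITH'); -- illustrative value
End;
/

-- The identifier can later be read back per change via
-- Dbms_Flashback_Archive.Get_Sys_Context(versions_xid, 'USERENV', 'CLIENT_IDENTIFIER').
```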
This is my test case:

```sql
-- OLD SCHOOL WAY LOG TRIGGER
Drop Table Tmp Purge;
Create Table Tmp As Select Object_Name, Object_Id From Dba_Objects Where Rownum < 2;

Drop Table Tmp_Log;
Create Table Tmp_Log
(
  Object_Name       Varchar2(128),
  Object_Id         Number,
  Log$_Date         Timestamp,
  Log$_Operation    Varchar2(1), -- I: INSERT, U: UPDATE, D: DELETE
  Log$_User         Varchar2(30),
  Log$_Session_Info Varchar2(4000)
) Pctfree 0
Partition By Range (Log$_Date) Interval (Numtoyminterval(1, 'MONTH'))
( Partition P1 Values Less Than (To_Timestamp('01/03/2019', 'DD/MM/YYYY')) );

Create Or Replace Trigger Tr_Tmp_Log After Insert Or Update Or Delete On Tmp For Each Row
Declare
  R_Row Tmp_Log%Rowtype;
Begin
  R_Row.Log$_Date := Systimestamp;
  R_Row.Log$_User := User;
  R_Row.Log$_Session_Info := Substr(
        'OS_USER: '            || Upper(Sys_Context('USERENV', 'OS_USER'))           || Chr(10) ||
        ' CLIENT_INFO: '       || Upper(Sys_Context('USERENV', 'CLIENT_INFO'))       || Chr(10) ||
        ' SID: '               || Upper(Sys_Context('USERENV', 'SID'))               || Chr(10) ||
        ' INSTANCE: '          || Upper(Sys_Context('USERENV', 'INSTANCE'))          || Chr(10) ||
        ' HOST: '              || Upper(Sys_Context('USERENV', 'HOST'))              || Chr(10) ||
        ' IP_ADDRESS: '        || Upper(Sys_Context('USERENV', 'IP_ADDRESS'))        || Chr(10) ||
        ' MODULE: '            || Upper(Sys_Context('USERENV', 'MODULE'))            || Chr(10) ||
        ' TERMINAL: '          || Upper(Sys_Context('USERENV', 'TERMINAL'))          || Chr(10) ||
        ' SESSION_USER: '      || Upper(Sys_Context('USERENV', 'SESSION_USER'))      || Chr(10) ||
        ' CLIENT_IDENTIFIER: ' || Upper(Sys_Context('USERENV', 'CLIENT_IDENTIFIER')) || Chr(10) ||
        ' BG_JOB_ID: '         || Upper(Sys_Context('USERENV', 'BG_JOB_ID'))         || Chr(10) ||
        ' AUDITED_CURSORID: '  || Upper(Sys_Context('USERENV', 'AUDITED_CURSORID'))  || Chr(10) ||
        ' CURRENT_SCHEMA: '    || Upper(Sys_Context('USERENV', 'CURRENT_SCHEMA'))    || Chr(10) ||
        ' CURRENT_SQL: '       || Upper(Sys_Context('USERENV', 'CURRENT_SQL'))       || Chr(10) ||
        ' DB_DOMAIN: '         || Upper(Sys_Context('USERENV', 'DB_DOMAIN'))         || Chr(10) ||
        ' POLICY_INVOKER: '    || Upper(Sys_Context('USERENV', 'POLICY_INVOKER'))    || Chr(10) ||
        ' PROXY_USER: '        || Upper(Sys_Context('USERENV', 'PROXY_USER')), 1, 4000);

  If Inserting Or Updating Then
    R_Row.Object_Name := :New.Object_Name;
    R_Row.Object_Id   := :New.Object_Id;
    If Inserting Then
      R_Row.Log$_Operation := 'I';
    Else
      R_Row.Log$_Operation := 'U';
    End If;
  Else -- DELETING
    R_Row.Object_Name := :Old.Object_Name;
    R_Row.Object_Id   := :Old.Object_Id;
    R_Row.Log$_Operation := 'D';
  End If;

  Insert Into Tmp_Log Values R_Row;
End;
/

-- CREATE FLASHBACK DATA ARCHIVE (FBA)
Drop Flashback Archive FBA_10year;
Create Flashback Archive FBA_10year Tablespace Users Quota 1g Retention 10 Year;
Exec Dbms_Flashback_Archive.Set_Context_Level('ALL');

-- CREATE TABLE AND ADD TO FBA
Drop Table Tmp_FBA Purge;
Create Table Tmp_FBA As Select Object_Name, Object_Id From All_Objects Where Rownum < 2;
Alter Table Tmp_FBA Flashback Archive FBA_10year;

-- BASE TABLE FOR TEST DATA
Drop Table T_Base_Data Purge;
Create Table T_Base_Data As Select Object_Name, Object_Id From All_Objects;
-- generate more data
Insert Into T_Base_Data Select * From T_Base_Data;
Commit;

-- LOAD INTO MEMORY
Select /*+ full(t) */ Max(Object_Id) From T_Base_Data t;
```

So we have two tables, TMP and TMP_FBA. I created a logging trigger on TMP that writes every DML into the TMP_LOG table along with context information such as client identifier, OS user, terminal, and so on. Note that my trigger is a FOR EACH ROW trigger, so it writes every change to the log table one row at a time. Some might use a compound trigger instead, collecting the changed rows in a collection and writing them to the log table in the AFTER STATEMENT section. This can speed up logging for bulk DML, but if your statements change too many rows it can consume a lot of PGA and cause memory problems, so I did not use it in my example (a sketch of that approach follows).
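For completeness, here is a minimal sketch of the compound-trigger alternative mentioned above. It is illustrative only: it buffers just the two data columns plus the basics (the session-info string is omitted for brevity), and I did not benchmark it here:

```sql
Create Or Replace Trigger Tr_Tmp_Log_C
For Insert Or Update Or Delete On Tmp
Compound Trigger

  -- state shared by all timing points of one triggering statement (lives in PGA)
  Type T_Rows Is Table Of Tmp_Log%Rowtype Index By Pls_Integer;
  G_Rows T_Rows;

  After Each Row Is
    I Pls_Integer;
  Begin
    -- buffer the change in memory instead of inserting immediately
    I := G_Rows.Count + 1;
    G_Rows(I).Log$_Date      := Systimestamp;
    G_Rows(I).Log$_User      := User;
    G_Rows(I).Log$_Operation := Case When Inserting Then 'I' When Updating Then 'U' Else 'D' End;
    G_Rows(I).Object_Name    := Case When Deleting Then :Old.Object_Name Else :New.Object_Name End;
    G_Rows(I).Object_Id      := Case When Deleting Then :Old.Object_Id   Else :New.Object_Id   End;
  End After Each Row;

  After Statement Is
  Begin
    -- one bulk write per statement instead of one insert per row
    Forall J In 1 .. G_Rows.Count
      Insert Into Tmp_Log Values G_Rows(J);
    G_Rows.Delete;
  End After Statement;

End Tr_Tmp_Log_C;
/
```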
By the way, to make the test repeatable I created the T_Base_Data table and use it to populate both test tables. My FBA archive is not compressed (I did not use the Optimize Data clause), and neither is my Tmp_Log table. I will run some DML and compare the performance; I also want to compare the sizes of the tables, which gives useful information too. First I insert data with an INSERT ... SELECT statement, then insert the same data row by row using a FOR loop.

```sql
Set Timing On;

Insert Into Tmp_FBA Select * From T_Base_Data;

137162 rows created.
Elapsed: 00:00:00.07

Insert Into Tmp Select * From T_Base_Data;

137162 rows created.
Elapsed: 00:00:22.13

/******************************************************************/

Begin
  For Rec In (Select * From T_Base_Data) Loop
    Insert Into Tmp_FBA Values Rec;
  End Loop;
End;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:05.12

Begin
  For Rec In (Select * From T_Base_Data) Loop
    Insert Into Tmp Values Rec;
  End Loop;
End;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:33.62

Commit;

Commit complete.

Select 'After Insert TRIGGER LOG TABLE : ' || Trunc(Sum(Bytes/1024/1024)) || 'MB' Size_MB
From   User_Segments Where Segment_Name = 'TMP_LOG';
----------------------------------------------------------------------------
After Insert TRIGGER LOG TABLE : 112MB

Select 'After Insert FBA Related Objects: ' || Trunc(Nvl(Sum(Bytes/1024/1024),0)) || 'MB' Size_MB
From   User_Segments
Where  Segment_Name In (Select Object_Name From User_Objects Where Object_Name Like 'SYS_FBA%');
----------------------------------------------------------------------------
After Insert FBA Related Objects: 35MB
```

When we check the timings, we see an unbelievable difference:

Trigger Bulk Insert: 22.13 seconds
FBA Bulk Insert: 00.07 seconds
Trigger Row-by-Row Insert: 33.62 seconds
FBA Row-by-Row Insert: 05.12 seconds

So for the performance of our INSERT statements, the winner is definitely FBA. If we check log sizes, our trigger-populated log table has reached 112 MB, while the FBA-related objects total only 35 MB. One of the best things about FBA is that it generates very little log data for INSERTs, because the inserted data is already in the main table. That alone saves a lot of space, so on logging size FBA is the winner again!
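If you want to sanity-check what FBA has captured at this point, a flashback versions query can count the recorded operations. A minimal sketch (output will vary with your data):

```sql
-- Count row versions per operation recorded so far (I = insert, U = update, D = delete).
-- Rows with a NULL operation are the current row images, not historical versions.
Select versions_operation, Count(*) As version_count
From  (Select Versions_Operation
       From   Tmp_FBA Versions Between Timestamp Minvalue And Systimestamp)
Group By versions_operation;
```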
PS: while running this code, note that the only FBA-enabled table is Tmp_FBA, which is why I check the size of the FBA-related objects with the condition OBJECT_NAME LIKE 'SYS_FBA%'. I will explain those objects at the end.

Let's run some UPDATEs:

```sql
Update Tmp_FBA Set Object_Id = Object_Id + 1;

274,285 rows updated.
Elapsed: 00:00:02.255

Update Tmp Set Object_Id = Object_Id + 1;

274,285 rows updated.
Elapsed: 00:00:53.383

Commit;

Commit complete.

Select 'After Update TRIGGER LOG TABLE : ' || Trunc(Sum(Bytes/1024/1024)) || 'MB'
From   User_Segments Where Segment_Name = 'TMP_LOG';
----------------------------------------------------------------------------
After Update TRIGGER LOG TABLE : 208MB

Select 'After Update FBA Related Objects: ' || Trunc(Sum(Bytes/1024/1024)) || 'MB'
From   User_Segments
Where  Segment_Name In (Select Object_Name From User_Objects Where Object_Name Like 'SYS_FBA%');
----------------------------------------------------------------------------
After Update FBA Related Objects: 126MB
```

Trigger Update: 53.38 seconds
FBA Update: 02.25 seconds
Trigger Log Size: 96 MB (208 - 112)
FBA Log Size: 91 MB (126 - 35)

The winner is still FBA.

Delete:

```sql
Delete Tmp_FBA;

274285 rows deleted.
Elapsed: 00:00:01.92

Delete Tmp;

274285 rows deleted.
Elapsed: 00:00:48.24

Commit;

Commit complete.
```
```sql
Select 'After Delete TRIGGER LOG TABLE : ' || Trunc(Sum(Bytes/1024/1024)) || 'MB'
From   User_Segments Where Segment_Name = 'TMP_LOG';
----------------------------------------------------------------------------
After Delete TRIGGER LOG TABLE : 312MB

Select 'After Delete FBA Related Objects: ' || Trunc(Sum(Bytes/1024/1024)) || 'MB'
From   User_Segments
Where  Segment_Name In (Select Object_Name From User_Objects Where Object_Name Like 'SYS_FBA%');
----------------------------------------------------------------------------
After Delete FBA Related Objects: 174MB
```

Trigger Delete: 48.24 seconds
FBA Delete: 01.92 seconds
Trigger Log Size: 104 MB (312 - 208)
FBA Log Size: 48 MB (174 - 126)

And the winner is again FBA!

Everything is awesome, but how can we see our logs in FBA? Where are those logs? Of course we can check the tables that FBA created automatically, but there is a better way to see the history: a flashback query.

```sql
Select Versions_Startscn, Versions_Endscn, Versions_Xid, Versions_Operation, TMP_FBA.*,
       Dbms_Flashback_Archive.Get_Sys_Context(Versions_Xid, 'USERENV', 'SESSION_USER')      As Session_User,
       Dbms_Flashback_Archive.Get_Sys_Context(Versions_Xid, 'USERENV', 'CLIENT_IDENTIFIER') As Client_Identifier,
       Dbms_Flashback_Archive.Get_Sys_Context(Versions_Xid, 'USERENV', 'OS_USER')           As Os_User
From   TMP_FBA Versions Between Timestamp Minvalue And Systimestamp
Where  Object_Name = 'COL$'
Order  By 1;
```

You do not even need to find the log table; a flashback query is enough to see the historical data. In my example I inserted all of dba_objects twice into the T_Base_Data table and then inserted that table twice into Tmp_FBA, which is why you see four COL$ rows.

Finally, if you want to see the FBA tables:

```sql
Select table_name From user_tables
Where  table_name In (Select Object_Name From User_Objects Where Object_Name Like 'SYS_FBA%');

TABLE_NAME
------------------------------------------------
SYS_FBA_DDL_COLMAP_76919
SYS_FBA_TCRV_76919
SYS_FBA_HIST_76919
```

SYS_FBA_DDL_COLMAP_nnnnn stores column changes, SYS_FBA_TCRV_nnnnn stores transaction information, and SYS_FBA_HIST_nnnnn stores the data changes. There are also two default indexes on those tables.

Why is FBA so much faster? Trigger-based logging causes two extra actions: first a trigger, which is a PL/SQL object, is fired, and then another INSERT statement is executed. That means a lot of extra work plus context switches between SQL and PL/SQL. FBA, on the other hand, uses the UNDO segments, so it basically does no extra work at DML time! Whenever you run a DML statement, Oracle copies the data you are about to change into the undo segments. If you commit, the undo data becomes obsolete (unless an active query still needs it), but if you roll back, the data in the undo segments is copied back to the original table blocks.
That is why a commit is very fast while a rollback is slow. Anyway, FBA reads the undo segments: your DML has already generated the undo blocks, and FBA just reads and saves them. That's all.

How about security? One more time, FBA is the winner! You cannot modify the FBA-related tables, and by "modify" I mean any DML or DDL. Even the SYS user cannot drop or delete the FBA-related tables:

```sql
SQL> show user
USER is "SYS"
SQL> delete from mustafa.SYS_FBA_HIST_76919;
delete from mustafa.SYS_FBA_HIST_76919
                    *
ERROR at line 1:
ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on
table "MUSTAFA"."SYS_FBA_HIST_76919"

SQL> drop table mustafa.SYS_FBA_HIST_76919;
drop table mustafa.SYS_FBA_HIST_76919
                   *
ERROR at line 1:
ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on
table "MUSTAFA"."SYS_FBA_HIST_76919"
```

Any user with the DROP ANY TABLE or DELETE ANY TABLE privilege can delete your trigger-based logging table, but not the FBA tables. That is a huge security advantage. Of course, a user with the FLASHBACK ARCHIVE ADMINISTER privilege can remove FBA from your table, but that would be an obvious action because the previous data would be lost along with it!

In Conclusion

Based on the results of my test case, I have decided to convert all my logging structures to FBA, but there are a few more tests I must complete first, such as checking PMOs (partition maintenance operations) and compression on FBA (since I have the Advanced Compression license; see the sketch below). Thanks for reading.
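For reference, the compression test mentioned in the conclusion would start from the Optimize Data clause. A sketch (the archive name and sizing are illustrative, and this requires the Advanced Compression license):

```sql
-- Create a compressed Flashback Data Archive (needs Advanced Compression).
Create Flashback Archive FBA_10year_comp
  Tablespace Users Quota 1g
  Optimize Data
  Retention 10 Year;

-- Or switch an existing archive (archive name from the test case above):
Alter Flashback Archive FBA_10year Optimize Data;
```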
Hi, I gave it a try and realized that the flashback query gets unacceptably slow as the sys_context information grows. Therefore I gave up on this attempt. Have you had other experiences?
Hi Peter, thanks for your comment. I use FDA for a customer who has had no problems so far. Of course it is not as fast as selecting the main table, but the performance difference is no worse than old trigger-based logging (writing every DML into another table and querying that). As a developer I have not used it inside an application, so I have not measured that performance, but I will check it out. I don't think your context size is the issue here because, in most cases, the table data is much bigger than the context data, so it should only cause a little extra storage in the FDA tables.
Dear Mustafa, sorry, I should have described the problem more clearly. I work mainly with 3-tier applications where I get the user from CLIENT_IDENTIFIER, and therefore I needed the feature exec dbms_flashback_archive.set_context_level(level => 'ALL'); This is what makes the flashback query that finds out who made the change incredibly slow! At https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9531914100346195699 more disadvantages and problems are described.
Hi Peter, nice to hear from you again. I looked at what you pointed out. The tables I added to FDA are mostly definition tables, so I had not realized how much performance is affected. I also just learned something from you (from the link you provided): I had never noticed the SYS_FBA_CONTEXT_AUD table, and it is huge! I wanted to test this with a big table, but on my local database I got a strange error (ORA-01405: fetched column value is NULL) when I tried to get a context value, so I cannot do it right now. My first impression is that SYS_FBA_CONTEXT_AUD has no indexes and is probably queried by the XID column, so adding an index on that column might improve the performance. Of course SYS objects shouldn't be touched, but even support has no documentation about this table, so I will try this and probably write another post. Thanks for informing me about these subjects. Also, keeping the SYS_FBA_CONTEXT_AUD table in the SYSTEM tablespace is a big mistake on Oracle's part; at the very least they should allow moving it to another tablespace.
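For anyone curious, the experiment described above would look roughly like this. To be clear: this touches a SYS-owned, undocumented table, is unsupported, and the column name is only the assumption stated in the comment above; it is a sketch of the idea, not a recommendation:

```sql
-- UNSUPPORTED experiment: index the undocumented context-audit table by transaction id,
-- on the assumption that DBMS_FLASHBACK_ARCHIVE.GET_SYS_CONTEXT probes it by XID.
Create Index sys.ix_fba_context_aud_xid On sys.sys_fba_context_aud (xid);
```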
Thanks for the above explanation. In the example above, in the case of an update I see only 'U' records, but I have a requirement where, for one update, I need to capture both the before-update and after-update rows in these flashback tables. Is that possible? If yes, kindly share an example of how to achieve it.
Hi Madhan, thanks for asking. Here is a simple example:

```sql
create table tmp (id number, mydata varchar2(80));
alter table tmp flashback archive fda_test;

insert into tmp values (1, 'test1');
commit;
exec dbms_lock.sleep(5);

update tmp set mydata = 'test1-update1' where id = 1;
commit;
exec dbms_lock.sleep(5);

update tmp set mydata = 'test1-update2' where id = 1;
commit;
exec dbms_lock.sleep(5);

select Versions_Starttime, versions_endtime, Versions_Operation, tmp.*
from   tmp versions between scn minvalue and maxvalue
order  by Versions_Starttime;
```

The output of this code is:

```
VERSIONS_STARTTIME                VERSIONS_ENDTIME                  V         ID MYDATA
--------------------------------- --------------------------------- - ---------- --------------------
25-OCT-20 12.03.57.000000000 PM   25-OCT-20 12.04.06.000000000 PM   I          1 test1
25-OCT-20 12.04.06.000000000 PM   25-OCT-20 12.04.15.000000000 PM   U          1 test1-update1
25-OCT-20 12.04.15.000000000 PM                                     U          1 test1-update2
```

So FDA does not store separate "before & after" rows; that would be redundant information to store. The "before update" row is simply the previous version (the previous update or insert, i.e. the previous row) in this result: the "before" of the first update (test1-update1) is the insert row, and the "before" of the second update (test1-update2) is the first update. I ordered the results by versions_starttime so you can follow the chain.
Thanks for your quick reply. My current application does further processing based on before-update and after-update rows using CDC: when a single update occurs in tmp (the table), it makes two entries in cdc_tmp (the CDC table), one with operation "UU" (before update) and one with operation "UN" (after update), copying all column values from the tmp table. I believe that with the Flashback Archive option we won't get this type of dataset (the kind CDC provides today). Do we have any other option to achieve this? (I don't want to go back to triggers, as they impact performance.)
Hi Madhan, it is not a good idea to store both "before" and "after" data, because it will double your log size; you would store some data twice. If you want to see the data as before/after pairs, you can join your flashback query to itself:

```sql
select t_before.id     as before_id,
       t_before.mydata as before_mydata,
       t_after.id      as after_id,
       t_after.mydata  as after_mydata,
       t_after.Versions_Starttime,
       t_after.versions_endtime,
       t_after.Versions_Operation
from   tmp versions between scn minvalue and maxvalue t_before
join   tmp versions between scn minvalue and maxvalue t_after
       on  t_before.id = t_after.id
       and t_before.versions_endtime = t_after.versions_starttime
order  by t_after.Versions_Starttime;
```

I hope this helps.
We don't keep it for long. The cdc_tmp (CDC) table is like a normal table: once the application consumes/processes the data from cdc_tmp, it purges the transaction from it. Thanks for your query; it seems it does not display the 'I' (insert) operation records. GoldenGate is one option, but I am checking whether there is any other alternative.
It is just a matter of SQL now; use a full join:

```sql
select t_before.id     as before_id,
       t_before.mydata as before_mydata,
       t_after.id      as after_id,
       t_after.mydata  as after_mydata,
       t_after.Versions_Starttime,
       t_after.versions_endtime,
       t_after.Versions_Operation
from   tmp versions between scn minvalue and maxvalue t_before
full join tmp versions between scn minvalue and maxvalue t_after
       on  t_before.id = t_after.id
       and t_before.versions_endtime = t_after.versions_starttime
order  by t_after.Versions_Starttime;
```

You can get any data in any format; it is up to you.
Appreciate your help.

1) If the sequence of transactions on tmp is:
Seq1: insert
Seq2: update 1
Seq3: update 2
Seq4: delete

then I need output from the archive in this format:
Row1: insert
Row2: before update
Row3: after update
Row4: before update
Row5: after update
Row6: delete

2) Is it possible to include rowid and rsid$ in the output from the archive table? Please advise.
Hi Madhan, this kind of question should be asked on community.oracle.com, since this is just a blog 🙂 but of course I would like to help, and as I said earlier, once you have the data in the flashback archive, the rest is just a SQL statement. Here is an example of what you asked for:

```sql
with mydata (id, mydata, type, Versions_Starttime, Versions_endtime, Versions_Operation, time_order) as
(
  select t1.id, t1.mydata,
         Decode(Versions_Operation, 'D', 'DELETE', 'I', 'INSERT', 'AFTER UPDATE') type,
         t1.Versions_Starttime, t1.versions_endtime, t1.Versions_Operation,
         t1.Versions_Starttime time_order
  from   tmp versions between scn minvalue and maxvalue t1
  union all
  select t2.id, t2.mydata,
         Decode(mydata.Versions_Operation, 'D', 'BEFORE DELETE', 'BEFORE UPDATE'),
         t2.Versions_Starttime, t2.versions_endtime, t2.Versions_Operation,
         mydata.Versions_Starttime time_order
  from   tmp versions between scn minvalue and maxvalue t2, mydata
  where  t2.id = mydata.id
  and    t2.versions_endtime = mydata.versions_starttime
  and    mydata.type in ('AFTER UPDATE', 'DELETE', 'INSERT')
  and    t2.Versions_Operation <> 'D'
)
select * from mydata order by time_order, versions_starttime;
```

Have a nice day. Edit: I edited my reply for a better "before & after" representation.
Thanks a lot, I am back. Can we give a table name to this archive table and then delete from it like a normal table? Your flashback approach is really helping me replace CDC. In the current design I pick transactions from the CDC table and purge them once they are processed, so I am looking for a similar option in this flashback design; I don't want to create a separate program to do this. Please let me know if any option is available.
VERSIONS_STARTTIME / VERSIONS_ENDTIME are both TIMESTAMP(9) columns – how come we don't see fractions of a second?
Hi Oj, there is a very good explanation here: https://connor-mcdonald.com/2019/06/13/smon_scn_time-and-ora-8161-digging-deeper/ In short, Oracle doesn't store a corresponding timestamp for every SCN.
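In other words, the version timestamps are derived from the SCN through a coarse mapping table, so for precise ordering you can rely on the SCN pseudocolumns instead. A small sketch (table and data from the examples above):

```sql
-- Order by SCN for exact sequencing; SCN_TO_TIMESTAMP gives only an approximate time
-- (its resolution depends on the internal SCN-to-time mapping, roughly a few seconds),
-- and it raises ORA-08181 for SCNs older than the mapping's retention.
select versions_startscn,
       scn_to_timestamp(versions_startscn) as approx_time,
       versions_operation, tmp.*
from   tmp versions between scn minvalue and maxvalue
order  by versions_startscn;
```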