Using Temp instead of Undo for GTTs

Hello everyone,

With 12c we have a very nice new option: Temp Undo for GTTs (Global Temporary Tables). As you know, temporary tables are used to store temporary data. Their data is stored in the TEMP tablespace, so it is not vital. Temporary tables generate very little redo, which makes them faster than heap tables. If you experience an instance crash that data will be lost, but as I said, it is not vital.

Even though we say GTTs generate less redo, they still cause redo to be generated because of UNDO! Temporary table data does not need to be recovered after an instance crash, but you might still need to run a rollback. Whenever you run a DML on a table, Oracle copies the original data to the UNDO tablespace in case you need to roll back your operation, and if you do roll back, the original data is copied back to the table from UNDO. By the way, that is why commit is so fast while rollback takes about as long as the DML operation itself.

So, whenever you run a DML, Oracle generates two basic groups of redo data: one for the changes to the table blocks themselves and one for the changes to the undo blocks. GTTs generate no (or at least very little) redo for their own blocks, so they perform better, but your DML still causes the original data to be copied to UNDO, that changes undo blocks, and those changes generate redo. In other words, GTTs cause redo to be generated indirectly. After your DML on a GTT, if you issue a rollback, the original data is copied back from UNDO to the GTT.

With 12c, we have an option to choose where the undo data is written, and we can choose to write it to the temporary tablespace itself. The point is that there will be almost no redo, and that will make our GTTs much faster. To achieve this you need to set a new database parameter called TEMP_UNDO_ENABLED. By default this parameter's value is FALSE, and you can set it at system or session level.

Let's demonstrate this, but I would like to add a note before we go! If you have already used a GTT in a session, changing the parameter's value will not affect the result. You must set TEMP_UNDO_ENABLED before touching the GTT in that session (or simply set it at system level).
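Both forms are straightforward; TEMP_UNDO_ENABLED is a standard 12c parameter:

alter session set temp_undo_enabled = true;
-- or database-wide:
alter system set temp_undo_enabled = true;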

First, to check the redo size, I wrote a small package that shows the difference in used redo size between the previous call of the package and the current one.

Whenever you call the pkg_get_stats.sp_get_redo_size procedure, it writes the difference to dbms_output.
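The package source itself is not listed here; a minimal sketch of how such a package could look (reading the 'redo size' statistic from v$mystat; the structure, beyond the sp_get_redo_size name, is my assumption) is:

create or replace package pkg_get_stats as
  procedure sp_get_redo_size;
end pkg_get_stats;
/

create or replace package body pkg_get_stats as

  g_prev_redo number := 0;   -- value captured at the previous call, kept per session

  procedure sp_get_redo_size is
    v_redo number;
  begin
    -- current session's cumulative redo (needs select privilege on v_$mystat and v_$statname)
    select ms.value
      into v_redo
      from v$mystat ms, v$statname sn
     where ms.statistic# = sn.statistic#
       and sn.name = 'redo size';

    dbms_output.put_line('redo size difference: ' || to_char(v_redo - g_prev_redo));
    g_prev_redo := v_redo;
  end sp_get_redo_size;

end pkg_get_stats;
/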

So let's start with a fresh SQL*Plus session (always use SQL*Plus for tests, because GUI tools might do some extra work that can affect your results). My test case will show us three things:

1- How much redo is generated by the DML operations (including the table create command)
2- How much temp space is used by the session
3- How much time is required to complete the DMLs

Session 1, without using temp undo:
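The actual test script is not reproduced here, but a comparable sketch (using the package above and a hypothetical GTT named tmp_gtt) would be something like:

create global temporary table tmp_gtt
on commit preserve rows
as select * from dba_objects where 1 = 0;

set timing on
exec pkg_get_stats.sp_get_redo_size

insert into tmp_gtt select * from dba_objects;
update tmp_gtt set object_name = lower(object_name);
delete from tmp_gtt;

exec pkg_get_stats.sp_get_redo_size

-- temp space currently used by this session
select round(sum(u.blocks) * p.value / 1024) temp_kb
  from v$tempseg_usage u, v$parameter p
 where u.session_addr = (select saddr from v$session
                          where sid = sys_context('userenv', 'sid'))
   and p.name = 'db_block_size'
 group by p.value;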

I suppressed the DML output. As you can see, the job completed in 24 seconds, generated ~733 KB of redo and used 559 KB of temp space. The total generated data is around 1292 KB.

Now let’s run the same code after enabling temp undo:

Session 2, with temp undo:

As you can see, the total run time decreased from 24 seconds to 15, which is a huge performance gain. Almost no redo was generated, but of course temp usage increased; even so, the total amount (1118 KB) is less than before (1292 KB).

PROS:
1- You will probably get faster execution times! I say probably because if your undo tablespace sits on much faster disks than your temp tablespace, you might actually experience a performance loss! On my system they have the same disk structure and performance.
2- You will generate less redo, which is very important. Don't think of this as just a performance issue: if you have a Data Guard standby at a distant location, every redo log record is shipped to it over the network. Decreasing the generated redo size decreases your network traffic and also the amount of work on the standby database, because unnecessary temp table insert/update/delete operations won't be replayed. Again, a performance gain for everything.
3- You will need less UNDO space, since your GTTs won't use it anymore.

CONS:

1- You will need more space in your TEMPORARY tablespace, because it will be used for the undo data of GTTs.
2- Your TEMPORARY tablespaces should be on fast disks so you actually get the performance gain.

In conclusion, I can't wait to see results from active usage; I will enable this on a customer's database in the coming days and share what I find.

thanks to all.

Edit: I have already used this new feature on one of my customer's reports and got very good results. Performance of the report improved by around 10% and redo size decreased significantly, but I want to be cautious before setting this parameter at database level, so for now I use it at session level for specific operations.

EM Express Basic Troubleshooting

With 12c, unfortunately, we lost the Enterprise Manager Database Console, but we now have the pre-installed EM Express. If you remember the EM console, you had to install it with the emca utility and many parameters. EM Express saves you from that because it is embedded; you just need to set a few things in your database. If you have already tried to start EM Express and failed, I hope this article gives you the basic idea.

First, EM Express obviously runs on a specific port. To learn this port:

select dbms_xdb_config.getHttpsPort() from dual;

This will give you the port for EM Express, and you can simply open it using a link like this:

https://10.0.0.1:5050/em

Please replace 10.0.0.1 with your database server's IP address (and 5050 with your port) and it should work. It is just that simple. I also want to remind you that you can use HTTP ports instead of HTTPS; simply use sethttpport and gethttpport in dbms_xdb_config.
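For example (the port number is just an example):

exec dbms_xdb_config.sethttpport(8080);
select dbms_xdb_config.gethttpport() from dual;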

But if you don't see the EM page there, then you should follow these steps:

1- If you don't see any port from the dbms_xdb_config.gethttpsport() function, you might need to set it first using:

exec dbms_xdb_config.sethttpsport(5050);

You can replace 5050 with any port you want.

2- You have set the port and can see it as the result of the gethttpsport() function, but the EM Express page is still not working.

Check your database parameters first:

show parameter shared_servers

You must see at least 1 for the shared_servers parameter; if you see zero or it is not set, set it to 1.
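One shared server is enough for EM Express, and you can set it on the fly:

alter system set shared_servers = 1;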

EM Express calls are handled by the LISTENER process, so your listener should be aware of your EM Express service:

lsnrctl status

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Mustylaptop)(PORT=1521)))
STATUS of the LISTENER
————————
Alias LISTENER
Version TNSLSNR for 64-bit Windows: Version 12.2.0.1.0 – Production
Start Date 08-JUN-2019 23:32:44
Uptime 0 days 23 hr. 34 min. 43 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File C:\app\musta\product\12.2.0\dbhome_1\network\admin\listener.ora
Listener Log File C:\app\musta\diag\tnslsnr\MustyLaptop\listener\alert\log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=MustyLaptop)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1521ipc)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=MustyLaptop)(PORT=5500))(Security=(my_wallet_directory=C:\APP\MUSTA\admin\orcl12\xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary…
Service “CLRExtProc” has 1 instance(s).
Instance “CLRExtProc”, status UNKNOWN, has 1 handler(s) for this service…
Service “orcl12” has 1 instance(s).
Instance “orcl12”, status READY, has 1 handler(s) for this service…
Service “orcl12XDB” has 1 instance(s).
Instance “orcl12”, status READY, has 1 handler(s) for this service…
The command completed successfully

As you can see, my database name is "orcl12" and the service needed for EM Express is "orcl12XDB". Your listener should be listing this service. If your listener is not on port 1521 (the default), then the local_listener parameter in your database should point to this listener (by its alias), or it should be null for the default listener.
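To check it, and to set it if needed (the alias LISTENER_ORCL12 below is just an example and must resolve in your tnsnames.ora):

show parameter local_listener

alter system set local_listener = 'LISTENER_ORCL12';
alter system register;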

Since EM Express uses a shared server connection, your dispatchers parameter should be set properly:

show parameter dispatchers

dispatchers string (PROTOCOL=TCP) (SERVICE=orcl12XDB)

Again, orcl12XDB (<sid>XDB) is indicated in the dispatchers parameter.
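If yours is missing or points to the wrong service, you can set it like this (replace orcl12 with your own SID):

alter system set dispatchers = '(PROTOCOL=TCP) (SERVICE=orcl12XDB)';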

So this should be enough, but one common mistake: if there is more than one database on the server, they might be trying to use the same port. Be sure every database has its own HTTP/HTTPS port for EM Express.

EM Express is very useful, but I will miss the 11g EM console, and the most important reason is the performance page. Please note that the "Performance Hub" in EM Express requires the Diagnostics Pack, which means extra cost!

thanks.

edit: by mistake I used dbms_xdb instead of dbms_xdb_config. corrected.

Goodbye Log Triggers Welcome Flashback Data Archive

Hello,

I would like to talk about Flashback Data Archive (or Flashback Archive, FBA) in 12c. FBA was introduced in 11g, so it is not new, but it has some very important new features that allow us to use it very efficiently and for free. I would like to focus on the new features more than on what it does and how it works, but let's take a quick look first.

What is FBA?

Basically, FBA is a feature that lets you store historical information about the data in your table. When you enable FBA for a table, Oracle starts to watch that table and stores every change made to it. That is a broad definition, but it is not wrong: all DML changes start being logged, and physical changes are recorded too. You will be able to flash your table back to before a truncate or any other table alteration; if you drop a column, you can get it back, and vice versa.
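Enabling it takes two statements; the archive name, tablespace, quota and retention below are just example values, and the creating user needs the FLASHBACK ARCHIVE ADMINISTER privilege:

create flashback archive fba1 tablespace fba_ts quota 10g retention 1 year;

alter table hr.employees flashback archive fba1;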

What is new in 12c?

First of all, and I believe most important: it is free now! In 11g, FBA used compressed tables by default, which requires the Advanced Compression license and therefore additional cost. In 12c compression is optional. By default Oracle does not create those tables as compressed, so you don't have to pay anything unless you want to use the compression option.

Secondly, FBA can now store context information along with the data changes, which is what I needed most and why I couldn't use it in 11g. If you have a web application, the application will probably connect with a common database user and manage end users itself. This means you cannot tell sessions apart because they all use the same database user, but if you have a good development team you can ask them to set some context information such as client_identifier. That data can then be used to separate sessions and identify real users, for example. With 12c, FBA is able to store this information with the changes, and when we check the historical data we can see all the context information too.
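The setup is just one call to dbms_flashback_archive (the identifier value below is a made-up example):

-- keep the full sys_context information with every archived change
exec dbms_flashback_archive.set_context_level('ALL');

-- the application tags each real end user on its pooled session
exec dbms_session.set_identifier('real_user_42');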

Is FBA better than Log Triggers?

In my opinion, YES! Of course there are many things to check, but I will try to demonstrate the performance of FBA.

This is my test case:

So we have two tables, TMP and TMP_FBA. I created a logging trigger on TMP that writes every DML into the TMP_LOG table with some context information such as client identifier, os_user, terminal etc. At this point you can see that my trigger is a FOR EACH ROW trigger, so it writes every change one by one to the log table. Some might use a compound trigger instead, collect the changed rows into a collection and write them to the log table in the AFTER STATEMENT section. That can optimize logging for bulk DMLs, but if your DMLs change too many rows it can consume too much PGA and cause memory problems, so I didn't use it in my example. By the way, to provide a stable data source I created a T_BASE_DATA table and I will use it to populate my test tables.
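The actual scripts are not listed here; a minimal sketch of the described setup (the column list and logged attributes are my assumptions) could look like this:

create table t_base_data as select * from dba_objects;

create table tmp     as select * from t_base_data where 1 = 0;
create table tmp_fba as select * from t_base_data where 1 = 0;

create table tmp_log (
  object_id   number,
  object_name varchar2(128),
  object_type varchar2(30),
  dml_type    varchar2(1),
  client_id   varchar2(64),
  os_user     varchar2(64),
  terminal    varchar2(64),
  log_date    date
);

create or replace trigger trg_tmp_log
after insert or update or delete on tmp
for each row
begin
  insert into tmp_log
  values (nvl(:new.object_id,   :old.object_id),
          nvl(:new.object_name, :old.object_name),
          nvl(:new.object_type, :old.object_type),
          case when inserting then 'I' when updating then 'U' else 'D' end,
          sys_context('userenv', 'client_identifier'),
          sys_context('userenv', 'os_user'),
          sys_context('userenv', 'terminal'),
          sysdate);
end;
/

-- no "optimize data" clause, so the history table is not compressed
alter table tmp_fba flashback archive fba1;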

My FBA table is not compressed (I didn't use the OPTIMIZE DATA clause) and neither is my TMP_LOG table. I will run some DML and compare the performance. I also want to compare the sizes of the tables, which will give useful information too. First I will insert some data with an INSERT ... SELECT statement, then insert the same data row by row using a FOR loop.

When we check the timings, we see an unbelievable difference:

Trigger Bulk Insert: 22.13 seconds
FBA Bulk Insert       : 00.07 seconds

Trigger Row By Row Insert: 33.62 seconds
FBA Row By Row Insert       : 05.12 seconds

So for the performance of our insert statements, the winner is definitely FBA. If we check log sizes, our trigger-populated log table has reached 112 MB, but the FBA related objects are only 35 MB. One of the best things about FBA is that it does not generate much log data for INSERTs, because the original data is already in our table. This alone saves a lot of space. So we can say that for logging size FBA is the winner again!

PS: In my scripts the only FBA table is TMP_FBA, so while checking the size of the FBA related objects I used an "object_name like 'SYS_FBA%'" condition. I will explain those objects at the end.
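The size check itself is just a dba_segments query, something like:

select segment_name, round(sum(bytes) / 1024 / 1024) size_mb
  from dba_segments
 where segment_name like 'SYS_FBA%'
 group by segment_name;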

Let’s run some UPDATE:

Trigger Update : 53.38 seconds
FBA Update        : 02.25 seconds

Trigger Log Size : 96 MB (208 – 112)
FBA Log Size        : 91 MB (126 – 35)

Winner is still FBA.

Delete:

Trigger Delete : 48.24 seconds
FBA Delete        : 01.92 seconds

Trigger Log Size : 104 MB (312-208)
FBA Log Size        : 48 MB (174-126)

and winner is again FBA!

Everything is awesome, but how can we see our logs in FBA? Where are those logs? Of course we can check the tables that FBA created automatically, but there is a better way to see the history: Flashback Query.

You don't even need to find the log table; a flashback query is enough to see the historical data.
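For example (the timestamps are arbitrary):

-- the table as it was 15 minutes ago
select * from tmp_fba as of timestamp (systimestamp - interval '15' minute);

-- every version of the rows in the last hour, with the operation that created it
select object_id, object_name, versions_operation, versions_starttime
  from tmp_fba versions between timestamp (systimestamp - interval '1' hour) and systimestamp;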

In my example I inserted all of dba_objects twice into the T_BASE_DATA table, and then used that table to insert twice again into TMP_FBA; that is why you see four rows for COL$.

Finally, if you want to see the FBA tables:
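You can list them from the dictionary, for example:

select owner_name, table_name, archive_table_name, flashback_archive_name
  from dba_flashback_archive_tables;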

SYS_FBA_DDL_COLMAP_nnnnn is used to store column changes.
SYS_FBA_TCRV_nnnnn is used to store transaction information.
SYS_FBA_HIST_nnnnn is used to store data changes.

There are also two default indexes on those tables.

Why is FBA so much faster?

Trigger-based logging causes two extra actions: first calling a trigger, which is a PL/SQL object, and then running another insert statement. That means a lot of extra work and a context switch between SQL and PL/SQL for every row.

FBA uses UNDO segments, so basically it does no extra work! Whenever you run a DML statement, Oracle copies the data you are about to change into undo segments. If you commit, the undo becomes obsolete (unless a query still needs it for read consistency), but if you roll back, all the data in the undo segments is copied back to the original table blocks; that is why commit is so fast while rollback is slow. Anyway, FBA reads the undo segments, which means your DML has already generated the undo blocks and FBA just reads and saves them. That's all.

How about the Security?

One more time, FBA is the winner! You cannot modify FBA related tables, and by modify we mean any DML or DDL. Even the SYS user cannot drop or delete from FBA related tables:
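For example (the numeric suffix below is hypothetical), even as SYS a delete attempt should fail with an ORA-55622 error:

delete from sys_fba_hist_91456;
-- ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on table ...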

Any user with the DROP ANY TABLE or DELETE ANY TABLE privilege can wipe out your trigger-based logging table, but not an FBA one! That brings a huge security advantage. Of course a user who has the FLASHBACK ARCHIVE ADMINISTER privilege can remove FBA from your table, but that will be an obvious action because the previous data will be lost as well!

In Conclusion

Based on the results of my test case I decided to convert all my logging structures to FBA, but there are a few more tests that I must complete first, like checking PMOs (partition maintenance operations), compression on FBA (since I have the Advanced Compression license), etc.

thanks for reading.