How to Check Empty Space in a Securefile LOB

Hi,

PS: please check the last SQL statement in this post for a shorter and faster solution. The first SQL shows all information about the lob.

Yesterday, while checking segment space usage data in a client’s database, I saw that some of the LOB segments were absurdly big. So, I wanted to check what is in them and how much free space I could gain.

To do that, I cannot just use the DBMS_LOB.GETLENGTH function, because for the CLOB data type it takes much more space than the original data due to character sets (especially if you use the AL32UTF8 charset). So I just want to check the currently empty space in the lob segment. If you need that kind of data, DBMS_SPACE is the solution. DBMS_SPACE has a SPACE_USAGE procedure which checks a segment and returns space information, but starting with 12c, SECUREFILE is the default lob type, and for securefile lobs you cannot use the same SPACE_USAGE overload as the one you use for tables.

SPACE_USAGE has overloaded versions, and the second and third overloads (in order from top to bottom in the DBMS_SPACE package) are for SECUREFILE lob segments.

here is a sample script I wrote to check currently empty space in securefile lobs:

Caution: I check all LOBs in the database; you might not want to do that, so just modify the SQL and add the necessary filters.
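The script goes something like this (a sketch of the idea, not the exact original; the '#' delimiter, the dba_lobs join and the exception handling are my choices here):

WITH FUNCTION get_space_info(p_owner VARCHAR2, p_segment VARCHAR2, p_partition VARCHAR2)
  RETURN VARCHAR2
IS
  v_seg_blk NUMBER; v_seg_byt NUMBER;
  v_usd_blk NUMBER; v_usd_byt NUMBER;
  v_exp_blk NUMBER; v_exp_byt NUMBER;
  v_une_blk NUMBER; v_une_byt NUMBER;
BEGIN
  dbms_space.space_usage(
    segment_owner       => p_owner,
    segment_name        => p_segment,
    segment_type        => 'LOB',   -- must be "LOB", not "LOB SEGMENT"
    partition_name      => p_partition,
    segment_size_blocks => v_seg_blk, segment_size_bytes => v_seg_byt,
    used_blocks         => v_usd_blk, used_bytes         => v_usd_byt,
    expired_blocks      => v_exp_blk, expired_bytes      => v_exp_byt,
    unexpired_blocks    => v_une_blk, unexpired_bytes    => v_une_byt);
  RETURN v_seg_byt || '#' || v_usd_byt || '#' || v_exp_byt || '#' || v_une_byt;
EXCEPTION WHEN OTHERS THEN RETURN NULL;   -- e.g. segments space_usage cannot read
END;
get_info AS (
  SELECT l.owner, l.table_name, l.column_name, s.segment_name, s.partition_name,
         get_space_info(s.owner, s.segment_name, s.partition_name) AS info
  FROM   dba_lobs l
         JOIN dba_segments s ON s.owner = l.owner AND s.segment_name = l.segment_name
  WHERE  l.securefile = 'YES'
)
SELECT owner, table_name, column_name, segment_name, partition_name,
       ROUND(TO_NUMBER(REGEXP_SUBSTR(info, '[^#]+', 1, 1)) / 1024 / 1024, 2) AS segment_size_mb,
       ROUND(TO_NUMBER(REGEXP_SUBSTR(info, '[^#]+', 1, 2)) / 1024 / 1024, 2) AS used_size_mb,
       ROUND(TO_NUMBER(REGEXP_SUBSTR(info, '[^#]+', 1, 3)) / 1024 / 1024, 2) AS expired_size_mb
FROM   get_info
ORDER  BY segment_size_mb DESC
/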

The first WITH FUNCTION block calls the DBMS_SPACE.SPACE_USAGE procedure and returns the data as a string; the second WITH clause (get_info) calls that function and gets the lob information. By the way, the SEGMENT_TYPE for a lob segment is “LOB SEGMENT”, but DBMS_SPACE.SPACE_USAGE does not accept that as segment_type; you must pass “LOB” instead. The result is then parsed into columns:

SEGMENT_SIZE_MB, which is the total size of the LOB segment in megabytes;
USED_SIZE_MB, which is the actual used size in megabytes;
EXPIRED_SIZE_MB, which is the expired data size (for versioning) in megabytes.

To calculate the empty space, just subtract the used size from the segment size. This is approximately the empty size of your lob segment. If you have a partitioned table, you will get a result for each partition (like the mustafa.tmp table in the example). I hope this script helps you in your job.

EDIT:

I forgot to mention why lob segments have empty space. The biggest reason is delete operations, of course. If you delete a big portion of the table (or lob segment), the deleted space is considered empty space. So, if you delete from a table, should you shrink the lob segment or the table? NO! That space will be used by the table again. If deleting is one of your standard operations, the empty space in the table/lob segment will be reused. But let’s say you inserted many rows by mistake, or because of a bug in your code, and then deleted those rows. If you have enough disk space for the database, there is nothing to worry about (mostly): the table will get many inserts in time and will use that space eventually. But if too much space is reserved in the table/lob segment and you have disk space issues, then you can reclaim it (by shrinking or moving the table/lob segment). So, this is not a regular thing.

EDIT 2:

The SQL above aims to show all information about the lob, like used space, total space, expired space etc. Probably you will just need to see the empty space in the lob, so you can use this SQL statement instead. It works faster because there is no regexp usage and no parsing. I also subtracted the “unexpired bytes” from the total size, since that space is still in use.
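A direct version could look like this (again a sketch, with the same assumptions as above):

WITH FUNCTION get_empty_mb(p_owner VARCHAR2, p_segment VARCHAR2, p_partition VARCHAR2)
  RETURN NUMBER
IS
  v_seg_blk NUMBER; v_seg_byt NUMBER;
  v_usd_blk NUMBER; v_usd_byt NUMBER;
  v_exp_blk NUMBER; v_exp_byt NUMBER;
  v_une_blk NUMBER; v_une_byt NUMBER;
BEGIN
  dbms_space.space_usage(
    segment_owner       => p_owner,
    segment_name        => p_segment,
    segment_type        => 'LOB',
    partition_name      => p_partition,
    segment_size_blocks => v_seg_blk, segment_size_bytes => v_seg_byt,
    used_blocks         => v_usd_blk, used_bytes         => v_usd_byt,
    expired_blocks      => v_exp_blk, expired_bytes      => v_exp_byt,
    unexpired_blocks    => v_une_blk, unexpired_bytes    => v_une_byt);
  -- empty = total - used - unexpired; expired space is free to be reused
  RETURN ROUND((v_seg_byt - v_usd_byt - v_une_byt) / 1024 / 1024, 2);
EXCEPTION WHEN OTHERS THEN RETURN NULL;
END;
SELECT l.owner, l.table_name, l.column_name, s.partition_name,
       get_empty_mb(s.owner, s.segment_name, s.partition_name) AS empty_mb
FROM   dba_lobs l
       JOIN dba_segments s ON s.owner = l.owner AND s.segment_name = l.segment_name
WHERE  l.securefile = 'YES'
ORDER  BY empty_mb DESC
/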

thanks for reading, your comments are always welcome. wish you healthy days.

Audits are not Purged

Hello,

This week I realized that some of my databases’ audit data is not being purged and the audit tables are getting bigger. I already have a scheduler job to purge my audit records; somehow the job completes successfully, but the audit records are not deleted. These are version 19.15, NON-CDB databases.

I am using unified auditing in all of my databases and purge audit records based on the last archive timestamp. Here is a sample code to purge unified audit records:
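The code follows the usual DBMS_AUDIT_MGMT pattern; a sketch of it (the 31-day retention is just an example value):

BEGIN
  -- move the archive timestamp forward; records older than this become purgeable
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    last_archive_time => SYSTIMESTAMP - INTERVAL '31' DAY);

  -- purge everything older than the last archive timestamp
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => TRUE);
END;
/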

This code might belong to Tim Hall (www.oracle-base.com), I am not sure. It basically sets a date, and audit records before that date are deleted. So, to understand the problem I started digging; I didn’t find much at first, but I found the cause before having to start a trace.

I checked what the saved last timestamp was:
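The saved timestamps are exposed through the DBA_AUDIT_MGMT_LAST_ARCH_TS view:

SELECT * FROM dba_audit_mgmt_last_arch_ts;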

That is interesting, because this is a NON-CDB database but it has two different database_ids for the same AUDIT_TRAIL type. One of them is the actual id (2158381708) of the current database, but the other one was unknown (at first). Also, for the rows with database id 2158381708 the LAST_ARCHIVE_TS is correct, but for the other one it was pretty old. Some of my databases have this not-purged-audit problem, and all of them have these double rows in this data dictionary view. So, I thought there might be a bug and maybe the purge code gets a “too many rows” error. I simply deleted the foreign database id rows. The base table of DBA_AUDIT_MGMT_LAST_ARCH_TS is SYS.DAM_LAST_ARCH_TS$, and I deleted the rows:
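Something along these lines (2158381708 is my current dbid; check the base table’s column names on your version before running anything like this):

-- remove the rows left over from the source database (the pre-nid dbid)
delete from sys.dam_last_arch_ts$ where dbid <> 2158381708;
commit;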

Then I re-ran the audit purge code and it worked!

Of course, I didn’t stop there. Why were there different rows in this table? While checking the database ids, I realized that all the databases where I had this problem are duplicated databases. I duplicate PROD to create a TEST or ACP, for example, and every time I duplicate a database I change the database id with the “nid” tool.

I learned that after nid, this table keeps the old rows. I deleted the old database id rows from all my databases and the problem was solved; the purge operation succeeds now. This is something new that I will add to my database duplication steps. If you have a duplicated database and didn’t check the audit rows, you should.

thanks for reading. wish you all healthy, happy, peaceful days.

Hierarchical Blocking Session List

Hi,

Today, I needed to list all blocking sessions hierarchically. As you know, all information about sessions (and blocking sessions) is in the gv$session view (or just v$session if you are using a single instance). I wanted to see who is blocking whom, with the blocker at the top. You can write a simple CONNECT BY (or WITH clause) query, but it will list the blockers at the bottom, so I developed it a little bit. If I remember correctly there was a small script under rdbms/admin, but it was creating some tables etc. I just wanted to get this with a simple SQL statement.

here is the sql:
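Something like this (a trimmed sketch, not the original script; blockers and blocker_hierarchy are the data sources mentioned below, and the column list is shortened):

WITH blockers AS (
  -- every session that is currently blocking another session
  SELECT DISTINCT blocking_instance AS inst_id, blocking_session AS sid
  FROM   gv$session
  WHERE  blocking_session IS NOT NULL
),
blocker_hierarchy AS (
  -- build the tree, starting only from blockers that are not blocked themselves
  SELECT s.inst_id, s.sid, s.serial#, s.username, s.sql_id,
         LEVEL AS lvl, ROWNUM AS tree_order
  FROM   gv$session s
  START WITH s.blocking_session IS NULL
         AND (s.inst_id, s.sid) IN (SELECT inst_id, sid FROM blockers)
  CONNECT BY s.blocking_instance = PRIOR s.inst_id
         AND s.blocking_session  = PRIOR s.sid
)
SELECT LPAD(' ', (h.lvl - 1) * 4) || h.sid AS blocking_tree,
       h.inst_id, h.serial#, h.username,
       (SELECT SUBSTR(q.sql_text, 1, 40)
        FROM   gv$sql q
        WHERE  q.inst_id = h.inst_id
        AND    q.sql_id  = h.sql_id
        AND    ROWNUM = 1) AS current_sql
FROM   blocker_hierarchy h
ORDER  BY h.tree_order;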

First, I get all the blockers’ SIDs (the blockers data source in the query), then I use them to generate the hierarchy (the blocker_hierarchy data source in the query), then I remove some blockers from the hierarchy because they are themselves blocked by other sessions and will be listed under their blockers.

For example, let’s say sid 10 is blocking 2 different sessions (sid 20 and 30), and sid 30 is also blocking some other session (sid 40). This query will produce a result like this:
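(illustrative output, only the tree column shown; indentation marks who is blocked by whom:)

BLOCKING_TREE
-------------
10
    20
    30
        40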

You can see the hierarchy in a visual way, along with the first 40 characters of each session’s currently running SQL statement.

Hope this will be useful for you.

wish you healthy days.

X Forwarding for an Oracle DBA

Hi,

It’s been a long time. I want to write something about X Forwarding. Sometimes you might feel more comfortable using a GUI instead of silent installations. If you are connecting to a Linux server remotely, you must enable X Forwarding to use the GUIs of Oracle products like dbca, netca etc.

First of all, use MobaXterm to eliminate many problems. Moba sets some parameters for you and it also shows you if X Forwarding is enabled or disabled for that session:

just using MobaXterm is not enough of course:

1- create .Xauthority file under your linux user home directory:

touch ~/.Xauthority

2- check whether the X11Forwarding parameter is set to yes in the ssh config:

sudo vim /etc/ssh/sshd_config

it must be yes of course.

3- install xauth packages:

sudo yum install xorg-x11-xauth xorg-x11-utils xorg-x11-fonts-* xorg-x11-font-utils xorg-x11-fonts-Type1

(fonts are not mandatory)

4- create a magic cookie:
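One common way to do it (assuming the mcookie utility from util-linux is available and $DISPLAY is already set by your ssh session):

xauth add $DISPLAY . $(mcookie)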

that is pretty much it. after these steps MobaXterm should be showing you X11Forwarding as enabled.

So far so good, but there is a specific scenario. Many companies use VPN accounts now: you log on to servers with your VPN account, but the Oracle installation could be under another user like “oracle”, and you might not have the password of the oracle user, since it is also a directory user managed by a different team at the hosting company (or your own company). In that case you might be using the “sudo su - oracle” command to switch to the oracle user, and if you do that, X11 Forwarding will not work! As a quick hack you can use this:

my vpn user: mustafa

oracle installation user: oracle

Log in as mustafa, then copy the .Xauthority file to the oracle user’s home directory and change the owner of the file to oracle:
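For example (user names are from the example above):

# run as mustafa; sudo is needed to write into oracle's home directory
sudo cp ~mustafa/.Xauthority ~oracle/.Xauthority
sudo chown oracle ~oracle/.Xauthority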

You can put this into an sh file and run it. You must run these commands with sudo, of course.

The last step is the $DISPLAY parameter. Echo the $DISPLAY parameter as the mustafa user, and after switching to the oracle user, set the parameter to the same value:
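For example (the display value here is just an example; use whatever the echo prints):

echo $DISPLAY                    # as mustafa, note the value, e.g. localhost:10.0
sudo su - oracle
export DISPLAY=localhost:10.0    # the same value you noted as mustafa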

that’s it. you should be able to use X11 Forwarding.

hope this helps.

Note: in my country there are huge fires along almost the entire coastline. Many beautiful towns and villages have burned. People have died, dozens of hills have turned into ash and thousands of animals have died. I hope it ends soon. Climate change is real and it is here; every human should be aware of this.

export import ACL & ORA-24244 error during import

Hello everyone,

It’s been a while since my last post. So, I wanted to write about exporting and importing ACL privileges. They have always been a pain (at least for me). With 12c, Oracle introduced a new and easy way of copying ACLs from db to db: export & import.

so here is the basic command on your source database:
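(a sketch; directory and file names are examples:)

expdp system directory=DATA_PUMP_DIR dumpfile=acl.dmp logfile=acl_exp.log full=y include=NETWORK_ACL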

This will export all stored ACL privileges in your database. To import those ACLs into a new db:
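(after copying the dump file to the target server’s data pump directory:)

impdp system directory=DATA_PUMP_DIR dumpfile=acl.dmp logfile=acl_imp.log full=y include=NETWORK_ACL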

So easy, but during the import you can get an “ORA-24244: invalid host or port for access control list (ACL) assignment” error.

I encountered this error while creating a new db for a client. In my case it happened because RESOLVE privileges shouldn’t be used with a port definition!

Somehow the previous db had port definitions on its RESOLVE privileges, and those ones weren’t imported. To add them to the new db, I simply used this SQL:
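A generator query along these lines (a sketch; it rebuilds the RESOLVE ACEs without the port range):

SELECT 'BEGIN DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(' ||
       ' host => ''' || host || ''',' ||
       ' ace  => xs$ace_type(privilege_list => xs$name_list(''resolve''),' ||
       ' principal_name => ''' || principal || ''',' ||
       ' principal_type => xs_acl.ptype_db)); END;' || CHR(10) || '/' AS ddl
FROM   dba_host_aces
WHERE  privilege = 'RESOLVE'
AND    (lower_port IS NOT NULL OR upper_port IS NOT NULL);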

run this select on the source db and execute the output on the target db.

wish you all happy and healthy days.

Which Column has Changed in Every Row?

Hi,

Edit: Thanks to Chris Saxon from the AskTom team, who checked my code and warned me about a problem. To avoid overloading the PGA, fetch_rows only fetches 1024 rows at a time, so every row-set chunk is 1024 rows. Consequently, for the 1025th row the previous row’s information was lost, because fetch_rows had ended and was called again. He suggested using the XStore to keep data from the previous run of fetch_rows, and I added it. While doing that I removed some unnecessary lines too. Thanks again Chris.

PS: this is not a Polymorphic Table Function (PTF) explanation or introduction; I just wanted to share some code that I wrote before and that is very useful (at least for me). If you want to learn more about PTFs, check https://blog.sqlora.com/en/polymorphic-table-functions-example-transposing-columns-to-rows/

PTF is a great tool that enhances SQL. I think the only thing I still need is a “pointer” definition in PL/SQL after PTF, and I would be very happy 🙂 Even in this example, it would be nice if I could use a pointer to the data collections, which would make the code shorter and more readable.

A few months ago, I was debugging some code because of unnecessary rows. It came down to an SQL statement with a MINUS set operator to eliminate rows, but some rows were not eliminated as expected. I found the rows and tried to find out which column(s) had different values, so that the row was not eliminated by the MINUS; but this table has so many NUMBER columns with many decimal digits that it was really hard to see which column(s) differed.

Since I am too lazy to compare two rows column by column, I created a Polymorphic Table Function (PTF) to find the differences for me, and it worked as I needed. I added a new varchar2 column called DIFFERENT_COLUMNS to the base query (select … minus select …), put the differing column names with their old and new values into that column, and got all of them. After solving my problem, I realized that I can use this code for something even better!!!

While looking at the log data of a table (trigger based, flashback or flashback data archive), I have been asked many times “what changed?”. Which column(s) have changed? It is not easy; mostly it requires checking row by row, column by column. So, my polymorphic table function is a perfect solution for that! Just provide the necessary table or select statement and find all the changed columns at every step.

Here is my basic code:
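The original package is longer (it handles varchar2, number, date and timestamp columns separately); this is a condensed sketch of the same idea, using DBMS_TF.COL_TO_CHAR to get a type-independent string image of every cell (the package name is my choice, SF_Find_Differences is the function from the post):

CREATE OR REPLACE PACKAGE pkg_find_differences AS
  FUNCTION describe(p_table   IN OUT DBMS_TF.TABLE_T,
                    p_exclude IN DBMS_TF.COLUMNS_T DEFAULT NULL)
    RETURN DBMS_TF.DESCRIBE_T;
  PROCEDURE fetch_rows;
END pkg_find_differences;
/
CREATE OR REPLACE PACKAGE BODY pkg_find_differences AS
  FUNCTION describe(p_table   IN OUT DBMS_TF.TABLE_T,
                    p_exclude IN DBMS_TF.COLUMNS_T DEFAULT NULL)
    RETURN DBMS_TF.DESCRIBE_T
  IS
  BEGIN
    FOR i IN 1 .. p_table.column.COUNT LOOP
      -- read every column so fetch_rows can compare it, unless it is excluded
      IF p_exclude IS NULL OR
         p_table.column(i).description.name NOT MEMBER OF p_exclude THEN
        p_table.column(i).for_read := TRUE;
      END IF;
    END LOOP;
    -- add the output column
    RETURN DBMS_TF.DESCRIBE_T(
             new_columns => DBMS_TF.COLUMNS_NEW_T(
               1 => DBMS_TF.COLUMN_METADATA_T(name => 'DIFFERENT_COLUMNS',
                                              type => DBMS_TF.TYPE_VARCHAR2)));
  END describe;

  PROCEDURE fetch_rows IS
    v_rowset DBMS_TF.ROW_SET_T;
    v_rowcnt PLS_INTEGER;
    v_colcnt PLS_INTEGER;
    v_diff   DBMS_TF.TAB_VARCHAR2_T;
    v_old    VARCHAR2(32767);
    v_new    VARCHAR2(32767);
  BEGIN
    DBMS_TF.GET_ROW_SET(v_rowset, v_rowcnt, v_colcnt);
    FOR r IN 1 .. v_rowcnt LOOP
      v_diff(r) := NULL;
      -- on the very first row of the whole query there is nothing to compare
      IF r > 1 OR DBMS_TF.XSTORE_EXISTS('c1') THEN
        FOR c IN 1 .. v_colcnt LOOP
          IF r = 1 THEN
            -- the previous row was the last row of the previous 1024-row
            -- chunk, carried over through the XStore
            v_old := NULL;
            DBMS_TF.XSTORE_GET('c' || c, v_old);
          ELSE
            v_old := DBMS_TF.COL_TO_CHAR(v_rowset(c), r - 1);
          END IF;
          v_new := DBMS_TF.COL_TO_CHAR(v_rowset(c), r);
          -- NVL trick instead of SYS_OP_MAP_NONNULL, which plsql does not know
          IF NVL(v_old, CHR(0)) <> NVL(v_new, CHR(0)) THEN
            v_diff(r) := v_diff(r) || v_rowset(c).description.name ||
                         ': ' || v_old || ' -> ' || v_new || '; ';
          END IF;
        END LOOP;
      END IF;
    END LOOP;
    -- remember the last row of this chunk for the next fetch_rows call
    FOR c IN 1 .. v_colcnt LOOP
      DBMS_TF.XSTORE_SET('c' || c, DBMS_TF.COL_TO_CHAR(v_rowset(c), v_rowcnt));
    END LOOP;
    DBMS_TF.PUT_COL(1, v_diff);
  END fetch_rows;
END pkg_find_differences;
/
CREATE OR REPLACE FUNCTION sf_find_differences(p_table TABLE, p_exclude COLUMNS DEFAULT NULL)
  RETURN TABLE PIPELINED ROW POLYMORPHIC USING pkg_find_differences;
/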

This package has its own polymorphic table function (SF_Find_Differences) which has two parameters; the first is the table or query for which you want to find differences against previous rows. By saying “previous” I mean the default order of whatever you provide to my PTF; I am not ordering the data in any way. I will explain the second parameter later. Let’s make an example:

This will be my sample data (a flashback query). One problem: when you are looking at historical data, mostly you will be checking the history of one specific row, not all rows, so you must filter the rows using the primary key or something similar. Unfortunately you cannot pass a “subquery” to a PTF, but you can pass a WITH data source.

So, I just created a dummy emp table and then updated the employee with id 100: first I changed first_name, then last_name, then salary and commission_pct together. Here is my query:
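The query was along these lines (my dummy table is called emp here, a copy of HR.EMPLOYEES; names are assumptions):

WITH hist AS (
  SELECT employee_id, first_name, last_name, salary, commission_pct,
         versions_starttime, versions_endtime
  FROM   emp VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
  WHERE  employee_id = 100
  ORDER  BY versions_endtime NULLS LAST
)
SELECT * FROM sf_find_differences(hist);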

I think this is so cool 🙂 If you check the last column, it lists what changed (which columns). As you can see, I use a WITH clause to define the specific historical data and order it using VERSIONS_ENDTIME. One problem: I wanted to see when these changes happened (versions_starttime and endtime), but since they are also columns in this query, my DIFFERENT_COLUMNS contains those changed columns too! That is where the second parameter gets involved: it is the list of columns EXCLUDED from change tracking. So:
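Same query, now excluding the two pseudocolumns from the comparison:

WITH hist AS (
  SELECT employee_id, first_name, last_name, salary, commission_pct,
         versions_starttime, versions_endtime
  FROM   emp VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
  WHERE  employee_id = 100
  ORDER  BY versions_endtime NULLS LAST
)
SELECT *
FROM   sf_find_differences(hist, COLUMNS(versions_starttime, versions_endtime));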

Much clearer. On the second row, Steven becomes Mustafa; on the third row, King becomes KALAYCI; and on the last one, salary and commission_pct are changed. It is much easier to track this or list it in your application.

Of course this is not complete code; I only check varchar2, number, date and timestamp columns, not the others. I also didn’t work on “formatting” of the date and timestamp data types, which could be done, but it is still pretty nice.

I wanted to share this with the whole community. I hope this helps.

Edit: while checking whether the old value and the new value are different, I used IF conditions, but I would have liked to use the SYS_OP_MAP_NONNULL function. Unfortunately SYS_OP_MAP_NONNULL is not defined in PL/SQL.

Edit: I made a change to the column exclusion. Instead of a FOR loop I used the “member of” clause.

Wish you healthy, corona & earthquake free days. Unfortunately, an earthquake happened in my country and neighboring countries. Rescue teams are still trying to save people. I hope this never happens anywhere again and no one experiences this pain.

CSV Data in Clob to Columns Using Select

Hi,

Thanks to one of my customers, I have been dealing with CSV data in CLOBs a lot lately. They asked me if I can present the CSV data in a clob column as separate columns using a select statement; they wanted to show this data in their applications. I suggested different methods, but they said this won’t be used much, just once or twice a month, so I created this code and wanted to share it with you:
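A condensed sketch of it, with 5 columns instead of the original 80, and a CASE instead of the small dynamic SQL mentioned below, to keep it short (t_csv_to_table_row is the type from the post; the other names are my guesses):

CREATE OR REPLACE TYPE t_csv_to_table_row AS OBJECT (
  c1 VARCHAR2(4000), c2 VARCHAR2(4000), c3 VARCHAR2(4000),
  c4 VARCHAR2(4000), c5 VARCHAR2(4000),
  CONSTRUCTOR FUNCTION t_csv_to_table_row RETURN SELF AS RESULT
);
/
CREATE OR REPLACE TYPE BODY t_csv_to_table_row AS
  CONSTRUCTOR FUNCTION t_csv_to_table_row RETURN SELF AS RESULT IS
  BEGIN
    RETURN;  -- leaves every attribute NULL, saves writing 80 NULLs
  END;
END;
/
CREATE OR REPLACE TYPE t_csv_to_table AS TABLE OF t_csv_to_table_row;
/
CREATE OR REPLACE FUNCTION sf_csv_to_table(p_csv IN CLOB)
  RETURN t_csv_to_table PIPELINED
IS
  v_line VARCHAR2(32767);
  v_val  VARCHAR2(4000);
  v_row  t_csv_to_table_row;
BEGIN
  -- hard-coded rules: lines end with chr(10), columns are comma separated
  -- and surrounded by double quotes
  FOR i IN 1 .. REGEXP_COUNT(p_csv, '[^' || CHR(10) || ']+') LOOP
    v_line := REGEXP_SUBSTR(p_csv, '[^' || CHR(10) || ']+', 1, i);
    v_row  := t_csv_to_table_row();
    FOR j IN 1 .. 5 LOOP
      -- j-th double-quoted field of the line
      v_val := REGEXP_SUBSTR(v_line, '"([^"]*)"', 1, j, NULL, 1);
      CASE j
        WHEN 1 THEN v_row.c1 := v_val;
        WHEN 2 THEN v_row.c2 := v_val;
        WHEN 3 THEN v_row.c3 := v_val;
        WHEN 4 THEN v_row.c4 := v_val;
        WHEN 5 THEN v_row.c5 := v_val;
      END CASE;
    END LOOP;
    PIPE ROW (v_row);
  END LOOP;
  RETURN;
END sf_csv_to_table;
/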

I am using a pipelined function in this solution. I created a type that defines 80 columns (c1 to c80), then my pipelined function gets the clob value and parses it, first into “lines”, then into “columns”. For this problem we have some strict rules: every line is separated by a new line (chr(10)), every column is separated by a comma, and every column is surrounded by double quotes. So I hard-coded them (if I find time I will make them dynamic).

I also used a small dynamic SQL to assign the column values, because I am lazy and didn’t want to write 80 lines like "v_line.c1 := regexp_substr....".

In addition, I created a “constructor function” for my type because I want to be able to write this:

"v_line t_csv_to_table_row := t_csv_to_table_row();"

If I didn’t create a constructor function, I would have to add 80 NULLs to the statement above, like this:

"v_line t_csv_to_table_row := t_csv_to_table_row(null, null, null, ..., null);"

So, let’s run it:
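For example (tmp_csv_docs and csv_data are hypothetical names for the table holding the clob):

SELECT t.c1, t.c2, t.c3, t.c4, t.c5
FROM   tmp_csv_docs d,
       TABLE(sf_csv_to_table(d.csv_data)) t
WHERE  d.id = 1;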

As you can see, my query always returns 80 columns, and the column data is assigned from the CSV. If there are not enough columns in the CSV, the remaining columns will be null.

Using a polymorphic table function could be a better option for this, but since I had limited time, I produced a quick solution.

hope it helps.

How to convert a LONG column to CLOB directly in a query

Hi,

This is a big problem, since LONG is a very painful data type. You cannot search in it, use it in a CTAS (create table as select), pass it to a PL/SQL object as a parameter, etc. Oracle strongly suggests not using LONG columns anymore, but unfortunately there are many LONG columns in data dictionary views like DBA_TRIGGERS, DBA_TAB_COLS, DBA_MVIEWS, DBA_VIEWS, DBA_TAB_PARTITIONS… I struggled with DBA_TAB_PARTITIONS a lot, for example: HIGH_VALUE is a LONG column, and some of my codes that maintain partition operations need that data to identify the necessary partitions. I did so much work to get that data, and today I finally found a way to convert LONG data to CLOB directly in a query!

I love the Oracle community (community.oracle.com). So many gurus are answering so many questions, and I learn a lot just by reading. I saw a thread on a different topic where a valuable member of the community, Odie_63, provided a brilliant answer: https://community.oracle.com/message/15591644#15591644

The SYS_DBUriGen function allows you to convert a LONG to a CLOB directly. For example, on DBA_TRIGGERS:
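SELECT t.owner, t.trigger_name,
       SYS_DBURIGEN(t.owner, t.trigger_name, t.trigger_body, 'text()').getclob() AS trigger_body_clob
FROM   dba_triggers t;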

TRIGGER_BODY is a LONG column. The first parameters of the function (owner, trigger_name) are required for the uniqueness of the row, and they must be columns of the table. You can find more information about SYS_DBURIGEN in the documentation: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/SYS_DBURIGEN.html#GUID-ABA33BEB-F7B7-477B-9FF2-028D62768797

Of course, the performance is not very good, but the LONG data type is responsible for that. I hope Oracle will remove all LONG columns as soon as possible. There is even an idea thread for this in the community: https://community.oracle.com/ideas/3319

Thanks to community and Odie_63.

Insert multi line text data using sqlldr

Hello Everyone,

I just answered an OTN question about inserting multi-line data using sqlldr and simply wanted to share it here (https://community.oracle.com/thread/4278952).

So, if you have data like this in your text file:
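(an illustrative file, not the original; rows 2 and 4 contain multi-line text:)

1,"single line data"
2,"this is a
multi line
text"
3,"another single line"
4,"and one more
multi line text"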

you need to do some extra work to solve this. In my example I have 4 rows, but 2 of them include multi-line character data. By default sqlldr uses the "newline" character (actually, saying the enter character might be more appropriate) and every new line is accepted as a new row! To solve this, you need to change your "row delimiter/terminator", but this brings some extra burden, because your current data file (csv, txt etc.) needs to be changed: the "new" row delimiter must be added to the end of every row!

this is my table:
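(tmp_x is the table queried at the end of the post; the columns are assumptions matching the sample data:)

create table tmp_x (
  id  number,
  txt varchar2(1000)
);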

This row delimiter can be more than one character, and it should be, because the row delimiter must not appear in your original data. For example, if you have commas in your data, you cannot use a comma as the row delimiter, because every comma would be counted as an end of row. In my example I will use $%&/| as the row delimiter.

First, change your data file and add the new delimiter to your data:
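(the same illustrative file, now with the delimiter appended to every row:)

1,"single line data"$%&/|
2,"this is a
multi line
text"$%&/|
3,"another single line"$%&/|
4,"and one more
multi line text"$%&/|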

Secondly, we need to specify this new delimiter in our control file:
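(a minimal control file sketch; the data file name is an example:)

load data
infile 'multiline_data.txt' "STR X'2425262F7C'"
truncate
into table tmp_x
fields terminated by ',' optionally enclosed by '"'
trailing nullcols
(
  id,
  txt char(1000)
)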

You probably already saw "STR X'2425262F7C'" in the control file. This specifies the new row delimiter. X means it is a hex code, and 2425262F7C is the hex code of my delimiter $%&/|.

how to produce this:

select UTL_RAW.cast_to_raw('$%&/|') from dual;
------------------------
2425262F7C

So you can use any combination for your delimiter; simply use utl_raw to generate the hex code and put it in the control file.

The rest is the same. Just run the sqlldr command with this control file and it will load the rows as expected:
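(credentials and file names are examples:)

sqlldr userid=mustafa control=multiline_data.ctl log=multiline_data.log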

select * from tmp_x;

Please note that your data must be in a separate file from the control file! You cannot use "infile *" and BEGINDATA with this structure.

I hope this helps.

Edit 01/09/2020: I changed the sample data formatting because of extra enter characters and double quote formatting issues.