DBCA Templates and the Dangerous “General Purpose” Template

Hi,

I still see many DBAs creating their databases with the “General Purpose or Transaction Processing” template.

 

Why shouldn’t you use it? Because it is way too general! If you click “view details” on the right:

Almost all of the “Components” are set to true, so it installs everything even if you don’t use it. Is that a bad thing? Yes, because:

  1. Some of them require licenses (which means extra cost). You are installing them anyway, and if someone uses them you have to pay Oracle. As a small note, you cannot remove them during the installation (I will explain why in a minute).
  2. After you create your database, your upgrade/patching times will increase dramatically. When you upgrade your database, Oracle upgrades everything in it, and that includes the components, whether you use them or not! This means more downtime (depending on your upgrade/patching method).
  3. This database is created with a predefined setup, such as 200M redo log files. Of course you can change these, but you might miss something, and that means extra work after database creation. By the way, you cannot change some of them, for example the default block size! (I will explain why.)
  4. You cannot use “some” options at their latest installed versions, for example the timezone file version! Even if your database home is patched to (or already ships with) the latest timezone file version, if you create the database with the “General Purpose” template you will be surprised when you open it: an old version of the timezone file will be in use! (Again, I am about to explain why.)

Why? Because this is a “used database”! This database is not “new”. The General Purpose template makes things easy for you and speeds up the installation process a little, and the reason is that this database has already been created! Go to your ORACLE_HOME directory (for example /u01/app/oracle/product/19c/dbhome_1) and then go to assistants/dbca/templates. The templates that you see in DBCA are here, and so is the “Seed_Database”. If you use General Purpose, Oracle will RESTORE a database from the Seed_Database.dfb file (and, if this is a container database, from pdbseed.dfb too). This is a pre-created database.

This is why, if you use this template, you cannot change the block size: a pre-created database is restored, and you cannot change the default block size after creation. This is why your database will have an old timezone file: the old one is already installed, so you must upgrade the timezone file yourself. And this is why you cannot remove options during the installation: they are already installed.
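To see whether you are affected, compare the timezone file the database is using with the latest one available in the home. A minimal check, using the DBMS_DST package that ships with the database:

-- timezone file version the database is currently using
SELECT version FROM v$timezone_file;

-- latest timezone file version available in this ORACLE_HOME
SELECT dbms_dst.get_latest_timezone_version FROM dual;

If the two numbers differ, the database still needs a DST upgrade even though the home is already patched.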

So, don’t use the General Purpose template. Create your own, or at least use a “custom” template. That way you will have full control of your database, and it is really easy. You could actually go to the templates directory I mentioned above, copy General_Purpose.dbc under a new name, and then open it and change whatever you want, but I don’t recommend that, because the general-purpose template has some extra sections and is missing others, since it restores a seed database.

I won’t go into the whole detail of the template file, but here is the head of it:
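Roughly, the top of the file looks like this (only a sketch; the exact attributes and values differ between releases):

<DatabaseTemplate name="General Purpose" description="..." version="19.0.0.0.0">
   <CommonAttributes>
      <option name="OMS" value="true"/>
      <option name="JSERVER" value="true"/>
      <option name="SPATIAL" value="true"/>
      <option name="ORACLE_TEXT" value="true"/>
      <option name="CWMLITE" value="true"/>
      ...
   </CommonAttributes>
   ...
</DatabaseTemplate>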

Here are the “Components”, for example:

OMS: Oracle Label Security
JSERVER: Java virtual machine
SPATIAL: Spatial
IMEDIA: Multimedia (it is deprecated now, don’t install it)
ORACLE_TEXT: Oracle Text search
CWMLITE: OLAP
APEX: APEX, but don’t install it from here; it has its own separate installation.
DV: Database Vault
NET_EXTENSIONS: .NET extensions, for Windows only.

Some components have dependencies. For example, if you want to install Database Vault, you must install Oracle Label Security too. You can check these dependencies on Oracle Support for your specific version. Be careful: in pre-18c versions Spatial depends on Multimedia (IMEDIA), but as of 18c Multimedia is deprecated and Spatial no longer depends on it. So, the version is important.

If you go further down in the template file, you will see the database parameters. You can set your defaults in that section. For example, I use unified auditing, so I disable traditional auditing by setting audit_trail to NONE.
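That section is just a list of initParam entries; a small sketch (the parameters shown are only examples):

<InitParamAttributes>
   <InitParams>
      <initParam name="audit_trail" value="NONE"/>
      <initParam name="db_block_size" value="8" unit="KB"/>
      <initParam name="processes" value="640"/>
   </InitParams>
</InitParamAttributes>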

You can create a template via dbca (on the first screen choose “Manage Templates”). You can also create a template from an existing database via the dbca command line:
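A sketch of the command (database name, template name and credentials are placeholders):

dbca -silent -createTemplateFromDB \
     -sourceDB orcl \
     -templateName my_custom_template.dbc \
     -sysDBAUserName sys \
     -sysDBAPassword <password>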

Use your own templates and remove unnecessary components from your databases. It will be a relief for you and for your databases 🙂

Wish you all healthy happy days.

 

NON-CDB to PDB Migration Horror

Hello everyone,

Lately I migrated a non-CDB to a PDB for a client database, and it almost became a horror movie for me for over two weeks. Before I start: if you want to know how to migrate from a non-CDB to a PDB, you can check one of my previous posts: http://mustafakalayci.me/2022/03/25/3-ways-to-migrate-a-non-cdb-database-to-a-pdb/

Basically this is not a complicated or hard task. Especially if you are using the autoupgrade tool, it does everything for you. So, why did this task become a horror movie? First of all, I am not an APEX guy (which I should be), and since one component of this problem was unknown to me, I was led in the wrong direction. What happened? I created a new container database on the server and then used autoupgrade to migrate that non-CDB to a PDB. Pretty straightforward and easy. Then the developer team told me that they could not log in to ORDS! Here is where it started. When they tried to connect to the ORDS URL, it just said “Contact your application administrator. Details about this incident are available via debug id” and ended with some number. After a quick search I found the tables where debug messages are recorded, WWV_FLOW_DEBUG_MESSAGES and WWV_FLOW_DEBUG_MESSAGES2, and checked them for error messages. New rows were being inserted almost constantly (since there are some integrations), mostly error messages like this:

These are different error messages under the same debug id (page_view_id). The errors point to the HTMLDB_UTIL package (a package whose name has changed over the years), and the same errors were raised for WWV_APEX_JSON and many others too.
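To pull the recorded messages for one debug id, you can query the table directly; a sketch (the APEX schema owner, here APEX_210100, depends on your APEX version):

SELECT *
  FROM apex_210100.wwv_flow_debug_messages2
 WHERE page_view_id = :debug_id;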

So, first things first, I checked for invalid objects and there weren’t any. I actually found the procedure in HTMLDB_UTIL that raised the error, and when I ran it in SQL Developer it worked properly. I also saw a few “ORA-04068: existing state of packages has been discarded” errors. Well, this should have led me to the solution, but…
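For reference, the invalid-object check is simply:

SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID';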

From that moment on I thought something was wrong in APEX, since these errors were only being raised for APEX code, and one of the developer team leads had told me that ORDS must be reinstalled if the DBID changes! That was a problem, because during the non-CDB to PDB conversion the new PDB gets a new DBID and (as far as I know) it cannot be changed. What did I do? I started to learn how to reinstall ORDS! It was not that complicated, and I got it done. I reinstalled ORDS, but the errors kept coming.

Before migrations like this, I always test on virtual machines using the same environment as much as possible. In this case I didn’t have a RedHat machine, so I did the tests on Oracle Linux, and as you can guess there were no errors at all.

In one of my tries I recompiled the HTMLDB_UTIL package with “alter package … compile” (and the body as well), but the compilation completed with some “warnings”. I opened the source code and re-ran it (create or replace …), and that caused a brand new error which I had not seen before:

I checked the libraries to see whether this was an existing library, but it wasn’t. I thought there might be a problem with the WWV_FLOW_PAGE package, so I tried to recompile it with the alter package command, but it raised a similar error for some other packages, and this went on and on. Also, whenever I compiled a package in the APEX schema, some other APEX packages became invalid too. Since I was frustrated and really furious, I started to open every package named in the error messages and re-run their source code one by one. Finally, it worked! All packages became VALID and the ORDS URL was working. I was able to log in to the APEX builder etc.

I thought I had solved it, but it didn’t last long. After a change was made in an application package which uses some APEX packages, suddenly all the errors I had fought with came back: run-time errors, invalid objects, etc.

By the way, I also tried the dblink and dbms_pdb migration methods instead of autoupgrade, and nothing worked.

Then these run-time errors got my attention, and when I searched for them I found a couple of MOS documents. To recompile invalid objects we use the utlrp.sql file, as you know, but there is also another file that does the opposite: utlirp.sql (note the extra “i” after utl). This script invalidates all objects in the database. So, what the heck, I gave it a try:

First, close the PDB and open it in migrate (upgrade) mode:
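A minimal sketch of these steps (the PDB name MYPDB is a placeholder; run as SYSDBA):

-- from CDB$ROOT
ALTER PLUGGABLE DATABASE mypdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE mypdb OPEN UPGRADE;

-- then, connected to the PDB itself (e.g. ALTER SESSION SET CONTAINER = mypdb),
-- invalidate every PL/SQL object:
@?/rdbms/admin/utlirp.sql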

Now everything is invalidated, so recompile it all:
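And a sketch of the recompile step (same placeholder name):

-- from CDB$ROOT
ALTER PLUGGABLE DATABASE mypdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE mypdb OPEN;

-- then, connected to the PDB, recompile all invalid objects:
@?/rdbms/admin/utlrp.sql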

And yes, that’s it. That solved the problem. If I had focused on the run-time errors I would have found the solution days earlier, but not knowing APEX made me look in the wrong place.

By the way, those errors sometimes show up hours after the PDB migration. So I think I will run utlirp and then utlrp after every non-CDB to PDB migration from now on.

I hope this helps if you encounter such problems, and that you don’t suffer as I did. Thanks for reading.

wish you all healthy and beautiful days.

Not Deploying is also Important!

Hello there,

I mentioned that I would write about a nasty ORA-600 error, and here it is. One day I woke up and one of my databases had started to raise ORA-600 errors:

ORA-00600: internal error code, arguments: [ktfacht1-0], [], [], [], [], [], [], [], [], [], [], []

There were also some “snapshot too old” errors, but don’t confuse them with the usual manual/insufficient undo causes. The application on this database uses Flashback Data Archive, and some of the tables are in FBA (short for FlashBack [Data] Archive). Sometimes developers or app users try to query data from before the flashback retention time and get a snapshot too old error, so we are used to seeing it. My first response was “naah, probably nothing”. Well, it wasn’t 🙂

Suddenly some developers contacted me about getting snapshot too old errors on flashback archive tables, and they were getting the error for any point in time, even just 5 minutes ago (such a short range doesn’t even need the flashback archive; standard undo retention is enough for that). What was happening? For some tables in the flashback data archive we started to get snapshot too old errors, and then the number of affected tables started to increase: first 1, then 4, then 6…

When I checked the trace files for those ORA-600 errors, I saw statements like this:

(I removed the actual column list, owner and table name.)

You might be familiar with flashback archive, but if you are not, check this post: http://mustafakalayci.me/2019/03/02/goodbye-log-triggers-welcome-flashback-data-archive/

Preliminary Information

Flashback archive has some tables in the SYS schema (mostly storing context data), and when you add a table to a flashback archive, Oracle creates 3 tables in the actual table’s schema:

SYS_FBA_DDL_COLMAP_nnnnn : the column mapping table; when you add/remove/modify a column, a new row is inserted here.
SYS_FBA_HIST_nnnnn : the actual table’s historical data.
SYS_FBA_TCRV_nnnnn : historical transaction data.

“nnnnn” is the OBJECT_ID of the table in DBA_OBJECTS.

When you add a table to a flashback archive, Oracle does not create those SYS_FBA tables immediately! They are created after some transactions on the table, whenever flashback data actually needs to be written into them. (That is, I think, a problem by the way; they should be created immediately.)
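To make it concrete, a small sketch (owner, table and flashback archive names are placeholders):

-- put a table into a flashback archive
ALTER TABLE app_owner.parameters FLASHBACK ARCHIVE fba1;

-- the object id that will appear in the SYS_FBA table names
SELECT object_id
  FROM dba_objects
 WHERE owner = 'APP_OWNER' AND object_name = 'PARAMETERS' AND object_type = 'TABLE';

-- the SYS_FBA tables (they may not exist yet, as explained above)
SELECT table_name
  FROM dba_tables
 WHERE owner = 'APP_OWNER' AND table_name LIKE 'SYS\_FBA\_%' ESCAPE '\';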

End of Preliminary Information

When I checked some of the important tables’ data in their SYS_FBA_HIST_nnnnn tables, I could see that new data was coming in, but when I ran a flashback query I got ORA-00600. When you search for “ktfacht1-0” you get some documents about “virtual columns”, which were not related to our case: there were no virtual columns (or function-based indexes etc.).

Of course I created an SR with severity 1, and as you can guess this was happening only in the PROD environment! Murphy’s law. After around 2 weeks (yes, it was a little long), after uploading a massive amount of log/trace files to Oracle Support and talking to many different support engineers, they offered a Zoom call. During the call we checked the system and the trace files again, and finally they told me that only one table had a problem, and the problem was an empty SYS_FBA_DDL_COLMAP table!

When a table is added to FBA and SYS_FBA_DDL_COLMAP is created, the current columns of the table are written into that colmap table. Somehow, for this table, those rows did not exist. Because of that, Oracle could not write the rest of the flashback data (like old table data and transaction information) into the SYS_FBA tables, and as a result it was holding on to some rollback segments which also contained other tables’ data (the other tables we could not run flashback queries on).

Support told me that removing this table from the flashback archive would solve the problem. Since the table had been newly added to FBA and there was no historical data yet, I talked to the dev team, removed the table from FBA and voilà! Everything went back to normal. That was it. Oracle Support couldn’t tell me why this had happened (and they couldn’t, because actually we did it ourselves 🙂).
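Removing a table from the archive is a one-liner (names are placeholders again):

ALTER TABLE app_owner.parameters NO FLASHBACK ARCHIVE;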

After talking to support and solving the problem, I started to dig into why this had happened. During my checks I realized that there were some SYS_FBA tables that were not related to any actual table: empty SYS_FBA_HIST_nnnnn tables were just sitting there. Then I started to check my audit records for DDL operations on the problematic table, and I found it! It was added to FBA on a Thursday (the day is important), then the weekend passed, and on Wednesday, BAM, we got ORA-00600s.

After digging really deep I finally solved the mystery. As said before, SYS_FBA tables are not created immediately; they are created after some transactions. This particular table is a parameter definition table and does not get many transactions. When we added it to FBA on THURSDAY, there were no SYS_FBA tables yet. Here is, step by step, how Murphy’s law was activated:

  1. The table was added to the flashback archive on THURSDAY (on all databases: dev, test, acp and prod)!
  2. That weekend there was an “application PRODUCTION release”. The dev team uses a third-party tool to get the differences between two databases (ACP and PROD), generate scripts and run them in the PROD database. The release was completed.
  3. The ACP database was copied from PROD a few years ago, so some of its tables have the same object id as in PROD, including this problematic table.
  4. On the ACP database some DML ran on that table, so the SYS_FBA tables were created on the ACP db.
  5. Since there was no filter in the third-party diff tool, it generated create scripts for all SYS_FBA tables that did not exist in PROD. The tool copies only the structure, not the data, so the SYS_FBA tables for that table were created in PROD empty (that is why SYS_FBA_DDL_COLMAP was empty).
  6. After the weekend, on Wednesday, a new row was inserted into the table. Oracle found that the SYS_FBA tables already existed (even though they were not genuine SYS_FBA tables), tried to use them, found the data missing, and that eventually caused the ORA-00600.

That is also why there were SYS_FBA tables not related to any actual table: they were not in DBA_FLASHBACK_ARCHIVE_TABLES, or the object_id suffix (the nnnnn at the end) did not exist in DBA_OBJECTS. The tool had been copying them up the chain of databases (dev => test => acp => prod).
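A sketch of how such orphan history tables can be spotted (the substring position assumes the standard SYS_FBA_HIST_ prefix):

SELECT t.owner, t.table_name
  FROM dba_tables t
 WHERE t.table_name LIKE 'SYS\_FBA\_HIST\_%' ESCAPE '\'
   AND NOT EXISTS (SELECT 1
                     FROM dba_objects o
                    WHERE o.object_type = 'TABLE'
                      AND o.object_id   = TO_NUMBER(SUBSTR(t.table_name, 14)));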

After finding the root cause, we defined some filters in the tool so it does not pick up SYS_FBA (and some other Oracle-internal) tables, and we removed all the unrelated SYS_FBA tables from the databases. By the way, genuine SYS_FBA tables cannot be dropped with a plain DROP TABLE command, but as you can guess these ones could be dropped, because they were not real SYS_FBA tables, just user-defined tables.

Thanks for reading if you made it to the bottom 🙂 Deployment is one of the most important parts of application development, so choose your methods wisely.

Wish you good, healthy days.

Damaged rowid is referenced! Health Monitor

Hi,

A quick note! Today, while checking Health Monitor results, I found this:

SQL dictionary health check: objauth$.grantor# fk 246 on object OBJAUTH$ failed
Damaged rowid is AAAAA9AABAAATI1AAa – description: Object MY_OWNER.SOME_VIEW is referenced

I replaced the actual owner and view name with “MY_OWNER.SOME_VIEW”. The health check is telling me that a view references a “damaged” rowid. I had no idea what that meant, and I couldn’t find anything about it on Google or Oracle Support. The view selects from only one table, and this rowid does not exist in that table. So I tried a few things, and dropping and recreating the view made the health check finding disappear. Probably something happened long ago and I missed it, or I wasn’t managing this database back then.
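If you want to run the same check yourself, a sketch (the run name is arbitrary):

BEGIN
  dbms_hm.run_check(check_name => 'Dictionary Integrity Check',
                    run_name   => 'dict_check_manual');
END;
/

SET LONG 100000 PAGESIZE 0
SELECT dbms_hm.get_run_report('dict_check_manual') FROM dual;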

This is just a quick note; I just want people to be able to find this information. Wish you good, healthy days.

Old Obsolete Backup is not Deleted

Hi,

Happy new year (after almost a month). I was quite busy, and too lazy to write a new blog post, but today I found the energy.

What I realized recently is a little shocking to me, because I had no idea! Maybe this is just ignorance on my part, but I had never read anything about it (and I have read the entire Oracle backup and recovery documentation) and never heard of it. Here is the problem: the “REPORT OBSOLETE” and “DELETE OBSOLETE” commands don’t report or delete an old backup even if it is beyond the retention policy.

At first I thought I was probably missing something about the retention policy, or maybe a datafile had not been backed up correctly (which is not very likely). After debugging the RMAN REPORT OBSOLETE command, some debug lines caught my attention and I think I found the issue, but I have no idea whether this is a bug or a feature. I will demonstrate it with a single non-CDB database instance to keep things simple. It is a 19.16 database in a virtual machine, and the behavior shows up on both Linux and Windows. It is an empty database; here is the schema report:

Let’s create a dummy tablespace first:
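Something like this (the datafile path is just an example):

CREATE TABLESPACE tbs_test DATAFILE '/u01/app/oracle/oradata/ORCL/tbs_test01.dbf' SIZE 100M;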

The database has no backup at all at this point, so let’s take one:
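For example (the exact backup options don’t matter for this demonstration):

RMAN> backup database plus archivelog;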

Our new test tablespace TBS_TEST is backed up along with the database. Now, my retention policy is the default, which is REDUNDANCY 1:
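You can confirm that with:

RMAN> show retention policy;

On a database where it has never been configured, this reports “CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default”.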

That means only the last backup of everything (database, tablespace, datafile) needs to be kept, and all previous backups are “obsolete”. Now, before taking a second backup, let’s drop the TBS_TEST tablespace:
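For example:

DROP TABLESPACE tbs_test INCLUDING CONTENTS AND DATAFILES;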

and get a new backup:

Good, I have two backups now, and TBS_TEST is not included in the last one. Since my retention policy is “REDUNDANCY 1”, the previous backup should be “obsolete”. List the backups:

As you can see, the datafile 5 row in the first backup no longer has a Datafile Name, because it was dropped. Now let’s report the obsolete backups:
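That is simply:

RMAN> report obsolete;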

Huh! Only the controlfile autobackup is obsolete, and the report doesn’t mention the previous backup at all! Remember, my retention is REDUNDANCY 1, so only the last backup should be kept, yet RMAN keeps the previous backup, the one containing the dropped tablespace/datafile.

If I keep taking new backups and running REPORT OBSOLETE (or DELETE OBSOLETE), that first backup is never reported, only the ones taken after it.

I tried this with a 19.3 database and it didn’t happen: the old backup was marked obsolete immediately. So, as I said, I don’t know whether this is my ignorance, a lack of knowledge, just missing something, or a bug, but on different systems (virtual machines, Linux and Windows servers) running 19.16 I see this behavior, for container databases and non-CDBs alike. You might want to check your systems for backups older than your retention period.

Thanks for reading, wish you all a happy, healthy new year. My next post will be about an ORA-00600 that drove me crazy for a few weeks.

EDIT: I should add one more thing: per the retention policy, all archivelogs between the old “obsolete” backup and the current one are deleted! So this old backup file is completely useless anyway.

Autoupgrade doesn’t Upgrade Timezone Version While RU Patching

Hi,

During some tests I realized that the autoupgrade tool (the magic tool for upgrades and patching) doesn’t upgrade the timezone version of the database. Here is the environment:

Oracle Linux 8.6

ORACLE_HOME 1 :/u01/app/oracle/product/dbhome_1/dbhome_19_3
Database orcl created in 19.3 with default time zone version (V32)

ORACLE_HOME 2 :/u01/app/oracle/product/dbhome_1/dbhome_19_16
Time zone version is upgraded to V39.

This is the autoupgrade config file:
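Roughly like this (the log directory is a placeholder; the homes and SID are the ones listed above):

global.autoupg_log_dir=/u01/app/oracle/autoupgrade_logs
upg1.sid=orcl
upg1.source_home=/u01/app/oracle/product/dbhome_1/dbhome_19_3
upg1.target_home=/u01/app/oracle/product/dbhome_1/dbhome_19_16
upg1.timezone_upg=yes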

As you can see, “upg1.timezone_upg” is set to yes, so the timezone should be upgraded when the “orcl” database is moved to the 19.16 home. But when autoupgrade completes its job, V$TIMEZONE_FILE shows that the db time zone version is still V32. The interesting part: there is no error in the log files at all, and the prechecks (run autoupgrade in analyze mode first, of course) don’t mention a time zone upgrade. Autoupgrade simply does not plan to upgrade the time zone file, and it didn’t 🙂
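For reference, the runs themselves look like this (jar and config file paths are placeholders):

java -jar autoupgrade.jar -config orcl_19_16.cfg -mode analyze
java -jar autoupgrade.jar -config orcl_19_16.cfg -mode deploy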

So I started to dig a little. I created a new database via dbca (using the custom database template, of course) in the 19.16 home, and its timezone file version was V39 as expected. So, no problem with the home.

Secondly, I thought there might be some bug or problem in the 19.3 (base release) home, so I patched it to 19.8, recreated the orcl database and ran autoupgrade again, but the timezone version was still V32.

Thirdly, I installed an 18c home, created the “orcl” database there and ran autoupgrade to move it to 19.16. Voilà, it worked: after autoupgrade the timezone file version was V39. This time the prechecks said that the time zone would be updated, and it was.

What is different between starting from 18c and from 19.3? 18c has V31 as its default time zone file and 19c has V32. Maybe autoupgrade compares the default time zone files of the two homes (maybe hardcoded) and uses that to decide whether to do a timezone file upgrade. I don’t know; it is just speculation.

Or maybe autoupgrade does not upgrade the timezone file while patching, only during upgrades! I asked Mike Dietrich about this, and he told me it should be working and to create an SR. So I created an SR and a bug has been filed. I hope it will be fixed quickly. Be careful if you patch out of place, and always check your time zone files.

wish you healthy, happy days.

EDIT: It turned out my guess was correct. Autoupgrade compares the default timezone versions of the homes, whereas during an upgrade it checks the latest available version and upgrades to it. That is why it doesn’t upgrade while RU patching: both homes are 19c and the default is 32 for both.

And again, many thanks to Mike Dietrich for his close interest and fast response. The AU team has already discussed the situation, and it will be resolved soon.

Unique Columns for V$SQL

Hi,

Until yesterday I would have said that SQL_ID and CHILD_NUMBER are enough to identify a sql child cursor in V$SQL; today I know I was wrong.

You have probably needed, many times, the sql statement that a session is running. To get it, you query V$SESSION, find the session, and then use the SQL_ID column to reach V$SQL. So you write a query like this:
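For example (a sketch; add whatever filters you need):

SELECT s.sid, s.serial#, s.username, q.sql_text
  FROM v$session s
  JOIN v$sql     q ON q.sql_id = s.sql_id
 WHERE s.sql_id IS NOT NULL;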

You can add any filter you want here, but you will immediately notice that some session rows are multiplied. Why? Because you didn’t use “CHILD_NUMBER” when joining to V$SQL. V$SQL is not the main source of unique sql statements in your database; V$SQLAREA is. So instead of V$SQL you could use V$SQLAREA in this query, but if you need information about the exact sql cursor the session is currently running, then you must use V$SQL.

Note: I will not go deep into parent/child cursors here. Basically, every sql statement has child cursors; they are all the same sql but, for various reasons, they have different properties, and your session runs a specific “child cursor”.

So your query should be like this:
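A sketch with CHILD_NUMBER added (SQL_CHILD_NUMBER comes from V$SESSION):

SELECT s.sid, s.serial#, s.username, q.sql_text, q.child_number
  FROM v$session s
  JOIN v$sql     q ON q.sql_id       = s.sql_id
                  AND q.child_number = s.sql_child_number
 WHERE s.sql_id IS NOT NULL;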

And now you have reached the exact sql child cursor running for that session, right? Well, that is what I thought, but I found out it is not always so! On one of my clients’ systems I discovered that this query returned 2 rows from V$SQL, which means that for the same SQL_ID and CHILD_NUMBER there were 2 rows in V$SQL. I couldn’t find anything in the docs or on Google (maybe I didn’t search enough) and started to dig into this sql statement.

Here is the relevant information about this sql:

Almost everything is the same except ADDRESS and CHILD_ADDRESS. CHILD_ADDRESS is the memory address where the child cursor resides, so it obviously has to be different. The real difference is the ADDRESS column: ADDRESS in V$SQL is the address of the PARENT cursor, which lives in the V$SQLAREA view. So let’s check that view:
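For example ('&sql_id' is a placeholder for the sql id in question):

SELECT address, sql_id, sql_text
  FROM v$sqlarea
 WHERE sql_id = '&sql_id';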

Only one of those parent addresses exists there. So some of the child cursors in V$SQL are not (I am not sure of the term) “valid”; they might have been used before (and they had been, in my case), but now they are leftovers of a past sql. So which child cursor in V$SQL is the statement the session is currently running? We need to use the ADDRESS column too. The SQL_ADDRESS column in V$SESSION is the parent address of the sql statement, so I should add it to my query as well:
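A sketch of the final join:

SELECT s.sid, s.serial#, s.username, q.sql_text, q.child_number
  FROM v$session s
  JOIN v$sql     q ON q.sql_id       = s.sql_id
                  AND q.child_number = s.sql_child_number
                  AND q.address      = s.sql_address
 WHERE s.sql_id IS NOT NULL;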

Thanks for reading, comments are welcome. Wish you healthy, happy days.

Should Run datapatch After DBCA

Yesterday I was testing some things in the database and I needed to create new homes and new databases (new life, new beginning). At some point I realized that something was not right with the database. One thing led to another, I found the reason, and it was interesting to me.

I used DBCA to create the database (and I used the General Purpose template, which we shouldn’t use in a production environment; this was just a test on a virtual machine). Starting from, I don’t know, 12.2 I guess, dbca runs datapatch after database creation. datapatch applies the necessary SQL patches to your database, such as an RU (Release Update) or one-off patches. I waited for dbca to complete its job, and it completed without any error. If dbca says “it’s OK”, then you don’t look any further, right? How many times have you checked all the log files dbca produces? Anyway, during my actual tests, as I said, I realized something was not right, so I wanted to check whether all patches had been applied correctly. I checked the CDB_REGISTRY_SQLPATCH view (this is a non-CDB database by the way, so I could have used the DBA version of the view too):
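Something like this (pick the columns you like):

SELECT con_id, patch_id, action, status, action_time, description
  FROM cdb_registry_sqlpatch
 ORDER BY action_time;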

At first it seemed OK to me, but then I noticed there were two “RU APPLY” rows with a small time difference. As you can see, all patch operations were recorded for container 0; both applies went to the same container (this is a non-CDB database). Since it was around 02:40 AM, I went to bed, and in the morning I ran datapatch (by the way, the date on the virtual machine is not correct). Here is the output of datapatch:
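The command itself is just (run from the database home):

$ORACLE_HOME/OPatch/datapatch -verbose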

Pay attention to the “Current state of release update SQL patches:” section. For the SQL registry it says: “Applied 19.16.0.0.0 Release_Update 220703022223 with errors on 02-NOV-22 07.04.56.062757 PM”. So there was an error in that apply. I don’t know what it was, and datapatch also says 1 patch will be installed. Maybe I did something wrong during the installation. I rechecked the registry:

Here is a third row for the 19.16 RU apply, and from that point on everything worked smoothly. So, there is no harm in running datapatch after creating a new database via dbca (you must run datapatch anyway if you create your database with the CREATE DATABASE command). We must be careful out there, anything can happen 🙂 There is no harm in running datapatch just to see:

“Installation queue:
No interim patches need to be rolled back
No release update patches need to be installed
No interim patches need to be applied”

wish you all healthy, beautiful days.

APPLY with status WITH ERRORS in the PDB

Yesterday I was doing some tests on non-CDB to PDB conversions, and during one of them I got a strange error. By the way, I am doing these tests because of a side effect of a non-CDB to PDB conversion; I have already created an SR about it and will write a post about it here later.

Here is the environment of my test case: Linux 8.6 on VirtualBox, Oracle 19c (19.16 RU) and two databases in the same home, orcl and cdb19. orcl is the non-CDB database and cdb19 is the container database. I created both databases via dbca at the same time, and dbca completed successfully (at least I thought so). Both databases were up and running. So I started the non-CDB to PDB conversion via the autoupgrade tool (latest version, 22.5). I also tried the db link and unplug/plug methods.

After starting autoupgrade, it completed successfully, but when I logged in to the database and opened the new pdb, it raised some errors. In PDB_PLUG_IN_VIOLATIONS I saw the “APPLY with status WITH ERRORS in the PDB” error. For some reason, datapatch had not run correctly after the dbca operation for the orcl (non-CDB) database. I ran datapatch for the orcl database again, it did some patching, and I re-ran autoupgrade. This time the operation completed and the pdb was created successfully.
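A sketch of the check I used (the PDB name is a placeholder):

SELECT name, cause, type, status, message
  FROM pdb_plug_in_violations
 WHERE name = 'MYPDB'
   AND status <> 'RESOLVED'
 ORDER BY time;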

By the way, my main problem still exists: this migrated pdb has its own data dictionary (which it shouldn’t, since the data dictionary lives in cdb$root). Anyway, if you ever encounter this error, you can try running datapatch again.

wish you all healthy happy days.

How to Check Empty Space in a Securefile LOB

Hi,

PS: please check the last sql statement in this post for a shorter and faster solution. The first sql shows all the information about the lob.

Yesterday I was checking segment space usage in a client’s database, and some of the LOB segments were absurdly big. So I wanted to check what was in them and how much free space I could gain.

To do that, I cannot just use the DBMS_LOB.GETLENGTH function, because a CLOB takes much more space on disk than the original data due to character sets (especially if you use AL32UTF8), so what I really want is the currently empty space in the lob segment. If you need that kind of data, DBMS_SPACE is the solution. DBMS_SPACE has a SPACE_USAGE procedure which checks a segment and returns space information, but starting with 12c SECUREFILE is the default lob type, and for securefiles you cannot use the same SPACE_USAGE overload you use for tables.

SPACE_USAGE has overloaded versions, and the second and third overloads (in top-to-bottom order inside the dbms_space package) are the ones for SECUREFILE lob segments.

Here is a sample script I wrote to check the currently empty space in securefile lobs:

Caution: I check all LOBs in the database, so you might not want to do that; just modify the sql and add the necessary filters.

The first WITH FUNCTION block calls the dbms_space.space_usage procedure and returns its output as a string; the second WITH clause (get_info) calls that function to get the lob information. By the way, the SEGMENT_TYPE of a lob segment is “LOB SEGMENT”, but dbms_space.space_usage does not accept that as a segment_type; you must pass “LOB” instead. The result is then parsed into columns:

SEGMENT_SIZE_MB : total size of the LOB segment in megabytes.
USED_SIZE_MB : actual used size in megabytes.
EXPIRED_SIZE_MB : expired (versioning) data size in megabytes.

To calculate the empty space, just subtract the used size from the segment size; that is approximately the empty space in your lob segment. If you have a partitioned table, you will get a result for each partition (like the mustafa.tmp table in the example). I hope this script helps you in your job.

EDIT:

I forgot to mention why lob segments have empty space. The biggest reason is delete operations, of course. If you delete a big portion of the table (or lob segment), the deleted space is considered empty space. So, if you delete from a table, should you shrink the table or the lob segment? NO! That space will be reused by the table. If deleting is part of your normal workload, the empty space in the table/lob segment will be reused over time. But let’s say you inserted many rows by mistake, or because of a bug in your code, and then deleted them. If you have enough disk space for the database, there is (mostly) nothing to worry about; the table will receive many inserts over time and eventually use that space. However, if too much space is reserved in the table/lob segment and you have disk space issues, then you can reclaim it (by shrinking or moving the table/lob segment). So, this is not a routine operation.

EDIT 2:

The sql above aims to show all the information about a lob: used space, total space, expired space, etc. Probably you will just need the empty space in the lob, so you can use the following sql statement instead. It works faster because there is no regexp usage or parsing, and I also subtracted the “unexpired bytes” from the total size, since that space is still in use.
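Since the idea is simply to feed every securefile LOB segment into DBMS_SPACE.SPACE_USAGE, a rough sketch of such a query could look like this (a sketch only, not the original script; it skips partitioned lobs to keep it short):

WITH
  FUNCTION lob_empty_mb(p_owner VARCHAR2, p_segment VARCHAR2) RETURN NUMBER IS
    l_seg_blocks NUMBER; l_seg_bytes NUMBER;
    l_used_blocks NUMBER; l_used_bytes NUMBER;
    l_exp_blocks NUMBER; l_exp_bytes NUMBER;
    l_unexp_blocks NUMBER; l_unexp_bytes NUMBER;
  BEGIN
    dbms_space.space_usage(
      segment_owner       => p_owner,
      segment_name        => p_segment,
      segment_type        => 'LOB',
      segment_size_blocks => l_seg_blocks,
      segment_size_bytes  => l_seg_bytes,
      used_blocks         => l_used_blocks,
      used_bytes          => l_used_bytes,
      expired_blocks      => l_exp_blocks,
      expired_bytes       => l_exp_bytes,
      unexpired_blocks    => l_unexp_blocks,
      unexpired_bytes     => l_unexp_bytes);
    -- segment size minus used and unexpired bytes = approximate reclaimable space
    RETURN ROUND((l_seg_bytes - l_used_bytes - l_unexp_bytes) / 1024 / 1024, 2);
  EXCEPTION
    WHEN OTHERS THEN RETURN NULL;  -- e.g. segment not created yet
  END;
SELECT l.owner, l.table_name, l.column_name, l.segment_name,
       lob_empty_mb(l.owner, l.segment_name) AS empty_mb
  FROM dba_lobs l
 WHERE l.securefile  = 'YES'
   AND l.partitioned = 'NO'
/

As in the EDIT above, the unexpired bytes are subtracted as well, since that space is still in use.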

 

 

thanks for reading, your comments are always welcome. wish you healthy days.