catcon.pl cheat sheet

Hello,

Since Oracle now uses the container database architecture permanently, you should be familiar with the "catcon.pl" Perl script. Let's say you have a container database with 10 different PDBs and you need to run a script file on all of those PDBs (maybe even in root and seed too); what you need is catcon.pl.

catcon.pl will automatically execute your script file on all (or only the specified) containers. Since it is a Perl script, it must be called with the Perl binary (perl on Linux, perl.exe on Windows), which you can find under the $ORACLE_HOME/perl/bin directory; catcon.pl itself is under $ORACLE_HOME/rdbms/admin.

So, you have a SQL file called myfile.sql under the /home/oracle/scripts directory. Let's see how we can execute it in all PDBs:
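
A command along these lines does it (put together from the pieces dissected below):

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d /home/oracle/scripts -b MyScript myfile.sql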

That is it. The myfile.sql file will be run in all containers (including root and seed) with the command above. Let's dissect the command:

$ORACLE_HOME/perl/bin/perl => Perl binary to run catcon.pl
$ORACLE_HOME/rdbms/admin/catcon.pl => catcon.pl path
-d /home/oracle/scripts => directory where the script is located.
-b MyScript => prefix for the log files so you can separate them from other scripts' logs.
myfile.sql => the actual script name

The actual script file must be the last parameter. It is that easy. In this example, ORACLE_HOME and ORACLE_SID should be set before you run the command; otherwise you cannot connect to the database. This is a local connection and, by default, a SYSDBA connection, so whatever you run will be run as the SYS user.

To add some more functionality:

  1. -c : list of containers where you want to execute the script:
    ….catcon.pl -c 'SALES PDB1 TEST' => execute the script only in the SALES, PDB1 and TEST PDBs.
  2. -C : list of containers where you do not want to execute the script.
  3. -l : log directory.
  4. -s : spool the output of the script files.
  5. -S : (capital S) do not execute the script in root and seed. (Edit: this option also does not set the "_oracle_script" parameter to true, so if you create a user it will not be an Oracle-maintained user.)
  6. -u : specify username/password.
  7. -z : EZConnect string.
  8. -n : number of workers. If you want to execute a script on 100 PDBs, you can increase the number of workers so the script runs simultaneously on different PDBs. The default is 1.

Can we use catcon.pl to run a script on a remote server? Yes, with a combination of -u and -z:

Run the myfile.sql script (under the /home/oracle/scripts directory) on the CDBX database, which is on the mydbserver.mydomain server, only in the specified PDBs (PDB1…PDB8), using the mustafa user, with 2 worker processes so the script runs on 2 PDBs at a time, and write the log files under /home/oracle/logs.
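
A command along these lines should do it (the password, the expanded PDB list and the EZConnect string are illustrative):

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
    -u mustafa/mypassword -z mydbserver.mydomain/CDBX \
    -c 'PDB1 PDB2 PDB3 PDB4 PDB5 PDB6 PDB7 PDB8' \
    -n 2 -l /home/oracle/logs \
    -d /home/oracle/scripts -b MyScript myfile.sql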

catcon.pl has more parameters, but I only wrote about the ones I use most.

thanks for reading.

How to Convert LONG to Clob PART 2

Hi,

I just want to write a small post today. Some time ago, I shared a way to convert LONG to CLOB directly in a query: https://mustafakalayci.me/2020/04/06/how-to-convert-a-long-column-to-clob-in-a-query/ Even though that method works, it is really slow, and anything slow annoys me!

So, I want to share another method which is much, much faster than the Sys_dburigen function, but THIS IS UNDOCUMENTED and UNSUPPORTED! It is used internally by Oracle, so it works fine, but you must be careful. Since no one uses LONG columns in their applications (right? you don't! please don't!), this kind of code is handy mostly for admins, because the data dictionary still has some LONG columns for backward compatibility.

What we are going to use is a LIBRARY called UTL_XML_LIB. I almost never create objects under the SYS user, so I will create this code under a privileged admin user. First we must grant privileges on the UTL_XML_LIB library and then create an EXTERNAL procedure in PL/SQL.
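
For example, assuming the privileged admin user is mustafa (the user used later in this post):

grant execute on sys.utl_xml_lib to mustafa;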

Now create the external procedure:
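
Here is a sketch of what it might look like. The external-procedure syntax is standard, but the exact PARAMETERS mapping is my assumption based on the description below (a select statement, a rowid and a CLOB out parameter), so verify it before relying on it:

create or replace procedure my_long2clob (
    p_stmt  in     varchar2,  -- select statement returning the LONG column
    p_rowid in     rowid,     -- rowid of the row to convert
    p_clob  in out clob )     -- conversion result
as language c
   library sys.utl_xml_lib
   name "kuxLong2Clob"
   with context
   parameters (context, p_stmt ocistring, p_rowid ocirowid, p_clob ociloblocator);
/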

"kuxLong2Clob" is a function which returns void (so basically a procedure), written in C by Oracle. It takes a select statement for your LONG column and a rowid, and returns the CLOB data as an out parameter. I created the my_long2clob procedure, but to be able to call it in a SQL statement, it must be a function, so:
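
A wrapper along these lines works with the signature used later in the post (building the query text from the table and column names is my own shortcut; sanitize those inputs in real code):

create or replace function f_my_long2clob (
    p_rowid      in rowid,
    p_table_name in varchar2,
    p_column     in varchar2 ) return clob
is
    v_clob clob;
begin
    dbms_lob.createtemporary(v_clob, true);
    my_long2clob('select ' || p_column || ' from ' || p_table_name, p_rowid, v_clob);
    return v_clob;
end;
/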

To test it, I won't create a new table with a LONG column; instead I will use one of those in the data dictionary (which is what I use this kind of code against anyway). Let's use the SYS.TRIGGER$ table, which is the base table of the DBA_TRIGGERS view; its ACTION# column is a LONG. Of course, we must first grant the SELECT privilege on SYS.TRIGGER$ to the mustafa user.
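
Then, using the wrapper sketched above:

select t.obj#,
       f_my_long2clob(t.rowid, 'SYS.TRIGGER$', 'ACTION#') as action_clob
from   sys.trigger$ t
where  rownum <= 10;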

I suppressed the output, but there it is: a really fast LONG to CLOB conversion in SQL.

As a con, it only works against tables, not views! You cannot use it as "f_my_long2clob(rowid, 'DBA_TRIGGERS', 'TRIGGER_BODY')". Since it requires a ROWID, you must refer to the underlying table and its rowid.

Beyond that, if you want to get a LONG as VARCHAR2, you can similarly use the "kpdbLong2Varchar2" procedure in the DBMS_PDB_LIB library. Like UTL_XML_LIB, it is very fast; the only difference is that it returns VARCHAR2 and trims the overflowing part. Where is it in use? Check how DBA_VIEWS builds the TEXT_VC column data 😉

I hope this helps you fight against LONG columns in your code.

Wish you all healthy, happy days.

Tables with Memoptimize for Read (fast lookup)

Hi,

I would like to share my initial thoughts about Memoptimized for Read (fast lookup) tables in this post. Fast lookup tables came with 18c, and they are here to provide faster access to a table when you query the primary key with an equality condition. This is basically a single table hash cluster: instead of using the primary key index, Oracle creates a hash index, and searching a hash index should be much faster than searching a unique index.

First, create the test environment.

To use a fast lookup table, you must set the memoptimize_pool_size parameter to a number higher than 0. This is a static memory component in the SGA that stores the hash index of the table; it is not resizable, which means it won't grow or shrink with automatic shared memory management.
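
For example (the size is illustrative; since it is a static parameter, a restart is needed):

alter system set memoptimize_pool_size = 256M scope=spfile;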

Create a table for fast lookup:
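
A sketch of such a table and its load (everything except the table name and the id column is illustrative):

create table tmp_optimize (
  id   number primary key,
  data varchar2(100) )
  segment creation immediate
  memoptimize for read;

insert into tmp_optimize
  select level, 'some data ' || level
  from   dual connect by level <= 146000;
commit;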

Segment creation must be immediate, otherwise you will get an error! Your table must have a primary key as well; since the whole structure is based on the PK, it is logical to have one. I inserted roughly 146,000 rows into the table.

EDIT: I also tried these tests with 14 million rows, and the results are similar.

This is not enough! You must use the dbms_memoptimize package to use this table as a fast lookup table.
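
For example (the schema name is illustrative):

exec dbms_memoptimize.populate('MUSTAFA', 'TMP_OPTIMIZE');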

Let's see what is happening now:
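
For example:

select /*+ gather_plan_statistics */ * from tmp_optimize where id = 1;
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));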

I skipped the first execution's statistics! You will see a small number of consistent gets on the first execution, but consecutive executions will look like the above.

As you see, there are new execution plan steps, "TABLE ACCESS BY INDEX ROWID READ OPTIM" and "INDEX UNIQUE SCAN READ OPTIM". These steps tell us that a hash index is used to retrieve the data, as in key-value pairs, and there are no "consistent gets". This is amazing: almost nothing is read for this query, and that should make it much faster. Is it? Well, I couldn't see it, at least not as much as I expected. Actually, I found several blog posts mentioning fast lookup tables, but none of them had done a performance test.

So, I wanted to test the speed, but there is a problem: for now, there are many limitations on fast lookup tables, and one of them is "you cannot use it via PL/SQL". This means I can't create a simple PL/SQL procedure that runs a query 100,000 times and compare timings. To do a test, I wrote a small Python script.

Warning: I am no Python developer; I just know enough Python to solve my small problems. The code I share below probably has many bugs, but I use it only for this test.

Here is the Python code I use:
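
A sketch of that script, using the cx_Oracle driver (connection details are illustrative):

import threading, time
import cx_Oracle

DSN = "localhost/CDB1"   # illustrative connect string
SQL = "select * from tmp_optimize where id = :myid"

def worker(tid):
    conn = cx_Oracle.connect(user="mustafa", password="mypassword", dsn=DSN)
    cur = conn.cursor()
    start = time.time()
    for i in range(100000):
        cur.execute(SQL, myid=(i % 10000) + 1)   # myid cycles between 1 and 10000
        cur.fetchall()
    print("thread %d: %.1f seconds" % (tid, time.time() - start))
    conn.close()

threads = [threading.Thread(target=worker, args=(t,)) for t in range(9)]
for t in threads:
    t.start()
for t in threads:
    t.join()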

After changing the code many times, I ended up with this final version. It creates 9 threads, and every thread runs the "select * from tmp_optimize where id = :myid" query 100,000 times, with the myid value cycling between 1 and 10000 in the loop. Each thread prints the total execution time for its 100,000 runs of the query. Here is the result:

Timings range between 16.0 and 16.1 seconds for the fast lookup table. How about a normal table with a normal primary key?

I used the same Python code, just changing the table name from "tmp_optimize" to "tmp_not_optimize", and the result looks like this:

It is between 17.1 and 17.2 seconds, so there is a 6% performance gain (give or take).

I am not sure whether 6% is a good gain or not, because to use fast lookup tables actively we need to set aside a serious amount of memory. Also, there are a few limitations: they cannot be used with partitioned or compressed tables, they cannot be used in PL/SQL (not yet, anyway), and only equality conditions on the primary key can take advantage of fast lookup.

During my tests I didn't check wait events. I expect fewer latch waits on the fast lookup table, since no consistent gets occur. Maybe there is something I missed; I will do more tests on these tables (I haven't even tested "memoptimize for write" (fast ingest) yet).

wish you all healthy days.

Blockchain vs Immutable tables

Hi,

As you know, 21c is here for Linux, and some of the new features are (as always) pretty exciting. Two of them are blockchain and immutable tables. Let's explain and test.

First of all, these are 21c features, but they have been backported to 19c: as of 19.10 you can use BLOCKCHAIN tables, and as of 19.11, IMMUTABLE tables. I will run my tests on 19.12.

If you are familiar with Bitcoin or any other altcoin, then you probably know what a blockchain is. Basically, it is a distributed ledger: every transaction is a "block", and each block contains information about the transaction, like sender, receiver, amount etc., and also a hash value of the previous transaction (computed with some hash function). That way, every new transaction is added to the end of the "chain", and that is what creates the chain. This ensures that the chain is unbreakable; you cannot tamper with old records, because that would break the chain (hash values wouldn't match the next blocks).

So, Oracle uses this technology to create a new table type called a blockchain table. Every row inserted into such a table is stored with some additional information, like the current timestamp and, most importantly, the previous row's hash value! The hash operations are done at COMMIT time, not before. That way, Oracle guarantees that every row is part of the chain and you cannot tamper with those rows; if you do, the chain is broken and cannot be verified. Also, Oracle won't let you delete or update those rows with any DML statement like DELETE, UPDATE, MERGE etc.

Immutable tables are pretty similar to blockchain tables: you cannot update them with any DML either. The difference is that rows in an immutable table are not linked to each other; they do not store any hash value from previous rows.

Both of them are designed to create "unchangeable", insert-only tables. If you think about it, this is a real necessity, especially for auditing. For example, if you store log records about credit card operations, you don't want anyone to change them, and you want to be sure those rows are intact and unchanged during an investigation.

Of course, you usually need to keep table data only for a specific amount of time, like 5 years; no one can store it forever, so at some point you must be able to delete old rows. This is something you define while creating these tables. Also, even if I am not able to delete the rows, what would happen if someone dropped the table? That is of course forbidden too, and again you define a drop policy while creating the table. Let's start with a blockchain table:
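
A sketch of such a statement, matching the clauses dissected below (table name and columns are illustrative):

create blockchain table t_blockchain (
  id      number,
  payload varchar2(100) )
  no drop until 31 days idle
  no delete until 30 days after insert
  hashing using "SHA2_512" version "v1";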

You need the BLOCKCHAIN keyword (or IMMUTABLE for immutable tables) in the create statement.

"NO DROP UNTIL 31 DAYS IDLE" defines when you can drop this table: only if there has been no activity on it for 31 days. By the way, right after creating the table, if you haven't inserted any rows yet, you can drop it immediately. You can also use just NO DROP to disable dropping the table altogether (then the only way is dropping the database entirely).

"NO DELETE UNTIL 30 DAYS AFTER INSERT" defines when you can delete "old" rows; in my example, a row can be deleted 30 days after its insertion. Again, you can use NO DELETE LOCKED to disable deleting from the table.

The hashing part is the default, and you have to spell it out to define the blockchain structure.

Let's insert a row and try to delete or update it:
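
Continuing with the illustrative table above:

insert into t_blockchain (id, payload) values (1, 'first row');
commit;

delete from t_blockchain where id = 1;               -- fails: delete is not allowed
update t_blockchain set payload = 'x' where id = 1;  -- fails: update is not allowed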

As you can see, modifying is forbidden. You cannot even delete rows that have passed the retention time you defined (NO DELETE UNTIL …) with a DELETE statement; you must use the DBMS_BLOCKCHAIN_TABLE.DELETE_EXPIRED_ROWS procedure to delete rows beyond the retention.

This is the blockchain table. Of course there are many more properties, like user signatures etc., but I will leave it here for now. For IMMUTABLE tables, almost everything is the same:
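
For example (again, names are illustrative):

create immutable table t_immutable (
  id      number,
  payload varchar2(100) )
  no drop until 31 days idle
  no delete until 30 days after insert;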

Just change the "blockchain" keyword to "immutable" and remove the hashing clause. Pretty straightforward. As expected, you cannot delete or update rows, and if you want to delete old rows you must use the DBMS_IMMUTABLE_TABLE.DELETE_EXPIRED_ROWS procedure.

A common feature of both tables is that they cannot be modified by any database user, not even SYS or DB Vault admins; no one in the database can change those rows. The basic difference between them is that a blockchain table cryptographically links every new row to the previous one, while an immutable table has nothing like that. What does this provide? Immutable tables cannot be modified via the Oracle Database software, but what about bypassing the DB? If someone modifies a datafile using operating system tools, your table data is changed and you cannot be aware of it. Blockchain tables, on the other hand, have a verify function: since every row is linked to the next one by a hash value, if you modify the data with any tool, like OS tools, verification will fail, because when a row changes its hash value changes, and that hash value is stored in the next row.
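
A quick sketch of calling the verification procedure on the illustrative table (the out-parameter name is from memory, so treat it as an assumption and check the DBMS_BLOCKCHAIN_TABLE documentation):

declare
  v_rows number;
begin
  dbms_blockchain_table.verify_rows(schema_name             => 'MUSTAFA',
                                    table_name              => 'T_BLOCKCHAIN',
                                    number_of_rows_verified => v_rows);
  dbms_output.put_line(v_rows || ' rows verified');
end;
/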

Then why do immutable tables exist at all, if blockchain tables are more secure? You might guess: creating hash values and storing them in the next row causes some extra work, which means a performance cost.

Let's make a test (with a flaw on purpose):
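
A sketch of the kind of test described below (row counts and names are illustrative; note that there is deliberately no commit inside the block):

declare
  v_start pls_integer;
begin
  -- bulk inserts
  v_start := dbms_utility.get_time;
  insert into t_blockchain (id, payload)
    select level, 'row ' || level from dual connect by level <= 100000;
  dbms_output.put_line('blockchain bulk      : ' || (dbms_utility.get_time - v_start)/100 || ' s');

  v_start := dbms_utility.get_time;
  insert into t_immutable (id, payload)
    select level, 'row ' || level from dual connect by level <= 100000;
  dbms_output.put_line('immutable  bulk      : ' || (dbms_utility.get_time - v_start)/100 || ' s');

  -- row by row inserts
  v_start := dbms_utility.get_time;
  for i in 1 .. 100000 loop
    insert into t_blockchain (id, payload) values (i, 'row ' || i);
  end loop;
  dbms_output.put_line('blockchain row by row: ' || (dbms_utility.get_time - v_start)/100 || ' s');

  v_start := dbms_utility.get_time;
  for i in 1 .. 100000 loop
    insert into t_immutable (id, payload) values (i, 'row ' || i);
  end loop;
  dbms_output.put_line('immutable  row by row: ' || (dbms_utility.get_time - v_start)/100 || ' s');
end;
/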

I inserted rows using bulk insert and row by row into both tables, and here are the results. Huh! The blockchain table is almost as fast as the immutable table? Actually, no! As I said earlier, the hashing operations are done at commit time, not before, on the blockchain table, and in my code there is no commit! That's why we see the same performance. How about adding commits before printing the times:

Here it is! At worst, immutable tables are more than 6 times faster (row-by-row insert) and, at best, almost 40 times faster than blockchain tables. So be careful while choosing the table type; if an immutable table meets the need, use it.

This is just a first glance at the new table types. I will make more tests and come back with new posts.

wish you healthy days.

Edit: I didn't mention the COMPATIBLE parameter. It should be at least 19.10 to create a blockchain table and 19.11 for an immutable table.

Pragma UDF & Deterministic

Hello everyone,

I wanted to write about something simple that can improve the performance of your SQL statements. Unfortunately, many database developers use too many PL/SQL functions in their SQL statements. I say too many because the more you learn about SQL, the less PL/SQL you will use; SQL is a very capable language for processing data. Of course, we don't design and develop a whole system from scratch all the time: we start working somewhere, use their standards and run old code. Many of my clients have basic PL/SQL functions which do not interact with SQL at all, just doing simple calculations or conversions. These PL/SQL functions are pure algorithms which could also be run by SQL. For example:
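
A minimal sketch of the kind of function meant here (name and VAT rate are illustrative):

create or replace function f_add_vat(p_amount number) return number is
begin
  return p_amount + p_amount * 0.18;  -- add 18% VAT
end;
/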

This is a very simple VAT-adding function. As you can see, the calculation can be done by SQL too, but let's say someone wrote this code 20 years ago and now it is used everywhere in the database (maybe in the application too). So you might not be able to change all of its usages, but you can increase its performance with DETERMINISTIC or PRAGMA UDF.

Let's make a small example first:
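
For example (table and column names are illustrative):

create table sales_tmp as
  select level as id, mod(level, 1000) + 1 as amount
  from   dual
  connect by level <= 800000;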

This creates a table with 800,000 rows. Now I will run a query against this table with and without the function:
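
Something like:

select sum(amount + amount * 0.18) from sales_tmp;  -- pure SQL
select sum(f_add_vat(amount))      from sales_tmp;  -- with the PL/SQL function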

It takes almost 10 times longer to complete the query when you use the PL/SQL function.

The reason is that whenever you use a PL/SQL function inside a query (or vice versa), something called a "context switch" happens. You can think of it like this: SQL statements are run by the SQL engine and PL/SQL statements by the PL/SQL engine, and whenever one engine meets a statement of the other kind, it has to call the other engine. That is context switching. You should avoid it as much as you can, but as I said, if the system you work on runs queries like this, you might not be able to change them all; you can, at least, increase the performance. That is where PRAGMA UDF and DETERMINISTIC come in.

DETERMINISTIC tells Oracle that this function returns the same result for the same parameters every time. If your PL/SQL object does not interact with SQL objects like tables, or with time-dependent values like SYSDATE, then it is probably a good candidate for DETERMINISTIC. Also, if you want to index a user-defined PL/SQL function, it must be deterministic. You can add the DETERMINISTIC clause to a function that does interact with SQL objects or SYSDATE etc., but that will only cause you headaches, so be careful about that.

PRAGMA UDF is a way to reduce the context switch overhead. It does not reduce the number of context switches, but it makes each one smaller and faster; that way, if you use a PL/SQL function in SQL, the context switch is lighter.

Let's make a small test now; my example is a very good candidate for both DETERMINISTIC and PRAGMA UDF:
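
A sketch of the three versions, based on the illustrative function above:

create or replace function f_add_vat_base(p_amount number) return number is
begin
  return p_amount + p_amount * 0.18;
end;
/
create or replace function f_add_vat_det(p_amount number) return number deterministic is
begin
  return p_amount + p_amount * 0.18;
end;
/
create or replace function f_add_vat_det_udf(p_amount number) return number deterministic is
  pragma udf;
begin
  return p_amount + p_amount * 0.18;
end;
/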

I created 3 versions of my VAT function: the first is the base one, the second has a DETERMINISTIC clause, and the third is both deterministic and PRAGMA UDF. They all do the same work. Let's compare their performance:
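
For example:

select sum(f_add_vat_base(amount))    from sales_tmp;  -- BASE
select sum(f_add_vat_det(amount))     from sales_tmp;  -- DETERMINISTIC
select sum(f_add_vat_det_udf(amount)) from sales_tmp;  -- DETERMINISTIC + PRAGMA UDF
select sum(amount + amount * 0.18)    from sales_tmp;  -- pure SQL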

I ran 4 versions: the basic function, the deterministic function, deterministic & PRAGMA UDF, and finally, of course, the pure SQL statement. As you can see, the BASE version is the worst and takes 1.15 seconds on average to complete. Just adding a DETERMINISTIC clause decreases the time spent almost 5 times, adding PRAGMA UDF shaves off another 30-35% on top of that, and pure SQL is of course the fastest: 10 times faster than the BASE version. So, if you can use pure SQL, use it! If that is not possible, at least check for DETERMINISTIC candidates and use PRAGMA UDF.

Of course, there is also something called "subquery caching":
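
Wrapping the function call in a scalar subquery lets Oracle cache results for repeated input values; for example (again with the illustrative objects above):

select sum(vat_amount)
from  (select (select f_add_vat_base(amount) from dual) as vat_amount
       from   sales_tmp);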

As you see, subquery caching might be even faster than PRAGMA UDF and DETERMINISTIC, but it depends, and this solution also requires changing the SQL statement.

So you can improve performance with just small changes. Find a PL/SQL function that is very simple and used everywhere in your database (I am pretty sure there is one) and try the suggestions above. By the way, 19c has something even stronger called "SQL Macro", which I mentioned a bit earlier:

http://mustafakalayci.me/2020/10/28/sql-macro-is-finally-here/

http://mustafakalayci.me/2020/10/30/sql-macro-part2-column_value-nvlparam-column_value/

I mostly covered table SQL macros there, but you can also use SQL macros to generate parts of a SQL statement.

I hope this helps you tune your SQL & PL/SQL statements.

wish you all healthy days.

PLSQL and Flashback Data Archive Problems.

Hi,

I have written about Flashback Data Archive before. If you haven't read them, please check these posts first.

http://mustafakalayci.me/2019/03/02/goodbye-log-triggers-welcome-flashback-data-archive/

http://mustafakalayci.me/2020/09/10/flashback-data-archive-problems/

I already mentioned some problems in my second post, but those are not the kind of problems that will stop your work. The problem I am about to talk about is that kind of problem.

From my perspective, it seems that flashback is designed to be used in SQL: track the history of your tables (even DDL changes) and so on. But if you start to use flashback in PL/SQL, you will hit some walls!

PL/SQL is not a very dynamic language yet. Please don't get me wrong: I am an Oracle Database developer and administrator, I am not interested in any other database, and I use SQL and PL/SQL all the time, but there are some limits that sometimes force very awkward workarounds.

I have tested the code below on a 19.8 database.

Flashback Data Archive (FDA) is a very useful option. Even if you add, modify or drop a column, you are still able to see old versions of the table, and that is where my problem starts:

I created a table and added it to a flashback data archive. I added some sleep calls to wait for the necessary internal tables to be created. In the end, at 14:49:42 (my local time), I have a table which contains 2 rows, and a simple procedure called SP_X which takes a DATE parameter and uses it in a flashback query.
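
A sketch of that setup; table, archive and extra column names are illustrative, and the flashback archive (fda1 here) is assumed to exist already:

create table tmp_fda (
  cust_id    number,
  order_date date,
  amount     number );

alter table tmp_fda flashback archive fda1;

insert into tmp_fda values (1, sysdate, 100);
insert into tmp_fda values (2, sysdate, 200);
commit;

create or replace procedure sp_x (p_date in date) is
  cursor c is select * from tmp_fda as of timestamp p_date;
  r c%rowtype;
begin
  open c;
  loop
    fetch c into r;
    exit when c%notfound;
    dbms_output.put_line(r.cust_id || ' - ' || r.order_date);
  end loop;
  close c;
end sp_x;
/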

As you can see, this procedure works well. I didn't demonstrate different data at different times, but that works as well.

Let's drop a column from this table:
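
Continuing the sketch above (amount is the illustrative extra column):

alter table tmp_fda drop column amount;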

and re-run the procedure:
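
With the sketched procedure, that is simply:

exec sp_x(sysdate)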

As you see, even though I am asking for data as of SYSDATE, I got an error. The problem is that, from now on, querying with earlier dates causes the same error too!

Even dropping and recreating the procedure does not help. As far as I understand, Oracle cannot determine the exact shape of the query anymore. Adding a column does not cause this problem, but dropping a column does.

You might think of using a FOR loop to fetch the cursor, but you will hit the same error.

By the way, the queries that we are trying to run work correctly in plain SQL:
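
For example, with the illustrative table above, these run fine outside PL/SQL:

select * from tmp_fda as of timestamp sysdate;
select * from tmp_fda as of timestamp (sysdate - interval '10' minute);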

So, using flashback queries in PL/SQL seems a little bit dangerous. Actually, one of my clients also gets an "ORA-01007: variable not in select list" error in PL/SQL (while the SQL is perfectly fine). So be careful when using flashback in PL/SQL.

EDIT:

During my tests, I renamed the cust_id column and started to get an "ORA-00932: inconsistent datatypes: expected NUMBER got DATE" error. The order_date column comes right after cust_id, which made me think that the column order might be changing, and I realized there is a bigger problem here!

Here is the sample code:
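
A sketch of it (the table name is illustrative; it is added to the same hypothetical flashback archive):

create table tmp_cols (c1 number, c2 number, c3 number);
alter table tmp_cols flashback archive fda1;

insert into tmp_cols values (1, 2, 3);
commit;

select * from tmp_cols;  -- columns come back in the expected order: C1, C2, C3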

There are columns c1, c2 and c3, and they hold their ordinal numbers (1, 2, 3).

The column order is not changed, but when I use the same table in a flashback query:
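
That is, a query like this (in the test described here, the projected column order came back different from the table definition):

select * from tmp_cols as of timestamp sysdate;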

The column order has changed, even though I asked for data as of SYSDATE. In PL/SQL this can cause real problems, because you might not be aware of it and the wrong variables might end up storing your data:

As you see, the r_tmp.C20 variable is storing the data of the C3 column, because C3 comes first in the select list when the flashback query is used.

So, be very, very careful about flashback queries in PL/SQL.

wish you healthy days.

ORA-24244: invalid host or port for access control list (ACL) assignment

Hello everyone,

So, you wanted to migrate your ACLs to another database, or you are trying to add a new ACL entry, and you got ORA-24244. Unfortunately, there are many possible reasons for that error. I wish Oracle gave us more detail about what exactly is wrong, but I wanted to share what I have seen so far and how to generate the ACL commands. What I am about to write is for 19c databases.

  1. First of all, of course, you can get this error by setting an invalid port like -123 or 999999999. Check these first.
  2. Secondly, the port range you use might intersect with another definition. For example, if there is an ACL entry for www.example.com for ports 3000 to 3010, you cannot add a new entry for the same host, same principal and same privilege with the port range 3010-3015; since 3010 is already covered by the first entry, the second entry must start at 3011 at the earliest.
  3. Thirdly, you might be trying to insert an entry for the RESOLVE privilege with a port definition. This is not allowed either; for the RESOLVE privilege, the lower and upper ports must be null.

These are my basic checks. Even the expdp utility with the "include=NETWORK_ACL" option might fail on import because of these 3 issues. Lately I was copying ACLs from a 12c DB to a 19c DB, and after getting too many errors I decided to write a simple SELECT statement that generates the ACL append commands. You can either copy, paste and run the output of the statement, or run it in a loop and execute the commands with dynamic SQL.

Please don't judge me for not using MATCH_RECOGNIZE; I just wrote it the fastest way I could think of. The script first splits the port ranges into single ports, row by row, and then regroups them (some old entries had intersecting port ranges). It also uses NULL ports for RESOLVE privileges. I didn't add all the options, of course. I hope this script helps you.
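
A simplified sketch of such a generator (it skips the port splitting/merging part; column names are from DBA_HOST_ACES and it only handles granted ACEs for database users/roles):

select 'begin' || chr(10) ||
       '  dbms_network_acl_admin.append_host_ace(' || chr(10) ||
       '    host => ''' || host || ''',' || chr(10) ||
       case
         when lower(privilege) = 'resolve' then
           '    lower_port => null, upper_port => null,' || chr(10)
         else
           '    lower_port => ' || nvl(to_char(lower_port), 'null') ||
           ', upper_port => ' || nvl(to_char(upper_port), 'null') || ',' || chr(10)
       end ||
       '    ace => xs$ace_type(privilege_list => xs$name_list(''' || lower(privilege) || '''),' || chr(10) ||
       '                       principal_name => ''' || principal || ''',' || chr(10) ||
       '                       principal_type => xs_acl.ptype_db));' || chr(10) ||
       'end;' || chr(10) || '/' as acl_command
from   dba_host_aces
where  grant_type = 'GRANT'
order  by host, ace_order;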

 

wish you healthy days.

export import ACL & ORA-24244 error during import

Hello everyone,

It's been a while since my last post. So, I wanted to write about exporting and importing ACL privileges. They have always been a pain (at least for me). After 12c, Oracle introduced a new and easy way of copying ACLs from DB to DB: export & import.

So here is the basic command on your source database:
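
Something like this (the user, directory and dump file names are illustrative):

expdp system@SOURCEDB full=y include=NETWORK_ACL directory=DATA_PUMP_DIR dumpfile=acl.dmp logfile=acl_exp.log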

This will export all stored ACL privileges in your database. To import those ACLs into a new DB:
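
Again, the details are illustrative:

impdp system@TARGETDB full=y directory=DATA_PUMP_DIR dumpfile=acl.dmp logfile=acl_imp.log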

So easy, but during the import you can get the "ORA-24244: invalid host or port for access control list (ACL) assignment" error.

I encountered this error while creating a new DB for a client. In my case, it happened because RESOLVE privileges shouldn't be used with a port definition!

So, somehow the previous DB had port definitions for RESOLVE privileges, and those entries were not imported. To add them to the new DB, I simply used this SQL:
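
A sketch of that statement (it simply regenerates the RESOLVE ACEs without any port definition and assumes the principals are database users or roles):

select 'begin' || chr(10) ||
       '  dbms_network_acl_admin.append_host_ace(' || chr(10) ||
       '    host => ''' || host || ''',' || chr(10) ||
       '    ace  => xs$ace_type(privilege_list => xs$name_list(''resolve''),' || chr(10) ||
       '                        principal_name => ''' || principal || ''',' || chr(10) ||
       '                        principal_type => xs_acl.ptype_db));' || chr(10) ||
       'end;' || chr(10) || '/' as fix_command
from   dba_host_aces
where  lower(privilege) = 'resolve'
and    (lower_port is not null or upper_port is not null);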

Run this SELECT on the source DB and execute its output on the target DB.

wish you all happy and healthy days.

Which Column has Changed in Every Row?

Hi,

Edit: Thanks to Chris Saxon from the AskTom team, who checked my code and warned me about a problem. To avoid overloading the PGA, the fetch_rows procedure fetches only 1024 rows at a time, so every row-set chunk is 1024 rows. That means that for the 1025th row the previous row's information is lost, because fetch_rows has ended and is called again. He suggested using the XStore to keep the data from the previous run of fetch_rows, and I added it. While doing that, I removed some unnecessary lines too. Thanks again, Chris.

PS: this is not a Polymorphic Table Function (PTF) explanation or introduction; I just wanted to share a piece of code that I wrote a while ago and find very useful (at least for me). If you want to learn more about PTFs, check https://blog.sqlora.com/en/polymorphic-table-functions-example-transposing-columns-to-rows/

PTFs are a great tool that enhances SQL. I think I only need a "pointer" definition in PL/SQL on top of PTFs and I would be very happy 🙂 Even in this example, it would be nice if I could use a pointer to the data collections, which would make the code shorter and more readable.

A few months ago, I was debugging some code because of unnecessary rows. It came down to a SQL statement with a MINUS set operator that should eliminate rows, but some rows were not eliminated as expected. I found the rows and tried to figure out which column had different values so that the row was not eliminated by the MINUS, but this table has so many NUMBER columns with many decimal digits that it was really hard to find which column(s) differed.

Since I am too lazy to compare two rows column by column, I created a Polymorphic Table Function (PTF) to find the differences for me, and it worked as I needed. I added a new VARCHAR2 column called DIFFERENT_COLUMNS to the base query (select … minus select …), put the differing column names with their old and new values into that column, and got all of them. After solving my problem, I realized that I can use this code for something even better!

While looking at the log data of a table (trigger based, flashback or flashback data archive), I have been asked many times "what changed?". Which column(s) changed? It is not easy, and it mostly requires checking row by row, column by column. So, my polymorphic table function is a perfect solution for that! Just provide the table or select statement and find all the columns that changed at every step.

Here is my basic code:

This package has its own polymorphic table function (SF_Find_Differences), which has two parameters. The first is the table or query in which you want to find differences between consecutive rows; by "previous" row I mean the default order of whatever you provide to my PTF, I am not ordering the data in any way. I will explain the second parameter later. Let's make an example:
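
A sketch of the sample data: a dummy copy of HR's employees table (the table name is illustrative), a few updates to employee 100, and a flashback versions query over it:

create table my_emp as select * from hr.employees;

update my_emp set first_name = 'Mustafa' where employee_id = 100;
commit;
update my_emp set last_name = 'KALAYCI' where employee_id = 100;
commit;
update my_emp set salary = salary * 2, commission_pct = 0.2 where employee_id = 100;
commit;

select e.*, versions_starttime, versions_endtime
from   my_emp versions between timestamp minvalue and maxvalue e
where  e.employee_id = 100
order  by versions_endtime nulls last;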

This will be my sample data (a flashback query). One problem: while looking at historical data you will mostly be checking the history of one specific row, not all rows, so you must filter the rows using the primary key or something similar. Unfortunately, you cannot pass a "subquery" to a PTF, but you can pass a WITH data source.

So, I just created a dummy emp table and then updated the employee with id 100: first I changed first_name, then last_name, then salary and commission_pct together. Here is my query:
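
A sketch of that query; SF_Find_Differences is the function mentioned above, but the package name (pkg_diff here) is hypothetical since the package source is not shown:

with h as (
  select e.*, versions_starttime, versions_endtime
  from   my_emp versions between timestamp minvalue and maxvalue e
  where  e.employee_id = 100
  order  by versions_endtime nulls last
)
select * from pkg_diff.sf_find_differences(h);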

I think this is so cool 🙂 If you check the last column, it lists what changed (which columns). As you can see, I use a WITH clause to define the specific historical data and order it using VERSIONS_ENDTIME. One problem: I wanted to see when these changes happened (versions_starttime and endtime), but since they are also columns in this query, my DIFFERENT_COLUMNS reports them as changed columns too! That is where the second parameter gets involved: it is the list of columns EXCLUDED from the change check. So:
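
The same query, this time excluding the version pseudo-columns from the comparison through the second parameter:

with h as (
  select e.*, versions_starttime, versions_endtime
  from   my_emp versions between timestamp minvalue and maxvalue e
  where  e.employee_id = 100
  order  by versions_endtime nulls last
)
select * from pkg_diff.sf_find_differences(h, columns(versions_starttime, versions_endtime));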

Much cleaner. On the second row, Steven becomes Mustafa; on the third row, King becomes KALAYCI; and on the last one, salary and commission_pct change. It is much easier to track, or to list in your application.

Of course, this is not complete code. I only check VARCHAR2, NUMBER, DATE and TIMESTAMP columns, not the other types, and I didn't work on "formatting" the DATE and TIMESTAMP values, which could be done. Still, it is pretty nice.

I wanted to share this with the whole community. I hope it helps.

Edit: while checking whether the old and new values are different, I used IF conditions, but I would have liked to use the SYS_OP_MAP_NONNULL function; unfortunately, SYS_OP_MAP_NONNULL is not defined in PL/SQL.

Edit: I made a change to the column exclusion; instead of a FOR loop, I now use the "member of" clause.

Wish you healthy, corona & earthquake free days. Unfortunately, an earthquake has happened in my country and neighboring countries. Rescue teams are still trying to save people. I hope this never happens anywhere again and no one experiences this pain.

SQL Macro Part2 column_value = nvl(:param, column_value)

Hi,

Since I am working on "SQL Macro", I keep testing things, so please see my previous post first: http://mustafakalayci.me/2020/10/28/sql-macro-is-finally-here/

As mentioned in the previous post: is it possible to get rid of column_value = nvl(:param, column_value) conditions? They become a huge pain in big and complex queries. I provided a very cumbersome solution in the previous post, so I wanted to test and do more about it.

As I said, I wasn't able to read any parameter value in a SQL macro except numbers. So today I tried COLLECTIONS (arrays) and got different results. Let's start with COLUMNS: we can send a column list using the DBMS_TF package:
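
A sketch of such a macro (names are illustrative):

create or replace function sf_group_by (
    p_table   in dbms_tf.table_t,
    p_columns in dbms_tf.columns_t )
  return varchar2 sql_macro
is
begin
  return 'select ' || p_columns(1) || ', count(*) cnt from p_table group by ' || p_columns(1);
end;
/

select * from sf_group_by(hr.employees, columns(department_id));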

So we are able to group any table by a column and get the count. Since P_Columns is an array, we can send more columns in it, but how can we use them dynamically in the SQL macro?
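
The same macro, with the column list built in a loop over the whole collection:

create or replace function sf_group_by (
    p_table   in dbms_tf.table_t,
    p_columns in dbms_tf.columns_t )
  return varchar2 sql_macro
is
  v_cols varchar2(4000);
begin
  for i in 1 .. p_columns.count loop
    v_cols := v_cols || case when i > 1 then ', ' end || p_columns(i);
  end loop;
  return 'select ' || v_cols || ', count(*) cnt from p_table group by ' || v_cols;
end;
/

select * from sf_group_by(hr.employees, columns(department_id, job_id));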

Great, we can use PL/SQL to generate the column list and use it in the SQL statement. Of course, this could be a little bit dangerous, because we have started to use "string concatenation", but only a little bit, since the columns in the DBMS_TF.COLUMNS_T array must be actual columns of the preceding table parameter, or you get an error.

Since we can use collections and their contents are passed to PL/SQL (unlike a VARCHAR2 parameter), maybe we can use a user-defined collection to pass parameters, and we can! That really excited me at first (but not for long).

So, since I wanted to eliminate unnecessary conditions like "column1 = nvl(:param1, column1)" when :param1 is null, I planned to pass the column names and parameter values and check them in the SQL macro: if a parameter value is not null, add it to the SQL condition.

Here is a VERY basic version:
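
A sketch of it. Sf_Generate_Sql and P_Where are the names referenced later in the post; the collection types and the rest are my illustration, and it handles only numeric equality conditions:

create or replace type t_col_value as object (col_name varchar2(128), col_value varchar2(4000));
/
create or replace type t_col_value_list as table of t_col_value;
/

create or replace function sf_generate_sql (
    p       in dbms_tf.table_t,
    p_where in t_col_value_list )
  return varchar2 sql_macro
is
  v_sql varchar2(32767) := 'select * from p';
  v_sep varchar2(10)    := ' where ';
begin
  for i in 1 .. p_where.count loop
    if p_where(i).col_value is not null then
      v_sql := v_sql || v_sep || p_where(i).col_name || ' = ' || p_where(i).col_value;
      v_sep := ' and ';
    end if;
  end loop;
  return v_sql;
end;
/

-- only non-null values end up in the where clause
select *
from   sf_generate_sql(hr.employees,
                       t_col_value_list(t_col_value('EMPLOYEE_ID', '100'),
                                        t_col_value('DEPARTMENT_ID', null)));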

Voila! If I send null as the value of employee_id, just the "select * from hr.employees" statement is run and a full table scan is done, but when I send 100 as the employee_id value, the "where employee_id = 100" condition is added.

PS: I am aware that my conditions are only equality conditions, with no AND/OR operator selection or condition grouping (parentheses for related OR conditions) etc. This is just an example, after all; I will write a proper version of it when I decide to use it.

As you can see, I send 90 as the department_id value and it is in my conditions, but the value of salary is null and it is not in my conditions. So, did we get rid of those conditions? Well, not exactly.

Even if it seems to work (and it is working), I did not use any "bind variables" in my examples; those parameters are constant values, and so is the query. This is the actual code that runs for my last example:

select “EMPLOYEE_ID”,”DEPARTMENT_ID”,”FIRST_NAME”,”SALARY” from p where EMPLOYEE_ID = 100 and DEPARTMENT_ID = 90

So, no bind variables. Let's use one:
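
For example, in SQL*Plus (with the illustrative objects sketched earlier; this is the case that fails):

var emp_id number
exec :emp_id := 100

select *
from   sf_generate_sql(hr.employees,
                       t_col_value_list(t_col_value('EMPLOYEE_ID', :emp_id)));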

Unfortunately, when I used a bind variable, I got an error from line 10 of my Sf_Generate_Sql SQL macro, and line 10 is where the P_Where collection is read. So somehow SQL does not initialize this collection when a bind variable is used, and/or it is not passed to the SQL macro. Even the PL/SQL version does not work:
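
The attempted PL/SQL version looks like this (it fails the same way):

declare
  v_emp_id number := 100;
begin
  for r in (select *
            from   sf_generate_sql(hr.employees,
                                    t_col_value_list(t_col_value('EMPLOYEE_ID', v_emp_id)))) loop
    dbms_output.put_line(r.employee_id);
  end loop;
end;
/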

So I cannot use it right now. Since this is a 20c feature, maybe we should wait for 21c for more details and more options.

Edit:

I also wanted to add one more example. Let's get only the columns of a specified data type from any table:
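
A sketch of such a macro, using the column metadata that DBMS_TF exposes (names and the type codes are illustrative):

create or replace function sf_columns_of_type (
    p_table in dbms_tf.table_t,
    p_type  in number )   -- 1 = number, 2 = varchar2
  return varchar2 sql_macro
is
  v_cols varchar2(4000);
begin
  for i in 1 .. p_table.column.count loop
    if (p_type = 1 and p_table.column(i).description.type = dbms_tf.type_number)
       or (p_type = 2 and p_table.column(i).description.type = dbms_tf.type_varchar2) then
      v_cols := v_cols || case when v_cols is not null then ', ' end
                       || p_table.column(i).description.name;
    end if;
  end loop;
  return 'select ' || v_cols || ' from p_table';
end;
/

select * from sf_columns_of_type(hr.employees, 1);  -- only the NUMBER columns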

I defined 1 as NUMBER, 2 as VARCHAR2, etc., so I can get only the NUMBER columns from a table (any table, actually) or only the VARCHAR2 columns.

SQL macros are a great functionality improvement for SQL, in my opinion. Since 20c is not accessible yet, I cannot try SCALAR SQL macros, but I am confident they will save us from writing the same SQL fragments for many different SQL statements.

wish you healthy days.