Tables with Memoptimize for Read (fast lookup)

Mustafa, 2022-02-15

Hi,

I would like to share my initial thoughts about "memoptimize for read" (fast lookup) tables in this post. Fast lookup tables arrived with 18c, and they provide faster access to a table when you query it by primary key with an equality condition. Conceptually this is a single table hash cluster: instead of using the primary key index, Oracle builds an in-memory hash index, and probing a hash index should be much faster than traversing a unique B*tree index.

First, create the test environment:

sqlplus / as sysdba

alter system set memoptimize_pool_size=200M scope=spfile;
shutdown immediate;
startup;

To use a fast lookup table you must set the memoptimize_pool_size parameter to a value greater than 0. This is a static memory component in the SGA that stores the hash index of the table. It is not resizable, which means it will not grow or shrink under automatic shared memory management.

Create a table for fast lookup:

create table tmp_optimize (
    id          number primary key,
    object_name varchar2(128),
    object_type varchar2(128),
    created     date)
  segment creation immediate
  memoptimize for read;

insert into tmp_optimize
select rownum, object_name, object_type, created
from   dba_objects d,
       (select * from dual connect by level < 3);
commit;

Segment creation must be immediate, otherwise you will get an error! Your table must have a primary key as well; since the whole structure is based on the PK, it is only logical to require one. This inserted roughly 146,000 rows into the table. (EDIT: I also repeated these tests with 14 million rows, and the results were similar.)

This is not enough, though: you must also populate the table into the memoptimize pool using the dbms_memoptimize package.
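Before populating, it is worth a quick sanity check that the pool was actually allocated after the restart. A minimal sketch, assuming the component is visible in V$SGASTAT; the exact component name may differ between versions, so I match loosely:

-- pool/component name is an assumption; check what your version reports
select name, round(bytes / 1024 / 1024) mb
from   v$sgastat
where  lower(name) like '%memoptimize%';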
With the pool in place, populate the table:

exec dbms_memoptimize.populate('MUSTAFA', 'TMP_OPTIMIZE');

Let's see what is happening now:

sqlplus / as sysdba
set autotrace on
select * from tmp_optimize where id = 4689;

        ID OBJECT_NAME  OBJECT_TYPE CREATED
---------- ------------ ----------- ---------
      4689 DBA_EDITIONS VIEW        30-MAY-19

Elapsed: 00:00:00.13

Execution Plan
----------------------------------------------------------
Plan hash value: 2577027856

-------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name         | Rows | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |              |    1 |   154 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID READ OPTIM| TMP_OPTIMIZE |    1 |   154 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN READ OPTIM         | SYS_C008357  |    1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=4689)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        810  bytes sent via SQL*Net to client
        404  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

I skipped the first execution's statistics! You will see a few consistent gets on the first run, but consecutive executions look like the output above. As you can see, there are new execution plan steps, "TABLE ACCESS BY INDEX ROWID READ OPTIM" and "INDEX UNIQUE SCAN READ OPTIM". These steps tell us that a hash index is used to retrieve the data, like a key-value lookup. There are no consistent gets at all. This is amazing: almost nothing is read for this query, and that should make it so much faster. Does it? Well, not as much as I expected, at least.
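One way to see whether lookups are really being served from the memoptimize pool is to watch the session statistics. I believe 18c and later expose counters whose names start with "memopt r"; treat the exact statistic names as an assumption and verify them against V$STATNAME on your version. A minimal sketch:

-- memoptimize-for-read counters for the current session
-- (statistic names are an assumption; verify them in v$statname)
select n.name, s.value
from   v$mystat s
       join v$statname n on n.statistic# = s.statistic#
where  n.name like 'memopt r%';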
Actually, I found several blog posts mentioning fast lookup tables, but none of them included a performance test. So I wanted to test the speed myself, but there is a problem: for now, there are many limitations on fast lookup tables, and one of them is that you cannot use them via PL/SQL. This means I cannot simply write a PL/SQL procedure that runs a query 100,000 times and compare timings. To run the test, I wrote a small Python script instead.

Warning: I am no Python developer; I just know enough Python to solve my own small problems. The code below probably has some rough edges, but I only use it for this test.

Here is the Python code I used:

import time
from threading import Thread

import cx_Oracle


def get_row():
    dsn_tns = cx_Oracle.makedsn('localhost', '1521', service_name='orcl')
    conn = cx_Oracle.connect(user='mustafa', password='mustafa', dsn=dsn_tns)  # amazing security on that password
    c = conn.cursor()
    start_time = time.time()
    # run the lookup 100,000 times, cycling the bind value through 1..10000
    for k in range(100000):
        v = (k % 10000) + 1
        c.execute("select /*+ findme */ * from tmp_optimize where id = :myid", myid=v)
    print("Processing time of: {} seconds.".format(time.time() - start_time))
    conn.close()


threads = []
for i in range(9):  # 9 concurrent sessions
    thread = Thread(target=get_row)
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()

print('done!')

After changing the code many times, this is the final version I used. It creates 9 threads, and every thread runs "select * from tmp_optimize where id = :myid" 100,000 times, with the myid bind value cycling between 1 and 10,000. Each thread prints its total execution time for its 100,000 runs. Here is the result:

Processing time of: 16.03737473487854 seconds.
Processing time of: 16.03542184829712 seconds.
Processing time of: 16.06372594833374 seconds.
Processing time of: 16.021758556365967 seconds.
Processing time of: 16.089550113677979 seconds.
Processing time of: 16.11350131034851 seconds.
Processing time of: 16.094431495666504 seconds.
Processing time of: 16.009099006652832 seconds.
Processing time of: 16.085646486282349 seconds.

Timings ranged between 16.0 and 16.1 seconds for the fast lookup table.
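As a side note, the /*+ findme */ comment in the benchmark query makes it easy to pull server-side timing for the cursor out of V$SQL, which takes the client round trips out of the comparison. A minimal sketch (the second filter just keeps this probe query itself out of the result):

-- average server-side elapsed time per execution, in microseconds
select sql_id,
       executions,
       round(elapsed_time / nullif(executions, 0)) avg_elapsed_us
from   v$sql
where  sql_text like '%findme%'
and    sql_text not like '%v$sql%';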
How about a normal table with a regular primary key index?

create table tmp_not_optimize (
    id          number primary key,
    object_name varchar2(128),
    object_type varchar2(128),
    created     date)
  segment creation immediate;

insert into tmp_not_optimize
select rownum, object_name, object_type, created
from   dba_objects,
       (select * from dual connect by level < 3);
commit;

I used the same Python code, just changing the table name from "tmp_optimize" to "tmp_not_optimize", and the result looks like this:

Processing time of: 17.14604425430298 seconds.
Processing time of: 17.22217082977295 seconds.
Processing time of: 17.25145125389099 seconds.
Processing time of: 17.258281707763672 seconds.
Processing time of: 17.267066955566406 seconds.
Processing time of: 17.27292227745056 seconds.
Processing time of: 17.186059951782227 seconds.
Processing time of: 17.22607660293579 seconds.
Processing time of: 17.190939903259277 seconds.

Timings are between 17.1 and 17.3 seconds, so the fast lookup table gives roughly a 6% gain, give or take. Keep in mind that at 16-17 seconds per 100,000 executions, each execution costs about 160-170 microseconds, and much of that is client/server round-trip overhead, which dilutes whatever the server-side path saves. I am not sure whether 6% is a good gain or not, because to use fast lookup tables actively we need to dedicate a serious amount of memory. There are also a few limitations: fast lookup cannot be used with partitioned or compressed tables, cannot be used from PL/SQL (not yet, anyway), and only equality conditions on the primary key can take advantage of it.

During my tests I did not check wait events. I would expect fewer latch waits on the fast lookup table, since no consistent gets occur. Maybe there is something I missed, and I will do more tests on these tables (I have not even tried "memoptimize for write" (fast ingest) yet).

Wish you all healthy days.

Categories: 18c, 19c, 21c, Administration, Development, Performance
Tags: fast lookup, memoptimize for read, memoptimize performance, memoptimize table, table performance