Keep Learning Keep Living

Tables with Memoptimize for Read (fast lookup)

Mustafa, 2022-02-15

Hi,

I would like to share my initial thoughts about Memoptimize for Read (fast lookup) tables in this post. Fast lookup tables arrived with 18c, and they provide faster access to a table when you query it by primary key with an equality condition. This is basically a single table hash cluster: instead of using the primary key index, Oracle creates a hash index, and searching a hash index should be much faster than searching a unique index.

First, create the test environment.

Oracle PL/SQL
sqlplus / as sysdba
 
alter system set memoptimize_pool_size=200M scope=spfile;
 
shutdown immediate;
 
startup;

To use a fast lookup table, you must set the memoptimize_pool_size parameter to a value greater than 0. This is a static memory component in the SGA that stores the hash index of the table. This memory component is not resizable, which means it won't grow or shrink with Automatic Shared Memory Management.
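After the restart, you can confirm the parameter took effect with a quick sanity check from SQL*Plus (the pool is carved out of the SGA statically at startup):

```sql
-- verify the static pool size; 0 means fast lookup is effectively disabled
show parameter memoptimize_pool_size
```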

Create a table for fast lookup:

Oracle PL/SQL
create table tmp_optimize (
  id number primary key,
  object_name varchar2(128),
  object_type varchar2(128),
  created date)
  segment creation immediate memoptimize for read ;
 
insert into tmp_optimize
  select rownum, object_name, object_type, created
  from dba_objects d, (select * from dual connect by level < 3);
 
commit;

Segment creation must be immediate, otherwise you will get an error! Your table must have a primary key as well; since the whole structure is based on the PK, it is logical to have one. I inserted roughly 146,000 rows into the table.

EDIT: I also tried these tests with 14 million rows, but the results are similar.

This is not enough! You must use the dbms_memoptimize package to populate the hash index before the table works as a fast lookup table.

Oracle PL/SQL
exec dbms_memoptimize.populate('MUSTAFA', 'TMP_OPTIMIZE');

Let's see what happens now:

Oracle PL/SQL
sqlplus / as sysdba
 
set autotrace on
 
select * from tmp_optimize where id = 4689;
 
        ID OBJECT_NAME  OBJECT_TYPE CREATED
---------- ------------ ----------- ---------
      4689 DBA_EDITIONS VIEW        30-MAY-19
 
 
Elapsed: 00:00:00.13
 
Execution Plan
----------------------------------------------------------
Plan hash value: 2577027856
 
-------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |              |     1 |   154 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID READ OPTIM| TMP_OPTIMIZE |     1 |   154 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN READ OPTIM         | SYS_C008357  |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("ID"=4689)
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        810  bytes sent via SQL*Net to client
        404  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

I skipped the first execution's statistics! You will see a small number of consistent gets on the first execution, but consecutive executions will look like the above.

As you see, there are new execution plan steps, "TABLE ACCESS BY INDEX ROWID READ OPTIM" and "INDEX UNIQUE SCAN READ OPTIM". These steps tell us that a hash index is used to retrieve the data, as with key-value pairs. There are no "consistent gets". This is amazing: almost nothing is read for this query, and that should make it much faster. Is it? Well, I couldn't see it, at least not as much as I expected. I found different blog posts mentioning fast lookup tables, but none of them has done a performance test.
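To picture why a hash index should beat a B-tree on equality lookups: a unique index walks a few tree levels per lookup (roughly log n comparisons), while a hash index computes a bucket address directly. A rough Python analogy (not Oracle's implementation, just an illustration of the two access patterns):

```python
from bisect import bisect_left

# hash index analogy: a dict, keyed by primary key
rows = {k: "object_{}".format(k) for k in range(1, 146001)}
# B-tree analogy: an ordered key list searched with binary search
keys = sorted(rows)

def hash_lookup(k):
    # one hash computation, O(1) on average
    return rows.get(k)

def btree_lookup(k):
    # binary search: ~log2(146000), about 18 comparisons
    i = bisect_left(keys, k)
    return rows[keys[i]] if i < len(keys) and keys[i] == k else None

assert hash_lookup(4689) == btree_lookup(4689) == "object_4689"
```

Both return the same row; the hash path just gets there with far fewer comparisons.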

So, I wanted to test the speed, but there is a problem: for now, there are many limitations on fast lookup tables, and one of them is that you cannot use them via PL/SQL. This means I can't create a simple PL/SQL procedure that runs a query 100,000 times and compare timings. To do the test, I wrote a small Python script.

Warning: I am no Python developer; I only know Python well enough to solve my small problems. The code I share below probably has bugs, but I use it only for this test.

Here is the Python code I used:

Python
import cx_Oracle
import time
from threading import Thread


def get_row():
    # each thread opens its own connection
    dsn_tns = cx_Oracle.makedsn('localhost', '1521', service_name='orcl')
    conn = cx_Oracle.connect(user=r'mustafa', password='mustafa', dsn=dsn_tns) # amazing security on password

    c = conn.cursor()
    start_time = time.time()

    # run the lookup ~100,000 times; ids cycle through 1..10000
    for k in range(1, 100000):
        v = (k % 10000) + 1
        c.execute("select /*+ findme */ * from tmp_optimize where id = :myid", myid=v)
    print("Processing time of: {} seconds.".format(time.time() - start_time))
    conn.close()


threads = list()

# start 9 worker threads
for i in range(1, 10):
    thread = Thread(target=get_row)
    threads.append(thread)
    thread.start()

# wait for all of them to finish
for thread in threads:
    thread.join()

print('done!')
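The id pattern that loop generates can be sanity-checked in isolation: (k % 10000) + 1 cycles through 1..10000, so every thread hammers the same 10,000 primary keys over and over (and range(1, 100000) actually executes 99,999 times, close enough to 100,000 for this test):

```python
ids = [(k % 10000) + 1 for k in range(1, 100000)]
assert len(ids) == 99999                     # range(1, 100000) runs 99,999 times
assert min(ids) == 1 and max(ids) == 10000   # every id in 1..10000 gets hit
```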

After changing the code many times, I settled on this final version. It creates 9 threads, and every thread runs the query "select * from tmp_optimize where id = :myid" 100,000 times, with the myid value cycling between 1 and 10,000 in the loop. Each thread prints the total execution time of its 100,000 runs. Here is the result:

Oracle PL/SQL
Processing time of: 16.03737473487854 seconds.
Processing time of: 16.03542184829712 seconds.
Processing time of: 16.06372594833374 seconds.
Processing time of: 16.021758556365967 seconds.
Processing time of: 16.089550113677979 seconds.
Processing time of: 16.11350131034851 seconds.
Processing time of: 16.094431495666504 seconds.
Processing time of: 16.009099006652832 seconds.
Processing time of: 16.085646486282349 seconds.

The timing is between 16.0 and 16.1 seconds for the fast lookup table. How about a normal table with a normal primary key?

Oracle PL/SQL
create table tmp_not_optimize (
  id number primary key,
  object_name varchar2(128),
  object_type varchar2(128),
  created date)
  segment creation immediate;
 
insert into tmp_not_optimize
  select rownum, object_name, object_type, created
  from dba_objects, (select * from dual connect by level < 3);
commit;

I used the same Python code, only changing the table name from "tmp_optimize" to "tmp_not_optimize", and the result looks like this:

Oracle PL/SQL
Processing time of: 17.14604425430298 seconds.
Processing time of: 17.22217082977295 seconds.
Processing time of: 17.25145125389099 seconds.
Processing time of: 17.258281707763672 seconds.
Processing time of: 17.267066955566406 seconds.
Processing time of: 17.27292227745056 seconds.
Processing time of: 17.186059951782227 seconds.
Processing time of: 17.22607660293579 seconds.
Processing time of: 17.190939903259277 seconds.

It is between 17.1 and 17.3 seconds, so there is a 6% performance gain (give or take).
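For reference, the gain works out like this from the rough midpoints of the two run sets (midpoint values are my approximation from the timings above):

```python
fast = 16.06    # rough midpoint of the fast lookup timings (16.0-16.1 s)
normal = 17.22  # rough midpoint of the normal PK timings (17.1-17.3 s)

gain_pct = (normal - fast) / normal * 100
assert 6.0 < gain_pct < 7.5   # about 6-7 percent improvement
```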

I am not sure whether 6% is a good gain or not, because to use fast lookup tables actively we need to set aside a serious amount of memory. Also, there are a few limitations: they cannot be used with partitioned or compressed tables, and cannot be used in PL/SQL (not yet, anyway). Only equality conditions on the primary key can take advantage of fast lookup.

During my tests I didn't check wait events. I expect fewer latch waits on the fast lookup table, since no consistent gets occur. Maybe there is something I missed, and I will do more tests on these tables (I haven't even tested "memoptimize for write" (fast ingest) yet).

Wish you all healthy days.

18c 19c 21c Administration Development Performance | fast lookup, memoptimize for read, memoptimize performance, memoptimize table, table performance
