Tables with Memoptimize for Write (fast ingest)

Mustafa, 2022-02-23 (updated 2022-03-03)

Hi there, this is the second part of this post: https://mustafakalayci.me/2022/02/15/tables-with-memoptimize-for-read/

Oracle added a feature called memoptimized tables, aimed mostly at devices such as IoT sensors: fast lookup tables and fast ingest tables. I already covered fast lookup (memoptimize for read) tables in my previous post, and now I want to write about fast ingest tables. Fast ingest tables are optimized for inserting new data. Small devices (like IoT sensors) tend to send data far more often than they process or query it, and there are billions of them. For fast lookup tables (see my previous post), PL/SQL is not an option, but fortunately we can use fast ingest tables in PL/SQL, so my tests will be based on PL/SQL code.

As with fast lookup tables, you mark the table as a fast ingest table (MEMOPTIMIZE FOR WRITE), and while inserting you use a special hint, MEMOPTIMIZE_WRITE. This way Oracle knows this is a fast ingest insert and completes it accordingly.

How does it work? Your insert statements (either row by row or bulk) are staged in the large pool (a memory component of the SGA), and after a threshold is passed, the data is written to the table (as if by a bulk insert). This provides a significant performance improvement, but as always, nothing is perfect and there is always a "but". This will increase your write speed, "but" your data is no longer immediate! You might not see your data just after inserting and committing it, because it is cached in the large pool and written to the table some time later. Secondly, you might lose some data, and this is a big "but"! Since the data sits in the large pool for a certain amount of time, anything that can go wrong, like an instance crash, may cause some of it to be lost. You must be careful while using this feature. So let's test and see the difference.
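Because committed rows can sit in the large pool before they reach the table, Oracle ships helper procedures to drain the deferred writes on demand. A minimal sketch, assuming the 19c DBMS_MEMOPTIMIZE and DBMS_MEMOPTIMIZE_ADMIN packages (check the documentation for your exact release and the required privileges):

```sql
-- Sketch: drain this session's buffered fast-ingest rows to the table,
-- so a following query in this session can see them
EXEC DBMS_MEMOPTIMIZE.WRITE_END;

-- Sketch (DBA-level): force ALL buffered fast-ingest data, from every
-- session, to be written to disk
EXEC DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH;
```

Calling one of these before a verification query is a practical way to work around the "data is not immediate" behavior during testing; it does not remove the crash-loss risk for data still in memory.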
To demonstrate this, I will shut down and start up my database to clear everything in memory, and then connect as the "mustafa" user.

C:\Users\musta>sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 23 11:24:42 2022
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 2147482424 bytes
Fixed Size                  9030456 bytes
Variable Size            1761607680 bytes
Database Buffers          369098752 bytes
Redo Buffers                7745536 bytes
Database mounted.
Database opened.
SQL>

OK, before starting my test, I would like to check my large pool size first.
The large_pool_size parameter is 0, as usual, because I am using ASMM (sga_target is set), so I need to check the dynamic components view:

select round(current_size/1024/1024,2) current_size,
       round(min_size/1024/1024,2) min_size,
       round(max_size/1024/1024,2) max_size
from   v$memory_dynamic_components
where  component = 'large pool';

CURRENT_SIZE   MIN_SIZE   MAX_SIZE
------------ ---------- ----------
          96         96        768

Sizes are in megabytes; as you see, I have 96MB assigned to the large pool. Let's create two tables, one memoptimized for write (fast ingest) and one standard heap table:

CREATE TABLE test_fast_ingest (
   id NUMBER(20) PRIMARY KEY,
   test_col VARCHAR2(15))
segment creation immediate
MEMOPTIMIZE FOR WRITE;

CREATE TABLE test_standard (
   id NUMBER(20) PRIMARY KEY,
   test_col VARCHAR2(15));

Now I will insert row by row and commit at the end:

begin
  for i in 1..1000000 loop
     INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (i, 'test');
  end loop;
  commit;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:25.44

-----------------------------------------------------

begin
  for i in 1..1000000 loop
     INSERT INTO test_standard VALUES (i, 'test');
  end loop;
  commit;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:25.54

As you see, I used the MEMOPTIMIZE_WRITE hint in the first PL/SQL block, for the test_fast_ingest table. Of course, the timings draw our attention:

fast ingest  : 25.44 seconds
standard heap: 25.54 seconds

Huh! Is there a problem? Actually no; let's check the large pool size again:

select round(current_size/1024/1024,2) current_size,
       round(min_size/1024/1024,2) min_size,
       round(max_size/1024/1024,2) max_size
from   v$memory_dynamic_components
where  component = 'large pool';

CURRENT_SIZE   MIN_SIZE   MAX_SIZE
------------ ---------- ----------
         768         96        768

As you see, the large pool has reached its limit; it is now 768MB. As I said, memoptimize for write uses the large pool, and since it was not big enough to do the job, we waited for the large pool's grow operation. Let's run the same script again:

truncate table test_fast_ingest;
truncate table test_standard;
--truncate tables first

begin
  for i in 1..1000000 loop
     INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (i, 'test');
  end loop;
  commit;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:12.02

begin
  for i in 1..1000000 loop
     INSERT INTO test_standard VALUES (i, 'test');
  end loop;
  commit;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:26.23

There it is: the fast ingest table is more than two times faster than the standard heap table. This is a major performance improvement. Let's try committing row by row:

truncate table test_fast_ingest;
truncate table test_standard;

begin
  for i in 1..1000000 loop
     INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (i, 'test');
     commit;
  end loop;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:27.24

----------------------------------------------------------------

begin
  for i in 1..1000000 loop
     INSERT INTO test_standard VALUES (i, 'test');
     commit;
  end loop;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:01:08.20

Committing on every row is a hard job, and even here the fast ingest table does great: with row-by-row commits, the fast ingest time increased about 2.25 times while the heap table time increased about 2.6 times. If you have many devices sending many insert statements, a fast ingest table will be able to handle it.
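Besides v$memory_dynamic_components, there is a dedicated view for the fast ingest buffer inside the large pool. A sketch, assuming the 19c V$MEMOPTIMIZE_WRITE_AREA view (column names may differ slightly between releases, so verify against your version's reference):

```sql
-- Sketch: inspect the fast ingest write area in the large pool
select round(total_size/1024/1024,2) total_mb,   -- space allocated to fast ingest
       round(used_space/1024/1024,2) used_mb,    -- space currently holding buffered rows
       num_writes,                               -- rows/writes buffered so far
       num_writers                               -- sessions using fast ingest
from   v$memoptimize_write_area;
```

Watching used_mb during a load run shows whether the buffer is draining fast enough or the large pool needs to grow, which is exactly what happened in the first test above.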
Even so, increasing the large pool size up front would improve performance. I have already mentioned some cons, like inserts not being immediate (even when they are committed), but there is one more! Let's truncate the table and do some inserts; please remember that the ID column is the PRIMARY KEY.

SQL> INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (1,'test');

1 row created.

Elapsed: 00:00:00.00
SQL> INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (1,'test');

1 row created.

Elapsed: 00:00:00.00
SQL> INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest VALUES (1,'test');

1 row created.

Elapsed: 00:00:00.00
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00

As you see, I inserted three rows that all have the same ID, so there should be a unique constraint violation, right? Well, in my tests there wasn't, probably because the data is not actually inserted yet: it is gathered in the large pool and inserted later, and since it is not my session doing the inserts, my session never receives a unique constraint violation for this action. You must be careful while using this.

For me, fast ingest tables give more satisfying results than fast lookup tables; we can at least double the performance here! And it works with bulk inserts too:

SQL> truncate table test_fast_ingest;

Table truncated.

SQL> truncate table test_standard;

Table truncated.

SQL> create table tmp_Base as select level id, 'test' txt from dual connect by level <=1000000;

Table created.
SQL> set timing on
SQL> INSERT /*+ MEMOPTIMIZE_WRITE */ INTO test_fast_ingest
  2  select * from tmp_Base;

1000000 rows created.

Elapsed: 00:00:00.33
SQL>
SQL> INSERT INTO test_standard
  2  select * from tmp_Base;

1000000 rows created.

Elapsed: 00:00:01.41
SQL>

I created a base table (tmp_Base) with 1M rows and used it to insert the data. As a result, the fast ingest table is about 4.2 times faster than the heap table.

I expect memoptimized tables (both read and write) to be developed much further and to become more popular. Just be careful using fast ingest, because it has significant cons; again, you might lose data even after committing it!

Wish you all healthy days.
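A natural follow-up to the duplicate-primary-key experiment above is to check what actually lands on disk once the deferred writes are drained. A hypothetical check, assuming the DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH procedure from the 19c documentation; my understanding is that rows violating a constraint are simply discarded during the background write, with no error returned to the inserting session, so verify this behavior on your own release:

```sql
-- Sketch: force all buffered fast-ingest data to disk (needs DBA privileges)
EXEC DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH;

-- After the three identical inserts (id = 1), count the survivors;
-- if constraint-violating rows are dropped during the drain,
-- this should return 1, not 3
select count(*) from test_fast_ingest where id = 1;
```

The important point either way: with fast ingest, errors that would normally surface at INSERT or COMMIT time can happen silently later, which is exactly why the feature demands careful use.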