In my first blog post I will talk about one of the new features added in Oracle 12.1.0.2: automatic big table caching.
As the name suggests, automatic big table caching enables caching of big tables in memory, more precisely in a portion of the buffer cache (avoiding direct path reads). First, what is considered a big table, and how were table scans handled in the previous release, 11g (when does direct path read kick in)? Then we will explore this new feature.
All tests were done on a 12.1.0.2 single-instance database (using the default buffer cache and default parameters).
Table scan with serial query:
In version 11g Oracle introduced "direct path read" for serial queries, to avoid wiping out many buffers from the buffer cache with a single large table scan (larger than _small_table_threshold). The decision to do a direct path read was covered on many blogs, for example by Mihajlo Tekic in his post: http://oracle-tech.blogspot.com/2014/04/directreaddecisionstatistcsdriven.html
“if any of the following is true, oracle will scan the data in the buffer cache, otherwise direct path reads will be performed:
- the number of blocks in the segment is lower or equal than _small_table_threshold
- at least 50% of the segment data blocks are in the buffer cache
- at least 25% of the data blocks are dirty “
UPDATE 23/02/2016: for more in-depth info on the direct path read decision, please check Frits Hoogland's blog post.
Table scan with parallel query:
In version 11g, parallel processing bypassed the database buffer cache if the size of the table was bigger than the value of "_parallel_min_table_threshold" and In-Memory Parallel Execution did not kick in (for more info on In-Memory Parallel Execution see Tanel Poder's blog post: http://blog.tanelpoder.com/2013/11/27/when-do-oracle-parallel-execution-slaves-issue-buffered-physical-reads-part-1/).
Big table caching overview:
The adaptive table scan behavior can, in certain cases, negatively impact table scans in data warehouses and databases with mixed workloads, especially for serial queries. For example, in a data warehouse, big tables accessed by serial queries will not be cached even though they are heavily used (unless the keep buffer pool is used); this can badly impact the performance of the database (repeated disk reads, delayed block cleanout, object checkpoints, etc.). As stated in the documentation: "Automatic big table caching is designed primarily to enhance performance for data warehouse workloads, but it also improves performance in mixed workloads."
The algorithm used to cache data is based on temperature (how frequently the data is accessed). It tracks only big tables, not small ones.
If the table to be cached does not fully fit into the big table cache, then it will be only partially cached. Objects with a higher temperature age out the blocks of colder objects.
As you may know, caching of large objects was already implemented in the previous release for parallel processing, via In-Memory Parallel Execution. So how do these features interact when used together? After some testing (I have done only a couple of tests, so maybe I missed some things), it appears that when In-Memory Parallel Execution is used with big table caching enabled, In-Memory Parallel Execution uses the limit set by DB_BIG_TABLE_CACHE_PERCENT_TARGET to cap the amount of data to be cached, and no longer refers to "_parallel_cluster_cache_pct" for table scans.
To enable it:
- For serial queries, set DB_BIG_TABLE_CACHE_PERCENT_TARGET
- For parallel queries, set DB_BIG_TABLE_CACHE_PERCENT_TARGET and set PARALLEL_DEGREE_POLICY to AUTO or ADAPTIVE (so In-Memory Parallel Execution is enabled at the same time as big table caching)
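As a sketch, the two settings above could be applied like this (the 50% target is just an example value, not a recommendation):

```sql
-- Reserve up to 50% of the buffer cache for big table caching
ALTER SYSTEM SET db_big_table_cache_percent_target = 50;

-- For parallel queries, also enable automatic DOP so that
-- In-Memory Parallel Execution can kick in
ALTER SYSTEM SET parallel_degree_policy = AUTO;
```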
Automatic big table caching is not supported for serial queries in an Oracle RAC environment. Also, it seems that it does not work when using the keep buffer pool, which I think is reasonable because the tables will already be cached in that case (no direct path read unless the table is bigger than the keep pool).
Let's try an example with a serial query:
We will be using these views:
- V$BT_SCAN_CACHE shows the parameters and status of the big table cache section.
- V$BT_SCAN_OBJ_TEMPS shows the active objects currently tracked by the big table cache.
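For reference, the two views can be queried like this (column lists abbreviated to the ones used in this post):

```sql
-- Big table cache status: target %, tracked objects, allocated buffers,
-- and the minimum temperature required for caching
SELECT bt_cache_target, object_count, memory_buf_alloc, min_cached_temp
  FROM v$bt_scan_cache;

-- Objects currently tracked, from hottest to coldest
SELECT dataobj#, size_in_blks, temperature, policy, cached_in_mem
  FROM v$bt_scan_obj_temps
 ORDER BY temperature DESC;
```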
I will begin by setting DB_BIG_TABLE_CACHE_PERCENT_TARGET=0 (the default).
Then check the value of "_small_table_threshold" on my instance:
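Since it is a hidden parameter, it is not visible in v$parameter; one way to read it is the classic x$ksppi query, run as SYS:

```sql
-- Must be run as SYS: hidden (underscore) parameters are not
-- exposed through v$parameter
SELECT a.ksppinm AS parameter, b.ksppstvl AS value
  FROM x$ksppi a, x$ksppcv b
 WHERE a.indx = b.indx
   AND a.ksppinm = '_small_table_threshold';
```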
Create a "big" table:
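Something along these lines (the table name and row count are just an illustration) builds a table comfortably bigger than _small_table_threshold:

```sql
-- BIGTAB is a hypothetical name; the wide pad column inflates
-- the row size so the table spans many blocks
CREATE TABLE bigtab AS
SELECT rownum AS id, rpad('x', 500, 'x') AS pad
  FROM dual
CONNECT BY level <= 100000;

-- Gather stats so the optimizer knows the real segment size
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'BIGTAB');
```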
Query the table and check the number of buffers cached:
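Assuming the hypothetical BIGTAB table from above, the check could look like this:

```sql
-- Full scan of the table
SELECT count(*) FROM bigtab;

-- Count BIGTAB buffers currently in the buffer cache
SELECT count(*)
  FROM v$bh
 WHERE objd = (SELECT data_object_id
                 FROM user_objects
                WHERE object_name = 'BIGTAB')
   AND status <> 'free';
```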
So only the segment header is cached, no matter how many times I execute the query.
Let's now activate our new feature:
So 50% of our buffer cache will be 'reserved' for big table caching. "MIN_CACHED_TEMP" is the minimum temperature an object needs in order to be cached. There are no tracked objects for now.
Let's reread our table:
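Still assuming the BIGTAB table, rereading and checking the tracking view would look like:

```sql
SELECT count(*) FROM bigtab;

-- The table should now appear in the big table cache tracking view
SELECT dataobj#, size_in_blks, temperature, cached_in_mem
  FROM v$bt_scan_obj_temps;
```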
So the table is now cached in memory and it was assigned a temperature of 1000.
If we scan (or update, or delete) the table two more times, the temperature will be 2000 and then 3000, etc. (I haven't checked all the possibilities, so maybe there are special cases.)
Suppose now that we shrink the big table cache so that we cannot fit two tables in it.
So 3% of 36750 blocks gives us 1102 blocks.
If we create a second table of 1024 blocks and then scan it multiple times, what will happen?
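As a sketch (again with hypothetical names and a row count picked to land around 1024 blocks):

```sql
-- Second hypothetical table, sized at roughly 1024 blocks
CREATE TABLE bigtab2 AS
SELECT rownum AS id, rpad('x', 500, 'x') AS pad
  FROM dual
CONNECT BY level <= 15000;

-- Scan it several times to raise its temperature above BIGTAB's
SELECT count(*) FROM bigtab2;
SELECT count(*) FROM bigtab2;
SELECT count(*) FROM bigtab2;

-- Compare the two tables' temperatures and cached block counts
SELECT dataobj#, temperature, cached_in_mem
  FROM v$bt_scan_obj_temps
 ORDER BY temperature DESC;
```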
The blocks from the first table, which has the lower temperature, were aged out by the newly created table, which is much hotter.
Now let's try an example with a parallel query:
Here I will quickly show how In-Memory Parallel Execution interacts with big table caching.
I begin by limiting the memory of the big table cache in my test case to 367 blocks (1% of 36750).
Activate both In-Memory Parallel Execution and big table caching:
And then execute our parallel query and check for the result:
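Put together, the test could look like this (still against the hypothetical BIGTAB table; the DOP of 4 is arbitrary):

```sql
-- Cap the big table cache at 1% of the buffer cache (~367 blocks here)
ALTER SYSTEM SET db_big_table_cache_percent_target = 1;
-- Enable In-Memory Parallel Execution
ALTER SYSTEM SET parallel_degree_policy = AUTO;

-- Parallel full scan of the table
SELECT /*+ parallel(t 4) */ count(*) FROM bigtab t;

-- Check how many blocks ended up cached in memory
SELECT dataobj#, size_in_blks, cached_in_mem
  FROM v$bt_scan_obj_temps;
```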
We can activate the following tracing to check whether In-Memory Parallel Execution kicked in:
We can see that only 364 of the 1024 blocks were cached in memory (limited by DB_BIG_TABLE_CACHE_PERCENT_TARGET). If we redo the test with DB_BIG_TABLE_CACHE_PERCENT_TARGET set to 0, more blocks will be cached.
So that was a broad overview of the big table caching feature, and also my first blog post. Feel free to leave comments and any advice.