IN-MEMORY population/[trickle] re-population: how much work? You may be surprised!

We all know that In-Memory is a great new feature, but how much work is done behind the scenes to maintain the In-Memory column store? You may be surprised!

In this post I will trace the work done by the In-Memory background worker processes (W0nn) during population and trickle re-population, using systemtap and some scripts developed by Luca Canali. So let's begin!
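Before reaching for systemtap, the instance itself reports how much population and trickle-repopulation work it has done. A minimal sketch (the exact statistic names vary by version, hence the LIKE filter):

    -- Instance-wide IM population / repopulation counters:
    select name, value
      from v$sysstat
     where name like 'IM %populate%'
     order by name;

    -- Population progress per segment:
    select segment_name, populate_status, bytes_not_populated
      from v$im_segments;

These views tell you what was done, but not how much CPU and how many function calls it cost, which is where the tracing comes in.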

Continue reading

In-Memory: an optimizer cost calculation anomaly! (IM Quotient, IM Journal Rows)

While investigating how the optimizer calculates the cost of In-Memory operations, I hit an anomaly. Let's begin with a brief overview of how the cost of an In-Memory operation is calculated (for more detail see http://www.vldb.org/pvldb/vol8/p1770-das.pdf). The cost of an In-Memory operation is based on new key statistics which are maintained in real time, as stated by that paper and demonstrated by Riyaj Shamsudeen (https://t.co/tp0Q1sNZrB): for example, the function "ktmpjUpdInMemExtStat" updates In-Memory extent statistics in the same session.
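A quick way to see the values the optimizer actually works with is a 10053 (optimizer) trace, where the In-Memory costing details typically appear. A minimal sketch, with T_IM standing in for an assumed INMEMORY table:

    alter session set events '10053 trace name context forever, level 1';
    explain plan for select /*+ full(t) */ count(*) from t_im t;  -- t_im: assumed INMEMORY table
    alter session set events '10053 trace name context off';

    -- Locate the trace file that contains the costing section:
    select value from v$diag_info where name = 'Default Trace File';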

Continue reading

V$SQL_CS_STATISTICS returns no rows in 12c! It's time to trace! [systemtap tracing]

In my last blog post I tried to demonstrate the effect of activating row source statistics on the calculation of "rows_processed" in V$SQL_CS_STATISTICS, and the effect this may have on adaptive cursor sharing. Sadly, V$SQL_CS_STATISTICS appears to return no rows in 12c (12.1.0.2), and, as Mohamed Houri noted in the comment section of my last post, "Even in 11g the number of rows processed was not updated before a cursor is bind aware". So there is apparently no way to see the raw execution statistics used by the monitoring component of adaptive cursor sharing in 12c, or before the cursor is marked bind aware in 11g. Really? It's time to dig deeper!
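For reference, this is the kind of check that comes back empty in 12.1.0.2 (a minimal sketch; the bind-sensitive statement and its sql_id are assumptions):

    -- Run a statement with a bind variable against a skewed column, then:
    select child_number, bind_set_hash_value, peeked,
           executions, rows_processed, buffer_gets
      from v$sql_cs_statistics
     where sql_id = '&sql_id';   -- substitute the cursor's sql_id
    -- In 12.1.0.2 this returns no rows even for bind-sensitive cursors.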

Continue reading

Effect of a wrong cardinality estimate on hash join run-time execution (fanout)

Finding the optimum fanout (the number of partitions the build input will be split into) is one of the keys to optimum hash join performance. "High fan-out can produce a large number of small partitions, resulting in inefficient I/O. At the other extreme, small fan-out can produce a small number of large partitions, which will not fit in hash memory." (quoted from hash-joins-implementation-and-tuning)

In this blog post I will not discuss how the hash join algorithm works; it is well covered elsewhere, for example in Jonathan Lewis's book "Cost-Based Oracle Fundamentals", in http://docslide.us/documents/hash-joins-implementation-and-tuning.html (old but still helpful), and by Steve Adams at http://www.scribd.com/doc/16883906/Adams-Hash-Joins-Oracle.

The calculation of the optimum fanout is based on the estimated size of the build input, which is dependent on the column sizes and row counts. The fanout actually chosen at run time can be inspected via the hash join trace, as sketched below.
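A minimal sketch (event 10104 is the hash join trace event; the exact trace wording is version-dependent):

    alter session set events '10104 trace name context forever, level 10';
    -- execute the hash join query here, then:
    alter session set events '10104 trace name context off';

    -- The trace file reports the partition count chosen for the build input:
    select value from v$diag_info where name = 'Default Trace File';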

Here is a quick example to demonstrate how a sufficiently inaccurate cardinality estimate can affect hash join run-time performance; a sketch of the kind of setup involved follows:
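A minimal sketch (all object names are assumptions; the point is that the function on the build-side column hides its true selectivity, so the optimizer falls back to a small default guess and undersizes the build input):

    create table build_t as
      select rownum id, rpad('x', 100) pad
        from dual connect by level <= 100000;
    create table probe_t as
      select mod(rownum, 100000) + 1 id, rpad('y', 100) pad
        from dual connect by level <= 1000000;

    exec dbms_stats.gather_table_stats(user, 'BUILD_T')
    exec dbms_stats.gather_table_stats(user, 'PROBE_T')

    -- The predicate keeps ~50% of build_t rows, far more than estimated:
    select /*+ gather_plan_statistics leading(b) use_hash(p) */ count(p.pad)
      from build_t b join probe_t p on p.id = b.id
     where trunc(b.id / 2) * 2 = b.id;

    -- Compare E-Rows vs A-Rows on the build input:
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

With the build input underestimated, the workarea sizing and fanout are computed for a much smaller input than actually arrives, which is exactly the situation the full post examines.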

Continue reading