ORACLE 12C new cost-to-time model (I/O calibration effects)

As stated by Randolf Geist in his blog post, Oracle introduced a new model for deriving the predicted execution time from the cost calculation starting with a patch set, and it kicks in after running the I/O calibration routine. Without I/O calibration, the execution time is still calculated as usual: COST * single-block read time (SREADTIM).

Extract from Randolf Geist's post Cost Is Time: Next Generation:

“The cost that has been calculated according to the System Statistics model – which is already a time estimate based on three different components – time for single-block and multi-block reads as well as the estimated CPU time is now converted into a data volume by simply multiplying the resulting cost with the default block size. Dividing this data volume by the throughput as indicated by the I/O calibration results (it looks like the value MAX_PMBPS is relevant) arrives at a new estimated execution time.”

So what about 12c? Let's run some tests and observe:

Test database

1- First case without I/O calibration

Let’s begin by initializing system stats and checking I/O calibration:
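A session along these lines can reproduce that starting point (a sketch — the exact output depends on your system):

```sql
-- Reset system statistics to defaults and check the I/O calibration state.
EXEC DBMS_STATS.DELETE_SYSTEM_STATS;

-- No rows here means the calibration routine has never been run.
SELECT max_iops, max_mbps, max_pmbps, latency
FROM   dba_rsrc_io_calibrate;

-- Expect NOT AVAILABLE before the first calibration run.
SELECT status FROM v$io_calibration_status;
```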


Then create a table and check how the execution time is calculated:
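For instance (a sketch — the table name, row count and resulting cost here are illustrative, not the original test's figures):

```sql
-- Build a hypothetical test table and gather stats on it.
CREATE TABLE t1 AS
SELECT rownum AS id, rpad('x', 100) AS pad
FROM   dual
CONNECT BY level <= 1e6;

EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T1');

-- The TIME column of the displayed plan is the estimate we want to explain.
EXPLAIN PLAN FOR SELECT count(pad) FROM t1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```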


Let’s try the usual formula: TIME = COST * SREADTIM = 2894 * 0.005 = 15 sec. That does not match the 1-second estimate shown in the plan, so it seems there is a new formula. Let’s check the optimizer trace file for more info:
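One way to produce that trace is the classic 10053 event (shown here against the hypothetical test table from above; the trace file lands in the session's diagnostic trace directory):

```sql
-- Tag the trace file so it is easy to find, then dump an optimizer trace.
ALTER SESSION SET tracefile_identifier = 'cost_time_test';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

EXPLAIN PLAN FOR SELECT count(pad) FROM t1;

ALTER SESSION SET EVENTS '10053 trace name context off';
```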


So although we have not run the I/O calibration routine, it seems that a default value is used (MAX_PMBPS = 200 MB/s).

As per the Oracle documentation, Database VLDB and Partitioning Guide:

“If I/O calibration is not run to gather the required statistics, a default calibration value is used to calculate the cost of operations and the degree of parallelism.”

200 MB/s? Why? As per MOS note Doc ID 1269321.1:

200 is a value that works well for machines with a fast I/O subsystem, for example Exadata machines. If you set the number (MAX_PMBPS) lower, the calculated DOP will increase. If you set MAX_PMBPS higher, the calculated DOP decreases. It appears that 200 is a reasonable value to deal with concurrency on a system.

Let’s use the formula indicated by Randolf Geist: TIME = COST * BLOCK_SIZE / MAX_PMBPS = 2894 * 8192 bytes (≈ 22.6 MB) / (200 MB/s) ≈ 113 ms, which rounded up gives us the 1 second shown in the plan, as expected. But still, the predicted time in the optimizer trace file (Explain Plan Dump section) is using the old formula (15 sec, as we already calculated).
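The arithmetic for both models can be double-checked directly (values taken from this test: cost 2894, SREADTIM 5 ms, 8K block size, default MAX_PMBPS of 200 MB/s, here taken as 1024*1024-based to match the 22.6 MB figure):

```sql
SELECT 2894 * 0.005                       AS old_model_sec,  -- ~14.5, shown as 15 sec
       2894 * 8192 / (200 * 1024 * 1024)  AS new_model_sec   -- ~0.113, rounded up to 1 sec
FROM   dual;
```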


Jonathan Lewis already pointed out this change in the comments section of this post.

2- Second case with I/O calibration

There is no change to the model used for predicting the execution time when the I/O calibration stats from SYS.RESOURCE_IO_CALIBRATE$ are in place (TIME = COST * BLOCK_SIZE / MAX_PMBPS), but I want to show something odd that I observed. So let's run the calibration routine:
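The routine is DBMS_RESOURCE_MANAGER.CALIBRATE_IO; a minimal run looks something like this (the disk count and latency inputs are illustrative — TIMED_STATISTICS must be TRUE and asynchronous I/O enabled on the datafiles):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_max_iops       PLS_INTEGER;
  l_max_mbps       PLS_INTEGER;
  l_actual_latency PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 1,    -- illustrative value
    max_latency        => 10,   -- illustrative value, in ms
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('latency  = ' || l_actual_latency);
END;
/
```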


Then run our query and check how the execution time is calculated:
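At this point the stored calibration result and the optimizer's estimate can be compared (a sketch, reusing the hypothetical test table from above):

```sql
-- What the calibration run stored ...
SELECT max_pmbps FROM dba_rsrc_io_calibrate;

-- ... versus the MAX_PMBPS value the optimizer actually used for the
-- TIME column of the plan.
EXPLAIN PLAN FOR SELECT count(pad) FROM t1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```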



Although we have run our calibration routine, the optimizer is still using the default value of 200 MB/s! But if I reboot the database, it picks up the value stored in RESOURCE_IO_CALIBRATE$.

NOTE: It appears that the new value takes effect only after restarting the instance.

That’s it 😀
