This is just a quick note to show how we can monitor some important kernel parameter limits. This can be very handy in a highly consolidated database environment, where the default limits are usually not enough.
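As a minimal sketch of the idea, the snippet below reads one such limit, fs.file-max, together with the current consumption reported by fs.file-nr, and computes how much of the limit is used. The paths are the standard Linux /proc/sys locations; extend the same pattern to whichever limits matter in your environment.

```python
from pathlib import Path

def usage_pct(used, limit):
    """Percentage of a kernel limit currently consumed."""
    return 100.0 * used / limit

# fs.file-nr reports three numbers: allocated, unused, maximum (fs.file-max)
file_nr = Path("/proc/sys/fs/file-nr")
if file_nr.exists():
    allocated, _unused, maximum = map(int, file_nr.read_text().split())
    print(f"open file handles: {usage_pct(allocated, maximum):.1f}% of fs.file-max")
```

Feed the resulting percentage to your monitoring tool and alert well before it reaches 100%.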
Orachrome Lighty is a great tool for monitoring our Oracle databases, even when we do not have the Diagnostics Pack license or when we are using Standard Edition. It relies on Statspack and L-ASH for collecting the needed information.
This blog post is not about the different features offered by the tool or how to use it; there are already many articles covering that. I will focus on the overhead I noticed when trying this great tool and the different solutions that can be implemented to alleviate it.
It can sometimes be handy to automate the generation of presentation slides, such as listing KPIs/health for production servers needed for capacity planning or other purposes.
The source data may be stored in different locations/different databases. So let's see how we can do that in Python!
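As a minimal, self-contained sketch of the idea: turn KPI rows into a Markdown slide deck (one slide per server, with `---` as the slide separator that tools like Marp or reveal-md expect). The server names and figures here are illustrative placeholders; a real pipeline would pull the rows from each database and could use a library such as python-pptx for native PowerPoint output instead.

```python
# Illustrative KPI rows; in practice these would come from queries
# against the different source databases.
kpis = [
    {"server": "db-prod-01", "cpu_pct": 62, "free_hugepages": 1200},
    {"server": "db-prod-02", "cpu_pct": 48, "free_hugepages": 300},
]

slides = ["# Capacity Planning KPIs"]
for row in kpis:
    slides.append(
        f"## {row['server']}\n"
        f"- CPU usage: {row['cpu_pct']}%\n"
        f"- Free HugePages: {row['free_hugepages']}"
    )

# '---' between slides is the separator convention of Markdown slide tools
deck = "\n\n---\n\n".join(slides)
print(deck)
```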
CPU usage is one of the KPIs usually used for capacity planning; it is supposed to tell us how much capacity remains available. But with hyper-threading enabled, things can become much more complicated: the Linux operating system assumes that all threads are equal and thus overstates the CPU capacity. So the CPU usage may be wrongly interpreted if we don't take that into account!
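A rough way to correct for this, sketched below: with two hardware threads per core, the OS reports twice the core count as capacity, but the second thread typically only adds on the order of 30% extra throughput. The `ht_gain=1.3` factor is an assumption for illustration; measure it for your own workload before relying on it.

```python
def effective_utilization(cpu_busy_pct, logical_per_core=2, ht_gain=1.3):
    """
    Adjust a raw OS-reported CPU busy percentage for hyper-threading.

    The OS counts each hardware thread as a full CPU, but a core with 2
    threads delivers roughly ht_gain (~1.3x) the throughput of one thread,
    not 2x. So only ht_gain/logical_per_core of the reported capacity is
    "real", and the effective utilization is correspondingly higher.
    """
    real_capacity_fraction = ht_gain / logical_per_core  # e.g. 0.65
    return min(100.0, cpu_busy_pct / real_capacity_fraction)

# With these assumptions, a server showing 50% busy is actually
# consuming about 77% of its real compute capacity.
print(f"{effective_utilization(50.0):.1f}%")
```

In other words, a hyper-threaded box that looks half idle may have far less headroom than the raw percentage suggests.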
For big database servers (used for consolidating multiple databases) with a lot of memory and a lot of preallocated free HugePages, it's important to take the number of free HugePages into consideration for capacity planning.
The default "memory used" metric, calculated as (MemTotal – (MemFree + Buffers + Cached)), and as (MemTotal – MemFree – Buffers – Cached – Slab) in recent versions, as shown by the free command (Ref: https://access.redhat.com/solutions/406773), doesn't take the amount of free HugePages into consideration. Using the metric extension feature of Cloud Control we can easily alleviate that.
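The adjustment can be sketched as follows: take the recent-kernel "used" formula from above and subtract the memory sitting in free HugePages (which the default formula counts as used, even though it is still available to databases that consume HugePages). The field names match /proc/meminfo, where HugePages_Free is a page count and Hugepagesize is in kB; the numbers below are illustrative only.

```python
def used_memory_kb(m):
    """
    'Used' memory as recent versions of free(1) compute it,
    minus the memory held in *free* (preallocated but unconsumed)
    HugePages. All values in kB, as in /proc/meminfo, except
    HugePages_Free which is a page count.
    """
    used = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"] - m["Slab"]
    free_huge_kb = m["HugePages_Free"] * m["Hugepagesize"]
    return used - free_huge_kb

# Illustrative numbers (kB): 1000 free 2 MB HugePages = 2 GB of
# memory the default metric would wrongly report as used.
meminfo = {"MemTotal": 16_000_000, "MemFree": 2_000_000, "Buffers": 500_000,
           "Cached": 3_000_000, "Slab": 500_000,
           "HugePages_Free": 1000, "Hugepagesize": 2048}
print(used_memory_kb(meminfo))
```

A Cloud Control metric extension can run the same arithmetic against the live /proc/meminfo values.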
Let's suppose that we have activated our database auditing as recommended and put in place a centralized auditing solution, so that the audit data is sent to a remote server and protected (as in my previous blog post). Now let's think like a hacker: can we hide our database activities (or some of them)?
In this part, we will see one way of sending unified auditing data to a centralized logging solution outside the Oracle Database. We will not be looking at remote SYSLOG, as a lot of information is missing when audit data is redirected to syslog (Missing Audit Information In The Unified Audit Trail Records Sent To SYSLOG (Doc ID 2520613.1)).
Still, for remote syslog auditing we can set the parameter "unified_audit_systemlog='LOCAL5.INFO'".
In addition, add the following entry in "rsyslog.conf" to enable reliable message forwarding (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/s1-working_with_queues_in_rsyslog):
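A typical configuration, following the Red Hat guide linked above (the spool file name, facility, and remote host are placeholders to adapt to your setup):

```
# Queue audit messages in memory, spilling to disk, so they survive
# a network outage or an rsyslog restart instead of being dropped
$ActionQueueType LinkedList
$ActionQueueFileName audit_fwd        # spool file prefix (placeholder)
$ActionResumeRetryCount -1            # retry forever
$ActionQueueSaveOnShutdown on
# Forward the local5 facility (our audit messages) over TCP (@@)
local5.* @@remote-audit-server:514
```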
On the remote audit server, just uncomment the lines "$ModLoad imtcp" and "$InputTCPServerRun 514".
OK, but that is not the purpose of this blog post. Here we are going to look at how we can integrate Oracle unified audit data with Splunk using Splunk DB Connect and the Oracle add-on.