All of us use Oracle Database. Most use ASM. Some even embrace ACFS. But what about the core Clusterware feature, which is, by definition, "clustering"?
To me, Clusterware is unique in the way it hides behind other Oracle software, as if it wanted to say: "I'm just an internal layer, I require no administration or maintenance, and don't tell anybody that I'm free of charge and can do a lot more than I'm doing now...".
So, let's have a look at this software as a typical cluster technology which can bring High Availability to our applications and environment. To achieve this I'll discuss some important fundamentals and share my experience in writing custom Clusterware agents (good and bad patterns, traps, debugging and logging, etc.). At the end of the session we will write together the simplest, yet fully functional, Clusterware agent for a popular application called Single Instance Oracle Database 19c.
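To give a taste of what such an agent looks like: Clusterware can protect any resource through its script agent, which calls your action script with a single argument (start, stop, check or clean) and interprets the exit code as success or failure. The sketch below only illustrates that dispatch shape; the SID and the commented-out commands are placeholders, not the agent we will build in the session.

```shell
#!/bin/sh
# Minimal sketch of a Clusterware action script for a single-instance
# database. The script agent invokes the script with one entry point
# name and expects exit code 0 for success, non-zero for failure.
# ORACLE_SID and the commented commands are illustrative placeholders.

ORACLE_SID=ORCL

db_action() {
  case "$1" in
    start)
      # real script: sqlplus / as sysdba -> STARTUP
      echo "starting $ORACLE_SID"; return 0 ;;
    stop)
      # real script: SHUTDOWN IMMEDIATE
      echo "stopping $ORACLE_SID"; return 0 ;;
    check)
      # real script: pgrep -f "ora_pmon_${ORACLE_SID}" >/dev/null
      echo "checking $ORACLE_SID"; return 0 ;;
    clean)
      # real script: SHUTDOWN ABORT, then kill leftover processes
      echo "cleaning $ORACLE_SID"; return 0 ;;
    *)
      echo "usage: $0 {start|stop|check|clean}" >&2; return 1 ;;
  esac
}

# In a real deployment Clusterware supplies the entry point argument:
db_action "${1:-check}"
```

The check entry point is the one Clusterware polls repeatedly, so it should be cheap and side-effect free.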
The Heart of Oracle - how the core RDBMS engine works
The Oracle database is big, complex, and highly capable. But how the very core of it works is relatively simple - and yet is not often explained. I will describe the overall architecture of the database, how data is moved from disc to memory *and back*, what blocks are and why their size & distribution is vital to performance. This will include how user data is spread between the SGA and the PGA and why. Next I will explain the vital role of redo and how it is the most important part of the database. Then I will cover how Oracle maintains a perfect point-in-time view of data and what a "consistent get" really is. By the end of the talk you will probably understand how the database works better than many experienced DBAs. This knowledge makes all further talks on Oracle performance and features make much more sense. During the talk you can ask any questions you want. I might even answer them.
The foundations of understanding Execution Plans.
One of the key components of good applications is efficient SQL, and if you need to understand why some piece of SQL is not executing efficiently then it's important that you are able to create, share, and interpret truthful execution plans. This presentation will give you a solid understanding of how to meet all three of those targets.
Most presentations on execution plans start with simple instructions on how to create them and then how to read very simple plans. This presentation will start at the opposite end of the problem by looking at an SQL statement and asking what the optimizer might do with it, and only then look at what that means in terms of the possible execution plans. In this way we gain an introduction to query blocks, transformations, and the reason why we only ever need to look at simple execution plans in order to understand what's happening in complex queries.
From this point we look at plans for a single query block, then examine the ways that Oracle presents plans that involve multiple query blocks in several different circumstances. This will lead us to the difficulty the optimizer has in choosing between applying a transformation that eliminates a query block and isolating a query block that must then be executed multiple times at run-time. At the same time we'll discover that there are run-time optimisations (tricks) that Oracle uses which can make the optimizer's calculations (or guesswork) produce totally unrealistic estimates.
From a purely technical viewpoint we will be covering the packages dbms_xplan and dbms_monitor, and the three most important parts of an execution plan - the operation (body) of the plan, the predicate information, and the statistics information (estimated and actual) - with some passing references to the outline information and the projection information.
Your first sight of your database performance probably comes from the Enterprise Manager "Top Activity" screen or an equivalent: the load of the database on the time axis, displayed with colors. Green for CPU, blue for I/O, red for locks... Behind those colors are the wait events, which you can also query from many monitoring places: V$ views, Statspack reports, SQL Trace. They tell a lot about what the database is doing or waiting on, as long as we understand exactly what they measure. We will go through the most common ones to understand what they tell us, and how we can improve performance.
In this session we'll analyze three AWR reports, and introduce you to a structured approach in doing so. As an AWR report has a ton of information, it is good to first zoom in on a few key areas of the report to get a general understanding of the activity in the database. Only then should that understanding guide you to specific other sections of the report for further analysis.
The logwriter 2020: the good, the stats and the ugly
The way the logwriter works is largely undocumented, and what is officially documented can put you on the wrong foot. The purpose of this talk is to explain how the logwriter works and, more importantly, what you can see and measure for yourself using common database statistics.
This talk looks at the relevant internal C functions executed by the logwriter, and shows the sequence in which they are executed and the statistics they produce. This should allow database tuners to understand the impact of logwriter performance, and where the time is spent.
Another topic is how the database's behaviour differs with choices like RAC and multitenant.
Let's talk about the Oracle 10053 trace and how it explains the decisions Oracle makes when optimizing your SQL. We will see how Oracle uses your statistics to make judgements about which optimization path to take, and how it short-cuts the process to minimise optimization time and get to a reasonable plan as quickly as possible.
Linux OS troubleshooting Tools for Databases: How Low(Tech) Can You Go?
In the production reality of the enterprise database world, RHEL 6 and similar old Linux distributions are still widely used. With kernel versions all the way down to 2.6.x, one can only dream of using leading-edge tools like eBPF for advanced tracing in production. Upgrading your 10-year-old Linux distro with the latest custom kernel is out of the question for production systems.
Tools like FTrace and Perf alter kernel behavior and may come with performance overhead or safety issues if you’re unlucky enough to hit a bug, so using these tools in critical production systems requires a little planning and validation.
In this presentation we will take a different approach to advanced Linux troubleshooting & performance analysis for databases. We will start with extremely low-tech tools, just reading some Linux /proc filesystem entries and aggregating the results with standard Unix command-line utilities. This approach is safe to use and provides reasonable visibility into the Linux kernel. Thanks to its extreme simplicity, you can use this toolset on any Linux system, from RHEL 6 clones and up (limited functionality is available even on RHEL 5!).
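As one illustration of how far the low-tech approach can go: the field after the parenthesised command name in /proc/&lt;pid&gt;/stat is the process state, so a single pipeline of standard utilities yields a system-wide histogram of process states, e.g. how many processes sit in uninterruptible D state, typically waiting on I/O. This is only a sketch of the technique, not a tool from the talk.

```shell
#!/bin/sh
# Count Linux processes by state using nothing but /proc and standard
# utilities. State letters include: R=running, S=sleeping,
# D=uninterruptible sleep (often I/O), Z=zombie, T=stopped.
# The sed strips everything up to the closing paren of the command
# name (greedy match, so parens inside the name are handled).

cat /proc/[0-9]*/stat 2>/dev/null \
  | sed 's/^.*) //' \
  | awk '{ count[$1]++ } END { for (s in count) print s, count[s] }' \
  | sort
```

Because it only reads /proc, this can run on a loaded production box without altering kernel behavior, which is exactly the safety property the talk is after.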
After we are done with the low-tech tools, we’ll fill some gaps with Perf and will even do some eBPF magic too!
POUG is not just an official Oracle community; above all, it is a base of very active members who are engaged in the group's development, both during the meetings and in the preparations for them. Our message reaches 400 people working with databases: from developers to administrators, and from beginners to experts with years of experience.
We would like to invite you to be a part of our event; it is a chance to showcase your brand before and during the meeting.