tag:blogger.com,1999:blog-216189652024-03-08T08:49:00.401-05:00The Eric S. Emrick BlogEric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.comBlogger46125tag:blogger.com,1999:blog-21618965.post-60409168047939356212009-03-06T22:33:00.002-05:002009-03-06T23:30:11.117-05:00Low Cardinality != Bitmap IndexSorry, but this post is a bit of a rant. I was called into a performance issue yesterday. The users were complaining of slow performance. I enabled extended SQL tracing on the session and found the SQL statement was a simple SINGLE ROW insert statement using bind variables. No triggers on the table.<br /><br />What I found were hundreds of thousands of <em>db file sequential read</em> wait events to insert a single row. I checked out the data dictionary for any supporting indexes and found 10 indexes on the table, 4 of which were bitmap indexes. Fortunately, this was a 10g database, so the object number associated with the sequential reads was easily plucked using a simple AWK script.<br /><br /><span style="font-family:courier new;font-size:85%;">wait #22: nam='db file sequential read' ela= 377 file#=17 block#=20988904 blocks=1 <span style="color:#cc0000;">obj#=725386</span> tim=2691112678912</span><br /><br />I found that nearly 99.99% of these wait events were owed to this object, a bitmap index. This application is not your standard OLTP as the underlying table gets loaded with thousands of rows each day with SINGLE ROW inserts.
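The AWK pass was nothing fancy. A sketch of the idea is below; the trace file name and its WAIT lines are fabricated stand-ins for the real 10046 trace, so treat them as illustrative only:

```shell
# Count 'db file sequential read' waits per obj# in an extended SQL trace.
# ora_1234.trc and its WAIT lines are made-up placeholders.
TRC=ora_1234.trc
cat > "$TRC" <<'EOF'
WAIT #22: nam='db file sequential read' ela= 377 file#=17 block#=20988904 blocks=1 obj#=725386 tim=2691112678912
WAIT #22: nam='db file sequential read' ela= 412 file#=17 block#=20988905 blocks=1 obj#=725386 tim=2691112679301
WAIT #22: nam='db file scattered read' ela= 98 file#=5 block#=100 blocks=8 obj#=51 tim=2691112679500
EOF
# Split each matching line on 'obj#=' and tally the object number.
awk -F'obj#=' '/db file sequential read/ { split($2, a, " "); n[a[1]]++ }
               END { for (o in n) print o, n[o] }' "$TRC" | sort -rn -k2
```

The top object number can then be resolved to a name with a quick lookup against dba_objects.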
The dreaded concurrency and deadlocking did not come into play, well, because the load process is single threaded. However, all queries against this table need to perform very quickly. So, in that sense it has an OLTP face. Here is the rub. First, I asked if these indexes (in particular the bitmap indexes) could be dropped prior to their "load" and recreated after. The answer I received was essentially, "no, that is the way the application works." I then asked them to tell me why this index was a bitmap index. The developer stated the rationale was the fact that the data was uniformly distributed over 6 distinct values. I suppose that seems reasonable. I then asked the developer if this column was used in join conditions for other queries. The answer was a resounding NO.<br /><br />Not to my surprise, the index built as a standard b*tree index was just as efficient and lacked the horrific index maintenance overhead associated with SINGLE ROW inserts. The only reason the index was defined as a bitmap index was its cardinality and nothing more. I had them drop the index. The load that was taking 20+ hours to complete finished in under a minute. The lesson here is: Know your data, know your code and then evaluate the use of bitmap indexes to support your table access. The simple fact of low cardinality does not alone justify the use of a bitmap index. As a matter of fact, this bitmap index was so chubby that after it was re-created post-load, it had been reduced in size by 99%. I suppose that is another point: Bitmap indexes aren't necessarily space savers either if used in an improper context.<br /><br />BTW, the hundreds of thousands of block reads were not what you might have thought: locks against rows with the same bitmap as the inserted value for the bitmap column. Oracle was ranging over the index nonsensically looking for the proper place to dump the row.
As the hundreds of thousands of sequential reads rolled by, not a single TM lock was obtained and ZERO db block changes had accumulated. It was only when the row was finally inserted that a few block changes showed up. This is just another example of a peculiarity with bitmap indexes that can crop up if used <em>unlawfully.</em>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com27tag:blogger.com,1999:blog-21618965.post-60366969128885939972009-03-04T16:03:00.005-05:002009-03-04T22:42:21.651-05:00Database ContinuityEver just have a burning desire to do something that never seems to go away? For me, that desire has been to write a book; more specifically, an Oracle technology-based book. (Okay, maybe a novel later on in life...) I thoroughly enjoy researching Oracle technology and finding solutions to puzzling questions. I am, however, pragmatic and seek to understand that which I can put to good use in the future.<br /><br />I was recently discussing this desire with a colleague. I told him that I felt there was a need for a really good backup and recovery book. Actually, I expounded a bit and said that there is a need for a good database continuity book. It just feels as though backup and recovery is an overused phrase for a dramatically underutilized and uncultivated set of skills. After all, how frequently are we involved in backup and recovery exercises? I would guess that backup and recovery activities comprise less than 5% of the time spent by a DBA during the course of any given year. That would be less than 100 hours in a full work year. I suspect it could be much less for some.<br /><br />Isn't spending little or no time on backup and recovery a good thing? That does imply our systems are resilient and few faults surface that require us to exercise our recovery plan. And, in the age of RMAN we simply don't have to worry about the nuances of recovery, right?
RMAN knows exactly what is needed for restoration, and all the DBA needs to do is execute a few commands to restore and recover the database. What technology has afforded us in ease of backup configuration and redundant infrastructure, it has equally taken away in our ability to confidently take control when up against a critical database recovery scenario. In short, we become complacent and our knowledge of backup and recovery diminishes over time. How confident are we that our backup strategy meets the recovery point objective (RPO) and recovery time objective (RTO) of our business?<br /><br />I digress. Let’s get back to the conversation with my colleague and this notion of database continuity. I defined database continuity for him as follows: Database continuity is a superset of knowledge, processes and tools that fulfill the data protection requirements of an organization. As a consequence, backup and recovery become processes in the continuity methodology. Database continuity is a broadened perspective of Oracle database recovery and is intended to include: disaster recovery, standby databases, archive log management, user-managed backups, RMAN, RPO and RTO, etc. Each of these aspects of database continuity requires the DBA to have a firm understanding of Oracle database recovery. If we truly understand recovery, these different continuity dimensions converge rapidly. You can plug in your knowledge of recovery to assist with any dimension. So, while the notion of database continuity has greater breadth at face value, it can be reduced to recovery mechanics, constructs and objectives.<br /><br />That being said, I have many ideas about a book on Oracle database continuity. However, I want to hear from you. What do you find lacking in the backup and recovery books on the market? Maybe one text speaks to an aspect for which you wish the author had given more detail. Or, maybe there is an overindulgence of certain topics that you wish had been left out.
What material would help you retain and reuse your recovery knowledge? I am not out to write a book on RMAN or Data Guard; thousands of pages have already been devoted to the treatment of these technologies. I view guides on such topics as utilities to effect my recovery objectives and mobilize my recovery knowledge.Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com6tag:blogger.com,1999:blog-21618965.post-21080735473172793452008-02-17T18:26:00.006-05:002008-03-26T10:37:36.083-04:00RMAN, RAC, ASM, FRA and Archive LogsThe topic, as the title suggests, concerns RMAN, RAC, ASM and archive logs. This post is rather different from my prior posts in that I want to open up a dialogue concerning the subject matter. So, I’ll start the thread by posing a question: Are any of you that run RAC in your production environments backing up your archive logs to an FRA that resides in an ASM disk group (and of course backing up the archive logs to tape from the FRA)? Managing your free space within your FRA is paramount, as are judicious backups of the FRA (actually these really go hand in hand). However, I am very interested in your experience. Have you come across any “gotchas”, bad experiences, positive experiences, more robust alternatives, extended solutions, etc.? Being somewhat of a backup and recovery junkie, I am extremely interested in your thoughts. Let the dialogue commence!<br /><br /><strong>Update: 03/26/2008<br /></strong><br />A colleague of mine has been doing some testing using RMAN, RAC, ASM, FRA for archive log management. Also, he has tested the integration of Data Guard into this configuration. To be more precise, he has tested using an FRA residing in an ASM disk group as the only local archive log destination. In addition to the local destination, each archive log is sent to the standby destination. Based on his testing this approach is rather robust.
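To sketch the shape of that configuration (this is my reconstruction, not his actual scripts; the disk group name, FRA size and standby service name are assumptions):

```
# Instance parameters: the FRA, in ASM, is the only local archive destination.
# '+FLASH', 200G and 'standby_db' are illustrative values.
#   db_recovery_file_dest      = '+FLASH'
#   db_recovery_file_dest_size = 200G
#   log_archive_dest_1         = 'LOCATION=USE_DB_RECOVERY_FILE_DEST'
#   log_archive_dest_2         = 'SERVICE=standby_db'

# RMAN, run with a regular periodicity: back up the FRA contents (archive
# logs included) so the FRA can age out logs on its own as it fills.
BACKUP RECOVERY AREA;
```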
The archive logs are backed up via the "BACKUP RECOVERY AREA" command with a regular periodicity. This enables the FRA's internal algorithm to remove archive logs that have been backed up, once the space reaches 80% full. No manual intervention is required to remove the archive logs. Moreover, the archive logs in this configuration will only be automatically deleted from the FRA if both of the following are true: 1) the archive log has been backed up satisfying the retention policy, and 2) the archive log has been sent to the standby. When there is a gap issue with the standby database, the archive logs are read from the FRA and sent to the standby. It works real nice!Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com13tag:blogger.com,1999:blog-21618965.post-74016612699253446312008-01-04T21:44:00.001-05:002008-02-09T00:20:11.821-05:00Last Blog Entry (sysdate-364)Well, it has been nearly one year, to the day, since my last post (sorry for the confessional-like preamble). I was at a luncheon today with some former colleagues and some were asking me when I was going to start blogging again. I hope to start back up here pretty soon. So, if anyone is still dropping by, I hope to resume with some new material. However, I might try and keep it a bit less technical (fewer bits and more bytes); more light-hearted, yet hopefully still informative and fun. Redo log dumps and SCN dribble probably send most into a coma. Heck, I read some of my prior posts and nearly fell asleep. I will continue the "Oracle Riddles" posts as they seem to generate interesting and fun dialogue. The key is to have FUN with it. If blogging becomes a chore, then you are doing it for the wrong reason. I actually visited Tom Kyte's blog this evening and started reviewing some of his more recent entries - to get the juices flowing. BTW, who is the chap with the Jonathan Lewis-ian beard pictured on his blog? :-).Eric S.
Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com12tag:blogger.com,1999:blog-21618965.post-21386231163591889052007-02-10T18:28:00.000-05:002007-03-19T20:51:00.295-04:00Physical Standby Turbo BoostIs your physical standby database lagging way behind your production database? Maybe an outage to your standby environment has produced a lag that will not meet certain business requirements: reporting needs, disaster recovery time objective, testing, etc. When you don't have the luxury of performing a full production restore into your standby environment and your archive log files are not being consumed at an acceptable pace, you still have options that don't involve immediate architectural changes.<br /><br />In some cases you can dramatically speed up your recovery time by copying a small subset of your production database to your standby environment and resume recovery. For example, if a large percentage of your database's write activity is absorbed by a small subset of your database you are primed for a standby recovery turbo boost. Notice I did not say small percentage of your data files. After all, you could have 90% of your writes going to 10% of your data files, but those data files might comprise 90% of your database footprint. In most cases a small percentage of your database files equates to a small subset of your database, but not always.<br /><br />If a vast majority of writes go against a small subset of your database, how would copying these files to your standby give your recovery a boost? During recovery if Oracle does not need to recover a file it won't. All of those redo entries dedicated to recovering those files will just get passed over. 
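One quick way to gauge whether your database exhibits this write skew is to rank data files by their share of physical block writes. The following is a sketch; it uses <em>v$filestat</em>, which tracks cumulative physical block writes per data file since instance startup:

```sql
-- Each data file's share of all physical block writes since startup.
select a.name,
       b.phyblkwrt,
       round(100 * ratio_to_report(b.phyblkwrt) over (), 2) pct_of_writes
  from v$datafile a, v$filestat b
 where a.file# = b.file#
 order by b.phyblkwrt desc;
```

If the first few rows account for most of the writes while representing only a small slice of the database footprint, you are a candidate for this technique.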
Knowing this simple fact can help you get your physical standby database back on track to meet the needs of your business quickly.<br /><br />The first order of business is to determine whether the write skew condition exists in your database, and which files, if copied to your standby, would benefit your recovery time the most. Fortunately, this information can be easily gathered using the <em>v$filestat</em> and <em>v$datafile</em> dynamic performance views in your production database. The following query will get you the top N most heavily written files in your database.<br /><br />select * from<br />(select a.name, b.phyblkwrt from v$datafile a, v$filestat b<br />where a.file# = b.file# order by 2 desc)<br />where rownum <= <em>N</em>;<br /><br />If you know the data files that are getting written to the most in production, then you also know the most frequently written files on your standby during recovery. If Oracle can skip over redo entries during recovery then you avoid all of that physical and logical I/O against your standby data files. To recover a database block you have to perform a read <em>and</em> a write of that block. If your writes are somewhat evenly distributed amongst the files in your database then it will be more difficult to get that turbo boost. But, if 60+% of your database writes are absorbed by <= 10% of the database footprint, you could gain a significant boost in the recovery time by shipping those files to your standby.<br /><br />I know this is a rather short post, but this little tidbit just might help you get out of a physical standby database recovery dilemma.Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com0tag:blogger.com,1999:blog-21618965.post-78698784999406274462007-01-18T23:36:00.000-05:002007-01-19T00:36:41.036-05:00Logical Reads and Orange TreesMy previous post was a riddle aimed to challenge us to really think about logical I/O (session logical reads).
Usually we think of I/O in terms of OS block(s), memory pages, Oracle blocks, Oracle buffer cache buffers, etc. In Oracle, a logical I/O is neither a measure of the number of buffers visited, nor the number of distinct buffers visited. We could of course craft scenarios yielding these results, but these would be contrived special cases - like an episode of Law and Order only better. Instead, logical I/O is the number of buffer visits required to satisfy your SQL statement. There is clearly a distinction between the number of buffers visited and the number of buffer visits. The distinction lies in the target of the operation being measured: the visits, not the buffers. As evidenced in the previous post, we can issue a full table scan and perform far more logical I/O operations than there are blocks in the table that precede the high water mark. In this case I was visiting each buffer more than once, gathering up ARRAYSIZE rows per visit.<br /><br />If I had to gather up 313 oranges from an orchard using a basket that could only hold 25 oranges, then it would take me at least 13 visits to <strong>one or more</strong> trees to complete the task. Don't count the trees. Count the visits.Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com4tag:blogger.com,1999:blog-21618965.post-58795154020209612492007-01-15T19:03:00.000-05:002007-01-15T19:56:23.662-05:00Oracle Riddles: What's Missing From This Code?The SQL script below has one line intentionally omitted. The missing statement had a material impact on the performance of the targeted query. I have put diagnostic bookends around the targeted query to show that no DML or DDL has been issued to alter the result. In short, the script inserts 32K rows into a test table. I issue a query requiring a full table scan, run a single statement and rerun the same query - also a full table scan.
While the second query returns the same number of rows, it performs far fewer logical I/O operations to achieve the same result set. Review the output from the script. Can you fill in the missing statement? Fictitious bonus points will be awarded for the Oracle scholar that can deduce the precise statement :)<br /><br />/* Script blog.sql<br /><br /><span style="font-family:courier new;font-size:85%;"><br />spool blog.out<br />set feed on echo on;<br />select * from v$version;<br />drop table mytable;<br />create table mytable (col1 number) tablespace users;<br />insert into mytable values (3);<br />commit;<br />begin<br />for i in 1..15 loop<br />insert into mytable select * from mytable;<br />commit;<br />end loop;<br />end;<br />/<br />analyze table mytable compute statistics;<br />select count(*) from mytable;<br />select blocks from dba_tables where table_name = 'MYTABLE';<br />select blocks from dba_segments where segment_name = 'MYTABLE';<br />select index_name from user_indexes where table_name = 'MYTABLE';<br />set autot traceonly;<br />select * from mytable;<br />set autot off;<br />REM Bookends to show no DML or DDL statement has been executed.<br />select statistic#, value from v$mystat where statistic# in (4,134);<br /><span style="color:#ff0000;">... 
missing statement</span><br />REM Bookends to show no DML or DDL statement has been executed.<br />select statistic#, value from v$mystat where statistic# in (4,134);<br />set autot traceonly;<br />select * from mytable;<br />set autot off;<br />select blocks from dba_tables where table_name = 'MYTABLE';<br />select blocks from dba_segments where segment_name = 'MYTABLE';<br />select index_name from user_indexes where table_name = 'MYTABLE';<br />select count(*) from mytable;<br />spool off;</span><br /><span style="font-family:Courier New;font-size:85%;"></span><br />End Script blog.sql */<br /><br />/* Output<br /><span style="font-family:courier new;font-size:85%;"><br />oracle@eemrick:SQL> select * from v$version;<br />BANNER<br />----------------------------------------------------------------<br />Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi<br />PL/SQL Release 10.2.0.1.0 - Production<br />CORE 10.2.0.1.0 Production<br />TNS for Solaris: Version 10.2.0.1.0 - Production<br />NLSRTL Version 10.2.0.1.0 - Production<br />5 rows selected.<br />oracle@eemrick:SQL> drop table mytable;<br />Table dropped.<br />oracle@eemrick:SQL> create table mytable (col1 number) tablespace users;<br />Table created.<br />oracle@eemrick:SQL> insert into mytable values (3);<br />1 row created.<br />oracle@eemrick:SQL> commit;<br />Commit complete.<br />oracle@eemrick:SQL> begin<br />2 for i in 1..15 loop<br />3 insert into mytable select * from mytable;<br />4 commit;<br />5 end loop;<br />6 end;<br />7 /<br />PL/SQL procedure successfully completed.<br />oracle@eemrick:SQL> analyze table mytable compute statistics;<br />Table analyzed.<br />oracle@eemrick:SQL> select count(*) from mytable;<br />COUNT(*)<br />----------<br />32768<br />1 row selected.<br />oracle@eemrick:SQL> select blocks from dba_tables where table_name =<br />'MYTABLE';<br />BLOCKS<br />----------<br />61<br />1 row selected.<br />oracle@eemrick:SQL> select blocks from dba_segments where 
segment_name =<br />'MYTABLE';<br />BLOCKS<br />----------<br />64<br />1 row selected.<br />oracle@eemrick:SQL> select index_name from user_indexes where table_name =<br />'MYTABLE';<br />no rows selected<br />oracle@eemrick:SQL> set autot traceonly;<br />oracle@eemrick:SQL> select * from mytable;<br />32768 rows selected.<br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 1229213413<br />-----------------------------------------------------------------------------<br />Id Operation Name Rows Bytes Cost (%CPU) Time<br /><br />-----------------------------------------------------------------------------<br />0 SELECT STATEMENT 32768 65536 26 (4) 00:00:01<br /><br />1 TABLE ACCESS FULL MYTABLE 32768 65536 26 (4) 00:00:01<br /><br />-----------------------------------------------------------------------------<br /><br />Statistics<br />----------------------------------------------------------<br />1 recursive calls<br />0 db block gets<br /><span style="color:#ff0000;">2248 consistent gets<br /></span>0 physical reads<br />0 redo size<br />668925 bytes sent via SQL*Net to client<br />24492 bytes received via SQL*Net from client<br />2186 SQL*Net roundtrips to/from client<br />0 sorts (memory)<br />0 sorts (disk)<br />32768 rows processed<br />oracle@eemrick:SQL> set autot off;<br />oracle@eemrick:SQL> REM Bookends to show no DML or DDL statement has been<br />executed.<br />oracle@eemrick:SQL> select statistic#, value from v$mystat where statistic#<br />in (4,134);<br />STATISTIC# VALUE<br />---------- ----------<br />4 18 <span style="color:#33cc00;"><-- Statistic #4 is user commits </span></span><br /><span style="font-family:courier new;font-size:85%;">134 461920 <span style="color:#33cc00;"><-- Statistic #134 is redo size</span><br />2 rows selected.<br />oracle@eemrick:SQL> <span style="color:#ff0000;">... 
missing echo of statement<br /></span>oracle@eemrick:SQL> REM Bookends to show no DML or DDL statement has been<br />executed.<br />oracle@eemrick:SQL> select statistic#, value from v$mystat where statistic#<br />in (4,134);<br />STATISTIC# VALUE<br />---------- ----------<br />4 18<br />134 461920<br />2 rows selected.<br />oracle@eemrick:SQL> set autot traceonly;<br />oracle@eemrick:SQL> select * from mytable;<br />32768 rows selected.<br /><br />Execution Plan<br />----------------------------------------------------------<br />Plan hash value: 1229213413<br />-----------------------------------------------------------------------------<br />Id Operation Name Rows Bytes Cost (%CPU) Time<br /><br />-----------------------------------------------------------------------------<br />0 SELECT STATEMENT 32768 65536 26 (4) 00:00:01<br /><br />1 TABLE ACCESS FULL MYTABLE 32768 65536 26 (4) 00:00:01<br /><br />-----------------------------------------------------------------------------<br /><br />Statistics<br />----------------------------------------------------------<br />0 recursive calls<br />0 db block gets<br /><span style="color:#ff0000;">173 consistent gets</span><br />0 physical reads<br />0 redo size<br />282975 bytes sent via SQL*Net to client<br />1667 bytes received via SQL*Net from client<br />111 SQL*Net roundtrips to/from client<br />0 sorts (memory)<br />0 sorts (disk)<br />32768 rows processed<br />oracle@eemrick:SQL> set autot off;<br />oracle@eemrick:SQL> select blocks from dba_tables where table_name =<br />'MYTABLE';<br />BLOCKS<br />----------<br />61<br />1 row selected.<br />oracle@eemrick:SQL> select blocks from dba_segments where segment_name =<br />'MYTABLE';<br />BLOCKS<br />----------<br />64<br />1 row selected.<br />oracle@eemrick:SQL> select index_name from user_indexes where table_name =<br />'MYTABLE';<br />no rows selected<br />oracle@eemrick:SQL> select count(*) from mytable;<br />COUNT(*)<br />----------<br />32768<br />1 row 
selected.<br />oracle@eemrick:SQL> spool off;</span><br /><br />End Output */<br /><br />Clue: The missing statement is not "alter system set do_less_work = true;"Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com6tag:blogger.com,1999:blog-21618965.post-1165427457512611342006-12-06T12:49:00.000-05:002007-01-11T19:42:59.668-05:00Increasing the Longevity of Your CPU<p>One of my current assignments is to evaluate the potential to increase CPU headroom on a server running a large OLTP Oracle database. Of course, any project such as this is typically motivated by the desire to save money by forgoing a seemingly imminent hardware upgrade. Realizing more CPU headroom for your Oracle database server can be achieved through, but is not limited to, the following approaches:<br /><br /><em><span style="font-size:85%;">1. Add more same-speed CPUs to your existing server.<br />2. Replace your existing CPUs with faster CPUs.<br />3. Replace your existing CPUs with a greater number of faster CPUs.<br />4. Commission a new server platform with more same-speed and/or faster CPUs.<br />5. Commission a new server platform with a greater number of slower CPUs.<br />6. Chronologically distribute the load on the system to avoid spikes in CPU.<br />7. Reduce the work required of the system to satisfy the business.</span><br /><br /></em>More often than not I suspect approaches 1-5 are chosen. I am of the opinion that 1) and 3) are more predictable when trying to evaluate the expected CPU headroom yield. Propositions 2), 4) and 5) can be a little less predictable. For example, if I double the speed of my current CPUs will I yield the upgraded CPU cycles as headroom? That is, if I am running 10x500MHz and upgrade to 10x1GHz will I now have the additional 5GHz as headroom? It has been my experience that upgrades such as these do not produce such predictable results, especially if your current box approximates 100% utilization.
Certainly, moving to a new server with a greater number of same-speed and/or faster CPUs is a tricky proposition. New servers need to be tested using Production volume with great rigor. While at face value 500MHz would appear to be universally “portable” to any server, there are many other factors that can influence your CPU horsepower: memory architecture, amount of processor cache, processor crosstalk, etc. Options 1-5 can all be very costly and in some cases yield undesirable and unpredictable results. </p><p>If you have the luxury of distributing the load on your system to avoid spikes in CPU then that is a great first option. It could buy you more time to evaluate a longer-term solution. For example, shifting any batch jobs to off-peak OLTP hours might give you immediate relief.<br /><br />What if we could simply “do less” to satisfy the needs of the business? This concept is not new to most Database Administrators and strikes a rather cliché tone. After all, aren’t we brow-beaten by the dozens of books and countless articles that speak to database and SQL optimization? The “do less” principle is very sound, but it can be intractable. Reducing the work required of an application often requires management support and can run into political obstacles at every turn. Getting Application Developers and Database Administrators to work in lockstep can require a significant effort. If Management, Developers and Database Administrators buy into a synergistic endeavor the benefits can be amazing – and can save the company a large sum of money.<br /><br />If you are lucky enough to be working on a project where the common goal of all parties is to reduce the CPU load on your system, I have learned a few things that I hope can help you.<br /></p><p><strong>Identify the Targets for Optimization</strong></p><p>Identify those SQL statements that contribute the greatest to the CPU load on your database server.
These statements usually relate to those that produce the most logical I/O on your database. Caution needs to be taken when trying to identify these statements. You shouldn’t focus solely on those statements that have the highest logical I/O (LIO) to execution ratio. Often you will find statements that are well optimized but are executed with extremely high frequency. Look for the aggregate LIO footprint of a SQL statement. Without Statspack or AWR this analysis might be very difficult. However, if you collect this diagnostic data you can use the LEAD analytical function to craft a nice SQL statement to identify the top CPU consuming statements on your system (join stats$sql_summary and stats$snapshot).<br /><br />Don’t limit your SQL statement identification to just those statements flagged by your analysis as a top CPU consumer. Go another step and identify the most frequently executed statements. Some of the most frequently executed statements are the most optimized on your system. These statements, if executed by many programs concurrently, can influence concurrency and thus CPU load. One approach I took recently identified the top 20 CPU consuming statements during a 12 hour window of each week day. I then ran the same analysis against the most frequently executed statements on the system. The results yielded only 31 distinct statements, as 9 were on both lists. The amazing thing is that, on average, these 31 statements accounted for 58% of all logical reads on the system and 59% of all executions. Keep in mind that there were over 20 thousand distinct statements cached in the Shared Pool. It is rather amazing that such a small subset of the application footprint contributed so greatly to the aggregate load.</p><p><strong>Ask The Right Questions</strong></p><p>The identification phase is crucial as you want to optimize that which will yield the greatest benefit.
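As an illustration of the Statspack mining mentioned above, something along these lines can difference the cumulative buffer gets between consecutive snapshots. Treat it as a sketch: the exact stats$sql_summary columns vary by release, so the column names here are assumptions:

```sql
-- Top SQL by logical I/O over the last day, from Statspack snapshots.
-- LEAD turns cumulative buffer_gets into per-interval deltas.
select hash_value, sum(gets_delta) total_gets
  from (select s.hash_value,
               lead(s.buffer_gets) over
                 (partition by s.hash_value order by s.snap_id)
               - s.buffer_gets gets_delta
          from stats$sql_summary s, stats$snapshot n
         where s.snap_id = n.snap_id
           and n.snap_time > sysdate - 1)
 where gets_delta > 0
 group by hash_value
 order by total_gets desc;
```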
Subsequent to the identification phase the Database Administrators and Developers can sit and discuss approaches to reduce the load incurred by these SQL statements. Here are some of the key points I have taken away during such collaborative efforts.<br /><br /><em><span style="font-size:85%;">1. Is there a better execution plan for the statement? Optimization is often achieved by rewriting the query to get at a better execution plan. While I don’t like hinting code, hints can relieve pressure in a pinch. </span></em></p><p><em><span style="font-size:85%;">2. Does the statement need to be executed? If you see SQL statements that seldom/never return rows (rows/exec approaches 0) there is a possibility they can be eliminated from your application. </span></em></p><p><span style="font-size:85%;"><em>3. Does the statement need to be executed so frequently? You might be surprised that Developers often have other application-side caching techniques that can dramatically </em><em>reduce the frequency of a statement’s execution against the database. Or, the application might simply call the statement needlessly. It doesn’t hurt to ask! </em></span></p><p><em><span style="font-size:85%;">4. Are the requirements of the business immutable? Sometimes you can work an optimization by simply redefining what is required. This is not the tail wagging the dog here. It is possible that the business would be completely happy with a proposed optimization. For example, can the query return just the first 100 rows found instead of all rows?</span></em></p><p><em><span style="font-size:85%;">5. Do the rows returned to the application need to be sorted? Highly efficient SQL statements can easily have their CPU profile doubled by sorting the output. </span></em></p><p><em><span style="font-size:85%;">6. Are all columns being projected by a query needed?
If your application retrieves the entire row and it only needed a very small subset of the attributes it is possible you could satisfy the query using index access alone. </span></em></p><p><em><span style="font-size:85%;">7. Is the most suitable SQL statement being executed to meet the retrieval requirements of the application? Suitability is rather vague but could apply to: the number of rows fetched, any misplaced aggregation, insufficient WHERE clause conditions etc. </span></em></p><p><em><span style="font-size:85%;">8. Are tables being joined needlessly? I have encountered statements that Developers have determined are joining a table, projecting some of its attributes, without using its data upon retrieval. The inclusion of another table in such a manner can dramatically increase the logical I/O required. This is extremely difficult for a DBA to discern without intimate application code knowledge. </span></em></p><p><em><span style="font-size:85%;">9. How well are your indexes clustered with your table(s)? Sometimes data reorganization techniques can greatly reduce the logical I/O required of a SQL statement. Sometimes IOTs prove to be very feasible solutions to poor performing queries. </span></em></p><p><em><span style="font-size:85%;">10. Can I add a better index or compress/rebuild an existing index to reduce logical I/O? Better indexing and/or index compression could take a query that required 10 logical I/O operations down to 5 or 6. This might feel like a trivial optimization. But, if this statement is executed 300 times each second that could save your system 1,500 logical I/Os per second. Never discount the benefit of a 50% reduction of an already seemingly optimized statement. </span></em></p><p><span style="font-size:85%;"><em>11. Can I reorganize a table to reduce logical I/O?</em><br /></span><br /><br />I suspect most of us have read that 80% of optimization is application centric (I tend to feel that the percentage is higher). 
Usually the implication is that the SQL being generated and sent to the database is 80% of the target for optimization. More specifically, optimization requires the tuning of SQL 80% of the time. However, don’t limit your efforts to optimize your application to “tuning the SQL.” Sometimes a portion of your optimization will include “tuning the algorithms” used by the application. Needless execution and improper execution of SQL statements can be equally destructive. Hardware resources, in particular CPUs, can be very expensive to purchase and license for production Oracle databases. It is well worth the effort to at least investigate the possibilities of increasing CPU headroom by decreasing CPU utilization.</p><p>Update: An astute reader suggested I mention Cary Millsap's Optimizing Oracle Performance with regard to this topic. I highly recommend reading this book as it weighs in heavy on Oracle optimization and Method-R. Trust me if you have optimization on the brain don't miss this read.</p>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com6tag:blogger.com,1999:blog-21618965.post-1157897516985630172006-09-10T09:53:00.000-04:002006-09-10T10:11:57.000-04:00Oracle Riddles: What's The Point?I am frequently asked for directions. Sometimes I am not the best to ask and will just be a waste of your time and energy. Other times I am sought exclusively. I try to lead a balanced life. But, hey, I am not perfect. What exactly am I?Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com8tag:blogger.com,1999:blog-21618965.post-1157769370515412112006-09-08T22:21:00.000-04:002006-09-08T23:47:05.320-04:00Don't Get Caught With Your GUI DownIn a recent wave of interviews I was amazed how little prospective DBA candidates knew about user-managed hot backups. Most could give the BEGIN and END backup stuff and convey that it causes more redo to be generated during this time. 
But, when asked to give a little more of their insight into the mechanics or performance implications, 9 in 10 had plenty to say - just nothing that was correct. 90% could not explain the significance of putting a tablespace in hot backup mode. That is, why do it? Why not just copy the file while the database is open and cooking? Of course, most understood that Oracle needs us to do this so that the backup is "good", but few knew how Oracle went about doing it. Moreover, few knew why the extra redo was generated. And most amazing, nearly all thought the data files were locked and changes were written to the redo logs and reapplied when the END BACKUP command was given. Where are DBA-types reading this? Irrespective, the DBA population is evolving.<br /><br />I am not basing my opinion on one simple question concerning user-managed backups, but a host of other questions given as mental exercises. What are some of the Oracle wait events? What do they represent? How would you go about troubleshooting systemic response time degradation in your production database? What is extended SQL tracing and why use it? Time after time candidates struggled to give lucid, well thought out responses. A vast majority of responses could be summarized as, "I would go into OEM and check for A or B." I don't have a problem with using OEM, but usually the A’s and B’s had little relevance to the question.<br /><br />The herd of <em>available</em> DBAs that are able to navigate the database using native SQL to get at critical performance diagnostic information has thinned dramatically. Sometimes I wonder what would happen to some of these shops being supported by some I interview if OEM, Database Control or Grid Control took the night off. When relegated to digging into the database and troubleshooting armed only with a SQL prompt, many appear to be lost. I certainly appreciate what the Oracle GUI database management tools bring to the table. I even like them. 
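For the record, the user-managed hot backup sequence those candidates were asked about amounts to only a few commands (a sketch; the tablespace name and file path are illustrative):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafile(s) with an OS utility while the database stays open,
-- e.g. from a shell: cp /u01/oradata/PROD/users01.dbf /backup/
ALTER TABLESPACE users END BACKUP;
```

The datafile remains fully writable throughout; only the checkpoint SCN in its header is frozen so recovery knows where to begin.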
My point is, don't throw away your shovel just because you have a snow blower. The day will come when your GUI will fail you and it will be just you and your SQL prompt.<br /><br /><em>P.S.> Oracle does not lock the content of its data files during the course of a user-managed hot backup. Actually, Oracle only locks one thing, the </em><a href="http://esemrick.blogspot.com/2006/02/pleasure-of-finding-oracle-things-out.html"><em>master checkpoint SCN</em></a><em> inside the file header. Some other constructs in the file header stay mutable. Blocks in data files being backed up can be modified as per normal database operation. The changes to blocks are indeed recorded in the redo, but they are not replayed when the END BACKUP is issued. More redo is possible because Oracle must accommodate the potential presence of fractured blocks.</em>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com8tag:blogger.com,1999:blog-21618965.post-1157046813141257172006-08-31T13:52:00.000-04:002006-09-04T15:22:24.366-04:00SQL Gone Wild!Ever see something so inefficient it evokes images of grape stomping to produce wine? I have, in Oracle 10g no less. A colleague of mine brought me a situation the other day that made me do a double-take, no triple-take, on a 10046 trace. The scenario involved a single row delete from a table using the index associated with the primary key on said table to delete the row, simple right? Well, the delete hung. The 10046 showed "db file sequential reads" spewing at a very rapid clip. The process was reading a child table that contained a column that referenced the primary key of the table being deleted. Okay, this is to be expected. We don't want to break our self-imposed business rules by orphaning child records. So what is my beef with this situation?<br /><br />The child table had millions of rows that would have been orphaned had the delete succeeded. Keep in mind the constraint was NOT defined with ON DELETE CASCADE. 
Also, a single-column index on the child table was associated with the child key. The stage was set for a swift and proper decline by Oracle to perform our delete. But this did not happen. Oracle was visiting ALL of the child rows then returning ORA-02292 "... - child record found." Yes, each and every child index entry was being visited. My colleague opened an SR with a very elegant little test case that reproduces the problem. Here it is. Try it for yourself and watch the trace with wonder and amazement. We have performed the test in 8i, 9i and 10g with the same results.<br /><br /><span style="font-family:courier new;">DROP TABLE CHILD;<br />DROP TABLE PARENT;<br />CREATE TABLE PARENT (COL1 NUMBER);<br />ALTER TABLE PARENT ADD CONSTRAINT PARENT_PK PRIMARY KEY (COL1);<br />CREATE TABLE CHILD (COL1 NUMBER);<br />CREATE INDEX CHILD_IX_01 ON CHILD (COL1);<br />ALTER TABLE CHILD ADD CONSTRAINT CHILD_FK_01 FOREIGN KEY (COL1) REFERENCES PARENT;<br />INSERT INTO PARENT VALUES (999999999999);<br />INSERT INTO CHILD VALUES (999999999999); </span><br /><span style="font-family:courier new;">COMMIT;<br /><br />-- Insert approximately 1 million records into CHILD (20 doublings of 1 row = 1,048,576 rows)<br />begin<br />for i in 1..20 loop<br />insert into child select * from child;<br />commit;<br />end loop;<br />end;<br />/</span><br /><span style="font-family:courier new;"></span><br /><span style="font-family:courier new;">alter session set events '10046 trace name context forever, level 12';</span><br /><span style="font-family:courier new;"></span><br /><span style="font-family:courier new;">DELETE FROM PARENT WHERE COL1 = 999999999999; </span><br /><p>Why doesn't Oracle stop once it encounters the first index entry indicating a foreign key violation has just occurred? Isn't a single found entry sufficient to fail my statement? It seems a bit indulgent to check each and every child row irrespective of my barbaric attempt to break my own business rules. Is it a classic case of stupid is as stupid does? Nope. 
It is a good old-fashioned Oracle bug.<br /><br />By the way, the Oracle support analyst recommended putting the index associated with a child key in a read-only tablespace as a workaround. Think about that for a second...</p>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com5tag:blogger.com,1999:blog-21618965.post-1154471738009861452006-08-01T18:33:00.000-04:002006-08-01T18:40:47.783-04:00Instructive Presentation on Logical I/O MechanicsIf a picture says a thousand words then a good animation can say ten thousand. Check out this <a href="http://julian.dyke.users.btopenworld.com/com/Presentations/LogicalIO.ppt">offering</a> by Julian Dyke. His presentations relating to Oracle mechanics still reign supreme in my book. Once you see a mechanical concept "in motion" you simply don't forget it. What a great didactic device. Anyway, I just wanted to pass this along. Enjoy.Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com2tag:blogger.com,1999:blog-21618965.post-1154046561067848712006-07-27T20:07:00.000-04:002006-08-22T21:38:46.636-04:00Training Class (Final Day)To round off the material covered in this class the following topics were covered today:<br /><ol><li>Tuning Block Space Usage.</li><li>Tuning I/O.</li><li>Tuning PGA and Temporary Space.</li><li>Performance Tuning: Summary.</li></ol><p>I found the Tuning I/O lecture somewhat interesting. The first portion of the lecture focused on the advantages and disadvantages of the various forms of RAID protection. While informative, I could've spent 5 minutes on Google had I not already been armed with the knowledge of this technology. The remainder of this lecture focused on ASM (Automatic Storage Management). This rather non-trivial feature in 10g sounds very cool; define some Data disk group(s), the relevant protection and striping granularity and let Oracle do all of the I/O tuning. 
Of course, this is a severe oversimplification of what it really does (or doesn't, as your mileage may vary). But, the point is, this feature is supposed to free the DBA from the oftentimes laborious chore of tuning the I/O subsystem. Truthfully, I think the degree to which Oracle touts the hands-off nature of this feature is overstated, especially for busy production systems. Neither I nor anyone in the class had worked with the product. Consequently, I feel there are probably very few shops out there migrating their production databases to ASM. Is it more of a political battle? After all, if DBAs will be able to someday create and manage the logical volumes/file systems this might make the System Administrators feel a little encroached upon. It is just a hunch, but widespread conversions to ASM will probably not happen anytime soon. Anyone reading this blog have any good/bad experience with ASM in a production environment? I am very interested in your feedback.</p><p>The most engaging lecture of the day was the Tuning Block Space Usage. I am really keen on the Automatic Segment Space Management (ASSM) feature. This feature warrants serious consideration given the upside: free list elimination and a considerably more robust approach to reusing blocks for inserts. As much as I liked the discussion on ASSM, the subsequent topic grabbed my utmost attention: segment shrinking. What a great (and might I add way overdue) feature. If one of my production environments was on 10g today I could see using this tool to reclaim vast amounts of space in some of my very large heap tables, index-organized tables and indexes. Oracle claims that the majority of the work can be done online. Moreover, the indexes associated with your heap tables are still usable even after the row movement inherent to the SHRINK has completed. I like the idea of having the freedom to perform these "online" activities, but I still prefer to perform these kinds of operations during quiet periods. 
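The shrink itself boils down to two statements (10g syntax; the segment must reside in an ASSM tablespace, and the table name here is illustrative):

```sql
ALTER TABLE big_heap ENABLE ROW MOVEMENT;   -- shrink physically relocates rows
ALTER TABLE big_heap SHRINK SPACE CASCADE;  -- compact, lower the HWM, shrink dependent indexes too
```

If even the brief serialization of the high-water-mark adjustment is a concern, SHRINK SPACE COMPACT does only the row compaction online, leaving the quick final SHRINK SPACE for a quiet period.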
The course material gives a fantastic, albeit brief, description of the mechanics. Very nice Oracle! Once again, are there any readers of this blog that have experience with this feature and want to share your experiences?</p><p>The final two lectures, Tuning PGA and Temporary Space and Performance Tuning Summary, were good, but not great. The material seemed to belabor a few points.</p><p>In summary, if you are considering taking this course I think you are best served if you do not have much 10g experience in production environments. If your experience with 10g and some of the "tuning" features is even moderate, I recommend you not take the course. Your time would be better spent reading up on this material in the Oracle documentation set. </p><p>Eric's rating of the course: B+.</p>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com3tag:blogger.com,1999:blog-21618965.post-1153961179488274752006-07-26T20:37:00.000-04:002006-08-22T21:36:45.130-04:00Training Class (Day 3)Another day of training is in the books. What was on today's menu?<br /><ol><li>Tuning the Shared Pool.</li><li>Tuning the Buffer Cache.</li><li>Automatic Shared Memory Management.</li><li>Checkpoint and Redo Tuning.</li></ol><p>Apparently, Oracle is migrating some of its serialization protection from latches to mutexes. For example, the structures previously protected by the <em>Library Cache Pin</em> latch are now protected by a mutex and evidenced by the <em>cursor:pin S</em> wait event. Actually there are several new mutexes and mutex related wait events new to 10g. For example:</p><p>- <em>cursor:mutex</em> indicates mutex waits on parent cursor operations and statistic block operations.</p><p>- <em>cursor:pin</em> events are waits for cursor pin operations (library cache pin now protected by mutex).</p><p>There are a couple interesting facts about Oracle and mutexes. A mutex get is about 30-35 instructions, compared to 150-200 instructions for a latch get. 
Also, a mutex is around 16 bytes in size, compared to 112 bytes for a latch in Release 10.2 (in prior releases, it was 200 bytes). </p><p>One of the appeals of the mutex, per the documentation, is the reduced potential for false contention. That is, a mutex can protect a single structure; often times stored with the structure it protects. However, latches often protect many structures (see cache buffers chain latch) and can yield what the documentation calls false contention. It is called false contention because "the contention is for the protection mechanism rather than the target object you are attempting to access." This all sounds really great, right? Well, maybe. If Oracle goes to more widespread use of mutexes instead of latches to protect target objects that would be a boatload more mutexes. I am sure the porters at Oracle are not intending to use mutexes exclusively in the future. But, I can see where contention in Oracle could be dramatically reduced at the cost of CPU cycles and memory. What would happen if Oracle protected each buffer with a mutex? While each mutex is less expensive with regard to memory and CPU than an individual latch, you will need considerably more mutexes for each replaced latch. 50 mutexes used to replace a single latch could run the CPU up considerably for the "same" application workload. </p><p>I have one final note on mutexes. As of version 10.2.0.2 a SELECT against V$SQLSTAT and searches of child cursor lists are mutex protected.</p><p>I found the Tuning the Buffer Cache discussion somewhat interesting. Unless you have been hiding under a rock the past 4-5 years, I am sure you have heard the Oracle experts preaching the notion that ratios are not very helpful in diagnosing the health of a database. In particular, the buffer cache hit ratio is frequently tagged as meaningless. 
A smile came to my face when I read the following excerpt from the course material:</p><p>"A badly tuned database can still have a hit ratio of 99% or better...hit ratio is only one part in determining tuning performance...hit ratio does not determine whether a database is optimally tuned..."</p><p>Oracle is finally teaching what the experts have been saying for years!</p><p>I have been to several Hotsos events/training classes. They often talk about the need to include the <em>buffer is pinned count</em> statistic in the tally for logical reads. These operations are simply latch-reduced logical reads. Why doesn't Oracle integrate this information into their course material or documentation set? They still only claim that <em>db block gets</em> and <em>consistent gets</em> constitute logical reads. I monitored a process recently in one of my production environments and noticed the process did 2 <em>buffer is pinned count</em> logical reads for every 1 (<em>db block gets</em> + <em>consistent gets</em>). That is a substantial percentage of work owed to operations not officially categorized as a measure of work by Oracle.</p><p>Lastly, the on-topic impromptu discussions were fruitful. That always makes the training session more interesting :)</p>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com3tag:blogger.com,1999:blog-21618965.post-1153918975695662812006-07-26T08:57:00.000-04:002006-07-26T09:38:00.326-04:00Training Class (Day 2)The second day of training was much better than the first. I suspected it would get better based on the material to be covered. The topic set du jour was:<br /><br /><ol><li>Metrics, Alerts and Baselines.</li><li>Using Statspack.</li><li>Using Automatic Workload Repository.</li><li>Reactive Tuning.</li></ol><p>Having limited exposure to 10g in any true production environment, I found 75% of these topics interesting (Statspack chapter was not valuable to me). 
I really like what Oracle has accomplished with 10g with regard to the gathering and reporting of statistics and metrics (the rates of change for given statistics). About 5 years ago I wrote a utility for 9i that allowed me to compare Oracle-captured statistics and wait event durations to similar reference points. This utility, which I dubbed AppSnap (written in PL/SQL), captured the statistics and wait event durations each hour and calculated and stored the deltas in a separate tablespace. This permitted me to compare what is considered "typical" load to current load and evaluate the deviations rather quickly. I wrote a Unix shell script reporting tool called Instance Health that reports each hour the deltas as they relate to what I call peer hours. For example, each hour a report is generated as a text file and stored in a log directory. The most recent delta is compared to the same hour of day for the past 30 days, the same hour of day and day of week for the past 12 weeks and against all hours for the past 30 days. This has proved to be very valuable for detecting systemic anomalies after application upgrades, etc. </p><p>Okay. Now Oracle has come along with 10g and provides the same functionality (albeit not free). I appreciate the graphical conveyance of this type of analysis provided by Enterprise Manager. Shoot, Oracle even calculates the variance within the sampled timeframe for each metric. This is really cool because you can easily write a query that can ascertain if some metric is statistically anomalous (i.e. +-3 standard deviations). At first glance, some of the AWR reports are not very intuitive. But, the more you stare at them the more sense they appear to make. The Active Session History reporting is also a very nice feature (once again, not free). </p><p>If you already have considerable work experience with AWR/ASH/ADDM then this class probably won't provide you much value. 
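The +-3 standard deviation test mentioned above can be sketched against the 10g workload repository — assuming the DBA_HIST_SYSMETRIC_SUMMARY and V$SYSMETRIC views; averaging the stored per-interval deviations is only a rough proxy for a true baseline, so treat this as illustrative:

```sql
-- Flag any current system metric sitting more than three standard
-- deviations above its historical AWR average.
SELECT c.metric_name, c.value,
       h.avg_val, h.avg_val + 3 * h.std_dev AS upper_bound
FROM   v$sysmetric c,
       (SELECT metric_name,
               AVG(average)            AS avg_val,
               AVG(standard_deviation) AS std_dev
        FROM   dba_hist_sysmetric_summary
        GROUP  BY metric_name) h
WHERE  c.metric_name = h.metric_name
AND    c.value > h.avg_val + 3 * h.std_dev;
```

V$SYSMETRIC typically carries more than one interval length per metric; a GROUP_ID filter is omitted here for brevity.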
The course does go into the mechanics of the data capturing and touches rather superficially on the reporting capabilities. So there is a good chance you probably have more knowledge about these products than this class affords. However, if you are like me and have yet to dig in your heels on a 10g production environment this class could serve as a very nice primer.</p><p>Well, I am off to day 3 of this training class.</p>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com4tag:blogger.com,1999:blog-21618965.post-1153778562465010582006-07-24T18:02:00.000-04:002006-07-24T21:29:30.410-04:00Training Class (Day 1)As I mentioned in my last post, I am attending an Oracle training class this week. The class is Oracle Database 10g: Performance Tuning. Having been to the Oracle 10g New Features for Database Administrators class last year, I was hoping for a substantially more in-depth look at 10g instance tuning in this class.<br /><br />The first day was just okay for me. I learned a few things but I felt the class dragged a bit: 30 minutes getting to know each other and too many unrelated tangents or related, yet gratuitous, topic embellishments. I don't mind an occasional anecdotal deviation as it relates to the topic at hand, but those that are completely off topic really slow down the course. You know when you are reading a story and everything seems to flow nicely (proper balance of dialogue and narrative), then you run into a couple pages of narrative? It really puts the brakes on the interest.<br /><br />I tend to evaluate courses on:<br /><ol><li>The conveyed knowledge of the instructor.</li><li>The presentation of the subject matter.</li><li>The quality of the course material.</li><li>The value of the impromptu discourse along the way (icing on the cake stuff). </li></ol><p>Based on the first day, I feel the instructor has a good command of the subject matter and adds value to the class based on relevant experience. 
I have been to several Oracle classes where the instructor did nothing more than read the slides verbatim and/or appeared to have little relevant experience. Aside from the introductions, we spent the remainder of the day on two topics: a performance tuning overview and 10g statistics/wait events. The study material that accompanies the course is rather good; I have taken the liberty to skip ahead and get into the meat of the course material. I am looking forward to the class tomorrow as I feel we will be digging our heels in a bit more (or at least I hope).<br /><br />Given the sparse amount of substantive material covered on the first day, I don't have any really interesting takeaways. I'll give the first day of class a B. </p>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com0tag:blogger.com,1999:blog-21618965.post-1153167950808629912006-07-17T16:17:00.000-04:002006-07-17T20:19:42.566-04:00iLoveitWell, this weekend I replaced my "archaic" Generation 4 IPod with a snazzy new Generation 5 Video IPod. I chose the black facing this time for a change of pace.<br /><br /><img style="DISPLAY: block; MARGIN: 0px auto 10px; WIDTH: 320px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="http://static.flickr.com/51/192100438_e4ab3ce186_m.jpg" border="0" /><br />I really like the IPod product - versatile, convenient and just loads of entertainment/educational potential for my ~2 hours of public transit commute each day.<br /><br />With more and more news and entertainment mediums adding video Podcasts this little device has become a cool way to pass the commuting time. I enjoy reading on my commute, but watching an interesting show or funny movie can pass the time at record clips. I won't be surprised if I ask the engineer to make another loop so I can finish watching my movie before heading into work :)<br /><br />I wonder when Mr. 
Kyte will be Podcasting some material...<br /><br />I'll be in Oracle training next week attending the Oracle Database 10g: Performance Tuning (Database) class. It should be a really good course. I'll be blogging some of the interesting takeaways from the class, so stay "tuned"!Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com3tag:blogger.com,1999:blog-21618965.post-1151535348584140592006-06-28T18:51:00.000-04:002006-06-28T18:56:36.740-04:00Cool IllusionNothing Oracle related this time. Just some fun. I received this nice little <a href="http://www.milaadesign.com/wizardy.html">illusion</a> today via email. Enjoy.Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com4tag:blogger.com,1999:blog-21618965.post-1151029634079204302006-06-22T21:56:00.000-04:002006-06-22T22:33:11.403-04:0010,000 Visits and CountingWell, today my blog registered its 10,000th visit. While that is probably an average couple of days for Tom Kyte and a few others, I am kind of proud of my meager slice of the Oracle blogging pie. Of course, I get oodles of traffic from Google. But, that's not too bad is it?<br /><br />Seriously, it has been a fun 4+ months of blogging. I haven't been able to blog as much as I would have liked over the past 2 months. The fact that I still desire to blog is a good sign though. Time and other constraints in life always crop up. Anyway, it has been really fun and I hope some of you have enjoyed your visits. I have received some very interesting emails and questions from Oracle enthusiasts all over the world.<br /><br />I want to send one special thanks out to Tom Kyte for recommending I start a blog (and placing me on his metablog even though I stole the naming style of his blog) and another to Doug Burns who featured my blog on his site several months back, giving my traffic a jump start.<br /><br />Actually, Tom, my first choice for a blog name was The Emusing Blog :)Eric S. 
Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com5tag:blogger.com,1999:blog-21618965.post-1150933196519478782006-06-21T19:38:00.000-04:002006-06-21T21:21:06.363-04:00Oracle BlooperWell, instead of assimilating the suitable, albeit plentiful, Oracle manuals to cover the material required for the Oracle 7.3 -> Oracle 9i upgrade certification exam I opted to search for a single resource. I came across a third-party study tool that was recommended by Oracle and seemed to fit the bill. Very much to my surprise the material was disjointed, loaded with typographical errors and often flat out wrong.<br /><br />I was willing to forgive the copious typographical mistakes per page (which approached the Golden Ratio, mind you) as I could deduce the intentions. I could stomach the disjointed word salads and sparse information. But I refused to read another page after encountering a heinously blatant, careless and nonsensical bit of misinformation. How could I possibly continue to use this material as a study reference if I could not trust the content? To the misinformation at hand, the material states the following verbatim:<br /><br />PGA_USED_MEM - The process is using PGA memory.<br />PGA_ALLOC_MEM - The process has been allocated PGA memory.<br />PGA_MAX_MEM - The process has been allocated maximum memory.<br />PGA_GIBBERISH - The process has found gibberish in the PGA and wishes to purge. (OK, this was my invention)<br /><br />I scratched my head. Re-read, scratched head some more. Finished beer and reached for another. Nothing seemed to alleviate my consternation. I was well aware of these attributes of <em>v$process</em> and was not so much concerned with the incorrectness, as I knew their meaning. It was the gross negligence that left my jaw drooping for a minute.<br /><br />The values for these attributes are NOT Boolean as you well know. You don't query <em>v$process</em> and find a Y or N associated with the values for these attributes. 
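A ten-second query makes the point — the columns hold byte counts, not flags (output is obviously instance-dependent):

```sql
-- The three PGA columns of v$process are plain numbers, in bytes.
SELECT p.pid, p.pga_used_mem, p.pga_alloc_mem, p.pga_max_mem
FROM   v$process p
ORDER  BY p.pga_max_mem DESC;
```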
The Oracle documentation defines these attributes in a very straightforward manner. Is there any other way?<br /><br />PGA_USED_MEM number PGA memory currently used by the process<br />PGA_ALLOC_MEM number PGA memory currently allocated by the process (including free PGA memory not yet released to the operating system by the server process)<br />PGA_MAX_MEM number Maximum PGA memory ever allocated by the process<br /><br />Simply stated, I was shocked that the author(s) and editor(s) put such little thought into the material and subsequent proof reading. Actually, I think the author's brain was tied behind his back while writing this material. If one aspires to put together training material and includes attribute definitions that are pre-defined for you in the Oracle documentation set, might I recommend taking a cursory glance at said documentation? You can't just feed me a heaping helping of documentation rubbish without expecting me to pitch the kindling into the nearest can - I know, I've seen me do it! Did I mention the material is several fold more expensive than any of Tom Kyte's or Jonathan Lewis' books? Lesson learned.Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com6tag:blogger.com,1999:blog-21618965.post-1149456709315305282006-06-04T17:30:00.000-04:002006-06-22T22:29:43.330-04:00Grapes of Math PuzzleFirst, let me say that I was really excited about the title I gave to this post. It hit me while mowing my lawn and made me stop and laugh - thankfully the neighbors were not watching (as far as I know). Anyway, I did a quick Google search on the title <em>Grapes of Math</em> to see how original it was. While I had not heard of the title, it has many hits on the Web. Oh well, so much for absolute originality.<br /><br />Tom Kyte has a really good <a href="http://tkyte.blogspot.com/2006/06/very-cool.html">puzzle</a> on his blog. I enjoy a good puzzle and submitted my response, which I believe is the solution to the puzzle. 
But, for my personal satisfaction and to address Mr. Ed's concerns, I wanted to prove that my response was the only possible correct answer, given some conditions I have derived from Tom's post, the problem (picture) itself and intuition. If you are interested in the puzzle, please visit Tom's blog and try to solve it for yourself and skip the remainder of this post.<br /><br />You mathematicians will please forgive any seemingly barbaric notations or proof layout :) Cary, Jonathan, Tom or any other mathematician lurking about, please feel free to critique the proof if it is incorrect.<br /><br /><br /><em>Grapes of Math</em> equation generated from picture:<br />(10(banana)+apple)/pear = 10(grapes)+peach+(strawberry/pear)<br /><br />Prove the only solution set (banana, apple, pear, grapes, peach, lemon, strawberry) for the <em>Grapes of Math</em> is (9,3,2,4,6,8,1) given the following conditions:<br /><br /><strong>Conditions<br /></strong><br />0) each fruit represents a distinct integer that must be in [0-9]. Negative integers don't really make much sense in this case - how do you ascribe a negative integer to any fruit but the pear?<br />1) the numerator concat(banana,apple) is in [00-99].<br />2) pear cannot be 1 because grapes * pear would equal grapes and it does not. grapes * pear = lemon.<br />3) pear cannot be 0 because x/0 is undefined for all integers x.<br />4) from 2) and 3) 9 >= pear > 1.<br />5) grapes cannot be 1 because grapes * pear would equal pear and it does not. grapes * pear = lemon.<br />6) grapes cannot be greater than 4 because that would yield a concat(banana,apple) that is > two digits, which cannot be (condition 1). For example, the integer portion of the quotient concat(grapes,peach) must be less than 50, based on 4).<br />7) from 5) and 6), 4 >= grapes > 1.<br />8) from 7) banana is in [4-9]. If the integer portion of the quotient (grapes) is 2, 3 or 4, then given 4) the numerator, concat(banana,apple), must be in [40-99] . 
The least the numerator could be is 40 given 4) and 7). The highest would be 99 by definition.<br />9) all fruits taste really yummy (This is for Mr. Ed)<br /><br /><strong>Proof by Exhaustion (brute force method): Grapes of Math Puzzle<br /></strong><br />Case 1: grapes = 4<br /><br />If grapes = 4 then banana can only be 8 or 9 because of 4).<br /><br />Case 1.1: banana = 8<br /><br />If grapes = 4 and banana = 8 then pear = 2 and lemon = 8. Lemon cannot equal banana by condition 0) and, thus, banana != 8. Therefore, concat(banana,apple) is not in [80-89].<br /><br />Case 1.2: banana = 9<br /><br />If grapes = 4 and banana = 9 then pear = 2 and lemon = 8. Then by subtraction (banana - lemon) = (9-8) = 1 = strawberry.<br /><br />Case 1.2.1: apple = 0<br /><br />If apple = 0 then peach = 5 and concat(grapes,peach) = 45 with no remainder. We know that there must be a remainder because strawberry is 1 in this case. Therefore, apple != 0 and concat(banana,apple) is not 90.<br /><br />Case 1.2.2: apple = 1<br /><br />If apple = 1 then apple = strawberry = 1. Therefore, apple != 1 and concat(banana,apple) is not 91.<br /><br />Case 1.2.3: apple = 2<br /><br />If apple = 2 then apple = pear = 2. Therefore, apple != 2 and concat(banana,apple) is not 92.<br /><br />Case 1.2.4: apple = 3<br /><br />If apple = 3 then peach = 6 and concat(strawberry,apple) - concat(strawberry,pear) = strawberry = 1. Therefore, grapes = 4, banana = 9, pear = 2, lemon = 8, strawberry = 1 and apple = 3.<br /><br />Therefore, concat(banana,apple) = 93 is a numerator solution.<br /><br />Case 1.2.5: apple = 4<br /><br />If apple = 4 then apple = grapes = 4. Therefore, apple != 4 and concat(banana,apple) != 94.<br /><br />Case 1.2.6: apple = 5<br /><br />If apple = 5 then peach = 7 and pear = 4. But pear is assumed to be 2 in this case, and 2 != 4. Therefore, apple != 5 and concat(banana,apple) != 95.<br /><br />Case 1.2.7: apple = 6<br /><br />If apple = 6 then peach = lemon = 8.
Therefore, apple != 6 and concat(banana,apple) != 96.<br /><br />Case 1.2.8: apple = 7<br /><br />If apple = 7 then peach = lemon = 8. Therefore, apple != 7 and concat(banana,apple) != 97.<br /><br />Case 1.2.9: apple = 8<br /><br />If apple = 8 then apple = lemon = 8. Therefore, apple != 8 and concat(banana,apple) != 98.<br /><br />Case 1.2.10: apple = 9<br /><br />If apple = 9 then apple = banana = 9. Therefore, apple != 9 and concat(banana,apple) != 99.<br /><br />Therefore, for grapes = 4, the only solution for numerator concat(banana,apple) in [80-99] is 93.<br /><br />Case 2: grapes = 3<br /><br />If grapes = 3 then banana can only be 6 or 7 because pear > 1 from condition 4).<br /><br />Case 2.1: banana = 6<br /><br />If grapes = 3 and banana = 6 then pear = 2 and lemon = 6, and lemon = banana = 6. Therefore, banana != 6 and concat(banana,apple) is not in [60-69].<br /><br />Case 2.2: banana = 7<br /><br />If grapes = 3 and banana = 7 then pear = 2 and lemon = 6. This means apple can only be in [4-5] (cannot be 6 because lemon = apple = 6 violates condition 0).<br /><br />Case 2.2.1: apple = 4<br /><br />If apple = 4 then peach = 3, and peach = grapes = 3. Therefore, apple != 4.<br /><br />Case 2.2.2: apple = 5<br /><br />If apple = 5 then peach = 5, and apple = peach = 5. Therefore, apple != 5.<br /><br />Therefore, concat(banana,apple) is not in [70-79].<br /><br />Case 3: grapes = 2<br /><br />If grapes = 2 then banana must be in [4-5] because of condition 4).<br /><br />Case 3.1: banana = 4<br /><br />If banana = 4 then lemon = banana. Therefore, banana != 4 and concat(banana,apple) is not in [40-49].<br /><br />Case 3.2: banana = 5<br /><br />If grapes = 2 and banana = 5 then pear = grapes = 2. Therefore, banana != 5 and concat(banana,apple) is not in [50-59].<br /><br />Therefore, concat(banana,apple) is not in [40-59].<br /><br />From grapes in [2-4], we have proved that only one solution (93) exists for the numerator concat(banana,apple) between 40 and 99.
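A postscript of sorts: the search space here is tiny, so the exhaustion above can be cross-checked by machine. Below is a small Python brute force. The constraints it encodes are my reading of the long-division picture (first product lemon = grapes * pear; first subtraction banana - lemon = strawberry; second product peach * pear written as concat(strawberry,pear); remainder strawberry less than pear), so treat it as a sketch of the argument rather than part of the proof:

```python
from itertools import permutations

def grapes_of_math_solutions():
    """Brute-force the puzzle: try every assignment of distinct digits to
    (banana, apple, pear, grapes, peach, lemon, strawberry) and keep the
    ones that satisfy the long-division picture."""
    hits = []
    for banana, apple, pear, grapes, peach, lemon, strawberry in \
            permutations(range(10), 7):
        if pear < 2:                                  # conditions 2) and 3)
            continue
        if grapes * pear != lemon:                    # first product line
            continue
        if banana - lemon != strawberry:              # first subtraction
            continue
        if peach * pear != 10 * strawberry + pear:    # second product line
            continue
        if apple - pear != strawberry:                # second subtraction
            continue
        if not strawberry < pear:                     # proper remainder
            continue
        # sanity check: the division identity itself must hold
        assert 10 * banana + apple == pear * (10 * grapes + peach) + strawberry
        hits.append((banana, apple, pear, grapes, peach, lemon, strawberry))
    return hits

print(grapes_of_math_solutions())  # -> [(9, 3, 2, 4, 6, 8, 1)]
```

It reports exactly one solution set, (9, 3, 2, 4, 6, 8, 1), in agreement with the case analysis.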
By condition 7), concat(banana,apple) is not in [00-39].<br /><br /><strong>Conclusion</strong><br /><br />Therefore, 93 is the only solution for concat(banana,apple) in [00-99]. After exhausting all possible two digit values for concat(banana,apple) only one solution set (banana, apple, pear, grapes, peach, lemon, strawberry) was found:<br /><br />(9,3,2,4,6,8,1)<br /><br />and for Mr. Ed...<br /><br />(9,3,-2,4,6,8,1) iff concat(grapes,peach) = -46<br /><br />Solution set applied to equation of <em>Grapes of Math</em>:<br /><br />(10(9)+3)/2=93/2=46 ½=10(4)+6+(1/2) <em>quod erat demonstrandum</em><br /><br />Note: lemon is absorbed in the equation, given the correctness of strawberry = 1.Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com9tag:blogger.com,1999:blog-21618965.post-1146193673810711592006-04-27T22:52:00.000-04:002006-04-30T21:13:54.703-04:00The Oracle Certification Process<p>In late 1997, on what was a very shiny day (for all of DBA-kind I am sure), I proudly exited my local facility that proctored Oracle certification exams. On this glorious day I had passed the last of four exams required to obtain the coveted Oracle Certified Professional (OCP) title. I was certified on Oracle 7.3 and could not have been more proud. After waiting a few weeks to receive my certificate I brandished it in my home study. Make no mistake about it. I felt this had legitimized my 3 <em>long</em> years of Oracle work to date - I had reached an Oracle summit. At that time the OCP title was not nearly as pervasive as it is today. In hindsight, I suppose my enthusiasm was not entirely unjustified.<br /><br />Let’s roll time forward nine years to 2006. I have <strong>not</strong> renewed my certification. For all practical purposes I am not an OCP. I certainly wouldn’t claim such on my resume having only achieved version 7.3 certification. Why haven’t I renewed my certification? 
After all, Oracle has bent over backwards to assist this erstwhile OCP by offering an upgrade exam. I can take a single exam and immediately upgrade my certification status to an Oracle 9i OCP. If I labored a bit more, I could take another upgrade exam and attain the highest OCP level available. Does this mean that I could, nearly overnight, claim expertise in all of the concepts and elegant nuances Oracle has built into its database since version 7.3? Professionally, on my resume, I suppose the answer is yes. Realistically, the answer is: no way!</p><p>I feel the only real way to stay current with our Oracle knowledge and exhibit the technical acumen associated with a proficient Oracle practitioner is to read (and reread) documentation and test features. There is absolutely no substitute for good old-fashioned studying in conjunction with trial-and-error exercises. I have interviewed dozens of Oracle Certified Professionals over the years, many of whom struggled with the basics. I do believe that today, more than ever, the ubiquitous OCP title provides little insight into the qualifications of an Oracle DBA. However, I do believe that the certification process can lay an excellent framework for a strong understanding of the Oracle database. Just not necessarily. It varies from person to person. One person with an OCP title might appear lacking when compared to another with the same years of Oracle experience and equal “qualifications” and accomplishments. Why? We all have different approaches to storing information for retrieval. I remember cramming for exams in college for the courses I loathed. I always seemed to make out okay. But, did I really learn the material or just buffer it long enough so that my mind could hurl it back out in the nick of time? I know, for those “undesirable” classes it was the latter. For me to learn I must:<br /><br />1. Want to learn.<br />2. Be passionate about the topic.<br />3.
and study, study, study.<br /><br />Of course, there are exceptions to the rules, those supremely intelligent humans that roam the earth with a glut of gray matter that have little need for 3), leaving it for the rest of us to toil.<br /><br />Am I a better DBA than I was nine years ago? I certainly hope so. Could I augment the breadth and depth of my Oracle knowledge by revisiting the certification process? Absolutely. But, couldn’t I really do the same by studying the material covered by the exams? After all, I am passionate about the topic and want to learn. I know. I know. It sounds like a really cheap excuse. Read the material, but, uh hum, skip the exams right? How convenient.<br /><br />For those of you with your OCP please don’t think I am minimizing your achievements. I am certainly not doing so. I believe that the Oracle certification process can yield a very productive learning experience, insofar as we really take the time to authentically learn the material we are studying. It has been my experience, that if I have ostensibly forgotten what I have learned, as long as I truly <em>understood</em> the material while in the learning process, re-learning can be a very quick enterprise.</p><p>By the way, I think I will take the upgrade exams this year. But, this time I refuse to cram. I will revisit the exam topics with a cheerful willingness, as the science of Oracle database administration is a very exciting and challenging branch of knowledge.</p>Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com9tag:blogger.com,1999:blog-21618965.post-1145592214335640612006-04-20T23:53:00.000-04:002006-04-25T18:11:55.306-04:00Solaris and High Wait I/O CPUA few days back a familiar little situation surfaced. Someone monitoring the OS was making claims that a particular machine was running at 100% cpu utilization during a period when a portion of the application was running slower than normal. 
The assertion being made was that our system had a cpu shortage.<br /><br />Given the fact that the application was running on a Solaris platform, I looked at the <em>vmstat</em> history logs kept for just such an investigation. Per <em>vmstat</em>, for the time in question, there was plenty of idle cpu. Immediately, I thought this person must have been looking at the <em>sar</em> data on the machine in question. Sure enough, the <em>sar</em> data indicated a very low percentage of idle cpu. As you might have guessed, the percentage of time the system was waiting for I/O was rather large according to <em>sar</em> and, consequently, low idle time was being reported. I explained that it was typical for this system to run a high wait I/O percentage as reported by <em>sar</em>; after all, it is a database server with many processors. I also explained that low idle time as reported by <em>sar</em> does not necessarily mean a cpu bottleneck exists.<br /><br />I remembered reading in Adrian Cockcroft’s book, Sun Performance Tuning, that <em>vmstat</em> lumps wait I/O into idle time. So, naturally I was confident in my counter-assertion that our cpu utilization was just fine. I assuredly reached for my copy of the Sun Performance Tuning book to show where I had read this information years ago. I searched the index of the book and gave the book a cursory once-over to no avail. I began to wonder whether I had reached for the wrong text! A bit frustrated, I decided to perform a full book scan. Lo and behold, I only got past two pages before my memory was vindicated. On page 3 it reads: “Whenever there are any blocked processes, all cpu idle time is treated as wait for I/O time! The <em>vmstat</em> command <em><span style="color:#ff0000;">correctly</span></em> includes wait for I/O in its idle value…” Voila!<br /><br />The clock interrupt handler in the Solaris operating system runs every 10ms (or at least used to) to get cpu utilization information.
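To make the bookkeeping difference concrete, here is a toy Python sketch of the accounting. The sample counts and field names are invented for illustration - this shows the arithmetic behind the two reports, not how either utility is actually implemented:

```python
def cpu_percentages(ticks):
    """Summarize per-tick cpu state samples (as the 10ms clock handler would).

    ticks: list of state strings, one per sample:
           'user', 'system', 'idle', 'wio'
    Returns (sar_style, vmstat_style) dicts.  sar reports wait-I/O as its
    own bucket (so %idle shrinks); vmstat folds wait-I/O into idle.
    """
    n = len(ticks)
    def pct(state):
        return 100.0 * sum(t == state for t in ticks) / n
    sar = {'usr': pct('user'), 'sys': pct('system'),
           'wio': pct('wio'), 'idle': pct('idle')}
    vmstat = {'us': pct('user'), 'sy': pct('system'),
              'id': pct('idle') + pct('wio')}   # wio counted as idle
    return sar, vmstat

# 100 samples: 20% user, 10% system, 60% waiting on I/O, 10% truly idle
samples = ['user'] * 20 + ['system'] * 10 + ['wio'] * 60 + ['idle'] * 10
sar, vmstat = cpu_percentages(samples)
print(sar['idle'])    # 10.0 -- looks starved
print(vmstat['id'])   # 70.0 -- plenty of headroom
```

Same ticks, two very different "idle" numbers - which is exactly why the <em>sar</em> output looked alarming while <em>vmstat</em> did not.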
The handler searches the state structure for each cpu and finds that each cpu is in one of five states: user, system, idle, waiting for I/O or quiesced. Based on my understanding, the quiesced state is not really indicated by a value stored in a structure or variable associated with a cpu. It is simply the state when a cpu is not running user, system or idle threads and not waiting for I/O.<br /><br />The point is, a high value for wait I/O generated from <em>sar</em> on a Solaris platform does not indicate a cpu bottleneck. Moreover, high wait I/O values do not necessarily indicate an I/O bottleneck. However, an I/O bottleneck could very easily manifest in high wait I/O percentages. You really need to look at your I/O service times to determine if the I/O subsystem is performing poorly.<br /><br />For those wanting to know more on the algorithm used by Solaris to calculate idle and wait I/O cpu percentages, read <a href="http://sunsite.uakom.sk/sunworldonline/swol-08-1997/swol-08-insidesolaris.html">here</a>. It is a bit dated, but describes how wait I/O is tallied in the Solaris operating system (at least in earlier versions). Interestingly enough, this article cites Sun Performance Tuning, my trusty reference.Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com7tag:blogger.com,1999:blog-21618965.post-1145401437901735792006-04-18T18:41:00.003-04:002009-06-09T14:43:39.863-04:00Getting a Handle on Logical I/OThe other day a colleague brought to my attention an interesting situation related to one of the databases he supports. The database was, rather consistently, experiencing heavy <em>cache buffers chains</em> (CBC) latch wait events while processing against a set of “related” tables. The solution devised to mitigate the CBC latch contention involved range partitioning said tables. I believe proper partitioning can be a very reasonable approach to minimize the probability of CBC latch collisions.
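A toy model conveys the intuition. In the sketch below, block addresses hash to a fixed pool of latch-protected buckets: hammering a handful of hot blocks piles all the accesses onto a few buckets, while the same number of accesses spread over many partition blocks flattens out. The hash function, bucket count and access patterns are all invented for illustration and bear no relation to Oracle's actual internals:

```python
from collections import Counter

N_LATCH_BUCKETS = 1024  # invented; real cbc latch counts vary by platform/version

def bucket(file_no, block_no):
    # toy stand-in for Oracle's (file#, block#) -> hash bucket mapping
    return hash((file_no, block_no)) % N_LATCH_BUCKETS

def hottest_bucket(accesses):
    """Return the access count on the busiest bucket - a rough proxy for
    how contended the hottest cache buffers chains latch would be."""
    return max(Counter(bucket(f, b) for f, b in accesses).values())

# 10,000 block touches concentrated on 8 hot blocks of one segment
hot = [(17, i % 8) for i in range(10_000)]
# the same 10,000 touches spread over 512 blocks across 16 partitions
spread = [(17 + (i % 16), (i * 37) % 512) for i in range(10_000)]

print(hottest_bucket(hot), hottest_bucket(spread))
```

The concentrated pattern lands well over a thousand touches on its busiest bucket; the spread pattern's busiest bucket sees only a small fraction of that - the same effect the partitioning was after.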
Of course, you must know the manner in which your data is accessed and partition accordingly, as you don’t want to sacrifice solid existing execution plans, among other considerations.<br /><br />As it turned out, the partitioning approach did indeed reduce the CBC collisions, though another form of contention surfaced in its place: <em>cache buffer handles</em> latch collisions. I must admit I had a very limited knowledge of buffer handles prior to being made aware of this situation. My colleague pointed me to a very interesting <a href="http://www.jlcomp.demon.co.uk/buffer_handles.html">article</a> on Jonathan Lewis' site. This article gives a pithy description of buffer handles. I highly recommend you carve out a few minutes to read it. Not only might you learn something about buffer handles, you might be surprised that the more traditional notions of logical I/O do not really suffice. I was first suitably introduced to the <em>buffer is pinned count</em> statistic during a <a href="http://www.hotsos.com/">Hotsos</a> training course. Essentially, this statistic indicates the presence of latch-reduced logical I/O.<br /><br />While, generally speaking, Oracle recommends that hidden parameters not be changed, sometimes they need to be modified to accommodate very specific issues your database is encountering. In this particular case, increasing the value of the <strong>_db_handles_cached</strong> parameter got rid of the newly surfaced collisions on the <em>cache buffer handles</em> latch. I love learning from others’ experiences. It is amazing how many interesting little tales such as this exist. Also, this type of unforeseen contention shifting reinforces the need to properly test production changes - or maybe better said, the <strong>ability</strong> to properly test production changes.Eric S.
Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com6tag:blogger.com,1999:blog-21618965.post-1143577785874700182006-03-28T15:27:00.000-05:002006-03-29T22:11:24.566-05:00Oracle Riddles: Now that is interesting.A process can prevent me from staking the claim for another. Users sometimes need me to know if they can secure a spot where another process may have been before. Needless to say, I am a pretty big deal, and users are very interested in me. However, when some stop by for a visit they often feel I haven't cleaned up very well. Do you know what I am?Eric S. Emrickhttp://www.blogger.com/profile/16274261199118127152noreply@blogger.com6