Tuesday, February 28, 2006

LogMiner and Dynamic Performance Views in Harmony

I want to revisit a topic covered in my paper The Oracle Redo Generation; more specifically, how to determine a transaction’s redo size from LogMiner output. A few folks have emailed me with questions, so I think a clarification post is warranted, as I anticipate others might have similar questions.

As I state in the paper, you can use the v$mystat/v$sesstat dynamic performance views or LogMiner to discern the volume of redo a transaction generates, and in my testing these two measures always agree. Let’s examine a very simple transaction that attempts to insert into a test table t but fails on a unique constraint. I chose a statement that fails a constraint in order to exhibit the undo replayed into the redo stream. We will list the redo size statistic from v$mystat before and after the transaction.

eric@10g:SQL> desc t
Name Null? Type
----------------------------------------- -------- ----------------------------

eric@10g:SQL> select * from t;

---------- ----------
1 1

eric@10g:SQL> select constraint_name, constraint_type, status, deferrable,
rely from user_constraints where table_name = 'T';

------------------------------ - -------- -------------- ----

eric@10g:SQL> select 'REDO_START',
(select value from v$mystat where a.statistic#=statistic#) value
from v$statname a where name = ('redo size');

---------- ----------

eric@10g:SQL> select 'ENTRY_START',
(select value from v$mystat where a.statistic#=statistic#) value
from v$statname a where name = ('redo entries');

----------- ----------

eric@10g:SQL> insert into t values (1,1);
insert into t values (1,1)
ERROR at line 1:
ORA-00001: unique constraint (ERIC.SYS_C0010643) violated

eric@10g:SQL> select 'REDO_STOP',
(select value from v$mystat where a.statistic#=statistic#) value
from v$statname a where name = ('redo size');

--------- ----------

eric@10g:SQL> select 'ENTRY_STOP',
(select value from v$mystat where a.statistic#=statistic#) value
from v$statname a where name = ('redo entries');

---------- ----------

eric@10g:SQL> select sequence#, status from v$log;

---------- ----------------

eric@10g:SQL> alter system switch logfile;

System altered.

eric@10g:SQL> select sequence#, archived from v$log where sequence#=273;

---------- ---
273 YES

eric@10g:SQL> connect / as sysdba
Connected.

sys@10g:SQL> exec dbms_logmnr.add_logfile(LogFileName => -
> '/tmp/eric_273.arc', Options => dbms_logmnr.ADDFILE);

PL/SQL procedure successfully completed.

sys@10g:SQL> exec dbms_logmnr.start_logmnr;

PL/SQL procedure successfully completed.

sys@10g:SQL> select scn,data_obj#,substr(operation,1,20) OP,
rbablk,rbabyte from v$logmnr_contents;

--------------- ---------- -------------------- ---------- ----------
10237995 0 START 2 16
10237995 64105 INSERT 2 16
10237995 64105 DELETE 3 80
10237996 0 ROLLBACK 3 248

sys@10g:SQL> alter system dump logfile '/tmp/eric_273.arc';

System altered.

sys@10g:SQL> exit

Okay, let’s review what we see. The v$mystat dynamic performance view claims that 808 bytes of redo were generated by this failed transaction. Moreover, v$mystat claims our user session created 3 redo entries. Let’s reconcile this information with the LogMiner output. The only change vectors in the mined file are those associated with my transaction, as I was the only user on the database; I switched into the mined log before starting my transaction and switched out immediately after.

The test platform’s OS block size is 512 bytes; therefore, each redo block (RBABLK) comprises 512 bytes. The RBABYTE (OFFSET) value is the offset into the RBABLK at which a particular redo record begins. The first 16 bytes of each RBABLK are reserved for header information, which limits the space actually available for redo records to 496 bytes per RBABLK.

If we walk through each entry, we should be able to account for the 808 bytes of redo v$mystat claims my session created. The START and INSERT lines describe the very same redo record; LogMiner simply gives us the START line for our benefit, which comes in handy when reviewing a log file with many transactions. This means we should not double count the redo (indeed, that would be hard to do given the method of calculation), but only consider the INSERT redo record as our “redo start” position. LogMiner tells us that the INSERT redo record begins at RBABLK=2 OFFSET=16 and spans through RBABLK=3 OFFSET=79, since the next record begins at OFFSET=80. To calculate the redo owed to the INSERT redo record, we add the redo it consumes in RBABLK=2 to that in RBABLK=3: 496 bytes for RBABLK=2 and (80-16=64) bytes for RBABLK=3. Therefore, the total redo attributed to the INSERT redo record is 496 + 64 = 560 bytes.
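To make the arithmetic explicit, here is a small Python sketch (illustrative only; the 512-byte block size and 16-byte block header are as described above) that computes a redo record’s size from its starting RBA position and the RBA position of the record that follows it:

```python
REDO_BLOCK = 512    # OS/redo block size on this platform
BLOCK_HEADER = 16   # bytes reserved at the start of every redo block

def redo_record_bytes(start_blk, start_off, next_blk, next_off):
    """Redo consumed between two RBA positions, skipping the
    16-byte header of each block boundary crossed."""
    return (next_blk - start_blk) * (REDO_BLOCK - BLOCK_HEADER) + (next_off - start_off)

# INSERT record: begins at RBABLK=2 OFFSET=16;
# the next record (the DELETE) begins at RBABLK=3 OFFSET=80.
print(redo_record_bytes(2, 16, 3, 80))  # 560
```

The same function reproduces the DELETE record’s size from its boundaries: `redo_record_bytes(3, 80, 3, 248)` gives 168.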

Why do we see a DELETE operation? Remember, the INSERT violated a unique constraint and had to be rolled back; the DELETE operation is the undo portion of this transaction. It is easy to see that the DELETE record accounts for only (248-80=168) bytes, so our INSERT and DELETE records together comprise 560 + 168 = 728 bytes. We can then deduce that the ROLLBACK entry comprises the remaining 808 - 728 = 80 bytes. However, let’s take a slightly different approach so we can be 100% certain our ROLLBACK marker is indeed 80 bytes.
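The reconciliation can be written out as a quick check, using only the figures above (the 808-byte total from v$mystat and the record boundaries from LogMiner):

```python
total_redo = 808                  # redo size per v$mystat
insert_bytes = 496 + (80 - 16)    # RBABLK=2 remainder plus RBABLK=3 portion
delete_bytes = 248 - 80           # DELETE record lies wholly within RBABLK=3
rollback_bytes = total_redo - insert_bytes - delete_bytes
print(insert_bytes, delete_bytes, rollback_bytes)  # 560 168 80
```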

I used a Solaris egrep command to scrape the log file dump and extract the redo record sizes.

/opt/oracle/admin/10g/udump # egrep 'REDO RECORD|CHANGE' 10g_ora_10436.trc
REDO RECORD - Thread:1 RBA: 0x000111.00000002.0010 LEN: 0x0230 VLD: 0x05
CHANGE #1 TYP:0 CLS:17 AFN:2 DBA:0x00800009 OBJ:4294967295 SCN:0x0000.009c3824 SEQ: 1 OP:5.2
CHANGE #2 TYP:0 CLS:18 AFN:2 DBA:0x00800a7f OBJ:4294967295 SCN:0x0000.009c3823 SEQ: 1 OP:5.1
CHANGE #3 TYP:0 CLS: 1 AFN:4 DBA:0x01000056 OBJ:64105 SCN:0x0000.009c3805 SEQ: 2 OP:11.2
CHANGE #4 MEDIA RECOVERY MARKER SCN:0x0000.00000000 SEQ: 0 OP:5.19

REDO RECORD - Thread:1 RBA: 0x000111.00000003.0050 LEN: 0x00a8 VLD: 0x01
CHANGE #1 TYP:0 CLS: 1 AFN:4 DBA:0x01000056 OBJ:64105 SCN:0x0000.009c382b SEQ: 1 OP:11.3
CHANGE #2 TYP:0 CLS:17 AFN:2 DBA:0x00800009 OBJ:4294967295 SCN:0x0000.009c382b SEQ: 1 OP:5.11
REDO RECORD - Thread:1 RBA: 0x000111.00000003.00f8 LEN: 0x0050 VLD: 0x01
CHANGE #1 TYP:0 CLS:17 AFN:2 DBA:0x00800009 OBJ:4294967295 SCN:0x0000.009c382b SEQ: 2 OP:5.4

Each redo record has a LEN attribute denoting the record’s length in hexadecimal. The redo byte address (RBA) is also given; it can be cross-referenced with the RBABLK and RBABYTE values from the LogMiner output to ensure we are looking at the correct data. The RBA contains the log sequence#, the redo block number, and the redo byte offset. Note that the first redo record is positioned at redo block 0x2=2 and redo byte 0x0010=16, which corresponds to the first redo record from the LogMiner output (RBABLK=2, RBABYTE=16). The LEN attribute for this record is 0x0230=560, matching the 560 bytes calculated from the LogMiner output for the INSERT redo record. The next redo record is the DELETE operation at 0x00a8=168 bytes. The last redo record, at redo block 0x3=3 and redo byte 0x00f8=248, is the ROLLBACK entry; its LEN attribute is 0x0050=80. The ROLLBACK redo record is thus confirmed to be the 80 bytes we deduced from LogMiner.
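If you do this often, the trace-file lines lend themselves to a little scripting. A sketch (the regular expression below is an assumption matched only against the RBA/LEN layout shown in this particular dump) that decodes the hex fields of each REDO RECORD line:

```python
import re

# The three REDO RECORD lines from the dump above.
dump_lines = [
    "REDO RECORD - Thread:1 RBA: 0x000111.00000002.0010 LEN: 0x0230 VLD: 0x05",
    "REDO RECORD - Thread:1 RBA: 0x000111.00000003.0050 LEN: 0x00a8 VLD: 0x01",
    "REDO RECORD - Thread:1 RBA: 0x000111.00000003.00f8 LEN: 0x0050 VLD: 0x01",
]

rba_len = re.compile(r"RBA: 0x(\w+)\.(\w+)\.(\w+) LEN: 0x(\w+)")
parsed = []
for line in dump_lines:
    seq, blk, off, length = (int(g, 16) for g in rba_len.search(line).groups())
    parsed.append((seq, blk, off, length))
    print(f"seq={seq} blk={blk} off={off} len={length}")
# First line decodes to seq=273 blk=2 off=16 len=560,
# agreeing with the LogMiner RBABLK/RBABYTE values and our 560-byte calculation.
```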

One final comment on this topic: notice there are 3 redo records constituting this transaction’s redo footprint. I displayed the redo entries statistic before and after the transaction to show that a redo record is synonymous with a redo entry as catalogued in the v$mystat/v$sesstat/v$sysstat dynamic performance views. We hear a lot about redo records and change vectors when Oracle redo is mentioned in the documentation. A redo record comprises one or more change vectors; an SCN is associated with the redo record, not with the individual change vectors directly. Additionally, many redo records can share the same SCN.
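To summarize the structure in miniature, here is an illustrative model (nothing more) built from the three records of this transaction, with the change-vector OP codes taken from the dump above. Each record carries one SCN and one or more change vectors, the record count lines up with the redo entries statistic, and two records share an SCN:

```python
# The three redo records from this transaction.
records = [
    {"scn": 10237995, "len": 560, "vectors": ["5.2", "5.1", "11.2", "5.19"]},  # START/INSERT
    {"scn": 10237995, "len": 168, "vectors": ["11.3", "5.11"]},                # DELETE (undo)
    {"scn": 10237996, "len": 80,  "vectors": ["5.4"]},                         # ROLLBACK
]

assert len(records) == 3                                # matches 'redo entries'
assert sum(r["len"] for r in records) == 808            # matches 'redo size'
assert len({r["scn"] for r in records}) < len(records)  # SCNs can repeat across records
print("reconciled")
```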

I apologize for this being such a cryptic read. I promise I perpetually strive to eschew obfuscation ;-)

