SQL*Loader and Redo...
Conventional path SQL*Loader is a very nice and reliable tool. But we must be a bit careful if the data we load has a high incidence of rows that fail unique constraints. In my recent white paper The Oracle Redo Generation I assert that SQL*Loader can incur serious redo overhead if unique constraint violations abound. More specifically, I make the claim that Oracle will actually reprocess an entire batch of rows (ROWS=n) if even a single row in the batch fails an integrity constraint.
Let's first establish the facts before we try to understand Oracle's rationale. With the aid of two very nice devices, LogMiner and log file dumps, we can easily demonstrate this reprocessing behavior. Certainly, a test case is warranted.
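Before getting to the test case itself, here is roughly how those two devices can be driven. This is only a sketch, not the exact session from the paper; the redo log path below is an assumption (check V$LOG and V$LOGFILE for the log that actually covered your load), and on releases older than 9i you would point LogMiner at a dictionary file built with DBMS_LOGMNR_D.BUILD instead of the online catalog.

-- register the log that covered the load and start a LogMiner session
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u01/oradata/db1/redo02.log', options => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- ... query V$LOGMNR_CONTENTS (as below), then tear the session down
EXECUTE DBMS_LOGMNR.END_LOGMNR;

-- the raw dump of the same log lands in a trace file in user_dump_dest
ALTER SYSTEM DUMP LOGFILE '/u01/oradata/db1/redo02.log';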
I created a simple test table with a primary key constraint enforced by a non-unique index. The non-unique index makes Oracle's optimistic index maintenance easier to see. I staged my 10 rows to be loaded such that the 8th row would fail the primary key constraint. The batch size for the load was ROWS=5. I switched to a new redo log just before loading the data and switched again immediately after the load, so all of the load's redo was isolated in a single log. This made my life easier when interpreting the LogMiner data (and exponentially easier when swimming through the raw log file dump).
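The paper does not reproduce the DDL or the loader files, so the reconstruction below is only a sketch of such a fixture; the table name t, column id, file names, credentials and key values are all my assumptions. The essential ingredients are a primary key enforced by a pre-created non-unique index, ten staged keys of which the 8th repeats an earlier one, and ROWS=5.

-- table with a primary key enforced by a deliberately non-unique index (9i+ syntax)
CREATE TABLE t (id NUMBER);
CREATE INDEX t_i ON t (id);
ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY (id) USING INDEX t_i;

-- rows.dat holds ten keys, one per line, with the 8th repeating key 3:
--   1 2 3 4 5 6 7 3 9 10

-- t.ctl
LOAD DATA
INFILE 'rows.dat'
INSERT
INTO TABLE t
FIELDS TERMINATED BY ','
(id)

-- isolate the load's redo in a single log and run the load with a 5-row bind array
ALTER SYSTEM SWITCH LOGFILE;
HOST sqlldr userid=scott/tiger control=t.ctl log=t.log rows=5
ALTER SYSTEM SWITCH LOGFILE;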
Here is what LogMiner revealed:
SQL> select scn, rel_file#, data_blk#, data_obj#, substr(operation,1,15) OPER, rbablk, rbabyte from v$logmnr_contents;
SCN REL_FILE# DATA_BLK# DATA_OBJ# OPER RBABLK RBABYTE
--------------- ---------- ---------- ---------- --------------- ---------- ----------
3016007 0 0 0 START 2 16
3016007 3 394 6459 INSERT 2 16
3016007 3 394 6459 INSERT 2 16
3016007 3 394 6459 INSERT 2 16
3016007 3 394 6459 INSERT 2 16
3016007 3 394 6459 INSERT 2 16
3016007 0 0 6460 INTERNAL 3 92
3016007 0 0 6460 INTERNAL 3 332
3016007 0 0 6460 INTERNAL 4 52
3016007 0 0 6460 INTERNAL 4 268
3016007 0 0 6460 INTERNAL 4 484
3016008 0 0 0 COMMIT 5 204
3016009 0 0 0 START 6 16
3016009 3 394 6459 INSERT 6 16
3016009 3 394 6459 INSERT 6 16
3016009 3 394 6459 INSERT 6 16
3016009 3 394 6459 INSERT 6 16
3016009 3 394 6459 INSERT 6 16
3016009 0 0 6460 INTERNAL 7 60
3016009 0 0 6460 INTERNAL 7 300
3016009 0 0 6460 INTERNAL 8 20
3016009 0 0 6460 INTERNAL 8 220
3016009 0 0 6460 INTERNAL 8 412
3016009 0 0 6460 INTERNAL 9 116
3016009 0 0 6460 INTERNAL 9 332
3016009 0 0 6460 INTERNAL 9 464
3016009 0 0 6460 INTERNAL 10 100
3016009 0 0 6460 INTERNAL 10 232
3016009 0 0 6460 INTERNAL 10 364
3016009 0 0 6460 INTERNAL 10 496
3016009 3 394 6459 DELETE 11 148
3016009 3 394 6459 DELETE 11 148
3016009 3 394 6459 DELETE 11 148
3016009 3 394 6459 DELETE 11 148
3016009 3 394 6459 DELETE 11 148
3016009 3 394 6459 INSERT 11 320
3016009 0 0 6460 INTERNAL 12 148
3016009 3 394 6459 INSERT 12 388
3016009 0 0 6460 INTERNAL 13 132
3016009 3 394 6459 INSERT 13 348
3016009 3 394 6459 INSERT 13 348
3016009 0 0 6460 INTERNAL 14 92
3016009 0 0 6460 INTERNAL 14 308
3016010 0 0 0 COMMIT 15 28
What is all of this LogMiner stuff telling us? With the help of a log file dump it was apparent that the following occurred. Oracle bulk-inserted the first batch of 5 rows and succeeded; index maintenance for those five rows followed, as denoted by the INTERNAL operations. Oracle then attempted to bulk-insert the second batch. It was optimistic regarding the table data changes and bulk-inserted all 5 rows of the second batch. The index leaf rows were written for the first 3 entries and, upon failing the constraint, were subsequently purged; this accounts for the twelve INTERNAL operations. A bulk-delete against the table followed. Oracle then re-inserted the first two rows of the second batch one by one, maintaining the index for each table change vector. It discarded the 3rd row of the second batch, bulk-inserted the remaining 2 rows, and performed the index maintenance for those two rows.

If you are keeping score, the redo associated with this small 10 row load constituted the following: 12 rows inserted in separate bulk operations and the index maintenance therein, 2 rows inserted in separate change vectors and their index maintenance, and 5 rows deleted in a single bulk operation along with, yes, the index maintenance associated with those deletes.
To load 10 rows, SQL*Loader actually performed 14 inserts, 5 deletes and plenty of index maintenance. Had the batch size been much larger, a single unique constraint violation could have caused many more rows to be reprocessed one by one.
Based on the LogMiner data it appears Oracle, when reprocessing a failed batch, will employ singleton inserts until the known failed row is encountered. After skipping the failed row Oracle then attempts to bulk-insert the remaining rows in the batch.
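To make that inferred sequence concrete, here is a small PL/SQL paraphrase of the pattern. To be clear, this is not what SQL*Loader executes internally; it is just an illustration that replays the second ROWS=5 batch against the assumed table t from the earlier sketch, where the third row of the batch duplicates a key loaded in the first batch.

-- Illustration only: a PL/SQL paraphrase of the reprocessing pattern the redo suggests,
-- NOT SQL*Loader's internal code. Uses the assumed table t and keys from the sketch above.
DECLARE
  TYPE key_tab IS TABLE OF NUMBER;
  l_batch key_tab := key_tab(6, 7, 3, 9, 10);   -- second batch; key 3 was loaded in the first batch
  l_bad   PLS_INTEGER;
BEGIN
  SAVEPOINT bulk_try;
  BEGIN
    FORALL i IN 1 .. l_batch.COUNT              -- optimistic bulk insert of the whole batch
      INSERT INTO t (id) VALUES (l_batch(i));
  EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
      ROLLBACK TO SAVEPOINT bulk_try;           -- the bulk DELETE seen in the redo
      FOR i IN 1 .. l_batch.COUNT LOOP          -- singleton inserts ...
        BEGIN
          INSERT INTO t (id) VALUES (l_batch(i));
        EXCEPTION
          WHEN DUP_VAL_ON_INDEX THEN
            l_bad := i;                         -- ... until the offending row is found and skipped
            EXIT;
        END;
      END LOOP;
      FORALL i IN l_bad + 1 .. l_batch.COUNT    -- then bulk-insert the remainder of the batch
        INSERT INTO t (id) VALUES (l_batch(i));
  END;
  COMMIT;
END;
/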
OK, so it is nice to know that Oracle behaves this way. But why does it? Why the reprocessing of rows? From a redo perspective, bulk-inserts are cheaper than singleton inserts for the same number of rows. When Oracle constructs a redo record for a bulk activity there is less overhead per unit of work; more precisely, there are fewer change vectors. Look at the RBABLK and RBABYTE columns above in the LogMiner query output. Notice the first five-row bulk-insert is associated with a single redo record, i.e., all five INSERT rows share the same RBABLK and RBABYTE values. In fact, the first five-row bulk-insert that succeeded, together with the relevant index maintenance, generated 1452 bytes of redo. The two singleton inserts (rows 9 and 10) and their index maintenance generated 1020 bytes of redo, or 510 bytes per row. Extrapolating to five singleton inserts for this test case (510*5=2550 bytes), the singleton path is (2550/1452=1.76) about 76% more expensive.
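As an aside, if all you want is a total redo figure for a load rather than a record-by-record breakdown, you do not need LogMiner; the session-level 'redo size' statistic will do. This is a generic technique rather than something from the paper, and the bind variable for the loader session's SID is an assumption (look the SID up in v$session while the load runs); query it while the loader is still connected, since the statistic is cumulative for the life of the session.

select st.value redo_bytes
  from v$sesstat st, v$statname sn
 where sn.statistic# = st.statistic#
   and sn.name = 'redo size'
   and st.sid = :loader_sid;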
Given the relative redo savings of bulk-inserts (and the resultant performance savings) and the all-or-nothing quality of the single redo record for the bulk activity, Oracle is betting that your data will not violate uniqueness constraints. When violations are encountered, Oracle reprocesses the rows one by one until it knows that another bulk-insert can be attempted against the batch. The upside of bulk-inserts can quickly diminish if batches frequently contain rows that will violate a uniqueness constraint.
If your data predisposes conventional path SQL*Loader to constraint violations, your redo can increase dramatically. For more on the redo associated with constraint violations, you can get all of the lurid details in The Oracle Redo Generation :) It just so happens that the Oracle Import utility also uses a reprocessing technique.