I answered a question on the OTN forum this morning that I figured is pertinent enough to post on my blog…it’s a very common question among DBAs and developers, and I think there are plenty of myths surrounding it as well.
Question: I am updating 1 million rows on the Oracle 10g platform. Normally, when I do this in Oracle 9i, I run it as a batch process and commit after each batch, obviously to avoid/control undo generation. But in Oracle 10g I am told undo management is automatic and I do not need to run the update as a batch process.
Is this right? Please throw some light on this new feature: automatic undo management.
Answer: Automatic undo management was available in 9i as well, and my guess is you were probably using it there. However, I’ll assume for the sake of this writing that you were using manual undo management in 9i and are now on automatic.
Automatic undo management depends upon UNDO_RETENTION, a parameter that defines how long Oracle should try to keep committed transactions' undo available. However, this parameter is only a suggestion. You must also have an UNDO tablespace that's large enough to handle the amount of UNDO you will be generating/holding, or queries will fail with "ORA-01555: snapshot too old: rollback segment ... too small" errors.
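As a quick sketch (the retention value here is illustrative, not a recommendation), you can check and adjust the retention target like this:

```sql
-- Check the current retention target (in seconds):
SHOW PARAMETER undo_retention

-- Ask Oracle to try to keep roughly one hour of committed undo.
-- Remember: this is only honored if the undo tablespace has room.
ALTER SYSTEM SET undo_retention = 3600;

-- Confirm which tablespace is currently used for undo:
SHOW PARAMETER undo_tablespace
```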
You can use the UNDO advisor to find out how large this tablespace should be given a desired UNDO retention, or look online for some scripts…just google for: oracle undo size
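That search turns up scripts along these lines; here is one common sizing sketch (treat it as an estimate, not gospel) that multiplies the peak undo generation rate from V$UNDOSTAT by the desired retention:

```sql
-- Rough undo tablespace size needed
--   = retention (seconds) * peak undo blocks per second * block size
SELECT (ur.retention_secs * us.peak_blks_per_sec * bs.block_bytes) / 1024 / 1024
         AS undo_mb_needed
  FROM (SELECT value AS retention_secs
          FROM v$parameter WHERE name = 'undo_retention') ur,
       (SELECT MAX(undoblks / ((end_time - begin_time) * 86400))
                 AS peak_blks_per_sec
          FROM v$undostat) us,
       (SELECT value AS block_bytes
          FROM v$parameter WHERE name = 'db_block_size') bs;
```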
Oracle 10g also gives you the ability to guarantee undo retention. Instead of allowing long-running SELECT statements to fail with ORA-01555, it guarantees your UNDO retention for consistent reads and instead errors the DML that would otherwise overwrite unexpired UNDO.
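A minimal sketch of turning the guarantee on (UNDOTBS1 is an assumed tablespace name; check what yours is called first):

```sql
-- See whether the undo tablespace currently guarantees retention:
SELECT tablespace_name, retention
  FROM dba_tablespaces
 WHERE contents = 'UNDO';

-- Guarantee retention: long queries are protected, but DML that
-- needs undo space may now fail instead.
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- Revert to the default behavior:
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
```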
Now, for your original question…yes, it's easier for the DBA to minimize UNDO issues when using automatic undo management. If you set UNDO_RETENTION high enough, with a properly sized undo tablespace, you shouldn't have many issues with UNDO. How often you commit should have nothing to do with it, as long as your DBA has properly set UNDO_RETENTION and has an optimally sized UNDO tablespace. Committing more often will only result in your script taking longer, more LGWR/DBWR work, and the "where was I" problem if there is an error (if it errors, where did it stop?).
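To make the contrast concrete, here is the batched-commit pattern from the question next to the single-transaction alternative (table, column, and values are made up purely for illustration):

```sql
-- The batched-commit pattern: slower, more work for LGWR/DBWR,
-- and each intermediate commit is a chance to lose your place
-- if the script dies partway through.
begin
  loop
    update big_table
       set status = 'DONE'
     where status = 'PENDING'
       and rownum <= 10000;
    exit when sql%rowcount = 0;
    commit;
  end loop;
  commit;
end;
/

-- The single-transaction alternative: one statement, one commit.
-- If it fails, the whole thing rolls back and you simply re-run it.
update big_table
   set status = 'DONE'
 where status = 'PENDING';
commit;
```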
Lastly (and true even for manual undo management), committing more frequently makes ORA-01555 errors more likely, not less. Because your work will be scattered among more undo segments, you increase the chance that one of them is overwritten when space is needed, causing an ORA-01555 for any query that still requires that undo for read consistency.
It all boils down to the size of the undo tablespace and the undo retention, in the end…just as manual management boiled down to the size, amount, and usage of rollback segments. Committing frequently is a peroxide band-aid: it covers up the problem, tries to clean it, but in the end it just hurts and causes problems for otherwise healthy processes.
I should probably add here that commits should occur at the end of a logical transaction. This means you should not do them based on a modulus-driven commit or other method, but by the business rules.
I would have a different wish for UNDO behavior. Just wondering if you have any opinion…
I am interested in migrating a large 9i database to a different platform, and I am forced to use the full export/import technique. I have some tables with a few hundred million records and a lot of indexes on those tables. Besides other speed-tuning techniques for import (e.g., disabling archive logging, and even the use of _disable_logging=TRUE), I would like to change the behavior of the Oracle engine to completely avoid UNDO generation. I mean, for the duration of the import only, I would like to have just the data coming into place, bypassing any unnecessary work like generating UNDO. I am also interested in finding a way of forcing the Oracle engine to use a particular amount of PGA and a particular SORT_AREA size in order to avoid sorting taking place in TEMP.
…
So, just adding something to this (or to another) topic …
Please remember that from 10g Release 2 (10.2.x) and above, the UNDO_RETENTION parameter has become obsolete. You do not need to set it anymore, since Oracle doesn't use it.
The size of the actual UNDO tablespace will limit the amount of undo records held, so size the UNDO tablespace properly to hold enough undo data.
Great site and cool examples!
Pasi Parkkonen
I have not understood this: if UNDO_RETENTION is higher, as you suggest, then obviously it will hold the undo that much longer. I think that only if UNDO_RETENTION is lower will it work fine to insert millions of records.
In regard to Pasi's post, that is incorrect. UNDO_RETENTION is alive and well in the versions of Oracle used on planet Earth. Old post, but wrong is wrong.
I have a big table, 3 TB (3 billion rows). I am rebuilding the table using a PL/SQL script:
3 PL/SQL jobs, reading the table and inserting into a new table. I am alone in the database. After 16 hours I got ORA-01555. The commit is after every 200,000 rows (FORALL). Is there a way to avoid ORA-01555?
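In the spirit of the post above, one hedged sketch for a rebuild like this is a single direct-path insert rather than FORALL batches with a commit every 200,000 rows (table names are illustrative; APPEND also requires that nothing else touches the target table in the same transaction):

```sql
ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path insert: writes above the high-water mark and generates
-- minimal undo for the table data itself. Index maintenance still
-- generates undo, so consider building the indexes after the load.
INSERT /*+ APPEND PARALLEL(t, 8) */ INTO new_table t
SELECT * FROM old_table;

COMMIT;
```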
hi,
We are getting an "ORA-01555: snapshot too old: rollback segment number" error when we try to fetch data from another database and insert it into ours. We are running this as a batch, and moreover we are not using any cursors in this.
example:
delete our_db_table;
begin
  insert into our_db_table (col1, col2, col3)
    select col1, col2, col3 from other_db_table;
  update db_erro_table;
  commit;
end;
We think that instead of hitting the other database every time, using a cursor or buffer will reduce the execution time and get rid of the above error.
Kindly suggest.