OGG-01973: The redo record indicates data loss on object

Problem Description

In one of our GoldenGate setups, the EXTRACT process abended with the OGG-01973 error (The redo record indicates data loss on object).

2015-08-08 00:35:37  ERROR   OGG-01973  Oracle GoldenGate Capture for Oracle, EX_MYPROD.prm:  The redo record indicates data loss on object 13,315.
2015-08-08 00:35:38  INFO    OGG-01055  Oracle GoldenGate Capture for Oracle, EX_MYPROD.prm:  Recovery initialization completed for target file /oraggsdata/dirdat/MYPROD/EX_MYPROD/et000042, at RBA 1161.
2015-08-08 00:35:38  INFO    OGG-01478  Oracle GoldenGate Capture for Oracle, EX_MYPROD.prm:  Output file /oraggsdata/dirdat/MYPROD/EX_MYPROD/et is using format RELEASE 11.2.
2015-08-08 00:35:38  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, EX_MYPROD.prm:  Rolling over remote file /oraggsdata/dirdat/MYPROD/EX_MYPROD/et000042.
2015-08-08 00:35:38  INFO    OGG-01053  Oracle GoldenGate Capture for Oracle, EX_MYPROD.prm:  Recovery completed for target file /oraggsdata/dirdat/MYPROD/EX_MYPROD/et000043, at RBA 1161.
2015-08-08 00:35:38  INFO    OGG-01057  Oracle GoldenGate Capture for Oracle, EX_MYPROD.prm:  Recovery completed for all targets.
2015-08-08 00:35:38  INFO    OGG-00991  Oracle GoldenGate Capture for Oracle, EX_MYPROD.prm:  EXTRACT EX_MYPROD stopped normally.
2015-08-08 00:36:34  INFO    OGG-00948  Oracle GoldenGate Manager for Oracle, mgr.prm:  Lag for REPLICAT RT_MYPROD is 00:00:00 (checkpoint updated 00:00:07 ago).
2015-08-08 00:36:34  INFO    OGG-00948  Oracle GoldenGate Manager for Oracle, mgr.prm:  Lag for EXTRACT DP_MYPROD is 00:00:00 (checkpoint updated 00:00:05 ago).
2015-08-08 00:36:34  WARNING OGG-00946  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT EX_MYPROD abended.
2015-08-08 00:36:34  WARNING OGG-00947  Oracle GoldenGate Manager for Oracle, mgr.prm:  Lag for EXTRACT EX_MYPROD is 14:49:00 (checkpoint updated 00:00:56 ago).
2015-08-08 00:47:35  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT EX_MYPROD starting.
2015-08-08 00:47:35  INFO    OGG-00965  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT EX_MYPROD restarted automatically.


GGSCI (ggserver1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     DP_MYPROD     00:17:36      00:00:05
EXTRACT     ABENDED     EX_MYPROD     14:49:00      00:00:56
REPLICAT    RUNNING     RT_MYPROD     00:00:00      00:00:01

Finding the root cause

As per the Oracle documentation, OGG-01973 has the following description:

OGG-01973: The redo record indicates data loss on object {0}.
Cause: Logging of the specified object is not enabled.
Action: Enable logging on the object or use the TRANLOGOPTIONS parameter with the ALLOWDATALOSS option.

Based on this information, I need to find out whether the table/object has LOGGING enabled. But before that, we need to know the name of the table/object whose LOGGING settings should be checked. (I don't want to use the ALLOWDATALOSS option, as that would lead to data loss in my replication setup, which is not desired.)

The name of the table/object can be easily determined from the following OGG error message.

2015-08-08 00:35:37  ERROR   OGG-01973  Oracle GoldenGate Capture for Oracle, EX_MYPROD.prm:  The redo record indicates data loss on object 13,315.

As per the error message, the object with ID 13315 is the cause of the failure. Let's find out which object this is.

SQL> select owner,object_name,object_type from dba_objects where object_id=13315;

OWNER                          OBJECT_NAME               OBJECT_TYPE
------------------------------ ------------------------- -------------------
MYAPP                          WEB_DOC_CONTENTS          TABLE

We have found the table that is causing the GoldenGate EXTRACT to abend. Now, let's check whether LOGGING is enabled for this table (the error description suggests it is not, and that this is causing the issue).

SQL> select owner,table_name,logging from dba_tables where table_name=(select object_name from dba_objects where object_id=13315);

OWNER                          TABLE_NAME                     LOG
------------------------------ ------------------------------ ---
MYAPP                          WEB_DOC_CONTENTS               YES

This table has LOGGING enabled, so why did EXTRACT abend with an indication that LOGGING is not enabled for this table?

Well, there are other possibilities: several kinds of database objects can be defined in NOLOGGING mode. You can refer here for the list of database objects that can be defined in NOLOGGING mode.
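
For example, one quick way to see the LOGGING attribute of every segment type belonging to this table (the table itself, its indexes and its LOB segments) is to query the corresponding dictionary views in one go. This is only a sketch, reusing the owner and table name from this case:

-- Sketch: LOGGING attribute of the table, its indexes and its LOB segments
-- (owner/table name taken from this case; adjust as needed)
select 'TABLE' as segment_type, table_name as object_name, logging
  from dba_tables
 where owner = 'MYAPP' and table_name = 'WEB_DOC_CONTENTS'
union all
select 'INDEX', index_name, logging
  from dba_indexes
 where owner = 'MYAPP' and table_name = 'WEB_DOC_CONTENTS'
union all
select 'LOB', column_name, logging
  from dba_lobs
 where owner = 'MYAPP' and table_name = 'WEB_DOC_CONTENTS';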

This table has a LOB column, and we have the option to enable or disable LOGGING for a LOB segment. Let's check whether LOGGING is enabled for the LOB segments of the table in question.

SQL> select owner,table_name,column_name,TABLESPACE_NAME,LOGGING from dba_lobs where OWNER='MYAPP' and TABLE_NAME='WEB_DOC_CONTENTS';

OWNER      TABLE_NAME           COLUMN_NAME     TABLESPACE_NAME LOGGING
---------- -------------------- --------------- --------------- -------
MYAPP      WEB_DOC_CONTENTS     DOCUMENT        MYPROD_DATA     NO

As we can see, the LOB column is defined in NOLOGGING mode and this is causing GoldenGate EXTRACT to abend.
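
Since a single NOLOGGING LOB segment was enough to abend the EXTRACT, it may also be worth scanning the whole application schema for other LOB segments in NOLOGGING mode before they cause the same failure. A minimal sketch, assuming MYAPP is the only schema being captured:

-- Sketch: find all LOB segments in the MYAPP schema that are in NOLOGGING mode
select owner, table_name, column_name, logging
  from dba_lobs
 where owner = 'MYAPP'
   and logging = 'NO';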

Fixing the Problem

We have now identified that it is not the table itself that is causing EXTRACT to abend, but rather the underlying LOB segment within the table, which was defined in NOLOGGING mode. Let's change the LOGGING setting for the LOB column.

SQL> alter table MYAPP.WEB_DOC_CONTENTS modify LOB(DOCUMENT) (NOCACHE LOGGING);

Table altered.

Let's validate that LOGGING is now enabled for the LOB column.

SQL> select owner,table_name,column_name,TABLESPACE_NAME,LOGGING from dba_lobs where OWNER='MYAPP' and TABLE_NAME='WEB_DOC_CONTENTS';

OWNER      TABLE_NAME           COLUMN_NAME     TABLESPACE_NAME LOGGING
---------- -------------------- --------------- --------------- -------
MYAPP      WEB_DOC_CONTENTS     DOCUMENT        MYPROD_DATA     YES

Now let's start the EXTRACT process.

GGSCI (ggserver1) 2> start er *

Sending START request to MANAGER ...
EXTRACT DP_MYPROD starting

Sending START request to MANAGER ...
EXTRACT EX_MYPROD starting

Sending START request to MANAGER ...
REPLICAT RT_MYPROD starting


GGSCI (ggserver1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     DP_MYPROD     00:17:36      00:00:07
EXTRACT     RUNNING     EX_MYPROD     00:00:00      00:00:00
REPLICAT    RUNNING     RT_MYPROD     00:00:00      00:00:01

GGSCI (ggserver1) 2> !
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     DP_MYPROD     00:00:00      00:00:06
EXTRACT     RUNNING     EX_MYPROD     00:00:00      00:00:03
REPLICAT    RUNNING     RT_MYPROD     00:00:00      00:00:02

EXTRACT is now up and running fine without any lag. Problem resolved!
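
As a preventive measure, the database (or the relevant tablespaces) can also be put into FORCE LOGGING mode, so that future NOLOGGING DDL on individual objects cannot create similar gaps in the redo stream. This is only a sketch; it assumes the additional redo volume is acceptable in your environment:

-- Check the current setting, then enable force logging if it is not already on
select force_logging from v$database;
alter database force logging;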

Conclusion

Sometimes errors can be misleading. Occasionally we need to dig a little deeper to find the root cause, and this was one of those occasions.

