Introduction
This article outlines the steps required to relocate the Grid Infrastructure Management Repository (GIMR) database MGMTDB to a different file system storage. By default, OUI (Oracle Universal Installer) creates the MGMTDB on the same file system where the OCR and voting files reside.
This default storage location for MGMTDB may cause problems and impact the availability of the Clusterware (GI) when there is a space limitation or when we want to increase the repository size/retention. It is recommended to relocate the repository database MGMTDB to its own dedicated storage once the Clusterware binaries are installed. This gives us more flexibility in maintaining and managing the GIMR database.
Oracle has published MOS note 1589394.1, which outlines the steps required to move the MGMTDB database to a different storage location. However, as part of my testing I have found that the document is not complete and that a few additional steps are needed in order to have a functional MGMTDB database on a different (non-default) storage location.
The relocation of the MGMTDB database involves dropping the database and recreating it on the new file system. The relocation process does not back up the existing data stored in the MGMTDB database; hence, if required, we can use the following command from any cluster node to back up the data in text format.
---// command to back up diagnostic data //---
$GRID_HOME/bin/oclumon dumpnodeview [[-allnodes] | [-n node1 node2 noden] [-last "duration"] | [-s "time_stamp" -e "time_stamp"] [-i interval] [-v]] [-h]
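For instance, here is a minimal sketch of how the last one hour of data from all nodes could be dumped into a text file before dropping the database; the duration and the output file path are just examples and should be adjusted to your needs.

---// example: back up last 1 hour of CHM data to a text file //---
# duration format is "HH24:MM:SS"; /tmp/chm_data_backup.txt is an arbitrary example path
$GRID_HOME/bin/oclumon dumpnodeview -allnodes -last "01:00:00" > /tmp/chm_data_backup.txt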
Stop and Disable CHM Clusterware (GI) resource ora.crf
Cluster Health Monitor (CHM) is the Clusterware component responsible for collecting and storing diagnostic data in the repository database MGMTDB. Therefore, before we can start relocating the MGMTDB database, we need to stop the CHM resource ora.crf on each cluster node and disable it to prevent it from starting automatically during the relocation process.
Execute the following commands to stop and disable ora.crf on each cluster node
---// commands to stop ora.crf resource //---
$GRID_HOME/bin/crsctl stop res ora.crf -init

---// disable auto start of ora.crf resource //---
$GRID_HOME/bin/crsctl modify res ora.crf -attr ENABLED=0 -init
Example:
myracserver1 {/home/oracle}: $GRID_HOME/bin/crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'myracserver1'
CRS-2677: Stop of 'ora.crf' on 'myracserver1' succeeded

myracserver2 {/home/oracle}: $GRID_HOME/bin/crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'myracserver2'
CRS-2677: Stop of 'ora.crf' on 'myracserver2' succeeded
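To confirm the resource is actually down on each node, we can query its state; something along these lines should report STATE as OFFLINE once the stop has completed.

---// verifying ora.crf is stopped //---
$GRID_HOME/bin/crsctl status res ora.crf -init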
Ensure that MGMTDB is up and running and locate the hosting node
We need to delete the MGMTDB by running the DBCA command from the node where the MGMTDB database is running. For that, we first need to identify the hosting node. This can be done using the following SRVCTL command (or using OCLUMON/CRSCTL, as shown further below).
---// command to locate node hosting MGMTDB database //---
srvctl status mgmtdb
Example
myracserver1 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver1
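As an alternative to SRVCTL, the hosting node can also be identified with OCLUMON or CRSCTL. This is just a sketch of the other options mentioned above, assuming the standard resource name ora.mgmtdb.

---// alternative ways to locate the node hosting MGMTDB //---
# CHM master node also hosts the MGMTDB instance
$GRID_HOME/bin/oclumon manage -get master
$GRID_HOME/bin/crsctl status res ora.mgmtdb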
Please note that we must not stop the MGMTDB database before initiating the delete operation. If the MGMTDB database is stopped, the delete operation will fail with the following message.
---// error when MGMTDB is not running //---
Oracle Grid Management database is running on node "". Run dbca on node "" to delete the database.
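If the database happens to be in a stopped state, it can simply be started again before retrying the delete, for example with the standard SRVCTL command shown below.

---// starting MGMTDB if it was stopped //---
srvctl start mgmtdb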
Use DBCA to delete the management database (MGMTDB)
Once we have identified the hosting node and ensured that the MGMTDB database is up and running, we can use the following DBCA command to delete the management database.
---// command to delete MGMTDB database //---
$GRID_HOME/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
Example
---// deleting MGMTDB database //---
myracserver1 {/home/oracle}: $GRID_HOME/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
Connecting to database
4% complete
9% complete
14% complete
19% complete
23% complete
28% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instance and datafiles
76% complete
100% complete
Look at the log file "/app/oracle/cfgtoollogs/dbca/_mgmtdb.log" for further details.
myracserver1 {/home/oracle}:
Validate the deletion of the management database
---// validating MGMTDB is deleted //---
myracserver1 {/home/oracle}: srvctl status mgmtdb
PRCD-1120 : The resource for database _mgmtdb could not be found.
PRCR-1001 : Resource ora.mgmtdb does not exist
myracserver1 {/home/oracle}:
Use DBCA to recreate management database on new file system
We can now recreate the management database MGMTDB on a different (non-default) file system using one of the following commands, depending on the target storage type. The command must be executed on only one cluster node and creates a single-instance container database (CDB).
---// MGMTDB creation command for ASM file system //---
$GRID_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName {+NEW_DG} -datafileJarLocation $GRID_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal

Where: {+NEW_DG} is the new ASM diskgroup name where the management database MGMTDB needs to be created.
---// MGMTDB creation command for NFS/shared file system //---
$GRID_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType FS -datafileDestination {NEW_FS} -datafileJarLocation $GRID_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal

Where: {NEW_FS} is the new NFS or CFS file system where the management database MGMTDB needs to be created.
Example: In the following example, I am recreating the management database in the new shared file system “/data/gimr/”
---// re-creating MGMTDB under location "/data/gimr/" //---
myracserver1 {/home/oracle}: $GRID_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType FS -datafileDestination /data/gimr/ -datafileJarLocation $GRID_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal
Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
9% complete
16% complete
23% complete
41% complete
Creating and starting Oracle instance
43% complete
48% complete
53% complete
57% complete
58% complete
59% complete
62% complete
64% complete
Completing Database Creation
68% complete
69% complete
80% complete
90% complete
100% complete
Look at the log file "/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.
myracserver1 {/home/oracle}:
Validate that the management database is recreated and is running
myracserver1 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver1
However, at this point only the container database (CDB) has been created and there is no pluggable database (PDB) associated with it.
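To run the queries shown below, we need to connect to the MGMTDB instance from the hosting node. Here is a minimal sketch, assuming the instance runs out of the Grid home and we connect as the Grid software owner.

---// connecting to the MGMTDB instance (sketch) //---
# the instance name starts with a hyphen: -MGMTDB
export ORACLE_HOME=$GRID_HOME
export ORACLE_SID=-MGMTDB
$ORACLE_HOME/bin/sqlplus / as sysdba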
---// repository is not yet created //---
SQL> select name,cdb from v$database;

NAME      CDB
--------- ---
_MGMTDB   YES

SQL> select dbid,name,open_mode from v$pdbs;

no rows selected
In the next step, we will create the pluggable database (PDB), which is the actual management repository used for storing diagnostic data.
Use DBCA to create the management repository pluggable database
The management repository pluggable database (PDB) must be named after the cluster. Therefore, we first need to identify the cluster name, which can be done as follows
---// finding the cluster name //---
myracserver1 {/home/oracle}: olsnodes -c
my-rac-cluster
Once the cluster name is identified, we can use the following DBCA command to create the management repository pluggable database. One point to note here: if the cluster name contains a hyphen (-), it needs to be replaced with an underscore (_) when specifying the pluggable database name.
---// command to create repository pluggable database //---
$GRID_HOME/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName {pluggable_db_name} -createPDBFrom RMANBACKUP -PDBBackUpfile $GRID_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GRID_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
Example: In this example, I am creating the management repository pluggable database “my_rac_cluster” named after the cluster “my-rac-cluster”
---// creating repository pluggable database my_rac_cluster //---
myracserver1 {/home/oracle}: $GRID_HOME/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName my_rac_cluster -createPDBFrom RMANBACKUP -PDBBackUpfile $GRID_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GRID_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck -oui_internal
Creating Pluggable Database
4% complete
12% complete
21% complete
38% complete
55% complete
85% complete
Completing Pluggable Database Creation
100% complete
Look at the log file "/app/oracle/cfgtoollogs/dbca/_mgmtdb/my_rac_cluster/_mgmtdb.log" for further details.
myracserver1 {/home/oracle}:
Validate that the pluggable database is now created and accessible.
---// validating repository PDB is created //---
SQL> select dbid,name,cdb from v$database;

      DBID NAME      CDB
---------- --------- ---
1093077569 _MGMTDB   YES

SQL> select dbid,name,open_mode from v$pdbs;

      DBID NAME                           OPEN_MODE
---------- ------------------------------ ----------
1753287684 MY_RAC_CLUSTER                 READ WRITE
Secure the management database credentials
Use the MGMTCA utility to generate credentials and unlock accounts for the management repository users. This utility also configures a wallet so that the repository database can be accessed without hard-coding the password in the repository management tools.
---// command to secure repository password //---
$GRID_HOME/bin/mgmtca
Example:
---// securing repository password //---
myracserver1 {/home/oracle}: $GRID_HOME/bin/mgmtca
myracserver1 {/home/oracle}:
Create the pluggable database (PDB) service
As per the Oracle documentation, the relocation process is complete with the execution of the previous step and we are ready to restart the Cluster Health Monitor (CHM) services. However, the story is a little different here. You are likely to get the following error if you do not perform an additional step before starting the CHM services.
---// error after relocating MGMTDB //---
myracserver1 {/home/oracle}: oclumon manage -get reppath
Connection Error. Could not get RepPath.
I have also discussed this error in an earlier post, which covers the different situations in which this error can occur and the possible workarounds to fix it.
Here is why the error occurred in the present case. The OCLUMON utility uses the pluggable database (PDB) name as the service name to query the management repository database. However, when we recreate the PDB manually as part of the relocation process, DBCA does not create that default service.
Example
---// MGMTDB configuration //---
myracserver1 {/home/oracle}: srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home:
Oracle user: oracle
Spfile: /data/gimr/_mgmtdb/spfile-MGMTDB.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: my_rac_cluster
PDB service: my_rac_cluster
Cluster name: my-rac-cluster
Database instance: -MGMTDB
From the Clusterware configuration of MGMTDB, we can see that the PDB service name is set to “my_rac_cluster”, which in turn is the PDB name. However, when we query the database, there is no service available with that name, as shown below.
---// checking if repository PDB service exists //---
SQL> alter session set container=my_rac_cluster;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
MY_RAC_CLUSTER

SQL> select name from dba_services;

no rows selected
To fix the issue, we need to create the PDB service with the name found from the “srvctl config mgmtdb” command as shown below.
---// commands to create repository PDB service //---
SQL> alter session set container=pluggable_db_name;
SQL> exec dbms_service.create_service('pluggable_db_name','pluggable_db_name');
SQL> exec dbms_service.start_service('pluggable_db_name');
Example:
---// creating repository PDB service //---
SQL> show con_name;

CON_NAME
------------------------------
MY_RAC_CLUSTER

SQL> exec dbms_service.create_service('my_rac_cluster','my_rac_cluster');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('my_rac_cluster');

PL/SQL procedure successfully completed.
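We can re-run the earlier check against dba_services to confirm the service is now registered in the PDB; with the service just created and started, it should now list the my_rac_cluster service.

---// verifying the repository PDB service //---
SQL> select name from dba_services;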
Enable and start CHM Clusterware (GI) resource ora.crf
We can now enable and start the CHM Clusterware (GI) resource ora.crf on each cluster node using the following commands
---// command to enable auto start of ora.crf resource //---
$GRID_HOME/bin/crsctl modify res ora.crf -attr ENABLED=1 -init

---// command to start ora.crf resource //---
$GRID_HOME/bin/crsctl start res ora.crf -init
Example
---// starting ora.crf resources for cluster my-rac-cluster //---
myracserver1 {/home/oracle}: $GRID_HOME/bin/crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'myracserver1'
CRS-2676: Start of 'ora.crf' on 'myracserver1' succeeded

myracserver2 {/home/oracle}: $GRID_HOME/bin/crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'myracserver2'
CRS-2676: Start of 'ora.crf' on 'myracserver2' succeeded

---// validating ora.crf resource is ONLINE //---
myracserver1 {/home/oracle}: crsctl status resource ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on myracserver1

myracserver2 {/home/oracle}: crsctl status resource ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on myracserver2
Validate the relocation of management database
We are now done with the relocation of the management repository database (MGMTDB) to a different location. Let's validate the changes. We can query the repository path to check if it is located on the new storage using the following command.
---// command to check repository path //---
$GRID_HOME/bin/oclumon manage -get reppath
Example
---// validating new repository location //---
myracserver1 {/home/oracle}: oclumon manage -get reppath
CHM Repository Path = /data/gimr/_MGMTDB/datafile/o1_mf_sysmgmtd__2313916236273_.dbf
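As an additional, optional sanity check, OCLUMON can also report the repository size/retention now that the connection to the relocated repository works again.

---// optional check of repository size/retention //---
$GRID_HOME/bin/oclumon manage -get repsize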
As expected, the management repository database (MGMTDB) is now relocated to the new location “/data/gimr/” and is fully functional!
Oracle has recently come up with a script (MDBUtil) that automates this entire process of relocating the repository to a new file system location. More details about this utility can be found in MOS note 2065175.1.