Wednesday, December 08, 2010

Oracle11gR2 RAC: Removing Database Nodes from an Existing Cluster

Phase I - Remove the node from the RAC database

Policy-Managed

1. Remove DB Console from the node by running the following from another node in the cluster (as ‘oracle’ user):

$> $RDBMS_HOME/bin/emca -deleteNode db


2. For a Policy-Managed database, one approach is to decrease the maximum size of the server pool and relocate the server to the Free pool (assuming the other pool size constraints are still met). For the relocation to work there must be no active sessions on the associated instance: either use the ‘-f’ flag to force relocation, or stop the instance before running the relocate command:

$> $RDBMS_HOME/bin/srvctl stop instance -d [db_unique_name] -n [node]
$> $RDBMS_HOME/bin/srvctl relocate server -n [node] -g Free
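
As a minimal illustration (the server pool ‘srvpool1’, database ‘orcl’ and node ‘rac3’ are hypothetical names, not from any particular environment), the sequence of checking the pool, lowering its maximum size, stopping the instance and relocating the server might look like this:

$> $RDBMS_HOME/bin/srvctl config srvpool -g srvpool1
$> $RDBMS_HOME/bin/srvctl modify srvpool -g srvpool1 -u 2
$> $RDBMS_HOME/bin/srvctl stop instance -d orcl -n rac3
$> $RDBMS_HOME/bin/srvctl relocate server -n rac3 -g Free

The ‘srvctl config srvpool’ call shows the pool’s current minimum/maximum sizes before the maximum is lowered with ‘-u’.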



Admin-Managed

1. For Admin-Managed databases, ensure the instance to be removed is not a PREFERRED or AVAILABLE instance for any Services (i.e. modify Services to exclude the instance to be removed).
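
For example, assuming a hypothetical service ‘app_svc’ on database ‘orcl’ that currently lists ‘orcl3’ (the instance on the node being removed) as a preferred instance, you could check and adjust it roughly as follows:

$> $RDBMS_HOME/bin/srvctl config service -d orcl -s app_svc
$> $RDBMS_HOME/bin/srvctl modify service -d orcl -s app_svc -n -i orcl1,orcl2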


2. Remove the instance using ‘$RDBMS_HOME/bin/dbca’, running the command from a node not being removed (as ‘oracle’ user):

$> $RDBMS_HOME/bin/dbca -silent -deleteInstance -nodeList [node] -gdbName [global_db_name] -instanceName [instance_name] -sysDBAUserName sys -sysDBAPassword [sys_password]
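
For illustration only (database ‘orcl’, instance ‘orcl3’ and node ‘rac3’ are hypothetical names):

$> $RDBMS_HOME/bin/dbca -silent -deleteInstance -nodeList rac3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword [sys_password]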


3. Disable and stop any listeners running on the node (as ‘oracle’ user on any node):

$> $RDBMS_HOME/bin/srvctl disable listener -l [listener_name] -n [node]
$> $RDBMS_HOME/bin/srvctl stop listener -l [listener_name] -n [node]
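
For example, with the default listener name ‘LISTENER’ and a hypothetical node ‘rac3’, followed by a status check:

$> $RDBMS_HOME/bin/srvctl disable listener -l LISTENER -n rac3
$> $RDBMS_HOME/bin/srvctl stop listener -l LISTENER -n rac3
$> $RDBMS_HOME/bin/srvctl status listener -l LISTENER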


4. Update the inventory on the node to be removed (run from the node being removed as ‘oracle’ user):

$> $RDBMS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=[RDBMS_home_path] "CLUSTER_NODES={[oldnode]}" -local
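
For example, with a hypothetical Oracle home path and node name ‘rac3’:

$> $RDBMS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES={rac3}" -local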

5. Deinstall the Oracle home from the node being removed (as ‘oracle’ user):

$> $RDBMS_HOME/deinstall/deinstall -local


6. From any of the existing nodes run the following to update the inventory with the list of the remaining nodes (as ‘oracle’ user):

$> $RDBMS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=[RDBMS_home_path] "CLUSTER_NODES={[node1,...nodeX]}"
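
For example, if the hypothetical nodes ‘rac1’ and ‘rac2’ remain (same illustrative home path as above):

$> $RDBMS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES={rac1,rac2}"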


Phase II - Remove node from Clusterware

7. Check if the node is active and unpinned (as ‘root’ user):

#> $GI_HOME/bin/olsnodes -s -t
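
The output should look roughly like the following (a hypothetical three-node cluster). If the node to be removed shows ‘Pinned’, it must be unpinned before it can be deleted, e.g. with ‘crsctl unpin css’:

rac1    Active  Unpinned
rac2    Active  Unpinned
rac3    Active  Unpinned

#> $GI_HOME/bin/crsctl unpin css -n rac3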

Note: The node will only be pinned if using CTSS, or if it is used with database versions earlier than 11.2.


8. Disable Clusterware applications and daemons on the node to be removed. Use the ‘-lastnode’ option when running on the last node in the cluster to be removed (as ‘root’ user):

#> $GI_HOME/crs/install/rootcrs.pl -deconfig -force [-lastnode]


9. From any node not being removed, delete the node from the cluster (as ‘root’ user):

#> $GI_HOME/bin/crsctl delete node -n [node]
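
For example, to remove a hypothetical node ‘rac3’ and then confirm it no longer appears in the node list:

#> $GI_HOME/bin/crsctl delete node -n rac3
#> $GI_HOME/bin/olsnodes -s -t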


10. Update the inventory on the node being removed (as ‘grid’ user, from the node being removed):

$> $GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=[GI_home_path] "CLUSTER_NODES={[oldnode]}" CRS=TRUE -local

11. Deinstall the Clusterware software from the node being removed (as ‘grid’ user):

$> $GI_HOME/deinstall/deinstall -local

ISSUE:
Running ‘$GI_HOME/deinstall/deinstall -local’ results in questions about other nodes, and prompts for scripts to be run against other nodes. DO NOT RUN THEM!! That process deconfigures CRS on all nodes!

ROOT CAUSE:
The Oracle documentation is incorrect: the ‘runInstaller -updateNodeList ... -local’ command that should precede the deinstall is missing.

RESOLUTION:
The correct order (as already given in this document) should be as follows:

$GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/app/11.2.0/grid "CLUSTER_NODES={[oldnode]}" CRS=TRUE -local
$GI_HOME/deinstall/deinstall -local


12. From any node that is to remain in the cluster, update the inventory with the list of remaining nodes (as ‘grid’ user):

$> $GI_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=[GI_home_path] "CLUSTER_NODES={[node1,...nodeX]}" CRS=TRUE


13. Verify the node has been removed and the remaining nodes are valid:

$GI_HOME/bin/cluvfy stage -post nodedel -n [node] -verbose
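
For example, for a hypothetical removed node ‘rac3’ (the post-check should report that the node removal was successful):

$GI_HOME/bin/cluvfy stage -post nodedel -n rac3 -verbose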


14. Remove OCM host/configuration from the MOS portal.
