Cloning a Database
==================
1) First, check whether there is enough space on the target server (a quick sizing check is sketched right after this list).
2) Ensure that no duplicate datafile names are present in the database.
If there are duplicate datafiles, make sure to map them to different mountpoints in the restore script.
If the restore fails because of duplicate-file issues (the datafile switch does not happen), restore only the failed datafiles (get their names from the log),
and then run the switch for all datafiles.
3) Once the restore and recovery of the database is complete, rename the redo logfiles as per the cloned server's
mountpoints, and make sure ownership of the redo logfiles is oracle:dba.
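For the space check in step 1, one quick way is to compare the total datafile size on the source against free space on the target mountpoints; the mountpoint shown below is only an assumption:
SQL> select round(sum(bytes)/1024/1024/1024,2) "SIZE_GB" from v$datafile;
$ df -h /glerpq02/data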
Now the actual steps:
1) Back up the database along with the controlfile to any mountpoint (a sample backup run block is sketched after the restore script below).
2) Copy the source database pfile to the target database and change the database name.
3) Create a restore script.
What does the restore script look like? It contains:
run {
allocate channel c1 type disk;
allocate channel c2 type disk;
SET NEWNAME FOR DATAFILE 1 to '/glerpq02/data/pnecqa2/data01/pnecqa2/abc.dbf';
SET NEWNAME FOR DATAFILE 2 to '/glerpq02/data/pnecqa2/data01/pnecqa2/bcd.dbf';
…………….
………..
restore database;
switch datafile all;
release channel c1;
release channel c2;
}
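For step 1 of the preparation (backing up the database with the controlfile), a minimal RMAN sketch could look like the following; the backup destination and format strings are assumptions, not taken from the original notes:
run {
allocate channel c1 type disk;
backup database format '/glerpq02/backup/df_%U.bkp';
backup archivelog all format '/glerpq02/backup/arch_%U.bkp';
# this controlfile piece is the one referenced later by 'restore controlfile from ...'
backup current controlfile format '/glerpq02/backup/fullbackup_ctl_%U.bkp';
release channel c1;
}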
Drop the database if it already exists. First shut down the database and the listener, and then use the rm -rf command at the OS level
to remove the datafiles, tempfiles, controlfiles, and logfiles present in the mountpoints of the existing database.
Use the following commands to identify them (a cleanup sketch follows the queries):
SQL> select name from v$datafile; (To view datafiles)
SQL> select name from v$controlfile; (To view controlfiles)
SQL> select member from v$logfile; (To view logfiles)
SQL> select name from v$tempfile; (To view tempfiles)
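A hedged sketch of the cleanup, assuming the files live under the mountpoints shown earlier (verify every path from the queries above before running rm -rf):
SQL> shutdown immediate
$ lsnrctl stop
$ rm -rf /glerpq02/data/pnecqa2/data01/pnecqa2/*.dbf    # datafiles and tempfiles (hypothetical path)
$ rm -rf /glerpq02/data/pnecqa2/data01/pnecqa2/*.ctl    # controlfiles
$ rm -rf /glerpq02/data/pnecqa2/data01/pnecqa2/*.log    # redo logfiles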
Create a restore shell script that calls the RMAN restore script above:
rman target=/ cmdfile=restore.rcv log=restore.log
Cloning starts now.
1) Edit pfile
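Parameters that typically need editing in the copied pfile are the path-related ones; the values below are placeholders for the clone server, not taken from the original notes:
*.control_files='/glerpq02/data/pnecqa2/data01/pnecqa2/control01.ctl'
*.audit_file_dest='/glerpq02/admin/pnecqa2/adump'
*.log_archive_dest_1='LOCATION=/glerpq02/arch/pnecqa2'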
2) startup nomount
3) Go to the RMAN prompt:
$ rman target /
4) Restore the controlfile from the backup piece.
RMAN> restore controlfile from 'fullbackup_ctl<along with full path>';
5) Mount the database.
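In RMAN this is simply:
RMAN> alter database mount;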
6) Now start the restore script to restore the datafiles to their new locations:
$ nohup restore.sh &
7) Once the restore completes, recover the database:
RMAN> run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
recover database;
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}
8) Once the database is recovered, rename the redo log files as per the cloned server's mountpoints.
SQL> select member from v$logfile;
SQL> alter database rename file 'source path' to 'destination path';
9) Open the database using resetlogs.
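For example:
SQL> alter database open resetlogs;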
10) Shut down the database.
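This is the usual clean shutdown before mounting for nid:
SQL> shutdown immediate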
11) Mount the database.
SQL> startup mount;
12) Invoke the nid utility and allow it to complete:
$ nid target=/ dbname=clone
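After nid completes, the usual follow-up (a sketch based on standard nid usage, not spelled out in the original notes; the pfile name is an assumption) is to set db_name to the new name in the pfile, then mount and open with resetlogs:
$ vi $ORACLE_HOME/dbs/initclone.ora    # set *.db_name='clone'
SQL> startup mount
SQL> alter database open resetlogs;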
Note: If we are performing manual recovery using the backup controlfile, we first have to set the clone database's archive destination to the mountpoint
where the archived logs reside, and also change the clone server's archive log format to match the source server's archive log format, as sketched below.
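A hedged sketch of those two settings in the clone's pfile; the path and format string are assumptions and must match where the source's archived logs actually sit and how they are named:
*.log_archive_dest_1='LOCATION=/glerpq02/arch/pnecqa2'
*.log_archive_format='arch_%t_%s_%r.arc'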
RECOVER USING BACKUP CONTROLFILE:
SQL> recover database using backup controlfile;
Specify AUTO if it prompts for auto | manual | cancel.
RECOVER UNTIL CANCEL USING BACKUP CONTROLFILE:
SQL> recover database using backup controlfile until cancel;
Apply the archived logs it prompts for, and specify CANCEL when you want to stop recovery.
============
To start or stop your entire cluster database:
================================================
srvctl start database -d name [-o start_options] [-c connect_str | -q]
srvctl stop database -d name [-o stop_options] [-c connect_str | -q]
Example:
srvctl start database -d orcl -o mount
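and, for example, to stop it:
srvctl stop database -d orcl -o immediate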
To start or stop instances.
================================================
srvctl start instance -d db_name -i "inst_name_list" [-o start_options] [-c connect_str | -q]
srvctl stop instance -d name -i "inst_name_list" [-o stop_options] [-c connect_str | -q]
Example:
srvctl stop instance -d orcl -i "orcl3,orcl4" -o immediate -c "sysback/oracle as sysoper"
======================================================
srvctl start
srvctl start database
Starts the cluster database and its instances
srvctl start instance
Starts the instance
srvctl start service
Starts the service
srvctl start nodeapps
Starts the node applications
srvctl start asm
Starts ASM instances
srvctl start listener
Starts the specified Listener or Listeners.
====================================================================================================
srvctl start database
Starts the cluster database and its instances.
srvctl start database -d db_unique_name [-o start_options] [-c connect_str | -q]
where:
-c : Connect string (default: / as sysdba)
-q : Prompt for the connect string credentials from standard input
Example:
srvctl start database -d crm -o open
========================================================================================================
srvctl start instance
Starts the instance
srvctl start instance -d db_unique_name -i inst_name_list [-o start_options] [-c connect_str | -q]
srvctl start instance -d crm -i "crm1,crm4"
=========================================
srvctl start asm
Starts an ASM instance.
srvctl start asm -n node_name [-i asm_inst_name] [-o start_options] [-c connect_str | -q]
-n node_name : Node name
-i inst_name : ASM instance name
-h : Display help
Example:
srvctl start asm -n crmnode1 -i asm1
An example to start all ASM instances on a node is:
srvctl start asm -n crmnode2
========================
srvctl start listener
srvctl start listener -n node_name [-l listener_name_list]
srvctl start listener -n mynode1
========================================