Tuesday, 17 May 2016

Oracle Server:

Oracle Server is a database management system used to manage data. It consists of an instance and a database.

SGA (System Global Area):

The SGA is a group of shared memory structures that contains the data and control information for an Oracle instance. Each instance has its own SGA.

When users are connected to the server, the data in the instance is shared among them, so the SGA is also called the SHARED GLOBAL AREA.

The SGA and the background processes together constitute an instance.

The SGA consists of:

1) Database buffer cache

2) Redo log buffer cache

3) Java pool

4) Large pool

5) Shared pool
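To see how these SGA components are sized on a running instance, you can query the V$SGAINFO view; a minimal sketch (component names vary slightly by version):

```sql
-- Show the size of each SGA component in megabytes.
SELECT name,
       ROUND(bytes / 1024 / 1024, 1) AS size_mb
FROM   v$sgainfo
ORDER  BY bytes DESC;
```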

The shared pool contains:

1) Data dictionary cache

2) Library cache, which in turn contains:

1) Shared SQL area

2) Shared PL/SQL area

Redo Log Buffer

It is a circular buffer in the SGA that contains information about all changes made to the database.

The changed information is stored in redo entries (redo records). A redo record is a group of change vectors, each of which is a description of a change made to a single block.

(For example: if you change the salary value of an employee in the employee table, you generate a redo record containing change vectors that describe the changes to the data segment block of the table, the undo segment data block, and the transaction table of the undo segment.)

(Redo records contain all the information needed to reconstruct changes made to the database. During media recovery, the database reads the change vectors in the redo records and applies the changes to the relevant blocks.)

Every change made to the database is written to the redo log buffer before it is written to the database buffer cache.



This is because the goals of the log buffer and the database buffer cache are entirely different. The purpose of the log buffer is to temporarily hold changed transaction information and get it written quickly to the current online redo log file, whereas the purpose of the database buffer cache is to keep frequently accessed blocks in memory as long as possible to improve the performance of the processes using those blocks.

-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -

When you see "commit complete", Oracle guarantees that a record of that transaction has been safely written to the current online redo log file on disk. That does not mean the modified block has been written to the appropriate data file.

The transaction information in the redo log buffer is written very frequently to the online redo log file by the log writer (LGWR), whereas the modified blocks in the database buffer cache are written intermittently to the data files by the database writer (DBWR).

Periodically, all changed block buffers (dirty buffers) in memory are written to the data files by the database writer. This is known as a checkpoint. When a checkpoint occurs, the checkpoint process records the current checkpoint SCN in the control files and the corresponding SCN in the data file headers.
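The checkpoint SCNs recorded in the control file and in the data file headers can be inspected with queries like this sketch:

```sql
-- Checkpoint SCN recorded in the control file.
SELECT checkpoint_change# FROM v$database;

-- Checkpoint SCN recorded in each data file header.
SELECT file#, checkpoint_change#
FROM   v$datafile_header;
```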

-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

RMAN Backup

An RMAN incremental backup can be used to back up only the data that has changed since the last backup.

How does the incremental backup algorithm work?

Each data block in a data file carries a System Change Number (SCN) that identifies the most recent change made to the block. During an incremental backup, RMAN reads each data block in the input file and compares its SCN to the checkpoint SCN of the parent incremental backup. If the SCN of a data block is greater than the checkpoint SCN of the parent backup, RMAN copies that block.

Block change tracking feature

If you enable this feature, RMAN can refer to the change tracking file to identify the changed blocks in the data files without scanning the full contents of every data file, which improves incremental backup performance.
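Block change tracking can be enabled with a statement like the following sketch (the tracking file path is only an example):

```sql
-- Enable block change tracking; the file path is illustrative.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct.chg';

-- Verify that change tracking is enabled.
SELECT status, filename FROM v$block_change_tracking;
```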

An incremental backup can be level 0 or level 1.

A level 0 backup is a full database backup and serves as the base for subsequent incremental backups.

A level 1 backup may be either a differential backup or a cumulative backup.

A differential backup backs up all blocks that have changed since the most recent level 0 or level 1 backup.

A cumulative backup backs up all blocks that have changed since the most recent level 0 backup.
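The three kinds of incremental backup described above map to RMAN commands like these:

```sql
-- Level 0: full backup that serves as the incremental base.
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Level 1 differential: blocks changed since the most recent
-- level 0 or level 1 backup.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Level 1 cumulative: blocks changed since the most recent
-- level 0 backup.
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```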

Oracle System Identifier (SID)

It is a unique name for an Oracle database instance on a specific host.

When a client wants to connect to a database, it can specify the SID in the Oracle Net architecture or use a net service name; the Oracle database then resolves the service name into an ORACLE_HOME and ORACLE_SID.

For a single-instance database, there is a one-to-one relationship between database and instance.

For a RAC database, there is a one-to-many relationship between database and instances.

Overview of instance and database startup

Startup nomount

When the Oracle server starts the instance, it searches for the server parameter file (spfile); if that is not found, it looks for the init parameter file (pfile). If the pfile is also not found, it throws an error. Otherwise it reads the parameter file and allocates memory according to the parameter values.

It then starts the background processes, opens the alert log file and trace files, and writes all explicit parameter settings to the alert log in valid parameter syntax.

The startup nomount command is used during:

1) Creation of a database

2) Recovery or re-creation of control files

Startup mount

In this stage, the Oracle instance mounts the database. To mount the database, the instance requires the control file, whose location is defined by the CONTROL_FILES parameter. Through the control file it locates the data files and redo log files, but they are not opened.

This command is used during:

1) Data file rename

2) Redo log file rename

3) Conversion from noarchivelog to archivelog mode (or vice versa)

4) Full database recovery

5) Enabling flashback

Startup

In this stage, the database is opened.

The data files and redo log files are opened and read.

If the database is found to be inconsistent, SMON performs instance recovery.
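The three startup stages can also be walked through one at a time from SQL*Plus:

```sql
-- Stage 1: start the instance only (reads the parameter file,
-- allocates the SGA, starts the background processes).
STARTUP NOMOUNT;

-- Stage 2: mount the database (reads the control file).
ALTER DATABASE MOUNT;

-- Stage 3: open the database (opens data files and redo logs).
ALTER DATABASE OPEN;
```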

Startup force

This command is used when the database is in a hung state. It aborts the currently running instance, then creates the instance again, mounts the database, and opens it.

Startup upgrade: used during an upgrade.

Startup restrict: used when you want to perform maintenance tasks. When this command is executed, existing user connections continue. Only users who have the RESTRICTED SESSION privilege can connect to the database; users without that privilege cannot connect.
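As a sketch, a restricted maintenance session might look like this (assuming SYSDBA access):

```sql
-- Start the instance in restricted mode; only users with the
-- RESTRICTED SESSION privilege can connect.
STARTUP RESTRICT;

-- ...perform maintenance tasks...

-- Lift the restriction without bouncing the instance.
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```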

Read-only mode:

In this mode, Oracle allows only read-only transactions to execute. The control files remain open for updates, and writing to operating system files such as the alert log, trace files, and audit files continues.

Only users who have been granted the administration privilege (SYSDBA/SYSOPER) can shut down the database.

Shutdown

Database shutdown is done in three stages:

1) Database closed

2) Database dismounted

3) Oracle instance shut down

If the database is shut down with any option other than abort, the data in the SGA is written to the redo log files and data files, and the online redo log files and data files are closed. In this state, the control file remains open even after the database is closed.

Abnormal shutdown

If a shutdown abort or abnormal termination occurs, the instance is terminated and the database shut down simultaneously. DBWR does not write the modified data from the SGA to the data files, so when the database is opened again it requires instance recovery.

System Change Number (SCN)

It is the database's ordering primitive, which identifies the most recent change made to the database.

Data Definition Language (DDL) statements create schema objects, change the structure of objects, and drop schema objects.

Ex: CREATE, ALTER, DROP, TRUNCATE

Granting and revoking privileges (GRANT and REVOKE)

Turning auditing on and off (AUDIT)

Data Manipulation Language (DML):

These are the statements used to query and modify the data in existing objects.

Ex: SELECT, INSERT, UPDATE, DELETE

What is a schema?

A named collection of objects is known as a schema.

Schema object: a logical structure of data stored in a schema is known as a schema object.

How do DML queries work in Oracle, or how does the Oracle database process DML queries?

Oracle uses a read-consistency mechanism to retrieve data. This mechanism uses undo data to show past versions of the data and guarantees that all the data retrieved by a query is consistent as of a single point in time.

For example: assume session 1 fires a query that retrieves 100 rows, and while the query is being processed, another session fires an update statement that modifies the 75th block of the same table and does not commit. When session 1 reaches the 75th block, it detects the change, uses undo data to retrieve the old version of the block, and sends that output to the user, ensuring that all the data retrieved is consistent as of a single point in time.

Database buffer cache:

The database buffer cache is the portion of the SGA that holds copies of data blocks read from the data files.

Organization of the database buffer cache

The buffers in the cache are organized into two lists:

1) Write list

2) Least recently used (LRU) list

The write list holds dirty buffers, which contain data that has been modified but not yet written to disk (the data files).

The LRU list contains free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list.

Free buffers are buffers that do not contain any useful data and are available for use.

Pinned buffers are buffers that are currently being used.

If an Oracle user process needs to access a piece of data, it first searches the database buffer cache. If it finds the required data, this is called a cache hit; otherwise it has to copy the data from the data file into the buffer cache. Before copying, the process must first find enough free space in the buffer cache. If enough space is not available, the process signals DBWR to write some dirty buffers down to the data files on disk.

Cache miss: if the required data is not found in the database buffer cache for an Oracle user process, it is called a cache miss.

The data dictionary is a collection of tables and views containing reference information about the database, its structure, and its users.

RMAN

To create a script:

CREATE SCRIPT my_script
{
BACKUP DATABASE;
}

To run a script:

RUN
{
EXECUTE SCRIPT my_script;
}

To replace a script:

REPLACE SCRIPT my_script
{
}

To list scripts:

LIST SCRIPT NAMES;

To print the code of a script:

PRINT SCRIPT my_script;

CREATE USER username IDENTIFIED BY password
DEFAULT TABLESPACE tablespace_name
TEMPORARY TABLESPACE temp
QUOTA 100M ON users;

RMAN

Scenario

If you want to take a backup of the database while skipping offline tablespaces and read-only tablespaces, which command is used?

RMAN> BACKUP DATABASE
SKIP READONLY
SKIP OFFLINE;

Scenario

Assume you want to skip a tablespace every time you take a backup. How do you do that?

RMAN> CONFIGURE EXCLUDE FOR TABLESPACE users;

If you want to override this setting while taking a backup, use:

RMAN> BACKUP DATABASE NOEXCLUDE;

Scenario

Assume that last night's backup did not succeed, so you want to back up only the data that has not yet been backed up. How would you do that?

RMAN> BACKUP
NOT BACKED UP SINCE TIME 'SYSDATE-1'
MAXSETSIZE 100M
DATABASE PLUS ARCHIVELOG;

Oracle DBA roles and responsibilities

1. Creating and managing development, testing, and production databases.

2. Installing new versions of the Oracle RDBMS.

3. Implementing backup and recovery of Oracle databases.

4. Administering all database objects, such as tables, indexes, sequences, views, clusters, packages, and procedures.

5. Providing technical support to the application development team.

6. Creating database users and assigning privileges as and when required.

7. Managing the sharing of resources among applications.

8. Troubleshooting problems regarding the databases, applications, and development tools.

9. Migration of databases.

10. Interfacing with Oracle Corporation for technical support.

11. Upgrading databases.

ASM Disk Groups

The creation of a disk group involves validation of the disks to be added. The following attributes are checked when validating a disk:

1. The disk cannot already belong to some other disk group.

2. The disk must not have a pre-existing valid ASM header. If it does, this can be overridden using the FORCE option.

The following example shows a disk group created using four disks that reside in a storage array, with redundancy handled externally by the storage array.

SQL> SELECT NAME, PATH, MODE_STATUS, STATE FROM V$ASM_DISK;

NAME   PATH                  MODE_ST  STATE
-----  --------------------  -------  ------
       /dev/rdsk/c3t19d5s4   ONLINE   NORMAL
       /dev/rdsk/c3t19d16s4  ONLINE   NORMAL
       /dev/rdsk/c3t19d17s4  ONLINE   NORMAL
       /dev/rdsk/c3t19d18s4  ONLINE   NORMAL

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK

'/dev/rdsk/c3t19d5s4',

'/dev/rdsk/c3t19d16s4',

'/dev/rdsk/c3t19d17s4',

'/dev/rdsk/c3t19d18s4';

The following output, from V$ASM_DISKGROUP, shows the newly created disk group:

SQL> SELECT NAME, STATE, TYPE, TOTAL_MB, FREE_MB FROM V$ASM_DISKGROUP;

NAME   STATE    TYPE     TOTAL_MB  FREE_MB
-----  -------  ------  ---------  -------
DATA   MOUNTED  EXTERN      34512    34101

After the creation of the disk group completes successfully, metadata such as the creation date, disk group name, and redundancy type is stored in the System Global Area and in the header of each disk in the disk group.

The following output shows how the V$ASM_DISK view reflects the disk state change after the disks are incorporated into the disk group:

SQL> SELECT NAME, PATH, MODE_STATUS, STATE, DISK_NUMBER FROM V$ASM_DISK;

NAME       PATH                  MODE_ST  STATE   DISK_NUMBER
---------  --------------------  -------  ------  -----------
DATA_0000  /dev/rdsk/c3t19d5s4   ONLINE   NORMAL            0
DATA_0001  /dev/rdsk/c3t19d16s4  ONLINE   NORMAL            1
DATA_0002  /dev/rdsk/c3t19d17s4  ONLINE   NORMAL            2
DATA_0003  /dev/rdsk/c3t19d18s4  ONLINE   NORMAL            3

DATA_0000 through DATA_0003 follow the naming pattern disk_group_name + disk_number.

These are the automatically generated disk names. If you want to assign disk names yourself, write the statement as follows:

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK

'/dev/rdsk/c3t19d5s4' name DMX_disk1,

'/dev/rdsk/c3t19d16s4' name DMX_disk2,

'/dev/rdsk/c3t19d17s4' name DMX_disk3,

'/dev/rdsk/c3t19d18s4' name DMX_disk4;

An ASM disk name is used when performing disk management activities such as dropping or resizing a disk.
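For example, the user-assigned disk names above could be used in disk management statements like these sketches (the new size is illustrative):

```sql
-- Drop a disk from the disk group by its ASM disk name.
ALTER DISKGROUP DATA DROP DISK DMX_disk1;

-- Resize a disk by its ASM disk name.
ALTER DISKGROUP DATA RESIZE DISK DMX_disk2 SIZE 10G;
```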

Restore is the process of restoring files (data files, control file, server parameter file, but not online redo log files) from a backup, and recovery is the process of applying redo log files to the data files.

Complete recovery means you can recover all committed transactions up to the point of failure.

Data Guard operates on a simple principle: ship redo, then apply redo, because redo contains all of the information needed by the Oracle database to recover database transactions.

Difference between Data Guard and Remote Mirroring

Data Guard:
1) Transmits only the redo data required to recover database transactions in order to synchronize a standby database with its primary.

Remote Mirroring:
1) Has no knowledge of Oracle transactions, so it must replicate every write to every file.
2) Requires about 7 times more network volume compared to the network needed for Data Guard.
3) Performs about 27 times more network I/O operations than Data Guard.

Primary database transactions generate redo records. As described earlier, a redo record (also called a redo entry) is made up of a group of change vectors, each describing a change made to a single block in the database, and contains all the information needed to reconstruct those changes.

Whenever a transaction is committed, a commit record is created: LGWR writes the transaction's redo records from the redo log buffer to an online redo log (ORL) and assigns a system change number (SCN) to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction have been written to the ORL is the user process notified that the transaction has been committed.

Redo Transport Services

When transactions are committed, the primary database's LGWR process writes the redo to its online redo log files, and at the same time the Log Network Server (LNS) reads the redo records from the redo log buffer and passes them to Oracle Net Services for transmission to the standby databases.

Redo records transmitted by the LNS are received at the standby database by another Data Guard process called the Remote File Server (RFS).

Synchronous redo transport

This is also called the "zero data loss" method, because LGWR is not allowed to acknowledge that a commit has succeeded until the LNS confirms that the redo required to recover the transaction has been written to disk (redo log files) at the standby database.

Asynchronous redo transport

This is the reverse of synchronous transport: LGWR acknowledges the commit without waiting for confirmation from the standby.
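On the primary, the transport mode is chosen per archive destination; a hedged sketch (the service and DB_UNIQUE_NAME values are placeholders):

```sql
-- Synchronous ("zero data loss") transport to the standby.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=stby';

-- Asynchronous transport: commit is not held up by the standby.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=stby';
```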

Apply Services

Redo Apply (physical standby)

SQL Apply (logical standby)

Redo Apply maintains a standby database that is an exact, block-by-block physical copy of the primary database.

Types of database parameter files

1. SPFILE (server parameter file): a binary file that contains the initialization parameters.

2. PFILE (initialization parameter file): a text file that contains the initialization parameters.
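The two parameter file types can be converted into each other; the paths below are illustrative:

```sql
-- Create a binary spfile from a text pfile.
CREATE SPFILE FROM PFILE = '/u01/app/oracle/dbs/initORCL.ora';

-- Create a text pfile from the current spfile.
CREATE PFILE = '/tmp/initORCL.ora' FROM SPFILE;
```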

Oracle server

The Oracle server is divided into:

1) Database

2) Instance

The database in turn is divided into:

1) Logical structures

2) Physical structures

Logical structures are

1) Tablespace

2) Segments

3) Extents

4) Blocks

Physical structures

1) Datafiles

2) Controlfile

3) Redologfile

4) Archivelog file

5) Parameter file

6) Password file

7) Network file

The instance is divided into:

1) Memory structures

2) Background processes

The memory structures are divided into:

1) SGA

2) PGA

The SGA consists of:

1) Shared pool area

2) Database buffer cache

3) Log buffer

4) Large pool

5) Java pool

The PGA consists of:

1) Session information area

2) Cursor area

3) Software code area

4) Stack space

Metadata is nothing but data about the data

Five important background processes

Database Writer (DBWR)

DBWR writes data from the database buffer cache to the data files in the following situations:

1) A checkpoint occurs.

2) Dirty buffers reach the threshold.

3) There are no free buffers.

4) A timeout occurs.

5) A tablespace is put in offline mode.

6) A tablespace is put in begin backup mode.

7) A tablespace is made read only.

8) A table is dropped or truncated.

9) A RAC ping request is made.

Log Writer (LGWR) writes in the following situations:

1) At commit.

2) When one-third of the log buffer is full.

3) When there is more than 1 MB of redo changes.

4) Every 3 seconds.

5) Before DBWR writes.

SMON (System Monitor)

SMON is responsible for instance recovery.

Instance recovery:

1) Rolls forward the changes recorded in the redo logs.

2) Opens the database for user access (if we are able to connect to the database, SMON is the reason for that).

3) Rolls back uncommitted transactions.

4) Coalesces (combines) free space.

5) Deallocates temporary segments.

PMON (Process Monitor)

PMON periodically cleans up:

1) Processes that died abnormally.

2) Sessions that were killed.

3) Detached transactions that have exceeded their idle timeout.

4) Detached network connections that have exceeded their idle timeout.

PMON is also responsible for registering information about the instance and dispatcher processes with the network listener.

-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

Checkpoint

The process by which the dirty buffers in the database buffer cache are periodically written to the data files is called a checkpoint. When a checkpoint occurs, the CKPT process updates the control file and the headers of the data files with the current SCN.

Purpose of a checkpoint:

1) It ensures that all committed data is written to disk during a consistent shutdown.

2) It ensures that dirty buffers in the buffer cache are written to disk regularly.

3) It reduces the time required for recovery in case of media failure or instance failure.

When does a checkpoint occur?

1) When a log switch occurs.

2) When the instance is shut down with the normal or immediate option.

3) When forced by the initialization parameter FAST_START_MTTR_TARGET.

4) When forced manually by the DBA.

5) When a tablespace is put in offline mode.

6) When a tablespace is put in begin backup mode.

7) When a tablespace is put in read-only mode.

Information about checkpoints is recorded in the alert log file if the LOG_CHECKPOINTS_TO_ALERT initialization parameter is set to TRUE.
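A checkpoint can be forced manually, and checkpoint logging enabled, as in this sketch:

```sql
-- Force a checkpoint manually (item 4 above).
ALTER SYSTEM CHECKPOINT;

-- Record checkpoint begin/end messages in the alert log.
ALTER SYSTEM SET log_checkpoints_to_alert = TRUE;
```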

-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -

You are required to have at least two online redo log groups in your database, and each online redo log group must contain at least one online redo log member.

Online redo logs are crucial database files that store a record of the transactions that have occurred in your database. Online redo logs serve several purposes:

1) They provide a mechanism for recording changes to the database so that in the event of a media failure you have a method of recovering transactions.

2) They ensure that in the event of total instance failure, committed transactions can be recovered (crash recovery) even if committed data changes have not yet been written to the data files.

3) They allow administrators to inspect historical database transactions through the Oracle LogMiner utility.

The contents of the current online redo log files are not archived until a log switch occurs. This means that if you lose all members of the current online redo log group, you will most likely lose transactions. Listed next are several mechanisms you can implement to minimize the chance of failure with the online redo log files:

1) Multiplex groups to have multiple members.

2) If possible, don't allow two members of the same group to share a controller.

3) If possible, don't put two members of the same group on the same physical disk.

4) Ensure operating system file permissions are set appropriately (restrictive, so that only the owner of the Oracle binaries has permission to read and write them).

5) Use physical storage devices that are redundant (that is, RAID).

6) Appropriately size the log files so that they switch and are archived at regular intervals.

7) Consider setting the ARCHIVE_LAG_TARGET initialization parameter to ensure that the online redo logs are switched at regular intervals.
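Multiplexing a group (the first recommendation above) means adding members on separate disks; the file paths here are only examples:

```sql
-- Add a second member to each redo log group on a different disk.
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/redo02b.log' TO GROUP 2;
```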

Note: the only tool provided by Oracle that can protect you and preserve all committed transactions in the event you lose all members of the current online redo log group is Oracle Data Guard implemented in Maximum Protection mode. Refer to MOS note 239100.1 for more details regarding Oracle Data Guard protection modes.

The clear logfile command will drop and re-create all members of a log group for you. You can issue this command even if you have only two log groups in your database.

If the clear logfile command does not succeed because of an I/O error and it is a permanent problem, you will need to consider dropping the log group and re-creating it in a different location. See the next two subsections for directions on how to drop and re-create a log file group.
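The commands involved look like this (group numbers, sizes, and paths are illustrative):

```sql
-- Re-create all members of a damaged (inactive) group.
ALTER DATABASE CLEAR LOGFILE GROUP 2;

-- If the group has not yet been archived, add UNARCHIVED.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;

-- Or drop the group and re-create it in a new location.
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE ADD LOGFILE GROUP 2 ('/u02/oradata/redo02a.log') SIZE 100M;
```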

When do control file errors, media recovery, or instance recovery occur?

CF checkpoint SCN < data file checkpoint SCN: "control file too old" error.
Ans: restore a newer control file, or recover using the USING BACKUP CONTROLFILE clause.

CF checkpoint SCN > data file checkpoint SCN: media recovery required. Most likely a data file has been restored from a backup; recovery is now required.

CF checkpoint SCN = data file checkpoint SCN: start up normally.

If the database is in mount mode and the instance thread status = OPEN, crash recovery is required (and Oracle automatically performs crash recovery, i.e., instance recovery).

Media recovery requires restore and recovery using archived redo log files; instance recovery requires only the online redo log files.

V$DATAFILE_HEADER uses the physical data files on disk as its source; V$DATAFILE uses the control file as its source.
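The difference between the two views can be observed directly; if a data file has been restored from backup, the two checkpoint SCNs will disagree:

```sql
-- Checkpoint SCN as read from the data file headers on disk.
SELECT file#, checkpoint_change# FROM v$datafile_header;

-- Checkpoint SCN as recorded in the control file.
SELECT file#, checkpoint_change# FROM v$datafile;
```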

When is incomplete recovery needed?

When the online redo log files and the incremental backups required for complete recovery are lost, we need to go for incomplete recovery.

When you validate backup sets, RMAN actually reads the backup files just as if it were doing a restore operation. This indicates how much time it would take to read the files during a real restore operation, which can be useful for troubleshooting I/O problems.

RMAN can perform media recovery for physically corrupted blocks, but logically corrupted blocks cannot be recovered by RMAN block recovery. To recover logically corrupt blocks, restore the data file from a backup and perform media recovery.

Physical corruption

When a block (or blocks) does not match the physical format that Oracle expects, it is called physical corruption. Physical corruptions are generally the result of infrastructure problems. Possible sources of physical corruption are storage array cache corruption, file system bugs, application errors, and array controller failures.

Logical corruption

When the block contents are inconsistent with the logical information that Oracle expects, it is called logical corruption. For example: one of the block header structures, which tracks the number of locks associated with rows in the block, differs from the actual number of locks present. Another example would be if the header information on available space differs from the true available space in the block.

RMAN> recover database until time 'sysdate - 1/48' test;

RMAN> recover database until scn 2328888 test;

RMAN> recover database until sequence 343 test;

Caution: if you attempt to issue recover tablespace until … test, RMAN will attempt to perform a tablespace point-in-time recovery (TSPITR).

Performing Database-Level Recovery

$ rman target /

RMAN> startup mount;

RMAN> restore database;

RMAN> recover database;

RMAN> alter database open;

When a recovery catalog is configured:

$ rman target / catalog rcat/rcat@rcat

RMAN> startup mount;

RMAN> restore database;

RMAN> recover database;

RMAN> alter database open;

Performing tablespace recovery

$ rman target / catalog rcat/rcat@rcat

RMAN> alter tablespace users offline immediate;

RMAN> restore tablespace users;

RMAN> recover tablespace users;

RMAN> alter tablespace users online;

For 11g and Lower

$ rman target /

RMAN> sql 'alter tablespace users offline immediate';

RMAN> restore tablespace users;

RMAN> recover tablespace users;

RMAN> sql 'alter tablespace users online';

Performing Data File-Level Recovery

$ rman target /

RMAN> startup mount;

RMAN> alter database datafile '/u01/dbfile/o12c/users01.dbf' offline;

RMAN> alter database open;

RMAN> restore datafile '/u01/dbfile/o12c/users01.dbf';

RMAN> recover datafile '/u01/dbfile/o12c/users01.dbf';

RMAN> alter database datafile '/u01/dbfile/o12c/users01.dbf' online;

Use the RMAN report schema command to list data file names and file numbers


Forcing RMAN to Restore a File

Problem

As part of a test exercise, you attempt to restore a data file twice and receive this RMAN message:

restore not done; all files read only, offline, or already restored

In this situation, you want to force RMAN to restore the data file again. Use the force option to restore data files and archived redo log files even if they already exist in a location.

Restoring from an Older Backup

You want to specifically instruct RMAN to restore from a backup set that is older than the last backup that was taken.

Sol: you can restore an older backup a couple of different ways: using a tag name or using the restore … until command.

RMAN> list backup summary;

List of Backups
===============
Key TY LV S Device Type #Pieces #Copies Compressed Tag
--- -- -- - ----------- ------- ------- ---------- ------------------
79  B  F  A DISK              1       1 NO         TAG20120724T210842
80  B  F  A DISK              1       1 NO         TAG20120724T210918
81  B  F  A DISK              1       1 NO         TAG20120729T122948

$ rman target /

RMAN> startup mount;

RMAN> restore database from tag TAG20120724T210842;

RMAN> recover database;

RMAN> alter database open;

Using restore … until

You can also tell RMAN to restore data files from a point in the past using the until clause of the restore command in one of the following ways:

1) Until SCN

2) Until log sequence

3) Until restore point

4) Until time

RMAN> startup mount;

RMAN> restore database until scn 1254174;

RMAN> recover database;

RMAN> alter database open;

Recovering Through Resetlogs

Problem

You recently performed an incomplete recovery that required you to open your database with the open resetlogs command. Before you could back up your database, you experienced another media failure. Prior to Oracle Database 10g, it was extremely difficult to recover using a backup of a previous incarnation of your database. You now wonder whether you can get your database back in one piece.

Solution

Beginning with Oracle Database 10g, you can restore a backup from a previous incarnation and recover through a previously issued open resetlogs command. You simply need to restore and recover your database as required by the type of failure. In this example, the control files and all data files are restored:

$ rman target /

RMAN> startup nomount;

RMAN> restore controlfile from autobackup;

RMAN> alter database mount;

RMAN> restore database;

RMAN> recover database;

RMAN> alter database open resetlogs;

When should we open our database with the open resetlogs command?

1) After performing an incomplete recovery.

2) After recovering with a backup control file.

3) After using a re-created control file when you're missing your current online redo logs.

Difference between pre-10g and 10g behavior

Prior to Oracle Database 10g, you were required to take a backup of your database immediately after you reset the online redo log files. This is because resetting the online redo log files creates a new incarnation of your database and resets your log sequence number back to 1. Prior to Oracle Database 10g, any backups taken before resetting the logs could not easily be used to restore and recover your database.

Starting with Oracle Database 10g, Oracle allows you to restore from a backup from a previous incarnation of your database and issue restore and recovery commands as applicable to the type of failure that has occurred. Oracle keeps track of log files from all incarnations of your database. The V$LOG_HISTORY view is no longer cleared out during a resetlogs operation and contains information for the current incarnation as well as any previous incarnations.

If you're using an FRA (fast recovery area), Oracle creates the archived redo logs using the Oracle Managed File name format. If you aren't using an FRA, the format mask of the archived redo log files must include the thread (%t), sequence (%s), and resetlogs ID (%r). For example:

RMAN> list incarnation

Restoring the spfile

If you receive an error such as this when running the restore command:

RMAN-20001: target database not found in recovery catalog

then set the database identifier manually before retrying:

RMAN> set dbid 3414586809;

Not Using a Recovery Catalog, RMAN Auto Backup in Default Location

For this scenario you need to know your database identifier before you can proceed. See Recipe 10-3 for details about determining your DBID.

This recipe assumes that you have configured your auto backups of the spfile to go to the default location. The default location depends on your operating system. For Linux/Unix, the default location is ORACLE_HOME/dbs. On Windows systems, it's usually ORACLE_HOME\database.

$ rman target /

RMAN> startup force nomount; # start instance for retrieval of spfile

RMAN> set dbid 3414586809;

RMAN> restore spfile from autobackup;

RMAN> startup force; # startup using restored spfile

Not Using a Recovery Catalog, RMAN Auto Backup Not in Default Location

$ rman target /

RMAN> set dbid 3414586809;

RMAN> startup force nomount; # start instance for retrieval of spfile

RMAN> restore spfile from

'/u01/fra/012C/autobackup/2012_07_30/o1_mf_s_789989279_81fb00rl_.bkp';

RMAN> startup force; # startup using restored spfile

Restoring Archived Redo Log Files

Restoring controlfile

If the FRA is configured and autobackup of the control file is enabled, then just use "restore controlfile from autobackup", because RMAN knows the location /<FRA>/<target database SID>/autobackup/YYYY_MM_DD/<backup piece file>.

If the FRA is configured but autobackup of the control file is not configured in RMAN, then RMAN uses /<FRA>/<target database SID>/backupset/YYYY_MM_DD/<backup piece file>.

If a recovery catalog is configured, then just use "restore controlfile", because the recovery catalog knows all the information about the backups.

If neither a recovery catalog nor the FRA is configured, then we first need to set the DBID.
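A minimal sketch of that last case, reusing the DBID from the earlier examples:

```sql
RMAN> startup nomount;
RMAN> set dbid 3414586809;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
```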

Catalog command: you can add metadata information about backup pieces directly to your control file.

Catalog recovery area

Catalog
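For example (the backup directory path is an assumed example):

```sql
RMAN> catalog recovery area;
RMAN> catalog start with '/u01/backup/';
```

"catalog recovery area" registers everything found in the FRA, while "catalog start with" registers all backup pieces found under the given directory.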

Note: to restore the spfile, we need to use STARTUP FORCE.

ROOT.sh

It creates additional directories and sets the appropriate ownership and permissions on files owned by the root user.

Catalog (catalog.sql) - creates the data dictionary views.

Catproc (catproc.sql) - creates the built-in PL/SQL procedures, packages, etc.

If a long-running transaction can't find the undo data it needs, it generates the well-known Oracle snapshot-too-old error. Here's an example:
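The error looks like the following (the segment number and name vary from system to system):

```
ORA-01555: snapshot too old: rollback segment number 9 with name "_SYSSMU9$" too small
```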

Recovering Data Files Not Backed Up

Assume that you have added a datafile and experienced a failure before taking a backup of the newly added datafile. How do you recover it? You may wonder: how can we restore and recover a datafile for which no backup exists?

When media failure occurs, there may be two situations:

1) Using the current control file (you have lost only the datafile, not the control file; the control file has an entry for the newly added datafile and is available).

2) Using a backup control file (both the newly added datafile and the control file are lost, so the old control file doesn't contain information about the newly added datafile).

For the first situation, you can restore and recover at the datafile, tablespace, or database level. But for the second situation, you must restore and recover at the database level.

Using a Current Control File

In this example, we use the current control file and are recovering the tools01.dbf datafile in the newly added tools tablespace.

$ rman target /

RMAN> startup mount;

RMAN> restore tablespace tools;

You should see a message like the following in the output as RMAN re-creates the data file:

creating datafile file number=5 name=/u01/dbfile/o12c/tools01.dbf

Now issue the recover command and open the database:

RMAN> recover tablespace tools;

RMAN> alter database open;

Using a Backup Control File

This scenario is applicable anytime you use a backup control file to restore and recover a data file that has not yet been backed up. First, we restore a control file from a backup taken prior to when the data file was created:

$ rman target /

RMAN> startup nomount;

RMAN> restore controlfile from
'/u01/app/oracle/product/12.1.0.1/db_1/dbs/c-3412777350-20120730-05';

RMAN> alter database mount;

Now you can verify that the control file has no record of the tablespace that was added after the backup was taken:

RMAN> report schema;

When the control file has no record of the data file, RMAN will throw an error if you attempt to recover at the tablespace or data file level. In this situation, you must use the restore database and recover database commands as follows:

RMAN> restore database;

RMAN> recover database;

Next, you should see quite a bit of RMAN output. Near the end of the output you should see a line similar to this, indicating that the data file has been re-created:

creating datafile file number=5 name=/u01/dbfile/o12c/tools01.dbf

Since you restored using a backup control file, you are required to open the database with the resetlogs command:

RMAN> alter database open resetlogs;

How It Works

Starting with Oracle Database 10g, there is enough information in the redo stream for RMAN to automatically re-create a data file that was never backed up. It doesn't matter whether the control file has a record of the data file.

Prior to Oracle Database 10g, manual intervention from the DBA was required to recover a data file that had not been backed up yet. If Oracle identified that a missing data file had not been backed up, the recovery process would halt, and you would have to identify the missing data file and re-create it:

SQL> alter database create datafile '/u01/dbfile/o12c/tools01.dbf'
as '/u01/dbfile/o12c/tools01.dbf' size 10485760 reuse;

After re-creating the missing data file, you had to manually restart the recovery session. If you are using an old version of the Oracle database, see MOS note 1060605.6 for details on how to re-create a data file in this scenario.

In Oracle Database 10g and newer, this is no longer the case. RMAN automatically detects that there isn't a backup of a data file being restored and re-creates the data file from information retrieved from the control file and/or redo information as part of the restore and recovery operations.

ARCHIVE log

You may have disk space issues and need to spread the restored archived redo logs across multiple locations. You can do so as follows:

RMAN> run{

set archivelog destination to '/u01/archrest';

restore archivelog from sequence 1 until sequence 10;

set archivelog destination to '/u02/archrest';

restore archivelog from sequence 11;

}

http://scn.sap.com/community/oracle/blog/2014/11/03/how-to- interpret-awr- reports

4. Instance Efficiency Percentages – these need to be looked at very carefully, since they are not really a good measurement of database performance.

For example, very processing-intensive SQL statements that are executed repeatedly and only read blocks from the buffer pool increase the buffer pool hit ratio. After optimizing such statements, the hit ratio decreases even though performance improves.

Buffer Nowait – shows how often the buffer cache was accessed with no wait time.

Buffer Hit – shows how often a requested block was found in the buffer cache without requiring disk access.

Redo NoWait – shows whether the log_buffer size is set correctly (preemptive redo log switches in Oracle 11.2).

Parse CPU to Parse Elapsd – shows how much time was spent on parsing while waiting for resources.

Non-Parse CPU – in the following example the figure is close to 100%, meaning that the overall CPU usage for statement parsing is only 0.15%.

http://docs.oracle.com/cd/B28359_01/backup.111/b28273/rcmsynta027.htm#CHDCCBDF

AWR (Automatic Workload Repository)

Is extremely useful for diagnosing database performance issues.

ADDM (Automatic Database Diagnostic Monitor)

Analyzes AWR data to diagnose performance problems and recommend solutions, including SQL tuning.

ASH (Active Session History)

This report provides recent active session activity.
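All three reports can be generated from SQL*Plus using the standard scripts shipped under ORACLE_HOME/rdbms/admin:

```sql
SQL> @?/rdbms/admin/awrrpt.sql   -- AWR report
SQL> @?/rdbms/admin/addmrpt.sql  -- ADDM report
SQL> @?/rdbms/admin/ashrpt.sql   -- ASH report
```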
