Thursday, September 13, 2018

DB_UNIQUE_NAME Conflict on Exadata


I was trying to restore one of our databases on a SuperCluster M7 server, and I ran into an error during the controlfile restore itself.

RMAN> run
{
allocate channel c1 type 'sbt_tape';
SEND 'NB_ORA_SERV=bkppdb01, NB_ORA_CLIENT=monetadb01-bkp';
restore controlfile from 'cntrl_FMREF_bnt98v1v_1_1';
RELEASE CHANNEL c1;
}

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=1141 instance=fmref1 device type=SBT_TAPE
channel c1: Veritas NetBackup for Oracle - Release 7.7.3 (2016051915)

sent command to channel: c1

Starting restore at 05-AUG-18

channel c1: restoring control file
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/05/2018 14:55:47
ORA-19870: error while restoring backup piece cntrl_FMREF_bnt98v1v_1_1
ORA-19504: failed to create file "+TSTDATA1"
ORA-15045: ASM file name '+TSTDATA1' is not in reference form
ORA-17502: ksfdcre:5 Failed to create file +TSTDATA1

RMAN>

This backup piece was valid, so the errors surprised me. On investigation I found some informative errors in the ASM alert log, shown below.

Errors in file /u01/app/oratst/diag/rdbms/fmref/fmref1/trace/fmref1_ora_1268.trc:
ORA-15025: could not open disk "o/192.168.XX.XX;192.168.XX.XX/TSTDATA1_CD_06_vsc02celadm02"
Sun Aug 05 14:55:37 2018
WARNING: Write Failed. group:1 disk:18 AU:18379 offset:0 size:16384
path:Unknown disk
         incarnation:0x12 synchronous result:'I/O error'
         subsys:Unknown library krq:0xffffffff7942e520 bufp:0xffffffff7902b000 osderr1:0xf4 osderr2:0x0
         IO elapsed time: 0 usec Time waited on I/O: 0 usec
Sun Aug 05 14:55:37 2018
Errors in file /u01/app/oratst/diag/rdbms/fmref/fmref1/trace/fmref1_ora_1268.trc:
ORA-15025: could not open disk "o/192.168.XX.XX;192.168.XX.XX/TSTDATA1_CD_11_vsc02celadm03"
WARNING: Write Failed. group:1 disk:35 AU:18408 offset:0 size:16384
path:Unknown disk
         incarnation:0x12 synchronous result:'I/O error'
         subsys:Unknown library krq:0xffffffff786b8030 bufp:0xffffffff7902b000 osderr1:0xf4 osderr2:0x0
         IO elapsed time: 0 usec Time waited on I/O: 0 usec
Sun Aug 05 14:55:37 2018
Errors in file /u01/app/oratst/diag/rdbms/fmref/fmref1/trace/fmref1_ora_1268.trc:
ORA-15025: could not open disk "o/192.168.XX.XX;192.168.XX.XX/TSTDATA1_CD_00_vsc02celadm01"
WARNING: Write Failed. group:1 disk:0 AU:18389 offset:0 size:16384
path:Unknown disk
         incarnation:0x12 synchronous result:'I/O error'
         subsys:Unknown library krq:0xffffffff786b7ba8 bufp:0xffffffff7902b000 osderr1:0xf4 osderr2:0x0
         IO elapsed time: 0 usec Time waited on I/O: 0 usec
Sun Aug 05 14:55:37 2018
Errors in file /u01/app/oratst/diag/rdbms/fmref/fmref1/trace/fmref1_ora_1268.trc:
ORA-15080: synchronous I/O operation failed to write block 0 of disk 18 in disk group TSTDATA1
WARNING: failed to write mirror side 1 of virtual extent 0 logical extent 0 of file 332 in group 1 on disk 18 allocation unit 18379
Sun Aug 05 14:55:37 2018
Errors in file /u01/app/oratst/diag/rdbms/fmref/fmref1/trace/fmref1_ora_1268.trc:
ORA-15080: synchronous I/O operation failed to write block 0 of disk 35 in disk group TSTDATA1
WARNING: failed to write mirror side 2 of virtual extent 0 logical extent 1 of file 332 in group 1 on disk 35 allocation unit 18408
Sun Aug 05 14:55:37 2018
Errors in file /u01/app/oratst/diag/rdbms/fmref/fmref1/trace/fmref1_ora_1268.trc:
ORA-15080: synchronous I/O operation failed to write block 0 of disk 0 in disk group TSTDATA1
WARNING: failed to write mirror side 3 of virtual extent 0 logical extent 2 of file 332 in group 1 on disk 0 allocation unit 18389
WARNING: group 1 file 332 vxn 0 block 0 write I/O failed

Examining these errors, I found the cause: a database with the same name had already been created on our storage, and creating a DB with that name again triggered the error from the CELL.

Cause: A code change introduced in storage server software 12.1.2.1.2 enforces a globally unique DB_UNIQUE_NAME across all virtual hosts sharing cells.

Workaround:

Edit cellinit.ora on each storage cell to add _cell_db_unique_name_check=false, then restart the cell services. This can be done rolling, one cell at a time.
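The parameter change is a one-line addition to each cell's cellinit.ora (the exact file path can vary by release; /opt/oracle/cell/cellsrv/deploy/config/cellinit.ora is the usual location, so verify on your own cells). A sketch of the entry:

```
# cellinit.ora on each storage cell -- append this line:
_cell_db_unique_name_check=false
```

After editing, restart the cell services on that cell (for example with cellcli -e 'alter cell restart services all'), confirm the grid disks are back online, and only then move on to the next cell so the change stays rolling.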

OR

Use a different DB_UNIQUE_NAME to avoid any modification at the CELL level.
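If you take this second route, the restore instance only needs its DB_UNIQUE_NAME to differ from the name the cells already know; DB_NAME can stay the same so the controlfile restore still matches the backup. A minimal init.ora sketch (FMREF comes from the post; FMREF_RST is a hypothetical value chosen for illustration):

```
# init.ora fragment for the instance performing the restore
db_name=FMREF             # must match the backed-up database
db_unique_name=FMREF_RST  # hypothetical; must differ from the name registered on the cells
```

Start the instance NOMOUNT with this pfile and rerun the RMAN controlfile restore; since the cells see a new DB_UNIQUE_NAME, the ORA-15045/ORA-17502 errors should not recur.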
