Monday, January 27, 2020

Plug a non-CDB into a CDB as a PDB



[oracle@db19c noncdb]$ export ORACLE_SID=noncdb
[oracle@db19c noncdb]$ sqlplus / as sysdba

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup open read only;
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  8628936 bytes
Variable Size             373294392 bytes
Database Buffers          687865856 bytes
Redo Buffers                3952640 bytes
Database mounted.
Database opened.
SQL>
SQL> BEGIN
  DBMS_PDB.DESCRIBE(
    pdb_descr_file => '/tmp/noncdb.xml');
END;
/

PL/SQL procedure successfully completed.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

[oracle@db19c noncdb]$
[oracle@db19c noncdb]$ export ORACLE_SID=cdb1
[oracle@db19c noncdb]$
[oracle@db19c noncdb]$ sqlplus / as sysdba

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> SET SERVEROUTPUT ON
DECLARE
  l_result BOOLEAN;
BEGIN
  l_result := DBMS_PDB.check_plug_compatibility(
                pdb_descr_file => '/tmp/noncdb.xml',
                pdb_name       => 'pdb2');

  IF l_result THEN
    DBMS_OUTPUT.PUT_LINE('compatible');
  ELSE
    DBMS_OUTPUT.PUT_LINE('incompatible');
  END IF;
END;
/
compatible

PL/SQL procedure successfully completed.

SQL>
SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/tmp/noncdb.xml'
  2  COPY
  3  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/noncdb/', '/u01/app/oracle/oradata/cdb/cdb1/pdb2/');

Pluggable database created.

SQL> col name for a20
SQL> SELECT name, open_mode FROM v$pdbs;

NAME                 OPEN_MODE
-------------------- ----------
PDB$SEED             READ ONLY
PDB1                 READ WRITE
PDB2                 MOUNTED

SQL> ALTER SESSION SET CONTAINER=pdb2;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

SQL> SELECT name, open_mode FROM v$pdbs;

NAME                 OPEN_MODE
-------------------- ----------
PDB2                 MOUNTED

1 row selected.

SQL> ALTER PLUGGABLE DATABASE OPEN;

Pluggable database altered.

SQL> SELECT name, open_mode FROM v$pdbs;

NAME                 OPEN_MODE
-------------------- ----------
PDB2                 READ WRITE

1 row selected.

SQL>

ORA-15080 on ASM rebalance operation


Noticed error ORA-15080 during an ASM rebalance operation, and the rebalance got stuck.

Solution

The workaround is to disable the HARD check on the disk group, restart the rebalance operation, and set the check back to true once the rebalance completes.

alter diskgroup RECOC1 set attribute 'hard_check.enabled' = 'FALSE';

ALTER DISKGROUP RECOC1 REBALANCE POWER 32;
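
The rebalance progress can be monitored from the ASM instance before re-enabling the check; a quick sketch using V$ASM_OPERATION (no rows returned means no rebalance is currently running):

SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation;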

After the rebalance completes, re-enable the HARD check:

alter diskgroup RECOC1 set attribute 'hard_check.enabled' = 'TRUE';

Thursday, January 3, 2019

DBCA does not call 'datapatch' for database changes in 12.1.0.X.


After creating a new database, check the SQL patch level registered in the database:

select PATCH_ID, PATCH_UID, VERSION, STATUS, DESCRIPTION
from DBA_REGISTRY_SQLPATCH
order by BUNDLE_SERIES;

no rows selected

whereas opatch lsinventory shows the patch installed

Patch description: “Database Patch Set Update : 12.1.0.2.5 (21359755)”

Solution

DBCA does not execute datapatch in Oracle 12.1.0.X. The solution is to apply the SQL changes manually after creating a new database.

After creating a new database make sure to run:

cd $ORACLE_HOME/OPatch
./datapatch -verbose

then check DBA_REGISTRY_SQLPATCH view

In a multitenant database, datapatch runs automatically against all pluggable databases that are open read-write, and against PDB$SEED as well. There is no need to run datapatch separately for PDBs created afterwards from the seed.
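
To verify the patch status across all containers in a CDB, a query against CDB_REGISTRY_SQLPATCH from the root can be used (a sketch; columns as in the per-container DBA_REGISTRY_SQLPATCH view, plus CON_ID):

SELECT con_id, patch_id, status, description
FROM   cdb_registry_sqlpatch
ORDER  BY con_id;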

ORA-16857: standby disconnected from redo source for longer than specified threshold


DGMGRL> show configuration

Configuration - usage

  Protection Mode: MaxPerformance
  Members:
  usage   - Primary database
    usagedr - Physical standby database
      Warning: ORA-16857: standby disconnected from redo source for longer than specified threshold

Fast-Start Failover: DISABLED

Configuration Status:
WARNING   (status updated 14 seconds ago)

DGMGRL> show database usage

Database - usage

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    usage1
    usage2

Database Status:
SUCCESS

DGMGRL> show database usagedr

Database - usagedr

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      23 minutes 55 seconds (computed 2 seconds ago)
  Apply Lag:          23 minutes 56 seconds (computed 2 seconds ago)
  Average Apply Rate: 1.40 MByte/s
  Real Time Query:    OFF
  Instance(s):
    usagedr1 (apply instance)
    usagedr2

Database Status:
SUCCESS

DGMGRL>

Solution

Setting 'TransportDisconnectedThreshold' to 0 seconds disables the alert; alternatively, the property can be set to a value higher than the expected disconnection window.
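
For example, using the standby name from the configuration above (a sketch; the property is set per database in the broker configuration):

DGMGRL> show database usagedr TransportDisconnectedThreshold
DGMGRL> edit database usagedr set property TransportDisconnectedThreshold = 0;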

COMMON CAUSES OF OCSSD EVICTIONS



Oracle Clusterware is designed to perform a node eviction by removing one or more nodes from the cluster if some critical problem is detected.  A critical problem could be a node not responding via a network heartbeat, a node not responding via a disk heartbeat, a hung or severely degraded machine, or a hung ocssd.bin process.  The purpose of this node eviction is to maintain the overall health of the cluster by removing bad members.

The common causes are:

  1. Network failure or latency between nodes. By default it takes 30 seconds of consecutively missed network heartbeats (determined by the CSS misscount setting) to cause a node eviction. 
  2. Problems writing to or reading from the CSS voting disk.  If the node cannot perform a disk heartbeat to the majority of its voting files, then the node will be evicted.
  3. A member kill escalation.  For example, database LMON process may request CSS to remove an instance from the cluster via the instance eviction mechanism.  If this times out it could escalate to a node kill. 
  4. An unexpected failure or hang of the OCSSD process; this can be caused by any of the above issues or by something else.
  5. An Oracle bug.

To some extent these timeouts can be tuned using the settings below.

$CRS_HOME/bin/crsctl set css misscount n -- default 30 sec
$CRS_HOME/bin/crsctl set css reboottime n -- default 3
$CRS_HOME/bin/crsctl set css disktimeout n -- default 200 sec
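
Before changing any of these, the current values can be checked with the corresponding get commands (a sketch; run as the Grid Infrastructure owner or root):

$CRS_HOME/bin/crsctl get css misscount
$CRS_HOME/bin/crsctl get css reboottime
$CRS_HOME/bin/crsctl get css disktimeout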