
Transaction Guard | Application Continuity | DML Failover in RAC 19c


Introduction:

Starting from 12c, TAF can fail over DMLs to the surviving nodes in a RAC environment in case of any interruption, including a node, service, or network failure. This feature is called Application Continuity, which is built on top of Transaction Guard.

Pre-requisites: [For 19c]

The applications should use Oracle Client 19c. Although Oracle note Doc ID 2011697.1 mentions that any 12.1+ Oracle Client will work, it didn't work for me while testing against a 19c DB.
Application Continuity works on the assumption that the applications are well written in terms of connection usage:
  • Borrowing connections and returning them to the connection pool instead of pinning connections.
  • If a statement cache at the application-server level is enabled, it must be disabled when replay is used; use the JDBC statement cache instead, which is compatible with Application Continuity.
  • Expect additional CPU utilization on the client side to handle garbage collection.
Implementation:

# srvctl add service -database sprint -service pssfztest_rpt -preferred sprint1,sprint2 -tafpolicy BASIC -failovertype TRANSACTION -commit_outcome TRUE -failovermethod BASIC -failoverretry 100 -stopoption IMMEDIATE -session_state DYNAMIC -role PRIMARY -policy AUTOMATIC -clbgoal long -verbose -failover_restore LEVEL1 -replay_init_time 3600

Notes:

- Don't use replay_init_time along with failoverdelay or you will get this error when trying to start the service:
CRS-2632: There are no more servers to try to place resource 'ora.sprint.pssfztest_gg.svc' on that would satisfy its placement policy

- In order for applications to use the DML failover feature (Application Continuity), application users should be granted execute permission on DBMS_APP_CONT: [For simplicity I'm granting it to everyone, but some applications don't work properly with this feature, so it's recommended to test your application and grant this permission only to the users of the applications that support Application Continuity]
SQL>grant execute on DBMS_APP_CONT to public;
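  For reference, this is roughly how an application uses Transaction Guard after a recoverable error: the client driver exposes the logical transaction ID (LTXID) of the failed session (e.g. getLogicalTransactionId() in JDBC), and from a new session the application asks DBMS_APP_CONT whether the in-doubt transaction actually committed. A minimal PL/SQL sketch (the variable names are mine, and the LTXID would really come from the client driver, not from PL/SQL):
  SQL> DECLARE
         l_ltxid          RAW(128);   -- LTXID captured by the client driver from the dead session
         l_committed      BOOLEAN;
         l_call_completed BOOLEAN;
       BEGIN
         DBMS_APP_CONT.GET_LTXID_OUTCOME(client_ltxid        => l_ltxid,
                                         committed           => l_committed,
                                         user_call_completed => l_call_completed);
         IF l_committed THEN
           DBMS_OUTPUT.PUT_LINE('Transaction already committed - do not resubmit');
         ELSE
           DBMS_OUTPUT.PUT_LINE('Transaction did not commit - safe to resubmit');
         END IF;
       END;
       /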

- clbgoal=short is less stable than clbgoal=long, in that failover retries can get exhausted before reaching their maximum limit.

- clbgoal=short balances the sessions between RAC nodes based on response time, while clbgoal=long balances the sessions based on the total number of sessions on each node.

- The PRECONNECT option for -tafpolicy parameter is deprecated in 19c.

- If you set -failovertype = TRANSACTION, then you must set -commit_outcome to TRUE.

- For -session_state, Oracle recommends setting it to DYNAMIC for most applications, so the default session settings (NLS settings, optimizer preferences, ...) are used after the session fails over.

- replay_init_time: Specifies the time in seconds after which replay (failover) will no longer be attempted. [It's set to 3600 sec = 1 hour above]
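- On the client side, the application should connect through the service created above. A typical tnsnames.ora alias for it could look like the following (the SCAN host name and the retry values here are only an illustration, not taken from the environment above):
pssfztest_rpt =
 (DESCRIPTION =
   (CONNECT_TIMEOUT=90)(RETRY_COUNT=30)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)
   (ADDRESS_LIST =
     (LOAD_BALANCE=on)
     (ADDRESS=(PROTOCOL=TCP)(HOST=sprint-scan)(PORT=1521)))
   (CONNECT_DATA=(SERVICE_NAME=pssfztest_rpt)))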

The following activities, if they happen, can cause the transaction to fail over without being disrupted [the transaction will hang for a few seconds until connectivity is restored on the available nodes]:

- instance crash.
- Partial Network disruption.
- OS kill -9 of the instance main processes (PMON/SMON).
- OS kill -STOP followed by kill -9 of the same session.
- shutdown immediate (from the SQL*Plus console).
- shutdown abort (from the SQL*Plus console).
- srvctl stop service -d sprint -i sprint1
- srvctl stop service -d sprint -i sprint1 -force
- srvctl stop instance -d sprint -i sprint1 -failover
- ALTER SYSTEM KILL SESSION command.
- ALTER SYSTEM DISCONNECT SESSION command.

The following activities will terminate the DML transaction WITHOUT failing it over, but the session itself will re-connect automatically: [If performed on the node where the session is connected]
- OS kill -9 of the session PID.
- ALTER SYSTEM CANCEL SQL '<SID>,<SERIAL#>';
- srvctl stop instance -d sprint -i sprint1 -force
- crsctl stop cluster
- crsctl stop crs



Conclusion:

- The Application Continuity feature lets you carry out activities like software patching and hardware/network maintenance with truly ZERO downtime.

- Before using the Application Continuity feature, make sure your applications are compatible with it by testing all the scenarios you may go through. It is wise to consult your application vendor before implementing this feature.
Using this feature blindly without proper testing may result in unexpected application behavior.

ORA-28040: No matching authentication protocol

Problem:
When connecting from Oracle Client 11g to an 18c DB or higher it throws this error:
ORA-28040: No matching authentication protocol

Analysis:
Starting from 18c, the SQLNET.ALLOWED_LOGON_VERSION_SERVER parameter defaults to 12, which means that if your applications use Oracle Client 11g to connect to the DB server, they will get ORA-28040 unless you set this parameter to 11.

Solution:
Under $ORACLE_HOME/network/admin, set the parameter SQLNET.ALLOWED_LOGON_VERSION_SERVER to 11; in case the sqlnet.ora file does not exist under $ORACLE_HOME/network/admin, create it.

[On the Database Server by the oracle user]
Add SQLNET.ALLOWED_LOGON_VERSION_SERVER=11 to sqlnet.ora file:

# vi $ORACLE_HOME/network/admin/sqlnet.ora

SQLNET.ALLOWED_LOGON_VERSION_SERVER=11

Note: Neither restarting the listener nor restarting the DB is required here; the change takes effect immediately once you save the sqlnet.ora file.

Note: sqlnet.ora must be located under $ORACLE_HOME/network/admin. Creating a symbolic link to it under $GRID_HOME/network/admin is a good idea but not mandatory; the sqlnet.ora file itself must always be located under ORACLE_HOME.

Note: In case your application connects to a 12.2 DB or higher from an Oracle Client older than 11.2, e.g. 11.1 or 10g, then you must upgrade the Oracle Client to at least 11.2. According to MOS (Doc ID 207303.1), the minimum compatible Oracle Client version for connecting to a 12.2 DB and higher is Oracle Client 11.2.
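As an extra sanity check (my own addition, not part of the original fix), verify that the affected accounts still have a password verifier the old client can use; if PASSWORD_VERSIONS no longer lists a 10G/11G verifier for a user, an old client may still fail to log in even after the parameter change (the username below is just an example):

SQL> select username, password_versions from dba_users where username='APP_USER';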

References:
Client / Server Interoperability Support Matrix for Different Oracle Versions (Doc ID 207303.1)

CRS-4000: Command Start failed, or completed with errors

Problem:
 
While restarting the clusterware on one cluster node I got this error:

[root@fzppon06vs1n~]# crsctl start cluster
CRS-2679: Attempting to clean 'ora.ctssd' on 'fzppon06vs1n'
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'fzppon06vs1n'
CRS-2672: Attempting to start 'ora.evmd' on 'fzppon06vs1n'
CRS-2680: Clean of 'ora.ctssd' on 'fzppon06vs1n' failed
CRS-2676: Start of 'ora.evmd' on 'fzppon06vs1n' succeeded
CRS-2674: Start of 'ora.drivers.acfs' on 'fzppon06vs1n' failed
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'fzppon06vs1n'
CRS-2679: Attempting to clean 'ora.ctssd' on 'fzppon06vs1n'
CRS-2680: Clean of 'ora.ctssd' on 'fzppon06vs1n' failed
CRS-2679: Attempting to clean 'ora.ctssd' on 'fzppon06vs1n'
CRS-2680: Clean of 'ora.ctssd' on 'fzppon06vs1n' failed
CRS-2674: Start of 'ora.drivers.acfs' on 'fzppon06vs1n' failed
CRS-2672: Attempting to start 'ora.storage' on 'fzppon06vs1n'
CRS-2676: Start of 'ora.storage' on 'fzppon06vs1n' succeeded
CRS-4000: Command Start failed, or completed with errors.



Analysis:
 
When checking the clusterware alert log, I found that the log stopped at this line:

2019-10-07 12:15:45.495 [EVMD(23031)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 23031
I checked the time between both cluster nodes and it was in sync.
I tried to stop the clusterware forcefully and start it up:
[root@fzppon06vs1n~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'fzppon06vs1n'
CRS-2679: Attempting to clean 'ora.ctssd' on 'fzppon06vs1n'
CRS-2680: Clean of 'ora.ctssd' on 'fzppon06vs1n' failed
CRS-2679: Attempting to clean 'ora.ctssd' on 'fzppon06vs1n'
CRS-2680: Clean of 'ora.ctssd' on 'fzppon06vs1n' failed
CRS-2679: Attempting to clean 'ora.ctssd' on 'fzppon06vs1n'
CRS-2680: Clean of 'ora.ctssd' on 'fzppon06vs1n' failed
CRS-2679: Attempting to clean 'ora.ctssd' on 'fzppon06vs1n'
CRS-2680: Clean of 'ora.ctssd' on 'fzppon06vs1n' failed
CRS-2679: Attempting to clean 'ora.ctssd' on 'fzppon06vs1n'
CRS-2680: Clean of 'ora.ctssd' on 'fzppon06vs1n' failed
CRS-2799: Failed to shut down resource 'ora.cssd' on 'fzppon06vs1n'
CRS-2799: Failed to shut down resource 'ora.cssdmonitor' on 'fzppon06vs1n'
CRS-2799: Failed to shut down resource 'ora.ctssd' on 'fzppon06vs1n'
CRS-2799: Failed to shut down resource 'ora.gipcd' on 'fzppon06vs1n'
CRS-2799: Failed to shut down resource 'ora.gpnpd' on 'fzppon06vs1n'
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'fzppon06vs1n' has failed
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.


It looks like there is a problem with stopping the cssd and ctssd resources as well.

Solution:
 
Restarted the node and the clusterware came up properly without errors:
[root@fzppon06vs1n ~]# sync;sync;sync; init 6
Analyzing such a problem was challenging, as there were no errors reported in the clusterware logs while the clusterware was hung during its startup. So far, restarting the RAC node remains one of the silver-bullet troubleshooting techniques for many nonsensical clusterware behaviors ;-)

A Summary of The Remarkable New Features in 12c [12.1 & 12.2]

I intended to post about the 19c new features, but I thought that most DBAs still have their production systems on 11g, so it would be informative to cover the 12c [12.1 & 12.2] new features first "with the implementation steps where possible", before I jump to the 18c & 19c features which I'll cover in the next posts; just to put things in a good order.

Architecture New Features:
=======================
The 12c database introduced a new type of database architecture called Multitenant "which is similar to the database architecture in SQL Server" -- where one instance can serve more than one database on the same server. The old architecture, which is now called Non-Container database, is still supported as well.
Container [multitenant] DB: Is a DB that holds Root + Seed + one or more pluggable DBs.
> Only the CDB has an SPFILE; only a few parameters can be changed at the PDB level.
> Every container DB will have SYSTEM, SYSAUX, UNDO and Temporary tablespaces plus REDO and controlfiles.
> Every Pluggable DB will have its own SYSTEM & SYSAUX, an [optional] temporary tablespace and its default tablespaces,
  and shares the usage of the UNDO and TEMPORARY tablespaces of the container DB with the other pluggable DBs.
> You can log in directly from the OS to the container DB using OS authentication; for a pluggable DB you have to use EZConnect to connect directly from the OS.
> When starting up a container DB it will not automatically open its pluggable DBs; you have to do this manually:
  SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN;
  SQL> ALTER PLUGGABLE DATABASE ALL  OPEN;
  SQL> ALTER PLUGGABLE DATABASE ALL EXCEPT <ONE_PDB> OPEN/CLOSE;
> When Shutting down the container DB it will close all pluggable DBs automatically.
> You can close one or more pluggable DBs alone without affecting the other pluggable DBs in the container:
  SQL> ALTER PLUGGABLE DATABASE PDB1 CLOSE IMMEDIATE;
> You can restore one or more pluggable DBs with full or PITR recovery, or open them in restricted/read-only mode, without affecting the other pluggable DBs in the container.
> Users can be COMMON users, who can access all databases in the container; a common user's name must start with C##.
> Users can be local; they can only be created in a pluggable DB and will ONLY access the pluggable DB they were created in.
> Pluggable DB can be cloned on the same container DB using one command:
  SQL> CREATE PLUGGABLE DATABASE PDBTEST FROM PDB1 FILE_NAME_CONVERT=('/oradata/orcl/pdb1.dbf','/oradata/orcl/pdbtest.dbf');
> Pluggable DB can be cloned to another container DB in the same server or remotely using a DB link using a single command:
  SQL> CREATE PLUGGABLE DATABASE PDBTEST FROM PDB1@PDB1DBLINK FILE_NAME_CONVERT=('/oradata/orcl/pdb1.dbf','/oradata/neworcl/pdbtest.dbf');
> Pluggable DB can be unplugged and plugged [Moved] to a different container DB.
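  As a quick illustration of the unplug/plug flow (paths and the PDB name are just examples):
  SQL> ALTER PLUGGABLE DATABASE PDB1 CLOSE IMMEDIATE;
  SQL> ALTER PLUGGABLE DATABASE PDB1 UNPLUG INTO '/oradata/pdb1.xml';
  SQL> DROP PLUGGABLE DATABASE PDB1 KEEP DATAFILES;
  -- On the target container DB:
  SQL> CREATE PLUGGABLE DATABASE PDB1 USING '/oradata/pdb1.xml' NOCOPY;
  SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN;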

The aim of the container DB is to combine multiple DBs into one DB for ease of administration and better use of resources.
But what is the difference between using a Container DB and merging all databases into one DB?
> A Container DB can segregate the duties between DBAs. (Can be done on a Non-Container DB using Database Vault.)
> It is easy to clone a pluggable DB within the same Container DB / same server or to a remote server. (Can be done on a non-container DB using export/import or transportable tablespaces.)
> CDB_XXX views show information about CDB$ROOT and ALL the PDBs, while DBA_XXX views show information for the CURRENT container (PDB or CDB).
I'm still searching for more pros of using a container DB over the older releases' style ....

Performance New Features:
======================

[12.1] Adaptive Query Optimization: Enabled by default through parameter "optimizer_adaptive_features"
In case the optimizer chose a bad execution plan and, during execution, figured out a better plan, it will switch to the better plan on the fly.
Starting from 12.2 this parameter has been removed and replaced with two parameters:
       OPTIMIZER_ADAPTIVE_PLANS:      Default (TRUE).  Enables/disables adaptive plans.
       OPTIMIZER_ADAPTIVE_STATISTICS: Default (FALSE). Enables/disables SQL plan directives, statistics feedback.
       Because it adds more time to the parsing phase, it's recommended to keep it FALSE, unless the database is a DW where it can be set to TRUE.
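       For example, to check the current values and adjust the two parameters explicitly (a plain illustration; set them according to your workload):
       SQL> show parameter optimizer_adaptive
       SQL> ALTER SYSTEM SET optimizer_adaptive_plans=TRUE SCOPE=BOTH;
       SQL> ALTER SYSTEM SET optimizer_adaptive_statistics=FALSE SCOPE=BOTH;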
[12.2] Real Time SQL Monitoring:
SQL>    VARIABLE my_rept CLOB
    BEGIN
      :my_rept :=DBMS_SQLTUNE.REPORT_SQL_MONITOR();
    END;
    /
    PRINT :my_rept
[12.2] V$INDEX_USAGE_INFO Provide more information on indexes usage like frequency of usage.
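A hedged companion example, assuming the DBA_INDEX_USAGE view that 12.2 populates with per-index usage figures (column names as I recall them, verify on your release):
SQL> select owner, name, total_access_count, last_used from dba_index_usage order by total_access_count desc;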

****
RAC:
****
[12.1] Online Resource Attribute Modification.
[12.2] Application Continuity: RW Transactions can failover to the available node if the service and client prerequisites are met: https://www.oracle.com/technetwork/database/options/clustering/applicationcontinuity/adb-continuousavailability-5169724.pdf


Memory New Features:
======================
In-memory Full database caching: [12.1.0.2]
-------------------------------
This feature buffers objects in the buffer cache as they get accessed. Starting from 12.1.0.2, if Oracle determines that the buffer cache is big enough to hold the entire database, it will cache all blocks automatically without admin intervention; this is called default caching mode.
In older releases caching was happening, but not for all data; e.g. if a user queried a large table, Oracle might not cache the whole table's data, as that could push useful data out of the buffer cache in order to fit the whole table inside.

Force entire DB caching in the buffer cache:

Implementation: [Downtime]
SQL>    shutdown immediate;
    startup mount;
    --Enable Force caching
    ALTER DATABASE FORCE FULL DATABASE CACHING;
    --Disable Force caching:
    --ALTER DATABASE NO FORCE FULL DATABASE CACHING;
    alter database open;
    SELECT force_full_db_caching FROM v$database;

Note: Accessed blocks will be cached inside the buffer cache; if the buffer cache is smaller than the buffered DB blocks, the feature will be automatically turned off with this message in the alert log:
Buffer Cache Force Full DB Caching mode on when DB does not fit in cache. Turning off Force Full DB Caching advisable

[12.1] The PGA_AGGREGATE_LIMIT parameter was introduced to limit the total PGA size; if it reaches its maximum, the sessions consuming the most PGA will get terminated
       to keep the PGA size under PGA_AGGREGATE_LIMIT. [Risky]
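       For example (the value here is arbitrary; size it for your own workload):
       SQL> show parameter pga_aggregate_limit
       SQL> ALTER SYSTEM SET pga_aggregate_limit=8G SCOPE=BOTH;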


TABLES New Features:
======================

Online table REBUILD: [12.2]
An ONLINE table move is now available, which does NOT interrupt DMLs against the table and keeps the indexes USABLE:
SQL> ALTER TABLE XX MOVE ONLINE;
    -- DMLS will work fine and not be interrupted.
    -- INDEXES will be USABLE during and after the MOVE operation.

The behavior below is from my own observation, not from the official documentation:
"UPDATE INDEXES" (without ONLINE) keeps the indexes USABLE after the MOVE, but NO DMLs will be allowed during the MOVE:
SQL> ALTER TABLE XX MOVE UPDATE INDEXES;
    -- DMLS will NOT work.
    -- INDEXES will be UNUSABLE during the operation but will be USABLE after the MOVE operation.
    -- NO INDEXES REBUILD required after the MOVE operation.

Invisible Columns: [12.1]

You can make individual table columns invisible. Generic access such as SELECT * or DESCRIBE does not show the invisible columns, but if you explicitly specify the invisible column name in the query it will show its data.
SQL> ALTER TABLE mytable MODIFY (b INVISIBLE);
SQL> set colinvisible on
           desc mytable
SQL> ALTER TABLE mytable MODIFY (b VISIBLE);


Invisible Indexes: [in 12c Allow multiple indexes on same column with different type]

SQL> create bitmap index indx1 on t(empno) invisible;                        


[12.2] Advanced index compression [High level] provides significant space savings while also improving performance.
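A quick illustration (table and index names are made up); the compression can be specified at creation time or applied with a rebuild:
SQL> CREATE INDEX sales_cust_ix ON sales(cust_id) COMPRESS ADVANCED HIGH;
SQL> ALTER INDEX sales_cust_ix REBUILD COMPRESS ADVANCED HIGH;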


SQLPLUS New Features:
=======================
- Select top n rows is now available:
SQL> SELECT sal FROM t ORDER BY sal DESC FETCH FIRST 5 ROWS ONLY;

- Columns with a DEFAULT ON NULL value will store the default value instead of NULL when the user inserts NULL:
SQL> create table tt(empno number,ename varchar2(10) default on null 10);
SQL> insert into tt(empno,ename)values(101,null);
SQL> select * from tt;

     EMPNO ENAME
---------- ----------
       101 10

Enable Extended Data Types: [Not enabled by default and requires a downtime to get it enabled]
VARCHAR2     – 32767 bytes. [Was 4000]
NVARCHAR2     – 32767 bytes. [Was 4000]
RAW         – 32767 bytes. [Was 2000]

SQL> PURGE DBA_RECYCLEBIN
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP UPGRADE;
SQL> ALTER SYSTEM SET max_string_size=extended;
SQL> @?/rdbms/admin/utl32k.sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;


COMPRESSION  New Features:
=============================
You can compress inactive rows which have not been accessed or modified in the last n days:

Ex: Compress inactive ROWs not accessed/modified in the last 31 days:

Tablespace Level:     SQL> ALTER TABLESPACE tbs1 DEFAULT ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 31 DAYS OF NO ACCESS;
Table Group Level:    SQL> ALTER TABLE tab1 ILM              ADD POLICY ROW STORE COMPRESS ADVANCED GROUP AFTER   31 DAYS OF NO ACCESS;
Segment Level:        SQL> ALTER TABLE tab1 ILM              ADD POLICY ROW STORE COMPRESS ADVANCED ROW AFTER     31 DAYS OF NO MODIFICATION;


RMAN New Features:
======================
RMAN can directly run SQL commands without the need for the "sql '<sql statement...>'" prefix.
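For example, straight from the RMAN prompt (any ordinary SQL statement works in 12c):
RMAN> SELECT sysdate FROM dual;
RMAN> DESCRIBE v$database;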

Recovery of dropped/purged table:

As we know, this previously required a full database flashback, a tablespace restore, or a point-in-time recovery of the DB to a different location.

SQL> Drop table emp purge;

RMAN> recover table emp until scn XXXX auxiliary destination '/tmp/oracle';

Duplicate Enhancements:

[12.1] Active duplication to support SECTION SIZE for faster backup and COMPRESSION for less network overhead:
RMAN> DUPLICATE TARGET DATABASE TO orcl2 FROM ACTIVE DATABASE
                [USING BACKUPSET]
                [SECTION SIZE …]
                [USING COMPRESSED BACKUPSET]  …;

SECTION SIZE            Can improve the backup speed by breaking the datafile into the specified chunks to be backed up in parallel.
COMPRESSED BACKUPSET    Sends the backup over the network to the target in compressed format.
NOOPEN                  Does not perform the final step of opening the DB in resetlogs mode and leaves it in mount mode.

RECOVER DATABASE UNTIL AVAILABLE REDO;     Automatically finds the last available archive redo log file

CloneDB New Features:
======================
CloneDB enables you to clone a database in a non-multitenant environment multiple times without copying the data files into several different locations. Instead, CloneDB uses copy-on-write technology, so that only the blocks that are modified require additional storage on disk.

Requirements:
- The storage that holds the Production DB backup should be NFS.
- A full RMAN backup (an image copy, taken AS COPY) should be stored on the NFS location, which the CloneDB will read from while writing the changed blocks to its own location.
- Create a PFILE for the Production DB:
  SQL> create pfile='/backup/initorcl.ora' from spfile;
- From OS:
  # export MASTER_COPY_DIR=<The Production RMAN backup full path>
  # export CLONE_FILE_CREATE_DEST=<The location where CloneDB datafiles,logfiles, controlfiles will be stored>
  # export CLONEDB_NAME=<CloneDB database Name>

- Run a Perl script clonedb.pl
  # $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/install/clonedb.pl  prod_db_pfile
  # $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/install/clonedb.pl  /backup/initorcl.ora
  Modify the following generated files: [if necessary]
  The PFILE for the CloneDB (SGA, PGA, CLONEDB=TRUE)
  crtdb.sql
  dbren.sql
- Connect to the newly created instance AS SYSDBA and run the generated scripts in order:
  SQL> @crtdb.sql
  SQL> @dbren.sql


ASM New Features:
======================

[12.1] ASM disk scrubbing checks for and automatically repairs logical data corruption in NORMAL and HIGH redundancy disk groups with minimal I/O.
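Scrubbing can also be kicked off manually from the ASM instance; a basic hedged example (the disk group name is an example):
SQL> ALTER DISKGROUP DATA SCRUB POWER LOW;
SQL> ALTER DISKGROUP DATA SCRUB REPAIR POWER HIGH;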


Security New Features:
======================
[12.1] UNLIMITED TABLESPACE privilege removed from RESOURCE role.
[12.1] SYSBACKUP role introduced for backup admins.
[12.1] SELECT ANY DICTIONARY privilege no longer permits the access to sensitive dictionary tables DEFAULT_PWD$, ENC$, LINK$, USER$, USER_HISTORY$ and XS$VERIFIERS.
[12.2] INACTIVE_ACCOUNT_TIME is a new profile parameter introduced to lock an account if it has been inactive for a specified number of days (see the example after this list).
[12.2] TDE tablespace encryption now includes SYSTEM, SYSAUX, and UNDO to cover the whole DB.
[12.2] TDE can encrypt, decrypt, and rekey existing tablespaces online.
[12.2] TDE Tablespace Encryption can happen online without application downtime:
        SQL> alter tablespace users encryption encrypt;
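For the INACTIVE_ACCOUNT_TIME profile parameter mentioned above, a minimal example (90 days is an arbitrary value):
        SQL> ALTER PROFILE DEFAULT LIMIT INACTIVE_ACCOUNT_TIME 90;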

[12.1] Data Redaction:
       - It masks (redacts) sensitive data returned from application queries for specific users; users with the EXEMPT REDACTION POLICY privilege can still see the real data.
       - Oracle Data Redaction doesn't change the data on disk; the sensitive data is redacted on the fly before it gets returned to the application.

EX: Redact all data in SALARY column in HR.EMPLOYEES
EX1: SALARY column will show 0 value:
 BEGIN
 DBMS_REDACT.ADD_POLICY
 (object_schema => 'HR',
 object_name => 'EMPLOYEES',
 policy_name => 'redact_EMP_SAL_0',
 column_name => 'SALARY',
 function_type => DBMS_REDACT.FULL,
 expression => '1=1');
 END;
 /

EX2: SALARY column will show RANDOM values:
BEGIN
 DBMS_REDACT.ADD_POLICY(
  object_schema => 'HR',
  object_name => 'employees',
  column_name => 'salary',
  policy_name => 'redact_EMP_SAL_random',
  function_type => DBMS_REDACT.RANDOM,
  expression => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') = ''HR''');
END;
/


Auditing New Features:
======================
[12.1] Auditing is enabled by default.
[12.1] DBA_USERS records the last_login time for each user:     SQL> select username,last_login from dba_users where username like 'P_FZ%';
[12.2] Role auditing. For example, auditing for new users with the DBA role would begin automatically when they are granted the role.
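A hedged example of the 12.2 role-based audit syntax (the policy name and the audited action are my own examples):
SQL> CREATE AUDIT POLICY dba_drop_pol ACTIONS DROP TABLE;
SQL> AUDIT POLICY dba_drop_pol BY USERS WITH GRANTED ROLES DBA;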

Privilege Monitoring: [Extra License]
--------------------
If the Database Vault option is enabled, you can audit whether privileges are actually being used, and see which privileges are used by a specific user/module.

--Create a database privilege analysis policy
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
        name         => 'all_priv_analysis_policy',
        description  => 'database-wide policy to analyze all privileges',
        type         => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
END;
/

--Create a privilege analysis policy to analyze privileges from the role e.g. PUBLIC
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
       name         => 'pub_analysis_pol',
       description  => 'Policy to record privilege use by PUBLIC',
       type         => DBMS_PRIVILEGE_CAPTURE.G_ROLE,
       roles        => role_name_list('PUBLIC'));
END;
/

-- Create a policy to analyze privileges from the application module, "Account Payable"
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
  name         => 'acc_pay_analysis_pol',
  type         => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT,
  condition    => 'SYS_CONTEXT(''USERENV'', ''MODULE'') = ''Account Payable''');
END;
/

-- Create a policy that records privileges for session user APPS when running module "Account Payable"
BEGIN
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
  name         => 'acc_pay_analysis_pol',
  type         => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT,
  condition    => 'SYS_CONTEXT(''USERENV'', ''MODULE'') = ''Account Payable'' AND
                   SYS_CONTEXT(''USERENV'', ''SESSION_USER'') = ''APPS''');
END;
/

--Enable the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('MY_CREATED_CAPTURE');

-- Generate the report:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('MY_CREATED_CAPTURE');
Note: You can consult the views DBA_USED_SYSPRIVS and DBA_USED_OBJPRIVS for USED privileges and DBA_UNUSED_PRIVS for UNUSED privileges.

-- Disable the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('MY_CREATED_CAPTURE');

-- Drop the Capture:
SQL> EXEC DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE('MY_CREATED_CAPTURE');

Auditing Data PUMP expdp operation:

Drawback: it will not show which tables were actually exported.

-- Create Audit Policy:
SQL> CREATE AUDIT POLICY audit_datapump  ACTIONS COMPONENT=DATAPUMP ALL;
SQL> AUDIT POLICY audit_datapump  BY sys,system;
# expdp \'/ as sysdba \' tables=p_fz.test directory=test dumpfile=test.dmp REUSE_DUMPFILES=Y
SQL> SELECT event_timestamp, dp_text_parameters1, dp_boolean_parameters1 FROM unified_audit_trail WHERE audit_type = 'Datapump';

-- Drop Audit Policy:
SQL> select ENTITY_NAME from AUDIT_UNIFIED_ENABLED_POLICIES where POLICY_NAME=upper('audit_datapump');
SQL> NOAUDIT policy audit_datapump by sys,system;
SQL> drop AUDIT POLICY audit_datapump;


Export/Import New Features:
=========================
New parameters introduced:

include=NETWORK_ACL                     Starting from 12c you can export ACLs using this parameter.
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y     Disables logging while importing data. [One of the coolest 12c features]
logtime=all                             Logs the time for each export/import step.
COMPRESSION_ALGORITHM={BASIC|LOW|MEDIUM|HIGH}    The higher the level, the slower the operation but the better the compression. Can be changed during the operation.
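For example, an import that combines two of these options (the dump file name is a placeholder):
# impdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=full_exp.dmp logtime=all transform=disable_archive_logging:y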


MISC Features:
===============
- When setting enable_ddl_logging=true, all DDL operations are written to a dedicated DDL log instead of the alert log.
- Network Compression can be used to reduce the data volume transferred between DB server and clients: [Advanced Compression License]
    SQLNET.COMPRESSION=: (off | on) ON to activate the compression. [Default: off]
    SQLNET.COMPRESSION_LEVELS=: Determines the compression level (LOW | HIGH) the higher the more CPU to be used. [Default: low]
    SQLNET.COMPRESSION_THRESHOLD=: Determines the minimum data size needed to trigger the compression. [Default: 1024 bytes]
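    For example, a sample sqlnet.ora on both the client and the server sides (the values here are just an illustration):
    SQLNET.COMPRESSION=on
    SQLNET.COMPRESSION_LEVELS=(high)
    SQLNET.COMPRESSION_THRESHOLD=1024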

- [12.1] Datafiles (including system Tablespace) can be moved ONLINE while being accessed, and Oracle will automatically move the datafile on OS side to the new location as well.
  This operation doesn't get replicated on the standby DB.
  SQL> ALTER DATABASE MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf' TO '+DATA1/orcl/datafile/user1.dbf'<keep> <reuse>;
  keep    -> Will keep the original datafile in its location and will not delete it by the end of the move.
  reuse -> In case the datafile already exists in the target location it will be overwritten. [Not recommended]
  Note: In case you are moving OFFLINE datafiles, you have to copy them first on the OS side before issuing the ALTER DATABASE MOVE command.

- [12.1] You can create multiple indexes on the same set of columns to perform application migrations without dropping an existing index.
  e.g. A B-tree index and a bitmap index can be created on the same set of columns.
  Limitations: Only one index should be VISIBLE at a time.

The following operations can run ONLINE without locking the table: [the ONLINE keyword must be specified to utilize it]
DROP INDEX            DROP INDEX schema.index ONLINE;
DROP CONSTRAINT            ALTER TABLE emp DROP CONSTRAINT emp_email_uk ONLINE;
ALTER INDEX UNUSABLE        ALTER INDEX emp_ix UNUSABLE ONLINE;
SET COLUMN UNUSED:        ALTER TABLE emp SET UNUSED (ename) ONLINE;
ALTER TABLE MOVE        ALTER TABLE emp MOVE TABLESPACE tbs2 ONLINE UPDATE INDEXES;
ALTER TABLE MODIFY PARTITION    ALTER TABLE sales MODIFY PARTITION BY RANGE (c1) INTERVAL (100)(PARTITION p1 …, PARTITION p2 …) ONLINE UPDATE INDEXES;
ALTER TABLE SPLIT PARTITION    ALTER TABLE orders SPLIT PARTITION ord_p1 AT ('01-JUN-2015') INTO (PARTITION ord_p1_1 COMPRESS, PARTITION ord_p1_2)                            ONLINE UPDATE INDEXES;
DATAFILE MOVE ONLINE        ALTER DATABASE MOVE DATAFILE '/disk1/myexample01.dbf' TO '+DATA' REUSE KEEP;
                REUSE: overwrite the datafile if it already exists in the destination.
                KEEP:  keep the original datafile and don't delete it after the completion of the move operation.
                Allows renaming/moving datafiles to the same/different storage (e.g. from non-ASM to ASM).

- [12.1] Partition or subpartition can be moved ONLINE without interrupting the DMLs.

- [12.1] Online redefinition of a table can happen in one step.

- [12.1] You can make individual table columns invisible.

- [12.1] Undo data for temporary tables can be placed in the TEMPORARY tablespace instead of the UNDO tablespace. [Not the default; needs to be enabled]
  To enable TEMPORARY UNDO you have to set the parameter TEMP_UNDO_ENABLED=TRUE; this boosts performance, as such undo data is not written to the REDO LOGS.
  It's ENABLED by default on an ADG standby DB to enable DML on TEMP tables. Once it is enabled you can monitor the TEMP UNDO generation using V$TEMPUNDOSTAT.
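  A short illustration:
  SQL> ALTER SYSTEM SET temp_undo_enabled=TRUE;
  -- or per session:
  SQL> ALTER SESSION SET temp_undo_enabled=TRUE;
  SQL> SELECT * FROM v$tempundostat;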

- [12.2] The authentication for SYS user happens in the Password File not through the DB Dictionary.

- Whenever you change the password of any user holding SYSDBA, SYSOPER, SYSBACKUP, SYSKM or SYSDG, you have to sync it in the Password File by revoking the privilege and re-granting it back, so the user is removed and re-added to the Password File with its new password: [All Oracle Versions]
    SQL> SELECT USERNAME FROM V$PWFILE_USERS WHERE USERNAME != 'SYS' AND SYSDBA='TRUE';
    SQL> REVOKE SYSDBA from DBA1;
    SQL> GRANT  SYSDBA to DBA1;

- [12.2] New view called DBA_DB_LINK_SOURCES to show the information of the source databases that opened database links to the local database.

- [12.2] Objects that cause ORA-00600 and ORA-07445 errors which could crash the whole instance can be quarantined and isolated, to avoid crashing the DB, until the database gets restarted. V$QUARANTINE stores the information about the isolated objects.

- [12.2] The INSTANCE_ABORT_DELAY_TIME initialization parameter specifies a delay time in seconds when an error causes an instance to abort, to help the DBA gather information. [I don't see it as beneficial, as it will increase the outage time and most probably the DBA will not be able to intervene within that time]

- [12.2] Password file on the standby will be automatically synced when it get changed on the primary.


Dataguard New Features:
======================
[12.1] Real-time ADG cascade: a cascaded standby used to lag one redo log behind the primary; starting from 12.1 the cascaded standby can be synced in real time.

[12.2] Recovery of NOLOGGING Blocks on Standby is possible:  https://www.doag.org/formes/pubfiles/11053935/2_DataGuard_122.pdf
Before 12.2, if a NOLOGGING operation happened on the primary it would not be replicated to the standby, and in order to replicate it you had to restore the complete datafiles impacted by the NOLOGGING operations.
Starting from 12.2 Oracle provides the following commands to scan the database for nonlogged blocks and recover them on the standby:
RMAN> VALIDATE DATABASE NONLOGGED BLOCK;
RMAN> RECOVER DATABASE NONLOGGED BLOCK;


Deprecated Features:
===================
- All SRVCTL commands now use full-word options; the single-letter options will be unavailable in future releases.
  e.g. instead of using # srvctl start database -d orcl
                          use:  # srvctl start database -database orcl


References:
https://apex.oracle.com/database-features
Oracle Database 12c Release 1 (12.1) Upgrade New Features [ID 1515747.1]
NOTE:1520299.1 - Master Note For Oracle Database 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment (Non-RAC)
RMAN RECOVER TABLE Feature New to Oracle Database 12c [ID 1521524.1]


"virtual circuit wait" Wait Event in AWR on a Dedicated Architecture Server

Problem:

I noticed "virtual circuit wait" wait event in the AWR report for one of the databases in the TOP 5 Wait Events:

Analysis:

Honestly, I didn't know what this event was for. I googled it and landed on (Doc ID 1415999.1), which discusses the same event on a Shared Server architecture, but wait a second, my database architecture is Dedicated, not Shared!

Connections reach a Shared Server through dispatchers, so let's check whether the connections are coming through the dispatcher:

SQL> col Network for a80
SQL> SELECT accept "Accept", bytes "Total Bytes", owned "Current_Connections", CREATED "Connections_History"
FROM   v$dispatcher  ORDER BY 1;


Acc Total Bytes Current_Connections Connections_History
--- ----------- ------------------- -------------------
YES  1760871229         1640          304735


From OS, I can see the connections that are coming through the dispatchers:

$ netstat -tplna|grep ora_d|grep QA1
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 x.x.x.x:1521           x.x.x.x:53642         ESTABLISHED 980/ora_d000_QA1
tcp        0      0 x.x.x.x:1521           x.x.x.x:59358         ESTABLISHED 980/ora_d000_QA1
...


This command shows how many connections are established; it should return the same number as the above query +1:
$ netstat -tplna|grep ora_d|grep QA1|wc -l
 1641

Ideally, on a dedicated-connection server, connections are not supposed to come through the dispatcher. Now let's check the dispatcher setting on this database:

SQL> sho parameter dispatchers
NAME                     TYPE     VALUE
------------------------------------ -----------
dispatchers                 string     (PROTOCOL=TCP) (SERVICE=QA1)


Now it's clear that the dispatcher is set to the same service name being used by the applications, which explains why application connections are coming through the dispatcher and justifies the appearance of the "virtual circuit wait" wait event in the AWR report!

Solution:

I fixed this wrong configuration by setting the "dispatchers" dynamic parameter back to its default, so it handles only the connections coming through the XDB service:

SQL> alter system set dispatchers='(PROTOCOL=TCP) (SERVICE=QA1XDB)';
System altered.

SQL> sho parameter dispatch

NAME                     TYPE     VALUE
------------------------------------ -----------
dispatchers                 string     (PROTOCOL=TCP) (SERVICE=QA1XDB)


Now new application connections will not pass through the dispatcher; connections that are already connected through the dispatcher will not be impacted.

New Script For Exporting Data

Download link:
https://www.dropbox.com/s/hk0pfo2tanop35r/export_data.sh?dl=0

Introduction:
A few years back I shared a script for exporting data on an Oracle DB; it lacked some features like exporting multiple tables/schemas, excluding schemas from Full Export mode, or excluding tables from Schema Export mode.

The new script I'm sharing in this post is coming with the following new features:

- Execution of the export will run in the background "nohup mode".
- Ability to exclude multiple schemas and tables from Full Database Export mode.
- You can provide multiple schemas to export in the Schema Export mode.
- Ability to exclude multiple tables from Schema Export mode.

- You can export multiple tables in Table Export mode.
- Ability to COMPRESS the dump file on the fly for all LEGACY export modes.

  [This eliminates the need to compress the dump file after the export operation, which saves disk space]
- The Parallelism option section will pop up only if the database edition supports parallelism [i.e. Enterprise Edition].

- Both Data Pump and Legacy export will use FLASHBACK SCN to export the data in a consistent state which is beneficial in cases such as
  replicating tables using Goldengate or Oracle Streams.
- By the end of the Export operation the user will receive an Email notification [optional].

I kept the Legacy export utility (exp) as an option, because it will be beneficial over Data Pump for the following scenarios:

- Exporting data on 9i and older versions.
- Exporting data on a Read-Only Database [i.e. Standby database]
- No sufficient space to accommodate a regular (uncompressed) Data Pump dump file [Due to licensing or having a Standard Edition setup].
Apart from the above-mentioned scenarios, it's always recommended to use Data Pump over the Legacy Export to export the data.

How it works:

First it will ask you against which database you want to run the script: [Enter the Database NUMBER]

Then, you will be asked to Enter your Email to receive a notification upon the completion of the Export job: [you can hit enter to skip this part if you don't want to receive an Email notification]

Enter the location where the dump file will be located:

Select from the option list the Export mode you want to run: [Enter a number from 1 to 3]
Enter 1 for Database Full export.
Enter 2 for Schema export.
Enter 3 for Table export.

You will be prompted for the Export utility to use for this export job:
Enter 1 to select DataPump expdp, which gives you the luxury of excluding schemas/tables from the export, also you can run the export in Parallel (will come next).
Enter 2 to select Legacy exp, which is compatible with very old releases (9i and older) and can utilize on-the-fly compression of the export file using the mknod OS tool.

The script will check if the Parallelism feature is enabled in the current DB edition; in case it's enabled, it will prompt the user to enter the degree of parallelism to be used during the export. If the user is not interested in running the export in parallel, he/she can enter 1 to disable parallelism.
Note: If you use parallelism during the export, the final dump file will be split into multiple files based on the degree of parallelism you entered, as in the example below.
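(For reference, this is standard Data Pump behaviour: when PARALLEL > 1 the dump file name should carry the %U substitution variable so each worker writes its own piece. A hypothetical command outside the script:)
# expdp \'/ as sysdba\' schemas=HR directory=DATA_PUMP_DIR dumpfile=HR_%U.dmp parallel=4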

In case the Parallelism option is not enabled in the current edition, i.e. you are using a Standard Edition, you will not be prompted for this option, but a message will pop up indicating that parallelism is disabled.

Next, you will be prompted to use Compression to compress the export dump file:
Warning: If you selected Data Pump export expdp, make sure you have already purchased an Oracle Advanced Compression license before accepting to use this feature.
If you selected the Legacy export exp, don't worry about licensing, as the compression will happen on-the-fly on the OS side; no extra licenses are required!


If you selected DataPump export, you will be prompted for the Exclude Schemas section; here you can exclude one or multiple schemas from the Full database export.
Enter the schemas you want to exclude, separating them by a comma <,> as shown in the red circle: <Not exactly a circle but I tried to do my best using MS Paint 😊>

If you already selected DataPump export, you will be prompted for the Exclude Tables section; here you can exclude any tables from the Full database export. [Don't qualify the table with its owner, only the table_name as shown]

The export script will start by creating the exporter user which will run the Export job, then it will create the pre & post scripts which will help you import this dump file later. [It won't import anything, get back to your seat and relax 😉]

Then it will show you the commands to use in case you want to control the export job during its execution, i.e. you can PAUSE/RESUME/KILL the export job:

The export job will run in the background in "nohup" mode, so you don't need to worry about keeping this SSH session open; you can close it any time without impacting the export job.

Once the export job is completed, it will show you the FLASHBACK SCN that was used during the export, along with the import guidelines you can use whenever you want to import the generated dump file later: [Again, the script will not import anything, it will just show you guidelines]

By the end of the Export job you will receive an Email notification as well.

The same thing you will experience if you select other Export modes (SCHEMA or TABLE).

Note: When you select the Legacy Export utility (exp), options like Exclude Schemas, Exclude Tables and Parallel will not appear, as they are not available in the Legacy export utility; however, the Compression option will still be available because I'm using native OS compression (mknod+bzip2) inside the script.
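For the curious, the on-the-fly compression used with the legacy exp is based on the classic named-pipe trick; a minimal sketch of the idea (paths and file names are illustrative, the script takes care of all of this for you):
# mknod /tmp/exp_pipe p
# nohup bzip2 -c < /tmp/exp_pipe > /backup/exp_full.dmp.bz2 &
# exp \'/ as sysdba\' full=y file=/tmp/exp_pipe log=/backup/exp_full.log
# rm -f /tmp/exp_pipe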

Hope you like the new script. Let me know if you want to add any feature or report a bug.

This script is part of the DBA BUNDLE, to read more about it please visit this link:
http://dba-tips.blogspot.ae/2014/02/oracle-database-administration-scripts.html

DISCLAIMER: THIS SCRIPT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY. IT IS PROVIDED "AS IS".


DBA BUNDLE V5.6

New features in Bundle 5.6

export_data.sh script: For exporting data; it has been completely re-coded and comes with the following new features:

- Execution of the export will run in the background "nohup mode".
- Ability to exclude multiple schemas and tables from Full Database Export mode.
- You can provide multiple schemas to export in the Schema Export mode.
- Ability to exclude multiple tables from Schema Export mode.

- You can export multiple tables in Table Export mode.
- Ability to COMPRESS the dump file on the fly for all LEGACY export modes.

  [This eliminates the need to compress the dump file after the export operation, which saves disk space]
- The Parallelism option section will pop up only if the database edition supports parallelism [i.e. Enterprise Edition].

- Both Data Pump and Legacy export will use FLASHBACK SCN to export the data in a consistent state which is beneficial in cases such as
  replicating tables using Goldengate or Oracle Streams.
- By the end of the Export operation the user will receive an Email notification [optional].

Bug fixes applied on the following scripts:
rebuild_table.sh script: for rebuilding tables online.
configuration_baseline.sh script: For gathering the baseline info for OS & DB.
dbdailychk.sh script: for generating Health Check report on the DB.

To download the DBA Bundle latest version:
https://www.dropbox.com/s/xn0tf2pfeq04koi/DBA_BUNDLE4.tar?dl=0

For more reading on what this bundle for and how to use it:
http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html

The DBA Guide For Managing Oracle Database on Amazon RDS in One Page!

Some Oracle database administration tasks are really challenging to perform on AWS RDS, because AWS doesn't provide you with a full DBA account; instead it provides a master account that can perform most DBA tasks but with some restrictions. The restrictions include the direct execution of the following commands:
- ALTER DATABASE
- ALTER SYSTEM
- GRANT ANY ROLE/PRIVILEGE
- DROP/CREATE ANY DIRECTORY 

But the good news is that AWS has provided some packages that enable you to execute some of the above commands, although with a limited scope.

In this post I've gathered all the admin commands I've gone through, to make your journey with RDS a pleasant one!

Before I start, please note that the commands in navy color represent the native Oracle commands -- I just thought to include them to make this article coherent and handy for you -- and the commands in green color are the new RDS-related ones.

Sessions Management:

-- Check the active Sessions:
SQL> select
substr(s.INST_ID||'|'||s.USERNAME||'| '||s.sid||','||s.serial#||' |'||substr(s.MACHINE,1,22)||'|'||substr(s.MODULE,1,18),1,69)"INS|USER|SID,SER|MACHIN|MODUL"
,substr(s.status||'|'||round(w.WAIT_TIME_MICRO/1000000)||'|'||LAST_CALL_ET||'|'||to_char(LOGON_TIME,'ddMon HH24:MI'),1,40) "ST|WAITD|ACT_SINC|LOGIN"
,substr(w.event,1,24) "EVENT"
,s.SQL_ID||'|'||round(w.TIME_REMAINING_MICRO/1000000) "CURRENT SQL|REMIN_SEC"
,s.FINAL_BLOCKING_INSTANCE||'|'||s.FINAL_BLOCKING_SESSION "I|BLK_BY"
from     gv$session s, gv$session_wait w where s.USERNAME is not null and s.sid=w.sid and s.STATUS='ACTIVE'
--and    w.EVENT NOT IN ('SQL*Net message from client','class slave wait','Streams AQ: waiting for messages in the queue','Streams capture: waiting for archive log','Streams AQ: waiting for time management or cleanup tasks','PL/SQL lock timer','rdbms ipc message')
order by "I|BLK_BY" desc,w.event,"INS|USER|SID,SER|MACHIN|MODUL","ST|WAITD|ACT_SINC|LOGIN" desc,"CURRENT SQL|REMIN_SEC";


-- Disconnect a session:
SQL> EXEC rdsadmin.rdsadmin_util.disconnect(<SID>, <SERIAL#>, 'IMMEDIATE');

-- Cancel Statement: [18c+]
 SQL> EXEC rdsadmin.rdsadmin_util.cancel(<SID>, <SERIAL#>, '<SQLID>');

-- Enable Restricted Sessions:
SQL> EXEC rdsadmin.rdsadmin_util.restricted_session(p_enable => true);
         
select logins from v$instance;


-- Disable Restricted Sessions:
SQL> EXEC rdsadmin.rdsadmin_util.restricted_session(p_enable => false);

-- List the MASTER BLOCKING Sessions:
SQL> select /*+RULE*/
substr(s.INST_ID||'|'||s.OSUSER||'/'||s.USERNAME||'| '||s.sid||','||s.serial#||' |'||substr(s.MACHINE,1,22)||'|'||substr(s.MODULE,1,18),1,75)"I|OS/DB USER|SID,SER|MACHN|MOD"
,substr(s.status||'|'||round(w.WAIT_TIME_MICRO/1000000)||'|'||LAST_CALL_ET||'|'||to_char(LOGON_TIME,'ddMon HH24:MI'),1,34) "ST|WAITD|ACT_SINC|LOGIN"
,substr(w.event,1,24) "EVENT"
,s.PREV_SQL_ID||'|'||s.SQL_ID||'|'||round(w.TIME_REMAINING_MICRO/1000000) "PREV|CURRENT_SQL|REMAIN_SEC"
from    gv$session s, gv$session_wait w
where    s.sid in (select distinct FINAL_BLOCKING_SESSION from gv$session where FINAL_BLOCKING_SESSION is not null)
and       s.USERNAME is not null
and     s.sid=w.sid
and    s.FINAL_BLOCKING_SESSION is null;


-- KILL the MASTER BLOCKING SESSION: [I suppose you already know what you are doing :-)]
SQL> col "KILL MASTER BLOCKING SESSION"    for a75
select /*+RULE*/ 'EXEC rdsadmin.rdsadmin_util.disconnect('||s.sid||','||s.serial#|| ',''IMMEDIATE'');'"KILL MASTER BLOCKING SESSION"
from     gv$session s
where    s.sid in (select distinct FINAL_BLOCKING_SESSION from gv$session where FINAL_BLOCKING_SESSION is not null)
and       s.USERNAME is not null and s.FINAL_BLOCKING_SESSION is null;


-- List of Victim LOCKED Sessions:
SQL> select /*+RULE*/
substr(s.INST_ID||'|'||s.USERNAME||'| '||s.sid||','||s.serial#||' |'||substr(s.MACHINE,1,22)||'|'||substr(s.MODULE,1,18),1,65)"INS|USER|SID,SER|MACHIN|MODUL"
,substr(w.state||'|'||round(w.WAIT_TIME_MICRO/1000000)||'|'||LAST_CALL_ET||'|'||to_char(LOGON_TIME,'ddMon'),1,38) "WA_ST|WAITD|ACT_SINC|LOG_T"
,substr(w.event,1,24) "EVENT"
,s.FINAL_BLOCKING_INSTANCE||'|'||s.FINAL_BLOCKING_SESSION "I|BLKD_BY"
from    gv$session s, gv$session_wait w
where   s.USERNAME is not null
and     s.FINAL_BLOCKING_SESSION is not null
and     s.sid=w.sid
and     s.STATUS='ACTIVE'
--and     w.EVENT NOT IN ('SQL*Net message from client','class slave wait','Streams AQ: waiting for messages in the queue','Streams capture: waiting for archive log'
--        ,'Streams AQ: waiting for time management or cleanup tasks','PL/SQL lock timer','rdbms ipc message')
order by "I|BLKD_BY" desc,w.event,"INS|USER|SID,SER|MACHIN|MODUL","WA_ST|WAITD|ACT_SINC|LOG_T" desc;


-- Lock Chain Analysis:
SQL> select /*+RULE*/ 'User: '||s1.username || ' | ' || s1.module || '(SID=' || s1.sid ||' ) running SQL_ID:'||s1.sql_id||'  is blocking
User: '|| s2.username || ' | ' || s2.module || '(SID=' || s2.sid || ') running SQL_ID:'||s2.sql_id||' For '||round(s2.WAIT_TIME_MICRO/1000000/60,0)||' Minutes' AS blocking_status
from gv$LOCK l1, gv$SESSION s1, gv$LOCK l2, gv$SESSION s2
 where s1.sid=l1.sid and s2.sid=l2.sid
 and l1.BLOCK=1 and l2.request > 0
 and l1.id1 = l2.id1
 and l1.id2 = l2.id2
 order by s2.WAIT_TIME_MICRO desc;


-- Blocking Locks On Object Level:
SQL> select  /*+RULE*/ OWNER||'.'||OBJECT_NAME "LOCKED_OBJECT", ORACLE_USERNAME||' | '||lo.OS_USER_NAME "LOCK HOLDER: DB_USER | OS_USER",
l.sid||' | '|| lo.PROCESS "DB_SID | OS_PID",decode(TYPE,'MR', 'Media Recovery',
'RT', 'Redo Thread','UN', 'User Name','TX', 'Transaction','TM', 'DML','UL', 'PL/SQL User Lock','DX', 'Distributed Xaction','CF',
'Control File','IS', 'Instance State','FS', 'File Set','IR', 'Instance Recovery','ST', 'Disk Space Transaction','TS', 'Temp Segment'
,'IV', 'Library Cache Invalidation','LS', 'Log Start or Switch','RW', 'Row Wait','SQ', 'Sequence Number','TE', 'Extend Table','TT',
'Temp Table', type)||' | '||
decode(LMODE, 0, 'None',1, 'Null',2, 'row share lock',3, 'row exclusive lock',4, 'Share',5, '(SSX)exclusive lock',6, 'Exclusive', lmode) lock_type,
l.CTIME LOCK_HELD_SEC,
decode(REQUEST,0, 'None',1, 'Null',2, 'row share lock',3, 'row exclusive lock',4, 'Share',5, '(SSX)exclusive lock',6, 'Exclusive', request) lock_requested,
decode(BLOCK,0, 'Not Blocking',1, 'Blocking',2, 'Global', block)
status from    v$locked_object lo, dba_objects do, v$lock l
where   lo.OBJECT_ID = do.OBJECT_ID AND     l.SID = lo.SESSION_ID AND l.BLOCK='1'
order by OWNER,OBJECT_NAME;


-- Long Running Operations:
SQL>select USERNAME||'| '||SID||','||SERIAL# "USERNAME| SID,SERIAL#",SQL_ID,round(SOFAR/TOTALWORK*100,2) "%DONE"
        ,to_char(START_TIME,'DD-Mon HH24:MI')||'| '||trunc(ELAPSED_SECONDS/60)||'|'||trunc(TIME_REMAINING/60) "STARTED|MIN_ELAPSED|REMAIN" ,MESSAGE
        from v$session_longops
    where SOFAR/TOTALWORK*100 <> '100' and TOTALWORK <> '0'
        order by "STARTED|MIN_ELAPSED|REMAIN" desc, "USERNAME| SID,SERIAL#";



Users Management:

-- Grant permission on SYS tables/views: [With GRANT option]
SQL> EXEC rdsadmin.rdsadmin_util.grant_sys_object(p_obj_name  => 'V_$SESSION', p_grantee => 'USER1', p_privilege => 'SELECT', p_grant_option => true);

-- REVOKE permission from a user on SYS object:
SQL> EXEC rdsadmin.rdsadmin_util.revoke_sys_object(p_obj_name  => 'V_$SESSION', p_revokee  => 'USER1', p_privilege => 'SELECT');

-- Grant SELECT on Dictionary:
SQL> grant SELECT_CATALOG_ROLE to user1;

-- Grant EXECUTE on Dictionary:
SQL> grant EXECUTE_CATALOG_ROLE to user1;

-- Create Password Verify Function:
Note: As you cannot create objects under SYS in RDS, you have to use the following ready-made procedure from AWS to create the verify function:
Note: The verify function name should contain one of these keywords: "PASSWORD", "VERIFY", "COMPLEXITY", "ENFORCE", or "STRENGTH"

SQL> begin
    rdsadmin.rdsadmin_password_verify.create_verify_function(
        p_verify_function_name         => 'CUSTOM_PASSWORD_VFY_FUNCTION',
        p_min_length                   => 8,
    p_max_length            => 256,
    p_min_letters            => 1,
    p_min_lowercase            => 1,
        p_min_uppercase                => 1,
        p_min_digits                   => 3,
        p_min_special                  => 2,
    p_disallow_simple_strings    => true,
    p_disallow_whitespace        => true,
    p_disallow_username        => true,
    p_disallow_reverse        => true,
    p_disallow_db_name        => true,
        p_disallow_at_sign             => false);
end;
/



Directory/S3 Management: [Files/Upload/Download]

-- Show all files under DATA_PUMP_DIR directory:
SQL> SELECT * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

-- Read a logfile under BDUMP directory:
SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1580377405757-6339.log'));

-- Read an import log under DATA_PUMP_DIR directory:
SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import_schema.LOG'));     

-- Create a new Directory: [i.e. BKP_DIR]
SQL> EXEC rdsadmin.rdsadmin_util.create_directory(p_directory_name => 'BKP_DIR');

-- Delete a Directory:
Note: Deleting a directory will not delete the underlying files, and they will keep consuming space, so what shall I do?

First, list all the files under the directory to be deleted and delete them one by one: [Die Hard method]

SQL> SELECT * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('BKP_DIR')) order by mtime;
SQL> EXEC SYS.UTL_FILE.FREMOVE ('BKP_DIR','import.LOG');

Then, delete the Directory:
SQL> EXEC rdsadmin.rdsadmin_util.drop_directory(p_directory_name => 'BKP_DIR');

-- Upload a file to S3 bucket:
SQL> SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
        p_bucket_name    => '<bucket_name>',     --bucket name you want to upload to
        p_prefix         => '<file_name>',       --file name you want to upload
        p_s3_prefix      => '',                  --folder (prefix) inside the bucket; empty for the bucket root
        p_directory_name => 'DATA_PUMP_DIR')     --directory name you want to upload from
        AS TASK_ID FROM DUAL;



-- Download all files that exist in an S3 bucket:
SQL> SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
        p_bucket_name    => 'my-bucket',         --bucket name you want to download from
        p_directory_name => 'DATA_PUMP_DIR')     --directory name you want to download to
        AS TASK_ID FROM DUAL;


-- Download all files that exist inside a named folder in an S3 bucket:
SQL> SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
        p_bucket_name    => 'my-bucket',         --bucket name you want to download from
        p_s3_prefix      => 'export_files/',     --all files under this folder will be downloaded [don't forget the slash / after the folder name]
        p_directory_name => 'DATA_PUMP_DIR')     --directory name you want to download to
        AS TASK_ID FROM DUAL;

-- Download one named file that exists under a folder in an S3 bucket:
SQL> SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
        p_bucket_name    => 'my-bucket',                   --bucket name you want to download from
        p_s3_prefix      => 'prod_db/export_files',        --folder name full path
        p_prefix         => 'EXPORT_STG_04-03-19.dmp',     --file name
        p_directory_name => 'DATA_PUMP_DIR')               --directory name you want to download to
        AS TASK_ID FROM DUAL;


-- Check the download progress: [By TaskID provided from above commands i.e. 1580377405757-6339]
SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1580377405757-6339.log'));
 


-- Delete a file from DATA_PUMP_DIR directory: [%U all the files ending with 01,02,03,etc ...]
SQL> EXEC utl_file.fremove('DATA_PUMP_DIR','export_RA%U.dmp');

-- Rename a file under DATA_PUMP_DIR directory:
SQL> EXEC UTL_FILE.FRENAME('DATA_PUMP_DIR', '<Original_filename>', 'DATA_PUMP_DIR', '<New_filename>', TRUE);

  
Database Management:

-- Enable Force Logging:
SQL> EXEC rdsadmin.rdsadmin_util.force_logging(p_enable => true );

-- Disable Force Logging:
SQL> EXEC rdsadmin.rdsadmin_util.force_logging(p_enable => false);

-- Flush Shared Pool:
SQL> EXEC rdsadmin.rdsadmin_util.flush_shared_pool;

-- Flush Buffer Cache:
SQL> EXEC rdsadmin.rdsadmin_util.flush_buffer_cache;

-- Force a Checkpoint:
SQL> EXEC rdsadmin.rdsadmin_util.checkpoint;

-- Switch REDOLOG: [remember ALTER SYSTEM is restricted]
SQL> EXEC rdsadmin.rdsadmin_util.switch_logfile;

-- View REDOLOG switches per hour:[Last 24hours]
SQL> SELECT to_char(first_time,'YYYY-MON-DD') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'9999') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'9999') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'9999') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'9999') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'9999') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'9999') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'9999') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'9999') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'9999') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'9999') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'9999') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'9999') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'9999') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'9999') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'9999') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'9999') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'9999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'9999') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'9999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'9999') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'9999') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'9999') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'9999') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'9999') "23"
from v$log_history where first_time > sysdate-1
GROUP by to_char(first_time,'YYYY-MON-DD') order by 1 asc;


-- Add new REDO LOG Group: [contains 1 redolog file inside with 1GB size (default is 128M)]
SQL> EXEC rdsadmin.rdsadmin_util.add_logfile(p_size => '1G');

-- Drop REDO LOG Group: [REDOLOG Group# 1]
    -- Force a Checkpoint:
        SQL> EXEC rdsadmin.rdsadmin_util.checkpoint;
    -- Switch REDOLOG: [ALTER SYSTEM is restricted]
        SQL> EXEC rdsadmin.rdsadmin_util.switch_logfile;

SQL> EXEC rdsadmin.rdsadmin_util.drop_logfile(1);
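
To verify the redo log group layout and status before or after adding/dropping groups, a plain dictionary query (no rdsadmin call needed) can be used; for example:

SQL> SELECT group#, thread#, bytes/1024/1024 size_mb, members, status FROM v$log ORDER BY group#;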

-- Set DEFAULT TABLESPACE for database:
SQL> EXEC rdsadmin.rdsadmin_util.alter_default_tablespace(tablespace_name => 'example');

-- Set DEFAULT TEMPORARY TABLESPACE for database:
SQL> EXEC rdsadmin.rdsadmin_util.alter_default_temp_tablespace(tablespace_name => 'temp2');

-- Resize TEMPORARY tablespace in a [Read Replica]:
SQL> EXEC rdsadmin.rdsadmin_util.resize_temp_tablespace('TEMP','4G');

-- Checking the Current retention for ARCHIVELOGS and TRACE Files:
SQL> set serveroutput on
     EXEC rdsadmin.rdsadmin_util.show_configuration;  
            

-- Increasing the Archivelog retention (to be kept on disk) to 1 day [24 hours]: [Default is 0]         
SQL> EXEC rdsadmin.rdsadmin_util.set_configuration(name  => 'archivelog retention hours', value => '24');

-- Add supplemental log: [For goldengate or any replication tool]
SQL> EXEC rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD','ALL');
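
To confirm the supplemental logging level after running the above, v$database can be checked; a quick sketch:

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_all FROM v$database;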

-- Change DB timezone:
SQL> EXEC rdsadmin.rdsadmin_util.alter_db_time_zone(p_new_tz => 'Asia/Dubai');
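
To double-check the database time zone before and after the change, a simple query can be used:

SQL> SELECT dbtimezone FROM dual;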

-- Collect an AWR report: [Enterprise Edition only, and make sure you have acquired the Diagnostics and Tuning pack licenses]
Copy the content of the $ORACLE_HOME/rdbms/admin/awrrpt.sql script from any database server you have [on-premises or EC2] and run it against the RDS database.
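
Alternatively, the report can be generated straight from SQL*Plus; a minimal sketch, assuming you have the Diagnostics Pack license and have looked up the begin/end snapshot IDs from DBA_HIST_SNAPSHOT (1234 and 1235 below are hypothetical placeholders):

SQL> SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0
SQL> SPOOL awr_report.html
SQL> SELECT output FROM TABLE(
       DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(
         1234567890,    -- DBID (from: SELECT dbid FROM v$database;)
         1,             -- instance number
         1234,          -- begin snapshot ID (hypothetical)
         1235));        -- end snapshot ID (hypothetical)
SQL> SPOOL OFF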

-- Change the default Application Edition:
SQL> EXEC rdsadmin.rdsadmin_util.alter_default_edition('APP_JAN2020');

-- Reset back to the default Application Edition:
SQL> EXEC rdsadmin.rdsadmin_util.alter_default_edition('ORA$BASE');


Storage Management:

-- Tablespace Size:
SQL> select tablespace_name,
       round((tablespace_size*8192)/(1024*1024)) Total_MB,
       round((used_space*8192)/(1024*1024))      Used_MB,
       round(used_percent,2)              "%Used"
from dba_tablespace_usage_metrics;


-- Datafiles:
SQL> select tablespace_name,file_name,bytes/1024/1024 "Size",maxbytes/1024/1024 "Max_Size" from dba_data_files
union select tablespace_name,file_name,bytes/1024/1024 "Size",maxbytes/1024/1024 "Max_Size" from dba_temp_files order by 1;



Objects Management:

-- Object Size:
SQL> col SEGMENT_NAME format a30
SELECT /*+RULE*/ SEGMENT_NAME, TABLESPACE_NAME, SEGMENT_TYPE OBJECT_TYPE, ROUND(SUM(BYTES/1024/1024)) OBJECT_SIZE_MB
FROM   SYS.DBA_SEGMENTS
WHERE  OWNER =        upper('SPRINT_STAGE1')
AND    SEGMENT_NAME = upper('E_INTERLINE_MSGS20')
GROUP  BY SEGMENT_NAME, TABLESPACE_NAME, SEGMENT_TYPE;
--Lob Size:
SELECT /*+RULE*/ SEGMENT_NAME LOB_NAME, TABLESPACE_NAME, ROUND(SUM(BYTES/1024/1024)) OBJECT_SIZE_MB
FROM   SYS.DBA_SEGMENTS
WHERE  SEGMENT_NAME in (select /*+RULE*/ SEGMENT_NAME from dba_lobs where
owner=     upper('SPRINT_STAGE1') and
table_name=UPPER('E_INTERLINE_MSGS20'))
GROUP  BY SEGMENT_NAME, TABLESPACE_NAME;
--Indexes:
SELECT /*+RULE*/ SEGMENT_NAME INDEX_NAME, TABLESPACE_NAME, ROUND(SUM(BYTES/1024/1024)) OBJECT_SIZE_MB
FROM   SYS.DBA_SEGMENTS
WHERE  OWNER = upper('SPRINT_STAGE1')
AND    SEGMENT_NAME in (select index_name from dba_indexes where
owner=     upper('SPRINT_STAGE1') and
table_name=UPPER('E_INTERLINE_MSGS20'))
GROUP  BY SEGMENT_NAME, TABLESPACE_NAME;


-- Table Info:
SQL> set linesize 190
col "OWNER.TABLE"    for a35
col tablespace_name     for a25
col PCT_FREE        for 99999999
col PCT_USED        for 99999999
col "STATS_LOCKED|STALE|DATE"    for a23
col "READONLY" for a8
select t.owner||'.'||t.table_name "OWNER.TABLE",t.TABLESPACE_NAME,t.PCT_FREE
,t.PCT_USED,d.extents,t.MAX_EXTENTS,t.COMPRESSION,t.STATUS,o.created,s.stattype_locked||'|'||s.stale_stats||'|'||s.LAST_ANALYZED "STATS_LOCKED|STALE|DATE"
from dba_tables t, dba_objects o, dba_segments d, dba_tab_statistics s
where t.owner=     upper('SPRINT_STAGE1')
and t.table_name = upper('E_INTERLINE_MSGS20')
and o.owner=t.owner and o.object_name=t.table_name and o.owner=d.owner and t.table_name=d.SEGMENT_NAME and o.owner=s.owner and t.table_name=s.table_name;


-- Getting Table Size [TABLE + ITS LOBS + ITS INDEXES]...
SQL> col Name for a30
col Type for a30
set heading on echo off
COLUMN TABLE_NAME FORMAT A32
COLUMN OBJECT_NAME FORMAT A32
COLUMN OWNER FORMAT A30
SELECT owner, table_name, TRUNC(sum(bytes)/1024/1024) TOTAL_SIZE_MB FROM
(SELECT segment_name table_name, owner, bytes FROM dba_segments WHERE segment_type = 'TABLE'
 UNION ALL
 SELECT i.table_name, i.owner, s.bytes FROM dba_indexes i, dba_segments s WHERE s.segment_name = i.index_name AND   s.owner = i.owner AND s.segment_type = 'INDEX'
 UNION ALL
 SELECT l.table_name, l.owner, s.bytes FROM dba_lobs l, dba_segments s WHERE s.segment_name = l.segment_name AND s.owner = l.owner AND s.segment_type = 'LOBSEGMENT'
 UNION ALL
 SELECT l.table_name, l.owner, s.bytes FROM dba_lobs l, dba_segments s WHERE s.segment_name = l.index_name AND s.owner = l.owner AND s.segment_type = 'LOBINDEX')
 WHERE owner=       UPPER('SPRINT_STAGE1')
 and   table_name = UPPER('E_INTERLINE_MSGS20')
 GROUP BY table_name, owner
 ORDER BY SUM(bytes) desc;


-- INDEXES On the Table:
SQL> set pages 100 feedback on
set heading on
COLUMN OWNER FORMAT A25 heading "Index Owner"
COLUMN INDEX_NAME FORMAT A35 heading "Index Name"
COLUMN COLUMN_NAME FORMAT A30 heading "On Column"
COLUMN COLUMN_POSITION FORMAT 9999 heading "Pos"
COLUMN "INDEX" FORMAT A40
COLUMN TABLESPACE_NAME FOR A25
COLUMN INDEX_TYPE FOR A15
SELECT IND.OWNER||'.'||IND.INDEX_NAME "INDEX", IND.INDEX_TYPE, COL.COLUMN_NAME, COL.COLUMN_POSITION,IND.TABLESPACE_NAME,IND.STATUS,IND.UNIQUENESS,
       S.stattype_locked||'|'||S.stale_stats||'|'||S.LAST_ANALYZED "STATS_LOCKED|STALE|DATE"
FROM   SYS.DBA_INDEXES IND, SYS.DBA_IND_COLUMNS COL, SYS.DBA_IND_STATISTICS S
WHERE  IND.TABLE_NAME =  upper('OA_FREQUENT_FLYERS')
AND    IND.TABLE_OWNER = upper('SPRINT_STAGE1')
AND    IND.TABLE_NAME = COL.TABLE_NAME AND IND.OWNER = COL.INDEX_OWNER AND IND.TABLE_OWNER = COL.TABLE_OWNER AND IND.INDEX_NAME = COL.INDEX_NAME
AND    IND.OWNER = S.OWNER AND IND.INDEX_NAME = S.INDEX_NAME;


-- CONSTRAINTS On a Table:
SQL> col type format a10
col constraint_name format a40
COL COLUMN_NAME FORMAT A25 heading "On Column"
select    decode(d.constraint_type,'C', 'Check','O', 'R/O View','P', 'Primary','R', 'Foreign','U', 'Unique','V', 'Check view') type
,d.constraint_name, c.COLUMN_NAME, d.status,d.last_change
from    dba_constraints d, dba_cons_columns c
where    d.owner =      upper('SPRINT_STAGE1')
and    d.table_name = upper('E_INTERLINE_MSGS20')
and    d.OWNER=c.OWNER and d.CONSTRAINT_NAME=c.CONSTRAINT_NAME
order by 1;



Block Corruption on RDS: [Skipping Corrupted Blocks procedure]

In case you have corrupted blocks on a table/index, any query that tries to access those corrupted blocks will keep failing with ORA-1578.

From the error message, find the corrupted object's name and its owner:
SQL> Select relative_fno,owner,segment_name,segment_type
from dba_extents
where
file_id = <DATAFILE_NUMBER_IN_THE_ERROR_MESSAGE_HERE>
and
<CORRUPTED_BLOCK_NUMBER_IN_THE_ERROR_MESSAGE_HERE> between block_id and block_id + blocks - 1;


Next, follow these steps to SKIP the corrupted blocks on the underlying object so that queries running against it can succeed:

-- Create REPAIR tables:
exec rdsadmin.rdsadmin_dbms_repair.create_repair_table;
exec rdsadmin.rdsadmin_dbms_repair.create_orphan_keys_table;                       
exec rdsadmin.rdsadmin_dbms_repair.purge_repair_table;
exec rdsadmin.rdsadmin_dbms_repair.purge_orphan_keys_table;


-- Check Corrupted blocks on the corrupted object and populate them in the REPAIR tables:
set serveroutput on
declare v_num_corrupt int;
begin
  v_num_corrupt := 0;
  rdsadmin.rdsadmin_dbms_repair.check_object (
    schema_name => '&corrupted_Object_Owner',
    object_name => '&corrupted_object_name',
    corrupt_count =>  v_num_corrupt
  );
dbms_output.put_line('number corrupt: '||to_char(v_num_corrupt));
end;
/


col corrupt_description format a30
col repair_description format a30
select object_name, block_id, corrupt_type, marked_corrupt, corrupt_description, repair_description from sys.repair_table;


select skip_corrupt from dba_tables where owner = upper('&corrupted_Object_Owner') and table_name = upper('&corrupted_object_name');                  
                                       

-- Enable the CORRUPTION SKIPPING on the corrupted object:
begin
  rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks (
    schema_name => '&corrupted_Object_Owner',
    object_name => '&corrupted_object_name',
    object_type => rdsadmin.rdsadmin_dbms_repair.table_object,
    flags => rdsadmin.rdsadmin_dbms_repair.skip_flag);
end;
/


select skip_corrupt from dba_tables where owner =  upper('&corrupted_Object_Owner') and table_name = upper('&corrupted_object_name');        

              
-- Disable the CORRUPTION SKIPPING on the corrupted object:

begin
  rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks (
    schema_name => '&corrupted_Object_Owner',
    object_name => '&corrupted_object_name',
    object_type => rdsadmin.rdsadmin_dbms_repair.table_object,
    flags => rdsadmin.rdsadmin_dbms_repair.noskip_flag);
end;
/


select skip_corrupt from dba_tables where owner =  upper('&corrupted_Object_Owner') and table_name = upper('&corrupted_object_name');                     

-- Finally DROP the repair tables:
exec rdsadmin.rdsadmin_dbms_repair.drop_repair_table;
exec rdsadmin.rdsadmin_dbms_repair.drop_orphan_keys_table;                  
                             


Auditing:
-- Enable Auditing for all the privileges on SYS.AUD$ table:
SQL> EXEC rdsadmin.rdsadmin_master_util.audit_all_sys_aud_table;
SQL> EXEC rdsadmin.rdsadmin_master_util.audit_all_sys_aud_table(p_by_access => true); 

-- Disable Auditing on SYS.AUD$ table:
SQL> EXEC rdsadmin.rdsadmin_master_util.noaudit_all_sys_aud_table; 


RMAN Tasks:

-- Full RMAN database Backup in RDS:
 SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.backup_database_full(
        p_owner               => 'SYS',
        p_directory_name      => 'BKP_DIR',
        p_level               => 0,               -- 0 For FULL, 1 for Incremental
     --p_parallel            => 4,              -- To be hashed if using a Standard Edition
        p_section_size_mb     => 10,
        p_rman_to_dbms_output => TRUE);
END;
/
 

-- Backup ALL ARCHIVELOGS: 
SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.backup_archivelog_all(
        p_owner                     => 'SYS',
        p_directory_name      => 'BKP_DIR',
     --p_parallel                   => 6,              -- To be hashed if using a Standard Edition
        p_rman_to_dbms_output => TRUE);
END;
/


-- Backup ARCHIVELOGS between a date range:
 SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.backup_archivelog_date(
        p_owner                 => 'SYS',
        p_directory_name  => 'BKP_DIR',
        p_from_date           => '01/15/2020 00:00:00',
        p_to_date                => '01/16/2020 00:00:00',
     --p_parallel                => 4,              -- To be hashed if using a Standard Edition
        p_rman_to_dbms_output => TRUE);
END;
/

           
Note: in case of using SCN/sequence replace "p_from_date" with "p_from_scn" or "p_from_sequence" and "p_to_date" with "p_to_scn" or "p_to_sequence".
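
For example, a minimal sketch based on the note above, assuming the SCN-based variant rdsadmin.rdsadmin_rman_util.backup_archivelog_scn and an SCN range you have already identified (the SCNs below are hypothetical):

SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.backup_archivelog_scn(
        p_owner               => 'SYS',
        p_directory_name      => 'BKP_DIR',
        p_from_scn            => 1533835,        -- hypothetical starting SCN
        p_to_scn              => 1892447,        -- hypothetical ending SCN
        p_rman_to_dbms_output => TRUE);
END;
/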

-- Show running RMAN backups: [manual backups run by you, not by RDS, as RDS uses the hot backup method to back up the DB]
 SQL> SELECT to_char (start_time,'DD-MON-YY HH24:MI') START_TIME, to_char(end_time,'DD-MON-YY HH24:MI') END_TIME, time_taken_display, status,
input_type, output_device_type,input_bytes_display, output_bytes_display, output_bytes_per_sec_display,COMPRESSION_RATIO COMPRESS_RATIO
FROM v$rman_backup_job_details
WHERE status like 'RUNNING%';


-- Show current Running Hot Backups:
SQL> SELECT t.name AS "TB_NAME", d.file# as "DF#", d.name AS "DF_NAME", b.status
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';


-- Validate the database for Physical/Logical corruption on RDS: 
SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.validate_database(
        p_validation_type     => 'PHYSICAL+LOGICAL', 

      --p_parallel                  => 2,              -- To be hashed if running a Standard Edition
        p_section_size_mb     => 10,
        p_rman_to_dbms_output => TRUE);
END;
/


-- Enable BLOCK CHANGE TRACKING on RDS: [Avoid enabling it on 11g, as you will hit a bug that prevents you from consistently restoring the database later]
SQL> SELECT status, filename FROM V$BLOCK_CHANGE_TRACKING;
SQL> EXEC rdsadmin.rdsadmin_rman_util.enable_block_change_tracking;

-- Disable BLOCK CHANGE TRACKING on RDS:
SQL> EXEC rdsadmin.rdsadmin_rman_util.disable_block_change_tracking;

-- Crosscheck and delete expired ARCHIVELOGS: [which no longer exist on disk]
 SQL> EXEC rdsadmin.rdsadmin_rman_util.crosscheck_archivelog(p_delete_expired => TRUE, p_rman_to_dbms_output => TRUE);


Oracle Scheduler Jobs Management:

-- List all jobs:
SQL> select OWNER||'.'||JOB_NAME "OWNER.JOB_NAME",ENABLED,STATE,FAILURE_COUNT,to_char(LAST_START_DATE,'DD-Mon-YYYY hh24:mi:ss TZR')LAST_RUN,to_char(NEXT_RUN_DATE,'DD-Mon-YYYY hh24:mi:ss TZR')NEXT_RUN,REPEAT_INTERVAL,
extract(day from last_run_duration) ||':'||
lpad(extract(hour from last_run_duration),2,'0')||':'||
lpad(extract(minute from last_run_duration),2,'0')||':'||
lpad(round(extract(second from last_run_duration)),2,'0') "DURATION(d:hh:mm:ss)"
from dba_scheduler_jobs order by ENABLED,STATE,"OWNER.JOB_NAME";


-- List AUTOTASK INTERNAL MAINTENANCE WINDOWS:
SQL> SELECT WINDOW_NAME,TO_CHAR(WINDOW_NEXT_TIME,'DD-MM-YYYY HH24:MI:SS') NEXT_RUN,AUTOTASK_STATUS STATUS,WINDOW_ACTIVE ACTIVE,OPTIMIZER_STATS,SEGMENT_ADVISOR,SQL_TUNE_ADVISOR FROM DBA_AUTOTASK_WINDOW_CLIENTS;

-- FAILED JOBS IN THE LAST 24H:
SQL> select JOB_NAME,OWNER,LOG_DATE,STATUS,ERROR#,RUN_DURATION from DBA_SCHEDULER_JOB_RUN_DETAILS where STATUS='FAILED' and LOG_DATE > sysdate-1 order by JOB_NAME,LOG_DATE;

-- Current running jobs:
SQL> select j.RUNNING_INSTANCE INS,j.OWNER||'.'||j.JOB_NAME ||' | '||SLAVE_OS_PROCESS_ID||'|'||j.SESSION_ID"OWNER.JOB_NAME|OSPID|SID"
,s.FINAL_BLOCKING_SESSION "BLKD_BY",ELAPSED_TIME,CPU_USED
,substr(s.SECONDS_IN_WAIT||'|'||s.WAIT_CLASS||'|'||s.EVENT,1,45) "WAITED|WCLASS|EVENT",S.SQL_ID
from dba_scheduler_running_jobs j, gv$session s
where   j.RUNNING_INSTANCE=S.INST_ID(+)
and     j.SESSION_ID=S.SID(+)
order by "OWNER.JOB_NAME|OSPID|SID",ELAPSED_TIME;


-- Disable a job owned by SYS: [Only available on 11.2.0.4.v21 & 12.2.0.2.ru-2019-07.rur-2019-07.r1 and higher]
 SQL> EXEC rdsadmin.rdsadmin_dbms_scheduler.disable('SYS.CLEANUP_NON_EXIST_OBJ');

-- Modify the repeat interval of a job: [Only available on 11.2.0.4.v21 & 12.2.0.2.ru-2019-07.rur-2019-07.r1 and higher] 
SQL> BEGIN
     rdsadmin.rdsadmin_dbms_scheduler.set_attribute(
          name      => 'SYS.CLEANUP_NON_EXIST_OBJ',
          attribute => 'repeat_interval',
          value     => 'freq=daily;byday=FRI,SAT;byhour=20;byminute=0;bysecond=0');
END;
/


For more reading on Importing Data on RDS:
http://dba-tips.blogspot.com/2019/04/how-to-import-schema-on-amazon-rds.html

References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Database.html#Appendix.Oracle.CommonDBATasks.TimeZoneSupport
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Misc.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Log.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.RMAN.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Scheduler.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.System.html#Appendix.Oracle.CommonDBATasks.CustomPassword

Maximum Availability Architecture For New Databases

Just thought it may be helpful if I share the major points of the Maximum Availability Architecture I'm following. It will be difficult to provide references as I've gathered/developed these points throughout a decade!

I'm sharing it with the hope it will be helpful for you without any warranty.

19c Clusterware Fails to Start Up due to CRS-41053: checking Oracle Grid Infrastructure for file permission issues

A 19c cluster node crashed, and the clusterware failed to start up due to this error:

[root@fzppon05vs1n ~]# crsctl start crs
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
PRVG-11960 : Set user ID bit is not set for file "/u01/grid/12.2.0.3/bin/extjob" on node "fzppon05vs1n".
PRVG-2031 : Owner of file "/u01/grid/12.2.0.3/bin/extjob" did not match the expected value on node "fzppon05vs1n". [Expected = "root(0)" ; Found = "oracle(54321)"]
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.


That's weird, because the file mentioned in the error message already has the correct ownership; it is supposed to be owned by the Grid owner (oracle in my setup), and it shouldn't be owned by root as the error message advises:

[root@fzppon05vs1n ~]# ll /u01/grid/12.2.0.3/bin/extjob
-rwxr-xr-x 1 oracle oinstall 2.9M Mar  4 11:42 /u01/grid/12.2.0.3/bin/extjob


The same permissions and ownership exist on the other RAC node as well:

[oracle@fzppon06vs1n ~]$ ls -l /u01/grid/12.2.0.3/bin/extjob
-rwxr-xr-x 1 oracle oinstall 2.9M Mar  4 12:57 /u01/grid/12.2.0.3/bin/extjob


I tried to stop the clusterware on this node with the force option and start it back, but that didn't help.

Before trying to restart the OS, I thought I'd check the clusterware background processes, and here is the catch:

[root@fzppon05vs1n ~]# ps -ef | grep -v grep| grep '\.bin'
root     19786     1  1 06:18 ?        00:00:39 /u01/grid/12.2.0.3/bin/ohasd.bin reboot

root     19788     1  0 06:18 ?        00:00:00 /u01/grid/12.2.0.3/bin/ohasd.bin reboot
root     19850     1  0 06:18 ?        00:00:13 /u01/grid/12.2.0.3/bin/orarootagent.bin
root     19958     1  0 06:18 ?        00:00:14 /u01/grid/12.2.0.3/bin/oraagent.bin

...

Lots of ohasd.bin processes were found running, while there is supposed to be only one ohasd.bin process.

Checking all ohasd related processes:

[root@fzppon05vs1n ~]# ps -ef | grep -v grep | grep ohasd
root      1900     1  0 06:17 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run>/dev/null 2>&1 </dev/null
root      1947  1900  0 06:17 ?     00:00:00 /bin/sh /etc/init.d/init.ohasd run>/dev/null 2>&1 </dev/null
root      19786     1  1 06:18 ?        00:00:00 /u01/grid/12.2.0.3/bin/ohasd.bin reboot
root      19788     1  0 06:18 ?        00:00:00 /u01/grid/12.2.0.3/bin/ohasd.bin reboot


Now, let's kill all ohasd processes and give it a try:

[root@fzppon05vs1n ~]# kill -9 1900 1947 19786 19788

Starting back the clusterware:

[root@fzppon05vs1n ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.


Voilà! Started up.

Conclusion:

The above error message may look vague... I know. Moreover, it may mention a different file rather than extjob.
Don't rush to change the file's ownership as advised by the error message. First check for any redundant clusterware background processes and kill them, then try to start up the clusterware. If this doesn't help, restart the node and check again for any redundant processes.

dbalarm Script Updated!

I've added one more feature to the system monitoring script dbalarm: it now monitors the number of inodes on each mounted filesystem and provides an early warning before the inodes get exhausted.

I've published a post explaining the impact of exhausting the inodes:
http://dba-thoughts.blogspot.com/2020/04/no-space-left-on-device-error-while.html

To download the final version of dbalarm script:
https://www.dropbox.com/s/a8p5q454dw01u53/dbalarm.sh?dl=0

To read more about dbalarm script and how it monitors the DB server:
http://dba-tips.blogspot.com/2014/02/database-monitoring-script-for-ora-and.html

Soon I'll update the same in the bundle tool as well.

CRS-6706: Oracle Clusterware Release patch level ('3291738383') does not match Software patch level ('724960844')

Problem:
After patching an Oracle 19.3 GRID_HOME of an Oracle Restart setup with the 19.5 RU patch [30125133], I was not able to start up Oracle Restart due to this error:

#  $GRID_HOME/bin/crsctl start has
CRS-6706: Oracle Clusterware Release patch level ('3291738383') does not match Software patch level ('724960844'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.

Analysis:
Although patching the GRID_HOME with the 19.5 RU reported success, something went wrong during the patching process.
While trying to find a solution, I landed on Oracle Note (Doc ID 1639285.1), which describes a similar problem on a RAC setup but doesn't offer a solution for an Oracle Restart setup, which is my case. So I thought I'd write about the solution that worked for me in this post.

Solution:
Running the following commands will fix/complete an incomplete patching of the GRID_HOME on an Oracle Restart setup.
Run the following commands as the root user while Oracle Restart HAS is stopped:
# $GRID_HOME/crs/install/roothas.sh -unlock
# $GRID_HOME/crs/install/roothas.sh -prepatch 
# $GRID_HOME/crs/install/roothas.sh -postpatch


Oracle Restart HAS will start up automatically after the last command.

Duplicate of a Standby DB fails with ORA-19845

$
0
0
While creating a standby database on a RAC environment from another RAC database using the RMAN duplicate method, I was getting this weird error:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/20/2020 14:20:14
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-19845: error in backupControlFile while communicating with remote database server
ORA-17628: Oracle error 1580 returned by remote Oracle server
ORA-01580: error creating control backup file
ORA-19660: some files in the backup set could not be verified
ORA-19661: datafile 0 could not be verified
ORA-19845: error in backupControlFile while communicating with remote database server
ORA-17628: Oracle error 1580 returned by remote Oracle server
ORA-01580: error creating control backup file


I was running the duplicate command from the primary DB (Node1)

After a long investigation, I figured out that the snapshot controlfile location is not shared between the two RAC instances (on the primary side):

RMAN> show SNAPSHOT CONTROLFILE NAME;

RMAN configuration parameters for database with db_unique_name SPRINTS are:
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/cloudfs/backup/sprint/snapcf_sprint.f';


That location "/cloudfs/backup/sprint" was only available on Node1 and not available on Node2.

I simply went to Node2, created the full location path, and the duplicate succeeded!
On Node2:


# mkdir -p /cloudfs/backup/sprint
# chown oracle /cloudfs/ -R


Conclusion:
As a rule of thumb, the SNAPSHOT CONTROLFILE location should be available on all RAC nodes, even if it's not a shared location!
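
Alternatively, instead of creating the same path on every node, the snapshot controlfile can be pointed to a location that already exists on all nodes; a small sketch (the path below is just a hypothetical example, e.g. an ASM disk group or a genuinely shared filesystem):

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/sprint/snapcf_sprint.f';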


OEM 13c Agent installation failed with: plugins.txt not found. The Management Agent installation failed. The plug-in oracle.sysman.oh may not be present in the Management Agent software

Problem:
At the end of an OEM 13c agent manual installation on one of the servers I got this message:

Finished post install
Plugin txt:
Inside if , is empty
/u01/oracle/agent13c/plugins.txt not found. The Management Agent installation failed. The plug-in oracle.sysman.oh may not be present in the Management Agent software. Ensure that the Management Agent software has the oracle.sysman.oh monitoring and discovery plug-in. 


Reason:
When I copied the agent binary from the OMS server to the target server, I copied the one located under:
/u01/oracle/13cmiddleware/sysman/agent/13.4.0.0.0_AgentCore_226.zip

This agent was outdated because minor changes had happened on the OMS since it was initially installed.

Fix:
You have to generate a fresh agent from the OMS and use it on the target server to install the agent:

On the OMS server: Generate a new agent:

Create a temporary directory that will hold the new generated agent:
# mkdir /tmp/emcli

Login to OMS using emcli with sysman:
# cd /u01/oracle/13cmiddleware/bin
# ./emcli login -username=sysman -password=xxxx
Login successful
# ./emcli sync
Synchronized successfully

Get the platform details:
# ./emcli get_supported_platforms
-----------------------------------------------
Version = 13.4.0.0.0
Platform = Linux x86-64
-----------------------------------------------
Platforms list displayed successfully.


Generate the agent, providing the temp location along with the platform and version you got from the previous command:

# ./emcli get_agentimage -destination=/tmp/emcli -platform="Linux x86-64" -version=13.4.0.0.0
=== Partition Detail ===
Space free : 4 GB
Space required : 1 GB
Check the logs at /u01/oracle/gc_inst/em/EMGC_OMS1/sysman/emcli/setup/.emcli/get_agentimage_2020-05-27_13-55-05-PM.log
Downloading /tmp/emcli/13.4.0.0.0_AgentCore_226.zip
File saved as /tmp/emcli/13.4.0.0.0_AgentCore_226.zip
Downloading /tmp/emcli/13.4.0.0.0_Plugins_226.zip
File saved as /tmp/emcli/13.4.0.0.0_Plugins_226.zip
Downloading /tmp/emcli/unzip
File saved as /tmp/emcli/unzip
Executing command: /tmp/emcli/unzip /tmp/emcli/13.4.0.0.0_Plugins_226.zip -d /tmp/emcli
Exit status is:0
Agent Image Download completed successfully.


Now copy the newly generated agent to the target server and use it for installing the agent:
# scp /tmp/emcli/13.4.0.0.0_AgentCore_226.zip oracle@SRV1:/u01/oracle/agent13c

On the Target server install the copied agent software:

# mkdir /backup/tmp
# cd /u01/oracle/agent13c
# unzip 13.4.0.0.0_AgentCore_226.zip
# /u01/oracle/agent13c/agentDeploy.sh AGENT_BASE_DIR=/u01/oracle/agent13c \
-force \
-ignorePrereqs \
-invPtrLoc /etc/oraInst.loc  \
AGENT_PORT=3872 \
EM_UPLOAD_PORT=4903 \
OMS_HOST=OMSSRV \
ORACLE_HOSTNAME=SRV1 \
AGENT_INSTANCE_HOME=/u01/oracle/agent13c/agent_inst \
AGENT_REGISTRATION_PASSWORD=xxxx \
SCRATCHPATH=/backup/tmp


At the end of installation run this script: [By root]
# /u01/oracle/agent13c/agent_13.4.0.0.0/root.sh           

Make sure the agent is running:
# /u01/oracle/agent13c/agent_13.4.0.0.0/bin/emctl status agent

Conclusion:
As a rule of thumb, always generate a fresh agent on the OMS server to use for installing the EM agent on the target server.

Create a Read Only EM Account on Cloud Control 13c Viewing Databases Performance Pages

Creating a read-only EM account that can view database performance pages and generate AWR and ASH reports looks like a piece-of-cake task, but it took me a long time to figure out, so I thought I'd write this post about it.

Login to your OEM console: i.e. https://xxx:7803/em
Login as admin account i.e. sysman
Go to Setup -> Security -> Administrators


 -> Click on Create

(Enter the username and password)

-> Next (leave the default roles EM_USER, Public)

-> Next 
Check the following:
   Connect to any viewable target
    Monitor Enterprise Manager
    View any Target


   Go down the page to "Target Privileges" section:
   Click Add to add all databases the user will need to access
A new window will pop up; check all the databases the user will need to access, then click Select

   Then check the databases, and under the "Manage Target Privilege Grants" tab click the pen icon

Then Select: [Of course you will need to follow all the pages to check all the privileges you need]
   Connect Target, View Database Actions, View Database ADDM, View Database Advanced Queues, View Database Alert Logs, Manage Database ASH Reports, View Database ASH Reports and Analytics, Manage Database AWR Settings, View Database AWR Reports, View Database Backup, View Database Clients, View Database Links, View Database Dimensions, View Database Directory Objects, View Database Feature Usage, View High Availability Console, View Database indexes, View Database In Memory Setting, View Database Memory Usage, View Database Modules, View Database Materialized Views, View Database Packages and Package Bodies, View Database Performance Home Page, View Database Performance Privilege Group, View Database Optimizer Statistics, View Database Procedures and Functions, View Database Redo Logs, View Database Resources, View Database Roles, View Database Scheduler, View Database Schema Privilege Group, View Database Segments, View Database Sequences, View Database Services, View Database Sessions, View Database SQL Performance Analyzer, View Database SQL Monitor, View Database SQL Plan Control, Use Database SQL Tuning Advisor, View Database SQL Tuning Sets, View Database SQLs, View Database Storage Privilege Group, View Database Synonyms, View Database Table Data, View Database Tables, View Database Tablespaces, View Database Text Indexes, View Database Top Activity, View Database triggers, View Database Types, View Database Users, View Database Workspaces, View XML Database

  Click Continue
  This will allow the user to have a Read Only privilege on the selected DB along with generating AWR & ASH reports.


-> Next
-> Next
-> Finish

In case you need to create another EM user with privileges similar to the one you already created, you don't have to go through this daunting task again; just go to Setup -> Security -> Administrators


Check the user you want to copy and click on the "Create Like" button

-> Enter Name & Password


-> Click Review at the most right side of the page
 

-> Finish
That one was easy!

19c Grid Infrastructure Installation fails when running root.sh with PRKH-1010 : Unable to communicate with CRS services.

Problem:

While executing root.sh during a 19c Grid Infrastructure installation on a two-node RAC, I got this error:

2020/05/18 10:22:47 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/05/18 10:22:59 Oracle Clusterware infrastructure error in CLSECHO (OS PID 27328): ADR home path /u01/oracle/diag/crs/clssrv1-vip2/crs does not exist; ADR initialization will try to create it
CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
PRKH-1010 : Unable to communicate with CRS services.

PRKH-3003 : An attempt to communicate with the CSS daemon failed

2020/05/18 10:23:31 CLSRSC-180: An error occurred while executing the command 'srvctl start listener -l LISTENER'
Died at /u01/grid/12.2.0.3/crs/install/crsutils.pm line 12160. 


Analysis:

When I checked the server's static hostname, it wasn't set properly:

 # cat /etc/hostname
localhost.localdomain

# hostnamectl | grep hostname
Static hostname: localhost.localdomain
Transient hostname: clssrv1-vip2


# hostnamectl --static
localhost.localdomain 


Fix:

Set the server static host name properly using this Linux command:

# hostnamectl set-hostname clssrv1

Now the static hostname shows the correct server name:

# hostnamectl
   Static hostname: clssrv1         
         Icon name: computer-server
           Chassis: server
        Machine ID: r93o4kde7b4d8o333jr4406686a4e
           Boot ID: 6b34567b7a3577767a1e3768306d954
  Operating System: Oracle Linux Server 7.4
       CPE OS Name: cpe:/o:oracle:linux:7:4:server
            Kernel: Linux 4.1.12-94.3.9.el7uek.x86_64
      Architecture: x86-64


Now re-execute the root.sh script. If it fails again, remove the Grid Infrastructure installation and reinstall it.

Re-execution of Duplicate command fails on 19c with RMAN-06054: media recovery requesting unknown archived log

Problem:
On a 19.5 database, I was trying to duplicate a database from an RMAN backup. The first run of the duplicate failed after restoring 90% of the datafiles due to a space limitation on the underlying datafiles location. I re-ran the same duplicate command a second time, without deleting the already restored datafiles, to see whether 19c RMAN would be able to continue the previous duplicate without problems.

I restarted the auxiliary DB again in mount mode and re-ran the duplicate command:
run {
ALLOCATE AUXILIARY CHANNEL ch1 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL ch2 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL ch3 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL ch4 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL ch5 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL ch6 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL ch7 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL ch8 DEVICE TYPE DISK;
duplicate database to "AWSNFR"  backup location '/bkp' nofilenamecheck
UNTIL TIME "TO_DATE('31/05/2020 15:20:52', 'DD/MM/YYYY HH24:MI:SS')";
}


But the second run of the duplicate command failed with the following error after restoring the rest of the datafiles, requesting a very old archivelog to recover the database:

.....
Starting restore at 07-Jun-2020 11:04:26
creating datafile file number=19 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.283.1039812437
creating datafile file number=20 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.284.1039812457
creating datafile file number=21 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.285.1039812475
creating datafile file number=24 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.288.1039812529
creating datafile file number=25 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.289.1039812549
creating datafile file number=27 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.291.1039812587
creating datafile file number=28 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.292.1039812605
creating datafile file number=42 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.306.1039812873
creating datafile file number=43 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.307.1039812891
creating datafile file number=57 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.321.1039813155
creating datafile file number=58 name=/awsnfr/AWSNFRDB1/datafiles/p_fz_data.322.1039813173

......

Finished restore at 07-Jun-2020 12:05:28
.....
starting media recovery

unable to find archived log
archived log thread=1 sequence=7


RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 06/07/2020 12:05:31
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 7 and starting SCN of 2470350


Analysis:
You can notice from the above log that when the duplicate command ran again, it started by re-creating (not restoring) some blank datafiles (the "creating datafile" lines above).

After the failure of the duplicate command, I ran the following query and figured out that those re-created datafiles are the only ones with a much lower checkpoint_time and checkpoint_change# than the other datafiles:

col name for a60
col CREATION_TIME for a20
col checkpoint_time for a20
select FILE#||'|'||name name,fuzzy, creation_change#,
to_char(creation_time, 'DD-MON-YYYY HH24:MI:SS') as creation_time,
to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
to_char(checkpoint_change#, '999999999999999999999') as checkpoint_change#
from v$datafile_header;


Fix:
I decided to restore those datafiles manually from RMAN:

RMAN> restore datafile 19,20,21,24,25,27,28,42,43,57,58;

RMAN> recover database;

RMAN> alter database open resetlogs;

Voilà! Database opened.

SQL> alter tablespace temp add tempfile;



Conclusion:
In Oracle 19c, RMAN can continue a duplicate from where it stopped; it will pick up from where the previous execution left off. The catch is that it will not be able to re-restore the datafiles that were in the middle of restoration during the last duplicate execution, so it will try to create them as blank datafiles, and during the recovery it will try to recover these datafiles from the time they were created on the source DB. Most probably those archivelogs date back to the creation of the source database itself and are no longer available. The DBA has to correct this behavior by restoring those datafiles manually from the same RMAN backup the duplicate was using. So the re-execution of a duplicate operation on 19c is still not fail-proof!

18c Remarkable New Features

18c is the new name of 12.2.0.2. In 2018, Oracle started naming new DB versions after the year in which the product was released. I've already covered the top remarkable new features of 12.1 and 12.2.0.1 in a previous post: http://dba-tips.blogspot.com/2019/11/a-summary-of-remarkable-new-features-in.html

In this post I'll cover the top new features of 18c, which are not many!

Miscellaneous Features:

The shadow lost write protection feature was introduced to minimize data loss and the time required to repair a database due to lost writes.

To Enable:
  -- First: Create a shadow bigfile tablespace to store only the system change numbers (SCNs) for the tracked data files; it will allocate about 2% of the size of the protected data:
  SQL> CREATE BIGFILE TABLESPACE SHADOW_WRITE DATAFILE '/oradata/shadow_lwp1.dbf' SIZE 1G LOST WRITE PROTECTION;

  -- Second: Enable the scope of protection:
     [Database wise]
      SQL> ALTER DATABASE ENABLE LOST WRITE PROTECTION;
     [Tablespace wise]
      SQL> ALTER TABLESPACE USERS ENABLE LOST WRITE PROTECTION;
     [Datafile wise]
    SQL> ALTER DATABASE DATAFILE '/oradata/dfile1.dbf' ENABLE LOST WRITE PROTECTION;


To Disable: [Use the same commands as above but with the DISABLE keyword]
     [Database wise]
      SQL> ALTER DATABASE DISABLE LOST WRITE PROTECTION;
     [Tablespace wise]
      SQL> ALTER TABLESPACE USERS DISABLE LOST WRITE PROTECTION;
     [Datafile wise]
    SQL> ALTER DATABASE DATAFILE '/oradata/dfile1.dbf' DISABLE LOST WRITE PROTECTION;


SQL statement level KILL instead of killing the whole session:
Starting from 18c you can cancel the currently running SQL statement while leaving its session connected:
  SQL> ALTER SYSTEM CANCEL SQL '<SID>, <SERIAL#>, <SQLID>';
i.e.
  SQL> ALTER SYSTEM CANCEL SQL '20, 51142, 8vu7s907prbgr';
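
To find the values to plug into the command, a quick lookup against v$session can be used (filtering on the target username here is just an example):

SQL> SELECT sid, serial#, sql_id, username, status
     FROM v$session
     WHERE username = 'HR' AND status = 'ACTIVE' AND sql_id IS NOT NULL;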


ORACLE HOME read-only mode:
Starting from 18c Oracle Home can be configured in a read-only mode, thus preventing creation or modification of files inside the Oracle home directory. A read-only Oracle home can be used as a software image that can be shared across multiple database servers.
How to implement this feature:
While installing a new Oracle Database software, choose software-only option, then configure it as a read-only Oracle home before you create the listener and the database.

Password File is now under ORACLE_BASE:
New location for Password file under $ORACLE_BASE/dbs instead of $ORACLE_HOME/dbs

Schema only accounts:
In Oracle 18c there is a new account type called a schema-only account, which can hold objects but cannot be logged in to:

      SQL> create user test no authentication;
or:
      SQL> alter user test no authentication;


New Initialization Parameters:

OPTIMIZER_IGNORE_HINTS [Default is FALSE]
This will force the Optimizer to ignore all embedded hints.

OPTIMIZER_IGNORE_PARALLEL_HINTS [Default is FALSE]
Will force the Optimizer to ignore PARALLEL hints.

FORWARD_LISTENER
Forwards all incoming connections from the REMOTE_LISTENER to a particular listener.

References:
https://apex.oracle.com/database-features

Stay tuned for the 19c new features post.

19c Remarkable New Features

Oracle 19c is packed up with lots of cool features, I've summarized the significant ones in below categories.

Support:

[Supported until 2027, as per Note 742060.1, which keeps changing]


Performance Features:

- SQL Plan Management automatically evolves the plans and accepts the best ones.

- Gather Statistics Auto task can run more frequently:
    - Enable high-frequency task:    
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
    - Set Maximum job run time duration: [e.g. 10min = 600 sec]
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_MAX_RUN_TIME','600');
    - Job execution frequency:  [e.g. 6hours = 21600 seconds]
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','21600');

- Caching Big tables:
Specify what percentage of the db_buffer_cache will be allocated for caching big tables: [e.g. 40%]
SQL> alter system set db_big_table_cache_percent_target = 40;

Note: Oracle will automatically cache any big table in memory if its temperature goes above 1000.
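
To see which tables the big table cache is currently tracking and their temperatures, the v$bt_scan_obj_temps view can be queried; a quick sketch:

SQL> SELECT dataobj#, size_in_blks, temperature, policy, cached_in_mem
     FROM v$bt_scan_obj_temps ORDER BY temperature DESC;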

- Automatic Indexing:
Automatic Indexing analyzes the application workload every 15 minutes and automatically creates, drops, and rebuilds unusable B-tree indexes in the database based on changes in the application workload.

Auto Index Creation:
Indexes are initially created as INVISIBLE indexes. If the analysis shows a performance improvement for the candidate SQL statements, they are converted to VISIBLE and used by the application; if not, the created indexes are marked as UNUSABLE and dropped later, and the candidate SQL statements are blacklisted so they won't use auto indexes in the future.

Auto Index Deletion:
If Automatic Indexing finds that an index created by the Auto Indexing feature has been unused for 373 days, it will be dropped automatically.
This rule doesn't apply to manually created indexes (those not created by the Auto Indexing feature).

Technical Details:
------------------
Enable/Disable:
..............
Enable Auto Indexing and Allow creation of VISIBLE auto indexes to be used immediately by the Application:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

Enable Auto Indexing but Allow only the creation of INVISIBLE auto indexes, so it will NOT be used by the Application:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

Disable the Auto Indexing feature:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

Scope of Analysis:

ADD ALL schemas to the Auto Indexing scope:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', NULL, TRUE);

REMOVE a schema from Auto Indexing list:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'HR', FALSE);

REMOVE ALL schemas from the Auto Indexing scope:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', NULL, FALSE);

Add a schema to the Auto Indexing scope:
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'HR', NULL);

Control the Auto Deletion RETENTION of the UNUSED indexes created by AUTO INDEXING feature: [e.g. 90 days | The default is 373 days]
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '90');

Control the Auto Deletion RETENTION of the UNUSED indexes created manually: [Default is never to be deleted]
[Strongly recommended to NOT set this parameter]
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL', '90');
    To Reset it back to Never Delete Manually created indexes:
    SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL', NULL);

Specify the Default Tablespace to store Auto indexes: [e.g. IDX_TBS tablespace | The default is to store them on the DEFAULT TABLESPACE for the database]
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE', 'IDX_TBS');

Specify percentage of tablespace to allocate for new Auto indexes: [e.g. 5%]
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SPACE_BUDGET', '5');

Show automatic Indexing information for last 24 hours:
declare
 report clob := null;
begin
 report := DBMS_AUTO_INDEX.REPORT_ACTIVITY();
end;
/

Show the last activity for Automatic Indexing operation:
declare
 report clob := null;
begin
 report := DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY(
 type => 'HTML',
 section => 'SUMMARY +INDEX_DETAILS +ERRORS',
 level => 'BASIC');
end;
/

Show automatic Indexing information for a specific time: [HTML]
declare
 report clob := null;
begin
 report := DBMS_AUTO_INDEX.REPORT_ACTIVITY(
 activity_start => TO_TIMESTAMP('2018-11-01', 'YYYY-MM-DD'),
 activity_end => TO_TIMESTAMP('2018-12-01', 'YYYY-MM-DD'),
 type => 'HTML',
 section => 'SUMMARY',
 level => 'BASIC');
end;
/

Configurations:
...............
SQL> SELECT * FROM DBA_AUTO_INDEX_CONFIG;
SQL> SELECT OWNER,INDEX_NAME,AUTO FROM DBA_INDEXES where AUTO='YES';


New Non blocking DDL statements added in 19c:

Just to remind you of the non-blocking DDL statements that were added in the previous releases:

Non-blocking ddl's added as of 11.2:

CREATE INDEX online
ALTER INDEX rebuild online
ALTER TABLE add column not null with default value
ALTER TABLE add constraint enable no validate
ALTER TABLE modify constraint validate
ALTER TABLE add column (without any default)
ALTER INDEX visible / invisible
ALTER INDEX parallel / no parallel

Non-blocking ddl's added to the list in 12.1:

DROP INDEX online (backported to 11.2)
ALTER TABLE set unused column online
ALTER TABLE drop constraint online
ALTER INDEX unusable online
ALTER TABLE modify column visible / invisible
ALTER TABLE move partition / sub-partition online
ALTER TABLE add nullable column with default value

Non-blocking ddl's added to the list in 12.2:

ALTER TABLE split partition [sub-partition] online
ALTER TABLE move online (move of a non-partitioned table)
ALTER TABLE modify partition by .. online (to convert a non-partitioned table to partitioned state)

Non-blocking ddl's added to the list in 18.1:

ALTER TABLE merge partition online
ALTER TABLE modify partition by .. online (to change the partitioning schema of a table)


Security Features:

- In 19c, cluster interconnect traffic is automatically secured by Transport Layer Security (TLS); no manual configuration is needed by the DBA.

- Schema-only accounts have no password, so no direct login is allowed:
SQL> CREATE USER TEST NO AUTHENTICATION;

- Privilege analysis is now available as part of Oracle Database Enterprise Edition to show exactly which privileges are used or not used by each account.
  You create a capture policy and enable it for some time to do the analysis, then you disable the policy and review the results, as in the sketch below.
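
  A minimal sketch of such a capture using DBMS_PRIVILEGE_CAPTURE (the capture name is hypothetical):

  -- Create and enable a database-wide capture:
  SQL> EXEC DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(name => 'APP_PRIV_ANALYSIS', type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  SQL> EXEC DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(name => 'APP_PRIV_ANALYSIS');
  -- ...let the application run for a representative period, then:
  SQL> EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(name => 'APP_PRIV_ANALYSIS');
  SQL> EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT(name => 'APP_PRIV_ANALYSIS');
  -- Review what was used vs. never used:
  SQL> SELECT * FROM dba_used_privs   WHERE capture = 'APP_PRIV_ANALYSIS';
  SQL> SELECT * FROM dba_unused_privs WHERE capture = 'APP_PRIV_ANALYSIS';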


RAC/Clusterware Features:

- In 19c, cluster interconnect traffic is automatically secured by Transport Layer Security (TLS); no manual configuration is needed by the DBA.
- Starting from 19.3, Oracle again supports placing OCR/VOTEDISKs on a non-ASM shared filesystem.


Import/Export Features:


New parameters introduced to impdp:
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y     Disable Logging for the imported object during the import and set it back to LOGGING after.
TRANSFORM=TABLE_COMPRESSION_CLAUSE:COMPRESS     Import tables with BASIC compression option
TRANSFORM=TABLE_COMPRESSION_CLAUSE:'ROW STORE COMPRESS ADVANCED'    Import tables with OLTP compression option
logtime=all     Enable time logging for output log messages for each step
COMPRESSION_ALGORITHM=[BASIC | LOW | MEDIUM | HIGH]     Compress the dumpfile with different compression algorithms.

In 19c, a unified audit policy can be created to audit Data Pump EXPORT/IMPORT operations:
CREATE AUDIT POLICY audit_dp_all_policy ACTIONS COMPONENT=DATAPUMP [EXPORT | IMPORT | ALL];
AUDIT POLICY audit_dp_all_policy BY all;
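
Once the policy is enabled, the captured Data Pump operations can be reviewed in the unified audit trail; a rough sketch, assuming unified auditing is in use and that Data Pump records carry the 'Datapump' audit type:

SELECT event_timestamp, dbusername, action_name, dp_text_parameters1
FROM unified_audit_trail
WHERE audit_type = 'Datapump'
ORDER BY event_timestamp DESC;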


Backup & Recovery Features:

- A clone or standby DB can be created using the dbca -createDuplicateDB command:
# dbca -createDuplicateDB
    -gdbName             global_database_name
    -primaryDBConnectionString     easy_db_connection_string
    -sid             database_system_identifier
    [-initParams         initialization_parameters
        [-initParamsEscapeChar initialization_parameters_escape_character]]
    [-sysPassword         SYS_user_password]
    [-adminManaged]
    [-nodelist             database_nodes_list]
    [-datafileDestination     data_files_directory]
    [-recoveryAreaDestination     recovery_files_directory
        [-recoveryAreaSize     fast_recovery_area_size]]
    [-databaseConfigType     {SINGLE | RAC | RACONENODE}
        [-RACOneNodeServiceName service_name_for_RAC_One_Node_database]]
    [-useOMF {true | false}]
    [-storageType {FS | ASM}
        [-asmsnmpPassword     ASMSNMP_password]
        -datafileDestination     database_files_directory]
    [-createListener         new_database_listener]
    [-createAsStandby
        [-dbUniqueName db_unique_name_for_standby_database]]


Availability Features:


- The restore points created on Primary are automatically propagated to the Standby.

- Easy Connect Plus can initiate a connection with a simple connect string:
sales-server//inst1

This Translates to:
(DESCRIPTION=
   (CONNECT_DATA=
      (SERVICE_NAME=)
      (INSTANCE_NAME=inst1))
   (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=sales-server)
      (PORT=1521)))

e.g.
conn aa/aa@1.1.1.1/orcl


Data Guard Features:

- Duplicate a standby DB from an active primary database using compression:
In 19c you can duplicate a standby database from an active primary DB using the COMPRESSION feature, which sends the database blocks over the network in compressed format, saving bandwidth as well as time.

Command example:

run {
allocate channel disk1 type disk;
allocate auxiliary channel aux1 type disk;
duplicate target database for standby from active database USING COMPRESSED BACKUPSET
spfile
parameter_value_convert 'orcl','orcls'
set db_name='orcl'
set db_unique_name='orcls';

}

- Automatic flashback of the standby:
When a FLASHBACK operation happens on the primary DB, the Standby DB will be automatically flashed back.

- Restore points propagation:
  After a switchover between primary and standby, the restore points created on the primary before the switchover will be available on the standby.

- DML Operations on Active Data Guard: [an insignificant feature, but I thought I'd mention it]

If enabled, DML transactions can run on the Standby DB. The DMLs are not applied directly on the standby; they are first shipped to the Primary and applied there, then the changes are replicated back to the Standby, and only then does the DML complete (a long path that can slow down the primary).

 To Enable DMLs on Standby:
 SQL> ALTER SYSTEM SET ADG_REDIRECT_DML=TRUE;
     Or: Enable it on session level:
     SQL> ALTER SESSION ENABLE ADG_REDIRECT_DML;
 To Enable PL/SQL operations on Standby:
 SQL> ALTER SESSION ENABLE ADG_REDIRECT_PLSQL;

- New Parameters to restart ADG process in case of hung network/IO:

  - DATA_GUARD_MAX_IO_TIME sets the maximum number of seconds that can elapse before a process is considered hung while performing a regular I/O operation in an Oracle Data Guard environment. [Default 240 sec]
  - DATA_GUARD_MAX_LONGIO_TIME sets the maximum number of seconds that can elapse before a process is considered hung while performing a long I/O operation in an Oracle Data Guard environment. [Default 240 sec]

- Sync the gap between the primary and the standby using one RMAN command: [From Standby side]
RMAN> recover standby database from service orclpr;

Note: orclpr is the service which connects to the Primary DB from the standby.
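
To gauge the gap before and after running the command, the last applied sequence per thread can be checked on the standby; a quick sketch:

SQL> SELECT thread#, MAX(sequence#) last_applied_seq
     FROM v$archived_log
     WHERE applied = 'YES'
     GROUP BY thread#;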

- Desupported ADG parameters:
  - MAX_CONNECTIONS attribute of the LOG_ARCHIVE_DEST_n initialization parameter.

Miscellaneous Features:

- The DBA_REGISTRY_BACKPORTS view was introduced to show the bugs that have been fixed by the patches applied to the database.
- Oracle Multimedia is desupported. Oracle recommends storing multimedia content in SecureFiles LOBs and using third-party products for image processing and conversion.
- Oracle Streams is desupported. Oracle recommends using Oracle GoldenGate to replace all replication features of Oracle Streams.
- DBMS_JOB jobs are converted to DBMS_SCHEDULER jobs in Oracle Database 19c. DBMS_JOB has already been deprecated since 12.2.

References:
https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/database-administrators-guide.pdf
https://apex.oracle.com/database-features/