Channel: Database Administration Tips

ORADEBUG How To


What is ORADEBUG:

ORADEBUG is a debugging utility that can trace any session, dump database memory structures, and suspend/resume a session. Its most useful feature is hang analysis: when an instance hangs, it can produce a report showing the blocked and blocking sessions.

How to use ORADEBUG:

First, log in to SQL*Plus AS SYSDBA.


Second, set your PID:

SQL> oradebug setmypid

To perform a cluster wide analysis:

SQL> oradebug setinst all

To make the tracefile size unlimited:

SQL> oradebug unlimit

To run the hang analysis (the most useful ORADEBUG feature):

SQL> oradebug -g def hanganalyze <level#>

-> This creates a trace file and prints its path.

Available levels:
10     Dump all processes (IGN state)
5      Level 4 + Dump all processes involved in wait chains (NLEAF state)
4      Level 3 + Dump leaf nodes (blockers) in wait chains (LEAF,LEAF_NW,IGN_DMP state)
3      Level 2 + Dump only processes thought to be in a hang (IN_HANG state)
1-2  Only HANGANALYZE basic output, no process dump at all

*Levels 4 and above are resource intensive and produce large output, which can impact instance performance.

Oracle recommends using level 3:

SQL> ORADEBUG -g def hanganalyze 3
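
Putting the steps above together, a complete cluster-wide hang analysis session might look like this sketch (assuming SYSDBA access on a RAC instance):

```sql
-- Connect first: sqlplus / as sysdba
oradebug setmypid                -- attach ORADEBUG to the current session
oradebug setinst all             -- include all RAC instances in the analysis
oradebug unlimit                 -- remove the trace file size limit
oradebug -g def hanganalyze 3    -- run hang analysis at the recommended level
oradebug tracefile_name          -- print the path of the generated trace file
```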


How to read the HANGANALYZE trace:

Generally: read the log and search for the keyword "final blocker"; if not found, search for "blocked by" and "blocking".

Note: Open chains consist of processes waiting for a resource held by a blocker. These are not true hangs and can be resolved by killing the holding session, allowing the blocked processes to proceed. This is useful when a process is holding a latch (a frequent example is the library cache latch) and other processes are waiting for it.


Check the section "State of ALL nodes" at the end of the HANGANALYZE log:

The data under this line means:
([nodenum]/cnode/sid/sess_srno/session/ospid/state/[adjlist]):

"cnode" column means: cluster node number
"sid" is the Oracle session ID!
"sess_srno" column means: SERIAL#
"ospid" The operating system process ID
"state" columns have the following vlaues:

A session in state "LEAF" or "LEAF_NW" is blocking others and is a good candidate for a kill, especially "LEAF".
A session in state "NLEAF" is a waiting session that appears to be hanging.
A session in state "IN_HANG" indicates a real problem.
A session in state "IGN" or "IGN_DMP" is idle.


Caution: Do not kill critical processes like SMON or PMON as that would terminate the instance.

Other Commands:

To Check the current trace file name:

SQL> oradebug tracefile_name

Flush any pending writes to the trace file and close it:

SQL> oradebug flush
SQL> oradebug close_trace

To trace a specific session by its OS process ID:

SQL> oradebug setospid <os_pid>

To trace a specific session by its Oracle PID:

SQL> oradebug setorapid <ora_pid>
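
Once attached to a session with setospid or setorapid, a common next step is enabling SQL trace on it via event 10046. A sketch (the PID value is only an example):

```sql
oradebug setospid 12518                                    -- attach by OS PID (example value)
oradebug event 10046 trace name context forever, level 12  -- SQL trace with binds and waits
oradebug tracefile_name                                    -- locate the trace file
oradebug event 10046 trace name context off                -- stop tracing
```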


To freeze/Suspend a session:

First, get the OS PID of that session and point ORADEBUG at it:
SQL> oradebug setospid 12518


*12518 is the OS PID of the session you want to suspend


Second, freeze/suspend the session; even if it is in the middle of an operation it will freeze:
SQL> oradebug suspend

To unfreeze/resume the session so it continues its work:
SQL> oradebug resume


Trace Oracle Clusterware:

Trace CRS events:

SQL> oradebug dump crs 3

Trace CSS behaviour:

SQL> oradebug dump css 3

Trace OCR:

SQL> oradebug dump ocr 3

Remember:

-You can run oradebug in parallel from different SQL*Plus sessions.

-The ORADEBUG utility is sparsely documented by Oracle because of the caveats around using it: calling kernel functions incorrectly can damage the database.

Reference: http://mgogala.byethost5.com/oradebug.pdf

About DEFAULT ROLES


There was a conversation between me and an auditor:

Auditor: What is the default role for the database?
Me:        What do you mean by "default role for the database"?!
Auditor: We found this output in the script log we asked you to run for us:


GRANTEE    GRANTED_ROLE           ADM  DEFAULT_ROLE
---------- ---------------------- ---- ------------
SYSTEM     AQ_ADMINISTRATOR_ROLE  YES  YES
SYSTEM     TTXLY_SUDI_ACCESS      YES  YES
.....

Now let me explain:

First, forget the auditor's notion of a "database default role"; there is no such thing.

So what does the DEFAULT_ROLE column in the dba_role_privs view represent?

By default, Oracle makes every role granted to a user a default role for that user, to spare the user the headache of enabling roles manually every time they are needed.

This means user HR does not need to explicitly enable the "RESOURCE" role with the "set role resource;" command every time he tries to create a table, because "RESOURCE" has already been set as a DEFAULT role for him.

Here is an example:

Now I'll make the "resource" role a non-default role for user HR to see what happens:

SQL> sho user
USER is "SYS"


SQL> alter user hr default role all except resource;
User altered.


SQL> select * from dba_role_privs where grantee='HR';
GRANTEE    GRANTED_ROLE    ADM  DEFAULT_ROLE
---------- --------------- ---- ------------
HR         RESOURCE        NO   NO
HR         XXX             NO   YES

Now I’ll login with HR user and try to create a new table:

SQL> conn hr/hr
Connected.


SQL> create table asd as select * from employees;
create table asd as select * from employees                                 
ERROR at line 1:
ORA-01031: insufficient privileges


This is what happens when you make a role non-default: to use it, you have to enable it explicitly, e.g. for "resource":

SQL> set role resource;
Role set.

Now user HR can create the table after enabling the "RESOURCE" role:

SQL> create table asd as select * from employees;
Table created.
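
To verify which roles are currently enabled in a session, you can query the standard SESSION_ROLES data dictionary view; after "set role resource;" it should list RESOURCE:

```sql
-- Shows roles currently enabled in this session (not just granted ones)
SELECT role FROM session_roles;
```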
                                                                           
Conclusion:
Oracle removes this hassle by automatically making every role granted to a user a DEFAULT role, unless the administrator sets it as non-default.


Here are some useful commands:

To check the maximum number of roles that can be enabled per session in the database:

SQL> sho parameter max_enabled_roles
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_enabled_roles                    integer     150

To make a role as a NON-DEFAULT role:

SQL> alter user HR default role all except RESOURCE;

To make all roles assigned to a user default roles:

SQL> alter user HR default role all;

To check the default and non default roles assigned to a user:

SQL> select * from dba_role_privs where grantee='HR';
GRANTEE    GRANTED_ROLE    ADM  DEF
---------- --------------- ---- ---
HR         RESOURCE        NO   YES
HR         XXX             NO   YES

SQL> desc dba_role_privs
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 GRANTEE                                          VARCHAR2(30)
 GRANTED_ROLE                              NOT NULL VARCHAR2(30)
 ADMIN_OPTION                              VARCHAR2(3)
 DEFAULT_ROLE                               VARCHAR2(3)



Regarding the Roles protected by password:
When you grant a user a password-protected role, it still becomes a DEFAULT role by default, but the user must run the "set role ... identified by" command with the password in order to use it:

Here is an example:

SQL> sho user
USER is "SYS"


SQL> create role xxx identified by 123;
Role created.


SQL> grant select on scott.emp to xxx;
Grant succeeded.


SQL> grant xxx to hr;
Grant succeeded.


SQL> select * from dba_role_privs where grantee='HR';
GRANTEE    GRANTED_ROLE    ADM  DEF
---------- --------------- ---- ---
HR         RESOURCE        NO   YES
HR         XXX             NO   YES


As we can observe, role xxx is a default role by default.

Now, can we use the "xxx" role before setting it? Let's try:


SQL> conn hr/hr
Connected.


SQL> desc scott.emp
ERROR:
ORA-04043: object scott.emp does not exist

To use the password-protected role "xxx", you have to set it explicitly with the following command:

SQL> set role xxx identified by 123;
Role set.

Now you can use "xxx" role by selecting from scott.emp table:

SQL> desc scott.emp
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 EMPNO                                     NOT NULL NUMBER(4)
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 MGR                                                NUMBER(4)
 HIREDATE                                           DATE
 SAL                                                NUMBER(7,2)
 COMM                                               NUMBER(7,2)
 DEPTNO                                             NUMBER(2)

Gathering Fixed Objects Statistics


What are the fixed objects:

Fixed objects are the x$ tables and their indexes.

Why we must gather statistics on fixed objects:

If the statistics are not gathered on fixed objects, the Optimizer will use predefined default values for the
statistics. These defaults may lead to inaccurate execution plans.

Does Oracle gather statistics on fixed objects:

Statistics on fixed objects are not gathered automatically, nor as part of the database-wide statistics gathering procedures.

When to gather statistics on fixed objects:

-After a major database or application upgrade.
-After implementing a new module.
-After changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
-Poor performance/Hang encountered while querying dynamic views e.g. V$ views.
-This task should be done only a few times per year.

Note: 
-It's recommended to gather fixed object stats during peak hours (while the system is busy), or after the peak hours while sessions are still connected (even if idle), to guarantee that the fixed object tables have been populated and the statistics represent the real DB activity.
-Performance degradation may be experienced while the statistics are being gathered.
-Having no statistics is better than having unrepresentative statistics.

How to gather stats on fixed objects:

First, check the last analyzed date:


select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KGLDP';

OWNER  TABLE_NAME  LAST_ANAL
------ ----------- ---------
SYS    X$KGLDP     20-MAR-12


Second, export the current fixed object stats into a table (in case you need to revert):

exec dbms_stats.create_stat_table('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');
exec dbms_stats.export_fixed_objects_stats(stattab=>'STATS_TABLE_NAME',statown=>'OWNER');

Third, gather the fixed objects stats:


exec dbms_stats.gather_fixed_objects_stats;


In case of reverting to the old statistics:
If you experience bad performance on fixed tables after gathering the new statistics:

exec dbms_stats.delete_fixed_objects_stats(); 
exec DBMS_STATS.import_fixed_objects_stats(stattab=>'STATS_TABLE_NAME',STATOWN=>'OWNER');
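
To get an overview of how many fixed tables still have no statistics at all, you can filter DBA_TAB_STATISTICS on the FIXED TABLE object type:

```sql
-- Fixed tables that have never been analyzed
SELECT count(*)
  FROM dba_tab_statistics
 WHERE object_type = 'FIXED TABLE'
   AND last_analyzed IS NULL;
```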

Hiding table Data from unauthorized Users


In this scenario I'll explain how to hide the data in a table from unauthorized users, even if those users have the DBA privilege.
After hiding the data, whenever an unauthorized user tries to select from the secured table, he will get the "no rows selected" message.

The advantage of this approach is that the secured table appears to have no rows at all, which is likely less attractive to intruders than a table that visibly has records but hides or masks the "interesting" columns.

In this demonstration I'll use SCOTT schema.

Firstly: let's give some powerful privileges to user HR on scott.emp table:

SQL> grant dba to hr;
Grant succeeded.

SQL> grant all on scott.emp to hr;
Grant succeeded.

Let's try to select from scott.emp

SQL> conn hr/hr
Connected.

SQL> select count(*) from scott.emp;--user HR can select from scott.emp
  COUNT(*)
----------
        15

SQL> delete from scott.emp;--user HR can also delete from scott.emp
15 rows deleted.

SQL> rollback;
Rollback complete.

Now we will not revoke any privileges from HR but we will hide the data from him only in two steps.

Step 1: Create the predicate generation function:

This function allows or blocks access to the table data based on conditions such as logon username, IP address, machine name, etc.

conn scott/tiger

CREATE OR REPLACE FUNCTION sec_emp  (oowner IN VARCHAR2, ojname IN VARCHAR2)
 RETURN VARCHAR2 AS 
cond VARCHAR2 (200);
BEGIN
    IF (SYS_CONTEXT('USERENV','OS_USER')='JAMES' AND --OS username
--     SYS_CONTEXT('USERENV','IP_ADDRESS')='192.168.1.211' AND --IP address
  SYS_CONTEXT('USERENV','SESSION_USER')<>'HR')  --blocked users (see the note below)
--     AND  SYS_CONTEXT('USERENV','TERMINAL')='PC77') --machine name
--  AND SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER')='Administrator')     --session identifier
    THEN
  cond := ' SAL IS NOT NULL';                               --Condition is set on SAL column
           ELSE
  cond := ' SAL IS NULL';
    END IF;
   RETURN (cond);
END sec_emp ;
/

SHOW ERRORS;

Note:

In the SYS_CONTEXT('USERENV','SESSION_USER') line above you control which users may access scott.emp data and which may not:
e.g. if users SCOTT, SMITH and JOHN are the only ones allowed to access scott.emp, the condition becomes:
SYS_CONTEXT('USERENV','SESSION_USER') in ('SCOTT','SMITH','JOHN')

And vice versa, you can list the users who are NOT allowed to access scott.emp and allow everyone else:
e.g. if users HR, EMMA and LEE should not access scott.emp data while everyone else may, the condition becomes:
SYS_CONTEXT('USERENV','SESSION_USER') not in ('HR','EMMA','LEE')

Restrictions are NOT limited to usernames; as you can see in the function above, you can also restrict by machine name, IP address, operating system username, and so on, which gives you real power in securing sensitive data.

Step 2: Add a security policy:

Next we will add a policy to enforce the function conditions on SCOTT.EMP table:

Connect as sysdba

BEGIN
     DBMS_RLS.ADD_POLICY(object_schema=>'SCOTT', object_name=>'EMP',
                         policy_name=>'HIDE_EMP', function_schema=>'SCOTT',
                         policy_function=>'SEC_EMP');
END;
/


--In case the policy already exists, drop it first:

EXEC dbms_rls.drop_policy(object_schema=>'SCOTT',object_name=>'EMP',policy_name=>'HIDE_EMP');
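
You can confirm the policy was created and is enabled by querying the DBA_POLICIES view:

```sql
-- Lists the VPD policies defined on SCOTT's objects
SELECT object_name, policy_name, function, enable
  FROM dba_policies
 WHERE object_owner = 'SCOTT';
```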


Secondly: now let's check whether user HR can still access scott.emp data:


SQL>  select count(*) from scott.emp;  
  COUNT(*)
----------
         0

SQL> delete from scott.emp;
0 rows deleted.

User HR can still issue commands against scott.emp, which means his privileges are intact, but he can no longer see or change the data inside the table. Since HR now gets the impression that scott.emp is empty, it is highly probable he will not investigate that table any further ;-)

For more information and techniques please check this paper
http://www.oracle.com/technetwork/articles/idm/jucan-security-094705.html

How To Calculate Your Network Bandwidth Speed



IPERF is a great tool you can easily use to test network bandwidth.

Why IPERF is a good way to test bandwidth:
Testing your network speed by copying a file between two machines is not accurate, because many factors (disk I/O, memory delays, OS delays) can slow down the copy, so you end up with a misleading result; nowadays, especially with 10Gb/s networks, the hard disk is sometimes slower than the network.

IPERF keeps those factors (disk I/O, memory delays, OS delays) out of the test, which gives you an accurate result.

Download IPERF from:

http://sourceforge.net/projects/iperf/

First install IPERF:

Install it on the machines you want to test the bandwidth speed between them:

On Machine1:

#gunzip iperf-2.0.5.tar.gz
#tar xvf iperf-2.0.5.tar
#cd iperf-2.0.5
#./configure
#make install

On Machine2:


#gunzip iperf-2.0.5.tar.gz
#tar xvf iperf-2.0.5.tar
#cd iperf-2.0.5
#./configure 
#make install


How to use IPERF to test bandwidth:

Now let's test the bandwidth between these two machines.

The idea: on one machine we start a server session listening on port 5001, ready to accept packets; from the other machine we send test packets to that port using IPERF, and IPERF then reports the speed statistics.

On Machine 1:

We will run iperf server on that machine:

#iperf -s

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

This starts listening on port 5001. Leave that session as is, don't interrupt it, then go to machine 2.

On Machine 2:

Test the speed between machine 2 and machine 1:

#iperf -c 192.168.0.1 -f g

192.168.0.1 -> Machine#1 IP
-f g        -> display the result in gigabits/sec; other formats are b (bits), B (bytes), m (megabits), M (megabytes), g (gigabits), G (gigabytes)

After the test completes (10 seconds by default), the output appears:

------------------------------------------------------------
Client connecting to 192.168.0.1, TCP port 5001
TCP window size: 0.00 GByte (default)
------------------------------------------------------------
[  3] local 192.168.0.2 port 40094 connected with 192.168.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes  0.94Gbits/sec

This means the network speed between machine1 and machine2 is 0.94 Gbits/sec, which is close to 1 Gbit/sec.
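
The reported bandwidth is simply the transfer divided by the interval. A small Python check of the numbers above (iperf reports bytes in 1024-based units and, with -f g, bits in 1000-based gigabits):

```python
# Verify iperf's arithmetic: 1.09 GBytes transferred in 10.0 seconds
transfer_bytes = 1.09 * 1024**3                           # GBytes -> bytes (1024-based)
interval_sec = 10.0
bandwidth_gbits = transfer_bytes * 8 / interval_sec / 1e9  # bits/sec -> Gbits/sec (1000-based)
print(round(bandwidth_gbits, 2))                           # 0.94, matching iperf's report
```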

You can also see the same result on machine1, in the session where you executed "iperf -s".


Note: Generally speaking, bits are used in the hardware world and bytes in the software world. Network gear (bandwidth, network card, switch, cable, link, ...) is measured in bits: when I say my network bandwidth is 1 Gbit/sec, that is in bits, which equals 128 megabytes/sec (using 1 Gbit = 1024 Mbit).

In the software world, when I say a file copies at 1 MB/sec, that is in megabytes.

Also remember that 1 byte = 8 bits.
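
The conversion above can be sketched in a couple of lines of Python (using the 1024-based convention of the note, where 1 Gbit = 1024 Mbit):

```python
# Convert a link speed in gigabits/sec to megabytes/sec (1024-based convention)
def gbits_to_mbytes_per_sec(gbits: float) -> float:
    mbits = gbits * 1024      # 1 Gbit = 1024 Mbit
    return mbits / 8          # 8 bits = 1 byte

print(gbits_to_mbytes_per_sec(1))   # 128.0 MB/sec, as stated in the note
```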

For more info:
http://openmaniak.com/iperf.php

Easy Way to Configure VNC Server On Linux


Make sure that VNCserver package is already installed:
# rpm -qa | grep vnc-server
vnc-server-4.1.2-14.el5_6.6

If not installed:
Download the latest version of VNC server from this link:
http://www.realvnc.com/download/vnc/latest/

Note: Server Free Edition has no security feature available.

Install VNC server on your server:
# rpm -Uvh VNC-Server-5.2.0-Linux-x64.rpm 

Modify the VNC config file:
# vi /etc/sysconfig/vncservers
Add these lines at the bottom:
VNCSERVERS="2:root"
VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -nohttpd -localhost"

Set a password for VNC: 
# vncpasswd
Password:
Verify:

Start a VNC server session just to generate the default config files:
# vncserver :1

Note: It's recommended to not run VNC server as root, as this will allow connected users to have root privileges once they connect via VNC.

Configure VNC to start an Xsession when connecting:
# vi ~/.vnc/xstartup
UN-hash these two lines:
 unset SESSION_MANAGER
 exec /etc/X11/xinit/xinitrc

Start a VNC session on the machine: 
# vncserver -kill :1
# vncserver :1

Now you can log in from any machine (e.g. your Windows machine) using VNC Viewer to access the remote server on port 5900 or 5901; make sure these ports are not blocked by the firewall.
VNC Viewer can be downloaded from this link:
http://www.realvnc.com/download/viewer/

Note: If you close the VNC viewer without logging out, you can re-connect to the same session later,
but if you log out of the session you have to re-issue the previous two commands on the server:

# vncserver -kill :1
# vncserver :1

or just do it in one command:
# service vncserver restart

To disable VNCserver and kill current connected VNC sessions:

# service vncserver stop
or use:
# vncserver -kill :1

I hope that was easy and clear.

All About Statistics In Oracle


In this post I'll try to summarize all sorts of statistics in Oracle. I strongly recommend reading the full article, as it contains information you may find valuable in understanding Oracle statistics.

################################
Database | Schema | Table | Index Statistics
################################

Gather Database Stats:
===================
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
     ESTIMATE_PERCENT=>100,METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY',
    CASCADE => TRUE,
    degree => 4,
    OPTIONS => 'GATHER STALE',
    GATHER_SYS => TRUE,
    STATTAB => PROD_STATS);

CASCADE => TRUE : Gather statistics on the indexes as well. If not specified, Oracle determines whether to collect them or not.
DEGREE => 4:Degree of parallelism.
options: 
       =>'GATHER' :Gathers statistics on all objects in the schema.
       =>'GATHER AUTO':Oracle determines which objects need new statistics, and determines how to gather those statistics.
       =>'GATHER STALE':Gathers statistics on stale objects. will return a list of stale objects.
       =>'GATHER EMPTY':Gathers statistics on objects have no statistics.will return a list of no stats objects.
       =>'LIST AUTO': Returns a list of objects to be processed with GATHER AUTO.
       =>'LIST STALE': Returns a list of stale objects as determined by looking at the *_tab_modifications views.
       =>'LIST EMPTY': Returns a list of objects which currently have no statistics.
GATHER_SYS => TRUE :Gathers statistics on the objects owned by the 'SYS' user.
STATTAB => PROD_STATS :Table will save the current statistics. see SAVE & IMPORT STATISTICS section -last third in this post-.

Note: All above parameters are valid for all stats kind (schema,table,..) except Gather_SYS.
Note: Skewed data means the values inside a column are not uniform; one or more particular values repeat much more often than the other values in the same column. For example, the gender column in an employee table with two values (male/female) is skewed in a construction or security services company, where most employees are male; in an entity like a hospital, where the number of males is close to the number of females, the gender column is not skewed.
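
To see which columns ended up with histograms after gathering with SKEWONLY, you can check the HISTOGRAM column of DBA_TAB_COL_STATISTICS:

```sql
-- HISTOGRAM is NONE for columns without one (e.g. FREQUENCY or HEIGHT BALANCED otherwise)
SELECT column_name, num_distinct, histogram
  FROM dba_tab_col_statistics
 WHERE owner = 'SCOTT' AND table_name = 'EMP';
```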

For faster execution:
------------------
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

What's new?
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE => Lets Oracle estimate the sample size; it almost always gives excellent results (DEFAULT).
Removed "METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY'" => gathering histograms on all columns is not recommended.
Removed "CASCADE => TRUE" => lets Oracle determine whether index statistics should be collected.
Doubled "degree => 8" => but this depends on the number of CPUs on the machine and the acceptable CPU overhead while gathering DB statistics.

Starting with Oracle 10g, Oracle introduced an automated task that gathers statistics on all objects in the database that have stale or missing statistics. To check the status of that task:
SQL> select status from dba_autotask_client where client_name = 'auto optimizer stats collection';

To Enable automatic optimizer statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection', 
    operation => NULL, 
    window_name => NULL);
    END;
    /

In case you want to Disable automatic optimizer statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection', 
    operation => NULL, 
    window_name => NULL);
    END;
    /

To check tables having stale statistics:

SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
SQL> select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from DBA_TAB_STATISTICS where STALE_STATS='YES';

[update on 03-Sep-2014]
Note: In order to get accurate information from DBA_TAB_STATISTICS or the (*_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS) views, you should manually run the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to refresh their parent table mon_mods_all$ with recent data from the SGA, or wait for the Oracle internal job that refreshes that table: once a day from 10g onwards (except 10gR2), every 15 minutes in 10gR2, or every 3 hours in 9i and earlier. It is also refreshed when you manually run one of the GATHER_*_STATS procedures.
[Reference: Oracle Support and MOS ID 1476052.1]

Gather SCHEMA Stats:
=====================
SQL> Exec DBMS_STATS.GATHER_SCHEMA_STATS (
     ownname =>'SCOTT',
     estimate_percent=>10,
     degree=>1,
     cascade=>TRUE,
     options=>'GATHER STALE');


Gather TABLE Stats:
===================
Check table statistics date:
SQL> select table_name, last_analyzed from user_tables where table_name='T1';

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (

    ownname => 'SCOTT',
    tabname => 'EMP',
    degree => 2,
    cascade => TRUE,
    METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /

CASCADE => TRUE : Gather statistics on the indexes as well. If not used Oracle will determine whether to collect it or not.
DEGREE => 2: Degree of parallelism.
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE: (DEFAULT) Lets Oracle choose the sample size automatically (more accurate and faster than setting a manual sample size).
METHOD_OPT=> :  For gathering Histograms:
 FOR COLUMNS SIZE AUTO :  You can specify one column between "" instead of all columns.
 FOR ALL COLUMNS SIZE REPEAT :  Prevent deletion of histograms and collect it only for columns already have histograms.
 FOR ALL COLUMNS :  Collect histograms on all columns.
 FOR ALL COLUMNS SIZE SKEWONLY :  Collect histograms only for columns with skewed values (Oracle checks for skewness first).
 FOR ALL INDEXED COLUMNS :  Collect histograms only for indexed columns.



Note: Truncating a table does not update its statistics, it only resets the High Water Mark; you have to re-gather statistics on that table.

Inside the "DBA BUNDLE" there is a script called "gather_stats.sh" that helps you easily and safely gather statistics on a specific schema or table, plus advanced features such as backing up/restoring the statistics in case of fallback.
To learn more about "DBA BUNDLE" please visit this post:
http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html


Gather Index Stats:
==================
SQL> exec DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT',
indname => 'EMP_I',
     estimate_percent =>DBMS_STATS.AUTO_SAMPLE_SIZE);

###################
Fixed OBJECTS Statistics
###################

What are Fixed objects:
---------------------
-Fixed objects are the x$ tables (been loaded in SGA during startup) on which V$ views are built (V$SQL etc.).
-If the statistics are not gathered on fixed objects, the Optimizer will use predefined default values for the statistics. These defaults may lead to inaccurate execution plans.
-Statistics on fixed objects are not being gathered automatically nor within gathering DB stats.

How frequent to gather stats on fixed objects?
------------------------------------------
Only once for a representative workload, unless one of these cases applies:

- After a major database or application upgrade.
- After implementing a new module.
- After changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
- Poor performance/Hang encountered while querying dynamic views e.g. V$ views.


Note:
- It's recommended to gather fixed object stats during peak hours (while the system is busy), or after the peak hours while sessions are still connected (even if idle), to guarantee that the fixed object tables have been populated and the statistics represent the DB activity.
- Also note that performance degradation may be experienced while the statistics are being gathered.
- Having no statistics is better than having unrepresentative statistics.

How to gather stats on fixed objects:
----------------------------------

First check the last analyzed date:
-----------------------------------
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED
       from dba_tab_statistics where table_name='X$KGLDP';

Second export the current fixed stats into a table: (in case you need to revert back)
--------------------------------------------------
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE
       ('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');

SQL> EXEC dbms_stats.export_fixed_objects_stats
       (stattab=>'STATS_TABLE_NAME',statown=>'OWNER');

Third gather the fixed objects stats:
-------------------------------------
SQL> exec dbms_stats.gather_fixed_objects_stats;


Note:
In case you experienced a bad performance on fixed tables after gathering the new statistics:

SQL> exec dbms_stats.delete_fixed_objects_stats(); 
SQL> exec DBMS_STATS.import_fixed_objects_stats

       (stattab =>'STATS_TABLE_NAME',STATOWN =>'OWNER');


#################
SYSTEM STATISTICS
#################

What is system statistics:
-----------------------
System statistics are statistics about CPU speed and IO performance, it enables the CBO to
effectively cost each operation in an execution plan. Introduced in Oracle 9i.

Why gathering system statistics:
------------------------------
Oracle highly recommends gathering system statistics during a representative workload,
ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer.
You only have to gather system statistics once.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):

NOWORKLOAD statistics:
--------------------------
This simulates a workload (not the real one) and does not collect full statistics; it is less accurate than WORKLOAD statistics, but if you cannot capture statistics during a typical workload, noworkload statistics are an option.
To gather noworkload statistics:
SQL> execute dbms_stats.gather_system_stats(); 


WORKLOAD statistics:
----------------------
This gathers statistics during the current workload, which is supposed to be representative of the actual system I/O and CPU workload on the DB.
To gather WORKLOAD statistics:
SQL> execute dbms_stats.gather_system_stats('start');
Once the workload window ends (after one, two, three... hours, whatever is representative), stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');
You can use a time interval (in minutes) instead of issuing the start/stop commands manually:
SQL> execute dbms_stats.gather_system_stats('interval',60); 


Check the system values collected:
-------------------------------
col pname format a20
col pval2 format a40
select * from sys.aux_stats$;
 


cpuspeedNW: The NOWORKLOAD CPU speed (average number of CPU cycles per second).
ioseektim:  The sum of seek time, latency time, and OS overhead time.
iotfrspeed: I/O transfer speed; tells the optimizer how fast the DB can read data in a single read request.
cpuspeed:   The CPU speed measured during WORKLOAD statistics collection.
maxthr:     The maximum I/O throughput.
slavethr:   Average parallel slave I/O throughput.
sreadtim:   The average time (milliseconds) for a random single-block read.
mreadtim:   The average time (milliseconds) for a sequential multiblock read.
mbrc:       The average multiblock read count in blocks.
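These workload figures feed the optimizer's I/O cost model. As a rough sketch (this mirrors the commonly documented CBO full-scan costing formula; the table size and timings below are invented sample values, not taken from a real database):

```python
# Sketch: cost of a full table scan expressed in single-block-read units,
# using the workload statistics mbrc, mreadtim and sreadtim (sample values).

def full_scan_io_cost(blocks, mbrc, mreadtim, sreadtim):
    multiblock_reads = blocks / mbrc              # number of multiblock I/Os
    return multiblock_reads * (mreadtim / sreadtim)

# 10,000-block table, mbrc=16, mreadtim=10 ms, sreadtim=5 ms
print(full_scan_io_cost(10_000, 16, 10.0, 5.0))  # -> 1250.0
```

A larger mbrc or relatively cheap multiblock reads lowers the full-scan cost, which is exactly why gathering these statistics under a representative workload matters.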

Notes:

-When gathering NOWORKLOAD statistics, only the (cpuspeedNW, ioseektim, iotfrspeed) system statistics are collected.
-The above values can be modified manually using the DBMS_STATS.SET_SYSTEM_STATS procedure.
-According to Oracle, collecting workload statistics doesn't impose additional overhead on your system.

Delete system statistics:
---------------------
SQL> execute dbms_stats.delete_system_stats();


####################
Data Dictionary Statistics
####################

Facts:
-----
> Dictionary tables are the tables owned by SYS that reside in the SYSTEM tablespace.
> In 9i, gathering data dictionary statistics is normally not required unless performance issues are detected.
> From 10g, statistics on the dictionary tables are maintained by the automatic statistics gathering job run during the nightly maintenance window.

If you choose to switch off that job for application schemas, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET from AUTO to ORACLE using the procedure:

SQL> Exec DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','ORACLE');  


When to gather Dictionary statistics:
---------------------------------
-After DB upgrades.
-After creation of a new big schema.
-Before and after big datapump operations.

Check last Dictionary statistics date:
----------------------------------
SQL> select table_name, last_analyzed from dba_tables
     where owner='SYS' and table_name like '%$' order by 2; 

Gather Dictionary Statistics:  
-------------------------
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

->Will gather stats on 20% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');

->Will gather stats on 100% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS
(gather_sys=>TRUE);
->Will gather stats on the whole DB+SYS schema.



#############
Extended Statistics "11g onwards"
#############

Extended statistics can be gathered on columns based on functions or column groups.

Gather extended stats on column function:
=====================================
If you run a query with a function such as UPPER/LOWER applied to a column in the WHERE clause, the optimizer's estimate will be off and a regular index on that column will not be used:
SQL> select count(*) from EMP where lower(ename) = 'scott'; 


In order to make the optimizer handle such function-based predicates you need to gather extended statistics:

1-Create extended stats:
>>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats
('SCOTT','EMP','(lower(ENAME))') from dual;

2-Gather histograms:
>>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats
('SCOTT','EMP', method_opt=> 'for all columns size skewonly');

OR
--
*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
     (ownname => 'SCOTT',tabname => 'EMP',
     method_opt => 'for all columns size skewonly for
     columns (lower(ENAME))');
     end;
     /

To check the existence of extended statistics on a table:
---------------------------------------------------
SQL> select extension_name,extension from dba_stat_extensions 
where owner='SCOTT'and table_name = 'EMP';
SYS_STU2JLSDWQAFJHQST7$QK81_YB(LOWER("ENAME"))

Drop extended stats on column function:
-------------------------------------
SQL> exec dbms_stats.drop_extended_stats
('SCOTT','EMP','(LOWER("ENAME"))');

Gather extended stats on column group: -related columns-
=================================
Certain columns in a table that appear together in WHERE clauses are often correlated, e.g. (country, state). You may want to make the optimizer aware of the relationship between two or more such columns instead of letting it use separate statistics for each column. By creating extended statistics on a group of columns, the optimizer can make a more accurate estimate when those columns are used together in the WHERE clause of a SQL statement. For example, columns like country_id and state_name have a relationship: a state like Texas can only be found in the USA, so the values of state_name are always influenced by country_id.
If extra columns are referenced in the WHERE clause along with the column group, the optimizer will still make use of the column group statistics.
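To see why such column groups matter, here is a small illustration (invented sample data, not Oracle internals) of how multiplying per-column selectivities underestimates correlated predicates:

```python
# 200 sample rows of (country, state); the two columns are correlated.
rows = ([("US", "Texas")] * 50 + [("US", "California")] * 50
        + [("FR", "Paris")] * 100)

def selectivity(pred):
    return sum(1 for r in rows if pred(r)) / len(rows)

s_country = selectivity(lambda r: r[0] == "US")      # 0.5
s_state = selectivity(lambda r: r[1] == "Texas")     # 0.25
# Without column-group stats: selectivities multiplied as if independent.
independent_rows = s_country * s_state * len(rows)
# With column-group stats: the combination is measured directly.
actual_rows = selectivity(lambda r: r == ("US", "Texas")) * len(rows)
print(independent_rows, actual_rows)  # -> 25.0 50.0
```

The independence assumption predicts 25 matching rows where 50 actually match; column-group statistics close exactly this gap.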

1- create a column group:
>>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats
('SH','CUSTOMERS', '(country_id,cust_state_province)')from dual;
2- Re-gather stats|histograms for table so optimizer can use the newly generated extended statistics:
>>>>>>>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats ('SH','customers',
method_opt=> 'for all columns size skewonly');

OR
---


*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
     (ownname => 'SH',tabname => 'CUSTOMERS',
     method_opt => 'for all columns size skewonly for
     columns (country_id,cust_state_province)');
     end; 
     /

Drop extended stats on column group:
--------------------------------------
SQL> exec dbms_stats.drop_extended_stats
('SH','CUSTOMERS', '(country_id,cust_state_province)');


########
Histograms
########

What are Histograms?

-----------------------
> Hold data about the distribution of values within a column, i.e. the number of occurrences of a specific value/range.
> Used by the CBO to decide whether a query should use an index (fast full scan) or a full table scan.
> Usually used against columns whose data repeats frequently, like a country or city column.
> Gathering histograms on a column holding only distinct values (e.g. a PK) is useless because no value repeats.
> Two types of histograms can be gathered:
  -Frequency histograms: used when the number of distinct values (buckets) in the column is less than 255
(e.g. the number of countries is always less than 254).
  -Height-balanced histograms: similar to frequency histograms in their design, but used when the number of distinct values is > 254.
    See an example: http://aseriesoftubes.com/articles/beauty-and-it/quick-guide-to-oracle-histograms
> Collected by DBMS_STATS (which by default doesn't collect histograms;
it deletes them if you don't use the METHOD_OPT parameter).
> Mainly gathered on foreign key columns and columns used in WHERE clauses.
> Help in SQL multi-table joins.
> Column histograms, like statistics, are stored in the data dictionary.
> If the application exclusively uses bind variables, Oracle recommends deleting any existing 
histograms and disabling histogram generation.

Cautions:
   – Do not create them on Columns that are not being queried.
   – Do not create them on every column of every table.
   – Do not create them on the primary key column of a table.
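Conceptually (a sketch on invented data, not Oracle's internal storage format), a frequency histogram is just an occurrence count per distinct value, which is what lets the optimizer spot skew:

```python
from collections import Counter

# 100 sample rows of a skewed "city" column: one bucket per distinct value.
city_column = ["Cairo"] * 70 + ["Dubai"] * 25 + ["Oslo"] * 5
frequency_histogram = Counter(city_column)
# A predicate on 'Cairo' hits 70% of the rows (full-scan territory),
# while 'Oslo' hits only 5% (an index makes sense there).
print(frequency_histogram["Cairo"], frequency_histogram["Oslo"])  # -> 70 5
```

Without the histogram the optimizer would assume each of the three cities matches about a third of the rows, picking the same plan for both predicates.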

Verify the existence of histograms:
--------------------------------------
SQL> select column_name,histogram from dba_tab_col_statistics
     where owner='SCOTT' and table_name='EMP'; 

Creating Histograms:
-----------------------
e.g.

SQL> Exec dbms_stats.gather_schema_stats
     (ownname => 'SCOTT',
     estimate_percent => dbms_stats.auto_sample_size,
     method_opt => 'for all columns size auto',
     degree => 7);


method_opt:
FOR COLUMNS SIZE AUTO           => Fastest; you can specify one column instead of all columns.
FOR ALL COLUMNS SIZE REPEAT     => Prevents deletion of histograms; collects them only 
for columns that already have histograms.
FOR ALL COLUMNS                 => Collects histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY   => Collects histograms for columns with skewed values.
FOR ALL INDEXED COLUMNS         => Collects histograms for columns that have indexes.

Note: AUTO & SKEWONLY will let Oracle decide whether to create the Histograms or not.

Check the existence of Histograms:
SQL> select column_name, count(*) from dba_tab_histograms
     where owner='SCOTT' and table_name='EMP' group by column_name; 

Drop Histograms: 11g
------------------
e.g.
SQL> Exec dbms_stats.delete_column_stats
     (ownname=>'SH', tabname=>'SALES',
     colname=>'PROD_ID', col_stat_type=>'HISTOGRAM');


Stop gather Histograms: 11g
----------------------
[This will change the default table options]
e.g.
SQL> Exec dbms_stats.set_table_prefs

     ('SH', 'SALES','METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO,FOR COLUMNS SIZE 1 PROD_ID');
>Will continue to collect histograms as usual on all columns in the SALES table except for PROD_ID column.

Drop Histograms: 10g
------------------
e.g.
SQL> exec dbms_stats.delete_column_stats
(user,'T','USERNAME');


##################################
Save/IMPORT & RESTORE STATISTICS:
##################################
===================
Export /Import Statistics:
===================
In this method, statistics are exported into a table and imported back later from that table.

1-Create the STATS table:
-------------------------
SQL> Exec dbms_stats.create_stat_table
(ownname => 'SYSTEM', stattab => 'prod_stats',tblspace => 'USERS'); 

2-Export the statistics to the STATS table:
---------------------------------------------
For Database stats:
SQL> Exec dbms_stats.export_database_stats
(statown => 'SYSTEM', stattab => 'prod_stats');
For System stats:
SQL> Exec dbms_stats.export_SYSTEM_stats
(statown => 'SYSTEM', stattab => 'prod_stats');
For Dictionary stats:
SQL> Exec dbms_stats.export_Dictionary_stats
(statown => 'SYSTEM', stattab => 'prod_stats');
For Fixed Tables stats:
SQL> Exec dbms_stats.export_FIXED_OBJECTS_stats
(statown => 'SYSTEM', stattab => 'prod_stats');
For Schema stats:
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS
('ORIGINAL_SCHEMA','STATS_TABLE',NULL,'STATS_TABLE_OWNER');
For Table:
SQL> Conn scott/tiger
SQL> Exec dbms_stats.export_TABLE_stats
(ownname => 'SCOTT',tabname => 'EMP',stattab => 'prod_stats');
For Index:
SQL> Exec dbms_stats.export_INDEX_stats
(ownname => 'SCOTT',indname => 'PK_EMP',stattab => 'prod_stats');
For Column:
SQL> Exec dbms_stats.export_COLUMN_stats 
(ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',stattab=>'prod_stats');

3-Import the statistics from PROD_STATS table to the dictionary:
--------------------------------------------------------------------
For Database stats:
SQL> Exec DBMS_STATS.IMPORT_DATABASE_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For System stats:
SQL> Exec DBMS_STATS.IMPORT_SYSTEM_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Dictionary stats:
SQL> Exec DBMS_STATS.IMPORT_DICTIONARY_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Fixed Tables stats:
SQL> Exec DBMS_STATS.IMPORT_FIXED_OBJECTS_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Schema stats:
SQL> Exec DBMS_STATS.IMPORT_SCHEMA_STATS
     (ownname => 'SCOTT',stattab => 'prod_stats', statown => 'SYSTEM');
For Table stats and its indexes:
SQL> Exec dbms_stats.import_TABLE_stats
     (ownname => 'SCOTT', stattab => 'prod_stats',tabname => 'EMP');
For Index:
SQL> Exec dbms_stats.import_INDEX_stats
     (ownname => 'SCOTT', stattab => 'prod_stats', indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.import_COLUMN_stats
     (ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',stattab=>'prod_stats');

4-Drop Stat Table:
-------------------
SQL> Exec dbms_stats.DROP_STAT_TABLE 
(stattab => 'prod_stats',ownname => 'SYSTEM');

===============
Restore statistics: -From Dictionary-
===============
Old statistics are saved automatically in SYSAUX for 31 days.

Restore Dictionary stats as of timestamp:
-----------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DICTIONARY_STATS(sysdate-1); 


Restore Database stats as of timestamp:
--------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DATABASE_STATS(sysdate-1); 


Restore SYSTEM stats as of timestamp:
--------------------------------------
SQL> Exec DBMS_STATS.RESTORE_SYSTEM_STATS(sysdate-1); 


Restore FIXED OBJECTS stats as of timestamp:
-----------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(sysdate-1); 


Restore SCHEMA stats as of timestamp:
---------------------------------------
SQL> Exec dbms_stats.restore_SCHEMA_stats
     (ownname=>'SYSADM',AS_OF_TIMESTAMP=>sysdate-1); 
OR:
SQL> Exec dbms_stats.restore_schema_stats
     (ownname=>'SYSADM',AS_OF_TIMESTAMP=>'20-JUL-2008 11:15:00AM');

Restore Table stats as of timestamp:
-----------------------------------
SQL> Exec DBMS_STATS.RESTORE_TABLE_STATS
     (ownname=>'SYSADM', tabname=>'T01POHEAD',AS_OF_TIMESTAMP=>sysdate-1);
=========
Advanced:
=========

To Check current Stats history retention period (days):
-------------------------------------------------
SQL> select dbms_stats.get_stats_history_retention from dual;
SQL> select dbms_stats.get_stats_history_availability from dual;
To modify current Stats history retention period (days):
-------------------------------------------------
SQL> Exec dbms_stats.alter_stats_history_retention(60); 


Purge statistics older than 10 days:
------------------------------
SQL> Exec DBMS_STATS.PURGE_STATS(SYSDATE-10);

Procedure to reclaim space after purging statistics:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Space is not reclaimed automatically when you purge stats; you must reclaim it manually using this procedure:

Check Stats tables size:
>>>>>>
        col Mb form 9,999,999
        col SEGMENT_NAME form a40
        col SEGMENT_TYPE form a6
        set lines 120
        select sum(bytes/1024/1024) Mb,
        segment_name,segment_type from dba_segments
        where  tablespace_name = 'SYSAUX'
        and segment_name like 'WRI$_OPTSTAT%'
        and segment_type='TABLE'
        group by segment_name,segment_type order by 1 asc
        /

Check Stats indexes size:
>>>>>
        col Mb form 9,999,999
        col SEGMENT_NAME form a40
        col SEGMENT_TYPE form a6
        set lines 120
        select sum(bytes/1024/1024) Mb, segment_name,segment_type
        from dba_segments
        where  tablespace_name = 'SYSAUX'
        and segment_name like '%OPT%'
        and segment_type='INDEX'
        group by segment_name,segment_type order by 1 asc
        /
Move Stats tables in same tablespace:
>>>>>
        select 'alter table '||segment_name||' move tablespace SYSAUX;'
        from dba_segments
        where tablespace_name = 'SYSAUX'
        and segment_name like '%OPT%' and segment_type='TABLE'
        /
Rebuild stats indexes:
>>>>>>
        select 'alter index '||segment_name||' rebuild online;'
        from dba_segments where tablespace_name = 'SYSAUX'
        and segment_name like '%OPT%' and segment_type='INDEX'
        /

Check for un-usable indexes:
>>>>>
        select di.index_name,di.index_type,di.status from
        dba_indexes di, dba_tables dt
        where  di.tablespace_name = 'SYSAUX'
        and dt.table_name = di.table_name
        and di.table_name like '%OPT%'
        order by 1 asc
        /

Delete Statistics:
===============
For Database stats:
SQL> Exec DBMS_STATS.DELETE_DATABASE_STATS ();
For System stats:
SQL> Exec DBMS_STATS.DELETE_SYSTEM_STATS ();
For Dictionary stats:
SQL> Exec DBMS_STATS.DELETE_DICTIONARY_STATS ();
For Fixed Tables stats:
SQL> Exec DBMS_STATS.DELETE_FIXED_OBJECTS_STATS ();
For Schema stats:
SQL> Exec DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT');
For Table stats and its indexes:
SQL> Exec dbms_stats.DELETE_TABLE_stats
(ownname=>'SCOTT',tabname=>'EMP');
For Index:
SQL> Exec dbms_stats.DELETE_INDEX_stats
(ownname => 'SCOTT',indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.DELETE_COLUMN_stats
(ownname =>'SCOTT',tabname=>'EMP',colname=>'EMPNO');

Note: These delete operations can be rolled back by restoring the statistics using the DBMS_STATS.RESTORE_* procedures.


Pending Statistics:  "11g onwards"
===============

What are Pending Statistics?
Pending statistics is a feature that lets you test newly gathered statistics without letting the CBO (Cost Based Optimizer) use them system-wide unless you publish them.

How to use Pending Statistics:
Switch on pending statistics mode:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','FALSE');

Note: Any new statistics gathered on the database will be marked PENDING unless you set the previous parameter back to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');

Gather statistics: "as you used to do"
SQL> Exec DBMS_STATS.GATHER_TABLE_STATS('sh','SALES');
Enable using pending statistics on your session only:
SQL> Alter session set optimizer_use_pending_statistics=TRUE;

Any SQL statement you run in that session will then use the new pending statistics...

When proven OK, publish the pending statistics:
SQL> Exec DBMS_STATS.PUBLISH_PENDING_STATS(); 


Once you finish don't forget to return the Global PUBLISH parameter to TRUE:

SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');
>If you don't, all newly gathered statistics on the database will be marked PENDING, which may confuse you or any other DBA working on this DB who is not aware of that parameter change.

References:
http://docs.oracle.com/cd/E18283_01/appdev.112/e16760/d_stats.htm


Upgrade from 11.2.0.1 to 11.2.0.3 (Part I Software Installation)

Part I : environment preparation & 11.2.0.3 Software Installation (Standalone DB)

In this part I'll discuss the Out-Place installation steps of Oracle 11.2.0.3 (standalone DB) on a Linux server that already has an Oracle 11.2.0.1 setup with databases up and running.

Database upgrade steps (Standalone DB) will be covered in Part II

In case you're interested in RAC database upgrade from 11.2.0.1 to 11.2.0.3, I've explained a full implementation (cookbook) of upgrading an 11.2.0.1 to 11.2.0.3 RAC database on new servers (out-place upgrade) in this link:
http://dba-tips.blogspot.ae/2013/09/upgrade-rac-11201-to-11203-part-i.html

My current environment specs before the upgrade:
Oracle Enterprise Linux 5.8 X86_64
Oracle 11.2.0.1
3 Up and running 11.2.0.1 databases.
11.2.0.1 ORACLE_HOME = /u01/oracle/ora11gr2/11.2.0.1

My environment specs after the upgrade:
Oracle Enterprise Linux 5.8 X86_64
Oracle 11.2.0.3
3 Up and running 11.2.0.3 databases.
11.2.0.3 ORACLE_HOME = /u02/oracle/ora11g/11.2.0.3

Note: In this demonstration I'll do an Out-Place upgrade, meaning I'll install the new software into a new ORACLE_HOME path, which is what Oracle recommends.

An In-Place upgrade means installing the new software in the same location as the old ORACLE_HOME, which requires detaching the old ORACLE_HOME first before installing the new software in its place, and this increases the downtime window.

An Out-Place upgrade is much safer and easier, with minimal downtime compared to an In-Place upgrade; its only disadvantage is that it needs more space than an In-Place upgrade.


Environment Preparation:
#####################
Requirements for Linux :
================
For RHEL 5 x86_64:

Minimum Red Hat/Oracle Enterprise Linux version is EL 5 Update 5
Minimum Unbreakable Enterprise Kernel => 2.6.32 or later

Currently I've Oracle Linux 5 Update 8 with Unbreakable kernel 2.6.32-300

Note: Starting with Oracle 11gR2, SELinux is supported.

If enabling Automatic Memory Management:
----------------------------------------
/dev/shm must be greater than the sum of MEMORY_MAX_TARGET for all instances on the server.

Requirements for Linux x86_64 Packages: (install the same package versions or later)
============================
All of these packages are already installed on my server as I already have 11.2.0.1 setup.

 Required packages for 11.2.0.2 and above (on OEL 5 x86_64):
--------------------------------------------------------------
rpm -qa | grep binutils-2.17.50.0.6
rpm -qa | grep compat-libstdc++-33-3.2.3
rpm -qa | grep elfutils-libelf-0.1
rpm -qa | grep elfutils-libelf-devel-0.1
rpm -qa | grep gcc-4.1.2
rpm -qa | grep gcc-c++-4.1.2
rpm -qa | grep glibc-2.5
rpm -qa | grep glibc-common-2.5
rpm -qa | grep glibc-devel-2.5
rpm -qa | grep glibc-headers-2.5
rpm -qa | grep ksh-2
rpm -qa | grep libaio-0.3.106
rpm -qa | grep libaio-devel-0.3.106
rpm -qa | grep libgcc-4.1.2
rpm -qa | grep libstdc++-4.1.2
rpm -qa | grep libstdc++-devel-4.1.2
rpm -qa | grep make-3.81
rpm -qa | grep sysstat-7.0.2
rpm -qa | grep unixODBC-2.2.11       #=> (32-bit) or later
rpm -qa | grep unixODBC-devel-2.2.11 #=> (64-bit) or later
rpm -qa | grep unixODBC-2.2.11       #=> (64-bit) or later


Create Oracle user:
===============
Note: The oracle user and the DBA and OINSTALL groups are automatically created during Oracle Enterprise Linux installation, so you can skip this step. Also, since we already have an 11.2.0.1 Oracle installation on this server, the oracle user and its groups are guaranteed to exist already.

# groupadd -g 502 dba
# groupadd -g 503 oinstall
# useradd -u 505 -g oinstall -G dba -s /bin/bash -d /home/oracle oracle
# passwd oracle
# mkdir -p /home/oracle
# chown oracle:dba /home/oracle
# chmod 750 /home/oracle

Create a new ORACLE_HOME path:
============================
# mkdir -p /u02/oracle/ora11g/11.2.0.3
# chown -R oracle:dba /u02/oracle/ora11g
# chmod 750 /u02/oracle/ora11g/11.2.0.3

Create a new .bash_profile holds 11.2.0.3 Environment variables:
==============================================
As I already have an 11.2.0.1 installation on the server and I don't want to create a new oracle user to own the new 11.2.0.3 installation, I have to create a new .bash_profile file holding the environment variables of the new ORACLE_HOME, so they don't mix with the already running 11.2.0.1 setup.

This new profile, which I will name .bash_profile11203, will hold the same environment variables as the original .bash_profile, except that the $ORACLE_BASE & $ORACLE_HOME variables point to the 11.2.0.3 installation.
Each time I deal with the 11.2.0.3 installation or databases running from the 11.2.0.3 Oracle Home, I have to source that .bash_profile11203.

# cd /home/oracle
# cp .bash_profile .bash_profile11203

Modify the new environment profile:
# vi .bash_profile11203
=>Run these two vi commands:

ESC  :
%s/\/u01\/oracle\/ora11gr2\/11\.2\.0\.1/\/u02\/oracle\/ora11g\/11\.2\.0\.3/g


%s/\/u01\/oracle/\/u02\/oracle/g

=> You have to customize the above commands to replace the old ORACLE_HOME path with your new ORACLE_HOME path.
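As a non-interactive alternative to the vi substitutions above, the same path swap can be done with sed (a sketch using the paths from this example; adjust them to your own homes before running against a real profile):

```shell
# Swap the old ORACLE_HOME path for the new one (sample paths from above).
old='/u01/oracle/ora11gr2/11.2.0.1'
new='/u02/oracle/ora11g/11.2.0.3'
# Real usage would edit the file in place: sed -i "s|$old|$new|g" .bash_profile11203
echo "export ORACLE_HOME=$old" | sed "s|$old|$new|g"
```

Using `|` as the sed delimiter avoids having to escape every `/` in the paths, which is what makes the vi version above hard to read.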

Now I've a good new profile contains variables pointing to the new ORACLE_HOME installation.


Configure SYSTEM parameters:
=========================

All parameters should be same or greater on the OS:
--------------------------------------------------
# /sbin/sysctl -a | grep sem                  #=> semaphore parameters (250 32000 100 142).
# /sbin/sysctl -a | grep shm                  #=> shmmax, shmall, shmmni (536870912, 2097152, 4096).
# /sbin/sysctl -a | grep file-max             #=> (6815744).
# /sbin/sysctl -a | grep ip_local_port_range  #=> Minimum: 9000, Maximum: 65500
# /sbin/sysctl -a | grep rmem_default         #=> (262144).
# /sbin/sysctl -a | grep rmem_max             #=> (4194304).
# /sbin/sysctl -a | grep wmem_default         #=> (262144).
# /sbin/sysctl -a | grep wmem_max             #=> (1048576).
# /sbin/sysctl -a | grep aio-max-nr           #=> (Maximum: 1048576) limits concurrent requests to avoid I/O failures.

If you need to change any of these parameters, login as root user, modify file /etc/sysctl.conf then execute this command:
# sysctl -p
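The comparison behind each of those checks is a simple numeric test; here is a hedged sketch with hard-coded sample values (a real script would read the live value with `/sbin/sysctl -n`):

```shell
# Compare a kernel parameter against Oracle's required minimum.
required=6815744
current=6815744   # on a live server: current=$(/sbin/sysctl -n fs.file-max)
if [ "$current" -ge "$required" ]; then
    echo "fs.file-max OK"
else
    echo "raise fs.file-max to $required in /etc/sysctl.conf, then run sysctl -p"
fi
```

The same pattern extends to the other parameters above; only the parameter name and required minimum change.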

vi /etc/security/limits.conf => already exists with bigger values, so keep it as is except for the last parameter.
-------------------------
oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    50000000
oracle   hard   memlock    50000000
# Adjust MAX stack size for 11.2.0.3 => Original was 8192:
oracle   soft   stack    10240

After updating limits.conf, the user must log off and log on again for the new settings to take effect.

Ensure mounting /usr in READ-WRITE mode:
====================================
# mount -o remount,rw /usr

>For security reasons some system administrators prefer to mount /usr in READ-ONLY mode; during Oracle installation /usr must be in RW mode.

Backup important files:
=================
# cp /etc/oraInst.loc /etc/oraInst.loc.11.2.0.1
# cp /etc/oratab /etc/oratab.11.2.0.1

# cp /usr/local/bin/oraenv  /usr/local/bin/oraenv.11.2.0.1
# cp /usr/local/bin/dbhome  /usr/local/bin/dbhome.11.2.0.1
# cp /usr/local/bin/coraenv /usr/local/bin/coraenv.11.2.0.1

Create&Modify Oracle Inventory location: (By ROOT user)
==============================
# mkdir -p /u02/oracle/oraInventory
# vi /etc/oraInst.loc

inventory_loc=/u02/oracle/oraInventory
inst_group=oinstall

Run the new 11.2.0.3 PROFILE:
===========================
Set the new environment variables that point to the new ORACLE_HOME by sourcing the file .bash_profile11203:
cd /home/oracle
. .bash_profile11203


ORACLE 11.2.0.3 INSTALLATION:
#############################

Download the 11.2.0.3 installation files from the Oracle Support web site -they're not available on the oracle.com site-
Go to the Patches & Updates tab,
search for patch# 10404530, then select the one for your platform (mine is Linux x86-64).
Download the first two files only (p10404530_112030_Linux-x86-64_1of7, p10404530_112030_Linux-x86-64_2of7) for the Oracle software installation; if you'll install Grid for RAC or ASM (which is not in our scope) you need to download the first three files.
The rest of the files are for (Client, Gateways, examples, deinstall), which are also not in our scope.

You can choose which mode to use for the installation, GUI or SILENT; I'll discuss both of them.

Note: As we are doing an out-place upgrade, we will not touch the old Oracle Home; that means you don't need to shut down the databases or listeners already running from the 11.2.0.1 Oracle Home. This is how an Out-Place upgrade minimizes the downtime window.

Installation using GUI mode:
#####################
Login to your server using VNC or directly on the console (physical access) as the oracle user.
Sometimes sys admins lock direct login to the oracle user; if so, login as root then issue this command:
# xhost + 
Then switch to Oracle user
# su - oracle

To learn how to configure VNC on your server check this link:
http://dba-tips.blogspot.ae/2012/05/easy-way-to-configure-vnc-server-on.html

Go to DVD location:
------------------
# cd /export/11.2.0.3/database

# ./runInstaller

Page1 (Configure Security Updates):
      Remove the check from "I wish to receive security updates via My Oracle Support", click Next... click YES
Page2 (Download Software Updates):
      Check "Skip software updates" ...click Next (My server doesn't have access to the internet).
Page3 (Installation Option):
      Check "Install database software only" ...click Next
Page4 (Grid Installation Options):
      Check "Single instance database installation" ...click Next
Page5 (Product Languages):
      Leave it to the default ...click Next
Page6 (Database Edition):
      Enterprise Edition ...click select options..(check the needed options)..click OK..click Next
Page7 (Installation Location):
      ORACLE_BASE: /u02/oracle , Software Location: /u02/oracle/ora11g/11.2.0.3 (software location is ORACLE_HOME).
Page8 (Operating System Groups):
      Database Administrator (OSDBA) Group ...select "dba"
      Database Operator (OSOPER) Group (Optional) ..leave it blank  >>I never had a need to connect to the database as SYSOPER.
Page9 (Summary):
      Click Install
      At the end of installation: 
      from another session by root user execute script $ORACLE_HOME/root.sh 
Page10 (Finish):
      Once executed root.sh script from another session, go back to the OUI window and click Exit.

Note: if you want to create a response file, from runInstaller GUI do this:
# cd /u02/11.2.0.3/database
# ./runInstaller
Choose the options you need to install; at the "Summary" screen click the "Save Response File" button.


Installation using SILENT mode:
======================

# cd /export/11.2.0.3/database

#./runInstaller -silent -ignoreSysPrereqs -ignorePrereq -ignoreInternalDriverError -showProgress -noconfig
-responseFile /u02/11.2.0.3/database/response/db_install.rsp 
ORACLE_BASE=/u02/oracle
ORACLE_HOME=/u02/oracle/ora11g/11.2.0.3
INVENTORY_LOCATION=/u02/oracle/oraInventory
ORACLE_HOME_NAME=OraDbHome11203
oracle.install.option=INSTALL_DB_SWONLY
oracle.install.db.InstallEdition=EE
UNIX_GROUP_NAME=oinstall
oracle.install.db.DBA_GROUP=dba
DECLINE_SECURITY_UPDATES=true
oracle.install.db.optionalComponents=oracle.rdbms.lbac:11.2.0.3.0,oracle.rdbms.dm:11.2.0.3.0,oracle.rdbms.dv:11.2.0.3.0,oracle.rdbms.rat:11.2.0.3.0


Note:
----
oracle.install.db.optionalComponents=
oracle.oraolap:11.2.0.3.0               - Oracle OLAP
oracle.rdbms.dm:11.2.0.3.0           - Oracle Data Mining RDBMS Files
oracle.rdbms.dv:11.2.0.3.0            - Oracle Database Vault option
oracle.rdbms.lbac:11.2.0.3.0         - Oracle Label Security
oracle.rdbms.partitioning:11.2.0.3.0 - Oracle Partitioning
oracle.rdbms.rat:11.2.0.3.0           - Oracle Real Application Testing
That means the options I'm installing are: Oracle Label Security, Data Mining, Database Vault, Real Application Testing. 
-showProgress : shows the progress of the installation on the screen.
-noconfig : suppresses running the configuration assistants during installation, as this is a "Software Only" install.

>>Progress will be printed on the screen also you can check the log:
# tail -f /u02/oracle/oraInventory/logs/installActions<$date_$time>.log


Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 2872 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 12768 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-01-02_12-13-51PM. Please wait ...
[oracle@dev1 database]#
[oracle@dev1 database]#You can find the log of this install session at:
 /u02/oracle/oraInventory/logs/installActions2013-01-02_12-13-51PM.log

Prepare in progress.
..................................................   9% Done.

Prepare successful.

Copy files in progress.
..................................................   14% Done.
..................................................   20% Done.
..................................................   26% Done.
..................................................   31% Done.
..................................................   36% Done.
..................................................   44% Done.
..................................................   49% Done.
..................................................   55% Done.
..................................................   63% Done.
..................................................   68% Done.
..................................................   73% Done.
..................................................   78% Done.
..................................................   83% Done.
..............................
Copy files successful.

Link binaries in progress.
..........
Link binaries successful.

Setup files in progress.
..................................................   88% Done.
..................................................   94% Done.

Setup files successful.
The installation of Oracle Database 11g was successful.
Please check '/u02/oracle/oraInventory/logs/silentInstall2013-01-02_12-13-51PM.log' for more details.

Execute Root Scripts in progress.

As a root user, execute the following script(s):
1. /u02/oracle/ora11g/11.2.0.3/root.sh


..................................................   100% Done.

Execute Root Scripts successful.
Successfully Setup Software.


At the End of the installation: (By root user run root.sh script):
# /u02/oracle/ora11g/11.2.0.3/root.sh

Installation is done.


Backup oraInventory directory:
-------------------------------
# tar cvf /u02/oracle/oraInventory.tar   /u02/oracle/oraInventory

Backup root.sh:
--------------
# cp /u02/oracle/ora11g/11.2.0.3/root.sh /u02/oracle/ora11g/11.2.0.3/root.sh_after_installation

Backup ORACLE_HOME: (By root user)
-----------------------------
# tar cvpf /u02/oracle/ora11g/11.2.0.3_After_DB_install.tar   /u02/oracle/ora11g/11.2.0.3

Backup the following files:
--------------------------
# cp /usr/local/bin/oraenv  /usr/local/bin/oraenv.11.2.0.3
# cp /usr/local/bin/dbhome  /usr/local/bin/dbhome.11.2.0.3
# cp /usr/local/bin/coraenv /usr/local/bin/coraenv.11.2.0.3


PSU Patch Apply: 
##############
Apply the latest PSU patch on the 11.2.0.3 ORACLE_HOME before upgrading the database. PSU patch post steps which run on the database, like catbundle.sql, are not required in my case (unless the patch README mentions post steps to be applied after DB creation|upgrade), because every PSU patch is unique.

Download and install the latest OPatch utility: Patch# 6880880 (for 11.2.0.3 on Linux x86_64)
At the time of writing, the latest OPatch version was 11.2.0.3.3

# cd $ORACLE_HOME
# tar cvf OPatch.org.tar OPatch
# cd OPatch
# rm -rf *
# cd $ORACLE_HOME
# unzip p6880880_112000_Linux-x86-64.zip
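A side note on the sequence above: `cd OPatch` followed by `rm -rf *` is risky if the `cd` silently fails. Below is a hedged sketch of a safer swap (the function name and the staged-directory layout are my own; in real life the new OPatch comes from unzipping p6880880). It is demonstrated on a scratch directory standing in for $ORACLE_HOME:

```shell
# Safer OPatch swap: back up first, then remove by absolute path only.
replace_opatch() {
  local home="$1" staged="$2"            # staged = already-extracted new OPatch dir
  [ -d "$home/OPatch" ] || return 1
  tar -C "$home" -cf "$home/OPatch.org.tar" OPatch || return 1   # backup before touching anything
  rm -rf "$home/OPatch"                  # absolute path, never 'cd OPatch; rm -rf *'
  cp -r "$staged" "$home/OPatch"
}

# demo on a throwaway tree standing in for $ORACLE_HOME
demo=$(mktemp -d)
mkdir -p "$demo/OPatch" "$demo/new_OPatch"
echo 11.2.0.1.0 > "$demo/OPatch/version.txt"
echo 11.2.0.3.3 > "$demo/new_OPatch/version.txt"
replace_opatch "$demo" "$demo/new_OPatch"
cat "$demo/OPatch/version.txt"           # 11.2.0.3.3
```

The point is simply that the backup tar must succeed before anything is deleted, and the delete target is an absolute path.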

Download & Apply latest PSU patch 11.2.0.3.4 :  Metalink patch# 14275605
  # unzip p14275605_112030_Linux-x86-64.zip
  # cd 14275605
  # opatch apply

Now you're done with PSU patch apply.


In the next post, Part II, I'll demonstrate the database upgrade (standalone) from 11.2.0.1 to 11.2.0.3 using both the manual and DBUA methods.



Upgrade from 11.2.0.1 to 11.2.0.3 (Part II Database Upgrade)

Part II : Database Upgrade steps from 11.2.0.1 to 11.2.0.3 (Standalone DB)

In this part I'll discuss the database upgrade steps from 11.2.0.1 to 11.2.0.3. I already covered the 11.2.0.3 database software installation steps on a server already running an 11.2.0.1 database, using the out-of-place method, in Part I: environment preparation & 11.2.0.3 Software Installation.

In case you're interested in upgrading a RAC database from 11.2.0.1 to 11.2.0.3, I've explained a full implementation (cookbook) of such an upgrade on new servers (out-of-place upgrade) in this link:
http://dba-tips.blogspot.ae/2013/09/upgrade-rac-11201-to-11203-part-i.html

Important Notes:
Metalink [ID 730365.1]  Includes all patchset downloads + How to upgrade from any Oracle DB version to another one.
Metalink [ID 1276368.1] Out-of-place manual upgrade from previous 11.2.0.N version to the latest 11.2.0.N patchset.


Tips Before Starting The Upgrade Process:
###############################

Backup & Truncate Audit Table SYS.AUD$:
----------------------------------------
Truncating SYS.AUD$ table will speed up the upgrade process:

   --Backing up + compressing audit data in SYS.AUD_BKP table:
SQL> create table SYS.AUD_BKP COMPRESS as select * from SYS.AUD$; 
SQL> Truncate table SYS.AUD$;

Purge DBA_RECYCLEBIN:
-------------------------
Purging DBA_RECYCLEBIN will speed up the upgrade process:
SQL> PURGE DBA_RECYCLEBIN;
Save a backup of all DB Parameters (Visible & hidden):
-------------------------------------------------
This will help you in troubleshooting issues coming after the upgrade (bugs, performance, or whatever the problem is).

The following statement will make it easy for you:
SQL> Spool All_parameters.txt
     set linesize 170
     col Parameter for a50
     col "Session Value" for a40
     col "Instance Value" for a40
     SELECT a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value"
     FROM x$ksppi a, x$ksppcv b, x$ksppsv c
     WHERE a.indx = b.indx AND a.indx = c.indx order by 1;
     Spool off

I also strongly recommend resetting hidden parameters to their default values before starting the upgrade; you don't want to hit weird problems during the upgrade and not know where they came from.
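Since the spooled parameter list above is your troubleshooting baseline, here is a hedged sketch of how you might diff two such spools (before vs. after the upgrade). The awk assumes the simple "name session_value instance_value" layout of the query above; the file names and sample values below are made up:

```shell
# Print parameters whose instance value differs between two spools.
diff_params() {   # usage: diff_params before.txt after.txt
  awk 'NR==FNR {a[$1]=$NF; next}
       ($1 in a) && a[$1]!=$NF {print $1": "a[$1]" -> "$NF}' "$1" "$2"
}

before=$(mktemp); after=$(mktemp)
printf '%s\n' "optimizer_mode ALL_ROWS ALL_ROWS" "_small_table_threshold 2000 2000" > "$before"
printf '%s\n' "optimizer_mode ALL_ROWS ALL_ROWS" "_small_table_threshold 2000 4000" > "$after"
diff_params "$before" "$after"    # _small_table_threshold: 2000 -> 4000
```

A changed hidden parameter showing up here is often the first clue when post-upgrade behavior differs.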


##################
PRE-UPGRADE STEPS:
##################

Step 1: Software Installation
#####

Covered in Part I.
Install 11.2.0.3 RDBMS Software into a new ORACLE_HOME.
=>You only need to install the Examples CD if your DB uses Oracle Text themes or you intend to install Multimedia demos.

Apply the latest PSU patch on the 11.2.0.3 ORACLE_HOME before upgrading the database.

PSU patch post steps which run on the database, like executing the script $ORACLE_HOME/rdbms/admin/catbundle.sql, are not required in my case as per the 11.2.0.3.4 PSU readme.
Note that every PSU patch is unique; not all PSU patches exempt you from running post scripts (catbundle.sql/catcpu.sql) after the database upgrade, so you have to read the readme file first.

Pre-upgrade information:
=====================
Download latest version of utlu112i_5.sql script and execute it: Note ID 884522.1
Note: This script already exists under the new Oracle Home at $ORACLE_HOME/rdbms/admin, but the one on Metalink is usually updated with the most recent upgrade checks.

SQL> SPOOL upgrade_info.log
SQL> @new_ORACLE_HOME/rdbms/admin/utlu112i.sql
SQL> SPOOL OFF


Step 2: Dictionary Check
#####
Verify the validity of data dictionary objects by running the dbupgdiag.sql script (download it from Note ID 556610.1).
If the dbupgdiag.sql script reports any invalid objects, run utlrp (multiple times) to validate the invalid objects in the database, until there is no change in the number of invalid objects:
SQL> @/home/oracle/dbupgdiag.sql
SQL> @?/rdbms/admin/utlrp.sql

Gather Dictionary Statistics:
--------------------------

Helps in speeding up upgrade process. 
SQL> EXECUTE dbms_stats.gather_dictionary_stats;

Step 3: TIMEZONE Version
#####
In case you have TIMESTAMP WITH TIME ZONE data type in your DB, you need to Upgrade the Time Zone version to version 14.

The possibilities are:
 >If current version < 14 ,You need to upgrade to version 14 after you finish the upgrade to 11.2.0.3.
 >If current version = 14 ,No need to upgrade, Skip the whole Step.
 >If current version > 14 ,You must upgrade your Time Zone version before upgrading to 11.2.0.3 or your data stored in   TIMESTAMP WITH TIME ZONE datatype can become corrupted during the upgrade.

Check your current Time Zone version:

SQL> SELECT version FROM v$timezone_file;
     VERSION
     ----------
    4

In my case the version is older, so I can do this step later after finalizing the upgrade.

STEP 11 covers this part.
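The three possibilities above boil down to a tiny decision rule; here it is as a sketch (target version 14 is the one this post upgrades to, the helper name is my own):

```shell
tz_action() {   # map the current TZ file version to the required action
  local v="$1" target=14
  if   [ "$v" -lt "$target" ]; then echo "upgrade TZ after the DB upgrade"
  elif [ "$v" -eq "$target" ]; then echo "skip this step"
  else                              echo "upgrade TZ BEFORE the DB upgrade"
  fi
}
tz_action 4    # upgrade TZ after the DB upgrade
```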

Check National Characterset is UTF8 or AL16UTF16:
============================================
SQL> select value from NLS_DATABASE_PARAMETERS where parameter = 'NLS_NCHAR_CHARACTERSET';

Step 4: Disable Cluster option
####
Set the parameter cluster_database to FALSE for RAC database.


Step 5: Configure log locations
#####

Change SPFILE parameters to point to the new ORACLE HOME:
------------------------------------------------------------
# mkdir -p /u02/oracle/ora11g/11.2.0.3/diagnostics/orcl
SQL> ALTER SYSTEM SET diagnostic_dest = '/u02/oracle/ora11g/11.2.0.3/diagnostics/orcl';
SQL> ALTER SYSTEM SET audit_file_dest = '/u02/oracle/ora11g/11.2.0.3/rdbms/audit' SCOPE=SPFILE;


Step 6: New Environment Variables
#####

Make sure the following Linux environment variables are pointing to the new 11.2.0.3 ORACLE_HOME:
(ORACLE_BASE, ORACLE_HOME, PATH, ORA_NLS10 and LD_LIBRARY_PATH).


Step 7: Modify | Move configuration files
#####

>Make sure that entries inside /etc/oratab file are pointing to the new 11.2.0.3 ORACLE_HOME, hash the original entries pointing to 11.2.0.1 ORACLE_HOME:
e.g.
#pefms:/u01/oracle/ora11gr2/11.2.0.1:Y
pefms:/u02/oracle/ora11g/11.2.0.3:Y
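Hashing the old entries can be scripted; below is a hedged sed sketch, run against a scratch copy rather than the real /etc/oratab (the SID and home paths are this post's examples):

```shell
OLD_HOME=/u01/oracle/ora11gr2/11.2.0.1
NEW_HOME=/u02/oracle/ora11g/11.2.0.3
ORATAB=$(mktemp)                               # stand-in for /etc/oratab
echo "pefms:$OLD_HOME:Y" > "$ORATAB"

sed -i "s|^\([^#].*:$OLD_HOME:\)|#\1|" "$ORATAB"   # comment out old-home entries
echo "pefms:$NEW_HOME:Y" >> "$ORATAB"              # add the 11.2.0.3 entry
cat "$ORATAB"
```

On the real file you would run the sed against /etc/oratab as root, after taking a backup copy.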

> Copy SPFILE & Password File to the new ORACLE_HOME:
  -------------------------------------------------------------------------
# cd /u01/oracle/ora11gr2/11.2.0.1/dbs
# cp spfile* orapw* /u02/oracle/ora11g/11.2.0.3/dbs/

> Copy network configuration files to the new 11.2.0.3 $TNS_ADMIN directory:
  --------------------------------------------------------------------------
# cd /u01/oracle/ora11gr2/11.2.0.1/network/admin
# cp tnsnames.ora listener.ora sqlnet.ora  /u02/oracle/ora11g/11.2.0.3/network/admin/

> Copy DB Control EM directories to the new 11.2.0.3 ORACLE_HOME:
  ----------------------------------------------------------------
# $ORACLE_HOME/
# $ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_
# $ORACLE_HOME/owb/bin/admin

> Copy SQLPLUS settings file:
  ----------------------------------
# cp /u01/oracle/ora11gr2/11.2.0.1/sqlplus/admin/glogin.sql  /u02/oracle/ora11g/11.2.0.3/sqlplus/admin/


Step 8: Disable Vault | Adjust parameters for JVM
#####
>Disable Database Vault if enabled.

>If JVM is installed, java_pool_size and shared_pool_size must be set to at least 250MB prior to the upgrade.

Latest checks:
==========
SQL> SELECT * FROM v$recover_file;
no rows selected

SQL> SELECT * FROM v$backup WHERE status != 'NOT ACTIVE';
no rows selected

SQL> SELECT * FROM dba_2pc_pending;  --outstanding distributed transactions
no rows selected

SQL> SELECT name FROM sys.user$ WHERE ext_username IS NOT NULL AND password = 'GLOBAL';
no rows selected

Disable all batch and cron jobs:
----------------------------
Disable all crontab scripts:

# crontab -l > /root/crontab_root
# crontab /dev/null
# crontab -l

# crontab -l > /home/oracle/oracle_crontab
# crontab /dev/null
# crontab -l
Disable DB jobs:
SQL> EXEC dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','TRUE');
SQL> alter system set job_queue_processes=0 scope=both;

Step 9: Set the Database in the Noarchivelog mode.
#####
>Stop the listener: lsnrctl stop
>Stop DBCONSOLE: emctl stop dbconsole

Putting the database in noarchivelog mode shrinks the upgrade time.

$ sqlplus "/as sysdba"
SQL> shutdown immediate;
SQL> startup mount
SQL> alter database noarchivelog;
SQL> archive log stop;
SQL> shutdown immediate;

YOU MUST TAKE A COLD BACKUP AT THIS STAGE FOR FALLBACK PLAN.


##############
UPGRADE STEPS:
##############


Note: The database should be opened from the old ORACLE_HOME before running DBUA.
Note: If Oracle clusterware installed, it needs to be UP and running before starting DBUA.


Oracle provides many ways to upgrade your database; I'll discuss three of them here (DBUA, silent, manual), and it's up to you to select the suitable way for your implementation.


DBUA Way: (GUI)
===========
Ensure that dbua is running from the new ORACLE_HOME:
# which dbua

Run the DBUA by Oracle User:
# dbua

=>First screen: introduction .
=>2nd screen: Select the database name.
=>3rd screen: Check the following:
              -Recompile invalid objects at the end of upgrade.
              -Turn off Archiving for the duration of upgrade.
              -Upgrade Timezone version and TIMESTAMP WITH TIMEZONE data.
=>4th screen: keep the default.
=>5th screen: keep the default.
=>6th screen: keep the default.
=>7th screen: Put password for DBSNMP,SYSMAN.
=>8th screen: Select the DB listener.
=>9th screen: Finish.

Notes: Continue to the post-upgrade steps but skip these two steps (Upgrade the Timezone version & Configure EM) as they are automatically configured by DBUA.

Silent Way:
===========
In case you want to perform the upgrade using DBUA but you cannot forward X11 packets due to a firewall rule or another reason, you can use silent mode, which is faster and doesn't require X11 packet forwarding:

By Oracle user run this command:
# dbua -silent \
       -sid orcl \
       -autoextendFiles \
       -upgradeTimezone \
       -recompile_invalid_objects true \
       -degree_of_parallelism 4 \
       -emConfiguration LOCAL \
       -dbsnmpPassword <password> \
       -sysmanPassword <password>


Outputs will be like the following:

Log files for the upgrade operation are located at: /u02/oracle/cfgtoollogs/dbua/bkpefms/upgrade1 
Performing Pre Upgrade
1% complete 
7% complete 
Upgrading Oracle Server
....
Upgrading JServer JAVA Virtual Machine
22% complete 
....
85% complete 
Upgrading Timezone
....
92% complete 
....
Generating Summary
100% complete 
Check the log file "/u02/oracle/cfgtoollogs/dbua/logs/silent2.log" for upgrade details.




Notes: Continue to the post-upgrade steps but skip these two steps (Upgrade the Timezone version & Configure EM) as they are already configured by DBUA.


The Manual Way: (The Way I Prefer)
=================
Source the 11.2.0.3 profile, which points to the new 11.2.0.3 ORACLE_HOME locations:

# cd /home/oracle
# . .bash_profile11203

If you haven't created this file yet, copy .bash_profile to a new file named .bash_profile11203, then edit it, replacing the old ORACLE_HOME path with the new ORACLE_HOME path (/u02/oracle/ora11g/11.2.0.3 in my setup) and the old ORACLE_BASE with the new ORACLE_BASE (/u02/oracle in my setup).

Step 10: Execute the upgrade script
######

# export ORACLE_SID=orcl
# sqlplus / as sysdba
SQL> set echo on
SQL> SPOOL upgrade.log
SQL> startup upgrade
SQL> @?/rdbms/admin/catupgrd.sql
SQL> spool off
SQL> Shutdown immediate


Note: 
If you encounter a message listing obsolete initialization parameters during startup upgrade, remove the obsolete parameters from the PFILE.

 Note: You can re-run the catupgrd.sql script as many times as necessary. If you experience errors during the upgrade, fix them first, then:
     1) Shutdown immediate
     2) Startup upgrade
     3) @?/rdbms/admin/catupgrd.sql
> Check the spool file for errors.
> Restart the database in normal mode.

When the upgrade script is done, run the following scripts:

SQL> Startup
SQL> @?/rdbms/admin/utlu112s.sql
SQL> @?/rdbms/admin/catuppst.sql
SQL> @?/rdbms/admin/utlrp.sql
SQL> @?/rdbms/admin/utluiobj.sql
    --Checks invalid objects after the upgrade.

Run dbupgdiag.sql script (See note: 556610.1) and verify that all the components in dba_registry are valid and there are no invalid objects in dba_objects.

SQL> @/home/oracle/dbupgdiag.sql

Modify listener.ora:
=================
In the listener.ora file, modify the ORACLE_HOME path to the new 11.2.0.3 ORACLE_HOME:

Ex:
vi /u02/oracle/ora11g/11.2.0.3/network/admin/listener.ora
LISTENER_orcl =
    (DESCRIPTION =
      (ADDRESS_LIST =
       (ADDRESS = (PROTOCOL = TCP)(HOST = ora-dev1)(PORT = 1521))
    )
  )

SID_LIST_LISTENER_orcl =
 (SID_LIST =
   (SID_DESC =
      (ORACLE_HOME = /u02/oracle/ora11g/11.2.0.3)
      (SID_NAME = orcl)
    )
 )



##################
Post Upgrade Steps:
##################


Step 11: Upgrade the TimeZone version
######
Preparation Stage:
=============

SQL> SHU IMMEDIATE
SQL> STARTUP UPGRADE
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
   
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         4
     DST_SECONDARY_TT_VERSION       0
     DST_UPGRADE_STATE              NONE

SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> exec DBMS_DST.BEGIN_PREPARE(14)

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
   
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         4
     DST_SECONDARY_TT_VERSION       14
     DST_UPGRADE_STATE              PREPARE

SQL> TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
SQL> TRUNCATE TABLE sys.dst$affected_tables;
SQL> TRUNCATE TABLE sys.dst$error_table;

SQL> set serveroutput on
SQL> BEGIN
     DBMS_DST.FIND_AFFECTED_TABLES
     (affected_tables => 'sys.dst$affected_tables',
     log_errors => TRUE,
     log_errors_table => 'sys.dst$error_table');
     END;
     /
SQL> SELECT * FROM sys.dst$affected_tables;
SQL> SELECT * FROM sys.dst$error_table;
SQL> EXEC DBMS_DST.END_PREPARE;

Upgrade Stage:
==============

SQL> purge dba_recyclebin;
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> EXEC DBMS_DST.BEGIN_UPGRADE(14);

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
   
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         14
     DST_SECONDARY_TT_VERSION       4
     DST_UPGRADE_STATE              UPGRADE

SQL> SELECT OWNER, TABLE_NAME, UPGRADE_IN_PROGRESS FROM ALL_TSTZ_TABLES where UPGRADE_IN_PROGRESS='YES';
You may get "no rows selected" here; in my case I got 14 rows selected.

SQL> shutdown immediate
SQL> startup
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;

SQL> set serveroutput on
     VAR numfail number
     BEGIN
     DBMS_DST.UPGRADE_DATABASE(:numfail,
     parallel => TRUE,
     log_errors => TRUE,
     log_errors_table => 'SYS.DST$ERROR_TABLE',
     log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
     error_on_overlap_time => FALSE,
     error_on_nonexisting_time => FALSE);
     DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
     END;
     /

.
.
.
Failures:0

SQL> VAR fail number
     BEGIN
     DBMS_DST.END_UPGRADE(:fail);
     DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
     END;
     /

An upgrade window has been successfully ended.
Failures:0


SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
   
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         14
     DST_SECONDARY_TT_VERSION       0
     DST_UPGRADE_STATE              NONE

SQL> SELECT * FROM v$timezone_file;
     FILENAME               VERSION
     --------------------   ----------
     timezlrg_14.dat        14

SQL> select TZ_VERSION from registry$database;
     TZ_VERSION
     ----------
              4

SQL> update registry$database set TZ_VERSION = (select version FROM v$timezone_file);
SQL> commit;
SQL> select TZ_VERSION from registry$database;

     TZ_VERSION
     ----------
             14

SQL> SELECT value$ FROM sys.props$ WHERE NAME = 'DST_PRIMARY_TT_VERSION';
     VALUE$
     --------
           14



STEP 12: Set CLUSTER_DATABASE=TRUE
######
If your DB is a RAC one, set CLUSTER_DATABASE=TRUE

I've explained a full implementation (cook book) of upgrading a 11.2.0.1 to 11.2.0.3 RAC database on a new servers (out-place upgrade) in this link:

http://dba-tips.blogspot.ae/2013/09/upgrade-rac-11201-to-11203-part-i.html

STEP 13: Upgrade the Recovery Catalog
#######
If you're using the Recovery Catalog to back up your database, you have to upgrade the catalog DB:

A) Connect to the catalog DB through RMAN:
      RMAN> CONNECT CATALOG username/password@catalog_DB
B) Execute this command two times: 
      RMAN> UPGRADE CATALOG;


STEP 14: Upgrade Statistics Tables
#######

Statistics tables are tables that store statistics for a DB, schema, or tables, so they can be restored later or imported into another database, usually for testing purposes.

If you created statistics tables before using the DBMS_STATS.CREATE_STAT_TABLE, then upgrade each statistics table by running:
e.g.
SQL> EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('scott', 'stat_table');


STEP 15: Enable Database Vault
######
Enable Oracle Database Vault and Revoke the DV_PATCH_ADMIN Role Note 453903.1


STEP 16: Compatible version + Enable Archiving & Flashback mode
#######
Set the compatibility version to the current one, and enable archivelog and flashback modes:


SQL> alter system set compatible='11.2.0.3' scope=spfile;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database flashback on;
SQL> alter database open;


Re-point the directories to the new ORACLE_HOME:
-----------------------------------------------
As the ORACLE_HOME path has changed, the directories that point to the old ORACLE_HOME should be re-created to point to the new ORACLE_HOME:

Example:

SQL> col DIRECTORY_PATH for a80
SQL> SELECT * FROM dba_directories;

OWNER  DIRECTORY_NAME       DIRECTORY_PATH
-----  -------------------  ------------------------------------------
SYS    QUEST_SOO_UDUMP_DIR  /u01/oracle/ora11g/11.2.0.1/diagnostics/orcl/diag/rdbms/orcl/orcl/trace/

SQL> create or replace directory QUEST_SOO_UDUMP_DIR as '/u02/oracle/ora11g/11.2.0.3/diagnostics/orcl/diag/rdbms/orcl/orcl/trace/';

STEP 17: Enable Cron & DB Jobs:
######
Enable Crontab Jobs.
Enable DB jobs:
SQL> EXEC dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','FALSE');


STEP 18: Configure the Enterprise Manager
######
WARNING:
##########
Dropping or recreating the repository requires the database to be in quiesce mode. This means that, temporarily, no operations are possible on the database and new users cannot log in.
If this is mistakenly done during business hours, do the following:
1. Kill the emca command
2. SQL > alter system unquiesce;

Configure EM Using DBCA:
======================
Flush any previous failed attempt to configure EM:
-----------------------------------------------

From sqlplus as sysdba:
alter user dbsnmp identified by "xxx";
drop user sysman cascade;
drop public synonym SETEMVIEWUSERCONTEXT;
drop role MGMT_USER;
drop PUBLIC SYNONYM MGMT_TARGET_BLACKOUTS;
drop user MGMT_VIEW;
drop type sys.MGMT_MNTR_USER_STATS_ARRAY;
drop type sys.HA_HOST_CREDS_ARR;

Then run this command from Linux shell by Oracle user:
# emca -deconfig dbcontrol db -repos drop

Configure EM:

--------------
By Oracle User:


# emctl stop dbconsole 

# dbca
    --> Configure Database Options
    --> select: register this database with selected listeners only
    --> Put a complicated password for SYSMAN and DBSNMP, like abcde.$1234
    --> keep selecting the defaults.


How to start multiple EM agents on the same server: ORACLE_UNQNAME
------------------------------------------------
Note: We have more than one DB on the DEV server; each DB has a unique EM port assigned to it. Before starting or stopping the EM agent for a DB, you have to export the environment variable ORACLE_UNQNAME first:

e.g. starting EM agent for orcl3:
# export ORACLE_UNQNAME=orcl3
# emctl start dbconsole


>>>>>>>>>>>>>>>>>>>>>>>

THE UPGRADE IS DONE
>>>>>>>>>>>>>>>>>>>>>>>


The following are optional steps: Good To Do
##########################


STEP 20: Rebuild Unusable Indexes & Gather Statistics
#######

Rebuild Unusable indexes:
=====================
SQL> select 'ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD ONLINE;' from dba_indexes where status ='UNUSABLE';

Gather FIXED OBJECTS stats: (Do it during peak hours not within the downtime)
========================


Fixed objects are the x$ tables (loaded into the SGA during startup) on which the V$ views are built (V$SQL etc.).
Fixed object statistics are not gathered automatically, nor as part of the gather-database-stats procedure.
If statistics are not gathered on fixed objects, the optimizer will use predefined default values for them. These defaults may lead to inaccurate execution plans.
Note:
-It's recommended to gather fixed object stats during peak hours (system is busy), or after peak hours while sessions are still connected (even if idle), to guarantee that the fixed object tables are populated and the statistics represent real DB activity. Also note that performance degradation may be experienced while the statistics are being gathered.
-Having no statistics is better than having non-representative statistics.

Gather the fixed objects stats:
-----------------------------
SQL> exec dbms_stats.gather_fixed_objects_stats;

Gather DICTIONARY stats:
======================

Dictionary stats are gathered on dictionary tables owned by SYS and resides in the system tablespace.
SQL> Exec DBMS_STATS.GATHER_DICTIONARY_STATS ();

Gather database statistics:
==================
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

In case you need to gather DB Stats + Histograms on all skewed columns:

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>100,METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY',cascade => TRUE,degree => 8);


ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE => letting Oracle estimate the sample size always gives excellent results.
Removed "METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY'" => histograms are not recommended to be gathered on all columns.
Removed "cascade => TRUE" => to let Oracle determine whether index statistics should be collected or not.

For more information about gathering statistics on the database I strongly recommend you to read this post:

http://dba-tips.blogspot.ae/2012/11/all-about-statistics-in-oracle.html

Step 21: Check Oracle Recommended Patches:
######
Note ID 756671.1 includes the recommended patches for releases starting from 10.2.0.3.
Note ID 742060.1 represents the most accurate information Oracle can provide for coming releases.

The difference between a PSU patch and a CPU (SPU) patch: [ID 854428.1]
===========================================
-A PSU patch is the fifth digit in the release, like 11.2.0.1.1; it's published every quarter, in the same months as the CPU patch (Jan, Apr, Jul, Oct).
-PSU patch includes (CPU patches + common Bug fixes which affect a large number of customers).
-Once you start applying PSU patches, you shouldn't apply CPU patches anymore (this is what Oracle recommends, though it is possible to apply a CPU patch after applying PSU patches).
-PSU patches are cumulative, just like CPUs: once you apply the latest patch, the older ones are included in it.
-When downloading a PSU patch there are two flavors (PSU & GI PSU): GI PSU is for Grid Infrastructure (it patches both GI and the database), while the plain PSU only patches the database.

-Starting from October 2012, Oracle renamed CPU (Critical Patch Update) to SPU (Security Patch Update); both are the same, it's just a renaming.

*You can check the latest applied patches on the database by running this query:
SQL> select * from DBA_REGISTRY_HISTORY;

Removing OLD ORACLE_HOME: (Optional, you can do it later)
========================
After you feel confident with the new Oracle installation and are sure you will never downgrade to the previous release, remove the old ORACLE_HOME:

Detach old ORACLE_HOME:
# $OLD_HOME/oui/bin/runInstaller -detachHome -silent -local

Confirm old ORACLE_HOME is removed from central inventory:
# $OLD_HOME/OPatch/opatch lsinventory -all

Remove files in old ORACLE_HOME manually:
# rm -rf $OLD_HOME
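Because `rm -rf $OLD_HOME` is unforgiving when the variable is empty or mistyped, here is a hedged sketch of a guarded version (the bin/sqlplus sanity check and the function name are my own; demonstrated on a scratch directory rather than a real home):

```shell
remove_old_home() {
  local old="${1:?OLD_HOME not set}"                 # abort if unset/empty
  [ -d "$old" ] || { echo "not a directory: $old"; return 1; }
  [ -e "$old/bin/sqlplus" ] || { echo "refusing: no bin/sqlplus in $old"; return 1; }
  rm -rf "$old"
}

demo=$(mktemp -d)/11.2.0.1                           # fake old home
mkdir -p "$demo/bin"; touch "$demo/bin/sqlplus"
remove_old_home "$demo" && echo removed              # removed
```

The sqlplus check simply makes it hard to point the function at anything that isn't an Oracle home.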

The difference between granting direct privileges to a user and granting same privileges within a role


>When granting a role to a user, the user has to re-login to make use of that role.
>When granting a privilege to a user, the user can use it immediately; no need to re-login.
>Roles don't give the grantee the right to create objects based on the privileges they contain.
  Come again.. what does the last point mean?
  For example: if a role includes a SELECT privilege on table X1 in schema X, and this
  role is granted to user Y, user Y can select from table X1; but once user Y wants to create a
  view in his schema selecting from table X1, he will get ORA-01031: insufficient privileges. In
  this case you have to grant user Y a direct SELECT privilege on table X1 to enable him to create
  a view based on that table.

The following example will illustrate this point:


--By user SYS 

--create a new role HR1, grant to it select privilege on hr.employees, then grant this role to user SH:
SQL> create role hr1;
SQL> grant select on hr.employees to hr1;
SQL> grant hr1 to SH;

--By user SH

--now try to test the role granted to SH:
SQL> select * from hr.employees;
multiple rows returned...

--but when user SH tries to create a view based on the SELECT right he inherit it from the role:

SQL> create view empl as select * from hr.employees;
ERROR at line 1:
ORA-01031: insufficient privileges

--By user SYS

--User SH must have a direct privilege grant (not within a role) to be able to create objects based on it:
SQL> grant select on hr.employees to SH;

--By SH

--now user SH can create objects based on select privilege he has on hr.employees table:
SQL> create view empl as select * from hr.employees;
View created.

In his blog, Arup Nanda explained the difference between the system privilege SELECT ANY DICTIONARY and the role SELECT_CATALOG_ROLE based on this same point.


I hope that was informative.


Upgrade RAC 11.2.0.1 to 11.2.0.3 (Part I Software Installation)

In this post I'll discuss a full implementation of upgrading Oracle RAC 11.2.0.1 to 11.2.0.3; the new RAC 11.2.0.3 will be installed on new hardware (out-of-place upgrade).
To make this lengthy post more beneficial, I divided it into FOUR major parts:
  Part I   OS Installation, Filesystem preparation (OCFS2 on ISCSI)
             ->Covers Oracle Enterprise Linux 5.9 x86_64 installation, preparation of ISCSI storage, using OCFS2 to format the shared filesystem.
  Part II  Grid Infrastructure & Database software installation
             ->Covers preparation of the Linux OS for Oracle, and 11.2.0.3 Grid Infrastructure & database software installation.
  Part III Standby database creation
             ->Covers the creation of a standby database refreshed from the 11.2.0.1 primary DB, taking advantage of a new feature whereby a standby DB on a higher 11gR2 release can be refreshed from a lower 11gR2 release.
  Part IV  Database Upgrade from 11.2.0.1 to 11.2.0.3
             ->Covers switching over the new standby DB residing on the 11.2.0.3 server to act as primary, then upgrading the new primary DB from 11.2.0.1 to 11.2.0.3.

Feel free to jump to the part you're interested in.

Parts I, II, and III don't require downtime, as they are done on different hardware as part of our out-of-place upgrade; only Part IV requires a downtime window.
The whole implementation may take less than two hours of downtime if everything goes smoothly, but it will take the DBA long hours of work.

In this post I tried to provide a reference for each point, and followed many recommendations from the Maximum Availability Architecture (MAA).

Part I   OS Installation, Filesystem preparation (OCFS2 on ISCSI)


The following are "good to know" information before installing Oracle 11.2.0.3 on any hardware:

Knowledge Requirements:
===================
RAC and Oracle Clusterware Best Practices and Starter Kit (Linux) [Metalink Doc ID 811306.1]
RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent) [Metalink Doc ID 810394.1]
Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5 [Metalink Doc ID 465001.1]

Product Support Lifetime:
===================
This document indicates that premier support for database release 11.2 ends in Jan 2015 and extended support ends in Jan 2018.

Patching Support Lifetime:
===================
This document indicates Oracle will continue to provide security patches for the 11.2.0.3 version until 27-Aug-2015.
Security patches means (PSU, CPU, SPU patches). [Metalink Doc ID 742060.1]

Hardware Certification:
=================
RAC Technologies Matrix for Linux Platforms:

The main link for certification Matrix for other platforms: (Linux, Unix, Windows)

In this implementation I'll install RAC on ISCSI NAS storage.

Now let's move from the theoretical part to the technical steps...

Linux Requirements:
===============
If installing 11.2.0.3 on RHEL 5 x86_64:
The minimum requirement is Red Hat Enterprise Linux 5 Update 5 with the Unbreakable Enterprise Kernel 2.6.32 or later.

Partitioning requirement on the server’s local hard disk: [Minimum!]
=======================================
Local hard disk will contain Linux, Grid Infrastructure and database installed binaries.
/u01  => 10G Free space to hold the installation files (GI+DB). I recommend at least 30G to hold future generated logs.
/tmp  => 1G Free space.
SWAP  => RAM=32G which is >8G, SWAP= 75% of RAM = 24G
/dev/shm => must be greater than the sum of MEMORY_MAX_TARGET for all instances if you will use the new 11g feature Automatic Memory Management, by setting the memory_max_target & memory_target parameters to a specific value which will handle the memory size of SGA & PGA together.
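The swap rule quoted above (75% of RAM when RAM > 8G, roughly RAM-sized below that) can be written as simple arithmetic; this is a sketch of the post's rule of thumb only, the official install guide has a fuller sizing table:

```shell
swap_gb() {   # recommended swap in GB for a given RAM size in GB
  local ram="$1"
  if [ "$ram" -gt 8 ]; then echo $(( ram * 75 / 100 ))
  else echo "$ram"; fi
}
swap_gb 32    # 24
```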

More about /dev/shm:
-According to Oracle Support, /dev/shm cannot be greater than 50% of the RAM installed on the server.
-Make the /dev/shm size = the memory installed on the server, or at least the sum of MEMORY_MAX_TARGET for all DBs on the server.
-/dev/shm must exist if you will use the 11g Automatic Memory Management feature (setting the memory_max_target parameter).
-If /dev/shm doesn't exist or isn't properly sized, the database will raise this error at startup when memory_max_target is set:
 ORA-00845: MEMORY_TARGET not supported on this system.
-Oracle creates files under /dev/shm upon instance startup and removes them automatically after instance shutdown.
-Oracle uses these files to manage the memory size dynamically between the SGA and PGA.
-It's recommended to have the /dev/shm configured with the "tmpfs" option instead of "ramfs", as ramfs is not supported for Automatic Memory Management AMM:
 # df -h /dev/shm
 Filesystem            Size  Used Avail Use% Mounted on
 tmpfs                 16G     0  16G   0%   /dev/shm
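As an illustration of the sizing rule above, this sketch sums hypothetical per-instance MEMORY_MAX_TARGET values (all numbers are assumptions) and compares the total against /dev/shm:

```shell
# Hypothetical sanity check: the total MEMORY_MAX_TARGET must fit inside
# /dev/shm, otherwise instance startup fails with ORA-00845.
# The sizes below are assumptions for illustration only.
shm_gb=30
mem_targets_gb="8 12 4"   # MEMORY_MAX_TARGET per instance, in GB
total=0
for t in $mem_targets_gb; do total=$(( total + t )); done
if [ "$total" -le "$shm_gb" ]; then
  echo "OK: ${total}G fits in /dev/shm (${shm_gb}G)"
else
  echo "WARN: ${total}G exceeds /dev/shm (${shm_gb}G) -> expect ORA-00845"
fi
```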

BTW I'm not using this feature; I'm still sticking with sga_target & pga_aggregate_target. :-)

------------------------------
Linux OS Installation: Both Nodes (Estimated time: 3 hours)
------------------------------
Note: Perform a fresh Linux installation on all RAC nodes; DON'T clone the installation from one node to another to save time.

FS Layout:
>>>>>>>>>
The whole disk space is 300G

Filesystem    Size(G)   Size(M) used in setup
----------    -------   ---------------------
/boot         1G        1072    --Force to be a primary partition.
Swap          24G       24576   --Force to be a primary partition, 75% of RAM.
/dev/shm      30G       30720   --Force to be a primary partition.
/             20G       20480
/u01          70G       73728
/home         10G       10547
/tmp          5G        5240
/var          10G       10547
/u02          95G       The rest of the space

Note:
 If you will use an ISCSI drive, avoid making a separate partition for /usr, or the ISCSI drive will prevent the system from booting.

Packages selection during Linux installation:
>>>>>>>>>>>>>

Desktop Environment:
 # Gnome Desktop Environment
Applications:
 #Editors -> VIM
Development:
 # Development Libraries.
 # Development Tools
 # GNOME software development
 # Java Development
 # Kernel Development
 # Legacy Software Development
 # X Software Development
Servers:
 # Legacy Network Server -> Check only: rsh-server,xinetd
 # PostgreSQL -> Check only: UNIXODBC-nnn
 # Server Configuration Tools -> Check All
Base System:
 # Administration Tools.
 # Base -> Un check bluetooth,wireless packs Check-> Device mapper multipath
 # Java
 # Legacy Software Support
 # System Tools -> Check also: OCFS2 packages
 # X Window System

FIREWALL & SELINUX MUST BE STOPPED. [Note ID 554781.1]

I've uploaded OEL 5.9 installation snapshots in this link:

Populate /etc/hosts with the IPs and resolved names:
=======================================
# vi /etc/hosts

#You must keep 127.0.0.1 localhost; if it is removed, the VIP will not work!
#cluster_scan, Public and VIP IPs should be in the same subnet.

127.0.0.1       localhost localhost.localdomain

#Public:
172.18.20.1  ora1123-node1  node1 n1
172.18.20.2  ora1123-node2  node2 n2

#Virtual:
172.18.20.3  ora1123-node1-vip node1-vip n1-vip
172.18.20.4  ora1123-node2-vip node2-vip n2-vip

#Private:
192.168.10.1      ora1123-node1-priv n1-priv node1-priv
192.168.10.2      ora1123-node2-priv n2-priv node2-priv

#Cluster:
172.18.20.10  cluster-scan

#NAS
172.20.30.100   nas nas-server

#11.2.0.1 Servers:
10.60.60.1  ora1121-node1 old1 #the current 11.2.0.1 Node1
10.60.60.2  ora1121-node2 old2 #the current 11.2.0.1 Node2

I've added the RAC node names, the VIP and private IPs and their resolved names for both nodes, and guess what, I'm also resolving the cluster SCAN name in /etc/hosts. Keep it a secret, don't tell Larry :-)
Actually I'm still not convinced by the SCAN feature; if you will use it in your setup, just ask the network admin to resolve at least three SCAN IPs in the DNS to the cluster SCAN name you will use.
This document will help you understand the SINGLE CLIENT ACCESS NAME (SCAN):
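As a rough illustration, the sketch below lists the addresses a name maps to in an /etc/hosts-style file; with DNS-based SCAN you'd expect at least three (the name and file path are placeholders):

```shell
# Sketch: list the IPs mapped to a given name in an /etc/hosts-style file.
# With DNS-based SCAN you would check `nslookup <scan-name>` instead.
scan_ips() {   # usage: scan_ips <name> <hosts-file>
  awk -v n="$1" '$0 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) print $1 }' "$2"
}
# Example (placeholder name): scan_ips cluster-scan /etc/hosts
```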

Upgrade the KERNEL:
==================
-Subscribe The new servers in ULN network.
-Upgrade the Kernel to the latest version.
Ensure that /etc/resolv.conf is equipped with the DNS entry and that you are connected to the internet. Once this task is done, if you don't need the servers connected to the internet, make sure they can no longer reach it, for security reasons.

On linux server:
-------------------
# up2date --register

Install key? Yes

put this information:

login: xxxxxx
pass:  xxxxxx
CSI#: xxxxxx


In case you still cannot establish a connection with ULN You can use the IP 141.146.44.24 instead of address linux-update.oracle.com in "Network Configuration" button.
Also: in  /etc/sysconfig/rhn/up2date :
      You can change this line:
      noSSLServerURL=http://linux-update.oracle.com/XMLRPC to  noSSLServerURL=http://141.146.44.24/XMLRPC
      and this line:
      serverURL=https://linux-update.oracle.com/XMLRPC  to  serverURL=https://141.146.44.24/XMLRPC

Then proceed with updating the kernel from the same GUI or from command line as shown below:

up2date -d @  --> To download the updated packages
up2date @     --> To install the updated packages

I'm using the @ symbol to skip the GUI mode and continue in the CLI.

Configure YUM with ULN:
--------------------------------
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo
# vi public-yum-el5.repo
Modify the following:
Under both paragraphs: [el5_latest] & [ol5_UEK_latest] modify enabled=0 to enabled=1
An excerpt:

[el5_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL5/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
gpgcheck=1
enabled=1

[ol5_UEK_latest]
name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL5/UEK/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
gpgcheck=1
enabled=1
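To verify the edit, a small sketch (assuming the repo file path used above) that prints every section carrying enabled=1:

```shell
# Sketch: print each [section] header in a yum repo file that has enabled=1
# under it; on a configured node you'd expect [el5_latest] and [ol5_UEK_latest].
enabled_repos() {   # usage: enabled_repos /etc/yum.repos.d/public-yum-el5.repo
  awk '/^\[/ { sec = $0 } /^enabled=1/ { print sec }' "$1"
}
```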

Network configuration:
================
Node1:
--------
# cat /etc/sysconfig/network-scripts/ifcfg-eth0   #=>Public
DEVICE=eth0
BOOTPROTO=static
BROADCAST=172.18.20.255
IPADDR=172.18.20.1
NETMASK=255.255.255.0
NETWORK=172.18.20.0
ONBOOT=yes

# cat /etc/sysconfig/network-scripts/ifcfg-eth1   #=>ISCSI NAS
DEVICE=eth1
BOOTPROTO=static
BROADCAST=172.20.30.255
IPADDR=172.20.30.101
NETMASK=255.255.255.0
NETWORK=172.20.30.0
ONBOOT=yes

# cat /etc/sysconfig/network-scripts/ifcfg-eth3   #=>Private
DEVICE=eth3
BOOTPROTO=static
BROADCAST=192.168.10.255
IPADDR=192.168.10.1
NETMASK=255.255.255.0
NETWORK=192.168.10.0
ONBOOT=yes


Node2:
--------
# cat /etc/sysconfig/network-scripts/ifcfg-eth0   #=>Public
DEVICE=eth0
BOOTPROTO=static
BROADCAST=172.18.20.255
IPADDR=172.18.20.2
NETMASK=255.255.255.0
NETWORK=172.18.20.0
ONBOOT=yes

# cat /etc/sysconfig/network-scripts/ifcfg-eth1   #=>ISCSI NAS STORAGE
DEVICE=eth1
BOOTPROTO=static
BROADCAST=172.20.30.255
IPADDR=172.20.30.102
NETMASK=255.255.255.0
NETWORK=172.20.30.0
ONBOOT=yes

# cat /etc/sysconfig/network-scripts/ifcfg-eth3   #=>Private
DEVICE=eth3
BOOTPROTO=static
BROADCAST=192.168.10.255
IPADDR=192.168.10.2
NETMASK=255.255.255.0
NETWORK=192.168.10.0
ONBOOT=yes

-----------------------------
Filesystem Preparation:
-----------------------------

RAC servers will connect to the NAS shared storage using ISCSI protocol.

ISCSI Configuration:

Required packages:
# rpm -q iscsi-initiator-utils
# yum install iscsi-initiator-utils

To make ISCSI aware that LUNs are being accessed simultaneously by more than one node, and to avoid LUN corruption, use one of the following ways (A or B):
A) Generate IQN number
or
B) Setup username and password of ISCSI storage

I'll explain both of them:

A) Generate an IQN (ISCSI Qualified Name) in Linux for each node, to be saved inside the NAS configuration console:

On Node1:

Generate an IQN number:
# /sbin/iscsi-iname
iqn.1988-12.com.oracle:9e963384353a

Note: the last portion of the IQN after the colon ":" is editable and can be changed to the node name; I mean instead of "9e963384353a" you can rename it "node1". No spaces are allowed in the name.

Now insert the generated IQN to /etc/iscsi/initiatorname.iscsi
# vi /etc/iscsi/initiatorname.iscsi
#Note that the last portion of the IQN is modifiable (change it to a meaningful name)
InitiatorName=iqn.1988-12.com.oracle:node1

Do the same on Node2:

On Node2:

# /sbin/iscsi-iname
iqn.1988-12.com.oracle:18e6f43d73ad

# vi /etc/iscsi/initiatorname.iscsi
#Note that the last portion of the IQN is modifiable (change it to a meaningful name)
InitiatorName=iqn.1988-12.com.oracle:node2

Enter the same IQNs you inserted in /etc/iscsi/initiatorname.iscsi on both nodes into the NAS administration console, for each LUN that will be accessed by both nodes.

(This should be done by the Storage Admin)


B) Set up a username and password for ISCSI storage:

# vi /etc/iscsi/iscsid.conf
node.session.auth.username =
node.session.auth.password =
discovery.sendtargets.auth.username =
discovery.sendtargets.auth.password =

Start the iscsi service:
# /etc/init.d/iscsi start

Same Username & password should be configured in the NAS administration console.(This should be done by the Storage Admin)

Continue configuring the ISCSI:
=======================
Turn on the iscsi service to start after booting the machine:
# chkconfig iscsi on

Discover the target LUNs:

# service iscsi start
# iscsiadm -m discovery -t sendtargets -p 172.20.30.100
# (cd /dev/disk/by-path; ls -l *iscsi* | awk '{FS=""; print $9 "" $10 "" $11}')

Whenever iscsid discovers a new target, it adds the corresponding information under the following directory:
# ls -lR /var/lib/iscsi/nodes/
# service iscsi restart

Create Persistent Naming: (NON Multipath Configuration)
===================
Every time the machine or the ISCSI service restarts, the device names /dev/sd* will change, e.g. /data1 will point to /dev/sdc instead of /dev/sda; something we cannot live with at all.

Note: I only have one physical path "NIC" connecting to the NAS storage, so I apply non multipath configuration.

1) Whitelist all SCSI devices:
-- -----------------------------
# vi /etc/scsi_id.config
#Add the following lines:
vendor="ATA",options=-p 0x80
options=-g


2) Get the names of the LUNs and their device names:
-- --------------------------------------------------
# (cd /dev/disk/by-path; ls -l *iscsi* | awk '{FS=""; print $9 "" $10 "" $11}')

ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-archive1-lun-0 -> ../../sdn
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-archive2-lun-0 -> ../../sdr
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-backupdisk-lun-0 -> ../../sdi
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-control1-lun-0 -> ../../sdl
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-control2-lun-0 -> ../../sda
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-data1-lun-0 -> ../../sdp
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-index1-lun-0 -> ../../sdo
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr1-lun-0 -> ../../sdq
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr2-lun-0 -> ../../sde
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr3-lun-0 -> ../../sdf
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-redo1-lun-0 -> ../../sdb
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-redo2-lun-0 -> ../../sdm
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-temp1-lun-0 -> ../../sdh
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-undo1-lun-0 -> ../../sdj
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-undo2-lun-0 -> ../../sdc
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting1-lun-0 -> ../../sdk
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting2-lun-0 -> ../../sdg
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting3-lun-0 -> ../../sdd

3) Get the drives' UUIDs:
-- ------------------------
scsi_id -g -s /block/sdn
scsi_id -g -s /block/sdr
scsi_id -g -s /block/sdi
scsi_id -g -s /block/sdl
scsi_id -g -s /block/sda
scsi_id -g -s /block/sdp
scsi_id -g -s /block/sdo
scsi_id -g -s /block/sdq
scsi_id -g -s /block/sde
scsi_id -g -s /block/sdf
scsi_id -g -s /block/sdb
scsi_id -g -s /block/sdm
scsi_id -g -s /block/sdh
scsi_id -g -s /block/sdj
scsi_id -g -s /block/sdc
scsi_id -g -s /block/sdk
scsi_id -g -s /block/sdg
scsi_id -g -s /block/sdd

These UUIDs are the consistent identifiers for the devices, we will use them in the next step.

4) Create the file /etc/udev/rules.d/04-oracle-naming.rules with the following format:
-- --------------------------------------------------------------------------------------
# vi /etc/udev/rules.d/04-oracle-naming.rules

#Add a line for each device specifying the device name & its UUID:
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d003000000000", NAME="archive1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d004000000000", NAME="archive2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d002000000000", NAME="backupdisk"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d001000000000", NAME="control1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d005000000000", NAME="control2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d006000000000", NAME="data1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d007000000000", NAME="index1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d008000000000", NAME="ocr1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d009000000000", NAME="ocr2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d010000000000", NAME="ocr3"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d011000000000", NAME="redo1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d012000000000", NAME="redo2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d013000000000", NAME="temp1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d014000000000", NAME="undo1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d015000000000", NAME="undo2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d016000000000", NAME="voting1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d017000000000", NAME="voting2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s /block/%k", RESULT=="360014052e3032700063d018000000000", NAME="voting3"

# service iscsi restart

5) Check the configuration:
-- -----------------------------
The new device names under /dev should now be e.g. /dev/archive1 instead of /dev/sdn.
Note: fdisk -l will not show the NAS devices anymore; don't worry, use the following instead:

#(cd /dev/disk/by-path; ls -l *iscsi* | awk '{FS=""; print $9 "" $10 "" $11}')

ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-archive1-lun-0 -> ../../archive1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-archive2-lun-0 -> ../../archive2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-backupdisk-lun-0 -> ../../backupdisk
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-control1-lun-0 -> ../../control1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-control2-lun-0 -> ../../control2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-data1-lun-0 -> ../../data1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-index1-lun-0 -> ../../index1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr1-lun-0 -> ../../ocr1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr2-lun-0 -> ../../ocr2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-ocr3-lun-0 -> ../../ocr3
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-redo1-lun-0 -> ../../redo1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-redo2-lun-0 -> ../../redo2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-temp1-lun-0 -> ../../temp1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-undo1-lun-0 -> ../../undo1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-undo2-lun-0 -> ../../undo2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting1-lun-0 -> ../../voting1
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting2-lun-0 -> ../../voting2
ip-172.20.30.100:3260-iscsi-iqn.2013-7.VLA-NAS03:pefms-voting3-lun-0 -> ../../voting3

also test UDEV rule:
# udevtest /block/sdb | grep udev_rules_get_name

udev_rules_get_name: rule applied, 'sdb' becomes 'ocr2'
...


OCFS2 Configuration:

Required Packages:
OCFS2 packages should have been installed during the Linux installation if you selected the right packages.
If you didn't, you can download and install the required OCFS2 packages using the following commands:
# up2date --install ocfs2-tools ocfs2console
# up2date --install ocfs2-`uname -r`

1) populate /etc/ocfs2/cluster.conf settings:
-  -------------------------------------
In the OCFS2 configuration I'll use the heartbeat NICs (private) not the public ones.

# mkdir -p /etc/ocfs2/
# vi /etc/ocfs2/cluster.conf
node:
        ip_port = 7000
        ip_address = 192.168.10.1
        number = 0
        name = ora1123-node1
        cluster = ocfs2

node:
        ip_port = 7000
        ip_address = 192.168.10.2
        number = 1
        name = ora1123-node2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2

Options:
ip_port:    The default; can be changed to any unused port.
ip_address: Using the private interconnect is highly recommended, as it's supposed to be a private network between the cluster nodes only.
number:     A node-unique number from 0-254.
name:       The node name; it needs to match the hostname without the domain name.
cluster:    Name of the cluster.
node_count: Number of nodes in the cluster.

BEWARE: Be careful when editing this file; parameter lines must start after a tab, and a blank line must separate each stanza.
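A quick, optional sanity check for the format above; it just compares node_count against the number of node: stanzas:

```shell
# Sketch: cross-check that node_count matches the number of node: stanzas,
# a cheap way to catch formatting slips in cluster.conf.
check_cluster_conf() {   # usage: check_cluster_conf /etc/ocfs2/cluster.conf
  local stanzas declared
  stanzas=$(grep -c '^node:' "$1")
  declared=$(awk -F= '/node_count/ { gsub(/ /, "", $2); print $2 }' "$1")
  if [ "$stanzas" = "$declared" ]; then
    echo "OK: $stanzas node stanzas match node_count"
  else
    echo "MISMATCH: $stanzas stanzas vs node_count=$declared"
  fi
}
```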

2) Timeout Configuration:
-  -----------------------------
The O2CB cluster stack uses these timings to determine whether a node is dead or alive. Keeping default values is recommended.

# /etc/init.d/o2cb configure

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]: 60000
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:

Heartbeat Dead Threshold: the number of two-second iterations before a node is considered dead. 61 is recommended for multipath users; for my setup I set it to 61, which corresponds to a ~120 sec timeout.
Network Idle Timeout: the time in milliseconds before a network connection is considered dead. 60000 ms is recommended.

Configure the cluster to load on boot:
-------------------------------------------
# chkconfig --add o2cb
# chkconfig --add ocfs2
# /etc/init.d/o2cb load
# /etc/init.d/o2cb start ocfs2


Filesystem Partitioning: OCFS2
==================
As per the labels on the NAS disks, I'll assign the same names to the OCFS2 labels.

# fdisk -l |grep /dev
# (cd /dev/disk/by-path; ls -l *iscsi* | awk '{FS=""; print $9 "" $10 "" $11}')

Formating:
--------------
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L ocr1  /dev/ocr1
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L ocr2  /dev/ocr2
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L ocr3  /dev/ocr3
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L voting1  /dev/voting1
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L voting2  /dev/voting2
# mkfs.ocfs2 -F -b 4K -C 32K -N 2 -L voting3  /dev/voting3
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L redo1  -J size=64M /dev/redo1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L redo2  -J size=64M /dev/redo2
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L control1  -J size=64M /dev/control1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L control2  -J size=64M /dev/control2
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L archive1  -J size=64M /dev/archive1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L archive2   -J size=64M /dev/archive2
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L undo1  -J size=64M /dev/undo1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L undo2  -J size=64M /dev/undo2
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L data1  -J size=64M /dev/data1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L index1   -J size=64M /dev/index1
# mkfs.ocfs2 -F -b 4k -C 8k -N 2 -L temp1   -J size=64M /dev/temp1
# mkfs.ocfs2 -F -b 4k -C 1M -N 2 -L backupdisk  -J size=64M /dev/backupdisk

Options:
-F Force the format if the device was previously formatted by OCFS2 (overwrites the data).
-b Block size, from 512 to 4k (default); 4k is recommended. A smaller block size means a smaller max size: maxsize = 2^32 * blocksize, so with blocksize=4096, maxsize=16T.
-C Cluster size, from 4k (default) to 1M; 4k is recommended EXCEPT for datafile partitions, where it should equal the database block size = 8k.
   For backup storage filesystems holding RMAN backups and dump files, use a bigger cluster size.
   128k is recommended as a default cluster size if you're not sure what cluster size to use.
-N Number of node slots, i.e. nodes that can mount the volume concurrently; it's recommended to set it bigger than required, e.g. if you have two nodes set it to 4. This parameter can be increased later using tunefs.ocfs2, but that practice can lead to bad performance.
-L Label name; labeling the volume allows consistent "persistent" naming across the cluster, even if you're using ISCSI.
-J Journal size, 256MB (default); recommended: 64MB for datafiles, 128MB for vmstore and 256MB for mail.
-T Filesystem type (datafiles, mail, vmstore):
    (datafiles) recommended for database filesystems; sets (blocksize=4k, clustersize=128k, journal size=32M)
    (vmstore)   recommended for backup filesystems; sets (blocksize=4k, clustersize=128k, journal size=128M)

Note: For the filesystems that hold the database files set the cluster size -C 8k
      For the filesystems that hold backup files set the cluster size -C 1M
      If you're not sure what cluster size to use, use 128k; it has proven a reasonable trade-off between wasted space and performance.
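The maxsize formula mentioned with the -b option works out as follows (2^32 * blocksize bytes equals blocksize/256 TiB):

```shell
# Max OCFS2 filesystem size per the 2^32 * blocksize formula above:
# 2^32 blocks * bs bytes = (bs / 256) TiB.
ocfs2_max_tib() { echo $(( $1 / 256 )); }   # blocksize in bytes -> max size in TiB
for bs in 512 1024 2048 4096; do
  echo "blocksize=${bs} -> max $(ocfs2_max_tib $bs)TiB"
done
```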

Mounting the partitions:
-----------------------------
mkdir /ora_redo1
mkdir /ora_backupdisk
mkdir /ora_undo1
mkdir /ora_undo2
mkdir /ora_control2
mkdir /ora_control1
mkdir /ora_archive1
mkdir /ora_redo2
mkdir /ora_temp1
mkdir /ora_index1
mkdir /ora_archive2
mkdir /ora_data1
mkdir /ora_ocr1
mkdir /ora_ocr2
mkdir /ora_ocr3
mkdir /ora_voting1
mkdir /ora_voting2
mkdir /ora_voting3

chown -R oracle:oinstall /ora*
chmod 750 /ora*

Mounting the partitions automatically when system restart:
-----------------------------------------------------------------
vi /etc/fstab

LABEL=ocr1  /ora_ocr1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=ocr2  /ora_ocr2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=ocr3  /ora_ocr3 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=voting1  /ora_voting1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=voting2  /ora_voting2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=voting3  /ora_voting3 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=control1  /ora_control1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=control2  /ora_control2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=redo1  /ora_redo1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=redo2  /ora_redo2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=archive1  /ora_archive1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=archive2  /ora_archive2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=temp1  /ora_temp1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=undo1  /ora_undo1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=undo2  /ora_undo2 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=index1  /ora_index1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=data1  /ora_data1 ocfs2   _netdev,datavolume,nointr   0   0
LABEL=backupdisk /ora_backupdisk ocfs2   _netdev        0   0

Partitions mount options:
>>>>>>>>>>>>>>>>>
_netdev: mandatory; prevents attempting to mount the filesystem until the network has been enabled on the system.
datavolume: forces direct I/O; used with filesystems containing Oracle data files, control files, redo/archive logs and voting/OCR disks. Same behavior as setting the init.ora parameter filesystemio_options.
            The datavolume mount option MUST NOT be used on volumes hosting the Oracle home, Oracle E-Business Suite or any other use.
nointr: default; blocks signals from interrupting certain cluster operations (disables interrupts).
rw: default; mounts the FS in read-write mode.
ro: mounts the FS in read-only mode.
noatime: default; disables access time updates, which improves performance (important for DB/cluster files).
atime_quantum=: updates the atime of files every 60 seconds (default); degrades performance.
commit=: optional; syncs all data every 5 seconds (default), which degrades performance. In case of failure you will lose the last 5 seconds of work (the filesystem will not be damaged, thanks to journaling). A higher value improves performance at more data-loss risk.

After adding the entries to /etc/fstab you can mount the partitions using these commands:
# mount -a

OR:
# mount -L "temp1" /ora_temp1

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c1d0p6      20G  3.9G   15G  22% /
/dev/cciss/c1d0p10     98G  7.0G   86G   8% /u02
/dev/cciss/c1d0p9     5.0G  139M  4.6G   3% /tmp
/dev/cciss/c1d0p8      10G  162M  9.4G   2% /home
/dev/cciss/c1d0p7      10G  629M  8.9G   7% /var
/dev/cciss/c1d0p2      16G     0   16G   0% /dev/shm
/dev/cciss/c1d0p5      70G  180M   66G   1% /u01
/dev/cciss/c1d0p1    1003M   76M  876M   8% /boot
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdm              1.0G  143M  882M  14% /ora_ocr1
/dev/sdd              1.0G  143M  882M  14% /ora_ocr2
/dev/sde              1.0G  143M  882M  14% /ora_ocr3
/dev/sdk              1.0G  143M  882M  14% /ora_voting1
/dev/sdi              1.0G  143M  882M  14% /ora_voting2
/dev/sdg              1.0G  143M  882M  14% /ora_voting3
/dev/sdl               10G  151M  9.9G   2% /ora_control1
/dev/sda               10G  151M  9.9G   2% /ora_control2
/dev/sdb               10G  151M  9.9G   2% /ora_redo1
/dev/sdr               10G  151M  9.9G   2% /ora_redo2
/dev/sdp              300G  456M  300G   1% /ora_archive1
/dev/sdn              300G  456M  300G   1% /ora_archive2
/dev/sdf               60G  205M   60G   1% /ora_temp1
/dev/sdj               40G  184M   40G   1% /ora_undo1
/dev/sdc               40G  184M   40G   1% /ora_undo2
/dev/sdo              200G  349M  200G   1% /ora_index1
/dev/sdq              400G  563M  400G   1% /ora_data1
/dev/sdh              500G  674M  500G   1% /ora_backupdisk

Performance Tip: Ensure updatedb does not scan OCFS2 partitions, by adding the "ocfs2" keyword to the "PRUNEFS =" list in /etc/updatedb.conf
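A small sketch of that edit (the path, the in-place sed and the case-insensitive match are assumptions; back up the file first):

```shell
# Sketch: add ocfs2 to the PRUNEFS list if it's not there yet.
# Assumes the mlocate-style /etc/updatedb.conf layout: PRUNEFS = "fs1 fs2 ..."
add_prunefs_ocfs2() {   # usage: add_prunefs_ocfs2 /etc/updatedb.conf
  grep -qi 'ocfs2' "$1" || sed -i 's/^PRUNEFS *= *"/&ocfs2 /' "$1"
  grep '^PRUNEFS' "$1"
}
```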


 ///////////////////////////////////////////////////////////////
 In case of using ASM for the shared storage (very quick guide)
 ///////////////////////////////////////////////////////////////
 Note: Don't use persistent naming unless you finish configuring ASM first.

 Install ASMLib 2.0 Packages:
 ---------------------------
 # rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort
 oracleasm-2.6.18-348.el5-2.0.5-1.el5 (x86_64)
 oracleasmlib-2.0.4-1.el5 (x86_64)
 oracleasm-support-2.1.7-1.el5 (x86_64)

 Configure ASMLib:
 ----------------
 # /usr/sbin/oracleasm configure -i
 Default user to own the driver interface []: oracle
 Default group to own the driver interface []: dba
 Start Oracle ASM library driver on boot (y/n) [n]: y
 Scan for Oracle ASM disks on boot (y/n) [y]: y

 # /usr/sbin/oracleasm init

 Use FDISK to create RAW partition for each disk:
 -----------------------------------------------
 # fdisk /dev/sdn
  n
  p
  1
 
 
  w

 Do the same for other disks....
 Commit your changes without the need to restart the system using this command:
 # partprobe

 Create ASM Disks:
 ----------------
 # /usr/sbin/oracleasm createdisk OCR1 /dev/sdn1
 # /usr/sbin/oracleasm createdisk OCR2 /dev/sdd1
 # /usr/sbin/oracleasm createdisk OCR3 /dev/sde1
 # /usr/sbin/oracleasm createdisk voting1 /dev/sdk1
 # /usr/sbin/oracleasm createdisk voting2 /dev/sdi1
 # /usr/sbin/oracleasm createdisk voting3 /dev/sdj1
 ... and so on

 SCAN ASM Disks:
 --------------
 # /usr/sbin/oracleasm scandisks
 Reloading disk partitions: done
 Cleaning any stale ASM disks...
 Scanning system for ASM disks...
 Instantiating disk "OCR1"
 Instantiating disk "OCR2"
 ....

 # /usr/sbin/oracleasm listdisks
 OCR1
 ...

 # oracleasm querydisk /dev/sdn1

 Diskgroup creation will be done from the installer.

 ////////////////////////////////////////////////////////////////////

In case you want to use RAW DEVICES for the shared storage:
Note that starting with 11gR2, using DBCA or the installer to store Oracle Clusterware or Oracle Database files on block or raw devices is not supported.


Next:

In Part II I'll continue with the OS preparation for the RAC setup, and the Grid Infrastructure and Database installation on the new servers.

Upgrade from RAC 11.2.0.1 to 11.2.0.3 (Part II OS preparation, Grid Infrastructure and Database software installation)

In Part I I've installed the Linux OS and prepared the shared filesystem (ISCSI configuration & OCFS2).
In this part I'll prepare the Linux OS for the RAC installation, install Grid Infrastructure 11.2.0.3 and install Database software.

Packages Requirements: (install the same package versions or later)
==================
OEL 5 Required packages for 11.2.0.2 and later versions:
--------------------------------------------------------------------
rpm -qa | grep binutils-2.17.50.0.6
rpm -qa | grep compat-libstdc++-33-3.2.3
rpm -qa | grep elfutils-libelf-0.1
rpm -qa | grep elfutils-libelf-devel-0.1
rpm -qa | grep gcc-4.1.2
rpm -qa | grep gcc-c++-4.1.2
rpm -qa | grep glibc-2.5
rpm -qa | grep glibc-common-2.5
rpm -qa | grep glibc-devel-2.5
rpm -qa | grep glibc-headers-2.5
rpm -qa | grep ksh-2
rpm -qa | grep libaio-0.3.106
rpm -qa | grep libaio-devel-0.3.106
rpm -qa | grep libgcc-4.1.2
rpm -qa | grep libstdc++-4.1.2
rpm -qa | grep libstdc++-devel-4.1.2
rpm -qa | grep make-3.81
rpm -qa | grep sysstat-7.0.2
rpm -qa | grep unixODBC-2.2.11       #=> (32-bit) or later
rpm -qa | grep unixODBC-devel-2.2.11 #=> (64-bit) or later
rpm -qa | grep unixODBC-2.2.11       #=> (64-bit) or later
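The checks above can be wrapped in one small helper; only a subset of the package list is included here for brevity:

```shell
# Sketch: flag required packages that are absent from an `rpm -qa` listing.
# Only a subset of the version prefixes listed above is included for brevity.
check_pkgs() {   # usage: rpm -qa | check_pkgs
  local installed p
  installed=$(cat)
  for p in binutils-2.17.50.0.6 compat-libstdc++-33-3.2.3 glibc-2.5 \
           ksh-2 libaio-0.3.106 make-3.81 sysstat-7.0.2 unixODBC-2.2.11; do
    echo "$installed" | grep -q "^$p" || echo "MISSING: $p"
  done
}
# On a real node: rpm -qa | check_pkgs
```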

In case you have missing packages try installing them from the Linux installation DVD:
e.g.
cd /media/OL5.9\ x86_64\ dvd\ 20130429/Server/
rpm -ivh numactl-devel-0.9.8-12.0.1.el5_6.i386.rpm

The easiest way to download & install the 11gR2 required packages & OS settings is to install the oracle-rdbms-server-11gR2-preinstall package:

Make sure to install Oracle on a NON Tainted Kernel:
-------------------------------------------------------------
What does a tainted kernel mean:
 -A special module has changed the kernel.
 -A module has been force-loaded by insmod -f.
 -A successful Oracle installation, Oracle support for that database and Oracle support for Linux will all depend on the module that tainted the kernel.
Oracle Support may not support your system (Linux, database) if a main module in the kernel has been tainted.
How to check if the kernel is tainted or not:
# cat /proc/sys/kernel/tainted
1
If the output is 1, the kernel is tainted; contact Oracle Support and ask whether to proceed with the Oracle installation or not.
If the output is 0, the kernel is not tainted and you're good to go to install the Oracle software.

Network Requirements:
=================
> Each node must have at least two NICs.
> Recommended to use NIC bonding for public NIC, use HAIP for private NIC (11.2.0.2 onwards).
> Recommended to use redundant switches along with NIC bonding.
> Public & private interface names must be identical on all nodes (e.g. eth0 is the public NIC on all nodes).
> Crossover cables between private RAC NICs are NOT supported (a gigabit switch is the minimum requirement). Crossover cables limit the expansion of RAC to two nodes, cause bad performance due to excessive packet collisions, and cause unstable negotiation between the NICs.
> Public NICs and VIPs / SCAN VIPs must be on the same subnet. Private NICs must be on a different subnet.
> For private interconnect use non-routable addresses:
   [From 10.0.0.0    to  10.255.255.255 or
    From 172.16.0.0  to  172.31.255.255 or
    From 192.168.0.0 to  192.168.255.255]
> Default GATEWAY must be on the same Public | VIPs | SCAN VIPs subnet.
> If you will use SCAN VIPs, the SCAN name is recommended to resolve via DNS to a minimum of 3 IP addresses.
> /etc/hosts or DNS must include PUBLIC & VIP IPs with the host names.
> SCAN IPs should not be in /etc/hosts; resolving the SCAN name through /etc/hosts is only for those not willing to use SCAN, just to let the Grid Infrastructure installation succeed.
> NIC names must NOT include DOT "."
> Every node in the cluster must be able to connect to every private NIC in each node.
> Host names for nodes must NOT have underscores (_).
> Linux Firewall (iptables) must be disabled at least on the private network. If you plan to enable the firewall, I recommend keeping it disabled until you finish installing all Oracle products, so installation problems are easy to troubleshoot; once you finish, enable it and feel free to blame the firewall if something doesn't work :-).
> Oracle recommends disabling network zeroconf (it can cause node eviction):
  # route -n  => If you find a line for 169.254.0.0, zeroconf is enabled on your OS (the default); disable it as follows:
  # vi /etc/sysconfig/network
  #Add this line:
  NOZEROCONF=yes
Restart the network:
  # service network restart
> Recommended to use JUMBO frames for interconnect: [Note: 341788.1]
  Warning: although it's available in most network devices, it's not supported by some NICs (especially Intel NICs) & switches. JUMBO frames must also be enabled on the interconnect switch device (doing a test is mandatory).
  # suppose that eth3 is your interconnect NIC:
  # vi /etc/sysconfig/network-scripts/ifcfg-eth3
  #Add the following parameter:
  MTU=9000
  # ifdown eth3; ifup eth3
  # ifconfig -a eth3  => you will see the value of MTU=9000 (The default MTU is 1500)
  Testing JUMBO frames using the traceroute command (during the test we shouldn't see "Message too long" in the output):
  =>From Node1:
  # traceroute -F node2-priv 8970
    traceroute to n2-priv (192.168.110.2), 30 hops max, 9000 byte packets
    1  node2-priv (192.168.110.2)  0.269 ms  0.238 ms  0.226 ms
  =>This test was OK
  =>In case you got the message "Message too long", reduce the MTU until the message stops appearing.
  Testing JUMBO frames using ping: (With MTU=9000 test with 8970 bytes not more)
  =>From Node1:
  # ping -c 2 -M do -s 8970 node2-priv
    8978 bytes from node2-priv (192.168.110.2): icmp_seq=0 ttl=64 time=0.245 ms
  =>This test was OK.
  =>In case you got this message "Frag needed and DF set (mtu = 9000)" reduce the MTU till you get the previous output.
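  The 8970-byte figure isn't arbitrary: the ICMP payload has to fit inside the MTU together with the IP and ICMP headers. A quick sanity check of the arithmetic:

```shell
# Max ping payload for a given MTU = MTU - 20 (IP header) - 8 (ICMP header)
MTU=9000
PAYLOAD=$(( MTU - 20 - 8 ))
echo "$PAYLOAD"    # 8972; testing with 8970 as above stays safely under this ceiling
```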
> Stop the avahi-daemon: recommended by Oracle; it causes node eviction and can prevent a node from re-joining the cluster [Note: 1501093.1]
  # service avahi-daemon stop
  # chkconfig avahi-daemon off
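To tie the naming requirements above together, here is a sketch of what /etc/hosts could look like for this two-node cluster (all IP addresses are illustrative; SCAN entries are deliberately absent because the SCAN name should resolve via DNS):

```shell
# /etc/hosts -- public and VIP names, plus private names, on every node
192.168.100.1    ora1123-node1
192.168.100.2    ora1123-node2
192.168.100.11   ora1123-node1-vip
192.168.100.12   ora1123-node2-vip
192.168.110.1    ora1123-node1-priv
192.168.110.2    ora1123-node2-priv
```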

Create new Grid & Oracle home:
========================
mkdir -p /u01/grid/11.2.0.3/grid
mkdir -p /u01/oracle/11.2.0.3/db
chown -R oracle:dba /u01
chown oracle:oinstall /u01
chmod 700 /u01
chmod 750 /u01/oracle/11.2.0.3/db

Note: Oracle user, DBA and OINSTALL groups are created during Oracle Enterprise Linux installation.
Note: I'll install Grid & Oracle with oracle user, I'll not create a new user to be the grid installation owner.

Adding environment variables to Oracle profile:
-----------------------------------------------------
I use many command aliases inside the oracle user profile to speed up my administration work; I think they may be helpful for you too. Some aliases refer to helpful shell scripts (like checking locking sessions on the DB and more) which I'll share with you in future posts.

# su - oracle
# vi .bash_profile  


# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
 . ~/.bashrc
fi
if [ -t 0 ]; then
   stty intr ^C
fi

umask 022
# User specific environment and startup programs
unset USERNAME
ORACLE_SID=pefms1
export ORACLE_SID
ORACLE_BASE=/u01/oracle
export ORACLE_BASE
ORACLE_HOME=/u01/oracle/11.2.0.3/db; export ORACLE_HOME
GRID_HOME=/u01/grid/11.2.0.3/grid
export GRID_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
TNS_ADMIN=$ORACLE_HOME/network/admin
export TNS_ADMIN
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib; export CLASSPATH
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$GRID_HOME/bin:/usr/ccs/bin:/usr/bin/X11/:/usr/local/bin:$ORACLE_HOME/OPatch
export PATH
export ORACLE_UNQNAME=pefms
export ORACLE_HOSTNAME=ora1123-node1
alias profile='cd;. ./.bash_profile;cd -'
alias viprofile='cd; vi .bash_profile'
alias catprofile='cd; cat .bash_profile'
alias tnsping='$ORACLE_HOME/bin/./tnsping'
alias pefms='export ORACLE_SID=pefms; echo $ORACLE_SID'
alias sql="sqlplus '/ as sysdba'"
alias alert="tail -100f $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/trace/alert_pefms1.log"
alias vialert="vi $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/trace/alert_pefms1.log"
alias lis="vi $ORACLE_HOME/network/admin/listener.ora"
alias tns="vi $ORACLE_HOME/network/admin/tnsnames.ora"
alias sqlnet="vi $ORACLE_HOME/network/admin/sqlnet.ora"
alias sqlnetlog='vi $ORACLE_HOME/log/diag/clients/user_oracle/host_2245657081_76/trace/sqlnet.log'
alias network=" cd $ORACLE_HOME/network/admin;ls -rtlh;pwd"
alias arc="cd /ora_archive1/pefms/; ls -rtlh|tail -50;pwd"
alias p="ps -ef|grep pmon|grep -v grep"
alias oh="cd $ORACLE_HOME;ls;pwd"
alias dbs="cd $ORACLE_HOME/dbs;ls -rtlh;pwd"
alias pfile="vi $ORACLE_HOME/dbs/initpefms1.ora"
alias catpfile="cat $ORACLE_HOME/dbs/initpefms1.ora"
alias spfile="cd /fiber_ocfs_pefms_data_1/oracle/pefms; cat spfilepefms1.ora"
alias bdump='cd $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/trace;ls -lrt|tail -10;pwd'
alias udump='cd $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/trace;ls -lrt;pwd';
alias cdump='cd $ORACLE_HOME/diagnostics/pefms/diag/rdbms/pefms/pefms1/cdump;ls -lrt;pwd'
alias rman='cd $ORACLE_HOME/bin; ./rman target /'
alias listenerlog='tail -100f $ORACLE_BASE/diag/tnslsnr/ora1123-node1/listener/trace/listener.log'
alias vilistenerlog='vi $ORACLE_BASE/diag/tnslsnr/ora1123-node1/listener/trace/listener.log'
alias listenerpefms1log='tail -100f $ORACLE_HOME/log/diag/tnslsnr/ora1123-node1/listener_pefms1/trace/listener_pefms1.log '
alias listenerpefms2log='tail -100f $ORACLE_HOME/log/diag/tnslsnr/ora1123-node2/listener_pefms2/trace/listener_pefms2.log'
alias listenertail='tail -100f $ORACLE_BASE/diag/tnslsnr/ora1123-node1/listener/trace/listener.log'
alias cron='crontab -e'
alias crol='crontab -l'
alias df='df -h'
alias ll='ls -rtlh'
alias lla='ls -rtlha'
alias l='ls'
alias patrol='sh /home/oracle/patrol.sh'
alias datafiles='sh /home/oracle/db_size.sh'
alias locks='sh /home/oracle/locks.sh'
alias objects='sh /home/oracle/object_size.sh'
alias jobs='sh /home/oracle/jobs.sh'
alias crs='$GRID_HOME/bin/crsstat'
alias crss='crs|grep -v asm|grep -v acfs|grep -v gsd|grep -v oc4j|grep -v ora.cvu'
alias raclog='tail -100f $GRID_HOME/log/ora1123-node1/alertora1123-node1.log'
alias viraclog='vi $GRID_HOME/log/ora1123-node1/alertora1123-node1.log'
alias datafile='sh /home/oracle/db_size.sh'
alias invalid='sh /home/oracle/Invalid_objects.sh'
alias d='date'
alias dc='d;ssh n2 date'
alias aud='cd $ORACLE_HOME/rdbms/audit;ls -rtl|tail -200'
alias lastdb='/home/oracle/lastdb.sh'
alias sessions='/home/oracle/sessions.sh'
alias spid='sh /home/oracle/spid.sh'
alias spidd='sh /home/oracle/spid_full_details.sh'
alias session='/home/oracle/session.sh'
alias killsession='/home/oracle/kill_session.sh'
alias unlock='/home/oracle/unlock_user.sh'
alias sqlid='/home/oracle/sqlid.sh'
alias parm='/home/oracle/parm.sh'
alias grid='cd /u01/grid/11.2.0.3/grid; ls; pwd'
alias lsn='ps -ef|grep lsn|grep -v grep'

When adding the variables to the Oracle profile on the other node, change the node name from ora1123-node1 to ora1123-node2.

Configure SYSTEM parameters:
========================

All parameters should be the same or greater on the OS:
----------------------------------------------------
# /sbin/sysctl -a | grep sem           #=> semaphore parameters (250 32000 100 128).
# /sbin/sysctl -a | grep shm           #=> shmmax, shmall, shmmni (536870912, 2097152, 4096).
# /sbin/sysctl -a | grep file-max     #=> (6815744).
# /sbin/sysctl -a | grep ip_local_port_range  #=> Minimum: 9000, Maximum: 65500
# /sbin/sysctl -a | grep rmem_default  #=> (262144).
# /sbin/sysctl -a | grep rmem_max      #=> (4194304).
# /sbin/sysctl -a | grep wmem_default #=> (262144).
# /sbin/sysctl -a | grep wmem_max     #=> (1048576).
# /sbin/sysctl -a | grep aio-max-nr    #=> (Minimum: 1048576) limits concurrent requests to avoid I/O Failures.

Note:
If the current value of any parameter is higher than the value listed above, then do not change the value of that parameter.
If you will change any parameter on /etc/sysctl.conf then issue the command: sysctl -p
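For reference, the minimum values listed above map to the following /etc/sysctl.conf fragment (a sketch; keep any existing values that are already higher — kernel.sem uses the standard documented 250 32000 100 128):

```shell
# /etc/sysctl.conf -- 11gR2 minimums; apply with: sysctl -p
kernel.sem = 250 32000 100 128
kernel.shmmax = 536870912
kernel.shmall = 2097152
kernel.shmmni = 4096
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```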

Check limit.conf values:
vi /etc/security/limits.conf  

oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    50000000
oracle   hard   memlock    50000000
# Adjust MAX stack size for 11.2.0.3 => Original was 8192:
oracle   soft   stack    10240 

After updating limits.conf file, oracle user should logoff & logon to let the new adjustments take effect.
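A quick way to confirm the new session picked up the limits (the expected numbers are the ones set in limits.conf above; actual values depend on your system):

```shell
# Each ulimit flag maps to a limits.conf item for the oracle user:
ulimit -n    # nofile  -> expect 131072
ulimit -u    # nproc   -> expect 131072
ulimit -s    # stack   -> expect 10240 (KB)
ulimit -l    # memlock -> expect 50000000 (KB)
```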

Ensure mounting /usr in READ-WRITE mode:
------------------------------------------------
# mount -o remount,rw /usr

>For security reasons, sysadmins may prefer to mount /usr in READ ONLY mode; during Oracle installation, /usr must be in RW mode.

Restart the internet services daemon (xinetd):
----------------------------------------------
# service xinetd restart

Edit the /etc/securetty file and append it with the relevant service name:
------------------------------------------------------------------------
ftp
rlogin
rsh
rexec
telnet

Create ".rhosts" file:
This file provides user equivalence between the servers; it should be created under the Oracle user's home:
su - oracle
cd
vi .rhosts
# Add the following lines
ora1123-node1 oracle
ora1123-node2 oracle
ora1123-node1-priv oracle
ora1123-node2-priv oracle
ora1123-node1-vip oracle
ora1123-node2-vip oracle

Create hosts.equiv file:
vi /etc/hosts.equiv
#add these lines:
ora1123-node1 oracle
ora1123-node2 oracle
ora1123-node1-priv oracle
ora1123-node2-priv oracle
ora1123-node1-vip  oracle
ora1123-node2-vip  oracle

chmod 600 /etc/hosts.equiv
chown root.root /etc/hosts.equiv

Configure Host equivalence between Nodes:
-----------------------------------------------
on Both Nodes:
----------------
mkdir -p /home/oracle/.ssh
cd /home/oracle/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

cat id_rsa.pub > authorized_keys
cat id_dsa.pub >> authorized_keys

On Node1:
cd /home/oracle/.ssh
scp authorized_keys oracle@ora1123-node2:/home/oracle/.ssh/authorized_keys_nod1

on Node2:
cd /home/oracle/.ssh
mv authorized_keys_nod1 authorized_keys

cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys

Copy the authorized_keys file to Node1:
scp authorized_keys oracle@ora1123-node1:/home/oracle/.ssh/


From Node1: Answer each question with "yes"
ssh ora1123-node1 date
ssh ora1123-node2 date
ssh n1 date
ssh n2 date
ssh ora1123-node1-priv date
ssh ora1123-node2-priv date

From Node2: Answer each question with "yes"
ssh ora1123-node1 date
ssh ora1123-node2 date
ssh n1 date
ssh n2 date
ssh ora1123-node1-priv date
ssh ora1123-node2-priv date

Enable rsh on both Nodes:
------------------------------
First verify that rsh & rsh-server packages are installed

rpm -qa|grep rsh

rsh-server-0.17-40.el5
rsh-0.17-40.el5

If the packages are not installed, install them:
You can find the rsh package on CD1 under the "Server" directory.
You can find the rsh-server package on CD3 under the "Server" directory.

Add rsh to PAM:
------------------
vi /etc/pam.d/rsh:
#Add the following line
auth sufficient pam_rhosts_auth.so no_hosts_equiv


Enable xinetd service:
--------------------
vi /etc/xinetd.d/rsh
#Modify this line:
disable=no

-Test rsh connectivity between the cluster nodes:
From Node1: rsh n2 date
From Node2: rsh n1 date

Enable rlogin:
---------------
vi /etc/xinetd.d/rlogin
#add this line:
disable=no

Configure Hangcheck-timer:
------------------------------
If a hang occurs on a node, the module will reboot it to avoid database corruption.

*To Load the hangcheck-timer module for 2.6 kernel:

# insmod /lib/modules/`uname -r`/kernel/drivers/char/hangcheck-timer.ko  hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1

->hangcheck_tick: Defines how often, in seconds, the hangcheck-timer checks the node for hangs. The default is 60; Oracle recommends 1 second.
->hangcheck_margin: Defines how long, in seconds, the timer waits for a response from the kernel. The default is 180; Oracle recommends 10.
->hangcheck_reboot: 1 = reboot when a hang occurs, 0 = do not reboot when a hang occurs.

*To confirm that the hangcheck module is loaded, enter the following command:
# lsmod | grep hang
# output will be like below
hangcheck_timer         2428  0 

*To load the module at startup, edit this file:

vi /etc/rc.d/rc.local
#add this line
insmod /lib/modules/`uname -r`/kernel/drivers/char/hangcheck-timer.ko  hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1

You can put the real value (your kernel version) in place of `uname -r`:
e.g.
insmod /lib/modules/2.6.32-300.32.2/kernel/drivers/char/hangcheck-timer.ko  hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1
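Besides lsmod, the module reports its settings to the kernel ring buffer when it loads, so you can confirm the tick and margin actually took effect (exact wording varies by module version; this is a sketch):

```shell
# Look for the hangcheck-timer start-up line logged at module load time:
dmesg | grep -i hangcheck || echo "hangcheck-timer not loaded"
```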

Prepare for using Cluster Time Synchronization Service - (CTSS)
----------------------------------------------------------
Oracle Grid Infrastructure 11gR2 provides a new service called Cluster Time Synchronization Service (CTSS) that can synchronize the time between cluster nodes automatically, without any manual intervention. If you want CTSS to handle this job for you, de-configure and de-install the Network Time Protocol (NTP). During the installation, when Oracle finds that NTP is not active, it automatically activates CTSS to synchronize the time between the RAC nodes; no further steps are required from you during the GI installation.

Disable NTP service:
# service ntpd stop
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.original
# rm /var/run/ntpd.pid

Disable SELINUX:
--------------
Note: Starting with 11gR2, SELinux is supported, but I'll keep disabling it. Disabling SELINUX is easier than configuring it :-) it's a nightmare :-)

vi /etc/selinux/config

SELINUX=disabled
SELINUXTYPE=targeted

#################
Extra Configurations:
#################

Configure HugePages: [361468.1]
================
What is HugePages:
--------------------
HugePages is a feature that allows memory to be managed in larger pages, as an alternative to the small 4KB page size.
HugePages is crucial for faster Oracle database performance on Linux if you have large RAM and SGA > 8G.
HugePages are not only for 32-bit systems; they also improve memory performance on 64-bit kernels.

HugePages Pros:
------------------
-Doesn't allow memory to be swapped.
-Less Overhead for Memory Operations.
-Less Memory Usage.

Huge Pages Cons:
--------------------
-You must set MEMORY_TARGET and MEMORY_MAX_TARGET = 0, as the Automatic Memory Management (AMM) feature is incompatible with HugePages; otherwise instance startup fails with:
ORA-00845: MEMORY_TARGET not supported on this system

Implementation:

1-Make sure that MEMORY_TARGET and MEMORY_MAX_TARGET = 0 on All instances.
2-Make sure that all instances on the server are up.
3- Set these parameters equal or greater than SGA size: (values are in KB)

# vi /etc/security/limits.conf 
oracle   soft   memlock    20971520
oracle   hard   memlock    20971520

Here I'll set SGA to 18G so I'll set it to 20G in limits.conf file.

Re-login to oracle user and check the value:
# ulimit -l
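The memlock figure above is simply 20 GB expressed in KB, sized to stay above the 18 GB SGA used in this post:

```shell
# memlock is specified in KB in limits.conf; 20 GB gives headroom over an 18 GB SGA
MEMLOCK_KB=$(( 20 * 1024 * 1024 ))
echo "$MEMLOCK_KB"    # 20971520, the value set in limits.conf above
```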

4- Create this script:

# vi /root/hugepages_settings.sh

#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.
#
# This script is provided by Doc ID 401749.1 from My Oracle Support 
# http://support.oracle.com
# Welcome text
echo "
This script is provided by Doc ID 401749.1 from My Oracle Support 
(http://support.oracle.com) where it is intended to compute values for 
the recommended HugePages/HugeTLB configuration for the current shared 
memory segments. Before proceeding with the execution please note following:
 * For ASM instance, it needs to configure ASMM instead of AMM.
 * The 'pga_aggregate_target' is outside the SGA and 
   you should accommodate this while calculating SGA size.
 * In case you changes the DB SGA size, 
   as the new SGA will not fit in the previous HugePages configuration, 
   it had better disable the whole HugePages, 
   start the DB with new SGA size and run the script again.
And make sure that:
 * Oracle Database instance(s) are up and running
 * Oracle Database 11g Automatic Memory Management (AMM) is not setup 
   (See Doc ID 749851.1)
 * The shared memory segments can be listed by command:
     # ipcs -m
Press Enter to proceed..."
read
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
if [ -z "$HPG_SZ" ];then
    echo "The hugepages may not be supported in the system where the script is being executed."
    exit 1
fi
# Initialize the counter
NUM_PG=0
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | cut -c44-300 | awk '{print $1}' | grep "[0-9][0-9]*"`
do
    MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
    if [ $MIN_PG -gt 0 ]; then
        NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
    fi
done
RES_BYTES=`echo "$NUM_PG * $HPG_SZ * 1024" | bc -q`
# An SGA less than 100MB does not make sense
# Bail out if that is the case
if [ $RES_BYTES -lt 100000000 ]; then
    echo "***********"
    echo "** ERROR **"
    echo "***********"
    echo "Sorry! There are not enough total of shared memory segments allocated for 
HugePages configuration. HugePages can only be used for shared memory segments 
that you can list by command:
    # ipcs -m
of a size that can match an Oracle Database SGA. Please make sure that:
 * Oracle Database instance is up and running 
 * Oracle Database 11g Automatic Memory Management (AMM) is not configured"
    exit 1
fi
# Finish with results
case $KERN in
    '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
           echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
    '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
     *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac

5-Run script hugepages_settings.sh to help you get the right value for vm.nr_hugepages parameter:
# chmod 700 /root/hugepages_settings.sh
# sh /root/hugepages_settings.sh


6-Edit the file /etc/sysctl.conf and set the vm.nr_hugepages parameter as per the script output value:
# cat /etc/sysctl.conf|grep vm.nr_hugepages
# vi /etc/sysctl.conf
vm.nr_hugepages = 9220
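As a sanity check on the script's recommendation: with the typical 2 MB hugepage size on x86_64 (see Hugepagesize in /proc/meminfo), an 18 GB SGA needs at least 9216 pages; the script adds one page per shared memory segment, which is how a value like 9220 comes about. The sizes below are assumptions matching this post's setup:

```shell
# Minimum hugepages = SGA size / hugepage size (both in KB here)
SGA_KB=$(( 18 * 1024 * 1024 ))   # 18 GB SGA
HPG_SZ=2048                      # assumed 2 MB hugepage size
NUM_PG=$(( SGA_KB / HPG_SZ ))
echo "$NUM_PG"                   # 9216
```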

7-Reboot the server.

8-Check and Validate the Configuration:
# grep HugePages /proc/meminfo

Note: Any of the following changes should be followed by re-running the hugepages_settings.sh script and setting the new value of the vm.nr_hugepages parameter:
      -Amount of RAM installed for the Linux OS changed.
      -New database instance(s) introduced.
      -SGA size / configuration changed for one or more database instances.


Increase vm.min_free_kbytes system parameter: [Doc ID 811306.1]
================================
In case you enabled regular HugePages on your system (as we did above), it's recommended to increase the system parameter vm.min_free_kbytes from 51200 to 524288. This makes the system start reclaiming memory earlier than it would have before, which helps decrease LowMem pressure, hangs and node evictions.

# sysctl -a |grep min_free_kbytes
vm.min_free_kbytes = 51200

# vi /etc/sysctl.conf
vm.min_free_kbytes = 524288

# sysctl -p 

# sysctl -a |grep min_free_kbytes
vm.min_free_kbytes = 524288


Disable Transparent HugePages: [Doc ID 1557478.1]
=====================
Transparent HugePages are different from regular HugePages (the ones we configured above); Transparent HugePages are set up dynamically at run time.
Transparent HugePages are known to cause unexpected node reboots and performance problems with both RAC and single-node systems, so Oracle strongly recommends disabling them.
Note: For the UEK2 kernel, starting with 2.6.39-400.116.0, Transparent HugePages have been removed from the kernel.

Check if Transparent HugePages Enabled:
--------------------------------------
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] never

Disable Transparent HugePages:
-----------------------------
Add "transparent_hugepage=never" to boot kernel:

# vi /boot/grub/grub.conf
kernel /vmlinuz-2.6.39-300.26.1.el5uek ro root=LABEL=/ transparent_hugepage=never 


Configure VNC on Node1:
===================
VNC lets us log in to the Linux machine with a GUI session; from this GUI session we can run the Oracle installer to install Grid Infrastructure and the database software, eliminating the need to go to the server room and do the installation on the server itself.

Make sure that VNCserver package is already installed:
# rpm -qa | grep vnc-server
vnc-server-4.1.2-14.el5_6.6

Modify the VNC config file:
# vi /etc/sysconfig/vncservers
Add these lines at the bottom:
VNCSERVERS="2:root"
VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -nohttpd -localhost"

Set a password for VNC:
# vncpasswd 
Password: 
Verify:

Run a VNC session just to generate the default config files:
# vncserver :1

Configure VNC to start an Xsession when connecting:
# vi ~/.vnc/xstartup
#UN-hash these two lines:
 unset SESSION_MANAGER
 exec /etc/X11/xinit/xinitrc

Now start a VNC session on the machine:
# vncserver :1

Now you can log in from any machine (e.g. your Windows PC) using VNC Viewer to access the remote server on port 5900 or 5901; make sure these ports are not blocked by the firewall.
VNC Viewer can be downloaded from this link:

Download Oracle 11.2.0.3 installation media:
================================

Note [ID 753736.1] has all Patch Set + PSU reference numbers.

11.2.0.3 (for Linux x86_64) is patch# 10404530; we need only the first 3 of the 7 zip files
 (1&2 for database, 3 for grid, 4 for client, 5 for gateways, 6 examples cd, 7 for deinstall).

I'll extract the first 3 zip files which have Grid and Database binaries under /u02/stage

###########################
Grid Infrastructure installation:
###########################

Setup Cluverify:
===========
Cluverify is a tool that checks the fulfillment of RAC and database installation prerequisites.

cd /u02/stage/grid/rpm
rpm -ivh cvuqdisk-1.0.9-1.rpm

Check the fulfillment of Grid Infrastructure setup prerequisites: (using Cluverify tool)
------------------------------------------------------------
cd /u02/stage/grid
./runcluvfy.sh stage -pre crsinst -n ora1123-node1,ora1123-node2  -verbose

Grid installation:
============
On Node1:
Start a VNC session on the server to be able to open a GUI session with the server and run Oracle Installer:
# vncserver :1

Login to the server from your PC using VNCviewer, then from the GUI session execute the following:
# xhost +
# su - oracle
# cd /u02/stage/grid
# chmod +x runInstaller
# ./runInstaller

During the installation:
================
Click "skip software updates": 
 >Install and configure Grid Infrastructure for a cluster.

 >Advanced Installation

 >Grid Plug and Play:
   Cluster Name:  cluster
    SCAN Name: cluster-scan
    SCAN Port: 1523

 >Cluster Node Information:
   Add:
   ora1123-node2
   ora1123-node2-vip

 >Network Interface Usage:
   eth0 Public
   eth3 Private
   eth1 Do Not Use
   eth2 Do Not Use

Note: Starting with Oracle 11.2.0.2, you are no longer required to use the NIC bonding technique for interconnect redundancy. You can now define up to four interfaces for the redundant interconnect (private network) during the installation phase.

 >Storage Option: Shared File System

 >OCR Storage: Normal Redundancy
   /ora_ocr1/ocr1.dbf
   /ora_ocr2/ocr2.dbf
   /ora_ocr3/ocr3.dbf

Note: Oracle strongly recommend to set the voting disks number to an odd number like 3 or 5 and so on, because the cluster must be able to access more than half of the voting disks at any time.


 >Voting Storage:Normal Redundancy
   /ora_voting1/voting1.dbf
   /ora_voting2/voting2.dbf
   /ora_voting3/voting3.dbf

 >Do not use IPMI

Oracle Base:
   /u01/oracle/
RAC Installation path:  /u01/grid/11.2.0.3/grid
OraInventory path:      /u01/oraInventory

At the End of the installation run:
---------------------------------
Run orainstRoot.sh On Node1 then run it on Node2:
# /u01/oraInventory/orainstRoot.sh

Run root.sh On Node1 once it finish run it on Node2:
# /u01/grid/11.2.0.3/grid/root.sh 

Just hit ENTER when get this message:
Enter the full pathname of the local bin directory: [/usr/local/bin]: 

Note: root.sh may take from 5 to 15 minutes to complete.

Once root.sh finish, go back to the Execute Configuration Scripts window and press "OK".

I've uploaded the screenshots to this link:

 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
 In case you are doing an in-place upgrade of an older release and you are installing the GI on a different home, the following should be done within a downtime window:        
 At the End of installation by root user run:
 -------------------------------------------
 Note: In case of doing an in-place upgrade Oracle recommends that you leave Oracle RAC instances running from Old GRID_HOME. 
 Execute this script:
 # /u01/grid/11.2.0.3/grid/rootupgrade.sh 
   =>Node By Node (don't run it in parallel).
   =>rootupgrade will restart cluster resources on the node which is run on. 
   =>Once you finish with rootupgrade.sh ,click OK on the OUI window to finish the installation.
 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

The outputs of executed commands:
---------------------------
#/u01/oraInventory/orainstRoot.sh

Changing permissions of /u01/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/oraInventory to oinstall.
The execution of the script is complete.

#/u01/grid/11.2.0.3/grid/root.sh 
Node1 outputs:
-------------
[root@ora1123-node1 /u01]#/u01/grid/11.2.0.3/grid/root.sh 
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/grid/11.2.0.3/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'ora1123-node1'
CRS-2676: Start of 'ora.mdnsd' on 'ora1123-node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ora1123-node1'
CRS-2676: Start of 'ora.gpnpd' on 'ora1123-node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ora1123-node1'
CRS-2672: Attempting to start 'ora.gipcd' on 'ora1123-node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'ora1123-node1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'ora1123-node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ora1123-node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ora1123-node1'
CRS-2676: Start of 'ora.diskmon' on 'ora1123-node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'ora1123-node1' succeeded
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting disk: /ora_voting1/voting1.dbf.
Now formatting voting disk: /ora_voting2/voting2.dbf.
Now formatting voting disk: /ora_voting3/voting3.dbf.
CRS-4603: Successful addition of voting disk /ora_voting1/voting1.dbf.
CRS-4603: Successful addition of voting disk /ora_voting2/voting2.dbf.
CRS-4603: Successful addition of voting disk /ora_voting3/voting3.dbf.
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   205267b4e4334fc9bf21154f92cd30fa (/ora_voting1/voting1.dbf) []
 2. ONLINE   83217239b9c84fe9bfbd6c5e76a9dcc1 (/ora_voting2/voting2.dbf) []
 3. ONLINE   41a59373d30b4f6cbf6f41c50dc48dbd (/ora_voting3/voting3.dbf) []
Located 3 voting disk(s).
Configure Oracle Grid Infrastructure for a Cluster ... succeeded


Node2 outputs:
-------------
[root@ora1123-node2 /u01/grid/11.2.0.3/grid]#/u01/grid/11.2.0.3/grid/root.sh 
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/grid/11.2.0.3/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node ora1123-node1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

> On the last page the installer will show an error message saying that the Oracle Cluster Verification Utility failed; just ignore it. Our installation is indeed successful.

Test the installation:
================
-> Check the logs under: /u02/oraInventory/logs

By oracle:
cluvfy stage -post crsinst -n ora1123-node1,ora1123-node2 -verbose
crsctl check cluster -all
# crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy

# olsnodes -n
# ocrcheck
# crsctl query crs softwareversion
# crsctl query crs activeversion
# crs_stat -t -v

Confirm clusterware time synchronization service is running (CTSS):
--------------------------------------------------------------------
# crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

Create a crsstat script to show the crs_stat command output in a nicer format:
-----------------------------------------------------------------------
cd /u01/grid/11.2.0.3/grid/bin
vi crsstat

#--------------------------- Begin Shell Script ----------------------------
#!/bin/bash
##
#Sample 10g CRS resource status query script
##
#Description:
# - Returns formatted version of crs_stat -t, in tabular
# format, with the complete rsc names and filtering keywords
# - The argument, $RSC_KEY, is optional and if passed to the script, will
# limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
# - $ORA_CRS_HOME should be set in your environment
RSC_KEY=$1
QSTAT=-u
AWK=/usr/bin/awk
# Table header:
echo ""
$AWK \
'BEGIN {printf "%-75s %-10s %-18s\n", "HA Resource", "Target", "State";
printf "%-75s %-10s %-18s\n", "-----------", "------", "-----";}'
# Table body:
/u01/grid/11.2.0.3/grid/bin/crs_stat $QSTAT | $AWK \
'BEGIN { FS="="; state = 0; }
$1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
$1~/STATE/ && state == 2 {appstate = $2; state=3;}
state == 3 {printf "%-75s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
#--------------------------- End Shell Script ------------------------------

chmod 700 /u01/grid/11.2.0.3/grid/bin/crsstat
scp crsstat root@node2:/u01/grid/11.2.0.3/grid/bin

Now you can use the "crs" command that has been included in the oracle profiles to execute crs_stat -t in a nicely formatted output.

Change OCR backup location:
=========================
# ocrconfig -showbackup
# ocrconfig -backuploc /u01/grid/11.2.0.3/grid/cdata/cluster11g


Modify RAC configurations:
#######################

=Configure CSS misscount:
 ====================
 The CSS misscount parameter represents the maximum time, in seconds, that a network heartbeat can be missed before the problematic node is evicted from the cluster.

Check the current configuration for css misscount:
# crsctl get css misscount

It's recommended to backup the OCR disks before running the following command.
Configure css misscount: -From one node only-
# crsctl set css misscount 60
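As a rough illustration of what this timeout governs, here is a simplified sketch (not Oracle code; the eviction logic and the 30-second default used below are simplifying assumptions):

```python
# Simplified model of the CSS misscount network-heartbeat timeout:
# a node becomes an eviction candidate once the time since its last
# heartbeat exceeds misscount seconds.
def node_evicted(last_heartbeat_ts: float, now: float, misscount: int = 30) -> bool:
    """Return True when the heartbeat has been missed for longer than misscount."""
    return (now - last_heartbeat_ts) > misscount

# A 45-second network glitch evicts the node with a 30s misscount,
# but not after raising misscount to 60 as in the command above.
print(node_evicted(0, 45, misscount=30))  # True
print(node_evicted(0, 45, misscount=60))  # False
```

Raising misscount makes the cluster more tolerant of short network glitches, at the price of a slower reaction to a genuinely dead node.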


#################################
Install Oracle Database Software 11.2.0.3:  
#################################

Note: It's recommended to backup oraInventory directory before starting this stage.
Note: Ensure that the clusterware services are running on both nodes.

Run cluvfy to check database installation prerequisites:
========
# cluvfy stage -pre dbinst -n ora1123-node1,ora1123-node2 -verbose
-->Ignore cluster scan errors.

Execute runInstaller:
===============
Connect to the server using VNC viewer to open a GUI session, which enables you to run the Oracle Installer.
Note: The Oracle Installer can also run in command-line mode using the -silent and -responseFile options; in that case you should prepare a response file that holds all the installation selections.

# xhost +
# su - oracle
# cd /u02/stage/database
# ./runInstaller

During the installation:
==================
Select "skip software updates"
Select "Install database Software only"
Select "Oracle Real Application Clusters database installation" -> Select both nodes (selected by default).
Select "Enterprise" -> Select the options you need, e.g. (Partitioning, Data Mining, Real Application Testing).
 =>From a security perspective it's recommended to install only the options you need.
 =>From a licensing perspective there is no problem with installing options you are not using, as Oracle charges only for the options that are actually being used.
Select "dba" group for OSDBA, leave it blank for OSOPER (I never had a need to login to the database with SYSOPER privilege).
Ignore SCAN warning in the prerequisite check page
ORACLE_BASE: /u01/oracle
ORACLE_HOME (Software Location): /u01/oracle/11.2.0.3/db

At the end of installation: By root user execute /u01/oracle/11.2.0.3/db/root.sh on Node1 first then execute it on Node2:
# /u01/oracle/11.2.0.3/db/root.sh

Go back to the Oracle Installer:
click OK.
click Close.

I've uploaded Oracle software installation snapshots to this link:

Post Steps:
########
Installation verification:
=================
# cluvfy stage -post crsinst -n ora1123-node1,ora1123-node2 -verbose
  =>All passed except the SCAN check, which I'm not using in my setup.

Do some backing up:
==============
Query Voting disks:
------------------
crsctl query css votedisk

Backing up voting disks manually is no longer required; the dd command is not supported in 11gR2 for backing up voting disks. Voting disks are backed up automatically in the OCR as part of any configuration change, and voting disk data is automatically restored to any added voting disk.

Backup the OCR: (the clusterware must be up and running)
------------------
# ocrconfig -export /u01/grid/11.2.0.3/grid/cdata/cluster11g/ocr_after_DB_installation.dmp
# ocrconfig -manualbackup

Backup oraInventory directory:
---------------------------------
# cp -r /u01/oraInventory /u01/oraInventory_After_DBINSTALL

Backup root.sh:
-----------------
# cp /u01/grid/11.2.0.3/grid/root.sh /u01/grid/11.2.0.3/grid/root.sh._after_installation
# cp /u01/oracle/11.2.0.3/db/root.sh /u01/oracle/11.2.0.3/db/root.sh_after_installation

Backup ORACLE_HOME: 
---------------------------
# tar cvpf /u01/oracle/11.2.0.3/db_After_DB_install.tar /u01/oracle/11.2.0.3/db

Backup GRID_HOME: 
----------------------
# tar cvpf /u01/grid/11.2.0.3/grid_after_DB_install.tar /u01/grid/11.2.0.3/grid

Note: The GI Home can be backed up online, even while the clusterware services are up and running.

Backup the following files:
--------------------------
# cp /usr/local/bin/oraenv  /usr/local/bin/oraenv.11.2.0.3
# cp /usr/local/bin/dbhome  /usr/local/bin/dbhome.11.2.0.3
# cp /usr/local/bin/coraenv /usr/local/bin/coraenv.11.2.0.3

-Restart the RAC servers more than once and ensure that the RAC processes start up automatically.

July SPU Patch Apply:
##################
-Since October 2012 Oracle renamed the CPU (Critical Patch Update) to SPU (Security Patch Update); both are the same, it's just a renaming.
-SPU patches are cumulative; once you apply the latest patch, there is no need to apply the older patches.
-To avoid making a big change on my environment, and to minimize the downtime when applying security patches, I prefer to apply the SPU (formerly CPU) patches over the PSU patches (which contain the SPU patch + common bug fixes that affect a large number of customers).
The OPatch utility version must be 11.2.0.3.0 or later (OPatch is the tool used to apply SPU patches):
  >> To download the latest OPatch utility: go to Metalink and search for Patch# 6880880.
   >Backup the original OPatch directory under ORACLE_HOME, then just unzip the patch file under ORACLE_HOME.

> $PATH must refer to /usr/ccs/bin
  # export PATH=$PATH:/usr/ccs/bin

> Unzip the Patch:
  # cd $ORACLE_HOME
  # unzip p16742095_112030_Linux-x86-64.zip

Patch Installation:
=============
Remember, we still don't have any running database at this point.
Shutdown Nodeapps or crs:
# srvctl stop nodeapps -n ora1123-node1

Patch Installation:
# cd $ORACLE_HOME/16742095
# opatch napply -skip_subset -skip_duplicate -local

Go to Node2 and do the same steps for installing the latest SPU patch...


NEXT:

Part III Create a standby database under the new 11.2.0.3 environment being refreshed from the 11.2.0.1 primary DB.


Upgrade from RAC 11.2.0.1 to 11.2.0.3 (Part III Create Standby database being synchronized from the 11.2.0.1 primary DB)


In Part I I've installed the Linux OS and prepared the shared filesystem (ISCSI configuration & OCFS2) 
In Part II I've prepared the Linux OS for the RAC installation, installed 11.2.0.3 Grid Infrastructure and Database software.

In this part I'll create a physical standby database under the 11.2.0.3 Oracle Home on the new RAC servers, refreshed from the 11.2.0.1 primary database. The purpose is to minimize the downtime by eliminating the time wasted on copying the 11.2.0.1 datafiles from the old SAN to the new NAS where the new 11.2.0.3 home resides. There are many technologies that can do the same job and spare you from creating a standby database for this purpose: as simple as dismounting the file system where the datafiles are located from the old 11.2.0.1 server and mounting it on the new servers, if the new servers are connected to the same SAN/NAS (this is done from the NAS/SAN console), or utilizing a technology like SAN-to-SAN replication if the new 11.2.0.3 servers are connected to a different SAN/NAS storage. Those are just examples; there are many other solutions in the market that can get this job done without creating a standby DB.

Anyway, I'll take the hard but cheapest route and create a standby database on the new 11.2.0.3 servers located on the new NAS storage; again, I'm doing so to minimize the datafile copying time during the upgrade phase.

My primary database is located on SAN storage connected to 11.2.0.1 RAC server through fiber cables, the new 11.2.0.3 database will be located on NAS storage connected to 11.2.0.3 servers through Ethernet using ISCSI protocol. (ISCSI configuration and file system preparation already done in Part I).

This post can be also used to create a standby database for disaster recovery purpose.

It's good to know the following:

License:
-------
-Data Guard license comes free with Enterprise Edition license.

Docs:
----
Standby Creation: -Single Node- Oracle Doc.

Standby Creation: -RAC 2 nodes- MAA.

Also Check this OBE link:

Creating a Standby DB on a different OS/Endian than primary: [Metalink Note ID 413484.1]
----------------------------------------------------------
If the primary OS is:          Standby DB can be created on the following OS:
---------------------          ----------------------------------------------
Linux (32-bit)                 Linux (32-bit)
                               Microsoft Windows (32-bit)    =>Oracle 11g onward
                               Linux (64-bit)                =>Oracle 10g onward

Linux (64-bit)                 Linux (64-bit)
                               Linux (32-bit)                =>Oracle 10g onward
                               Microsoft Windows (64-bit)    =>Oracle 11g onward
                               Microsoft Windows (32-bit)    =>Oracle 11g onward
                               Solaris (64-bit) -Non-SPARC-  =>Oracle 11g onward

Microsoft Windows (32-bit)     Microsoft Windows (32-bit)
                               Microsoft Windows (64-bit)    =>Oracle 10g onward
                               Linux (32-bit)                =>Oracle 11g onward
                               Linux (64-bit)                =>Oracle 11g onward

Microsoft Windows (64-bit)     Microsoft Windows (64-bit)
                               Microsoft Windows (32-bit)    =>Oracle 10g onward
                               Linux (32-bit)                =>Oracle 11g onward
                               Linux (64-bit)                =>Oracle 11g onward

Solaris (64-bit) -Non-SPARC-   Solaris (64-bit) -Non-SPARC-
                               Solaris (32-bit) -Non-SPARC-  =>Oracle 10g onward
                               Linux (64-bit)                =>Oracle 11g onward


Note: to see all operating systems endians run the following:
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

Note: to see your OS endian run the following:
SQL> SELECT PLATFORM_ID,PLATFORM_NAME FROM V$DATABASE;
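For reference, the Linux (64-bit) row of the matrix above can be encoded for a quick programmatic lookup. This is just an illustrative sketch; the dictionary restates what the note above already lists:

```python
# Supported standby platforms for a Linux (64-bit) primary, per the
# matrix quoted above (MOS note 413484.1).
LINUX64_STANDBY_SUPPORT = {
    "Linux (64-bit)": "all versions",
    "Linux (32-bit)": "Oracle 10g onward",
    "Microsoft Windows (64-bit)": "Oracle 11g onward",
    "Microsoft Windows (32-bit)": "Oracle 11g onward",
    "Solaris (64-bit) -Non-SPARC-": "Oracle 11g onward",
}

def standby_supported(standby_os: str, version: int) -> bool:
    """Check whether a standby on standby_os is listed for a given major version."""
    rule = LINUX64_STANDBY_SUPPORT.get(standby_os)
    if rule is None:
        return False
    if rule == "all versions":
        return True
    # Parse the minimum major version out of e.g. "Oracle 10g onward".
    return version >= int(rule.split()[1].rstrip("g"))

print(standby_supported("Linux (32-bit)", 10))              # True
print(standby_supported("Microsoft Windows (64-bit)", 10))  # False
```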

In brief, the main steps of creating a standby database are the following:
1. Perform an RMAN backup of the primary database.
2. Create the standby controlfile of the primary database.
3. Copy the backup of primary database/standby controlfile/SPFILE to the standby DB server.
4. Copy the password file orapw to the standby DB server.
5. Restore the SPFILE and standby controlfile on the standby DB.
6. Restore the database from the RMAN backup.
7. Configure both primary and standby database with Data Guard initialization parameters.
8. Start Managed Recovery Process to automate recovering the standby DB.

Note: the first three steps can be done in one step if the RMAN command "duplicate target database for standby from active database" is used to create the standby database.

Extra steps are related to Oracle Maximum Availability Architecture (MAA)
To get a deep knowledge of Data Guard Maximum Availability Architecture technical practices I strongly recommend this paper:

Now let's get started...

########################
Operating System Preparation:
########################
I'll refer to the 11.2.0.1 database | server as primary. (ora1121-node1 & ora1121-node2)
I'll refer to the 11.2.0.3 database | server as standby. (ora1123-node1 & ora1123-node2)

Host equivalence between primary servers and standby server:
====================================================
On Primary Node1:
cd /home/oracle/.ssh
scp authorized_keys oracle@ora1123-node1:/home/oracle/.ssh/authorized_keys.primary

On Standby Node1:
cd /home/oracle/.ssh
cat authorized_keys.primary >> authorized_keys
Now the authorized_keys file on standby node1 has all keys for both Primary & standby servers, now we will overwrite this file on all primary RAC nodes to complete the host equivalence between all primary and standby nodes.
scp authorized_keys oracle@ora1123-node2:/home/oracle/.ssh/authorized_keys
scp authorized_keys oracle@ora1121-node1:/home/oracle/.ssh/authorized_keys
scp authorized_keys oracle@ora1121-node2:/home/oracle/.ssh/authorized_keys

On Primary node1: (by oracle user) Answer all question with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date

On Primary node2: (by oracle user) Answer all question with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date

On Standby node1: (by oracle user) Answer all question with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date

On Standby node2: (by oracle user) Answer all question with YES
ssh ora1121-node1 date
ssh ora1121-node2 date
ssh ora1123-node1 date
ssh ora1123-node2 date


#######################
Create The Standby Database:
#######################

Create Directories Tree:
=====================
On Standby node1:
mkdir -p /u01/oracle/11.2.0.3/db/diagnostics/pefms
mkdir -p /u01/oracle/diag/tnslsnr/ora1123-node1/listener_pefms1

mkdir -p /ora_control1/pefms
mkdir -p /ora_control2/pefms
mkdir -p /ora_redo1/pefms
mkdir -p /ora_redo2/pefms
mkdir -p /ora_archive1/pefms
mkdir -p /ora_archive2/pefms
mkdir -p /ora_temp1/pefms
mkdir -p /ora_undo2/pefms
mkdir -p /ora_undo1/pefms
mkdir -p /ora_index1/pefms
mkdir -p /ora_data1/pefms
mkdir -p /ora_backupdisk/flash_recovery_area/PEFMS/flashback

chown -R oracle:oinstall /ora_control1
chown -R oracle:oinstall /ora_control2
chown -R oracle:oinstall /ora_redo1
chown -R oracle:oinstall /ora_redo2
chown -R oracle:oinstall /ora_archive1
chown -R oracle:oinstall /ora_archive2
chown -R oracle:oinstall /ora_temp1
chown -R oracle:oinstall /ora_undo1
chown -R oracle:oinstall /ora_undo2
chown -R oracle:oinstall /ora_index1
chown -R oracle:oinstall /ora_data1
chown -R oracle:oinstall /ora_backupdisk

chmod -R 750 /ora_control1
chmod -R 750 /ora_control2
chmod -R 750 /ora_redo1
chmod -R 750 /ora_redo2
chmod -R 750 /ora_archive1
chmod -R 750 /ora_archive2
chmod -R 750 /ora_temp1
chmod -R 750 /ora_undo1
chmod -R 750 /ora_undo2
chmod -R 750 /ora_index1
chmod -R 750 /ora_data1
chmod -R 750 /ora_backupdisk


Create the listener.ora and tnsnames.ora files:
====================================
vi $ORACLE_HOME/network/admin/listener.ora
# Add the following lines

LISTENER=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=ora1123-node1)(PORT=1521))))

SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
     (SDU=32767)
      (ORACLE_HOME=/u01/oracle/11.2.0.3/db)
      (SID_NAME=pefms1)))
Note: this line (SDU=32767) is part of MAA.

vi $ORACLE_HOME/network/admin/tnsnames.ora
# Add the following lines
pefms1=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = ora1123-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
pefms_pri=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = idub-ora-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
pefms_dr=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (Host = ora1123-node1) (Port = 1521))) (CONNECT_DATA = (SID = pefms1)))
The last two entries will be used later in the standby configuration to help ship the redo data and fix the gaps between the primary and the standby DB.

Copy the Password File from primary to standby:
=========================================
On Primary node1:
# cd $ORACLE_HOME/dbs
# scp orapwpefms1 oracle@ora1123-node1:/u01/oracle/11.2.0.3/db/dbs

 In case there is no password file created yet on the primary server:
 ======================================================
On Primary node1:
 Stop the case sensitivity for password:
 SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=FALSE SCOPE=BOTH;

 Create the password file:
 # cd $ORACLE_HOME/dbs
 # orapwd file=orapwpefms1 password=xxxxxxxxx ignorecase=y

 Copy the password file to the standby node1:
 # scp orapwpefms1 oracle@ora1123-node1:/u01/oracle/11.2.0.3/db/dbs/orapwpefms1

 Reboot the Primary DB
 Return back the case sensitivity parameter:
 SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=TRUE SCOPE=BOTH;

On Standby node1:
Create SQLNET.ora file:

# vi $ORACLE_HOME/network/admin/sqlnet.ora
#Add this parameter to sqlnet.ora file:
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)

Create the pfile:
==============
Total Memory:  33G
Memory reserved for OS: 5G
Memory reserved for DB: 28G
 -SGA  18G
  -Minimum DB_CACHE 8G
  -Minimum SHARED         2G
  -Minimum LARGE         300M
  -Minimum JAVA         100M
  -Log Buffer                 30M
 -PGA  10G
  -Minimum  SORTAREA   80M
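The pfile below expresses this plan in bytes; a quick sanity check of the conversions (plain arithmetic, nothing Oracle-specific):

```python
# GiB/MiB-to-bytes helpers to verify the pfile values against the plan above.
def gb(n: int) -> int:
    return n * 1024**3

def mb(n: int) -> int:
    return n * 1024**2

assert gb(18) == 19327352832    # sga_max_size / sga_target (18G SGA)
assert gb(8) == 8589934592      # db_cache_size (8G)
assert mb(300) == 314572800     # large_pool_size (300M)
assert mb(100) == 104857600     # java_pool_size (100M)
assert mb(30) == 31457280       # log_buffer (30M)
assert mb(80) == 83886080       # sort_area_size (80M)
assert gb(10) == 10737418240    # pga_aggregate_target (10G)
assert gb(500) == 536870912000  # db_recovery_file_dest_size (500G)
print("pfile byte values match the sizing plan")
```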

# vi $ORACLE_HOME/dbs/initpefms1.ora

#Memory Parameters:
##################
sga_max_size=19327352832
sga_target=19327352832
*.db_cache_size=8589934592
*.java_pool_size=104857600
*.large_pool_size=314572800
*.shared_pool_reserved_size=52428800
*.shared_pool_size=2618767104
*.sort_area_size=83886080
*.log_buffer=31457280
*.pga_aggregate_target=10737418240
#Destination Parameters:
#######################
*.control_files='/ora_control1/pefms/control01.ctl','/ora_control2/pefms/control02.ctl'
*.db_recovery_file_dest='/ora_backupdisk/flash_recovery_area'
*.diagnostic_dest='/u01/oracle/11.2.0.3/db/diagnostics/pefms'
*.log_archive_dest_1='LOCATION=/ora_archive1/pefms'
#Other Parameters:
#################
*.compatible='11.2.0.1'
*.db_flashback_retention_target=21600
*.db_name='pefms'
*.db_recovery_file_dest_size=536870912000
*.fast_start_mttr_target=300
instance_name='pefms1'
log_archive_config='dg_config=(pefms_pri,pefms_dr)'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_format='%d_%t_%r_%s.arc'
*.log_archive_max_processes=3
*.open_cursors=500
*.processes=1000
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_limit=TRUE
*.standby_file_management='AUTO'
thread=1
*.undo_management='AUTO'
*.undo_retention=172800
undo_tablespace='UNDOTBS1'
*.fal_server='PEFMS_PRI'
*.fal_client='PEFMS_DR'
*.db_unique_name='pefmsdr'

Note: Parameter db_unique_name must be set to "pefmsdr"
Note: Parameter log_archive_config must be set to "'dg_config=(pefms_pri,pefms_dr)'".

On the Primary Node1:

Backup the primary database:
=========================
# $ORACLE_HOME/bin/rman target /
RMAN> run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
change archivelog all crosscheck;
backup as compressed backupset incremental level=0 format '/backupdisk/rmanbkps/%d_%t_%s_%p' tag='fullprodbk'
filesperset 100 database plus archivelog;
}

copy the backup to the DR server:
=============================
# cd /backupdisk/rmanbkps/
# scp * oracle@ora1123-node1:/ora_backupdisk/rmanbkps

create standby controlfile: -On primary-
-------------------------
SQL> alter database create standby controlfile as '/home/oracle/standby.ctl' reuse;

move it to the standby server:
# scp /home/oracle/standby.ctl oracle@ora1123-node1:/ora_control1/pefms/control01.ctl

On the standby Node1:

Start the Standby database creation:
==============================
Multiplex the standby controlfiles:
---------------------------------
# cp /ora_control1/pefms/control01.ctl /ora_control2/pefms/control02.ctl
# chmod 640 /ora_control1/pefms/control01.ctl
# chmod 640 /ora_control2/pefms/control02.ctl

Mount the standby DB:
---------------------
# sqlplus '/ as sysdba' 

SQL> STARTUP NOMOUNT;
SQL> create spfile='/u01/oracle/11.2.0.3/db/dbs/spfilepefms1.ora' from pfile='/u01/oracle/11.2.0.3/db/dbs/initpefms1.ora';
SQL> alter database mount standby database;
SQL> exit

Catalog the RMAN backup that was copied from the primary site, then start the DB restoration:
--------------------------------------------------------------
# $ORACLE_HOME/bin/rman target /
RMAN> catalog start with '/ora_backupdisk/rmanbkps/';
RMAN> restore database;

When it's done, check the archivelogs inside the backup:
-----------------------------------------------
RMAN> list backup of archivelog all; 
RMAN> list backup of archivelog from time 'sysdate-10/24';

Recover the database to the latest scn you have in the backup:
-------------------------------------------------------------
RMAN> recover database until scn xxx;

No need to worry about the following errors; just move to the next step:

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup 
ORA-01110: data file 1: '/ora_data1/pefms/system01.dbf'

Put the standby database in recover managed mode:
-------------------------------------------------
SQL> alter database recover managed standby database disconnect from session;  

set log_archive_config:
SQL> alter system set log_archive_config='dg_config=(pefms_pri,pefms_dr)' sid='*' scope=both;

Note: pefms_pri & pefms_dr are two TNS entries already added to the tnsnames.ora file located on standby node1; pefms_pri points to the primary DB on the primary server and pefms_dr points to the standby database on the standby server.

Now you have a standby database...

Start the listener:
---------------
# lsnrctl start


Configure archive logs shipping:
########################
On Primary Node1:
Set the primary database in FORCE LOGGING mode, to ensure that all transactions are written to the redo logs:
SQL> ALTER DATABASE FORCE LOGGING;

=Insert the following lines, representing the PEFMS_DR service, into the tnsnames.ora located on primary node1:

PEFMS_DR =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ora1123-node1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = pefms1)
    )
  )

Note: Make sure that the DR is in the mount mode.

On Primary Node1:
SQL> alter system set log_archive_config='dg_config=(PEFMS_DR,PEFMS)' sid='*' scope=both;
SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR' sid='*' scope=both;
SQL> alter system set log_archive_dest_state_3='enable' scope=both sid='*';
SQL> alter system set standby_file_management=auto sid='*' scope=both;
SQL> alter system set fal_server='PEFMS_DR' sid='*' scope=both;
SQL> alter system set fal_client='PEFMS1' sid='pefms1' scope=both;
SQL> alter system set fal_client='PEFMS2' sid='pefms2' scope=both;
SQL> alter system set service_names='PEFMS' sid='*' scope=both;
The following are MAA recommendations:
SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET=1800 sid='*' scope=both;
     -->REDOLOG switch will be forced every 30min.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=4 sid='*' scope=both;
     -->To quickly resolve gaps in the redo stream to a standby database.

Create standby redo logs on the primary database: (MAA)
========================================
- Even though standby redo logs are required only for the Maximum Protection and Maximum Availability modes and for the LGWR ASYNC transport mode, and not for the Maximum Performance mode (which I'm using), it's recommended to create them anyway as they speed up redo transport, data recovery, and switchover.
- Since the primary database may itself become the standby database as a result of a switchover or failover, standby redologs should be created on the primary database as well.

The minimum number of standby redolog groups = the number of online redolog groups.
The best practice: number of standby redologs = (number of redologs on production) + 1.
Standby redolog size = primary redolog size.
Standby redologs should not be multiplexed.
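Applying the rule of thumb above to a two-thread RAC (a hypothetical but typical layout with 3 online redo log groups per thread, which matches the 4 standby groups per thread created below):

```python
# Best-practice count from above: standby groups per thread = online groups + 1.
def standby_groups_per_thread(online_groups: int) -> int:
    return online_groups + 1

threads = 2
online_groups = 3  # assumed per-thread online redo log group count
per_thread = standby_groups_per_thread(online_groups)
total = threads * per_thread
print(per_thread, total)  # 4 per thread, 8 in total (groups 7-14 below)
```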

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 
 GROUP 7 ('/ora_redo1/pefms/pefms1_redo07_a.log')SIZE 100M reuse,
 GROUP 8 ('/ora_redo1/pefms/pefms1_redo08_a.log')SIZE 100M reuse,
 GROUP 9 ('/ora_redo1/pefms/pefms1_redo09_a.log')SIZE 100M reuse,
 GROUP 10('/ora_redo1/pefms/pefms1_redo10_a.log')SIZE 100M reuse;

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
 GROUP 11 ('/ora_redo1/pefms/pefms2_redo11_a.log')SIZE 100M reuse,
 GROUP 12 ('/ora_redo1/pefms/pefms2_redo12_a.log')SIZE 100M reuse,
 GROUP 13 ('/ora_redo1/pefms/pefms2_redo13_a.log')SIZE 100M reuse,
 GROUP 14 ('/ora_redo1/pefms/pefms2_redo14_a.log')SIZE 100M reuse;

SQL> SELECT * FROM V$LOG;
SQL> SELECT * FROM V$STANDBY_LOG;


On the Standby Node1:
##################
create standby redo logs on the standby database:
=========================================
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
 GROUP 7 ('/ora_redo1/pefms/pefms1_redo07_a.log')SIZE 100M reuse,
 GROUP 8 ('/ora_redo1/pefms/pefms1_redo08_a.log')SIZE 100M reuse,
 GROUP 9 ('/ora_redo1/pefms/pefms1_redo09_a.log')SIZE 100M reuse,
 GROUP 10('/ora_redo1/pefms/pefms1_redo10_a.log')SIZE 100M reuse;

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
 GROUP 11 ('/ora_redo1/pefms/pefms2_redo11_a.log')SIZE 100M reuse,
 GROUP 12 ('/ora_redo1/pefms/pefms2_redo12_a.log')SIZE 100M reuse,
 GROUP 13 ('/ora_redo1/pefms/pefms2_redo13_a.log')SIZE 100M reuse,
 GROUP 14 ('/ora_redo1/pefms/pefms2_redo14_a.log')SIZE 100M reuse;

SQL> alter system set standby_file_management='AUTO';

SQL> SELECT * FROM V$LOG;
     SELECT * FROM V$STANDBY_LOG;

Maximum Availability Architecture (MAA) recommendations:
===================================================
> Speed up the parallel recovery:
     SQL> ALTER SYSTEM SET parallel_execution_message_size=16384;
     -->16384 is the 11gR2 default; the larger the value, the faster the parallel recovery.

> On the standby DB you can shrink the SHARED_POOL to increase the DB_CACHE_SIZE, since the recovery process does not require much shared pool memory.


Enable Flashback on the Standby DB:
-------------------------------
That helps in fixing logical corruption scenarios and in easily re-instating the primary database after a failover to the standby.
SQL> ALTER database flashback on;

Enable the real-time apply on the standby database:
---------------------------------------------
Apply the changes on the DR as soon as the redo data is received:
SQL> RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;


VERIFY DATA GUARD Archivelog shipping:
=======================================
ON Primary:
------------
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

ON DR:
--------
SQL> SELECT DATABASE_ROLE,OPEN_MODE,PROTECTION_MODE from v$database;
     SELECT THREAD#,SEQUENCE#, FIRST_TIME, NEXT_TIME,applied FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#,applied;
     SELECT thread#,max(SEQUENCE#) from v$archived_log group by thread#;

Parallel the recovery process:
--------------------------
SQL> ALTER system set recovery_parallelism=16 scope=spfile;

Check the time lag between Primary & Standby:
-----------------------------------------
SQL> col NAME for a15
SQL> col VALUE for a15
SQL> SELECT NAME,VALUE FROM V$DATAGUARD_STATS WHERE NAME='apply lag';

NAME            VALUE
--------------- ---------------
apply lag       +00 00:04:43

The lag is 4 minutes and 43 seconds.
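The 'apply lag' value is a day-to-second interval (+DD HH:MI:SS). A small helper (illustrative, not an Oracle API) to convert it to seconds:

```python
import re

def apply_lag_seconds(value: str) -> int:
    """Convert a V$DATAGUARD_STATS interval like '+00 00:04:43' to seconds."""
    m = re.fullmatch(r"([+-])(\d+) (\d{2}):(\d{2}):(\d{2})", value.strip())
    if not m:
        raise ValueError(f"unexpected lag format: {value!r}")
    sign = -1 if m.group(1) == "-" else 1
    days, hours, minutes, seconds = map(int, m.groups()[1:])
    return sign * (((days * 24 + hours) * 60 + minutes) * 60 + seconds)

print(apply_lag_seconds("+00 00:04:43"))  # 283 seconds = 4 minutes 43 seconds
```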

RMAN configuration:
==================
# rman target /
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/ora_backupdisk/rmanbkps/controlfiles/%F';

Add the database services to the clusterware:
====================================
If you want the database and the listener to be managed by clusterware:
# srvctl add database -d pefms -o /u01/oracle/11.2.0.3/db
# srvctl add instance -d pefms -i pefms1 -n ora1123-node1
# srvctl add listener -o /u01/oracle/11.2.0.3/db -l LISTENER_PEFMS1 -s

Now the standby DB is done.

Next:
In the next part Part IV I'll cover the steps of upgrading this standby DB from 11.2.0.1 to 11.2.0.3 and configure it with RAC and make it ready for production use.


The following information is good to know about Data Guard:

Using the compression option (ASYNC only) -one of the 11g new features- (compression=enable):
-------------------------------------------
Redo compression can:
-Improve data protection by reducing redo transport lag.
-Reduce network utilization.
-Provide faster redo gap resolution.
-Reduce redo transfer time.
Note: it requires buying the Advanced Compression license.
To implement:
             --Use compression=enable in the dest_* parameter
      --alter system set "_REDO_TRANSPORT_COMPRESS_ALL"=TRUE scope=both;

>This option is recommended when the link speed between the primary and the standby is not fast enough.

Discover ARCHIVE_DEST_N options:
--------------------------------------
reopen: The time (in seconds) the primary database waits before trying to reconnect to the standby DB after the connection between them is cut.
compression: Requires the Oracle Advanced Compression license; it compresses the redo data when transferring it to the DR. [ID 729551.1]
             If you're using Maximum Performance, consider setting _REDO_TRANSPORT_COMPRESS_ALL=TRUE.
DELAY: Delays applying the redo data on the standby (the redo is still shipped immediately), to mitigate potential human-induced errors and logical corruption. I don't recommend setting it;
       a better approach is to use the FLASHBACK DATABASE feature on both sites.

Examples:
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR DELAY=240' sid='*' scope=both;
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" valid_for=(online_logfiles,primary_role) db_unique_name=PEFMS_DR delay=60' sid='*' scope=both;
--SQL> alter system set log_archive_dest_3='service="PEFMS_DR" ARCH NOAFFIRM delay=0 optional reopen=15 register max_failure=10 db_unique_name=PEFMS_DR compression=enable' sid='*' scope=both;
Note: In case of using the compression option, also set the parameter below:
--SQL> alter system set "_REDO_TRANSPORT_COMPRESS_ALL"=TRUE scope=both;

Using the maximum availability option with LGWR SYNC:
--SQL> alter system set log_archive_dest_3='service=PEFMS_DR LGWR SYNC AFFIRM db_unique_name=PEFMS_DR VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) compression=enable';
--SQL> alter database set standby database to maximize availability;
Then restart both the production and the standby databases.


Upgrade RAC 11.2.0.1 to 11.2.0.3 (Part IV RAC Database Upgrade from 11.2.0.1 to 11.2.0.3)

In Part I I've installed the Linux OS and prepared the shared filesystem (ISCSI configuration & OCFS2)
In Part II I've prepared the Linux OS for the RAC installation, installed 11.2.0.3 Grid Infrastructure and Database software.
In Part III I've created a physical standby database under 11.2.0.3 Oracle Home on the new RAC server being refreshed from the 11.2.0.1 primary database, for the purpose of minimizing the downtime of copying the database from old SAN to the new NAS during the upgrade phase.

In this part I'll upgrade the 11.2.0.1 standby database residing on the new 11.2.0.3 servers.

In brief:
  > I'll convert the 11.2.0.1 standby database to a primary and shut down the original primary DB.
  > Start upgrading the new primary DB from 11.2.0.1 to 11.2.0.3.
  > I'll do the post upgrade steps plus RAC configurations.

Recommended Metalink notes:
========================
Metalink [ID 730365.1]  Includes all patchset downloads + How to upgrade from any Oracle DB version to another one.
Metalink [ID 1276368.1] Out-of-place manual upgrade from previous 11.2.0.N version to the latest 11.2.0.N patchset.

If you're not following the approach I'm using in this implementation (an out-of-place upgrade), or you're not willing to use a standby DB for the purpose of the upgrade, you can safely skip the Failover section below.

Let's get started...

########  
Failover:  
########  
On Primary:
-----------
Shutdown the primary DB to ensure there is no further updates.

On Standby:
-------------
Terminate managed recovery mode by:
----------------------------------------
Make sure that all archivelogs been copied from the primary to the standby DB, and also been applied.
SQL> SELECT THREAD#,SEQUENCE#, FIRST_TIME, NEXT_TIME,applied FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#,applied;
SQL> recover managed standby database nodelay;
SQL> alter database recover managed standby database finish;

Convert the standby database to primary role and open it in normal mode:
-----------------------------------------------------------------------
SQL> alter database commit to switchover to primary;
SQL> shu immediate;
SQL> startup upgrade;

In case having a problem with tempfiles:
SQL> alter database tempfile '/ora_temp1/pefms/temp_01.dbf' drop;
SQL> alter tablespace temp add tempfile '/ora_temp1/pefms/temp_01.dbf' reuse;
and so on for the rest of tempfiles...

=> Proceed with DB upgrade steps.


Before the upgrade it's good to do the following:

Purge DBA_RECYCLEBIN:
---------------------------
Purging DBA_RECYCLEBIN will speed up the upgrade process:
SQL> PURGE DBA_RECYCLEBIN;

Keep a list of all DB initialization Parameters (normal & hidden):
--------------------------------------------------------------
The following statement will list the non default parameters :

SQL> Spool All_parameters.txt
     set linesize 170
     col Parameter for a50
     col SESSION FOR a28
     col Instance FOR a55
     col S FOR a1
     col I FOR a1
     col D FOR a1
SELECT * FROM (SELECT  
 a.ksppinm  "Parameter", 
 decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session", 
 c.ksppstvl "Instance",
 decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
 decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
 decode(p.isdefault,'FALSE','F','TRUE','T') "D"
 FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
 WHERE a.indx = b.indx AND a.indx = c.indx
 AND p.name(+) = a.ksppinm
 ORDER BY a.ksppinm) WHERE d='F';
     Spool off

Oracle strongly recommends resetting hidden parameters to their default values before starting the upgrade:
e.g. (replace "<_hidden_parameter>" with each hidden parameter name from the list spooled above)
SQL> alter system reset "<_hidden_parameter>" scope=spfile sid='*';

##################
PRE-UPGRADE STEPS:
##################

Step 1: Software Installation
######
Install 11.2.0.3 RDBMS Software in a new ORACLE_HOME. 
Apply the latest CPU/SPU patch on the 11.2.0.3 ORACLE_HOME before upgrading the database.

11.2.0.3 Grid Infrastructure & Database software already installed in Part II

Create the default directory that holds sqlnet log files:
-------------------------------------------------
# mkdir -p /u01/oracle/11.2.0.3/db/log/diag/clients
# chmod 700 /u01/oracle/11.2.0.3/db/log/diag/clients

Pre-upgrade Diagnose:
================
The following script checks the database and gives recommendations before the upgrade.
Download the latest version of the utlu112i_5.sql script: Note 884522.1
Note: This script already exists under the new $ORACLE_HOME/rdbms/admin, but the one on Metalink is usually updated with the most recent upgrade checks.

SQL> SPOOL upgrade_info.log
SQL> @?/rdbms/admin/utlu112i.sql
SQL> SPOOL OFF

Note: Here I ran the script under $ORACLE_HOME, but I strongly recommend downloading the latest version of that script from Metalink and running that one against the database.

Step 2: Dictionary Check
#####
Verify the validity of data dictionary objects by running dbupgdiag.sql script:

If the dbupgdiag.sql script reports any invalid objects, run utlrp.sql (multiple times) to validate the invalid objects in the database, until the number of invalid objects stops changing:

SQL> @/home/oracle/dbupgdiag.sql
SQL> @?/rdbms/admin/utlrp.sql 

Note: In my case, the 11.2.0.1 primary DB had only 32 invalid objects, but after switching the standby DB over to the primary role for the upgrade purpose I got more than 200 invalid objects (views, packages, synonyms), most of them under the SYS schema. No need to worry; all of them will become valid after running step 9.
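A quick generic check (not part of the original steps) to watch the invalid-object count between utlrp.sql runs:

SQL> SELECT owner, COUNT(*) FROM dba_objects WHERE status = 'INVALID' GROUP BY owner ORDER BY owner;

Re-run utlrp.sql until this count stops decreasing.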

Gather Dictionary Statistics:
--------------------------
SQL> EXECUTE dbms_stats.gather_dictionary_stats;
=> In my case it failed due to invalid objects under the SYS schema:
             ORA-04063: package body "SYS.DBMS_SQLTUNE" has errors


Step 3: TIMEZONE Version
#####
In case you have TIMESTAMP WITH TIME ZONE datatype in your database, you need to Upgrade the Time Zone version to version 14.

The possibilities are:
 >If current version < 14, you need to upgrade to version 14 after you're done with the upgrade to 11.2.0.3.
 >If current version = 14, no need to upgrade; skip the whole step (skip STEP 10).
 >If current version > 14, you must upgrade your Time Zone version before upgrading to 11.2.0.3, or your data stored in the TIMESTAMP WITH TIME ZONE datatype can become corrupted during the upgrade.

Check your current Time Zone version:
SQL> SELECT version FROM v$timezone_file;

    VERSION
     ----------
    4

In my case the version is older (4 < 14), so we'll do this step in the post-upgrade steps (STEP 10).

Check National Characterset is UTF8 or AL16UTF16:
============================================
SQL> select value from NLS_DATABASE_PARAMETERS where parameter = 'NLS_NCHAR_CHARACTERSET';


Step 4: Disable Cluster option
#####
Set the parameter cluster_database to FALSE for a RAC DB.

SQL> ALTER SYSTEM SET cluster_database = false scope=spfile;


Step 5: Disable Vault | Adjust parameters for JVM
#####
>Disable Database Vault if it is enabled.

>If JVM is installed, java_pool_size and shared_pool_size must each be set to at least 250MB prior to the upgrade.


Step 6: Last checks:
#####
SQL> SELECT * FROM v$recover_file;
no rows selected

SQL> SELECT * FROM v$backup WHERE status != 'NOT ACTIVE';
no rows selected

SQL> SELECT * FROM dba_2pc_pending;--outstanding distributed transactions 
no rows selected

SQL> SELECT name FROM sys.user$ WHERE ext_username IS NOT NULL AND password = 'GLOBAL';


Step 7: Disable all batch and cron jobs:
#####
Disable crontab scripts:
---------------------
As root:
# crontab -l > /root/crontab_root
# crontab /dev/null
# crontab -l

As oracle:
# crontab -l > /home/oracle/oracle_crontab
# crontab /dev/null
# crontab -l

Disable Database scheduler:
------------------------
SQL> EXEC dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','TRUE');
SQL> alter system set job_queue_processes=0 scope=both;
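Before proceeding, you can confirm that nothing is still executing (a generic check, not part of the original steps):

SQL> SELECT job_name, state FROM dba_scheduler_running_jobs;
SQL> SELECT COUNT(*) FROM dba_jobs_running;

Both should report no running jobs before you move on.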
It's strongly recommended to take a database cold backup before going through the next steps.
 > Although I heavily rely on RMAN in many cases, in scenarios like this I usually prefer the cold backup technique; feel free to take an RMAN backup or any kind of backup you can rely on.


Step 8: Set the Database in the Noarchivelog mode.
#####
>Stop listeners: lsnrctl stop
>Stop DBCONSOLE: emctl stop dbconsole

Putting the database in noarchivelog mode speeds up the upgrade and minimizes the downtime window.

$ sqlplus "/as sysdba"
SQL> shutdown immediate;
SQL> startup mount
SQL> alter database flashback off;
SQL> alter database noarchivelog;
SQL> archive log stop;
SQL> shutdown immediate;


##############
UPGRADE STEPS:
##############

Step 9: Execute the upgrade script
####
> If you encounter a message listing obsolete initialization parameters during "startup upgrade", remove the obsolete parameters from the PFILE.

# sqlplus / as sysdba

SQL> startup upgrade
SQL> spool upgrade.log
SQL> @?/rdbms/admin/catupgrd.sql 

Note: In case you experience errors during the upgrade, fix them first, then re-run the catupgrd.sql script as many times as necessary in this order:
1) shutdown immediate
2) startup upgrade
3) @?/rdbms/admin/catupgrd.sql

Once upgrade script is done:
> Check the spool file for errors.
> Restart the database in normal mode and run the following scripts:

SQL> startup
        -- Check the validity of Oracle installed options:
SQL> @?/rdbms/admin/utlu112s.sql
SQL> @?/rdbms/admin/catuppst.sql
        -- Compile invalid objects:
SQL> @?/rdbms/admin/utlrp.sql
        -- List objects that are invalid after the upgrade but were valid before it:
SQL> @?/rdbms/admin/utluiobj.sql
        -- Manually compile any remaining invalid objects:
       --e.g.:
      SQL> alter trigger SYSADM.TR_AFTER_LOGON compile;

Run the dbupgdiag.sql script (see Note 556610.1):
The dbupgdiag.sql script verifies that all dba_registry components are valid:
SQL> @/home/oracle/dbupgdiag.sql


###############
Post Upgrade Steps:
###############

Step 10: Upgrade the TimeZone version
######
PREPARE Stage:
=============

SQL> SHU IMMEDIATE
SQL> startup upgrade
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value  
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
     
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         4
     DST_SECONDARY_TT_VERSION       0
     DST_UPGRADE_STATE              NONE

SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> exec DBMS_DST.BEGIN_PREPARE(14)

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value  
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
     
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         4
     DST_SECONDARY_TT_VERSION       14
     DST_UPGRADE_STATE              PREPARE

SQL> TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
SQL> TRUNCATE TABLE sys.dst$affected_tables;
SQL> TRUNCATE TABLE sys.dst$error_table;

SQL> set serveroutput on
     BEGIN  
     DBMS_DST.FIND_AFFECTED_TABLES  
     (affected_tables => 'sys.dst$affected_tables',  
     log_errors => TRUE,  
     log_errors_table => 'sys.dst$error_table');  
     END;  
     /
SQL> SELECT * FROM sys.dst$affected_tables;
no rows selected

SQL> SELECT * FROM sys.dst$error_table;
no rows selected

SQL> EXEC DBMS_DST.END_PREPARE;
A prepare window has been successfully ended.

PL/SQL procedure successfully completed.

Upgrade Stage:
==============

SQL> purge dba_recyclebin;
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> EXEC DBMS_DST.BEGIN_UPGRADE(14);

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value  
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
     
     PROPERTY_NAME                  VALUE
     ------------------------------ --------------------
     DST_PRIMARY_TT_VERSION         14
     DST_SECONDARY_TT_VERSION       4
     DST_UPGRADE_STATE              UPGRADE

SQL> SELECT OWNER, TABLE_NAME, UPGRADE_IN_PROGRESS FROM ALL_TSTZ_TABLES where UPGRADE_IN_PROGRESS='YES';
no rows selected

SQL> shutdown immediate
SQL> startup
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;

SQL> set serveroutput on
     VAR numfail number
     BEGIN
     DBMS_DST.UPGRADE_DATABASE(:numfail,
     parallel => TRUE,
     log_errors => TRUE,
     log_errors_table => 'SYS.DST$ERROR_TABLE',
     log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
     error_on_overlap_time => FALSE,
     error_on_nonexisting_time => FALSE);
     DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
     END;
     /
...
Number of failures: 0
Failures:0

SQL> VAR fail number
     BEGIN
     DBMS_DST.END_UPGRADE(:fail);
     DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
     END;
     /

An upgrade window has been successfully ended.
Failures:0

PL/SQL procedure successfully completed.

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value  
     FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
     
     PROPERTY_NAME                  VALUE
     ------------------------------ ------------------------------
     DST_PRIMARY_TT_VERSION         14
     DST_SECONDARY_TT_VERSION       0
     DST_UPGRADE_STATE              NONE

SQL> SELECT * FROM v$timezone_file;

     FILENAME             VERSION
     -------------------- ----------
     timezlrg_14.dat      14

SQL> select TZ_VERSION from registry$database;

     TZ_VERSION
     ----------
              4

SQL> update registry$database set TZ_VERSION = (select version FROM v$timezone_file);
SQL> commit;
SQL> select TZ_VERSION from registry$database;

     TZ_VERSION
     ----------
             14

SQL> SELECT value$ FROM sys.props$ WHERE NAME = 'DST_PRIMARY_TT_VERSION';

     VALUE$
     --------
           14

SQL> exit;


STEP 11: Set CLUSTER_DATABASE=TRUE In case of RAC DB
######
Create a pfile holding the RAC configuration:

# cd $ORACLE_HOME/dbs
# mv initpefms1.ora initpefms1.ora.old
# mv spfilepefms1.ora spfilepefms1.ora.old
# vi initpefms1.ora

######################################################
#Memory Parameters:
######################################################
*.sga_max_size=19327352832
*.sga_target=19327352832
*.db_cache_size=8589934592
*.java_pool_size=104857600
*.large_pool_size=314572800
*.shared_pool_reserved_size=52428800
*.shared_pool_size=2618767104
*.sort_area_size=83886080
*.log_buffer=31457280
*.pga_aggregate_target=10737418240
######################################################
#Parameters with destinations:
######################################################
*.control_files='/ora_control1/pefms/control01.ctl','/ora_control2/pefms/control02.ctl'
*.db_recovery_file_dest='/ora_backupdisk/flash_recovery_area'
*.diagnostic_dest='/u01/oracle/11.2.0.3/db/diagnostics/pefms'
*.log_archive_dest_1='LOCATION=/ora_archive1/pefms'
*.log_archive_dest_2='LOCATION=/ora_archive2/pefms'
######################################################
#Important Parameters:
######################################################
*.db_flashback_retention_target=21600
*.db_recovery_file_dest_size=536870912000
*.fast_start_mttr_target=300
*.undo_management='AUTO'
*.undo_retention=172800
*.archive_lag_target=1800
*.cluster_database_instances=2
*.CLUSTER_DATABASE=true
*.compatible='11.2.0.3'
*.control_file_record_keep_time=30
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.service_names='PEFMS'
######################################################
#Instance Specific Parameters:
######################################################
pefms1.instance_name='pefms1'
pefms2.instance_name='pefms2'
pefms1.instance_number=1
pefms2.instance_number=2
pefms1.thread=1
pefms2.thread=2
pefms2.undo_retention=172800
pefms1.undo_tablespace='UNDOTBS1'
pefms2.undo_tablespace='UNDOTBS2'
######################################################
#Features Enable|Disable Parameters:
######################################################
*._mmv_query_rewrite_enabled=TRUE
*._optim_peek_user_binds=FALSE
*.control_management_pack_access='DIAGNOSTIC+TUNING'
*.query_rewrite_enabled='TRUE'
*.resource_limit=TRUE
*.result_cache_max_size=0
*.result_cache_mode='MANUAL'
*.star_transformation_enabled='FALSE'
######################################################
#Performance Parameters:
######################################################
*.cursor_sharing='EXACT'
*.session_cached_cursors=100
*.open_cursors=500
*.processes=1000
*.db_file_multiblock_read_count=16
*.log_archive_max_processes=3
*.optimizer_mode='ALL_ROWS'
*.parallel_degree_limit='4'
*.parallel_max_servers=2
*.timed_statistics=TRUE
*.transactions=2000
*.transactions_per_rollback_segment=10
######################################################
#Security Parameters:
######################################################
*.remote_login_passwordfile='EXCLUSIVE'
*.sql92_security=TRUE
*.sec_case_sensitive_logon=FALSE
######################################################
#Other Parameters:
######################################################
*.db_block_size=8192
*.db_name='pefms'
*.job_queue_processes=10
*.log_archive_format='%d_%t_%r_%s.arc'


SQL> shu immediate
SQL> startup mount pfile='/u01/oracle/11.2.0.3/db/dbs/initpefms1.ora'
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 
     GROUP 4 (
    '/ora_redo1/pefms/pefms2_redo04_a.log',
    '/ora_redo2/pefms/pefms2_redo04_b.log'
  ) SIZE 100M BLOCKSIZE 512 REUSE ,
  GROUP 5 (
    '/ora_redo1/pefms/pefms2_redo05_a.log',
    '/ora_redo2/pefms/pefms2_redo05_b.log'
  ) SIZE 100M BLOCKSIZE 512 REUSE ,
  GROUP 6 (
    '/ora_redo1/pefms/pefms2_redo06_a.log',
    '/ora_redo2/pefms/pefms2_redo06_b.log'
  ) SIZE 100M BLOCKSIZE 512 REUSE ;

SQL> alter database archivelog;
SQL> alter database flashback on;
SQL> alter database open;
         

Copy the pfile to the other RAC node:
# scp $ORACLE_HOME/dbs/initpefms1.ora oracle@node2:$ORACLE_HOME/dbs/initpefms2.ora


STEP 12: Clusterware Configurations 
######
Start RAC Services on Node2:
---------------------------- 
# crsctl start crs

Add Database/Instances/listeners services:
---------------------------------------
Note: Don't include any UPPERCASE characters => [Metalink Note ID:  372145.1]
Note: Below steps will be done automatically if DBUA is used to upgrade the DB.

# srvctl add database -d pefms -o /u01/oracle/11.2.0.3/db
# srvctl add instance -d pefms -i pefms1 -n vla-ora-node1
# srvctl add instance -d pefms -i pefms2 -n vla-ora-node2
# srvctl add listener -o /u01/oracle/11.2.0.3/db -l LISTENER_PEFMS1 -s
# srvctl add listener -o /u01/oracle/11.2.0.3/db -l LISTENER_PEFMS2 -s 

-d  database name
-i   instance name
-n  node name
-o  ORACLE_HOME path
-l   listener_name

In case you did an out-of-place upgrade on the same server, this command will update the RAC configuration with the new Oracle Home:
# srvctl upgrade database -d  pefms -o /u01/oracle/11.2.0.3/db

Check RAC configuration:
-------------------------
# srvctl config database -d pefms

Database unique name: pefms
Database name: 
Oracle home: /u01/oracle/11.2.0.3/db
Oracle user: oracle
Spfile: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: pefms
Database instances: pefms1,pefms2
Disk Groups: 
Mount point paths: 
Services: pefms11g
Type: RAC
Database is administrator managed

Create SPFILE:
---------------
On Node1:
SQL> create spfile=/ora_data1/pefms/spfilepefms.ora from pfile;

Note: The spfile will be on the shared storage to be readable from both RAC nodes.

# cd $ORACLE_HOME/dbs
# mv initpefms1.ora initpefms1.sav
# vi initpefms1.ora
# Add this line only:
spfile=/ora_data1/pefms/spfilepefms.ora

On Node2:
# cd $ORACLE_HOME/dbs
# mv initpefms2.ora initpefms2.sav
# vi initpefms2.ora
# Add this line only:
spfile=/ora_data1/pefms/spfilepefms.ora

In the next startup of RAC instance it will use the new spfile.
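After that startup you can verify that each instance picked up the shared spfile (a generic check, not part of the original steps):

SQL> show parameter spfile

=> The VALUE column should point to /ora_data1/pefms/spfilepefms.ora on both nodes.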

Test the RAC configurations: 
# srvctl stop database -d pefms
# srvctl start database -d pefms
# srvctl stop instance -d pefms -i pefms1 
# srvctl stop instance -d pefms -i pefms2 

 
Troubleshooting:
---------------
=> If an instance doesn't start up, it may be marked DISABLED in the configuration; enable it with:
   For a specific instance:
   # srvctl enable instance -d pefms -i pefms2
   For the database:
   # srvctl enable database -d pefms

Create TAF services:  
=================
The following is just an example:

# srvctl add service -d pefms -s pefms11g -r pefms1,pefms2 -P basic
# srvctl start service -d pefms -s pefms11g 

Troubleshooting: To stop or delete the service:
-----------------------------------------------
# srvctl stop service -d pefms -s pefms11g 
# crs_stop
# srvctl remove service -d pefms -s pefms11g 
Delete the Service on the DB:
SQL> EXEC dbms_service.delete_service('pefms11g');

Then you can use this service PEFMS11G as the connection service name when the application connects to the database.


STEP 13: Upgrade the RMAN Recovery Catalog
######
If you're using a Recovery Catalog to back up your database, you have to upgrade it using the "UPGRADE CATALOG;" command.


STEP 14: Upgrade Statistics Tables
######
If you created statistics tables before, using the DBMS_STATS.CREATE_STAT_TABLE, then upgrade each table by running:
e.g.
SQL> EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('scott', 'stat_table');


STEP 15: Enable Database Vault
######
Enable Oracle Database Vault and Revoke the DV_PATCH_ADMIN Role [Note ID: 453903.1]


STEP 16: RMAN Configuration
######
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/ora_backupdisk/rmanbkps/controlfiles/snapcf_pefms.f';
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/ora_backupdisk/rmanbkps/controlfiles/%F';
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;

Note: Starting with 11.2.0.2, the snapshot controlfile must be located in a shared location accessible by all nodes in the cluster [ID 1472171.1].
In 11.2.0.3, controlfile backups (auto & manual) must be created on a shared device [Bug 13780443].
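To double-check what RMAN ended up with, list the configured settings (standard RMAN commands, added here for convenience):

RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
RMAN> SHOW ALL;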

STEP 17: Enable Cron & DB Jobs:
######
Enable Crontab Jobs: (which we stopped in step 7)
----------------------
 
# crontab /root/crontab_root

# crontab /home/oracle/oracle_crontab

Enable DB jobs:
----------------
SQL> EXEC dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','FALSE');


STEP 18: Configure the Enterprise Manager
######

Configure the SCAN LISTENER: 
----------------------------------
EM depends on the scan listener to run.

Ensure that the SCAN listener entry exists:
# vi $GRID_HOME/network/admin/listener.ora

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )

Ensure that SCAN listener is up:
# crs_start ora.LISTENER_SCAN1.lsnr

Add the SCAN listener tns entry to tnsnames.ora on all nodes:
# vi $ORACLE_HOME/network/admin/tnsnames.ora

LISTENER_SCAN1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan)(PORT = 1523))
  )

Set the parameter remote_listener to point to the SCAN listener in all RAC instances:
SQL> alter system set remote_listener=LISTENER_SCAN1 sid='*';

Using Command line way:
====================
First, change these parameters in the DEFAULT profile:
----------------------------------------------
Note down the DEFAULT profile settings:
SQL> spool default_profile_settings.log
SQL> select * from dba_profiles where profile='DEFAULT';  
SQL> spool off
SQL> alter profile default limit PASSWORD_REUSE_MAX unlimited;
SQL> alter user dbsnmp profile default;
SQL> alter profile default limit PASSWORD_VERIFY_FUNCTION null;
SQL> alter profile default limit PASSWORD_LOCK_TIME unlimited;

# emca -config dbcontrol db -repos create -cluster

STARTED EMCA at Sep 9, 2013 4:51:28 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: pefms
Service name: pefms11g
Listener port number: 1521
Listener ORACLE_HOME [ /u01/grid/11.2.0.3/grid ]: /u01/oracle/11.2.0.3/db
Password for SYS user:  
Password for DBSNMP user:  
Password for SYSMAN user:  
Cluster name: cluster
Email address for notifications (optional): 
Outgoing Mail (SMTP) server for notifications (optional): 

Once the EM setup is done, restore the DEFAULT profile settings you spooled earlier (x below stands for the original PASSWORD_REUSE_MAX value):
SQL> alter profile default limit PASSWORD_REUSE_MAX x;
SQL> alter profile default limit PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION;
SQL> alter profile default limit PASSWORD_LOCK_TIME 1;

UPGRADE IS DONE


############
Optional STEPS: Good To Do (Not mandatory for the upgrade)
############


STEP 19: Rebuild Indexes & Gather Statistics
######

Rebuild Unusable indexes: 
======================
SQL> select 'ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD ONLINE;' from dba_indexes where status ='UNUSABLE';
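A common SQL*Plus pattern is to spool the generated statements into a script and run it (the file name rebuild_unusable.sql is arbitrary):

SQL> set heading off feedback off echo off
SQL> spool rebuild_unusable.sql
SQL> select 'ALTER INDEX '||owner||'.'||index_name||' REBUILD ONLINE;' from dba_indexes where status = 'UNUSABLE';
SQL> spool off
SQL> @rebuild_unusable.sql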

Gather FIXED OBJECTS statistics:
=============================
Fixed objects are the x$ tables (loaded into the SGA during startup) on which the V$ views are built (V$SQL, etc.).
If statistics are not gathered on fixed objects, the optimizer will use predefined default values for them. These defaults may lead to inaccurate execution plans.
Statistics on fixed objects are not gathered automatically, nor as part of gathering database stats.

Backup current fixed objects statistics:
SQL> exec DBMS_STATS.CREATE_STAT_TABLE('SYS','STATS_TABLE','USERS');
SQL> exec dbms_stats.export_fixed_objects_stats(stattab=>'STATS_TABLE',statown=>'SYS');

During peak hours run:
SQL> Exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS (no_invalidate => FALSE );

In case you encounter bad performance after gathering new fixed object stats, restore the old stats:
SQL> exec dbms_stats.delete_fixed_objects_stats(); 
SQL> exec dbms_stats.import_fixed_objects_stats(stattab =>'STATS_TABLE',statown=>'SYS');

In general, fixed object statistics should be gathered after:
-A major database or application upgrade.
-Implementing a new module.
-Changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
-Poor performance/Hang encountered while querying dynamic views e.g. V$ views, RMAN repository.
Note:
-It's recommended to gather the fixed object stats during peak hours (while the system is busy), or right after them while the sessions are still connected (even if idle), to guarantee that the fixed object tables are populated and the statistics represent the real DB activity. Also note that some performance degradation may be experienced while the statistics are being gathered.
-Having no statistics is better than having non-representative statistics.

Gather SYSTEM Statistics:   
======================
System statistics are statistics about CPU speed and I/O performance; they enable the CBO to
effectively cost each operation in an execution plan. Introduced in Oracle 9i.

Oracle highly recommends gathering system statistics during a representative workload, 
ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer. You only need to gather system statistics one time.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):
NOWORKLOAD statistics: 
This simulates a workload (not the real one) without collecting full statistics; it's less accurate than WORKLOAD statistics, but if you can't capture statistics during a typical workload you can use it.
To gather noworkload statistics:
SQL> execute dbms_stats.gather_system_stats();

WORKLOAD statistics: the one I recommend.
This gathers statistics during the current workload, which is supposed to be representative of the actual system I/O and CPU workload on the DB.
To gather WORKLOAD statistics:
e.g. On Thursday at 11:00 AM run the following:
SQL> execute dbms_stats.gather_system_stats('start');

Once the workload window ends after a few hours (e.g. at 2:00 PM), stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');

OR, you can use a time interval (in minutes) instead of issuing the start/stop commands manually:
SQL> execute dbms_stats.gather_system_stats('interval',60);

Gather DICTIONARY Statistics:
==========================
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;->Will gather stats on dictionary tables 20% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');->Will gather stats on 100% of SYS schema tables including dictionary tables.
or...
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS (gather_sys=>TRUE); ->Will gather stats on the whole DB including all SYS & dictionary tables.


Gather Database Statistics:
======================
BEWARE: You have to test intensively before gathering full database statistics, as this will change the execution plans of SQL statements on the database, for better or worse; your testing will determine which.

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

Or...
In case you want to gather Database Stats + Histograms on all columns :
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>100,METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY',cascade => TRUE,degree => 8);

Using ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE lets Oracle estimate skewed values; it usually gives excellent results and is faster than using 100.
If "cascade => TRUE" is removed, Oracle will determine whether index statistics should be collected.


Step 20: Check Oracle Recommended Patches:
#######
Note ID 756671.1 includes the recommended patches for releases starting from 10.2.0.3.
Note ID 742060.1 represents the most accurate information Oracle can provide for coming releases.

PSU patch and CPU|SPU patch: [ID 854428.1]
==========================
*You can check the latest applied patches on the database by running this query:
SQL> select * from DBA_REGISTRY_HISTORY;

-->The July SPU patch I applied to the ORACLE_HOME in Part II did not show a record in DBA_REGISTRY_HISTORY after the upgrade, so I ran:
 SQL> @?/rdbms/admin/catbundle.sql cpu apply
 SQL> select * from DBA_REGISTRY_HISTORY;

RAC Configuration Audit Tool 

This tool is a script provided by Oracle to help you perform a health check on your database environment (especially RAC environments); it also checks the implementation of the Maximum Availability Architecture on your environment.

For all information about this tool + download visit METALINK [Doc ID 1268927.1]
For a brief explanation check this link: http://dba-tips.blogspot.ae/2013/12/introduction-to-great-tool-rac.html

After downloading the zip file available in METALINK [Doc ID 1268927.1], extract it under a local mount point e.g. /home/oracle, then run the script:

cd /home/oracle
./raccheck

=> Provide the root user password, or select to use the SUDO privilege. To use the SUDO privilege without a password, do the following in a new session:
# visudo
#Allow oracle to run raccheck.sh script:
oracle ALL=(root) NOPASSWD:/tmp/root_raccheck.sh

=> /tmp/root_raccheck.sh is created while raccheck is running; after the script is done it will be deleted.

-> Go back to the raccheck session and enter 2 to run the script with the SUDO privilege.

* Once the script is done it will create an HTML report; upload it to your PC and open it.
* The script may take 15 to 30 minutes to finish, with a minimal performance overhead of 3% to 20%.
* Feel free to remove/comment out the entry you added to the sudoers file after the script is done.

Note that from time to time Oracle releases a new version of this script; it's recommended to download the latest version every three months.


Exclusive Environment configuration:
############################
Enable supplemental logging (LogMiner):
====================================
Enable minimal supplemental logging so that past transactions can easily be examined with LogMiner:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

     =>This statement may take a long time, as it waits for all active transactions on the DB to finish.
=>Check that minimal supplemental logging is enabled:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEME
--------
YES


Mission Accomplished.


////////////////////////////////////////////////////////
Most problems raised after upgrading from one release to another are performance related. In case you have some SQL statements performing badly after the upgrade, you can set this parameter back to the previous release:
SQL> Alter system set optimizer_features_enable='11.2.0.1';
=>Check whether it makes a difference; if so, try to dig deeper (gather statistics, check the execution plans, create SQL profiles, ...)
It's not recommended to set this parameter instance-wide as I did; you can set it for a single SQL statement using the hint /*+ optimizer_features_enable('11.2.0.1') */ or within a session: "alter session set optimizer_features_enable='11.2.0.1';"
////////////////////////////////////////////////////////



Introduction To a Great Tool: RAC Configuration Audit Tool (RAC & DB Health Check & MAA Check)

RAC Configuration Audit Tool 

This tool is a script provided by Oracle to help you perform a Health Check on your Database environment (especially RAC environments); it also checks the implementation of Maximum Availability Architecture (MAA) on your environment.

This tool is very helpful, especially for DBAs who deal with a lot of environments: it gives them a fast and easy way to perform health checks on their database environments, and makes it easier to identify and implement MAA recommendations.

This script (tool) can run on a live production environment [10gR2 onward]; at the end it creates a report giving you a system health scorecard plus the failed/passed checks and recommendations.

For all information about this tool + download visit METALINK [Doc ID 1268927.1]

Now I'll provide a very brief explanation of the main functions of that tool.

After downloading the zip file available in METALINK [Doc ID 1268927.1], extract it under a local mount point e.g. /home/oracle, then run the script:

# cd /home/oracle
# ./raccheck

=> Provide the root user password, or choose to use the sudo privilege. To use SUDO without a password, do the following in a new session:
# visudo
#Allow oracle to run raccheck.sh script:
oracle ALL=(root) NOPASSWD:/tmp/root_raccheck.sh

=>/tmp/root_raccheck.sh will be created while raccheck is running; after the script is done it will be deleted.

-> Go back to the raccheck session and enter 2 to run the script with the SUDO privilege.

* Once the script is done it will create an HTML report; copy it to your PC and open it.
* The script may take 15 to 30 minutes to finish, with a performance overhead ranging from 3% to 20%.
* Feel free to remove or comment out the entry you added to the sudoers file after the script is done.

Note that Oracle releases a new version of this script from time to time; it's recommended to download the latest version every three months.

Extra Features:
------------------
In case you don't have a Data Guard standby and want to exclude it from the health & MAA checks:
# ./raccheck -m

In case you want to compare two reports, provide the report directory names:
# ./raccheck -diff raccheck_cetrain01_ORCL_020813_105850 raccheck_cetrain01_ORCL_022713_163238

In case you want to upgrade to 11.2.0.3, this automates the pre and post upgrade checks:
# ./raccheck -u -o pre

That was a very brief explanation for RAC Configuration Audit Tool, the full documentation is available in METALINK [Doc ID 1268927.1].

Database | Filesystem | CPU Monitoring Script

I would like to share with you today the dbalarm.sh script. This script monitors FILESYSTEM usage and CPU overhead, and reports ORA- and TNS- errors found in the ALERTLOGs of ALL databases & listeners on the server.

This script is coded to send you only the new errors that have appeared since its last run; in other words, it will not re-report errors it has already reported unless they appear again in the logs. This is one of the key strengths of the script.

The script has been tested on Linux and SUN environments.

How to use dbalarm script?
========================

This script is very smart and very easy to use, just follow these three steps:

Step 1:
Download the script from this link:

Step 2:
Open the script and change the e-mail address in line 16 to your own:
MAIL_LIST="youremail@yourcompany.com"

Note: sendmail service should be configured and running on your server.
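Before scheduling the script, you can confirm that prerequisite. A small hedged check (illustrative helper only, not part of dbalarm.sh; the function name is mine):

```shell
# Quick sanity check that sendmail is installed and running before relying
# on dbalarm's e-mail alerts. (Illustrative helper, not part of dbalarm.sh.)
check_sendmail() {
  if command -v sendmail >/dev/null 2>&1 || [ -x /usr/sbin/sendmail ]; then
    echo "sendmail: installed"
  else
    echo "sendmail: NOT installed"
  fi
  if pgrep -x sendmail >/dev/null 2>&1; then
    echo "sendmail: running"
  else
    echo "sendmail: NOT running"
  fi
}
check_sendmail
```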

Step 3:
By Oracle user:
Schedule this script in the crontab to run at least every 5 minutes:
# crontab -e
#Add this line to schedule dbalarm.sh to run every 5 minutes:
*/5 * * * * /home/oracle/dbalarm.sh
Note: /home/oracle/dbalarm.sh is the full path to the dbalarm script, where /home/oracle is the Oracle user's home directory.

In case you will schedule this script to run from root user crontab:
# crontab -e
#Add this line to schedule dbalarm.sh to run every 5 minutes:
*/5 * * * * su - oracle -c /home/oracle/dbalarm.sh
Now the only thing remaining is to sit back and relax; the script will send you the errors once they appear.

Also you can visit and download the whole DBA bundle that includes this script plus many other smart and easy to use scripts for database administration tasks:
http://dba-tips.blogspot.ae/2014/02/oracle-database-administration-scripts.html

DISCLAIMER: THIS SCRIPT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY. IT IS PROVIDED "AS IS".

In case the download link is not working, please find below the full code:



COLD BACKUP Script (For Oracle Database)

Today I'll share with you a script that can take a cold backup of your Oracle database on Linux and Unix environments.

The script is downloadable from this link:
https://www.dropbox.com/s/sjibiupwic9oxt1/COLD_BACKUP.sh?dl=0

COLD_BACKUP.sh is part of the database administration bundle; this bundle includes more than 30 scripts I use in my daily database administration activities. You can download the DBA BUNDLE from here:
[http://dba-tips.blogspot.ae/2014/02/oracle-database-administration-scripts.html].

The cold backup script I'm sharing with you will check the currently running databases on the server and ask you to select the database you want to back up (in case you have multiple running databases on the server).

It will shut down the database, take a cold backup, then start the database back up.
This script is RAC-aware: it will detect whether your database is RAC or standalone, and if it's a RAC DB the script will ensure that no other instances are currently running for the same database before starting the cold backup.

The most important feature of this script is that it creates a restore script, making it easy for you to restore the cold backup later.
[Preparing a script to restore a cold backup is a time-consuming job; this script should save you that time.]
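The "back up, then generate a matching restore script" idea can be sketched in a few lines of shell. The paths below are scratch directories standing in for real datafile locations, not what the script actually uses:

```shell
# Sketch of "back up, then generate a restore script" (illustrative only).
SRC=$(mktemp -d)    # stands in for the datafile location
DEST=$(mktemp -d)   # stands in for the backup destination
echo "demo" > "$SRC/users01.dbf"

# cold "backup": copy the files, preserving timestamps and permissions
cp -p "$SRC"/*.dbf "$DEST"/

# generate a companion restore script next to the backup
cat > "$DEST/restore.sh" <<EOF
#!/bin/sh
# restores the backed-up files to their original location
cp -p "$DEST"/*.dbf "$SRC"/
EOF
chmod +x "$DEST/restore.sh"
```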


Note: This script is not designed for databases using ASM  :-)

DISCLAIMER: THIS SCRIPT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY. IT IS PROVIDED "AS IS".

If you face a problem with the download link, you can copy & paste the code below using [Ctrl+Insert] to copy and [Shift+Insert] to paste:

Oracle Logs CLEANUP Script

In this post I'll share with you one of the scripts I use to back up and clean up the Oracle logs associated with a specific database on the server.

This script has been tested on Linux & SUN environments.

You can download the script from here:
https://www.dropbox.com/s/eytsv5duxe95lrh/oracle_cleanup.sh?dl=0

Once you run it, it will ask you to select a database (in case you have multiple running databases on the server), then ask for the location where you want to back up the logs; it will then clean up all logs under the udump, bdump and cdump folders, plus the audit logs, and will also clean up the logs of the listener associated with the selected database.

Note: This script will back up and delete all logs, keeping only the logs of the last 5 days.
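The retention rule can be expressed with a single find command. The sketch below demonstrates it against a scratch directory (the real script targets the udump/bdump/cdump folders and the listener log location):

```shell
# Demonstrate the 5-day retention rule on a scratch directory.
LOGDIR=$(mktemp -d)
touch "$LOGDIR/new.log"
touch "$LOGDIR/old.log"
touch -d '10 days ago' "$LOGDIR/old.log"   # backdate for the demo (GNU touch)

# -mtime +5 matches files last modified more than 5 days ago
find "$LOGDIR" -name '*.log' -mtime +5 -print -delete
```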

Note: It's recommended to test this script on a test environment before running it on production.

DISCLAIMER: THIS SCRIPT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY. IT IS PROVIDED "AS IS".

Also you can visit and download the whole DBA bundle that includes this script plus many other smart and easy to use scripts for database administration tasks:
http://dba-tips.blogspot.ae/2014/02/oracle-database-administration-scripts.html


Here is the code of the script, in case the download link doesn't open:



Script to TRACE Oracle Session

I'm sharing with you today two shell scripts that will help you easily trace an Oracle session and get a readable output trace file.

The first script, start_tracing.sh, lets you identify the session you want to trace.


The second script, stop_tracing.sh, stops tracing the session and provides you with the original trace file plus a readable version of it, processed with the TKPROF utility.


You can download the scripts from here:


start_tracing.sh

https://www.dropbox.com/s/nvhefxlrr4w254g/start_tracing.sh?dl=0

stop_tracing.sh

https://www.dropbox.com/s/xp2hwgbb0xrggmq/stop_tracing.sh?dl=0
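As a rough idea of what such scripts issue under the hood (an assumption on my part; the actual scripts may use a different mechanism), session tracing is typically switched on and off with DBMS_MONITOR and the raw trace formatted with tkprof. The sketch below only prints the statements; the SID/SERIAL# values are placeholders:

```shell
# Print the kind of statements a session-tracing script typically issues.
# SID/SERIAL are placeholders; nothing is executed against a database here.
SID=123
SERIAL=456
TRACE_SQL=$(cat <<SQL
exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => $SID, serial_num => $SERIAL, waits => TRUE, binds => FALSE);
-- ... reproduce the workload, then stop tracing:
exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => $SID, serial_num => $SERIAL);
SQL
)
echo "$TRACE_SQL"
# The raw trace file is then made readable with:
#   tkprof <trace_file> <output_file> sys=no
```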

Also you can visit and download the whole DBA bundle that includes these two scripts plus many other smart and easy to use scripts for database administration tasks:
http://dba-tips.blogspot.ae/2014/02/oracle-database-administration-scripts.html

DISCLAIMER: THIS SCRIPT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY. IT IS PROVIDED "AS IS".

Here are the codes, in case the download links are not working for you:


start_tracing.sh


stop_tracing.sh

Extract Oracle Audit Records Script

Today I'll share with you one of the scripts I use to easily retrieve the audit records of an Oracle database user.

Note: Auditing must be enabled on the database or the script will return no rows.
      To enable auditing, please check this article:
      http://www.oracle-base.com/articles/10g/auditing-10gr2.php

Download the script from this link:

Once you run this script it will let you choose the database you want to retrieve data from (in case you have multiple running databases on the server), then it will ask you to enter the username, and lastly ask you to enter the number of days back for which you want to retrieve audit data, or a specific date.

This script is very easy to use, it has been tested on Linux and SUN environments.
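The query behind such a script presumably looks something like the following, built from the standard audit dictionary view DBA_AUDIT_TRAIL (the inputs are placeholders, and the real script's query may differ). This sketch only builds and prints the statement:

```shell
# Build (not run) an audit-retrieval query against DBA_AUDIT_TRAIL.
# DBUSER and DAYS_BACK are placeholder inputs the real script prompts for.
DBUSER=SCOTT
DAYS_BACK=7
AUDIT_SQL="select username, action_name, timestamp, returncode
from dba_audit_trail
where username = upper('$DBUSER')
and timestamp > sysdate - $DAYS_BACK
order by timestamp;"
echo "$AUDIT_SQL"
```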

DISCLAIMER: THIS SCRIPT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY. IT IS PROVIDED "AS IS".


Here is the script code, in case the download link is not working with you:
