Posted by: Monish | January 29, 2023

10 Oracle RAC Conceptual Interview Q&A

Oracle RAC (Real Application Clusters) administration is one of the most important skills a DBA must have to excel in their career. A candidate with a clear understanding of and hands-on experience in Oracle RAC always gets preference. In this post, I have tried to answer 10 conceptual questions that I feel could be useful for candidates preparing for DBA interviews. Please note that RAC is a vast subject and it's not possible to cover it all in a single post; I'll try to create more such posts in the near future.

  1. How do you troubleshoot an instance eviction in a RAC environment?

Ans: Troubleshooting a RAC instance eviction is a complex process, but there are several steps that can be taken to identify and resolve the issue.

  • Check the alert log and trace files of the affected instance for any error messages or clues.
  • Check the Cluster Health Monitor (CHM) or other monitoring tools to see if they have detected any issues with the instance.
  • Check the network connectivity between the nodes to ensure that they are able to communicate properly.
  • Check the storage connectivity to ensure that the instance can access the shared storage.
  • Check the status of the instance in the cluster to see if it is still running or if it has been evicted.
  • Check the status of the other instances in the cluster to see if they are still running and communicating properly with the affected instance.
  • Check the status of the cluster interconnect to ensure that it is functioning properly.
  • Check the status of the cluster services, such as the Cluster Ready Services (CRS) and the Global Enqueue Service (GES), to ensure that they are running properly.

Once the root cause of the issue has been identified, it can be addressed and resolved. This may involve restarting the instance, resolving network or storage issues, or patching the software. It is also important to take steps to prevent the issue from happening again in the future.
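The first bullet (scanning the alert log) is easy to script. Below is a minimal sketch; the log path, the sample log contents, and the exact message strings (ORA-29740, IPC send timeout) are illustrative assumptions to adapt to your own diagnostic destination:

```shell
#!/bin/sh
# Sketch: scan an alert log for common eviction indicators.
# The path and the sample log below are assumptions for demonstration only.
ALERT_LOG=${1:-/tmp/sample_alert.log}

# Build a small sample log so the sketch is self-contained.
cat > "$ALERT_LOG" <<'EOF'
Mon Jan 23 10:15:02 2023
IPC Send timeout detected. Sender: ospid 12345
Mon Jan 23 10:15:10 2023
ORA-29740: evicted by instance number 2, group incarnation 14
EOF

# Flag eviction-related messages for review.
grep -E 'ORA-29740|IPC Send timeout|Reconfiguration started' "$ALERT_LOG"
```

The same grep patterns can be pointed at the real alert log of the affected instance, and at the clusterware (CSSD) logs under the Grid home.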

  2. How do you configure a RAC environment for high availability and disaster recovery?

Ans: Configuring a RAC environment for high availability and disaster recovery involves several key steps, including:

  • Implementing a load balancing solution: Implement a load balancing solution to distribute the workload evenly across the instances in the RAC environment and to ensure high availability.
  • Configuring automatic instance failover: Configure automatic instance failover to ensure that in the event of an instance failure, the workload is automatically redirected to another instance in the RAC environment.
  • Implementing ASM disk mirroring: Implement ASM disk mirroring to provide redundancy and to protect data in the event of a disk failure.
  • Implementing Data Guard: Implement Oracle’s Data Guard to provide disaster recovery capabilities and to protect data in the event of a disaster. Data Guard provides real-time data protection, automatic failover, and data replication.
  • Configuring backup and recovery strategies: Configure backup and recovery strategies to ensure that data can be restored in the event of a disaster or data loss. This includes setting up regular backups, configuring recovery scenarios, and testing disaster recovery procedures.
  • Monitoring the RAC environment: Monitor the RAC environment regularly to ensure that all components are working properly, that the load balancing solution is distributing the workload evenly, and that the disaster recovery solutions are working as expected.

In a RAC environment, it is important to implement a load balancing solution, automatic instance failover, disk mirroring, and disaster recovery solutions to ensure high availability and to protect data. Regular monitoring of the RAC environment is also important to ensure that all components are working properly and that the disaster recovery solutions are working as expected. This will help ensure that the RAC environment is highly available and that data is protected in the event of a disaster or data loss.

  3. How do you manage and monitor the cache fusion process in a RAC cluster?

Ans: Cache fusion is a key component of the RAC environment, as it allows multiple instances to access the same data in a cache, improving performance and scalability. Managing and monitoring cache fusion in a RAC cluster involves several key steps, including:

  • Monitoring global cache service statistics: Monitor global cache service statistics to understand the health and performance of the cache fusion process. This includes monitoring statistics such as cache hits, cache misses, and cache transfer counts.
  • Monitoring the Global Resource Directory (GRD): Monitor the Global Resource Directory (GRD), which records the location and status of data blocks across the instances in the RAC cluster. This can help to identify any performance issues or hot spots in the cache fusion process.
  • Monitoring the interconnect: Monitor the interconnect, which is the communication channel used by cache fusion, to ensure that it is functioning properly and that data is being transferred between instances in a timely manner.
  • Monitoring lock contention: Monitor lock contention, which can occur when multiple instances attempt to access the same data block, to ensure that lock contention is not impacting performance or causing the RAC environment to become unstable.
  • Monitoring performance metrics: Monitor performance metrics such as response time, CPU utilization, and memory usage to ensure that the cache fusion process is not impacting performance or causing the RAC environment to become unstable.
  • Tuning the cache fusion process: Tune the cache fusion process as needed, including adjusting the size of the buffer cache and adjusting the distribution of data blocks across instances.

In a RAC environment, it is important to regularly monitor the cache fusion process to ensure that it is functioning properly and that performance and scalability remain optimized.
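As an illustration of the statistics bullet above, the sketch below derives an average global cache CR block receive time from raw counters. The figures are invented sample values standing in for a query against gv$sysstat; in RAC the time counter is recorded in centiseconds, hence the factor of 10 to get milliseconds:

```shell
#!/bin/sh
# Sketch: compute average "gc cr" block receive time (ms) from raw counters.
# The numbers below are made-up sample values, not real instance output.
cat > /tmp/gc_stats.txt <<'EOF'
gc cr blocks received 120000
gc cr block receive time 36000
EOF

awk '
/gc cr blocks received/    { blocks = $NF }
/gc cr block receive time/ { cs     = $NF }
END { printf "avg receive time: %.1f ms\n", (cs * 10) / blocks }
' /tmp/gc_stats.txt
```

A sustained average of more than a few milliseconds is usually a cue to look at the interconnect rather than the instances themselves.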

  4. How do you handle node evictions and failovers in a RAC environment?

Ans: In a RAC environment, a node eviction occurs when a node leaves the cluster unexpectedly, such as due to a hardware failure or network issue. A failover, on the other hand, is a planned event where one node takes over the workload of another node that is being taken offline for maintenance or upgrade.

To handle node evictions, it is important to have proper monitoring in place to detect the issue as soon as possible and take appropriate action. This can include using Oracle's Cluster Health Monitor (CHM) or other third-party monitoring tools. Once the issue is identified, it is important to diagnose the root cause and take steps to prevent it from happening again in the future.

To handle failovers, it is important to have a clear and well-tested plan in place for how the workload will be transferred from one node to another. This can include using Oracle’s Automatic Storage Management (ASM) or other third-party tools for data replication, as well as configuring proper listener settings and load balancing. It is also important to test and validate the failover process regularly to ensure that it is working properly.

In addition, it is a best practice to have proper redundancy in the RAC cluster, to ensure that if one node fails, the other nodes can take over the workload without interruption. This can include redundant network connections, storage, and power supplies to ensure high availability.

  5. How do you configure and manage ASM in a RAC environment?

Ans: Configuring and managing ASM (Automatic Storage Management) in a RAC environment involves several key steps, including:

  • Installing ASM: Install the ASM software on each node in the RAC environment.
  • Configuring ASM instances: Configure ASM instances on each node in the RAC environment. Each ASM instance is responsible for managing the disk groups that are used by the RAC environment.
  • Creating disk groups: Create disk groups in the ASM environment. Disk groups are collections of disk drives that are managed by ASM and can be used to store data.
  • Adding disk drives to disk groups: Add disk drives to the disk groups in the ASM environment. This will make the disk drives available for use by the RAC environment.
  • Configuring ASM disk mirroring: Configure ASM disk mirroring to provide redundancy and to protect data in the event of a disk failure.
  • Managing disk groups: Manage the disk groups in the ASM environment, including adding or removing disk drives, resizing disk groups, and monitoring disk usage.
  • Monitoring ASM performance: Monitor the performance of the ASM environment to ensure that disk space usage is balanced across the disk groups, and to ensure that disk space usage does not exceed the available space.

In a RAC environment, it is important to ensure that the ASM environment is configured and managed properly to ensure high availability and to protect data from disk failures. Regular monitoring of the performance of the ASM environment is also important to ensure that disk space usage is balanced and that disk usage does not exceed the available space. This will help ensure that the RAC environment is highly available and that data is protected.
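The disk-usage monitoring step can be scripted. The following sketch parses a simplified, invented stand-in for `asmcmd lsdg` output (real lsdg prints more columns) and flags disk groups above a usage threshold:

```shell
#!/bin/sh
# Sketch: flag disk groups whose usage exceeds a threshold.
# The table below is a simplified, made-up stand-in for "asmcmd lsdg" output.
cat > /tmp/lsdg_sample.txt <<'EOF'
NAME   TOTAL_MB  FREE_MB
DATA   1024000   102400
RECO   512000    256000
EOF

THRESHOLD=80   # warn when a disk group is more than 80% used
awk -v t="$THRESHOLD" 'NR > 1 {
    used = (1 - $3 / $2) * 100
    printf "%s %.0f%% used%s\n", $1, used, (used > t ? "  <-- WARNING" : "")
}' /tmp/lsdg_sample.txt
```

Against the real tool the `cat` step would simply be replaced by `asmcmd lsdg`, with the field numbers adjusted to its actual column layout.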

  6. How do you optimize the performance of a RAC cluster?

Ans: Optimizing the performance of a RAC cluster involves several key steps, including:

  • Monitoring performance metrics: Regularly monitor performance metrics such as response time, CPU utilization, and memory usage to identify any potential performance bottlenecks.
  • Load balancing: Implement a load balancing solution to distribute the workload evenly across the instances in the RAC environment. This can help to ensure that performance is optimized and that the RAC environment is highly available.
  • Tuning the interconnect: Tune the interconnect, which is the communication channel used by cache fusion, to ensure that it is functioning properly and that data is being transferred between instances in a timely manner.
  • Optimizing the buffer cache: Optimize the buffer cache to ensure that frequently used data blocks are stored in memory, reducing disk I/O and improving performance.
  • Optimizing the database design: Optimize the database design to ensure that data is stored in an efficient manner and that queries can be executed quickly. This includes designing indexes, partitioning tables, and using materialized views where appropriate.
  • Monitoring and tuning the cache fusion process: Monitor and tune the cache fusion process to ensure that it is functioning properly and that performance is optimized.
  • Monitoring and tuning SQL statements: Monitor and tune SQL statements to ensure that they are executing efficiently and that performance is optimized.

In a RAC environment, it is important to regularly monitor performance metrics and to take steps to optimize performance, such as implementing a load-balancing solution, tuning the interconnect, and optimizing the database design. This will help to ensure that the RAC environment is highly available and that performance is optimized.

  7. How do you implement a rolling upgrade in a RAC environment?

Ans: A rolling upgrade in a RAC environment is a process of upgrading one instance in the cluster at a time, while the other instances continue to operate normally. The steps to implement a rolling upgrade in a RAC environment are as follows:

  • Plan the upgrade: Plan the upgrade by identifying the sequence of instances that will be upgraded, the timeline for the upgrade, and the resources that will be required.
  • Backup the data: Backup the data in the RAC environment to ensure that it is protected and can be restored in the event of a problem.
  • Upgrade the first instance: Upgrade the first instance in the RAC environment by following the appropriate upgrade procedures for your database software. This may involve installing new software, applying patches, or reconfiguring the instance.
  • Verify the first instance: Verify that the first instance has been successfully upgraded by checking the logs, performance metrics, and other sources of information.
  • Repeat the process for each instance: Repeat the process for each instance in the RAC environment, upgrading one instance at a time and verifying each instance before moving on to the next.
  • Synchronize the instances: Once all instances have been upgraded, synchronize the instances in the RAC environment to ensure that they are all running the same version of the software and that they are working together correctly.
  • Verify the RAC environment: Verify the RAC environment as a whole to ensure that it is functioning properly and that performance is optimized.

In a RAC environment, it is important to implement a rolling upgrade in a well-planned and systematic manner to ensure that the RAC environment remains highly available and that performance is optimized.
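The per-instance loop above can be sketched as a dry-run plan generator. The database name, instance naming scheme (PRODDB1..3), node names, and the srvctl/opatch commands are assumptions; the script only prints the sequence so it can be reviewed before anything is run:

```shell
#!/bin/sh
# Dry run only: print the rolling-patch command sequence per instance.
# DB name, instance names, and node names are illustrative assumptions.
DB=PRODDB
{
for i in 1 2 3
do
    echo "# --- racnode$i ---"
    echo "srvctl stop instance -d $DB -i ${DB}${i}"
    echo "opatch apply    # run on racnode$i against its Oracle home"
    echo "srvctl start instance -d $DB -i ${DB}${i}"
done
} > /tmp/rolling_plan.txt
cat /tmp/rolling_plan.txt
```

Generating the plan up front also gives you a ready-made rollback checklist: the same stop/start pairs, applied in reverse with `opatch rollback`.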

  8. How do you handle data inconsistencies in a RAC environment?

Ans: Data inconsistencies can occur in a RAC environment when multiple instances are accessing the same data simultaneously. To handle data inconsistencies in a RAC environment, you should follow these steps:

  • Monitor for inconsistencies: Regularly monitor the RAC environment for data inconsistencies by checking the database logs, performance metrics, and other sources of information.
  • Isolate the cause of the inconsistency: Once you have identified a data inconsistency, isolate the cause of the problem. This may involve reviewing database logs, performance metrics, or other information.
  • Implement corrective actions: Implement corrective actions to resolve the data inconsistency. This may involve resolving the root cause of the problem, such as a misconfigured instance, or taking steps to prevent the problem from recurring in the future.
  • Synchronize the data: Once the data inconsistency has been resolved, synchronize the data between instances in the RAC environment to ensure that all instances have the same up-to-date data.
  • Verify the data: Verify the data in the RAC environment to ensure that it is consistent and accurate.

In a RAC environment, it is important to regularly monitor for data inconsistencies and to implement corrective actions as needed. This will help to ensure that the RAC environment is highly available and that data is accurate and consistent.

  9. How do you implement a load-balancing solution in a RAC environment?

Ans: Implementing a load-balancing solution in a RAC environment involves several key steps, including:

  • Configuring listeners: Configure the Oracle Net listeners on each node in the RAC environment. The listeners are responsible for listening for incoming connections and redirecting them to the appropriate instance.
  • Setting up virtual IPs: Set up virtual IPs (VIPs) that can be used to access the RAC environment. The VIPs provide a single point of access to the RAC environment and allow clients to connect to the environment without knowing which node or instance they are connecting to.
  • Configuring load balancing algorithms: Configure load balancing algorithms that will determine which instance will handle incoming requests. Oracle supports several load balancing algorithms, including round-robin, least connections, and weighted least connections.
  • Implementing load balancing tools: Implement load balancing tools, such as Oracle’s Real Application Clusters (RAC) Load Balancing Advisory (LBA), to help distribute the workload evenly across the instances in the RAC environment.
  • Monitoring performance: Monitor the performance of the RAC environment to ensure that the load balancing solution is working properly and that the instances are not becoming overloaded.

In a RAC environment, it is important to ensure that the load balancing solution is configured properly and that the load balancing algorithms are set up to distribute the workload evenly across the instances. Regular monitoring of the performance of the RAC environment is also important to ensure that the load balancing solution is working properly and that the instances are not becoming overloaded. This will help ensure that the RAC environment is highly available and that clients can access the data they need quickly and efficiently.
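As a hedged illustration of the listener/VIP side, a client tnsnames.ora entry for connect-time load balancing and failover might look like the following; the alias, host names, port, and service name are assumptions for illustration:

```text
PRODDB =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = proddb_svc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )
```

With LOAD_BALANCE=ON the client spreads connection attempts across the address list; server-side (runtime) balancing is configured separately on the service itself so the listeners can redirect work based on instance load.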

  10. How do you implement a backup and recovery strategy for a RAC environment?

Ans: Implementing a backup and recovery strategy for a RAC environment involves several key steps, including:

  • Identifying critical data: Determine which data is critical to the business and must be protected.
  • Creating backup policies: Develop backup policies that define how often backups will be taken, how they will be stored, and how they will be tested.
  • Configuring RMAN: Configure Oracle’s Recovery Manager (RMAN) to manage backups and recoveries in the RAC environment. RMAN is designed to work with a RAC environment and can handle multiple nodes and instances.
  • Setting up backup storage: Determine the backup storage options that will be used, such as disk, tape, or cloud storage, and configure the backup storage to work with RMAN.
  • Performing regular backups: Regularly perform backups of the critical data using RMAN. The frequency of backups should be in accordance with the backup policies.
  • Testing backups: Regularly test the backups to ensure that they can be restored in the event of a failure. This can be done by restoring the backup to a test environment and verifying that the data is complete and accurate.
  • Establishing a disaster recovery plan: Establish a disaster recovery plan that outlines the steps that will be taken in the event of a failure, such as failing over to another node or instance. The disaster recovery plan should include procedures for restoring backups, testing the restored data, and resuming normal operations.
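As a hedged illustration of the RMAN configuration step, a baseline might look like the following; the retention window, parallelism, disk group name, and snapshot controlfile path are assumptions to adjust per your own policies:

```text
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/snapcf_proddb.f';

RUN {
  BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
}
```

In RAC, keeping the snapshot controlfile on shared storage (an ASM disk group) is the usual practice, since any node may run the backup.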

Posted by: Monish | March 12, 2019

Effective UNIX commands for Oracle DBA

One of the most important skill sets you require as an Oracle database administrator is UNIX scripting and a good grip on UNIX commands. In this post I'm not going to explain all the basic UNIX commands, but only some of them which I find very useful during day-to-day support.

 

Process Status:

This ps command will help you to identify which instance and/or database processes are up and running on a particular server.

ps -ef | grep -i pmon

ps -ef | grep -i ora

ps -ef | grep -i tns

 

Find big Directories:

When you receive an alert that one of the Oracle-owned filesystems has hit its threshold limit, the command below will help you to identify the big directories that need your attention first.

du -sk * | sort -nk 1 | tail


 

Remove ^M from files

Sometimes you may see a ^M character at the end of every line after you move a .sql or .ksh file from an external source to your UNIX server and open it with the vi editor. Use the command below in vi command mode to replace this character with blank.

:%s/^M//g        (to enter the ^M, press Ctrl-V followed by Ctrl-M)


You can use the same substitute command to replace any specific word with a new one. For example, :%s/TEST/ORACLE/g replaces the word "TEST" with "ORACLE" throughout the file.

 

Remove one week old logs

During any server housekeeping activity you may identify old log files that are consuming significant space. Use the command below to remove them, keeping only the last 7 days' files. The file type and the timestamp value can be changed based on your requirement.

find . -name "*.xml" -mtime +7 -print -exec rm -f {} \;
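Note that `find` evaluates its expression left to right, so `-mtime +7` should come before `-print`/`-exec`; otherwise every matching name is printed, not just the old ones. A self-contained way to verify the logic safely, using a scratch directory and a backdated file instead of real logs:

```shell
#!/bin/sh
# Demo of the mtime filter: create one "old" and one "new" file in a
# scratch directory, then remove only the files older than 7 days.
DEMO=/tmp/find_demo
rm -rf "$DEMO" && mkdir -p "$DEMO"
touch -t 202001010000 "$DEMO/old.xml"   # backdated well past 7 days
touch "$DEMO/new.xml"                   # just created

# -mtime +7 selects files modified more than 7 days ago; the test comes
# before -print/-exec so only matching files are printed and removed.
find "$DEMO" -name "*.xml" -mtime +7 -print -exec rm -f {} \;
ls "$DEMO"
```

Running a `find ... -print` (without the `-exec rm`) first is a cheap dry run before deleting anything on a production server.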

 

 

AIX Commands to collect server resource details

prtconf |more

prtconf | grep -i "Processor Type"

prtconf | grep -i "bit"

lparstat -i | grep CPU

 

FTP script to copy file

The ftp shell script below is an example of moving export backup dumps.

HOST=<Server Name>
USER=<username>
PASSWD=<password>

exec 4>&1

ftp -nv >&4 2>&4 |&

print -p open $HOST
print -p user $USER $PASSWD
print -p cd /ora/dev/bkup
print -p lcd /opt/bkup/MB
print -p binary
print -p put expdp_325567.log
print -p binary
print -p put expdp_325567.dmp
print -p bye
wait
exit 0

 

Mailx Commands

Modify these mailx commands as per your need and use them with your shell script.

Show log in email body:
mailx -s "EXPORT LOG - $ORACLE_SID" $DBAS < $LOGFILE     

Attach log in email:
uuencode ${REPORT_FILE} ${REPORT_FILE} | mailx -m -s "Report for $ORACLE_SID" ${EMAIL_USER}

 

Find memory size/details in different environments:

cat /proc/meminfo  --> Linux

grep Physical /var/adm/syslog/syslog.log  --> HP-UX

/usr/sbin/prtconf | grep -i mem  --> Solaris

AIX top 15 memory-consuming processes:
svmon -Pt15 | perl -e 'while(<>){print if($.==2||$&&&!$s++);$.=0 if(/^-+$/)}'

 

SCP

In the command below we are moving an export dump file from the source server we are logged in to, to the "/oradata01/bkp" directory of the "Hostname01" server.

This command prompts you for the password of the oracle user on the hostname01 server.

scp -p /oracle/backups/exp/exp_tab_03112018.exp oracle@Hostname01:/oradata01/bkp

 

Some useful Solaris Commands:

# CHECK PROCESS STATUS
prstat

#CHECK CPU UTILIZATION
sar 2 5

# ORATAB
cat /var/opt/oracle/oratab

# 32/64 bit
isainfo -b

 

topas

The topas command reports selected statistics about the activity on the local system; it is the AIX counterpart of the top command. It can be used to identify which processes are consuming the most resources and to check the overall server resource status.

 

Hope this post will be helpful for you. Please leave a comment below with your thoughts. Let me know if you have any queries or suggestions; I will try to reply ASAP.

Following are some of the great books to master this particular area. Use the below links to grab your copy and move up your level.

UNIX CONCEPTS AND APPLICATIONS

Unix for Oracle DBAs Pocket Reference: Command Syntax and Scripts 

Linux Command Line and Shell Scripting Bible

Learning Linux Shell Scripting

Thanks for your time!

Posted by: Monish | November 29, 2017

Oracle ASM Patching Step-By-Step Approach

Oracle ASM patching using the OPatch auto utility is a very common activity for Oracle DBAs. In this post I have tried to capture all the steps that need to be performed to patch the Oracle ASM/Grid and RDBMS homes in one go. The GRID patch being installed here is 24436338.

This article contains mostly the high-level steps; feel free to reach out to me by commenting below if you face any errors or difficulties while performing this activity.

1.0     Prerequisites:

1.1      Opatch Version Check:

Go through the Readme.txt of the patch to make sure all the DB and GI homes contain the recommended OPatch version.

$ $ORACLE_HOME/OPatch/opatch version

If it is not present, then download the latest OPatch utility from Oracle Support (Metalink) to a temporary directory and unzip it into the DB/GI home.

$ unzip <opatch_zip_file> -d $ORACLE_HOME
$ $ORACLE_HOME/OPatch/opatch version

1.2      OCM Configuration

The OPatch utility will prompt for your OCM (Oracle Configuration Manager) response file when it is run. Enter the complete path of the OCM response file if you have already created one in your environment. The OCM response file is mandatory, not optional.

If you do not have the OCM response file (ocm.rsp), create one as follows:

$ export ORACLE_HOME=<ORACLE_HOME path>
$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -output <path>/ocm.rsp

1.3      Check for patch Conflict

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./ -invPtrLoc $ORACLE_HOME/oraInst.loc

2.0     Patch Installation

Connect to root

Set Grid Home

Execute opatch auto command

$ $ORACLE_HOME/OPatch/opatch auto <patch location> -ocmrf <path>/ocm.rsp

[hostname:root]/opt/oracle>$ORACLE_HOME/OPatch/opatch auto /opt/oracle/24436338 -ocmrf /opt/oracle/product/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp

In the sample output below you can see that the patch is first applied to the RDBMS home and then to the GRID home:

Executing /opt/oracle/product/11.2.0.4/grid/perl/bin/perl /opt/oracle/product/11.2.0.4/grid/OPatch/crs/patch11203.pl -patchdir /opt/oracle -patchn 24436338 -ocmrf /opt/oracle/product/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp -paramfile /opt/oracle/product/11.2.0.4/grid/crs/install/crsconfig_params

This is the main log file: /opt/oracle/product/11.2.0.4/grid/cfgtoollogs/opatchauto2016-12-01_08-28-38.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:

/opt/oracle/product/11.2.0.4/grid/cfgtoollogs/opatchauto2016-12-01_08-28-38.report.log

2016-12-01 08:28:38: Starting Oracle Restart Patch Setup

Using configuration parameter file: /opt/oracle/product/11.2.0.4/grid/crs/install/crsconfig_params

Stopping RAC /opt/oracle/product/11.2.0.4/dbhome_1 ...

Stopped RAC /opt/oracle/product/11.2.0.4/dbhome_1 successfully

patch /opt/oracle/24436338/24006111  apply successful for home  /opt/oracle/product/11.2.0.4/dbhome_1

patch /opt/oracle/24436338/23054319/custom/server/23054319  apply successful for home  /opt/oracle/product/11.2.0.4/dbhome_1

Stopping CRS...

Stopped CRS successfully

patch /opt/oracle/24436338/24006111  apply successful for home  /opt/oracle/product/11.2.0.4/grid

patch /opt/oracle/24436338/23054319  apply successful for home  /opt/oracle/product/11.2.0.4/grid

patch /opt/oracle/24436338/22502505  apply successful for home  /opt/oracle/product/11.2.0.4/grid

Starting CRS...

CRS-4123: Oracle High Availability Services has been started.

Starting RAC /opt/oracle/product/11.2.0.4/dbhome_1 ...

Started RAC /opt/oracle/product/11.2.0.4/dbhome_1 successfully

opatch auto succeeded.

 

Start the database

cd $ORACLE_HOME/rdbms/admin

sqlplus / as sysdba

startup

@catbundle.sql psu apply

 

3.0     Post patching activity

Recompile Invalid Objects

@?/rdbms/admin/utlrp.sql

Check that the latest patch version is reflected at the DB level. Also check the opatch lsinventory output of both homes for the patch apply status.

set lines 300
set pages 30
col ACTION_TIME for a30
col ID for a10
col ACTION for a15
col VERSION for a15
col BUNDLE for a15
col COMMENTS for a30
select substr(action_time,1,30) as action_time, substr(id,1,10) as id,substr(action,1,10) as action,substr(version,1,8) as version, substr(BUNDLE_SERIES,1,6) as bundle, substr(comments,1,20) as comments from registry$history;

 
=========

Readers are leaders. You can only master the craft if you constantly study and keep up the hands-on practice. Following are some of the great books for your reference.

Please use the below links to grab your copy. Thank you! 🙂

Expert Oracle Database Architecture (by Thomas Kyte and Darl Kuhn) –> https://amzn.to/2EZ8eUE

Expert Oracle Database Architecture: Oracle Database 9i, 10g, and 11g Programming Techniques and Solutions –> https://amzn.to/2F17qPh

Oracle Database 12c DBA Handbook (Oracle Press) –> https://amzn.to/2F0tJoh
RMAN Recipes for Oracle Database 12c-A Problem-Solution Approach –> https://amzn.to/2EYY3ja
Oracle Database 12c Performance Tuning Recipes –> https://amzn.to/2F01uGj

Oracle RMAN 11g Backup and Recovery (Oracle Press) –> https://amzn.to/2NQ3OT9

OCA/OCP Oracle Database 12c All-in-One Exam Guide (Exams 1Z0-061, 1Z0-062, & 1Z0-063) –> https://amzn.to/2F0jTD0

OCA/OCP Oracle Database 11g All-in-One Exam Guide with CD-ROM –> https://amzn.to/2SPPJWJ

Expert Oracle RAC Performance Diagnostics and Tuning –> https://amzn.to/2NTXxWr

Pro Oracle Database 11g Administration (Expert’s Voice in Oracle) –> https://amzn.to/2NRj0iD

Posted by: Monish | April 21, 2017

SSH Key-based Host Authentication Configuration in OEM

If we want to schedule or submit a centralized job/patching activity through OEM that is intended to connect to multiple hosts, this setup will help.

Since the SSH credential is set as the preferred credential for the host, the agent can run commands through the SSH client on the host to perform the requested operations.

 Advantages of SSH Key-based Host Authentication:

  • The encryption used by SSH provides confidentiality and integrity of data over an insecure network.
  • SSH also protects the system against DNS spoofing attacks.
  • SSH was designed as a replacement for FTP, telnet and other insecure remote shells, which send information, notably passwords, in plaintext, leaving them open to interception.
  • This makes SSH a better choice in production environments than telnet/FTP and other username/password-based authentication.


Please feel free to comment on the post if you have any queries.

 

 

Posted by: Monish | January 5, 2015

How to change the default Listener in 12c ASM

Please note that these steps need to be completed before the default listener is dropped/removed. If the default listener has already been dropped, it must be added back first and then the new listener name configured.

 

Add the new listener:

srvctl add listener -l L_HOSTNAME -p TCP:1527 -o /oracle/GRID/121_2

 

Check the status to confirm the listener has been added:


[oracle@HOSTNAME admin]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       HOSTNAME                 STABLE
ora.LISTENER.lsnr
               OFFLINE OFFLINE      HOSTNAME                 STABLE
ora.L_HOSTNAME.lsnr
               ONLINE  ONLINE       HOSTNAME                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       HOSTNAME                 STABLE
ora.asm
               ONLINE  ONLINE       HOSTNAME                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      HOSTNAME                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.xyz.db
      1        ONLINE  ONLINE       HOSTNAME                 Open,STABLE
ora.cssd
      1        ONLINE  ONLINE       HOSTNAME                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       HOSTNAME                 STABLE
--------------------------------------------------------------------------------

 

Modify with the new value:

[oracle@HOSTNAME admin]$ srvctl modify asm -l L_HOSTNAME
[oracle@HOSTNAME admin]$ srvctl config asm
ASM home: <CRS home>
Password file: +DATA/orapwasm
ASM listener: L_HOSTNAME
Spfile: +DATA/ASM/ASMPARAMETERFILE/registry.253.877930002
ASM diskgroup discovery string:
[oracle@HOSTNAME admin]$

 

Remove the old listener

[oracle@HOSTNAME admin]$ srvctl remove listener -l LISTENER

 

Check the status

[oracle@HOSTNAME admin]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       HOSTNAME                 STABLE
ora.L_HOSTNAME.lsnr
               ONLINE  ONLINE       HOSTNAME                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       HOSTNAME                 STABLE
ora.asm
               ONLINE  ONLINE       HOSTNAME                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      HOSTNAME                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.xyz.db
      1        ONLINE  ONLINE       HOSTNAME                 Open,STABLE
ora.cssd
      1        ONLINE  ONLINE       HOSTNAME                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       HOSTNAME                 STABLE
--------------------------------------------------------------------------------

 

Crosscheck the ASM configuration:

[oracle@HOSTNAME admin]$ srvctl config asm
ASM home: <CRS home>
Password file: +DATA/orapwasm
ASM listener: L_HOSTNAME
Spfile: +DATA/ASM/ASMPARAMETERFILE/registry.253.877930002
ASM diskgroup discovery string:
[oracle@HOSTNAME admin]$
Posted by: Monish | April 28, 2011

EXP using pipe

Using the following script we can take an export backup through a named pipe, compressing the dump on the fly with gzip.
Change the DIRECTORY value to your backup location.

Assuming the script is saved as exp_pipe.ksh, execute it as:

nohup ./exp_pipe.ksh $SID &

-----------------------------------------------------------------------------------------
#!/usr/bin/ksh
if [ $# -ne 1 ]
then
   echo "Usage : $0 ORACLE_SID"
   exit 1
fi

export ORACLE_SID=$1
export NLS_LANG='AMERICAN_AMERICA.WE8ISO8859P1'

export ORACLE_HOME=/u01/oracle/product/9.2.0
DATE=`date +%Y%m%d`
DIRECTORY=/u01/backup; export DIRECTORY
FILENAME=${DIRECTORY}/${ORACLE_SID}_${DATE}.dmp.gz; export FILENAME
LOG=${DIRECTORY}/${ORACLE_SID}_${DATE}.log; export LOG
PIPE=${DIRECTORY}/export_pipe_${DATE}.dmp; export PIPE

# Recreate the named pipe
rm -f ${PIPE}
test -p ${PIPE} || mknod ${PIPE} p
chmod +rw ${PIPE}

ls -l $PIPE
echo "cat $PIPE | /usr/bin/gzip > ${FILENAME}"
echo "${ORACLE_HOME}/bin/exp log=${LOG} file=${PIPE} owner=scott"
# gzip reads from the pipe in the background while exp writes the dump into it
cat $PIPE | /usr/bin/gzip > ${FILENAME} &
${ORACLE_HOME}/bin/exp log=${LOG} file=${PIPE} STATISTICS=NONE owner=scott <<!
db_user/password123@${ORACLE_SID}
!
-----------------------------------------------------------------------------------------

To run the same script on HP-UX, change the gzip location to /usr/contrib/bin/gzip.
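The pipe-plus-gzip pattern used by the script can be demonstrated without Oracle at all. The following is only a minimal sketch: a plain echo stands in for the exp utility, mkfifo is used instead of mknod (they are equivalent for creating a named pipe), and all paths are illustrative.

```shell
#!/bin/sh
# Minimal sketch of the named-pipe + gzip pattern from the script above.
# A plain echo stands in for exp; paths are illustrative.
set -e
DIR=$(mktemp -d)
PIPE="$DIR/export_pipe.dmp"
OUT="$DIR/export.dmp.gz"

mkfifo "$PIPE"                    # equivalent to: mknod "$PIPE" p

# Reader: compress whatever arrives on the pipe, in the background
gzip < "$PIPE" > "$OUT" &
GZPID=$!

# Writer: in the real script this is exp writing the dump file
echo "pretend this is exp output" > "$PIPE"
wait "$GZPID"

RESULT=$(gunzip -c "$OUT")        # recover the original data
echo "$RESULT"
rm -rf "$DIR"
```

The point of the pattern is that the dump never lands on disk uncompressed: the reader consumes the pipe as fast as the writer fills it.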

----------------------

Hi All,

After a long time I'm posting on my blog. I was very busy and literally did not get much time, and on top of that I changed companies. New company, new environment!

Anyway, before coming to the main topic of the post, I want to share the story behind this issue.

One day I got a request to compile the invalid objects on a production database. A very basic activity, so I ran utlrp.sql to compile them. But all of them remained invalid. I ran it again; still the same. I was amazed, as I had never faced such an issue. So I went to look at which objects were invalid and who owned them, and found most of them belonged to the APPS schema.

So I tried dbms_utility.compile_schema to compile the APPS schema. And BANG!! You won't believe it: within 15 minutes the invalid object count reached 20K. Then and there all the users and the client started complaining that the application was not working. Nothing was working. At that moment I felt like I was standing at a marriage ceremony NAKED! I aborted the utility and checked: all 20K invalid objects belonged to the APPS schema. I asked many seniors, but nobody had a clue, and utlrp.sql was useless.

We started looking at all possible aspects and found that PLSQL_CODE_TYPE was set to NATIVE. That means PL/SQL uses the C compiler to compile the units. And here is the story: there was no C compiler on the server.

It was a huge issue, because it was a weekday during pure business hours, and the business was down because of the invalid objects.

We did not have time to install a C compiler, because it was a 64-bit Solaris server and the compiler comes on CDs from the vendor; the customer said he had 30 CDs and didn't know which one had the software.

So we changed the PLSQL_CODE_TYPE parameter to INTERPRETED and compiled the objects manually. We prepared a script and started compiling them. It took a really long time, but it worked.

SQL> select 'alter package '||object_name||' compile body;' from user_objects where status='INVALID' and object_type='PACKAGE BODY' and rownum<1001;

Anyway, the database came back and the business resumed.

Now, the main thing: finding the ROOT CAUSE. After all sorts of research we found that long ago someone had changed the plsql_code_type parameter from INTERPRETED to NATIVE without a C compiler and without following the proper steps. So when we tried to compile the invalid objects, the database looked for the C compiler, and there was none.

NATIVE is faster than INTERPRETED, so although the issue was resolved, there was still a performance concern and we had to move back to NATIVE. The steps follow.

The steps:

1. Ensure that following are properly configured to support native compilation:

* A supported C compiler is installed in the database environment and that the spnc_commands file has the correct command templates for this compiler.
* The PLSQL_NATIVE_LIBRARY_DIR is set. This is a required system-level only parameter that specifies the full path and directory name of the location of the shared libraries that contain natively compiled PL/SQL code. The value must be explicit and point to an existing, accessible directory; the path cannot contain a variable such as ORACLE_HOME. Use the ALTER SYSTEM command or update the initialization file to set the parameter value.
* The PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT is set correctly for the number of natively compiled units after conversion.
* A test PL/SQL unit can be compiled natively.
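The first two checks above can be scripted before touching the database. This is only a hedged sketch: the ORACLE_HOME default, the compiler names probed, and the spnc_commands location ($ORACLE_HOME/plsql/spnc_commands) are assumptions to adapt for your platform and release.

```shell
#!/bin/sh
# Hedged sketch: pre-checks before switching PLSQL_CODE_TYPE to NATIVE.
# The ORACLE_HOME default and the spnc_commands path are assumptions;
# adapt them to your platform and release.
ORACLE_HOME=${ORACLE_HOME:-/u01/oracle/product/9.2.0}

# 1. Is a C compiler available on this server?
if command -v cc >/dev/null 2>&1 || command -v gcc >/dev/null 2>&1; then
    CC_STATUS=found
else
    CC_STATUS=MISSING
fi
echo "C compiler: $CC_STATUS"

# 2. Does the spnc_commands template file exist?
if [ -f "$ORACLE_HOME/plsql/spnc_commands" ]; then
    SPNC_STATUS=present
else
    SPNC_STATUS=MISSING
fi
echo "spnc_commands: $SPNC_STATUS"
```

Either check reporting MISSING means natively compiled units will fail exactly the way described in the story above, so resolve both before proceeding.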

2. Shut down application services, the listener, and the database.

* Shut down all of the Application services including the Forms Processes, Web Servers, Reports Servers, and Concurrent Manager Servers. After shutting down all of the Application services, ensure that all of the connections to the database have been terminated.
* Shut down the TNS listener of the database to ensure that no new connections are made.
* Shut down the database in normal or immediate mode as the user SYS.

3. Set PLSQL_CODE_TYPE to NATIVE in the initialization parameter file. If the database is using a server parameter file, then set this after the database has started. See “PLSQL_CODE_TYPE Initialization Parameter”.

The value of PLSQL_CODE_TYPE does not affect the conversion of the PL/SQL units in these steps. However, it does affect all subsequently compiled units and it should be explicitly set to the compilation type that you want.

4. Start up the database in upgrade mode, using the UPGRADE option.

5. Execute the following code to list the invalid PL/SQL units. You can save the output of the query for future reference with the SQL SPOOL command.

REM To save the output of the query to a file: SPOOL pre_update_invalid.log
SELECT o.OWNER, o.OBJECT_NAME, o.OBJECT_TYPE
FROM DBA_OBJECTS o, DBA_PLSQL_OBJECT_SETTINGS s
WHERE o.OBJECT_NAME = s.NAME AND o.STATUS='INVALID';
REM To stop spooling the output: SPOOL OFF

If any Oracle supplied units are invalid, try to validate them. For example:

ALTER PACKAGE OLAPSYS.DBMS_AWM COMPILE BODY REUSE SETTINGS;

6. Execute the following query to determine how many objects are compiled NATIVE and INTERPRETED. Use the SQL SPOOL command if you want to save the output.

SELECT TYPE, PLSQL_CODE_TYPE, COUNT(*) FROM DBA_PLSQL_OBJECT_SETTINGS
WHERE PLSQL_CODE_TYPE IS NOT NULL
GROUP BY TYPE, PLSQL_CODE_TYPE
ORDER BY TYPE, PLSQL_CODE_TYPE;

Any objects with a NULL plsql_code_type are special internal objects and can be ignored.

7. Run the $ORACLE_HOME/rdbms/admin/dbmsupgnv.sql script as the user SYS to update the plsql_code_type setting to NATIVE in the dictionary tables for all PL/SQL units. This process also invalidates the units. Use TRUE with the script to exclude package specifications; FALSE to include the package specifications.
@$ORACLE_HOME/rdbms/admin/dbmsupgnv.sql TRUE


This update must be done when the database is in UPGRADE mode. The script is guaranteed to complete successfully or rollback all the changes.

8. Shut down the database and restart in NORMAL mode.

9. Before you run the utlrp.sql script, Oracle recommends that no other sessions are connected to avoid possible problems. You can ensure this with:

ALTER SYSTEM ENABLE RESTRICTED SESSION;

10. Run the $ORACLE_HOME/rdbms/admin/utlrp.sql script as the user SYS. This script recompiles all the PL/SQL modules using a default degree of parallelism. See the comments in the script for information on setting the degree explicitly.

If for any reason the script is abnormally terminated, rerun the utlrp.sql script to recompile any remaining invalid PL/SQL modules.

11. After the compilation completes successfully, verify that there are no new invalid PL/SQL units using the query in point 5. You can spool the output of the query to the post_upgrade_invalid.log file and compare the contents with the pre_upgrade_invalid.log file, if it was created previously.

12. Re-execute the query in point 6. If recompiling with dbmsupgnv.sql, confirm that all PL/SQL units, except TYPE specifications and package specifications if excluded, are NATIVE. If recompiling with dbmsupgin.sql, confirm that all PL/SQL units are INTERPRETED.

13. Disable the restricted session mode for the database, then start the services that you previously shut down. To disable restricted session mode:

ALTER SYSTEM DISABLE RESTRICTED SESSION;


Error: ORA-1684
Text: max # extents (%s) reached in table %s.%s partition %s
---------------------------------------------------------------------------
Cause: A table tried to extend past maxextents
Action: If maxextents is less than the system maximum, raise it. Otherwise,
you must recreate with larger initial, next or pctincrease params
