Recommendation for the Real Application Cluster Interconnect and Jumbo Frames (Doc ID 341788.1)

 

Basic ping test method when configuring RAC

 

- When configuring the private network, the MTU is often changed from 1500 to 9000 (Jumbo Frames), but incorrect settings frequently cause problems, for example the switch not being changed to the same MTU, wrong NIC settings, and so on.

 

1. Run the ping test as shown below.

 

[node01]$ ping -c 2 -M do -s 8972 node02-priv


PING node02-priv (10.10.10.2) 8972(9000) bytes of data.
8980 bytes from node02-priv (10.10.10.2): icmp_seq=0 ttl=64 time=0.220 ms
8980 bytes from node02-priv (10.10.10.2): icmp_seq=1 ttl=64 time=0.197 ms

 

[node01]$ ping -c 2 -M do -s 8973 node02-priv


From node02-priv (10.10.10.1) icmp_seq=0 Frag needed and DF set (mtu = 9000)
From node02-priv (10.10.10.1) icmp_seq=1 Frag needed and DF set (mtu = 9000)
--- node02-priv ping statistics ---
0 packets transmitted, 0 received, +2 errors

 

Here, the normal (expected) result is that 8972 succeeds and 8973 fails.

 

The value 8972 comes from 9000 - 28, which accounts for the 28 bytes of per-packet overhead (20-byte IP header plus 8-byte ICMP header).
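
If the 8972-byte ping fails, the usual suspects are the NIC MTU on one of the nodes or on the switch ports. A minimal check/set sketch, assuming the private interface is named eth1 and a RHEL-style ifcfg file (both are assumptions; adjust to your environment):

[node01]$ ip link show eth1 | grep mtu                           # current MTU on the private NIC
[node01]$ sudo ip link set eth1 mtu 9000                         # change it online (not persistent across reboots)
[node01]$ grep -i mtu /etc/sysconfig/network-scripts/ifcfg-eth1  # RHEL/OL: MTU=9000 here makes it persistent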

 

2. You can also verify this with the traceroute command.

 

[node01] $ traceroute -F node02-priv 9000


traceroute to node02-priv (10.10.10.2), 30 hops max, 9000 byte packets
1 node02-priv (10.10.10.2) 0.232 ms 0.176 ms 0.160 ms

 

[node01] $ traceroute -F node02-priv 9001


traceroute to node02-priv (10.10.10.2), 30 hops max, 9001 byte packets
traceroute: sendto: Message too long
1 traceroute: wrote node02-priv 9001 chars, ret=-1

 

As above, success at 9000 and failure at 9001 is the normal (expected) result.
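
The same checks should also pass in the opposite direction. For example, from the second node (assuming the first node's private hostname is node01-priv):

[node02]$ ping -c 2 -M do -s 8972 node01-priv
[node02]$ traceroute -F node01-priv 9000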


- Actions to take when the FRA (Fast Recovery Area) becomes full

 

1. Increase DB_RECOVERY_FILE_DEST_SIZE.

 

alter system set db_recovery_file_dest_size=500G SCOPE=BOTH;
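
Before or after resizing, it is worth checking how full the FRA actually is and which file types are consuming it. A quick check using the standard views V$RECOVERY_FILE_DEST and V$RECOVERY_AREA_USAGE (11g and later names):

SQL> select name, space_limit/1024/1024/1024 limit_gb,
            space_used/1024/1024/1024 used_gb,
            space_reclaimable/1024/1024/1024 reclaimable_gb
     from v$recovery_file_dest;

SQL> select file_type, percent_space_used, percent_space_reclaimable, number_of_files
     from v$recovery_area_usage;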

 

- Recommended setting so that archived logs are written to the FRA for backup

SQL> alter system set log_archive_dest_10='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both;
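
To confirm the destination is valid and active after the change, one way (a sketch using the standard V$ARCHIVE_DEST view) is:

SQL> select dest_name, destination, status
     from v$archive_dest
     where dest_name = 'LOG_ARCHIVE_DEST_10';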

 

2. Back up the recovery area with RMAN.

 

RMAN> BACKUP RECOVERY AREA;
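
Note that BACKUP RECOVERY AREA goes to the SBT (tape) channel by default; to write the backup to a disk location instead, the TO DESTINATION clause can be used. A sketch, assuming /backup/fra is a valid path on the server:

RMAN> BACKUP RECOVERY AREA TO DESTINATION '/backup/fra';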

 

3. If files were deleted at the OS level, run CROSSCHECK and DELETE in RMAN.

 

RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK ARCHIVELOG ALL;

RMAN> Delete expired backup;
RMAN> Delete expired archivelog all;
RMAN> Delete force obsolete;
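
If the FRA is full mostly because of archived logs that are already backed up, they can also be removed by age. A sketch that deletes archived logs completed more than one day ago (adjust the window, and verify the backups first):

RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-1';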

 

4. Drop restore points that are no longer needed.


SQL> select * from v$restore_point;
SQL> Drop restore point <restore_point_name>;

 

5. If there still is not enough space, turn off Flashback Database.

 

SQL> Alter database FLASHBACK OFF;
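
To confirm the current state before and after turning it off:

SQL> select flashback_on from v$database;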

 

6. Change the backup retention policy.

 

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
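
After tightening the retention policy, backups that fall outside the new window can be reported and then removed so their FRA space becomes reclaimable (standard RMAN commands):

RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;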



Background Process descriptions (based on 12.1)

Table F-1 Background Processes

Columns (in order for each entry below): Name, Expanded Name, Short Description, Long Description, External Properties

ABMR

Auto BMR Background Process

Coordinates execution of tasks such as filtering duplicate block media recovery requests and performing flood control

When a process submits a block media recovery request to ABMR, it dynamically spawns slave processes (BMRn) to perform the recovery. ABMR and BMRn terminate after being idle for a long time.

See Also: Oracle Database Backup and Recovery User's Guide

Database instances

ACFS

ASM Cluster File System CSS Process

Tracks the cluster membership in CSS and informs the file system driver of membership changes

ACFS delivers CSS membership changes to the Oracle cluster file system. These membership changes are required for the file system to maintain file system consistency within the cluster.

Oracle ASM instances, Oracle RAC

ACMS

Atomic Control File to Memory Service Process

Coordinates consistent updates to a control file resource with its SGA counterpart on all instances in an Oracle RAC environment

The ACMS process works with a coordinating caller to ensure that an operation is executed on every instance in Oracle RAC despite failures. ACMS is the process in which a distributed operation is called. As a result, this process can exhibit a variety of behaviors. In general, ACMS is limited to small, nonblocking state changes for a limited set of cross-instance operations.

Database instances, Oracle RAC

APnn

Database Apply Process Coordinator Process

Obtains transactions from the reader server and passes them to apply servers

The coordinator process name is APnn, where nn can include letters and numbers.

For more information about the coordinator process, see V$STREAMS_APPLY_COORDINATOR for Oracle Streams, V$XSTREAM_APPLY_COORDINATOR for XStream, and V$GG_APPLY_COORDINATOR for Oracle GoldenGate.

See Also: Oracle Streams Concepts and Administration and Oracle Database XStream Guide

Database instances, Logical Standby, Streams Apply, XStream Inbound servers, XStream Outbound servers, GoldenGate Integrated Replicat

AQPC

AQ Process Coordinator

Per instance AQ global coordinator

AQPC is responsible for performing administrative tasks for AQ Master Class Processes including commands like starting, stopping, and other administrative tasks. This process is automatically started on instance startup.

Database instances, Advanced Queueing

ARBn

ASM Rebalance Process

Rebalances data extents within an ASM disk group

Possible processes are ARB0-ARB9 and ARBA.

Oracle ASM instances

ARCn

Archiver Process

Copies the redo log files to archival storage when they are full or an online redo log switch occurs

ARCn processes exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled, in which case ARCn automatically archives online redo log files. LGWR cannot reuse and overwrite an online redo log group until it has been archived.

The database starts multiple archiver processes as needed to ensure that the archiving of filled online redo logs does not fall behind. Possible processes include ARC0-ARC9 and ARCa-ARCt.

The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the number of ARCn processes that the database initially invokes.

See Also: Oracle Database Concepts and Oracle Database Administrator's Guide

Database instances

ARSn

ASM Recovery Slave Process

Recovers ASM transactional operations

The ASM RBAL background process coordinates and spawns one or more of these slave processes to recover aborted ASM transactional operations. These processes run only in the Oracle ASM instance.

Possible processes are ARS0-ARS9.

Oracle ASM instances

ASMB

ASM Background Process

Communicates with the ASM instance, managing storage and providing statistics

ASMB runs in Oracle ASM instances when the ASMCMD cp command runs or when the database instance first starts if the server parameter file is stored in Oracle ASM. ASMB also runs with Oracle Cluster Registry on Oracle ASM.

Database instances, Oracle ASM instances

ASnn

Database Apply Reader or Apply Server

  • Computes dependencies between logical change records (LCRs) and assembles messages into transactions (Reader Server)

  • Applies LCRs to database objects or passes LCRs and user messages to their appropriate apply handlers (Apply Server)

When the reader server finishes computing dependencies between LCRs and assembling transactions, it returns the assembled transactions to the coordinator process. Query V$STREAMS_APPLY_READER, V$XSTREAM_APPLY_READER, and V$GG_APPLY_READER for information about the reader server background process.

An apply server receives the transactions from the coordinator background process, and either applies database changes in LCRs or sends LCRs or messages to apply handlers. Apply servers can also enqueue a queue. If an apply server encounters an error, then it tries to resolve the error with a user-specified conflict handler or error handler. If an apply server cannot resolve an error, then it rolls back the transaction and places the entire transaction, including all of its messages, in the error queue. When an apply server commits a completed transaction, this transaction has been applied. When an apply server places a transaction in the error queue and commits, this transaction also has been applied. Query V$STREAMS_APPLY_SERVER for information about the apply server background process. For XStream Inbound servers, query V$XSTREAM_APPLY_SERVER. For GoldenGate Integrated Replicat, query V$GG_APPLY_SERVER.

The coordinator process name is ASnn, where nn can include letters and numbers.

Database instances, XStream Outbound servers, XStream Inbound servers, GoldenGate Integrated Replicat

BMRn

Automatic Block Media Recovery Slave Pool Process

Fetches blocks from a real-time readable standby database

When a process submits a block media recovery request to ABMR, it dynamically spawns slave processes (BMRn) to perform the recovery. BMRn processes fetch blocks from a real-time readable standby database. ABMR and BMRn terminate after being idle for a long time.

See Also: Oracle Database Backup and Recovery User's Guide

Database instances

Bnnn

ASM Blocking Slave Process for GMON

Performs maintenance actions on Oracle ASM disk groups

Bnnn performs actions that require waiting for resources on behalf of GMON. GMON must be highly available and cannot wait.

A Bnnn slave is spawned when a disk is taken offline in an Oracle ASM disk group. Offline timer processing and drop of the disk are performed in this slave. Up to five processes (B000 to B004) can exist depending on the load.

Oracle ASM instances

BWnn

Database Writer Process

Writes modified blocks from the database buffer cache to the data files

See the Long Description for the DBWn process in this table for more information about the BWnn process.

Database instances

CJQ0

Job Queue Coordinator Process

Selects jobs that need to be run from the data dictionary and spawns job queue slave processes (Jnnn) to run the jobs

CJQ0 is automatically started and stopped as needed by Oracle Scheduler.

The JOB_QUEUE_PROCESSES initialization parameter specifies the maximum number of processes that can be created for the execution of jobs. CJQ0 starts only as many job queue processes as required by the number of jobs to run and available resources.

See Also: Oracle Database Concepts and Oracle Database Administrator's Guide

Database instances

CKPT

Checkpoint Process

Signals DBWn at checkpoints and updates all the data files and control files of the database to indicate the most recent checkpoint

At specific times CKPT starts a checkpoint request by messaging DBWn to begin writing dirty buffers. On completion of individual checkpoint requests, CKPT updates data file headers and control files to record the most recent checkpoint.

CKPT checks every three seconds to see whether the amount of memory exceeds the value of the PGA_AGGREGATE_LIMIT initialization parameter, and if so, takes the action described in "PGA_AGGREGATE_LIMIT".

See Also: Oracle Database Concepts

Database instances, Oracle ASM instances

CPnn

Database Capture Process

Captures database changes from the redo log by using the infrastructure of LogMiner

The capture process name is CPnn, where nn can include letters and numbers. The underlying LogMiner process name is MSnn, where nn can include letters and numbers. The capture process includes one reader server that reads the redo log and divides it into regions, one or more preparer servers that scan the redo log, and one builder server that merges redo records from the preparer servers. Each reader server, preparer server, and builder server is a process. Query the V$STREAMS_CAPTURE, V$XSTREAM_CAPTURE, and V$GOLDENGATE_CAPTURE views for information about this background process.

See Also: Oracle Streams Concepts and Administration and Oracle Database XStream Guide

Database instances, XStream Outbound Servers, Oracle Streams

CSnn

I/O Calibration Process

Issues I/Os to storage as part of storage calibration.

CSnn slave processes are started on execution of the DBMS_RESOURCE_MANAGER.CALIBRATE_IO() procedure. There is one slave process per CPU on each node of the database.

Database instances, Oracle RAC

CTWR

Change Tracking Writer Process

Tracks changed data blocks as part of the Recovery Manager block change tracking feature

CTWR tracks changed blocks as redo is generated at a primary database and as redo is applied at a standby database. The process is slightly different depending on the type of database.

See Also: Oracle Database Backup and Recovery User's Guide

Database instances

CXnn

Streams Propagation Sender Process

Sends LCRs to a propagation receiver

The propagation sender process name is CXnn, where nn can include letters and numbers. In an Oracle Streams combined capture and apply optimization, the propagation sender sends LCRs directly to the propagation receiver to improve performance. The propagation receiver passes the LCRs to an apply process. QueryV$PROPAGATION_SENDER for information about a propagation sender.

Database instances, XStream Outbound Server, Oracle Streams

DBRM

Database Resource Manager Process

Sets resource plans and performs other tasks related to the Database Resource Manager

If a resource plan is not enabled, then this process is idle.

See Also: Oracle Database Administrator's Guide

Database instances

DBWn

Database Writer Process

Writes modified blocks from the database buffer cache to the data files

The primary responsibility of the Database Writer Process is to write data blocks to disk. It also handles checkpoints, file open synchronization, and logging of Block Written records.

In many cases the blocks that the Database Writer Process writes are scattered throughout the disk. Thus, the writes tend to be slower than the sequential writes performed by LGWR. The Database Writer Process performs multiblock writes when possible to improve efficiency. The number of blocks written in a multiblock write varies by operating system.

The DB_WRITER_PROCESSES initialization parameter specifies the number of Database Writer Processes. There can be 1 to 100 Database Writer Processes. The names of the first 36 Database Writer Processes are DBW0-DBW9 and DBWa-DBWz. The names of the 37th through 100th Database Writer Processes are BW36-BW99. The database selects an appropriate default setting for the DB_WRITER_PROCESSES parameter or adjusts a user-specified setting based on the number of CPUs and processor groups.

See Also"DB_WRITER_PROCESSESOpens a new window"

Database instances

DIA0

Diagnostic Process

Detects and resolves hangs and deadlocks

Database instances, Oracle ASM instances

    DIAG

    Diagnostic Capture Process

    Performs diagnostic dumps

    DIAG performs diagnostic dumps requested by other processes and dumps triggered by process or instance termination. In Oracle RAC, DIAG performs global diagnostic dumps requested by remote instances.

    Database instances, Oracle ASM instances

    DMnn

    Data Pump Master Process

    Coordinates the Data Pump job tasks performed by Data Pump worker processes and handles client interactions

    The Data Pump master (control) process is started during job creation and coordinates all tasks performed by the Data Pump job. It handles all client interactions and communication, establishes all job contexts, and coordinates all worker process activities on behalf of the job.

    Database instances, Data Pump

    DMON

    Data Guard Broker Monitor Process

    Manages and monitors a database that is part of a Data Guard broker configuration

    When you start the Data Guard broker, a DMON process is created. DMON runs for every database instance that is managed by the broker. DMON interacts with the local database and the DMON processes of the other databases to perform the requested function. DMON also monitors the health of the broker configuration and ensures that every database has a consistent description of the configuration.

    DMON maintains profiles about all database objects in the broker configuration in a binary configuration file. A copy of this file is maintained by the DMON process for each of the databases that belong to the broker configuration. The process is created when the DG_BROKER_START initialization parameter is set to true.

    See Also: Oracle Data Guard Broker

    Database instances, Data Guard

    Dnnn

    Dispatcher Process

    Performs network communication in the shared server architecture

    In the shared server architecture, clients connect to a dispatcher process, which creates a virtual circuit for each connection. When the client sends data to the server, the dispatcher receives the data into the virtual circuit and places the active circuit on the common queue to be picked up by an idle shared server. The shared server then reads the data from the virtual circuit and performs the database work necessary to complete the request. When the shared server must send data to the client, the server writes the data back into the virtual circuit and the dispatcher sends the data to the client. After the shared server completes the client request, the server releases the virtual circuit back to the dispatcher and is free to handle other clients.

    Several initialization parameters relate to shared servers. The principal parameters are: DISPATCHERS, SHARED_SERVERS, MAX_SHARED_SERVERS, LOCAL_LISTENER, REMOTE_LISTENER.

    See Also: Oracle Database Concepts

    Database instances, shared servers

    DSKM

    Slave Diskmon Process

    Acts as the conduit between the database, Oracle ASM instances, and the Master Diskmon daemon to communicate information to Exadata storage

    This process is active only if Exadata Storage is used. DSKM performs operations related to Exadata I/O fencing and Exadata cell failure handling.

    Oracle ASM instances, Exadata

    DWnn

    Data Pump Worker Process

    Performs Data Pump tasks as assigned by the Data Pump master process

    The Data Pump worker process is responsible for performing tasks that are assigned by the Data Pump master process, such as the loading and unloading of metadata and data.

    Database instances

    EMNC

    EMON Coordinator Process

    Coordinates database event management and notifications

    EMNC is a master background process that coordinates event management and notification activity in the database, including Streams Event Notifications, Continuous Query Notifications, and Fast Application Notifications.

    Database instances

    Ennn

    EMON Slave Process

    Performs database event management and notifications

    The database event management and notification load is distributed among the EMON slave processes. These processes work on the system notifications in parallel, offering a capability to process a larger volume of notifications, a faster response time, and a lower shared memory use for staging notifications.

    Database instances

    FBDA

    Flashback Data Archiver Process

    Archives historical rows for tracked tables into flashback data archives and manages archive space, organization, and retention

    When a transaction that modifies a tracked table commits, FBDA stores the pre-image of the rows in the archive. FBDA maintains metadata on the current rows and tracks how much data has been archived.

    FBDA is also responsible for automatically managing the flashback data archive for space, organization (partitioning tablespaces), and retention. FBDA also keeps track of how far the archiving of tracked transactions has progressed.

    See Also: Oracle Database Development Guide

    Database instances

    FDnn

    Oracle ASM Stale FD Cleanup Slave Process

    Cleans up Oracle ASM stale file descriptors on foreground processes

    This process cleans up Oracle ASM stale file descriptors on foreground processes if an Oracle ASM disk is globally closed.

    Database and Oracle ASM instances

    FENC

    Fence Monitor Process

    Processes fence requests for RDBMS instances which are using Oracle ASM instances

    CSS monitors RDBMS instances which are connected to the Oracle ASM instance and constantly doing I/Os. When the RDBMS instance terminates due to a failure, all the outstanding I/O's from the RDBMS instance should be drained and any new I/O's rejected. FENC receives and processes the fence request from CSSD.

    Oracle ASM instances

    FMON

    File Mapping Monitor Process

    Manages mapping information for the Oracle Database file mapping interface

    The DBMS_STORAGE_MAP package enables you to control the mapping operations. When instructed by the user, FMON builds mapping information and stores it in the SGA, refreshes the information when a change occurs, saves the information to the data dictionary, and restores it to the SGA at instance startup.

    FMON is started by the database whenever the FILE_MAPPING initialization parameter is set to true.

    Database instances, Oracle ASM instances

    FSFP

    Data Guard Broker Fast Start Failover Pinger Process

    Maintains fast-start failover state between the primary and target standby databases

    FSFP is created when fast-start failover is enabled.

    Database instances, Data Guard

    GCRn

    Global Conflict Resolution Slave Process

    Performs synchronous tasks on behalf of LMHB

    GCRn processes are transient slaves that are started and stopped as required by LMHB to perform synchronous or resource intensive tasks.

    Database instances, Oracle ASM instances, Oracle RAC

    GEN0

    General Task Execution Process

    Performs required tasks including SQL and DML

     

    Database instances, Oracle ASM instances, Oracle ASM Proxy instances

    GMON

    ASM Disk Group Monitor Process

    Monitors all mounted Oracle ASM disk groups

    GMON monitors all the disk groups mounted in an Oracle ASM instance and is responsible for maintaining consistent disk membership and status information. Membership changes result from adding and dropping disks, whereas disk status changes result from taking disks offline or bringing them online.

    Oracle ASM instances

    GTXn

    Global Transaction Process

    Provides transparent support for XA global transactions in an Oracle RAC environment

    These processes help maintain the global information about XA global transactions throughout the cluster. Also, the processes help perform two-phase commit for global transactions anywhere in the cluster so that an Oracle RAC database behaves as a single system to the externally coordinated distributed transactions.

    The GLOBAL_TXN_PROCESSES initialization parameter specifies the number of GTXn processes, where n is 0-9 or a-j. The database automatically tunes the number of these processes based on the workload of XA global transactions. You can disable these processes by setting the parameter to 0. If you try to run XA global transactions with these processes disabled, an error is returned.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide

    Database instances, Oracle RAC

    Innn

    Disk and Tape I/O Slave Process

    Serves as an I/O slave process spawned on behalf of DBWR, LGWR, or an RMAN backup session

    I/O slave process can be configured on platforms where asynchronous I/O support is not available. These slaves are started by setting the corresponding slave enable parameter in the server parameter file. The I/O slaves simulate the asynchronous I/O behavior when the underlying platform does not have native support for asynchronous I/O.

    Database instances

    IMCO

    In-memory Database Process

    Initiates background population and repopulation of in-memory enabled objects

    The IMCO background process initiates population (prepopulation) of in-memory enabled objects with priority LOW/MEDIUM/HIGH/CRITICAL. In-memory enabled objects with priority NONE will not be prepopulated but will be populated on demand via Wnnn processes when queried. The IMCO background process can also initiate repopulation of in-memory objects.

    Database instances

    INSV

    Data Guard Broker Instance Slave Process

    Performs Data Guard broker communication among instances in an Oracle RAC environment

    INSV is created when the DG_BROKER_START initialization parameter is set to true.

    Database instances, Data Guard

    IPC0

    IPC Service Background Process

    Common background server for basic messaging and RDMA primitives based on IPC (Inter-process communication) methods.

    IPC0 handles very high rates of incoming connect requests, as well as, completing reconfigurations to support basic messaging and RDMA primitives over several transports such as UDP, RDS, InfiniBand and RC.

    Oracle RAC

    Jnnn

    Job Queue Slave Process

    Executes jobs assigned by the job coordinator

    Job slave processes are created or awakened by the job coordinator when it is time for a job to be executed.

    Job slaves gather all the metadata required to run the job from the data dictionary. The slave processes start a database session as the owner of the job, execute triggers, and then execute the job. After the job is complete, the slave processes commit and then execute appropriate triggers and close the session. The slave can repeat this operation in case additional jobs need to be run.

    Database instances

    LCKn

    Lock Process

    Manages global enqueue requests and cross-instance broadcasts

    The process handles all requests for resources other than data blocks. For example, LCKn manages library and row cache requests. Possible processes are LCK0 and LCK1.

    Database instances, Oracle ASM instances, Oracle RAC

    LDDn

    Global Enqueue Service Daemon Helper Slave

    Helps the LMDn processes with various tasks

    LDDn processes are slave processes spawned on demand by LMDn processes. They are spawned to help the dedicated LMDn processes with various tasks when certain workloads start creating performance bottlenecks. These slave processes are transient as they are started on demand and they can be shutdown when no longer needed. There can be up to 36 of these slave processes (LDD0-LDDz).

    Database instances, Oracle ASM instances, Oracle RAC

    LGnn

    Log Writer Worker

    Writes redo log

    On multiprocessor systems, LGWR creates worker processes to improve the performance of writing to the redo log. LGWR workers are not used when there is a SYNC standby destination. Possible processes include LG00-LG99.

    Database instances

    LGWR

    Log Writer Process

    Writes redo entries to the online redo log

    Redo log entries are generated in the redo log buffer of the system global area (SGA). LGWR writes the redo log entries sequentially into a redo log file. If the database has a multiplexed redo log, then LGWR writes the redo log entries to a group of redo log files.

    See Also: Oracle Database Concepts and Oracle Database Administrator's Guide

    Database instances, Oracle ASM instances

    LMDn

    Global Enqueue Service Daemon Process

    Manages incoming remote resource requests from other instances

    LMDn processes enqueue resources managed under Global Enqueue Service. In particular, they process incoming enqueue request messages and control access to global enqueues. They also perform distributed deadlock detections. There can be up to 36 of these processes (LMD0-LMDz).

    Database instances, Oracle ASM instances, Oracle RAC

    LMHB

    Global Cache/Enqueue Service Heartbeat Monitor

    Monitor the heartbeat of several processes

    LMHB monitors the CKPT, DIAn, LCKn, LGnn, LGWR, LMDn, LMON, LMSn , and RMSn processes to ensure they are running normally without blocking or spinning.

    Database instances, Oracle ASM instances, Oracle RAC

    LMON

    Global Enqueue Service Monitor Process

    Monitors an Oracle RAC cluster to manage global resources

    LMON maintains instance membership within Oracle RAC. The process detects instance transitions and performs reconfiguration of GES and GCS resources.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide

    Database instances, Oracle ASM instances, Oracle RAC

    LMSn

    Global Cache Service Process

    Manages resources and provides resource control among Oracle RAC instances

    LMSn, where n is 0-9 or a-z, maintains a lock database for Global Cache Service (GCS) and buffer cache resources. This process receives, processes, and sends GCS requests, block transfers, and other GCS-related messages.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide

    Database instances, Oracle ASM instances, Oracle RAC

    LREG

    Listener Registration Process

    Registers the instance with the listeners

    LREG notifies the listeners about instances, services, handlers, and endpoint.

    Database instances, Oracle ASM instances, Oracle RAC

    LSP0

    Logical Standby Coordinator Process

    Schedules transactions for Data Guard SQL Apply

    LSP0 is the initial process created upon startup of Data Guard SQL Apply. In addition to managing LogMiner and Apply processes, LSP0 is responsible for maintaining inter-transaction dependencies and appropriately scheduling transactions with applier processes. LSP0 is also responsible for detecting and enabling run-time parameter changes for the SQL Apply product as a whole.

    Database instances, Data Guard

    LSP1

    Logical Standby Dictionary Build Process

    Performs a logical standby dictionary build on a primary database

    The LSP1 process is spawned on a logical standby database that is intended to become the new primary database. A logical standby database becomes a primary database because of switchover or failover. The dictionary is necessary for logical standby databases to interpret the redo of the new primary database.

    Database instances, Data Guard

    LSP2

    Logical Standby Set Guard Process

    Determines which database objects will be protected by the database guard

    The LSP2 process is created as needed during startup of SQL Apply to update the list of objects that are protected by the database guard.

    Database instances, Data Guard

    Lnnn

    Pooled Server Process

    Handles client requests in Database Resident Connection Pooling

    In Database Resident Connection Pooling, clients connect to a connection broker process. When a connection becomes active, the connection broker hands off the connection to a compatible pooled server process. The pooled server process performs network communication directly on the client connection and processes requests until the client releases the server. After being released, the connection is returned to the broker for monitoring, leaving the server free to handle other clients.

    See Also: Oracle Database Concepts

    Database instances, Database Resident Connection Pooling

    MARK

    Mark AU for Resynchronization Coordinator Process

    Marks ASM allocation units as stale following a missed write to an offline disk

    MARK essentially tracks which extents require resynchronization for offline disks. This process runs in the database instance and is started when the database instance first begins using the Oracle ASM instance. If required, MARK can also be started on demand when disks go offline in the Oracle ASM redundancy disk group.

    Database instances, Oracle ASM instances

    MMAN

    Memory Manager Process

    Serves as the instance memory manager

    This process performs the resizing of memory components on the instance.

    Database instances, Oracle ASM instances

    MMNL

    Manageability Monitor Lite Process

    Performs tasks relating to manageability, including active session history sampling and metrics computation

    MMNL performs many tasks relating to manageability, including session history capture and metrics computation.

    Database instances, Oracle ASM instances

    MMON

    Manageability Monitor Process

    Performs or schedules many manageability tasks

    MMON performs many tasks related to manageability, including taking Automatic Workload Repository snapshots and performing Automatic Database Diagnostic Monitor analysis.

    Database instances, Oracle ASM instances

    Mnnn

    MMON Slave Process

    Performs manageability tasks on behalf of MMON

    Mnnn performs manageability tasks dispatched to them by MMON. Tasks performed include taking Automatic Workload Repository snapshots and Automatic Database Diagnostic Monitor analysis.

    Database instances, Oracle ASM instances

    MRP0

    Managed Standby Recovery Process

    Coordinates the application of redo on a physical standby database

    MRP0 is spawned at the start of redo apply on a physical standby database. This process handles the extraction of redo and coordinates the application of that redo on a physical standby database.

    See Also: Oracle Data Guard Concepts and Administration

    Database instances, Data Guard

    MSnn

    LogMiner Worker Process

    Reads redo log files and translates and assembles into transactions

    Multiple MSnn processes can exist, where n is 0-9 or a-Z. A minimum of three MSnn processes work as a group to provide transactions to a LogMiner client, for example, a logical standby database or a database capture. There may be more than one such group, for example, multiple capture processes configured for either local or downstream capture in a database.

    Database instances, Logical Standby, Oracle Streams, XStream Outbound servers, Oracle GoldenGate

    Nnnn

    Connection Broker Process

    Monitors idle connections and hands off active connections in Database Resident Connection Pooling

    In Database Resident Connection Pooling, clients connect to a connection broker process. When a connection becomes active, the connection broker hands off the connection to a compatible pooled server process. The pooled server process performs network communication directly on the client connection and processes requests until the client releases the server. After being released, the connection is returned to the broker for monitoring, leaving the server free to handle other clients.

    See Also: Oracle Database Concepts

    Database instances, Database Resident Connection Pooling

    NSSn

    Network Server SYNC Process

    Transfers redo from current online redo logs to remote standby destinations configured for SYNC transport

    NSSn can run as multiple processes, where n is 1-9 or A.

    See Also: Oracle Data Guard Concepts and Administration

    Database instances, Data Guard

    NSVn

    Data Guard Broker NetSlave Process

    Performs broker network communications between databases in a Data Guard environment

    NSVn is created when a Data Guard broker configuration is enabled. There can be as many NSVn processes (where n is 0-9 or A-U) created as there are databases in the Data Guard broker configuration.

    Database instances, Data Guard

    OCFn

    ASM CF Connection Pool Process

    Maintains a connection to the Oracle ASM instance for metadata operations

     

    Database instances, Oracle ASM instances

    OFSD

    Oracle File Server Background Process

    Serves file system requests submitted to an Oracle instance

    This background process listens for new file system requests, both management (like mount, unmount, and export) and I/O requests, and executes them using Oracle threads.

    Database instances, Oracle RAC

    Onnn

    ASM Connection Pool Process

    Maintains a connection to the Oracle ASM instance for metadata operations

    Onnn slave processes are spawned on demand. These processes communicate with the Oracle ASM instance.

    Database instances, Oracle ASM instances

    PING

    Interconnect Latency Measurement Process

    Assesses latencies associated with communications for each pair of cluster instances

    Every few seconds, the process in one instance sends messages to each instance. The message is received by PING on the target instance. The time for the round trip is measured and collected.

    Database instances, Oracle ASM instances, Oracle RAC

    PMON

    Process Monitor

    Monitors the other background processes and performs process recovery when a server or dispatcher process terminates abnormally

    PMON periodically performs cleanup of all the following:

    • Processes that died abnormally

    • Sessions that were killed

    • Detached transactions that have exceeded their idle timeout

    • Detached network connections which have exceeded their idle timeout

    In addition, PMON monitors, spawns, and stops the following as needed:

    • Dispatcher and shared server processes

    • Job queue processes

    • Pooled server processes for database resident connection pooling

    • Restartable background processes

    See Also: Oracle Database Concepts and Oracle Database Net Services Administrator's Guide

    Database instances, Oracle ASM instances, Oracle ASM Proxy instances

    Pnnn

    Parallel Query Slave Process

    Perform parallel execution of a SQL statement (query, DML, or DDL)

    Parallel Query has two components: a foreground process that acts as query coordinator and a set of parallel slaves (Pnnn) that are background processes. These background processes are spawned or reused during the start of a parallel statement. They receive and perform units of work sent from the query coordinator.

    The maximum number of Pnnn processes is controlled by the initialization parameter PARALLEL_MAX_SERVERS. Slave processes are numbered from 0 to the PARALLEL_MAX_SERVERS setting. If the query is a GV$ query, then these background processes are numbered backward, starting from PPA7.

    Database instances, Oracle ASM instances

    PRnn

    Parallel Recovery Process

    Performs tasks assigned by the coordinator process performing parallel recovery

    PRnn serves as a slave process for the coordinator process performing parallel media recovery and carries out tasks assigned by the coordinator. The default number of these processes is based on number of CPUs.

    Database instances

    PSP0

    Process Spawner Process

    Spawns Oracle background processes after initial instance startup

     

    Database instances, Oracle ASM instances

    QMNC

    Non-sharded queue master process

    Monitors AQ

    QMNC is the non-sharded queue master process responsible for facilitating various background activities required by AQ and Oracle Streams: time management of messages, management of nonpersistent queues, cleanup of resources, and so on. QMNC dynamically spawns Qnnn processes as needed for performing these tasks.

    Note that if the AQ_TM_PROCESSES initialization parameter is set to 0, this process will not start. The database writes the following message to the alert log: WARNING: AQ_TM_PROCESSES is set to 0. System might be adversely affected.

    Database instances, Advanced Queueing

    QMnn

    AQ Master Class Process

    Per instance per AQ Master Class Process

    Each of this type of process represents a single class of work item such as AQ notification, queue monitors, and cross process.

    Database instances, Advanced Queueing

    Qnnn

    AQ Server Class Process

    Per AQ Master Class server process

    Each server class process acts on behalf of an AQ master class process. This relationship is maintained until the master requires services of a particular service process. Once released, the server class processes are moved to a free server pool.

    Database instances, Advanced Queueing

    RBAL

    ASM Rebalance Master Process

    Coordinates rebalance activity

    In an Oracle ASM instance, it coordinates rebalance activity for disk groups. In a database instance, it manages Oracle ASM disk groups.

    Database instances, Oracle ASM instances

    RCBG

    Result Cache Background Process

    Handles result cache messages

    This process is used for handling invalidation and other messages generated by server processes attached to other instances in Oracle RAC.

    Database instances, Oracle RAC

    RECO

    Recoverer Process

    Resolves distributed transactions that are pending because of a network or system failure in a distributed database

    RECO uses the information in the pending transaction table to finalize the status of in-doubt transactions. At timed intervals, the local RECO attempts to connect to remote databases and automatically complete the commit or rollback of the local portion of any pending distributed transactions. All transactions automatically resolved by RECO are removed from the pending transaction table.

    See Also: Oracle Database Concepts and Oracle Database Net Services Administrator's Guide

    Database instances

    RM

    RAT Masking Slave Process

    Extracts and masks bind values from workloads like SQL tuning sets and DB Replay capture files

    This background process is used with Data Masking and Real Application Testing.

    Database instances

    RMON

    Rolling Migration Monitor Process

    Manages the rolling migration procedure for an Oracle ASM cluster

    The RMON process is spawned on demand to run the protocol for transitioning an ASM cluster in and out of rolling migration mode.

    Oracle ASM instance, Oracle RAC

    RMSn

    Oracle RAC Management Process

    Performs manageability tasks for Oracle RAC

    RMSn performs a variety of tasks, including creating resources related to Oracle RAC when new instances are added to a cluster.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide

    Database instances, Oracle RAC

    RMVn

    Global Cache Service Remaster Process

    Performs remastering for cluster reconfiguration and dynamic remastering

    Each RMV is a slave process for LMSn to handle remastering work. They are also helper processes for LMS to handle non-critical work from global cache service.

    Database instances, Oracle RAC

    Rnnn

    ASM Block Remap Slave Process

    Remaps a block with a read error

    A database instance reading from an Oracle ASM disk group can encounter an error during a read. If possible, Oracle ASM asynchronously schedules a Rnnn slave process to remap this bad block from a mirror copy.

    Oracle ASM instances

    RPnn

    Capture Processing Worker Process

    Processes a set of workload capture files

    RPnn are worker processes spawned by calling DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir, parallel_level). Each worker process is assigned a set of workload capture files to process.

    Worker processes execute in parallel without needing to communicate with each other. After each process is finished processing its assigned files, it exits and informs its parent process.

    The number of worker processes is controlled by the parallel_level parameter of DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE. By default, parallel_level is null. Then, the number of worker processes is computed as follows:

    SELECT VALUE 
    FROM   V$PARAMETER 
    WHERE  NAME='cpu_count';
    

    When parallel_level is 1, no worker processes are spawned.

    Database instances

    RPOP

    Instant Recovery Repopulation Daemon

    Responsible for re-creating and/or repopulating data files from snapshot files and backup files

    The RPOP process is responsible for re-creating and repopulating data files from snapshot files. It works with the instant recovery feature to ensure immediate data file access. The local instance has immediate access to the remote snapshot file's data, while repopulation of the recovered primary data files happens concurrently. Any changes in the data are managed between the instance's DBW processes and RPOP to ensure the latest copy of the data is returned to the user.

    Database instances

    RSM0

    Data Guard Broker Worker Process

    Performs monitoring management tasks related to Data Guard on behalf of DMON

    The process is created when a Data Guard broker configuration is enabled.

    Database instances, Data Guard

    RSMN

    Remote Slave Monitor Process

    Manages background slave process creation and communication on remote instances in Oracle RAC

    This background process manages the creation of slave processes and the communication with their coordinators and peers. These background slave processes perform tasks on behalf of a coordinating process running in another cluster instance.

    Database instances, Oracle RAC

    RVWR

    Recovery Writer Process

    Writes flashback data to the flashback logs in the fast recovery area

    RVWR writes flashback data from the flashback buffer in the SGA to the flashback logs. RVWR also creates flashback logs and performs some tasks for flashback log automatic management.

    Database instances, Flashback Database

    SAnn

    SGA Allocator

    Allocates SGA

    A small fraction of SGA is allocated during instance startup. The SAnn process allocates the rest of SGA in small chunks. The process exits upon completion of SGA allocation.

    The possible processes are SA00 - SAzz.

    Database instances

    SCCn

    ASM Disk Scrubbing Slave Check Process

    Performs Oracle ASM disk scrubbing check operation

    SCCn acts as a slave process for SCRB and performs the checking operations. The possible processes are SCC0-SCC9.

    Oracle ASM instances

    SCRB

    ASM Disk Scrubbing Master Process

    Coordinates Oracle ASM disk scrubbing operations

    SCRB runs in an Oracle ASM instance and coordinates Oracle ASM disk scrubbing operations.

    Oracle ASM instances

    SCRn

    ASM Disk Scrubbing Slave Repair Process

    Performs Oracle ASM disk scrubbing repair operation

    SCRn acts as a slave process for SCRB and performs the repairing operations. The possible processes are SCR0-SCR9.

    Oracle ASM instances

    SCVn

    ASM Disk Scrubbing Slave Verify Process

    Performs Oracle ASM disk scrubbing verify operation

    SCVn acts as a slave process for SCRB and performs the verifying operations. The possible processes are SCV0-SCV9.

    Oracle ASM instances

    SMCO

    Space Management Coordinator Process

    Coordinates the execution of various space management tasks

    This background process coordinates the execution of various space management tasks, including proactive space allocation and space reclamation. SMCO dynamically spawns slave processes (Wnnn) to implement these tasks.

    Database instances

    SMON

    System Monitor Process

    Performs critical tasks such as instance recovery and dead transaction recovery, and maintenance tasks such as temporary space reclamation, data dictionary cleanup, and undo tablespace management

    SMON performs many database maintenance tasks, including the following:

    • Creates and manages the temporary tablespace metadata

    • Reclaims space used by orphaned temporary segments

    • Maintains the undo tablespace by onlining, offlining, and shrinking the undo segments based on undo space usage statistics

    • Cleans up the data dictionary when it is in a transient and inconsistent state

    • Maintains the SCN to time mapping table used to support Oracle Flashback features

    In an Oracle RAC database, the SMON process of one instance can perform instance recovery for other instances that have failed.

    SMON is resilient to internal and external errors raised during background activities.

    See Also: Oracle Database Concepts

    Database instances

    Snnn

    Shared Server Process

    Handles client requests in the shared server architecture

    In the shared server architecture, clients connect to a dispatcher process, which creates a virtual circuit for each connection. When the client sends data to the server, the dispatcher receives the data into the virtual circuit and places the active circuit on the common queue to be picked up by an idle shared server. The shared server then reads the data from the virtual circuit and performs the database work necessary to complete the request. When the shared server must send data to the client, the server writes the data back into the virtual circuit and the dispatcher sends the data to the client. After the shared server completes the client request, the server releases the virtual circuit back to the dispatcher and is free to handle other clients.

    Several initialization parameters relate to shared servers. The principal parameters are: DISPATCHERS, SHARED_SERVERS, MAX_SHARED_SERVERS, LOCAL_LISTENER, REMOTE_LISTENER.

    See Also: Oracle Database Concepts

    Database instances, shared servers

    TEMn

    ASM disk Test Error Emulation Process

    Emulates I/O errors on Oracle ASM disks through named events

    I/O errors can be emulated on Oracle ASM disk I/O through named events. The scope can be the process, instance, or even cluster. Optionally, a set of AUs can be chosen for error emulation.

    Oracle ASM instances

    TTnn

    Redo Transport Slave Process

    Ships redo from current online and standby redo logs to remote standby destinations configured for ASYNC transport

    TTnn can run as multiple processes, where nn is 00 to ZZ.

    See Also: Oracle Data Guard Concepts and Administration

    Database instances, Data Guard

    Unnn

    Container process for threads

    Host processes where database processes execute as threads.

    Unnn processes are database container operating system processes where database background processes like SMON, CJQ0, and database foreground processes run. The V$PROCESS view lists database processes running in these container processes. These container processes are created only when the THREADED_EXECUTION initialization parameter is set to TRUE. The number of these processes varies depending on the active database processes. On a host with multiple NUMA nodes, there will be at least one Unnn process per NUMA node.

    These processes are fatal processes: if any of them is killed, the instance terminates. These processes exit when the instance is shut down or terminated.

    Database instances

    VBGn

    Volume Background Process

    Communicates between the Oracle ASM instance and the operating system volume driver

    VBGn handles messages originating from the volume driver in the operating system and sends them to the Oracle ASM instance.

    VBGn can run as multiple processes, where n is 0-9.

    Oracle ASM instances, Oracle ASM Proxy instances

    VDBG

    Volume Driver Process

    Forwards Oracle ASM requests to perform various volume-related tasks

    VDBG handles requests to lock or unlock an extent for rebalancing, volume resize, disk offline, add or drop a disk, force and dismount disk group to the Dynamic Volume Manager driver.

    Oracle ASM instances, Oracle ASM Proxy instances

    VIO0, VIO1, VIO2, VIO3

    Volume I/O

    Route ADVM volume I/O for ASM instances on compute nodes within an Exadata

    These processes handle requests for I/Os targeted at storage not locally accessible. They are used for Exadata targeted storage as well. These background processes only start when an ASM Volume is created and set up to be used. One process will start for each NUMA node on target machines. Under normal operation on non-Exadata hardware and on Exadata hardware that is not utilizing ASM volumes, these processes will not be started.

    Oracle ASM Proxy instances

    VKRM

    Virtual Scheduler for Resource Manager Process

    Serves as centralized scheduler for Resource Manager activity

    VKRM manages the CPU scheduling for all managed Oracle processes. The process schedules managed processes in accordance with an active resource plan.

    Database instances

    VKTM

    Virtual Keeper of Time Process

    Provides a wall clock time and reference time for time interval measurements

    VKTM acts as a time publisher for an Oracle instance. VKTM publishes two sets of time: a wall clock time using a seconds interval and a higher resolution time (which is not wall clock time) for interval measurements. The VKTM timer service centralizes time tracking and offloads multiple timer calls from other clients.

    Database instances, Oracle ASM instances

    VMB0

    Volume Membership Process

    Maintains cluster membership on behalf of the Oracle ASM volume driver

    This process maintains membership in the cluster as an I/O-capable client on behalf of the Oracle ASM volume driver.

    Oracle ASM instances, Oracle ASM Proxy instances

    VUBG

    Volume drive Umbilicus Background

    Relays messages between Oracle ASM instance and Oracle ASM Proxy instance that is used by ADVM (for ACFS)

     

    Oracle ASM instances, Oracle ASM Proxy instances

    Wnnn

    Space Management Slave Process

    Performs various background space management tasks, including proactive space allocation and space reclamation

    Wnnn slave processes perform work on behalf of Space Management and on behalf of the Oracle In-Memory Option.

    • Wnnn processes are slave processes dynamically spawned by SMCO to perform space management tasks in the background. These tasks include preallocating space into locally managed tablespace and SecureFiles segments based on space usage growth analysis, and reclaiming space from dropped segments. After being started, the slave acts as an autonomous agent. After it finishes task execution, it automatically picks up another task from the queue. The process terminates itself after being idle for a long time.

    • Wnnn processes execute in-memory populate and in-memory repopulate tasks for population or repopulation of in-memory enabled objects. For in-memory, both the IMCO background process and foreground processes will utilize Wnnnslaves for population and repopulation. Wnnn processes are utilized by the IMCO background process for prepopulation of in-memory enabled objects with priority LOW/MEDIUM/HIGH/CRITICAL, and for repopulation of in-memory objects. In-memory populate and repopulate tasks running on Wnnn slaves are also initiated from foreground processes in response to queries and DMLs that reference in-memory enabled objects.

    Database instances

    XDMG

    Exadata Automation Manager

    Initiates automation tasks involved in managing Exadata storage

    XDMG monitors all configured Exadata cells for state changes, such as a bad disk getting replaced, and performs the required tasks for such events. Its primary tasks are to watch for when inaccessible disks and cells become accessible again, and to initiate the ASM ONLINE operation. The ONLINE operation is handled by XDWK.

    Oracle ASM instances, Exadata

    XDWK

    Exadata Automation Manager

    Performs automation tasks requested by XDMG

    XDWK gets started when asynchronous actions such as ONLINE, DROP, and ADD an Oracle ASM disk are requested by XDMG. After a 5 minute period of inactivity, this process will shut itself down.

    Oracle ASM instances, Exadata

    Xnnn

    ASM Disk Expel Slave Process

    Performs Oracle ASM post-rebalance activities

    This process expels dropped disks after an Oracle ASM rebalance.

    Oracle ASM instances
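
    For reference, a quick way to see which of these background processes are actually running on an instance is to join V$BGPROCESS to V$PROCESS (standard dynamic performance views; the join on PADDR filters out processes that are defined but not started):

    SQL> select b.name, b.description, p.spid
         from   v$bgprocess b, v$process p
         where  b.paddr = p.addr
         order  by b.name;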




    Background Process descriptions (11.2.0.2 and later), excerpted from the Oracle manual


    Columns (in order for each entry below): Name, Expanded Name, Short Description, Long Description, External Properties

    ABMR

    Auto BMR Background Process

    Coordinates execution of tasks such as filtering duplicate block media recovery requests and performing flood control

    When a process submits a block media recovery request to ABMR, it dynamically spawns slave processes (BMRn) to perform the recovery. ABMR and BMRn terminate after being idle for a long time.

    See Also: Oracle Database Backup and Recovery User's Guide

    Database instance

    ACFS

    ASM Cluster File System CSS Process

    Tracks the cluster membership in CSS and informs the file system driver of membership changes

    ACFS delivers CSS membership changes to the Oracle cluster file system. These membership changes are required for the file system to maintain file system consistency within the cluster.

    ASM instance, Oracle RAC

    ACMS

    Atomic Control File to Memory Service Process

    Coordinates consistent updates to a control file resource with its SGA counterpart on all instances in an Oracle RAC environment

    The ACMS process works with a coordinating caller to ensure that an operation is executed on every instance in Oracle RAC despite failures. ACMS is the process in which a distributed operation is called. As a result, this process can exhibit a variety of behaviors. In general, ACMS is limited to small, nonblocking state changes for a limited set of cross-instance operations.

    Database instance, Oracle RAC

    APnn

    Logical Standby / Streams Apply Process Coordinator Process

    Obtains transactions from the reader server and passes them to apply servers

    The coordinator process name is APnn, where nn can include letters and numbers.

    See Also: Oracle Streams Concepts and Administration

    Database instance, Data Guard, Oracle Streams

    ARBn

    ASM Rebalance Process

    Rebalances data extents within an ASM disk group

    Possible processes are ARB0-ARB9 and ARBA.

    ASM instance

    ARCn

    Archiver Process

    Copies the redo log files to archival storage when they are full or an online redo log switch occurs

    ARCn processes exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled, in which case ARCn automatically archives online redo log files. LGWR cannot reuse and overwrite an online redo log group until it has been archived.

    The database starts multiple archiver processes as needed to ensure that the archiving of filled online redo logs does not fall behind. Possible processes include ARC0-ARC9 and ARCa-ARCt.

The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the number of ARCn processes that the database initially invokes.

    See Also: Oracle Database Concepts and Oracle Database Administrator's Guide

    Database instance

    ASMB

    ASM Background Process

    Communicates with the ASM instance, managing storage and providing statistics

    ASMB runs in ASM instances when the ASMCMD cp command runs or when the database instance first starts if the server parameter file is stored in ASM. ASMB also runs with Oracle Cluster Registry on ASM.

    Database and ASM instances

    ASnn

    Logical Standby / Streams Apply Process Reader Server or Apply Server

    • Computes dependencies between logical change records (LCRs) and assembles messages into transactions (Reader Server)

    • Applies LCRs to database objects or passes LCRs and user messages to their appropriate apply handlers (Apply Server)

When the reader server finishes computing dependencies between LCRs and assembling transactions, it returns the assembled transactions to the coordinator process. Query V$STREAMS_APPLY_READER for information about the reader server background process.

An apply server receives the transactions from the coordinator background process, and either applies database changes in LCRs or sends LCRs or messages to apply handlers. Apply servers can also enqueue messages into a queue. If an apply server encounters an error, then it tries to resolve the error with a user-specified conflict handler or error handler. If an apply server cannot resolve an error, then it rolls back the transaction and places the entire transaction, including all of its messages, in the error queue. When an apply server commits a completed transaction, this transaction has been applied. When an apply server places a transaction in the error queue and commits, this transaction also has been applied. Query V$STREAMS_APPLY_SERVER for information about the apply server background process.

The reader server and apply server process names are ASnn, where nn can include letters and numbers.

    Database instance

    BMRn

    Automatic Block Media Recovery Slave Pool Process

    Fetches blocks from a real-time readable standby database

When a process submits a block media recovery request to ABMR, it dynamically spawns slave processes (BMRn) to perform the recovery. BMRn processes fetch blocks from a real-time readable standby database. ABMR and BMRn terminate after being idle for a long time.

    See Also: Oracle Database Backup and Recovery User's Guide

    Database instance

    Bnnn

    ASM Blocking Slave Process for GMON

    Performs maintenance actions on ASM disk groups

    Bnnn performs actions that require waiting for resources on behalf of GMON. GMON must be highly available and cannot wait.

A Bnnn slave is spawned when a disk is taken offline in an ASM disk group. Offline timer processing and drop of the disk are performed in this slave. Up to five processes (B000 to B004) can exist, depending on the load.

    ASM instance

    CJQ0

    Job Queue Coordinator Process

    Selects jobs that need to be run from the data dictionary and spawns job queue slave processes (Jnnn) to run the jobs

    CJQ0 is automatically started and stopped as needed by Oracle Scheduler.

The JOB_QUEUE_PROCESSES initialization parameter specifies the maximum number of processes that can be created for the execution of jobs. CJQ0 starts only as many job queue processes as required by the number of jobs to run and available resources.

    See Also: Oracle Database Concepts and Oracle Database Administrator's Guide

    Database instance

    CKPT

    Checkpoint Process

    Signals DBWn at checkpoints and updates all the data files and control files of the database to indicate the most recent checkpoint

    At specific times CKPT starts a checkpoint request by messaging DBWn to begin writing dirty buffers. On completion of individual checkpoint requests, CKPT updates data file headers and control files to record most recent checkpoint.

    See Also: Oracle Database Concepts

    Database and ASM instances

    CPnn

    Streams Capture Process

    Captures database changes from the redo log by using the infrastructure of LogMiner

The capture process name is CPnn, where nn can include letters and numbers. The underlying LogMiner process name is MSnn, where nn can include letters and numbers. The capture process includes one reader server that reads the redo log and divides it into regions, one or more preparer servers that scan the redo log, and one builder server that merges redo records from the preparer servers. Each reader server, preparer server, and builder server is a process. Query the V$STREAMS_CAPTURE view for information about this background process.

    See Also: Oracle Streams Concepts and Administration

    Database instance, Oracle Streams

    CSnn

    I/O Calibration Process

    Issues I/Os to storage as part of storage calibration.

CSnn slave processes are started on execution of the DBMS_RESOURCE_MANAGER.CALIBRATE_IO() procedure. There is one slave process per CPU on each node of the database.

    Database instance, Oracle RAC

    CTWR

    Change Tracking Writer Process

    Tracks changed data blocks as part of the Recovery Manager block change tracking feature

    CTWR tracks changed blocks as redo is generated at a primary database and as redo is applied at a standby database. The process is slightly different depending on the type of database.

    See Also: Oracle Database Backup and Recovery User's Guide

    Database instance

    CXnn

    Streams Propagation Sender Process

    Sends LCRs to a propagation receiver

The propagation sender process name is CXnn, where nn can include letters and numbers. In an Oracle Streams combined capture and apply optimization, the propagation sender sends LCRs directly to the propagation receiver to improve performance. The propagation receiver passes the LCRs to an apply process. Query V$PROPAGATION_SENDER for information about a propagation sender.

    Database instance, Oracle Streams

    DBRM

    Database Resource Manager Process

    Sets resource plans and performs other tasks related to the Database Resource Manager

    If a resource plan is not enabled, then this process is idle.

    See Also: Oracle Database Administrator's Guide

    Database instance

    DBWn

    Database Writer Process

    Writes modified blocks from the database buffer cache to the data files

The primary responsibility of DBWn is to write data blocks to disk. DBWn also handles checkpoints, file open synchronization, and logging of Block Written records.

In many cases the blocks that DBWn writes are scattered throughout the disk. Thus, the writes tend to be slower than the sequential writes performed by LGWR. DBWn performs multiblock writes when possible to improve efficiency. The number of blocks written in a multiblock write varies by operating system.

The DB_WRITER_PROCESSES initialization parameter specifies the number of DBWn processes (DBW0-DBW9 and DBWa-DBWz). The database selects an appropriate default setting for this parameter or adjusts a user-specified setting based on the number of CPUs and processor groups.

    See Also: Oracle Database Concepts and Oracle Database Performance Tuning Guide

    Database instance

    DIA0

    Diagnostic Process


    Detects and resolves hangs and deadlocks
     

    ASM and Database instances

    DIAG

    Diagnostic Capture Process


    Performs diagnostic dumps

    DIAG performs diagnostic dumps requested by other processes and dumps triggered by process or instance termination. In Oracle RAC, DIAG performs global diagnostic dumps requested by remote instances.

    ASM and Database instances

    DMnn

    Data Pump Master Process

    Coordinates the Data Pump job tasks performed by Data Pump worker processes and handles client interactions

    The Data Pump master (control) process is started during job creation and coordinates all tasks performed by the Data Pump job. It handles all client interactions and communication, establishes all job contexts, and coordinates all worker process activities on behalf of the job.

    Database instance, Data Pump

    DMON

    Data Guard Broker Monitor Process

    Manages and monitors a database that is part of a Data Guard broker configuration

    When you start the Data Guard broker, a DMON process is created. DMON runs for every database instance that is managed by the broker. DMON interacts with the local database and the DMON processes of the other databases to perform the requested function. DMON also monitors the health of the broker configuration and ensures that every database has a consistent description of the configuration.

DMON maintains profiles about all database objects in the broker configuration in a binary configuration file. A copy of this file is maintained by the DMON process for each of the databases that belong to the broker configuration. The process is created when the DG_BROKER_START initialization parameter is set to true.

    See Also: Oracle Data Guard Broker

    Database instance, Data Guard

    Dnnn

    Dispatcher Process

    Performs network communication in the shared server architecture

    In the shared server architecture, clients connect to a dispatcher process, which creates a virtual circuit for each connection. When the client sends data to the server, the dispatcher receives the data into the virtual circuit and places the active circuit on the common queue to be picked up by an idle shared server. The shared server then reads the data from the virtual circuit and performs the database work necessary to complete the request. When the shared server must send data to the client, the server writes the data back into the virtual circuit and the dispatcher sends the data to the client. After the shared server completes the client request, the server releases the virtual circuit back to the dispatcher and is free to handle other clients.

Several initialization parameters relate to shared servers. The principal parameters are: DISPATCHERS, SHARED_SERVERS, MAX_SHARED_SERVERS, LOCAL_LISTENER, and REMOTE_LISTENER.

    See Also: Oracle Database Concepts

    Database instance, shared servers

    DRnn

    ASM Disk Resynchronization Slave Process

    Resynchronizes the contents of an offline disk

    When a disk online SQL command is issued on a disk or disks that are offline, ASM spawns DRnn. Depending on the load, more than one slave may be spawned.

    ASM Instance

    DSKM

    Slave Diskmon Process

    Acts as the conduit between the database, ASM instances, and the Master Diskmon daemon to communicate information to Exadata storage

    This process is active only if Exadata Storage is used. DSKM performs operations related to Exadata I/O fencing and Exadata cell failure handling.

    ASM instance, Exadata

    DWnn

    Data Pump Worker Process

    Performs Data Pump tasks as assigned by the Data Pump master process

    The Data Pump worker process is responsible for performing tasks that are assigned by the Data Pump master process, such as the loading and unloading of metadata and data.

    Database instance

    EMNC

    EMON Coordinator Process

    Coordinates database event management and notifications

    EMNC coordinates event management and notification activity in the database, including Streams Event Notifications, Continuous Query Notifications, and Fast Application Notifications.

    Database and ASM instances

    Ennn

    EMON Slave Process

    Performs database event management and notifications

    The database event management and notification load is distributed among the EMON slave processes. These processes work on the system notifications in parallel, offering a capability to process a larger volume of notifications, a faster response time, and a lower shared memory use for staging notifications.

    Database and ASM instances

    FBDA

    Flashback Data Archiver Process

    Archives historical rows for tracked tables into flashback data archives and manages archive space, organization, and retention

When a transaction that modifies a tracked table commits, FBDA stores the pre-image of the rows in the archive. FBDA maintains metadata on the current rows and tracks how much data has been archived.

    FBDA is also responsible for automatically managing the flashback data archive for space, organization (partitioning tablespaces), and retention. FBDA also keeps track of how far the archiving of tracked transactions has progressed.

    See Also: Oracle Database Advanced Application Developer's Guide

    Database and ASM instances

    FDnn

    Oracle ASM Stale FD Cleanup Slave Process

    Cleans up Oracle ASM stale file descriptors on foreground processes

    This process cleans up Oracle ASM stale file descriptors on foreground processes if an Oracle ASM disk is globally closed.

    Database and ASM instances

    FMON

    File Mapping Monitor Process

    Manages mapping information for the Oracle Database file mapping interface

    The DBMS_STORAGE_MAP package enables you to control the mapping operations. When instructed by the user, FMON builds mapping information and stores it in the SGA, refreshes the information when a change occurs, saves the information to the data dictionary, and restores it to the SGA at instance startup.

FMON is started by the database whenever the FILE_MAPPING initialization parameter is set to true.

    Database and ASM instances

    FSFP

    Data Guard Broker Fast Start Failover Pinger Process

    Maintains fast-start failover state between the primary and target standby databases

    FSFP is created when fast-start failover is enabled.

    Database instance, Data Guard

GCRn (see Footnote 1)

    Global Conflict Resolution Slave Process

    Performs synchronous tasks on behalf of LMHB

    GCRn processes are transient slaves that are started and stopped as required by LMHB to perform synchronous or resource intensive tasks.

    Database and ASM instances, Oracle RAC

    GEN0

    General Task Execution Process

    Performs required tasks including SQL and DML

     

    Database and ASM instances

    GMON

    ASM Disk Group Monitor Process

    Monitors all mounted ASM disk groups

    GMON monitors all the disk groups mounted in an ASM instance and is responsible for maintaining consistent disk membership and status information. Membership changes result from adding and dropping disks, whereas disk status changes result from taking disks offline or bringing them online.

    ASM instance

    GTXn

    Global Transaction Process

    Provides transparent support for XA global transactions in an Oracle RAC environment

    These processes help maintain the global information about XA global transactions throughout the cluster. Also, the processes help perform two-phase commit for global transactions anywhere in the cluster so that an Oracle RAC database behaves as a single system to the externally coordinated distributed transactions.

The GLOBAL_TXN_PROCESSES initialization parameter specifies the number of GTXn processes, where n is 0-9 or a-j. The database automatically tunes the number of these processes based on the workload of XA global transactions. You can disable these processes by setting the parameter to 0. If you try to run XA global transactions with these processes disabled, an error is returned.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide

    Database instance, Oracle RAC

    Innn

    Disk and Tape I/O Slave Process

    Serves as an I/O slave process spawned on behalf of DBWR, LGWR, or an RMAN backup session

    I/O slave process can be configured on platforms where asynchronous I/O support is not available. These slaves are started by setting the corresponding slave enable parameter in the server parameter file. The I/O slaves simulate the asynchronous I/O behavior when the underlying platform does not have native support for asynchronous I/O.

    Database instance

    INSV

    Data Guard Broker Instance Slave Process

    Performs Data Guard broker communication among instances in an Oracle RAC environment

INSV is created when the DG_BROKER_START initialization parameter is set to true.

    Database instance, Data Guard

    Jnnn

    Job Queue Slave Process

    Executes jobs assigned by the job coordinator

    Job slave processes are created or awakened by the job coordinator when it is time for a job to be executed.

    Job slaves gather all the metadata required to run the job from the data dictionary. The slave processes start a database session as the owner of the job, execute triggers, and then execute the job. After the job is complete, the slave processes commit and then execute appropriate triggers and close the session. The slave can repeat this operation in case additional jobs need to be run.

    Database instance

    LCK0

    Instance Enqueue Background Process

    Manages global enqueue requests and cross-instance broadcasts

The process handles all requests for resources other than data blocks. For example, LCK0 manages library and row cache requests.

    Database and ASM instances, Oracle RAC

    LGWR

    Log Writer Process

    Writes redo entries to the online redo log

    Redo log entries are generated in the redo log buffer of the system global area (SGA). LGWR writes the redo log entries sequentially into a redo log file. If the database has a multiplexed redo log, then LGWR writes the redo log entries to a group of redo log files.

    See Also: Oracle Database Concepts and Oracle Database Administrator's Guide

    Database and ASM instances

    LMD0

    Global Enqueue Service Daemon 0 Process

    Manages incoming remote resource requests from other instances

    LMD0 processes enqueue resources managed under Global Enqueue Service. In particular, LMD0 processes incoming enqueue request messages and controls access to global enqueues. It also performs distributed deadlock detections.

    Database and ASM instances, Oracle RAC

    LMHB

    Global Cache/Enqueue Service Heartbeat Monitor

Monitors the heartbeat of the LMON, LMD, and LMSn processes

    LMHB monitors LMON, LMD, and LMSn processes to ensure they are running normally without blocking or spinning.

    Database and ASM instances, Oracle RAC

    LMON

    Global Enqueue Service Monitor Process

    Monitors an Oracle RAC cluster to manage global resources

    LMON maintains instance membership within Oracle RAC. The process detects instance transitions and performs reconfiguration of GES and GCS resources.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide

    Database and ASM instances, Oracle RAC

    LMSn

    Global Cache Service Process

    Manages resources and provides resource control among Oracle RAC instances

LMSn, where n is 0-9 or a-z, maintains a lock database for Global Cache Service (GCS) and buffer cache resources. This process receives, processes, and sends GCS requests, block transfers, and other GCS-related messages.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide

    Database and ASM instances, Oracle RAC

    LSP0

    Logical Standby Coordinator Process

    Schedules transactions for Data Guard SQL Apply

    LSP0 is the initial process created upon startup of Data Guard SQL Apply. In addition to managing LogMiner and Apply processes, LSP0 is responsible for maintaining inter-transaction dependencies and appropriately scheduling transactions with applier processes. LSP0 is also responsible for detecting and enabling runtime parameter changes for the SQL Apply product as a whole.

    Database instance, Data Guard

    LSP1

    Logical Standby Dictionary Build Process

    Performs a logical standby dictionary build on a primary database

    The LSP1 process is spawned on a logical standby database that is intended to become the new primary database. A logical standby database becomes a primary database by means of switchover or failover. The dictionary is necessary for logical standby databases to interpret the redo of the new primary database.

    Database instance, Data Guard

    LSP2

    Logical Standby Set Guard Process

    Determines which database objects will be protected by the database guard

    The LSP2 process is created as needed during startup of SQL Apply to update the list of objects that are protected by the database guard.

    Database instance, Data Guard

    Lnnn

    Pooled Server Process

    Handles client requests in Database Resident Connection Pooling

    In Database Resident Connection Pooling, clients connect to a connection broker process. When a connection becomes active, the connection broker hands off the connection to a compatible pooled server process. The pooled server process performs network communication directly on the client connection and processes requests until the client releases the server. After being released, the connection is returned to the broker for monitoring, leaving the server free to handle other clients.

    See Also: Oracle Database Concepts

    Database instance, Database Resident Connection Pooling

    MARK

    Mark AU for Resynchronization Coordinator Process

    Marks ASM allocation units as stale following a missed write to an offline disk

    MARK essentially tracks which extents require resynchronization for offline disks. This process runs in the database instance and is started when the database instance first begins using the ASM instance. If required, MARK can also be started on demand when disks go offline in the ASM redundancy disk group.

    Database and ASM instances

    MMAN

    Memory Manager Process

    Serves as the instance memory manager

    This process performs the resizing of memory components on the instance.

    Database and ASM instances

    MMNL

    Manageability Monitor Lite Process

    Performs tasks relating to manageability, including active session history sampling and metrics computation

    MMNL performs many tasks relating to manageability, including session history capture and metrics computation.

    Database and ASM instances

    MMON

    Manageability Monitor Process

    Performs or schedules many manageability tasks

    MMON performs many tasks related to manageability, including taking Automatic Workload Repository snapshots and performing Automatic Database Diagnostic Monitor analysis.

    Database and ASM instances

    Mnnn

    MMON Slave Process

    Performs manageability tasks on behalf of MMON

    Mnnn performs manageability tasks dispatched to them by MMON. Tasks performed include taking Automatic Workload Repository snapshots and Automatic Database Diagnostic Monitor analysis.

    Database and ASM instances

    MRP0

    Managed Standby Recovery Process

    Coordinates the application of redo on a physical standby database

    MRP0 is spawned at the start of redo apply on a physical standby database. This process handles the extraction of redo and coordinates the application of that redo on a physical standby database.

    See Also: Oracle Data Guard Concepts and Administration

    Database instance, Data Guard

    MSnn

    LogMiner Worker Process

Reads redo log files and translates and assembles them into transactions

Multiple MSnn processes can exist, where n is 0-9 or a-Z. A minimum of three MSnn processes work as a group to provide transactions to a LogMiner client, for example, a logical standby database. There may be more than one such group, for example, Downstream Capture sessions.

    Database instance, Logical Standby, Oracle Streams

    Nnnn

    Connection Broker Process

    Monitors idle connections and hands off active connections in Database Resident Connection Pooling

    In Database Resident Connection Pooling, clients connect to a connection broker process. When a connection becomes active, the connection broker hands off the connection to a compatible pooled server process. The pooled server process performs network communication directly on the client connection and processes requests until the client releases the server. After being released, the connection is returned to the broker for monitoring, leaving the server free to handle other clients.

    See Also: Oracle Database Concepts

    Database instance, Database Resident Connection Pooling

    NSAn

    Redo Transport NSA1 Process

    Ships redo from current online redo logs to remote standby destinations configured for ASYNC transport

    NSAn can run as multiple processes, where n is 1-9 or A-V.

    See Also: Oracle Data Guard Concepts and Administration

    Database instance, Data Guard

    NSSn

    Redo Transport NSS1 Process

    Acts as a slave for LGWR when SYNC transport is configured for a remote standby destination

    NSSn can run as multiple processes, where n is 1-9 or A-V.

    See Also: Oracle Data Guard Concepts and Administration

    Database instance, Data Guard

    NSVn

    Data Guard Broker NetSlave Process

    Performs broker network communications between databases in a Data Guard environment

NSVn is created when a Data Guard broker configuration is enabled. There can be as many NSVn processes (where n is 0-9 and A-U) created as there are databases in the Data Guard broker configuration.

    Database instance, Data Guard

    OCFn

    ASM CF Connection Pool Process

    Maintains a connection to the ASM instance for metadata operations

     

    Database and ASM instances

    Onnn

    ASM Connection Pool Process

    Maintains a connection to the ASM instance for metadata operations

    Onnn slave processes are spawned on demand. These processes communicate with the ASM instance.

    Database and ASM instances

    PING

    Interconnect Latency Measurement Process

    Assesses latencies associated with communications for each pair of cluster instances

    Every few seconds, the process in one instance sends messages to each instance. The message is received by PING on the target instance. The time for the round trip is measured and collected.

    Database and ASM instances, Oracle RAC

    PMON

    Process Monitor

    Monitors the other background processes and performs process recovery when a server or dispatcher process terminates abnormally

    PMON periodically performs cleanup of all the following:

    • Processes that died abnormally

    • Sessions that were killed

    • Detached transactions that have exceeded their idle timeout

    • Detached network connections which have exceeded their idle timeout

    In addition, PMON monitors, spawns, and stops the following as needed:

    • Dispatcher and shared server processes

    • Job queue processes

    • Pooled server processes for database resident connection pooling

    • Restartable background processes

    PMON is also responsible for registering information about the instance and dispatcher processes with the network listener.

    See Also: Oracle Database Concepts and Oracle Database Net Services Administrator's Guide

    Database and ASM instances

    Pnnn

    Parallel Query Slave Process

    Perform parallel execution of a SQL statement (query, DML, or DDL)

    Parallel Query has two components: a foreground process that acts as query coordinator and a set of parallel slaves (Pnnn) that are background processes. These background processes are spawned or reused during the start of a parallel statement. They receive and carry out units of work sent from the query coordinator.

The maximum number of Pnnn processes is controlled by the initialization parameter PARALLEL_MAX_SERVERS. Slave processes are numbered from 0 to the PARALLEL_MAX_SERVERS setting. If the query is a GV$ query, then these background processes are numbered backward, starting from PZ99.

    Database and ASM instances

    PRnn

    Parallel Recovery Process

    Performs tasks assigned by the coordinator process performing parallel recovery

    PRnn serves as a slave process for the coordinator process performing parallel media recovery and carries out tasks assigned by the coordinator. The default number of these processes is based on number of CPUs.

    Database instance

    PSP0

    Process Spawner Process

    Spawns Oracle background processes after initial instance startup

     

    Database and ASM instances

    QMNC

    AQ Coordinator Process

    Monitors AQ

    QMNC is responsible for facilitating various background activities required by AQ and Oracle Streams: time management of messages, management of nonpersistent queues, cleanup of resources, and so on. QMNC dynamically spawns Qnnn processes as needed for performing these tasks.

Note that if the AQ_TM_PROCESSES initialization parameter is set to 0, this process will not start. The database writes the following message to the alert log: WARNING: AQ_TM_PROCESSES is set to 0. System might be adversely affected.

    Database instance, Advanced Queuing

    Qnnn

    AQ Server Class Process

Performs various AQ-related background tasks for QMNC

Qnnn acts as a slave process for QMNC and carries out tasks assigned by QMNC. The number of these processes is dynamically managed by QMNC based on load.

    Database instance

    RBAL

    ASM Rebalance Master Process

    Coordinates rebalance activity

In an ASM instance, it coordinates rebalance activity for disk groups. In database instances, it manages ASM disk groups.

    Database and ASM instances

    RCBG

    Result Cache Background Process

    Handles result cache messages

    This process is used for handling invalidation and other messages generated by server processes attached to other instances in Oracle RAC.

    Database instance, Oracle RAC

    RECO

    Recoverer Process

    Resolves distributed transactions that are pending because of a network or system failure in a distributed database

    RECO uses the information in the pending transaction table to finalize the status of in-doubt transactions. At timed intervals, the local RECO attempts to connect to remote databases and automatically complete the commit or rollback of the local portion of any pending distributed transactions. All transactions automatically resolved by RECO are removed from the pending transaction table.

    See Also: Oracle Database Concepts and Oracle Database Net Services Administrator's Guide

    Database instance

    RMSn

    Oracle RAC Management Process

    Performs manageability tasks for Oracle RAC

    RMSn performs a variety of tasks, including creating resources related to Oracle RAC when new instances are added to a cluster.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide

    Database instance, Oracle RAC

    Rnnn

    ASM Block Remap Slave Process

    Remaps a block with a read error

A database instance reading from an ASM disk group can encounter an error during a read. If possible, ASM asynchronously schedules an Rnnn slave process to remap this bad block from a mirror copy.

    ASM instance

    RPnn

    Capture Processing Worker Process

    Processes a set of workload capture files

RPnn are worker processes spawned by calling DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir, parallel_level). Each worker process is assigned a set of workload capture files to process.

    Worker processes execute in parallel without needing to communicate with each other. After each process is finished processing its assigned files, it exits and informs its parent process.

The number of worker processes is controlled by the parallel_level parameter of DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE. By default, parallel_level is null. Then, the number of worker processes is computed as follows:

    SELECT VALUE 
    FROM   V$PARAMETER 
    WHERE  NAME='cpu_count';
    

    When parallel_level is 1, no worker processes are spawned.

    Database instance

    RSM0

    Data Guard Broker Worker Process

    Performs monitoring management tasks related to Data Guard on behalf of DMON

    The process is created when a Data Guard broker configuration is enabled.

    Database instance, Data Guard

    RSMN

    Remote Slave Monitor Process

    Manages background slave process creation and communication on remote instances in Oracle RAC

    This background process manages the creation of slave processes and the communication with their coordinators and peers. These background slave processes perform tasks on behalf of a coordinating process running in another cluster instance.

    Database instance, Oracle RAC

    RVWR

    Recovery Writer Process

    Writes flashback data to the flashback logs in the fast recovery area

    RVWR writes flashback data from the flashback buffer in the SGA to the flashback logs. RVWR also creates flashback logs and performs some tasks for flashback log automatic management.

    Database instance, Flashback Database

    SMCO

    Space Management Coordinator Process

    Coordinates the execution of various space management tasks

    This background process coordinates the execution of various space management tasks, including proactive space allocation and space reclamation. SMCO dynamically spawns slave processes (Wnnn) to implement these tasks.

    Database instance

    SMON

    System Monitor Process

    Performs critical tasks such as instance recovery and dead transaction recovery, and maintenance tasks such as temporary space reclamation, data dictionary cleanup, and undo tablespace management

    SMON performs many database maintenance tasks, including the following:

    • Creates and manages the temporary tablespace metadata

    • Reclaims space used by orphaned temporary segments

    • Maintains the undo tablespace by onlining, offlining, and shrinking the undo segments based on undo space usage statistics

    • Cleans up the data dictionary when it is in a transient and inconsistent state

    • Maintains the SCN to time mapping table used to support Oracle Flashback features

    In an Oracle RAC database, the SMON process of one instance can perform instance recovery for other instances that have failed.

    SMON is resilient to internal and external errors raised during background activities.

    See Also: Oracle Database Concepts

    Database instance

    Snnn

    Shared Server Process

    Handles client requests in the shared server architecture

    In the shared server architecture, clients connect to a dispatcher process, which creates a virtual circuit for each connection. When the client sends data to the server, the dispatcher receives the data into the virtual circuit and places the active circuit on the common queue to be picked up by an idle shared server. The shared server then reads the data from the virtual circuit and performs the database work necessary to complete the request. When the shared server must send data to the client, the server writes the data back into the virtual circuit and the dispatcher sends the data to the client. After the shared server completes the client request, the server releases the virtual circuit back to the dispatcher and is free to handle other clients.

Several initialization parameters relate to shared servers. The principal parameters are: DISPATCHERS, SHARED_SERVERS, MAX_SHARED_SERVERS, LOCAL_LISTENER, and REMOTE_LISTENER.

    See Also: Oracle Database Concepts

    Database instance, shared servers

    TEMn

    ASM disk Test Error Emulation Process

    Emulates I/O errors on ASM disks through named events

    I/O errors can be emulated on ASM disk I/O through named events. The scope can be the process, instance, or even cluster. Optionally, a set of AUs can be chosen for error emulation.

    ASM instance

    VBGn

    Volume Background Process

    Communicates between the ASM instance and the operating system volume driver

    VBGn handles messages originating from the volume driver in the operating system and sends them to the ASM instance.

    VBGn can run as multiple processes, where n is 0-9.

    ASM instance

    VDBG

    Volume Driver Process

    Forwards ASM requests to perform various volume-related tasks

    VDBG handles requests to lock or unlock an extent for rebalancing, volume resize, disk offline, add or drop a disk, force and dismount disk group to the Dynamic Volume Manager driver.

    ASM instance

    VKRM

    Virtual Scheduler for Resource Manager Process

    Serves as centralized scheduler for Resource Manager activity

    VKRM manages the CPU scheduling for all managed Oracle processes. The process schedules managed processes in accordance with an active resource plan.

    Database instance

    VKTM

    Virtual Keeper of Time Process

    Provides a wall clock time and reference time for time interval measurements

    VKTM acts as a time publisher for an Oracle instance. VKTM publishes two sets of time: a wall clock time using a seconds interval and a higher resolution time (which is not wall clock time) for interval measurements. The VKTM timer service centralizes time tracking and offloads multiple timer calls from other clients.

    Database and ASM instances

    VMB0

    Volume Membership Process

    Maintains cluster membership on behalf of the ASM volume driver

This process maintains membership in the cluster as an I/O-capable client on behalf of the ASM volume driver.

    ASM instance

    Vnnn

    ASM Volume I/O Slave Process

    Initializes ASM volume contents during creation

    This process is responsible for initializing the ASM volume during creation.

    ASM instance

    Wnnn

    Space Management Slave Process

    Performs various background space management tasks, including proactive space allocation and space reclamation

    Wnnn processes are slave processes dynamically spawned by SMCO to perform space management tasks in the background. These tasks include preallocating space into locally managed tablespace and SecureFiles segments based on space usage growth analysis, and reclaiming space from dropped segments. At most 10 Wnnn slaves can run on one database instance. After being started, the slave acts as an autonomous agent. After it finishes task execution, it automatically picks up another task from the queue. The process terminates itself after being idle for a long time.

    Database instance

    XDMG

    Exadata Automation Manager

    Initiates automation tasks involved in managing Exadata storage

XDMG monitors all configured Exadata cells for state changes, such as a bad disk getting replaced, and performs the required tasks for such events. Its primary task is to watch for inaccessible disks and cells and, when they become accessible again, to initiate the ASM ONLINE operation. The ONLINE operation is handled by XDWK.

    ASM instance, Exadata

    XDWK

    Exadata Automation Manager

    Performs automation tasks requested by XDMG

    XDWK gets started when asynchronous actions such as ONLINE, DROP, and ADD an ASM disk are requested by XDMG. After a 5 minute period of inactivity, this process will shut itself down.

    ASM instance, Exadata

    Xnnn

    ASM Disk Expel Slave Process

    Performs ASM post-rebalance activities

    This process expels dropped disks at the end of an ASM rebalance.

    ASM instance


    Footnote 1 This background process is available starting with Oracle Database 11g Release 2 (11.2.0.2).
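
To tie one of the names above to an operating-system process id (for example before checking it with ps), v$bgprocess can be joined to v$process. Again, this is only a sketch assuming a local SYSDBA connection:

sqlplus -s / as sysdba <<'END'
set linesize 150
set pagesize 100
-- map each running background process to its OS pid (SPID)
select b.name, p.spid, b.description
from   v$bgprocess b, v$process p
where  b.paddr = p.addr
and    b.paddr <> hextoraw('00')
order  by b.name;
END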

    Posted by pat98

     

Environment: GI 12.1.0.2,
             DB 11.2.0.4 (this seems to happen when the GI and DB versions are mixed like this).

     

Symptom:

     

While working, the command kept failing with the annoying message below:

     

    # crsctl modify resource ora.rac.db -attr "RESTART_ATTEMPTS=100"

     

    CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.

     

    Solution : crsctl modify resource ora.rac.db -attr "RESTART_ATTEMPTS=100" -unsupported
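
To confirm the attribute really changed, dumping the full resource profile with crsctl status is one way to check (just a sketch, using the same example resource name as above):

# print the full attribute list of the resource and pick out RESTART_ATTEMPTS
crsctl status resource ora.rac.db -f | grep RESTART_ATTEMPTS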

     

The funny thing is that the official documentation itself contains no workaround. Thanks go to Google.

     

This looks like a bug the developers have not fixed, or perhaps from 12c onward crsctl is simply no longer allowed to modify resources directly.

     

    Posted by pat98

Here is a very quick and simple way to drive the CPU to 100% using SQL.

     

Create a shell script like the one below,

     

    vi cpu_stress.sh

     

sqlplus sys/oracle as sysdba <<END
alter session set plsql_optimize_level=0;
begin
  loop null; end loop;
end;
/
END

     

save it, and run it in the background. That is all; the CPU will start spinning like crazy.

If the machine has multiple threads, connect from additional sessions and run as many copies as you want; each copy loads one more CPU core.
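
For example, to burn several cores at once you can simply start several copies in the background; the count of 4 below is arbitrary and only for illustration:

# start one stress session per core to be loaded (4 is only an example)
for i in 1 2 3 4; do
  nohup sh cpu_stress.sh > /dev/null 2>&1 &
done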

    Posted by pat98

2016. 1. 28. 21:41 Oracle

    PSU 11.2.0.4.8


Patch work for 11.2.0.4.8 (October 2015)

     

BUG 21523375 - GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.8 contains

     

    DB Patch Number : 21352635
    OCW PATCH       : 21352649
    ACFS PATCH      : 21352642

     

    Oracle Grid Infrastructure Patch Set Update 11.2.0.4.8 (Includes Database PSU 11.2.0.4.8)
    -------------------------------------

When patching GRID_HOME and ORACLE_HOME separately

     

On AIX, run slibclean before starting

     

(as root)
# /oragrid/product/11.2.0.4/crs/install/rootcrs.pl -unlock (if CRS is running, the script brings it down automatically)


(as the grid user)
    /oragrid/product/11.2.0.4/OPatch/opatch napply -oh /oragrid/product/11.2.0.4 -local /arch/21523375/21352649
    /oragrid/product/11.2.0.4/OPatch/opatch napply -oh /oragrid/product/11.2.0.4 -local /arch/21523375/21352642
    /oragrid/product/11.2.0.4/OPatch/opatch apply -oh /oragrid/product/11.2.0.4 -local /arch/21523375/21352635

     

(as the oracle user)
/arch/21523375/21352649/custom/server/21352649/custom/scripts/prepatch.sh -dbhome /oracle/product/11.2.0.4 (permission check)
    /oracle/product/11.2.0.4/OPatch/opatch napply -oh /oracle/product/11.2.0.4 -local /arch/21523375/21352649/custom/server/21352649
    /oracle/product/11.2.0.4/OPatch/opatch apply -oh /oracle/product/11.2.0.4 -local /arch/21523375/21352635
/arch/21523375/21352649/custom/server/21352649/custom/scripts/postpatch.sh -dbhome /oracle/product/11.2.0.4 (permission check)

     

(as root)
    # /oragrid/product/11.2.0.4/rdbms/install/rootadd_rdbms.sh
# /oragrid/product/11.2.0.4/crs/install/rootcrs.pl -patch (the script brings CRS back up automatically)


[Rollback procedure]
GI Home
(as root)
    # /oragrid/product/11.2.0.4/crs/install/rootcrs.pl -unlock

     

(as the grid user)
    /oragrid/product/11.2.0.4/OPatch/opatch rollback -local -id 21352649 -oh /oragrid/product/11.2.0.4
    /oragrid/product/11.2.0.4/OPatch/opatch rollback -local -id 21352642 -oh /oragrid/product/11.2.0.4 
    /oragrid/product/11.2.0.4/OPatch/opatch rollback -local -id 21352635  -oh /oragrid/product/11.2.0.4

     

(as the oracle user)
/arch/21523375/21352649/custom/server/21352649/custom/scripts/prepatch.sh -dbhome /oracle/product/11.2.0.4
    /oracle/product/11.2.0.4/OPatch/opatch rollback -local -id 21352649 -oh /oracle/product/11.2.0.4
    /oracle/product/11.2.0.4/OPatch/opatch rollback -local -id 21352635  -oh /oracle/product/11.2.0.4
/arch/21523375/21352649/custom/server/21352649/custom/scripts/postpatch.sh -dbhome /oracle/product/11.2.0.4

     

    Run post script

(as root)
    # /oragrid/product/11.2.0.4/rdbms/install/rootadd_rdbms.sh
    # /oragrid/product/11.2.0.4/crs/install/rootcrs.pl -patch

     

    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> STARTUP
    SQL> @catbundle.sql psu apply
    SQL> QUIT
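
Afterwards it is worth confirming that the patches are registered in each home's inventory. A simple check (paths and patch numbers as in the steps above) might look like this; opatch lsinventory lists the applied patches:

# run as the grid and oracle software owners respectively
/oragrid/product/11.2.0.4/OPatch/opatch lsinventory -oh /oragrid/product/11.2.0.4 | grep -E "21352635|21352642|21352649"
/oracle/product/11.2.0.4/OPatch/opatch lsinventory -oh /oracle/product/11.2.0.4 | grep -E "21352635|21352649"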

    Posted by pat98

2016. 1. 26. 13:12 Oracle

    PSU 11.2.0.4.160119


Patch work for 11.2.0.4.160119 (January 2016)

     

BUG 22191577 - GRID INFRASTRUCTURE SYSTEM PATCH 11.2.0.4.160119 contains

    DB Patch Number : 21948347
    OCW PATCH       : 21948348
    ACFS PATCH      : 21948355

    -------------------------------------

When patching GRID_HOME and ORACLE_HOME separately

     

On AIX, run slibclean before starting

(as root)
# /oragrid/product/11.2.0.4/crs/install/rootcrs.pl -unlock (if CRS is running, the script brings it down automatically)


(as the grid user)
    /oragrid/product/11.2.0.4/OPatch/opatch napply -oh /oragrid/product/11.2.0.4 -local /arch/22191577/21948348
    /oragrid/product/11.2.0.4/OPatch/opatch napply -oh /oragrid/product/11.2.0.4 -local /arch/22191577/21948355
    /oragrid/product/11.2.0.4/OPatch/opatch apply -oh /oragrid/product/11.2.0.4 -local /arch/22191577/21948347

     

(as the oracle user)
/arch/22191577/21948348/custom/server/21948348/custom/scripts/prepatch.sh -dbhome /oracle/product/11.2.0.4 (permission check)
    /oracle/product/11.2.0.4/OPatch/opatch napply -oh /oracle/product/11.2.0.4 -local /arch/22191577/21948348/custom/server/21948348
    /oracle/product/11.2.0.4/OPatch/opatch apply -oh /oracle/product/11.2.0.4 -local /arch/22191577/21948347
/arch/22191577/21948348/custom/server/21948348/custom/scripts/postpatch.sh -dbhome /oracle/product/11.2.0.4 (permission check)

     

(as root)
    # /oragrid/product/11.2.0.4/rdbms/install/rootadd_rdbms.sh
# /oragrid/product/11.2.0.4/crs/install/rootcrs.pl -patch (the script brings CRS back up automatically)


[Rollback procedure]
GI Home
(as root)
    # /oragrid/product/11.2.0.4/crs/install/rootcrs.pl -unlock

     

(as the grid user)

    /oragrid/product/11.2.0.4/OPatch/opatch rollback -local -id 21948348 -oh /oragrid/product/11.2.0.4
    /oragrid/product/11.2.0.4/OPatch/opatch rollback -local -id 21948355 -oh /oragrid/product/11.2.0.4 
    /oragrid/product/11.2.0.4/OPatch/opatch rollback -local -id 21948347  -oh /oragrid/product/11.2.0.4

     

(as the oracle user)
    /arch/22191577/21948348/custom/server/21948348/custom/scripts/prepatch.sh -dbhome /oracle/product/11.2.0.4
    /oracle/product/11.2.0.4/OPatch/opatch rollback -local -id 21948348 -oh /oracle/product/11.2.0.4
    /oracle/product/11.2.0.4/OPatch/opatch rollback -local -id 21948347  -oh /oracle/product/11.2.0.4
    /arch/22191577/21948348/custom/server/21948348/custom/scripts/postpatch.sh -dbhome /oracle/product/11.2.0.4

     

    Run post script
(as root)
    # /oragrid/product/11.2.0.4/rdbms/install/rootadd_rdbms.sh
    # /oragrid/product/11.2.0.4/crs/install/rootcrs.pl -patch

    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> STARTUP
    SQL> @catbundle.sql psu apply
    SQL> QUIT
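
On the database side, catbundle registration can be confirmed by querying dba_registry_history; a minimal sketch, assuming a local SYSDBA connection:

sqlplus -s / as sysdba <<'END'
set linesize 150
col comments format a40
-- the PSU should show up as an APPLY row after catbundle.sql psu apply
select action_time, action, version, comments
from   dba_registry_history
order  by action_time;
END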

    Posted by pat98

2015. 12. 30. 11:24 Oracle

Using sshUserSetup.sh


As a prerequisite for a RAC installation, passwordless ssh must be set up between both nodes.

     

Doing it by hand is tedious; the shell script shipped with the installation media sets up ssh in one shot.

     

    sshUserSetup.sh

     

    ./sshUserSetup.sh -user root -hosts "rac1 rac2" -noPromptPassphrase -advanced

    ./sshUserSetup.sh -user oracle -hosts "rac1 rac2" -noPromptPassphrase -advanced
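
Once the script has finished, a loop like the one below (run as the oracle user, host names as in the example above) verifies that passwordless ssh really works; every host should answer without a password prompt:

# BatchMode makes ssh fail instead of prompting if the setup is incomplete
for h in rac1 rac2; do
  ssh -o BatchMode=yes $h hostname
  ssh -o BatchMode=yes $h date
done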

     

Contents of sshUserSetup.sh

     

    #!/bin/sh
    # Nitin Jerath - Aug 2005
    #Usage sshUserSetup.sh  -user <user name> [ -hosts \"<space separated hostlist>\" | -hostfile <absolute path of cluster configuration file> ] [ -advanced ]  [ -verify] [ -exverify ] [ -logfile <desired absolute path of logfile> ] [-confirm] [-shared] [-help] [-usePassphrase] [-noPromptPassphrase]
    #eg. sshUserSetup.sh -hosts "host1 host2" -user njerath -advanced
    #This script is used to setup SSH connectivity from the host on which it is
    # run to the specified remote hosts. After this script is run, the user can use # SSH to run commands on the remote hosts or copy files between the local host
    # and the remote hosts without being prompted for passwords or confirmations.
    # The list of remote hosts and the user name on the remote host is specified as
    # a command line parameter to the script. Note that in case the user on the
    # remote host has its home directory NFS mounted or shared across the remote
    # hosts, this script should be used with -shared option.
    #Specifying the -advanced option on the command line would result in SSH
    # connectivity being setup among the remote hosts which means that SSH can be
    # used to run commands on one remote host from the other remote host or copy
    # files between the remote hosts without being prompted for passwords or
    # confirmations.
    #Please note that the script would remove write permissions on the remote hosts
    #for the user home directory and ~/.ssh directory for "group" and "others". This
    # is an SSH requirement. The user would be explicitly informed about this by teh script and prompted to continue. In case the user presses no, the script would exit. In case the user does not want to be prompted, he can use -confirm option.
    # As a part of the setup, the script would use SSH to create files within ~/.ssh
    # directory of the remote node and to setup the requisite permissions. The
    #script also uses SCP to copy the local host public key to the remote hosts so
    # that the remote hosts trust the local host for SSH. At the time, the script
    #performs these steps, SSH connectivity has not been completely setup  hence
    # the script would prompt the user for the remote host password.
    #For each remote host, for remote users with non-shared homes this would be
    # done once for SSH and  once for SCP. If the number of remote hosts are x, the
    # user would be prompted  2x times for passwords. For remote users with shared
    # homes, the user would be prompted only twice, once each for SCP and SSH.
    #For security reasons, the script does not save passwords and reuse it. Also,
    # for security reasons, the script does not accept passwords redirected from a
    #file. The user has to key in the confirmations and passwords at the prompts.
    #The -verify option means that the user just wants to verify whether SSH has
    #been set up. In this case, the script would not setup SSH but would only check
    # whether SSH connectivity has been setup from the local host to the remote
    # hosts. The script would run the date command on each remote host using SSH. In
    # case the user is prompted for a password or sees a warning message for a
    #particular host, it means SSH connectivity has not been setup correctly for
    # that host.
    #In case the -verify option is not specified, the script would setup SSH and
    #then do the verification as well.
    #In case the user speciies the -exverify option, an exhaustive verification would be done. In that case, the following would be checked:
    # 1. SSH connectivity from local host to all remote hosts.
    # 2. SSH connectivity from each remote host to itself and other remote hosts.

    #echo Parsing command line arguments
    numargs=$#

    ADVANCED=false
    HOSTNAME=`hostname`
    CONFIRM=no
    SHARED=false
    i=1
    USR=$USER

    if  test -z "$TEMP"
    then
      TEMP=/tmp
    fi

    IDENTITY=id_rsa
    LOGFILE=$TEMP/sshUserSetup_`date +%F-%H-%M-%S`.log
    VERIFY=false
    EXHAUSTIVE_VERIFY=false
    HELP=false
    PASSPHRASE=no
    RERUN_SSHKEYGEN=no
    NO_PROMPT_PASSPHRASE=no

    while [ $i -le $numargs ]
    do
      j=$1
      if [ $j = "-hosts" ]
      then
         HOSTS=$2
         shift 1
         i=`expr $i + 1`
      fi
      if [ $j = "-user" ]
      then
         USR=$2
         shift 1
         i=`expr $i + 1`
       fi
      if [ $j = "-logfile" ]
      then
         LOGFILE=$2
         shift 1
         i=`expr $i + 1`
       fi
      if [ $j = "-confirm" ]
      then
         CONFIRM=yes
       fi
      if [ $j = "-hostfile" ]
      then
         CLUSTER_CONFIGURATION_FILE=$2
         shift 1
         i=`expr $i + 1`
       fi
      if [ $j = "-usePassphrase" ]
      then
         PASSPHRASE=yes
       fi
      if [ $j = "-noPromptPassphrase" ]
      then
         NO_PROMPT_PASSPHRASE=yes
       fi
      if [ $j = "-shared" ]
      then
         SHARED=true
       fi
      if [ $j = "-exverify" ]
      then
         EXHAUSTIVE_VERIFY=true
       fi
      if [ $j = "-verify" ]
      then
         VERIFY=true
       fi
      if [ $j = "-advanced" ]
      then
         ADVANCED=true
       fi
      if [ $j = "-help" ]
      then
         HELP=true
       fi
      i=`expr $i + 1`
      shift 1
    done


    if [ $HELP = "true" ]
    then
      echo "Usage $0 -user <user name> [ -hosts \"<space separated hostlist>\" | -hostfile <absolute path of cluster configuration file> ] [ -advanced ]  [ -verify] [ -exverify ] [ -logfile <desired absolute path of logfile> ] [-confirm] [-shared] [-help] [-usePassphrase] [-noPromptPassphrase]"
    echo "This script is used to setup SSH connectivity from the host on which it is run to the specified remote hosts. After this script is run, the user can use  SSH to run commands on the remote hosts or copy files between the local host and the remote hosts without being prompted for passwords or confirmations.  The list of remote hosts and the user name on the remote host is specified as a command line parameter to the script. "
    echo "-user : User on remote hosts. "
    echo "-hosts : Space separated remote hosts list. "
    echo "-hostfile : The user can specify the host names either through the -hosts option or by specifying the absolute path of a cluster configuration file. A sample host file contents are below: "
    echo
    echo  "   stacg30 stacg30int 10.1.0.0 stacg30v  -"
    echo  "   stacg34 stacg34int 10.1.0.1 stacg34v  -"
    echo
    echo " The first column in each row of the host file will be used as the host name."
    echo
    echo "-usePassphrase : The user wants to set up passphrase to encrypt the private key on the local host. "
    echo "-noPromptPassphrase : The user does not want to be prompted for passphrase related questions. This is for users who want the default behavior to be followed."
    echo "-shared : In case the user on the remote host has its home directory NFS mounted or shared across the remote hosts, this script should be used with -shared option. "
    echo "  It is possible for the user to determine whether a user's home directory is shared or non-shared. Let us say we want to determine that user user1's home directory is shared across hosts A, B and C."
    echo " Follow the following steps:"
    echo "    1. On host A, touch ~user1/checkSharedHome.tmp"
    echo "    2. On hosts B and C, ls -al ~user1/checkSharedHome.tmp"
    echo "    3. If the file is present on hosts B and C in ~user1 directory and"
    echo "       is identical on all hosts A, B, C, it means that the user's home "
    echo "       directory is shared."
    echo "    4. On host A, rm -f ~user1/checkSharedHome.tmp"
    echo " In case the user accidentally passes -shared option for non-shared homes or viceversa,SSH connectivity would only be set up for a subset of the hosts. The user would have to re-run the setyp script with the correct option to rectify this problem."
    echo "-advanced :  Specifying the -advanced option on the command line would result in SSH  connectivity being setup among the remote hosts which means that SSH can be used to run commands on one remote host from the other remote host or copy files between the remote hosts without being prompted for passwords or confirmations."
    echo "-confirm: The script would remove write permissions on the remote hosts for the user home directory and ~/.ssh directory for "group" and "others". This is an SSH requirement. The user would be explicitly informed about this by the script and prompted to continue. In case the user presses no, the script would exit. In case the user does not want to be prompted, he can use -confirm option."
    echo "As a part of the setup, the script would use SSH to create files within the ~/.ssh directory of the remote node and to set up the requisite permissions. The script also uses SCP to copy the local host public key to the remote hosts so that the remote hosts trust the local host for SSH. At the time the script performs these steps, SSH connectivity has not been completely set up, hence the script would prompt the user for the remote host password. "
    echo "For each remote host, for remote users with non-shared homes this would be done once for SSH and once for SCP. If the number of remote hosts is x, the user would be prompted 2x times for passwords. For remote users with shared homes, the user would be prompted only twice, once each for SCP and SSH. For security reasons, the script does not save passwords and reuse them. Also, for security reasons, the script does not accept passwords redirected from a file. The user has to key in the confirmations and passwords at the prompts. "
    echo "-verify : The -verify option means that the user just wants to verify whether SSH has been set up. In this case, the script would not set up SSH but would only check whether SSH connectivity has been set up from the local host to the remote hosts. The script would run the date command on each remote host using SSH. In case the user is prompted for a password or sees a warning message for a particular host, it means SSH connectivity has not been set up correctly for that host. In case the -verify option is not specified, the script would set up SSH and then do the verification as well. "
    echo "-exverify : In case the user specifies the -exverify option, an exhaustive verification for all hosts would be done. In that case, the following would be checked: "
    echo "   1. SSH connectivity from local host to all remote hosts. "
    echo "   2. SSH connectivity from each remote host to itself and other remote hosts.  "
    echo "The -exverify option can be used in conjunction with the -verify option as well to do an exhaustive verification once the setup has been done."
    echo "Taking some examples: Let us say the local host is Z, remote hosts are A, B and C. The local user is njerath. Remote users are racqa (non-shared), aime (shared)."
    echo "$0 -user racqa -hosts \"A B C\" -advanced -exverify -confirm"
    echo "The script would set up connectivity from Z -> A, Z -> B, Z -> C, A -> A, A -> B, A -> C, B -> A, B -> B, B -> C, C -> A, C -> B, C -> C."
    echo "Since the user has given the -exverify option, all these scenarios would be verified too."
    echo
    echo "Now the user runs : $0 -user racqa -hosts \"A B C\" -verify"
    echo "Since the -verify option is given, no SSH setup would be done, only verification of the existing setup. Also, since the -exverify and -advanced options are not given, the script would only verify connectivity from Z -> A, Z -> B, Z -> C"

    echo "Now the user runs : $0 -user racqa -hosts \"A B C\" -verify -advanced"
    echo "Since the -verify option is given, no SSH setup would be done, only verification of the existing setup. Also, since the -advanced option is given, the script would verify connectivity from Z -> A, Z -> B, Z -> C, A -> A, A -> B, A -> C"

    echo "Now the user runs:"
    echo "$0 -user aime -hosts \"A B C\" -confirm -shared"
    echo "The script would set up connectivity for Z -> A, Z -> B, Z -> C only, since the -advanced option is not given."
    echo "All these scenarios would be verified too."

    exit
    fi

    if test -z "$HOSTS"
    then
       if test -n "$CLUSTER_CONFIGURATION_FILE" && test -f "$CLUSTER_CONFIGURATION_FILE"
       then
          HOSTS=`awk '$1 !~ /^#/ { str = str " " $1 } END { print str }' $CLUSTER_CONFIGURATION_FILE`
       elif ! test -f "$CLUSTER_CONFIGURATION_FILE"
       then
         echo "Please specify a valid and existing cluster configuration file."
       fi
    fi

    if  test -z "$HOSTS" || test -z "$USR"
    then
    echo "Either user name or host information is missing"
    echo "Usage $0 -user <user name> [ -hosts \"<space separated hostlist>\" | -hostfile <absolute path of cluster configuration file> ] [ -advanced ]  [ -verify] [ -exverify ] [ -logfile <desired absolute path of logfile> ] [-confirm] [-shared] [-help] [-usePassphrase] [-noPromptPassphrase]"
    exit 1
    fi

    if [ -d $LOGFILE ]; then
        echo $LOGFILE is a directory, setting logfile to $LOGFILE/ssh.log
        LOGFILE=$LOGFILE/ssh.log
    fi

    echo The output of this script is also logged into $LOGFILE | tee -a $LOGFILE

    if [ $? != 0 ]; then
        echo Error writing to the logfile $LOGFILE, Exiting
        exit 1
    fi

    echo Hosts are $HOSTS | tee -a $LOGFILE
    echo user is  $USR | tee -a $LOGFILE
    SSH="/usr/bin/ssh"
    SCP="/usr/bin/scp"
    SSH_KEYGEN="/usr/bin/ssh-keygen"
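    # Determine the local platform; the result is used below to choose the
    # correct ping binary and ping options for this OS.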
    calculateOS()
    {
        platform=`uname -s`
        case "$platform"
        in
           "SunOS")  os=solaris;;
           "Linux")  os=linux;;
           "HP-UX")  os=hpunix;;
             "AIX")  os=aix;;
                 *)  echo "Sorry, $platform is not currently supported." | tee -a $LOGFILE
                     exit 1;;
        esac

        echo "Platform:- $platform " | tee -a $LOGFILE
    }
    calculateOS
    BITS=1024
    ENCR="rsa"

    deadhosts=""
    alivehosts=""
    if [ $platform = "Linux" ]
    then
        PING="/bin/ping"
    else
        PING="/usr/sbin/ping"
    fi
    #bug 9044791
    if [ -n "$SSH_PATH" ]; then
        SSH=$SSH_PATH
    fi
    if [ -n "$SCP_PATH" ]; then
        SCP=$SCP_PATH
    fi
    if [ -n "$SSH_KEYGEN_PATH" ]; then
        SSH_KEYGEN=$SSH_KEYGEN_PATH
    fi
    if [ -n "$PING_PATH" ]; then
        PING=$PING_PATH
    fi
    PATH_ERROR=0
    if test ! -x $SSH ; then
        echo "ssh not found at $SSH. Please set the variable SSH_PATH to the correct location of ssh and retry."
        PATH_ERROR=1
    fi
    if test ! -x $SCP ; then
        echo "scp not found at $SCP. Please set the variable SCP_PATH to the correct location of scp and retry."
        PATH_ERROR=1
    fi
    if test ! -x $SSH_KEYGEN ; then
        echo "ssh-keygen not found at $SSH_KEYGEN. Please set the variable SSH_KEYGEN_PATH to the correct location of ssh-keygen and retry."
        PATH_ERROR=1
    fi
    if test ! -x $PING ; then
        echo "ping not found at $PING. Please set the variable PING_PATH to the correct location of ping and retry."
        PATH_ERROR=1
    fi
    if [ $PATH_ERROR = 1 ]; then
        echo "ERROR: one or more of the required binaries not found, exiting"
        exit 1
    fi
    #9044791 end
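    # Ping each remote host and classify it as reachable or unreachable;
    # the script exits if any host cannot be reached.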
    echo Checking if the remote hosts are reachable | tee -a $LOGFILE
    for host in $HOSTS
    do
       if [ $platform = "SunOS" ]; then
           $PING -s $host 5 5
       elif [ $platform = "HP-UX" ]; then
           $PING $host -n 5 -m 5
       else
           $PING -c 5 -w 5 $host
       fi
      exitcode=$?
      if [ $exitcode = 0 ]
      then
         alivehosts="$alivehosts $host"
      else
         deadhosts="$deadhosts $host"
      fi
    done

    if test -z "$deadhosts"
    then
       echo Remote host reachability check succeeded.  | tee -a $LOGFILE
       echo The following hosts are reachable: $alivehosts.  | tee -a $LOGFILE
       echo The following hosts are not reachable: $deadhosts.  | tee -a $LOGFILE
       echo All hosts are reachable. Proceeding further...  | tee -a $LOGFILE
    else
       echo Remote host reachability check failed.  | tee -a $LOGFILE
       echo The following hosts are reachable: $alivehosts.  | tee -a $LOGFILE
       echo The following hosts are not reachable: $deadhosts.  | tee -a $LOGFILE
       echo Please ensure that all the hosts are up and re-run the script.  | tee -a $LOGFILE
       echo Exiting now...  | tee -a $LOGFILE
       exit 1
    fi

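    # firsthost is used as the hub host for shared-home setups and for the
    # non-exhaustive -advanced verification; numhosts drives the estimate of
    # passphrase prompts shown to the user.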
    firsthost=`echo $HOSTS | awk '{print $1}; END { }'`
    echo firsthost $firsthost
    numhosts=`echo $HOSTS | awk '{ }; END {print NF}'`
    echo numhosts $numhosts

    if [ $VERIFY = "true" ]
    then
       echo Since the user has specified the -verify option, SSH setup would not be done. Only the existing SSH setup would be verified. | tee -a $LOGFILE
       :  # no setup needed; fall through to the verification section
    else
    echo The script will setup SSH connectivity from the host ''`hostname`'' to all  | tee -a $LOGFILE
    echo the remote hosts. After the script is executed, the user can use SSH to run  | tee -a $LOGFILE
    echo commands on the remote hosts or copy files between this host ''`hostname`'' | tee -a $LOGFILE
    echo and the remote hosts without being prompted for passwords or confirmations. | tee -a $LOGFILE
    echo  | tee -a $LOGFILE
    echo NOTE 1: | tee -a $LOGFILE
    echo As part of the setup procedure, this script will use 'ssh' and 'scp' to copy | tee -a $LOGFILE
    echo files between the local host and the remote hosts. Since the script does not  | tee -a $LOGFILE
    echo store passwords, you may be prompted for the passwords during the execution of  | tee -a $LOGFILE
    echo the script whenever 'ssh' or 'scp' is invoked. | tee -a $LOGFILE
    echo  | tee -a $LOGFILE
    echo NOTE 2: | tee -a $LOGFILE
    echo "AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY" | tee -a $LOGFILE
    echo AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE  | tee -a $LOGFILE
    echo "directories." | tee -a $LOGFILE
    echo  | tee -a $LOGFILE
    echo "Do you want to continue and let the script make the above mentioned changes (yes/no)?" | tee -a $LOGFILE

    if [ "$CONFIRM" = "no" ]
    then
      read CONFIRM
    else
      echo "Confirmation provided on the command line" | tee -a $LOGFILE
    fi
      
    echo  | tee -a $LOGFILE
    echo The user chose ''$CONFIRM'' | tee -a $LOGFILE
     
    if [ "$CONFIRM" = "no" ]
    then
      echo "SSH setup is not done." | tee -a $LOGFILE
      exit 1
    else
      if [ $NO_PROMPT_PASSPHRASE = "yes" ]
      then
        echo "User chose to skip passphrase related questions."  | tee -a $LOGFILE
      else
        typeset -i PASSPHRASE_PROMPT
        if [ $SHARED = "true" ]
        then
       PASSPHRASE_PROMPT=2*${numhosts}+1
        else
       PASSPHRASE_PROMPT=2*${numhosts}
        fi
        echo "Please indicate whether you want to specify a passphrase for the private key this script will create for the local host. The passphrase is used to encrypt the private key and makes SSH much more secure. Type 'yes' or 'no' and then press enter. In case you enter 'yes', you would need to enter the passphrase whenever the script executes ssh or scp. " | tee -a $LOGFILE
        echo "The estimated number of times the user would be prompted for a passphrase is $PASSPHRASE_PROMPT. In addition, if the private-public files are also newly created, the user would have to specify the passphrase on one additional occasion. " | tee -a $LOGFILE
        echo "Enter 'yes' or 'no'." | tee -a $LOGFILE
        if [ $PASSPHRASE = "no" ]
        then
          read PASSPHRASE
        else
          echo "Confirmation provided on the command line" | tee -a $LOGFILE
        fi

        echo  | tee -a $LOGFILE
        echo The user chose ''$PASSPHRASE'' | tee -a $LOGFILE

        if [ "$PASSPHRASE" = "yes" ]
        then
           RERUN_SSHKEYGEN="yes"
    #Checking for existence of ${IDENTITY} file
           if test -f  $HOME/.ssh/${IDENTITY}.pub && test -f  $HOME/.ssh/${IDENTITY}
           then
          echo "The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, enter 'no'. If you enter 'no', the script will not attempt to create any new public/private key pairs. If you enter 'yes', the script will remove the existing private/public key files and create new ones, prompting the user to enter the passphrase; any previous SSH user setups would be reset. If you enter 'change', the script will associate a new passphrase with the old keys." | tee -a $LOGFILE
          echo "Press 'yes', 'no' or 'change'" | tee -a $LOGFILE
                 read RERUN_SSHKEYGEN
                 echo The user chose ''$RERUN_SSHKEYGEN'' | tee -a $LOGFILE
           fi
         else
           if test -f  $HOME/.ssh/${IDENTITY}.pub && test -f  $HOME/.ssh/${IDENTITY}
           then
             echo "The files containing the client public and private keys already exist on the local host. The current private key may have a passphrase associated with it. In case you find using a passphrase inconvenient (although it is more secure), you can change it to an empty passphrase through this script. Enter 'change' if you want the script to change the passphrase for you. Enter 'no' if you want to keep your old passphrase, if you had one." | tee -a $LOGFILE
             read RERUN_SSHKEYGEN
             echo The user chose ''$RERUN_SSHKEYGEN'' | tee -a $LOGFILE
           fi
         fi
      fi
      echo Creating .ssh directory on local host, if not present already | tee -a $LOGFILE
      mkdir -p $HOME/.ssh | tee -a $LOGFILE
    echo Creating authorized_keys file on local host  | tee -a $LOGFILE
    touch $HOME/.ssh/authorized_keys  | tee -a $LOGFILE
    echo Changing permissions on authorized_keys to 644 on local host  | tee -a $LOGFILE
    chmod 644 $HOME/.ssh/authorized_keys  | tee -a $LOGFILE
    mv -f $HOME/.ssh/authorized_keys  $HOME/.ssh/authorized_keys.tmp | tee -a $LOGFILE
    echo Creating known_hosts file on local host  | tee -a $LOGFILE
    touch $HOME/.ssh/known_hosts  | tee -a $LOGFILE
    echo Changing permissions on known_hosts to 644 on local host  | tee -a $LOGFILE
    chmod 644 $HOME/.ssh/known_hosts  | tee -a $LOGFILE
    mv -f $HOME/.ssh/known_hosts $HOME/.ssh/known_hosts.tmp | tee -a $LOGFILE


    echo Creating config file on local host | tee -a $LOGFILE
    echo If a config file exists already at $HOME/.ssh/config, it would be backed up to $HOME/.ssh/config.backup.
    echo "Host *" > $HOME/.ssh/config.tmp | tee -a $LOGFILE
    echo "ForwardX11 no" >> $HOME/.ssh/config.tmp | tee -a $LOGFILE

    if test -f $HOME/.ssh/config
    then
      cp -f $HOME/.ssh/config $HOME/.ssh/config.backup
    fi

    mv -f $HOME/.ssh/config.tmp $HOME/.ssh/config  | tee -a $LOGFILE
    chmod 644 $HOME/.ssh/config

    if [ $RERUN_SSHKEYGEN = "yes" ]
    then
      echo Removing old private/public keys on local host | tee -a $LOGFILE
      rm -f $HOME/.ssh/${IDENTITY} | tee -a $LOGFILE
      rm -f $HOME/.ssh/${IDENTITY}.pub | tee -a $LOGFILE
      echo Running SSH keygen on local host | tee -a $LOGFILE
      $SSH_KEYGEN -t $ENCR -b $BITS -f $HOME/.ssh/${IDENTITY}   | tee -a $LOGFILE

    elif [ $RERUN_SSHKEYGEN = "change" ]
    then
        echo Running SSH Keygen on local host to change the passphrase associated with the existing private key | tee -a $LOGFILE
        $SSH_KEYGEN -p -t $ENCR -b $BITS -f $HOME/.ssh/${IDENTITY} | tee -a $LOGFILE
    elif test -f  $HOME/.ssh/${IDENTITY}.pub && test -f  $HOME/.ssh/${IDENTITY}
    then
        :  # existing key pair is reused as-is
    else
        echo Removing old private/public keys on local host | tee -a $LOGFILE
        rm -f $HOME/.ssh/${IDENTITY} | tee -a $LOGFILE
        rm -f $HOME/.ssh/${IDENTITY}.pub | tee -a $LOGFILE
        echo Running SSH keygen on local host with empty passphrase | tee -a $LOGFILE
        $SSH_KEYGEN -t $ENCR -b $BITS -f $HOME/.ssh/${IDENTITY} -N ''  | tee -a $LOGFILE
    fi

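    # Work out which hosts need remote changes: with a shared home only one
    # host has to be touched (or none, when the remote user is the local
    # user); otherwise every remote host is prepared individually.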
    if [ $SHARED = "true" ]
    then
      if [ $USER = $USR ]
      then
    #No remote operations required
        echo Remote user is same as local user | tee -a $LOGFILE
        REMOTEHOSTS=""
        chmod og-w $HOME $HOME/.ssh | tee -a $LOGFILE
      else   
        REMOTEHOSTS="${firsthost}"
      fi
    else
      REMOTEHOSTS="$HOSTS"
    fi

    for host in $REMOTEHOSTS
    do
         echo Creating .ssh directory and setting permissions on remote host $host | tee -a $LOGFILE
         echo "THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR "group" AND "others" ON THE HOME DIRECTORY FOR $USR. THIS IS AN SSH REQUIREMENT." | tee -a $LOGFILE
         echo The script would create ~$USR/.ssh/config file on remote host $host. If a config file exists already at ~$USR/.ssh/config, it would be backed up to ~$USR/.ssh/config.backup. | tee -a $LOGFILE
         echo The user may be prompted for a password here since the script would be running SSH on host $host. | tee -a $LOGFILE
         $SSH -o StrictHostKeyChecking=no -x -l $USR $host "/bin/sh -c \"  mkdir -p .ssh ; chmod og-w . .ssh;   touch .ssh/authorized_keys .ssh/known_hosts;  chmod 644 .ssh/authorized_keys  .ssh/known_hosts; cp  .ssh/authorized_keys .ssh/authorized_keys.tmp ;  cp .ssh/known_hosts .ssh/known_hosts.tmp; echo \\"Host *\\" > .ssh/config.tmp; echo \\"ForwardX11 no\\" >> .ssh/config.tmp; if test -f  .ssh/config ; then cp -f .ssh/config .ssh/config.backup; fi ; mv -f .ssh/config.tmp .ssh/config\""  | tee -a $LOGFILE
         echo Done with creating .ssh directory and setting permissions on remote host $host. | tee -a $LOGFILE
    done

    for host in $REMOTEHOSTS
    do
      echo Copying local host public key to the remote host $host | tee -a $LOGFILE
      echo The user may be prompted for a password or passphrase here since the script would be using SCP for host $host. | tee -a $LOGFILE

      $SCP $HOME/.ssh/${IDENTITY}.pub  $USR@$host:.ssh/authorized_keys | tee -a $LOGFILE
      echo Done copying local host public key to the remote host $host | tee -a $LOGFILE
    done

    cat $HOME/.ssh/${IDENTITY}.pub >> $HOME/.ssh/authorized_keys | tee -a $LOGFILE

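    # In -advanced mode, create a key pair on each remote host if one does not
    # exist yet; for shared homes each host gets its own identity file
    # (${IDENTITY}_<host>) whose public key is appended to authorized_keys.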
    for host in $HOSTS
    do
      if [ $ADVANCED = "true" ]
      then
        echo Creating keys on remote host $host if they do not exist already. This is required to setup SSH on host $host. | tee -a $LOGFILE
        if [ $SHARED = "true" ]
        then
          IDENTITY_FILE_NAME=${IDENTITY}_$host
          COALESCE_IDENTITY_FILES_COMMAND="cat .ssh/${IDENTITY_FILE_NAME}.pub >> .ssh/authorized_keys"
        else
          IDENTITY_FILE_NAME=${IDENTITY}
        fi

       $SSH  -o StrictHostKeyChecking=no -x -l $USR $host " /bin/sh -c \"if test -f  .ssh/${IDENTITY_FILE_NAME}.pub && test -f  .ssh/${IDENTITY_FILE_NAME}; then echo; else rm -f .ssh/${IDENTITY_FILE_NAME} ;  rm -f .ssh/${IDENTITY_FILE_NAME}.pub ;  $SSH_KEYGEN -t $ENCR -b $BITS -f .ssh/${IDENTITY_FILE_NAME} -N '' ; fi; ${COALESCE_IDENTITY_FILES_COMMAND} \"" | tee -a $LOGFILE
      else
    #At least get the host keys from all hosts for shared case - advanced option not set
        if test  $SHARED = "true" && test $ADVANCED = "false"
        then
          if [ $PASSPHRASE = "yes" ]
          then
      echo "The script will fetch the host keys from all hosts. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase." | tee -a $LOGFILE
          fi
          $SSH  -o StrictHostKeyChecking=no -x -l $USR $host "/bin/sh -c true"
        fi
      fi
    done

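    # Advanced, non-shared setup: fetch each remote public key and append it
    # to the local authorized_keys file.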
    for host in $REMOTEHOSTS
    do
      if test $ADVANCED = "true" && test $SHARED = "false" 
      then
          $SCP $USR@$host:.ssh/${IDENTITY}.pub $HOME/.ssh/${IDENTITY}.pub.$host | tee -a $LOGFILE
          cat $HOME/.ssh/${IDENTITY}.pub.$host >> $HOME/.ssh/authorized_keys | tee -a $LOGFILE
          rm -f $HOME/.ssh/${IDENTITY}.pub.$host | tee -a $LOGFILE
        fi
    done

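    # Push the merged authorized_keys and known_hosts files back out (advanced
    # mode) and re-append the entries saved earlier in the .tmp files.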
    for host in $REMOTEHOSTS
    do
       if [ $ADVANCED = "true" ]
       then
          if [ $SHARED != "true" ]
          then
             echo Updating authorized_keys file on remote host $host | tee -a $LOGFILE
             $SCP $HOME/.ssh/authorized_keys  $USR@$host:.ssh/authorized_keys | tee -a $LOGFILE
          fi
         echo Updating known_hosts file on remote host $host | tee -a $LOGFILE
         $SCP $HOME/.ssh/known_hosts $USR@$host:.ssh/known_hosts | tee -a $LOGFILE
       fi
       if [ $PASSPHRASE = "yes" ]
       then
      echo "The script will run SSH on the remote machine $host. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase." | tee -a $LOGFILE
       fi
         $SSH -x -l $USR $host "/bin/sh -c \"cat .ssh/authorized_keys.tmp >> .ssh/authorized_keys; cat .ssh/known_hosts.tmp >> .ssh/known_hosts; rm -f  .ssh/known_hosts.tmp  .ssh/authorized_keys.tmp\"" | tee -a $LOGFILE
    done

    cat  $HOME/.ssh/known_hosts.tmp >> $HOME/.ssh/known_hosts | tee -a $LOGFILE
    cat  $HOME/.ssh/authorized_keys.tmp >> $HOME/.ssh/authorized_keys | tee -a $LOGFILE
    #Added chmod to fix BUG NO 5238814
    chmod 644 $HOME/.ssh/authorized_keys
    #Fix for BUG NO 5157782
    chmod 644 $HOME/.ssh/config
    rm -f  $HOME/.ssh/known_hosts.tmp $HOME/.ssh/authorized_keys.tmp | tee -a $LOGFILE
    echo SSH setup is complete. | tee -a $LOGFILE
    fi
    fi

    echo                                                                          | tee -a $LOGFILE
    echo ------------------------------------------------------------------------ | tee -a $LOGFILE
    echo Verifying SSH setup | tee -a $LOGFILE
    echo =================== | tee -a $LOGFILE
    echo The script will now run the 'date' command on the remote nodes using ssh | tee -a $LOGFILE
    echo to verify if ssh is set up correctly. IF SSH IS SET UP CORRECTLY,  | tee -a $LOGFILE
    echo THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR | tee -a $LOGFILE
    echo PASSWORDS. If you see any output other than date or are prompted for the | tee -a $LOGFILE
    echo password, ssh is not setup correctly and you will need to resolve the  | tee -a $LOGFILE
    echo issue and set up ssh again. | tee -a $LOGFILE
    echo The possible causes for failure could be:  | tee -a $LOGFILE
    echo   1. The server settings in /etc/ssh/sshd_config file do not allow ssh | tee -a $LOGFILE
    echo      for user $USR. | tee -a $LOGFILE
    echo   2. The server may have disabled public key based authentication. | tee -a $LOGFILE
    echo   3. The client public key on the server may be outdated. | tee -a $LOGFILE
    echo   4. ~$USR or  ~$USR/.ssh on the remote host may not be owned by $USR.  | tee -a $LOGFILE
    echo   5. User may not have passed -shared option for shared remote users or | tee -a $LOGFILE
    echo     may be passing the -shared option for non-shared remote users.  | tee -a $LOGFILE
    echo   6. If there is output in addition to the date, but no password is asked, | tee -a $LOGFILE
    echo   it may be a security alert shown as part of company policy. Append the | tee -a $LOGFILE
    echo   "additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file." | tee -a $LOGFILE
    echo ------------------------------------------------------------------------ | tee -a $LOGFILE
    #read -t 30 dummy
      for host in $HOSTS
      do
        echo --$host:-- | tee -a $LOGFILE

         echo Running $SSH -x -l $USR $host date to verify SSH connectivity has been setup from local host to $host.  | tee -a $LOGFILE
         echo "IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR." | tee -a $LOGFILE
         if [ $PASSPHRASE = "yes" ]
         then
           echo "The script will run SSH on the remote machine $host. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase." | tee -a $LOGFILE
         fi
         $SSH -l $USR $host "/bin/sh -c date"  | tee -a $LOGFILE
    echo ------------------------------------------------------------------------ | tee -a $LOGFILE
      done


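    # With -exverify, check SSH from every host to every other host; otherwise,
    # with -advanced, only verify from the first host to all hosts.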
    if [ $EXHAUSTIVE_VERIFY = "true" ]
    then
       for clienthost in $HOSTS
       do

          if [ $SHARED = "true" ]
          then
             REMOTESSH="$SSH -i .ssh/${IDENTITY}_${clienthost}"
          else
             REMOTESSH=$SSH
          fi

          for serverhost in  $HOSTS
          do
             echo ------------------------------------------------------------------------ | tee -a $LOGFILE
             echo Verifying SSH connectivity has been setup from $clienthost to $serverhost  | tee -a $LOGFILE
             echo ------------------------------------------------------------------------ | tee -a $LOGFILE
             echo "IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL."  | tee -a $LOGFILE
             $SSH -l $USR $clienthost "$REMOTESSH $serverhost \"/bin/sh -c date\""  | tee -a $LOGFILE
             echo ------------------------------------------------------------------------ | tee -a $LOGFILE
          done 
           echo -Verification from $clienthost complete- | tee -a $LOGFILE
       done
    else
       if [ $ADVANCED = "true" ]
       then
          if [ $SHARED = "true" ]
          then
             REMOTESSH="$SSH -i .ssh/${IDENTITY}_${firsthost}"
          else
             REMOTESSH=$SSH
          fi
         for host in $HOSTS
         do
             echo ------------------------------------------------------------------------ | tee -a $LOGFILE
            echo Verifying SSH connectivity has been setup from $firsthost to $host  | tee -a $LOGFILE
            echo "IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL." | tee -a $LOGFILE
           $SSH -l $USR $firsthost "/bin/sh -c \"$REMOTESSH $host \\"/bin/sh -c date\\"\"" | tee -a $LOGFILE
             echo ------------------------------------------------------------------------ | tee -a $LOGFILE
        done
        echo -Verification from $firsthost complete- | tee -a $LOGFILE
      fi
    fi
    echo "SSH verification complete." | tee -a $LOGFILE

    Posted by pat98

    2015. 12. 30. 10:55 Oracle

    Manual DB creation



    Manual DB creation procedure

     

    1. Create the directory: mkdir -p /oracle/ora920/dbs

     

    2. Create the init parameter file: vi /oracle/ora920/dbs/initORA9.ora

     

    *.background_dump_dest='/oracle/admin/ORA9/bdump'
    *.compatible='9.2.0.0.0'
    *.control_files='/oradata/ORA9/control01.ctl','/oradata/ORA9/control02.ctl','/oradata/ORA9/control03.ctl'
    *.core_dump_dest='/oracle/admin/ORA9/cdump'
    *.db_block_size=8192
    *.db_cache_size=629145600
    *.db_file_multiblock_read_count=16
    *.db_name='ORA9'
    *.instance_name='ORA9'
    *.large_pool_size=314572800
    *.processes=150
    *.sga_max_size=1572864000
    *.shared_pool_size=209715200
    *.sort_area_size=524288
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS01'
    *.user_dump_dest='/oracle/admin/ORA9/udump'

     

    (For 11g)

    ORCL.__db_cache_size=213909504
    ORCL.__java_pool_size=4194304
    ORCL.__large_pool_size=4194304
    ORCL.__oracle_base='/oracle/app/oracle '#ORACLE_BASE set from environment
    ORCL.__pga_aggregate_target=251658240
    ORCL.__sga_target=369098752
    ORCL.__shared_io_pool_size=0
    ORCL.__shared_pool_size=138412032
    ORCL.__streams_pool_size=0
    *.audit_file_dest='/oracle/app/oracle/admin/ORCL/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files='/data/ORCL/control01.ctl','/data/ORCL/control02.ctl','/data/ORCL/control03.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='ORCL'
    *.diagnostic_dest='/oracle/app/oracle'
    *.memory_target=620756992
    *.open_cursors=300
    *.processes=500
    *.remote_login_passwordfile='EXCLUSIVE'
    *.undo_tablespace='UNDOTBS1'

     

    3. cd  /oracle/ora920/dbs

    orapwd file=orapwORA9 password=manager entries=5
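
    Once the database is open, you can optionally confirm that the password file is in effect. A minimal check, assuming the file name orapwORA9 matches the ORACLE_SID (by default the password file is expected at $ORACLE_HOME/dbs/orapw<ORACLE_SID> on Unix):

    echo 'select * from v$pwfile_users;' | sqlplus -s "/ as sysdba"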

     

    4.

    mkdir -p /oracle/admin/ORA9/bdump
    mkdir -p /oracle/admin/ORA9/cdump
    mkdir -p /oracle/admin/ORA9/udump


    5. Create the cr_db.sql file

    SQL> startup nomount

    @cr_db.sql


    - For 9i

    CREATE DATABASE ORA9
    MAXINSTANCES 1
    MAXLOGHISTORY 1
    MAXLOGFILES 10
    MAXLOGMEMBERS 10
    MAXDATAFILES 100
    DATAFILE '/oradata/ORA9/system01.dbf' SIZE 500M REUSE AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL
    DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '/oradata/ORA9/temp01.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT  640K MAXSIZE UNLIMITED
    UNDO TABLESPACE "UNDOTBS01" DATAFILE '/oradata/ORA9/undotbs01.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT  5120K MAXSIZE UNLIMITED
    CHARACTER SET KO16KSC5601
    NATIONAL CHARACTER SET AL16UTF16
    LOGFILE GROUP 1 ('/oradata/ORA9/redo01.log') SIZE 102400K,
    GROUP 2 ('/oradata/ORA9/redo02.log') SIZE 102400K,
    GROUP 3 ('/oradata/ORA9/redo03.log') SIZE 102400K;

     

    - For 10g

    CREATE DATABASE ORCL
    MAXINSTANCES 1
    MAXLOGHISTORY 1
    MAXLOGFILES 32
    MAXLOGMEMBERS 5
    MAXDATAFILES 2000
    DATAFILE '/userc/oraprod/proddata/system01.dbf' SIZE 500M REUSE AUTOEXTEND ON NEXT  10240K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL
    DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '/oradata/temp01.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT  640K MAXSIZE UNLIMITED
    UNDO TABLESPACE "UNDOTBS01" DATAFILE '/oradata/undotbs01.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT  5120K MAXSIZE UNLIMITED
    SYSAUX DATAFILE '/oradata/sysaux01.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
    CHARACTER SET KO16KSC5601
    NATIONAL CHARACTER SET AL16UTF16
    LOGFILE GROUP 1 ('/oradata/redo01.log') SIZE 102400K,
    GROUP 2 ('/oradata/redo02.log') SIZE 102400K,
    GROUP 3 ('/oradata/redo03.log') SIZE 102400K ;

     

    - For 11g

    CREATE DATABASE ORCL
    USER SYS IDENTIFIED BY manager
    USER SYSTEM IDENTIFIED BY manager
    LOGFILE GROUP 1 ('/data/ORCL/redo01.log') SIZE 100M,
    GROUP 2 ('/data/ORCL/redo02.log') SIZE 100M,
    GROUP 3 ('/data/ORCL/redo03.log') SIZE 100M
    MAXLOGFILES 5
    MAXLOGMEMBERS 5
    MAXLOGHISTORY 1
    MAXDATAFILES 1000
    MAXINSTANCES 1
    CHARACTER SET AL32UTF8
    NATIONAL CHARACTER SET AL16UTF16
    DATAFILE '/data/ORCL/system01.dbf' SIZE 1000M REUSE
    EXTENT MANAGEMENT LOCAL
    SYSAUX DATAFILE '/data/ORCL/sysaux01.dbf' SIZE 1000M REUSE
    DEFAULT TEMPORARY TABLESPACE temp
    TEMPFILE '/data/ORCL/temp01.dbf'
    SIZE 1000M REUSE
    UNDO TABLESPACE UNDOTBS1
    DATAFILE '/data/ORCL/undotbs01.dbf'
    SIZE 1000M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
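
    Putting steps 2 and 5 together, running the creation script from the shell might look like the sketch below (shown for the 9i example; the 10g/11g cases are analogous). The ORACLE_SID value, the pfile path from step 2, and cr_db.sql being in the current directory are assumptions for illustration:

    export ORACLE_SID=ORA9
    # start the instance with the hand-written pfile, run cr_db.sql, then exit
    { echo "startup nomount pfile=/oracle/ora920/dbs/initORA9.ora"
      echo "@cr_db.sql"
      echo "exit"
    } | sqlplus "/ as sysdba"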

     

    6.

    Run as SYSDBA:

    @?/rdbms/admin/catalog.sql
    @?/rdbms/admin/catproc.sql

    Run as the SYSTEM user:

    @?/sqlplus/admin/pupbld.sql
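
    The same step can be scripted non-interactively; a rough sketch, assuming the SYS/SYSTEM password 'manager' used earlier in this post:

    # data dictionary and PL/SQL packages, run as SYSDBA
    { echo "@?/rdbms/admin/catalog.sql"
      echo "@?/rdbms/admin/catproc.sql"
      echo "exit"
    } | sqlplus "/ as sysdba"

    # product user profile tables, run as SYSTEM
    { echo "@?/sqlplus/admin/pupbld.sql"
      echo "exit"
    } | sqlplus system/manager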

     

    Posted by pat98
