The 11.2.0.4 patch set for Windows was released on 10/25.

I checked, and it installs without a problem.

Now you can install on Windows Server 2012 to your heart's content!

Posted by pat98

12c works, of course. 11g does not yet; it is said installation will be possible once 11.2.0.4 comes out (release scheduled for Q4 2013).

Below is what happens when you try installing 11.2.0.3 on Windows 2012: the installer hangs and the install fails.

Don't waste effort on something that doesn't work; waiting is better for your mental health. ^^


Initial ASM parameters after installation

 

SQL> show parameter

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string
asm_diskstring                       string
asm_power_limit                      integer     1
asm_preferred_read_failure_groups    string
audit_file_dest                      string      /u01/11.2.0/grid/rdbms/audit
audit_sys_operations                 boolean     FALSE
audit_syslog_level                   string
background_core_dump                 string      partial
background_dump_dest                 string      /u01/app/oracle/diag/asm/+asm/
                                                 +ASM1/trace
cluster_database                     boolean     TRUE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database_instances           integer     4
cluster_interconnects                string
core_dump_dest                       string      /u01/app/oracle/diag/asm/+asm/
                                                 +ASM1/cdump
cpu_count                            integer     1
db_cache_size                        big integer 0
db_ultra_safe                        string      OFF
db_unique_name                       string      +ASM
diagnostic_dest                      string      /u01/app/oracle
event                                string
file_mapping                         boolean     FALSE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      none
ifile                                file
instance_name                        string      +ASM1
instance_number                      integer     1
instance_type                        string      asm
large_pool_size                      big integer 12M
ldap_directory_sysauth               string      no
listener_networks                    string
local_listener                       string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                                 DRESS=(PROTOCOL=TCP)(HOST=192.
                                                 168.56.102)(PORT=1521))))

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_name_space                      string
lock_sga                             boolean     FALSE
max_dump_file_size                   string      unlimited
memory_max_target                    big integer 272M
memory_target                        big integer 272M
nls_calendar                         string
nls_comp                             string      BINARY
nls_currency                         string
nls_date_format                      string
nls_date_language                    string
nls_dual_currency                    string

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
nls_iso_currency                     string
nls_language                         string      AMERICAN
nls_length_semantics                 string      BYTE
nls_nchar_conv_excp                  string      FALSE
nls_numeric_characters               string
nls_sort                             string
nls_territory                        string      AMERICA
nls_time_format                      string
nls_time_tz_format                   string
nls_timestamp_format                 string
nls_timestamp_tz_format              string

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
os_authent_prefix                    string      ops$
os_roles                             boolean     FALSE
parallel_execution_message_size      integer     16384
pga_aggregate_target                 big integer 0
processes                            integer     120
remote_listener                      string
remote_login_passwordfile            string      EXCLUSIVE
remote_os_authent                    boolean     FALSE
remote_os_roles                      boolean     FALSE
service_names                        string      +ASM
sessions                             integer     202

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_max_size                         big integer 272M
sga_target                           big integer 0
shadow_core_dump                     string      partial
shared_pool_reserved_size            big integer 5872025
shared_pool_size                     big integer 0
sort_area_size                       integer     65536
spfile                               string      +DATA/rac-cluster/asmparameter
                                                 file/registry.253.813896719
sql_trace                            boolean     FALSE
statistics_level                     string      TYPICAL
timed_os_statistics                  integer     0

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
timed_statistics                     boolean     TRUE
trace_enabled                        boolean     TRUE
use_large_pages                      string      TRUE
user_dump_dest                       string      /u01/app/oracle/diag/asm/+asm/
                                                 +ASM1/trace
workarea_size_policy                 string      AUTO
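Not Oracle-specific, but handy when working with output like the above: a single value can be pulled out of captured `show parameter` output with awk. A sketch over a tiny sample; asm_params.txt is a hypothetical capture file, not something the original post created.

```shell
# Build a small sample in the three-column NAME / TYPE / VALUE layout
# shown above (asm_params.txt is a made-up capture, not a real spool).
cat > asm_params.txt <<'EOF'
memory_max_target                    big integer 272M
memory_target                        big integer 272M
processes                            integer     120
EOF

# Column 1 is the parameter name; the last field is the value.
awk '$1 == "memory_target" { print $NF }' asm_params.txt   # prints 272M
```

Note this simple version misses wrapped VALUE columns (like the spfile path above), which spill onto a continuation line.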


Removing Grid Infrastructure after a failed 11g RAC installation

 

The following steps remove it cleanly.

There are two possible situations: root.sh has not been run yet, or root.sh has already been run.


1. If root.sh has never been run, as the grid user:

$ $GRID_HOME/deinstall/deinstall


2. If root.sh has already been run:

As the root user:

1) $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

2) $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

(Run this on the last node; the -lastnode option also deletes the OCR and voting disks.)

3) As the grid user:

$ $GRID_HOME/deinstall/deinstall
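If it is unclear whether root.sh ever ran, one heuristic: on Linux, root.sh creates /etc/oracle/ocr.loc. A sketch of a helper that picks the removal path from that file's presence (the path argument exists only so the sketch can be exercised; the default is the real location):

```shell
# Heuristic sketch (assumption: root.sh on Linux creates /etc/oracle/ocr.loc).
# Prints which removal procedure applies; it does not execute anything itself.
pick_removal_path() {
    ocrloc="${1:-/etc/oracle/ocr.loc}"
    if [ -f "$ocrloc" ]; then
        echo "root.sh already run: rootcrs.pl -deconfig as root, then deinstall as grid"
    else
        echo "root.sh never run: just run \$GRID_HOME/deinstall/deinstall as grid"
    fi
}

pick_removal_path /nonexistent/ocr.loc    # prints the "never run" branch
```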


OCR disk failure test

 

Test environment: Red Hat Linux 5.8, Oracle 11gR2 11.2.0.3

 

===========================================================================================

With CRS running,

wipe the OCR disk with dd:


[root@rac1 /root]# dd if=/dev/zero of=/dev/raw/raw2 bs=8192
dd: writing `/dev/raw/raw2': No space left on device
38401+0 records in
38400+0 records out
314572800 bytes (315 MB) copied, 438.815 seconds, 717 kB/s
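The record counts above are internally consistent: 38400 complete records of 8192 bytes is exactly 314572800 bytes. The same wipe-and-verify pattern can be reproduced on a small scratch file (a temp file here, not a real raw device):

```shell
# Check dd's arithmetic: records written x block size = bytes reported.
echo $((38400 * 8192))        # 314572800, matching dd's reported byte count

# Miniature reproduction of the wipe on a scratch temp file (16 records).
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=8192 count=16 2>/dev/null

# Every byte should now be zero, just like the wiped OCR disk.
cmp -n $((16 * 8192)) "$scratch" /dev/zero && echo "fully zeroed"
rm -f "$scratch"
```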

[root@rac1 /root]# ps -ef |grep d.bin
root      6082     1  0 17:25 ?        00:00:08 /u01/11.2.0/grid/bin/ohasd.bin reboot
oracle    6208     1  0 17:25 ?        00:00:01 /u01/11.2.0/grid/bin/oraagent.bin
oracle    6222     1  0 17:25 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
oracle    6234     1  0 17:25 ?        00:00:01 /u01/11.2.0/grid/bin/gpnpd.bin
oracle    6247     1  0 17:25 ?        00:00:03 /u01/11.2.0/grid/bin/gipcd.bin
root      6258     1  0 17:25 ?        00:00:01 /u01/11.2.0/grid/bin/cssdmonitor
root      6274     1  0 17:25 ?        00:00:01 /u01/11.2.0/grid/bin/cssdagent
oracle    6288     1  0 17:25 ?        00:00:07 /u01/11.2.0/grid/bin/ocssd.bin
root      6291     1  0 17:25 ?        00:00:06 /u01/11.2.0/grid/bin/orarootagent.bin
root      6305     1  1 17:25 ?        00:00:13 /u01/11.2.0/grid/bin/osysmond.bin
root      6416     1  0 17:26 ?        00:00:05 /u01/11.2.0/grid/bin/octssd.bin reboot
root      6438     1  1 17:26 ?        00:00:09 /u01/11.2.0/grid/bin/crsd.bin reboot
oracle    6443     1  0 17:26 ?        00:00:03 /u01/11.2.0/grid/bin/evmd.bin
oracle    6526  6443  0 17:28 ?        00:00:00 /u01/11.2.0/grid/bin/evmlogger.bin -o /u01/11.2.0/grid/evm/log/evmlogger.info -l /u01/11.2.0/grid/evm/log/evmlogger.log
root      6563     1  2 17:28 ?        00:00:17 /u01/11.2.0/grid/bin/ologgerd -M -d /u01/11.2.0/grid/crf/db/rac1
root      6730     1  0 17:36 ?        00:00:02 /u01/11.2.0/grid/bin/orarootagent.bin
oracle    6884     1  0 17:36 ?        00:00:01 /u01/11.2.0/grid/bin/oraagent.bin
oracle    6938     1  0 17:37 ?        00:00:00 /u01/11.2.0/grid/bin/tnslsnr LISTENER -inherit
root      7256  4440  0 17:40 pts/2    00:00:00 grep d.bin

 

[root@rac1 /root]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

 

[root@rac2 /root]# ocrconfig -restore /dev/raw/raw2
PROT-19: Cannot proceed while the Cluster Ready Service is running

 

[root@rac1 /root]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2728
         Available space (kbytes) :     259392
         ID                       : 1682702384
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

 

Since the OCR cannot be restored while CRS is running (the PROT-19 error above), disabled CRS with crsctl disable crs and rebooted.

 

[root@rac1 /root]# ocrconfig -restore /u01/11.2.0/grid/cdata/rac-cluster/backup00.ocr


[root@rac1 /root]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2732
         Available space (kbytes) :     259388
         ID                       : 1682702384
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

=================================================================================

With CRS running,

an alternative method: remove the disk with ocrconfig -delete.


[root@rac1 /root]# ps -ef |grep d.bin
root      3737     1  1 10:01 ?        00:00:37 /u01/11.2.0/grid/bin/ohasd.bin reboot
oracle    3863     1  0 10:01 ?        00:00:06 /u01/11.2.0/grid/bin/oraagent.bin
oracle    3877     1  0 10:01 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
oracle    3889     1  0 10:01 ?        00:00:03 /u01/11.2.0/grid/bin/gpnpd.bin
oracle    3902     1  0 10:01 ?        00:00:11 /u01/11.2.0/grid/bin/gipcd.bin
root      3904     1  0 10:01 ?        00:00:24 /u01/11.2.0/grid/bin/orarootagent.bin
root      3923     1  1 10:01 ?        00:01:00 /u01/11.2.0/grid/bin/osysmond.bin
root      3938     1  0 10:01 ?        00:00:05 /u01/11.2.0/grid/bin/cssdmonitor
root      3959     1  0 10:01 ?        00:00:04 /u01/11.2.0/grid/bin/cssdagent
oracle    3973     1  0 10:01 ?        00:00:24 /u01/11.2.0/grid/bin/ocssd.bin
root      4062     1  1 10:02 ?        00:00:38 /u01/11.2.0/grid/bin/octssd.bin reboot
root      4085     1  1 10:02 ?        00:01:05 /u01/11.2.0/grid/bin/crsd.bin reboot
oracle    4089     1  0 10:02 ?        00:00:30 /u01/11.2.0/grid/bin/evmd.bin
oracle    4173  4089  0 10:03 ?        00:00:00 /u01/11.2.0/grid/bin/evmlogger.bin -o /u01/11.2.0/grid/evm/log/evmlogger.info -l /u01/11.2.0/grid/evm/log/evmlogger.log
root      4226     1  0 10:05 ?        00:00:21 /u01/11.2.0/grid/bin/orarootagent.bin
root      4262     1  2 10:05 ?        00:01:33 /u01/11.2.0/grid/bin/ologgerd -M -d /u01/11.2.0/grid/crf/db/rac1
oracle    4389     1  0 10:05 ?        00:00:08 /u01/11.2.0/grid/bin/oraagent.bin
oracle    4434     1  0 10:06 ?        00:00:00 /u01/11.2.0/grid/bin/tnslsnr LISTENER -inherit
root      5559  3682  0 10:57 pts/1    00:00:00 grep d.bin

 

[root@rac1 /root]# ocrconfig -delete /dev/raw/raw2

[root@rac1 /u01/11.2.0/grid/log/rac1/crsd]# ls -al
total 6720
drwxr-x---  2 root oinstall    4096 Jul  4 11:16 .
drwxr-xr-t 24 root oinstall    4096 Apr 28 02:22 ..
-rw-r--r--  1 root root     6855546 Jul  4 11:11 crsd.log
-rw-r--r--  1 root root        2180 Jul  4 10:02 crsdOUT.log

2013-07-04 11:01:44.327: [UiServer][2927553424] {1:60636:135} Done for ctx=0xa5e0928
2013-07-04 11:02:56.746: [  OCRRAW][2992073616]propriowv_bootbuf: Vote information on disk 0 [/dev/raw/raw1] is adjusted from [1/2] to [2/2]
[  OCRMAS][3002579856]th_master: Received group private data event. Incarnation [2]
[  OCRMAS][3002579856]th_master: Received group private data event. Incarnation [3]
[  OCRMAS][3002579856]th_master: Received group private data event. Incarnation [4]
2013-07-04 11:03:04.291: [  OCRRAW][2992073616]proprioo: for disk 0 (/dev/raw/raw1), id match (1), total id sets, (2) need recover (0), my votes (2), total votes (2), commit_lsn (1257), lsn (1257)
2013-07-04 11:03:04.291: [  OCRRAW][2992073616]proprioo: my id set: (1669906634, 1028247821, 0, 0, 0)
2013-07-04 11:03:04.291: [  OCRRAW][2992073616]proprioo: 1st set: (1669906634, 188263131, 0, 0, 0)
2013-07-04 11:03:04.291: [  OCRRAW][2992073616]proprioo: 2nd set: (1669906634, 1028247821, 0, 0, 0)
2013-07-04 11:03:04.519: [  OCRAPI][2992073616]u_masmd:11: clscrs_register_resource2 succeeded [0]. Return [0]
2013-07-04 11:03:04.530: [  OCRSRV][2992073616]proath_update_grppubdata: Successfully updated and published the configured devices in public data.


[root@rac1 /root]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2732
         Available space (kbytes) :     259388
         ID                       : 1682702384
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

 

[root@rac1 /root]# ocrconfig -add /dev/raw/raw2

2013-07-04 11:27:00.746: [  OCRRAW][2989972368]proprioo: my id set: (1669906634, 188263131, 0, 0, 0)
2013-07-04 11:27:00.746: [  OCRRAW][2989972368]proprioo: 1st set: (1669906634, 1028247821, 0, 0, 0)
2013-07-04 11:27:00.746: [  OCRRAW][2989972368]proprioo: 2nd set: (1669906634, 188263131, 0, 0, 0)
2013-07-04 11:27:00.900: [  OCRRAW][2989972368]propriogid:1_2: INVALID FORMAT
[  OCRMAS][3002579856]th_master: Received group private data event. Incarnation [6]
2013-07-04 11:27:04.512: [  OCRRAW][2989972368]propriowv_bootbuf: Vote information on disk 1 [/dev/raw/raw2] is adjusted from [0/0] to [1/2]
2013-07-04 11:27:04.528: [  OCRRAW][2989972368]propriowv_bootbuf: Vote information on disk 0 [/dev/raw/raw1] is adjusted from [2/2] to [1/2]

 

[root@rac1 /root]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2732
         Available space (kbytes) :     259388
         ID                       : 1682702384
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded


Voting disk failure test

 

Test environment: Red Hat Linux 5.8, Oracle 11gR2 11.2.0.3


===========================================================================================

Case 1: one voting disk wiped

 

[root@rac1 /root]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. ONLINE   706c29fba45d4f4cbfee1c9f177fb8ab (/dev/raw/raw4) []
 3. ONLINE   99a7e17b746b4f00bfdc2ca37bc7ec8a (/dev/raw/raw5) []

Stopped CRS to make the dd faster. You can dd while the disk is ONLINE, but it takes much longer.

[root@rac1 /root]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
....
(rest of output omitted)

[root@rac1 /root]# dd if=/dev/zero of=/dev/raw/raw5 bs=8192
dd: writing `/dev/raw/raw5': No space left on device
38401+0 records in
38400+0 records out
314572800 bytes (315 MB) copied, 20.3524 seconds, 15.5 MB/s

[root@rac1 /root]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.


[root@rac1 /root]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. ONLINE   706c29fba45d4f4cbfee1c9f177fb8ab (/dev/raw/raw4) []
 3. OFFLINE  99a7e17b746b4f00bfdc2ca37bc7ec8a () []
Located 3 voting disk(s).


[root@rac1 /root]# crsctl add css votedisk /dev/raw/raw5
Now formatting voting disk: /dev/raw/raw5.
CRS-4603: Successful addition of voting disk /dev/raw/raw5.

[root@rac1 /root]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. ONLINE   706c29fba45d4f4cbfee1c9f177fb8ab (/dev/raw/raw4) []
 3. OFFLINE  99a7e17b746b4f00bfdc2ca37bc7ec8a () []
 4. ONLINE   97431d31cd014fe7bf36a3483b6c653d (/dev/raw/raw5) []
Located 4 voting disk(s).

[root@rac1 /root]# crsctl delete css votedisk 99a7e17b746b4f00bfdc2ca37bc7ec8a
CRS-4611: Successful deletion of voting disk 99a7e17b746b4f00bfdc2ca37bc7ec8a.

[root@rac1 /root]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. ONLINE   706c29fba45d4f4cbfee1c9f177fb8ab (/dev/raw/raw4) []
 3. ONLINE   97431d31cd014fe7bf36a3483b6c653d (/dev/raw/raw5) []
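The add-then-delete sequence above needs the OFFLINE disk's File Universal Id; scraping it out of the query output is easy to automate. A sketch over a captured sample (votedisk.txt is a made-up capture, not live crsctl output):

```shell
# Sample lines in the format `crsctl query css votedisk` prints above.
cat > votedisk.txt <<'EOF'
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. ONLINE   706c29fba45d4f4cbfee1c9f177fb8ab (/dev/raw/raw4) []
 3. OFFLINE  99a7e17b746b4f00bfdc2ca37bc7ec8a () []
EOF

# Field 2 is the state, field 3 the FUID to pass to `crsctl delete css votedisk`.
awk '$2 == "OFFLINE" { print $3 }' votedisk.txt
```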

===========================================================================================

Case 2: two voting disks wiped

[root@rac1 /root]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
....
(rest of output omitted)

[root@rac1 /root]# dd if=/dev/zero of=/dev/raw/raw4 bs=8192
dd: writing `/dev/raw/raw4': No space left on device
38401+0 records in
38400+0 records out
314572800 bytes (315 MB) copied, 19.44 seconds, 16.2 MB/s
[root@rac1 /root]# dd if=/dev/zero of=/dev/raw/raw5 bs=8192
dd: writing `/dev/raw/raw5': No space left on device
38401+0 records in
38400+0 records out
314572800 bytes (315 MB) copied, 20.3099 seconds, 15.5 MB/s
[root@rac1 /root]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@rac1 /root]# crsctl query css votedisk
Unable to communicate with the Cluster Synchronization Services daemon.

[root@rac1 /root]# ps -ef |grep d.bin
root      8557     1  2 15:13 ?        00:00:06 /u01/11.2.0/grid/bin/ohasd.bin reboot
oracle    8684     1  0 15:13 ?        00:00:00 /u01/11.2.0/grid/bin/oraagent.bin
oracle    8698     1  0 15:13 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
oracle    8710     1  0 15:13 ?        00:00:00 /u01/11.2.0/grid/bin/gpnpd.bin
oracle    8723     1  0 15:13 ?        00:00:01 /u01/11.2.0/grid/bin/gipcd.bin
root      8725     1  0 15:13 ?        00:00:00 /u01/11.2.0/grid/bin/orarootagent.bin
root      8744     1  1 15:13 ?        00:00:04 /u01/11.2.0/grid/bin/osysmond.bin
root      8835     1  2 15:13 ?        00:00:06 /u01/11.2.0/grid/bin/ologgerd -M -d /u01/11.2.0/grid/crf/db/rac1
root      9390     1  0 15:16 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent
root      9457  4507  0 15:18 pts/2    00:00:00 grep d.bin

[root@rac1 /root]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

[root@rac2 /root]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.


[root@rac2 /root]# crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded

The -excl option starts the cluster in exclusive mode, so it only needs to be run on one node.

[root@rac2 /root]# ps -ef |grep d.bin
root      9000     1  3 15:22 ?        00:00:03 /u01/11.2.0/grid/bin/ohasd.bin exclusive
oracle    9126     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/oraagent.bin
oracle    9140     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
oracle    9152     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/gpnpd.bin
root      9164     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/cssdmonitor
oracle    9167     1  1 15:22 ?        00:00:01 /u01/11.2.0/grid/bin/gipcd.bin
root      9192     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent
oracle    9206     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/ocssd.bin -X
root      9262     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/orarootagent.bin
root      9276     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/octssd.bin
root      9296     1  0 15:22 ?        00:00:00 /u01/11.2.0/grid/bin/crsd.bin reboot
root      9333 12463  0 15:23 pts/1    00:00:00 grep d.bin

[root@rac2 /root]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. OFFLINE  706c29fba45d4f4cbfee1c9f177fb8ab () []
 3. OFFLINE  97431d31cd014fe7bf36a3483b6c653d () []

[root@rac2 /root]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded

[root@rac2 /root]# crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        OFFLINE OFFLINE                               Instance Shutdown  
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac2                                        
ora.crf
      1        ONLINE  ONLINE       rac2                                        
ora.crsd
      1        ONLINE  ONLINE       rac2                                        
ora.cssd
      1        ONLINE  ONLINE       rac2                                        
ora.cssdmonitor
      1        ONLINE  ONLINE       rac2                                        
ora.ctssd
      1        ONLINE  ONLINE       rac2                     OBSERVER           
ora.diskmon
      1        OFFLINE OFFLINE                                                  
ora.evmd
      1        ONLINE  ONLINE       rac2                                        
ora.gipcd
      1        ONLINE  ONLINE       rac2                                        
ora.gpnpd
      1        ONLINE  ONLINE       rac2                                        
ora.mdnsd
      1        ONLINE  ONLINE       rac2   

The -nocrs option, introduced in 11.2.0.2, prevents the ora.crsd resource from starting. If the option is not given and the ora.crsd resource fails, the ora.cluster_interconnect.haip resource consequently does not come up, which can crash ASM.

 

[root@rac2 /root]# ps -ef |grep d.bin
root      9388     1  2 15:26 ?        00:00:03 /u01/11.2.0/grid/bin/ohasd.bin exclusive
oracle    9515     1  0 15:26 ?        00:00:00 /u01/11.2.0/grid/bin/oraagent.bin
oracle    9529     1  0 15:26 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
oracle    9541     1  0 15:26 ?        00:00:00 /u01/11.2.0/grid/bin/gpnpd.bin
root      9553     1  0 15:26 ?        00:00:00 /u01/11.2.0/grid/bin/cssdmonitor
oracle    9556     1  0 15:26 ?        00:00:00 /u01/11.2.0/grid/bin/gipcd.bin
root      9581     1  0 15:26 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent
oracle    9610     1  0 15:26 ?        00:00:00 /u01/11.2.0/grid/bin/ocssd.bin -X

[root@rac2 /root]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

[root@rac1 /root]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac2, number 2, and is terminating
CRS-2674: Start of 'ora.cssd' on 'rac1' failed
CRS-2679: Attempting to clean 'ora.cssd' on 'rac1'
CRS-2681: Clean of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-4000: Command Start failed, or completed with errors.
[root@rac1 /root]# ps -ef |grep ora_
root      9941  4507  0 15:31 pts/2    00:00:00 grep ora_
[root@rac1 /root]# ps -ef |grep d.bin
root      9626     1  4 15:30 ?        00:00:03 /u01/11.2.0/grid/bin/ohasd.bin exclusive

[root@rac2 /root]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. OFFLINE  706c29fba45d4f4cbfee1c9f177fb8ab () []
 3. OFFLINE  97431d31cd014fe7bf36a3483b6c653d () []
Located 3 voting disk(s).
[root@rac2 /root]#
[root@rac2 /root]#
[root@rac2 /root]#
[root@rac2 /root]#
[root@rac2 /root]# crsctl add css votedisk /dev/raw/raw4
Now formatting voting disk: /dev/raw/raw4.
CRS-4603: Successful addition of voting disk /dev/raw/raw4.
[root@rac2 /root]# crsctl add css votedisk /dev/raw/raw5
Now formatting voting disk: /dev/raw/raw5.
CRS-4603: Successful addition of voting disk /dev/raw/raw5.

[root@rac2 /root]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. OFFLINE  706c29fba45d4f4cbfee1c9f177fb8ab () []
 3. OFFLINE  97431d31cd014fe7bf36a3483b6c653d () []
 4. ONLINE   87e7d964970a4f91bf8a33217557c04f (/dev/raw/raw4) []
 5. ONLINE   54ddb0c72e2e4f7abfe7684fbe847917 (/dev/raw/raw5) []
Located 5 voting disk(s).
[root@rac2 /root]# crsctl delete css votedisk 706c29fba45d4f4cbfee1c9f177fb8ab
CRS-4611: Successful deletion of voting disk 706c29fba45d4f4cbfee1c9f177fb8ab.
[root@rac2 /root]# crsctl delete css votedisk 97431d31cd014fe7bf36a3483b6c653d
CRS-4611: Successful deletion of voting disk 97431d31cd014fe7bf36a3483b6c653d.
[root@rac2 /root]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   85bfd9f8d4bc4fcebfb9068b3945763b (/dev/raw/raw3) []
 2. ONLINE   87e7d964970a4f91bf8a33217557c04f (/dev/raw/raw4) []
 3. ONLINE   54ddb0c72e2e4f7abfe7684fbe847917 (/dev/raw/raw5) []
Located 3 voting disk(s).
[root@rac2 /root]#

[root@rac2 /root]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[root@rac2 /root]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started

 

=> Conclusion

A node must always be able to access a majority of the voting disks.
A node that cannot access at least that minimum number is evicted (removed) from the cluster.


For example:

With 3 voting disks, a node must be able to access at least 2 of them.
With 5 voting disks, a node must be able to access at least 3 of them.

P.S. Oracle Clusterware supports configurations of up to 32 voting disks.
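The majority rule above reduces to one line of arithmetic. A minimal sketch (the function name is mine, purely illustrative):

```python
def votedisk_quorum(total_disks: int) -> int:
    """Minimum number of voting disks a node must still be able to access:
    a strict majority of the configured disks, i.e. floor(n/2) + 1."""
    if total_disks < 1:
        raise ValueError("at least one voting disk is required")
    return total_disks // 2 + 1

# 3 disks -> needs 2, 5 disks -> needs 3, matching the examples above.
```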

Posted by pat98

2013. 8. 1. 21:01 Oracle

srvctl commands


SRVCTL:

srvctl command target [options]
commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
targets (through 11g R1): database/db|instance/inst|service/serv|nodeapps|asm|listener
targets (from 11g R2): database/db|instance/inst|service/serv|nodeapps|asm|listener|diskgroup|home|ons|eons|filesystem|gns|oc4j|scan|scan_listener|srvpool|server|VIP

srvctl -help or srvctl -v

srvctl -V -- prints version
srvctl version: 10.2.0.0.0 (or) srvctl version: 11.2.0.1.0

srvctl -h -- print usage

srvctl status service -h

srvctl enable asm -n node_name [-i asm_inst_name]

-n node_name:Node name
-i inst_name:ASM instance name.


An example of this command is:

srvctl enable asm -n node01 -i asm1


To add/remove listener
srvctl add listener -n node_name -o ORACLE_HOME [-l listener_name]
srvctl add listener -n linuxrac01 -o $ORACLE_HOME -l listenerbhavik_test01
srvctl remove listener -n node_name [-l listener_name]
srvctl remove listener -n linuxrac02 -l listenerbhavik_test02


To add/remove listener in 11g Rel 2
srvctl add listener -l LISTENERASM01 -p "TCP:1525" -o $ORACLE_HOME
srvctl add listener -l listenerbhavik01 -p 1341 -o /db/oracle/ora11201
srvctl remove listener [-l lsnr_name|-a] [-f]
srvctl remove listener -l listenerbhavik01


To start/stop listener
srvctl start listener -n node_name [-l listener_names]
srvctl start listener -n linuxrac01
srvctl stop listener -n node_name [-l listener_names]
srvctl stop listener -n linuxrac01


To check the status of the listener
srvctl status listener [-n node_name] [-l listener_names]
srvctl status listener -n linuxrac02

To configure listener
srvctl config listener -n node_name
srvctl config listener -n linuxrac01
srvctl config listener [-l lsnr_name] [-a]
srvctl config listener -l listenerbhavik01

To modify the listener
srvctl modify listener -n node_name [-l listener_names] -o ORACLE_HOME
srvctl modify listener -n linuxrac03 -o /db/oracle/app/oracle/product/11.2/asm -l "LISTENERbhavik_test04"

To enable/disable listener
srvctl enable listener [-l lsnr_name] [-n node_name]
srvctl enable listener -l listenerbhavik_test02 -n linuxrac02
srvctl disable listener [-l lsnr_name] [-n node_name]
srvctl disable listener -l listenerbhavik_test02 -n linuxrac02

To get/set/unset environment parameter for listener
srvctl getenv listener [-l lsnr_name]
srvctl getenv listener -l listenerbhavik_test02

gets the environment configuration for a cluster database:

srvctl getenv database -d mndb

srvctl setenv listener [-l lsnr_name] [-t "name=val]
srvctl setenv listener -t LANG=en

srvctl unsetenv listener [-l lsnr_name] [-t name]
srvctl unsetenv listener -t "TNS_ADMIN"


Database:
srvctl add database -d db_name -o ORACLE_HOME [-m domain_name][-p spfile] [-A name|ip/netmask]
[-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}]
[-s start_options] [-n db_name] [-y {AUTOMATIC|MANUAL}]
srvctl add database -d prod -o /u01/oracle/product/102/prod
 
srvctl remove database -d db_name [-f]
srvctl remove database -d prod

srvctl start database -d db_name [-o start_options] [-c connect_str|-q]
srvctl start database -d db_name [-o open]
srvctl start database -d db_name -o nomount
srvctl start database -d db_name -o mount

srvctl start db -d prod
srvctl start database -d apps -o open

srvctl stop database -d db_name [-o stop_options] [-c connect_str|-q]
srvctl stop database -d db_name [-o normal]
srvctl stop database -d db_name -o transactional
srvctl stop database -d db_name -o immediate
srvctl stop database -d db_name -o abort
srvctl stop db -d crm -o immediate

srvctl status database -d db_name [-f] [-v] [-S level]
srvctl status database -d db_name -v service_name

srvctl status database -d hrms

srvctl enable database -d db_name
srvctl enable database -d vis

srvctl disable database -d db_name
srvctl disable db -d vis

srvctl config database
srvctl config database -d db_name [-a] [-t]
srvctl config database
srvctl config database -d HYD -a

srvctl modify database -d db_name [-n db_name] [-o ORACLE_HOME] [-m domain_name] [-p spfile]
[-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-y {AUTOMATIC|MANUAL}]
srvctl modify database -d hrms -r physical_standby
srvctl modify db -d RAC -p /u03/oradata/RAC/spfileRAC.ora -- moves the spfile
srvctl modify database -d HYD -o /u01/app/oracle/product/11.1/db -s open

srvctl getenv database -d db_name [-t name_list]
srvctl getenv database -d prod

srvctl setenv database -d db_name {-t name=val[,name=val,...]|-T name=val}
srvctl setenv database -d HYD -t "TNS_ADMIN=/u01/app/oracle/product/11.1/asm/network/admin"
srvctl setenv db -d prod -t LANG=en

srvctl unsetenv database -d db_name [-t name_list]
srvctl unsetenv database -d prod -t CLASSPATH

In 11g Release 2, some command's syntax has been changed:
srvctl add database -d db_unique_name -o ORACLE_HOME [-x node_name] [-m domain_name] [-p spfile] [-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-t stop_options] [-n db_name] [-y {AUTOMATIC|MANUAL}] [-g server_pool_list] [-a "diskgroup_list"]
srvctl add database -d prod -o /u01/oracle/product/112/prod -m foo.com -p +dg1/prod/spfileprod.ora -r PRIMARY -s open -t normal -n db2 -y AUTOMATIC -g svrpool1,svrpool2 -a "dg1,dg2"

srvctl remove database -d db_unique_name [-f] [-y] [-v]
srvctl remove database -d prod -y

srvctl stop database -d db_unique_name [-o stop_options] [-f]
srvctl stop database -d dev -f

srvctl status database -d db_unique_name [-f] [-v]
srvctl status db -d sat -v

srvctl enable database -d db_unique_name [-n node_name]
srvctl enable database -d vis -n lnx01

srvctl disable database -d db_unique_name [-n node_name]
srvctl disable db -d vis -n lnx03

srvctl config database [-d db_unique_name [-a]]
srvctl config db -d db_erp -a

srvctl modify database -d db_unique_name [-n db_name] [-o ORACLE_HOME] [-u oracle_user] [-m domain] [-p spfile] [-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-t stop_options] [-y {AUTOMATIC|MANUAL}] [-g "server_pool_list"] [-a "diskgroup_list"|-z]
srvctl modify db -d prod -r logical_standby
srvctl modify database -d racTest -a "SYSFILES,LOGS,OLTP"
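When scripting these operations, it helps to assemble the srvctl argument vector programmatically instead of concatenating strings. A hypothetical helper (`build_srvctl` is my own name, not an Oracle API); it only builds the command line and does not run anything:

```python
def build_srvctl(command: str, target: str, **options) -> list:
    """Assemble a srvctl argument list, e.g. srvctl stop database -d crm -o immediate.
    Keyword names become single-dash flags; a value of True means a bare flag (-f)."""
    argv = ["srvctl", command, target]
    for flag, value in options.items():
        argv.append("-" + flag)
        if value is not True:
            argv.append(str(value))
    return argv

# srvctl stop db -d crm -o immediate, as in the example above
print(" ".join(build_srvctl("stop", "database", d="crm", o="immediate")))
```

The list can then be handed to `subprocess.run()` to execute it without shell-quoting pitfalls.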

Instance:
srvctl add instance -d db_name -i inst_name -n node_name
srvctl add instance -d prod -i prod01 -n linux01
 
srvctl remove instance -d db_name -i inst_name [-f]
srvctl remove instance -d prod -i prod01

srvctl start instance -d db_name -i inst_names [-o start_options] [-c connect_str|-q]
srvctl start instance -d db_name -i inst_names [-o open]
srvctl start instance -d db_name -i inst_names -o nomount
srvctl start instance -d db_name -i inst_names -o mount

srvctl start instance -d dev -i dev2

srvctl stop instance -d db_name -i inst_names [-o stop_options] [-c connect_str|-q]
srvctl stop instance -d db_name -i inst_names [-o normal]
srvctl stop instance -d db_name -i inst_names -o transactional
srvctl stop instance -d db_name -i inst_names -o immediate
srvctl stop instance -d db_name -i inst_names -o abort

srvctl stop inst -d vis -i vis

srvctl status instance -d db_name -i inst_names [-f] [-v] [-S level]

srvctl status inst -d racdb -i racdb2

srvctl enable instance -d db_name -i inst_names
srvctl enable instance -d prod -i "prod1,prod2"

srvctl disable instance -d db_name -i inst_names
srvctl disable inst -d prod -i "prod1,prod3"

srvctl modify instance -d db_name -i inst_name {-s asm_inst_name|-r} -- set dependency of instance to ASM
srvctl modify instance -d db_name -i inst_name -n node_name -- move the instance
srvctl modify instance -d db_name -i inst_name -r -- remove the instance

srvctl getenv instance -d db_name -i inst_name [-t name_list]
srvctl setenv instance -d db_name [-i inst_name] {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl unsetenv instance -d db_name [-i inst_name] [-t name_list]

In 11g Release 2, some command's syntax has been changed:
srvctl start instance -d db_unique_name {-n node_name -i "instance_name_list"} [-o start_options]
srvctl start instance -d prod -n node2
srvctl start inst -d prod -i "prod2,prod3"

srvctl stop instance -d db_unique_name {[-n node_name]|[-i "instance_name_list"]} [-o stop_options] [-f]
srvctl stop inst -d prod -n node1
srvctl stop instance -d prod -i prod1

srvctl status instance -d db_unique_name {-n node_name | -i "instance_name_list"} [-f] [-v]
srvctl status instance -d prod -i "prod1,prod2" -v

srvctl modify instance -d db_unique_name -i instance_name {-n node_name|-z}
srvctl modify instance -d prod -i prod1 -n mynode
srvctl modify inst -d prod -i prod1 -z
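For monitoring scripts it is handy to parse the status output. The sketch below assumes the 11g-style message format (`Instance prod1 is running on node node1` / `Instance prod2 is not running on node ...`); the exact wording can vary between versions, so treat the pattern as an assumption to verify on your system:

```python
import re

# Assumed output format of "srvctl status instance/database" (check your version):
#   Instance prod1 is running on node node1
#   Instance prod2 is not running on node node2
_STATUS = re.compile(r"Instance (\S+) is (not )?running on node (\S+)")

def parse_instance_status(output: str) -> dict:
    """Map instance name -> (is_running, node) from srvctl status output."""
    return {
        m.group(1): (m.group(2) is None, m.group(3))
        for m in _STATUS.finditer(output)
    }

sample = ("Instance prod1 is running on node node1\n"
          "Instance prod2 is not running on node node2\n")
print(parse_instance_status(sample))
```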


Service:
srvctl add service -d db_name -s service_name -r pref_insts [-a avail_insts] [-P TAF_policy]
srvctl add service -d db_name -s service_name -u {-r "new_pref_inst" | -a "new_avail_inst"}
srvctl add service -d RAC -s PRD -r RAC01,RAC02 -a RAC03,RAC04
srvctl add serv -d CRM -s CRM -r CRM1 -a CRM3 -P basic

srvctl remove service -d db_name -s service_name [-i inst_name] [-f]
srvctl remove serv -d dev -s sales
srvctl remove service -d dev -s sales -i dev01,dev02

srvctl start service -d db_name [-s service_names [-i inst_name]] [-o start_options]
srvctl start service -d db_name -s service_names [-o open]
srvctl start service -d db_name -s service_names -o nomount
srvctl start service -d db_name -s service_names -o mount
srvctl start serv -d dwh -s dwh

srvctl stop service -d db_name [-s service_names [-i inst_name]] [-f]
srvctl stop serv -d dwh -s dwh

srvctl status service -d db_name [-s service_names] [-f] [-v] [-S level]
srvctl status service -d dev -s dev

srvctl enable service -d db_name -s service_names [-i inst_name]
srvctl enable service -d apps -s apps1

srvctl disable service -d db_name -s service_names [-i inst_name]
srvctl disable serv -d dev -s dev -i dev1

srvctl config service -d db_name [-s service_name] [-a] [-S level]
srvctl config service -d db_name -a -- -a shows TAF configuration
srvctl config service -d TEST -s test -- output: test PREF: TST1 AVAIL: TST2

srvctl modify service -d db_name -s service_name -i old_inst_name -t new_inst_name [-f]
srvctl modify service -d db_name -s service_name -i avail_inst_name -r [-f]
srvctl modify service -d db_name -s service_name -n -i preferred_list [-a available_list] [-f]
srvctl modify service -d db_name -s service_name -i old_inst_name -a avail_inst -P TAF_policy
srvctl modify serv -d PROD -s DWH -n -i I1,I2,I3,I4 -a I5,I6

srvctl relocate service -d db_name -s service_name -i old_inst_name -t target_inst [-f]

srvctl getenv service -d db_name -s service_name -t name_list
srvctl setenv service -d db_name [-s service_name] {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl unsetenv service -d db_name -s service_name -t name_list

In 11g Release 2, some command's syntax has been changed:
srvctl add service -d db_unique_name -s service_name [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC|MANUAL}] [-q {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}][-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}][-z failover_retries] [-w failover_delay]
srvctl add service -d rac -s rac1 -q TRUE -m BASIC -e SELECT -z 180 -w 5 -j LONG

srvctl add service -d db_unique_name -s service_name -u {-r preferred_list | -a available_list}

srvctl add service -d db_unique_name -s service_name
-g server_pool [-c {UNIFORM|SINGLETON}] [-k network_number]
[-l [PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY]
[-y {AUTOMATIC|MANUAL}] [-q {TRUE|FALSE}] [-j {SHORT|LONG}]
[-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}]
[-m {NONE|BASIC}] [-P {BASIC|NONE|PRECONNECT}] [-x {TRUE|FALSE}]
[-z failover_retries] [-w failover_delay]

srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list] [-P {BASIC|NONE|PRECONNECT}]
[-l [PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY]
[-y {AUTOMATIC|MANUAL}] [-q {TRUE|FALSE}] [-j {SHORT|LONG}]
[-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}]
[-m {NONE|BASIC}] [-x {TRUE|FALSE}] [-z failover_retries] [-w failover_delay]
srvctl add serv -d dev -s sales -r dev01,dev02 -a dev03 -P PRECONNECT

srvctl start service -d db_unique_name [-s "service_name_list" [-n node_name | -i instance_name]] [-o start_options]
srvctl start serv -d dev -s dev
srvctl start service -d dev -s dev -i dev2

srvctl stop service -d db_unique_name [-s "service_name_list"] [-n node_name | -i instance_name] [-f]
srvctl stop service -d dev -s dev
srvctl stop serv -d dev -s dev -i dev2

srvctl status service -d db_unique_name [-s "service_name_list"] [-f] [-v]
srvctl status service -d dev -s dev -v

srvctl enable service -d db_unique_name -s "service_name_list" [-i instance_name | -n node_name]
srvctl enable service -d dev -s dev
srvctl enable serv -d dev -s dev -i dev1

srvctl disable service -d db_unique_name -s "service_name_list" [-i instance_name | -n node_name]
srvctl disable service -d dev -s "dev,marketing"
srvctl disable serv -d dev -s dev -i dev1

srvctl config service -d db_unique_name [-s service_name] [-a]
srvctl config service -d dev -s dev

srvctl modify service -d db_unique_name -s service_name
[-c {UNIFORM|SINGLETON}] [-P {BASIC|PRECONNECT|NONE}]
[-l {[PRIMARY]|[PHYSICAL_STANDBY]|[LOGICAL_STANDBY]|[SNAPSHOT_STANDBY]} [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z failover_retries] [-w failover_delay] [-y {AUTOMATIC|MANUAL}]
srvctl modify service -d db_unique_name -s service_name -i old_instance_name -t new_instance_name [-f]
srvctl modify service -d db_unique_name -s service_name -i avail_inst_name -r [-f]
srvctl modify service -d db_unique_name -s service_name -n -i preferred_list [-a available_list] [-f]
srvctl modify service -d dev -s dev -i dev1 -t dev2
srvctl modify serv -d dev -s dev -i dev1 -r
srvctl modify service -d dev -s dev -n -i dev1 -a dev2

srvctl relocate service -d db_unique_name -s service_name {-c source_node -n target_node|-i old_instance_name -t new_instance_name} [-f]
srvctl relocate service -d dev -s dev -i dev1 -t dev3
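The preferred (`-r`) / available (`-a`) semantics can be pictured with a tiny simulation. This is purely illustrative (not Oracle code): a service runs on its preferred instances, and when one of them fails, clusterware moves that service member onto an available instance:

```python
def place_service(preferred, available, failed):
    """Instances a service ends up on: live preferred instances, plus one
    available (spare) instance per failed preferred one, while spares last."""
    failed = set(failed)
    running = [i for i in preferred if i not in failed]
    spares = [i for i in available if i not in failed]
    n_failed_preferred = sum(1 for i in preferred if i in failed)
    return running + spares[:n_failed_preferred]

# srvctl add service -d RAC -s PRD -r RAC01,RAC02 -a RAC03,RAC04
# If RAC02 dies, PRD relocates onto RAC03.
print(place_service(["RAC01", "RAC02"], ["RAC03", "RAC04"], failed=["RAC02"]))
```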

Nodeapps:
#srvctl add nodeapps -n node_name -o ORACLE_HOME -A name|ip/netmask[/if1[|if2|...]]
#srvctl add nodeapps -n lnx02 -o $ORACLE_HOME -A 192.168.0.151/255.255.0.0/eth0

#srvctl remove nodeapps -n node_names [-f]

#srvctl start nodeapps -n node_name -- Starts GSD, VIP, listener & ONS

#srvctl stop nodeapps -n node_name [-r] -- Stops GSD, VIP, listener & ONS

#srvctl status nodeapps -n node_name

#srvctl config nodeapps -n node_name [-a] [-g] [-o] [-s] [-l]
-a Display VIP configuration
-g Display GSD configuration
-s Display ONS daemon configuration
-l Display listener configuration

#srvctl modify nodeapps -n node_name [-A new_vip_address]
#srvctl modify nodeapps -n lnx06 -A 10.50.99.43/255.255.252.0/eth0

#srvctl getenv nodeapps -n node_name [-t name_list]

#srvctl setenv nodeapps -n node_name {-t "name=val[,name=val,...]"|-T "name=val"}
#srvctl setenv nodeapps -n adcracdbq3 -t "TNS_ADMIN=/u01/app/oracle/product/11.1/asm/network/admin"

#srvctl unsetenv nodeapps -n node_name [-t name_list]

In 11g Release 2, some command's syntax has been changed:
srvctl add nodeapps -n node_name -A {name|ip}/netmask[/if1[|if2|...]] [-m multicast_ip_address] [-p multicast_port_number] [-l ons_local_port] [-r ons_remote-port] [-t host[:port][,host[:port],...]] [-v]
srvctl add nodeapps -S subnet/netmask[/if1[|if2|...]] [-d dhcp_server_type] [-m multicast_ip_address] [-p multicast_port_number] [-l ons_local_port] [-r ons_remote-port] [-t host[:port][,host[:port],...]] [-v]
#srvctl add nodeapps -n devnode1 -A 1.2.3.4/255.255.255.0

srvctl remove nodeapps [-f] [-y] [-v]
srvctl remove nodeapps

srvctl start nodeapps [-n node_name] [-v]
srvctl start nodeapps

srvctl stop nodeapps [-n node_name] [-r] [-v]
srvctl stop nodeapps

srvctl status nodeapps

srvctl enable nodeapps [-g] [-v]
srvctl enable nodeapps -g -v

srvctl disable nodeapps [-g] [-v]
srvctl disable nodeapps -g -v

srvctl config nodeapps [-a] [-g] [-s] [-e]
srvctl config nodeapps -a -g -s -e

srvctl modify nodeapps [-n node_name -A new_vip_address] [-S subnet/netmask[/if1[|if2|...]] [-m multicast_ip_address] [-p multicast_port_number] [-e eons_listen_port] [-l ons_local_port] [-r ons_remote_port] [-t host[:port][,host:port,...]] [-v]
srvctl modify nodeapps -n mynode1 -A 100.200.300.40/255.255.255.0/eth0

srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "name_list"] [-v]
srvctl getenv nodeapps -a

srvctl setenv nodeapps {-t "name=val[,name=val][...]" | -T "name=val"} [-v]
srvctl setenv nodeapps -T "CLASSPATH=/usr/local/jdk/jre/rt.jar" -v

srvctl unsetenv nodeapps -t "name_list" [-v]
srvctl unsetenv nodeapps -t "test_var1,test_var2"

ASM:
srvctl add asm -n node_name -i asminstance -o ORACLE_HOME [-p spfile]

srvctl remove asm -n node_name [-i asminstance] [-f]
srvctl remove asm -n db6

srvctl start asm -n node_name [-i asminstance] [-o start_options] [-c connect_str|-q]
srvctl start asm -n node_name [-i asminstance] [-o open]
srvctl start asm -n node_name [-i asminstance] -o nomount
srvctl start asm -n node_name [-i asminstance] -o mount
srvctl start asm -n linux01

srvctl stop asm -n node_name [-i asminstance] [-o stop_options] [-c connect_str|-q]
srvctl stop asm -n node_name [-i asminstance] [-o normal]
srvctl stop asm -n node_name [-i asminstance] -o transactional
srvctl stop asm -n node_name [-i asminstance] -o immediate
srvctl stop asm -n node_name [-i asminstance] -o abort
srvctl stop asm -n racnode1
srvctl stop asm -n devnode1 -i +asm1

srvctl status asm -n node_name
srvctl status asm -n racnode1

srvctl enable asm -n node_name [-i asminstance]
srvctl enable asm -n lnx03 -i +asm3

srvctl disable asm -n node_name [-i asminstance]
srvctl disable asm -n lnx02 -i +asm2

srvctl config asm -n node_name
srvctl config asm -n lnx08

srvctl modify asm -n node_name -i asminstance [-o ORACLE_HOME] [-p spfile]
srvctl modify asm -n rac6 -i +asm6 -o /u01/app/oracle/product/11.1/asm

In 11g Release 2, some command's syntax has been changed:
srvctl add asm [-l lsnr_name] [-p spfile] [-d asm_diskstring]
srvctl add asm
srvctl add asm -l LISTENERASM -p +dg_data/spfile.ora

srvctl remove asm [-f]
srvctl remove asm -f

srvctl start asm [-n node_name] [-o start_options]
srvctl start asm -n devnode1

srvctl stop asm [-n node_name] [-o stop_options] [-f]
srvctl stop asm -n devnode1 -f

srvctl status asm [-n node_name] [-a]
srvctl status asm -n devnode1 -a

srvctl enable asm [-n node_name]
srvctl enable asm -n devnode1

srvctl disable asm [-n node_name]
srvctl disable asm -n devnode1

srvctl config asm [-a]
srvctl config asm -a

srvctl modify asm [-l lsnr_name] [-p spfile] [-d asm_diskstring]
srvctl modify asm [-n node_name] [-l listener_name] [-d asm_diskstring] [-p spfile_path_name]
srvctl modify asm -l lsnr1

srvctl getenv asm [-t name[, ...]]
srvctl getenv asm

srvctl setenv asm {-t "name=val [,...]" | -T "name=value"}
srvctl setenv asm -t LANG=en

srvctl unsetenv asm -t "name[, ...]"
srvctl unsetenv asm -t CLASSPATH

Listener:
srvctl add listener -n node_name -o ORACLE_HOME [-l listener_name] -- 11g R1 command

srvctl remove listener -n node_name [-l listener_name] -- 11g R1 command

srvctl start listener -n node_name [-l listener_names]
srvctl start listener -n node1

srvctl stop listener -n node_name [-l listener_names]
srvctl stop listener -n node1

srvctl status listener [-n node_name] [-l listener_names] -- 11g R1 command
srvctl status listener -n node2

srvctl config listener -n node_name

srvctl modify listener -n node_name [-l listener_names] -o ORACLE_HOME -- 11g R1command
srvctl modify listener -n racdb4 -o /u01/app/oracle/product/11.1/asm -l "LISTENER_RACDB4"

In 11g Release 2, some command's syntax has been changed:
srvctl add listener [-l lsnr_name] [-s] [-p "[TCP:]port[, ...][/IPC:key][/NMP:pipe_name][/TCPS:s_port] [/SDP:port]"] [-k network_number] [-o ORACLE_HOME]
srvctl add listener -l LISTENERASM -p "TCP:1522" -o $ORACLE_HOME
srvctl add listener -l listener112 -p 1341 -o /ora/ora112

srvctl remove listener [-l lsnr_name|-a] [-f]
srvctl remove listener -l lsnr01

srvctl stop listener [-n node_name] [-l lsnr_name] [-f]

srvctl enable listener [-l lsnr_name] [-n node_name]
srvctl enable listener -l listener_dev -n node5

srvctl disable listener [-l lsnr_name] [-n node_name]
srvctl disable listener -l listener_dev -n node5

srvctl config listener [-l lsnr_name] [-a]
srvctl config listener

srvctl modify listener [-l listener_name] [-o oracle_home] [-u user_name] [-p "[TCP:]port_list[/IPC:key][/NMP:pipe_name][/TCPS:s_port][/SDP:port]"] [-k network_number]
srvctl modify listener -n node1 -p "TCP:1521,1522"

srvctl getenv listener [-l lsnr_name] [-t name[, ...]]
srvctl getenv listener

srvctl setenv listener [-l lsnr_name] {-t "name=val [,...]" | -T "name=value"}
srvctl setenv listener -t LANG=en

srvctl unsetenv listener [-l lsnr_name] -t "name[, ...]"
srvctl unsetenv listener -t "TNS_ADMIN"

New srvctl commands in 11g Release 2

Diskgroup:
srvctl remove diskgroup -g diskgroup_name [-n node_list] [-f]
srvctl remove diskgroup -g DG1 -f

srvctl start diskgroup -g diskgroup_name [-n node_list]
srvctl start diskgroup -g diskgroup1 -n node1,node2

srvctl stop diskgroup -g diskgroup_name [-n node_list] [-f]
srvctl stop diskgroup -g ASM_FRA_DG
srvctl stop diskgroup -g dg1 -n node1,node2 -f

srvctl status diskgroup -g diskgroup_name [-n node_list] [-a]
srvctl status diskgroup -g dg_data -n node1,node2 -a

srvctl enable diskgroup -g diskgroup_name [-n node_list]
srvctl enable diskgroup -g diskgroup1 -n node1,node2

srvctl disable diskgroup -g diskgroup_name [-n node_list]
srvctl disable diskgroup -g dg_fra -n node1, node2

Home:
srvctl start home -o ORACLE_HOME -s state_file [-n node_name]
srvctl start home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt

srvctl stop home -o ORACLE_HOME -s state_file [-t stop_options] [-n node_name] [-f]
srvctl stop home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt

srvctl status home -o ORACLE_HOME -s state_file [-n node_name]
srvctl status home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt

ONS (Oracle Notification Service) :
srvctl add ons [-l ons-local-port] [-r ons-remote-port] [-t host[:port][,host[:port]...]] [-v]
srvctl add ons -l 6200

srvctl remove ons [-f] [-v]
srvctl remove ons -f

srvctl start ons [-v]
srvctl start ons -v

srvctl stop ons [-v]
srvctl stop ons -v

srvctl status ons

srvctl enable ons [-v]
srvctl enable ons

srvctl disable ons [-v]
srvctl disable ons

srvctl config ons

srvctl modify ons [-l ons-local-port] [-r ons-remote-port] [-t host[:port][,host[:port]...]] [-v]
srvctl modify ons

EONS:
srvctl add eons [-p portnum] [-m multicast-ip-address] [-e eons-listen-port] [-v]
#srvctl add eons -p 2018

srvctl remove eons [-f] [-v]
srvctl remove eons -f

srvctl start eons [-v]
srvctl start eons

srvctl stop eons [-f] [-v]
srvctl stop eons -f

srvctl status eons

srvctl enable eons [-v]
srvctl enable eons

srvctl disable eons [-v]
srvctl disable eons

srvctl config eons

srvctl modify eons [-m multicast_ip_address] [-p multicast_port_number] [-e eons_listen_port] [-v]
srvctl modify eons -p 2018

FileSystem:
srvctl add filesystem -d volume_device -v volume_name -g diskgroup_name [-m mountpoint_path] [-u user_name]
srvctl add filesystem -d /dev/asm/d1volume1 -v VOLUME1 -g RAC_DATA -m /oracle/cluster1/acfs1

srvctl remove filesystem -d volume_device_name [-f]
srvctl remove filesystem -d /dev/asm/racvol1

srvctl start filesystem -d volume_device_name [-n node_name]
srvctl start filesystem -d /dev/asm/racvol3

srvctl stop filesystem -d volume_device_name [-n node_name] [-f]
srvctl stop filesystem -d /dev/asm/racvol1 -f

srvctl status filesystem -d volume_device_name
srvctl status filesystem -d /dev/asm/racvol2

srvctl enable filesystem -d volume_device_name
srvctl enable filesystem -d /dev/asm/racvol9

srvctl disable filesystem -d volume_device_name
srvctl disable filesystem -d /dev/asm/racvol1

srvctl config filesystem -d volume_device_path

srvctl modify filesystem -d volume_device_name -u user_name
srvctl modify filesystem -d /dev/asm/racvol1 -u sysadmin

SrvPool:
srvctl add srvpool -g server_pool [-i importance] [-l min_size] [-u max_size] [-n node_list] [-f]
srvctl add srvpool -g SP1 -i 1 -l 3 -u 7 -n node1,node2

srvctl remove srvpool -g server_pool
srvctl remove srvpool -g srvpool1

srvctl status srvpool [-g server_pool] [-a]
srvctl status srvpool -g srvpool2 -a

srvctl config srvpool [-g server_pool]
srvctl config srvpool -g dbpool

srvctl modify srvpool -g server_pool [-i importance] [-l min_size] [-u max_size] [-n node_name_list] [-f]
srvctl modify srvpool -g srvpool4 -i 0 -l 2 -u 4 -n node3, node4
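The pool parameters relate as importance >= 0 and 0 <= min_size <= max_size. A quick sanity check one might run before issuing the command (my own helper, not part of srvctl):

```python
def check_srvpool_args(importance, min_size, max_size):
    """Validate srvctl add/modify srvpool arguments before running the command."""
    if importance < 0:
        raise ValueError("-i importance must be >= 0")
    if not 0 <= min_size <= max_size:
        raise ValueError("need 0 <= -l min_size <= -u max_size")

# Matches: srvctl add srvpool -g SP1 -i 1 -l 3 -u 7 -n node1,node2
check_srvpool_args(importance=1, min_size=3, max_size=7)
```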

Server:
srvctl status server -n "server_name_list" [-a]
srvctl status server -n server11 -a

srvctl relocate server -n "server_name_list" -g server_pool_name [-f]
srvctl relocate server -n "linux1, linux2" -g sp2

Scan (Single Client Access Name):
srvctl add scan -n scan_name [-k network_number] [-S subnet/netmask[/if1[|if2|...]]]
#srvctl add scan -n scan.mycluster.example.com

srvctl remove scan [-f]
srvctl remove scan -f

srvctl start scan [-i ordinal_number] [-n node_name]
srvctl start scan -i 1 -n node1

srvctl stop scan [-i ordinal_number] [-f]
srvctl stop scan -i 1

srvctl status scan [-i ordinal_number]
srvctl status scan -i 1

srvctl enable scan [-i ordinal_number]
srvctl enable scan -i 1

srvctl disable scan [-i ordinal_number]
srvctl disable scan -i 3

srvctl config scan [-i ordinal_number]
srvctl config scan -i 2

srvctl modify scan -n scan_name
srvctl modify scan -n scan1

srvctl relocate scan -i ordinal_number [-n node_name]
srvctl relocate scan -i 2 -n node2

ordinal_number=1,2,3

Scan_listener:
srvctl add scan_listener [-l lsnr_name_prefix] [-s] [-p "[TCP:]port_list[/IPC:key][/NMP:pipe_name][/TCPS:s_port] [/SDP:port]"]
#srvctl add scan_listener -l myscanlistener

srvctl remove scan_listener [-f]
srvctl remove scan_listener -f

srvctl start scan_listener [-n node_name] [-i ordinal_number]
srvctl start scan_listener -i 1

srvctl stop scan_listener [-i ordinal_number] [-f]
srvctl stop scan_listener -i 3

srvctl status scan_listener [-i ordinal_number]
srvctl status scan_listener -i 1

srvctl enable scan_listener [-i ordinal_number]
srvctl enable scan_listener -i 2

srvctl disable scan_listener [-i ordinal_number]
srvctl disable scan_listener -i 1

srvctl config scan_listener [-i ordinal_number]
srvctl config scan_listener -i 3

srvctl modify scan_listener {-p [TCP:]port[/IPC:key][/NMP:pipe_name] [/TCPS:s_port][/SDP:port] | -u }
srvctl modify scan_listener -u

srvctl relocate scan_listener -i ordinal_number [-n node_name]
srvctl relocate scan_listener -i 1

ordinal_number=1,2,3

GNS (Grid Naming Service):
srvctl add gns -i ip_address -d domain
srvctl add gns -i 192.124.16.96 -d cluster.mycompany.com

srvctl remove gns [-f]
srvctl remove gns

srvctl start gns [-l log_level] [-n node_name]
srvctl start gns

srvctl stop gns [-n node_name] [-v] [-f]
srvctl stop gns

srvctl status gns [-n node_name]
srvctl status gns

srvctl enable gns [-n node_name]
srvctl enable gns

srvctl disable gns [-n node_name]
srvctl disable gns -n devnode2

srvctl config gns [-a] [-d] [-k] [-m] [-n node_name] [-p] [-s] [-V] [-q name] [-l] [-v]
srvctl config gns -n lnx03

srvctl modify gns [-i ip_address] [-d domain]
srvctl modify gns -i 192.000.000.007

srvctl relocate gns [-n node_name]
srvctl relocate gns -n node2

VIP (Virtual Internet Protocol):
srvctl add vip -n node_name -A {name|ip}/netmask[/if1[if2|...]] [-k network_number] [-v]
#srvctl add vip -n node96 -A 192.124.16.96/255.255.255.0 -k 2

srvctl remove vip -i "vip_name_list" [-f] [-y] [-v]
srvctl remove vip -i "vip1,vip2,vip3" -f -y -v

srvctl start vip {-n node_name|-i vip_name} [-v]
srvctl start vip -i dev1-vip -v

srvctl stop vip {-n node_name|-i vip_name} [-r] [-v]
srvctl stop vip -n node1 -v

srvctl status vip {-n node_name|-i vip_name}
srvctl status vip -i node1-vip

srvctl enable vip -i vip_name [-v]
srvctl enable vip -i prod-vip -v

srvctl disable vip -i vip_name [-v]
srvctl disable vip -i vip3 -v

srvctl config vip {-n node_name|-i vip_name}
srvctl config vip -n devnode2

srvctl getenv vip -i vip_name [-t "name_list"] [-v]
srvctl getenv vip -i node1-vip

srvctl setenv vip -i vip_name {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl setenv vip -i dev1-vip -t LANG=en

srvctl unsetenv vip -i vip_name -t "name_list" [-v]
srvctl unsetenv vip -i myvip -t CLASSPATH

OC4J (Oracle Container for Java):
srvctl add oc4j [-v]
srvctl add oc4j

srvctl remove oc4j [-f] [-v]
srvctl remove oc4j

srvctl start oc4j [-v]
srvctl start oc4j -v

srvctl stop oc4j [-f] [-v]
srvctl stop oc4j -f -v

srvctl status oc4j [-n node_name]
srvctl status oc4j -n lnx01

srvctl enable oc4j [-n node_name] [-v]
srvctl enable oc4j -n dev3

srvctl disable oc4j [-n node_name] [-v]
srvctl disable oc4j -n dev1

srvctl config oc4j

srvctl modify oc4j -p oc4j_rmi_port [-v]
srvctl modify oc4j -p 5385

srvctl relocate oc4j [-n node_name] [-v]
srvctl relocate oc4j -n lxn06 -v

Posted by pat98

Errors during RAC GI installation

 

Environment: AIX 7.1 (PowerHA), Oracle RAC 11.2.0.3

 

Node 1 went through fine, but node 2 hit an error... tracking down the cause took a while..

 

 Error occurred while running root.sh on server 2

rac02-[root]:/# /oragrid/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oragrid/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oragrid/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
User grid has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Failed to create a peer profile for Oracle Cluster GPnP. gpnptool rc=65280
Creation of Oracle GPnP peer profile failed for rac02
Failed to create GPnP peer profile for rac02 at /oragrid/11.2.0/grid/crs/install/crsconfig_lib.pm line 4946.
/oragrid/11.2.0/grid/perl/bin/perl -I/oragrid/11.2.0/grid/perl/lib -I/oragrid/11.2.0/grid/crs/install /oragrid/11.2.0/grid/crs/insta
ll/rootcrs.pl execution failed
REAL_drwibscdb02-[root]:/#

[Solution]
The two files libskgxn2.so and libskgxnr.so in /oragrid/product/grid/lib are symlinks (they are created when vendor clusterware is installed); delete those links,

 

(Reviewing $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_<hostname>.log shows that the libraries were linked correctly but could not be loaded, which caused the error.)

 

Because the links looked fine, I never suspected them at all..

 

then copy the files in /opt/ORCLcluster/lib
directly into /oragrid/product/grid/lib and rerun. Success!
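In short, the fix replaces a symlink with a real copy of the vendor library. Sketched below against dummy paths in a scratch directory (the real paths are the ones above); run as-is it only simulates the link-then-copy sequence:

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
mkdir -p "$work/ORCLcluster/lib" "$work/grid/lib"
echo "vendor skgxn library" > "$work/ORCLcluster/lib/libskgxn2.so"

# Initial state: the grid home lib holds only a symlink (as vendor clusterware left it)
ln -s "$work/ORCLcluster/lib/libskgxn2.so" "$work/grid/lib/libskgxn2.so"

# The fix: delete the link, then copy the file directly into the grid home lib
rm "$work/grid/lib/libskgxn2.so"
cp "$work/ORCLcluster/lib/libskgxn2.so" "$work/grid/lib/libskgxn2.so"

# Verify: now a regular file, not a symlink
[ -f "$work/grid/lib/libskgxn2.so" ] && [ ! -L "$work/grid/lib/libskgxn2.so" ] && echo OK
```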
Posted by pat98

Opatch 이용 GI PSU 적용시 Space 부족으로 인한 장애시 처리방법

 

- XXX 은행 작업시 발생했던 상황 (AIX 7.1 with  Oracle 11.2.0.3)

 

패치 적용시 공간이 충분함에도 아래와 같은 메세지와 함께 패치가 진행되지 않는다.

 

Required amount of space(23428.225MB) is not available.
UtilSession failed: 
Prerequisite check "CheckSystemSpace" failed.
Log file location: /oracle/app/11.2.0/grid/cfgtoollogs/opatch/opatch2012-10-07_18-26-07PM_1.log
OPatch failed with error code 73
 

The mount point has 50 GB, yet this absurd message still appears.

 

Don't agonize over it. This is an AIX bug (unofficial bug 9780505), so run OPatch with the option below; it skips the space check entirely and resolves the issue right away.

 

opatch apply OPatch.SKIP_VERIFY_SPACE=true

 

A bit further along, an error appears saying the libclsra11.so file cannot be copied. When that happens, open another window, run slibclean as root, and then retry; it completes fine.
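Putting the workaround together, the whole sequence looks roughly like this. It is a sketch based on this incident: the slibclean step is AIX-specific and must be run as root in a separate session.

```shell
# 1) Apply the patch while skipping the buggy space check
#    (unofficial AIX bug 9780505):
opatch apply OPatch.SKIP_VERIFY_SPACE=true

# 2) If OPatch then fails to copy libclsra11.so, run this as root in
#    another window to unload unused shared libraries from memory:
slibclean

# 3) Back in the OPatch session, choose "retry" and the patch completes.
```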

 

Unlike other operating systems, on AIX the GI PSU actually consumes a lot of space (GI_HOME grows by about 7 GB after applying it), and AIX also seems particularly prone to library-related bugs.

Posted by pat98

These days, security policies mean that many environments have no X (GUI) available.

Don't give up: Oracle can be installed in silent mode (text-only, no GUI).

 

In 11gR2 this is quick to do even without setting up an rsp file.

 

Download the Oracle installation media and unzip it anywhere. Go to the unzipped directory and run the command below. You will, of course, need to tweak the settings to match your own environment.

 

p10404530_112030_LINUX_1of7.zip

p10404530_112030_LINUX_2of7.zip

 

(The ORACLE_HOME, inventory, and other required directories must be created in advance, and the oracle account as well, naturally.)
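As a sketch of that preparation step, the groups, user, and directories matching the runInstaller command in this post could be created as follows. The group and user names and paths simply mirror that command; adjust them for your site.

```shell
# Run as root. Group/user names and paths match the install command used here.
groupadd oinstall
groupadd dba
useradd -g oinstall -G dba oracle

# Inventory and ORACLE_HOME directories, owned by the oracle user.
mkdir -p /u01/app/oraInventory
mkdir -p /u01/app/oracle/product/11201/db_1
chown -R oracle:oinstall /u01/app
chmod -R 775 /u01/app
```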

 

$ cd /11gR2/database
$ ./runInstaller -silent -debug -force \
FROM_LOCATION=/11gR2/database/stage/products.xml \
oracle.install.option=INSTALL_DB_SWONLY \
UNIX_GROUP_NAME=oinstall \
INVENTORY_LOCATION=/u01/app/oraInventory \
ORACLE_HOME=/u01/app/oracle/product/11201/db_1 \
ORACLE_HOME_NAME="OraDb11g_Home1" \
ORACLE_BASE=/u01/app/oracle \
oracle.install.db.InstallEdition=EE \
oracle.install.db.isCustomInstall=false \
oracle.install.db.DBA_GROUP=dba \
oracle.install.db.OPER_GROUP=dba \
DECLINE_SECURITY_UPDATES=true

 

The installer proceeds a little and then sits at this point for quite a while. It keeps going even if you press Enter.

If you open another window and run tail -f /u01/app/oraInventory/logs/installActions2013-04-19_02-54-24PM.log, you can watch the installation busily progressing.

 

[Worker 3] [ 2013-04-19 14:54:38.097 KST ] [ClusterConfig$ExecuteCommand.run:3069]  Released Semaphore by worker=Worker 3
[Worker 3] [ 2013-04-19 14:54:38.100 KST ] [Semaphore.acquire:109]  SyncBufferFull:Acquire called by thread Worker 3 m_count=0
[main] [ 2013-04-19 14:54:38.100 KST ] [ResultSet.traceResultSet:359] 

Target ResultSet AFTER Upload===>
        Overall Status->SUCCESSFUL

        single2-->SUCCESSFUL


You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2013-04-19_02-54-24PM.log
[OUISetupDriver.JobExecutorThread] [ 2013-04-19 14:54:43.017 KST ] [Version.isPre:528]  version to be checked 11.2.0.3.0 major version to check against10
[OUISetupDriver.JobExecutorThread] [ 2013-04-19 14:54:43.017 KST ] [Version.isPre:539]  isPre.java: Returning FALSE
[OUISetupDriver.JobExecutorThread] [ 2013-04-19 14:54:43.017 KST ] [UnixSystem.getCSSConfigType:2418]  configFile=/etc/oracle/ocr.loc
[OUISetupDriver.JobExecutorThread] [ 2013-04-19 14:54:43.017 KST ] [UnixSystem.getCSSConfigType:2462]  configType=null

 

 

After about 10 minutes the installation completes, and you are told to run the following commands as root.

 

Please check '/u01/app/oraInventory/logs/silentInstall2013-04-19_02-54-24PM.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/oracle/product/11.2.0/db_1/root.sh


Successfully Setup Software.
copying /u01/app/oraInventory/logs/oraInstall2013-04-19_02-54-24PM.err to /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/oui/oraInstall2013-04-19_02-54-24PM.err
copying /u01/app/oraInventory/logs/oraInstall2013-04-19_02-54-24PM.out to /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/oui/oraInstall2013-04-19_02-54-24PM.out
copying /u01/app/oraInventory/logs/installActions2013-04-19_02-54-24PM.log to /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/oui/installActions2013-04-19_02-54-24PM.log
copying /u01/app/oraInventory/logs/silentInstall2013-04-19_02-54-24PM.log to /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/oui/silentInstall2013-04-19_02-54-24PM.log

 

Connecting afterwards works fine.

 

[oracle:/home/oracle]#sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Fri Apr 19 15:15:48 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>

 

 

 

 


 

Posted by pat98
