When configuring with CFS (Cluster File System) instead of ASM, the number of OCR/VOTE disks must be set as follows.

 

On the installer screen, the number of OCR disks must be 1 or 3. (The installer will not proceed if you specify 2.)

The number of VOTING disks must likewise be 1 or 3. (The installer will not proceed if you specify 2.)
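A quick sanity check for the 1-or-3 rule is to count the ONLINE entries in `crsctl query css votedisk` output. The sample output below is illustrative (hypothetical GUIDs and CFS paths, not captured from the original environment):

```shell
# Count voting disks from (sample) `crsctl query css votedisk` output.
# GUIDs/paths below are hypothetical placeholders.
sample='##  STATE    File Universal Id       File Name            Disk group
 1. ONLINE   a1b2c3d4 (/cfs/oracle/vote1) []
 2. ONLINE   e5f6a7b8 (/cfs/oracle/vote2) []
 3. ONLINE   c9d0e1f2 (/cfs/oracle/vote3) []'
count=$(printf '%s\n' "$sample" | grep -c ' ONLINE ')
echo "voting disks: $count"   # on CFS this must be 1 or 3, never 2
```

On a live cluster the same count comes from `crsctl query css votedisk | grep -c ' ONLINE '`.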

 

Add/delete tests still need to be run in this environment for:

ASM <-> CFS

CFS <-> CFS

Posted by pat98

2019. 12. 13. 12:28, Oracle

19c RAC OCR VOTE location



Starting with Oracle 19c (19.3) RAC, the OCR/VOTE disk location can once again be placed directly on a shared disk (CFS).


This was not much of a problem for fresh installs, but it was an extremely annoying restriction during upgrades; it is now supported again.


https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/release-changes.html#GUID-534E3021-931A-4AB8-9D54-21F2A97C199A

Resupport of Direct File Placement for OCR and Voting Disks

Starting with Oracle Grid Infrastructure 19c, the desupport for direct OCR and voting disk file placement on shared file systems is rescinded for Oracle Standalone Clusters. For Oracle Domain Services Clusters the requirement to place OCR and voting files in Oracle Automatic Storage Management (Oracle ASM) on top of files hosted on shared file systems and used as ASM disks remains.

In Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle announced that it would no longer support the placement of the Oracle Grid Infrastructure Oracle Cluster Registry (OCR) and voting files directly on a shared file system. This desupport is now rescinded. Starting with Oracle Grid Infrastructure 19c (19.3), with Oracle Standalone Clusters, you can again place OCR and voting disk files directly on shared file systems.


https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/using-a-cluster-file-system-for-oracle-clusterware-files.html#GUID-B91CAB65-2B3D-440C-A5A4-87E38FBE93EF

Using a Cluster File System for Oracle Clusterware Files

Starting with Oracle Grid Infrastructure 19c, you can use Oracle Automatic Storage Management (Oracle ASM) or certified shared file system to store OCR files and voting files.

For new Oracle Standalone Cluster installations, you can use Oracle ASM or shared file system to store voting files and OCR files. For other cluster types, you must use Oracle Automatic Storage Management (Oracle ASM) to store voting files and OCR files. For Linux x86-64 (64-bit) and Linux Itanium platforms, Oracle provides a cluster file system, OCFS2. However, Oracle does not recommend using OCFS2 for Oracle Clusterware files.






OCR disk failure test

 

Test environment: Red Hat Linux 5.8

                    Oracle 11gR2 (11.2.0.3)

 

===========================================================================================

With CRS running,

the OCR mirror (/dev/raw/raw2) is wiped with the dd command:


[root@rac1 /root]# dd if=/dev/zero of=/dev/raw/raw2 bs=8192
dd: writing `/dev/raw/raw2': No space left on device
38401+0 records in
38400+0 records out
314572800 bytes (315 MB) copied, 438.815 seconds, 717 kB/s

[root@rac1 /root]# ps -ef |grep d.bin
root      6082     1  0 17:25 ?        00:00:08 /u01/11.2.0/grid/bin/ohasd.bin reboot
oracle    6208     1  0 17:25 ?        00:00:01 /u01/11.2.0/grid/bin/oraagent.bin
oracle    6222     1  0 17:25 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
oracle    6234     1  0 17:25 ?        00:00:01 /u01/11.2.0/grid/bin/gpnpd.bin
oracle    6247     1  0 17:25 ?        00:00:03 /u01/11.2.0/grid/bin/gipcd.bin
root      6258     1  0 17:25 ?        00:00:01 /u01/11.2.0/grid/bin/cssdmonitor
root      6274     1  0 17:25 ?        00:00:01 /u01/11.2.0/grid/bin/cssdagent
oracle    6288     1  0 17:25 ?        00:00:07 /u01/11.2.0/grid/bin/ocssd.bin
root      6291     1  0 17:25 ?        00:00:06 /u01/11.2.0/grid/bin/orarootagent.bin
root      6305     1  1 17:25 ?        00:00:13 /u01/11.2.0/grid/bin/osysmond.bin
root      6416     1  0 17:26 ?        00:00:05 /u01/11.2.0/grid/bin/octssd.bin reboot
root      6438     1  1 17:26 ?        00:00:09 /u01/11.2.0/grid/bin/crsd.bin reboot
oracle    6443     1  0 17:26 ?        00:00:03 /u01/11.2.0/grid/bin/evmd.bin
oracle    6526  6443  0 17:28 ?        00:00:00 /u01/11.2.0/grid/bin/evmlogger.bin -o /u01/11.2.0/grid/evm/log/evmlogger.info -l /u01/11.2.0/grid/evm/log/evmlogger.log
root      6563     1  2 17:28 ?        00:00:17 /u01/11.2.0/grid/bin/ologgerd -M -d /u01/11.2.0/grid/crf/db/rac1
root      6730     1  0 17:36 ?        00:00:02 /u01/11.2.0/grid/bin/orarootagent.bin
oracle    6884     1  0 17:36 ?        00:00:01 /u01/11.2.0/grid/bin/oraagent.bin
oracle    6938     1  0 17:37 ?        00:00:00 /u01/11.2.0/grid/bin/tnslsnr LISTENER -inherit
root      7256  4440  0 17:40 pts/2    00:00:00 grep d.bin

 

[root@rac1 /root]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

 

[root@rac2 /root]# ocrconfig -restore /dev/raw/raw2
PROT-19: Cannot proceed while the Cluster Ready Service is running
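PROT-19 means Clusterware must be down on every node before `ocrconfig -restore` is allowed. A dry-run sketch of the sequence the error is asking for (the function only echoes each command so it can run anywhere; the backup path follows the post's /u01/11.2.0/grid/cdata layout, and the real commands must be run as root):

```shell
# Dry run: echo the restore sequence instead of executing it.
# Run the real commands as root, with CRS stopped on ALL nodes first.
ocr_restore_steps() {
  echo "crsctl stop crs -f    # on ALL nodes"
  echo "ocrconfig -restore /u01/11.2.0/grid/cdata/rac-cluster/backup00.ocr"
  echo "ocrcheck              # verify the registry before restarting"
  echo "crsctl start crs      # on ALL nodes"
}
ocr_restore_steps
```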

 

[root@rac1 /root]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2728
         Available space (kbytes) :     259392
         ID                       : 1682702384
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

 

Disabled CRS with crsctl disable crs, then rebooted.

 

[root@rac1 /root]# ocrconfig -restore /u01/11.2.0/grid/cdata/rac-cluster/backup00.ocr


[root@rac1 /root]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2732
         Available space (kbytes) :     259388
         ID                       : 1682702384
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded
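Since CRS autostart was disabled before the reboot, the follow-up (assumed; the transcript ends at ocrcheck) is to re-enable it and bring the stack back up on each node. Echoed here as a dry run:

```shell
# Dry run of the assumed post-restore steps; run for real as root per node.
post_restore_steps="crsctl enable crs
crsctl start crs"
printf '%s\n' "$post_restore_steps"
```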

=================================================================================

With CRS still running,

a different approach: remove the mirror with ocrconfig -delete.


[root@rac1 /root]# ps -ef |grep d.bin
root      3737     1  1 10:01 ?        00:00:37 /u01/11.2.0/grid/bin/ohasd.bin reboot
oracle    3863     1  0 10:01 ?        00:00:06 /u01/11.2.0/grid/bin/oraagent.bin
oracle    3877     1  0 10:01 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
oracle    3889     1  0 10:01 ?        00:00:03 /u01/11.2.0/grid/bin/gpnpd.bin
oracle    3902     1  0 10:01 ?        00:00:11 /u01/11.2.0/grid/bin/gipcd.bin
root      3904     1  0 10:01 ?        00:00:24 /u01/11.2.0/grid/bin/orarootagent.bin
root      3923     1  1 10:01 ?        00:01:00 /u01/11.2.0/grid/bin/osysmond.bin
root      3938     1  0 10:01 ?        00:00:05 /u01/11.2.0/grid/bin/cssdmonitor
root      3959     1  0 10:01 ?        00:00:04 /u01/11.2.0/grid/bin/cssdagent
oracle    3973     1  0 10:01 ?        00:00:24 /u01/11.2.0/grid/bin/ocssd.bin
root      4062     1  1 10:02 ?        00:00:38 /u01/11.2.0/grid/bin/octssd.bin reboot
root      4085     1  1 10:02 ?        00:01:05 /u01/11.2.0/grid/bin/crsd.bin reboot
oracle    4089     1  0 10:02 ?        00:00:30 /u01/11.2.0/grid/bin/evmd.bin
oracle    4173  4089  0 10:03 ?        00:00:00 /u01/11.2.0/grid/bin/evmlogger.bin -o /u01/11.2.0/grid/evm/log/evmlogger.info -l /u01/11.2.0/grid/evm/log/evmlogger.log
root      4226     1  0 10:05 ?        00:00:21 /u01/11.2.0/grid/bin/orarootagent.bin
root      4262     1  2 10:05 ?        00:01:33 /u01/11.2.0/grid/bin/ologgerd -M -d /u01/11.2.0/grid/crf/db/rac1
oracle    4389     1  0 10:05 ?        00:00:08 /u01/11.2.0/grid/bin/oraagent.bin
oracle    4434     1  0 10:06 ?        00:00:00 /u01/11.2.0/grid/bin/tnslsnr LISTENER -inherit
root      5559  3682  0 10:57 pts/1    00:00:00 grep d.bin

 

[root@rac1 /root]# ocrconfig -delete /dev/raw/raw2

[root@rac1 /u01/11.2.0/grid/log/rac1/crsd]# ls -al
total 6720
drwxr-x---  2 root oinstall    4096 Jul  4 11:16 .
drwxr-xr-t 24 root oinstall    4096 Apr 28 02:22 ..
-rw-r--r--  1 root root     6855546 Jul  4 11:11 crsd.log
-rw-r--r--  1 root root        2180 Jul  4 10:02 crsdOUT.log

2013-07-04 11:01:44.327: [UiServer][2927553424] {1:60636:135} Done for ctx=0xa5e0928
2013-07-04 11:02:56.746: [  OCRRAW][2992073616]propriowv_bootbuf: Vote information on disk 0 [/dev/raw/raw1] is adjusted from [1/2] to [2/2]
[  OCRMAS][3002579856]th_master: Received group private data event. Incarnation [2]
[  OCRMAS][3002579856]th_master: Received group private data event. Incarnation [3]
[  OCRMAS][3002579856]th_master: Received group private data event. Incarnation [4]
2013-07-04 11:03:04.291: [  OCRRAW][2992073616]proprioo: for disk 0 (/dev/raw/raw1), id match (1), total id sets, (2) need recover (0), my votes (2), total votes (2), commit_lsn (1257), lsn (1257)
2013-07-04 11:03:04.291: [  OCRRAW][2992073616]proprioo: my id set: (1669906634, 1028247821, 0, 0, 0)
2013-07-04 11:03:04.291: [  OCRRAW][2992073616]proprioo: 1st set: (1669906634, 188263131, 0, 0, 0)
2013-07-04 11:03:04.291: [  OCRRAW][2992073616]proprioo: 2nd set: (1669906634, 1028247821, 0, 0, 0)
2013-07-04 11:03:04.519: [  OCRAPI][2992073616]u_masmd:11: clscrs_register_resource2 succeeded [0]. Return [0]
2013-07-04 11:03:04.530: [  OCRSRV][2992073616]proath_update_grppubdata: Successfully updated and published the configured devices in public data.


[root@rac1 /root]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2732
         Available space (kbytes) :     259388
         ID                       : 1682702384
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

 

[root@rac1 /root]# ocrconfig -add /dev/raw/raw2

2013-07-04 11:27:00.746: [  OCRRAW][2989972368]proprioo: my id set: (1669906634, 188263131, 0, 0, 0)
2013-07-04 11:27:00.746: [  OCRRAW][2989972368]proprioo: 1st set: (1669906634, 1028247821, 0, 0, 0)
2013-07-04 11:27:00.746: [  OCRRAW][2989972368]proprioo: 2nd set: (1669906634, 188263131, 0, 0, 0)
2013-07-04 11:27:00.900: [  OCRRAW][2989972368]propriogid:1_2: INVALID FORMAT
[  OCRMAS][3002579856]th_master: Received group private data event. Incarnation [6]
2013-07-04 11:27:04.512: [  OCRRAW][2989972368]propriowv_bootbuf: Vote information on disk 1 [/dev/raw/raw2] is adjusted from [0/0] to [1/2]
2013-07-04 11:27:04.528: [  OCRRAW][2989972368]propriowv_bootbuf: Vote information on disk 0 [/dev/raw/raw1] is adjusted from [2/2] to [1/2]

 

[root@rac1 /root]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2732
         Available space (kbytes) :     259388
         ID                       : 1682702384
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded
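A quick way to confirm both OCR locations are back is to count the "Device/File Name" entries in ocrcheck output. The sample below is the relevant portion of the final ocrcheck above:

```shell
# Count configured OCR locations in (sample) ocrcheck output.
ocrcheck_output='         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded'
configured=$(printf '%s\n' "$ocrcheck_output" | grep -c 'Device/File Name')
echo "configured OCR locations: $configured"
```

On a live node the equivalent is `ocrcheck | grep -c 'Device/File Name'`.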
