Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.2.0.2 - Release: 10.2 to 11.2
HP-UX PA-RISC (64-bit)
HP-UX Itanium
HP-UX PA-RISC - HP-UX 11iv3 September 2009 Operating Environment Update Release
HP-UX Integrity Blade Server - HP-UX 11iv3 September 2009 Operating Environment Update Release
VxFS 5.0.1 - Using OnlineJFS 5.0.1

Goal

How to use Concurrent I/O on HP-UX to improve throughput for a single-instance Oracle database.

Solution

What is Concurrent I/O?

Concurrent I/O allows multiple processes to read from or write to the same file without blocking other read(2) or write(2) calls.

POSIX semantics require read and write calls on a file to be serialized with other read and write calls: a read returns the data either as it was before a concurrent write or as it is after that write completed, never a partial result.

With Concurrent I/O, read and write operations are not serialized, much as with a character device.

This advisory is generally used by applications that require high-performance data access and do not perform overlapping writes to the same file.

It is the responsibility of the application or the running threads to coordinate the write activities to the same file when using Concurrent I/O.

How to enable Concurrent I/O?

Concurrent I/O can be enabled in the following ways:

A. By using the “-o cio” mount option.

All read(2) and write(2) operations on the files in this filesystem will use Concurrent I/O.

- Steps for new filesystems created using 5.0.1 OnlineJFS:

# mount -F vxfs -o cio <device_special_file> <mount_point>

- Steps for existing filesystems that were mounted without Concurrent I/O or created with older VxFS versions:

  Existing filesystems (created with an older VxFS version, or previously mounted without the Concurrent
  I/O option) must be unmounted and mounted again with "-o cio" to enable Concurrent I/O.

  Note that the remount option should not be used when mounting a filesystem with "-o cio".

1. Unmount the filesystem

# umount <mount_point>

2. Upgrade to VxFS 5.0.1 with 5.0.1 OnlineJFS installed on the system

Refer to the “Veritas 5.0.1 Installation Guide” on http://docs.hp.com for detailed upgrade instructions.

3. Mount the filesystem with the “-o cio” option

# mount -F vxfs -o cio,<other_options_as_needed> <device_special_file> <mount_point>

Concurrent I/O is a licensed feature of VxFS. If “-o cio” is specified, but the feature is not licensed, the mount command prints an error message and terminates the operation without mounting the filesystem.

NOTE:

Do not use "-o cio" and "-o mincache=direct,convosync=direct" together. Use either Direct I/O or Concurrent I/O.

Using Direct I/O and Concurrent I/O together ("-o mincache=direct,convosync=direct,cio") may cause a performance regression.
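To make Concurrent I/O persistent across reboots, the filesystem's /etc/fstab entry can carry the same option. This is a sketch only: the volume, mount point, and "delaylog" logging option are assumptions, with the field layout as documented in the HP-UX fstab(4) manual page.

```
/dev/vgSIDdata/lvdata1  /oradata  vxfs  delaylog,cio  0  2
```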

B. By setting the VX_CONCURRENT advisory flag on a file descriptor with the VX_SETCACHE ioctl command.

Only the read(2) and write(2) calls made through that file descriptor use Concurrent I/O; read and write operations through other file descriptors on the same file still follow POSIX semantics.

For example:

ioctl(fd, VX_SETCACHE, VX_CONCURRENT);

where fd is the file descriptor.

Concurrent I/O requirements

• With Concurrent I/O, read and write operations are not serialized. It is the responsibility of the application or its threads to coordinate write activity and ensure writes go to non-overlapping regions of the same file.

• To gain maximum throughput, the application must perform non-overlapping writes to the same file.

• Performance increases if application write offsets are block-aligned and I/O sizes are multiples of the device block size.

• Concurrent I/O bypasses inode locking, so the application (or database) must provide its own serialization mechanism for multiple writers.

• The starting file offset must be aligned to a 1024-byte boundary.

• The ending file offset must be aligned to a 1024-byte boundary, or the length must be a multiple of 1024 bytes.

NOTE:

If the Concurrent I/O alignment requirements are not met, the I/O falls back to data synchronous I/O, which can cause performance degradation.

If an application issues overlapping writes to the same file (on a filesystem with Concurrent I/O enabled) without its own serialization mechanism, the behavior of write(2) is undefined and may lead to data loss.

Using Concurrent I/O with Oracle

It is recommended to place the Oracle binaries ($ORACLE_BASE directory) on a separate filesystem mounted with default options.

Placing the Oracle database's datafiles on filesystems mounted with Concurrent I/O ("-o cio") delivers performance very close to that of raw logical volumes.

Placing the Oracle binaries ($ORACLE_BASE directory) on a filesystem mounted with "cio" may cause data loss and other unexpected problems.

Hence, mounting the Oracle binaries ($ORACLE_BASE directory) on a filesystem with the "cio" option is not supported.

Concurrent I/O can significantly improve the performance of a filesystem-based single-instance Oracle database installation.

Concurrent I/O performs at between 93% and 99% of raw logical volume throughput.

Thus, Concurrent I/O provides performance very close to that of raw logical volumes, in addition to the manageability benefits of a filesystem.

Concurrent I/O is not expected to provide a performance benefit over Direct I/O when used with online and archived redo logs.

Summary

Option 1. Production and other “performance critical” databases: 3 VGs and 5 filesystems.

| VG | Filesystem | Contents | Concurrent or Direct I/O? | Filesystem Block Size | Notes |
|----|------------|----------|---------------------------|-----------------------|-------|
| 1 (vgSIDlog) | origlogA | Online redo logs group A | Direct | Default | A dedicated VG can (i) improve performance, as writes do not have to wait on a SCSI queue shared with database files and logs in the same VG, and (ii) simplify I/O performance analysis (log writer response times). |
| 1 (vgSIDlog) | origlogB | Online redo logs group B | Direct | Default | |
| 2 (vgSIDdata) | Data1 | Database files | Concurrent | 8 KB | A dedicated VG allows BC's. |
| 3 (vgSIDfs) | Archfile | Archived redo logs | Direct | Default | |
| 3 (vgSIDfs) | Home | Database binaries | Default | Default | Binaries could be placed on a separate VG; however, there is no performance benefit in doing so. |

Option 2. Non-production and other less performance-sensitive databases: 2 VGs and 5 filesystems.

| VG | Filesystem | Contents | Concurrent or Direct I/O? | Filesystem Block Size | Notes |
|----|------------|----------|---------------------------|-----------------------|-------|
| 1 (vgSIDdata) | origlogA | Online redo logs group A | Direct | Default | |
| 1 (vgSIDdata) | origlogB | Online redo logs group B | Direct | Default | |
| 1 (vgSIDdata) | Data1 | Database files | Concurrent | 8 KB | If BC is used, database files should be placed in a dedicated VG, i.e. as per the “production” layout. |
| 2 (vgSIDfs) | Archfile | Archived redo logs | Direct | Default | |
| 2 (vgSIDfs) | Home | Database binaries | Default | Default | Binaries could be placed on a separate VG; however, there is no performance benefit in doing so. |

Posted by pat98