
Recently we were refreshing our recovery system infrastructure, moving automatic recoveries to new servers, each with a big bunch of directly attached disks. Everything went fine until we started to run recoveries - they were much slower than before, even though they were running on more powerful hardware. We started an investigation and found some misconfigurations, but after correcting them the performance gain was still too small. Finally, I found the real problem - we were not using direct I/O, which made especially the recovery phase very slow.
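
A quick way to see whether the database really bypasses the page cache is to watch the flags on its open() calls. A minimal sketch, assuming you can attach strace to a database writer process (the PID lookup and grep are illustrative, not from the original post):

# find a DBWR process and watch file opens; with direct I/O enabled,
# datafiles should be opened with the O_DIRECT flag
strace -f -p $(pgrep -f ora_dbw0 | head -1) -e trace=open,openat 2>&1 | grep O_DIRECT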

To give you some background - on our production (and old recovery) servers we are using NAS as a storage backend, with Direct NFS enabled. As MOS note Initialization Parameter 'filesystemio_options' Setting With Direct NFS (dNFS) (Doc ID 1479539.1) confirms, with dNFS direct and async I/O are always enabled, no matter what value the filesystemio_options parameter is set to. Our recovery system uses template initialization files, where this parameter is not specified at all, so it gets its default value, which on Linux is none. Of course :) we had forgotten about this "little" fact while moving to the new servers, where we are not using NAS but directly attached storage.
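
The fix itself is a one-liner. A minimal sketch of checking and changing the parameter from SQL*Plus (the session below is illustrative; filesystemio_options is a static parameter, so the instance must be restarted for the change to take effect):

sqlplus -s / as sysdba <<EOF
show parameter filesystemio_options
alter system set filesystemio_options=setall scope=spfile;
EOF
# restart the instance afterwards - the parameter is not dynamic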

Since we needed a big filesystem to provide space for our recoveries, we chose XFS and created an 88 TB filesystem with RAID10 underneath. To make things more complicated, XFS for Oracle Database seems to be unsupported on RHEL6 - not even tested - see Oracle Database - Filesystem & I/O Type Supportability on Oracle Linux 6 (Doc ID 1601759.1). According to another MOS note, Certification Information for Oracle Database on Linux x86-64 (Doc ID 1304727.1), it is certified on RHEL7, but we would have to wait at least a few weeks until RHEL7 could be introduced into our infrastructure. Then again, we don't need certification to use XFS only for recovery servers - as long as it works.
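
For the record, creating the filesystem itself was the easy part; a minimal sketch, assuming the RAID10 volume shows up as /dev/md0 and is mounted under /backup (both names are illustrative):

mkfs.xfs /dev/md0
mkdir -p /backup
mount -o noatime /dev/md0 /backup
df -h /backup    # should report the ~88 TB filesystem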

The problem was that after enabling direct and async I/O by setting filesystemio_options to setall, we started getting errors like this during the archived log restore phase:

connected to target database: DBNAME (DBID=41348668, not open)
using target database control file instead of recovery catalog

4> restore archivelog sequence 88182 thread 2
6> delete noprompt archivelog sequence 88182 thread 2

channel d0: starting archived log restore to default destination
channel d0: reading from backup piece /backup/dbs02/DBNAME/DBNAME_20150906_97qebtd5_1_1_arch
channel d0: ORA-19870: error while restoring backup piece /backup/dbs02/DBNAME/DBNAME_20150906_97qebtd5_1_1_arch
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-03002: failure of restore command at 15:58:03
ORA-27044: unable to write the header block of file
