Friday, November 22, 2013

Recovering from QFS corruption: prepare before you run samfsck on a huge file system

When you face a file system corruption issue, you probably already know which commands to run to fix it. But it's a good idea to prepare yourself before you start running the repair commands.

So far I've experienced at least three QFS file system corruption issues, and every one of them was a painful experience.

I just want to share some tips to consider if you are dealing with a huge file system that is reporting inconsistencies. Frankly, QFS samfsck is not very reliable: it does little self-healing and it cannot cope with repairing too many inodes, although how many is too many depends on the file system.

In my experience, if you are looking at recovering on the order of 10K inodes, consider creating a new file system and restoring from backup instead. samfsck can run to completion and still not leave the file system clean: it gets into a loop, repairing the same inode numbers again and again even after it has marked them damaged, and sometimes it wipes out the entire lost+found.

If samfsck cannot give you a clean file system, stop re-running it and rebuild instead.

Here are some estimates:
On a 10TB file system with 7.2TB of data, samfsck takes about 8 hours to complete.


Inodes processed: 10983424

total data kilobytes       = 10737376960
total data kilobytes free  = 2979166784
INFO:  FS online_data repaired:
        start:  Fri May 03 23:18:27 2013
        finish: Sat May 04 06:56:03 2013


Oracle support will say they have never seen this problem before and that you have to run sammkfs and restore from backup.

You may have to run samfsck multiple times to repair the file system.

Every time you run samfsck, unmount the file system first:

# umount /filesystem

# Make sure the file system you are repairing has a clean lost+found that is big enough to hold all of the corrupted inodes. In my experience, on a 5-10TB file system samfsck takes about 6-8 hours to scan through all of the inode numbers, only to complain at the end that lost+found is not big enough and that you should rerun samfsck.

Here are the steps to pre-expand lost+found (run them while the file system is mounted):
cd /filesystem/lost+found

N=0
while [ $N -lt 102400 ]; do
touch TMPFILE$N
N=`expr $N + 1`
done
rm TMPFILE*


This creates just over 100K files, and the rm at the end will fail because the expanded TMPFILE* argument list is too long for a single command.

You then have to remove all of the TMPFILE files from lost+found yourself (one way to do this is sketched below). If you do this step correctly it will save you a lot of time. By default lost+found can handle only around 28 MB.

With this step, lost+found can handle around 250 MB.
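
Since rm TMPFILE* trips over the shell's argument-list limit, one way to clear the placeholder files is to let find hand them to rm one at a time. A minimal sketch, assuming the same /filesystem/lost+found path as above:

cd /filesystem/lost+found
# remove the placeholder entries one by one; slower than a single glob,
# but it never exceeds the argument-list limit
find . -name 'TMPFILE*' -exec rm {} \;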

/opt/SUNWsamfs/sbin/samfsck -F -V <fs_name> | tee /var/tmp/samfsck.`date '+%Y%m%d.%H%M%S'`
# After samfsck completes, try to mount the file system and read from and write to it to confirm the data is accessible.
Keep monitoring the QFS logs (/var/adm/messages, or wherever logging is configured) for SAM-QFS related errors.
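
A simple way to watch for new errors while you exercise the file system; a sketch assuming the default /var/adm/messages location:

tail -f /var/adm/messages | grep -i 'SAM-QFS'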

-----------------------------------------------------------------------------------------------------
Oracle KB 1006526.1 describes how to increase the lost+found size
-----------------------------------------------------------------------------------------------------

Sun QFS and Storage Archive Manager (SAM): samfsck Complains "Orphan processing stopped due to full lost+found" [ID 1006526.1]

Applies to:

Sun QFS and Storage Archive Manager (SAM) - Version 4.0 and later
All Platforms
***Checked for relevance on 18-Nov-2011***


Symptoms:
When attempting to repair your SAM-FS or SAM-QFS file system with samfsck(1M), it complains it is unable to process all of the orphan inodes due to a full lost+found directory.
Ex :

samfsck: NOTICE: Filesystem samfs2 requires fsck
name:     samfs2       version:     2
First pass
Second pass
Third pass
NOTICE: ino 2155175.3,  Repaired link count from 4 to 1
NOTICE: Orphan ino:        ino 2155175 moved to lost+found
NOTICE: Orphan ino:        ino 2155404 moved to lost+found
NOTICE: Orphan ino:        ino 2155689 moved to lost+found
...
NOTICE: Orphan ino:        ino 2155852 moved to lost+found
NOTICE: Orphan ino:        ino 2155853 moved to lost+found
NOTICE: Orphan processing stopped due to full lost+found.
Increase lost+found and rerun.
NOTICE: ino 9717769.5,  Repaired link count from 0 to 8
NOTICE: ino 9718942.3,  Repaired link count from 9 to 10
...
NOTICE: ino 9718949.1,  Repaired link count from 0 to 1
NOTICE: ino 9718950.1,  Repaired link count from 0 to 1
Inodes processed: 9808384
total data kilobytes       = 272121856
total data kilobytes free  = 77938544
NOTICE: Reclaimed 8126464 bytes

Cause

During normal operation, when directories are created within a SAM-QFS file system, it is able to accommodate X number of files.

When the X+1 file is created, the directory size is expanded to accommodate an additional X files (X+X). And this trend continues. Since samfsck(1M) is performing fixes on the file system while it is unmounted, it's unable to 'expand' the directory to make room, so if the existing space is filled up, it cannot proceed. As such, it may become necessary to create additional file entries within the directory when the file system is online, and remove these files to make room for orphan inodes.

As files are removed, the directory size is not decreased. Depending on the number of orphans, you may have to use a value larger than 1,024. Please adjust this accordingly. The 'samfsck -V' command (without the  -F option) can be used to find out how many orphan inodes exist. This command can be used while the file system is mounted or unmounted. Once these commands are executed, the lost+found directory should have sufficient room for the orphan inodes processed by samfsck(1M).
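
A rough way to get that orphan count up front is to save a report-only run (no -F) and grep it. This is only a sketch: samfs2 is the example family set name from the output above, and it assumes the -V report lists orphans the same way the repair run does.

/opt/SUNWsamfs/sbin/samfsck -V samfs2 > /var/tmp/samfsck.report 2>&1
grep -c "Orphan" /var/tmp/samfsck.report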

Solution
The following is taken from the samfsck(1M) man page:

     If there are files encountered that are not  attached  to  a
     parent    directory,    they    will   be   moved   to   the
     /mount_point/lost+found directory.  If this  directory  does
     not  exist, you must create this directory first and make it
     sufficently large to hold the  expected  number  of  discon-
     nected  files if you wish this to happen.  Here is how to do
     this in the Bourne shell for a SAM file  system  mounted  on
     /sam:

     /bin/mkdir /sam/lost+found
     cd /sam/lost+found
     N=0
     while [ $N -lt 1024 ]; do
         touch TMPFILE$N
         N=`expr $N + 1`
     done
     rm TMPFILE*

Additional Information

Directory Size (bytes)    Maximum number of files
 4096                     144 (initial size)
 8192                     289 (+145)
12288                     434 (+145)
16384                     579 (+145)

The directory size will be (N/145 + 1)*4096 bytes, where N is the number of files in the directory and integer division is used (no fraction).

For example, for 870 files:
(870/145 + 1)*4096 = (6 + 1)*4096 = 28672 bytes
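
A small Bourne shell helper to turn an expected orphan count into a target directory size using the formula above; the 15000 is just an example value:

N=15000                                  # expected number of orphan inodes
SIZE=`expr \( $N / 145 + 1 \) \* 4096`   # directory size in bytes, per the formula above
echo "pre-create $N TMPFILE entries; lost+found will grow to about $SIZE bytes"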

Error sam_putapage: sparse block

Errors in /var/adm/messages:

Mar 28 18:31:12 dam-app1 samfs: [ID 247537 kern.warning] WARNING: SAM-QFS: online_3par: sam_putapage: sparse block, ip=30087ca1568, ino=9820091.1, pp=7001 03acc00, off=0
Mar 28 18:31:12 dam-app1 samfs: [ID 756621 kern.warning] WARNING: SAM-QFS: online_3par: inode 0x95d7bb has a SAM-QFS: sam_putapage: sparse block error


This is a file system message indicating an error while flushing disk blocks for the given inode. It is sometimes seen on a client node that is unable to flush its disk block pages because the server claims there is a problem with the block range.

This is not always a file system problem; it sometimes means the client has stale cached information and the server is blocking the request. If it is happening on the MDS itself, it may indicate file system corruption.

If it happens on a client, the file system may have to be forcibly unmounted; when the file system is remounted on the client, the correct information is obtained from the server and these messages should go away.
If these messages appear on the MDS server, running samfsck is recommended.
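
For the client case, a minimal sketch of the bounce, using the /sam example mount point from the man page excerpt above (use the force option only if a normal umount hangs on the stale pages):

umount -f /sam    # force the unmount on the affected client
mount /sam        # remount; the client re-fetches the correct information from the server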

Tuesday, August 13, 2013

Duplicate physical volumes and missing multipath devices


Problem:
We ran into a glitch while trying to extend a few database file systems.

LVM reported duplicate physical volumes, and two physical volumes were missing their multipath devices.

[root@mdc2pr002 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.6 (Tikanga)


[root@mdc2pr002 host0]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      3.9G  872M  2.9G  24% /
/dev/mapper/VolGroup00-LogVol03
                      6.8G  149M  6.3G   3% /tmp
/dev/mapper/VolGroup00-LogVol02
                      9.7G  1.9G  7.4G  20% /var
/dev/mapper/VolGroup00-LogVol01
                      9.9G  2.7G  6.8G  29% /opt
/dev/mapper/VolGroup00-LogVol05
                      9.7G  3.0G  6.3G  32% /home
/dev/mapper/VolGroup00-LogVol04
                      9.7G  2.2G  7.1G  24% /usr
/dev/mapper/VolGroup00-oracleinstall
                       40G   18G   20G  49% /oracle


[root@mdc2pr002 lvm]# pvs
  Found duplicate PV duB4K3d0lBnFTdgAwwfoDx6MxZx6Mdxu: using /dev/sdag1 not /dev/sdq1
  Found duplicate PV FV0I6TmjEEn5kg0hyVF5zEISNTokcdK1: using /dev/sdr1 not /dev/sdb1
  Found duplicate PV jN3eWS2UZ22dmYE37QyV0vSDkuevnTRf: using /dev/sds1 not /dev/sdc1
  Found duplicate PV FV0I6TmjEEn5kg0hyVF5zEISNTokcdK1: using /dev/sdj1 not /dev/sdr1
  Found duplicate PV FV0I6TmjEEn5kg0hyVF5zEISNTokcdK1: using /dev/sdz1 not /dev/sdj1
  Found duplicate PV jN3eWS2UZ22dmYE37QyV0vSDkuevnTRf: using /dev/sdk1 not /dev/sds1
  Found duplicate PV jN3eWS2UZ22dmYE37QyV0vSDkuevnTRf: using /dev/sdaa1 not /dev/sdk1
  PV                             VG         Fmt  Attr PSize   PFree
  /dev/mpath/350002ac0000e0ae2p1            lvm2 a-   333.00G 333.00G
  /dev/mpath/350002ac0000f0ae2p1            lvm2 a-   333.00G 333.00G
  /dev/mpath/350002ac000100ae2p1            lvm2 a-   333.00G 333.00G
  /dev/mpath/350002ac000140ae2p1 VolGroup02 lvm2 a-   499.99G  81.22G    <-- 5 x 500GB exported to mdc2pr002
  /dev/mpath/350002ac000150ae2p1 VolGroup02 lvm2 a-   499.99G  81.22G    <-- 5 x 500GB exported to mdc2pr002
  /dev/mpath/350002ac000160ae2p1 VolGroup02 lvm2 a-   499.99G  81.22G    <-- 5 x 500GB exported to mdc2pr002
  /dev/sda2                      VolGroup00 lvm2 a-   136.00G  45.00G
  /dev/sdaa1                     VolGroup01 lvm2 a-   499.99G 351.16G    <-- 5 x 500GB exported to mdc2pr002
  /dev/sdz1                      VolGroup01 lvm2 a-   499.99G 252.16G    <-- 5 x 500GB exported to mdc2pr002

Multipath shows only 3 x 500GB LUNs:

[root@mdc2pr002 ~]# multipath -ll
mpath2 (350002ac000150ae2) dm-11 3PARdata,VV
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]    <-- 500GB
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:3  sdac 65:192 [active][ready]
 \_ 3:0:0:3  sde  8:64   [active][ready]
 \_ 3:0:1:3  sdm  8:192  [active][ready]
 \_ 5:0:0:3  sdu  65:64  [active][ready]
mpath1 (350002ac000140ae2) dm-10 3PARdata,VV
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]    <-- 500GB
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:2  sdab 65:176 [active][ready]
 \_ 3:0:0:2  sdd  8:48   [active][ready]
 \_ 3:0:1:2  sdl  8:176  [active][ready]
 \_ 5:0:0:2  sdt  65:48  [active][ready]
mpath6 (350002ac0000e0ae2) dm-15 3PARdata,VV
[size=333G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:7  sdag 66:0   [active][ready]
 \_ 3:0:0:7  sdi  8:128  [active][ready]
 \_ 3:0:1:7  sdq  65:0   [active][ready]
 \_ 5:0:0:7  sdy  65:128 [active][ready]
mpath5 (350002ac0000f0ae2) dm-14 3PARdata,VV
[size=333G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:6  sdaf 65:240 [active][ready]
 \_ 3:0:0:6  sdh  8:112  [active][ready]
 \_ 3:0:1:6  sdp  8:240  [active][ready]
 \_ 5:0:0:6  sdx  65:112 [active][ready]
mpath4 (350002ac000100ae2) dm-13 3PARdata,VV
[size=333G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:5  sdae 65:224 [active][ready]
 \_ 3:0:0:5  sdg  8:96   [active][ready]
 \_ 3:0:1:5  sdo  8:224  [active][ready]
 \_ 5:0:0:5  sdw  65:96  [active][ready]
mpath3 (350002ac000160ae2) dm-12 3PARdata,VV
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]    <-- 500GB
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:4  sdad 65:208 [active][ready]
 \_ 3:0:0:4  sdf  8:80   [active][ready]
 \_ 3:0:1:4  sdn  8:208  [active][ready]
 \_ 5:0:0:4  sdv  65:80  [active][ready]

We are not seeing the two LUNs behind /dev/sdz1 and /dev/sdaa1 via multipath.

[root@mdc2pr002 ~]# fdisk -l /dev/sdz
Disk /dev/sdz: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdz1               1       65270   524281243+  8e  Linux LVM
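
One way to confirm that /dev/sdz and /dev/sdaa are SAN LUNs that multipath simply has no map for is to pull their SCSI WWIDs and compare them against the WWIDs in the multipath -ll output above. A sketch, assuming the RHEL 5 scsi_id syntax:

/sbin/scsi_id -g -u -s /block/sdz
/sbin/scsi_id -g -u -s /block/sdaa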






Solution:
We do have a potential problem with these devices because the physical volumes were created on the raw SCSI devices instead of on the multipath devices:
/dev/sdaa1                     VolGroup01 lvm2 a-   499.99G 351.16G
/dev/sdz1                      VolGroup01 lvm2 a-   499.99G 252.16G
#0. Step
We changed the LVM filter in /etc/lvm/lvm.conf so that LVM scans only the multipath devices and /dev/sda, and rejects all other block devices:
filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sda$|", "a|/dev/sda1$|", "a|/dev/sda2$|", "r|.*|" ]
This replaces the default filter:
#filter = [ "a/.*/" ]
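
After editing the filter it is worth re-running the LVM scans; the duplicate-PV warnings should be gone (VolGroup01 may complain about missing devices until multipath maps exist for those two LUNs):

pvscan
pvs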


#0. Step
We also enabled user-friendly names in /etc/multipath.conf:
user_friendly_names yes
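
For reference, this option lives in the defaults section of /etc/multipath.conf; a minimal sketch of that stanza:

defaults {
        user_friendly_names yes
}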


#1. Step
Rebuilding the initrd (RHEL 3, 4, 5)

https://access.redhat.com/site/solutions/1958
[root@mdc2pr002 boot]# uname -r
2.6.18-238.el5

[root@mdc2pr002 boot]# mv /boot/initrd-2.6.18-238.el5.img /boot/initrd-2.6.18-238.el5.img_07302013
[root@mdc2pr002 boot]# mkinitrd -f -v /boot/initrd-2.6.18-238.el5.img 2.6.18-238.el5

[root@mdc2pr002 boot]# ls -lat /boot/
total 30051
-rw-------  1 root root 4113444 Jul 30 19:25 initrd-2.6.18-238.el5.img
drwxr-xr-x  4 root root    1024 Jul 30 19:25 .
drwxr-xr-x 34 root root    4096 Jul 30 17:21 ..
drwxr-xr-x  2 root root    1024 Mar 31  2012 grub
-rw-------  1 root root 4113141 Mar 31  2012 initrd-2.6.18-238.el5.img_07302013
-rw-------  1 root root 4113141 Mar 31  2012 initrd-2.6.18-238.el5.img.bak
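
Before rebooting, a quick sanity check is to list the contents of the rebuilt initrd and make sure the multipath pieces were pulled in. A sketch, assuming the standard gzipped-cpio initrd format on RHEL 5:

zcat /boot/initrd-2.6.18-238.el5.img | cpio -it | grep -i multipath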



#reboot

Now we have multipath user-friendly names enabled.

[root@mdc2pr002 ~]# pvscan
  PV /dev/mapper/mpath7p1   VG VolGroup01      lvm2 [499.99 GB / 252.16 GB free]
  PV /dev/mapper/mpath0p1   VG VolGroup01      lvm2 [499.99 GB / 351.16 GB free]
  PV /dev/mapper/mpath1p1   VG VolGroup02      lvm2 [499.99 GB / 14.56 GB free]
  PV /dev/mapper/mpath2p1   VG VolGroup02      lvm2 [499.99 GB / 14.56 GB free]
  PV /dev/mapper/mpath3p1   VG VolGroup02      lvm2 [499.99 GB / 14.56 GB free]
  PV /dev/sda2              VG VolGroup00      lvm2 [136.00 GB / 45.00 GB free]
  PV /dev/mapper/mpath5p1                      lvm2 [333.00 GB]
  PV /dev/mapper/mpath4p1                      lvm2 [333.00 GB]
  PV /dev/mapper/mpath6p1                      lvm2 [333.00 GB]


But the VolGroup01 and VolGroup02 logical volumes are still missing from /dev/mapper:

[root@mdc2pr002 ~]# ls /dev/mapper/
control   mpath1    mpath2p1  mpath4    mpath5p1  mpath7    VolGroup00-LogVol00  VolGroup00-LogVol03  VolGroup00-oracleinstall
mpath0    mpath1p1  mpath3    mpath4p1  mpath6    mpath7p1  VolGroup00-LogVol01  VolGroup00-LogVol04
mpath0p1  mpath2    mpath3p1  mpath5    mpath6p1  mpath8    VolGroup00-LogVol02  VolGroup00-LogVol05

[root@mdc2pr002 host0]# mount -a
mount: special device /dev/VolGroup02/db4-stripe-default does not exist
mount: special device /dev/VolGroup02/db5-stripe-default does not exist
mount: special device /dev/VolGroup02/db6-stripe-default does not exist
mount: special device /dev/VolGroup02/db7-stripe-default does not exist
mount: special device /dev/VolGroup02/db8-stripe-default does not exist



[root@mdc2pr002 ~]# lvscan
  inactive          '/dev/VolGroup01/swap' [99.00 GB] inherit
  inactive          '/dev/VolGroup01/vmdb-stripe-default' [50.00 GB] inherit
  inactive          '/dev/VolGroup01/dbwork-stripe-default' [247.66 GB] inherit
  inactive          '/dev/VolGroup02/db4-stripe-default' [239.06 GB] inherit
  inactive          '/dev/VolGroup02/db5-stripe-default' [269.09 GB] inherit
  inactive          '/dev/VolGroup02/db6-stripe-default' [220.01 GB] inherit
  inactive          '/dev/VolGroup02/db7-stripe-default' [259.07 GB] inherit
  inactive          '/dev/VolGroup02/db8-stripe-default' [469.08 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol00' [4.00 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol03' [7.00 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol02' [10.00 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [10.22 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol05' [10.00 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol04' [10.00 GB] inherit
  ACTIVE            '/dev/VolGroup00/oracleinstall' [39.78 GB] inherit



We realized these volume groups were in an inactive state.

#2. Step
#vgchange -a y VolGroup01
#vgchange -a y VolGroup02


[root@mdc2pr002 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup01" using metadata type lvm2
  Found volume group "VolGroup02" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2


#3. Step
Resize the logical volumes and mount the file systems.
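
A minimal sketch of this last step, using one of the VolGroup02 volumes shown above as an example; the +50G size and the ext3 file system type are assumptions, so adjust both for your environment:

mount -a                                               # mount the file systems now that the LVs exist
lvextend -L +50G /dev/VolGroup02/db4-stripe-default    # grow the logical volume (example size)
resize2fs /dev/VolGroup02/db4-stripe-default           # grow the ext3 file system online to match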

Monday, April 29, 2013

3PAR remote copy setup

Remote copy status and troubleshooting


Before starting the remote copy steps:
Set up the remote copy IP interfaces and licenses.

DataCenter1 remote copy port and IP
2:6:1 - 19.162.27.7
3:6:1 - 19.162.27.8

3par2 cli% showport -rcip
N:S:P State ---HwAddr---     IPAddr         Netmask     Gateway  MTU  Rate Duplex AutoNeg
2:6:1 ready 0002AC69249B 19.162.27.7 255.255.255.240 19.162.27.14 1500 1Gbps   Full     Yes
3:6:1 ready 0002AC6A2867 19.162.27.8 255.255.255.240 19.162.27.14 1500 1Gbps   Full     Yes


DataCenter2 remote copy port and IP
2:6:1 - 19.220.33.145
3:6:1 - 19.220.33.146

3par1 cli% showport -rcip
N:S:P   State ---HwAddr---        IPAddr         Netmask       Gateway  MTU  Rate Duplex AutoNeg
0:6:1 offline 0002AC5410C7             -               -             -    -   n/a    n/a     n/a
1:6:1 offline 0002AC540DFC             -               -             -    -   n/a    n/a     n/a
2:6:1   ready 0002AC540CE6 19.220.33.145 255.255.255.240 19.220.33.158 1500 1Gbps   Full     Yes
3:6:1   ready 0002AC540C23 19.220.33.146 255.255.255.240 19.220.33.158 1500 1Gbps   Full     Yes

F400 ports:



T800 ports:


7 steps to set up 3PAR remote copy

Step #1. Starting remotecopy
Step #2. Setting up copy target
Step #3. Checking the links
Step #4. Creating Volume Groups for Synchronous Long Distance Remote Copy
Step #5. Add luns to the remote volume group
Step #6. Status
Step #7. Start replication


Please review the 3PAR Remote Copy Software User's Guide 3.1.2 (attached) for more details.

Step #1. Starting remotecopy
Starting rcopy on source
3par1 cli% startrcopy

Starting rcopy on target
3par2 cli% startrcopy


Step #2. Setting up copy target
Setting Up the Primary System
creatercopytarget <target_name> IP <N:S:P>:<IP_address> <N:S:P>:<IP_address>

3par1 cli% creatercopytarget 3par2 IP 2:6:1:19.220.33.145 3:6:1:19.220.33.146

Setting Up the Backup Systems
creatercopytarget <target_name> IP <N:S:P>:<IP_address> <N:S:P>:<IP_address>

3par2 cli% creatercopytarget 3par1 IP 2:6:1:19.162.27.7 3:6:1:19.162.27.8


Step #3. Checking the links
Checking the Links between Systems
3par1 cli% showrcopy
Remote Copy System Information
Status: Started, Normal
Target Information
Name               ID Type Status Options Policy
3par2              1  IP   ready          mirror_config
Link Information
Target             Node  Address       Status Options
3par2              2:6:1 19.220.33.145 Up
3par2              3:6:1 19.220.33.146 Up
receive            2:6:1 receive       Up
receive            3:6:1 receive       Up

3par2 cli% showrcopy
Remote Copy System Information
Status: Started, Normal
Target Information
Name               ID Type Status Options Policy
3par1 1  IP   ready          mirror_config


Link Information
Target             Node  Address    Status Options
3par1 2:6:1 19.162.27.7 Up
3par1 3:6:1 19.162.27.8 Up
receive            2:6:1 receive    Up
receive            3:6:1 receive    Up


Step #4. Creating Volume Groups for Synchronous Long Distance Remote Copy
creatercopygroup <group_name> <target_name>:<mode>
The mode is sync for synchronous replication or periodic for asynchronous periodic replication.

3par1 cli% creatercopygroup group-remotecopy1 3par2:sync
3par1 cli% creatercopygroup group-bcom-nearline 3par2:periodic
(the first group uses synchronous mode, the second asynchronous periodic mode)
(Optional).

If you are creating a volume group that uses periodic mode, set the synchronization period with the period subcommand:
setrcopygroup period <value>{s|m|h|d} <target_name> <group_name>
or
setrcopygroup period <value>{s|m|h|d} <target_name> -pat <group_pattern>
For example:
setrcopygroup period 1h 3par2 group-remotecopy1
setrcopygroup period 1h 3par2 -pat <group_pattern>

Auto recover setting:
setrcopygroup pol auto_recover <group_name>
setrcopygroup pol auto_recover -pat <group_pattern>


Step #5. Add luns to the remote volume group
On the source, add the pre-existing virtual volume(s) to the volume group
admitrcopyvv <VV_name> <group_name> <target_name>:<sec_VV_name>
admitrcopyvv -pat <VV_pattern> <group_name> <target_name>:<sec_VV_pattern>
For example:
admitrcopyvv VV-NL-RCOPYTEST4-R5 group-remotecopy1 3par2:VV-NL-RCOPYTEST1-R5
admitrcopyvv -pat VV-NL-RCOPYTEST1-R5 group-remotecopy1 3par2:VV-NL-RCOPYTEST1-R5
3par1 cli% admitrcopyvv VV-NL-RCOPYTEST4-R5 group-remotecopy1 3par2:VV-NL-RCOPYTEST1-R5
3par1 cli% admitrcopyvv VV-NL-RCOPYTEST5-R5 group-remotecopy1 3par2:VV-NL-RCOPYTEST2-R5
3par1 cli% admitrcopyvv VV-NL-RCOPYTEST6-R5 group-remotecopy1 3par2:VV-NL-RCOPYTEST3-R5 


Step #6. Status
3par1 cli% showrcopy

Remote Copy System Information
Status: Started, Normal
Target Information
Name   ID Type Status Options Policy
3par2 1  IP   ready          mirror_config

Link Information
Target  Node  Address       Status Options
3par2  2:6:1 19.220.33.145 Up
3par2  3:6:1 19.220.33.146 Up
receive 2:6:1 receive       Up
receive 3:6:1 receive       Up

Group Information
Name              Target     Status   Role       Mode     Options
group-remotecopy1 3par2     New      Primary    Sync
  LocalVV             ID   RemoteVV            ID   SyncStatus    LastSyncTime
  VV-NL-RCOPYTEST4-R5 29   VV-NL-RCOPYTEST1-R5 125  New           NA
  VV-NL-RCOPYTEST5-R5 30   VV-NL-RCOPYTEST2-R5 167  New           NA
  VV-NL-RCOPYTEST6-R5 31   VV-NL-RCOPYTEST3-R5 168  New           NA 


3par1 cli% showrcopy

Remote Copy System Information
Status: Started, Normal
Target Information
Name   ID Type Status Options Policy
3par2 1  IP   ready          mirror_config

Link Information
Target  Node  Address       Status Options
3par2  2:6:1 19.220.33.145 Up
3par2  3:6:1 19.220.33.146 Up
receive 2:6:1 receive       Up
receive 3:6:1 receive       Up

Group Information
Name              Target     Status   Role       Mode     Options
group-remotecopy1 3par2     Started  Primary    Sync
  LocalVV             ID   RemoteVV            ID   SyncStatus    LastSyncTime
  VV-NL-RCOPYTEST4-R5 29   VV-NL-RCOPYTEST1-R5 125  Syncing (0%)  NA
  VV-NL-RCOPYTEST5-R5 30   VV-NL-RCOPYTEST2-R5 167  Syncing (0%)  NA
  VV-NL-RCOPYTEST6-R5 31   VV-NL-RCOPYTEST3-R5 168  Syncing (0%)  NA


3par1 cli% showtask
  Id Type             Name                Status Phase   Step ----StartTime------ -FinishTime- -Priority-
9404 remote_copy_sync VV-NL-RCOPYTEST6-R5 active   2/3 0/1024 2012-09-27 19:24:15 EDT -            n/a
9405 remote_copy_sync VV-NL-RCOPYTEST5-R5 active   2/3 0/1024 2012-09-27 19:24:15 EDT -            n/a
9406 remote_copy_sync VV-NL-RCOPYTEST4-R5 active   2/3 0/1024 2012-09-27 19:24:15 EDT -            n/a 


3par1 cli% showtask
  Id Type             Name                Status Phase Step ----StartTime---- ---FinishTime------ -Priority-
9404 remote_copy_sync VV-NL-RCOPYTEST6-R5 done     --  -- 2012-09-27 19:24:15 EDT 2012-09-27 19:27:37 EDT n/a
9405 remote_copy_sync VV-NL-RCOPYTEST5-R5 done     --  -- 2012-09-27 19:24:15 EDT 2012-09-27 19:27:43 EDT n/a
9406 remote_copy_sync VV-NL-RCOPYTEST4-R5 done     --  -- 2012-09-27 19:24:15 EDT 2012-09-27 19:27:43 EDT n/a


Step #7. Start replication
Starting Initial Replication: Copying Data Directly from Primary Volume Groups
startrcopygroup <group_name>
startrcopygroup group-remotecopy1