Tuesday, August 13, 2013
Duplicate physical volumes and multipath missing
Problem:
We ran into a glitch while trying to extend a few database file systems:
pvs reported duplicate physical volumes, and two physical volumes were missing their multipath devices.
[root@mdc2pr002 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.6 (Tikanga)
[root@mdc2pr002 host0]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
3.9G 872M 2.9G 24% /
/dev/mapper/VolGroup00-LogVol03
6.8G 149M 6.3G 3% /tmp
/dev/mapper/VolGroup00-LogVol02
9.7G 1.9G 7.4G 20% /var
/dev/mapper/VolGroup00-LogVol01
9.9G 2.7G 6.8G 29% /opt
/dev/mapper/VolGroup00-LogVol05
9.7G 3.0G 6.3G 32% /home
/dev/mapper/VolGroup00-LogVol04
9.7G 2.2G 7.1G 24% /usr
/dev/mapper/VolGroup00-oracleinstall
40G 18G 20G 49% /oracle
[root@mdc2pr002 lvm]# pvs
Found duplicate PV duB4K3d0lBnFTdgAwwfoDx6MxZx6Mdxu: using /dev/sdag1 not /dev/sdq1
Found duplicate PV FV0I6TmjEEn5kg0hyVF5zEISNTokcdK1: using /dev/sdr1 not /dev/sdb1
Found duplicate PV jN3eWS2UZ22dmYE37QyV0vSDkuevnTRf: using /dev/sds1 not /dev/sdc1
Found duplicate PV FV0I6TmjEEn5kg0hyVF5zEISNTokcdK1: using /dev/sdj1 not /dev/sdr1
Found duplicate PV FV0I6TmjEEn5kg0hyVF5zEISNTokcdK1: using /dev/sdz1 not /dev/sdj1
Found duplicate PV jN3eWS2UZ22dmYE37QyV0vSDkuevnTRf: using /dev/sdk1 not /dev/sds1
Found duplicate PV jN3eWS2UZ22dmYE37QyV0vSDkuevnTRf: using /dev/sdaa1 not /dev/sdk1
PV                             VG         Fmt  Attr PSize   PFree
/dev/mpath/350002ac0000e0ae2p1            lvm2 a-   333.00G 333.00G
/dev/mpath/350002ac0000f0ae2p1            lvm2 a-   333.00G 333.00G
/dev/mpath/350002ac000100ae2p1            lvm2 a-   333.00G 333.00G
/dev/mpath/350002ac000140ae2p1 VolGroup02 lvm2 a-   499.99G  81.22G  <-- 5 x 500GB exported to mdc2pr002
/dev/mpath/350002ac000150ae2p1 VolGroup02 lvm2 a-   499.99G  81.22G  <-- 5 x 500GB exported to mdc2pr002
/dev/mpath/350002ac000160ae2p1 VolGroup02 lvm2 a-   499.99G  81.22G  <-- 5 x 500GB exported to mdc2pr002
/dev/sda2                      VolGroup00 lvm2 a-   136.00G  45.00G
/dev/sdaa1                     VolGroup01 lvm2 a-   499.99G 351.16G  <-- 5 x 500GB exported to mdc2pr002
/dev/sdz1                      VolGroup01 lvm2 a-   499.99G 252.16G  <-- 5 x 500GB exported to mdc2pr002
Multipath shows only 3 x 500GB LUNs out of the 5 exported:
[root@mdc2pr002 ~]# multipath -ll
mpath2 (350002ac000150ae2) dm-11 3PARdata,VV
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]  <-- 500GB
\_ round-robin 0 [prio=0][active]
\_ 5:0:1:3 sdac 65:192 [active][ready]
\_ 3:0:0:3 sde 8:64 [active][ready]
\_ 3:0:1:3 sdm 8:192 [active][ready]
\_ 5:0:0:3 sdu 65:64 [active][ready]
mpath1 (350002ac000140ae2) dm-10 3PARdata,VV
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]  <-- 500GB
\_ round-robin 0 [prio=0][active]
\_ 5:0:1:2 sdab 65:176 [active][ready]
\_ 3:0:0:2 sdd 8:48 [active][ready]
\_ 3:0:1:2 sdl 8:176 [active][ready]
\_ 5:0:0:2 sdt 65:48 [active][ready]
mpath6 (350002ac0000e0ae2) dm-15 3PARdata,VV
[size=333G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 5:0:1:7 sdag 66:0 [active][ready]
\_ 3:0:0:7 sdi 8:128 [active][ready]
\_ 3:0:1:7 sdq 65:0 [active][ready]
\_ 5:0:0:7 sdy 65:128 [active][ready]
mpath5 (350002ac0000f0ae2) dm-14 3PARdata,VV
[size=333G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 5:0:1:6 sdaf 65:240 [active][ready]
\_ 3:0:0:6 sdh 8:112 [active][ready]
\_ 3:0:1:6 sdp 8:240 [active][ready]
\_ 5:0:0:6 sdx 65:112 [active][ready]
mpath4 (350002ac000100ae2) dm-13 3PARdata,VV
[size=333G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 5:0:1:5 sdae 65:224 [active][ready]
\_ 3:0:0:5 sdg 8:96 [active][ready]
\_ 3:0:1:5 sdo 8:224 [active][ready]
\_ 5:0:0:5 sdw 65:96 [active][ready]
mpath3 (350002ac000160ae2) dm-12 3PARdata,VV
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]  <-- 500GB
\_ round-robin 0 [prio=0][active]
\_ 5:0:1:4 sdad 65:208 [active][ready]
\_ 3:0:0:4 sdf 8:80 [active][ready]
\_ 3:0:1:4 sdn 8:208 [active][ready]
\_ 5:0:0:4 sdv 65:80 [active][ready]
We are not seeing 2 of the LUNs (/dev/sdz1, /dev/sdaa1) via multipath, even though the underlying devices are present:
[root@mdc2pr002 ~]# fdisk -l /dev/sdz
Disk /dev/sdz: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdz1 1 65270 524281243+ 8e Linux LVM
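To confirm that these orphaned paths belong to the two missing 500GB LUNs, the WWID of each path can be compared against the multipath maps above (a quick check, assuming the RHEL 5 scsi_id syntax):
scsi_id -g -u -s /block/sdz    # prints the WWID of the path
scsi_id -g -u -s /block/sdaa   # neither WWID appears in the multipath -ll output above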
Solution:
We had a potential problem with these devices because the physical volumes were created on the raw SCSI devices instead of the multipath devices. LVM therefore sees each path to a LUN as a duplicate PV, and multipath cannot claim paths that LVM already holds open:
/dev/sdaa1 VolGroup01 lvm2 a- 499.99G 351.16G
/dev/sdz1  VolGroup01 lvm2 a- 499.99G 252.16G
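For comparison, a new LUN should be brought into LVM via its multipath partition, never the raw /dev/sdX1 path (mpath9 is a hypothetical name for illustration):
pvcreate /dev/mapper/mpath9p1              # PV on the multipath device, not /dev/sdX1
vgextend VolGroup02 /dev/mapper/mpath9p1   # then grow the VG with it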
#1. Step
We changed the LVM filter in /etc/lvm/lvm.conf from the default
#filter = [ "a/.*/" ]
to
filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sda$|", "a|/dev/sda1$|", "a|/dev/sda2$|", "r|.*|" ]
so that LVM scans only the multipath devices and the local disk /dev/sda, and rejects all other block devices.
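With the filter in place, the duplicate-PV warnings should disappear immediately, since the raw /dev/sdX1 paths are now rejected before scanning. A quick sanity check (no output expected):
pvs 2>&1 | grep "Found duplicate PV"   # should print nothing once the filter works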
#2. Step
We also enabled user-friendly names in /etc/multipath.conf (the directive belongs in the defaults section):
defaults {
        user_friendly_names yes
}
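To pick up the multipath change without waiting for the reboot, the daemon can be restarted and the maps rebuilt; maps still held open by LVM will refuse to flush, which is why the initrd rebuild and reboot below are still required:
service multipathd restart
multipath -F      # flush existing maps (fails for maps in use)
multipath -v2     # rebuild maps with the user-friendly names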
#3. Step
Rebuilding the initrd (RHEL 3, 4, 5). The lvm.conf filter and the multipath configuration are copied into the initrd, so it must be rebuilt for the changes to take effect at boot:
https://access.redhat.com/site/solutions/1958
[root@mdc2pr002 boot]# uname -r
2.6.18-238.el5
[root@mdc2pr002 boot]# mv /boot/initrd-2.6.18-238.el5.img /boot/initrd-2.6.18-238.el5.img_07302013
[root@mdc2pr002 boot]# mkinitrd -f -v /boot/initrd-2.6.18-238.el5.img 2.6.18-238.el5
[root@mdc2pr002 boot]# ls -lat /boot/
total 30051
-rw------- 1 root root 4113444 Jul 30 19:25 initrd-2.6.18-238.el5.img
drwxr-xr-x 4 root root 1024 Jul 30 19:25 .
drwxr-xr-x 34 root root 4096 Jul 30 17:21 ..
drwxr-xr-x 2 root root 1024 Mar 31 2012 grub
-rw------- 1 root root 4113141 Mar 31 2012 initrd-2.6.18-238.el5.img_07302013
-rw------- 1 root root 4113141 Mar 31 2012 initrd-2.6.18-238.el5.img.bak
#reboot
After the reboot, multipath user-friendly names are enabled and all physical volumes are detected on multipath devices:
[root@mdc2pr002 ~]# pvscan
PV /dev/mapper/mpath7p1 VG VolGroup01 lvm2 [499.99 GB / 252.16 GB free]
PV /dev/mapper/mpath0p1 VG VolGroup01 lvm2 [499.99 GB / 351.16 GB free]
PV /dev/mapper/mpath1p1 VG VolGroup02 lvm2 [499.99 GB / 14.56 GB free]
PV /dev/mapper/mpath2p1 VG VolGroup02 lvm2 [499.99 GB / 14.56 GB free]
PV /dev/mapper/mpath3p1 VG VolGroup02 lvm2 [499.99 GB / 14.56 GB free]
PV /dev/sda2 VG VolGroup00 lvm2 [136.00 GB / 45.00 GB free]
PV /dev/mapper/mpath5p1 lvm2 [333.00 GB]
PV /dev/mapper/mpath4p1 lvm2 [333.00 GB]
PV /dev/mapper/mpath6p1 lvm2 [333.00 GB]
But the logical volumes in VolGroup01 and VolGroup02 are still missing from /dev/mapper:
[root@mdc2pr002 ~]# ls /dev/mapper/
control mpath1 mpath2p1 mpath4 mpath5p1 mpath7 VolGroup00-LogVol00 VolGroup00-LogVol03 VolGroup00-oracleinstall
mpath0 mpath1p1 mpath3 mpath4p1 mpath6 mpath7p1 VolGroup00-LogVol01 VolGroup00-LogVol04
mpath0p1 mpath2 mpath3p1 mpath5 mpath6p1 mpath8 VolGroup00-LogVol02 VolGroup00-LogVol05
[root@mdc2pr002 host0]# mount -a
mount: special device /dev/VolGroup02/db4-stripe-default does not exist
mount: special device /dev/VolGroup02/db5-stripe-default does not exist
mount: special device /dev/VolGroup02/db6-stripe-default does not exist
mount: special device /dev/VolGroup02/db7-stripe-default does not exist
mount: special device /dev/VolGroup02/db8-stripe-default does not exist
[root@mdc2pr002 ~]# lvscan
inactive '/dev/VolGroup01/swap' [99.00 GB] inherit
inactive '/dev/VolGroup01/vmdb-stripe-default' [50.00 GB] inherit
inactive '/dev/VolGroup01/dbwork-stripe-default' [247.66 GB] inherit
inactive '/dev/VolGroup02/db4-stripe-default' [239.06 GB] inherit
inactive '/dev/VolGroup02/db5-stripe-default' [269.09 GB] inherit
inactive '/dev/VolGroup02/db6-stripe-default' [220.01 GB] inherit
inactive '/dev/VolGroup02/db7-stripe-default' [259.07 GB] inherit
inactive '/dev/VolGroup02/db8-stripe-default' [469.08 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol00' [4.00 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol03' [7.00 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol02' [10.00 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [10.22 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol05' [10.00 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol04' [10.00 GB] inherit
ACTIVE '/dev/VolGroup00/oracleinstall' [39.78 GB] inherit
We realized the logical volumes in these volume groups were in an inactive state.
#4. Step
Activate the volume groups:
#vgchange -a y VolGroup01
#vgchange -a y VolGroup02
[root@mdc2pr002 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup01" using metadata type lvm2
Found volume group "VolGroup02" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2
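After activation, lvscan is expected to report the VolGroup01 and VolGroup02 volumes as ACTIVE, and the mounts should go through:
lvscan | grep VolGroup02   # the db*-stripe-default volumes now show ACTIVE
mount -a                   # the /dev/VolGroup02/db* file systems mount without errors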
#5. Step
Resize and mount the database file systems - the task we originally set out to do.
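A minimal sketch of that extension, assuming ext3 (the LV name and the 50GB increment are illustrative; RHEL 5 can grow a mounted ext3 file system online):
lvextend -L +50G /dev/VolGroup02/db4-stripe-default   # needs free extents in the VG
resize2fs /dev/VolGroup02/db4-stripe-default          # grows ext3 to fill the LV, online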