Monday, December 17, 2012

Is there a way to find dd status on Solaris?

Well, this all started when we had to move around 24TB of data between two data centers, and we decided to use the dd command. I guess it is much easier on Linux, where you have ways to get a progress bar etc. and dd's standard output gives you how much has finished. Unfortunately I couldn't find anything that fancy on Solaris.

If you are using a dd command like the one below and you want to find out when it will finish, what options do you have?
dd if=/share/disks/disk-402.img | ssh root@server "dd of=/dev/dsk/c6t50002AC000AE0AE2d0s0"
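If pv happens to be installed on the sending host (it usually is not on stock Solaris, so treat this as an assumption), it can be spliced into the same pipeline to get a live progress bar and ETA; the -s size here is just the known size of the transfer:

```shell
# Hypothetical variant of the pipeline above: pv sits between dd and ssh
# and prints throughput, percentage and ETA on stderr as data flows through.
dd if=/share/disks/disk-402.img bs=1024k \
  | pv -s 24t \
  | ssh root@server "dd of=/dev/dsk/c6t50002AC000AE0AE2d0s0 bs=1024k"
```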


One option, which I found on the net, is the one below, but it didn't work for me. It does send user signal 1 to the process, but on Solaris it kills the dd process; that is, dd takes the standard/default action for the signal instead of handling USR1.

Printing dd status

I recently used dd to zero out some hard drives on my Fedora Core workstation, and found that this operation takes a good deal of time (even when large blocksizes are used, it still takes a while). The dd utility doesn’t report status information by default, but when fed a SIGUSR1 signal it will dump the status of the current operation:
dd if=/dev/zero of=/dev/hda1 bs=512 &
kill -SIGUSR1 1749
1038465+0 records in
1038465+0 records out
531694080 bytes (532 MB) copied, 11.6338 seconds, 45.7 MB/s
watch -n 10 kill -USR1 1749
It still amazes me how much stuff I have left to learn about the utilities I use daily.



The other option was to see whether, from the storage side, there is any way to tell how many blocks have been written so far and how long it will take to write the whole 24TB to that LUN. The problem with a fully provisioned LUN is that the storage array allocates all the blocks to the LUN as soon as it is provisioned, so you cannot see what has actually been written underneath; there may be a way to see this if it is thin provisioned.



The easiest option is to use iostat, see how fast you are writing to the device, and do the math.

-bash-3.00#  iostat -xnmMpz 1
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    1.0    0.0    0.0  0.0  0.0    0.0    5.5   0   1 c1t0d0
    0.0    1.0    0.0    0.0  0.0  0.0    0.0    5.5   0   1 c1t0d0s0 (/)
  982.5  982.5    7.7    7.7  0.0  1.1    0.0    0.5   1  62 c6t50002AC000B10AE2d0
  982.5  982.5    7.7    7.7  0.0  1.1    0.0    0.5   1  62 c6t50002AC000B10AE2d0s0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  989.6  989.6    7.7    7.7  0.0  1.1    0.0    0.5   1  61 c6t50002AC000B10AE2d0
  989.6  989.6    7.7    7.7  0.0  1.1    0.0    0.5   1  61 c6t50002AC000B10AE2d0s0


As we can see here, we are writing at about 7.7MB/s.
At that rate, 1TB should take around 36 hrs, and 24TB around 865 hrs.