Friday, December 26, 2008

Automating ftp on unix using .netrc

Step One: Creating .netrc in your home directory
Create a .netrc file in your home directory and set permissions on it so that it is unreadable by everybody except the owner:

#chmod 600 .netrc
#ls -al .netrc
-rw------- 1 pankaj staff 212 Aug 21 11:14 .netrc

Step Two: Contents of .netrc
The .netrc has two parts: machine definitions and macros.

Machine definitions:
The first part of the .netrc is filled up with server information:

machine ftp.world.com
login pankaj
password world4me

machine myownmachine
login username
password password

This is as simple as it looks: you connect to these servers with these usernames and passwords.
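
A catch-all entry is also supported. As a hedged sketch, a final "default" entry (it must come after all the machine entries) can cover anonymous logins; the e-mail address below is only a placeholder:

default
login anonymous
password user@example.com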

Now, the second part.

Macro definitions:
This part of the .netrc consists of macros which can be used to perform automated tasks.

macdef mytest
cd /home/pankaj
bin
put filename1.tar.gz
quit

macdef dailyupload
cd /pub/tests
bin
put daily-$1.tar.gz
quit

Keep in mind that there should be an empty line after the last macdef statement. If you don't do this, ftp will complain about it.

The final .netrc file looks like this now.
machine ftp.world.com
login pankaj
password world4me

macdef mytest
cd /home/pankaj
bin
put filename1.tar.gz
quit

machine myownmachine
login username
password password


macdef dailyupload
cd /pub/tests
bin
put daily-$1.tar.gz
quit

Step Three: Usage of the .netrc
Macros can be called from inside ftp or from the command line.

#ftp myownmachine
ftp: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
220 myownmachine FTP server (Version 6.00LS) ready.
331 Password required for username.
230 User username logged in.
Remote system type is UNIX.
Using binary mode to transfer files.

ftp> $ mytest
cd /home/pankaj
250 CWD command successful.
bin
200 Type set to I.
put filename1.tar.gz
local: filename1.tar.gz remote: filename1.tar.gz
150 Opening BINARY mode data connection for 'filename1.tar.gz'
100% |**************************************************| 1103 00:00 ETA
226 Transfer complete.
1103 bytes sent in 0.01 seconds (215.00 KB/s)
quit
221 Goodbye.

...or from the command line:

#echo "\$ mytest" | ftp myownmachine
ftp: connect to address ::1: Connection refused
Trying 127.0.0.1...
100% |**************************************************| 1103 00:00 ETA

There is not much output here because ftp is not attached to a terminal. If you use ftp -v, there will be more output.

An example with arguments:

#echo "\$ dailyupload `date +'%Y%m%d'`"
$ dailyupload 20010827

#echo "\$ dailyupload `date +'%Y%m%d'`" | ftp myownmachine
ftp: connect to address ::1: Connection refused
Trying 127.0.0.1...
100% |**************************************************| 1103 00:00 ETA

It will upload the file daily-20010827.tar.gz.
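
To run this unattended, the command-line form can be dropped into a small shell script and scheduled from cron. A minimal sketch follows; the script name, log path, and schedule are only examples, and it assumes the daily-YYYYMMDD.tar.gz archive already exists in the directory the script runs from:

#!/bin/sh
# dailyupload.sh -- push today's archive using the dailyupload macro from ~/.netrc
echo "\$ dailyupload `date +'%Y%m%d'`" | ftp -v myownmachine >> /tmp/dailyupload.log 2>&1

A matching crontab entry to run it every night at 23:30 could look like:

30 23 * * * /home/pankaj/dailyupload.sh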

Tuesday, December 23, 2008

Setting up Solaris RAID 5 metadevice using solstice vol mgr

There are 7 available devices of about 926GB each:
3. c3t5006048AD5312D07d34
/pci@0/pci@0/pci@8/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048ad5312d07,22
4. c3t5006048AD5312D07d35
/pci@0/pci@0/pci@8/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048ad5312d07,23
5. c3t5006048AD5312D07d36
/pci@0/pci@0/pci@8/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048ad5312d07,24
6. c3t5006048AD5312D07d37
/pci@0/pci@0/pci@8/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048ad5312d07,25
7. c3t5006048AD5312D07d38
/pci@0/pci@0/pci@8/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048ad5312d07,26
8. c3t5006048AD5312D07d39
/pci@0/pci@0/pci@8/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048ad5312d07,27
9. c3t5006048AD5312D07d40
/pci@0/pci@0/pci@8/pci@0/pci@a/SUNW,emlxs@0,1/fp@0,0/ssd@w5006048ad5312d07,28


Each device looks like this:
partition> p
Current partition table (original):
Total disk cylinders available: 65533 + 2 (reserved cylinders)

Part         Tag    Flag     Cylinders         Size            Blocks
  0         root     wm       0 -     8      130.39MB    (9/0/0)        267030
  1         swap     wu       9 -    17      130.39MB    (9/0/0)        267030
  2       backup     wu       0 - 65532      927.15GB    (65533/0/0) 1944364110
  3   unassigned     wm       0                  0       (0/0/0)             0
  4   unassigned     wm       0                  0       (0/0/0)             0
  5   unassigned     wm       0                  0       (0/0/0)             0
  6          usr     wm      18 - 65532      926.89GB    (65515/0/0) 1943830050
  7   unassigned     wm       0                  0       (0/0/0)             0


Slice 0 (130MB) will hold the metadevice state database (metadb) replicas, and slice 6 (926GB) will be used for the RAID-5 metadevice.


Step One: Creating the metadevice state database
metadb -a -f -c2 c3t5006048AD5312D07d34s0 c3t5006048AD5312D07d35s0 c3t5006048AD5312D07d36s0 c3t5006048AD5312D07d37s0 c3t5006048AD5312D07d38s0 c3t5006048AD5312D07d39s0 c3t5006048AD5312D07d40s0

This creates state database replicas on all 7 slices listed in the command. These databases contain state and configuration information for the metadevices.
The -a switch tells metadb to attach a new database device, and modifies /etc/system so that the system reattaches the devices at boot time.
The -f switch is used to create the initial state database.
The -c switch sets the number of database replicas to create on each of the specified slices (here, two per slice).

# List the metadb replicas just created:
bash-3.00# metadb -i
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c3t5006048AD5312D07d34s0
     a        u         8208            8192            /dev/dsk/c3t5006048AD5312D07d34s0
     a        u         16              8192            /dev/dsk/c3t5006048AD5312D07d35s0
     a        u         8208            8192            /dev/dsk/c3t5006048AD5312D07d35s0
     a        u         16              8192            /dev/dsk/c3t5006048AD5312D07d36s0
     a        u         8208            8192            /dev/dsk/c3t5006048AD5312D07d36s0
     a        u         16              8192            /dev/dsk/c3t5006048AD5312D07d37s0
     a        u         8208            8192            /dev/dsk/c3t5006048AD5312D07d37s0
     a        u         16              8192            /dev/dsk/c3t5006048AD5312D07d38s0
     a        u         8208            8192            /dev/dsk/c3t5006048AD5312D07d38s0
     a        u         16              8192            /dev/dsk/c3t5006048AD5312D07d39s0
     a        u         8208            8192            /dev/dsk/c3t5006048AD5312D07d39s0
     a        u         16              8192            /dev/dsk/c3t5006048AD5312D07d40s0
     a        u         8208            8192            /dev/dsk/c3t5006048AD5312D07d40s0

Step Two: Creating the RAID-5 metadevice
We need to define the RAID-5 metadevice by name, the slices participating in it, and the interlace (stripe unit) size:

metainit d6 -r c3t5006048AD5312D07d34s6 c3t5006048AD5312D07d35s6 c3t5006048AD5312D07d36s6 c3t5006048AD5312D07d37s6 c3t5006048AD5312D07d38s6 c3t5006048AD5312D07d39s6 c3t5006048AD5312D07d40s6 -i 65k

>>output
d6: RAID is setup


In this example, d6 is the name of the volume, -r designates this metadevice as a RAID-5 metadevice, and the parameters are the slices that participate in the metadevice. In this case, we’ve chosen slice 6 on each disk. One could choose other slices, so long as each slice is the same size. -i 65k defines a stripe interlace size of 65KB (the default is 16KB), which gives us a stripe width of 455KB (7 disks * 65KB per disk). Others have empirically determined that 256KB-512KB is the optimum stripe width for a general-purpose RAID-5 volume, because most writes will fit into a single stripe, minimizing the number of reads and writes that must occur for parity calculations[1]. If you know your average file size, you should tailor the stripe interlace size accordingly.

Note that if your stripe size is a power of 2, there’s a good chance that all of your superblocks and inodes will end up on the same physical disk, which will negatively impact performance. That’s why the example uses 65KB as an interlace size instead of 64KB.
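
A quick sanity check of those numbers (a sketch using bc, assuming 512-byte disk blocks):

# echo "65 * 1024 / 512" | bc
130
# echo "7 * 65" | bc
455

130 is the interlace expressed in disk blocks (it matches the "Interlace: 130 blocks" reported by metastat below), and 455KB is the resulting stripe width across the 7 columns.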


Step Three: Starting Volume Manager
bash-3.00# sh /etc/init.d/volmgt start
>>output
volume management starting.

To be able to use our metadevice as if it were a physical device, we have to start the logical volume manager (our RAID-5 metadevice is a logical volume).


Now, you must wait until the metadevice is initialized before proceeding to create a new filesystem on the metadevice.
You can watch the progress of metadevice initialization via repeated invocations of the command
#metastat -i

>>output while initializing
d6: RAID
State: Initializing
Initialization in progress: 1.6% done
Interlace: 130 blocks
Size: 11662950630 blocks (5.4 TB)
Original device:
Size: 11662971840 blocks (5.4 TB)
Device Start Block Dbase State Reloc Hot Spare
c3t5006048AD5312D07d34s6 1310 No Initializing Yes
c3t5006048AD5312D07d35s6 1310 No Initializing Yes
c3t5006048AD5312D07d36s6 1310 No Initializing Yes
c3t5006048AD5312D07d37s6 1310 No Initializing Yes
c3t5006048AD5312D07d38s6 1310 No Initializing Yes
c3t5006048AD5312D07d39s6 1310 No Initializing Yes
c3t5006048AD5312D07d40s6 1310 No Initializing Yes


>>output after initialization finished
d6: RAID
State: Okay
Interlace: 130 blocks
Size: 11662950630 blocks (5.4 TB)
Original device:
Size: 11662971840 blocks (5.4 TB)
Device Start Block Dbase State Reloc Hot Spare
c3t5006048AD5312D07d34s6 1310 No Okay Yes
c3t5006048AD5312D07d35s6 1310 No Okay Yes
c3t5006048AD5312D07d36s6 1310 No Okay Yes
c3t5006048AD5312D07d37s6 1310 No Okay Yes
c3t5006048AD5312D07d38s6 1310 No Okay Yes
c3t5006048AD5312D07d39s6 1310 No Okay Yes
c3t5006048AD5312D07d40s6 1310 No Okay Yes
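
Rather than re-running metastat by hand, the initialization can be polled with a simple loop (a sketch):

# poll the RAID-5 state once a minute until initialization completes
while :
do
        metastat d6 | egrep -i 'state|progress'
        sleep 60
done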


Step Four: Creating a filesystem on the metadevice
Here -c sets the cylinders per cylinder group, -i the bytes per inode, -m the reserved free-space percentage, and -C the maxcontig value for the new UFS filesystem.
newfs -c 256 -i 8192 -m 8 -C 65 /dev/md/rdsk/d6
>>output
/dev/md/rdsk/d6: Unable to find Media type. Proceeding with system determined parameters.
Warning: cylinders/group is obsolete for this device and will be ignored.
newfs: construct a new file system /dev/md/rdsk/d6: (y/n)? y
Warning: 1824 sector(s) in last cylinder unallocated
/dev/md/rdsk/d6: 11662950624 sectors in 1898267 cylinders of 48 tracks, 128 sectors
5694800.0MB in 13275 cyl groups (143 c/g, 429.00MB/g, 448 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 878752, 1757472, 2636192, 3514912, 4393632, 5272352, 6151072, 7029792,
7908512,
Initializing cylinder groups:
...............................................................................
...............................................................................
...............................................................................
............................
super-block backups for last 10 cylinder groups at:
11654525088, 11655403808, 11656282528, 11657161248, 11658039968, 11658918688,
11659797408, 11660676128, 11661554848, 11662433568



Step Five: Mounting the new filesystem
bash-3.00# mkdir /newmount
bash-3.00# mount /dev/md/dsk/d6 /newmount
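
To have the filesystem mounted automatically at boot, an /etc/vfstab entry along these lines could be added (a sketch, using the d6 metadevice and /newmount mount point from above):

/dev/md/dsk/d6  /dev/md/rdsk/d6  /newmount       ufs     2       yes     -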

Wednesday, December 3, 2008

NFS mount from Redhat Linux(NFSv3) to Solaris 10(NFSv4)

On Linux
Edit /etc/exports
/home 11.12.13.414(rw)  -> the directory to export and the client host/IP allowed to mount it

The /etc/exports file controls which file systems are exported to remote hosts and specifies options.

#exportfs -r
#service nfs start/reload/restart
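
To verify the export took effect on the Linux side (a sketch):

#showmount -e localhost  -> lists what this host currently exports
#exportfs -v             -> shows the active exports and their effective options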


On the Solaris side
#dfshares linuxservername -> lists all shares exported by the Linux server

Manually mount:
#mount -F nfs -o vers=3 linuxservername:/home /data

Or edit /etc/vfstab to make the mount persistent (note the vers=3 option):
linuxservername:/home - /data nfs - yes soft,bg,vers=3
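
Once the vfstab entry is in place, the share can be mounted by its mount point alone (assuming the /data directory exists):

#mkdir -p /data
#mount /data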
-----------------------------------------------------------------------------------

Error:
-bash-3.00# mount servername:/home /data
nfs mount: mount: /data: Not owner

Solution:
This happens because you are trying to mount an NFSv3 share with an NFSv4 client. It typically occurs when you share an NFS filesystem from a Red Hat Linux host to a Solaris 10 host.

Use the option vers=3 to avoid this problem, or configure NFSv4 on the Linux host.
#mount -F nfs -o vers=3 server:/data /mnt/nfs


Also, check /etc/default/nfs on the Solaris side to make sure the client and server maximum versions match the NFS version you plan to use, for example:
NFS_SERVER_VERSMAX=3 (on the server) or NFS_CLIENT_VERSMAX=3 (on the client)
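
Changes to /etc/default/nfs take effect after the NFS services are restarted; under Solaris 10 SMF that would look roughly like this (a sketch):

#svcadm restart svc:/network/nfs/client
#svcadm restart svc:/network/nfs/server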

Try not to mix NFS versions; be consistent about using version 3 or version 4.

Currently, there are three versions of NFS. NFS version 2 (NFSv2) is older and is widely supported. NFS version 3 (NFSv3) has more features, including variable size file handling and better error reporting, but is not fully compatible with NFSv2 clients. NFS version 4 (NFSv4) includes Kerberos security, works through firewalls and on the Internet, no longer requires portmapper, supports ACLs, and utilizes stateful operations. Red Hat Enterprise Linux supports NFSv2, NFSv3, and NFSv4 clients, and when mounting a file system via NFS, Red Hat Enterprise Linux uses NFSv4 by default, if the server supports it.

All versions of NFS can use Transmission Control Protocol (TCP) running over an IP network, with NFSv4 requiring it. NFSv2 and NFSv3 can use the User Datagram Protocol (UDP) running over an IP network to provide a stateless network connection between the client and server.

When using NFSv2 or NFSv3 with UDP, the stateless UDP connection under normal conditions minimizes network traffic, as the NFS server sends the client a cookie after the client is authorized to access the shared volume. This cookie is a random value stored on the server's side and is passed along with RPC requests from the client. The NFS server can be restarted without affecting the clients and the cookie remains intact. However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to saturate the network with requests for the server. For this reason, TCP is the preferred protocol when connecting to an NFS server.

When using NFSv4, a stateful connection is made, and Kerberos user and group authentication with various security levels is optionally available. NFSv4 has no interaction with portmapper, rpc.mountd, rpc.lockd, and rpc.statd, since they have been rolled into the kernel. NFSv4 listens on the well-known TCP port 2049.

Some useful commands:
---------------------
#nfsstat -a
#rpcinfo solaris-nfs-server