Monday, November 10, 2014

umask setting using sshd_config on Solaris 10 for scp and sftp file transfers

I was looking to set up a custom umask for a specific user for the sftp and scp connection types.

1. sftp
2. scp
3. ssh hostname
4. ssh hostname program

The difference between 3. and 4. is that the former starts a login shell, which usually reads /etc/profile, while the latter doesn't.
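
A quick way to see the difference, assuming /etc/profile sets umask 0027 as described below (the 0022 is just a typical sshd default, not a value from this system):

$ ssh hostname          # point 3: starts a login shell, /etc/profile is read
$ umask                 # run on the remote host
0027
$ exit

$ ssh hostname umask    # point 4: command runs without the profile being read
0022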

In addition, by reading this post I became aware of the -u option that is present in newer versions of OpenSSH.

However, this doesn't work on Solaris 10.
I must also add that /etc/profile now includes umask 0027.

Going point by point:
sftp - Setting -u 0027 in sshd_config, as mentioned here, is not enough.
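
For reference, on OpenSSH builds whose sftp-server(8) supports -u (for example Solaris 11, but not the Solaris 10 sshd), the sshd_config line would look roughly like this; the sftp-server path varies by platform:

Subsystem sftp /usr/lib/ssh/sftp-server -u 0027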


It's quite easy to force environment variables in an SSH session, since /etc/profile, /etc/bash.bashrc, etc. are read. But when you launch commands with SSH without opening a session, these files are not parsed, so it gets harder to set the environment.


So it can be useful to know that /etc/environment is read by SSH as well as login.
The format is "VARIABLE=VALUE" for each line.

In my case, I needed to force TMPDIR to "/var/lib/gforge-dop/chroot/tmp" so I just put "TMPDIR=/var/lib/gforge-dop/chroot/tmp" in /etc/environment and it worked :)

The umask is not an environment variable; it is a property of the process and has to be set by a system call.
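
A small illustration of that point in a shell (the 0022 shown is just a typical default):

$ export UMASK=027   # exporting a variable does not change any process's umask
$ umask              # the umask builtin reports/sets the mask via the umask(2) system call
0022
$ umask 027          # affects this shell process and its children only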

---------------------------------------------------------------------------------------------------------
Solaris 11
http://docs.oracle.com/cd/E26502_01/html/E29042/ssh-config-4.html#REFMAN4ssh-config-4

Solaris 10
http://docs.oracle.com/cd/E26505_01/html/816-5174/sshd-config-4.html#REFMAN4sshd-config-4
---------------------------------------------------------------------------------------------------------

Hello Pankaj,

As per our conversation, it is not possible to set the umask per user in S10 with sftp.
This feature is only available in S11.

Here is the RFE/bug filed:
6803109: Add option for sftp/scp server to set a default umask

It was addressed in S11 and not in S10.

The workaround is for the customer to transfer the files and then change the permissions on the sftp server, or to upgrade the system to S11.

Regards,

Oracle

Solaris and Network Domain, Global Systems Support
Phone: +1 800-223-1711
Oracle Global Customer Services
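
A possible form of that workaround on the Solaris 10 server is to fix the permissions after each transfer, for example from cron (the /export/incoming path and the 640/750 modes below are placeholders, not from the actual case):

find /export/incoming -type f -exec chmod 640 {} \;
find /export/incoming -type d -exec chmod 750 {} \;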

Experience with Red Hat Storage (RHS 2.1.1)

docs
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/

Installation

mount -o loop /home/pkg/RHSS-2.1-20131122.0-RHS-x86_64-DVD1.iso /mnt/RHS

Redhat storage version:

redhat-storage-server-2.1.1.0-6.el6rhs.noarch

Nodes:
11.16.153.226 pgvr1126 RHS POC PG 1/23/2014

11.16.153.227 pgvr1127 RHS POC PG 1/23/2014

11.16.153.228 pgvr1128 RHS POC PG 1/23/2014

11.16.153.229 pgvr1129 RHS POC PG 1/23/2014

XFS - Format the back-end file system for the GlusterFS bricks using XFS. XFS journals metadata, resulting in faster crash recovery.

The XFS file system can also be defragmented and expanded while mounted and active.

There are 3 different ways of installing RHS: ISO, PXE server, or Red Hat Satellite Server.

Created a VM under RHS POC with 8GB/16G RAM

booted with RHS iso RHSS-2.1-20131122.0-RHS-x86_64-DVD1.iso

--- "Install and upgrade" doesn't work because of driver issues

--- Installed with the basic driver (second option) instead

Starting and Stopping the glusterd service

# service glusterd start/stop
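
To have glusterd start automatically at boot as well (a standard RHEL 6 sysvinit step, not part of the original notes):

# chkconfig glusterd on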

Node 1

[root@pgvr1126 ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 6.4 (Santiago)

[root@pgvr1126 ~]# pvcreate /dev/sdb

Physical volume "/dev/sdb" successfully created

[root@pgvr1126 ~]# vgcreate rhsvg01 /dev/sdb
Volume group "rhsvg01" successfully created

[root@pgvr1126 ~]# lvcreate -n rhslv01 -L 9G rhsvg01

Logical volume "rhslv01" created

[root@pgvr1126 ~]# mkfs.xfs -i size=512 /dev/mapper/rhsvg01-rhslv01

meta-data=/dev/mapper/rhsvg01-rhslv01 isize=512    agcount=4, agsize=589824 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2359296, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@pgvr1126 ~]# pvs

 PV         VG            Fmt  Attr PSize  PFree

  /dev/sda2  vg_pgvr1126 lvm2 a--  15.51g       0

  /dev/sdb   rhsvg01       lvm2 a--  10.00g 1020.00m

[root@pgvr1126 ~]# mkdir -p /gluster/xfs

[root@pgvr1126 ~]# vi /etc/fstab

/dev/mapper/rhsvg01-rhslv01  /gluster/xfs    xfs    defaults,inode64,noatime   0 0

[root@pgvr1126 ~]# mount -a

[root@pgvr1126 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/vg_pgvr1126-lv_root

                      7.6G  1.8G  5.5G  25% /

tmpfs                 4.0G     0  4.0G   0% /dev/shm

/dev/sda1             485M   33M  427M   8% /boot

/dev/mapper/rhsvg01-rhslv01

                      9.0G   33M  9.0G   1% /gluster/xfs

[root@pgvr1126 ~]# vi /etc/hosts

11.16.153.226    pgvr1126

11.16.153.227    pgvr1127

11.16.153.228    pgvr1128

11.16.153.229    pgvr1129

Gluster FS commands:

[root@pgvr1126 ~]# gluster peer probe pgvr1127

peer probe: success.

[root@pgvr1126 ~]# gluster peer status

Number of Peers: 1

Hostname: pgvr1127

Uuid: f216c593-e358-4842-8c8e-c51e3152af63

State: Peer in Cluster (Connected)

[root@pgvr1126 ~]# gluster volume create rhs01 replica 2 pgvr1126:/gluster/xfs/rhs01 pgvr1127:/gluster/xfs/rhs01

volume create: rhs01: success: please start the volume to access data

[root@pgvr1126 ~]# gluster volume start rhs01

volume start: rhs01: success

[root@pgvr1126 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/vg_pgvr1126-lv_root

                      7.6G  1.8G  5.5G  25% /

tmpfs                 4.0G     0  4.0G   0% /dev/shm

/dev/sda1             485M   33M  427M   8% /boot

/dev/mapper/rhsvg01-rhslv01

                      9.0G   33M  9.0G   1% /gluster/xfs

[root@pgvr1126 ~]# gluster volume info

Volume Name: rhs01

Type: Replicate

Volume ID: 13a73b40-dfce-4b79-b047-c7bffaa2a879

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: pgvr1126:/gluster/xfs/rhs01

Brick2: pgvr1127:/gluster/xfs/rhs01

[root@pgvr1126 ~]# cd /gluster/xfs/rhs01

[root@pgvr1126 rhs01]# ls -l

total 0

drwxr-xr-x 2 root root 6 Feb 11 17:11 pankaj

Node 2

[root@pgvr1127 ~]# pvcreate /dev/sdb

  Physical volume "/dev/sdb" successfully created

[root@pgvr1127 ~]#  vgcreate rhsvg01 /dev/sdb

  Volume group "rhsvg01" successfully created

[root@pgvr1127 ~]# lvcreate -n rhslv01 -L 9G rhsvg01

  Logical volume "rhslv01" created

[root@pgvr1127 ~]#  mkdir -p /gluster/xfs

[root@pgvr1127 ~]#  mkfs.xfs -i size=512 /dev/mapper/rhsvg01-rhslv01

meta-data=/dev/mapper/rhsvg01-rhslv01 isize=512    agcount=4, agsize=589824 blks

         =                       sectsz=512   attr=2, projid32bit=0

data     =                       bsize=4096   blocks=2359296, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0

log      =internal log           bsize=4096   blocks=2560, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@pgvr1127 ~]# vi /etc/fstab

[root@pgvr1127 ~]# mount -a

[root@pgvr1127 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/vg_pgvr1127-lv_root

                       12G  1.8G  9.1G  17% /

tmpfs                1004M     0 1004M   0% /dev/shm

/dev/sda1             485M   33M  427M   8% /boot

/dev/mapper/rhsvg01-rhslv01

                      9.0G   33M  9.0G   1% /gluster/xfs

[root@pgvr1127 ~]# vi /etc/hosts

[root@pgvr1127 ~]# gluster peer probe pgvr1126

peer probe: success.

[root@pgvr1127 ~]# gluster peer status

Number of Peers: 1

Hostname: pgvr1126

Uuid: 29a69702-0ffd-42f7-9105-474098d7be30

State: Peer in Cluster (Connected)

[root@pgvr1127 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/vg_pgvr1127-lv_root

                       12G  1.8G  9.1G  17% /

tmpfs                1004M     0 1004M   0% /dev/shm

/dev/sda1             485M   33M  427M   8% /boot

/dev/mapper/rhsvg01-rhslv01

                      9.0G   33M  9.0G   1% /gluster/xfs
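
To use the volume from a client, it would be mounted with the GlusterFS native (FUSE) client; a sketch, assuming the glusterfs-fuse package is installed on the client and the node names resolve:

# mkdir -p /mnt/rhs01
# mount -t glusterfs pgvr1126:/rhs01 /mnt/rhs01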

File system block size 1024 limitations

A user was trying to copy a 36 GB file and the transfer terminated after about 17 GB. She tried a couple of times and it terminated at the same point each time. We reproduced the issue and confirmed that the file size does not grow past about 17 GB even though the scp from the remote host keeps writing to the disk. That cutoff is consistent with the 16 GiB "File Size, Block Maps" limit for a 1 KiB block size shown in the ext4 table below.
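
A quick way to confirm a per-file ceiling on the affected mount independent of scp (a sketch; /data stands in for the affected mount point, which isn't named in these notes):

dd if=/dev/zero of=/data/sizetest bs=1M count=20480   # ~20 GB; dd stops early if a per-file limit is hit
ls -lh /data/sizetest                                 # see how far it actually got
rm /data/sizetest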

[root@mdc2vr6009 data]# blockdev --getbsz /dev/VolGroup01/LogVol11
4096

Please confirm the block size
[root@mdc2vr6009 data]# blockdev --getbsz /dev/VolGroup02/data
1024

The default block size is 4096 bytes. To check the block size:
blockdev --report
dumpe2fs /dev/sdb3 | grep -i 'Block size'

https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#Blocks
By default a filesystem can contain 2^32 blocks; if the '64bit' feature is enabled, then a filesystem can have 2^64 blocks.
File System Maximums

32-bit mode:

  Item                          1KiB block   2KiB block    4KiB block     64KiB block
  Blocks                        2^32         2^32          2^32           2^32
  Inodes                        2^32         2^32          2^32           2^32
  File System Size              4TiB         8TiB          16TiB          256PiB
  Blocks Per Block Group        8,192        16,384        32,768         524,288
  Inodes Per Block Group        8,192        16,384        32,768         524,288
  Block Group Size              8MiB         32MiB         128MiB         32GiB
  Blocks Per File, Extents      2^32         2^32          2^32           2^32
  Blocks Per File, Block Maps   16,843,020   134,480,396   1,074,791,436  4,398,314,962,956
  File Size, Extents            4TiB         8TiB          16TiB          256TiB
  File Size, Block Maps         16GiB        256GiB        4TiB           256PiB

64-bit mode:

  Item                          1KiB block   2KiB block    4KiB block     64KiB block
  Blocks                        2^64         2^64          2^64           2^64
  Inodes                        2^32         2^32          2^32           2^32
  File System Size              16ZiB        32ZiB         64ZiB          1YiB
  Blocks Per Block Group        8,192        16,384        32,768         524,288
  Inodes Per Block Group        8,192        16,384        32,768         524,288
  Block Group Size              8MiB         32MiB         128MiB         32GiB
  Blocks Per File, Extents      2^32         2^32          2^32           2^32
  Blocks Per File, Block Maps   16,843,020   134,480,396   1,074,791,436  4,398,314,962,956
  File Size, Extents            4TiB         8TiB          16TiB          256TiB
  File Size, Block Maps         16GiB        256GiB        4TiB           256PiB
Note: Files not using extents (i.e. files using block maps) must be placed in the first 2^32 blocks of a filesystem.




[root@mdc1brc0107 ~]# blockdev --getbsz /dev/mapper/VolGroup01-backup_lv
4096


[root@mdc2vr6009 /]# lvcreate -n data --size 290G VolGroup02
Logical volume "data" created
[root@mdc2vr6009 /]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
LogVol10 VolGroup01 -wi-ao 4.00g
LogVol11 VolGroup01 -wi-ao 6.00g
LogVol12 VolGroup01 -wi-ao 30.00g
LogVol13 VolGroup01 -wi-ao 6.00g
LogVol14 VolGroup01 -wi-ao 6.00g
LogVol15 VolGroup01 -wi-ao 30.00g
swap VolGroup01 -wi-ao 12.00g
data VolGroup02 -wi-a- 290.00g

[root@mdc2vr6009 /]# mkfs
mkfs mkfs.ext2 mkfs.ext4 mkfs.msdos
mkfs.cramfs mkfs.ext3 mkfs.ext4dev mkfs.vfat
[root@mdc2vr6009 /]# mkfs.ext4 /dev/VolGroup02/data
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
19005440 inodes, 76021760 blocks
3801088 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
2320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@mdc2vr6009 /]# mount /opt/iwov_data

[root@mdc2vr6009 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup01-LogVol10
4.0G 522M 3.3G 14% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda1 248M 36M 200M 16% /boot
/dev/mapper/VolGroup01-LogVol14
6.0G 87M 5.6G 2% /home
/dev/mapper/VolGroup01-LogVol12
30G 14G 15G 48% /opt
/dev/mapper/VolGroup01-LogVol11
6.0G 1.8G 3.9G 31% /usr
/dev/mapper/VolGroup01-LogVol13
6.0G 392M 5.3G 7% /var
/dev/mapper/VolGroup01-LogVol15
30G 3.9G 25G 14% /www
tmpfs 8.0G 3.3M 8.0G 1% /tmp
none 30G 14G 15G 48% /iwmnt/default
none 30G 14G 15G 48% /iwmnt/iwadmin
/dev/mapper/VolGroup02-data
286G 191M 271G 1% /opt/iwov_data

[root@mdc2vr6009 /]# ls -l /opt/iwov_data
total 16
drwx------ 2 root root 16384 Mar 24 17:13 lost+found

[root@mdc2vr6009 /]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup01 lvm2 a-- 99.72g 5.72g
/dev/sdb VolGroup02 lvm2 a-- 300.00g 10.00g

[root@mdc2vr6009 /]# cd /opt/iwov_data
[root@mdc2vr6009 iwov_data]# touch pankaj
[root@mdc2vr6009 iwov_data]# ls -l
total 16
drwx------ 2 root root 16384 Mar 24 17:13 lost+found
-rw-r--r-- 1 root root 0 Mar 24 17:15 pankaj

[root@mdc2vr6009 iwov_data]# cat > deleteit
^C
[root@mdc2vr6009 iwov_data]# ls -l
total 16
-rw-r--r-- 1 root root 0 Mar 24 17:16 deleteit
drwx------ 2 root root 16384 Mar 24 17:13 lost+found
-rw-r--r-- 1 root root 0 Mar 24 17:15 pankaj



[root@mdc2vr6009 iwov_data]# blockdev --getbsz /dev/VolGroup02/data
4096

XFS: mkdir command gives: "mkdir: cannot create directory '###': No space left on device"

The fix is to remount the file system with the inode64 option:


umount /pgdata
mount -o inode64 /pgdata

edit /etc/fstab as below
/dev/vg_pgdata/lv_pgdata    /pgdata     xfs           inode64             1 0
By default in RHEL5 and RHEL6, xfs will only create inodes in disk blocks which result in inode numbers less than 2^32. If all of these low disk blocks are full, no more files can be created. Mounting with -o inode64 allows inodes to be created anywhere on disk. However, some 32-bit applications cannot handle 64-bit inode numbers.

RHEL7 will default to allowing 64-bit inode numbers.

Another possible cause is severely fragmented free space. XFS allocates inodes in contiguous clusters of disk blocks; if no sufficiently large regions of free space are available, no more inodes can be created.
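
Two quick checks for these two conditions (a sketch, reusing the device and mount point from the fstab line above):

df -i /pgdata                                   # inode counts; often still shows free inodes in this situation
xfs_db -r -c freesp /dev/vg_pgdata/lv_pgdata    # read-only histogram of free space extent sizes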