Tested the conversion with RHEL 8 and Amazon Linux 2, from m4 => m5 and from r4 => r5.
As of now, the following instances are based on the Nitro system:
A1, C5, C5d, C5n, G4, I3en, Inf1, M5, M5a, M5ad, M5d, M5dn, M5n, p3dn.24xlarge, R5, R5a, R5ad, R5d, R5dn, R5n, T3, T3a, and z1d.
Before changing your instance to a Nitro-based system, make sure:
1. The Elastic Network Adapter (ENA) driver is installed and enabled for the instance (a quick AWS CLI check is sketched after this list).
2. The NVMe driver is installed on the instance and is loaded in the initramfs image of the instance.
3. The file systems in /etc/fstab are mounted using UUID or label.
EBS volumes as seen on the Xen-based instance (before the change):
[root@pod ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  200G  0 disk
├─xvda1 202:1    0    1M  0 part
└─xvda2 202:2    0  200G  0 part /
xvdd    202:48   0  500G  0 disk /data
xvde    202:80   0  500G  0 disk /data/logs
xvdf    202:160  0  500G  0 disk /data/active
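Before changing the instance type, it is worth capturing this device-to-mountpoint mapping together with the file system UUIDs, since the xvd* names go away after the move to NVMe. A one-liner sketch (the output file name is an arbitrary choice):

# Record the current device/UUID/mountpoint layout for later reference
lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT > /root/pre-nitro-disk-map.txt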
NVMe EBS volumes as seen on the Nitro-based instance (after the change):
[root@pod ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1     259:8    0  200G  0 disk
├─nvme0n1p1 259:9    0    1M  0 part
└─nvme0n1p2 259:10   0  200G  0 part /
nvme1n1     259:3    0  500G  0 disk /data
nvme2n1     259:4    0  500G  0 disk /data/logs
nvme3n1     259:2    0  500G  0 disk /data/active
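Note that the NVMe minor numbers above do not follow the attachment order (nvme3n1 is 259:2 while nvme1n1 is 259:3), so the kernel device names cannot be relied on to stay put, which is exactly why the fstab should mount by UUID or label. As a quick cross-check, lsblk can also print the backing EBS volume ID, which Nitro exposes as the device serial (assuming the lsblk build supports the SERIAL column, as the RHEL 8 and Amazon Linux 2 versions do):

# SERIAL shows the EBS volume ID (without the hyphen) for each NVMe disk
lsblk -o NAME,SIZE,SERIAL,MOUNTPOINT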
NVMe map command, which maps an NVMe device back to its EBS volume ID and its original device name:
# /sbin/ebsnvme-id /dev/nvme1n1
Volume ID: vol-0fb8db1fddd1f5834
xvde
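To map every NVMe disk in one pass, the same tool can be wrapped in a small loop; a sketch, assuming /sbin/ebsnvme-id is present (it ships with Amazon Linux 2 and, per the lessons learned below, must be copied onto Red Hat hosts):

# Print the EBS volume ID and original device name for each NVMe disk
for dev in /dev/nvme[0-9]*n1; do
    echo "== $dev =="
    /sbin/ebsnvme-id "$dev"
done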
Suggested /etc/fstab entries, mounting by UUID:
UUID=35e495d3-a1df-40d9-bb6c-e37b81aec11f /data xfs noatime 0 0
UUID=075bd9b0-370e-48b1-9de2-4d061d16ca5b /data/logs xfs noatime 0 0
UUID=5e5fde64-1346-45e0-9eeb-6c68be8b81bd /data/active xfs noatime 0 0
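The UUIDs for these entries can be read straight off the file systems, and the new fstab can be dry-run before a reboot:

# List the UUID of every block device (or pass a single device, e.g. blkid /dev/nvme1n1)
blkid
# Mount everything in /etc/fstab that is not already mounted - catches typos early
mount -a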
Commands to check whether the instance is ready to be changed to a Nitro-based system (the last one, dracut -f -v, rebuilds the initramfs if the NVMe driver turns out to be missing from it):
lsinitrd /boot/initramfs-$(uname -r).img | grep nvme
modinfo nvme
modinfo ena
dracut -f -v
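The same checks can be rolled into a small pass/fail script; a minimal sketch, built only from the commands listed above:

#!/bin/bash
# Pre-flight check: is this host ready for a Nitro-based instance type?
img=/boot/initramfs-$(uname -r).img
if lsinitrd "$img" | grep -q nvme; then
    echo "OK: nvme driver is present in $img"
else
    echo "FAIL: nvme missing from initramfs - rebuild it with: dracut -f -v"
fi
modinfo nvme >/dev/null 2>&1 && echo "OK: nvme module available" || echo "FAIL: nvme module not available"
modinfo ena  >/dev/null 2>&1 && echo "OK: ena module available"  || echo "FAIL: ena module not available"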
Lessons learned:
Currently, AWS takes care of this volume device name change from the 4-series instances to the 5-series Nitro-based instances on Amazon Linux 2 only. The same steps should also work on Red Hat hosts if we apply the udev rule and Python script described below.
On Amazon Linux 2, the magic that creates the symlinks from the /dev/xvdb or /dev/xvdc devices to /dev/nvme1n1 or /dev/nvme2n1 is the udev rule in the file /etc/udev/rules.d/70-ec2-nvme-devices.rules, and more specifically this line:
KERNEL=="nvme[0-9]*n[0-9]*", ENV{DEVTYPE}=="disk", ATTRS{model}=="Amazon Elastic Block Store", PROGRAM="/sbin/ebsnvme-id -u /dev/%k", SYMLINK+="%c"
Reading that rule: when a device is attached to the system whose kernel device name matches the given pattern, udev runs the program (%k expands to the kernel device name) and creates a symlink named after the program's output (%c). That program, /sbin/ebsnvme-id, is a Python script that reads the underlying EBS device information from the NVMe device, and udev uses its output to create the symlink.
One thing to keep in mind here is that this only works if the devices presented to the Amazon Linux 2 host are in the /dev/xvdX format and NOT /dev/sdb or /dev/sdc, which is the older format. In that case you may have to tweak the rules to get the results you desire.
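For the Red Hat case mentioned above, a rough sketch of applying the same mechanism, assuming the rule file and the ebsnvme-id script have been copied over from an Amazon Linux 2 host:

# Install the rule and the helper script, then reload udev
cp 70-ec2-nvme-devices.rules /etc/udev/rules.d/
install -m 0755 ebsnvme-id /sbin/ebsnvme-id
udevadm control --reload-rules
udevadm trigger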
Recommendation from the c5_m5_checks_script:
OK NVMe Module is installed and available on your instance
OK ENA Module with version 2.0.2K is installed and available on your instance
Printing correct fstab file below:
# /etc/fstab
UUID=a727b695-0c21-404a-b42b-3075c8deb6ab / xfs defaults 0 0
UUID=587cde86-c167-4e73-92cb-b67739d9991d /data/logs xfs noatime 0 0
UUID=6410c3de-3714-4198-b31b-afeca374ef43 /data/active xfs noatime 0 0