Published: 16. 10. 2019   Category: GNU/Linux

Online resize and modifications of different file systems in GNU/Linux or UNIX

Resizing file systems on virtualized hardware is pretty simple: use your management console (AWS, VirtualBox, etc.) to allocate more storage and then, as root, use standard OS commands to modify the disk partition.

Be extremely careful, because some of the following commands are pretty destructive!

Resizing AWS volume in command line

Have your credentials set up for awscli, then follow this example:

# List volumes attached to some instance
aws ec2 describe-instances \
    --instance-id=i-01508790348a21307 \
    --region=eu-west-1 \
    --query="Reservations[*].Instances[*].BlockDeviceMappings"

# Get information about the selected volume
aws ec2 describe-volumes \
    --volume-id vol-0eb12becf38326c9f \
    --region=eu-west-1

# Resize that volume
aws ec2 modify-volume \
    --volume-id vol-0eb12becf38326c9f \
    --size 7700 \
    --region=eu-west-1

# Check status of the volume modification
aws ec2 describe-volumes-modifications \
    --volume-id vol-0eb12becf38326c9f \
    --region=eu-west-1

Resize single primary partition with Ext file system

  1. Run fdisk on a device, in this case /dev/xvda. Use 'p' to display partition information:
    Disk /dev/xvda: 214.7 GB, 214748364800 bytes
    255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
     
     Device     Boot Start End        Blocks    Id System
     /dev/xvda1 *    16065 209712509 104848222+ 83 Linux
    
    The important value is the start sector number, in this example 16065.
  2. Use 'd' to delete /dev/xvda1 (first primary partition).
  3. Create a new primary partition with 'n' (partition type 'p'); as the start sector use the number noted above (16065) and accept the default end sector, which fdisk sets to the new maximum.
  4. Set boot flag with 'a'.
  5. Write new partition table to disk with 'w'. The new settings are:
    Disk /dev/xvda: 214.7 GB, 214748364800 bytes
    255 heads, 63 sectors/track, 26108 cylinders, total 419430400 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
     
     Device     Boot Start End        Blocks    Id System
     /dev/xvda1 *    16065 419430399 209707167+ 83 Linux
    
  6. Refresh the kernel partition table: partprobe.
  7. Resize the file system: resize2fs /dev/xvda1.
    If resize2fs reports: The filesystem is already 52426791 blocks long. Nothing to do!, then partprobe was not run and the kernel has not re-read the partition table!
  8. Check new file system utilization with df.
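The interactive dialogue in steps 2-5 can also be replayed non-interactively by piping the keystrokes into fdisk. A minimal sketch, assuming the same device and start sector as above (exact prompts vary between fdisk versions, and the sequence is destructive, so verify the start sector with 'p' first):

```shell
# fdisk keystrokes mirroring steps 2-5: delete partition 1 (d), create a
# new primary partition (n, p, 1) starting at sector 16065, accept the
# default maximal end sector (empty line), set the boot flag (a), write (w)
FDISK_CMDS='d
n
p
1
16065

a
w'
printf '%s\n' "$FDISK_CMDS"
# When you are sure about the device, feed the keystrokes to fdisk:
# printf '%s\n' "$FDISK_CMDS" | fdisk /dev/xvda && partprobe && resize2fs /dev/xvda1
```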

Resize partition sitting on a RAID1 array (mirror)

  1. Our AWS EC2 servers usually have one device /dev/sda running the operating system and two volumes /dev/sdd and /dev/sde (names may differ) used for software RAID.
  2. Check the details of the software RAID array on the server: mdadm --detail /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Thu Jun 30 14:53:24 2016
         Raid Level : raid1
         Array Size : 4170419968 (3977.22 GiB 4270.51 GB)
      Used Dev Size : 4170419968 (3977.22 GiB 4270.51 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
     
        Update Time : Wed Oct  4 00:19:21 2017
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
     
               Name : are-X-devD-docker-0:2
               UUID : b5654e63:4e7be141:9117b522:e60077f8
             Events : 2783
     
        Number   Major   Minor   RaidDevice State
           0     202       48        0      active sync   /dev/xvdd
           1     202       64        1      active sync   /dev/xvde
    
    As you can see, /dev/sdd and /dev/sde are visible as /dev/xvdd and /dev/xvde in the current Linux installation.
  3. In the AWS console, select the volumes and do Modify → Resize. Enter the new size, the same for both volumes. The resize operation takes less than one minute on the AWS side, but an optimization keeps running in the background which may affect I/O performance of the server. Once the console is reloaded, the new size is shown. You can use fdisk -l to show the current sizes:
    $ fdisk -l | grep 'Disk /dev/'
     
    Disk /dev/xvde doesn't contain a valid partition table
    Disk /dev/xvdd doesn't contain a valid partition table
    Disk /dev/md0 doesn't contain a valid partition table
    Disk /dev/xvda: 42.9 GB, 42949672960 bytes
    Disk /dev/xvde: 5471.8 GB, 5471788335104 bytes
    Disk /dev/xvdd: 5471.8 GB, 5471788335104 bytes
    Disk /dev/md0: 4270.5 GB, 4270510047232 bytes
    
  4. Resize /dev/md0: mdadm --grow /dev/md0 --size=max
  5. Now that the underlying disk array has the updated size, it is possible to resize the file system:
    $ resize2fs /dev/md0
     
    resize2fs 1.42.9 (4-Feb-2014)
    Filesystem at /dev/md0 is mounted on /mnt/are_md0; on-line resizing required
    old_desc_blocks = 249, new_desc_blocks = 315
    The filesystem on /dev/md0 is now 1317177280 blocks long.
    
  6. The file system is now resized; check df and you will see that the mount point backed by /dev/md0 has the new size.
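A quick sanity check of the resize2fs output in step 5: assuming the default 4 KiB ext4 block size (an assumption; verify with tune2fs -l /dev/md0), the reported block count corresponds to roughly 5 TiB:

```shell
# 1317177280 blocks (from the resize2fs output) x assumed 4096-byte blocks
blocks=1317177280
block_size=4096
bytes=$((blocks * block_size))
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # prints 5024 GiB, about 5 TiB
```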

Identify NVMe devices in AWS

Some newer instance types in Amazon Web Services do not expose their volumes as classic SSD block devices but via NVMe (Non-Volatile Memory Express). They are managed in a different way, and you can map them to their EBS volume identification with this script using the nvme command:

#!/bin/bash
# Map local NVMe devices to their AWS console device names and EBS volume IDs.
( printf "AWS console;Volume Id;Local device\n"
for nvme_dev in $(nvme list | grep -Eo '/dev/nvme[0-9]n[0-9](p[0-9])?' | grep -v 'p[0-9]$')
do
    # The AWS console device name is embedded in the vendor-specific data
    aws_dev=$(nvme id-ctrl -v "$nvme_dev" | grep '^0000' | grep -Eo '(/dev/)?[a-z]{3,4}')
    # The EBS volume ID is stored in the serial number (sn) field
    aws_id=$(nvme id-ctrl -v "$nvme_dev" | sed -ne '/^sn/{s/.*vol\([[:xdigit:]]*\)/vol-\1/;p}')

    if [[ $aws_dev != /dev/* ]] ; then
        aws_dev=/dev/$aws_dev
    fi
    printf "%s;%s;%s\n" "$aws_dev" "$aws_id" "$nvme_dev"
done ) | column -s';' -t

Add new disk in volume group on HP-UX with OnlineJFS

This variant uses the classic method for disks connected from an external disk array via Fibre Channel interfaces. The disks are added to the Logical Volume Manager:

  1. Check that you have a license for Veritas OnlineJFS: vxlicrep.
  2. Find newly connected disks and their LUNs: ioscan -fnC disk
  3. Create device files /dev/dsk/* and /dev/rdsk/*: insf -vC disk
  4. Extend volume group vg02 with c7t0d3 and c9t0d3 and create a new logical volume data02:
    pvcreate /dev/rdsk/c7t0d3
    pvcreate /dev/rdsk/c9t0d3
    vgextend vg02 /dev/dsk/c7t0d3 /dev/dsk/c9t0d3
    vgdisplay -v vg02 | more
    lvcreate -n data02 -l 6398 vg02 # size is the number of extents
    
    When mirroring is enabled, the number of extents must be doubled.
  5. Create the file system on the new volume: newfs -F vxfs -o largefiles -b 8192 /dev/vg02/rdata02
  6. Add new volume mount:
     
    mkdir /oradata2
    vi /etc/fstab
    /dev/vg02/data02 /oradata2 vxfs delaylog 0 2
    mount -a
    
  7. Change the permissions and ownership of the file system after mounting.
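The extent arithmetic behind lvcreate -l in step 4 can be sketched as follows; the 4 MB physical extent size and the target LV size are assumptions for illustration (check the actual "PE Size (Mbytes)" in the vgdisplay -v vg02 output):

```shell
pe_size_mb=4        # assumed PE size; see "PE Size (Mbytes)" in vgdisplay
lv_size_mb=25592    # hypothetical target LV size in MB
extents=$((lv_size_mb / pe_size_mb))
mirrored=$((extents * 2))       # double the extents when mirroring is enabled
echo "lvcreate -n data02 -l $extents vg02"
echo "a mirrored allocation needs $mirrored extents"
```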

Resize logical volume

  1. Check the parameters of the VG: vgdisplay -b vgEMC02 for the number of non-allocated, i.e. free, extents.
  2. Resize the logical volume to a new size of 102400 megabytes: lvextend -L 102400 /dev/vgEMC02/lv_data01
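Note that lvextend only grows the logical volume; with OnlineJFS the mounted VxFS file system is then grown online with fsadm, whose -b argument on HP-UX is given in 1024-byte sectors, i.e. KB. A sketch, assuming lv_data01 is mounted on /oradata1 (a hypothetical mount point):

```shell
lv_mb=102400                     # new LV size from the lvextend above
fsadm_kb=$((lv_mb * 1024))       # fsadm -b expects the new size in KB
echo "fsadm -F vxfs -b $fsadm_kb /oradata1"
# Run the printed command as root to grow the mounted file system online.
```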