Increase EC2 (root) file system size

Some years ago I created a new instance in EC2 with the minimal configuration needed. The disk size of the root device and partition was set to 8 GB. Today I am reaching the limit of that disk and need more space. Having the server in the cloud allows me to “simply” increase the size without having to buy a new HDD.

To increase the size of an EBS volume, you need to execute three tasks:

  1. Take snapshot
  2. Resize volume
  3. Resize file system

The commands to resize the partition and the file system are (gp2 volume, ext4 file system, t2 instance):

sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1

Take snapshot

Before starting, create a snapshot of the volume. See my blog on how to do this.
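
If you prefer the command line, the snapshot can also be created with the AWS CLI; a minimal sketch, assuming a placeholder volume ID:

# Create a snapshot of the root volume before modifying it (replace the volume ID with yours)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before resize"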

Resize volume

AWS documentation

You can use the EC2 console or the CLI to extend a volume; I’ll use the EC2 console. The volume used as the root device for my EC2 instance is based on Elastic Block Store (EBS) and of type gp2. This step is very easy: you inform AWS that you need more storage and more storage is assigned to the volume. You won’t be able to make use of that new storage until the partition and file system are resized.
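
For reference, a minimal sketch of the same change via the AWS CLI, assuming a placeholder volume ID:

# Extend the volume to 20 GB (replace the volume ID with yours)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20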

Go to EBS > Volumes

A list of volumes is shown. Find the correct one using the volume ID. The root volume of my instance is 8 GB in size and of type gp2.

To modify the volume, select it and then click on Actions > Modify Volume.

The current configuration of the volume is shown. Last chance to verify you are changing the right volume.

I’ll only modify the size of the volume: from 8 GB to 20 GB.

Confirm the change. Click on Yes.

If AWS was able to assign more storage to your volume, a confirmation message is shown.

The size of the volume is now shown as 20 GB in the volume table.
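
The state of the modification can also be checked with the AWS CLI (volume ID is again a placeholder):

# Shows the modification state, e.g. optimizing or completed
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0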

Resize file system

AWS documentation

Assigning more storage to the volume is only one step. To make use of the new disk space, the partition and the file system must be resized. To see the available partitions and their file system types:

sudo file -s /dev/xvd*
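
For illustration, the output looks roughly like this (UUIDs and flags will differ; the important part is the reported file system type, ext4 here):

/dev/xvda:  DOS/MBR boot sector
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=... (extents) (large files) (huge files)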

Resize partition

The size of the volume is now adjusted, but the partition on the disk must be resized to make use of that space. To see the sizes of the disk and partition:

lsblk
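
For illustration, the output looks roughly like this before the partition is resized (device names and sizes match this example):

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  20G  0 disk
└─xvda1 202:1    0   8G  0 part /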

The available space is 20G in total, with the partition xvda1 taking only 8G. Increase the size of the partition:

sudo growpart /dev/xvda 1

To check if the partition was resized, run lsblk again. The partition xvda1 should now be 20G in size.

lsblk

Resize file system

Resizing the EBS volume and the partition does not resize the file system. The file system still thinks it only has 8 GB available.

df -h

To change this, the file system must be resized. My root file system uses ext4 (see the output above), therefore I can use resize2fs to adjust it.

sudo resize2fs /dev/xvda1

After resize2fs finishes, the file system can use the full 20 GB of the EBS volume.

df -h

How to add a new disk to RAID5

I have a RAID5 consisting of three 10 TB HDDs. This RAID5 has a total capacity of 20 TB.

I bought a new 10 TB HDD that I want to use to extend the RAID5: 4 HDDs with a total capacity of 30 TB. The file system on md0 is ext4. Currently, the RAID5 disks are sdc1, sdf1 and sde1. The additional disk is sdd1.

cat /proc/mdstat

The RAID5 is formatted with ext4 and available as md0.

mount

Steps

  1. Prepare new disk
  2. Add disk to RAID
  3. Grow RAID
  4. Extend ext4 file system

Prepare new disk

Start with the preparation of the new disk. The disk is /dev/sdd and needs a partition. I use parted for this. First, create a partition table (label) of type gpt.

parted -s -a optimal /dev/sdd mklabel gpt

Next, create the partition using parted. This time, I am using the interactive interface, as shown in the sketch after the command below.

parted /dev/sdd
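
A minimal sketch of the interactive session, assuming the whole disk becomes a single partition flagged for RAID (adjust to your layout):

(parted) mkpart primary 0% 100%
(parted) set 1 raid on
(parted) print
(parted) quit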

Add disk to RAID

The RAID is a Linux software RAID, therefore mdadm is used to control it. To add a new disk, the option --add is used, with the RAID device and the new disk passed as parameters.

mdadm --add /dev/md0 /dev/sdd1

The result of the operation can be seen in mdstat.

cat /proc/mdstat

The new disk is added as a spare device: the (S) behind sdd1 means spare device. If a device fails, the spare takes over automatically and a RAID rebuild is triggered. This means less trouble in case of a failure, as I won’t have to do anything, but it doesn’t give me more space. The RAID5 is still at 20 TB.
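
To see the role of each member device in detail, mdadm can print the full array status:

# Show array size, state, and member devices; sdd1 is listed as spare at this point
mdadm --detail /dev/md0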

Grow RAID

To make the RAID5 use the new disk for data storage instead of keeping it as a spare, the array must be grown to four devices using the grow command.

mdadm --grow --raid-devices=4 /dev/md0

The command informs the RAID that there are now 4 HDDs to be used instead of 3. It triggers a RAID reshape, as the data must be redistributed across all HDDs.
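
The progress of the reshape can be followed in mdstat, for example refreshed every few seconds:

# Refresh the reshape progress every 5 seconds
watch -n 5 cat /proc/mdstat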

This process will take some time. To learn how to increase the speed of the sync, see my other blog post about this topic.

The RAID5 now consists of 4 HDDs, all working [UUUU]. The usable size is still 20 TB: md0 now has a capacity of 30 TB, but the ext4 file system is still configured to use only 20 TB.

Resize ext4 file system

To be able to use the 30 TB available on the RAID5, you need to resize the file system. First, run an integrity check.
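
e2fsck must not be run on a mounted file system, so unmount it first (using the mount point /mnt/md0 from later in this post):

# Unmount the file system before checking it
umount /mnt/md0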

e2fsck -f /dev/md0

After e2fsck has finished without errors, the file system can be extended using resize2fs.

resize2fs /dev/md0

After resize2fs completes (this can take a while), mount the file system again; the available size is now 30 TB:

mount /dev/md0 /mnt/md0/
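
A quick check with df confirms the new size:

# The mounted file system should now report roughly 30 TB
df -h /mnt/md0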
