
In this post, I am going to resize both DRBD resources created in my last post, CentOS 7 Active/Active iSCSI Cluster, with no downtime.  In fact, I will have two virtual machines running, one per DRBD resource, while I make this change.  The primary consideration is that the backing device must have the ability to grow online (think LVM).  I'm using virtual machines, so I will be able to enlarge the resources at the hypervisor level.  A lot of the information in this post comes from the DRBD site http://www.drbd.org/en/doc/users-guide-83/s-resizing.  I am just adding a few steps along the way and including some screen captures.

My setup is on virtual machines so I opted to shut down one node at a time and enlarge the backing device at the hypervisor level before powering them back on.  Then, the DRBD resources can be enlarged, followed by enlarging the LVM physical volumes and finally the LVM logical volumes.
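
For reference, here is the whole sequence at a glance for one resource, using the names from my setup; each step is covered in detail below:
# pcs cluster standby node5-ha.theharrishome.lan
(shut node5 down, grow its backing disk at the hypervisor, power it back on)
# pcs cluster unstandby node5-ha.theharrishome.lan
(repeat the standby/grow/unstandby steps for node6)
# drbdadm resize iscsivg01
# pvresize /dev/drbd0
# lvextend -l +100%FREE /dev/mapper/iscsivg01-lun1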

Before getting started, here is a screen capture of the pvdisplay output for iscsivg01:
[Screenshot: pvdisplay output before resizing]

And here is one for lvdisplay:
[Screenshot: lvdisplay output before resizing]

The first thing I am going to do is to verify that the DRBD resources are all in a healthy state by issuing the following command:
# cat /proc/drbd

Here is what mine looks like:
[Screenshot: /proc/drbd output showing healthy resources]

As you can see above, my DRBD resources are in good shape.  If they are not, you should correct the problem(s) before continuing.
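
Roughly speaking, what you want to see for each resource is cs:Connected and ds:UpToDate/UpToDate.  An illustrative line (the roles and counters will differ on your nodes) looks something like this:
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----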

Next, I am going to set the first node I will be working on, node 5 in my case, into standby mode:
# pcs cluster standby node5-ha.theharrishome.lan

That should move all resources off that node so we can work on it without anything being affected.  Now let's verify nothing is running on node5.  Here is what mine looks like; notice it now says node5 is in standby:
[Screenshot: cluster status with node5 in standby]
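
If you prefer to check from the command line, pcs status shows the node states and where each resource is currently running:
# pcs status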

Now we can shut down node5 and enlarge the physical partition using whatever means you have to do so (I have included an example of what that might look like on a KVM host after the next command).  When it comes back online, node5 will still be in standby, so we need to fix that with the following command:
# pcs cluster unstandby node5-ha.theharrishome.lan
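
Since my nodes are virtual machines, the enlarging happens on the hypervisor while the guest is powered off.  On a KVM/QEMU host, for example, it might look something like this (the image path and the amount of growth are just placeholders for whatever your environment uses):
# qemu-img resize /var/lib/libvirt/images/node5-data.qcow2 +10G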

Verify that node5 did come out of standby and then put node6 in standby so we can shut it down and resize the data partitions on it:
# pcs cluster standby node6-ha.theharrishome.lan

Once we have enlarged the drives on node6 and powered it back on, we need to bring it back out of standby:
# pcs cluster unstandby node6-ha.theharrishome.lan

Now both of our nodes should be back online, but if you look at the output of lvdisplay, you will notice that the sizes have not changed.  First, we must tell DRBD to synchronize the new space we added.  Once again, make sure that both nodes are connected, with only one shown as the primary, and enter the following on whichever node currently has that particular resource:
# drbdadm resize iscsivg01
# drbdadm resize iscsivg02

Now we should see that DRBD is once again synchronizing.  Here is what mine looks like during the resync process:
[Screenshot: /proc/drbd output during the resync]
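
If you want to keep an eye on the resync until it finishes, something like this refreshes the status every couple of seconds:
# watch -n2 cat /proc/drbd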

Once they are back in sync, we can resize the LVM physical volumes.  We resize them as follows, keeping in mind that you will need to do this from whichever node currently has the respective DRBD resource:
# pvresize /dev/drbd0
# pvresize /dev/drbd1
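
To double-check the result, running pvdisplay (or pvs) against the DRBD device on the node that holds it should show the larger size:
# pvdisplay /dev/drbd0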

Here is what pvdisplay now shows for /dev/drbd0 with the new 20 GB size:
[Screenshot: pvdisplay output for /dev/drbd0 after resizing]

Now we just need to resize the logical volumes.  The following commands should take care of that, again making sure we run each one on whichever node currently has the respective resource:
# lvextend -l +100%FREE /dev/mapper/iscsivg01-lun1
# lvextend -l +100%FREE /dev/mapper/iscsivg02-lun1
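
As a final sanity check, lvdisplay against each logical volume (on the node that currently has it) should now report the larger size:
# lvdisplay /dev/mapper/iscsivg01-lun1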

And here is what lvdisplay shows for iscsivg01 after resizing:
[Screenshot: lvdisplay output for iscsivg01 after resizing]

Pretty cool stuff!  And again, the two virtual machines living on these DRBD resources, which I was accessing via iSCSI, never went down; they were simply moved from one node to the other.

Thanks again for reading.

- Kyle H.