FreeBSD on ZFS root, using 'gptzfsboot' - upgrading the disks

So, you've (I've) filled up your (my) NAS host - this was always going to happen... You also probably (I definitely) haven't archived a full backup somewhere safe and offsite in some time! Meanwhile, the disk size with the best size/cost ratio has probably increased massively.

It is time to procure 2 new, bigger disks - and when they arrive, and have tested out fine... shut down the NAS host and carefully shove them in...



# halt

  • add the new mirror disks in the 2 spare drive slots - these will appear as BIOS disks 2 and 3 - boot up, then do:


# gpart show

  • note the old mirror disks' partitioning; here it was a 128K p1 (boot), a 4G p2 (swap), and the rest as p3 (part of pool0)
  • create same on new mirror disks:


# gpart create -s gpt ada2
# gpart add -s 128K -t freebsd-boot ada2
# gpart add -s 4G -t freebsd-swap ada2
# gpart add -t freebsd-zfs ada2
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2

# gpart create -s gpt ada3
# gpart add -s 128K -t freebsd-boot ada3
# gpart add -s 4G -t freebsd-swap ada3
# gpart add -t freebsd-zfs ada3
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
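The two identical gpart sequences above can be expressed as a loop. A minimal sketch that only prints the commands for review (drop the echo to actually run them), assuming the same 128K boot / 4G swap / rest-as-ZFS layout:

```shell
#!/bin/sh
# Dry run: print the partitioning commands for each new disk.
# Remove the leading "echo" on each line to execute for real.
for disk in ada2 ada3; do
  echo gpart create -s gpt "$disk"
  echo gpart add -s 128K -t freebsd-boot "$disk"
  echo gpart add -s 4G -t freebsd-swap "$disk"
  echo gpart add -t freebsd-zfs "$disk"
  echo gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 "$disk"
done
```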

# gpart show
# zpool status

  • add the new mirror disks' p3 partitions to pool0 (zpool attach takes one new device per call; attaching both to the existing mirror makes it a temporary 4-way mirror):


# zpool attach pool0 ada0p3 ada2p3
# zpool attach pool0 ada0p3 ada3p3
# zpool status

  • this will start with a resilver duration estimate approaching the End Of Time - it drops fast, but I found my SATA-2 enabled host took 30hrs to sync 2TB, so come back much later and check progress; if done, check disk space:


# while true; do clear; zpool status; sleep 5; done
# df -h
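For a rough feel of the numbers above: 2TB in 30 hours works out to roughly 18-19 MB/s average - plausible for a resilver on a SATA-2 era box. A quick sanity check with shell integer arithmetic (decimal TB assumed):

```shell
#!/bin/sh
# 2 TB (decimal, 2e12 bytes) synced in 30 hours -> average MB/s
bytes=2000000000000
secs=$((30 * 3600))                   # 108000 seconds
echo $(( bytes / secs / 1000000 ))    # prints 18
```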

  • note! no more space! this is because a mirror's usable size is that of its smallest member, which is still a 2TB disk - time to pull out the old smaller disks; unless you have hotswap, halt the host:


# halt

  • remove the new mirror disks (BIOS disks 2 and 3) and reboot onto the OLD mirror (this is just to tidy it up a little bit, you /could/ skip it...), then do:


# zpool status
# zpool detach pool0 ada2p3
# zpool status
# zpool detach pool0 ada3p3
# zpool status

  • so now the host is as it was before the new mirror disks were added - the old mirror disks can be archived somewhere very safe and offsite once removed; halt the host again:


# halt

  • remove the old mirror disks (BIOS disks 0 and 1) and add the new mirror disks back in - these will now be BIOS disks 0 and 1 - boot up, then do:


# zpool status

  • note! it appears ada0 and ada1 are ONLINE, while another ada0 and ada1 are MISSING - fear not! the following will work:


# zpool detach pool0 ada0p3 
# zpool detach pool0 ada1p3
# zpool status
# zpool list
# df -h

  • note! the usable size has NOT increased! WTF?! the manual suggests the pool will grow automatically once its smallest device grows, so what has gone wrong?! (on ZFS versions with the 'autoexpand' pool property, this is actually expected behaviour: it defaults to off)... so nervously try a workaround and remove 1 mirror plex - hopefully the new size of the remaining device will then be noticed...


# zpool detach pool0 ada1p3
# zpool status
# zpool list
# df -h
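For reference: on pools that support the 'autoexpand' property (off by default), devices can instead be grown in place with 'zpool online -e', which may avoid the detach/reattach workaround entirely. A sketch that only prints the commands (drop the echos to run), assuming the pool and device names used in this post:

```shell
#!/bin/sh
# Dry run: hypothetical in-place expansion, as an alternative to
# detaching a plex. Assumes pool0 with members ada0p3 and ada1p3.
echo zpool set autoexpand=on pool0
for dev in ada0p3 ada1p3; do
  echo zpool online -e pool0 "$dev"
done
```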

  • woo! well, sort of - the full capacity is now seen, but we are now in for a pointless resilver after reattaching the previously detached plex... NOT COOL!


# zpool attach pool0 ada0p3 ada1p3
# zpool status
# zpool list

  • this is annoyingly suboptimal, as there is no disk redundancy during the resilver - that said, there is an entirely cool bootable mirror archived... moving swiftly on, recreate the swap mirror - this could have been done earlier, but I didn't want to have to edit fstab :)


# swapinfo 

  • that should list no swap devices... if so, proceed:


# gmirror label -b prefer swap /dev/ada[01]p2
# swapon -a
# swapinfo 
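For swapon -a to find the mirror (now and at every boot), /etc/fstab needs to reference it and the geom_mirror module must load at boot - presumably already in place here. The relevant fragments would look something like this, assuming the mirror label 'swap' used above:

```
# /boot/loader.conf
geom_mirror_load="YES"

# /etc/fstab
/dev/mirror/swap  none  swap  sw  0  0
```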

  • done. check on the resilver status, yawn!


# while true; do clear; zpool status; sleep 5; done

Posted by doug on Saturday, November 24, 2012