Single disk to raid1 with LVM on Debian stretch

Recently, I remembered that my server has a second disk that I forgot to use when installing the system… So I decided to migrate to raid 1 with a single LVM volume group on top.

The best guide I found was, as is often the case, on the ArchLinux wiki. Here's my guide for Debian stretch.

First create a partition on the second disk using fdisk and make it bootable:

% fdisk /dev/sdb
Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xcb9034eb.
Command (m for help): n
Partition type
p   primary (0 primary, 0 extended, 4 free)
e   extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-586072367, default 2048): 2048
Last sector, +sectors or +size{K,M,G,T,P} (2048-586072367, default 586072367):
Created a new partition 1 of type 'Linux' and of size 279.5 GiB.
Command (m for help): a
Selected partition 1
The bootable flag on partition 1 is enabled now.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Create a degraded raid 1 array on /dev/sdb1, dump the mdadm config into /etc/mdadm/mdadm.conf and re-generate the initramfs (otherwise the raid device will be named /dev/md127 instead of /dev/md0, which is probably a bug…):

% mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device.  If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
% mdadm --detail --scan >> /etc/mdadm/mdadm.conf
% update-initramfs -u
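
To double-check that the freshly created (and for now degraded) array came up as expected, you can look at its state:

% cat /proc/mdstat
% mdadm --detail /dev/md0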

Create an LVM physical volume on /dev/md0, a volume group and a new logical volume for our root filesystem, then create an ext4 filesystem on it:

% pvcreate /dev/md0
Physical volume "/dev/md0" successfully created.
% vgcreate vg0 /dev/md0
Volume group "vg0" successfully created
% lvcreate -L 20G -n root vg0
Logical volume "root" created.
% mkfs.ext4 /dev/mapper/vg0-root 
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 2359296 4k blocks and 589824 inodes
Filesystem UUID: bb31a512-9d4a-4501-98a8-f4755ebdb5d3
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
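
If you want to double-check the new LVM layout before copying data over, the standard reporting commands will show it:

% pvs
% vgs
% lvs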

Copy the content of your current root filesystem to the newly created one using rsync:

% mkdir /mnt/root
% mount /dev/mapper/vg0-root /mnt/root
% rsync -avxHAX --numeric-ids --progress / /mnt/root

If you used a separate /boot partition, also rsync it to /mnt/root/boot.
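
For a separate /boot, something along these lines should do (note the trailing slash on the source so only its content is copied):

% rsync -avxHAX --numeric-ids --progress /boot/ /mnt/root/boot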

Then edit /mnt/root/etc/fstab to mount the root partition from vg0/root; the simplest way to do this is to rely on the filesystem UUID:

% blkid /dev/mapper/vg0-root
/dev/mapper/vg0-root: UUID="bb31a512-9d4a-4501-98a8-f4755ebdb5d3" TYPE="ext4"

The corresponding fstab line should be:

UUID=bb31a512-9d4a-4501-98a8-f4755ebdb5d3 /               ext4    errors=remount-ro 0       1

Then chroot into your new root, install grub and update its configuration:

% mount --bind /sys /mnt/root/sys
% mount --bind /proc /mnt/root/proc
% mount --bind /dev /mnt/root/dev
% chroot /mnt/root /bin/bash
% grub-install /dev/sdb
% update-grub

You may encounter warnings like "WARNING: Failed to connect to lvmetad. Falling back to device scanning." or "/usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image."; they were not relevant in my case.

Then exit the chroot and umount the new root:

% exit
% umount /mnt/root/dev
% umount /mnt/root/proc
% umount /mnt/root/sys
% umount /mnt/root

At this point, if you tell your BIOS to boot from the second disk, it should work and use vg0/root as the root filesystem.

Unfortunately, I was doing this on a remote server with no way to configure the BIOS and no access to the console. In this case you want to configure grub from the first disk to boot from the second disk (you will need os-prober installed):

% update-grub
(it should output a line like "Found Debian GNU/Linux 9 (stretch) on /dev/mapper/vg0-root")
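
If os-prober is not installed yet (update-grub needs it to detect the new root), it is a regular Debian package, so something like this should do:

% apt-get install os-prober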

Then we want grub to boot from the second disk by default: edit /boot/grub/grub.cfg, look at the menuentry lines and try to figure out what the grub menu will look like, then change the default boot entry by setting GRUB_DEFAULT in /etc/default/grub.

For me it was:

0 Debian GNU/Linux, with Linux 4.9.0-9-amd64
1 Debian GNU/Linux, with Linux 4.9.0-9-amd64 (recovery mode)
2 Debian GNU/Linux 9 (stretch) (on /dev/mapper/vg0-root)
[...]

So I wrote GRUB_DEFAULT=2 in /etc/default/grub and ran update-grub again.
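
For example, assuming /etc/default/grub already contains a GRUB_DEFAULT line (it does on a default Debian install), the change can be made non-interactively with sed:

% sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=2/' /etc/default/grub
% update-grub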

Reboot, cross your fingers and hopefully you will boot into your new root.

Then make sure nothing from /dev/sda is still in use (or copy it to your LVM volumes on vg0 first); a quick sanity check is sketched below. Then apply the partitioning scheme from /dev/sdb to /dev/sda using sfdisk (assuming the two disks are identical).
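
For the sanity check, lsblk and swapon (both part of util-linux) will show anything on the old disk that is still mounted or used as swap:

% lsblk /dev/sda
% swapon --show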

% sfdisk -d /dev/sdb | sfdisk /dev/sda

Finally add /dev/sda1 to the raid array:

% mdadm /dev/md0 -a /dev/sda1

Wait for the raid synchronization to finish by looking at /proc/mdstat.
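
The synchronization progress shows up in /proc/mdstat, and watch makes it easy to follow:

% watch cat /proc/mdstat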

Then re-install grub on /dev/sda and regenerate the initramfs and grub configs:

% grub-install /dev/sda
% update-grub
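
For the initramfs, the same command used earlier applies:

% update-initramfs -u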

Reboot and you should boot from your raid array, yay!