
# Remove a vdev from a zpool

https://www.truenas.com/community/threads/remove-a-vdev-from-a-zpool.35608/

Funny how two minds can think alike; that is exactly what I did, and I'd like to write it down for future generations to come (or for myself, if I ever do it again) ;)

```
# zpool status -v tank
  pool: tank
 state: ONLINE

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/e41b2476-22bd-11e2-ac2d-3cd92b06cdcb  ONLINE       0     0     0
            gptid/e4c521be-22bd-11e2-ac2d-3cd92b06cdcb  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/b4580a46-2fcd-11e5-af9d-3cd92b06cdcb  ONLINE       0     0     0
            gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb  ONLINE       0     0     0
```

First, you have to detach one of the new drives from the mirror:

```sh
zpool detach tank /dev/gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb
```
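
As a sanity check (my addition, not in the original thread), a quick status run should now show mirror-1 reduced to a single, non-redundant disk:

```sh
# mirror-1 should now appear as a lone disk with no redundancy left
zpool status -v tank
```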

Then, create a new pool on the just-detached drive:

```sh
zpool create sonne /dev/gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb
```
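
The thread creates the pool with plain defaults; if you want compression on the new pool, a small optional tweak using standard ZFS commands (my addition, adjust to taste):

```sh
# Optional: enable lz4 compression for all datasets on the new pool
zfs set compression=lz4 sonne
# Sanity check: the new pool exists and has the expected size
zpool list sonne
```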

Now replicate all your data from the old pool to the new one (the send/receive can take a while):

```sh
zfs snapshot -r tank@nas_backup
zfs send -Rv tank@nas_backup | zfs receive -Fv sonne
```
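
Before touching the old pool, it is worth convincing yourself that the replication actually landed; these checks are my addition, using standard ZFS listings:

```sh
# All recursive snapshots should now exist under sonne as well
zfs list -t snapshot -r sonne
# Used space on both root datasets should roughly match
zfs list -o name,used,avail tank sonne
```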

To make FreeNAS aware of your newly created pool, first export it:

```sh
zpool export sonne
```
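
Once exported, the pool should show up as importable; a quick check from the shell (my addition):

```sh
# An exported pool is listed here until something imports it again
zpool import
```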

Then, in the WebGUI, click "Storage -> Import Volume" to import it. Since I found no other way, I manually changed all paths (user home dirs, shares, etc.) to their new values. Reboot.
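
If you would rather hunt down those paths from the shell instead of clicking through the GUI, listing the mountpoints is a good start (my addition):

```sh
# Show every dataset on the new pool and where it is mounted
zfs list -o name,mountpoint -r sonne
```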

Then comes the scary part:

```sh
zpool destroy tank
zpool attach sonne gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb gptid/b4580a46-2fcd-11e5-af9d-3cd92b06cdcb
```

Then the resilvering will set in. Done.
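
To keep an eye on the resilver (my addition), the usual status command shows the progress and an estimated time to completion:

```sh
# Shows "resilver in progress" with a percentage while it runs
zpool status -v sonne
```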

Reminder: During the whole process, there is no redundancy. If any drive fails, your data is gone.

By the way, if anyone knows how to tell FreeNAS (or ZFS) not to use gptid labels in `zpool status`, and instead make the output look like https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/zfs-zpool.html, please PM me.
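
For what it is worth, on stock FreeBSD the gptid labels can be suppressed with loader tunables, which makes ZFS fall back to gpt labels or plain device names after the next export/import. Whether FreeNAS keeps these settings across updates I have not tested, so treat this as an unverified sketch:

```sh
# /boot/loader.conf
# Disable gptid and disk-ident labels so zpool status shows
# gpt labels / device names after the next export + import
kern.geom.label.gptid.enable="0"
kern.geom.label.disk_ident.enable="0"
```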