ZFS

Introduction

http://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/

https://wiki.gentoo.org/wiki/ZFS

https://wiki.archlinux.org/index.php/ZFS

Usage

https://wiki.ubuntu.com/Kernel/Reference/ZFS

Installation

sudo apt-get install zfsutils-linux

Pools

https://wiki.ubuntu.com/ZFS/ZPool

Create the root storage directory for the pools...

sudo mkdir /zfs

Create the mirror (RAID1) zpool...

sudo zpool create -f -m /zfs/zpool1 zpool1 mirror /dev/sdb /dev/sdc

Check...

sudo zpool list

NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpool1  1008M   368K  1008M         -     1%     0%  1.00x  ONLINE  -
sudo zpool status

pool: zpool1
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Wed Jun  8 13:55:46 2016
config:
       NAME        STATE     READ WRITE CKSUM
       zpool1      ONLINE       0     0     0
         mirror-0  ONLINE       0     0     0
           sdb     ONLINE       0     0     0
           sdc     ONLINE       0     0     0
errors: No known data errors
sudo zpool iostat zpool1

              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zpool1       151G  3.48T      0     63    285  6.28M
sudo tree /zfs

/zfs/
└── zpool1

Increase Storage Space

If you have a mirror (RAID1) pool, you have to add disks in pairs.

Here, we add two more drives to the existing pool, doubling the available storage space...

sudo zpool add -f zpool1 mirror /dev/sdd /dev/sde

Now, check the size of the pool...

sudo zpool list

Datasets

Create a dataset for Documents...

sudo zfs create zpool1/Documents

Check...

sudo zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zpool1             154K   976M    19K  /zfs/zpool1
zpool1/Documents    19K   976M    19K  /zfs/zpool1/Documents

If you want to, set a quota limit of 100MB...

sudo zfs set quota=100M zpool1/Documents

sudo zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zpool1             155K   976M    19K  /zfs/zpool1
zpool1/Documents    19K   100M    19K  /zfs/zpool1/Documents
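
To check just the quota, or remove it again later, something along these lines should work (dataset name as above)...

sudo zfs get quota zpool1/Documents
sudo zfs set quota=none zpool1/Documents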

Now check the dataset properties...

sudo zfs get all zpool1/Documents

Now, you can create datasets for the other main directories...

sudo zfs create zpool1/Music
sudo zfs create zpool1/Pictures
sudo zfs create zpool1/Videos
sudo zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zpool1             227K   976M    19K  /zfs/zpool1
zpool1/Documents    19K   100M    19K  /zfs/zpool1/Documents
zpool1/Music        19K   976M    19K  /zfs/zpool1/Music
zpool1/Pictures     19K   976M    19K  /zfs/zpool1/Pictures
zpool1/Videos       19K   976M    19K  /zfs/zpool1/Videos
sudo tree /zfs/
/zfs/
└── zpool1
    ├── Documents
    ├── Music
    ├── Pictures
    └── Videos

Tutorial & Information

http://flux.org.uk/tech/2007/03/zfs_tutorial_1.html

http://flux.org.uk/tech/2007/03/zfs_tutorial_2.html

https://docs.oracle.com/cd/E19253-01/819-5461/

http://www.fibrevillage.com/storage/168-zfs-pool-zfs-datasets-and-zfs-volumes

Rename A Pool

zpool export <your_pool_name>
zpool import <your_pool_name> <your_new_pool_name>
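
For example, to rename a pool called tank to backup (both names are just placeholders)...

zpool export tank
zpool import tank backup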

https://prefetch.net/blog/index.php/2006/11/15/renaming-a-zfs-pool/

Replace A Disk

https://forum.proxmox.com/threads/disk-replacement-procedure-for-a-zfs-raid-1-install.21356/#post-133719
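
As a rough outline (pool and device names here are examples - check your own with zpool status), a failed disk in a mirror can be swapped with zpool replace, after which the pool resilvers automatically...

sudo zpool status zpool1
sudo zpool offline zpool1 sdb
sudo zpool replace zpool1 sdb /dev/sdf
sudo zpool status zpool1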

Delete A ZFS Pool

sudo zpool destroy <your_pool_name>

NOTE: There is no prompt to confirm destruction.

Finding A Pool

zpool import

Starting A Pool

zpool import <name>

Mounting A Pool

mkdir -p /zfs/vpool
zfs set mountpoint=/zfs/vpool vpool
zfs get mountpoint vpool
zfs get mounted vpool
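
If the dataset does not mount by itself after setting the mountpoint, it should be possible to mount it by hand (pool name as above), or mount everything in one go...

zfs mount vpool
zfs mount -a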

Increase Storage By Replacing Disks With Larger Disks

https://www.dan.me.uk/blog/2012/11/14/increase-capacity-of-freebsd-zfs-array-by-replacing-disks/

https://madaboutbrighton.net/articles/2016/increase-zfs-pool-by-adding-larger-disks
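
In outline (pool and device names are examples): enable autoexpand, replace each disk in turn with a larger one, and let the resilver finish before moving on to the next; the extra capacity appears once the last disk has been replaced...

sudo zpool set autoexpand=on zpool1
sudo zpool replace zpool1 sdb /dev/sdf
sudo zpool status zpool1
sudo zpool replace zpool1 sdc /dev/sdg
sudo zpool status zpool1
sudo zpool list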

Snapshots

Introduction

A ZFS snapshot is a read-only copy of a ZFS file system or volume. Snapshots are created instantly and initially consume no extra space in the pool; they only grow as the live data diverges from the snapshotted state. A snapshot saves the state of the file system at a particular point in time, and the file system can later be rolled back to exactly that state. You can also extract individual files from a snapshot if required, rather than doing a complete rollback.

Usage

Set the snapdir property to visible so you can browse the snapshots under the hidden .zfs directory and retrieve individual files from them...

sudo zfs set snapdir=visible zpool1/dataset

Create the snapshot...

sudo zfs snapshot zpool1/dataset@20180523

List the snapshots...

sudo zfs list -t snapshot

Check the contents...

sudo tree -a /zfs/

Send a snapshot to another ZFS filesystem on another server...

sudo zfs send zpool1/dataset@20180523 | ssh server2 zfs receive zpool1/dataset
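
To send only the changes made since an earlier snapshot, an incremental send should work (both snapshot names here are examples)...

sudo zfs send -i zpool1/dataset@20180522 zpool1/dataset@20180523 | ssh server2 zfs receive zpool1/dataset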

Delete a snapshot...

sudo zfs destroy zpool1/dataset@20180523
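
Roll back the dataset to a snapshot, or (with snapdir=visible as above) copy an individual file back out of the hidden .zfs directory - the file name here is just an example...

sudo zfs rollback zpool1/dataset@20180523
cp /zfs/zpool1/dataset/.zfs/snapshot/20180523/somefile.txt /tmp/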

https://www.thegeekdiary.com/zfs-tutorials-creating-zfs-snapshot-and-clones/

http://kbdone.com/zfs-snapshots-clones/

Automation

http://www.zfsnap.org/about.html

http://www.zfsnap.org/zfsnap_manpage.html

Create a snapshot that will expire in 7 days...

sudo zfsnap snapshot -v -a 7d zpool1/backups

Delete snapshots older than 7 days (the -n flag makes this a dry run - remove it to actually destroy them)...

sudo zfsnap destroy -v -n -F 7d zpool1/backups
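
To run these on a schedule, root crontab entries along these lines could be used (the times and the path to zfsnap are assumptions - check with 'which zfsnap')...

0 1 * * * /usr/sbin/zfsnap snapshot -a 7d zpool1/backups
0 2 * * * /usr/sbin/zfsnap destroy -F 7d zpool1/backups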

or...

zfs-auto-snapshot - ZFS automatic snapshot service

...combined with some kind of script or cron job that destroys (zfs destroy) snapshots older than a certain age.

ZFS Snapshots using Sanoid
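
https://github.com/jimsalterjrs/sanoid

As a rough sketch (the dataset name and retention values are examples), Sanoid reads /etc/sanoid/sanoid.conf and is run regularly from cron or a systemd timer...

[zpool1/Documents]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes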

Archiving Encrypted Snapshots to AWS S3

Manual

Sending full backup...

zfs send -R <pool name>@<snapshot name> | gzip | gpg --no-use-agent  --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://<bucketname>/<filename>.zfs.gz.gpg

Sending incremental backup...

zfs send -R -I <pool name>@<snapshot to do incremental backup from> <pool name>@<snapshot name> | gzip | gpg --no-use-agent  --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://<bucketname>/<filename>.zfs.gz.gpg

Restoring backup...

aws s3 cp s3://<bucketname>/<filename>.zfs.gz.gpg - | gpg --no-use-agent --passphrase-file ./passphrase -d - | gunzip | sudo zfs receive <new dataset name>

https://stackoverflow.com/questions/45786142/storing-locally-encrypted-incremental-zfs-snapshots-in-amazon-glacier

Scripted

https://github.com/presslabs/z3