ZFS

From DWIKI
 
Latest revision as of 14:09, 20 December 2024

Links

Documentation

ARC/Caching

L2ARC

sysctl kstat.zfs.misc.arcstats | egrep 'l2_(hits|misses)'

and

egrep 'l2_(hits|misses)' /proc/spl/kstat/zfs/arcstats

Tuning ZFS

Monitoring and Tuning ZFS Performance

ARC statistics

ZFS module parameters

/sys/module/zfs/parameters/
cat /proc/spl/kstat/zfs/arcstats

data_size

size of cached user data

dnode_size

hdr_size

size of L2ARC headers stored in main ARC

metadata_size

size of cached metadata
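The size fields above can be pulled out of the kstat file with a short script. A minimal sketch, assuming the usual three-column kstat layout (name, type, value) as found in /proc/spl/kstat/zfs/arcstats on Linux; the excerpt values are made up for the example:

```python
# Minimal sketch: parse arcstats-style kstat output (on Linux:
# /proc/spl/kstat/zfs/arcstats) and pull out the size fields above.
def parse_arcstats(text):
    """Data lines look like: 'data_size  4  123456' (name, type, value)."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        # keep only three-column lines whose last column is numeric,
        # which skips the kstat header lines
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    return stats

# Example with a trimmed-down arcstats excerpt (values invented):
sample = """\
name                            type data
size                            4    1073741824
data_size                       4    805306368
metadata_size                   4    134217728
hdr_size                        4    8388608
dnode_size                      4    4194304
"""
for field in ("size", "data_size", "metadata_size", "hdr_size", "dnode_size"):
    print(field, parse_arcstats(sample)[field])
```

On a real system, read the text from /proc/spl/kstat/zfs/arcstats instead of the sample string.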

Tools


kstat-analyzer

prefetch hit rate is low, consider tuning prefetcher

Check:

This is supposed to stay at 0:

cat /sys/module/zfs/parameters/zfs_vdev_cache_size


Code:

if (float(kstats['hits']) / accesses) < PREFETCH_RATIO_OK
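The excerpt above boils down to comparing prefetch hits against total accesses from zfetchstats. A hedged sketch of the same calculation — the threshold value here is an assumed example, not confirmed from kstat-analyzer's source:

```python
# Sketch of the prefetch check: hits/misses come from the
# /proc/spl/kstat/zfs/zfetchstats counters on Linux.
PREFETCH_RATIO_OK = 0.85  # assumption; check kstat-analyzer's source for the real value

def prefetch_ratio(kstats):
    """Fraction of prefetch accesses that were hits, or None if no activity."""
    accesses = kstats["hits"] + kstats["misses"]
    if accesses == 0:
        return None
    return kstats["hits"] / accesses

ratio = prefetch_ratio({"hits": 700, "misses": 300})
print(ratio)  # 0.7 -> below the assumed threshold, prefetcher worth tuning
```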

Relevant links:

Processes

arc_evict

Evict buffers from list until we've removed the specified number of bytes. Move the removed buffers to the appropriate evict state. If the recycle flag is set, then attempt to "recycle" a buffer:

- look for a buffer to evict that is `bytes' long.
- return the data block from this buffer rather than freeing it.

This flag is used by callers that are trying to make space for a new buffer in a full arc cache.


This function makes a "best effort". It skips over any buffers it can't get a hash_lock on, and so may not catch all candidates. It may also return without evicting as much space as requested.

arc_prune

Commands

Getting arc statistics

arcstat
arc_summary

Tip, for details use

arc_summary -d

There is also

cat /proc/spl/kstat/zfs/arcstats

and

zfetchstat + kstat-analyzer from zfs-linux-tools


zil/slog statistics

arc_summary -s zil

or

cat /proc/spl/kstat/zfs/zil

or

zilstat

or

 zpool iostat -v

l2arc statistics

arc_summary -s l2arc

Getting IO statistics

zpool iostat -v 300

Terms and acronyms

vdev

Virtual Device.

ARC

Adaptive Replacement Cache

Portion of RAM used to cache data to speed up read performance

L2ARC

Level 2 Adaptive Replacement Cache

"L2ARC is usually considered if hit rate for the ARC is below 90% while having 64+ GB of RAM"

SSD cache
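The 90% rule of thumb quoted above can be checked against the hits/misses counters from arcstats. A small sketch (counter names as in arcstats; the sample numbers are invented):

```python
def arc_hit_rate(hits, misses):
    """ARC hit rate as a percentage, from the arcstats hits/misses counters."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

# Below ~90% while having 64+ GB of RAM -> L2ARC may be worth considering
print(arc_hit_rate(hits=850_000, misses=150_000))  # 85.0
```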

DMU

Data Management Unit


MFU

Most Frequently Used

MRU

Most Recently Used

zvol

A block device whose space is allocated from the pool; useful for iSCSI targets

Scrubbing

Checking disks/data integrity

zpool status <poolname> | grep scrub

and

zpool scrub <poolname>

This is probably already taken care of by a cron job.


SLOG

See [ZIL]

ZIL

ZIL explained

the area where synchronous writes are logged before the confirmation is sent back to the client

prefetch

See /proc/spl/kstat/zfs/zfetchstats

HOWTO

Get sizes/reservations

zfs get quota,reservation tank/vol1


Set maximum size of dataset

zfs set quota=200G tank/myset

Caching

Add log/cache

For L2ARC, mirrors make little sense; just add disks:

zpool add rpool cache sdf

or maybe better

zpool add rpool cache /dev/disk/by-id/ata-SAMSUNG_MZ7LH960HAJR-00005_S45NNA0N47394

or simply

zpool add rpool cache ata-SAMSUNG_MZ7LH960HAJR-00005_S45NNA0N47394

Add ZIL/SLOG write cache

zpool add rpool log mirror sdk sdl

Remove ZIL/SLOG mirrored cache

zpool remove mypool mirror-4 sdn1 sdo1

Getting statistics

Show cache activity

dstat --zfs-arc --zfs-l2arc --zfs-zil -d 5

zpool

zpool iostat

More statistics, every 5 seconds

zpool iostat -v 5

Flush linux caches

echo 3 > /proc/sys/vm/drop_caches

arc statistics

l2arc statistics

ZIL statistics

cat /proc/spl/kstat/zfs/zil

Create zfs filesystem

zfs create poolname/fsname

this also creates and mounts the mountpoint


Add vdev to pool

zpool add mypool raidz1 sdg sdh sdi

Replace disk in zfs

Some links

Get information first:

Name of disk

zpool status

Find uid of disk to replace

take it offline

zpool offline poolname ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M5RLZC6V

Get the disk guid:

zdb

guid: 15233236897831806877

Get list of disk by id:

ls -al /dev/disk/by-id

Save the id, shutdown, replace disk, boot:

Find the new disk:

ls -al /dev/disk/by-id

Run the replace command. The first argument is the guid of the old disk, the second the name of the new disk:

zpool replace tank /dev/disk/by-id/13450850036953119346 /dev/disk/by-id/ata-ST4000VN000-1H4168_Z302FQVZ


or just

zpool replace tank /dev/sdi


If disk is shown as UNAVAIL

zpool offline tank sdi

Showing information about ZFS pools and datasets

Show pools with sizes

zpool list 

or

zpool list -H -o name,size


Show reservations on datasets

zfs list -o name,reservation

Swap on zfs

https://askubuntu.com/questions/228149/zfs-partition-as-swap

zfs create -V 4G -b 4K pool/swap
mkswap -f /dev/zvol/pool/swap
swapon /dev/zvol/pool/swap

and remember fstab

vdevs

multiple vdevs

Multiple vdevs in a zpool get striped. What about balance?

invalid vdev specification

Probably means you need -f

show balance between vdevs

zpool iostat -v 'pool' [interval in seconds]

or just

zpool iostat -vc 'pool'

Tuning arc settings

See Tuning ZFS modules parameters

zfs_arc_max

On Linux, the ARC size defaults to 50% of RAM; this is the case when zfs_arc_max is 0:

cat /sys/module/zfs/parameters/zfs_arc_max
0
grep c_max /proc/spl/kstat/zfs/arcstats

To change this:

echo 5368709120 > /sys/module/zfs/parameters/zfs_arc_max

and add to /etc/modprobe.d/zfs.conf

options zfs zfs_arc_max=5368709120

NOTE: you might need to run this (for example when / is on ZFS):

update-initramfs -u -k all

and perhaps clear caches and reset counters:

echo 3 > /proc/sys/vm/drop_caches

Tune zfs_arc_dnode_limit_percent

Assuming zfs_arc_dnode_limit = 0:

echo 20 > /sys/module/zfs/parameters/zfs_arc_dnode_limit_percent

In /etc/modprobe.d/zfs.conf:


options zfs zfs_arc_dnode_limit_percent=20


export iscsi

https://linuxhint.com/share-zfs-volumes-via-iscsi/

FAQ

arc_summary

VDEV cache disabled, skipping section

This is normal, vdev caching is considered bad in current code

Arc metadata size exceeds maximum

So arc_meta_used > arc_meta_limit


increasing feed rate

show status and disks

zpool status

show drives/pools

zfs list
      

check raid level

zfs list -a


Estimate raidz speeds

raidz1: N/(N-1) * IOPS
raidz2: N/(N-2) * IOPS
raidz3: N/(N-3) * IOPS
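Plugging numbers into the rules above — taking N as the number of disks in the vdev and IOPS as a single disk's IOPS. This implements the rough estimates as listed here, not an exact performance model:

```python
def raidz_iops_estimate(n_disks, parity, disk_iops):
    # rough estimate per the rule above: N / (N - parity) * IOPS
    return n_disks / (n_disks - parity) * disk_iops

# e.g. a 6-disk vdev, 100 IOPS per disk
print(raidz_iops_estimate(6, 1, 100))  # raidz1
print(raidz_iops_estimate(6, 2, 100))  # raidz2
print(raidz_iops_estimate(6, 3, 100))  # raidz3
```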


L2ARC not detected, skipping section

Looks like you just don't have an L2ARC cache device


cannot export 'tank': pool is busy

After checking stuff like nfs etc try:

zfs unshare -a
zfs umount -a -f
zpool export -f tank