LVM

From DWIKI
Links

*[http://blog.gadi.cc/better-lvm-for-kvm/ http://blog.gadi.cc/better-lvm-for-kvm/]
*[https://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in- https://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-]
*[https://wiki.gentoo.org/wiki/LVM/en Gentoo doc on LVM]
*[http://tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html Taking a Backup Using Snapshots]
*[https://linoxide.com/identify-linux-lvm-mirror/ LVM mirrors]
*https://www.thegeekdiary.com/how-to-convert-a-volume-to-stripe-raid0-volume-in-lvm/
*[https://mydbops.wordpress.com/2019/09/08/get-the-most-iops-out-of-your-hard-disk-mounts-using-lvm/ Get the most IOPS out of your physical volumes using LVM]
*https://www.tecmint.com/setup-thin-provisioning-volumes-in-lvm/
*https://www.theurbanpenguin.com/thin-provisioning-lvm2/

NOTE: remember to add snapshot support when calling lvcreate
Latest revision as of 15:32, 14 October 2024

Logical Volume Management on Linux

 


Raid on LVM

Check:

lvs -o +devices,segtype

Convert linear to striped

LVM Thin provisioning

man lvmthin

Create thin pool

 lvcreate -L 100G -T vg001/mythinpool

Show some more about thin volumes

lvs -a

and

lvdisplay


Grow thin pool

lvextend -L 1T <VG>/<LVThin_pool>

Resize metadata thin pool

TODO verify

lvresize --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>


Create thin volume

lvcreate -V 100G -T <VG>/<LVThin_pool> -n mythinvolume
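Putting the commands above together, a minimal sketch (pool and volume names follow the examples above; requires root and an existing VG named vg001):

```shell
# Create a 100G thin pool, then a thin volume that promises 100G;
# space is only taken from the pool as data is actually written.
lvcreate -L 100G -T vg001/mythinpool
lvcreate -V 100G -T vg001/mythinpool -n mythinvolume

# Data% in the output is the real usage of pool and volume.
lvs -a vg001
```

Because thin volumes can promise more than the pool holds, watch the pool's Data% and grow it (see "Grow thin pool" above) before it fills.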

Related commands

lvs

Logical Volume Attributes

lvs output shows an "Attr" column: ten characters encoding volume type, permissions, allocation policy, fixed minor, state, device open, target type, zeroing, volume health, and skip activation (see lvs(8)).

Example

rwi-aor---   (r = raid, w = writable, i = inherited allocation, a = active, o = open, r = raid target)
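The positions can be pulled apart with plain bash string slicing; a sketch for the example above (position meanings from lvs(8)):

```shell
# Split an lvs Attr string into a few of its named positions.
attr="rwi-aor---"
printf 'type=%s perm=%s alloc=%s state=%s open=%s target=%s\n' \
  "${attr:0:1}" "${attr:1:1}" "${attr:2:1}" "${attr:4:1}" "${attr:5:1}" "${attr:6:1}"
# prints: type=r perm=w alloc=i state=a open=o target=r
```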

lvdisplay

lvrename

lvrename groupname oldname newname


pvs

Show the physical volumes

pvscan

pvresize

Use after the underlying disk or partition has changed size

vgrename

lsblk

partprobe

lvchange

vgchange

dmsetup

vgs

create physical volume

pvcreate /dev/sda3

 

lvcreate

lvcreate -L12G -nmyvol myvolumegroup
lvcreate -l 100%FREE -nmyvol myvolumegroup

lvresize

Grow filesystem together with the volume

lvresize --resizefs -L+20G /dev/vg/foo

or grow to all remaining space vg:

lvresize -l +100%FREE /dev/myvg/myvol

 

 

lvremove

To remove all volumes in group VGname

lvremove VGname

To remove a volume

lvremove VGname/LVname
      

Do you really want to remove active logical volume

Deactivate the volume first, just to make sure:

lvchange -a n vgname/lvname

 

Logical volume X/Y contains a filesystem in use

https://www.thegeekdiary.com/lvremove-command-fails-with-error-lvm-cant-remove-open-logical-volume/

Could be NFS. Always remember NFS! If NFS is involved, restarting the NFS service will most likely fix this.

pvck

To find the metadata:

pvck /dev/sdb1

 

LVM snapshot

lvcreate --size 1G --snapshot --name snap-1 /dev/myvg/mylv

where the size must be large enough to hold all changes written while the snapshot exists; if it fills up, the snapshot becomes invalid

Show snapshot information

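A sketch of watching and growing a snapshot before it fills (names follow the lvcreate example above; requires root):

```shell
# Data% shows how full the snapshot's copy-on-write space is.
lvs -o lv_name,origin,data_percent myvg

# Grow the snapshot before it hits 100% and becomes invalid.
lvextend -L +1G /dev/myvg/snap-1
```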

HOWTO

List striped volumes

lvs -o+lv_layout,stripes
lvdisplay -m

Extend striped volume

Not trivial: lvextend needs free extents on as many PVs as the volume has stripes. See the "Insufficient suitable allocatable extents" FAQ entry below for the -i1 workaround.

Convert linear logical volume to striped

See the Links section; the thegeekdiary.com article on converting a volume to stripe (RAID0) seems most useful.
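The always-safe route is to build a new striped LV and copy onto it rather than converting in place; a sketch with hypothetical names (myvg/data, two PVs with enough free space):

```shell
# New 2-stripe LV alongside the old linear one (must be >= the old LV's size).
lvcreate -L 100G -i 2 -n data_striped myvg /dev/sdb1 /dev/sdc1

# Block-copy with the old LV unmounted, then swap the names.
dd if=/dev/myvg/data of=/dev/myvg/data_striped bs=4M
lvrename myvg data data_old
lvrename myvg data_striped data
```

Once the new volume checks out, lvremove the old one to reclaim its extents.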

Volume groups

Create volume group

vgcreate vgname /dev/sdc1 /dev/sdd1

Add disk to volume group

pvcreate /dev/sdc
vgextend MYVG /dev/sdc

And if you need space right now:

lvextend -l +100%FREE /dev/MYVG/mylv

And then grow the filesystem.
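With -r (--resizefs), lvextend grows the filesystem in the same step, so no separate resize2fs/xfs_growfs call is needed:

```shell
# Extend the LV and its filesystem together.
lvextend -r -l +100%FREE /dev/MYVG/mylv
```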

Remove volume group

Deactivate the volume group:

vgchange -a n my_volume_group

Now you actually remove the volume group:

vgremove my_volume_group

Remove physical drive from a volume group

Make sure the data fits on the remaining drives, then

pvmove /dev/sdbX

When you get "No data to move for vg_sdg" that means pvmove is already done or not needed

vgreduce myvg /dev/sdbX

If you get "still in use" you might have to run pvmove again

FAQ

Insufficient suitable allocatable extents for logical volume

Probably a striped volume across multiple PVs; see the fuller entry at the end of this FAQ.

Access logical volumes within logical volume

partprobe /dev/mapper/vg-mydata
lsblk

This will show the nested partitions/volumes. Then edit /etc/lvm/lvm.conf:

filter = [ "a|.*/|", "a|mydata|","r|.*|" ]

Then run:

vgscan
lvscan
vgs

Now you should see the names of the volumes you're looking for, so now:

vgchange -a y guestsname_mydata-home

and then you should be able to

mount /dev/mapper/guestsname_mydata-home

When done, remember to change back the filter in lvm.conf, default is

filter = [ "a|.*/|" ]

and of course then once again

vgscan
lvscan

 

vgreduce Can't remove final physical volume

Means you're trying to remove the last physical volume in the group; instead just use

vgremove

grow logical volume

https://www.tldp.org/HOWTO/LVM-HOWTO/extendlv.html

lvextend -L+100G /dev/myvg/myvol



lvremove: Logical volume vg-kvm/vps-snapshot is used by another device.

Could be kpartx partition mappings; look in

/dev/mapper/

and remove them with kpartx -d.

lvremove Logical volume foo/bar in use

check with lsof, fuser and:

losetup -l

to see if a /dev/dm-* looks familiar

 

Some say to lvchange -an the snapshot first, but that deactivates the origin LV it's connected to as well.

OR

dmsetup info -c

http://blog.roberthallam.org/2017/12/solved-logical-volume-is-used-by-another-device/comment-page-1/

 

lvremove Do you really want to remove and DISCARD active logical volume

If you like, deactivate the volume first:

lvchange -an vgname/lvname

 

Grow physical volume

Assuming your LVM partition is the last one on the disk: use fdisk to delete and recreate it larger, set its type back to LVM, and reboot. Then run pvresize /dev/sdaX.

Or just:

pvresize /dev/sdb


pvcreate Can't open /dev/sdg exclusively. Mounted filesystem?

https://blog.hqcodeshop.fi/archives/274-Replacing-physical-drive-for-LVM-pvcreate-Cant-open-dev-exclusively.html

 

pvcreate: Cannot use /dev/sdb: device is partitioned

wipefs --all /dev/sdb

Device /dev/sdb excluded by filter

Check the disk label (a leftover GPT label is a common cause).

Try:

 wipefs -af /dev/sdg


WARNING: Device /dev/dm-17 not initialized in udev database even after waiting 10000000 microseconds.

Try

udevadm trigger

 

wipefs: error: /dev/sdg: probing initialization failed: Device or resource busy

Something still holds the device open; check with lsof, fuser, and dmsetup info -c, and deactivate or detach it first.

Check if volume is in use

dmsetup info -c

Check the 'Open' column

WARNING: PV /dev/sda5 in VG foo-vg is using an old PV header, modify the VG to update.

vgck --updatemetadata foo-vg

 

Mount logical volume from disk image

See also https://backdrift.org/mounting-a-file-system-on-a-partition-inside-of-an-lvm-volume

kpartx -av /path/to.img
lvscan
mount /dev/mapper/what-ever-var /mnt/loop
umount
kpartx -d /path/to.img


Error reading device sdb

After pvremoving a disk:

wipefs -a /dev/sdb


pvmove: Cluster mirror log daemon is not running

Useless message, but if pvmove is failing: did you shrink your LV yet?

Insufficient suitable allocatable extents for logical volume

You're probably trying to extend a striped volume without enough free extents on every stripe PV. Try

lvextend -l +100%FREE -i 1 <VG>/<LV>

(the added extents will be linear, so expect a performance penalty!)