LVM

From DWIKI
Revision as of 11:09, 5 October 2022

Logical Volume Management on Linux

 

Links

* https://www.tecmint.com/take-snapshot-of-logical-volume-and-restore-in-
* Taking a Backup Using Snapshots: http://tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html
* LVM mirrors: https://linoxide.com/identify-linux-lvm-mirror/
* https://www.thegeekdiary.com/how-to-convert-a-volume-to-stripe-raid0-volume-in-lvm/
* Get the most IOPS out of your physical volumes using LVM: https://mydbops.wordpress.com/2019/09/08/get-the-most-iops-out-of-your-hard-disk-mounts-using-lvm/
* Gentoo wiki on LVM: https://wiki.gentoo.org/wiki/LVM/en

NOTE: remember to add snapshot support when calling lvcreate

Raid on LVM

 

 

LVM Thin provisioning

Show some more about thin volumes

lvs -a

Related commands

lvs

lvdisplay

lvrename

lvrename groupname oldname newname


pvs

Show the physical volumes
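pvs output is easy to post-process. A minimal sketch summing free space across PVs — the sample output below is fabricated for illustration, only the `pvs --noheadings --units m -o pv_name,pv_free` invocation it stands in for is real:

```shell
# Sketch: sum free space across physical volumes. On a real system,
# replace the fabricated sample with:
#   pvs --noheadings --units m -o pv_name,pv_free
sample='  /dev/sda2  1024.00m
  /dev/sdb1  2048.00m'
printf '%s\n' "$sample" | awk '{ sub(/m$/, "", $2); total += $2 }
                               END { printf "%.0fm free\n", total }'
```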

pvscan

pvresize

Use after changing the disk/partition size

vgrename

lsblk

partprobe

lvchange

vgchange

dmsetup

vgs

create physical volume

pvcreate /dev/sda3

 

lvcreate

lvcreate -L12G -nmyvol myvolumegroup
lvcreate -l 100%FREE -nmyvol myvolumegroup

lvresize

Grow the filesystem together with the volume:

lvresize --resizefs -L+20G /dev/vg/foo

or shrink volume and filesystem by 2G:

lvresize --resizefs -L-2G /dev/vg/foo

or grow to use all remaining space in the VG:

lvresize -l +100%FREE /dev/myvg/myvol

 

 

lvremove

To remove all volumes in group VGname

lvremove VGname

To remove a volume

lvremove VGname/LVname
      

Do you really want to remove active logical volume

lvchange -a n vgname/lvname

just to make sure

 

Logical volume X/Y contains a filesystem in use

https://www.thegeekdiary.com/lvremove-command-fails-with-error-lvm-cant-remove-open-logical-volume/

Could be NFS. Always remember NFS! If NFS has indeed been involved, restarting nfs service will most likely fix this.

pvck

To find the metadata:

pvck /dev/sdb1

 

LVM snapshot

lvcreate --size 1G --snapshot --name snap-1 /dev/myvg/mylv

where the size must be big enough to hold the data that changes on the origin while the snapshot exists (if it fills up, the snapshot becomes invalid)
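The sizing note above can be turned into a rule of thumb. A minimal sketch, assuming a 20% / 1 GiB-floor convention — a common habit, not anything LVM mandates:

```shell
# Sketch: pick a snapshot CoW size as 20% of the origin, with a 1 GiB floor.
# The 20% / 1 GiB numbers are a rule of thumb, not an LVM requirement: the
# snapshot only needs room for blocks that change while it exists.
snap_size_mib() {
    origin_mib=$1
    size=$(( origin_mib / 5 ))
    if [ "$size" -lt 1024 ]; then size=1024; fi
    echo "$size"
}
snap_size_mib 102400    # 100 GiB origin -> 20480 (20 GiB)
```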

 

HOWTO

Extend striped volume

See https://web.mit.edu/rhel-doc/5/RHEL-5-manual/Cluster_Logical_Volume_Manager/stripe_extend.html

Convert linear logical volume to striped

See:

* https://www.thegeekdiary.com/how-to-convert-a-volume-to-stripe-raid0-volume-in-lvm/
* https://robbat2.livejournal.com/243144.html
* http://www.voleg.info/lvm2-convert-stripe-volume.html
* https://www.depesz.com/2015/10/08/converting-logical-volume-so-that-its-striped/
* https://www.handigeknakker.nl/?x=entry:entry160330-193558 (seems most useful)

Volume groups

Add disk to volume group

pvcreate /dev/sdc
vgextend MYVG /dev/sdc

And if you need space right now:

lvextend -l +100%FREE /dev/MYVG/mylv

And then grow the filesystem (e.g. resize2fs / xfs_growfs)
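The steps above as one dry-run script — commands are echoed instead of executed so it can be read (and tested) without root; drop the echos to run it for real. MYVG, mylv and /dev/sdc are the example names used in this section:

```shell
# Sketch: add a disk to a VG and hand all new space to one LV (dry run).
# --resizefs makes lvextend grow the filesystem in the same step, instead
# of running resize2fs/xfs_growfs afterwards.
add_disk_to_vg() {
    vg=$1; dev=$2; lv=$3
    echo pvcreate "$dev"
    echo vgextend "$vg" "$dev"
    echo lvextend --resizefs -l +100%FREE "/dev/$vg/$lv"
}
add_disk_to_vg MYVG /dev/sdc mylv
```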

Remove volume group

Deactivate the volume group:

vgchange -a n my_volume_group

Now you actually remove the volume group:

vgremove my_volume_group

Remove physical drive from a volume group

See https://www.2daygeek.com/linux-remove-delete-physical-volume-pv-from-volume-group-vg-in-lvm/
Make sure the data fits on the remaining drives, then:

pvmove /dev/sdbX

When you get "No data to move for vg_sdg", pvmove has already finished or was not needed.

vgreduce myvg /dev/sdbX

FAQ

Access logical volumes within logical volume

partprobe /dev/mapper/vg-mydata
lsblk

This will show the nested partitions/volumes. Then edit /etc/lvm/lvm.conf:

filter = [ "a|.*/|", "a|mydata|","r|.*|" ]

Then run:

vgscan
lvscan
vgs

Now you should see the names of the volumes you're looking for, so now:

vgchange -a y guestsname_mydata-home

and then you should be able to

mount /dev/mapper/guestsname_mydata-home

When done, remember to change back the filter in lvm.conf, default is

filter = [ "a|.*/|" ]

and of course then once again

vgscan
lvscan

 

vgreduce Can't remove final physical volume

This means you're trying to remove the last physical volume in the group. Instead, just use

vgremove

grow logical volume

https://www.tldp.org/HOWTO/LVM-HOWTO/extendlv.html

lvextend -L+100G /dev/myvg/myvol



lvremove: Logical volume vg-kvm/vps-snapshot is used by another device.

Could be kpartx, see

/dev/mapper/

lvremove Logical volume foo/bar in use

Check with lsof, fuser, and:

losetup -l

to see if a /dev/dm-* looks familiar

 

Some say to lvchange -an the snapshot first, but that deactivates the origin LV it's connected to as well.

OR

dmsetup info -c

http://blog.roberthallam.org/2017/12/solved-logical-volume-is-used-by-another-device/comment-page-1/

 

lvremove Do you really want to remove and DISCARD active logical volume

If you like, deactivate the volume first:

lvchange -an vgname/lvname

 

Grow physical volume

Assuming your LVM partition is the last one on the disk, use fdisk to delete and recreate it at the larger size; remember to set the partition type to LVM again, and reboot. Then use pvresize /dev/sdaX

Or just:

pvresize /dev/sdb

Create volume group

vgcreate vgname /dev/sdc1 /dev/sdd1

pvcreate Can't open /dev/sdg exclusively. Mounted filesystem?

https://blog.hqcodeshop.fi/archives/274-Replacing-physical-drive-for-LVM-pvcreate-Cant-open-dev-exclusively.html

 

pvcreate: Cannot use /dev/sdb: device is partitioned

wipefs --all /dev/sdb

Device /dev/sdb excluded by filter

Check the disk label (GPT!).

Try:

 wipefs -af /dev/sdg

 

 

WARNING: Device /dev/dm-17 not initialized in udev database even after waiting 10000000 microseconds.

Try

udevadm trigger

 

wipefs: error: /dev/sdg: probing initialization failed: Device or resource busy

Try all the usual ways to make the device un-busy (unmount, lvchange -an, detach loop devices).

Check if volume is in use

dmsetup info -c

Check the 'Open' column
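Pulling out the in-use mappings can be scripted. A sketch — the sample output below is fabricated for illustration (real column widths vary); on a real system pipe `dmsetup info -c` itself. Open > 0 means something still holds the mapping:

```shell
# Sketch: print mappings with a non-zero 'Open' count ($5 in the
# Name Maj Min Stat Open Targ Event UUID layout of `dmsetup info -c`).
sample='Name             Maj Min Stat Open Targ Event  UUID
vg--kvm-vps      253   2 L--w    1    1      0 LVM-aaaa
vg--kvm-spare    253   3 L--w    0    1      0 LVM-bbbb'
printf '%s\n' "$sample" | awk 'NR > 1 && $5 > 0 { print $1 }'
```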

WARNING: PV /dev/sda5 in VG foo-vg is using an old PV header, modify the VG to update.

vgck --updatemetadata foo-vg

 

Mount logical volume from disk image

See also https://backdrift.org/mounting-a-file-system-on-a-partition-inside-of-an-lvm-volume

kpartx -av /path/to.img
lvscan
mount /dev/mapper/what-ever-var /mnt/loop
umount /mnt/loop
kpartx -d /path/to.img
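For a plain (non-LVM) partition inside an image you can skip kpartx and mount directly with an offset: the byte offset is the partition's start sector (from `fdisk -l /path/to.img`) times the sector size. A sketch, assuming the usual 512-byte sectors — LVM volumes inside the image still need the kpartx/lvscan route above:

```shell
# Sketch: byte offset to pass as `mount -o loop,offset=...` for a
# partition starting at a given sector. Assumes 512-byte sectors.
part_offset() {
    echo $(( $1 * 512 ))
}
part_offset 2048    # common first-partition start -> 1048576 bytes
```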


Error reading device sdb

After pvremoving a disk:

wipefs -a /dev/sdb


pvmove: Cluster mirror log daemon is not running

Useless message, but if pvmove is failing: did you shrink your LV yet?