LVM usage examples in Linux

Let's check what disks we have:

$ sudo lsblk -p
Output:
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
/dev/sr0                 11:0    1 1024M  0 rom  
/dev/vda                252:0    0   20G  0 disk 
├─/dev/vda1             252:1    0    1G  0 part /boot
└─/dev/vda2             252:2    0   19G  0 part 
  ├─/dev/mapper/cs-root 253:0    0   17G  0 lvm  /
  └─/dev/mapper/cs-swap 253:1    0    2G  0 lvm  [SWAP]
/dev/vdb                252:16   0    5G  0 disk 
/dev/vdc                252:32   0    5G  0 disk 
/dev/vdd                252:48   0   10G  0 disk 
/dev/vde                252:64   0   10G  0 disk
As you can see, we have four additional disks (/dev/vdb, /dev/vdc, /dev/vdd, /dev/vde) that we can use for our LVM examples.

Other commands to see disk details:
df -h
fdisk -l
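If you also want to see filesystem types, labels and UUIDs, the standard util-linux tools lsblk -f and blkid print them; this is optional, just another way to inspect the disks:
lsblk -f
blkid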
Let's create a Physical Volume:
# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created.
List the PVs:
# pvs
  PV         VG Fmt  Attr PSize   PFree
  /dev/vda2  cs lvm2 a--  <19.00g    0 
  /dev/vdb      lvm2 ---    5.00g 5.00g
Create a Volume Group:
# vgcreate vg_app /dev/vdb
  Volume group "vg_app" successfully created
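By default vgcreate uses a 4 MiB physical extent size (we will meet this value again later). If you ever need a different extent size, vgcreate accepts the -s option; this is only a sketch with placeholder names (vg_example, /dev/vdX) and we keep the default in these examples:
# vgcreate -s 16M vg_example /dev/vdX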
View our VGs:
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree 
  cs       1   2   0 wz--n- <19.00g     0 
  vg_app   1   0   0 wz--n-  <5.00g <5.00g
Look again at PVs:
# pvs
  PV         VG     Fmt  Attr PSize   PFree 
  /dev/vda2  cs     lvm2 a--  <19.00g     0 
  /dev/vdb   vg_app lvm2 a--   <5.00g <5.00g
Create a Logical Volume inside this VG:
# lvcreate -L 2G -n lv_data vg_app
  Logical volume "lv_data" created.
View Logical Volumes:
# lvs
  LV      VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    cs     -wi-ao---- <17.00g                                                    
  swap    cs     -wi-ao----   2.00g                                                    
  lv_data vg_app -wi-a-----   2.00g
There is another way to look at the LVs:
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg_app/lv_data
  LV Name                lv_data
  VG Name                vg_app
  LV UUID                z4C8Y0-ur33-ZpZi-DfdD-cIdr-cnCJ-sfkGha
  LV Write Access        read/write
  LV Creation host, time c9, 2023-04-29 19:54:56 +0300
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/cs/swap
  LV Name                swap
  VG Name                cs
  LV UUID                vkrfjg-9IVE-HoqM-L2SJ-Xh53-LyBd-qhgqe9
  LV Write Access        read/write
  LV Creation host, time c9, 2022-01-01 23:31:18 +0200
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/cs/root
  LV Name                root
  VG Name                cs
  LV UUID                AetdBL-cU7b-ukDh-w2XJ-lEVA-CRfE-q8MH3I
  LV Write Access        read/write
  LV Creation host, time c9, 2022-01-01 23:31:18 +0200
  LV Status              available
  # open                 1
  LV Size                <17.00 GiB
  Current LE             4351
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
Notice the LV Path, which can be used to create the filesystem on this LV:
# mkfs.ext4 /dev/vg_app/lv_data
mke2fs 1.46.2 (28-Feb-2021)
Discarding device blocks: done                            
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 7c3bce79-b22d-4949-8d7c-87bbd79420ba
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
Create a directory and mount the LV:
# mkdir /data
# mount /dev/vg_app/lv_data /data
# df -h /data
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_app-lv_data  2.0G   24K  1.8G   1% /data
Let's create another LV named lv_app and give it 500M of space:
# lvcreate -L 500M -n lv_app vg_app
  Logical volume "lv_app" created.
# lvs
  LV      VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    cs     -wi-ao---- <17.00g                                                    
  swap    cs     -wi-ao----   2.00g                                                    
  lv_app  vg_app -wi-a----- 500.00m                                                    
  lv_data vg_app -wi-ao----   2.00g
Make a filesystem and a mount point for this new LV:
# mkfs.ext4 /dev/vg_app/lv_app
mke2fs 1.46.2 (28-Feb-2021)
Discarding device blocks: done                            
Creating filesystem with 512000 1k blocks and 128016 inodes
Filesystem UUID: 93699dfc-1b8d-4610-ac3c-cefb20b87f3a
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done 

# mkdir /app
Let's include this filesystem in fstab so it is mounted at boot. Open /etc/fstab and add this new line:
/dev/vg_app/lv_app /app ext4 defaults 0 0
Save and mount it:
# mount /app
# df -h /app
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg_app-lv_app  474M   14K  445M   1% /app
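Before relying on the new fstab entry for the next boot, it is a good idea to test it: mount -a asks the system to mount everything in /etc/fstab that is not already mounted, so a typo in the new line shows up now instead of at boot time:
# mount -a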
Look at the devices:
# ls -l /dev/vg_app/lv_app 
lrwxrwxrwx. 1 root root 7 Apr 29 20:03 /dev/vg_app/lv_app -> ../dm-3
Let's look at the other path:
# ls -la /dev/mapper/vg_app-lv_app
lrwxrwxrwx. 1 root root 7 Apr 29 20:03 /dev/mapper/vg_app-lv_app -> ../dm-3
Both symlinks resolve to the same device node:
# ls -la /dev/dm-3
brw-rw----. 1 root disk 253, 3 Apr 29 20:03 /dev/dm-3
Be aware that this /dev/dm-N path can change across reboots, so don't refer to it directly when mounting or configuring /etc/fstab and so on. Use the LV Path that lvdisplay shows instead:
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg_app/lv_data
  LV Name                lv_data
  VG Name                vg_app
  LV UUID                z4C8Y0-ur33-ZpZi-DfdD-cIdr-cnCJ-sfkGha
  LV Write Access        read/write
  LV Creation host, time c9, 2023-04-29 19:54:56 +0300
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/vg_app/lv_app
  LV Name                lv_app
  VG Name                vg_app
  LV UUID                JtrIVw-1z2V-DxNb-ZuCm-IHV3-hpDR-A9HIYk
  LV Write Access        read/write
  LV Creation host, time c9, 2023-04-29 20:01:52 +0300
  LV Status              available
  # open                 1
  LV Size                500.00 MiB
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/cs/swap
  LV Name                swap
  VG Name                cs
  LV UUID                vkrfjg-9IVE-HoqM-L2SJ-Xh53-LyBd-qhgqe9
  LV Write Access        read/write
  LV Creation host, time c9, 2022-01-01 23:31:18 +0200
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/cs/root
  LV Name                root
  VG Name                cs
  LV UUID                AetdBL-cU7b-ukDh-w2XJ-lEVA-CRfE-q8MH3I
  LV Write Access        read/write
  LV Creation host, time c9, 2022-01-01 23:31:18 +0200
  LV Status              available
  # open                 1
  LV Size                <17.00 GiB
  Current LE             4351
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
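Both /dev/vg_app/lv_app and /dev/mapper/vg_app-lv_app are stable symlinks maintained by LVM, so either one is safe to use in /etc/fstab. If you prefer mounting by filesystem UUID instead, blkid (a standard util-linux tool) will print it:
# blkid /dev/vg_app/lv_app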
Let's see how much space remains in the VG:
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree 
  cs       1   2   0 wz--n- <19.00g     0 
  vg_app   1   2   0 wz--n-  <5.00g <2.51g
Let's create an LV using the remaining 2.51G. If we try to create it with the same method, we get an error:
# lvcreate -L 2.51G -n lv_logs vg_app
  Rounding up size to full physical extent 2.51 GiB
  Volume group "vg_app" has insufficient free space (642 extents): 643 required.
Let's find out why we are getting this error.
LVM adds another layer of abstraction here: an LV is actually divided into Logical Extents (LEs), and a collection of Logical Extents makes up a Logical Volume. This is how LVM is able to extend or shrink an LV: it simply changes the number of LEs.
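As a quick sanity check of this idea: lv_data was reported above with Current LE 512, and with the default 4 MiB extent size (we will confirm this size below with pvdisplay) that gives 512 * 4 MiB = 2048 MiB = 2 GiB, exactly the LV Size shown for lv_data.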

To view information about these LEs, use the lvdisplay command:
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg_app/lv_data
  LV Name                lv_data
  VG Name                vg_app
  LV UUID                z4C8Y0-ur33-ZpZi-DfdD-cIdr-cnCJ-sfkGha
  LV Write Access        read/write
  LV Creation host, time c9, 2023-04-29 19:54:56 +0300
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/vg_app/lv_app
  LV Name                lv_app
  VG Name                vg_app
  LV UUID                JtrIVw-1z2V-DxNb-ZuCm-IHV3-hpDR-A9HIYk
  LV Write Access        read/write
  LV Creation host, time c9, 2023-04-29 20:01:52 +0300
  LV Status              available
  # open                 1
  LV Size                500.00 MiB
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/cs/swap
  LV Name                swap
  VG Name                cs
  LV UUID                vkrfjg-9IVE-HoqM-L2SJ-Xh53-LyBd-qhgqe9
  LV Write Access        read/write
  LV Creation host, time c9, 2022-01-01 23:31:18 +0200
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/cs/root
  LV Name                root
  VG Name                cs
  LV UUID                AetdBL-cU7b-ukDh-w2XJ-lEVA-CRfE-q8MH3I
  LV Write Access        read/write
  LV Creation host, time c9, 2022-01-01 23:31:18 +0200
  LV Status              available
  # open                 1
  LV Size                <17.00 GiB
  Current LE             4351
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
Look at the line:
  Current LE             125
This is the number of LEs that make up the LV.

Use lvdisplay -m to show a map of these LEs:
# lvdisplay -m
  --- Logical volume ---
  LV Path                /dev/vg_app/lv_data
  LV Name                lv_data
  VG Name                vg_app
  LV UUID                z4C8Y0-ur33-ZpZi-DfdD-cIdr-cnCJ-sfkGha
  LV Write Access        read/write
  LV Creation host, time c9, 2023-04-29 19:54:56 +0300
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Segments ---
  Logical extents 0 to 511:
    Type		linear
    Physical volume	/dev/vdb
    Physical extents	0 to 511
   
   
  --- Logical volume ---
  LV Path                /dev/vg_app/lv_app
  LV Name                lv_app
  VG Name                vg_app
  LV UUID                JtrIVw-1z2V-DxNb-ZuCm-IHV3-hpDR-A9HIYk
  LV Write Access        read/write
  LV Creation host, time c9, 2023-04-29 20:01:52 +0300
  LV Status              available
  # open                 1
  LV Size                500.00 MiB
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
   
  --- Segments ---
  Logical extents 0 to 124:
    Type		linear
    Physical volume	/dev/vdb
    Physical extents	512 to 636
   
   
  --- Logical volume ---
  LV Path                /dev/cs/swap
  LV Name                swap
  VG Name                cs
  LV UUID                vkrfjg-9IVE-HoqM-L2SJ-Xh53-LyBd-qhgqe9
  LV Write Access        read/write
  LV Creation host, time c9, 2022-01-01 23:31:18 +0200
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Segments ---
  Logical extents 0 to 511:
    Type		linear
    Physical volume	/dev/vda2
    Physical extents	0 to 511
   
   
  --- Logical volume ---
  LV Path                /dev/cs/root
  LV Name                root
  VG Name                cs
  LV UUID                AetdBL-cU7b-ukDh-w2XJ-lEVA-CRfE-q8MH3I
  LV Write Access        read/write
  LV Creation host, time c9, 2022-01-01 23:31:18 +0200
  LV Status              available
  # open                 1
  LV Size                <17.00 GiB
  Current LE             4351
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Segments ---
  Logical extents 0 to 4350:
    Type		linear
    Physical volume	/dev/vda2
    Physical extents	512 to 4862
So, for example we have:
  --- Segments ---
  Logical extents 0 to 124:
    Type		linear
    Physical volume	/dev/vdb
    Physical extents	512 to 636
This tells us that the LEs for the LV lv_app reside on the /dev/vdb disk.

Just like an LV is divided into LEs, a PV is divided into PEs, which stands for Physical Extents. There is a one-to-one mapping between LEs and PEs.

PEs can be viewed using pvdisplay -m:
# pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               vg_app
  PV Size               5.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              1279
  Free PE               642
  Allocated PE          637
  PV UUID               rg4S4w-hiqg-d8Uu-gczP-UffV-xWNn-BSbMXb
   
  --- Physical Segments ---
  Physical extent 0 to 511:
    Logical volume	/dev/vg_app/lv_data
    Logical extents	0 to 511
  Physical extent 512 to 636:
    Logical volume	/dev/vg_app/lv_app
    Logical extents	0 to 124
  Physical extent 637 to 1278:
    FREE
   
  --- Physical volume ---
  PV Name               /dev/vda2
  VG Name               cs
  PV Size               <19.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4863
  Free PE               0
  Allocated PE          4863
  PV UUID               vOZhCT-9efu-p8Ng-ovnP-AZfG-2kcQ-i8fkKa
   
  --- Physical Segments ---
  Physical extent 0 to 511:
    Logical volume	/dev/cs/swap
    Logical extents	0 to 511
  Physical extent 512 to 4862:
    Logical volume	/dev/cs/root
    Logical extents	0 to 4350
So we have:
  --- Physical Segments ---
  Physical extent 0 to 511:
    Logical volume	/dev/vg_app/lv_data
    Logical extents	0 to 511
  Physical extent 512 to 636:
    Logical volume	/dev/vg_app/lv_app
    Logical extents	0 to 124
  Physical extent 637 to 1278:
    FREE
PEs 0 to 511 belong to the lv_data LV and correspond to LEs 0 to 511 of that LV.
PEs 512 to 636 belong to the lv_app LV and correspond to LEs 0 to 124 of that LV.
Finally, PEs 637 to 1278 are unused - this is the space we want to use for our new LV.

If we back up just a bit we can see:
  PE Size               4.00 MiB
The size of each PE (and each LE) is 4 MiB.

Just below that is the total number of PEs:
  Total PE              1279
If we do the math, 1279 * 4 MiB comes out to roughly the PV size:
  PV Size               5.00 GiB / not usable 4.00 MiB
Notice the part that says not usable. This 4 MiB is unusable because of LVM metadata, the alignment of disk sectors and so on. More or less, think of it as a rounding error - it is insignificant.
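To spell the arithmetic out in MiB (since the extent size is 4 MiB):
  1279 PE * 4 MiB = 5116 MiB
  5.00 GiB        = 5120 MiB
  5120 MiB - 5116 MiB = 4 MiB, which is exactly the "not usable" amount reported above.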

Let's go back to that lvcreate error and reproduce it:
# lvcreate -L 2.51G -n lv_logs vg_app
  Rounding up size to full physical extent 2.51 GiB
  Volume group "vg_app" has insufficient free space (642 extents): 643 required.
Now you know why this message appears and what it means: we are asking for 2.51G of space, which works out to 643 extents of 4 MiB each, but we only have 642 extents available to allocate.
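A quick check of the numbers: 2.51 GiB = 2570.24 MiB, and 2570.24 / 4 = 642.56, which LVM rounds up to 643 extents (2572 MiB). The VG only has 642 free extents (2568 MiB, which is why vgs shows <2.51g free).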

To check the number of free PEs again, use the vgdisplay command:
# vgdisplay 
  --- Volume group ---
  VG Name               vg_app
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       637 / <2.49 GiB
  Free  PE / Size       642 / <2.51 GiB
  VG UUID               LOy2wE-HX0N-oTfF-QiCT-1eBV-7Qmx-xiy4wg
   
  --- Volume group ---
  VG Name               cs
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19.00 GiB
  PE Size               4.00 MiB
  Total PE              4863
  Alloc PE / Size       4863 / <19.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               F41ErW-b74A-6pHd-Qg9H-140G-zDoA-cq9uRx
So, it is:
  Free  PE / Size       642 / <2.51 GiB
At this point we can use the -l option of the lvcreate command to specify the size as a number of extents. Run:
# lvcreate --help
and in the output you will find:
[ -l|--extents Number[PERCENT] ]
So we can run:
# lvcreate -l 642 -n lv_logs vg_app
Or we can use PERCENT:
# lvcreate -l100%FREE -n lv_logs vg_app
  Logical volume "lv_logs" created.
# lvs
  LV      VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    cs     -wi-ao---- <17.00g                                                    
  swap    cs     -wi-ao----   2.00g                                                    
  lv_app  vg_app -wi-ao---- 500.00m                                                    
  lv_data vg_app -wi-ao----   2.00g                                                    
  lv_logs vg_app -wi-a-----  <2.51g
This is exactly how you can squeeze every last bit of space out of your VG and put it into an LV.

Next, let's say that lv_data is getting full and we need to extend it. If we look at the VG we can see that we have allocated all of the free space to the LVs:
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  cs       1   2   0 wz--n- <19.00g    0 
  vg_app   1   3   0 wz--n-  <5.00g    0
So VFree is equal to 0.

In this case, we need to extend the VG itself before we can extend the LV within it.
So, let's repeat our initial process by scanning for disks:
# lvmdiskscan 
  /dev/vda2 [     <19.00 GiB] LVM physical volume
  /dev/vdb  [       5.00 GiB] LVM physical volume
  0 disks
  0 partitions
  1 LVM physical volume whole disk
  1 LVM physical volume
# lsblk -p
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
/dev/sr0                      11:0    1 1024M  0 rom  
/dev/vda                     252:0    0   20G  0 disk 
├─/dev/vda1                  252:1    0    1G  0 part /boot
└─/dev/vda2                  252:2    0   19G  0 part 
  ├─/dev/mapper/cs-root      253:0    0   17G  0 lvm  /
  └─/dev/mapper/cs-swap      253:1    0    2G  0 lvm  [SWAP]
/dev/vdb                     252:16   0    5G  0 disk 
├─/dev/mapper/vg_app-lv_data 253:2    0    2G  0 lvm  /data
├─/dev/mapper/vg_app-lv_app  253:3    0  500M  0 lvm  /app
└─/dev/mapper/vg_app-lv_logs 253:4    0  2.5G  0 lvm  
/dev/vdc                     252:32   0    5G  0 disk 
/dev/vdd                     252:48   0   10G  0 disk 
/dev/vde                     252:64   0   10G  0 disk
# df -ha
Filesystem                  Size  Used Avail Use% Mounted on
proc                           0     0     0    - /proc
sysfs                          0     0     0    - /sys
devtmpfs                    964M     0  964M   0% /dev
securityfs                     0     0     0    - /sys/kernel/security
tmpfs                       984M     0  984M   0% /dev/shm
devpts                         0     0     0    - /dev/pts
tmpfs                       394M   12M  383M   3% /run
cgroup2                        0     0     0    - /sys/fs/cgroup
pstore                         0     0     0    - /sys/fs/pstore
none                           0     0     0    - /sys/fs/bpf
/dev/mapper/cs-root          17G  4.8G   13G  28% /
selinuxfs                      0     0     0    - /sys/fs/selinux
systemd-1                      0     0     0    - /proc/sys/fs/binfmt_misc
mqueue                         0     0     0    - /dev/mqueue
debugfs                        0     0     0    - /sys/kernel/debug
hugetlbfs                      0     0     0    - /dev/hugepages
tracefs                        0     0     0    - /sys/kernel/tracing
fusectl                        0     0     0    - /sys/fs/fuse/connections
configfs                       0     0     0    - /sys/kernel/config
/dev/vda1                  1014M  190M  825M  19% /boot
sunrpc                         0     0     0    - /var/lib/nfs/rpc_pipefs
tmpfs                       197M  108K  197M   1% /run/user/1000
gvfsd-fuse                  0.0K  0.0K  0.0K    - /run/user/1000/gvfs
/dev/mapper/vg_app-lv_data  2.0G   24K  1.8G   1% /data
/dev/mapper/vg_app-lv_app   474M   14K  445M   1% /app
So, it looks like the following disks are available:
/dev/vdc
/dev/vdd
/dev/vde
Let's use /dev/vdc and run pvcreate on it:
# pvcreate /dev/vdc
  Physical volume "/dev/vdc" successfully created.
Now we can add this PV to our VG with the vgextend command:
# vgextend vg_app /dev/vdc
  Volume group "vg_app" successfully extended
Look at the status:
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree 
  cs       1   2   0 wz--n- <19.00g     0 
  vg_app   2   3   0 wz--n-   9.99g <5.00g
Under the #PV column we now have 2 PVs, and at the end VFree shows <5.00g.
If we look at pvs we can see that both PVs belong to the same VG, vg_app:
# pvs
  PV         VG     Fmt  Attr PSize   PFree 
  /dev/vda2  cs     lvm2 a--  <19.00g     0 
  /dev/vdb   vg_app lvm2 a--   <5.00g     0 
  /dev/vdc   vg_app lvm2 a--   <5.00g <5.00g
We can also see that one PV is completely full and the other has <5.00g of free space; look at the PFree column.
Before extending, let's look at the filesystem size:
# df -h /data
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_app-lv_data  2.0G   24K  1.8G   1% /data
Notice that the size is 2G. Now, use the lvextend command to add 512M of space to that LV. In addition to growing the LV, we need to grow the filesystem on it to fill the new space; for that, use the -r option:
# lvextend -L +512M -r /dev/vg_app/lv_data
  Size of logical volume vg_app/lv_data changed from 2.00 GiB (512 extents) to 2.50 GiB (640 extents).
  Logical volume vg_app/lv_data successfully resized.
resize2fs 1.46.2 (28-Feb-2021)
Filesystem at /dev/mapper/vg_app-lv_data is mounted on /data; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/vg_app-lv_data is now 655360 (4k) blocks long
Now, let's look at the status:
# lvs
  LV      VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    cs     -wi-ao---- <17.00g                                                    
  swap    cs     -wi-ao----   2.00g                                                    
  lv_app  vg_app -wi-ao---- 500.00m                                                    
  lv_data vg_app -wi-ao----   2.50g                                                    
  lv_logs vg_app -wi-a-----  <2.51g
Let's look at the filesystem level:
# df -h /data
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_app-lv_data  2.4G  6.0M  2.3G   1% /data
If you forget the -r option with lvextend, you have to resize the filesystem manually. To demonstrate:
# lvextend -L +512M /dev/vg_app/lv_data
  Size of logical volume vg_app/lv_data changed from 2.50 GiB (640 extents) to 3.00 GiB (768 extents).
  Logical volume vg_app/lv_data successfully resized.
The filesystem did not change:
# df -h /data
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_app-lv_data  2.4G  6.0M  2.3G   1% /data
To fix this, you need to use the resize tool for the specific filesystem. For ext4 it is resize2fs:
# resize2fs /dev/vg_app/lv_data 
resize2fs 1.46.2 (28-Feb-2021)
Filesystem at /dev/vg_app/lv_data is mounted on /data; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/vg_app/lv_data is now 786432 (4k) blocks long.
Check the filesystem size again:
# df -h /data
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_app-lv_data  2.9G  6.0M  2.8G   1% /data
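Note that resize2fs only works for the ext2/3/4 family. If the LV carried an XFS filesystem instead, the equivalent step would be xfs_growfs, which takes the mount point rather than the device; shown here only as a sketch, since we are using ext4 in these examples:
# xfs_growfs /data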
Let's look at the map of this LV specifically:
# lvdisplay -m /dev/vg_app/lv_data 
  --- Logical volume ---
  LV Path                /dev/vg_app/lv_data
  LV Name                lv_data
  VG Name                vg_app
  LV UUID                z4C8Y0-ur33-ZpZi-DfdD-cIdr-cnCJ-sfkGha
  LV Write Access        read/write
  LV Creation host, time c9, 2023-04-29 19:54:56 +0300
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Segments ---
  Logical extents 0 to 511:
    Type		linear
    Physical volume	/dev/vdb
    Physical extents	0 to 511
   
  Logical extents 512 to 767:
    Type		linear
    Physical volume	/dev/vdc
    Physical extents	0 to 255
If you look at the bottom of the output you can see that some of the extents live on /dev/vdb and others on /dev/vdc. Again, this is what makes LVM so powerful and flexible: you can have one filesystem that spans multiple storage devices.
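A quicker way to see which PVs back each LV is the devices reporting field of lvs; this is just an optional shortcut to the same information that lvdisplay -m gives:
# lvs -o +devices vg_app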

Let's create new PVs for a new VG. First, let's see what we have:
# lvmdiskscan 
  /dev/vda2 [     <19.00 GiB] LVM physical volume
  /dev/vdb  [       5.00 GiB] LVM physical volume
  /dev/vdc  [       5.00 GiB] LVM physical volume
  0 disks
  0 partitions
  2 LVM physical volume whole disks
  1 LVM physical volume
# lsblk -p
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
/dev/sr0                      11:0    1 1024M  0 rom  
/dev/vda                     252:0    0   20G  0 disk 
├─/dev/vda1                  252:1    0    1G  0 part /boot
└─/dev/vda2                  252:2    0   19G  0 part 
  ├─/dev/mapper/cs-root      253:0    0   17G  0 lvm  /
  └─/dev/mapper/cs-swap      253:1    0    2G  0 lvm  [SWAP]
/dev/vdb                     252:16   0    5G  0 disk 
├─/dev/mapper/vg_app-lv_data 253:2    0    3G  0 lvm  /data
├─/dev/mapper/vg_app-lv_app  253:3    0  500M  0 lvm  /app
└─/dev/mapper/vg_app-lv_logs 253:4    0  2.5G  0 lvm  
/dev/vdc                     252:32   0    5G  0 disk 
└─/dev/mapper/vg_app-lv_data 253:2    0    3G  0 lvm  /data
/dev/vdd                     252:48   0   10G  0 disk 
/dev/vde                     252:64   0   10G  0 disk 
# df -ha
Filesystem                  Size  Used Avail Use% Mounted on
proc                           0     0     0    - /proc
sysfs                          0     0     0    - /sys
devtmpfs                    964M     0  964M   0% /dev
securityfs                     0     0     0    - /sys/kernel/security
tmpfs                       984M     0  984M   0% /dev/shm
devpts                         0     0     0    - /dev/pts
tmpfs                       394M   12M  383M   3% /run
cgroup2                        0     0     0    - /sys/fs/cgroup
pstore                         0     0     0    - /sys/fs/pstore
none                           0     0     0    - /sys/fs/bpf
/dev/mapper/cs-root          17G  4.8G   13G  28% /
selinuxfs                      0     0     0    - /sys/fs/selinux
systemd-1                      -     -     -    - /proc/sys/fs/binfmt_misc
mqueue                         0     0     0    - /dev/mqueue
debugfs                        0     0     0    - /sys/kernel/debug
hugetlbfs                      0     0     0    - /dev/hugepages
tracefs                        0     0     0    - /sys/kernel/tracing
fusectl                        0     0     0    - /sys/fs/fuse/connections
configfs                       0     0     0    - /sys/kernel/config
/dev/vda1                  1014M  190M  825M  19% /boot
sunrpc                         0     0     0    - /var/lib/nfs/rpc_pipefs
tmpfs                       197M  108K  197M   1% /run/user/1000
gvfsd-fuse                  0.0K  0.0K  0.0K    - /run/user/1000/gvfs
/dev/mapper/vg_app-lv_data  2.9G  6.0M  2.8G   1% /data
/dev/mapper/vg_app-lv_app   474M   14K  445M   1% /app
binfmt_misc                    0     0     0    - /proc/sys/fs/binfmt_misc
So we can see that /dev/vdd and /dev/vde are available. Let's create both PVs at the same time:
# pvcreate /dev/vdd /dev/vde
  Physical volume "/dev/vdd" successfully created.
  Physical volume "/dev/vde" successfully created.
Let's check:
# pvs
  PV         VG     Fmt  Attr PSize   PFree 
  /dev/vda2  cs     lvm2 a--  <19.00g     0 
  /dev/vdb   vg_app lvm2 a--   <5.00g     0 
  /dev/vdc   vg_app lvm2 a--   <5.00g <4.00g
  /dev/vdd          lvm2 ---   10.00g 10.00g
  /dev/vde          lvm2 ---   10.00g 10.00g
Let's create a new VG:
# vgcreate vg_safe /dev/vdd /dev/vde
  Volume group "vg_safe" successfully created
Check status:
# vgs
  VG      #PV #LV #SN Attr   VSize   VFree 
  cs        1   2   0 wz--n- <19.00g     0 
  vg_app    2   3   0 wz--n-   9.99g <4.00g
  vg_safe   2   0   0 wz--n-  19.99g 19.99g
Let's create a mirrored LV: this ensures that an exact copy of the data is stored on 2 different storage devices. We use the option -m 1, where 1 is how many additional copies of our data we want (2 copies in total):
# lvcreate -m 1 -L 512M -n lv_secrets vg_safe
  Logical volume "lv_secrets" created.
Check the status:
# lvs
  LV         VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root       cs      -wi-ao---- <17.00g                                                    
  swap       cs      -wi-ao----   2.00g                                                    
  lv_app     vg_app  -wi-ao---- 500.00m                                                    
  lv_data    vg_app  -wi-ao----   3.00g                                                    
  lv_logs    vg_app  -wi-a-----  <2.51g                                                    
  lv_secrets vg_safe rwi-a-r--- 512.00m                                    100.00
Notice the Cpy%Sync column, which shows what percentage of the data is synchronized between the copies.

Let's look at lvs -a, which stands for all:
# lvs -a
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                  cs      -wi-ao---- <17.00g                                                    
  swap                  cs      -wi-ao----   2.00g                                                    
  lv_app                vg_app  -wi-ao---- 500.00m                                                    
  lv_data               vg_app  -wi-ao----   3.00g                                                    
  lv_logs               vg_app  -wi-a-----  <2.51g                                                    
  lv_secrets            vg_safe rwi-a-r--- 512.00m                                    100.00          
  [lv_secrets_rimage_0] vg_safe iwi-aor--- 512.00m                                                    
  [lv_secrets_rimage_1] vg_safe iwi-aor--- 512.00m                                                    
  [lv_secrets_rmeta_0]  vg_safe ewi-aor---   4.00m                                                    
  [lv_secrets_rmeta_1]  vg_safe ewi-aor---   4.00m
The LV we created is actually RAID1; in LVM a mirror and RAID1 are the same thing.
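The same volume could also have been requested by naming the segment type explicitly; this is just a sketch of the equivalent command (not run here, since lv_secrets already exists), as modern LVM uses the raid1 segment type for -m 1 by default:
# lvcreate --type raid1 -m 1 -L 512M -n lv_secrets vg_safe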

When you create such a volume, LVM creates a metadata subvolume of 1 extent for each mirror leg. In this case it resulted in 2 metadata subvolumes, lv_secrets_rmeta_0 and lv_secrets_rmeta_1, and 2 data subvolumes: lv_secrets_rimage_0 and lv_secrets_rimage_1.

Let's look at the map with pvdisplay:
#  pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/vdd
  VG Name               vg_safe
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               2430
  Allocated PE          129
  PV UUID               kK3N3O-iHR7-cwbf-k3cb-CXXF-pI3V-8gyD2a
   
  --- Physical Segments ---
  Physical extent 0 to 0:
    Logical volume	/dev/vg_safe/lv_secrets_rmeta_0
    Logical extents	0 to 0
  Physical extent 1 to 128:
    Logical volume	/dev/vg_safe/lv_secrets_rimage_0
    Logical extents	0 to 127
  Physical extent 129 to 2558:
    FREE
   
  --- Physical volume ---
  PV Name               /dev/vde
  VG Name               vg_safe
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               2430
  Allocated PE          129
  PV UUID               YeTZeU-UQ50-dEac-Kf5D-to4Z-ZufY-z7Fb5d
   
  --- Physical Segments ---
  Physical extent 0 to 0:
    Logical volume	/dev/vg_safe/lv_secrets_rmeta_1
    Logical extents	0 to 0
  Physical extent 1 to 128:
    Logical volume	/dev/vg_safe/lv_secrets_rimage_1
    Logical extents	0 to 127
  Physical extent 129 to 2558:
    FREE
   
  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               vg_app
  PV Size               5.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1279
  Free PE               0
  Allocated PE          1279
  PV UUID               rg4S4w-hiqg-d8Uu-gczP-UffV-xWNn-BSbMXb
   
  --- Physical Segments ---
  Physical extent 0 to 511:
    Logical volume	/dev/vg_app/lv_data
    Logical extents	0 to 511
  Physical extent 512 to 636:
    Logical volume	/dev/vg_app/lv_app
    Logical extents	0 to 124
  Physical extent 637 to 1278:
    Logical volume	/dev/vg_app/lv_logs
    Logical extents	0 to 641
   
  --- Physical volume ---
  PV Name               /dev/vdc
  VG Name               vg_app
  PV Size               5.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              1279
  Free PE               1023
  Allocated PE          256
  PV UUID               df906Z-abiq-0pWf-hdMs-nFTE-rMgi-AC3bNI
   
  --- Physical Segments ---
  Physical extent 0 to 255:
    Logical volume	/dev/vg_app/lv_data
    Logical extents	512 to 767
  Physical extent 256 to 1278:
    FREE
   
  --- Physical volume ---
  PV Name               /dev/vda2
  VG Name               cs
  PV Size               <19.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4863
  Free PE               0
  Allocated PE          4863
  PV UUID               vOZhCT-9efu-p8Ng-ovnP-AZfG-2kcQ-i8fkKa
   
  --- Physical Segments ---
  Physical extent 0 to 511:
    Logical volume	/dev/cs/swap
    Logical extents	0 to 511
  Physical extent 512 to 4862:
    Logical volume	/dev/cs/root
    Logical extents	0 to 4350
You can see that /dev/vg_safe/lv_secrets_rmeta_0 and /dev/vg_safe/lv_secrets_rimage_0 reside on the /dev/vdd disk, while /dev/vg_safe/lv_secrets_rmeta_1 and /dev/vg_safe/lv_secrets_rimage_1 reside on /dev/vde.

Let's create a filesystem and a mount point, and mount it:
# mkfs.ext4 /dev/vg_safe/lv_secrets 
mke2fs 1.46.2 (28-Feb-2021)
Discarding device blocks: done                            
Creating filesystem with 131072 4k blocks and 32768 inodes
Filesystem UUID: c02ff19d-1e6c-479e-86f9-fb02f058649f
Superblock backups stored on blocks: 
	32768, 98304

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

# mkdir /secrets
# mount /dev/vg_safe/lv_secrets /secrets
# df -h /secrets/
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_safe-lv_secrets  488M   24K  452M   1% /secrets
Let's remove what we created. We will start by unmounting and then removing the LV:
# umount /secrets 
# lvremove /dev/vg_safe/lv_secrets 
Do you really want to remove active logical volume vg_safe/lv_secrets? [y/n]: y
  Logical volume "lv_secrets" successfully removed.

# vgs
  VG      #PV #LV #SN Attr   VSize   VFree 
  cs        1   2   0 wz--n- <19.00g     0 
  vg_app    2   3   0 wz--n-   9.99g <4.00g
  vg_safe   2   0   0 wz--n-  19.99g 19.99g
Next, remove a PV from the VG:
# vgreduce vg_safe /dev/vde
  Removed "/dev/vde" from volume group "vg_safe"
# vgs
  VG      #PV #LV #SN Attr   VSize   VFree  
  cs        1   2   0 wz--n- <19.00g      0 
  vg_app    2   3   0 wz--n-   9.99g  <4.00g
  vg_safe   1   0   0 wz--n- <10.00g <10.00g
Now we have one PV left in the VG. Next, remove the PV itself:
# pvremove /dev/vde
  Labels on physical volume "/dev/vde" successfully wiped.
Now, when looking at the status, we no longer see /dev/vde:
# pvs
  PV         VG      Fmt  Attr PSize   PFree  
  /dev/vda2  cs      lvm2 a--  <19.00g      0 
  /dev/vdb   vg_app  lvm2 a--   <5.00g      0 
  /dev/vdc   vg_app  lvm2 a--   <5.00g  <4.00g
  /dev/vdd   vg_safe lvm2 a--  <10.00g <10.00g
Let's finish destroying vg_safe:
# vgremove vg_safe 
  Volume group "vg_safe" successfully removed
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree 
  cs       1   2   0 wz--n- <19.00g     0 
  vg_app   2   3   0 wz--n-   9.99g <4.00g
Now the disk is free:
# pvs
  PV         VG     Fmt  Attr PSize   PFree 
  /dev/vda2  cs     lvm2 a--  <19.00g     0 
  /dev/vdb   vg_app lvm2 a--   <5.00g     0 
  /dev/vdc   vg_app lvm2 a--   <5.00g <4.00g
  /dev/vdd          lvm2 ---   10.00g 10.00g
Remove it:
# pvremove /dev/vdd 
  Labels on physical volume "/dev/vdd" successfully wiped.
# pvs
  PV         VG     Fmt  Attr PSize   PFree 
  /dev/vda2  cs     lvm2 a--  <19.00g     0 
  /dev/vdb   vg_app lvm2 a--   <5.00g     0 
  /dev/vdc   vg_app lvm2 a--   <5.00g <4.00g
Let's say that the storage on /dev/vde is faster and has more space, and that we want to move all data from /dev/vdb to that new disk.

So, initialize the disk and perform all required operations:
# pvcreate /dev/vde
  Physical volume "/dev/vde" successfully created.
# vgextend vg_app /dev/vde
  Volume group "vg_app" successfully extended
# pvs
  PV         VG     Fmt  Attr PSize   PFree  
  /dev/vda2  cs     lvm2 a--  <19.00g      0 
  /dev/vdb   vg_app lvm2 a--   <5.00g      0 
  /dev/vdc   vg_app lvm2 a--   <5.00g  <4.00g
  /dev/vde   vg_app lvm2 a--  <10.00g <10.00g
To perform the data migration we just use the pvmove command to move all data from /dev/vdb to /dev/vde:
# pvmove /dev/vdb /dev/vde
  /dev/vdb: Moved: 0.47%
  /dev/vdb: Moved: 40.03%
  /dev/vdb: Moved: 49.80%
  /dev/vdb: Moved: 100.00%
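As a side note, pvmove can also be limited to a single logical volume with the -n option, in case you only want to migrate part of a PV; just a sketch, since above we moved everything:
# pvmove -n lv_data /dev/vdb /dev/vde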
# pvs
  PV         VG     Fmt  Attr PSize   PFree 
  /dev/vda2  cs     lvm2 a--  <19.00g     0 
  /dev/vdb   vg_app lvm2 a--   <5.00g <5.00g
  /dev/vdc   vg_app lvm2 a--   <5.00g <4.00g
  /dev/vde   vg_app lvm2 a--  <10.00g  5.00g
We notice that all the space on /dev/vdb is now free. Let's take a closer look with pvdisplay:
# pvdisplay /dev/vdb
  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               vg_app
  PV Size               5.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              1279
  Free PE               1279
  Allocated PE          0
  PV UUID               rg4S4w-hiqg-d8Uu-gczP-UffV-xWNn-BSbMXb
Here we can see:
  Allocated PE          0
There are no PEs in use.
Now that we are done with this disk, we can remove it from the VG and then wipe the PV label:
# vgreduce vg_app /dev/vdb
  Removed "/dev/vdb" from volume group "vg_app"
# pvremove /dev/vdb
  Labels on physical volume "/dev/vdb" successfully wiped.
# pvs
  PV         VG     Fmt  Attr PSize   PFree 
  /dev/vda2  cs     lvm2 a--  <19.00g     0 
  /dev/vdc   vg_app lvm2 a--   <5.00g <4.00g
  /dev/vde   vg_app lvm2 a--  <10.00g  5.00g
All these operations were performed on live, mounted filesystems, without unmounting them.