Extend a Hard Disk on a Linux VPS with LVM
Disk space always seems like plenty... until you start writing lots of logs and data, and then it fills up quick. So let's talk about how to resize an LVM-based partition on a live server without reboots... reboots are for Windows! This system is a CentOS 6.x machine running on VMware 5.x that's currently got a 16 GiB VMDK as its base disk. Let's see what we've got to work with:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/myfulldisk--vg-root 12G 11G 0 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 7.4G 4.0K 7.4G 1% /dev
tmpfs 1.5G 572K 1.5G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 7.4G 0 7.4G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda1 236M 37M 187M 17% /boot
/dev/sdc1 246G 44G 190G 19% /data
Hmm... time to get on this then. Luckily, we're running on VMware, so a quick edit to the VM to enlarge the VMDK (not covered in this how-to) gives us the extra space to fix this. First, which device are we talking about?
$ dmesg|grep sd
[ 1.562363] sd 2:0:0:0: Attached scsi generic sg1 type 0
[ 1.562384] sd 2:0:0:0: [sda] 33554432 512-byte logical blocks: (17.1 GB/16.0 GiB)
[ 1.562425] sd 2:0:0:0: [sda] Write Protect is off
[ 1.562426] sd 2:0:0:0: [sda] Mode Sense: 61 00 00 00
[ 1.562460] sd 2:0:0:0: [sda] Cache data unavailable
[ 1.562461] sd 2:0:0:0: [sda] Assuming drive cache: write through
[ 1.563331] sd 2:0:0:0: [sda] Cache data unavailable
[ 1.563451] sd 2:0:1:0: Attached scsi generic sg2 type 0
[ 1.563452] sd 2:0:1:0: [sdb] 8388608 512-byte logical blocks: (4.29 GB/4.00 GiB)
[ 1.563479] sd 2:0:1:0: [sdb] Write Protect is off
[ 1.563481] sd 2:0:1:0: [sdb] Mode Sense: 61 00 00 00
[ 1.563507] sd 2:0:1:0: [sdb] Cache data unavailable
[ 1.563508] sd 2:0:1:0: [sdb] Assuming drive cache: write through
[ 1.563755] sd 2:0:2:0: Attached scsi generic sg3 type 0
[ 1.563881] sd 2:0:2:0: [sdc] 524288000 512-byte logical blocks: (268 GB/250 GiB)
[ 1.563942] sd 2:0:2:0: [sdc] Write Protect is off
[ 1.563944] sd 2:0:2:0: [sdc] Mode Sense: 61 00 00 00
[ 1.564008] sd 2:0:2:0: [sdc] Cache data unavailable
[ 1.564010] sd 2:0:2:0: [sdc] Assuming drive cache: write through
[ 1.564282] sd 2:0:2:0: [sdc] Cache data unavailable
[ 1.564283] sd 2:0:2:0: [sdc] Assuming drive cache: write through
[ 1.564360] sd 2:0:1:0: [sdb] Cache data unavailable
[ 1.564362] sd 2:0:1:0: [sdb] Assuming drive cache: write through
[ 1.564989] sd 2:0:0:0: [sda] Assuming drive cache: write through
[ 1.571010] sdb: sdb1
[ 1.571426] sd 2:0:1:0: [sdb] Cache data unavailable
[ 1.571514] sd 2:0:1:0: [sdb] Assuming drive cache: write through
[ 1.571626] sd 2:0:1:0: [sdb] Attached SCSI disk
[ 1.574181] sda: sda1 sda2 < sda5 >
[ 1.574797] sd 2:0:0:0: [sda] Cache data unavailable
[ 1.574888] sd 2:0:0:0: [sda] Assuming drive cache: write through
[ 1.575003] sd 2:0:0:0: [sda] Attached SCSI disk
[ 1.579250] sdc: sdc1
[1918441.489622] sda: detected capacity change from 17179869184 to 107374182400
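That last "detected capacity change" line only shows up once the kernel re-reads the device size; a running VM won't always notice a live VMDK resize on its own. Here is a minimal sketch of the rescan that triggers it, assuming the root disk is sda as in the dmesg output above:
$ echo 1 > /sys/block/sda/device/rescan
Check the tail of dmesg afterwards and the capacity change message should be there.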
So, that's good: the kernel sees our increased size. Now, let's enlarge that Volume Group. First, we get some info about the volume group as it stands:
$ vgdisplay
--- Volume group ---
VG Name myfulldisk-vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Before there is anything to hand to LVM, the new space at the end of /dev/sda needs to be carved into a partition: a quick trip through fdisk to add a new primary partition covering the fresh space, tag it as Linux LVM, and write the table, then a poke with partprobe so the kernel picks up the change without a reboot.
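The fdisk dialogue itself wasn't captured, so here's a rough sketch of the keystrokes; the partition number (3) and the LVM type code (8e) are assumptions based on the pvcreate call further down:
$ fdisk /dev/sda
    n    (new partition: primary, number 3, accept the defaults to take all the new space)
    t    (set the type of partition 3 to 8e, Linux LVM)
    w    (write the partition table and exit)
$ partprobe /dev/sda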
Hopefully, partprobe has found your new partition for you and enlightened the kernel with its wisdom (or at least a fresh load of zeros). Now we need to make it an available volume for expanding the disk. This consists of making it a 'Physical Volume', and then adding that Physical Volume to the Volume Group containing the disk we want to expand.
$ pvcreate /dev/sda3
Physical volume "/dev/sda3" successfully created
$ pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name myfulldisk-vg
PV Size 15.76 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 4034
Free PE 6
Allocated PE 4028
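pvdisplay above shows the original PV on /dev/sda5; the freshly initialised /dev/sda3 sits next to it once created. The step that actually grows the group is vgextend, which adds the new PV to it; a minimal sketch, assuming the new PV is /dev/sda3 as above:
$ vgextend myfulldisk-vg /dev/sda3
After that, vgdisplay should report two PVs and the larger size: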
$ vgdisplay
--- Volume group ---
VG Name myfulldisk-vg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 99.76 GiB
PE Size 4.00 MiB
Total PE 25538
Okay, the VG might be bigger, but nothing else knows that yet: the logical volume and the filesystem sitting on it are still their old size. Luckily there's a command for each of those too!
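First the logical volume itself has to be extended into the volume group's new free extents, otherwise resize2fs has nothing to grow into. A minimal sketch, assuming we hand the root LV everything that's free:
$ lvextend -l +100%FREE /dev/myfulldisk-vg/root
With the LV enlarged, resize2fs can grow the ext filesystem on top of it online, while it's still mounted: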
$ resize2fs /dev/myfulldisk-vg/root
resize2fs 1.42.9 (4-Feb-2014)
Filesystem at /dev/myfulldisk-vg/root is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 6
The filesystem on /dev/myfulldisk-vg/root is now 24903680 blocks long.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/myfulldisk--vg-root 94G 11G 79G 12% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 7.4G 4.0K 7.4G 1% /dev
tmpfs 1.5G 576K 1.5G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 7.4G 0 7.4G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda1 236M 37M 187M 17% /boot
/dev/sdc1 246G 44G 190G 19% /data