File systems

[#whichfs]: Deciding which filesystem(s) to use

There is a section about deciding which file system(s) to use. Details about the various popular filesystem formats may be influential in such a decision.

Decisions to make

There are options for how the partitions should be laid out on a hard drive. Many users will simply choose what seems like the simplest of all options: use an entire disk for the native file system that is most frequently recommended for the operating system being installed.

However, is that best? In some cases (particularly if old-style hibernation support would lead to data loss) the answer is a resounding “No!”

Of the various reasons not to use a single large partition spanning the drive, support for multiple operating systems may be foremost among the reasons that technicians commonly think of quickly. However, there are some other reasons. Allocating all of the space immediately, even before the operating system is installed, is very often unnecessary, and that approach does have one significant drawback: it limits options. (See the section about partition size for some recommendations that allow more flexibility.)

Interaction/Compatibility with system/hardware's startup sequence code
...
Disk Layout

Filesystem volumes are often stored within the boundaries of a defined disk layout. Further information about handling a disk layout, such as adjusting/editing a partition (using a traditional MBR partition layout), is available.

[#mkfilsys]: Creating/making a new filesystem/format
See: creating/making/formatting a new filesystem volume.
[#adjflsys]: Adjusting/Tuning properties of a file system
Perhaps described by setupos.htm (especially the section about adjusting how frequently checks happen based on how often a volume is mounted. Perhaps that info should be moved here, or to pages about individual filesystems?)
FAT drives

FAT drives have a label, as well as a “serial number”/“Volume ID”. These may commonly show up when people run the dir command. Less commonly, the Vol command may also show such information.

Volume label

The label may be up to 11 characters. Further precise limitations may vary between operating systems. Windows XP Pro Product Documentation says, “FAT volume labels cannot contain any of the following characters:”
* ? / \ | . , ; : + = [ ] < > "
Windows XP Pro Product Documentation: “New ways to do familiar tasks” says “The caret (^) and ampersand (&) symbols can be used in a volume label.” (This implies that older Microsoft operating systems may not have had that option.)

The precise location of the volume label may be dependent on the operating system: Microsoft KB Q140418: Detailed Explanation of FAT Boot Sector (for Windows NT 3.x) says that a field in the boot sector “was used to store the volume label, but the volume label is now stored as a special file in the root directory.”

In DOS, a command called label may adjust the label. The disk label is typically shown when viewing the list of files using the dir command.

In Unix, the mtools package has an mlabel command. (A graphical interface may be available using MToolsFM.sf.net's software which uses the GPL.)

Serial Number

This may get automatically created, without any possible user intervention to customize the value, when the filesystem is created.

Perhaps VolumeID by Sysinternals may handle this? (See the NTFS section for a hyperlink for that software.)

In Unix, the mtools package has an mlabel command.
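
As a rough sketch, mlabel can handle both the label and the serial number. This assumes the volume is defined as drive a: in the mtools configuration, and the -N option (to set a specific serial number) should be verified against the local mlabel man page:

mlabel -s a:                  # show the current label
mlabel a:MYDISK               # set the label to MYDISK
mlabel -N 1234ABCD a:MYDISK   # also set the serial number (“Volume ID”), given as hex digits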

NTFS
Label
Similar to a FAT drive, the label command may be used to adjust the label. The label may be up to 32 characters long.
Volume ID
VolumeID by Sysinternals
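A sketch of both operations from a Windows command prompt follows; the drive letter, label, and ID values are examples, and volumeid.exe refers to the Sysinternals download mentioned above. The label command sets the label, vol shows the label and the current serial number, and VolumeID changes the serial number:

label C: DATA01
vol C:
volumeid C: 1234-ABCD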
Ext2 and successors

In operating systems using a Linux kernel, the command to tune Ext2 and similar file systems is tune2fs. In other operating systems which are largely compatible, such as the BSDs, there is a similarly-named tunefs command, but it is meant for tuning the native FFS/UFS filesystems rather than Ext2.

Especially for systems that do not use Ext2 as the native filesystem, the e2fsprogs package at http://e2fsprogs.sf.net may provide the command called tune2fs. To keep these instructions fairly generic, the example command shown may be tune2fs. However, note that for many popular operating systems, a pre-existing tunefs command is what people generally use for the native filesystem.

(A symbolic link from tune2fs to tunefs could be created for compatibility/genericization. Use which tunefs to find the command's location. Then place a symbolic link in the directory where that command is found, which can be done using something like the following command: “sudo ln -s /sbin/tunefs /sbin/tune2fs ”. Then check the results, including permissions.)

Routine checking

To adjust how often filesystem checks occur (based on the number of mounts and on elapsed time), one may use:

(Note: This documentation would ideally include the default values; however, it does not yet have that information. One way to see the defaults may be to make a new partition and examine it with tune2fs -l.)
tune2fs -c 2 -i 2

e.g.:

sudo tune2fs -c 2 -i 2 /dev/wd1m

The tune2fs man page (fix hyperlink: See if official man page is somewhere, like on main site? maybe TexInfo file, or maybe tune2fs man page from third party site?) says, “It is strongly recommended that either -c (mount-count-dependent) or -i (time-dependent) checking be enabled to force periodic full e2fsck checking of the filesystem. Failure to do so may lead to filesystem corruption (due to bad disks, cables, memory, or kernel bugs) going unnoticed, ultimately resulting in data loss or corruption.” (Quote modified to alter formatting.)

It might be true that one can tell whether the next mount will force an fsck by viewing the “Mount count” and “Maximum mount count” lines in the output of “tune2fs -l /dev/drv0”.

Notes: tune2fs -l shows some current values. Some of the values that may be changed, and which are related to the perceived need to automatically start a disk checking process, include:

tune2fs -c (set the maximum mount count), tune2fs -C (set the current mount count), tune2fs -i (set the interval of time between checks), and tune2fs -T (set the time when the last check happened).

e.g. of some output lines of tune2fs:

Mount count:              0
Maximum mount count:      28
Last checked:             Tue Jun  7 23:55:30 2011
Check interval:           15552000 (6 months)
Next check after:         Sun Dec  4 22:55:30 2011
Ext3 journal mode

If using Ext3 file systems, the default may be to use the “Ordered” level of journaling. This may sacrifice some level of safety when files are being overwritten, with the expected benefit being speed. Additional safety may be gained by switching to the “Journal” level of journaling. Those who really want to live on the edge may want to switch to “Writeback”. That poses a greater risk of file data corruption occurring due to a power outage, although any such risk is likely to affect only some of the most recent files. For those who are willing to lose the latest copy of the data, relying on backups which may be a bit outdated, there may be some speed benefit at the cost of the most recent data being less likely to be intact.

tune2fs -o journal_data,^journal_data_ordered,^journal_data_writeback /dev/someHDD

In the above example, the name of the device (“someHDD”) is expected to need to be customized.
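
To confirm what got recorded, one might check the “Default mount options” line that tune2fs -l reports (a sketch, using the same example device name):

tune2fs -l /dev/someHDD | grep -i "mount options"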

Waste less space

Use tune2fs with appropriate -m and -r parameters.

For instance, Zimbra's Performance Tuning Guidelines for Large Deployments: section about file systems says, “Only 2% needs to be reserved for root on large filesystems.” However, there's nothing all that magical about the amount of 2% either. The mke2fs manual page (hosted on a third party website) describes an effect of reducing fragmentation and allowing processes owned by root, such as a daemon like syslogd, “to continue to function correctly after non-privileged processes are prevented from writing to the filesystem.” On a large drive, there may be quite a lot of space being reserved for just one user. Using -m number with a lower number may free up some space.

-r also affects the number of reserved filesystem blocks, but instead of specifying a percentage, an exact number of filesystem blocks is defined. This would seem preferable when done with an automated process, which can easily calculate an exact block count (in contrast to doing things by hand, where specifying a percentage may be more convenient for somebody who just traditionally always uses a specific amount of reserved space). Specifying a percentage may also be done with mke2fs.

To see the current amount of reserved blocks, use “ tune2fs -l /dev/someHDD ” and check for “Reserved block count”.
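
For example, lowering the reservation to 1% and then confirming the result might look like this (a sketch; the device name and the percentage are examples):

tune2fs -m 1 /dev/someHDD
tune2fs -l /dev/someHDD | grep -i "Reserved block count"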

Label
A label of up to 16 characters can be specified. e2label and/or tune2fs -L. (The mke2fs command also supports a -L command line switch.)
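A minimal sketch (the label “DATA01” and the device name are examples):

e2label /dev/someHDD DATA01      # set the label
e2label /dev/someHDD             # show the current label
tune2fs -L DATA01 /dev/someHDD   # equivalent, using tune2fs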
Misc
...???
[#bsdlbprp]: BSD disklabel/bsdlabel properties

The “BSD disklabel”/“bsdlabel” can store a name (perhaps a default is “QEMU HARDDISK   ”, although it also seems like more than 16 characters may be accepted) and a numeric (16-hexadecimal digit) identifier for each disklabel.

OpenBSD FAQ (14): Disks and Partitions: section on “Disklabel Unique Identifiers” states, “It is worth noting that the DUID is a property of the disklabel, though as OpenBSD only supports one disklabel per disk, this is mostly” trivial knowledge that is fairly unimpactful.

In OpenBSD's “disklabel -E”, the l command will “list” (show/display) some of the disklabel's properties, including both the “name” label and the duid. The numeric DUID may be changed with the i command (to change the ID).

Changing the name

The name may be modified by using e to edit the disklabel's record of “device parameters”. (Changing the name, like changing the DUID, is safe. Changing many of the other parameters may often be fairly unsafe.)

When editing the device parameters, several questions will be asked: the disk type (usually ESDI (Enhanced Small Disk Interface)), the name of the disk, sec/track, track/cyl, sec/cyl, num/cyl, and total sec.

Naturally, just as the l command could list the initial values before changes were made, the same command can be used to list the updated values after changes are made.

The w command writes changes to the disklabel. The q command writes changes to the disklabel and then quits (and says “No label changes.” if there were no changes since the disklabel was either loaded or written). Alternatively, the x command will exit without writing to the disklabel (although the “exit” command does not undo any changes that were already written to the disk).
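
A sketch of such an interactive session on OpenBSD (sd0 is an example drive name; the text after each # is commentary, not something to type):

disklabel -E sd0
> l        # list the disklabel's properties, including the name and the duid
> i        # change the (numeric) DUID
> e        # edit the device parameters, including the name
> w        # write the changes to the disklabel
> q        # quit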

FFS(2)/UFS(2)
Fragmentation handling

Perhaps run:

tunefs -o space /

(specifying a mount point) or...

tunefs -o space /dev/sd0a

(specifying a disk device). This is discussed further by topics related to fragmentation on FFS drives.

Label

Unlike a FAT filesystem volume or an Ext2 filesystem volume, it seems that FFS(2)/UFS(2) filesystems might not provide support for storing a unique name for each filesystem. However, the BSD disklabel/bsdlabel does. (See: the section about BSD disklabel/bsdlabel properties, above.)

[#tstrpafs]: Testing/repairing a file system

Information is available about testing/repairing a file system. (A similar topic: Info about checking a disk is available in the section about testing hardware.)

Mounting a file system

(This section may need further clean-up.) See: mounting an ISO image, and perhaps the section about mount points.

This documentation should (but might currently not) describe how to mount via the command line, and via fstab (e.g. for mounting later). Perhaps this is described by the tutorial for setting up an operating system installation. (Perhaps the info should move here, and that tutorial should hyperlink here.)
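
In the meantime, a brief sketch of both approaches (the device name, filesystem type, and mount point are all examples to customize):

mount -t ext4 /dev/sdb1 /mnt/data

An equivalent line in /etc/fstab, which lets the volume be mounted later with just “mount /mnt/data” (or automatically at boot):

/dev/sdb1 /mnt/data ext4 defaults 0 2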

Converting a filesystem
Converting to a filesystem in the Ext family

Switching between ext2 and ext3 may be rather painless (if certain characteristics about the filesystem(s) are true).

Converting to Ext2
Converting from Ext3 (to Ext2)

Note that Ext2 cannot use a journal.

...

Converting to Ext3
Converting from Ext2 (to Ext3)

Ext4 Wiki at Kernel.org: section on “Converting an ext3 filesystem to ext4” notes that an Ext2 filesystem may be converted to Ext3 simply by adding a journal with tune2fs -j.
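
For instance, adding a journal to an existing Ext2 volume might look like this (a sketch; the device name is an example, and /etc/fstab would then need to say ext3 for that volume):

tune2fs -j /dev/sdb1    # add a journal so the volume can be mounted as ext3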

Converting to Ext4
Converting from Ext3 to Ext4

Ext4 Wiki at Kernel.org: section on “Converting an ext3 filesystem to ext4” has a guide for converting Ext3 to Ext4. That guide references an acknowledgement of an error message being “expected”.
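
The commands that guide describes are roughly along the following lines (a sketch only; verify the exact commands against the current version of that wiki page, and have a backup, before running anything):

tune2fs -O extents,uninit_bg,dir_index /dev/sdb1   # enable the ext4 on-disk features
e2fsck -fD /dev/sdb1                               # full check afterward; the guide notes some reported errors are “expected”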

Converting to FAT32
Converting from FAT16 (to FAT32)

FAT: TechNet: Win98 RK Part 2: System Configuration, Chapter 10: Disks and File Systems mentions Drive Converter Wizard, etc. Microsoft KB Q307881: How to convert a FAT16 volume or a FAT32 volume to an NTFS file system in Windows XP

Converting to NTFS
Converting from FAT16 or FAT32 to NTFS

Consider the impact of the loss of compatibility: FAT is more widely supported than NTFS. Are there features of NTFS that are really worth going through this step?

Microsoft KB Q307881: How to convert a FAT16 volume or a FAT32 volume to an NTFS file system in Windows XP, Microsoft KB Q295723: The Autoconvert Tool Does Not Convert a File Allocation Table Partition to an NTFS File System Partition, Microsoft KB Q156560: Free Space Required to Convert FAT to NTFS (Win2K/NT4), Microsoft KB Q314875: The Free Space That Is Required to Convert FAT to NTFS (WinXP)
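
The conversion described by those articles is performed with Windows's convert command; e.g. (the drive letter is an example):

convert D: /fs:ntfs

Note that the conversion is one-way; going back to FAT requires reformatting the volume.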

Conversions involving HPFS
Microsoft KB Q100012 describes CUHPFS.DLL as being related to “HPFS file system conversion”. The article states that it applies to Windows NT 3.1.
[#growfs]: Growing a filesystem

This is a delicate process. This involves using tools that can cause significant damage to data.

This guide has been used successfully. Comments regarding LVM were written with a somewhat vague understanding, so definitely proceed at your own risk with that in mind.

This guide includes verifying that the process of extending a partition will be rather straightforward. If it is not, and partitions need to be manually moved, then something more extensive than this rather short/quick guide will be needed.

This does involve rebooting (and whatever downtime that may entail).

This technique has been performed on machines using CentOS/AlmaLinux. Other operating systems may differ, including having some different device names.

Clearly, sample names should be customized. This guide does not include as much hand-holding as some other training material, so perform this at your own risk. If you determine you will benefit from help, get that planned for early on, and use such help. Much of this is considered delicate, requiring precision to prevent possible (or likely) notable data loss.

Evaluate

See if this looks like a good idea. This will probably work if:

  • Either LVM is used...
  • ... or the partition to be resized is the last partition on a disk. To help verify this:
    • Identify the partition in question.
      • e.g. /home/
    • Find it in the mount list
      • e.g.: mount | grep /home | grep -v virtfs
    • Remove the partition identifier. e.g., in Linux, if the /home/ directory is on sda7 then the disk is sda and the partition is 7.
    • Look at the disk device. e.g. fdisk -l /dev/sda
    • Look at the disk device's partitions (e.g. for “sd” devices) and see if the partition that is being evaluated for change is the last partition (highest numbered, last on the line). (A short sketch of these checks appears after this list.)

If you want to resize another partition on a disk without LVM, you might be able to do so after moving a partition, or multiple partitions. However, moving the location of a partition may be a noteworthy task itself, and this guide doesn't extensively cover how to move a partition first, before trying to extend a partition's length. (parted might be useful for moving a partition??)
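
A short sketch of those checks (the directory and device names are examples; adjust to match the system):

mount | grep /home | grep -v virtfs   # which device backs /home?  e.g. /dev/sda7
fdisk -l /dev/sda                     # list sda's partitions; check whether partition 7 is the last one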

Check on the disk's upgradability
  • e.g., if using vSphere
    • Check One
      • “Edit virtual machine settings”
      • Find related disk device
      • See the “Provisioned” size, and the “Maximum” size
        • A typical result may be “Type:” “Thin Provision”
      • See where the disk is
        • The .vmkd file's location may start with the name of a VMware “data store”
      • Click “Cancel”; nothing here is something we want to change while the machine is running.
    • Determine if the host system has enough space
      • Look at the data store. On the virtual machine's “Summary” tab, look in a section called “Storage”, scroll to the right, and see Free Space.
    • Determine if the backup system has enough free space
Determine filesystem type
  • If XFS, then this guide probably has the details to be helpful
  • For other filesystem types, some additional research may be needed. (Some Ext4 info is provided. It might need a bit further testing/research/refinement. Or maybe it is fine now...)
Determine Size
  • This step is probably rather safe to skip as long as parted 3 or newer is used, as you may just be able to refer to “100%” of possible/available space.
  • However, if you are trying to do this with older software...
    • then it is recommended to determine the probable desired size, in sectors. While that information isn't strictly needed yet, having this handy may simplify a step later in the process, during which a production server might be experiencing downtime.
More checks
  • Check what is mounted on the disk. (What partitions are mounted on the disk?) (e.g., if the first partition is /dev/sdc1 then use mount | grep sdc)
  • Determine date of last good backup.
    • If this is lacking, then it is best to address that first!
  • Hint to users of VMware ESXi: if there are pre-existing snapshots, consider consolidating and applying/removing/deleting. Then, maybe you want to make another. However, snapshots in ESXi are often not recommended for long-term use, so plan to remove it later.
    • If using vSphere ESXi, then in vSphere ESXi, right click the system, choose “Snapshot manager”.
  • Plan for the downtime
    • On an example machine using cPanel, downtime could be estimated at about 20 minutes. If this causes downtime of services, that may be more downtime than is typically preferred, so having some expertise is recommended. If you are unfamiliar with Linux-based platforms, it may be heavily advisable to have someone with more expertise be very ready, if not actively involved. Even with experienced staff, scheduling such downtime activity to happen during “after [business] hours” may be recommended to lessen downtime's overall impact.
install
e.g.: if using an operating system that uses YUM and sudo, then sudo yum install parted
Downtime steps
Getting necessary data unmounted

If using LVM and if there is untapped physical free space, perhaps the only data that needs to be unmounted is the partition to be modified. However, if using an MBR-based setup and if not using LVM, the safest approach may be to unmount any partition that is part of an “extended partition” being used. So, the amount that needs to be unmounted can vary based on what setup is being used.

Here are some details for a scenario that may be somewhat of a worst-case scenario, although such scenarios may be reasonably common with certain data setups.

  • Prepare for downtime. Let people know this is being started.
  • On the machine: sudo poweroff (or similar, halt -p or appropriate use of shutdown)
  • Increase disk space
    • if physical hardware, maybe this involves replacing a disk, with contents copied from the old disk to the new one. (This sort of scenario is not intended/expected to be extensively covered here...)
    • If this is happening on a virtual machine using VMware ESXi...
      • If you are using vCenter:
        • In vSphere ESXi, make sure virtual machine is stopped
        • in vSphere, increase disk space
          • May need no snapshots for this to work? Or, may need to wait a few minutes after system is off?
        • In vSphere ESXi, make a snapshot
          • Right click the system, choose Snapshots, then choose to take one. Give it a name.
        • in vSphere, power system on
        • in vSphere, open console
  • Plan to modify GRUB
    • If you don't want the system to fully boot up, then plan to intervene quickly after it is started. (You may want to preview the next few steps before powering on the machine.)
  • Power on machine
  • If you're using virtualization software or some sort of remote access method that works during the boot sequence, then ensure you have an open console so you can interact with the machine
  • Alter GRUB boot process
    • Press arrow keys to stop the timer
    • (If you're using VMWare Console, you also may need to first click into the window to type in it.)
    • Highlight first option, and press e to edit the boot options
    • on the line that says "ro", change to "rw" and, at the end, add " single" (a space, and the word single).
    • Go ahead and boot. (The instructions likely say to press Ctrl-X.)
    • While the system is booting, make sure you have the system's root password handy. (For a system that is often administered using a web interface, this may often be what is typically getting used when people log into that web interface to administer the system.)
    • If you're using VMWare Console, you might need to scroll down to see the prompt for the username. You also may need to click into the window to type in it.
  • Once booted in single user mode:
    • Run mount. See where stuff is mounted from. Make notes if needed.
    • unmount /home and other /home* partitions that may exist (e.g. /home2/)
      • umount /home*
    • Get sector size with fdisk
      • use the p command in fdisk to print details. The size in sectors will show up above the partition table.
      • Also, see if there are any devices that are swap.
    • Unmount anything else needed
      • e.g., if you have an “extended partition”, and if the /var/ mount point is in the extended partition, then unmount it (“umount /var”)
        • You may need to unmount sub-directories first, e.g., /var/lib/ before unmounting /var (even if /var/lib/ has its own partition outside of the extended partition)
      • disable any swap in the extended partition. e.g.: swapoff /dev/sda6 (A consolidated sketch of this unmount work appears after this list.)
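
Put together, the unmount work for that sort of worst-case layout might look like this (a sketch; every device and directory name is an example to adjust):

mount                   # note what is mounted from which device
umount /home /home2     # unmount the /home-style partitions
umount /var/lib /var    # sub-directories first, then /var itself
swapoff /dev/sda6       # disable swap that lives inside the extended partition
fdisk -l /dev/sda       # note the disk's total size in sectors for later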
Using parted on the device

Use parted on the device e.g. parted /dev/sdb

  • set units to desired value
    • You could specify sectors: “unit s”
    • You could specify bytes: “unit B”
  • print shows the partition table. If the word free is also included (i.e. “print free”, which is recommended), then the output also includes details about free space
  • If the partition to be resized is inside an extended partition, resize that extended partition first (as noted below)
  • Determine how to resize, and resize, as described in the following details:
    • Be careful
    • Determine the command that will be used
      • It seems like with parted 3.0, we use resizepart (e.g. resizepart 3 100%)
      • With older parted, perhaps used resize:
        • The old way:
          • (add up size column's value for the lvm drive, and the free space)
          • e.g. (if using bytes): resize 3 1197650564608
            • 1197650564608 is meant to be the value reached after adding the old amount of Size on the partition, and the Size of Free Space.
          • Probably doesn't work in newer parted versions (parted 2.4 maybe okay, but parted 3.0 not)
    • Determine the first partition to resize
      • If you're using LVM, it is likely the partition of type “Linux LVM” (not the ones of type “EFI System” or “Linux Filesystem”)
      • If the desired partition is within an extended partition, then:
        1. use parted to resize the extended partition first (e.g. partition 4)
        2. Then use parted to resize the “logical drive” partition that is inside of the “extended partition”, e.g. resizepart 7 100% (the final partition, which was /home or whatever is being expanded)
    • know the desired size
      • may want to end on the sector number which is one number less than the total number of sectors. e.g., if you have 100080500 sectors, end on sector 100080499 (since sector zero counts as a sector). e.g., resizepart 1 100080499
    • Know the command
      • With parted, one may be able to use “resizepart PartitionNumber 100%” to fill up free space,
      • while with older versions of parted, one may have been able to use “resize PartitionNumber numberOfSectors” to fill up free space (perhaps? unverified...)
  • Proceed with the change(s)
    • e.g., if LVM, partition 3 might be common:
      • resizepart 3 100%
    • but if a Logical Drive in an MBR, then a dual-prong approach may be more common...
      • starting with the extended drive (partition 1 through partition 4, probably usually not partition 1, and partition 2 might be most common)
        • resizepart 4 100%
        • and then the partition number related to the logical drive to expand (likely partition 5 or higher, even if there are fewer than 4 partitions in the primary partition table)
          • resizepart 7 100%
    • after making changes
      • print free
        • Maybe it won't use up all the free space requested, but most of it...
      • specify the command quit to leave parted
      • There may be more to do
        • (edit fstab?) The program might refer to updating fstab when a partition table changes, but there might frequently not be any changes to the fstab needed when there is still the same number of partitions in the same order, with the same type, even if some changes to size may have happened.
        • Although, running partprobe (or rebooting) may still be beneficial. (Example discussion: https://serverfault.com/a/418679/262387) A consolidated example parted session appears after this list.
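
Pulling those steps together, a session for the extended-partition case might look roughly like this (a sketch only; the text after each # is commentary, the device and the partition numbers 4 and 7 are the examples used above, and parted 3.x syntax is assumed):

parted /dev/sda
(parted) unit s
(parted) print free          # note the partition numbers and any free space
(parted) resizepart 4 100%   # grow the extended partition first
(parted) resizepart 7 100%   # then grow the logical partition inside it
(parted) print free          # confirm the new layout
(parted) quit
partprobe                    # ask the kernel to re-read the partition table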
Follow-up step(s)
  • Consider whether to re-size the filesystem volume (within the partition) now.
    • if Ext4: old notes suggested that running “time resize2fs /dev/sda7” could be appropriate to do here. However, what appear to be some newer notes seem to indicate that maybe this can now be done while the drive is online. If that is the case, this could be done after the system is brought back up (to minimize downtime).
      • Or maybe resize4fs??
    • If XFS: No. Resizing the volume needs to be done while it is mounted, and it can be done while the volume is used. So, in order to minimize downtime, it is better to do this after rebooting (to get out of single-user mode).
  • Recommended: Then, running partprobe (or rebooting) may still be beneficial. (Example discussion: https://serverfault.com/a/418679/262387)
  • Reboot (after partitioning, and to get out of single-user mode if that is as undesirable as is typical)
  • Wrap-Up Steps
    Afterwards:
    • If you're still in single user mode, reboot.
    • (If the system is being monitored, make sure that the monitors realize that the system is no longer offline.)
    • Log in. (This can be by SSH.)
    • If using LVM:
      1. the following might also be worthwhile to do now that the system's main partition has grown:
        • pvresize /dev/sda3
        • echo ${?}
      2. Handle the Logical partition area:
        • Optional: View the logical setup:
          • “lvs” (shows a list of logical volumes)
          • “lvdisplay | grep "LV Path"” (shows a list of logical volume paths)
          • “lvdisplay | less” (if you want to see more data)
        • Make a larger partition (by creating a new partition, or extending one)
          • Know the partition's identifier, shown in the first column of df -h
            • The following examples will be based on the filename being “/dev/vgname/lvname”
              • Actual examples might be /dev/mapper/home or /dev/almalinux/var_lib
          • If you want to make a new data area that uses 100% of available space: lvcreate -l +100%FREE -n lvname vgname
            • that is a lowercase L before the plus sign
            • The vgname is likely to pre-exist from other partitions, while the lvname may be something that can be more easily named in a more custom fashion. Naming this after the intended eventual mount point is probably a good idea.
          • If there is an existing data area which is not taking all of the space, but you want it to: lvextend -l +100%FREE /dev/vgname/lvname
  • Checks
    • Check disk space ( df -h )
    • “lvs” (shows a list of logical volumes)
    • “lvdisplay | grep "LV Path"” (shows a list of logical volume paths)
    • “lvdisplay | less” (if you want to see more data)
  • Extend the space of the filesystem volume
    • Probably want to reboot into normal mode (not single-user mode); it seems that xfs and ext4 can both be grown while mounted
    • If creating a new partition
      • (details not here; may be forthcoming)
    • If adding space:
      • if XFS:
        • Device should be mounted. (actually: must be mounted for this to work)
        • probably use this: time xfs_growfs -d /dev/sda7
          • or if that doesn't work, try: xfs_growfs /dev/sda7
      • if Ext4:
        • probably use: time resize2fs /dev/sda7 (A consolidated sketch of these growth commands appears after this list.)
  • Add to /etc/fstab as appropriate. For instance, if a new partition were made for /home then perhaps:
    1. backup /etc/fstab
    2. echo /dev/almalinux/home /home xfs defaults,uquota 0 0 | tee -a /etc/fstab
  • Check that the free space looks as expected (df -h)
  • Check that server looks good
  • If using ESXi and if a snapshot was created, then:
    • Tell vSphere ESXi to “delete” the snapshot, which will cause it to consolidate the changes and remove the snapshot
      • Don't just have the snapshot wait around for a day to test things out. This might cause significant slowdown on the machine, including the physical machine that runs the virtual machine (and, thereby, affect other virtual machines). If things look good, then eliminate the snapshot.
  • Related/Similar Pages/Content/Topics/Documentation/Resources
    https://support.cpanel.net/hc/en-us/articles/360053069253-How-to-resize-a-logical-volume
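
Pulling the follow-up steps together for the LVM-plus-XFS case, the growth commands might look roughly like this (a sketch; the partition, volume group, and logical volume names are the examples used earlier, and the XFS filesystem must be mounted when it is grown):

pvresize /dev/sda3                          # let LVM's physical volume see the larger partition
lvextend -l +100%FREE /dev/vgname/lvname    # grow the existing logical volume
time xfs_growfs -d /home                    # XFS: grow the mounted filesystem to fill the logical volume
df -h                                       # confirm the new free space

(For Ext4, the last growth step would instead be something like “time resize2fs /dev/vgname/lvname”.)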
    Shrinking a filesystem
    No solution for XFS

    There is a process to work around this by deleting the data and restoring it. Here is a guide to help do this somewhat efficiently when using LVM. The LVM partition can be shrunk, but XFS filesystem-specific software does not seem to support shrinking the XFS filesystem (that may exist within that partition). So, here is a guide to help back up such a partition, remove it, and restore it. This may be best done on a partition that is already fairly small: the backup process shown here was tested by storing a copy of the partition's data into a file on the local system. (That might not be quite as sensible for large partitions.)

    A lot of this may be based on https://logic.edchen.org/how-to-shrink-xfs-file-system-on-enterprise-linux-7-2/

    • The referenced guide recommends creating a simple file, which will then be used later as a test to make sure the restored data is accessible.
    • Prepare for backup:
      • yum -y install xfsdump
    • back up the filesystem
      • With the drive still mounted:
        • Determine a device name. Use “df -h” and see what the device name is, as listed in the first column of output.
          • In this example, this is /dev/mapper/almalinux-home
        • xfsdump -l 0 -f /home.image /dev/mapper/almalinux-home
          • When prompted, give the output file a name. (This isn't necessarily a filename; it can be an easily-readable, descriptive title.)
          • timeout can be default (just press Enter)
        • echo ${?}
        • ls -l /home.image
        • Note current values. e.g.:
          [root@cp801 tmp]# df -h
          /dev/mapper/almalinux-home  950G  6.7G  944G   1% /home
          [root@cp801 tmp]# df -i /home
          Filesystem                  Inodes IUsed     IFree IUse% Mounted on
          /dev/mapper/almalinux-home 498280448    4 498280444    1% /home
        • umount /dev/mapper/almalinux-home
          • echo ${?}
          • if busy, maybe try some of this to help identify what process(es) to close:
            • lsof /home
            • fuser -mv /home
            • cd / && exec sudo su # might help (see https://unix.stackexchange.com/a/410457) by replacing a current shell, started in the directory, with a new shell that starts from a different directory.
      • lvremove /dev/mapper/almalinux-home
        • answer y when prompted to confirm removing the logical volume
      • lvcreate -L 16G -n newdir almalinux
        • The “newdir” is where you may want a custom name that represents what this drive will be for (perhaps identical to, or related to, the name of the directory where it is planned to be mounted)
        • The next part, in this example “almalinux”, shows up under /dev/. For instance, if similar devices were named /dev/almalinux/var/ then “-n newdir almalinux” would make a /dev/almalinux/newdir
        • The above -L 16G indicates the new size will be 16 GB. If you just want all the space, a lowercase L can be used with this syntax instead: lvcreate -l +100%FREE -n newdir almalinux
        • This may show:
          WARNING: xfs signature detected on /dev/almalinux/newdir at offset 0. Wipe it? [y/n]: y
            Wiping xfs signature on /dev/almalinux/newdir.
            Logical volume "newdir" created.
          That's fine. We didn't wipe the old drive, so that's okay.
      • swapon # this gives a report
      • swapoff /dev/dm-1
      • vgs
      • lvdisplay
      • Make new partition
        • https://access.redhat.com/articles/1273933#:~:text=XFS%20Inode%20Size%20%3A,inode%20size%20of%20256%20bytes. suggests 512 byte inode size but maybe that is largely to be able to store SE Linux details well, and cPanel has SE Linux disabled...
      • Turn swap back on, if it was turned off earlier. Re-mount other things that got unmounted (and were intended to be unmounted only temporarily), if any.
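        A minimal sketch for that wrap-up, assuming the swap areas and mount points are listed in /etc/fstab:
          swapon -a    # re-enable every swap area listed in /etc/fstab
          mount -a     # mount everything listed in /etc/fstab that is not already mounted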