Linux Tricks And Tips

Setting Default Mount Points for Hard drives

The /etc/fstab file controls the default mount points of all of the hard disks. A sample fstab file is shown below:

# /etc/fstab: static file system information.
#
# noatime turns off atimes for increased performance (atimes normally aren't 
# needed); notail increases performance of ReiserFS (at the expense of storage 
# efficiency).  It's safe to drop the noatime options if you want and to 
# switch between notail / tail freely.
#
# The root filesystem should have a pass number of either 0 or 1.
# All other filesystems should have a pass number of 0 or greater than 1.
#
# See the manpage fstab(5) for more information.
#

# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

# NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.
/dev/sda1               /boot           ext4            noauto,noatime  1 2
/dev/sda3               /               ext4            noatime         0 1
/dev/sda2               none            swap            sw              0 0
/dev/sda4               /home           ext4            noatime         0 2
/dev/cdrom              /mnt/cdrom      auto            noauto,ro       0 0
/dev/fd0                /mnt/floppy     auto            noauto          0 0

The above automatically mounts the /dev/sda3 partition at the root location (/), uses /dev/sda2 as swap space ("fake" RAM on the hard drive), and mounts /dev/sda4 at the user home space, /home. The boot partition, /dev/sda1, is not mounted by default (because of the noauto option).
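As a quick sanity check, you can list which entries in a file like this get mounted automatically at boot (mount -a skips anything whose options include noauto). The sketch below runs against an embedded copy of two of the sample lines rather than the live /etc/fstab, so it is safe to try anywhere:

```shell
# List each filesystem and whether "mount -a" would mount it automatically.
# The options live in field 4; "noauto" there means manual-only.
result=$(awk '$1 !~ /^#/ && NF >= 4 {
    auto = ($4 ~ /(^|,)noauto(,|$)/) ? "manual" : "auto"
    printf "%-12s %-12s %s\n", $1, $2, auto
}' <<'EOF'
/dev/sda1   /boot   ext4   noauto,noatime  1 2
/dev/sda3   /       ext4   noatime         0 1
EOF
)
echo "$result"
```

To check the real file, feed the awk program /etc/fstab instead of the heredoc.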

However, /dev/sdX is a relative device identifier. For instance, if you add a hard disk to a SATA port that the motherboard enumerates before ports already in use, every device that comes logically after the new one gets shifted up by at least one letter. The same thing happens if you accidentally swap the SATA ports of two drives, or if one of them dies.

To avoid this, you can use a persistent identifier called the UUID, which you can obtain with the blkid program.

bash$ blkid /dev/sda[1234]

/dev/sda1: UUID="ca81087d-e2ed-4a97-bb3b-82d9bae44397" TYPE="ext4" 
/dev/sda2: UUID="3f7d2143-a968-4d80-b454-d797da2707c4" TYPE="swap" 
/dev/sda3: UUID="dc3fb1b6-09bf-48cb-8e94-6c265d5083ee" TYPE="ext4" 
/dev/sda4: UUID="062977ac-3c06-4a2b-8866-4a6a4af3265d" TYPE="ext4"

Thus, the /dev/sda1 entry in /etc/fstab could be replaced with

UUID=ca81087d-e2ed-4a97-bb3b-82d9bae44397  /boot   ext4   noauto,noatime 1 2
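The substitution is easy to script. The real query would be blkid -s UUID -o value /dev/sda1, which prints just the bare UUID; since that needs the actual disk (and usually root), the sketch below stands in for that step with a stub function:

```shell
# Build a UUID-based fstab line for /boot. get_uuid is a stand-in for:
#   blkid -s UUID -o value /dev/sda1
# which prints the bare UUID of the device.
get_uuid() {
    echo "ca81087d-e2ed-4a97-bb3b-82d9bae44397"
}
line="UUID=$(get_uuid)  /boot  ext4  noauto,noatime  1 2"
echo "$line"
```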

Exploring Virtual Machine Disks

Ever been at home and wanted a file that lives on a virtual machine's hard disk? Say your work computer is a Linux box, and you need a file from a Windows VM on it. Unless you can find a way to connect to the desktop graphically (via VNC, for instance), you would have to go to the office, fire up the virtual machine, and copy the files off the disk.

Well not anymore! Thanks to this page ([http://bethesignal.org/blog/2011/01/05/how-to-mount-virtualbox-vdi-image/]) I have learned about the software needed to mount virtual hard disk images (like *.vdi files) directly on your Linux host filesystem, so you can access the files on your virtual machine as though it were just another folder on your computer. In some cases, you can use this to do some nifty little repair work on a VM that won't boot (but unless you really know what you're doing you're more likely to just break it worse).

In fact, if you have a guest Linux OS (for testing a specific Linux distro), you can actually mount your Linux virtual hard disk and use chroot to effectively 'log in' to that machine without ever going through VirtualBox (of course the guest OS must be kernel-compatible with the host since the host kernel is still the one in use when chrooting).
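For the curious, that chroot step usually looks something like the sketch below once the guest's root partition is mounted (here assumed at /mnt/guest, a name chosen purely for illustration). The bind mounts give the chroot a working /dev, /proc, and /sys from the host; with DRY_RUN set, the function only prints the commands, since the real ones need root and an actual mounted guest:

```shell
# Sketch: enter a mounted guest filesystem with chroot.
# DRY_RUN=1 echoes each command instead of executing it.
chroot_guest() {
    root="$1"
    run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else sudo "$@"; fi; }
    run mount --bind /dev  "$root/dev"
    run mount --bind /proc "$root/proc"
    run mount --bind /sys  "$root/sys"
    run chroot "$root" /bin/bash
}

DRY_RUN=1
chroot_guest /mnt/guest
```

Exiting the chroot shell and unmounting the bind mounts (in reverse order) puts everything back.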

Anyway, on to the magic. Some prerequisites: your kernel needs to have been compiled with support for Network Block Devices (NBD). The standard Linux distros come this way by default, I think (or at least provide the nbd module, which you can load). You also need the QEMU tools, which are available through your package manager as shown below.

bash$ sudo apt-get install qemu-utils     # Ubuntu/Debian
bash$ sudo emerge -a qemu                 # Gentoo
bash$ sudo zypper install qemu            # SUSE
bash$ sudo yum install qemu               # Fedora/CentOS/Red Hat

Check the device folder (/dev) and make sure you see a handful of nbd# devices. For instance, on my machine I have:

bash$ ls /dev/nbd*
/dev/nbd0  /dev/nbd10  /dev/nbd12  /dev/nbd14  /dev/nbd2  /dev/nbd4  /dev/nbd6  /dev/nbd8
/dev/nbd1  /dev/nbd11  /dev/nbd13  /dev/nbd15  /dev/nbd3  /dev/nbd5  /dev/nbd7  /dev/nbd9

If these are not available, make sure you have installed the necessary qemu package and that the nbd module is loaded. You can check for the module by running lsmod and looking for nbd. On my machine, for example:

bash$ lsmod | grep nbd
nbd                     8123  0

If it is not present, use the command sudo modprobe nbd to load the module (on some kernels you may need sudo modprobe nbd max_part=16 so that partition devices like /dev/nbd0p1 get created). You can use the above command to check that it was properly loaded afterwards. Once this is done, you are ready to proceed.

You first have to 'register' the virtual hard disk as a Network Block Device using the qemu-nbd command as shown below:

bash$ sudo qemu-nbd -c /dev/nbd0 /path/to/hard/disk/Windows\ XP.vdi

This command creates a new device node for each partition in the virtual hard disk, named /dev/nbd#p#, where the first # is the nbd device you initially chose (/dev/nbd0 in my case) and the second # is the partition number. On my machine:

bash$ ls /dev/nbd0*
/dev/nbd0  /dev/nbd0p1

These new partitions are mountable. Just create a mount point and use the command

bash$ sudo mount /dev/nbd0p1 /path/to/mount

You can now access all of the files in that filesystem in /path/to/mount. To clean up, unmount /path/to/mount and use qemu-nbd -d to unbind the virtual hard disk from the network block device, as shown below:

bash$ sudo umount /path/to/mount
bash$ sudo qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected
bash$ ls /dev/nbd0*
/dev/nbd0
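The whole attach-and-mount dance is easy to wrap in a small helper. This is just a sketch (the function name and default paths are made up for illustration); with DRY_RUN=1 it prints what it would run, since the real commands need root, the nbd module, and an actual image file:

```shell
# vdi_attach DISK [NBD_DEV] [MOUNTPOINT] — bind a disk image to an nbd
# device and mount its first partition. DRY_RUN=1 echoes the commands.
vdi_attach() {
    disk="$1"; dev="${2:-/dev/nbd0}"; mnt="${3:-/mnt/vdi}"
    run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else sudo "$@"; fi; }
    run modprobe nbd max_part=16      # make sure partition devices appear
    run qemu-nbd -c "$dev" "$disk"    # attach the image
    run mkdir -p "$mnt"
    run mount "${dev}p1" "$mnt"       # mount the first partition
}

DRY_RUN=1
vdi_attach "Windows XP.vdi"
```

Cleanup is the reverse: umount the mount point, then qemu-nbd -d the device, exactly as above.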

And that's it…

Using LLVM

I'm a relative newcomer to the Low Level Virtual Machine (LLVM), which is less a virtual machine than a carefully designed set of compiler libraries. You can learn more about LLVM at [http://llvm.org/]. From all appearances, however, programs generated with LLVM tend to run considerably faster than those generated with the other compilers I have tried (GCC and the Intel compilers, for instance).

There are two mainstream ways to use the LLVM backend to compile code:

  1. You can use the clang front-end to compile C and C++ code
  2. You can use the dragonegg plugin to tell the compilers in the GNU Compiler Collection to use LLVM as the backend when generating machine code. (It effectively just uses the GCC compilers as glorified parsers).

At the moment, the second approach appears to be the only way to compile Fortran source code using LLVM.

Using clang

Using clang is easy enough. Most package managers (e.g., apt-get, yum, and zypper) provide a clang package that installs the clang and clang++ LLVM front-ends. To compile C or Objective-C programs, simply use clang as the compiler on the command line; for C++, use clang++.

Using dragonegg

This is a bit more experimental. So far I've only tested it on my own Gentoo machine, where passing the argument -fplugin=/usr/lib64/llvm/dragonegg.so does the trick (the plugin path will vary by distribution). So something like:

gfortran -fplugin=/usr/lib64/llvm/dragonegg.so test_program.F90

This works for me, anyway.

Why LLVM?

One fair question is: why bother using LLVM when other options like GCC and icc/ifort work fine? There are several reasons, but some of the more obvious ones are:

  • LLVM tends to compile programs much faster than GCC (and likely the Intel compilers as well). As I understand it, clang offers the bigger speedup over GCC simply because it parses and preprocesses faster; dragonegg is unlikely to speed up compilation as much, but it should still be faster than GCC.
  • LLVM uses less memory when compiling machine code. This doesn't matter too much IMO since I haven't had many issues running out of memory when compiling programs (and memory is only getting cheaper).
  • LLVM creates faster programs. So it uses less memory to produce faster programs in less time… Seems like the trifecta.

It's still relatively new, though…

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License