
This page describes how to achieve persistent disk names according to disk bay position - more generally, according to the physical device path. It was tested on Debian Lenny, but it should work on Debian Etch as well.

The problem

When using Linux SW RAID made of several SATA hotswap disks, I have found a big problem: when a disk in the array is physically removed and then inserted again, the kernel does not assign it its original name, but the first free one.


I had a RAID made of 2 disks – sda and sdb.

When I physically remove the sda disk (without first executing mdadm --fail /dev/md* /dev/sda*; mdadm --remove /dev/md* /dev/sda*) and then insert the disk back, it never reappears as /dev/sda. Instead, it is named /dev/sdc. It seems that the kernel uses the first free drive letter, because it thinks that /dev/sda is still in use.

Another problem:

If /dev/sda fails so completely that the server no longer detects it, then after a reboot the second physical disk, previously named /dev/sdb, will become /dev/sda! This is very confusing, because how will you know which physical disk should be replaced?

The behaviour I would like to see is: the disk in disk bay 1 is named /dev/sda, the disk in disk bay 2 is named /dev/sdb, and so on - even if there is no disk in disk bay 1.

Then, if /dev/sda fails and the server is restarted, /dev/sda will simply be missing and the second disk will still be /dev/sdb. And I am able to tell the operator: "please go and exchange the disk in disk bay 1".

How to do this?

After 4 days of trial-and-error experiments, I have developed this solution:

The solution

Although it may sound simple, a Google search did not turn up any solution for this problem. First of all, you have to find the udev physical device path of your disks. Disk bays are connected to the mainboard in a fixed way, which corresponds to the udev physical device path.

Now the fun begins: the udev developers probably did not expect anyone to need the physical device path, so they change the location of this information quite frequently:

 * Historically, the physical device path was stored in the ENV{PHYSDEVPATH} udev key. This was deprecated and is no longer supported by the newest kernels. In Lenny, however, I was forced to use this old key, because all the other keys carrying the physical device path (e.g., ENV{ID_PATH}) did not work for me.
 * In the Squeeze kernel (2.6.30), the physical device path is available simply as DEVPATH.

Therefore, the resulting udev rules are different for each kernel version:

Bullseye (5.7.0)

Under systemd-udev it is not possible to rename block devices, so using NAME="sda" in udev rules doesn't work. Instead you must create symlinks, e.g. with SYMLINK+="mydiska". Whole trees of persistent-name symlinks are now also created by default under /dev/disk/by-{id,label,partuuid,path,uuid}.
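A systemd-udev equivalent of the rules shown below could therefore look like this (a sketch only; the DEVPATH pattern and the "bay1" symlink name are illustrative and must be adapted to your hardware):

```
# /etc/udev/rules.d/20-disk-bays.rules (systemd-udev sketch)
# the whole disk in bay 1 becomes available as /dev/bay1
KERNEL=="sd?", SUBSYSTEM=="block", DEVPATH=="*usb1/1-1*", SYMLINK+="bay1"
# its partitions become /dev/bay1-part1, /dev/bay1-part2, ...
KERNEL=="sd?*", SUBSYSTEM=="block", DEVPATH=="*usb1/1-1*", ATTR{partition}=="?*", SYMLINK+="bay1-part%n"
```

Note that the /dev/disk/by-path tree already provides very similar position-based symlinks out of the box.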

Squeeze (2.6.30)

For each disk, the path is printed by the udevadm command. Example for /dev/sda:

 udevadm info --query=path --name=/dev/sda

For discovering the fixed part of the path, check the output for the other disks in your system. E.g., this is a USB stick plugged into USB port 1:

 udevadm info --query=path --name=/dev/sdb

And this is the same USB stick in USB port 2:

 udevadm info --query=path --name=/dev/sdb
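The outputs of the two commands will differ only in the USB-port part of the path, for example (illustrative paths only - the exact PCI addresses and host numbers depend on your hardware):

```
/devices/pci0000:00/0000:00:1d.7/usb1/1-1/1-1:1.0/host4/target4:0:0/4:0:0:0/block/sdb
/devices/pci0000:00/0000:00:1d.7/usb1/1-2/1-2:1.0/host5/target5:0:0/5:0:0:0/block/sdb
```

The stable part identifying the port (here usb1/1-1 vs. usb1/1-2) is what the rules below match on.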

Now let's say we want to name any disk inserted into USB port 1 as /dev/sdu. So we create a file (e.g.) /etc/udev/rules.d/20-disk-bays.rules with this content:

KERNEL=="sd?", SUBSYSTEM=="block", DEVPATH=="*usb1/1-1*", NAME="sdu", RUN+="/usr/bin/logger My disk ATTR{partition}=$ATTR{partition}, DEVPATH=$devpath, ID_PATH=$ENV{ID_PATH}, ID_SERIAL=$ENV{ID_SERIAL}", GOTO="END_20_PERSISTENT_DISK"

KERNEL=="sd?*", ATTR{partition}=="1", SUBSYSTEM=="block", DEVPATH=="*usb1/1-1*", NAME="sdu%n", RUN+="/usr/bin/logger My partition parent=%p number=%n, ATTR{partition}=$ATTR{partition}"

LABEL="END_20_PERSISTENT_DISK"

Explanation:

 * ATTR{partition} - thanks to the GOTO, this is not needed. But before I discovered the GOTO possibility, I used this attribute to distinguish between the disk and its partitions. I found this attribute by comparing the output of the  udevadm info --attribute-walk --name /dev/sda1  and  udevadm info --attribute-walk --name /dev/sda  commands - I noticed that the partition has this extra attribute (the ENV{DEVTYPE} attribute used in Lenny did not work for me...)
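After creating the rules file, you can apply and check the rules without rebooting (a sketch; on older udev versions the reload option is spelled --reload-rules, on newer ones --reload):

```
# re-read the rules files
udevadm control --reload-rules
# replay "add" events for all block devices so the new rules take effect
udevadm trigger --subsystem-match=block
# dry-run the rules against one device and show what would be done
udevadm test /sys/class/block/sda
```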

Lenny (2.6.26)

KERNEL=="sd?", SUBSYSTEM=="block", ENV{PHYSDEVPATH}=="*1f.2/host0/target0:0:0/0:0:0:0*", NAME="sda", RUN+="/usr/bin/logger My disk ATTR{partition}=$ATTR{partition}, DEVPATH=$devpath, ID_PATH=$ENV{ID_PATH}, ID_SERIAL=$ENV{ID_SERIAL}", GOTO="END_20_PERSISTENT_DISK"

KERNEL=="sd?*", ENV{DEVTYPE}=="partition", SUBSYSTEM=="block", ENV{PHYSDEVPATH}=="*1f.2/host0/target0:0:0/0:0:0:0*", NAME="sda%n", RUN+="/usr/bin/logger My partition parent=%p number=%n, ATTR{partition}=$ATTR{partition}"

LABEL="END_20_PERSISTENT_DISK"

Explanation:

 * I found the value of ENV{PHYSDEVPATH} by issuing the  udevadm monitor --kernel --udev --environment  command in one shell window, and then executing the  udevadm trigger --subsystem-match=block  command in another window.

Testing the disk failures

To test the result of a complete disk failure, you can use this script:

#!/bin/bash
#place this script to /usr/local/bin/stop-disk
if [ -z "$1" ]; then
        echo "Usage: `basename $0` device"
        exit 1
fi
#extract the SCSI ID numbers from the output of lsscsi:
read -d ] A B C D < <(IFS=':'; echo $(lsscsi | grep $1))
#remove the "[" from the beginning of A:
A=${A##*[}   #quicker version: A=${A:1}
#stop the disk spinning
sg_start -i -v --stop $1
echo "Host adapter ID=$A, SCSI channel=$B, ID=$C, LUN=$D"
#and remove it from the SCSI bus
echo "scsi remove-single-device $A $B $C $D" > /proc/scsi/scsi

If you name this script /usr/local/bin/stop-disk, then you can stop a disk (e.g., sda) by issuing the command  stop-disk /dev/sda .

For spinning down the disk, this script uses the sg_start command, which is part of sg3-utils. If you don't have it installed, then  apt-get install sg3-utils  will do the job.

Now you can start the disk again using another script:

#!/bin/bash
if [ -z "$4" ]; then
        echo "Usage: `basename $0` HostAdapterID Channel ID LUN"
        exit 1
fi
echo "Host adapter ID=$1, SCSI channel=$2, ID=$3, LUN=$4"
echo "scsi add-single-device $1 $2 $3 $4" > /proc/scsi/scsi
#sg_start -i -v --start $1

TODO: try to process the device remove event in such a way that if a RAID component is removed, mdadm --fail is called first
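One possible approach for this TODO (an untested sketch, not part of the solution above) is to let udev react to the remove event and use mdadm's "detached" keyword, which marks as failed any array component whose device node has disappeared:

```
# /etc/udev/rules.d/21-raid-fail-detached.rules (hypothetical)
# on removal of any sd* disk, fail the detached components of md0;
# the array name /dev/md0 is an example and would need to be generalized
ACTION=="remove", KERNEL=="sd?", SUBSYSTEM=="block", RUN+="/sbin/mdadm /dev/md0 --fail detached"
```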


Hello, I have just tested Debian Squeeze. The software RAID isn't configured using block device names anymore. Rather, it is using something like:

DEVICE partition
HOMEHOST <system>
ARRAY /dev/md0 UUID=01230123:23452345:9876986:56785678

Should this page be consistent with the Debian-Installer's new way of doing it?

Tomas Dulik replies:

Hello, the UUID is the same on all partitions which are part of a Linux SW RAID volume.

So if your /dev/md0 has the UUID=01230123:23...(etc.), then all physical disk partitions which are components of md0 will have the same UUID. The reason is simple - if you want to migrate the physical disks holding your valuable data to another machine, the other machine must be able to easily re-assemble the array according to the UUIDs of its components.

So the UUID is not usable for identification of the physical disks used in the RAID.

The only data that can be used in udev rules (apart from the physical device path) are the disk vendor and model ID or name and the disk serial number. But again, these are not so interesting, because in case of a disk failure I want to be able to hotswap the failed disk with ANY new disk (made by any vendor) and use the new disk immediately, without digging out its vendor/model/serial number and putting it into the udev rules.
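For completeness, such a serial-number-based rule would look roughly like this (the serial string is a made-up example; this is exactly the approach argued against above, because a replacement disk would require editing the rule):

```
# pin a symlink to one specific drive by its serial number (illustration only)
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VENDOR_MODEL_SN1234567", SYMLINK+="mydisk"
```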

So the only udev data that are constant and independent of the physical disks are the physical device paths.

The physical device paths will change only if you change your machine. Then the udev rules will not match, so the disks will have default names until you correct your udev rules.

See also