Multiarch paths and toolchain implications

Overview and goals

Binary files in packages are usually platform-specific; that is, they work only on the architecture they were built for. The packaging system therefore provides platform-specific versions of such packages. Currently, these versions install platform-specific files to the same file system locations, which implies that only one of them can be installed on a system at any given time.

The goal of the "multiarch" effort is to lift this limitation, and allow multiple platform-specific versions of the same package to be installed into the same file system at the same time. In addition, each package should install to the same file system locations no matter on which host architecture the installation is performed (that is, no rewriting of path names during installation).

This approach could address a number of existing situations that are not handled well by today's packaging mechanisms:

In order to support this, platform-specific versions of a multiarch package must have the property that each file is either 100% identical across platforms, or else installed to a separate location in the file system.

The latter is the case at least for executable files, shared libraries, static libraries, and object files, and to some extent perhaps header files. This means that in a multiarch world, such files must move to locations in the file system different from where they reside now. This raises a variety of issues that must be solved; in particular, most of the existing locations are defined by the FHS and/or are assumed to have well-known values by various system tools.

In this document, I want to focus on the impact of file system hierarchy changes on two tasks in particular:

In the following two sections, I'll provide details on how file system paths are currently handled in these two areas. In the final section, I'll discuss suggestions for how to extend the current behavior to support multiarch paths.

Loading/running an executable

Running a new executable starts with the execve() system call. The Linux kernel supports execution of a variety of executable types; the most commonly used are

The binary itself is passed via a full (or relative) pathname to the execve call; the kernel makes no file system hierarchy assumptions. By convention, callers of execve usually search well-known path locations (via the PATH environment variable) when locating executables. How to adapt these conventions for multiarch is beyond the scope of this document.
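As an illustration of this convention (not of any kernel behavior), a minimal PATH search along the lines of what execvp() does might be sketched like this:

```python
import os

def find_executable(name, search_path=None):
    # Mimic the conventional PATH search performed by callers of
    # execve() such as execvp(); the kernel itself never searches.
    if "/" in name:
        # An explicit path is used as-is, without any search.
        return name if os.access(name, os.X_OK) else None
    for d in (search_path or os.environ.get("PATH", "")).split(":"):
        # An empty PATH element conventionally means the current directory.
        candidate = os.path.join(d or ".", name)
        if os.access(candidate, os.X_OK):
            return candidate
    return None

print(find_executable("sh"))
```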

With #! scripts and binfmt_misc handlers, the kernel will involve a user-space helper to start execution. The location of these handlers themselves, and of secondary files they in turn may require, is provided by user space (e.g. in the #! line, or in the parameters installed into the binfmt_misc file system). Again, adapting these path names is beyond the scope of this document.

ELF interpreter

For native ELF executables, there are two additional classes of files involved in the initial load process: the ELF interpreter (dynamic loader), and shared libraries required by the executable.

The ELF interpreter name is provided in the PT_INTERP program header of the ELF executable to be loaded; the kernel makes no file name assumptions here. This program header is generated by the linker when performing the final link of a dynamically linked executable; it uses the file name passed via the -dynamic-linker argument. (Note that while the linker will fall back to some hard-coded path if that argument is missing, on many Linux platforms this default is in fact incorrect and does not correspond to an ELF interpreter actually installed in the file system on current distributions. Passing a correct -dynamic-linker argument is therefore mandatory.)
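To make the PT_INTERP mechanism concrete, here is a sketch that extracts the recorded interpreter path from an ELF binary (a simplified version of what readelf -l reports as the "program interpreter"; the offsets used are for 64-bit little-endian ELF files only):

```python
import struct
import sys

PT_INTERP = 3

def elf_interpreter(path):
    # Extract the dynamic loader path recorded in the PT_INTERP
    # program header. Sketch only: handles 64-bit little-endian
    # ELF files and performs minimal validation.
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF" or ident[4] != 2 or ident[5] != 1:
            return None  # not a 64-bit little-endian ELF file
        f.seek(0x20)                                # e_phoff
        (e_phoff,) = struct.unpack("<Q", f.read(8))
        f.seek(0x36)                                # e_phentsize, e_phnum
        e_phentsize, e_phnum = struct.unpack("<HH", f.read(4))
        for i in range(e_phnum):
            base = e_phoff + i * e_phentsize
            f.seek(base)
            (p_type,) = struct.unpack("<I", f.read(4))
            if p_type != PT_INTERP:
                continue
            f.seek(base + 0x08)                     # p_offset
            (p_offset,) = struct.unpack("<Q", f.read(8))
            f.seek(base + 0x20)                     # p_filesz
            (p_filesz,) = struct.unpack("<Q", f.read(8))
            f.seek(p_offset)
            return f.read(p_filesz).rstrip(b"\0").decode()
    return None

print(elf_interpreter(sys.executable))
```

On a typical Linux system, running this on a dynamically linked binary prints the installed dynamic loader path; for a non-ELF or statically linked file it returns None.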

In normal builds, the -dynamic-linker switch is passed to the linker by the GCC compiler driver. This in turn gets the proper argument to be used on the target platform from the specs file; the (correct) default value is hard-coded into the GCC platform back-end sources. On bi-arch platforms, GCC will automatically choose the correct variant depending on compile options like -m32 or -m64. Again, the logic to do so is hard-coded into the back-end. Unfortunately, various bi-arch platforms use different schemes today:





















Dynamic libraries

Once the ELF interpreter is loaded, it will go on to load the dynamic libraries required by the executable. For this discussion, we will consider only the case where the interpreter is the one provided by glibc.

As opposed to the kernel, glibc does in fact *search* for libraries, and makes a variety of path name assumptions while doing so. It considers paths encoded via -rpath and via the LD_LIBRARY_PATH environment variable, and knows of certain hard-coded system library directories. It also provides a mechanism to automatically choose the best out of a number of libraries available on the system, depending on which capabilities the hardware/OS provides.
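The resulting search order can be sketched as follows (simplified; the real loader also honors DT_RUNPATH and consults /etc/ld.so.cache before falling back to the default directories):

```python
def loader_search_dirs(rpath_dirs, ld_library_path, system_dirs):
    # Simplified glibc search order: DT_RPATH entries first, then
    # the LD_LIBRARY_PATH components, then the hard-coded system
    # library directories.
    dirs = list(rpath_dirs)
    dirs += [d for d in ld_library_path.split(":") if d]
    dirs += system_dirs
    return dirs

print(loader_search_dirs(["/opt/app/lib"], "/home/me/lib", ["/lib", "/usr/lib"]))
```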

Specifically, glibc determines a list of search directory prefixes, and a list of capability suffixes. The directory prefixes are:

The capability suffixes are determined from the following list of capabilities:

The full list of capability suffixes is created from the list of supported capabilities by forming every sub-sequence. For example, if the platform is "i686" and supports the important hwcap "sse2" as well as TLS, the list of suffixes is:








The total list of directories to be searched is then formed by concatenating every directory prefix with every capability suffix. Various caching mechanisms are employed to reduce the run-time overhead of this large search space.
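The suffix construction and the final prefix-times-suffix expansion can be sketched as follows (illustrative only: the relative ordering of capabilities, and glibc's actual priority rules, are more involved than shown here):

```python
from itertools import combinations
import os

def capability_suffixes(caps):
    # Form every non-empty sub-sequence of the capability list,
    # preserving order and joining elements with "/", plus the
    # empty suffix so the bare directory is searched as well.
    suffixes = []
    for n in range(len(caps), 0, -1):
        for combo in combinations(caps, n):
            suffixes.append("/".join(combo))
    suffixes.append("")
    return suffixes

def search_dirs(prefixes, caps):
    # Concatenate every directory prefix with every capability suffix.
    return [os.path.join(p, s) if s else p
            for p in prefixes
            for s in capability_suffixes(caps)]

print(capability_suffixes(["sse2", "i686", "tls"]))
```

For three capabilities this yields 2^3 = 8 suffixes (including the empty one), which illustrates why caching is needed to keep the run-time search overhead manageable.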

Note: This method of searching capability suffixes is employed only by glibc at run time; it is unknown to the toolchain at compile time. This implies that an executable will have been linked against the "base" version of a library, with a "capability-specific" version of the library substituted only at run time. Therefore, all capability-specific versions must be ABI-compatible with the base version; in particular, they must provide the same soname and symbol versions, and they must use compatible function calling conventions.

Building an executable from source

For this discussion, we only consider GCC and the GNU toolchain, installed into the usual locations as system toolchain, and in the absence of any special-purpose options (-B) or environment variables (GCC_EXEC_PREFIX, COMPILER_PATH, LIBRARY_PATH ...).

However, we do consider the three typical modes of operation:

Directory prefixes

During the build process, the toolchain performs a number of searches for files. In particular, it looks for (executable) components of the toolchain itself; for include files; for startup object files; and for static and dynamic libraries.

In doing so, the GNU toolchain considers locations derived from any of the following "roots":

Multilib suffixes

In addition to the base directory paths referred to above, the GNU toolchain supports the so-called "multilib" mechanism, which is intended to provide support for multiple incompatible ABIs on a single platform. It is implemented by hard-coding into the GCC back-end both the information about which compiler options cause an incompatible ABI change and a "multilib" directory suffix corresponding to each such option. For example, on PowerPC the -msoft-float option is associated with the multilib suffix "nof", which means libraries using the soft-float ABI (passing floating point values in integer registers) can be provided in directories like:



The multilib suffix is appended to all directories searched for libraries by GCC and passed via -L options to the linker. The linker itself does not have any particular knowledge of multilibs, and will continue to consult its default search directories if a library is not found in the -L paths. If multiple orthogonal ABI-changing options are used in a single compilation, multiple multilib suffixes can be used in series.
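The mechanism can be sketched as follows (the option-to-suffix table below is a hypothetical illustration; the real tables are hard-coded per GCC back-end):

```python
import os

# Hypothetical per-target multilib table (option -> directory suffix).
MULTILIB_SUFFIX = {"-msoft-float": "nof", "-m32": "32"}

def multilib_dirs(base_dirs, options):
    # Append the multilib suffix of every ABI-changing option, in
    # series, to each library directory that GCC passes to the
    # linker via -L.
    suffixes = [MULTILIB_SUFFIX[o] for o in options if o in MULTILIB_SUFFIX]
    return [os.path.join(d, *suffixes) for d in base_dirs]

print(multilib_dirs(["/usr/lib"], ["-msoft-float"]))
```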

As a special consideration, some compiler options may correspond to multiple incompatible ABIs that are already supported by the OS, but using directory names different from what GCC would use internally. As the typical example, on bi-arch systems the OS will normally provide the default 64-bit libraries in /usr/lib64, while also providing 32-bit libraries in /usr/lib. For GCC, on the other hand, 64-bit is the default (meaning no multilib suffix), while the -m32 option is associated with the multilib suffix "32".

To solve this problem, the GCC back-end may provide a secondary OS multilib suffix which is used in place of the primary multilib suffix for all library directories derived from *system* paths as opposed to GCC paths. For example, in the typical bi-arch setup, the -m32 option is associated with the OS multilib suffix "../lib". Given that the primary system library directory is /usr/lib64 on such systems, this has the effect of causing the toolchain to search

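The effect of a relative OS multilib suffix like "../lib" amounts to simple path normalization:

```python
import os.path

def os_multilib_dir(system_libdir, os_suffix):
    # The OS multilib suffix replaces the primary multilib suffix
    # for system paths; a relative suffix such as "../lib" folds
    # the search back onto the sibling library directory.
    return os.path.normpath(os.path.join(system_libdir, os_suffix))

print(os_multilib_dir("/usr/lib64", "../lib"))   # -> /usr/lib
```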
Detailed directory search rules

The following rules specify in detail which directories are searched at each phase of compilation, using these parameters:

Directories searched by the compiler driver for executables (cc1, as, ...):

GCC directories





Toolchain directories



Directories searched by the compiler for include files:

G++ directories (when compiling C++ code only)




Prefix directories (if distinct from system directories)

[native only]


GCC directories



Toolchain directories

[cross only]



System directories





Directories searched by the compiler driver for startup files (crt*.o):

GCC directories


Toolchain directories



Prefix directories

[native only]


[native only] $(libdir)[/$(multi_os)]

System directories









Directories searched by the linker for libraries:

Prefix directories (if distinct from system directories)

[native only]


Toolchain directories



System directories







Non-biarch directories (if distinct from the above)

[native only]










Note: In addition to these directories built into the linker, if the linker is invoked via the compiler driver, it will also search the same list of directories specified above for startup files, because those are implicitly passed in via -L options by the driver. Also, when searching for dependencies of shared libraries, the linker will attempt to mimic the search order used by the dynamic linker, including DT_RPATH/DT_RUNPATH and LD_LIBRARY_PATH lookups.

Multiarch impact on the toolchain

The current multiarch proposal is to move the system library directories to a new path including the GNU target triplet, that is, instead of using



the system library directories are now called



At this point, there is no provision for multiarch executable or header file installation.

What are the effects of this renaming on the toolchain, following the discussion above?

ELF interpreter

The ELF interpreter would now reside in a new location, e.g.


This allows interpreters for different architectures to be installed simultaneously, and removes the need for the various bi-arch hacks. The change would imply modifying the GCC back-end, and possibly the binutils ld default as well (even though that is currently not normally used), to build new executables using the new ELF interpreter install path.


Shared library search paths

According to the primary multiarch assumption, the system library search paths are modified to include the multiarch target string:



This requires modifications to glibc's loader (which can possibly be provided via platform back-end changes). Backwards compatibility most likely requires that both the new multiarch location and the old location be searched.
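Such a backwards-compatible search order might be sketched like this (the triplet "x86_64-linux-gnu" is only an example; the exact multiarch string is among the open questions):

```python
def multiarch_search_dirs(prefixes, multiarch):
    # Backwards-compatible order: search the new multiarch
    # subdirectory first, then fall back to the legacy bare
    # directory.
    dirs = []
    for p in prefixes:
        dirs.append(f"{p}/{multiarch}")
        dirs.append(p)
    return dirs

print(multiarch_search_dirs(["/lib", "/usr/lib"], "x86_64-linux-gnu"))
```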

Open questions:

GCC and toolchain directory paths

The core multiarch spec only refers to system directories. What about directories provided by GCC and the toolchain? Note that for a cross-development setup, we may have various combinations of host and target architectures. In this case $(host-multiarch) refers to the multiarch identifier for the host architecture, while $(target-multiarch) refers to the one for the target architecture. In addition $(target) refers to the GNU target triplet as currently used by GCC paths (which may or may not be equal to $(target-multiarch), depending on how the latter will end up being standardized).

Multiarch and cross-compilation

Using the paths discussed in the previous section, we have several options for how to perform cross-compilation on a multiarch system. The most obvious option is to build a cross-compiler with a sysroot equal to "/". This means the compiler will use target libraries and header files as installed by unmodified distribution multiarch packages for the target architecture. This should ideally be the default cross-compilation setup on a multiarch system. In addition, it is still possible to build cross-compilers with a different sysroot, which may be useful if you want to install target libraries you build yourself into a non-system directory (and do not want to require root access to do so).


Multiarch and multilib

The multilib mechanism provides a way to support multiple incompatible ABI versions on the same ISA. In a multiarch world, this is supposed to be handled via different multiarch prefixes, enabling the package management system to handle libraries for all those variants. How can we reconcile the two systems?

It would appear that the way forward is based on the existing "OS multilib suffix" mechanism. GCC already expects to handle OS-provided naming conventions for where incompatible library versions are to be found. In a multiarch system, the straightforward solution would appear to be to use the multiarch names as-is as OS multilib suffixes.

In fact, this could even handle the *default* multiarch name without requiring any further changes to GCC. For example, in a bi-arch amd64 setup, the GCC back-end might register "x86_64-linux" as the default OS multilib suffix, and "i386-linux" as the OS multilib suffix triggered by the (ABI-changing) -m32 option. Without any further change, GCC would now search

/usr/lib/x86_64-linux
/usr/lib/i386-linux

as appropriate, depending on the command line options used. (This assumes that the main libdir is /usr/lib, i.e. no bi-arch configure options were used.)
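This scheme can be sketched as follows (the suffix table mirrors the example above and is still hypothetical):

```python
import os.path

# Hypothetical bi-arch amd64 mapping: the default and -m32 OS
# multilib suffixes are set to the multiarch names.
OS_MULTILIB = {None: "x86_64-linux", "-m32": "i386-linux"}

def system_libdir(options, libdir="/usr/lib"):
    # Pick the OS multilib suffix from the ABI-changing options and
    # append it to the main system library directory.
    key = "-m32" if "-m32" in options else None
    return os.path.normpath(os.path.join(libdir, OS_MULTILIB[key]))

print(system_libdir([]))         # /usr/lib/x86_64-linux
print(system_libdir(["-m32"]))   # /usr/lib/i386-linux
```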

Note that GCC would still use its own internal multilib suffixes for its private libraries, but that seems to be OK.

Caveat: This would imply that those multilib names, and their association with compiler options, become hard-coded in the GCC back-end. However, this seems to have been necessary already (see above).


From the preceding discussion, it would appear that a multiarch setup allowing parallel installation of run-time libraries and development packages, and thus providing support for running binaries of different native ISAs and ABI variants as well as for cross-compilation, might be feasible by implementing the following set of changes. Note that parallel installation of the main *executable* files of multiarch packages is not (yet) supported.

This document is primarily the work of Ulrich Weigand, and came out of a discussion at the Linaro sprint in Prague, July 2010.

I'd appreciate any feedback or comments! Have I missed something?