An external GPU (eGPU) is a standalone graphics card together with a power supply unit (PSU) and some kind of adapter (in the form of a dock or an enclosure) that is connected to a host computer, usually a laptop, using a single flexible cable. This page gives a high-level overview of the issues and benefits of connecting various brands of GPUs via various types of connections to a Debian system. For OS-agnostic in-depth reviews, analyses, guides, build examples etc. refer to the https://egpu.io site.

Discuss the content of this page on the egpu.io forum.


Overview

Due to space and power-consumption limitations, many laptop models are equipped only with fairly modest iGPUs (GPUs integrated on a single chip with a CPU) that offer limited computing power. This may be a limiting factor in tasks such as video processing, gaming, running local LLMs and others. OTOH, desktop systems that are able to accommodate powerful dGPUs (discrete or dedicated GPU chips or whole separate cards) are not portable, which limits work flexibility, including for tasks that could be performed without a powerful GPU. An eGPU is a sort of middle ground between these two: it lets you perform "light" tasks utilizing your laptop's portability, and turn the laptop into a machine with almost desktop-class capabilities by connecting an eGPU to it when needed.

Most eGPU solutions are separately sold components (GPU, PSU, adapter), requiring users to put them together themselves, but ready-to-use 3-in-1 solutions are also sometimes available.

Compared to the same card used as a dGPU in a desktop setup, a given eGPU will have exactly the same computing power. However, depending on the type of connection used, the speed of communication between the CPU and the GPU (VRAM access) will usually be limited at least to some extent. The impact of this limitation on overall performance depends heavily on the type of task being performed. Refer to the benchmarks gathered in the "Builds" section of the egpu.io site.


Connection types and interfaces

As of 2025, on consumer computers "low-level" peripheral devices such as GPUs are connected to the rest of the system using a PCIe bus. A motherboard and a CPU have a certain number of physical PCIe lanes available, and each soldered device and each PCIe slot or connector has a few of these lanes assigned exclusively to itself. The transfer capacity of a single lane depends on the PCIe version and roughly doubles with each subsequent version, so for example 8 PCIe 3.0 lanes (x8 gen3) provide roughly the same transfer capacity as 4 PCIe 4.0 lanes (x4 gen4), which is slightly less than 64 Gbps.

Standard PCIe slots on desktop motherboards provide x1, x4, x8, or x16 lanes, and the more lanes they provide, the longer their minimum physical length needs to be. PCIe slots are compatible between sizes: it is possible to insert a card with a smaller interface into a bigger slot, for example an x8 card into an x16 slot, and it will work normally at the speed provided by x8 lanes. If a slot has an open end, it is also possible to insert a card with a bigger interface into it, and it will work at the speed provided by the number of lanes of the slot. A similar solution to open-ended slots are bigger physical slots with only a fraction of the lanes connected, for example a slot of x16 physical size with only x8 lanes electrically connected.

The PCIe interface is also fully backwards compatible, providing the overall speed of the slowest component, so it is possible to insert a gen3 device into a gen4 slot or vice versa and everything will work fine, but at gen3 speed.
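
The link width and PCIe generation a device actually negotiated can be checked from a running Linux system with lspci. A minimal sketch, assuming the GPU sits at bus address 0b:00.0 (substitute the address reported by lspci on your system); LnkCap shows the maximum the device supports, LnkSta the currently negotiated speed and width:

# compare the supported (LnkCap) and negotiated (LnkSta) PCIe speed/width
sudo lspci -s 0b:00.0 -vv | grep -E 'LnkCap:|LnkSta:'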

As of 2025 most (if not all) consumer GPU cards use the standard x16 PCIe slot interface. Therefore that is the physical size of the slot provided on the "GPU end" of all types of eGPU adapters described here, although only a fraction of the lanes is actually connected in most cases (most commonly x4). The highest PCIe version supported by a given card varies between models and depends mostly on how old the model is.

A wide variety of interface types, described in the subsections below, is used on the "host end" of eGPU adapters. Ultimately, however, it is either a direct connector to a host computer's PCIe lanes or some other interface that is capable of tunneling the PCIe protocol. The list below is far from comprehensive and describes only the currently most common types.


Fixed-cable connections

This connection type uses interfaces that are meant to be connected and disconnected very rarely, usually only during an initial setup and hardware upgrades. As such it only makes sense for systems that are used at a fixed location, which in many cases defeats the original purpose of portability. Nevertheless, people sometimes use spare laptops as their desktops, in which case extending their capabilities with a fixed-cable eGPU makes perfect sense.

This connection type usually provides the highest PCIe signal integrity, something that some pluggable connection types struggle with.

From an OS perspective, GPUs connected this way are virtually indistinguishable from standard dGPUs, and as such no additional software setup is needed compared to a dGPU.

M.2

M.2 is a compact connector that, among other uses, serves as an interface to PCIe buses in contemporary laptops and desktops. Depending on "keying" it provides up to x4 PCIe lanes: most commonly "two times x1" (A, E, or A+E keys) or x4 (M key).

Example adapters:

Standard PCIe slot

Available only on desktop and server motherboards. Such adapters may be useful if there are not enough x16 slots available and the smaller ones are not open-ended. Other reasons may be insufficient physical space to accommodate a GPU card inside the case, or a case without enough cooling capacity.

Example adapters:


Pluggable connections

This type refers to interfaces that may be easily connected and disconnected, but only when the system is powered off. From an OS perspective, GPUs connected this way are also seen as dGPUs.

OCuLink

OCuLink is an interface designed specifically to expose PCIe lanes as an external port. It uses an SFF-8612 socket plus an SFF-8611 plug as its connector, which comes in two sizes, 4i and 8i, exposing x4 and x8 lanes respectively. Currently still very few consumer devices are natively equipped with an OCuLink port; however, M.2 M-key to OCuLink 4i adapters are easily available and usually quite cheap.

The OCuLink spec does not define any norms for maximum latency or signal noise levels, which leads to some cables and adapters that are formally "OCuLink compliant" yet degrade signal integrity beyond what many host motherboards are able to tolerate. This is especially true when M.2 to OCuLink adapters are used, as M.2 M-key slots are usually originally intended for NVMe drives, which generally meet very strict signal integrity standards. To address this problem, some OCuLink eGPU adapters are equipped with PCIe redrivers that improve signal integrity. It is speculated that using M.2 to OCuLink adapters that also include redrivers (such as the Minerva DP6303) could improve things even further, but not enough data has been gathered so far to either prove or disprove this theory.
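
One hedged way to spot signal-integrity problems on such a link is to look for PCIe Advanced Error Reporting (AER) messages in the kernel log; correctable errors that keep accumulating during eGPU use often point to a marginal cable or adapter:

# look for corrected/uncorrected PCIe errors reported by the kernel (AER)
sudo dmesg | grep -iE 'aer|pcie bus error'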

The PCIe spec, and as a consequence also OCuLink, defines hot-plugging as an optional feature, but it requires special hardware support on both sides, so in the general case OCuLink does not support hot-plugging. As of February 2025 the only consumer computers supporting OCuLink hot-plugging are Lenovo laptops with TGX ports.

Many new OCuLink solutions are currently being developed at a fast pace: see the dedicated egpu.io thread to stay up to date.

Example adapters:


Hot-pluggable connections

This type of connection allows an eGPU to be connected and disconnected while the system is running, triggering OS-level mechanisms that handle such events. While plugging in is mostly plug-and-play, unplugging currently requires first terminating all processes using the eGPU. Failure to do so usually results in software crashes at various layers. Depending on the specific stack, these may range from fatal (kernel panic), through loss of current data, to fully recoverable. See the "Software support for hot-plugging" section for details.

USB-C based (Thunderbolt 3+ and USB 4+)

This interface family tunnels PCIe over the Thunderbolt/USB4 protocol, carried over USB-C connectors and cables. On x86_64 computers this was popularized on Intel-based machines with Thunderbolt 3 controllers and was later included in USB4 and adopted by AMD. Earlier versions of USB are not capable of PCIe tunneling even when using a USB-C connector. All adapters and ports within this family are usually compatible with each other, but the transfer capacity may vary from about 12 to 30 Gbps depending on the exact combination: refer to the performance table on the egpu.io site. If quality cables are used (active and/or well shielded), this interface family provides very good PCIe signal integrity. As of 2025 almost every contemporary laptop model is equipped with either a Thunderbolt or a USB4 port, making this interface family ubiquitous.

Example adapters:

TGX

Thinkbook Graphics eXtension (TGX) is a Lenovo-designed interface based on OCuLink 4i that supports hot-plugging and includes redrivers. As such it is interoperable with OCuLink 4i to some extent. As of February 2025, TGX docks and Thinkbooks equipped with a TGX port are sold only in China.

ExpressCard

The ExpressCard interface was popular on laptops during the first decade of this century. It exposes a single x1 gen2 lane (5 GT/s with 8b/10b encoding, i.e. 4 Gbps raw), providing about 3.1 Gbps of transfer capacity in practice.

Example adapters:


Software support for hot-plugging

Nvidia with proprietary driver

This section refers to NvidiaGraphicsDrivers.

As per the Nvidia driver README file, hot-unplugging an Nvidia eGPU while it is in use is generally not supported and will cause various levels of crashes. The nvidia-smi command may be used to obtain the list of processes currently using Nvidia GPUs.
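
For example, a quick way to see which processes still have to be terminated before unplugging (a sketch using standard nvidia-smi invocations):

# list PIDs of compute processes currently using Nvidia GPUs
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
# graphics clients are visible in the "Processes" table of the default output
nvidia-smi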

Kernel module status

Running X11 on an eGPU

As described in the above README file, the X11 nvidia driver by default refuses to start on hot-pluggable eGPUs to avoid crashing on unplug. To force it to start on an eGPU, the eGPU must be explicitly pointed to by its PCI BusID and the AllowExternalGpus option needs to be set to true (in addition to AllowEmptyInitialConfiguration, which is useful for multi-GPU setups involving Nvidia). For example one could have an /etc/X11/xorg.conf.d/20-egpu-device.conf file like this:

Section "Device"
        Identifier "eGPU"
        Driver "nvidia"
        Option "AllowEmptyInitialConfiguration" "true"
        Option "AllowExternalGpus" "true"
        BusID "PCI:11:0:0"
EndSection

The BusID may be obtained from the output of the lspci |grep VGA command: the numbers from the first column of the corresponding row must be converted from hex to decimal. For example, here is the lspci output that rendered the above 11:0:0 ID:

$ lspci |grep VGA
00:02.0 VGA compatible controller: Intel Corporation Iris Plus Graphics 640 (rev 06)
0b:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)
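
The hex-to-decimal conversion can be done with a quick shell one-liner; for example, for the 0b:00.0 address shown above:

# convert the hexadecimal lspci address 0b:00.0 to the decimal BusID format expected by Xorg
printf 'PCI:%d:%d:%d\n' 0x0b 0x00 0x0
# prints: PCI:11:0:0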

Running X11 on an iGPU and using an eGPU for offloading

For general info on offloading see the "PRIME" section of the "Optimus" page.

As of driver version 570, even if X11 is initially started only on the iGPU/dGPU, when an eGPU is connected the Xorg process will "attach itself" to that eGPU to allow extending the desktop to the connected monitors. Unfortunately "detaching" from eGPUs is not implemented yet, so in that case it is necessary to close the whole X session before unplugging the eGPU, which defeats the original purpose of running it on the iGPU. The workaround is to prevent attaching to eGPUs by setting the AutoAddGPU option to false in the ServerFlags section, for example in an additional /etc/X11/xorg.conf.d/10-server-flags.conf file:

Section "ServerFlags"
        Option "AutoAddGPU" "false"
EndSection
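
Whether the eGPU got attached to the running X server as an additional provider can be checked with xrandr (from the x11-xserver-utils package); with AutoAddGPU disabled, the eGPU should not appear in this list:

# list GPUs known to the running X server
xrandr --listproviders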

Wayland

ToDo: gather and add info


Nvidia with Nouveau driver

ToDo: gather and add info


AMD

Before unplugging, all processes using the eGPU must be terminated. Next, the amdgpu kernel driver needs to be removed or unbound from the eGPU by writing its PCIe bus ID to the /sys/bus/pci/drivers/amdgpu/unbind file:

echo "0000:7:0.0" >/sys/bus/pci/drivers/amdgpu/unbind

The bus ID may be obtained from the output of the lspci |grep VGA command: the (hexadecimal) numbers from the first column of the corresponding row must be prefixed with 0000: (assuming a single PCIe domain), keeping the zero padding, e.g. 07:00.0 becomes 0000:07:00.0. Unbinding the driver is preferred over removing the module, especially when there are multiple AMD GPUs in the system (for example if the iGPU is also from AMD).
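
A minimal sketch of the whole unplug sequence, assuming the eGPU is the amdgpu device at 0000:07:00.0 and exposes the /dev/dri/card1 and /dev/dri/renderD128 nodes (the actual device nodes differ between systems):

# 1. find (and then terminate) processes still using the eGPU's DRM nodes
sudo fuser -v /dev/dri/card1 /dev/dri/renderD128
# 2. unbind the amdgpu driver from the eGPU only
echo "0000:07:00.0" | sudo tee /sys/bus/pci/drivers/amdgpu/unbind
# 3. the eGPU can now be unplugged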

Running X11 on an eGPU

If no config is provided, X will by default choose to run on a built-in GPU. To run X on an eGPU, it is sufficient to define a Device section for it and point to it by its PCI BusID. For example one could have an /etc/X11/xorg.conf.d/20-egpu-device.conf file like this:

Section "Device"
        Identifier "eGPU"
        Driver "amdgpu"
        BusID "PCI:7:0:0"
EndSection

The BusID uses decimal format, so the numbers need to be converted from hex when using lspci |grep VGA to obtain them.

Wayland

ToDo: gather and add info


Intel

ToDo: gather and add info


Common

Thunderbolt 3 authorization

Depending on BIOS/UEFI settings, when connecting an eGPU via a Thunderbolt 3 port, device authorization may be required first: install the bolt package and refer to the boltctl manpage for details.
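
A minimal sketch of the authorization workflow with boltctl (device names and UUIDs will obviously differ):

# list Thunderbolt devices and their authorization status
boltctl list
# authorize a device for the current boot only
sudo boltctl authorize <device-uuid>
# or enroll it so it gets authorized automatically in the future
sudo boltctl enroll <device-uuid>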

Running X11 on an eGPU

When running X on an eGPU, it is possible to also utilize monitors connected to other GPUs of the system (like a laptop's built-in panel connected to its iGPU). Such additional GPUs need to be listed in the Screen section as a GPUDevice, for example in an /etc/X11/xorg.conf.d/80-screen.conf file alongside an /etc/X11/xorg.conf.d/20-egpu-device.conf file with an appropriate eGPU Device config:

Section "Device"
        Identifier "Intel iGPU"
        Driver "modesetting"
        BusID "PCI:0:2:0"
EndSection

Section "Screen"
        Identifier "dualGPU"
        Device "eGPU"  ## primary GPU performing the rendering, defined in /etc/X11/xorg.conf.d/20-egpu-device.conf file
        GPUDevice "Intel iGPU"  ## secondary GPU providing additional monitors
EndSection
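
After starting such a session, it can be verified that rendering indeed happens on the eGPU; a rough check using glxinfo from the mesa-utils package:

# the OpenGL renderer string should name the eGPU, not the iGPU
glxinfo -B | grep -iE 'renderer|vendor'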


Choosing and connecting a PSU

Many of the eGPU adapters available on the market come without a PSU, forcing users to make one more choice. PSUs are relatively simple devices compared to CPUs or GPUs, but their role in a system is no less critical. Therefore choosing a PSU should be planned accordingly:

When connecting a PSU to a GPU (or a PSU to anything else, for that matter), it is absolutely critical to make sure that the connector is fully plugged in and its accidental-unplug prevention clip is locked. Failure to do so will likely lead to the GPU catching fire.


CategoryLaptopComputer CategoryHardware