An external GPU (eGPU) is a standalone graphics card together with a power supply unit (PSU) and some kind of adapter (in the form of a dock or an enclosure) that is connected to a host computer, usually a laptop, using a single flexible cable. This page gives a high-level overview of the issues and benefits of connecting various brands of GPUs via various types of connections to a Debian system. For OS-agnostic in-depth reviews, analysis, guides, build examples etc., refer to the https://egpu.io site.
Discuss the content of this page on the egpu.io forum.
Introduction
Due to space and power-consumption limitations, many laptop models are equipped only with rather modest iGPUs (GPUs integrated on a single chip with a CPU) with limited computing power. This may be a limiting factor in tasks such as video processing, gaming, local LLMs and others. On the other hand, desktop systems that are able to accommodate powerful dGPUs (discrete or dedicated GPU chips or whole separate cards) are not portable, which limits work flexibility, including for tasks that could be performed without a powerful GPU. An eGPU is a sort of middle ground between these two: it allows performing "light" tasks utilizing the laptop's portability, while turning the laptop into a machine with almost desktop capabilities whenever an eGPU is connected to it.
Most eGPU solutions are separately sold components (GPU, PSU, adapter), requiring users to put them together themselves, but ready-to-use 3-in-1 solutions are also sometimes available.
Compared to the same card used as a dGPU in a desktop setup, a given eGPU will have exactly the same computing power. However, depending on the type of connection used, the speed of communication between the CPU and the GPU (VRAM access) will usually be limited at least to some extent: see the egpu.io speed measurements table. The impact of this limitation on the overall performance heavily depends on the type of task being performed. Fairly extensive benchmarks have recently been performed by Puget Systems. Also refer to the benchmarks gathered in the "Builds" section of the egpu.io site.
Connection types and interfaces
As of 2025, on consumer computers, "low-level" peripheral devices such as GPUs are connected to the rest of the system using a PCIe bus. A motherboard and a CPU have a certain number of physical PCIe lanes available, and each soldered device and each PCIe slot or connector has a few of these lanes assigned, usually exclusively, to itself (some desktop motherboards are capable of reassigning lanes between PCIe slots depending on the specifics of connected devices, but this is rather an exception).
The transfer capacity of a single lane depends on the PCIe version and roughly doubles with each subsequent version, so for example 8 PCIe-3.0 lanes (x8 gen3) provide roughly the same transfer as 4 PCIe-4.0 lanes (x4 gen4), which is slightly less than 64Gbps.
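As a rough reference, the approximate effective per-lane throughput (after encoding overhead; exact figures vary slightly between implementations) is:

gen1: ~2 Gbps, gen2: ~4 Gbps, gen3: ~8 Gbps, gen4: ~16 Gbps, gen5: ~32 Gbps

For example, the x4 gen4 link most common on eGPU adapters provides about 4 * 16 = 64Gbps (slightly less in practice).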
Standard PCIe slots on desktop motherboards provide either x1, x4, x8, or x16 lanes, and the more lanes they provide, the longer their minimum physical length needs to be. PCIe slots are compatible between sizes: it is possible to insert a card with a smaller interface into a bigger slot, for example an x8 card into an x16 slot, and it will work normally at the speed provided by x8 lanes. If a slot has an open end, then it is also possible to insert a card with a bigger interface into it, and it will work at the speed provided by the number of lanes of the slot. A similar solution to open-ended slots is bigger physical slots with only a fraction of the lanes connected, for example a slot of x16 physical size but with only x8 lanes electrically connected.
The PCIe interface is also fully backwards compatible between versions, providing the overall speed of the slowest component, so it is possible to insert a gen3 device into a gen4 slot or vice versa and everything will work fine, but at gen3 speed.
As of 2025, most (if not all) consumer GPU cards use the standard x16 PCIe slot interface. Therefore that's the physical size of the slots provided on the "GPU end" of all types of eGPU adapters described here, although only a fraction of the lanes is actually connected in most cases (most commonly x4). The highest PCIe version supported by cards and adapters varies between specific models and depends mostly on how old the model is.
A wide variety of interface types, described in the subsections below, is used on the "host end" of eGPU adapters. Ultimately, however, it is either a direct connector to the host computer's PCIe lanes or some other interface capable of tunneling the PCIe protocol. The list below is far from comprehensive and describes only the currently most common types.
Fixed-cable connections
This connection type uses interfaces that are supposed to be connected and disconnected very rarely, usually only during the initial setup and hardware upgrades or maintenance. As such it only makes sense in systems that are supposed to be used only at fixed locations, which in many cases defeats the original purpose of portability. Nevertheless, people sometimes use spare laptops as their desktops, in which case extending their capabilities with a fixed-cable eGPU makes perfect sense.
This connection type usually provides the highest PCIe signal integrity: something that some pluggable connection types struggle with.
From an OS perspective, GPUs connected this way are virtually indistinguishable from standard dGPUs, and as such no additional OS-level (or higher) software setup is needed compared to dGPU-based multi-GPU setups. Depending on the host computer's motherboard model, however, BIOS/UEFI settings may need to be modified, and some laptop BIOSes require their iGPUs to be disabled when a dGPU (or a fixed-cable eGPU) is present.
M.2
M.2 is a compact connector that is used, among other things, as an interface to PCIe buses in contemporary laptops and desktops. Depending on the "keying" it provides up to x4 PCIe lanes: most commonly "two times x1" (A, E, A+E keys) or x4 (M key).
Example adapters:
- M.2 M key: ADT-Link F43SG, supports PCIe up to gen5.
- M.2 A/E key: EXP-GDC Beast (M.2 A/E variant), supports PCIe up to gen4.
Standard PCIe slot
Available only on desktop and server motherboards. Such adapters may be useful if there are not enough x16 slots available and the smaller ones are not open-ended. Other reasons may be insufficient physical space to accommodate a GPU card inside the case, or insufficient cooling capacity of the case.
Example adapters:
- ADT-Link R23SG (PCIe x4 connector variant), supports PCIe up to gen3.
- PCIe riser extensions or bifurcation cards: this is hardly an eGPU anymore, but rather a dGPU placed uncommonly far from its PCIe slot, possibly outside of the case. A GPU connected this way cannot be powered by an external ATX PSU under normal circumstances as an ATX PSU needs to be aware of the system's power state. Thus either a non-ATX external PSU may be used or the GPU should be connected to the system's main PSU.
Pluggable connections
This type refers to interfaces that may be easily connected and disconnected, but only when the system is powered off. From an OS perspective, GPUs connected this way are also seen as dGPUs.
OCuLink
OCuLink is an interface designed specifically to expose PCIe lanes as an external port. It uses an SFF-8612 socket + SFF-8611 plug as its connector, which comes in two sizes: 4i and 8i, exposing x4 and x8 lanes respectively. Currently still very few consumer devices are natively equipped with an OCuLink port; however, M.2 M to OCuLink 4i adapters are easily available and usually quite cheap.
The OCuLink spec does not define any norms for maximum latency or signal noise level, so some cables and adapters that are formally "OCuLink compliant" degrade signal integrity beyond what many host motherboards are able to tolerate. This is especially true when M.2 to OCuLink adapters are used, as M.2 M slots are usually originally intended for NVMe drives, which generally meet very strict signal integrity standards. To address this problem, some OCuLink eGPU adapters are equipped with PCIe redrivers that improve signal integrity. Another option is to use an M.2-to-OCuLink adapter with redrivers, such as the Minerva DP6303 or ADT-F4Q. It is usually sufficient to have redrivers on one side only, but in some extreme cases it may be necessary to use them on both adapters.
The PCIe spec, and as a consequence also OCuLink, defines hot-plugging as an optional feature, but it requires special hardware support on both sides, so in the general case OCuLink does not support hot-plugging. As of February 2025, the only consumer computers supporting OCuLink hot-plugging are Lenovo laptops with TGX ports. It is nevertheless possible to plug or unplug OCuLink devices while the system is hibernated: see the section on cryo-plugging.
Many new OCuLink solutions are presently under active development: see this dedicated egpu.io thread to stay up to date.
Example adapters:
- 4i with redrivers: Minisforum DEG1, EXP-GDC OCuP4V2, Aostar AG02
- 4i no redrivers: ADT-Link F9G, NFHK N-8611Y-D
- 8i no redrivers: NFHK N-P118A (sometimes referred to as "NF-P118A")
Hot-pluggable connections
This type of connection allows connecting and disconnecting an eGPU while the system is running, triggering OS-level mechanisms to handle such events. See the section on hot-plugging for details.
USB-C based (Thunderbolt 3+ and USB 4+)
This interface family tunnels PCIe over a USB-C connector. On x86_64 computers this was popularized on Intel-based machines with Thunderbolt 3 controllers and was later included in USB4 and adopted by AMD. Earlier versions of USB are not capable of PCIe tunneling even when using a USB-C connector. All adapters and ports within this family are usually compatible with each other, but the transfer capacity may vary from about 12 to 40Gbps depending on the exact mix: refer to the perf table on the egpu.io site. If quality cables are used (active and/or with quality shielding), this interface family provides very good PCIe signal integrity. As of 2025 almost every contemporary laptop model is equipped with either a Thunderbolt or a USB4 port, making this interface ubiquitous.
Example adapters:
- Thunderbolt 3: EXP-GDC TH3P4G3 (provides up to 85W power delivery (PD) to its host laptop, supports Thunderbolt daisy-chaining thanks to its additional TB port)
- USB4: Aostar AG02 (provides 100W PD), ADT-Link UT3G / UT4G
- Thunderbolt 5: EXP-GDC TH5P4, Razer Core X V2
TGX
Thinkbook Graphics eXtension (TGX) is a Lenovo-designed interface based on OCuLink 4i that supports hot-plugging and includes redrivers. As such it is to some extent interoperable with OCuLink 4i, and most OCuLink 4i docks that include redrivers are also TGX compliant. As of February 2025, Thinkbooks equipped with a TGX port are sold only in China.
ExpressCard
The ExpressCard interface was popular on laptops during the first decade of the century. It exposes a single x1 gen2 lane, providing about 3.1Gbps of transfer capacity in practice.
Example adapters:
- EXP-GDC Beast (ExpressCard variant).
Software support for hot-plugging
While plugging in is mostly PnP, unplugging currently requires first terminating all processes using the eGPU, and usually also either unbinding it from its kernel module or unloading the module altogether.
Improper unplugging will likely result in software crashes at various layers!
Depending on the specific stack, these may range from data loss in the involved processes to fatal kernel panics and permanent data corruption.
Common
PCIe tunneling over Thunderbolt/USB4
The bolt package should be installed to manage tunneling.
Thunderbolt 3 authorization
Depending on BIOS/UEFI settings, when connecting an eGPU via a Thunderbolt 3 port, device authorization may be required first: refer to the boltctl manpage from the bolt package for details.
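For example, a device can be enrolled so that it is authorized both immediately and automatically on subsequent connections (a minimal sketch; the UUID placeholder must be taken from the boltctl list output):

apt install bolt
boltctl list ## shows connected Thunderbolt/USB4 devices, their UUIDs and authorization status
boltctl enroll --policy auto <UUID>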
PCIe bus IDs
The subsequent sections often refer to "PCIe bus IDs" of GPUs: these may be obtained from the output of the lspci command. For example, here is the lspci output on a laptop with an Intel iGPU and an Nvidia eGPU:
$ lspci |grep -iE 'display|vga'
00:02.0 VGA compatible controller: Intel Corporation Iris Plus Graphics 640 (rev 06)
0b:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)
In this case the bus ID of the eGPU consists of the hex digits in the first column of the last row: 0b:00.0. Note: X config uses decimal format when specifying a BusID, so the numbers need to be converted appropriately (in this case it would be PCI:11:0:0 for the eGPU).
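The hex-to-decimal conversion can be done, for example, with printf:

$ printf '%d\n' 0x0b
11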
Running X11 on an eGPU
When running X on an eGPU (see the subsequent vendor-specific subsections), it is possible to also utilize monitors connected to other GPUs of the system (like a laptop's built-in panel connected to its iGPU). Such additional GPUs need to be listed in the Screen section as a GPUDevice, for example in an /etc/X11/xorg.conf.d/80-screen.conf file alongside a vendor-specific file defining the "eGPU" device:
Section "Device"
Identifier "Intel iGPU"
Driver "modesetting"
BusID "PCI:0:2:0" ## replace with values from lspci
EndSection
Section "Screen"
Identifier "dual GPU"
Device "eGPU" ## primary GPU performing the rendering, defined in a separate file
GPUDevice "Intel iGPU" ## secondary GPU providing additional monitors
EndSection

Note: running X on an eGPU requires terminating a given session before unplugging the eGPU, so it's quite inconvenient in many cases. Failure to do so may cause serious kernel module crashes, often requiring a reboot: see the subsequent vendor-specific subsections.
Nvidia with proprietary driver
This section refers to NvidiaGraphicsDrivers.
As per the Nvidia driver README file, hot-unplugging an Nvidia eGPU while in use is generally not supported and will cause various levels of crashes. The nvidia-smi command may be used to obtain the list of processes currently using Nvidia GPUs.
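For example, compute processes can be listed explicitly (plain nvidia-smi also prints a process table at the bottom of its output):

$ nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv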
Kernel module status
The Debian-packaged version 535 handles both plugging and unplugging cleanly, as long as there are no processes running on the eGPU (as reported by nvidia-smi).
The Nvidia-packaged version 570 additionally requires the nvidia module to be removed before unplugging, otherwise the driver will crash. Unfortunately removing the module is often not a feasible option on multi-GPU setups. Unbinding seemingly also allows cleanly unplugging the eGPU, but the driver will then malfunction after reconnecting and will eventually crash.
The Nvidia-packaged versions 575 and 580 require the nvidia module to be removed or unbound from the eGPU before unplugging. Unbinding nevertheless requires nvidia-persistenced.service to be stopped and the nvidia_drm module to be removed first. The nvidia and nvidia_uvm modules may stay, however, so any compute tasks on other Nvidia GPUs will not be affected:
systemctl stop nvidia-persistenced.service
modprobe -r nvidia_drm
echo "0000:0b:00.0" >/sys/bus/pci/drivers/nvidia/unbind
If there are other Nvidia GPUs in the system, then usually the nvidia_drm module should be loaded again and nvidia-persistenced.service started after the eGPU is unplugged.
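For example, reversing the relevant commands from the snippet above:

modprobe nvidia_drm
systemctl start nvidia-persistenced.service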
Note: if unbinding hangs yet nvidia-smi does not report any processes using the eGPU, check for a running nvtop, which is not reported but may hold a reference via nvidia_uvm.
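A quick way to check (the module use count is the last column of the lsmod output):

pgrep -a nvtop
lsmod | grep nvidia_uvm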
Running X11 on an eGPU
As described in the above README file, the X11 nvidia driver by default refuses to start on hot-pluggable eGPUs, to avoid crashing when unplugging. To force starting on an eGPU, it must be explicitly pointed to by its PCI BusID, and the AllowExternalGpus option needs to be set to true (in addition to AllowEmptyInitialConfiguration, which is useful for multi-GPU setups involving Nvidia). For example one could have an /etc/X11/xorg.conf.d/20-egpu-device.conf file like this:
Section "Device"
Identifier "eGPU"
Driver "nvidia"
Option "AllowEmptyInitialConfiguration" "true"
Option "AllowExternalGpus" "true"
BusID "PCI:11:0:0" ## replace with values from lspci
EndSection
Running X11 on an iGPU and using an eGPU for offloading
For general info on offloading see the "PRIME" section on the "Optimus" page.
As of driver 570, even if X11 is initially started only on the iGPU/dGPU, when an eGPU is connected the Xorg process will "attach itself" to the eGPU to allow extending the desktop to the connected monitors. Unfortunately "detaching" from eGPUs is not implemented yet, so in such a case it is necessary to close the whole X session before unplugging the eGPU, which defeats the original purpose of running it on the iGPU. The workaround is to prevent attaching to eGPUs by setting the AutoAddGPU option to false in the ServerFlags section, for example in an additional /etc/X11/xorg.conf.d/10-server-flags.conf file:
Section "ServerFlags"
Option "AutoAddGPU" "false"
EndSection
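Whether the running Xorg has attached to the eGPU can be verified by listing the providers known to X:

xrandr --listproviders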
Wayland
ToDo: gather and add info
Nvidia with Nouveau driver
ToDo: gather and add info
AMD
Before unplugging, all processes using the eGPU must be terminated, and the amdgpu kernel driver needs to be removed or unbound from the eGPU by writing its PCIe bus ID, prefixed with 0000: (assuming a single PCIe domain), to the /sys/bus/pci/drivers/amdgpu/unbind file:
echo "0000:7:0.0" >/sys/bus/pci/drivers/amdgpu/unbind
Unbinding the module is preferred over removing it, especially when there are multiple AMD GPUs in the system (for example if the iGPU is also from AMD).
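A previously unbound (but still connected) eGPU can be handed back to the driver via the corresponding sysfs bind file; a sketch, assuming the bus ID from the example above and that amdgpu re-initializes the device cleanly:

echo "0000:07:00.0" >/sys/bus/pci/drivers/amdgpu/bind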
Running X11 on an eGPU
If no config is provided, X will by default choose to run on a built-in GPU. To run X on an eGPU, it is sufficient to define a Device section for it and point to it by its PCI BusID. For example one could have an /etc/X11/xorg.conf.d/20-egpu-device.conf file like this:
Section "Device"
Identifier "eGPU"
Driver "amdgpu"
BusID "PCI:7:0:0" ## replace with values from lspci
EndSection
Wayland
ToDo: gather and add info
Intel
This section refers to Xe cards using the xe kernel module.
Running X11 on an eGPU
If no config is provided, X will by default choose to run on a built-in GPU. To run X on an eGPU, it is sufficient to define a Device section for it and point to it by its PCI BusID. For example one could have an /etc/X11/xorg.conf.d/20-egpu-device.conf file like this:
Section "Device"
Identifier "eGPU"
Driver "modesetting"
BusID "PCI:48:0:0" ## replace with values from lspci
EndSection
Wayland
ToDo: gather and add info
Software support for (un)plugging under hibernation
Plugging in or unplugging physical PCIe devices when the system is hibernated ("cryo-plugging"?) is generally possible, but it's a delicate process: it may change the bus IDs of other devices, resulting in crashes at various layers. For this reason, cryo-plugging new devices into slots that do not support hot-plugging is almost guaranteed to cause crashes.
Always make sure that the host machine and the external PCIe/OCuLink device are BOTH indeed powered off before plugging or unplugging!!!
Failure to do so may result in permanent hardware damage.
Cryo-unplugging
Unplugging under hibernation can often be performed in a way that does not cause any crashes and allows cryo-re-plugging a given device later into the same slot, but certain preparations are necessary. First, all the requirements for hot-unplugging must be met: all processes using the device must be terminated, and the handling kernel modules must be removed or unbound, including those of the related virtual audio controllers. Next, all virtual PCIe devices provided by the physical one need to be soft-removed from the kernel. Graphics cards usually provide 3 such devices: a VGA controller, a virtual audio controller, and a PCI bridge to which the previous two are connected. lspci -tv outputs a tree allowing to identify these devices; for example, here is an output prefix showing an Nvidia RTX 3090 connected via OCuLink to an AMD HX370 based laptop:
-[0000:00]-+-00.0 Advanced Micro Devices, Inc. [AMD] Strix/Strix Halo Root Complex
+-00.2 Advanced Micro Devices, Inc. [AMD] Strix/Strix Halo IOMMU
+-01.0 Advanced Micro Devices, Inc. [AMD] Strix/Strix Halo Dummy Host Bridge
+-01.2-[01-60]--
+-02.0 Advanced Micro Devices, Inc. [AMD] Strix/Strix Halo Dummy Host Bridge
+-02.1-[61]--+-00.0 NVIDIA Corporation GA102 [GeForce RTX 3090]
|            \-00.1  NVIDIA Corporation GA102 High Definition Audio Controller

In the above case the devices need to be unbound and then removed in order from the leaf(s) to the root bridge of a given branch:
systemctl stop nvidia-persistenced.service ## Nvidia specific workaround
modprobe -r nvidia_drm ## Nvidia specific workaround
echo "0000:61:00.1" >/sys/bus/pci/drivers/snd_hda_intel/unbind ## handling module obtained from lspci -v
echo "0000:61:00.0" >/sys/bus/pci/drivers/nvidia/unbind ## change module depending on the card brand
echo 1 >"/sys/bus/pci/devices/0000:61:00.1/remove"
echo 1 >"/sys/bus/pci/devices/0000:61:00.0/remove"
echo 1 >"/sys/bus/pci/devices/0000:00:02.1/remove"
The audio handling module may be obtained from the output of the lspci -vs "61:00.1" |grep 'Kernel driver in use' command. If all the above commands succeed, it is safe to hibernate and physically unplug the eGPU.
ToDo: verify the above procedure for AMD and Intel cards.
Cryo-re-plugging
A device that was previously successfully unplugged under hibernation may be re-plugged under hibernation, but only into the same slot/port (otherwise the PCIe bus IDs will change and cause crashes). After that, the PCIe bridge to which the device is connected must be re-scanned. In the example above it is bridge 0000:00:02.0:
echo 1 >"/sys/bus/pci/devices/0000:00:02.0/rescan"
This should reload/bind all necessary modules and the eGPU should be ready to use.
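Whether everything came back can be verified with lspci; assuming the bus IDs from the example above, the following should again show the eGPU together with its kernel driver:

lspci -ks "61:00.0"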
Choosing and connecting a PSU
Most of the eGPU adapters available on the market come without a PSU, which allows for greater flexibility but requires an educated decision. PSUs are relatively simple devices compared to CPUs or GPUs, but their role in a system is no less critical, so the choice should be planned accordingly:
Do not use off-brand PSUs or brands/models known to cause problems!!!
A misbehaving PSU may render your system unstable or even permanently damage other components by delivering too much power and frying them from the inside. Even some well-known, seemingly reputable brands have released models that were literally exploding, catching fire or frying other components: check reviews on the web before making your final decision.
- Calculate the power that your PSU must be able to deliver by following the steps below (a worked example is given after this list):
- Start with the power rating of your GPU card: this should be obtained from the official specs released by the manufacturer. Unfortunately some manufacturers don't release this critical information, and for some reason regulatory bodies don't seem to care. Sometimes manufacturers release values named like TGP, Max TGP, TBP, MPC: these are all informal names that may or may not refer to the power rating, as they lack any precise definition. Most manufacturers release the value of TDP (Thermal Design Power), but this value is smaller than the power rating of a given card, as it flattens short power-draw spikes: thermal output is by its nature averaged over longer time slices. If there's no better estimate, using 1.5 * TDP should be safe enough (according to the sources cited by the linked TDP Wikipedia page).
- Add about 50W for the adapter itself (50W is probably way more than actually needed but we need to be on the safe side).
- If your adapter has a capability of power delivery (PD) to the host laptop, add also the maximum wattage it can provide from its specs.
- Add at least a 50W safety margin, but 100W is recommended so that the PSU operates more efficiently.
- Check in the specs what PSU form factors your dock/enclosure was designed for. The most common options are ATX and SFX; some docks are able to accommodate both types.
- GPU cards use either multiple 6+2 pin power connectors or 12vhpwr connectors: make sure that your PSU has enough appropriate connectors for your card. For example, if you have a GPU card that uses multiple 6+2 pin connectors, then don't buy MSI PSUs released from 2025 onward, as they have only a single 6+2 pin connector.
- There have been numerous reports of overheating 12vhpwr connectors in combination with high-power GPUs (RTX 4090, RTX 5080, RTX 5090, RX 9070XT) to the point of melting or catching fire due to the misdesign of this connector. Therefore it is recommended to use a PSU with a thermal fuse on the 12vhpwr connector in combination with such GPUs.
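A hypothetical worked example of the above calculation (all numbers are illustrative assumptions): a card with an official 350W power rating in a dock that provides up to 100W PD to the host laptop gives

350W (GPU) + 50W (adapter) + 100W (PD) + 100W (margin) = 600W

so a quality PSU rated at 600W or more would be a reasonable choice.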
Make sure that the connector is fully plugged in and its "accidental unplug prevention latch" is locked when connecting a PSU to a GPU (or a PSU to anything else, for that matter).
Failure to do so will likely lead to the GPU catching fire.
