These are some of the solutions that were proposed for implementing the DebTorrent program.

Solution Ideas

New or interesting ways of solving the problems above are needed. If you have an idea for a new solution, or just a way to improve a currently proposed solution, add it here. Don't worry if your solution solves one problem but adds another or conflicts with solving other problems, this is just a repository for solution ideas.

Protocol Solutions

Implementation Solutions

Full Proposals

If you have a full proposal for how to implement this functionality, possibly combining some of the ideas above, put it here. Make sure to list which problems it solves, and which it doesn't. Also, feel free to comment on others' proposals.

Proposal A

  • create a torrent for every combination of suite (stable/testing/unstable) and architecture, including separate ones for architecture:all and source
  • communicate all torrent information in the Packages file
  • use a variable number of pieces and size of pieces
  • small packages are in their own single piece, hashed by their SHA1 value
  • large packages have multiple pieces
  • new/updated packages get new piece(s) which are automatically added to the torrent
  • pieces use unique numbers that are never reused over the life of the torrent
    • if a package is updated, a piece number larger than the current maximum is assigned to the new version
    • allows for the use of bitfields along with updates
    • peers that don't have these new pieces will transmit shorter bitfields (receiving peers assume the peer doesn't have the pieces)
    • peers that aren't aware of the new pieces will get longer bitfields, and will ignore the new piece numbers they aren't aware of
  • Interested state now indicates wanting pieces the other peer has
  • uninteresting connections are dropped to find interesting ones
  • new communication added: LOSTHAVE, indicating that the peer has lost the piece (due to cleaning or replacement with a new version)
  • seeding now means not currently downloading, and a peer can return to the downloading state later (when a new package is requested)
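The growing-piece-number and variable-length-bitfield ideas above could be sketched as follows. This is a minimal illustration under assumed semantics, not DebTorrent's actual wire format, and the function names are made up: a shorter bitfield simply ends early, and a receiver ignores bits for pieces it doesn't know about.

```python
# Hypothetical sketch of Proposal A's bitfields: piece numbers only ever
# grow, so peers can safely exchange bitfields of different lengths.

def encode_bitfield(have_pieces, max_piece):
    """Pack the set of held piece numbers into bytes, one bit per piece."""
    nbytes = (max_piece + 8) // 8
    bits = bytearray(nbytes)
    for p in have_pieces:
        bits[p // 8] |= 0x80 >> (p % 8)
    return bytes(bits)

def decode_bitfield(data):
    """Return the set of piece numbers a peer claims to have.

    A shorter bitfield ends early: the sender is assumed not to have any
    piece past its last bit. A receiver that knows fewer pieces simply
    never looks at the trailing bits of a longer bitfield.
    """
    have = set()
    for i, byte in enumerate(data):
        for bit in range(8):
            if byte & (0x80 >> bit):
                have.add(i * 8 + bit)
    return have
```

A peer that is unaware of newly added pieces still produces a valid (shorter) bitfield, which is the key property the proposal relies on.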

Implementation Steps

A good ordering of initial implementation steps might be (note that there is a working product at each step that can be tested):

  1. Implement a single variable size piece for each package (test using manually created .torrent files from Packages files for part of the archive).
  2. Add parsing of current Packages files so they can be used unmodified as torrent files.
  3. Implement proxy functionality to receive input from apt about which packages to download.
  4. Automate the process to run as a daemon started from /etc/init.d.
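Step 2 (using current Packages files unmodified as torrent files) might begin with a parser like the following sketch. The stanza layout and field names are the real Packages format, but the function name and the sample data (sizes and hashes) are made up for illustration:

```python
# Minimal sketch of step 2: read an unmodified Packages file and pull out
# the fields a torrent-like download needs (Filename, Size, SHA1).

def parse_packages(text):
    """Yield one dict of fields per package stanza (blank-line separated)."""
    for stanza in text.strip().split("\n\n"):
        fields = {}
        for line in stanza.splitlines():
            if line[:1].isspace():          # continuation line, e.g. Description
                continue
            key, _, value = line.partition(":")
            fields[key] = value.strip()
        if "Filename" in fields:
            yield fields

# Illustrative sample data, not real archive contents.
sample = """\
Package: hello
Version: 2.10-2
Filename: pool/main/h/hello/hello_2.10-2_amd64.deb
Size: 56132
SHA1: aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d

Package: dash
Version: 0.5.8-2.4
Filename: pool/main/d/dash/dash_0.5.8-2.4_amd64.deb
Size: 100990
SHA1: 86f7e437faa5a7fce15d1ddcb9eaeaea377667b8
"""

pkgs = list(parse_packages(sample))
```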

At this point, a public beta can be released to get feedback and usage information from general users.

Future steps could then be:

  • Modifications to the Debian archive management software to allow for further enhancements to the client
    • break up large packages into a number of pieces
    • add some kind of piece numbering
    • add torrent names to the Release files
  • Add protocol extensions, such as LostHave or compressed BitFields, or the ability to drop uninteresting connections

  • Add a backup retrieval method for hard to find packages using http or ftp
  • Determine usage patterns from the public beta.
  • Based on measured usage patterns analyze ways to optimize sharing amongst different Packages files (arch:all, different versions, testing/unstable, testing/stable at release, etc ...)


Problems Solved

  • solves the small package problem
  • solves the large number of torrents and large archive problem
  • mostly solves the communication between downloaders of the same package
    • if the same package version exists in more than one suite, communication between its downloaders will not occur


Problems Remaining

  • a large number of peers and pieces will make it difficult to find rare pieces, given the random peer lists returned by current trackers
    • could allow a backup retrieval method for rare pieces using http or ftp
    • could add piece tracking to the tracker so that it can respond to a request for peers that have a specific piece
      • this could be a lot of information for the tracker to have, as it would need to track what pieces all peers have
      • might be better implemented using a distributed tracker using DHT (though the communication costs might still be large)
    • could allow peers to ask their neighbors if their neighbors' neighbors have the rare piece
      • starting to look like broadcast searching (such as in Gnutella)
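The piece-tracking tracker idea above could be prototyped in memory as below. All names are illustrative, and the dictionary it keeps (peers per piece) makes the scaling concern concrete: the state grows with peers times pieces.

```python
# Toy sketch of "piece tracking": the tracker remembers which peers have
# announced which pieces, so it can answer a request for a rare piece.
# Purely illustrative; a real tracker would also expire stale peers.

from collections import defaultdict

class PieceTracker:
    def __init__(self):
        self.piece_to_peers = defaultdict(set)

    def announce(self, peer, pieces):
        """Record that `peer` now holds the given piece numbers."""
        for p in pieces:
            self.piece_to_peers[p].add(peer)

    def peers_with(self, piece, limit=50):
        """Return up to `limit` peers known to hold the given piece."""
        return sorted(self.piece_to_peers.get(piece, set()))[:limit]

tracker = PieceTracker()
tracker.announce("peer-a", {1, 2, 3})
tracker.announce("peer-b", {3, 4})
```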


Proposal B

Implement a p2p system using standard web servers (with a database and special CGI on the server) instead of bittorrent.


1) apt-get knows the region of the host (uk/us/etc)

2) apt-get downloads compressed Metalink files (possibly per-deb, or a single metalink file containing info for all the files that apt-get wants to download).

  • apt-get includes its host location (us/uk/etc) in the request
  • metalink files are generated dynamically on the server, e.g. if there are thousands of peers then return 50 random ones from the same region (uk/us/etc), using logic similar to a bittorrent tracker.
  • You could also configure (on the server) which files (e.g. > 100 MB) should be shared by clients over bittorrent, and indicate that in the metalink data. The server would automatically generate torrents for those files and seed them. Torrents could be generated per-location (uk/us/etc).

  • metalinks should also include hashes of file chunks.

3) apt-get downloads each deb file from sources listed in metalink (similar to aria2). apt-get also checks each downloaded chunk and blacklists hosts serving bad data.

4) Downloaded file then gets verified (hash check) and copied (hardlinked if possible) to an uploads directory (apache or other http/ftp service)

  • Files are renamed to the sha1sum of their contents, and stored under a sub-directory named after the first two characters of the sha1sum (see the git directory structure for blobs)
  • Individual shared files are garbage collected if configured.
  • Web service needs to be configurable with per-host & global upload and connection limits (see mod_cband for Apache)

  • If it is important for hosts to upload files while busy downloading them (instead of only after the download completes), then (assuming you don't use bittorrent) you could save chunks to the shared directory (named by their sha1sum) as they get downloaded & checked. Then you tell the server about the new available chunks. While very bittorrent-like, this should be more efficient than bittorrent at sharing a large number of files.
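The git-like shared-store layout described in step 4 might look like the following sketch. The `store_blob` helper and the directory layout details are invented for illustration, not an existing DebTorrent or apt API:

```python
# Sketch of step 4's shared store: content is named by the sha1 of its
# bytes and stored under a sub-directory keyed by the first two hex
# characters, as in git's blob layout.

import hashlib
import os
import tempfile

def store_blob(root, data):
    """Write `data` to root/<aa>/<sha1> and return the relative path."""
    digest = hashlib.sha1(data).hexdigest()
    subdir = os.path.join(root, digest[:2])
    os.makedirs(subdir, exist_ok=True)
    path = os.path.join(subdir, digest)
    with open(path, "wb") as f:
        f.write(data)
    return os.path.join(digest[:2], digest)

# Demonstration against a throwaway directory.
root = tempfile.mkdtemp()
rel = store_blob(root, b"hello")
```

Because the name is derived from the content, re-storing an identical chunk is idempotent, and hash verification is just recomputing the filename.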

5) Hosts periodically send a compressed list of shared file/chunk to the server, along with locale (uk/us, etc). Server updates the list of mirrors for each file/chunk. If the server has not received this data from a host recently then it removes the host from the list of mirrors.

  • If there are a large number of hosts and shared files/chunks and you don't want to swamp the server with shared file/chunk list updates, then:

5.1) Hosts maintain a local, compressed file listing shared files/chunks (similar to debian Packages.bz2 file). File is stored under the shared directory.

5.2) Hosts tell the server when there has been a list update (http GET).

5.3) Server downloads the listing file from the host when convenient. Large listing files could be accompanied by a set of diffs and a diff index (see apt-get pdiff logic).
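Steps 5.1-5.3 could start from something like this sketch of the compressed listing itself. The function names are made up, and a real implementation would add the pdiff-style diffs mentioned above:

```python
# Sketch of step 5.1: a host serializes its shared file/chunk list as a
# single bz2-compressed listing, one identifier per line, in the spirit
# of Packages.bz2, so the server can fetch it in one request.

import bz2

def write_listing(chunks):
    """Serialize a sorted chunk list and compress it."""
    body = "\n".join(sorted(chunks)).encode("ascii")
    return bz2.compress(body)

def read_listing(blob):
    """Invert write_listing, tolerating an empty listing."""
    text = bz2.decompress(blob).decode("ascii")
    return set(filter(None, text.split("\n")))

# Illustrative chunk identifiers, not real sha1sums.
listing = write_listing({"aa11", "bb22"})
```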


Advantages

  • Clients can share a large number of files with minimal overhead
  • Can use apache or other very reliable file hosting service.
  • Allows use of many transfer methods and mirrors (by use of metalink).


Disadvantages

  • Re-implementation of bittorrent-like logic on clients & servers, instead of re-using bittorrent or another already-existing p2p client.

  • Could cause a high load on the server
    • Server-side logic needs to be implemented as efficiently as possible.
  • Can cause a large amount of disk usage on clients. Files are stored in both their destination directory (/var/cache/apt/archives) and their shared directory. Even more disk usage if both the completed file and the chunks are stored.
    • any proxying-type application suffers from this problem (see apt-proxy, apt-cacher, and the current debtorrent)
    • it might be possible to hard link files to save space, though how to implement this to work with apt is not clear
  • Could require installation & (re)-configuration of apache or other http service on the clients.


This sounds very similar to BitTorrent in many ways. Perhaps some additional explanation of how this proposal would be better than BitTorrent is needed. In particular, the stated bittorrent-like sharing of "upload files while busy downloading them" that is more efficient than BitTorrent is not clear, since this is exactly what BitTorrent does. Where does the extra efficiency come from?

Adding locality-awareness to peer-to-peer protocols is not necessarily a good thing. While it does reduce the bandwidth requirement on the network as a whole, it can also reduce the efficiency and reliability of the download. Specifically, BitTorrent is very good at forming scale-free networks, which are networks with small diameters (so the sharing is quickly communicated across the network) that are very resilient to failures (important in any P2P system). It is therefore by-design that BitTorrent is not locality-aware, and not an oversight by the designers. Although locality-aware protocols definitely have desirable features, if this results in a poorer user experience then the programs using them will not be very popular.


Ordering Strategies for Fixed Size Pieces

I did some preliminary analysis on the efficiency of this strategy. Using my install as a test (amd64 desktop), I examined different ordering strategies for pieces in the main-amd64 Packages file (which accounts for 1486 out of the 1551 packages I have installed). The total size for this configuration is 1.1489 GB. I tried out different piece sizes from 32 KB to 512 KB, which gives the following number of pieces for the entire torrent:

  Piece Size    Number of Pieces
  32 KB
  64 KB
  128 KB
  256 KB
  512 KB

I would say that over 100,000 pieces is getting quite large, and we would probably want to go with 256 KB or 512 KB piece sizes.

The above graph shows the results for 5 different orderings. The best is sorting by Priority, then Section, then by Source package (I guess because I install multiple packages from the same source package quite a bit, which is interesting in itself). The results show a wastage of at least 10% for 256 KB, even for the best strategy.
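For concreteness, the kind of wastage measured above can be estimated by packing packages back-to-back into fixed-size pieces and counting the extra bytes fetched from pieces shared with unwanted neighbours. This is a toy sketch with made-up sizes, not the measured data:

```python
# Illustrative wastage calculation for fixed-size pieces: any piece that
# overlaps a wanted package must be fetched whole, so bytes belonging to
# unwanted neighbours are wasted. The final piece is counted at its full
# nominal size for simplicity.

def wastage(sizes, wanted, piece_size):
    """Fraction of downloaded bytes that belong to unwanted packages."""
    # Byte range of each package in the concatenated torrent.
    offsets, pos = [], 0
    for s in sizes:
        offsets.append((pos, pos + s))
        pos += s
    # Pieces overlapping any wanted package must be fetched whole.
    needed = set()
    for i in wanted:
        start, end = offsets[i]
        for p in range(start // piece_size, (end - 1) // piece_size + 1):
            needed.add(p)
    downloaded = len(needed) * piece_size
    useful = sum(sizes[i] for i in wanted)
    return (downloaded - useful) / downloaded

# Toy example: three packages, we want only the middle one.
frac = wastage([100, 50, 200], {1}, 64)
```

Ordering strategies matter precisely because they change which packages end up sharing a piece, and therefore how often a wanted piece drags in unwanted bytes.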

Variable Sized Pieces

I guess the easiest would be to pick a piece size and stick with it for everything, rather than use a bigger piece size for large packages, and a smaller one for small packages. This gives the following for the 19091 packages in the same main-amd64 Packages file:


  Piece Size    Total Number    Number (%) of Packages that are
                of Pieces       1 piece          2-8 pieces       > 8 pieces
  32 KB                         5388 (28.2%)     7789 (40.8%)     5914 (31.0%)
  64 KB                         8098 (42.4%)     7008 (36.7%)     3985 (20.9%)
  128 KB                        10814 (56.6%)    5906 (30.9%)     2371 (12.4%)
  256 KB                        13177 (69.0%)    4586 (24.0%)     1328 (7.0%)
  512 KB                        15106 (79.1%)    3321 (17.4%)     664 (3.5%)
  1 MB                          16720 (87.6%)    2053 (10.7%)     318 (1.7%)

This is again looking like 256 KB or 512 KB are the way to go, as any smaller creates too many pieces, and larger means almost all the packages are a single piece.
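The grouping in the table above can be reproduced by counting ceil(size / piece size) pieces per package. This sketch uses made-up package sizes, not the real Packages data:

```python
# How the table buckets packages by piece count: ceil(size / piece_size)
# pieces per package, grouped into 1, 2-8, and > 8 pieces.

def bucket_counts(sizes, piece_size):
    buckets = {"1": 0, "2-8": 0, ">8": 0}
    for size in sizes:
        pieces = -(-size // piece_size)      # ceiling division
        if pieces <= 1:
            buckets["1"] += 1
        elif pieces <= 8:
            buckets["2-8"] += 1
        else:
            buckets[">8"] += 1
    return buckets

# Made-up sizes: a small doc package, a typical binary, a large one.
counts = bucket_counts([10_000, 300_000, 5_000_000], 256 * 1024)
```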

Additional Comments

Is the problem that apt is http/ftp based, or is the real problem that bandwidth is a high cost to the servers? If the problem is just the cost to the servers, then not all of the packages need to become bittorrent based. The 90/10 rule applies here: 90% of the bandwidth use comes from 10% of the packages. The torrents only need to cover these packages, and rather than having http/ftp as a "backup", make it the only way to get the rare packages. A file on the server would list whether a package is in the torrent base or in the ftp base, which apt-proxy would check before connecting with either ftp or the torrent files. This removes the overhead communication for 90% of the packages - the ones rarely used. The kernel, KDE, GNOME, a handful of GNU tools, and a few general programs could probably each become torrents from which people take pieces. This reduces the complexity of the problem into something that could be completed in one summer and have an immediate effect. Down the road, full conversion to a torrent based system could be implemented based on how the first trial works out. It still goes a long way towards solving the real problem - the cost of bandwidth for the servers.

What about dynamically generating the .torrent files and using a hybrid http/bittorrent protocol? Something like: apt grabs the Packages file via http. When requesting a package, it looks for the Filename entry in the Packages file and adds .torrent to it. The mirror generates the .torrent if it is not already present, based on the contents of the file (maybe .torrent files are only generated if the package size is large enough to warrant it, falling back to http for smaller files). Then apt downloads with a bittorrent client, and the downloaded file can be verified by the OpenPGP-signed Release/Packages files. apt-get (auto)clean could stop seeding any .torrent files that no longer have a corresponding package entry in Packages. The dynamic generation puts some burden on the server, but only the first time a .torrent is requested.
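The naming convention in this comment might look roughly like the following on the client side. The size threshold and helper name are invented for illustration:

```python
# Sketch of the hybrid scheme: take the Filename field from the Packages
# file and append ".torrent" for files large enough to warrant a torrent,
# falling back to plain http for small ones. Threshold is illustrative.

def torrent_url(mirror, filename, size, min_torrent_size=1_000_000):
    """Return the .torrent URL for large files, or the plain URL otherwise."""
    url = mirror.rstrip("/") + "/" + filename
    return url + ".torrent" if size >= min_torrent_size else url

u = torrent_url("http://deb.example.org/debian",
                "pool/main/h/hello/hello_2.10-2_amd64.deb", 5_000_000)
```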

  • This doesn't solve the problem of too many torrents. Each client will need to maintain many (hundreds to thousands) of torrents to do something like a fresh install or dist-upgrade. The tracker would also need to track many (tens of thousands to hundreds of thousands) of torrents, many of which will have extra information from the same clients.

Consider using Metalink files. This way you can use bittorrent along with fall-back methods like http/ftp. Metalink also includes location info (us/uk) and hashes, and you can include per-chunk hashes a la bittorrent.