This page describes the current implementation of the protocol used by DebTorrent. It will begin by being exactly the same as the BitTorrent protocol described here. As the program evolves, so will the protocol.

Please don't edit this page unless you are familiar with the BitTorrent protocol, the DebTorrent protocol, or preferably both.

The protocol below describes version: 0.1

The currently released DebTorrent version is: 0.1.3


DebTorrent is a protocol for distributing Debian package files, similar to BitTorrent. It identifies content by URL and is designed to integrate seamlessly with the web. Its advantage over plain HTTP is that when multiple downloads of the same file happen concurrently, the downloaders upload to each other, making it possible for the file source to support very large numbers of downloaders with only a modest increase in its load.

A DebTorrent file distribution consists of these entities:

  • a tracker
  • an HTTP-accessible package repository (an ordinary web server)
  • end user downloaders (the DebTorrent client)

There are ideally many end users for a single file.

To start serving, a host goes through the following steps:

  1. Start running a tracker (see the debtorrent-tracker program).
  2. Set up an HTTP-accessible repository (an ordinary web server such as Apache; it must support HTTP Range requests).
  3. (optional) Add extrapieces data for the larger packages to make the sharing more efficient (see the hippy program).
  4. Request users to use the DebTorrent program to save bandwidth on your server.

To start downloading, a user does the following:

  1. Install DebTorrent.

  2. Modify the sources.list file to prefix the hostnames with the location of the DebTorrent client (localhost:9988 by default).

  3. apt-get update to start the downloads running.

  4. apt-get install <package> (or any other apt-based command) will now use DebTorrent to download package files.
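As an illustration of step 2, a sources.list entry is rewritten by prefixing the mirror hostname with the local client's address (the mirror hostname here is just an example):

```text
# original line in /etc/apt/sources.list
deb http://ftp.us.debian.org/debian stable main

# rewritten to go through the local DebTorrent client
deb http://localhost:9988/ftp.us.debian.org/debian stable main
```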

All data exchanged in the protocol (metainfo files and tracker responses) is bencoded. Bencoding works as follows: strings are length-prefixed base ten numbers followed by a colon and the string (e.g. 4:spam corresponds to 'spam'); integers are represented by an 'i' followed by the number in base 10 followed by an 'e' (e.g. i3e corresponds to 3); lists are encoded as an 'l' followed by their elements (also bencoded) followed by an 'e'; dictionaries are encoded as a 'd' followed by alternating keys and their corresponding values followed by an 'e', with keys appearing in sorted order.
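As an illustration (not DebTorrent's own code), a minimal bencoder for these four types might look like:

```python
# Minimal bencoder sketch covering the four bencode types: integers,
# strings (byte strings), lists, and dictionaries with sorted keys.
def bencode(value):
    if isinstance(value, int):
        return b'i%de' % value
    if isinstance(value, bytes):
        return b'%d:%s' % (len(value), value)
    if isinstance(value, str):
        return bencode(value.encode('utf-8'))
    if isinstance(value, list):
        return b'l' + b''.join(bencode(v) for v in value) + b'e'
    if isinstance(value, dict):
        items = sorted((k.encode('utf-8') if isinstance(k, str) else k, v)
                       for k, v in value.items())
        return b'd' + b''.join(bencode(k) + bencode(v) for k, v in items) + b'e'
    raise TypeError('cannot bencode %r' % type(value))

# bencode({'spam': ['a', 'b']}) == b'd4:spaml1:a1:bee'
```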

Metainfo files are generated from Packages files:

DebTorrent uses the data in a Packages file to create a torrent metainfo file. The Packages file contains the file names and their SHA1 hashes, which are used to generate the metainfo file. To break large files up into multiple pieces, additional Packages-extrapieces files are used; these contain the SHA1 hash and size of each of the multiple pieces of a file, for use in creating the metainfo file.

For the most part, the metainfo files will not be seen by users; they are not used by DebTorrent directly, but are more a representation of the data DebTorrent needs to perform the download.

Metainfo files are bencoded dictionaries with the following keys:

announce
maps to a string, which is the URL of the tracker.

maps to a list of strings, each of which is a URL to use to download files from HTTP mirrors. The HTTP mirrors are used to fetch piece data (using the Range field in the request header), but only when the client cannot find any peers with the piece. The URL should be such that appending the path from the "files" dictionary will result in a good URL for the file. The path portion of the URL should end with a '/'. The URL may end with a query section of the form "?param1=value1&param2=value2", in which case these parameters are appended to the request URL after the path from the "files" dictionary.

name
maps to a string which is the suggested name to save the directory as. It is purely advisory.

info
maps to a dictionary, with keys described below:
piece lengths
maps to a list of the number of bytes in each piece the files are split into. For the purposes of transfer, each file is transmitted in a number of variable-sized pieces, which collectively are the same size as the file.
pieces
maps to a string whose length is 20 times the number of entries in the piece lengths list. It is to be subdivided into strings of length 20, each of which is the SHA1 hash of the piece at the corresponding index.
files
maps to a list of dictionaries which represent a set of files that go in a directory structure. Each entry in the files list is a dictionary containing the following keys:
length
an integer which is the length of the file, in bytes. (This is currently duplicate information from the piece lengths list, but will be needed later when large files are split into multiple pieces.)
path
a list of strings corresponding to subdirectory names, the last of which is the actual file name (a zero length list is an error case).
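As an illustration of how a client might use the "pieces" value described above (the helper names here are made up), the concatenated hash string can be split and used to verify a downloaded piece:

```python
import hashlib

def piece_hashes(pieces):
    """Split the concatenated 'pieces' string into 20-byte SHA1 hashes."""
    return [pieces[i:i + 20] for i in range(0, len(pieces), 20)]

def piece_is_valid(piece_data, index, pieces):
    """Check a downloaded piece against the hash at its index."""
    return hashlib.sha1(piece_data).digest() == piece_hashes(pieces)[index]
```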

Tracker GET requests have the following keys:

  • info_hash: The 20 byte SHA1 hash of the bencoded form of the info value from the metainfo file. Note that this is a substring of the metainfo file. This value will almost certainly have to be escaped.
  • peer_id: A string of length 20 which this downloader uses as its id. Each downloader generates its own id at random at the start of a new download. This value will also almost certainly have to be escaped.
  • ip: An optional parameter giving the IP address which this peer is at. Generally used only if the client is on the same machine as the tracker.
  • port: The port number this peer is listening on.
  • uploaded: The total amount uploaded so far, encoded in base ten ASCII.
  • downloaded: The total amount downloaded so far, encoded in base ten ASCII.
  • left: The number of bytes this peer still has to download, encoded in base ten ASCII. Note that this can't be computed from downloaded and the file length, since the download might be a resume, not all of the files may be selected for download, and some of the downloaded data may have failed an integrity check and had to be re-downloaded.
  • event: An optional key which maps to started, completed, or stopped (or empty, which is the same as not being present). If not present, this is one of the announcements done at regular intervals. An announcement using started is sent when a download first begins, and one using completed is sent when the download is complete. No completed is sent if the file was complete when started. Downloaders send an announcement using stopped when they cease downloading.
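A hedged illustration of assembling such a request (the tracker URL and values are invented, and a real client would also handle the HTTP transfer itself):

```python
from urllib.parse import urlencode

# Sketch of building a tracker announce URL from the keys listed above.
# urlencode escapes the raw info_hash and peer_id bytes, as required.
def announce_url(tracker, info_hash, peer_id, port,
                 uploaded, downloaded, left, event=None):
    params = {
        'info_hash': info_hash,   # 20 raw SHA1 bytes
        'peer_id': peer_id,       # 20-byte self-selected id
        'port': port,
        'uploaded': uploaded,
        'downloaded': downloaded,
        'left': left,
    }
    if event:
        params['event'] = event
    return tracker + '?' + urlencode(params)
```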

Tracker responses are bencoded dictionaries.

If a tracker response has a key

failure reason
maps to a human readable string which explains why the query failed.

then no other keys are required.

Otherwise, it must have two keys:

interval
maps to the number of seconds the downloader should wait between regular rerequests.

peers
depends on the tracker. To save bandwidth, it can map to a string whose length is a multiple of 6; each 6-byte group encodes one peer, with the first 4 bytes being the IP address and the last 2 bytes being the port number (all in big-endian network order). Otherwise, it maps to a list of dictionaries corresponding to peers to connect to. Each dictionary contains the keys:
peer id
maps to a string that is the peer's self-selected ID.
ip
maps to a string that is the peer's IP address.
port
maps to a string that is the port number to contact the peer on.

By default, the peers key in the tracker response uses the newer compact string form; the older dictionary form is kept for IPv6 compatibility.
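Decoding the compact form described above could be sketched as follows (illustrative only):

```python
import struct

# Each 6-byte group in the compact 'peers' string is a 4-byte IP address
# followed by a 2-byte big-endian port number.
def parse_compact_peers(data):
    peers = []
    for off in range(0, len(data), 6):
        ip = '.'.join(str(b) for b in data[off:off + 4])
        (port,) = struct.unpack('>H', data[off + 4:off + 6])
        peers.append((ip, port))
    return peers
```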

Note that downloaders may rerequest on nonscheduled times if an event happens or they need more peers.

DebTorrent's peer protocol operates over TCP.

It performs efficiently without setting any socket options.

Peer connections are symmetrical. Messages sent in both directions look the same, and data can flow in either direction.

The peer protocol refers to pieces of the files by index as described in the metainfo file, starting at zero. When a peer finishes downloading a piece and checks that the hash matches, it announces that it has that piece to all of its peers.

Connections contain two bits of state on either end: choked or not, and interested or not. Choking is a notification that no data will be sent until unchoking happens. The reasoning and common techniques behind choking are explained later in this document.

Data transfer takes place whenever one side is interested and the other side is not choking. Interest state must be kept up to date at all times: whenever a downloader doesn't have something they would currently ask a peer for if unchoked, they must express lack of interest, despite being choked. Implementing this properly is tricky, but it makes it possible for downloaders to know which peers will start downloading immediately if unchoked.

Connections start out choked and not interested.

When data is being transferred, downloaders should keep several piece requests queued up at once in order to get good TCP performance (this is called 'pipelining'.) On the other side, requests which can't be written out to the TCP buffer immediately should be queued up in memory rather than kept in an application-level network buffer, so they can all be thrown out when a choke happens.

The peer wire protocol consists of a handshake followed by a never-ending stream of length-prefixed messages. The handshake starts with the byte fourteen (decimal) followed by the string 'DebTorrent/0.x', where the x indicates the version of the protocol being used. The leading byte is a length prefix for that string, put there so that new protocols may be distinguished from each other.

All later integers sent in the protocol are encoded as four bytes big-endian.

After the fixed headers come eight reserved bytes, which are all zero in all current implementations. If you wish to extend the protocol using these bytes, please coordinate with CameronDale to make sure all extensions are done compatibly.

Next comes the 20 byte sha1 hash of the bencoded form of the info value from the metainfo file. (This is the same value which is announced as info_hash to the tracker, only here it's raw instead of quoted.) If both sides don't send the same value, they sever the connection. The one possible exception is if a downloader wants to do multiple downloads over a single port, they may wait for incoming connections to give a download hash first, and respond with the same one if it's in their list.

After the download hash comes the 20-byte peer id which is reported in tracker requests and contained in peer lists in tracker responses. If the receiving side's peer id doesn't match the one the initiating side expects, it severs the connection.
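The handshake layout above can be sketched as follows (assuming protocol version 0.1, so the protocol string is 'DebTorrent/0.1'):

```python
# Handshake sketch: a one-byte length prefix (14), the protocol string,
# 8 reserved zero bytes, the 20-byte info hash, and the 20-byte peer id.
def handshake(info_hash, peer_id):
    proto = b'DebTorrent/0.1'   # 14 bytes; '0.1' is the protocol version
    return bytes([len(proto)]) + proto + b'\x00' * 8 + info_hash + peer_id
```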

That's it for handshaking, next comes an alternating stream of length prefixes and messages. Messages of length zero are keepalives, and ignored. Keepalives are generally sent once every two minutes, but note that timeouts can be done much more quickly when data is expected.

All non-keepalive messages start with a single byte which gives their type. The possible values are:

  • 0 - 'choke'
  • 1 - 'unchoke'
  • 2 - 'interested'
  • 3 - 'not interested'
  • 4 - 'have'
  • 5 - 'bitfield'
  • 6 - 'request'
  • 7 - 'piece'
  • 8 - 'cancel'

'choke', 'unchoke', 'interested', and 'not interested' have no payload.

'bitfield' is only ever sent as the first message. Its payload is a bitfield in which each index the downloader has is set to one and the rest are set to zero. Downloaders which don't have anything yet may skip the 'bitfield' message. The first byte of the bitfield corresponds to indices 0 - 7 from high bit to low bit, respectively. The next one 8-15, etc. Spare bits at the end are set to zero.
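The bit layout described above can be sketched as follows (helper names are invented):

```python
# Index 0 is the high bit of the first byte; spare bits stay zero.
def make_bitfield(have, num_pieces):
    bits = bytearray((num_pieces + 7) // 8)
    for index in have:
        bits[index // 8] |= 0x80 >> (index % 8)
    return bytes(bits)

def read_bitfield(data, num_pieces):
    return {i for i in range(num_pieces) if data[i // 8] & (0x80 >> (i % 8))}
```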

The 'have' message's payload is a single number, the index which that downloader just completed and checked the hash of.

'request' messages contain an index, begin, and length. The last two are byte offsets. Length is generally a power of two unless it gets truncated by the end of the file. All current implementations use 2^15 (32768 bytes), and close connections which request an amount greater than 2^17 (131072 bytes).
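A 'request' message could be packed as below, assuming BitTorrent's length-prefixed framing, message id 6 for 'request', and the usual 2^15-byte block size:

```python
import struct

# 4-byte length prefix (13 = 1 type byte + three 4-byte integers),
# the type byte, then index, begin, and length, all big-endian.
def request_message(index, begin, length=2**15):
    return struct.pack('>IB3I', 13, 6, index, begin, length)
```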

'cancel' messages have the same payload as request messages. They are generally only sent towards the end of a download, during what's called 'endgame mode'. When a download is almost complete, there's a tendency for the last few pieces to all be downloaded off a single slow modem line, taking a very long time. To make sure the last few pieces come in quickly, once requests for all pieces a given downloader doesn't have yet are currently pending, it sends requests for everything to everyone it's downloading from. To keep this from becoming horribly inefficient, it sends cancels to everyone else every time a piece arrives.

'piece' messages contain an index, begin, and piece. Note that they are correlated with request messages implicitly. It's possible for an unexpected piece to arrive if choke and unchoke messages are sent in quick succession and/or transfer is going very slowly.

Clients may choose to download pieces in random order. A better strategy is to download pieces in rarest first order. The client can determine this by keeping the initial bitfield from each peer, and updating it with every 'have' message. Then, the client can download the pieces that appear least frequently in these peer bitfields first. Note that any "Rarest First" strategy should include randomization among at least several of the least common pieces, as having many clients all attempting to jump on the same "least common" piece would be counter-productive.
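One possible sketch of rarest-first selection with randomization among the least common pieces (the data structures here are invented for illustration, with each peer's bitfield held as a set of indices):

```python
import random
from collections import Counter

# Count how many peers have each piece we still need, then pick at random
# among the pieces whose counts are close to the minimum.
def pick_piece(needed, peer_bitfields):
    counts = Counter()
    for bitfield in peer_bitfields:        # bitfield: set of piece indices
        counts.update(bitfield & needed)
    available = [i for i in needed if counts[i] > 0]
    if not available:
        return None
    rarest = min(counts[i] for i in available)
    candidates = [i for i in available if counts[i] <= rarest + 1]
    return random.choice(candidates)
```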

Choking is done for several reasons. TCP congestion control behaves very poorly when sending over many connections at once. Also, choking lets each peer use a tit-for-tat type algorithm to ensure that they get a consistent download rate.

The choking algorithm described below is the currently deployed one. It is very important that all new algorithms work well both in a network consisting entirely of themselves and in a network consisting mostly of this one.

There are several criteria a good choking algorithm should meet. It should cap the number of simultaneous uploads for good TCP performance. It should avoid choking and unchoking quickly, known as 'fibrillation'. It should reciprocate to peers who let it download. Finally, it should try out unused connections once in a while to find out if they might be better than the currently used ones, known as optimistic unchoking.

The currently deployed choking algorithm avoids fibrillation by only changing who's choked once every ten seconds. It does reciprocation and number of uploads capping by unchoking the four peers which it has the best download rates from and are interested. Peers which have a better upload rate but aren't interested get unchoked and if they become interested the worst uploader gets choked. If a downloader has a complete file, it uses its upload rate rather than its download rate to decide who to unchoke.
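The reciprocation step described above might look like the following sketch (the peer representation is made up, and this ignores the not-interested and optimistic-unchoke cases):

```python
# Unchoke the `slots` interested peers with the best download rates.
def pick_unchoked(peers, slots=4):
    # peers: list of (peer_id, download_rate, interested) tuples
    interested = [p for p in peers if p[2]]
    best = sorted(interested, key=lambda p: p[1], reverse=True)[:slots]
    return {p[0] for p in best}
```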

For optimistic unchoking, at any one time there is a single peer which is unchoked regardless of its upload rate (if interested, it counts as one of the four allowed downloaders). Which peer is optimistically unchoked rotates every 30 seconds. To give them a decent chance of getting a complete piece to upload, new connections are three times as likely to start as the current optimistic unchoke as anywhere else in the rotation.

Hmm, interesting - consider grabbing the source project for the GitTorrent RFC (GTP/0.1) and putting this information in it, for an IETF-style RFC. -- ?SamVilain