What is an appropriate threat model for the FreedomBox's client-server communications?

Goal

The threat-model question has a number of obvious answers, but keep in mind the project's end-goals: to bring communication freedom to as many folks in as many situations as possible. To that end, what are appropriate compromises between server and client security, accessibility, and availability?

Client Device Classification

Client devices seem to fall into one of two basic categories:

  1. Those on which the user has root privileges and fully trusts (like their own laptop, running a fully free operating system and BIOS, in which no mal/spy/inscrutable-ware exists).
  2. Those on which the user doesn't have root privileges and therefore can't fully trust (an iPhone, a laptop with non-free software and/or binary kernel blobs, a desktop with a non-free BIOS).

Obviously, there's a range of trustworthiness, though I don't know how to measure it meaningfully or quantitatively (I'd like to survey and classify devices, but I don't know how to remotely detect untrustworthy or malicious software at scale; suggestions are welcome).

At this point, I'm worried about secret key (identity) material. Because it's the most important and most secret data, how we handle it teaches lessons that apply to nearly all other data.

Specific Questions

Who can be trusted with which secret key material?

  1. Can servers be trusted with the client's key?
  2. Which clients can be trusted with parts of the server's key?

In what ways is it acceptable for devices to give up which secrets?

For example, is it acceptable for the client's secret key to be exposed when the box is rooted by attackers? (Probably not, but that would let the host act as a trust proxy without relying on subkeys or on other weird yet conceptually interesting trust models.)
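
For contrast, a subkey-style arrangement would let the box act for the user while the primary (identity) key stays on the client: the primary key signs a delegated key that lives on the box, so rooting the box exposes only the delegation, which can be revoked and reissued. Here's a minimal sketch of that idea, assuming Python with the `cryptography` package; the "delegate:" binding format is made up for illustration and isn't any existing FreedomBox or OpenPGP format:

    # Toy delegation, roughly analogous to an OpenPGP subkey: the client's
    # primary key never leaves the client; the box holds only a delegated
    # key plus a signed binding from the primary key authorizing it.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # On the client: the long-term identity key, kept secret and never exported.
    primary = Ed25519PrivateKey.generate()

    # Generated client-side here for brevity; in practice the box could
    # generate this itself and send only the public half over for signing.
    delegated = Ed25519PrivateKey.generate()
    delegated_pub = delegated.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

    # The primary key signs the delegated public key (a toy "subkey binding").
    binding = primary.sign(b"delegate:" + delegated_pub)

    # The box stores `delegated` and `binding` and acts for the user; anyone
    # can check the binding against the client's well-known public key.
    primary.public_key().verify(binding, b"delegate:" + delegated_pub)

    # If the box is rooted, the attacker gets `delegated` and `binding`, but
    # not `primary`; the client issues a new delegation and moves on.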

What is the client application delivery model?

Is it:

  1. Browser-based interaction between client and server?
  2. Browser-plugin-based interaction?
  3. Appstore-based interaction?

Notes

1. Melvin Carvalho writes:

That's a really good point. I'm seeing a few different potential client/server interactions here. How do we protect, without compromising, the key and identity material of both endpoints in each case? How do we deliver services or client applications from the server to the client?

0. Everything connects to a fully free FreedomBox: a DreamPlug or equivalent that's fully verifiable (without binary blobs or non-free software).

Client Attributes

I care about two different aspects of each client:

  1. Is the client fully end-user-controlled and verifiable?
  2. Is the network trustworthy?

Am I missing anything meaningful?

Attribute Applications

These attributes are non-exclusive, and their combinations seem to line up like this (a small sketch of the four resulting states follows the list):

  1. (Yes, Yes) A fully trusted client (a user-owned/rooted laptop) connecting over wifi.
  2. (Yes, No) A rooted phone connecting over most data networks, or a tethered laptop. This case seems to simplify to (Yes, Yes): unless the network censors encrypted connections, you could always set up a VPN. If the network does block encrypted connections, the case instead degrades to (No, No), as the user has to be complicit in the network's insecure requirements.
  3. (No, Yes) A compromised client connecting over a "trusted" connection, like a phone rooted by an attacker connecting over wifi: the client could be ratting out the user, and the network would never know. Most Windows boxes fall into this category.
  4. (No, No) A compromised client connecting over a compromised connection. These are called iPhones.
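
To make the four combinations concrete, here's a tiny sketch in Python that models the two attributes and maps each combination to a rough key-handling policy. The policy names are just my shorthand for the suspicions in the next section, not anything FreedomBox actually implements:

    # Toy model of the two client attributes and an illustrative key-handling
    # policy for each of the four combinations. Names are shorthand only.
    from enum import Enum
    from typing import NamedTuple


    class Client(NamedTuple):
        user_controlled: bool   # attribute 1: end-user-controlled and verifiable?
        network_trusted: bool   # attribute 2: is the network trustworthy?


    class Policy(Enum):
        FULL_KEYS_ON_CLIENT = "client holds its own secret key material"
        TUNNEL_THEN_FULL_KEYS = "tunnel (e.g. VPN) first, then treat as trusted"
        SPLIT_KEY_WITH_SERVER = "client holds only a share of its key"
        NO_LOCAL_SECRETS = "no long-term secrets on the client at all"


    def key_policy(client: Client) -> Policy:
        if client.user_controlled and client.network_trusted:
            return Policy.FULL_KEYS_ON_CLIENT       # case 1: (Yes, Yes)
        if client.user_controlled:
            return Policy.TUNNEL_THEN_FULL_KEYS     # case 2: (Yes, No)
        if client.network_trusted:
            return Policy.SPLIT_KEY_WITH_SERVER     # case 3: (No, Yes)
        return Policy.NO_LOCAL_SECRETS              # case 4: (No, No)


    print(key_policy(Client(user_controlled=True, network_trusted=False)))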

Handling Attributes

What do we need in each case to support useful communication without disclosing secret identity material to third parties?

My suspicions are:

  1. This one's easy as pie, were pie easy. If we need client applications, we use them. If we need browser plugins, we use them. If we need a network, we use it. We can enforce any restrictions we need for secure communications.
  2. Initial delivery is difficult but, thereafter, execution is easy.
  3. We can't trust the client, so it can't handle its own data. ...What? Yeah... We have to start being creative here. Perhaps the client could hold half of its (secret-shared) key, which is delivered to the server on connection (see the sketch after this list). Anybody who extracts that half from the device could then impersonate the client. It's the same problem as third-party advertising networks: only your adversaries and their 3,000 closest friends have the information you don't want them to have. Without your phone and password nobody else could impersonate you, though, so your secrets are safe from your siblings.
  4. I have no idea how to handle the iPhone case. iPhones can't store their own key (identity) material, as it's preemptively compromised. This is where the client-key-splitting idea comes into play, but that makes each FreedomBox a more worthwhile target, as knocking one of those over then compromises all of its clients. I would be stunned if FBX applications were inoffensive enough to be distributable through any app store. Users can say *anything* they want and *we can't censor them?!* It'd never fly.
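
To illustrate the key-splitting idea from point 3, here's a minimal 2-of-2 XOR secret-sharing sketch in Python. Either share alone is statistically independent of the key, but anyone who pulls the client's share off the device and can connect to the box gets the key reassembled, which is exactly the impersonation problem described above. This is a toy sketch, not a proposed FreedomBox mechanism:

    # Toy 2-of-2 XOR secret sharing of a client key: neither share alone
    # reveals anything about the key; both together reconstruct it exactly.
    import secrets


    def split(key: bytes) -> tuple[bytes, bytes]:
        """Split `key` into (client_share, server_share)."""
        client_share = secrets.token_bytes(len(key))    # uniformly random pad
        server_share = bytes(a ^ b for a, b in zip(key, client_share))
        return client_share, server_share


    def combine(client_share: bytes, server_share: bytes) -> bytes:
        """Recombine the shares (e.g. on connection) to recover the key."""
        return bytes(a ^ b for a, b in zip(client_share, server_share))


    key = secrets.token_bytes(32)               # the client's secret key
    client_share, server_share = split(key)     # phone keeps one, box keeps one
    assert combine(client_share, server_share) == key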

How do we support all four modes at once? Anybody want to add another variable and make it 8 or 16 states?

If anyone's aware of any recent research into these problems, I'd appreciate the pointers.

Nick