What is an appropriate threat-model for the FreedomBox's client-server communications?
The threat-model question has a number of obvious answers, but keep in mind the project's end-goals: to bring communication freedom to as many folks in as many situations as possible. To that end, what are appropriate compromises between server and client security, accessibility, and availability?
Client Device Classification
Client devices seem to fall into one of two basic categories:
- Those on which the user has root privileges and fully trusts (like their own laptop, running a fully free operating system and BIOS, in which no mal/spy/inscrutable-ware exists).
- Those on which the user doesn't have root privileges and therefore can't fully trust (an iPhone, a laptop with non-free software and/or binary kernel blobs, a desktop with a non-free BIOS).
Obviously, there's a range of trustworthiness, though I don't know how to measure it meaningfully or quantitatively (I'd like to survey and classify devices, but I don't know how to detect untrustworthy or malicious software remotely and at scale; suggestions are welcome).
At this point, I'm worried about secret key (identity) material. Since it's the most important and most secret data a device holds, the lessons learned here apply to nearly all other data.
1. Who can be trusted with which secret key material?
- Can servers be trusted with the client's key?
- Which clients can be trusted with parts of the server's key?
2. In what ways is it acceptable for devices to give up which secrets?
For example, is it acceptable for the client's secret key to be exposed when the box is rooted by attackers? (Probably not, but that does let the host act as a trust proxy without relying on subkeys, or other weird yet conceptually interesting trust models.)
3. What is the client application delivery model?
- Browser-based interaction between client and server?
- Browser-plugin-based interaction?
- Appstore-based interaction?
Melvin Carvalho writes:
- "Hi Nick, great topic. Which client/server interactions would you envisage as being high on the priority list? e.g. ssh to box, login to dashboard via a browser, using gpg based tools for email etc. ... a specific context may be slightly easier to visualize the possible attack surface ..."
That's a really good point. I'm seeing a few different potential client/server interactions here. How do we verify key and identity material for both endpoints in each case without compromising it? How do we deliver services or client applications from the server to the client?
1.1. Client Attributes
I care about two different aspects of each client:
1.1.1. Is the client fully end-user-controlled and verifiable?
- Binary blobs?
- Non-free software?
1.1.2. Is the network trustworthy?
- Other Criteria?
Am I missing anything meaningful?
1.2. Attribute Applications
These attributes are independent, and their combinations line up like:
- (Yes, Yes) A fully trusted client (a user-owned/rooted laptop) connecting over wifi.
- (Yes, No) A rooted phone connecting over most data networks, or a tethered laptop. This case seems to reduce to (Yes, Yes): unless the network censors encrypted connections, you can always set up a VPN. Where the network does censor them, the case transforms into (No, No), as the user has to be complicit in the network's insecure requirements.
- (No, Yes) A compromised client connecting over a "trusted" connection. A rooted phone connecting over wifi: the client could be ratting out the user, and the network would never know. Most Windows boxes fall into this category.
- (No, No) A compromised client connecting over a compromised connection. These are called iPhones.
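For concreteness, the four combinations above could be encoded as a lookup on the two boolean attributes. This is only a hypothetical sketch; the attribute names and labels are mine, not anything specified by the project:

```python
from itertools import product

# Hypothetical attribute pair: (client fully user-controlled?, network trustworthy?)
STATES = {
    (True, True):   "trusted client on a trusted network (rooted laptop on home wifi)",
    (True, False):  "trusted client on a hostile network (reducible to (Yes, Yes) via VPN)",
    (False, True):  "compromised client on a trusted network (most Windows boxes)",
    (False, False): "compromised client on a hostile network (the iPhone case)",
}

def classify(client_trusted: bool, network_trusted: bool) -> str:
    """Map the two attributes to the threat-model case described above."""
    return STATES[(client_trusted, network_trusted)]

for pair in product((True, False), repeat=2):
    print(pair, "->", classify(*pair))
```

Adding another boolean attribute doubles the state count (8, then 16), which is why keeping the attribute list minimal matters.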
1.3. Handling Attributes
What do we need to support useful communication with each class of client without disclosing secret identity material to third parties?
My suspicions are:
- This one's easy as pie, were pie easy. If we need client applications, we use them. If we need browser plugins, we use them. If we need a network, we use it. We can enforce any restrictions we need for secure communications.
- Initial delivery is difficult but, thereafter, execution is easy.
- We can't trust the client, so it can't handle its own data. ...What? Yeah... We have to get creative here. Perhaps the client could hold half of its (secret-shared) key, which it delivers to the server on connection. Anybody with access to the device could extract that share and impersonate the client. It's the same problem as third-party advertising networks: only your adversaries and their 3,000 closest friends have the information you don't want them to have. Still, without your phone and password nobody else could impersonate you, so your secrets are safe from your siblings.
I have no idea how to handle the iPhone case. iPhones can't store their own key (identity) material, as it's preemptively compromised. This is where the client-key-splitting idea comes into play, but that makes each FreedomBox a more worthwhile target, as knocking one over then compromises all of its clients. I would also be stunned if FBX applications were inoffensive enough to distribute through any app store. Users can say *anything* they want and *we can't censor them?!* It'd never fly.
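The client-key-splitting idea above can be sketched as a 2-of-2 XOR secret sharing: the key is split into two shares, each individually indistinguishable from random bytes, so neither the client device nor the FreedomBox alone can reconstruct it. This is a minimal illustration, not a proposed protocol (real deployments would also need authentication and a way to use the key without ever reassembling it in one place):

```python
import secrets

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split `secret` into two XOR shares; both are required to recover it."""
    pad = secrets.token_bytes(len(secret))              # uniformly random share
    masked = bytes(p ^ s for p, s in zip(pad, secret))  # secret XOR pad
    return pad, masked                                  # e.g. client share, server share

def recover_secret(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the two shares; either share alone reveals nothing."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = b"hypothetical client signing key"   # placeholder key material
client_share, server_share = split_secret(key)
assert recover_secret(client_share, server_share) == key
```

The downside is exactly the one noted above: the server holding one share of every client's key concentrates risk, since rooting the box plus seizing a device yields both shares.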
How do we support all four modes at once? Anybody want to add another variable and make it 8 or 16 states?
If anyone's aware of any recent research into these problems, I'd appreciate the pointers.
This page is copyright its contributors and is licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.