Differences between revisions 4 and 5
Revision 4 as of 2013-02-17 03:01:07
Size: 6460
Editor: NickDaly
Comment: Corrected headings.
Revision 5 as of 2013-02-17 03:06:32
Size: 6336
Editor: NickDaly
Comment: Fixed formatting.



What is an appropriate threat-model for the FreedomBox's client-server communications?

Goal

The threat-model question has a number of obvious answers, but keep in mind the project's end-goals: to bring communication freedom to as many folks in as many situations as possible. To that end, what are appropriate compromises between server and client security, accessibility, and availability?

Client Device Classification

Client devices seem to fall into one of two basic categories:

  1. Those on which the user has root privileges and fully trusts (like their own laptop, running a fully free operating system and BIOS, in which no mal/spy/inscrutable-ware exists).
  2. Those on which the user doesn't have root privileges and therefore can't fully trust (an iPhone, a laptop with non-free software and/or binary kernel blobs, a desktop with a non-free BIOS).

Obviously, there's a range of trustworthiness, though I don't know how to measure it quantitatively in a meaningful way. (I'd like to survey and classify devices, but I don't know how to detect untrustworthy or malicious software remotely and at scale; suggestions are welcome.)
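The two-category split above can be sketched as a tiny data structure. This is an illustrative sketch only; the attribute names are my assumptions, not part of any FreedomBox API:

```python
from dataclasses import dataclass

@dataclass
class ClientDevice:
    """One client device, classified per the two categories above."""
    name: str
    user_has_root: bool
    free_software_only: bool  # no binary blobs, free BIOS/firmware, etc.

    def fully_trusted(self) -> bool:
        # Category 1 only if the user has root *and* the whole
        # stack is verifiable; otherwise category 2.
        return self.user_has_root and self.free_software_only

laptop = ClientDevice("free-software laptop", True, True)
iphone = ClientDevice("iPhone", False, False)
assert laptop.fully_trusted()
assert not iphone.fully_trusted()
```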

At this point, I'm worried about secret key (identity) material. As the most important and most secret of data, it can teach lessons that apply to nearly all other data.

Specific Questions

Who can be trusted with which secret key material?

  1. Can servers be trusted with the client's key?
  2. Which clients can be trusted with parts of the server's key?

In what ways is it acceptable for devices to give up which secrets?

For example, is it acceptable for the client's secret key to be exposed when the box is rooted by attackers? (Probably not, but that would let the host act as a trust proxy without relying on subkeys or other weird yet conceptually interesting trust models.)

What is the client application delivery model?

Is it:

  1. Browser-based interaction between client and server?
  2. Browser-plugin-based interaction?
  3. Appstore-based interaction?

Notes

1. Melvin Carvalho writes:

  • "Hi Nick, great topic. Which client/server interactions would you envisage as being high on the priority list? e.g. ssh to box, login to dashboard via a browser, using gpg based tools for email etc. ... a specific context may be slightly easier to visualize the possible attack surface ..."

That's a really good point. I'm seeing a few different potential client/server interactions here. How do we enforce, yet not compromise, key and identity material for both end points in each case? How do we deliver services or client-applications from the server to the client?

0. Everything connects to a fully free FreedomBox: DreamPlug or equivalent that's fully verifiable (without binary blobs or non-free software).

Client Attributes

I care about two different aspects of each client:

Is the client fully end-user-controlled and verifiable?
  • Binary blobs?
  • Firmware?
  • Bootloader?
  • Non-free software?

Is the network trustworthy?
  • End-user-controlled?
  • Verifiable?
  • End-to-End?
  • Other Criteria?

Am I missing anything meaningful?

Attribute Applications

These attributes are non-exclusive and seem to line up like:

  1. (Yes, Yes) A fully trusted client (a user-owned/rooted laptop) connecting over wifi.
  2. (Yes, No) A rooted phone connecting over most data networks, or a tethered laptop. This case seems to simplify to (Yes, Yes): unless the network censors encrypted connections, you can always set up a VPN. Where the network does censor them, the case transforms into (No, No), as the user has to be complicit in the network's insecure requirements.
  3. (No, Yes) A compromised client connecting over a "trusted" connection. A rooted phone connecting over wifi: the client could be ratting out the user, and the network would never know. Most Windows boxes fall into this category.
  4. (No, No) A compromised client connecting over a compromised connection. These are called iPhones.
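As an illustrative aside, the four (client, network) trust states above can be enumerated in a few lines. The descriptions mirror the list; everything else is a hypothetical sketch:

```python
from itertools import product

# The four (client trusted?, network trusted?) states from the
# list above, keyed by their Yes/No pairs.
CASES = {
    (True, True):   "fully trusted client over user-controlled wifi",
    (True, False):  "rooted phone over an untrusted data network",
    (False, True):  "compromised client over a 'trusted' connection",
    (False, False): "compromised client over a compromised connection",
}

for client_ok, network_ok in product((True, False), repeat=2):
    label = ("Yes" if client_ok else "No", "Yes" if network_ok else "No")
    print(f"({label[0]}, {label[1]}): {CASES[client_ok, network_ok]}")
```

Adding a third variable (say, "trusted server?") doubles this to the 8 states mentioned at the end of the page.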

Handling Attributes

What do we need to support useful communication between each without disclosing secret-identity material to third parties?

My suspicions are:

  1. This one's easy as pie, were pie easy. If we need client applications, we use them. If we need browser plugins, we use them. If we need a network, we use it. We can enforce any restrictions we need for secure communications.
  2. Initial delivery is difficult but, thereafter, execution is easy.
  3. We can't trust the client, so it can't handle its own data. ...What? Yeah... We have to start being creative here. Perhaps the client could hold half its (secret-shared) key, which is delivered to the server on connection. Anybody could extract the key and impersonate the client. It's the same problem as third-party-advertising-networks: only your adversaries and their 3,000 closest friends have the information you don't want them to have. Without your phone and password nobody else could impersonate you, though, so your secrets are safe from your siblings.
  4. I have no idea how to handle the iPhone case. iPhones can't store their own key (identity) material, as it's preemptively compromised. This is where the client-key-splitting idea comes into play, but that makes each FreedomBox a more worthwhile target, as knocking one over then compromises all its clients. I would be stunned if FBX applications were inoffensive enough to be distributable through any app stores. Users can say *anything* they want and *we can't censor them?!* It'd never fly.
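The client-key-splitting idea in cases 3 and 4 can be sketched as simple two-party XOR secret sharing: neither share alone reveals anything about the key, and both are needed to reconstruct it. This is an illustrative sketch under that assumption, not any existing FreedomBox code:

```python
import os

def split_key(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two XOR shares; each share alone is
    indistinguishable from random bytes."""
    share_a = os.urandom(len(secret))
    share_b = bytes(x ^ y for x, y in zip(share_a, secret))
    return share_a, share_b

def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the two shares into the original secret."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

# The client holds one share; the server holds the other. Only
# when the client connects (and, say, supplies a password) are
# the shares combined to reconstruct the key.
key = os.urandom(32)  # stand-in for secret key material
client_share, server_share = split_key(key)
assert combine_shares(client_share, server_share) == key
assert client_share != key and server_share != key
```

Note the trade-off described above: whoever compromises the server learns every client's server-side share, which is exactly what makes each FreedomBox a more worthwhile target.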

How do we support all four modes at once? Anybody want to add another variable and make it 8 or 16 states?

If anyone's aware of any recent research into these problems, I'd appreciate the pointers.

Nick