[Gpg4win-devel] Strength of X509 (Re: Sicherheit der Downloads von der Website http://www.gpg4win.de/ )

Marcus Brinkmann marcus.brinkmann at ruhr-uni-bochum.de
Thu Apr 22 17:01:36 CEST 2010


On 04/22/2010 08:56 AM, Bernhard Reiter wrote:
> On Tuesday, 20 April 2010 14:03:04, Marcus Brinkmann wrote:
>> 4) The web of trust based on OpenPGP is much more reliable and resilient
>> than X.509 certificates and certificate authorities.
> 
> I doubt that statement holds for the average case, though I agree with your 
> general assessment that https is weak in the average use case and would not 
> add a lot to our security. (I came to the same conclusion in my post, which 
> I wrote without seeing Werner's reply.)

Well, let's separate the issues: HTTPS adds *nothing* to installer package
security, because it only verifies that the bits are correctly sent from the
web server to the browser.  Software packages travel from the developer to the
user, and the internet is only a small part of that path.

Once we have the gpg4win installer package out of the way, we are left with two
issues: HTTPS in general on the one hand, and X.509 vs OpenPGP on the other.
Both have been argued over for a long time now, by people smarter than me.  But
let's just note here as a reminder that X.509 is not a complete standard; it
requires a profile.  The relevant profile for SSL seems to be RFC 2459, which
is a whopping 130 pages long and came pretty late, in 1999 (HTTPS was created
in 1994, Verisign was founded in 1995).  If you want a laugh (or maybe crying
is the more appropriate response), check out GnuTLS's answer to the question of
how to verify a peer's certificate:

http://www.gnu.org/software/gnutls/manual/html_node/Verifying-peer_0027s-certificate.html

If you need 227 lines of code to verify a certificate chain, and you still
don't know whether you did it correctly, then in the context of system and
process security that's pre-programmed failure.  And of course it is always
worth referencing Peter Gutmann's X.509 Style Guide, for those who don't know
it yet:

http://www.cs.auckland.ac.nz/~pgut001/pubs/x509guide.txt
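
To give an impression of what that GnuTLS manual page expects from the
application, here is a heavily condensed sketch of the checks (it assumes an
already handshaked session; the real example additionally walks the whole
chain, checks every certificate's validity period, handles CRLs, and so on):

/* Condensed sketch of GnuTLS peer certificate verification for an
 * already handshaked session. */
#include <time.h>
#include <gnutls/gnutls.h>
#include <gnutls/x509.h>

static int verify_peer (gnutls_session_t session, const char *hostname)
{
  unsigned int status, list_size;
  const gnutls_datum_t *cert_list;
  gnutls_x509_crt_t cert;

  /* Ask GnuTLS to verify the chain against the configured trust list.  */
  if (gnutls_certificate_verify_peers2 (session, &status) < 0)
    return -1;
  if (status & GNUTLS_CERT_INVALID)
    return -1;                       /* signer not found, not a CA, ...  */
  if (status & GNUTLS_CERT_REVOKED)
    return -1;

  if (gnutls_certificate_type_get (session) != GNUTLS_CRT_X509)
    return -1;

  cert_list = gnutls_certificate_get_peers (session, &list_size);
  if (!cert_list || list_size == 0)
    return -1;

  /* Import the end-entity certificate and check hostname and dates
     ourselves: none of this is done by verify_peers2.  */
  gnutls_x509_crt_init (&cert);
  if (gnutls_x509_crt_import (cert, &cert_list[0], GNUTLS_X509_FMT_DER) < 0
      || gnutls_x509_crt_get_expiration_time (cert) < time (NULL)
      || gnutls_x509_crt_get_activation_time (cert) > time (NULL)
      || !gnutls_x509_crt_check_hostname (cert, hostname))
    {
      gnutls_x509_crt_deinit (cert);
      return -1;
    }
  gnutls_x509_crt_deinit (cert);
  return 0;
}

And even this short version silently makes policy decisions (which digests to
accept, how revocation is handled, what the hostname check actually matches)
that the application author is in no position to make.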

Now, on package signing:

> In the general case, if the operating system has a general X.509 certificate 
> store that is well maintained by an administrator, I believe this can turn 
> into more security and an easier solution for users.

Sure.  Debian has been doing this for years.  They use OpenPGP, but it doesn't
matter: if all aspects of the system are tightly controlled, X.509 and
OpenPGP provide approximately equivalent functionality, because the underlying
cryptographic algorithms play a more important role than the (fixed and
manageable) policy issues.

> For the X.509
> certificates used by S/MIME we can do so. I have seen applications where 
> https was only allowed with client certificates coming from a few selected 
> certificate authorities. Overall I believe this beats OpenPGP, because most 
> users really have a hard time evaluating the trust situation with OpenPGP, 
> and for many use cases they will not be able to find a trust chain within a 
> reasonable time frame.

This argument is not logically consistent: client certificates (and software
certificates) are not verified by users, neither in the case of X.509 nor in
the case of OpenPGP (see the Debian example).

> Also, OpenPGP implementations are not as good at 
> checking current validity, such as getting up-to-date information on whether 
> a certificate has been revoked.

Not so sure about that, but Werner responded already.

>> [1] This deserves an explanation.  If you think that X.509 certificates
>> provide a value for HTTPS, then please answer a simple question for me:
>> When is an X.509 certificate valid for a given domain?  See the SSL attacks
>> by Dan Kaminsky and Moxie Marlinspike from last year for details.
> 
> You accept the certificate if you have unbroken implementations at the CA and 
> browser level and successfully get current revocation information 
> indicating that the certificate chain is in good order.

Well, in the case of HTTPS I don't control the CAs, so it is impossible for me
to have verifiably unbroken implementations.  Strictly speaking, I'd have to
stop right there and reject every certificate that I didn't issue myself.

I can fix the browser, but that would be a hard job; see the GnuTLS example
above.  And even if you can figure out what the standards actually mean, you
would also have to take care of legacy/compatibility issues.  The question is:
Why do I have to do all this?  Why is there not already a program that can do
it for me, reliably?

> To my knowledge the researchers you mention have mainly discovered 
> implementation and maintenance flaws.

Wildcard matching is not an implementation or maintenance flaw, it's a
design/specification flaw.  You don't match wildcards accidentally; it has to
be implemented deliberately.  On what basis?  RFC 2459 says:

   "Finally, the semantics of subject alternative names that include
   wildcard characters (e.g., as a placeholder for a set of names) are
   not addressed by this specification.  Applications with specific
   requirements may use such names but shall define the semantics."

It's not a bug, it's a feature!
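
Just to make it concrete what "shall define the semantics" means in practice,
here is a hypothetical sketch of a naive matcher; every question in the
comments is one the RFC leaves to the implementation:

/* Hypothetical sketch of the decisions a wildcard matcher has to make
 * on its own, because RFC 2459 does not define them. */
#include <string.h>
#include <strings.h>

static int wildcard_host_match (const char *pattern, const char *host)
{
  /* Only the common "*.rest" form is handled here.  Open questions:
       - Does "*.example.com" match "a.b.example.com" (multiple labels)?
       - Does "*.com" or even a bare "*" match anything at all?
       - Are partial labels like "f*o.example.com" allowed?
       - Is the comparison case-insensitive?  (DNS names are.)        */
  if (strncmp (pattern, "*.", 2) != 0)
    return strcasecmp (pattern, host) == 0;

  const char *dot = strchr (host, '.');
  if (!dot)
    return 0;
  /* This choice matches exactly one label; other implementations differ.  */
  return strcasecmp (pattern + 2, dot + 1) == 0;
}

Two different but equally standard-conforming answers to any of those
questions are enough to get two browsers that disagree on whether a
certificate is valid for a host.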

> Both types of flaws will be there with OpenPGP or any other system as well.

No.  Although it is possible to build more complex systems based on OpenPGP
that are just as broken, X.509 contains the brokenness right there in the
specification.  I am not saying that an HTTPS built on top of OpenPGP would be
better, so your point still stands.  HTTPS is not broken because of X.509
(although that certainly helped); it is just broken.

Still, there is reason to believe that starting with OpenPGP and the web of
trust could lead to a better design.  Consider the case of openssh: you accept
a remote host key on first use, and from then on you only get an error if the
remote fingerprint changes.  This is similar to signing an OpenPGP key the
first time you communicate or meet with somebody, and thus such a mechanism and
OpenPGP are a natural match (openssh of course shows that the same can be
achieved with X.509).  Peter Gutmann and others have long proposed using a
similar mechanism in the browser.
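
Such a key-continuity check is also trivial to express in code.  The following
sketch applies the idea to a TLS peer with GnuTLS (the storage format and the
pin file are made up for illustration); it is essentially what openssh does
with known_hosts:

/* Sketch of a "trust on first use" check for a TLS peer, in the style
 * of openssh's known_hosts.  Storage format and path are made up. */
#include <stdio.h>
#include <string.h>
#include <gnutls/gnutls.h>

static int check_pinned_fingerprint (gnutls_session_t session,
                                     const char *pin_file)
{
  const gnutls_datum_t *certs;
  unsigned int ncerts;
  unsigned char fpr[32];
  size_t fpr_len = sizeof fpr;
  char hex[65], stored[65];
  FILE *fp;

  certs = gnutls_certificate_get_peers (session, &ncerts);
  if (!certs || ncerts == 0)
    return -1;

  /* SHA-256 fingerprint of the end-entity certificate (DER).  */
  if (gnutls_fingerprint (GNUTLS_DIG_SHA256, &certs[0], fpr, &fpr_len) < 0)
    return -1;
  for (size_t i = 0; i < fpr_len; i++)
    sprintf (hex + 2 * i, "%02x", fpr[i]);

  fp = fopen (pin_file, "r");
  if (!fp)
    {
      /* First use: remember the fingerprint and accept.  */
      fp = fopen (pin_file, "w");
      if (!fp)
        return -1;
      fprintf (fp, "%s\n", hex);
      fclose (fp);
      return 0;
    }

  /* Later uses: only accept if the fingerprint is unchanged.  */
  if (!fgets (stored, sizeof stored, fp))
    {
      fclose (fp);
      return -1;
    }
  fclose (fp);
  stored[strcspn (stored, "\n")] = 0;
  return strcmp (stored, hex) == 0 ? 0 : -1;
}

Note how short this is compared to the chain verification above, and how few
policy decisions it leaves to the application.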

> Maybe PKI just attracts more research
> to unveil them. So the main criticism of X.509 probably is that it is not 
> simple enough to be easily implemented.  I am no real expert, though.

I'd suggest you reread the Kaminsky paper.  It is very instructive to think
about the source of the problems, which manifest themselves in bugs but are
actually caused by other defects such as a lack of specification or wrong
defaults/mental models.

Now, what I find really exciting is that you identify complexity as the main
problem here, because complexity in crypto systems is a fatal flaw.  It's not
something that can be brushed aside.  "Complexity is the worst enemy of
security." is a well-known Schneier quote;
http://www.schneier.com/crypto-gram-0003.html#8 details some of the reasons.

Thanks,
Marcus
