
Sunday, January 26, 2014

Trust me, really


One of the things which makes security such an interesting business is that sometimes it's not a black-or-white proposition.

Here's a good example of the shades of grey we sometimes deal with.

Say "trust me" to a security person, and you might as well have just shoveled chum into a shark tank.  We are trained to not trust, and if we're good at our job, trust comes us as easily to us as telling the truth comes to a politician.  :-)

But, being able to award trust in a thoughtful way is one of the hallmarks of being a security professional.  At some point we have no choice but to trust others ... even if we don't want to.

For example: we trust NIST and the crypto research community to give us good encryption algorithms, we trust certification labs to test the implementation of those algorithms, we trust our vendors to try to give us good products ... and then we trust them to tell us when they've failed.

And it's not just things we have to trust; we have to trust people in numerous ways.  Nothing chills my heart more than contemplating the "insider threat".

Unfortunately, I believe our ability to trust has been seriously challenged over the last few months.  Here are three recent examples.

The first is that NSA "thing."

Most folks, including any terrorist with half a clue, have assumed for years that the NSA is siphoning up all their information.  But a lot of us were blind-sided when it was revealed that the NSA has been tampering with bits of the fundamental encryption we depend on.  They went so far as to pay at least one company to make their products default to insecure algorithms, specifically so the NSA could then compromise those products.

There's a worldwide infrastructure focused on providing trustworthy encryption products.  And the foundation of that infrastructure is our trust in the NIST certification and testing process.  What we're being told now is that the core of that trust has been undermined ... specifically that the NSA has been planting known-to-them vulnerable encryption algorithms into the public approval process and then paying companies to adopt those vulnerable algorithms.   Not to whine too much, but if you can't trust NIST and RSA to give us good random-number generators, who can you trust?

The second example of trust gone awry I'll mention is the whole home-router scandal. (...Be sure to see my update at the bottom...)

I've honestly lost track of how many "home" routers have turned out to have a back door built into them.  This isn't an example of some idiot engineer installing another Sendmail WIZ bug; this looks like a conscious decision by a bunch of home router manufacturers to put back doors into their products.  I wouldn't be surprised to learn that almost all routers intended for the retail home/small business market have some sort of back door in them.  Scarily, it's not much of a leap to find a common thread between home router back doors and the NSA paying RSA to leave its products vulnerable.
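
For the curious, many of the write-ups linked at the bottom of this post revolve around an undocumented service listening on TCP port 32764.  Here's a minimal Python sketch of the kind of quick check you could run against your own router; the address is just an example, and an open port is only a hint that you should go read the detailed advisories, not proof of a back door.

# Quick heuristic: is the router listening on TCP port 32764?
# (The port number comes from the "TCP-32764" back door write-ups linked
# below.  An open port is a reason to investigate, not proof of anything.)
import socket

ROUTER_IP = "192.168.1.1"    # example address -- substitute your router's LAN IP
BACKDOOR_PORT = 32764

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if port_is_open(ROUTER_IP, BACKDOOR_PORT):
        print("Port 32764 is open -- go read the advisories below.")
    else:
        print("No listener on port 32764 (from this side of the network, anyway).")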

My final observation of broken trust  is just to notice that the NSA trusted Edward Snowden.  How'd that work out for them?

So, other than venting, what's the point here?  Simple: we need to remember what's at the core of trust and learn from these experiences.  Merriam-Webster defines trust as the "belief that somebody or something is reliable, good, honest, effective, etc."  Ultimately, the level of trust you can have in something is directly proportional to how much control you have over its construction or use.  If you've built something yourself from scratch, you can have a lot of trust in it - otherwise you're stuck having to assume that everybody involved in producing it has been reliable, honest and effective.  With a loaf of bread, that's relatively easy, but with a core router on the Internet, the chain of entities you have to trust is very long and complex.

While that sounds like an argument for "don't trust anything", drawing that conclusion is a mistake.  You can't have zero risk and still get anything to work.  Sadly, we have to trust some things.

So if we have to trust the untrustworthy, what can we do?  We've been forcefully reminded that we're at risk when we trust things we don't control.  However, that's nothing new and our response should not be a surprise.

Enter open source.  In the home router arena, there have been open source replacements for manufacturers' router firmware for a while.  While nothing is guaranteed, one of the great things about open source is that it's very hard to hide a back door in open source code.  This doesn't completely solve the problem; there are still opportunities for vendors to hide back doors elsewhere in the device, even if you're running open source software on it.  But it does dramatically raise the bar, and often that's the best you can do.

We can also apply this lesson more broadly.  Think about the entire network stack you're using and ask: where do I have to trust the vendor, and where can I mitigate that trust by using open source?  Think open source OS and applications (e.g. Linux and OpenOffice).  Then think about going beyond that: do you really have to use Google?  Maybe you can up your game a bit and use something like DuckDuckGo, or run your searches through a Tor connection (yes, both of those solutions have their own problems ... nothing's perfect).  Do you really want to buy that Nest?  Maybe one of the open-source thermostats would be more secure (and fun).
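
As a tiny illustration of the Tor idea, here's a hedged Python sketch that sends a web request through a locally running Tor client (Tor's SOCKS proxy normally listens on 127.0.0.1:9050).  It assumes you've installed the third-party requests package with SOCKS support ("pip install requests[socks]"); check.torproject.org is just a convenient page that reports whether your traffic actually arrived over Tor.

# Route an HTTPS request through a local Tor client (SOCKS proxy on 127.0.0.1:9050).
# Requires a running Tor client plus "pip install requests[socks]".
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"   # socks5h = let Tor resolve DNS names too
proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# check.torproject.org reports whether the request came from a Tor exit node.
resp = requests.get("https://check.torproject.org/", proxies=proxies, timeout=30)

if "Congratulations" in resp.text:       # wording of the page may change over time
    print("This request went through Tor.")
else:
    print("This request did NOT go through Tor.")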

There's one other lesson I think we should take away.  It's an oldie, but a goodie and really ties back to trust.  I'm speaking of defense in depth.  The reason I love defense in depth so much is that it's an explicit acknowledgement that you can't completely trust anything.  The point of defense in depth is that when a layer of defense lets you down, i.e. when it turns out you couldn't trust it after all, you've got additional layers to pick up the slack.

When it turns out that the random number generator you used to protect your SSL session was defective and <mumble> is snooping on your email connection, wouldn't it be nice if you had PGP encrypted your sensitive email?   When Unit 61398 takes an interest in your home router, wouldn't it be nice if your data was housed on a server running OpenBSD instead of Windows XP? When your carefully vetted employee decides that your organization is evil, and needs to be taken down a notch or two, wouldn't it be nice if his access truly was limited based on need-to-know?
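
To make the first of those concrete: mail that was encrypted before it ever hit the wire stays protected even if the transport encryption fails you.  Here's a minimal sketch using the third-party python-gnupg package (a wrapper around an installed copy of GnuPG); the recipient address is hypothetical, and their public key is assumed to already be in your keyring.

# Defense in depth for email: encrypt the message body itself with PGP, so a
# compromised TLS session only exposes ciphertext.
# Requires GnuPG installed plus "pip install python-gnupg".
import gnupg

gpg = gnupg.GPG()                  # uses your default GnuPG home directory

recipient = "alice@example.org"    # hypothetical recipient; key must already be imported
message = "The draft audit findings are attached.  Please keep them confidential."

encrypted = gpg.encrypt(message, recipient)
if encrypted.ok:
    print(str(encrypted))          # ASCII-armored ciphertext, safe to paste into an email
else:
    print("Encryption failed:", encrypted.status)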

So, here's the bottom line.  It's easy to get freaked out by some of the recent revelations, but really nothing has changed.  There have always been very serious, very smart, well-resourced attackers on the Internet.  However, the principles you need to use to protect yourself haven't changed; we've just been reminded that they actually matter.

Here's a random collection of links related to the NSA issue and the home router problems.

NSA
http://topics.nytimes.com/top/reference/timestopics/organizations/n/national_security_agency/
https://www.schneier.com/blog/archives/2013/09/the_nsas_crypto_1.html

Home Router Back Doors
http://www.devttys0.com/2013/10/from-china-with-love/
http://www.reddit.com/r/netsec/comments/1orepx/great_new_backdoor_in_tenda_w302r_routers/
http://blog.nruns.com/blog/2013/11/29/In-the-Wild-Malware-for-Routers-Sergio/
http://www.reddit.com/r/netsec/comments/1rn37d/dlink_vulnerability_of_the_week_telnet_interface/
http://securityadvisories.dlink.com/security/publication.aspx?name=SAP10001
http://krebsonsecurity.com/2013/12/important-security-update-for-d-link-routers/
http://www.h725.co.vu/2013/11/d-link-whats-wrong-with-you.html
http://shadow-file.blogspot.nl/2013/10/netgear-root-compromise-via-command.html
http://www.exploit-db.com/exploits/16149/
http://www.securityfocus.com/archive/1/530119

Update: 4/22/2014 Added this link.  One of the primary suppliers of router hardware/software (http://www.sercomm.com/home.aspx?langid=1) claimed to have fixed the affected products, only to be caught hiding the back door even deeper!  OM#$!@*!G

http://arstechnica.com/security/2014/04/easter-egg-dsl-router-patch-merely-hides-backdoor-instead-of-closing-it/
http://www.synacktiv.com/ressources/TCP32764_backdoor_again.pdf

I am speechless ...

(But my point still remains, trust is a necessary evil so mitigate it as best as you can.)






Tuesday, September 17, 2013

I didn't know that gold can tarnish

It's been accepted for years that cryptography is hard to implement correctly and that the market is full of snake-oil products. The only way to be reasonably sure that encryption is effective is to:

  • Stay current
  • Use robust key lengths
  • Manage keys/passwords carefully 
  • And most importantly, only use FIPS 140-2 validated encryption.  

In general, FIPS 140-2 has always been the gold standard of encryption, and trust in FIPS 140-2 has been a cornerstone of being able to trust most security products available today.

However, that trust is now under attack.

The Ars Technica article below provides an alarming report describing how, between 2006 and 2007, the Taiwanese government issued at least 10,000 flawed smart cards.  These smart cards were designed to be used by Taiwanese citizens for many sensitive transactions, including activities such as submitting tax returns.  The flawed cards had virtually useless encryption, putting at risk any data "protected" by them.  The gist of the Ars Technica article is that these failures occurred in spite of the cards being FIPS 140-2 validated, and that the FIPS 140-2 validation process is therefore broken.

But reading the research paper that the Ars Technica article is based on suggests it's not quite that simple...

Despite being FIPS 140-2 validated, it turns out that the random number generator used by the card (technically, the "Renesas AE45C1 smart card microcontroller" used by the card) "sometimes fails", producing (non)random numbers that can lead to certificates which are easily compromised.  This is exactly the type of failure mode that FIPS 140-2 is designed to catch.  However, the generator on this card had an optional "health check" which was intended to detect when the random number generator was failing.  Not surprisingly, the FIPS 140-2 validation for the card only applies if this health check is enabled.  In other words, if the health check is turned off, as it was on the 10,000 or so broken cards, FIPS 140-2 does not apply and you're on your own using these cards.
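
Why is a flaky random number generator so catastrophic?  Because RSA keys built from repeated or predictable primes can be broken with nothing fancier than a greatest-common-divisor computation.  The researchers used far more efficient batch techniques across the full set of certificates, but here's a minimal Python sketch of the underlying idea; the moduli below are tiny made-up numbers purely for illustration.

# If a bad RNG reuses a prime across two RSA keys, gcd(n1, n2) recovers it.
# Toy numbers only -- real moduli are 1024+ bits, and the researchers used
# much faster batch-GCD style techniques across all the collected certificates.
from math import gcd

n1 = 101 * 103    # two hypothetical moduli that accidentally share the prime 101
n2 = 101 * 107
n3 = 109 * 113    # a "healthy" key for contrast

for a, b in [(n1, n2), (n1, n3)]:
    p = gcd(a, b)
    if p > 1:
        print(f"{a} and {b} share the prime {p}; both keys are now factored.")
    else:
        print(f"{a} and {b} share no factor; this pair reveals nothing.")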

Here's the way the research report describes the problem (MOICA is the agency which issued the cards):

"Unfortunately, the hardware random-number generator on the AE45C1 smart card microcontroller sometimes fails, as demonstrated by our results. These failures are so extreme that they should have been caught by standard health tests, and in fact the AE45C1 does offer such tests. However, as our results show, those tests were not enabled on some cards. This has now also been confirmed by MOICA. MOICA’s estimate is that about 10000 cards were issued without these tests, and that subsequent cards used a “FIPS mode” (see below) that enabled these tests"

This is pretty standard: if you look at FIPS 140-2 validation reports or Common Criteria evaluations, you'll always see a very precise description of exactly how the product must be configured in order for the validation to apply.  It's common to see that FIPS 140-2 validated software or hardware has a "FIPS mode" which must be enabled for the validation to apply.
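
For a feel of what such a health test looks like, here's a Python sketch in the spirit of the FIPS 140-2 continuous random-number-generator test: each new block of generator output is compared to the previous block, and the generator is declared dead if two consecutive blocks are identical - the kind of extreme, "stuck" failure the researchers describe.  This is purely illustrative and is not the vendor's actual check.

# Sketch of a FIPS 140-2 style "continuous RNG test": compare each new block of
# generator output to the previous block and raise an alarm if they match.
# (Illustrative only -- not the smart card vendor's actual health check.)
import os

class RngHealthError(Exception):
    pass

class MonitoredRng:
    def __init__(self, source=os.urandom, block_size=16):
        self._source = source
        self._block_size = block_size
        self._previous = source(block_size)   # first block is used only for comparison

    def read(self):
        block = self._source(self._block_size)
        if block == self._previous:
            raise RngHealthError("RNG produced two identical blocks -- stop using it")
        self._previous = block
        return block

rng = MonitoredRng()
key_material = rng.read() + rng.read()        # would raise if the generator were stuck
print(key_material.hex())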

Looking at the FIPS 140-2 certificate for at least one version of this chip (https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Zertifizierung/Reporte02/0212a_pdf.pdf), the report specifically says "postprocessing should be included in the users embedded software", which I believe is a requirement to include the health check.

Clearly, this was a horrific failure.  The Taiwanese government issued a bunch of completely broken smart cards that were used to authenticate citizens and protect sensitive data.  Yes, the folks producing the card made a critical mistake.  But placing this at the feet of FIPS 140-2 is, IMHO, missing the point.

It would be nice if FIPS 140-2 meant a product was idiot proof, but that's not the way the world works.  Encryption is complicated and has to be done correctly or it doesn't work.  The whole purpose of the FIPS 140-2 testing regime is to ensure that encryption has been rigorously tested under controlled conditions ... and most importantly, to document those conditions so that we know how to use it in a way that can be trusted.  Just because something is FIPS 140-2 validated doesn't mean it's idiot proof or that it can't be configured insecurely.

In any event, in my opinion, even though the validation paperwork is very clear about what was and wasn't tested on this chip, NIST's reputation has been badly tarnished and it will take significant time and effort on their part to undo the damage.  I guess even a gold standard can tarnish sometimes ...

Here's the Ars Technica article describing the failure:
http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/2/

Here's the actual research paper describing the findings:
http://smartfacts.cr.yp.to/smartfacts-20130916.pdf

A very good overview of the problem provided by the researchers:
http://smartfacts.cr.yp.to/index.html

Tin-Foil Hat Addendum

Given the <euphemism>crisis of trust</euphemism> that the NSA and the US Government are currently going through, including accusations that the NSA has been surreptitiously weakening encryption products, it's very hard to avoid the theory that the NSA might have had a hand in this failure.  That's certainly possible, but there's nothing to suggest that the lab issuing the FIPS 140-2 validation was complicit.

BTW, given the relationship between Taiwan and Mainland China, I've always assumed that Taiwan is constantly under cyber attack by China.  Putting on my second layer of tin-foil headgear, could this be the result of a Chinese effort, not an American one?  I'm sure the NSA is very good, but China is certainly no cyber-slouch either, and they might have a better pool of human resources on the ground in Taiwan - which would have simplified introducing this vulnerability into the card.

Finally, removing my tin-foil hats for a second, this could simply be a screw up.  Broken products get shipped every day, and encryption errors like this are subtle and hard to notice when present in only a very small percentage of the cards.