Sunday, August 17, 2014

DEFCON 22 Impressions

For the third year in a row I was able to attend DEFCON (self funded), and I must say I'm completely hooked.

The crowds were even more out of control than last year.  I got in line for the badge around 6:30 Thursday morning, and the line was already humongous.  By the time they started handing out badges (9:00), the line extended all the way back to the casino and they had started turning folks away.  Amazing!

In fact, they ran out of badges by the end of the day.  Those folks who missed out on "The" badge got a paper badge and a DEFCON 20 badge as a consolation prize.  I feel bad for the folks who didn't get one.  There were 14,000 badges and it wasn't enough.  I can imagine it's very hard to predict how many folks will attend a conference like DEFCON (especially since there's no pre-registration).  Still, they usually get it right; I wonder what led them to so completely misjudge how many badges they needed this time.

Speaking of the badge, it was awesome!  Almost as good as the DEFCON 20 badge, and clearly a prize worth the wait.  As with the DEFCON 20 badge, it was based on the Parallax Propeller development system. http://www.parallax.com/news/2014-08-06/propeller-1-defcon-22-badges-las-vegas

I spent some time playing with the badge, mostly establishing communication between it and my Linux system (minicom, 57600 baud with ttyUSB0).  Once I got that running, the badge started typing random commands to the screen, but due to the primitive communications the commands were overwriting each other and I wasn't sure I was getting it all.  I then found the source for the badge on the CD they handed out.  :-)
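
(If you'd rather script the serial connection than fire up minicom, something like this Ruby sketch ought to work.  It uses the 'serialport' gem; the device name and settings below are simply what worked for me.)

#!/usr/bin/env ruby
# Dump whatever the badge types over the serial link (57600 8N1 on ttyUSB0).
require 'serialport'

port = SerialPort.new('/dev/ttyUSB0', 57600, 8, 1, SerialPort::NONE)

loop do
  line = port.gets        # read a line from the badge
  puts line if line
end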

Poking at the source I found the strings that the badge prints to the screen.  They were encoded, but the routines to decode them were in the source, so I wrote a quick and dirty program to print them all.  Below is a cleaned-up version of that program, which decodes and prints all the "secret" hints that the badge types when you connect to it.  (Of course, this is just for the "Human" badge; I don't know what the other badges do.)  I included the relevant code from the badge as comments in my program.




#!/usr/bin/env ruby

# Decode the "secret" strings in the defcon 22 badge source code.

################################################################
# Here's snippets of the spin source (and the strings) from the badge
#
#  RayNelson   byte      "IAIHG TPJNU QU CZR GALWXK DC MHR LANK FOTLA OTN LOYOC HPMPB PX HKICW",0
#  Test4       byte      "DID YOU REALLY THINK THAT IT WOULD BE SO EASY? Really?  Just running strings?",0
#  Greets      byte      16,77,85,66,83,69,67,85,32,74,69,32,84,85,86,83,69,68,32,74,77,85,68,74,79,32,74,77,69,13,0
#  Detective   byte      13,74,85,82,69,82,32,71,66,32,79,82,84,86,65,32,86,32,88,65,66,74,32,83,86,65,81,32,85,78,69,66,89,81,13,0
#  Scientist   byte      76,81,84,89,86,70,32,82,75,66,32,83,78,90,32,83,81,87,83,85,32,87,82,65,32,73,77,82,66,32,67,70,72,82,32,90,65,65,65,65,32,73,89,77,87,90,32,80,32,69,65,74,81,86,68,32,89,79,84,80,32,76,71,65,87,32,89,75,90,76,13,0
#  Diver       byte      10,"DBI DRO PSBCD RKVP YP RSC ZRYXO XEWLOB PYVVYGON LI RSC VKCD XKWO DROX DRO COMYXN RKVP YP RSC XEWLOB",CR,0
#  Driver      byte      "SOMETIMES WE HAVE ANSWERS AND DONT EVEN KNOW IT SO ENJOY THE VIEW JUST BE HAPPY",0
#  Politician  byte      83,83,80,87,76,77,32,84,72,67,65,80,32,81,80,32,74,84,32,73,87,69,32,87,68,88,70,90,32,89,85,90,88,32,85,77,86,72,88,72,32,90,65,32,67,66,32,80,65,69,32,88,82,79,76,32,70,65,89,32,73,80,89,75,13,0
#  Test3       byte      "ZGJG MTM LLPN C NTER MPMH TW",CR,0
#  Football    byte      "IT MIGHT BE HELPFUL LATER IF YOU KNOW HOW TO GET TO EDEN OR AT LEAST THE WAY",0
#  Mystery     byte      "OH A MYSTERY STRING I SHOULD HANG ON TO THIS FOR LATER I WONDER WHAT ITS FOR OR WHAT IT DECODES TO?",0
#
#  Cmd00       byte      $05, $42, $54, $57, $50, $20, $4A, $4E, $4C, $4D, $59, $20, $4D, $54
#              byte      $5A, $57, $58, $0D, $00, $4C, $4F, $56, $45, $00 
#  Cmd01       byte      $04, $41, $45, $58, $47, $4C, $20, $58, $5A, $0D, $00
#  Cmd02       byte      $0E, $47, $49, $50, $41, $57, $48, $0D, $00, $4C, $49, $46, $45, $00
#  Cmd03       byte      $0C, $45, $46, $4D, $4B, $20, $4D, $45, $58, $51, $51, $42, $0D, $00
#  Cmd04       byte      $04, $53, $46, $49, $43, $0D, $00, $47, $69, $47, $21, $00
#  Cmd05       byte      $14, $48, $49, $20, $43, $48, $58, $59, $4A, $59, $48, $58, $59, $48
#              byte      $4E, $20, $4E, $42, $49, $4F, $41, $42, $4E, $0D, $00
#  Cmd06       byte      $02, $50, $51, $20, $4B, $4F, $43, $49, $4B, $50, $43, $56, $4B, $51
#              byte      $50, $0D, $00, $4A, $6F, $6E, $6E, $79, $4D, $61, $63, $00
#  Cmd07       byte      $0C, $59, $4D, $44, $44, $4B, $20, $4D, $5A, $50, $20, $44, $51, $42
#              byte      $44, $41, $50, $47, $4F, $51, $0D, $00, $48, $41, $50, $50, $59, $00
#  Cmd08       byte      $05, $4A, $46, $59, $0D, $00, $48, $45, $41, $4C, $54, $48, $00
#  Cmd09       byte      $09, $4D, $58, $20, $57, $58, $43, $20, $5A, $44, $4E, $42, $43, $52
#              byte      $58, $57, $20, $4A, $44, $43, $51, $58, $41, $52, $43, $48, $0D, $00 
#  Cmd10       byte      $0F, $52, $44, $43, $48, $4A, $42, $54, $0D, $00
#  Cmd11       byte      $02, $45, $51, $50, $48, $51, $54, $4F, $0D, $00
#  Cmd12       byte      $19, $41, $54, $58, $0D, $00, $57, $45, $41, $4C, $54, $48, $00
#              byte      $31, $6F, $35, $37, $00
#
##################################
# Here's where the various hints are decoded.  Notice two different schemes are used.
#
#  term.caesar(@@Commands[idx])
#  term.caesar(@Greets)
#  term.otp(@Test3, @Test4)
#  term.caesar(@Detective)                                 ' display crypto string
#  term.otp(@Scientist, @Driver)
#  term.caesar(@Diver)
#  term.otp(@Politician, @Football)
#  term.otp(@RayNelson, @Mystery)
#
##################################
# Here are the actual spin decoding routines.  exor isn't actually used (here)
#
#pub caesar(p_zstr) | c                                          ' *1o57*   
#
#  lost := byte[p_zstr++]
#  repeat strsize(p_zstr)
#    c := byte[p_zstr++] 
#    case c
#      32    : tx(32)
#      13    : tx(13)
#      other : tx((((c-65)+26-lost)//26)+65)
#
#      
#pub exor(p_zstr)                                                ' *1o57*   
#
#  lost := byte[p_zstr++]
#  repeat strsize(p_zstr)
#    tx(byte[p_zstr++]^lost)
#    
#    
#pub otp(p_zstr1, p_zstr2)                                       ' *1o57*
#
#  repeat until (byte[p_zstr1] == 0)
#    if (byte[p_zstr1] == 32)
#      tx(32)
#      p_zstr1++
#      
#    elseif (byte[p_zstr1] == 13)
#      tx(13)
#      p_zstr1++
#      
#    elseif (byte[p_zstr2] == 32)
#      p_zstr2++
#      
#    else
#      tx((((byte[p_zstr1++]-65)+(byte[p_zstr2++]-65))//26)+65)
#
################################################################

#
# Here are my versions of the decoding routines
#

def caesar (message)
  decoded = ""
  lost = message.delete_at(0)

  message.each {|c|
    case c
    when 32 then result = 32
    when 13 then next
    when 0 then break
    else
      result = ((((c-65)+26-lost)%26)+65)
    end
    decoded = decoded + result.chr
  }
  decoded
end

def otp (message, key)
  decoded = ""

  msg_index = 0
  key_index = 0

  while msg_index < message.length
    if key_index < key.length
      c = message[msg_index]
      d = key[key_index]
      case c
      when 32 then 
        result = 32
        msg_index += 1
      when 13 then 
        msg_index += 1
        next
      when 0 then
        break
      else
        if d == 32 then
          key_index +=  1
          next
        else
          result = ((((c-65)+(d-65))%26)+65)
          msg_index += 1
          key_index += 1
        end
      end
      decoded = decoded + result.chr
    else
      break     # the key ran out before the message; stop decoding
    end
  end
  decoded
end

#
# A  couple of auxiliary routines to adjust data types.
#

def s2c (string)  # String2Char .. convert string of decimal numbers to array of chars
  result = []
  string.each_byte {|char|
    result.push(char)
  }
  result
end


def h2c (array_string)   # Hex2Char .. convert string of hex numbers (in spin format) to array of chars
  result = []
  array_string.gsub(/\$/, '0x').scan(/0x../) {|element| result.push(element.hex)}
  result
end

#
# OK, finally. Decode all the hints and print them
#

cmd00 = h2c ("$05, $42, $54, $57, $50, $20, $4A, $4E, $4C, $4D, $59, $20, $4D, $54, $5A, $57, $58, $0D, $00, $4C, $4F, $56, $45, $00")
puts "caeser(Cmd00) = #{caesar(cmd00)}"

cmd01 = h2c ("$04, $41, $45, $58, $47, $4C, $20, $58, $5A, $0D, $00")
puts "caesar(Cmd01) = #{caesar(cmd01)}"

cmd02 = h2c("$0E, $47, $49, $50, $41, $57, $48, $0D, $00, $4C, $49, $46, $45, $00")
puts "caesar(Cmd02) = #{caesar(cmd02)}"

cmd03 = h2c("$0C, $45, $46, $4D, $4B, $20, $4D, $45, $58, $51, $51, $42, $0D, $00")
puts "caesar(Cmd03) = #{caesar(cmd03)}"

cmd04 = h2c("$04, $53, $46, $49, $43, $0D, $00, $47, $69, $47, $21, $00")
puts "caesar(Cmd04) = #{caesar(cmd04)}"

cmd05 = h2c("$14, $48, $49, $20, $43, $48, $58, $59, $4A, $59, $48, $58, $59, $48, $4E, $20, $4E, $42, $49, $4F, $41, $42, $4E, $0D, $00")
puts "caesar(Cmd05) = #{caesar(cmd05)}"

cmd06 = h2c("$02, $50, $51, $20, $4B, $4F, $43, $49, $4B, $50, $43, $56, $4B, $51, $50, $0D, $00, $4A, $6F, $6E, $6E, $79, $4D, $61, $63, $00")
puts "caesar(Cmd06) = #{caesar(cmd06)}"

cmd07 = h2c("$0C, $59, $4D, $44, $44, $4B, $20, $4D, $5A, $50, $20, $44, $51, $42, $44, $41, $50, $47, $4F, $51, $0D, $00, $48, $41, $50, $50, $59, $00")
puts "caesar(Cmd07) = #{caesar(cmd07)}"

cmd08 = h2c("$05, $4A, $46, $59, $0D, $00, $48, $45, $41, $4C, $54, $48, $00")
puts "caesar(Cmd08) = #{caesar(cmd08)}"

cmd09 = h2c("$09, $4D, $58, $20, $57, $58, $43, $20, $5A, $44, $4E, $42, $43, $52, $58, $57, $20, $4A, $44, $43, $51, $58, $41, $52, $43, $48, $0D, $00")
puts "caesar(Cmd09) = #{caesar(cmd09)}"

cmd10 = h2c("$0F, $52, $44, $43, $48, $4A, $42, $54, $0D, $00")
puts "caesar(Cmd10) = #{caesar(cmd10)}"

cmd11 = h2c("$02, $45, $51, $50, $48, $51, $54, $4F, $0D, $00")
puts "caesar(Cmd11) = #{caesar(cmd11)}"

cmd12 = h2c("$19, $41, $54, $58, $0D, $00, $57, $45, $41, $4C, $54, $48, $00, $31, $6F, $35, $37, $00")
puts "caesar(Cmd12) = #{caesar(cmd12)}"

greets = [16,77,85,66,83,69,67,85,32,74,69,32,84,85,86,83,69,68,32,74,77,85,68,74,79,32,74,77,69,13,0]
puts "caesar(Greets) -> #{caesar(greets)}\n"

test3 = s2c ("ZGJG MTM LLPN C NTER MPMH TW")
test4 = s2c("DID YOU REALLY THINK THAT IT WOULD BE SO EASY? Really?  Just running strings?")
puts "otp (Test3, Test4) -> #{otp(test3, test4)}"

detective = [13,74,85,82,69,82,32,71,66,32,79,82,84,86,65,32,86,32,88,65,66,74,32,83,86,65,81,32,85,78,69,66,89,81,13,0]
puts "caesar(Detective) -> #{caesar(detective)}\n"

scientist = [76,81,84,89,86,70,32,82,75,66,32,83,78,90,32,83,81,87,83,85,32,87,82,65,32,73,77,82,66,32,67,70,72,82,32,90,65,65,65,65,32,73,89,77,87,90,32,80,32,69,65,74,81,86,68,32,89,79,84,80,32,76,71,65,87,32,89,75,90,76,13,0]
driver = s2c ("SOMETIMES WE HAVE ANSWERS AND DONT EVEN KNOW IT SO ENJOY THE VIEW JUST BE HAPPY")
puts "otp (Scientist, Driver) ->  #{otp(scientist, driver)}"

diver = [10] + s2c("DBI DRO PSBCD RKVP YP RSC ZRYXO XEWLOB PYVVYGON LI RSC VKCD XKWO DROX DRO COMYXN RKVP YP RSC XEWLOB")
puts "caesar (Diver) -> #{caesar(diver)}"

politician = [83,83,80,87,76,77,32,84,72,67,65,80,32,81,80,32,74,84,32,73,87,69,32,87,68,88,70,90,32,89,85,90,88,32,85,77,86,72,88,72,32,90,65,32,67,66,32,80,65,69,32,88,82,79,76,32,70,65,89,32,73,80,89,75,13,0]
football = s2c("IT MIGHT BE HELPFUL LATER IF YOU KNOW HOW TO GET TO EDEN OR AT LEAST THE WAY")
puts "otp(Politician, Football) -> #{otp(politician, football)}"

raynelson = s2c("IAIHG TPJNU QU CZR GALWXK DC MHR LANK FOTLA OTN LOYOC HPMPB PX HKICW")
mystery = s2c("OH A MYSTERY STRING I SHOULD HANG ON TO THIS FOR LATER I WONDER WHAT ITS FOR OR WHAT IT DECODES TO?")
puts "otp(RayNelson, Mystery) -> #{otp(raynelson, mystery)}"


And here's the output from this program:

caesar(Cmd00) = WORK EIGHT HOURS
caesar(Cmd01) = WATCH TV
caesar(Cmd02) = SUBMIT
caesar(Cmd03) = STAY ASLEEP
caesar(Cmd04) = OBEY
caesar(Cmd05) = NO INDEPENDENT THOUGHT
caesar(Cmd06) = NO IMAGINATION
caesar(Cmd07) = MARRY AND REPRODUCE
caesar(Cmd08) = EAT
caesar(Cmd09) = DO NOT QUESTION AUTHORITY
caesar(Cmd10) = CONSUME
caesar(Cmd11) = CONFORM
caesar(Cmd12) = BUY
caesar(Greets) -> WELCOME TO DEFCON TWENTY TWO
otp (Test3, Test4) -> COME AND PLAY A GAME WITH ME
caesar(Detective) -> WHERE TO BEGIN I KNOW FIND HAROLD
otp (Scientist, Driver) ->  DEFCON DOT ORG SLASH ONE ZERO FIVE SEVEN SLASH I WONDER WHAT GOES HERE
caesar (Diver) -> TRY THE FIRST HALF OF HIS PHONE NUMBER FOLLOWED BY HIS LAST NAME THEN THE SECOND HALF OF HIS NUMBER
otp(Politician, Football) -> ALBERT MIGHT BE ON THE PHONE WITH HAROLD SO IF ITS BUSY TRY BACK
otp(RayNelson, Mystery) -> WHITE LINES IN THE MIDDLE OF THE ROAD THATS THE WORST PLACE TO DRIVE

Of course, this was just scratching the surface.  To actually solve the mystery of the badge was a massive and extremely challenging undertaking.  Here's one description of the entire solution: http://potatohatsecurity.tumblr.com/post/94565729529/defcon-22-badge-challenge-walkthrough

Many of the talks I was able to get into were excellent, with the most enjoyable one being the very last talk of the conference, "Elevator Hacking" by Deviant Ollam and Howard Payne.  Wow, I'll never be able to look at an elevator the same way again!  I recommend looking for it when the video becomes available.

At the closing ceremony, DT gave the best news we could have asked for ... next year's DEFCON will be hosted at both the Bally's and Paris casino/conference centers.  This ought to resolve the space problems which have plagued DEFCON the last few years.

I'm already looking forward to it!

Sunday, April 13, 2014

As the Heart Bleeds (A new cryptographic soap opera)


By now, I'm sure you've heard of HeartBleed.  If not, you've missed a fun time. A good description of what it's all about is at: http://blog.cryptographyengineering.com/2014/04/attack-of-week-openssl-heartbleed.html

One of the "interesting" aspects of this vulnerability is that it places the private SSL certificate for the server at risk.  At first, there was some doubt as to just how vulnerable the private certificate really was, but as of 4/12/2014, the vulnerability of a server's private certificate has been clearly demonstrated: http://arstechnica.com/security/2014/04/private-crypto-keys-are-accessible-to-heartbleed-hackers-new-data-shows/

The first person to publicly demonstrate that the private key is vulnerable, Fedor Indutny (https://twitter.com/indutny), also had a snippet of code on his twitter feed showing how to grab and view certificate revocation lists (CRLs).

I was idly curious ... since we now know that any server's private key has potentially been compromised ... how many folks have started to revoke their server's SSL certificates?

Below are the results of combining the hint provided by Indutny with a bit of Perl.  The vertical axis is the number of certificates revoked on a given day, with time flowing from left to right.
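
My charts were actually produced with Perl, but here's a minimal Ruby sketch, under some assumptions, of the same idea: fetch a CRL, parse it with OpenSSL, and tally revocations per day.  The CRL URL is just one example source (any of the CRLs listed further down should do), and older Rubies use plain open where newer ones want URI.open.

#!/usr/bin/env ruby
# Tally certificate revocations per day from a single CRL.
require 'open-uri'
require 'openssl'

url = 'http://crl3.digicert.com/ssca-g5.crl'     # example CRL source
crl = OpenSSL::X509::CRL.new(URI.open(url).read)

counts = Hash.new(0)
crl.revoked.each {|r| counts[r.time.strftime('%Y-%m-%d')] += 1 }

# One "date count" line per day -- easy to feed to gnuplot.
counts.sort.each {|day, n| puts "#{day} #{n}" }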

Here we can see the number of certificates revoked over the past two years (see updates below for discussion of the spike around 4/16):



And here we can see the activity over the past month.  Since we know that some folks received advance notification of the vulnerability, I was especially interested to see whether there was a spike in revocations prior to the announcement.  (I don't see one.)



Just to provide some context, here's the same data for this calendar year. 


If you're interested in drilling into this a bit deeper, check out the numerous charts at:


Obviously, I'll keep an eye on certificate revocations for a while.  All these charts are now being updated automatically every night.

It'll be interesting to see what the next weeks will bring.  Some folks are predicting a meltdown of both the CRL handling infrastructure (straight CRLs, OCSP and Google's CRLSet), as well as the infrastructure for issuing replacement SSL certificates.


Update (4/15/2014):  Those who know me know I'm extraordinarily lazy; I'll spend days getting a computer to do 5 minutes of work for me.  :-)  I've automated producing these charts, and through the magic of cron I'll be updating them daily.  They're not the pretty ones I created with Excel previously, but gnuplot is my friend in this case.


Update (4/16/2014): The good folks at SANS published a diary entry yesterday looking at the same issues.  They produced a similar chart, but from a much richer set of sources.  When I contacted them to ask what their sources were, not only did they provide a list - they put up a spiffy page which features an automated chart.  So the charts here now have a total of 16 different sources for CRLs.  Whoopee!


Here's a link to their CRL tracking page: https://isc.sans.edu/crls.html


Update (4/16/2014 - #2): SANS has provided an even more comprehensive list of CRLs.  They also noticed that, with the enhanced list, there's a huge spike in certificate revocations ... driven by a massive batch of revocations from Globalsign.com.




Update (4/18/2014): The rest of the world is starting to catch up on this issue.  This article looks at the impact of massive CRL changes to the "performance" of the Internet - and confirms that Akamai is planning a major round of certificate revocations: http://www.zdnet.com/internet-slowed-by-heartbleed-identity-crisis-7000028506/

Here, BTW, is an observation from Cloudflare on the cost of massive certificate revocation: http://blog.cloudflare.com/the-hard-costs-of-heartbleed


Update (4/20/2014):  In an effort to provide more comprehensive monitoring, I've modified (actually rewritten) my program to provide a plethora of charts.  They're updated daily, and available at this URL:


I've added this link to the primary blog entry above. 


Update (9/10/2014): This excellent paper just came out.  It's a comprehensive review of how site admins responded to the Heartbleed vulnerability.  Two thumbs up!






And here's the updated list of the CRL sources which both SANS and I use:







  • http://corppki/crl/MSIT Machine Auth CA 2(1).crl,
  • http://crl-ssl.certificat2.com/keynectis/class2keynectisca.crl,
  • http://crl.comodoca.com/COMODOExtendedValidationSecureServerCA.crl,
  • http://crl.comodoca.com/COMODOHigh-AssuranceSecureServerCA.crl,
  • http://crl.comodoca.com/COMODOSSLCA.crl,
  • http://crl.entrust.net/level1c.crl,
  • http://crl.globalsign.com/gs/gsdomainvalg2.crl,
  • http://crl.globalsign.com/gs/gsorganizationvalg2.crl,
  • http://crl.godaddy.com/gdig2s1-42.crl,
  • http://crl.godaddy.com/gds1-54.crl,
  • http://crl.godaddy.com/gds1-85.crl,
  • http://crl.microsoft.com/pki/mscorp/crl/MSIT Machine Auth CA 2(1).crl,
  • http://crl.netsolssl.com/NetworkSolutions_CA.crl,
  • http://crl.omniroot.com/PublicSureServerSV.crl,
  • http://crl.startssl.com/crt2-crl.crl,
  • http://crl.usertrust.com/USERTrustLegacySecureServerCA.crl,
  • http://crl2.netsolssl.com/NetworkSolutions_CA.crl,
  • http://crl3.digicert.com/ca3-g27.crl,
  • http://crl3.digicert.com/sha2-ev-server-g1.crl,
  • http://crl3.digicert.com/ssca-g5.crl,
  • http://crl4.digicert.com/ca3-g27.crl,
  • http://crl4.digicert.com/sha2-ev-server-g1.crl,
  • http://EVIntl-crl.verisign.com/EVIntl2006.crl,
  • http://EVSecure-crl.verisign.com/EVSecure2006.crl,
  • http://gtssl-crl.geotrust.com/crls/gtssl.crl,
  • http://gtssl2-crl.geotrust.com/gtssl2.crl,
  • http://mscrl.microsoft.com/pki/mscorp/crl/MSIT Machine Auth CA 2(1).crl,
  • http://pki.google.com/GIAG2.crl,
  • http://sd.symcb.com/sd.crl,
  • http://svr-sgc-crl.thawte.com/ThawteSGCG2.crl,
  • http://SVRIntl-G3-crl.verisign.com/SVRIntlG3.crl,
  • http://SVRSecure-G3-crl.verisign.com/SVRSecureG3.crl



    Wednesday, March 26, 2014

    Every Little Bit Helps


    I thought this was a pretty interesting response to the debacle that our current certificate infrastructure has become.

    https://sites.google.com/site/certificatetransparency/ev-ct-plan

    In short, Google is going to try to encourage the use of Certificate Transparency to help deal with the weakness in our certificate infrastructure exposed by the recent rash of invalid certificate incidents.

    Certificate Transparency is the idea that whenever a new certificate is issued, that event is logged in a public logfile.  In fact, anyone can log a certificate to a public logfile.  Interested folks could then audit the logfile for signs of fraud or erroneously issued certificates.

    In other words, if users can reject certificates not published in the logfile, certificate forgers will have to publish their forged certificates in the public logfile for them to be useful ... permitting the legitimate domain owners to see that a forged certificate has been issued.

    For a description of the concept, see:
    https://sites.google.com/site/certificatetransparency/what-is-ct
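
    For folks who like to poke at these things directly, the logs expose a simple HTTP/JSON API (RFC 6962).  Here's a minimal Ruby sketch, using Google's pilot log purely as an example (any compliant log exposes the same endpoints), that grabs the signed tree head and a few of the most recently logged entries:

    #!/usr/bin/env ruby
    # Query a Certificate Transparency log via the RFC 6962 HTTP API.
    require 'open-uri'
    require 'json'

    log = 'https://ct.googleapis.com/pilot'        # example public CT log

    # The signed tree head tells us how many certificates have been logged.
    sth = JSON.parse(URI.open("#{log}/ct/v1/get-sth").read)
    puts "tree size: #{sth['tree_size']}"
    puts "timestamp: #{Time.at(sth['timestamp'] / 1000)}"

    # Pull the five most recently logged entries.
    last    = sth['tree_size'] - 1
    entries = JSON.parse(URI.open("#{log}/ct/v1/get-entries?start=#{last - 4}&end=#{last}").read)
    puts "fetched #{entries['entries'].length} entries"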

    On the one hand, I really like the idea.  As I've mentioned before, I firmly believe that the way to make the Internet more secure is to move aggressively to improve transparency and utilize open-source.

    On the other hand, it really doesn't change anything fundamentally.  The issuers of certificates will still be the primary source of information in the logfile, which means that a rogue or compromised certificate authority will still be able to issue invalid certificates.  The logfile will just, potentially, provide a better opportunity for those who are paying attention to catch bad certificates more quickly.

    There is an opportunity for Certificate Transparency to provide additional value, I think.  I'm no certificate expert, but what if, when users are presented with a certificate not listed in a logfile, in addition to having the option of rejecting the certificate they could also "log" it to the logfile as suspect - providing, in effect, a distributed early warning system for questionable certificates?  In fact, I would imagine that the logfile service could simply track and publicize when it's queried for certificates it doesn't know about (note: there are probably privacy issues with automatically publicizing failed certificate lookups.)

    Part of the challenge will be giving users the ability to query about certificates when they find themselves in a hostile environment.  Consider for example when a nation state controls DNS and is trying to use forged certificates to conduct man-in-the-middle attacks.  If the attacker can spoof or compromise a logfile service, then we're back to square one.

    I still sometimes dream of a PGP-style web-of-trust certificate system, which would rely on multiple links of trust to generate a score based on how broadly trusted a given certificate is ... unlike the hierarchical, centralized chain-of-trust we currently use.  The biggest problems with a web of trust are that it probably wouldn't scale well enough, and that it places more of the responsibility on the user to manage how they allocate trust.

    In any event, it's always nice when elephantine companies like Google throw their weight behind something that can only help.

    Sunday, January 26, 2014

    Trust me, really


    One of the things which makes security such an interesting business is that sometimes it's not a black-or-white proposition.

    Here's a good example of the shades of grey we sometimes deal with.

    Say "trust me" to a security person, and you might as well have just shoveled chum into a shark tank.  We are trained to not trust, and if we're good at our job, trust comes us as easily to us as telling the truth comes to a politician.  :-)

    But, being able to award trust in a thoughtful way is one of the hallmarks of being a security professional.  At some point we have no choice but to trust others ... even if we don't want to.

    For example: we trust NIST and the crypto research community to give us good encryption algorithms, we trust certification labs to test the implementations of those algorithms, we trust our vendors to try to give us good products ... and then we trust them to tell us when they've failed.

    And it's not just things we have to trust, we have to trust people in numerous ways - nothing chills my heart more than to contemplate the "insider threat".

    Unfortunately, over the last few months, I believe that our ability to trust has been seriously challenged.  Here are three recent examples.

    The first is that NSA "thing."

    Most folks, including any terrorist with half a clue, have assumed for years that the NSA is siphoning up all their information.  But a lot of us were blind-sided when it was revealed that the NSA has been tampering with bits of the fundamental encryption we depend on.  They went so far as to pay at least one company to make their products default to insecure algorithms, specifically so the NSA could then compromise those products.

    There's a worldwide infrastructure focused on providing trustworthy encryption products.  And the foundation of that infrastructure is our trust in the NIST certification and testing process.  What we're being told now is that the core of that trust has been undermined ... specifically that the NSA has been planting known-to-them vulnerable encryption algorithms into the public approval process and then paying companies to adopt those vulnerable algorithms.   Not to whine too much, but if you can't trust NIST and RSA to give us good random-number generators, who can you trust?

    The second example of trust gone awry I'll mention is the whole home-router scandal. (...Be sure to see my update at the bottom...)

    I've honestly lost track of how many "home" routers have turned out to have a back door built into them.  This isn't an example of some idiot engineer installing another Sendmail WIZ bug; this looks like a conscious decision by a bunch of the home router manufacturers to put back doors into their products.  I wouldn't be surprised to learn that almost all routers intended for the retail home/small business market have some sort of back door in them.  Scarily, it's not much of a leap to find a common thread between home router back doors and the NSA paying RSA to leave their products vulnerable.

    My final observation of broken trust  is just to notice that the NSA trusted Edward Snowden.  How'd that work out for them?

    So, other than venting, what's the point here?  Simple: we need to remember what's at the core of trust and learn from these experiences.  Merriam-Webster defines trust as the "belief that somebody or something is reliable, good, honest, effective, etc."  Ultimately, the level of trust you can have in something is directly proportional to how much control you have over its construction or use.  If you've built something yourself from scratch, you can have a lot of trust in it - otherwise you're stuck having to assume that everybody involved in producing it has been reliable, honest and effective.  With a loaf of bread, that's relatively easy, but with a core router on the Internet, the chain of entities you have to trust is very long and complex.

    While that sounds like an argument for "don't trust anything", drawing that conclusion is a mistake.  You can't have zero risk and still get anything to work.  Sadly, we have to trust some things.

    So if we have to trust the untrustworthy, what can we do?  We've been forcefully reminded that we're at risk when we trust things we don't control.  However, that's nothing new and our response should not be a surprise.

    Enter open source.  In the home router arena, there have been open source replacements for manufacturers' router code for a while.  While nothing is guaranteed, one of the great things about open source is that it's very hard to hide a back door in open source code.  This doesn't completely solve the problem; there are still opportunities for vendors to hide back doors even if you're running open source software on their hardware.  But it does dramatically raise the bar, and often that's the best you can do.

    We can also apply this lesson more broadly.  Think about the entire network stack you're using and ask: where do I have to trust the vendor, and where can I mitigate that trust by using open source?  Think open source OS and applications (e.g. Linux and OpenOffice.)  Then think about going beyond that: do you really have to use Google?  Maybe you can up your game a bit and use something like DuckDuckGo, or run your searches through a Tor connection (yes, both of those solutions have their own problems ... nothing's perfect.)  Do you really want to buy that Nest, or would one of the open-source thermostats be more secure (and fun)?

    There's one other lesson I think we should take away.  It's an oldie, but a goodie and really ties back to trust.  I'm speaking of defense in depth.  The reason I love defense in depth so much is that it's an explicit acknowledgement that you can't completely trust anything.  The point of defense in depth is that when a layer of defense lets you down, i.e. when it turns out you couldn't trust it after all, you've got additional layers to pick up the slack.

    When it turns out that the random number generator you used to protect your SSL session was defective and <mumble> is snooping on your email connection, wouldn't it be nice if you had PGP encrypted your sensitive email?   When Unit 61398 takes an interest in your home router, wouldn't it be nice if your data was housed on a server running OpenBSD instead of Windows XP? When your carefully vetted employee decides that your organization is evil, and needs to be taken down a notch or two, wouldn't it be nice if his access truly was limited based on need-to-know?

    So, here's the bottom line.  It's easy to get freaked out by some of the recent revelations, but really nothing has changed.  There have always been very serious, very smart, well-resourced attackers on the Internet.  However, the principles you need to use to protect yourself haven't changed; we've just been reminded that they actually matter.

    Here's a random collection of links related to the NSA issue and the home router problems.

    NSA
    http://topics.nytimes.com/top/reference/timestopics/organizations/n/national_security_agency/
    https://www.schneier.com/blog/archives/2013/09/the_nsas_crypto_1.html

    Home Router Back Doors
    http://www.devttys0.com/2013/10/from-china-with-love/
    http://www.reddit.com/r/netsec/comments/1orepx/great_new_backdoor_in_tenda_w302r_routers/
    http://blog.nruns.com/blog/2013/11/29/In-the-Wild-Malware-for-Routers-Sergio/
    http://www.reddit.com/r/netsec/comments/1rn37d/dlink_vulnerability_of_the_week_telnet_interface/
    http://securityadvisories.dlink.com/security/publication.aspx?name=SAP10001
    http://krebsonsecurity.com/2013/12/important-security-update-for-d-link-routers/
    http://www.h725.co.vu/2013/11/d-link-whats-wrong-with-you.html
    http://shadow-file.blogspot.nl/2013/10/netgear-root-compromise-via-command.html
    http://www.exploit-db.com/exploits/16149/
    http://www.securityfocus.com/archive/1/530119

    Update: 4/22/2014 Added this link.  One of the primary suppliers of router hardware/software (http://www.sercomm.com/home.aspx?langid=1) claimed to have fixed the products, only to be caught hiding the back door even deeper!  OM#$!@*!G

    http://arstechnica.com/security/2014/04/easter-egg-dsl-router-patch-merely-hides-backdoor-instead-of-closing-it/
    http://www.synacktiv.com/ressources/TCP32764_backdoor_again.pdf

    I am speechless ...

    (But my point still remains, trust is a necessary evil so mitigate it as best as you can.)






    Tuesday, October 8, 2013

    YACC


     (YACC:  Yet Another Cool Class - not the parser generator)

    I love the low cost online courses that I've taken this summer.  There's nothing like spending a Saturday focused on writing cool programs ... learning something new, with a knowledgeable instructor talking you through the tricky parts.

    I just finished taking the second Ruby for Information Security Professionals course offered by Marcus Carey at threatagent.com.  Not surprisingly, I walked away a bit smarter and with a big grin on my face.

    While his first class (http://jrnerqbbzrq.blogspot.com/2013/08/more-cool-classes.html) provides an introduction to Ruby in the context of writing Ruby code for Metasploit, this class doesn't touch Metasploit. Instead, it assumes you have a basic familiarity with Ruby, and focuses on various techniques for accessing Open Source Intelligence.  What this means is that he walks you through writing code to pull down information from various on-line sources of public information such as Bing, Twitter, LinkedIn and Shodan. :-)

    By visiting several different sources of information, Marcus is able to introduce us to different techniques for collecting information.  So, for example, Bing provides a really sweet API that gives you access to the full power of their search engine and returns results in easily parsed JSON.  LinkedIn, however, chooses to hoard their information, forcing us to scrape it off their web pages.  Marcus shows us how to reverse engineer LinkedIn pages and use the power of Nokogiri to pull useful information from LinkedIn's cold dead hands.  How cool!
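
    To give a flavor of the scraping side of things, here's a toy sketch of my own (not code from the class): grab a page with open-uri, hand it to Nokogiri, and pull things out with CSS selectors.  The URL and the selector are purely illustrative.

    #!/usr/bin/env ruby
    # Minimal Nokogiri scraping example.
    require 'open-uri'
    require 'nokogiri'

    doc = Nokogiri::HTML(URI.open('http://example.com/'))

    # Print the text and target of every link on the page.
    doc.css('a').each do |link|
      puts "#{link.text.strip} -> #{link['href']}"
    end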

    The class is taught via a webinar, where Marcus shares his desktop to demonstrate code as he builds up applications in real time.  While watching Marcus' desktop, we're developing the same code in another window.  When we have questions, Marcus can just demonstrate the answer for us to see.  This is a great paradigm for teaching a class like this.  However, it works better if you can use two monitors - one with Marcus' desktop and the other showing the window that you're working in.  If your desktop only has one monitor, you'll be switching back and forth between windows a lot.  (Maybe pressing your laptop into service to watch the webinar would work.)  He also provides a reference document which shows some of the key code snippets.

    The class assumes you've taken his first Ruby course, and while Marcus works hard to bring everybody up to the same level, you'll probably struggle if you've never seen Ruby before.

    You need to have a working copy of Ruby, with the 'whois', 'open-uri', 'nokogiri', 'shodan' and 'twitter' Ruby packages installed.  It would behoove you to get these installed ahead of time, I found that I couldn't get 'nokogiri' to install on my preferred Ubuntu system  - fortunately it installed with no fuss on my Pentoo system so I used that for the class.  Lots of folks used Kali, which seemed to work well.

    Afterwards, Marcus makes available a video of the entire class.  Great for review.

    So here's the bottom line:  For $125, this 8 hour long class is a screaming deal.  It's relevant to what we do, it's very well taught and it's just good wholesome fun!

    You can read about it at: https://www.threatagent.com/training/ruby_osint






    Tuesday, September 17, 2013

    I didn't know that gold can tarnish

    It's been accepted for years that cryptography is hard to implement correctly and that the market is full of snake-oil products. The only way to be reasonably sure that encryption is effective is to:

    • Stay current
    • Use robust key lengths
    • Manage keys/passwords carefully 
    • And most importantly, only use FIPS 140-2 validated encryption.  

    In general, FIPS 140-2 has always been the gold standard of encryption, and trust in FIPS 140-2 has been a cornerstone of being able to trust most security products available today.

    However, that trust is now under attack.

    The Arstechnica article below provides an alarming report, describing how between 2006 and 2007 the Taiwan government issued at least 10,000 flawed smart cards.  These smart cards were designed to be used by Taiwanese citizens for many sensitive transactions, including activities such as submitting tax returns.  The cards with the flaw had virtually useless encryption, putting at risk any data "protected" by the cards.   The gist of the Arstechnica article is that these failures occurred in spite of the cards being FIPS 140-2 validated, and that the FIPS 140-2 validation process is broken.

    But reading the research paper which the Arstechnica article is based on suggests that it's not as simple as that...

    It turns out that, despite the card being FIPS 140-2 validated, the random number generator it uses (technically, the one on the "Renesas AE45C1 smart card microcontroller") "sometimes fails", producing (non)random numbers that can lead to certificates which are easily compromised.  This is exactly the type of failure mode that FIPS 140-2 is designed to catch.  However, the generator on this card has an optional "health check" which is intended to detect when the random number generator is failing.  Not surprisingly, the FIPS 140-2 validation for the card only applies if this health check is enabled.  In other words, if the health check is turned off, as it was on the 10,000 or so broken cards, FIPS 140-2 does not apply and you're on your own using these cards.
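
    To see why a failing random number generator is so fatal, here's a toy illustration of my own (not from the paper): if the generator hands the same prime to two different cards, a simple GCD of their public moduli recovers the shared prime and breaks both keys.  The numbers below are absurdly small just to show the arithmetic; real cards use primes hundreds of digits long.

    #!/usr/bin/env ruby
    # Two RSA moduli that accidentally share a prime factor.
    prime = 61           # the prime the broken RNG produced twice
    n1 = prime * 53      # card #1's modulus
    n2 = prime * 71      # card #2's modulus

    shared = n1.gcd(n2)                           # Ruby Integers have gcd built in
    puts "shared prime: #{shared}"                # => 61
    puts "card 1 factors: #{shared} x #{n1 / shared}"
    puts "card 2 factors: #{shared} x #{n2 / shared}"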

    Here's the way the research report describes the problem (MOICA is the agency which issued the cards):

    "Unfortunately, the hardware random-number generator on the AE45C1 smart card microcontroller sometimes fails, as demonstrated by our results. These failures are so extreme that they should have been caught by standard health tests, and in fact the AE45C1 does offer such tests. However, as our results show, those tests were not enabled on some cards. This has now also been confirmed by MOICA. MOICA’s estimate is that about 10000 cards were issued without these tests, and that subsequent cards used a “FIPS mode” (see below) that enabled these tests"

    This is pretty standard, if you look at FIPS 140-2 validation reports, or Common Criteria evaluations, you'll always see a very precise description of exactly how the product must be configured in order for the validation to apply.   It's common to see that FIPS 140-2 validated software or hardware has a "FIPS mode", which must be enabled for the validation to apply.

    Looking at the FIPS 140-2 certificate for at least one version of this chip (https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Zertifizierung/Reporte02/0212a_pdf.pdf), the report specifically says "postprocessing should be included in the users embedded software", which I believe is a requirement to include the health check.

    Clearly,  this was a horrific failure.  The Taiwanese government issued a bunch of smart cards which were used to authenticate citizens and protect sensitive data, that were completely broken.  Yes, the folks producing the card made a critical mistake. But placing this at the feet of FIPS 140-2 is, IMHO, missing the point.

    It would be nice if FIPS 140-2 meant a product was idiot proof, but that's not the way the world works.  Encryption is complicated and has to be done correctly or it doesn't work.  The whole purpose of the FIPS 140-2 testing regime is to ensure that encryption has been rigorously tested under controlled conditions ... and most importantly, to document those conditions so that we know how to use it in a way that can be trusted.  Just because something is FIPS 140-2 validated doesn't mean it's idiot proof or that it can't be configured insecurely.

    In any event, in my opinion, despite being very clear about what they did and didn't do when testing this chip - NIST's reputation has been badly tarnished and it will take significant time and effort on their part to undo the damage.  I guess even a gold standard can tarnish sometimes ...

    Here's the Arstechnica article describing the failure:
    http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/2/

    Here's the actual research paper describing the findings:
    http://smartfacts.cr.yp.to/smartfacts-20130916.pdf

    A very good overview of the problem provided by the researchers:
    http://smartfacts.cr.yp.to/index.html

    Tin-Foil Hat Addendum

    Given the <euphemism>crisis of trust</euphemism> that the NSA, and the US Government, is currently going through - including accusations that the NSA has been surreptitiously weakening encryption products, it's very hard to avoid the theory that the NSA might have had a hand in this failure.  That's certainly possible in this case, but there's nothing to suggest that the lab issuing the FIPS 140-2 validation was complicit in this failure.

    BTW, given the relationship between Taiwan and Mainland China, I've always assumed that Taiwan is constantly under cyber attack by China.  Putting on my second layer of tin-foil headgear, could this be the result of a Chinese effort, not an American one?  I'm sure the NSA is very good, but China is certainly no cyber-slouch either, and they might have a better pool of human resources on the ground in Taiwan - which would have simplified introducing this vulnerability into the card.

    Finally, removing my tin-foil hats for a second, this could simply be a screw up.  Broken products get shipped every day, and encryption errors like this are subtle and hard to notice when present in only a very small percentage of the cards.



    Saturday, September 14, 2013

    The Law of Unintended Consequences and Biometrics

    So here's an interesting twist ...

    Generally, the government can't force you to provide information you know, and then use it against you.  Apparently, forcing folks to incriminate themselves is a slippery slope to state sponsored torture - go figure.

    As a result, the state can't compel you to give up passwords or encryption keys.  Although it's recently been challenged, and seems to be subject to subtle interpretations of the law, this protection appears to be holding up in court (http://en.wikipedia.org/wiki/Key_disclosure_law#United_States.)

    But, if your authentication or encryption key is a biometric (e.g. a fingerprint), all bets are off and the state has every right to force you to give them access.  This is despite the fact that the biometric might be more secure from a pure security perspective.

    This article talks about that little irony, in the context of Apple's new iPhone - which can use one's fingerprint to protect the information on the phone.

    http://www.wired.com/opinion/2013/09/the-unexpected-result-of-fingerprint-authentication-that-you-cant-take-the-fifth/

    So, being "more secure" from a technical perspective (assuming you buy into single-factor biometric authentication) does not necessarily translate into better protection from legal intrusion. :-)