Sunday, January 26, 2014

Trust me, really


One of the things which makes security such an interesting business is that sometimes it's not a black-or-white proposition.

Here's a good example of the shades of grey we sometimes deal with.

Say "trust me" to a security person, and you might as well have just shoveled chum into a shark tank.  We are trained to not trust, and if we're good at our job, trust comes us as easily to us as telling the truth comes to a politician.  :-)

But, being able to award trust in a thoughtful way is one of the hallmarks of being a security professional.  At some point we have no choice but to trust others ... even if we don't want to.

For example: we trust NIST and the crypto research community to give us good encryption algorithms, we trust certification labs to test the implementation of those algorithms, we trust our vendors to try to give us good products ... and then we trust them to tell us when they failed.

And it's not just things we have to trust; we have to trust people in numerous ways - nothing chills my heart more than contemplating the "insider threat".

Unfortunately, over the last few months our ability to trust has been seriously challenged.  Here are three recent examples.

The first is that NSA "thing."

Most folks, including any terrorist with half a clue, have assumed for years that the NSA is siphoning up all their information.  But a lot of us were blindsided when it was revealed that the NSA has been tampering with bits of the fundamental encryption we depend on.  They went so far as to pay at least one company to make its products default to insecure algorithms, specifically so the NSA could then compromise those products.

There's a worldwide infrastructure focused on providing trustworthy encryption products, and the foundation of that infrastructure is our trust in the NIST certification and testing process.  What we're being told now is that the core of that trust has been undermined ... specifically, that the NSA has been planting algorithms with vulnerabilities known only to the NSA into the public approval process, and then paying companies to adopt those vulnerable algorithms.  Not to whine too much, but if we can't trust NIST and RSA to give us good random-number generators, who can we trust?

The second example of trust gone awry I'll mention is the whole home-router scandal. (...Be sure to see my update at the bottom...)

I've honestly lost track of how many "home" routers have turned out to have a back door built into them.  This isn't a case of some idiot engineer installing another Sendmail WIZ bug; this looks like a conscious decision by a bunch of the home router manufacturers to put back doors into their products.  I wouldn't be surprised to learn that almost all routers intended for the retail home/small business market have some sort of back door in them.  Scarily, it's not much of a leap to find a common thread between home router back doors and the NSA paying RSA to leave its products vulnerable.

My final observation of broken trust is simply this: the NSA trusted Edward Snowden.  How'd that work out for them?

So, other than venting, what's the point here?  Simple: we need to remember what's at the core of trust and learn from these experiences.  Merriam-Webster defines trust as the "belief that somebody or something is reliable, good, honest, effective, etc."  Ultimately, the level of trust you can have in something is directly proportional to how much control you have over its construction or use.  If you've built something yourself from scratch, you can have a lot of trust in it - otherwise you're stuck assuming that everybody involved in producing it has been reliable, honest and effective.  With a loaf of bread, that's relatively easy, but with a core router on the Internet, the chain of entities you have to trust is very long and complex.

While that sounds like an argument for "don't trust anything", drawing that conclusion is a mistake.  You can't have zero risk and still get anything to work.  Sadly, we have to trust some things.

So if we have to trust the untrustworthy, what can we do?  We've been forcefully reminded that we're at risk when we trust things we don't control.  However, that's nothing new and our response should not be a surprise.

Enter open source.  In the home router arena, there have been open source replacements for manufacturers' router firmware for a while.  While nothing is guaranteed, one of the great things about open source is that it's very hard to hide a back door in open source code.  This doesn't completely solve the problem - the hardware vendor still has opportunities to hide a back door even if you're running open source software on the device.  But it does dramatically raise the bar, and often that's the best you can do.

We can also apply this lesson more broadly.  Think about the entire network stack you're using and ask: where do I have to trust the vendor, and where can I mitigate that trust by using open source?  Think open source OS and applications (e.g., Linux and OpenOffice).  Then think about going beyond that: do you really have to use Google?  Maybe you can up your game a bit and use something like DuckDuckGo, or run your searches through a Tor connection (yes, both of those solutions have their own problems ... nothing's perfect).  Do you really want to buy that Nest?  Maybe one of the open-source thermostats would be more secure (and more fun).

There's one other lesson I think we should take away.  It's an oldie but a goodie, and it ties right back to trust.  I'm speaking of defense in depth.  The reason I love defense in depth so much is that it's an explicit acknowledgement that you can't completely trust anything.  The point of defense in depth is that when a layer of defense lets you down - i.e., when it turns out you couldn't trust it after all - you've got additional layers to pick up the slack.

When it turns out that the random number generator you used to protect your SSL session was defective and <mumble> is snooping on your email connection, wouldn't it be nice if you had PGP-encrypted your sensitive email?  When Unit 61398 takes an interest in your home router, wouldn't it be nice if your data was housed on a server running OpenBSD instead of Windows XP?  When your carefully vetted employee decides that your organization is evil and needs to be taken down a notch or two, wouldn't it be nice if his access truly was limited based on need-to-know?

So, here's the bottom line.  It's easy to get freaked out by some of the recent revelations, but really nothing has changed.  There have always been very serious, very smart, well-resourced attackers on the Internet.  The principles you need to protect yourself haven't changed either; we've just been reminded that they actually matter.

Here's a random collection of links related to the NSA issue and the home router problems.

NSA
http://topics.nytimes.com/top/reference/timestopics/organizations/n/national_security_agency/
https://www.schneier.com/blog/archives/2013/09/the_nsas_crypto_1.html

Home Router Back Doors
http://www.devttys0.com/2013/10/from-china-with-love/
http://www.reddit.com/r/netsec/comments/1orepx/great_new_backdoor_in_tenda_w302r_routers/
http://blog.nruns.com/blog/2013/11/29/In-the-Wild-Malware-for-Routers-Sergio/
http://www.reddit.com/r/netsec/comments/1rn37d/dlink_vulnerability_of_the_week_telnet_interface/
http://securityadvisories.dlink.com/security/publication.aspx?name=SAP10001
http://krebsonsecurity.com/2013/12/important-security-update-for-d-link-routers/
http://www.h725.co.vu/2013/11/d-link-whats-wrong-with-you.html
http://shadow-file.blogspot.nl/2013/10/netgear-root-compromise-via-command.html
http://www.exploit-db.com/exploits/16149/
http://www.securityfocus.com/archive/1/530119

Update: 4/22/2014 Added this link.  One of the primary suppliers of router hardware/software (http://www.sercomm.com/home.aspx?langid=1) claimed to have fixed the affected products, only to be caught hiding the back door even deeper!  OM#$!@*!G

http://arstechnica.com/security/2014/04/easter-egg-dsl-router-patch-merely-hides-backdoor-instead-of-closing-it/
http://www.synacktiv.com/ressources/TCP32764_backdoor_again.pdf

I am speechless ...

(But my point still remains, trust is a necessary evil so mitigate it as best as you can.)






Tuesday, October 8, 2013

YACC


 (YACC:  Yet Another Cool Class - not the parser generator)

I love the low-cost online courses that I've taken this summer.  There's nothing like spending a Saturday focused on writing cool programs ... learning something new, with a knowledgeable instructor talking you through the tricky parts.

I just finished taking the second Ruby for Information Security Professionals course offered by Marcus Carey at threatagent.com.  Not surprisingly, I walked away a bit smarter and with a big grin on my face.

While his first class (http://jrnerqbbzrq.blogspot.com/2013/08/more-cool-classes.html) provides an introduction to Ruby in the context of writing Ruby code for Metasploit, this class doesn't touch Metasploit. Instead, it assumes you have a basic familiarity with Ruby, and focuses on various techniques for accessing Open Source Intelligence.  What this means is that he walks you through writing code to pull down information from various on-line sources of public information such as Bing, Twitter, LinkedIn and Shodan. :-)

By visiting several different sources of information, Marcus is able to introduce us to different techniques to collect information.  So, for example, Bing provides a really sweet API that gives you access to the full power of their search engine and returns results in easily parsed JSON.  LinkedIn, however, chooses to hoard its information, forcing us to scrape data off their web pages.  Marcus shows us how to reverse engineer LinkedIn pages and use the power of Nokogiri to pull useful information from LinkedIn's cold, dead hands.  How cool!
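Just to give a flavor of the scraping side, here's the kind of thing Nokogiri lets you do.  This is my own minimal sketch, not Marcus' class code, and the URL and CSS selector are made-up placeholders - real scraping starts with inspecting the target page's HTML:

  require 'open-uri'   # lets plain open() fetch URLs
  require 'nokogiri'

  # Fetch a page and parse it into a DOM we can query.
  page = Nokogiri::HTML(open('http://www.example.com/some-public-profile'))

  # Pull out whatever the page happens to wrap the interesting bits in;
  # 'h2.profile-name' is just a placeholder selector for illustration.
  page.css('h2.profile-name').each do |node|
    puts node.text.strip
  end

The hard part isn't the Ruby; it's figuring out which selectors reliably land on the data you want.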

The class is taught via a webinar, where Marcus shares his desktop to demonstrate code as he builds up applications in real time.  While watching Marcus' desktop in one window, we develop the same code in another.  When we have questions, Marcus can just demonstrate the answer for us to see.  This is a great paradigm for teaching a class like this.  However, it works better if you can use two monitors - one with Marcus' desktop and the other showing the window that you're working in.  If your desktop only has one monitor, you'll be switching back and forth between windows a lot.  (Maybe pressing your laptop into service to watch the webinar would work.)  He also provides a reference document which shows some of the key code snippets.

The class assumes you've taken his first Ruby course, and while Marcus works hard to bring everybody up to the same level, you'll probably struggle if you've never seen Ruby before.

You need to have a working copy of Ruby, with the 'whois', 'open-uri', 'nokogiri', 'shodan' and 'twitter' Ruby packages installed.  It would behoove you to get these installed ahead of time; I found that I couldn't get 'nokogiri' to install on my preferred Ubuntu system - fortunately, it installed with no fuss on my Pentoo system, so I used that for the class.  Lots of folks used Kali, which seemed to work well.
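If you want a quick sanity check before class, something like this little snippet (my own, not part of the course material) will tell you which libraries are missing.  'open-uri' ships with Ruby itself, so it should always load:

  # Try to load each library the class uses and report any that are missing.
  %w[whois open-uri nokogiri shodan twitter].each do |lib|
    begin
      require lib
      puts "ok:      #{lib}"
    rescue LoadError
      puts "missing: #{lib} (try: gem install #{lib})"
    end
  end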

Afterwards, Marcus makes available a video of the entire class.  Great for review.

So here's the bottom line:  For $125, this 8-hour class is a screaming deal.  It's relevant to what we do, it's very well taught, and it's just good wholesome fun!

You can read about it at: https://www.threatagent.com/training/ruby_osint






Tuesday, September 17, 2013

I didn't know that gold can tarnish

It's been accepted for years that cryptography is hard to implement correctly and that the market is full of snake-oil products. The only way to be reasonably sure that encryption is effective is to:

  • Stay current
  • Use robust key lengths
  • Manage keys/passwords carefully 
  • And most importantly, only use FIPS 140-2 validated encryption.  

In general, FIPS 140-2 has always been the gold standard of encryption, and trust in FIPS 140-2 has been a cornerstone of being able to trust most security products available today.

However, that trust is now under attack.

The Ars Technica article below provides an alarming report, describing how between 2006 and 2007 the Taiwanese government issued at least 10,000 flawed smart cards.  These smart cards were designed to be used by Taiwanese citizens for many sensitive transactions, including activities such as submitting tax returns.  The flawed cards had virtually useless encryption, putting at risk any data "protected" by the cards.  The gist of the Ars Technica article is that these failures occurred in spite of the cards being FIPS 140-2 validated, and that the FIPS 140-2 validation process is therefore broken.

But reading the research paper on which the Ars Technica article is based suggests that it's not as simple as that...

Despite the card being FIPS 140-2 validated, it turns out that its random number generator (technically, the one on the "Renesas AE45C1 smart card microcontroller" used by the card) "sometimes fails", producing (non)random numbers that lead to certificates which are easily compromised.  This is exactly the type of failure mode that FIPS 140-2 is designed to catch.  However, the generator on this card had an optional "health check" which was intended to detect when the random number generator was failing.  Not surprisingly, the FIPS 140-2 validation for the card only applies if this health check is enabled.  In other words, if the health check is turned off - as it was on the 10,000 or so broken cards - FIPS 140-2 does not apply and you're on your own.
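To make the idea of a "health check" concrete: these tests watch the raw output of the hardware generator and refuse to hand out bits when the output looks obviously broken.  Here's a toy illustration in Ruby of the general idea - my own sketch, emphatically not the AE45C1's actual test, which is defined by the vendor and the validation documents:

  # Toy continuous health test: declare the generator dead if it ever
  # returns the same block twice in a row (a gross, obvious failure).
  class CheckedRng
    class HealthCheckFailure < StandardError; end

    def initialize(source)
      @source = source     # anything that responds to read(n), e.g. a device file
      @last_block = nil
    end

    def random_bytes(n = 16)
      block = @source.read(n)
      raise HealthCheckFailure, 'RNG repeated itself' if block == @last_block
      @last_block = block
      block
    end
  end

  # Illustration only: wrap the OS generator and print 16 random bytes as hex.
  rng = CheckedRng.new(File.open('/dev/urandom', 'rb'))
  puts rng.random_bytes.unpack('H*').first

A real FIPS-style test is more involved than that, but the principle is the same - and the check only protects you if somebody actually turns it on.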

Here's the way the research report describes the problem (MOICA is the agency which issued the cards):

"Unfortunately, the hardware random-number generator on the AE45C1 smart card microcontroller sometimes fails, as demonstrated by our results. These failures are so extreme that they should have been caught by standard health tests, and in fact the AE45C1 does offer such tests. However, as our results show, those tests were not enabled on some cards. This has now also been confirmed by MOICA. MOICA’s estimate is that about 10000 cards were issued without these tests, and that subsequent cards used a “FIPS mode” (see below) that enabled these tests"

This is pretty standard: if you look at FIPS 140-2 validation reports or Common Criteria evaluations, you'll always see a very precise description of exactly how the product must be configured in order for the validation to apply.  It's common to see that FIPS 140-2 validated software or hardware has a "FIPS mode" which must be enabled for the validation to apply.

Looking at the FIPS 140-2 certificate for at least one version of this chip (https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Zertifizierung/Reporte02/0212a_pdf.pdf), the report specifically says "postprocessing should be included in the users embedded software", which I believe is a requirement to include the health check.

Clearly, this was a horrific failure.  The Taiwanese government issued a bunch of smart cards - cards used to authenticate citizens and protect sensitive data - that were completely broken.  Yes, the folks producing the cards made a critical mistake.  But placing this at the feet of FIPS 140-2 is, IMHO, missing the point.

It would be nice if FIPS 140-2 meant a product was idiot-proof, but that's not the way the world works.  Encryption is complicated and has to be done correctly or it doesn't work.  The whole purpose of the FIPS 140-2 testing regime is to ensure that encryption has been rigorously tested under controlled conditions ... and most importantly, to document those conditions so that we know how to use it in a way that can be trusted.  Just because something is FIPS 140-2 validated doesn't mean it's idiot-proof or that it can't be configured insecurely.

In any event, in my opinion - despite the validation being very clear about what was and wasn't tested on this chip - NIST's reputation has been badly tarnished, and it will take significant time and effort on their part to undo the damage.  I guess even a gold standard can tarnish sometimes ...

Here's the Arstechnica article describing the failure:
http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/2/

Here's the actual research paper describing the findings:
http://smartfacts.cr.yp.to/smartfacts-20130916.pdf

A very good overview of the problem provided by the researchers:
http://smartfacts.cr.yp.to/index.html

Tin-Foil Hat Addendum

Given the <euphemism>crisis of trust</euphemism> that the NSA and the US government are currently going through - including accusations that the NSA has been surreptitiously weakening encryption products - it's very hard to avoid the theory that the NSA might have had a hand in this failure.  That's certainly possible, but there's nothing to suggest that the lab issuing the FIPS 140-2 validation was complicit in this failure.

BTW, given the relationship between Taiwan and Mainland China, I've always assumed that Taiwan is constantly under cyber attack by China.  Putting on my second layer of tin-foil headgear, could this be the result of a Chinese effort, not an American one?  I'm sure the NSA is very good, but China is certainly no cyber-slouch either, and they might have a better pool of human resources on the ground in Taiwan - which would have simplified introducing this vulnerability into the card.

Finally, removing my tin-foil hats for a second, this could simply be a screw up.  Broken products get shipped every day, and encryption errors like this are subtle and hard to notice when present in only a very small percentage of the cards.



Saturday, September 14, 2013

The Law of Unintended Consequences and Biometrics

So here's an interesting twist ...

Generally, the government can't force you to provide information you know, and then use it against you.  Apparently, forcing folks to incriminate themselves is a slippery slope to state sponsored torture - go figure.

As a result, the state can't compel you to give up passwords or encryption keys.  Although it's recently been challenged, and seems to be subject to subtle interpretations of the law, this protection appears to be holding up in court (http://en.wikipedia.org/wiki/Key_disclosure_law#United_States).

But, if your authentication or encryption key is a biometric (e.g. a fingerprint), all bets are off and the state has every right to force you to give them access.  This is despite the fact that the biometric might be more secure from a pure security perspective.

This article talks about that little irony in the context of Apple's new iPhone - which can use your fingerprint to protect the information on the phone.

http://www.wired.com/opinion/2013/09/the-unexpected-result-of-fingerprint-authentication-that-you-cant-take-the-fifth/

So, being "more secure" from a technical perspective (assuming you buy into single-factor biometric authentication) does not necessarily translate into better protection from legal intrusion. :-)

Wednesday, August 28, 2013

More Cool Classes


Last weekend I had the opportunity to take another really fun course.  This one was Ruby Programming for Information Security Professionals, offered by Marcus Carey at ThreatAgent.com. (https://www.threatagent.com/training)

It dovetailed very nicely with the Penetration Testing courses I took from Georgia Weidman earlier this summer.  Georgia's courses provided an accelerated introduction to using Metasploit (and some other pentesting tools).

With Georgia's classes under your belt, Marcus' Ruby class gives you one of the tools you need to take using Metasploit to the next level.  Since Metasploit modules (and Metasploit itself) are written in Ruby, Marcus' class gives you the introduction to Ruby that you need to start writing Metasploit modules.  And even if you're not itching to write an exploit module just yet, he teaches more than enough to let you read and understand Metasploit modules - which is itself a very powerful capability.

About 2/3 of the class is spent on an introduction to Ruby, starting with the irb interactive Ruby environment and moving on to the basics of the language.  Ruby turns out to be a delightful language and a pleasure to learn.  Marcus takes the class through the basics of the language using lots of hands-on examples, so it never gets boring.  After we've learned enough Ruby to be "dangerous", we finish off this part of the course by writing some quick examples that do things like parsing JSON, accessing a web site, and making DNS queries.  What fun!
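To give you a sense of the level we ended up at, the end-of-section examples were along these lines.  This is my reconstruction from memory, not the exact class code, and the JSON string and URL are placeholders:

  require 'json'
  require 'open-uri'
  require 'resolv'

  # Parse a little JSON
  data = JSON.parse('{"name": "example", "ports": [80, 443]}')
  puts data['ports'].join(', ')

  # Grab a web page (placeholder URL)
  html = open('http://www.example.com').read
  puts "fetched #{html.length} bytes"

  # Make a DNS query
  puts Resolv.getaddress('www.example.com')

Nothing exotic - it's all Ruby standard library - but it's exactly the plumbing you need for the OSINT work in the follow-on class.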

However, the last 1/3 of the class is the real payoff.  That's when we start writing a Metasploit module.  The module utilizes some of the code we'd already written, and does a simple DNS reconnaissance of a selected domain.  Using a template provided by Marcus, we go through the basics of producing a module which can be integrated into Metasploit.
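For anyone who hasn't peeked inside Metasploit, the skeleton of an auxiliary module of that era looks roughly like this.  This is a bare-bones sketch from memory, not the module we actually built in class:

  require 'msf/core'
  require 'resolv'

  class Metasploit3 < Msf::Auxiliary
    def initialize(info = {})
      super(update_info(info,
        'Name'        => 'Toy DNS Recon',
        'Description' => 'Looks up the A record for a target domain',
        'Author'      => ['you'],
        'License'     => MSF_LICENSE
      ))

      # Options show up as "set DOMAIN example.com" in msfconsole
      register_options(
        [
          OptString.new('DOMAIN', [true, 'Domain to look up'])
        ], self.class)
    end

    def run
      domain = datastore['DOMAIN']
      addr   = Resolv.getaddress(domain)
      print_good("#{domain} resolves to #{addr}")
    rescue Resolv::ResolvError
      print_error("Could not resolve #{domain}")
    end
  end

Drop a file like that into your modules/auxiliary directory (or under ~/.msf4/modules) and msfconsole will pick it up alongside the built-in modules - which is exactly the capability the class hands you by the end of the day.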

As with the classes I took from Georgia Weidman, the class is taught via a live webinar.  It's easy to ask questions, and Marcus is very responsive and attentive to his students.  He teaches the class assuming that you're either running Ruby and Metasploit directly, or that you're running Kali.  The only "attacks" are really just accessing public DNS and web sites, so there's no need to provide sacrificial VMs for us to attack.  He provides a written outline for the class, which is very helpful as you work along with him through the examples.  After the class, he provides a video of the webinar, so you can review the class in detail.  Overall, the class is presented in an organized, interesting and professional manner.

As with Georgia's classes, this class is an incredible deal at $125 for the day long class.  If you'd like to read my rant about the cost of training, go back to my review of Georgia's class - which along with Marcus' class, is an example of what our community needs more of.

Since I've taken the class, I've been on an orgy of coding up a module for Metasploit.  It's been a long time since I've been so enthused about a project that I've gone into sleep-deprivation mode to work on it. :-)  I have Marcus to thank for that!

Anyway, here's the bottom line.  Ruby Programming for Information Security Professionals, taught by Marcus Carey is an awesome course.

This class is for you if you have some programming knowledge, but don't know Ruby and want to jump into writing Metasploit modules.  Yes, you could RTFM.  But for relatively little money and 8 hours of your time, you can really jump-start the process and go from zero to writing a Metasploit module by the end of the day.  Of course, there's a ton about both Ruby and Metasploit that he doesn't have time to cover, but you will have enough to move forward by writing code ... not by just reading about writing code.

Combine this with Georgia's classes (take them first), and you'll be well on your way to being a very competent Metasploiter (is that a word? :-)

BTW, a little while ago I finally looked at Python ... and fell in love.  I've been studying it since then, with the intention of abandoning Perl for Python.  But I have to admit, Ruby really appeals to me and I'm wondering if I may just abandon Python and do all my programming in Ruby. Does that make me a fickle person? :-)

Tuesday, August 20, 2013

Phew! Finally Recovering from DEFCON


This was the second year that I've attended DEFCON "on my own dime", after a gap of about 9 years when I wasn't able to attend.

Last year, my first time back in 8 years or so, I think I was in a state of shock throughout most of the weekend.  Everything had grown so big - with 15,000 folks attending there was a line for almost everything even remotely popular.  But, if you scratched beneath the surface it was still the same DEFCON as before ... with the same passion for playing with anything that couldn't run away, just 15 times bigger and with a slightly more corporate veneer.

This year, I was a bit more prepared.  There were still long lines everywhere - and for some talks the room filled up before everyone who wanted to attend got in.  But with some planning and flexibility, it was a hugely rewarding DEFCON.

What were the high points for me this year?

This year they released the official DEFCON documentary, which was mostly filmed at last year's event.  The documentary explains what DEFCON is about and shows the history of DEFCON.  It's not bad; I learned a good bit about the early history of DEFCON.  It does a really good job of capturing some of the "hacker ethic" which is what makes DEFCON so great.  It also gives a good view into the core group which runs DEFCON every year.  On the con (sorry!) side, it is a bit of a self-absorbed love-fest.  Apparently the documentary was funded by Dark Tangent (Jeff Moss, the person who runs DEFCON every year), so it shouldn't be a big surprise that only the good side made it out of the cutting room.  But again, I recommend it.  They're giving it away for free; it's up on YouTube and lots of other places: http://youtu.be/3ctQOmjQyYg

I got a huge kick out of the car hacking talks.  Tuners have been hacking auto ECUs for years, figuring out how to rewrite the tuning tables to make the car perform better.  My last track car, a Mazda Miata, had a third-party ECU which completely replaced the Mazda unit, allowing a huge range of custom engine tuning options.  But now cars are so much more like regular computing platforms, and are so much more computerized, that they've become really interesting to the hacking community in general.  Instead of just controlling the engine, now virtually every aspect of a car is controlled by a network of computers.  Think about it: if you drive a car with an auto parallel-parking feature, there's a computer driving your car when it parks for you.  Same thing with crash avoidance, or cruise control that maintains a safe distance from the car in front of you.  So, hacking cars has become a lot more interesting than just tweaking ECUs to run less engine timing.  They didn't talk about it here, but others have been looking at compromising a car's internal network remotely (such as via Bluetooth).  I can't wait to see these threads of work combined.  Here's one video showing some of what they've done: http://youtu.be/oqe6S6m73Zw. Here's the paper describing their work and open source tools: http://youtu.be/3ctQOmjQyYg.  Yes, I said tools - you too can jack into your car's OBD-II port and start injecting traffic onto your car's shared network. :-)

I attended the "Policy Wonk Lounge" which turned out to be a very a un-DEFCON like event.  It was an  informal opportunity for attendees to meet with some relatively high level DC .gov and .mil insiders.  It was also the only event where there was an obvious core of press attending, and it was the first time I've ever been to a meeting which was formally "off the record".   Not surprisingly (to me at least), the DC folks were reasonable, thoughtful folks who really try to do the right thing.  Nothing earth shattering was decided or revealed, but it was really useful to have an open discussion.  Here's the basic description: https://www.defcon.org/html/defcon-21/dc-21-speakers.html#Wonk

Speaking of the Policy Wonk Lounge, this was the year that "Feds" were uninvited to DEFCON in response to the NSA domestic spying issue.  I was wondering just how that would all go down ... and as near as I can tell, the big impact was that the NSA didn't have a recruiting table in the vendor room (they had one there last year) or explicitly public talks.  I was pleased that the spirit of tolerance which I've always considered a DEFCON hallmark still lived.  There are clearly some sharp political differences between DEFCON attendees, but I personally never saw (or heard of) it becoming an issue.

Remember Pentoo?  It's a Linux distribution focused on penetration testing.  I personally hadn't played with it in a while, and haven't really thought about it recently.  The hot pentesting distribution for the past couple of years has been Kali (née BackTrack).  But several talks made a point of mentioning that Pentoo still exists, and *some* people like it better than Kali.  The cool thing about Pentoo is that it's being maintained, provides a high-quality alternative to Kali (i.e., a different set of tools to consider), and is based on the Gentoo Linux distribution.  That's what's really great about a conference like DEFCON: you can often read the paper a presenter has written on some topic, but when you attend the talk and the Q&A afterwards, you pick up all sorts of gems.

Another thing that made me smile: in the hardware hacking area there were a few 3-D printers set up.  One guy had a hacked Kinect and was using it to make and give out 3-D scans of folks (essentially a scan of your head).  You could use the scan to print a sculpture of your head on a 3-D printer.  Imagine what DEFCON attendees will be showing us with those in a few years!  In fact, a photocopy shop a block from my house just installed a 3-D printer - we live in interesting times!

I'm already excited about next year at DEFCON ...











Wednesday, July 10, 2013

Sometimes, life just hands you an ice cream cone


Recently, I was just sitting at my computer, when I got a call on my phone.  Unfortunately, I don't have a recording app on my phone (I did on my old one), so this is just the highlights from a few handwritten notes and my memory ...

(call from 212-777-3001)
Me: Hello?
Caller: Hello, this is <mumble>Global Soft<mumble>, we're recording errors on your computer
Me: huh?
(Really? I'm finally getting one of "those" calls)
Caller: we're getting lots of errors from your computer.  viruses, malware, ....
Me: huh?  How do you know about this stuff?
Caller: we receive error messages from your computer.  your computer is infected ... i just need to walk you through a few steps to fix it ...
Me: huh?
...
Me: huh?  I'm sorry, I'm pretty dumb about computers.  How do you know what's wrong with my computer?
Me: huh?  Oh! I know! Do you mean I bought your service when I bought the computer
Caller:  yeah, yeah, that's right.  that's what you did!
...
Caller: ok, I just need you do to a few things ...
Caller: turn on your computer ...
Caller: Let me know when you see your desktop ...
Me: huh? it's on, I'm looking right at it.
Caller: do you see your desktop
Me: huh?  I don't know ... it says dollar sign
Caller: (confused) huh? :-)
Me: huh? I see a dollar sign prompt  (I'm looking at a Linux shell prompt, but was trying to remember what a Wylbur prompt looked like ... If you're wondering: http://en.wikipedia.org/wiki/ORVYL_and_WYLBUR)
Caller: where's your desktop?
Me: huh? what's a desktop? oh! That! there is no desktop.  This is a brand new computer they just gave me
Me: before this we did everything with punched cards ...
Caller:  how do you get to the internet?
Me: huh?  Do you mean how do we do things?  I can submit any card deck you need, the submission desk is just down the hall ...
Caller: Are you at work?  Is this your personal computer?

(... much hilarity ensues while I offer to submit cards and he tries to get me to the desktop and/or internet)

Me: huh?  Of course I'm at work.  I don't have a personal computer
Caller:  Can you get to the Internet from work
Me: I'm not authorized to use the Internet

CLICK! (he finally hung up)

:-)

I am kicking myself a bit.  Not only did I have no way to record the call, but I realized afterwards that I have a throwaway, very vulnerable Windows XP virtual machine (from a course I took recently) that would have been a perfect victim.  Unfortunately, I have a feeling that my dyslexia would have kicked in ... and my credit card would have ended up being denied in that case.  :-)

But, pretending I was using punched cards did give me a bit of a giggle.

Update:

Here's an article which gives another example of how somebody else had fun with these guys: http://arstechnica.com/tech-policy/2012/10/i-am-calling-you-from-windows-a-tech-support-scammer-dials-ars-technica/


Update 2: Another article, also from Ars, provides more detail on how one of these operations is run (and how the FTC is taking them down): http://arstechnica.com/tech-policy/2014/05/stains-of-deceitfulness-inside-the-us-governments-war-on-tech-support-scammers/

Update 3 (9/12/2014): There's now a metasploit module which allows you to turn the tables on these scammers. http://www.scriptjunkie.us/2014/09/exploiting-ammyy-admin-developing-an-0day/