WikiLeaks Vault7

March 8th, 2017

WikiLeaks dropped a trove of information about hacking tools from the CIA this week. It’s available via BitTorrent, in an encrypted archive, whose password is SplinterItIntoAThousandPiecesAndScatterItIntoTheWinds. That’s amusing, I suppose.

So what’s in the archive and what does it mean?

First, the archive appears to be a dump of an Intranet portal at the CIA, where staff share information about hacking tools. It’s missing a bunch of stuff – images and documents – so the download appears to have been incomplete. Moreover, this is information about the tools, rather than the tools themselves, though those were apparently leaked earlier in a separate incident.

There are tools here to hack every common operating system – Windows, MacOSX, Linux, Android and iOS. There are also tools for various other platforms, including some Samsung TVs.

There has been a bunch of wild and woolly press coverage about this leak, but no, the CIA does not appear to have tools to compromise encryption in popular messaging systems. It’s simply the case that if you can compromise either end of the conversation, then encryption between the two ends of the conversation is irrelevant. They can’t magically turn your TV into a spy device (without first breaking into your house) and they can’t (yet?) cause your car to suddenly crash. All of these things are plausible, and even discussed in the leaked documents, but not described as current capabilities.

Many of the tools discussed in the leak require physical access. That is, if you are some “bad guy” that the CIA is interested in, and they can touch your phone or PC or TV, then they can install malware on that device to help them spy on you. Clearly, that doesn’t matter much to most people.

Some of the tools work over the network, and obviously that’s more serious, especially if they get into the hands of criminals or other adversaries that the average business or consumer might be more worried about than the CIA.

Also of note is that some of the tools leverage security vulnerabilities in popular products that the vendors and security researchers were not previously aware of. For the US government to discover such bugs and not work with vendors to close them leaves the public at risk and represents quite dubious ethics.

So is it time to panic? Hardly. We already knew that every large and complex piece of software has bugs (they are written by human beings after all) and that some of those bugs can be used to compromise security. We also already knew that all advanced government spy agencies work to compromise device security to either collect information about their adversaries or to disrupt their operations. That the CIA is doing this is only vaguely surprising, in the sense that it really should be the job of their sister agency, the NSA.

Everyone should already be aware that a smart phone is a perfect surveillance device, incorporating network connectivity, a GPS receiver, plus microphones and cameras. It’s obvious that spy agencies would work to hack into these things to monitor people, and would at least sometimes succeed.

So what’s interesting here?

Well, someone at the CIA obviously dislikes their employer enough to leak this information. That’s a serious crime.

The US government does not disclose all zero-day exploits it finds to vendors. That’s morally compromised.

WikiLeaks is more interested in the (fairly mundane) behaviour of the US government than that of various dictatorships, such as Iran and Russia. That makes WikiLeaks and Julian Assange look quite bad, actually. They are pretty much a puppet of the Russian state at this point.

The involvement of both WikiLeaks and Russian intelligence services in the recent US presidential election should be alarming to everyone in the West. There is no reason to believe this kind of interference will stop there — it’ll continue into the future and in other Western countries. This data dump is really just a side show compared to Russian cyber warfare efforts.

Sometimes, security boils down to pretty simple controls

August 22nd, 2016

There’s an interesting read this morning about what attack vectors are actually used to compromise corporate networks:

darkreading.com

What’s notable is that most of the common, successful methods used to compromise an organization’s security are related to credential management.

When you read about security, it’s usually about software vulnerabilities, zero-day exploits (i.e., those that have not yet been discovered by others or remediated by the software vendor), perimeter defense and patch management.

That’s sexy stuff – it’s technologically advanced, requires highly skilled people to find problems and develop exploits, etc.

The reality is much more mundane, however:

  • Weak user and admin passwords.
  • Users that unwittingly download and run malware.
  • Use of old, insecure password and name resolution protocols.
  • Plaintext passwords stored on disk and in memory.
  • Wide-open networks, which allow an attacker with a small beachhead to attack additional systems with ease.

The solutions to these problems are fairly simple too:

  • Shut down network services that use old, weak protocols.
  • Patch or upgrade software to address plaintext passwords in memory and on disk.
  • Limit user access to only what they need, only when they need it.
  • Segment the network with internal firewalls, to limit the impact of a successful breach.
  • Get users to choose strong passwords.
  • Automate controls over admin passwords.
  • Educate users and build awareness, especially around malware and “social engineering” attacks.
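
To illustrate the “strong passwords” item, here is a minimal sketch, in Python, of the kind of server-side strength check an organization might enforce at password-change time. The length threshold and the tiny blacklist are arbitrary placeholders for illustration, not a recommendation:

    import re

    # Arbitrary illustrative thresholds; a real policy would follow your
    # organization's standards (length, history, breached-password lists, etc.).
    MIN_LENGTH = 12
    COMMON_PASSWORDS = {"password", "letmein", "welcome1", "qwerty123"}

    def is_strong_password(candidate: str, username: str = "") -> bool:
        """Return True only if the candidate password passes some basic checks."""
        if len(candidate) < MIN_LENGTH:
            return False
        if candidate.lower() in COMMON_PASSWORDS:
            return False
        # Reject passwords that contain the username itself.
        if username and username.lower() in candidate.lower():
            return False
        # Require at least three character classes: lower, upper, digit, symbol.
        classes = sum(bool(re.search(p, candidate))
                      for p in (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"))
        return classes >= 3

    print(is_strong_password("Summer2016", "jsmith"))     # False: too short
    print(is_strong_password("correct-Horse-battery-9"))  # True

Automating a check like this at every password change costs almost nothing, which is rather the point: the control is boring, but it blocks exactly the kind of password guessing the article says actually works.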

Perhaps not as sexy as zero-day exploits, but more effective.

Hitachi ID Systems can help with some of these security strategies.

Whose machine or app is it anyways?

June 10th, 2016

If you deploy a modern OS, such as Windows 10, you may be surprised to learn that it’s calling home. A lot.

For example, here is a screen shot from my Windows 10 PC of the diagnostic settings, relating to what information the PC shares with Microsoft:

[Screenshot: Windows 10 feedback and diagnostics options]

Notice that there are 3 options, none of which is “stop doing that.”

I have no reason to believe this is unique to Microsoft. Apple MacOSX does it. Android and iOS on phones and tablets certainly monitor you, and even some Linux distributions have telemetry features, though these are generally off by default or offered as a paid service.

Which raises the question — whose device is it? I bought the hardware, I either purchased the OS with the hardware or installed it separately, I *own* the system. What business does it have snooping on me and sending information to a third party?

I know there are legitimate reasons for this – aggregate health diagnostics, heat maps about apps that misbehave, surveillance to find malware, etc. That’s fine, but surely the default should be “off” and sending anything from my PC to some vendor’s servers should require my permission. That’s not the current state of affairs.
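
For what it’s worth, the diagnostic level can at least be inspected programmatically. Here is a minimal sketch in Python, assuming the group-policy registry value AllowTelemetry under HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection, which is (to my knowledge) where this setting is stored, with values 0 through 3 roughly corresponding to Security, Basic, Enhanced and Full:

    # Sketch only: read the Windows 10 telemetry policy level, if one is set.
    # Run on Windows; winreg is in the Python standard library there.
    import winreg

    LEVELS = {0: "Security", 1: "Basic", 2: "Enhanced", 3: "Full"}
    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

    def telemetry_level():
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
                value, _type = winreg.QueryValueEx(key, "AllowTelemetry")
                return LEVELS.get(value, "Unknown ({})".format(value))
        except FileNotFoundError:
            return "No policy set (OS default applies)"

    print("Telemetry policy level:", telemetry_level())

Note that none of these levels is “off,” which is exactly the complaint above.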

Which brings us to compilers. It’s been known for decades that you can insert malware and Trojan horses into compilers and create hard-to-detect compromises of systems.

It seems that Microsoft has recently, perhaps inadvertently, done something close to that. A recent release of Visual Studio inserts code into even the smallest, do-nothing programs to send telemetry home to Microsoft. (infoq.com). When caught, Microsoft confessed to doing this, claiming (probably rightly) that it was intended to be an innocuous performance tuning tool: (reddit.com).

This instance will be backed out, but Windows 10 remains a “call home” machine, as do smart phones and Macs.

I wonder what it will take to reverse this awful trend?

Hardware appliances have no place in production

May 27th, 2016

One of our customers runs the RSA/Aveksa product to automate access certification. Interestingly, this product was delivered on a physical appliance.

Some time this week, the hardware in the appliance (which is just a white label PC, I’m sure) died. This means that this company’s access governance system went offline. Since it’s hardware, and presumably because the company didn’t want to pay double to have a hot standby, they now have to wait until next week for replacement hardware. At that time, who knows how much effort will be required to re-enable the service?

All this is in a major US metro area — easily reached by the vendor. Imagine the same failure at an organization in a more remote part of the world, where import controls and duties can add days or weeks to delivery, and where local integrators are few and far between. What is a 1-3 week outage in the US could be a 1-2 month outage elsewhere.

Of course, this all could have been mitigated by buying redundant appliances. A costly waste of physical infrastructure, but doable.

In this day and age, why do people still ship physical appliances? As evidenced above, they are expensive and unreliable. They are also incompatible with the trend of the 2000s to virtualize and the trend of the 2010s to move applications to the cloud. They are energy and space inefficient. They may have unpleasant interactions with national border control officials, who have a fetish for taxing or blocking technology in general and cryptographic technology in particular.

What’s the upside of a physical appliance? If your system requires exotic hardware (ASICs) to perform well, then OK, I get it. That may apply to super high performance firewalls or anti-malware scans at wire speed, but not to most applications. Or perhaps your application is horribly complex to install, and you can shave days or weeks off of deployment time by pre-installing at the factory. I would argue that if the latter is true, you have a crappy application, and should fix it rather than hacking your way around the problem with an appliance. And if you must pre-package, then for god’s sake use a virtual appliance, not physical hardware!

In the context of an IAM system, the implementation effort has more to do with business process definition, policy setup and target system integrations. Installing the app itself shouldn’t take more than an hour — see above: if it does, then your app is the problem, go fix it.

The bottom line is this: The 1990s called, and they want their physical appliances back!

Anti-virus software creates an entryway for … malware

May 17th, 2016

If you think that running anti-virus software is good for security, think again.

There have been multiple exploits lately of vulnerabilities in badly written, badly architected A/V software, such that an attacker can exploit the A/V bug to compromise your system.

Here’s the latest whopper: ZDnet.com.

I think a well patched OS may well be safer than one encumbered by these badly built “security” products. Wow.

Passwords may be insecure, but so is everything else

April 28th, 2016

Interesting bit of news today. Apparently Microsoft’s Office 365 had – for some lengthy but undisclosed period of time – a vulnerability exposing every single account to public access. This means that if your organization offloaded Exchange to O365, all your e-mails and documents were wide open for some long period of time (months? years?).

The details are here.

The short version is that there was a bug in how O365 processed SAML assertions, the mechanism that lets it offload user identification, authentication and authorization to another system, such as on-premises ADFS. An attacker could consequently impersonate anyone with a bogus SAML assertion.
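
To make the failure mode concrete, here is a minimal sketch, in Python, of the checks a service provider is supposed to apply to an incoming SAML assertion before trusting it. This is illustrative only, not Microsoft’s code: the Assertion structure and field names are invented, and real XML-DSig signature verification is reduced to a boolean stand-in:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Assertion:
        # Hypothetical, simplified view of the fields that matter here.
        issuer: str            # which identity provider issued and signed it
        audience: str          # which service provider it was minted for
        subject: str           # the user being asserted
        not_on_or_after: datetime
        signature_valid: bool  # stand-in for real XML-DSig verification

    def accept_assertion(a: Assertion, trusted_issuers: set, my_entity_id: str) -> bool:
        """Accept a SAML assertion only if every check passes."""
        if not a.signature_valid:            # cryptographic integrity
            return False
        if a.issuer not in trusted_issuers:  # is this an IdP we actually trust?
            return False
        if a.audience != my_entity_id:       # was it minted for *this* service?
            return False
        if datetime.now(timezone.utc) >= a.not_on_or_after:  # expired?
            return False
        return True

Skip or mis-scope any one of these checks and a well-formed but bogus assertion gets accepted as genuine, which is essentially the class of failure reported here.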

Wow. Just wow.

This is no different than if they had dumped plaintext passwords for all of their users.

So to everyone claiming that if we could only get rid of passwords, the world would be safe again – here’s the counter example. It doesn’t matter how you authenticate users, security bugs trump everything.

Safe computing everyone!

Politicians, crypto and craziness

April 14th, 2016

Politicians are commonly technologically ignorant. This is not news. In the few times I’ve seen politicians give speeches about IT security at conferences, my common reaction has been “wow, these people were elected, and have actual power!”

So today should come as no surprise, as two US congress members propose legislation that implicitly requires encryption back doors. Nowhere in the draft bill does it say that vendors have to create back doors, but that’s clearly what the bill is about:

Techcrunch.com article

Thankfully, it sounds like this turd won’t survive a Senate hearing or a presidential veto, but you never know. If such a thing were to pass, then:

  • There would be zero impact on security, since strong crypto is widely available in the world and in any case terrorists are often too dumb to use it.
  • There would be massive adverse consequences for US tech companies, which would either be forced to relocate to safer harbours (Canada is nice!) or lose all sorts of non-US markets.

I suppose the tactical question is: “how do we block stupid legislation like this?”

The bigger question is “how do we recruit politicians who are not idiots?” That’s a much harder question – politics is nasty, and smart people know well enough to avoid it.

The curious case of Apple and the FBI

February 22nd, 2016

The story about the FBI asking Apple for help decrypting the contents of the phones belonging to the San Bernardino shooters smells fishy, and the media seems to have totally missed the point.

Either there is a way to decrypt the filesystem of an iOS device or there isn’t.

If there is no way to decrypt the device, why is Apple protesting? Just say it’s not possible and be done with it.

If it’s possible, then why does the FBI need Apple’s help? Are they technically inept? Can they not be bothered to disassemble the phones in question and inject every possible 4-digit unlock code automatically, to see which one works? Heck, you could hire a kid to spend a few days trying to type all the codes, one by one, into the login screen.
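
To put a number on that: a 4-digit unlock code has only 10 ** 4 = 10,000 possibilities. Setting aside whatever retry throttling or auto-erase behaviour the device enforces, enumerating them is trivial, as this sketch shows; try_code is a hypothetical stand-in for however a guess would actually be injected:

    from itertools import product

    def try_code(code: str) -> bool:
        """Hypothetical stand-in for submitting one guess to the device."""
        return code == "2580"  # pretend this happens to be the real PIN

    # Worst case: 10 ** 4 = 10,000 guesses.
    for digits in product("0123456789", repeat=4):
        code = "".join(digits)
        if try_code(code):
            print("Unlocked with", code)
            break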

I honestly don’t know what the fuss is about, but I think both the FBI and Apple are liars. Here is what I think is actually going on:

  • It’s trivial to unlock these devices.
  • The FBI is just making noise for political reasons, to support moves in Congress to force technology companies to create more convenient back doors on these devices.
  • Apple is making equally political noise, to oppose such moves.
  • None of this is about a single terrorism investigation.

For the record, I think that forcing technology companies to inject back doors into hardware or software products is monumentally stupid — the bad guys will just use real crypto downloaded from the Internet, innocent consumers will be made vulnerable and foreign markets will be closed to US vendors, whose products would be (correctly) identified as compromised. As usual, the political class is totally clueless and actively harmful to the economy.

P.S. My friend pointed this out to me – I think it’s a brilliant analysis. Probably wrong, but nonetheless sheer genius:
http://www.cringely.com/2016/02/19/the-fbi-v-apple-isnt-at-all-the-way-you-think-it-is/

The app permission model on iOS and Android is broken: here’s how to fix it

January 26th, 2016

The permission model on smart phones is all wrong, and everybody knows it. I’ll use Android as an example, mainly because I’m more familiar with it, but I don’t imagine iOS is much different.

When you install an app on your phone (or tablet – which is just an oversized phone after all), it asks you whether you will allow the app to have various permissions on the device — can it see your location? Access your contacts? Your camera? The network? The set of possible answers is, unfortunately, very limited: yes or no. No means you can’t install the app, yes means you accept whatever it says.

There are two problems with this model:

(a) You have no idea *why* the app wants each permission. There is only an assertion that it does, but not what it will do with that access. Often, the reason is quite legitimate and benign, but it’s entirely up to the phone’s owner’s imagination to figure out why the heck app A needs permission B.
(b) The set of possible responses is too limited. It’s either accept the app, with all the permissions it wants, or do not install it at all.

These problems lead to a third problem, which is habituation. Since the reasons apps demand permissions are opaque, and the only alternative is to not install an app the user presumably wants, users stop reading the warnings and just blindly accept all apps with all permissions, no matter how bizarre.

I’d like to suggest a slightly more nuanced model, one that could easily be implemented by the phone OS vendors, that would improve the situation a great deal:

(1) Require app vendors to provide a bit of text next to each permission, that explains what they will do with the access they require. Users don’t have to read this – perhaps just click through on the permission to see what it’s all about.
(2) Provide a forum for interested users to complain to the OS vendors, in the context of their app markets, when they detect (through debugging, network diagnostics, etc.) that app vendors actually use permissions in ways that they did not initially indicate. Basically catch out liars. If an app vendor is caught lying about what they need permissions for, and the OS vendor confirms that, then add a warning to the app installer saying “but the vendor is lying here.” I bet there would be very few lies very quickly!
(3) Enhance the OS to be able to send simulated data to apps, in place of the permissions they seek, under user control. For example, if I install an app that wants access to my camera, and I do want to run the app but don’t really want to give it camera access, I should be able to feed the app fake/random images. Same goes for network access – let the app think it’s hitting the networking API, but connect it to a loopback interface only. App wants to see my contacts? Feed it random contacts. This way, I can have more fine grained control over permissions, not just “yes, everything / no, no app” without breaking the app’s code.
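
To sketch what idea (3) could look like, here is a toy model in Python rather than real Android or iOS code; every class and function name is invented for illustration. The OS would route each permission-gated request through a broker that returns either real data or plausible fakes, depending on what the user chose, so the app’s own code path never changes:

    import random

    class PermissionBroker:
        """Illustrative only: routes permission-gated calls to real or fake data."""

        def __init__(self, user_choices):
            # user_choices maps a permission name to "real" or "simulated".
            self.user_choices = user_choices

        def get_contacts(self, real_contacts):
            if self.user_choices.get("contacts") == "real":
                return real_contacts
            # Feed the app plausible but fake contacts instead of denying it.
            return ["Person {}".format(i) for i in range(1, 6)]

        def get_location(self, real_location):
            if self.user_choices.get("location") == "real":
                return real_location
            # Random coordinates: the app keeps working, the user keeps privacy.
            return (random.uniform(-90, 90), random.uniform(-180, 180))

    # The user allowed real contacts but chose simulated location data.
    broker = PermissionBroker({"contacts": "real", "location": "simulated"})
    print(broker.get_contacts(["Alice", "Bob"]))
    print(broker.get_location((51.05, -114.07)))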

As an app vendor, I’d be perfectly happy to live within these constraints. We publish apps that do want access to contacts, cameras, the network, etc. We have perfectly good reasons for these permissions and would love to be able to explain them to users at installation time. If users want to feed our app simulated data, that’s fine – they will lose some functionality, but gain some comfort.

As a phone user, I’d also love to have this kind of control. Install the apps I want but limit what permissions they get based on my personal preferences.

So how about it, Google and Apple? These are technically minor enhancements with a major, positive impact on the security of your app ecosystems.

When deploying security software – be sure it’s written by experts

January 12th, 2016

A colleague pointed out this interesting thread, about a variety of security exploits in Trend Micro’s ‘Password Manager’ module.

The nature and severity of the exploits are … breathtaking. The slow and indecisive response of the vendor is similarly amazing. For TrendMicro customers, the conclusion is simple: don’t install this thing. More generally, it’s clear that the team who built the product has a very weak grasp of security. Is it just this team, or the whole company? Who knows? What about other vendors in the IT security space?

It turns out that other security products, theoretically designed to protect you, actually introduce their own exploits, as discussed here and here, for example.

That’s quite scary, because most users think they are improving their security posture by deploying such software, but (a) these programs require very deep OS privileges to run, and (b) these programs are, it seems, sometimes written by people who don’t have the skills required to write secure code. The consequence is that users, thinking they are doing the “right thing,” are actually endangering themselves.

What to do?

First, I’m not much of a fan of anti-malware software. The OS (Linux, Mac, Windows, etc.) is more secure than most people imagine — most vendors have been taking security quite seriously for years now, have pretty good designs and usually respond promptly to any discovered vulnerabilities. Do turn on the security features of your OS, do encrypt your filesystems, do keep your software actively and promptly patched, do avoid sketchy web sites, and you should be fine. Where’s the anti-malware program in that narrative? Notably absent!

Second, if you do go for anti-malware, keep an eye on public disclosures of exploits. If a vendor has been caught doing egregiously dumb things – as TrendMicro has here – then avoid them. Who knows what other products have been infected by the same clueless development practices?

Safe computing!