The curious case of Apple and the FBI

February 22nd, 2016

The story about the FBI asking Apple for help decrypting the contents of the phones belonging to the San Bernardino shooters smells fishy, and the media seems to have totally missed the point.

Either there is a way to decrypt the filesystem of an iOS device or there isn’t.

If there is no way to decrypt the device, why is Apple protesting? Just say it’s not possible and be done with it.

If it’s possible, then why does the FBI need Apple’s help? Are they technically inept? Can they not be bothered to disassemble the phones in question and inject every possible 4-digit unlock code automatically, to see which one works? Heck, you could hire a kid to spend a few days typing all the codes, one by one, into the login screen.
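
To put the 4-digit keyspace in perspective, here is a minimal brute-force sketch in Python. The try_code callback is purely hypothetical – it stands in for whatever mechanism actually submits a guess to the device:

```python
# A 4-digit passcode space is tiny: 10^4 = 10,000 candidates.
def crack(try_code):
    """Exhaustively submit every PIN via the hypothetical try_code
    callback, which returns True when the device unlocks."""
    for pin in range(10_000):
        code = f"{pin:04d}"  # zero-padded: "0000" .. "9999"
        if try_code(code):
            return code
    return None  # keyspace exhausted without a match
```

Even at one guess per second, exhausting the whole space takes under three hours.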

I honestly don’t know what the fuss is about, but I think both the FBI and Apple are liars. Here is what I think is actually going on:

  • It’s trivial to unlock these devices.
  • The FBI is just making noise for political reasons, to support moves in Congress to force technology companies to create more convenient back doors on these devices.
  • Apple is making equally political noise, to oppose such moves.
  • None of this is about a single terrorism investigation.

For the record, I think that forcing technology companies to inject back doors into hardware or software products is monumentally stupid — the bad guys will just use real crypto downloaded from the Internet, innocent consumers will be made vulnerable and foreign markets will be closed to US goods, whose products are (correctly identified as) compromised. As usual, the political class is totally clueless and actively harmful to the economy.

P.S. My friend pointed this out to me – I think it’s a brilliant analysis. Probably wrong, but nonetheless sheer genius:

The app permission model on iOS and Android is broken: here’s how to fix it

January 26th, 2016

The permission model on smart phones is all wrong, and everybody knows it. I’ll use Android as an example, mainly because I’m more familiar with it, but I don’t imagine iOS is much different.

When you install an app on your phone (or tablet – which is just an oversized phone after all), it asks you whether you will allow the app to have various permissions on the device — can it see your location? Access your contacts? Your camera? The network? The set of possible answers is, unfortunately, very limited: yes or no. No means you can’t install the app, yes means you accept whatever it asks for.

There are two problems with this model:

(a) You have no idea *why* the app wants each permission. There is only an assertion that it does, but not what it will do with that access. Often, the reason is quite legitimate and benign, but it’s entirely up to the phone owner’s imagination to figure out why the heck app A needs permission B.
(b) The set of possible responses is too limited. It’s either accept the app, with all the permissions it wants, or do not install it at all.

These problems lead to a third problem, which is habituation. Since the reasons apps demand permissions are opaque, and the only alternative is not installing an app the user presumably wants, users stop reading the warnings and just blindly accept all apps with all permissions, no matter how bizarre.

I’d like to suggest a slightly more nuanced model, one that could easily be implemented by the phone OS vendors, that would improve the situation a great deal:

(1) Require app vendors to provide a bit of text next to each permission that explains what they will do with the access they request. Users don’t have to read this – perhaps just click through on the permission to see what it’s all about.
(2) Provide a forum for interested users to complain to the OS vendors, in the context of their app markets, when they detect (through debugging, network diagnostics, etc.) that app vendors actually use permissions in ways that they did not initially indicate. Basically catch out liars. If an app vendor is caught lying about what they need permissions for, and the OS vendor confirms that, then add a warning to the app installer saying “but the vendor is lying here.” I bet there would be very few lies very quickly!
(3) Enhance the OS to be able to send simulated data to apps, in place of the permissions they seek, under user control. For example, if I install an app that wants access to my camera, and I do want to run the app but don’t really want to give it camera access, I should be able to feed the app fake/random images. Same goes for network access – let the app think it’s hitting the networking API, but connect it to a loopback interface only. App wants to see my contacts? Feed it random contacts. This way, I can have more fine grained control over permissions, not just “yes, everything / no, no app” without breaking the app’s code.
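
Point (3) could be sketched roughly like this. The names here are entirely hypothetical – no mobile OS currently exposes such a gate – but they illustrate the idea of transparently substituting simulated data:

```python
import random

# What the OS would return if the user granted real access.
REAL_CONTACTS = [("Alice", "555-0100"), ("Bob", "555-0101")]

def fake_contacts(n=3):
    """Plausible throwaway contacts, so the app's code path still works."""
    return [(f"User{i}", f"555-{random.randint(1000, 9999)}")
            for i in range(n)]

def get_contacts(permission_granted: bool):
    """The OS hands out real data only if the user granted the permission;
    otherwise the app transparently receives simulated data."""
    return list(REAL_CONTACTS) if permission_granted else fake_contacts()
```

From the app’s perspective the API behaves identically either way, which is exactly why this approach doesn’t break the app’s code.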

As an app vendor, I’d be perfectly happy to live within these constraints. We publish apps that do want access to contacts, cameras, the network, etc. We have perfectly good reasons for these permissions and would love to be able to explain them to users at installation time. If users want to feed our app simulated data, that’s fine – they will lose some functionality, but gain some comfort.

As a phone user, I’d also love to have this kind of control. Install the apps I want but limit what permissions they get based on my personal preferences.

So how about it, Google and Apple? These are technically minor enhancements with a major, positive impact on the security of your app ecosystems.

When deploying security software – be sure it’s written by experts

January 12th, 2016

A colleague pointed out this interesting thread, about a variety of security exploits in Trend Micro’s ‘Password Manager’ module:

The nature and severity of the exploits are … breathtaking. The slow and indecisive response of the vendor is similarly amazing. For Trend Micro customers, the conclusion is simple: don’t install this thing. More generally, it’s clear that the team who built the product has a very weak grasp of security. Is it just this team, or the whole company? Who knows? What about other vendors in the IT security space?

It turns out that other security products, theoretically designed to protect you, actually introduce their own exploits, as discussed here and here, for example.

That’s quite scary, because most users think they are improving their security posture by deploying such software, but (a) these programs require very deep OS privileges to run, and (b) these programs are, it seems, sometimes written by people who don’t have the skills required to write secure code. The consequence is that users, thinking they are doing the “right thing,” are actually endangering themselves.

What to do?

First, I’m not much of a fan of anti-malware software. The OS (Linux, Mac, Windows, etc.) is more secure than most people imagine — most vendors have been taking security quite seriously for years now, have pretty good designs and usually respond promptly to any discovered vulnerabilities. Do turn on the security features of your OS, do encrypt your filesystems, do keep your software actively and promptly patched, do avoid sketchy web sites, and you should be fine. Where’s the anti-malware program in that narrative? Notably absent!

Second, if you do go for anti-malware, keep an eye on public disclosures of exploits. If a vendor has been caught doing egregiously dumb things – as Trend Micro has here – then avoid them. Who knows what other products have been infected by the same clueless development practices?

Safe computing!

Cloud … so we forget lessons of the past

December 14th, 2015

I just returned from the Gartner IAM Summit in Vegas. As usual, good content, smart and engaged attendees all packed into a godawful hotel.

But I digress. What really impressed me at the event is how easily lessons are forgotten. Recall, IAM is all about eliminating silos and managing all identities, entitlements and credentials together. That seems like common sense, no?

Apparently not. As soon as you say the magical word “cloud” you can forget these lessons and reintroduce new silos. For example, I learned that Microsoft has added a (preview) feature for privileged access management on Azure. I was also reminded that Okta – mainly a vendor of federated SSO to SaaS apps (good!), delivered itself as SaaS (cool!) – offers IAM capabilities to create/delete accounts on SaaS apps, but only on SaaS apps (huh?).

The lesson is pretty simple: you should have a single IAM system to create/manage/delete accounts, entitlements and credentials everywhere, not just on-premise and not just SaaS. Creating new silos is silly – there is no technical reason for such constraints. What’s the user experience supposed to be? Go here to manage these accounts and go there to manage those? “Here” and “there” being collections of apps identified by where they happen to be hosted? That’s not exactly a great approach to usability.

So remember folks: if something made sense before the “cloud”, it still makes sense when you move some of your compute workloads to someone else’s servers.

Crypto regulation: political amateur hour

November 19th, 2015

What is it about cryptography that brings out the worst in politicians?

It doesn’t seem to matter which jurisdiction you look at, the political class seems to have fantasies of putting the crypto genie back in the bottle. For example, in the UK they want companies like Google and Apple to allow government to peek into the content of communication that passes through their platforms. That’s impossible if there is end-to-end encryption, of course. In the US, the FBI wants companies to build technological solutions to prevent encryption above all else.

This is idiotic on two levels:

  1. Do the bad guys – such as ISIL – actually use strong crypto, or are they too stupid for that? The evidence is that they do not use strong crypto, at least not yet.
  2. Is it feasible to prevent the bad guys from using strong crypto? The techniques, algorithms, know-how and software to secure communications are all widely known and available as open source. The best a government can hope for is to make it a nuisance for law abiding citizens and for dumb criminals to use strong crypto. Smart criminals will use it regardless of what the law says.
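
To illustrate point 2 – the know-how really is ubiquitous – here is a one-time pad in a few lines of Python, using nothing but the standard library. A one-time pad is provably unbreakable when the key is truly random, as long as the message, used only once, and exchanged securely (key distribution is its practical weakness):

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """Encrypt with a fresh random key as long as the message."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so decryption is the same operation."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

If a dozen lines of stdlib Python can do this, no law is going to keep strong crypto out of the hands of a motivated criminal.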

How’s this for a suggestion? Any government official responsible for public safety, security, military, policing or trade who seriously and publicly advocates to control the use, import or export of strong crypto should be fired from their job, on the basis of gross incompetence. To suggest such controls is to admit profound ignorance of the topic at hand.

There is no “discussion” about whether crypto controls work. They do not. Decades of experience have shown that they only serve to impair trade, as buyers avoid products from manufacturers whose governments limit crypto (notably the US with ITAR). As for the bad guys? They will use whatever crypto they want, including none at all, without regard for laws.


August 20th, 2015

In case you aren’t familiar with the term, “schadenfreude” is a German word for pleasure taken in others’ misfortune. I think it fits the release of Ashley Madison customer data this week.

So what should we make of this compromise and disclosure? I think there are at least two subject areas – technical/security and sociological/moral.


Technical/security

As has been pointed out before, this looks very much like an inside job. It just screams for better internal controls, including Privileged Access Management, data loss prevention and plain old employee and contractor screening. It’s quite possible that, despite lots of claims about motivation, this is the work of a disgruntled employee or contractor.

It’s also interesting to see what the operators of the site — Avid Life Media — got right and wrong:

  • Right:
    • Strong hashing of customer passwords (bcrypt, a Blowfish-based hash).
  • Wrong:
    • No privileged access management.
    • Retained excessive customer data, including physical location (GPS coordinates, presumably from the smart phone app), phone number, personal e-mail address, security question/answer in plaintext, and detailed credit card data, including mailing address.
    • Failed to delete this data, even when paid to do so.
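
The one thing they got right – slow, salted password hashing – looks roughly like this. bcrypt itself isn’t in Python’s standard library, so this sketch substitutes PBKDF2, which embodies the same idea of making each guess deliberately expensive:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Salted, deliberately slow hash (200,000 PBKDF2 iterations)."""
    salt = salt or os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

The salt defeats precomputed rainbow tables, and the iteration count turns a billion-guesses-per-second cracking rig into a few-thousand-guesses-per-second one.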


Sociological/moral

This discussion is just beginning, and will no doubt continue for a long time. A few observations:

  • Despite best efforts by the AM legal team, the data is out in the wild. They got it removed from a few web sites, but it’s on BitTorrent where content is essentially un-removable and un-deletable. Get over it – the data is permanently public.
  • The data appears to be quite authentic. Some had thought (hoped?) that the data may be fake – but that’s just not so.
  • The volume of data is huge – about 32,000,000 customer records.
  • It’s mostly men. Really – there aren’t many women on this site. It’s a lot of men, chasing after a few women. A completely one-sided seller’s market for women.
  • It will be interesting to see if someone can figure out how many of the profiles are real people, and how many are bogus data injected by the company. I suspect a significant number of fake or duplicate profiles, because the numbers beggar belief. For example, there appear to be over 100,000 profiles in Calgary, where I live. Calgary has just over a million people, and I don’t believe that 1 in 10 of them is trying to cheat on a spouse. Since the data is mostly men, that’s really about 1 in 5 males. Exclude children, the elderly and single people, and it approaches a third to a half of adult males in relationships – and no matter how low your view of humanity, that’s just not believable. The data must be lying.
  • This is a treasure trove of data for various purposes. For example, someone has already published a heat map of where the users (real or fake) are and whether they are overwhelmingly male (>85%) or merely majority male (<85%).
  • This will be a bonanza for divorce lawyers. Not as big as everyone assumes, however, as there are certainly many users on the site who are not endangering an existing relationship:
    • Fake or duplicate users, as mentioned above.
    • I know at least one person who has a profile on the site, that he setup while single – he was just using it as a normal dating site. I bet there are lots of these.
    • There are probably many users on the site for whom the excuse “I was just curious” is actually true – they were curious about the market or looking for their current partner, to see if that person was on the site.
    • Another person I know pointed out that sex workers use this and similar sites, so there are likely thousands of those.
  • As always happens with disclosure of sexual behaviour that is widely considered to be immoral, public figures, especially those who spout socially conservative views, will be shamed. I’m not too sure what “family values” are, other than a code word for social conservatism, but apparently one idiot public figure who pushes that as a political cause – Josh Duggar – has already been caught with his pants (literally?) down.
  • I bet the security establishment in many countries is looking at this data, as it provides leverage for foreign governments against their own people, in sensitive positions. I would expect employees to be fired or shuffled to less sensitive positions as a result.
  • Employers may cross check employees or candidates against this data set, as an (unethical and almost certainly illegal) test of character.
  • I fear that physical harm may come to some people whose data was disclosed, including sex workers and people with overzealous partners.

I’m sure there’s more.

The big lesson, as always, is to assume that privacy is a chimera. If there is something you don’t want to share with the world, don’t upload it to some web site!

Avid Life Media hack

July 20th, 2015

If you haven’t read this one yet, then do so now:

Online Cheating Site AshleyMadison Hacked

This is interesting on so many levels!

  • The data that was apparently exfiltrated is about people cheating on their spouses. There is a delicious moral irony involved in the possible release of this.
  • At the same time, this is a criminal event. Proprietary and personally identifying data was stolen. Theft is theft, even if it’s just a copy of data and even if it’s used to shame cheaters.
  • A company in this line of business should surely make security paramount. That they kept plaintext PII around – including sexual fetishes and compromising photos – is simple incompetence, applied at an industrial scale.
  • The attack seems to have been perpetrated by an insider. The ALM people seem to think they know who did this, and imply it was a contractor of some sort. If this doesn’t cry out for Privileged Access Management then I don’t know what does.
  • The societal impact of this hack could be huge. Imagine what happens if this data set is published and tens or hundreds of thousands of divorces, family breakups and job terminations ensue. That could make this the most impactful hack in history, in terms of financial and personal harm. Family lawyers will be in the money for years as a result.

It’ll be interesting to see how this story unfolds in the coming days.

So glad we don’t use Java

June 30th, 2015

Interesting news regarding litigation around Java intellectual property (IP) today:

Basically the courts are bouncing back and forth decisions regarding a lawsuit between Google and Oracle regarding ownership and use of the specifications for the Java API.

I’m not a lawyer, but generally I think that languages and runtime environments are well adopted if they are open and unencumbered. Nobody claims copyright over C or stdlib, for example.

Oracle has – unsurprisingly given its corporate culture – tried to make as much as possible of the Java ecosystem proprietary, so that they can generate license fees from this asset. This should cause many developers to think twice about investing in this platform — since there is a risk of undefined fees in your future.

Tread with caution. Not only is Java a terrible platform for performance, it turns out that it’s also at risk of becoming increasingly proprietary. Not a healthy place to develop.

LastPass hack

June 16th, 2015

I guess it was inevitable that a consumer-oriented password manager service would get hacked, and today we’ve learned that one did:

So is there a lesson here for us? A few, I suppose:

  • Security is only as good as the weakest link. I don’t think plaintext passwords were exposed, and it’s not even clear that encrypted ones got leaked, but password recovery hints did, and that may be enough to compromise some passwords.
  • The size of a target matters. I’m sure hackers much prefer to compromise popular systems to obscure ones. For consumers, this leads to the following interesting guidance: see where the herd is running – and run the other way. Choose less commonly used services if you can (but subject to other constraints, like commercial viability and likelihood of the service being well/professionally operated – have fun figuring out which is which).
  • The push to federate will only accelerate. Nobody wants separate passwords for various web sites, when the operators of those sites could easily federate to Facebook, Google, etc. Why solve the problem yourself if you can simply farm it out, for free?
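
To see why leaked hints matter, here is a toy model (emphatically not LastPass’s actual scheme): if a hint narrows a password to a small candidate list, even salted hashes fall to a quick dictionary sweep.

```python
import hashlib

def guesses_from_hint(names, years):
    """Candidate passwords implied by a hint like 'pet's name + year'."""
    return [name + str(year) for name in names for year in years]

def crack_with_hint(target_hash, salt, candidates):
    """Sweep the candidate list against a leaked salted hash."""
    for guess in candidates:
        if hashlib.sha256(salt + guess.encode()).hexdigest() == target_hash:
            return guess
    return None
```

A hint that shrinks the search space from "all possible passwords" to a few thousand name/year combinations makes the cracking run finish in milliseconds.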

If you are/were a LastPass user, you have a couple of options:

  • Change everything – your master password and your hints.
  • Delete your profile. Take your business elsewhere or give up on this class of application.

Stay safe!

Appliances are Dangerous (because nobody patches them)

May 26th, 2015

Putting sensitive infrastructure on physical or virtual appliances, rather than running it as a traditional on-premise application or a newer software-as-a-service system, is a security disaster just waiting to happen.

Why? Because unlike on-premise applications and also unlike the servers running SaaS applications, there is no guarantee that anyone will apply critical security patches to your appliances, either at all or on time. Systems with unpatched security vulnerabilities are an open door to your otherwise secure infrastructure. Tolerate them at your peril.

I just recently spoke with a customer of ours who had – a few years ago – deployed a privileged access management product from one of our competitors. That product includes one or more “jump servers” which mediate login sessions from the desktops of authorized users to logins on managed endpoint systems. Such a “jump server” architecture is common in the privileged access management product category.

The problem for this customer has been that these jump servers — which have access to the most sensitive passwords in the company — run on the original Windows Server 2008 OS (i.e., before the first service pack was released). Since the vendor has made custom changes to the OS to “harden” it, it has been impossible to patch the OS on these jump servers. As a result, today, these jump servers run an OS that was released on February 27, 2008 – 2,645 days, or about 7.25 years, ago. Our customer is scrambling to rip out this product, which endangers their entire infrastructure (it also has performance problems, but that’s another story).
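
The arithmetic checks out; in Python:

```python
from datetime import date

# Windows Server 2008 RTM date vs. the date of this post.
age_days = (date(2015, 5, 26) - date(2008, 2, 27)).days
age_years = age_days / 365.25
# age_days == 2645, i.e. roughly 7 1/4 years
```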

Just think about how many security exploits have been discovered for and patched on Windows 2008+IIS since this platform was released on 2008-02-27. This recently discovered vulnerability comes to mind:


Using this particularly dangerous vulnerability, an attacker can remotely gain full SYSTEM privileges on any Windows system running IIS. Yup – including Windows 2008. This exploit is being actively leveraged in the wild, so the risk is very real.

Imagine that your privileged access management system — or any other critical infrastructure — runs on an old, unpatched OS like this. How secure would your organization be?

Is it ever OK to use appliances — physical or virtual — instead of just installing software on a well managed OS image?

I can think of only two cases where appliances are acceptable:

  1. Physical appliances which incorporate specialized hardware, to perform some task very fast. There is simply no software alternative to custom ASICs.
  2. Appliances (physical or virtual) with an automatically managed patch system. That is, they should run a stock OS and be subject to automatic and timely patches from all the software vendors that contributed components: OS, web server, app server, DB, etc.:
    • If human intervention is required to patch, you’re likely going to forget or at least be late, which will create windows of opportunity for attackers: no good.
    • If only some components get automatically patched (say just the OS), it follows that others aren’t being patched (say the app server) and again you’ll be vulnerable.
    • If the runtime platform has been significantly customized (i.e., “hardened”) then automatic patching will likely break and you’ll achieve insecurity by trying to be too clever.

What if you’ve already deployed appliances that aren’t automatically patched?

  1. Try to patch them manually. Right now.
  2. Talk to the vendor. They are putting you at risk and had better step up and correct the error of their ways, or else you’ll be obliged to rip out their products.
  3. Look for alternatives, since these things are ticking time bombs on your infrastructure.