Archive for 2011


Tuesday, August 2nd, 2011

The big trend these days seems to be the use of consumer computing devices (smart phones, tablets, etc.) in the enterprise. Bring Your Own Device, or BYOD for short.

On the one hand, I get it. Users want a particular device and as more and more apps move to a web UI, they actually can use their device as the web browser. Really – a typical corporate user needs MS Office or equivalent, a web browser, maybe an IM client, plus access to filesystems and e-mail. Even my phone can do that stuff. Why shouldn’t I be able to use my phone anyway? Users don’t want multiple devices for multiple applications either. If they already have an iPhone or their own laptop, they don’t want to lug that *and* a corporate device on their trips.

The problem with this is risk management. Sure, the user’s own device is compatible, but how does the corporation know that there isn’t a keylogger installed on it, leaking corporate passwords and other data? How can the corporation be assured that the device’s filesystem is encrypted, so that if it’s lost or stolen, there is no data loss? How does it know that the user’s PC doesn’t harbour a virus, which will propagate as soon as it’s plugged into the corporate network? These are pretty serious risks that users don’t seem to really understand.

It seems to me that BYOD should, to comply with audit and regulatory requirements, go hand in hand with some basic requirements:

* Make the device stateless, or at least keep all the corporate data in a VM, whose configuration (including filesystem crypto) is managed.
* Require users to run some sort of anti-malware code on their device, to prevent basic attacks like keyloggers.
* Require users and IT to collaborate in ensuring that consumer devices meet these requirements.
* Absolve IT from supporting the device, beyond this vetting process.

Are users willing to live with these constraints? I honestly don’t know, but they seem pretty foundational to me.

Economic growth and energy

Tuesday, August 2nd, 2011

OK, so this is nothing to do with IAM, but it’s interesting nonetheless.

In this post, Tom Murphy points out, quite rightly, that continued economic growth implies continued growth in society’s energy use. Continued growth in energy use means (a) we run into physical limits on the amount of energy we can harvest and (b) we heat the earth to unliveable temperatures.

He’s right of course – nothing is forever – so the real question is: “how much longer can we continue to grow the world economy?”

I wish I knew. 🙂

Cool analysis of how users choose (weak) passwords

Monday, July 18th, 2011

Check this out – it’s an analysis of the compromised password databases at Sony and Gawker. Very cool breakdown of how users choose passwords, at least when not constrained by a policy engine:

How many exploits does it take to bring a company down?

Thursday, June 2nd, 2011

Just watch Sony to find out!

Can you say pattern? I know you can!

— Idan

RSA breach, round 2: Lockheed Martin

Saturday, May 28th, 2011

When RSA originally announced their security breach, they were quite circumspect about what exactly was stolen. There was lots of conjecture flying around, but nobody knew for sure, because RSA wasn’t saying much.

The RSA announcement was here:

What set the industry abuzz was the suspicion that:

  • The attacks were carried out by state actors, not just random criminals.
  • The attacks compromised key material used in the RSA SecurID token authentication process.
  • This key material could be used, by a reasonably sophisticated attacker, to impersonate a legitimate user in an organization that relies on RSA SecurID tokens.

Nobody knew for sure, but this seemed like a strong possibility and a dangerous one at that.

Today we started getting confirmation that this exact scenario is what has been playing out. Lockheed Martin, one of the largest US military contractors, is reporting ongoing attacks related to RSA tokens, which are typically deployed to authenticate remote users in their VPN connections.

So if reports are to be believed – and where there’s smoke there is usually a fire – then it’s likely that a state actor (probably China) first compromised RSA to acquire key material for all RSA tokens everywhere, then used this data to construct fake tokens and attack user accounts at interesting organizations, including US military contractors.

If your organization uses RSA tokens, then you presumably deployed them to increase the security of remote user connections to your network from a somewhat complex single factor (a password) to two factors, consisting of evidence that the user physically possesses his token plus an even simpler knowledge-based factor (typically a 4-digit PIN).

What you actually got, however, now that the key material was breached, was a change from a single, moderately complex password to a single, definitely simple, PIN. The token part can be impersonated, by at least one foreign entity. Your adversary also has to figure out which token is associated with which of your users, and apparently they are using phishing to figure this out.
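SecurID’s algorithm is proprietary, but the standard HOTP algorithm from RFC 4226 is a close, public analogue that illustrates exactly why stolen seed material is fatal: the token is nothing more than a shared secret plus a counter, so anyone holding the seed computes the same codes the token does. A minimal sketch (the `hotp` function name and demo values are mine; the seed is the RFC 4226 test vector):

```python
import hashlib
import hmac
import struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """Compute an HOTP code (RFC 4226) from a shared seed and a counter."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Anyone holding the seed computes identical codes -- seed theft defeats the token.
seed = b"12345678901234567890"   # RFC 4226 test seed
print(hotp(seed, 0))             # 755224 (RFC 4226 test vector)
print(hotp(seed, 1))             # 287082
```

In other words, once the per-token seeds leak, the “something you have” factor degrades to “something the attacker also has.”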

So what to do?

Clearly, the RSA tokens should be replaced. I imagine RSA will be offering replacement tokens based on different key material. I wouldn’t go there, however, since the basic problem with this architecture is that there is a single point of failure — RSA — and that’s a very tempting target for powerful adversaries.

To RSA’s competitors — do your architectures also have this weakness? Unless you can demonstrate that your tokens don’t aggregate risk in the same way, then you are guilty by association…

Perhaps another token solution? Or smart cards, if you have pervasive readers? Or combination smart-card/token devices, if you don’t have card readers everywhere, or one of several mechanisms that leverage user mobile phones as an authentication factor?

All of these make sense — choose the one that works for you.

Heck, going back to just passwords, but making them strong ones and authenticating the endpoint (i.e., is this the same PC that my user usually signs on from?) would be better than the RSA tokens at this point. More convenient for end users too.

Whatever you do, think about risk aggregation. Maybe that’s the new motto for authentication technologies.

— Idan

Trends in enterprise authentication

Tuesday, May 24th, 2011

Someone recently asked me what trends I could see in the kinds of authentication used by medium to large organizations. I thought I’d share my response here – it might be of general interest.

First, I’d like to say that despite every prognostication for years now that we’ll be moving away from passwords Real Soon Now, what I see in the real world is more and more passwords.

This is not to say that organizations aren’t evolving – it’s just that nothing is as cost effective or well understood as passwords. Yes, passwords have security and usability problems, but consider the alternatives:

* Biometrics: you typically need extra hardware at the user’s device, plus no matter which biometric you choose, some users will be unable to use it (no hands, no fingers, unable to scan a fingerprint, unable to get a retinal image, etc.)

* One time password tokens: extra hardware for the organization to purchase and distribute, extra junk for users to carry around, serious boundary-condition problems when a user loses his token or leaves it at home, plus the recent debacle at RSA.

* Smart cards: same problems as tokens plus more difficult integration and readers required at the endpoint. Ever tried to use a smart card with an app you sign into from your smart phone? Moreover, smart cards carry a PKI certificate payload, so organizations have to stand up and manage a PKI too. That’s tons of fun.

* Mobile phone as authentication factor: increasingly popular, but often as a backup authentication factor, rather than a primary, because it’s still more of a nuisance to users than just a password.

So what’s changing? The big change I see is user location and endpoint device authentication. Sites like facebook check to see not only that I typed the correct password, but also that I’m signing on from a computer they have seen me use before. If not, they’ll ask me to take extra steps, such as recognizing the faces of my friends and/or answering some personal questions. I think this trend is definitely on the upswing.
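The endpoint-recognition trick above boils down to a long-lived device cookie checked at login time. A minimal sketch of one way a site could do it (all names here are illustrative, not any real site’s API; a keyed HMAC tag is stored server-side so a stolen database doesn’t directly expose usable cookies):

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side state.
SERVER_KEY = secrets.token_bytes(32)   # per-site secret used to tag device tokens
known_devices = {}                     # username -> set of device tags

def _tag(token: str) -> bytes:
    return hmac.new(SERVER_KEY, token.encode(), hashlib.sha256).digest()

def register_device(user: str) -> str:
    """After a fully verified login, issue a token stored as a long-lived cookie."""
    token = secrets.token_hex(16)
    known_devices.setdefault(user, set()).add(_tag(token))
    return token

def needs_step_up(user: str, cookie_token: str) -> bool:
    """A correct password is not enough if the endpoint has never been seen before."""
    return _tag(cookie_token) not in known_devices.get(user, set())
```

So a correct password from a known device signs the user straight in, while the same password from an unknown device triggers the extra questions.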

What else? More and more web sites are supporting federated authentication. I can sign into the airport Wi-Fi using my facebook or yahoo accounts, for example. That’s the new pattern – never mind registering for each and every site (newspapers? stores? airports? coffee shops?). Just use your e-mail or social networking ID on most sites, except maybe for things like the bank, which are held to a higher security standard.

And for those organizations that have already deployed tokens or smart cards – I see some of them retiring the technology and going back to passwords. Seriously? Yes. The integration and cost of ownership issues have turned out to be too high for many organizations, and when the economic times are tough, the “cool but hard to use and costly to support” technology goes out the window.

So who is still using smart cards and PKI? Government. That’s about it, really. Private sector organizations are just not going there in any serious way.

And who is using OTP? Fewer and fewer organizations. Mobile phones turn out to be a more user friendly option.

So aren’t we less secure, if everyone reverts to passwords?

Maybe, but first we can mitigate the security problems with passwords — you know, have fewer of them (synchronization and federation come to mind here) and make sure that they are robust, hard-to-guess passwords. And make sure nobody can compromise the password database itself, because that’s how massive exploits happen.
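On that last point, “make sure nobody can compromise the password database” in practice means never storing passwords at all, only salted, iterated hashes of them. A minimal sketch using Python’s standard PBKDF2 support (the function names and parameter choices are illustrative):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune for your hardware

def hash_password(password: str) -> tuple:
    """Return (salt, digest) to store; the password itself is never stored."""
    salt = os.urandom(16)   # unique per user, so precomputed tables are useless
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

With this kind of storage, a stolen password database forces the attacker to grind through the iteration count for every guess, instead of reading passwords straight out of the table.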

And that’s my $0.02 for today.

Approvals – Concurrent or Sequential?

Friday, May 20th, 2011

An issue that seems to come up with every identity management project, and increasingly with privileged access management projects, is whether approvals workflows should run in a parallel or serial fashion. To clarify, we’re talking about cases where there is a request for something – create a new user, change someone’s entitlements, grant access to a privileged account, whatever – and two or more people have to approve it before it can be fulfilled. When this happens, should we invite those people one at a time, or all at once?

As it happens, I have a pretty strong opinion about that. 😉

From a security perspective, I think it makes no difference. The security policy generally says “so long as persons A and B (.. and C and …) agree that the request is business-appropriate, then go ahead and do it.” Conversely, if at least one of the individuals in this set says “no”, then block the request.

In other words: security policy has no preference for concurrent vs. serial. It just says “meet the minimum requirements and get no rejections before proceeding.”

What about the responsiveness (SLA) of the system? If you invite authorizers sequentially, then the time elapsed between submission and fulfillment (assuming the request is approved) is the sum of the individual authorizers’ response times, i.e., T(A) + T(B) + T(C) + … If any one authorizer is slow to respond, the request takes a long time to complete. This is bad for business. Alternatively, if the approvals process is concurrent — i.e., we invite all the authorizers at the same time, when the request is submitted, and fulfill it once they have provided the minimum required set of approvals — then the SLA is max( T(A), T(B), T(C), … ), i.e., fulfillment happens as soon as the slowest authorizer gets back to us. Definitely better.
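The sum-versus-max difference is easy to see with numbers. A toy calculation (the response times are made up for illustration):

```python
# Hypothetical authorizer response times, in hours.
response_hours = {"A": 4.0, "B": 26.0, "C": 2.5}

serial_sla = sum(response_hours.values())    # invite one at a time: T(A)+T(B)+T(C)
parallel_sla = max(response_hours.values())  # invite all at once: max(T(A),T(B),T(C))

print(f"serial:   {serial_sla:.1f} h")    # 32.5 h
print(f"parallel: {parallel_sla:.1f} h")  # 26.0 h
```

Note that the parallel SLA is bounded by the slowest single approver, no matter how many approvers there are, while the serial SLA grows with every name added to the list.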

OK, so security doesn’t care and SLA prefers concurrent approvals. What other variables are there?

One argument raised is “we did it this way on paper and we don’t want this project to turn into a business process re-engineering quagmire.” That’s fair – to a point. If the old process identified who should approve something, and all you change in the new process is sequence, but not basic security logic, then I’d argue that you aren’t really doing business process re-engineering. Instead, you are applying a minor optimization (more favourable process timing) to an old process. In other words, unless you have some intense politics to overcome, I’d say rearrange the process timing and don’t turn it into a big discussion.

What else?

Another argument I hear in favour of sequential authorization is that in the event that an early approver rejects a request, this saves subsequent approvers from the trouble of being invited to review the request. Approvers get fewer junk e-mails and are happier as a result. Makes sense – but how often does that really happen? In talking to our customers, I’ve learned that well over 90% of all requests are approved. This is for many reasons, including: (a) requests submitted by automation are almost always correct; (b) business users have better things to do than submit dumb requests; (c) everyone knows that there is an audit trail, so they don’t want to be caught requesting things that are not business appropriate; and (d) when requests are rejected, it’s almost always due to errors in input at request time, rather than the *intent* of the request being denied. In other words, if we do better form validation for the requester, we’ll wind up with even fewer rejected requests.

So the nuisance-to-later-approvers effect is very minor at best. From a business value perspective, what’s more important: rapid fulfillment and a tight SLA for change requests, or sparing approvers the 1-in-10 or 1-in-20 request e-mails that turn out to be pointless because someone else could have rejected the request first? I can’t speak for everyone, but it takes me less than a second to delete a spam e-mail, and maybe a few seconds to read a workflow request e-mail and decide that it’s nonsense. I just don’t buy the nuisance argument in favour of serialization.

So thus far we have 1 vote and 3 abstentions – SLA is better if we invite authorizers all at once. Security doesn’t care, the business process re-engineering argument is a paper tiger and the nuisance argument doesn’t stand up to data about real-world approval rates.

The final question is configuration complexity. Which is easier to set up in the workflow engine – serial or parallel? While nobody talks about this much, I think this is the main reason for choosing one system over the other. It is *very* difficult to configure a traditional workflow engine to invite an unknown number of approvers all at once and then wait for a minimum set of approvals before proceeding. Indeed, with a traditional flow-charting tool, it’s quite a nuisance to configure even basic approval processes with a fixed number of authorizers, once you allow for reminders, escalation, delegation and so forth. Try drawing a flowchart for N approvers (N not defined in advance) who are invited to review something, get e-mail reminders if they don’t respond quickly enough, get replaced with alternates if they continue to remain silent, etc. Very hard to draw.

In my opinion, people implement serial workflows for really just two reasons: (a) they used to do it that way on paper and (b) their workflow system lends itself to serial approvals more than to parallel ones.

That’s a terrible reason to do anything, by the way. “I’ll do it this way because my tool can’t manage anything better.”

Get a better tool.

Of course, I’m biased, because Hitachi ID Identity Manager is designed for concurrent invitations sent to a variable number of dynamically selected authorizers, by default. Plus early escalation away from authorizers who Exchange says are out of the office. Plus reminders, escalation and delegation. Out of the box.

So next time you think about approvals workflow, please consider a parallel process. It’s just better – better SLA, same security, no downside that I can see.

— Idan

Tablets taking over?

Thursday, May 19th, 2011

HP reported lower PC sales yesterday and everyone is theorizing that the rapid growth in the tablet market (and let’s be honest, it’s really just iPads that are selling like hotcakes) is the cause.

This makes one think – are tablets a replacement for PCs? Are people buying them instead of new PCs (from HP or others)?

I think the answer is actually more complex than yes/no. First, while I think tablets are a cool technology, they are in no way a viable replacement for a PC. They are a neat way to surf the Internet and perhaps read a book while reclining on the couch – and in general they are a cute way to consume, but not generate, content.

Moreover, I don’t believe anyone would buy a tablet as a primary device. It’s always an “I have one or two PCs, and my next device will be…” situation. In fact, to be really useful, tablets have to interact with a primary PC, to archive content, repackage media, etc. Tablets don’t replace PCs, they augment them.

So what’s going on in the market? I think the answer is pretty simple — consumers only have so much money to spend on gadgets, and this year they are spending it on adding tablets to their hardware mix. Once the market for tablets has saturated, people will resume buying other devices and we’ll see a new equilibrium.

So tablets don’t replace PCs, but the cost of tablets displaces budget for PCs.


— Idan

Another day, another hack

Tuesday, April 5th, 2011

Just read this, about an ex-admin at Gucci who retained both VPN and admin access and caused some harm after being let go:

The sad thing is that – statistically – this sort of thing is all too predictable. The failures here are pretty simple, if numerous:

* The organization allowed this guy to create a user profile for a fake employee. Can you say broken controls? I know you can…
* The help desk subsequently authenticated this fake employee and enabled his VPN one time password device. What happened to the approvals process? Shouldn’t this bogus employee’s manager have had a say about VPN access?
* It sounds like admin accounts had shared, static passwords. Once on the VPN, this ex-employee was able to use his old passwords to cause damage. Why didn’t Gucci implement a privileged access management system to eliminate static and shared passwords?

Organizations can’t continue to assume that all employees will behave professionally and ethically all the time. While that’s going to be true 99.9% of the time, the 0.1% of the time when it’s not is a problem.

So to abuse a term from the cold war: “Trust, but verify!”

Obviously identity and access management systems, implementing sound processes, would have locked this user out of the network. Just as clearly, privileged access management systems would have blocked this user, even if he got back on the network, from gaining admin rights.

Compromised CA – another kind of authentication?

Monday, April 4th, 2011

Last month, a major certificate authority was compromised, leading to the issuance of fraudulent SSL certificates for several major sites.

When we think about authentication, we are normally talking about verifying that a (human) user is, in fact, who he or she claims to be.

This incident raises questions about a whole other type of authentication — how does your web browser authenticate the secure (HTTPS) server it’s communicating with? Technically, this is done using a signed SSL certificate — a certificate authority (CA) signs the public key of the site you are visiting. Your browser comes with a list of about 300 “trusted” CAs and will silently accept SSL web site certificates if they are signed by any of them.

The trouble with this model is: should you really trust this arbitrary list of CAs? Some of them may not operate very securely, which means that a compromise of their systems could lead to bogus site certificates being issued, which your browser would then treat as valid. Another question is whether you should trust these organizations at all. Some of them are government agencies, in countries with dubious track records regarding human rights or due process. Would you trust the US government to vouch for the security of a web site? No? What about a government-owned telephone company in the United Arab Emirates? Odds are pretty good that your web browser already trusts the latter…

But if you don’t trust the CAs, then who do you trust? If you do trust them, when do you stop trusting them? How do you train your browser to stop trusting one of these CAs? This is a sticky question, because anything that’s complicated will be rejected by end users.

One idea is for the browser to display the chain of trust to the end user whenever a “secure” site is visited. This means that a user would see a message like “You are about to visit – a secure site vouched for by Verisign of Mountain View, California, USA.” The idea here is that a user might be startled if one day he sees a message like this, instead: “You are about to visit – a secure site vouched for by TURKTRUST Bilgi Iletisim ve Bilisim Guvenligi Hizmetleri A.S.” (Nothing against Turktrust here – this is just an example of a strange combination.)
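The raw material for such a “vouched for by” message is already in the certificate every HTTPS server presents. A sketch of how a client could surface it, using Python’s standard `ssl` module (the function names are mine; `getpeercert()` and its issuer layout are the real API):

```python
import socket
import ssl

def issuer_org(cert: dict) -> str:
    """Pull the vouching organization out of an ssl.getpeercert()-style dict."""
    # The issuer field is a tuple of RDNs, each a tuple of (name, value) pairs.
    issuer = dict(rdn[0] for rdn in cert.get("issuer", ()))
    return issuer.get("organizationName", issuer.get("commonName", "unknown CA"))

def fetch_cert(host: str, port: int = 443) -> dict:
    """Fetch the certificate a server presents, validated against the CA bundle."""
    ctx = ssl.create_default_context()  # trusts the platform's CA list
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# e.g.: print(f"This site is vouched for by {issuer_org(fetch_cert('example.com'))}")
```

A browser showing this string prominently would at least make the Verisign-vs-unexpected-CA contrast visible to the user, which today it is not.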

Another idea is for your browser to frequently download a refreshed list of CAs, with ones that are known to have been compromised deleted. Some browsers already do this, but the process that leads to a CA being removed is a big question mark. If a CA is used by a government to lawfully monitor its own citizens, should a company such as Microsoft (that makes IE) or a foundation such as Mozilla (that makes Firefox) revoke that government’s CA in their respective web browsers? What constitutes a serious enough incident to revoke a CA?

One more idea might be to set up a peer-to-peer relationship between CAs. To be accepted by a browser, perhaps the CA’s own key should be signed by a few other CAs, and vice versa. This way, a “rotten apple” CA would be quickly removed by its peers. Once again, the devil is in the details – CAs would be incentivized to create a tightly knit oligarchy and block new competitors from entering the market. That’s not the intent at all!

Regardless of how the question is addressed, something really needs to be done to secure the server authentication process — too much hangs in the balance!

— Idan