
Hitachi ID Systems Blogs

Archive for May, 2011

RSA breach, round 2: Lockheed Martin

Saturday, May 28th, 2011

When RSA originally announced their security breach, they were quite circumspect about what exactly was stolen. There was lots of conjecture flying around, but nobody knew for sure, because RSA wasn’t saying much.

The RSA announcement was here:

http://www.rsa.com/node.aspx?id=3872

What set the industry abuzz was the suspicion that:

  • The attacks were carried out by state actors, not just random criminals.
  • The attacks compromised key material used in the RSA SecurID token authentication process.
  • This key material could be used, by a reasonably sophisticated attacker, to impersonate a legitimate user in an organization that relies on RSA SecurID tokens.

Nobody knew for sure, but this seemed like a strong possibility and a dangerous one at that.

Today we started getting confirmation that exactly this scenario is playing out. Lockheed Martin, one of the largest US military contractors, is reporting ongoing attacks related to RSA tokens, which are typically deployed to authenticate remote users over VPN connections.

http://www.nytimes.com/2011/05/28/business/28hack.html

So if reports are to be believed – and where there’s smoke there is usually a fire – then it’s likely that a state actor (probably China) first compromised RSA to acquire key material for all RSA tokens everywhere, then used this data to construct fake tokens and attack user accounts at interesting organizations, including US military contractors.

If your organization uses RSA tokens, then you presumably deployed them to increase the security of remote user connections to your network from a somewhat complex single factor (a password) to two factors: evidence that the user physically possesses his token, plus an even simpler knowledge-based factor (typically a 4-digit PIN).

What you actually got, however, now that the key material has been breached, is a downgrade from a single, moderately complex password to a single, decidedly simple PIN. The token part can be impersonated by at least one foreign entity. Your adversary still has to figure out which token is associated with which of your users, and apparently they are using phishing to do exactly that.
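To see why a leaked seed is so devastating, it helps to look at how one-time-password tokens work in general. RSA has never published SecurID’s internals, so the sketch below uses the open HOTP standard (RFC 4226) as a stand-in; the structural weakness is the same. The code a token displays is a pure function of a secret seed and a moving counter, so anyone holding the seed can compute the “something you have” factor without ever touching the hardware:

```python
import hashlib
import hmac
import struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """Compute an RFC 4226 one-time password from a shared seed."""
    msg = struct.pack(">Q", counter)                    # 8-byte counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# An attacker holding the stolen seed computes the same codes the
# physical token displays -- no hardware required.
seed = b"hypothetical-leaked-seed-record"
print(hotp(seed, counter=42))
```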

So what to do?

Clearly, the RSA tokens should be replaced. I imagine RSA will be offering replacement tokens based on different key material. I wouldn’t go there, however, since the basic problem with this architecture is that there is a single point of failure — RSA — and that’s a very tempting target for powerful adversaries.

To RSA’s competitors — do your architectures also have this weakness? Unless you can demonstrate that your tokens don’t aggregate risk in the same way, you are guilty by association…

Perhaps another token solution? Or smart cards, if you have pervasive readers? Or combination smart-card/token devices, if you don’t have card readers everywhere, or one of several mechanisms that leverage user mobile phones as an authentication factor?

All of these make sense — choose the one that works for you.

Heck, going back to just passwords, but making them strong ones and authenticating the endpoint (i.e., is this the same PC that my user usually signs on from?) would be better than the RSA tokens at this point. More convenient for end users too.

Whatever you do, think about risk aggregation. Maybe that’s the new motto for authentication technologies.

– Idan

Trends in enterprise authentication

Tuesday, May 24th, 2011

Someone recently asked me what trends I could see in the kinds of authentication used by medium to large organizations. I thought I’d share my response here – it might be of general interest.

First, I’d like to say that despite every prognostication for years now that we’ll be moving away from passwords Real Soon Now, what I see in the real world is more and more passwords.

This is not to say that organizations aren’t evolving – it’s just that nothing is as cost effective or well understood as passwords. Yes, passwords have security and usability problems, but consider the alternatives:

* Biometrics: you typically need extra hardware at the user’s device, and no matter which biometric you choose, some users will be unable to use it (no hands, no fingers, unscannable fingerprints, no usable retinal image, etc.).

* One time password tokens: extra hardware for the organization to purchase and distribute, extra junk for users to carry around, serious boundary-condition problems when a user loses his token or leaves it at home, plus the recent debacle at RSA.

* Smart cards: same problems as tokens, plus more difficult integration and a reader required at every endpoint. Ever tried to use a smart card with an app you sign into from your smart phone? Moreover, smart cards carry a PKI certificate payload, so organizations have to stand up and manage a PKI too. That’s tons of fun.

* Mobile phone as authentication factor: increasingly popular, but often as a backup authentication factor, rather than a primary, because it’s still more of a nuisance to users than just a password.

So what’s changing? The big change I see is user location and endpoint device authentication. Sites like facebook check to see not only that I typed the correct password, but also that I’m signing on from a computer they have seen me use before. If not, they’ll ask me to take extra steps, such as recognizing the faces of my friends and/or answering some personal questions. I think this trend is definitely on the upswing.
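Here is a minimal sketch of that pattern – all names are hypothetical, and real sites (facebook included) use far richer signals than this. A correct password from a familiar device logs the user straight in; an unfamiliar device triggers a step-up challenge before it is trusted:

```python
import hashlib

# Fingerprints each user has previously signed in from (normally persisted).
known_devices: dict[str, set[str]] = {}

def device_fingerprint(user_agent: str, ip_prefix: str) -> str:
    """Crude fingerprint; real systems combine many more attributes."""
    return hashlib.sha256(f"{user_agent}|{ip_prefix}".encode()).hexdigest()

def login(user: str, password_ok: bool, user_agent: str, ip_prefix: str) -> str:
    if not password_ok:
        return "deny"
    fp = device_fingerprint(user_agent, ip_prefix)
    if fp in known_devices.setdefault(user, set()):
        return "allow"    # familiar endpoint: no extra friction
    return "step-up"      # new endpoint: extra questions before trusting it

def remember_device(user: str, user_agent: str, ip_prefix: str) -> None:
    """Whitelist this endpoint after a successful step-up challenge."""
    known_devices.setdefault(user, set()).add(
        device_fingerprint(user_agent, ip_prefix))
```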

What else? More and more web sites are supporting federated authentication. I can sign into the airport Wi-Fi using my facebook or yahoo accounts, for example. That’s the new pattern – never mind registering for each and every site (newspapers? stores? airports? coffee shops?). Just use your e-mail or social networking ID on most sites, except perhaps for things like banks, which are held to a higher security standard.

And for those organizations that have already deployed tokens or smart cards – I see some of them retiring the technology and going back to passwords. Seriously? Yes. The integration and cost of ownership issues have turned out to be too high for many organizations, and when the economic times are tough, the “cool but hard to use and costly to support” technology goes out the window.

So who is still using smart cards and PKI? Government. That’s about it, really. Private sector organizations are just not going there in any serious way.

And who is using OTP? Fewer and fewer organizations. Mobile phones turn out to be a more user friendly option.

So aren’t we less secure, if everyone reverts to passwords?

Maybe, but first we can mitigate the security problems with passwords — you know, have fewer of them (synchronization and federation come to mind here) and make sure that they are robust, hard-to-guess passwords. And make sure nobody can compromise the password database itself, because that’s how massive exploits happen.
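On that last point, the standard mitigation is never to store passwords at all – only salted, deliberately slow hashes, so a stolen database can’t simply be read out. A minimal sketch using PBKDF2 from the Python standard library (the iteration count is illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune so hashing is slow for attackers

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```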

And that’s my $0.02 for today.

Approvals – Concurrent or Sequential?

Friday, May 20th, 2011

An issue that seems to come up with every identity management project, and increasingly with privileged access management projects, is whether approvals workflows should run in a parallel or serial fashion. To clarify, we’re talking about cases where there is a request for something – create a new user, change someone’s entitlements, grant access to a privileged account, whatever – and two or more people have to approve it before it can be fulfilled. When this happens, should we invite those people one at a time, or all at once?

As it happens, I have a pretty strong opinion about that. ;-)

From a security perspective, I think it makes no difference. The security policy generally says “so long as persons A and B (… and C and …) agree that the request is business-appropriate, go ahead and do it.” Conversely, if at least one of the individuals in this set says “no,” block the request.

In other words: security policy has no preference for concurrent vs. serial. It just says “meet the minimum requirements and get no rejections before proceeding.”

What about the responsiveness (SLA) of the system? If you invite authorizers sequentially, then the time elapsed between when a request is submitted and when it can be fulfilled (assuming it gets approved) is the sum of the individual authorizers’ response times, i.e., T(A) + T(B) + T(C) + … If any one authorizer is slow to respond, the request takes a long time to complete. This is bad for business. Alternatively, if the approvals process is concurrent (i.e., we invite all the authorizers at the same time, when the request is submitted, and fulfill it once they have provided the minimum required set of approvals), then the SLA is max( T(A), T(B), T(C), … ), i.e., fulfillment happens as soon as the slowest required authorizer gets back to us. Definitely better.
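A toy calculation with made-up response times makes the difference concrete:

```python
# Hypothetical response times, in hours, for three authorizers.
response_times = {"A": 4, "B": 30, "C": 2}

serial_sla = sum(response_times.values())    # invite one at a time: 36 hours
parallel_sla = max(response_times.values())  # invite all at once:   30 hours

print(serial_sla, parallel_sla)
```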

OK, so security doesn’t care and SLA prefers concurrent approvals. What other variables are there?

One argument raised is “we did it this way on paper and we don’t want this project to turn into a business process re-engineering quagmire.” That’s fair – to a point. If the old process identified who should approve something, and all you change in the new process is sequence, but not basic security logic, then I’d argue that you aren’t really doing business process re-engineering. Instead, you are applying a minor optimization (more favourable process timing) to an old process. In other words, unless you have some intense politics to overcome, I’d say rearrange the process timing and don’t turn it into a big discussion.

What else?

Another argument I hear in favour of sequential authorization is that when an early approver rejects a request, it saves subsequent approvers the trouble of being invited to review it. Approvers get fewer junk e-mails and are happier as a result. Makes sense – but how often does that really happen? In talking to our customers, I’ve learned that well over 90% of all requests are approved. This is for many reasons, including: (a) requests submitted by automation are almost always correct; (b) business users have better things to do than submit dumb requests; (c) everyone knows that there is an audit trail, so they don’t want to be caught requesting things that are not business appropriate; and (d) when requests are rejected, it’s almost always due to errors in input at request time, rather than the *intent* of the request being denied. i.e., if we do better form validation for the requester, we’ll wind up with even fewer rejected requests.

So the nuisance-to-later-approvers effect is minor at best. From a business value perspective, what’s more important: rapid fulfillment and a tight SLA for change requests, or 1-in-10 or 1-in-20 request e-mails being pointless because someone else might have rejected the request before you? I can’t speak for everyone, but it takes me less than a second to delete a spam e-mail, and maybe a few seconds to read a workflow request e-mail and decide that it’s nonsense. I just don’t buy the nuisance argument in favour of serialization.

So thus far we have 1 vote and 3 abstentions – SLA is better if we invite authorizers all at once. Security doesn’t care, the business process re-engineering argument is a paper tiger and the nuisance argument doesn’t stand up to data about real-world approval rates.

The final question is configuration complexity. Which is easier to set up in the workflow engine – serial or parallel? While nobody talks about this much, I think this is the main reason for choosing one approach over the other. It is *very* difficult to configure a traditional workflow engine to invite an unknown number of approvers all at once and then wait for a minimum set of approvals before proceeding. Indeed, with a traditional flow-charting tool, it’s quite a nuisance to configure even basic approval processes with a fixed number of authorizers, once you allow for reminders, escalation, delegation and so forth. Try drawing a flowchart for N approvers (N not defined in advance) who are invited to review something, get e-mail reminders if they don’t respond quickly enough, get replaced with alternates if they continue to remain silent, etc. Very hard to draw.
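In code, by contrast, the concurrent version is almost trivial. Here is a sketch with hypothetical names (reminders, escalation and delegation omitted): invite all authorizers at once, block on the first rejection, and fulfill as soon as a quorum approves:

```python
import asyncio
import random

async def ask_authorizer(name: str) -> tuple[str, bool]:
    """Stand-in for a human authorizer; ~90% of requests get approved."""
    await asyncio.sleep(random.uniform(0.1, 1.0))
    return name, random.random() < 0.9

async def run_approval(authorizers: list[str], quorum: int) -> bool:
    approvals = 0
    tasks = [asyncio.create_task(ask_authorizer(a)) for a in authorizers]
    for finished in asyncio.as_completed(tasks):
        name, approved = await finished
        if not approved:
            return False            # one "no" blocks the request
        approvals += 1
        if approvals >= quorum:
            return True             # minimum set of approvals reached
    return False

# e.g. any 2 of 3 dynamically selected authorizers must approve:
print(asyncio.run(run_approval(["A", "B", "C"], quorum=2)))
```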

In my opinion, people implement serial workflows for really just two reasons: (a) they used to do it that way on paper, and (b) their workflow system lends itself to serial approvals more than to parallel ones.

That’s a terrible reason to do anything, by the way. “I’ll do it this way because my tool can’t manage anything better.”

Get a better tool.

Of course, I’m biased, because Hitachi ID Identity Manager is designed for concurrent invitations sent to a variable number of dynamically selected authorizers, by default. Plus early escalation away from authorizers who Exchange says are out of the office. Plus reminders, escalation and delegation. Out of the box.

So next time you think about approvals workflow, please consider a parallel process. It’s just better – better SLA, same security, no downside that I can see.

– Idan

Tablets taking over?

Thursday, May 19th, 2011

HP reported lower PC sales yesterday, and everyone is theorizing that the rapid growth in the tablet market (and let’s be honest, it’s really just iPads that are selling like hotcakes) is the cause.

This makes one think – are tablets a replacement for PCs? Are people buying them instead of new PCs (from HP or others)?

I think the answer is actually more complex than yes/no. First, while I think tablets are a cool technology, they are in no way a viable replacement for a PC. They are a neat way to surf the Internet and perhaps read a book while reclining on the couch – and in general they are a cute way to consume, but not generate, content.

Accordingly, I don’t believe anyone buys a tablet as a primary device. It’s always an “I have one or two PCs, and my next device will be…” situation. In fact, to be really useful, tablets have to interact with a primary PC, to archive content, repackage media, etc. Tablets don’t replace PCs; they augment them.

So what’s going on in the market? I think the answer is pretty simple — consumers only have so much money to spend on gadgets, and this year they are spending it on adding tablets to their hardware mix. Once the market for tablets has saturated, people will resume buying other devices and we’ll see a new equilibrium.

So tablets don’t replace PCs, but the cost of tablets displaces budget for PCs.

Hmmm.

– Idan
