
Hitachi ID Systems Blogs


Do you still limit daily password changes?

Wednesday, July 30th, 2014

Once upon a time, password history enforcement used code that stored hashes of the “last N” passwords for each user.  N was typically a small integer – often less than 10.  Some users would figure out what N was and, when the time came to change their password, made N+1 password changes, where the last password reverted to their original password.
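
For concreteness, here is a minimal sketch of how such a “last N” history check typically works – purely illustrative Python, not any particular product’s code – and of why a small N leaves room for the N+1 trick while a large or open-ended N does not:

    # Illustrative sketch only: a classic "last N hashes" password history check.
    import hashlib
    import os

    def hash_password(password, salt):
        # Real systems should use a slow KDF (bcrypt, scrypt, argon2);
        # PBKDF2 just keeps this sketch self-contained.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    class PasswordHistory:
        def __init__(self, max_entries=10):
            # max_entries=None models an open-ended history.
            self.max_entries = max_entries
            self.entries = []  # list of (salt, hash) pairs

        def is_reuse(self, candidate):
            return any(hash_password(candidate, salt) == h for salt, h in self.entries)

        def record(self, new_password):
            salt = os.urandom(16)
            self.entries.append((salt, hash_password(new_password, salt)))
            if self.max_entries is not None and len(self.entries) > self.max_entries:
                # The oldest hash falls out of the list; with a small N, this
                # is exactly what the "N+1 trick" exploits.
                self.entries.pop(0)

With max_entries=10, eleven throwaway changes push the original password out of the history; with max_entries=None, it never falls out.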

Such users are both smart and stupid.  Smart, because they figured out how the underlying security system worked and how to circumvent it.  Stupid, because they opted for static passwords and thus weakened the security of their own accounts.

Some organizations figured out that they had users using the “N+1 trick,” and found a low-tech way to make it painful for the offending users.  They limited the number of times a user could change his own password in the course of a single day, typically to just 1.  A user who wanted to use the “N+1 trick” would be obliged to do so over N+1 days, which pretty much eliminated the attractiveness of the scheme.

Unfortunately, limiting the number of password changes per day is unfriendly to users.  Suppose I change my password, then decide that the new password is not so nice after all and want to change it again, to a better value.  Or suppose I change my password, forget it and need to reset it.  These are legitimate use cases, and users should be able to change their passwords as often as needed.

Fast forward to the present, where N can either be very large (say 100) or simply open-ended (Password Manager supports this).  In either case, we get the same benefit as the old “max once daily” rule, but without annoying users who just want to change their password again.

In this context, “max once daily” loses any conceivable benefit.  There is no security advantage to preventing an authenticated user from changing his own password as often as he likes, and no added protection against password reuse when the password history is large or open-ended.  There is only downside.

I raise this because we still, occasionally, meet organizations that insist on enforcing this rule.  Get over it: open-ended history is a better solution.  The “max once daily” rule should be relegated to a historical curiosity and removed from enforced password security policies.  Use a product like Password Manager to enforce open-ended history, or just set N to a large number.

Default passwords strike again!

Wednesday, June 11th, 2014

Amusing article in the Winnipeg Sun. A couple of “computer whiz” grade 9 kids used a search engine to find an operator’s manual for the ATM at their local grocery store. The manual includes instructions for signing into the ATM as an administrator, along with a default password.

Lo and behold, the BMO bank machine still used the default admin password, so the kids got in. Now these are nice kids, so they visited the local branch, explained the problem and made sure that it was fixed. No harm done – rather a good deed, in fact.

What’s interesting here is that in this day and age, a *bank* was so lax about security as to leave a *cash machine*, which is protected by exactly *zero* physical security and is installed in a public place, with a *default administrator password*.

I can’t think of a more clear-cut use case for deploying a Privileged Access Management solution. This password should have been a long, pseudo-random string, changed daily – not a factory default.
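
As a purely hypothetical sketch of the underlying idea – not the actual product logic – generating such a password is trivial with any modern standard library; the hard part is the vaulting, rotation and disclosure workflow around it:

    # Hypothetical sketch: generate a long, unpredictable admin password.
    # A vault product would also set it on the device, store it encrypted,
    # and disclose it only to authorized staff at check-out time.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

    def new_admin_password(length=32):
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(new_admin_password())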

eBay compromise – the largest incident yet?

Thursday, May 22nd, 2014

It seems that compromised password databases are getting bigger and bigger.

The latest one is a report that 145 million user account records (i.e., username, hashed password, some profile information such as date of birth) were exfiltrated from eBay.

(Gotta love that word … exfiltrated.)

I don’t know what attack vector was used to compromise this data, other than that the attack was carried out from inside the eBay corporate network, so discussing that will have to wait for another day.

As the scale of these incidents gets larger, new problems arise. For example, eBay (the corporation) has reacted very responsibly here – disclosing what they know and advising users to change their passwords. Users are getting used to these kinds of incidents and are trying to change their passwords. So far, so good.

But there are 145,000,000 users trying to change their passwords, more or less all at the same time. The eBay web site clearly cannot keep up. I tried to change my password, but failed:

  • First, it was hard to find the password change screen (but I did find it in the end…)
  • Once I found it, I learned that the eBay site requires confirmation that it’s a legitimate user (me) making the password change, by sending a code to my personal e-mail or phone.
  • But … the system is under such high load that I never got the confirmation e-mail. I tried asking for a text message but the site just refused, complaining about load.
  • What about users who registered an e-mail account with eBay years ago, and no longer have that account? I suppose they cannot change their password – at least not without human assistance, which also won’t scale to 145,000,000 accounts…

In short, at this scale, remediation is a problem. Maybe I’ll try to change my password tonight or tomorrow. Hopefully the storm of password changes will have slowed down by then.

What about users who employ the same password on multiple web sites (i.e., almost everybody)? This incident implies that 100,000,000 or more users are now trying to change their passwords on Facebook, Reddit, Flickr, Google, live.com, etc. I bet those sites are slammed too, and perhaps also unable to respond.

All this sounds like a strong argument for federating identity and authentication, but federating to a few large providers (like Google or Microsoft) would concentrate risk. Imagine if Google or Microsoft were compromised while everybody was using those platforms as federated identity/authentication providers for web sites such as eBay. That would be even worse than the current eBay incident! Moreover, federation creates linkages between accounts on different services, and so has the (unintended) effect of diluting privacy.

Ultimately, I think federating to a large number of small providers would be best, because compromise of any one provider would have only modest impact. Unfortunately, we are still very far from such an architecture.

Another day, another rogue admin

Wednesday, May 21st, 2014

Some people never learn, I guess.

This guy: (link to IT World article) will spend some quality time in jail for sabotaging work systems after learning about his own imminent termination.

Getting fired sucks. Going to jail for these shenanigans is definitely worse.

Clearly, a privileged access management system could have mitigated the harm.

Real security: the new SOX

Monday, May 5th, 2014

In the past few years, the looming threat of non-compliance with Sarbanes-Oxley (SOX) has driven much spending on IT security. This, despite the fact that the “security” bits of the SOX legislation are laughably vague. Section 404 of SOX requires publicly listed companies to implement, assess and certify the quality of the internal controls that affect financial systems and data. That’s about it. Weak.

Despite this ambiguity, fear of SOX non-compliance has led corporations to spend billions on IT security. I imagine much of that money was spent on useless technology and process – things that *look* like they work, but may not actually be effective.

That was then. This is now.

I just read that the CEO of Target was removed, in large part because of the huge security incident there, in which tens of millions of credit card records were compromised. Now that’s a serious threat, with a material impact on the corporation, both in terms of liability (to the card companies) and brand (shoppers going elsewhere because they are afraid of the nuisance of identity theft). It seems that the impact on management is actually more serious than anything SOX threatened. I can’t recall a CEO of a major corporation ever being terminated over an IT security breach before. But now we have one, and I bet all the other CEOs will take note.

http://www.reuters.com/article/2014/05/05/us-target-ceo-idUSBREA440BD20140505

It will be interesting to see whether the response to this will be any different than it was to SOX – i.e., whether the focus this time will be on actual security, rather than merely on passing audits.

Of course, we have a vested interest in this game. Organizations seeking real security need to worry about all kinds of things – control over privileged accounts, prompt/reliable/complete access deactivation when users leave, assigning needs-appropriate access rights, strong user passwords and much more. We make software that addresses these problems.

Bait-and-switch, buyer beware

Friday, May 2nd, 2014

It seems that this industry can never stop with the bait-and-switch sales strategy. It never ceases to amaze me what some of our competitors will offer, or what some customer organizations will believe.

A couple of recent examples:

  • Last summer, we were talking to a global enterprise interested in replacing a legacy, out-of-support identity management product. They had a wildly unrealistic timeline: 6 weeks to implement and deploy a replacement. We told them that it would simply not be that fast. One of our competitors assured them that it would be no problem. Our competitor won the business, on the basis of ludicrously false promises. Now reality has set in: eight months later (not 1.5 months!), they are about halfway done with the replacement effort. Good times.
  • Recently, a software vendor in our space started offering “free” software. Of course, it only comes with limited integrations and limited features, includes just one day of consulting, and is only free for the first year. But hey – it’s free! And if any organization is silly enough to adopt this “free” software, why, it will be hard pressed to walk away when the time comes to start paying for things or adding useful features.

This is not new – we’ve seen companies promising “enterprise deployment” of privileged access management systems, with thousands of integrations and full business process, in under 30 days. Seriously? Most organizations will need that time just to define their requirements, never mind deploy, test, migrate to production, retest, document and hand-off to operations.

But there must be money in it, because people keep offering things that are clearly too good to be true (and so are not true).

Windows 8.x: Is Microsoft in Trouble?

Tuesday, April 22nd, 2014

I recently bought a new laptop. As is usual these days, it came with Windows 8. This was my first “Windows 8” computer, so I was a bit surprised by the experience, despite what others have reported.

Here’s what it came to:

  • Remove adware, trialware and other junk: 1 hour
  • Apply all available patches from Windows Update: 2 hours
  • Install useful apps (e.g., firefox, acrobat reader, skype, etc.): 1 hour
  • Upgrade to Windows 8.1: 12 hours (overnight – 3.5GB download!)
  • Fix up Windows settings to make Windows 8.1 less annoying (that whole start screen / metro thing is an abomination on non-touch devices): 1 hour

Total to make the Windows 8.1 system usable: 17 elapsed hours, of which about 2 hours required my involvement.

No wonder people buy Macs!

I’m a frequent Linux user, so next I shrank one of the partitions and installed Ubuntu 14.04:

  • Shrink Windows partition: 5 minutes
  • Download Ubuntu 14.04 image: 5 minutes
  • Burn the image to a USB flash drive: 5 minutes
  • Install Linux, including a full set of apps: 10 minutes

Total time to add Linux plus all the usual apps to this laptop: 25 minutes.

The irony is that the Linux install is (I think) much more user-friendly. None of that Metro/Start Screen/Modern junk with jarring transitions between full-screen and windowed apps.

I’m biased, sure, but in both directions:

  • Our company’s products install on and integrate well with Windows — I want Microsoft to succeed, at least in the corporate space.
  • I’ve been a long-term Unix user, so I’m probably more productive and certainly more comfortable with a shell than a GUI.

Still – the difference in installation experience and in the quality of the resulting desktop environment is significant, and not in Microsoft’s favour.

Ultimately, people will choose what they are familiar or comfortable with. Most users will buy a computer based on price point and will live with whatever junk it came with. Compatibility with hardware and apps is a big decision factor, and I expect that support for shiny-and-new printers and scanners is probably a point in favour of Windows. But for the average user, whose PC is used to browse the web, access e-mail, write the occasional document, etc. — Linux looks to me like a much nicer option than Windows.

That’s not good for Microsoft.

Patches, mobile devices and network equipment

Monday, April 14th, 2014

The recent ‘heartbleed’ vulnerability should teach us something, and that is to pay more attention to patch processes.

A recap: this vulnerability is due to a bug in certain versions of the open source OpenSSL library, where implementation of the ‘heartbeat’ part of the TLS protocol does not include proper bounds checking. Consequently, a malicious client can effectively ask a server to dump the contents of its memory space, or a malicious server can ask for a copy of a client’s memory.
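
To make the class of bug concrete, here is a tiny, purely conceptual Python illustration – the real flaw is in OpenSSL’s C heartbeat code and the details differ – of a handler that trusts the length claimed by the peer rather than the length of the data actually received:

    # Conceptual illustration only; not the actual OpenSSL code.
    PROCESS_MEMORY = b"...secret keys, session cookies, passwords..."

    def heartbeat_vulnerable(payload, claimed_len):
        # BUG: echoes back 'claimed_len' bytes even if the payload is shorter,
        # so whatever sits in adjacent memory leaks to the requester.
        buf = payload + PROCESS_MEMORY  # stand-in for adjacent heap memory
        return buf[:claimed_len]

    def heartbeat_fixed(payload, claimed_len):
        # The fix is a bounds check: never echo more than was actually sent.
        if claimed_len > len(payload):
            raise ValueError("heartbeat length exceeds payload length")
        return payload[:claimed_len]

    # A 4-byte payload with a claimed length of 40 leaks 36 bytes of "memory":
    print(heartbeat_vulnerable(b"ping", 40))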

The patch is trivial (likely 1 line of code). Most organizations have been pretty good about patching their web sites. So far so good.

The challenge is that OpenSSL has been embedded, over the years, in many products, including hardware devices such as routers and phones. This has happened both because it is free and because it is useful and well made. I know that, in general, I would rather trust the security of OpenSSL than RSA BSAFE, as the latter has been explicitly compromised by the NSA.

The problem is that while we, as an industry, have gotten pretty good at patching servers and PCs, we have no handle at all on the process for patching phones, tablets, network devices and even some apps. There is no clean, standard, autonomous process for patching firmware on corporate devices – it’s a one-device-at-a-time effort. Things get worse on consumer devices: home users won’t know that they should patch, or how. Phones are a problem too, because – iOS and Google Nexus devices aside – telcos control the patch process, and they are very slow to push patches to consumer devices.

So think of ‘Heartbleed’ as a call to arms to vendors: patch your products quickly and automatically, please. This should include embedded software in hardware products as well as operating systems and apps on phones and tablets. These segments of the market need to catch up with modern desktop and server operating systems, which can download and apply patches automatically.

Heartbleed bug – actual impact, history and shoddy journalism

Thursday, April 10th, 2014

By now, I imagine everyone has read about the “heartbleed” bug in the OpenSSL library.

If you haven’t read about it yet:

  • There is a whole web site dedicated to it here: heartbleed.com
  • The short form is this:
    • OpenSSL is probably the most popular SSL library in the world.
    • There is a bug in OpenSSL such that an attacker can retrieve segments of server memory from an OpenSSL-protected web server.
    • This memory may contain the server’s private SSL certificate, user passwords or anything else.
    • If someone attacks a server this way, the server would not log the attack – the compromise is silent.

So what does this all mean?

  • If you operate a web site protected by HTTPS, with the S — SSL — implemented using the OpenSSL code, you obviously have to patch immediately.
  • Anything on affected web servers may have been compromised. The most worrying bit is the server’s private SSL certificate. Why is that a problem? Because someone who steals that private SSL certificate could subsequently impersonate your web site without alerting users visiting his fake server that they are not communicating with the legitimate site. This is a man-in-the-middle attack.
  • If you sign into a web site that has been compromised this way, then a man-in-the-middle attack such as the above may have been used to steal your password (in plaintext – it does not matter how good your password was). This is especially problematic in public spaces like coffee shops or airports, where a man-in-the-middle attack would be much easier to carry out than – say – when you sign into systems from your home or office.
  • A server that was compromised might also leak password data. Very few systems store plaintext passwords these days, and large web sites would not store password hashes on the front-end web server anyway, so this is mainly a concern if you sign into a smallish web site which stores password hashes locally on the web server, and if your password was fairly easy to guess.

Do we know about anyone who has actually been “hacked” this way? The short answer is NO. There is a great risk of compromise here, but I have not heard of any actual compromised web sites, server certificates or passwords. Be careful, but don’t panic, in other words.

How are the media handling this? As you might expect, with lots of sensational and misleading nonsense. I read an article in the local newspaper that was particularly shocking — i.e., the quality and accuracy of the coverage was about as poor as I’ve seen in recent years. If you want a chuckle, read this:

Worst technology reporting in recent memory at calgaryherald.com.

More seriously, Theo de Raadt of the OpenBSD project recently pointed out that all this could have been avoided if a security measure in libc had not been bypassed. I recommend following Theo – he’s an awfully smart guy (and lives a stone’s throw away from my office to boot).

So what to do?

  • Do you operate an affected web site? Patch your OpenSSL library and get/install a fresh certificate, because there is a risk that your old cert was compromised. Good business for the certificate authorities here.
  • Do you sign into an affected web site? (You should assume yes, since OpenSSL is so common). Wait a few days and change your password. Make sure your web browser checks for revoked certificates. On my Firefox instance, I checked about:config and found that app.update.cert.checkAttributes was true. That’s good.

Addendum: as one might expect, the always-brilliant XKCD explains the vulnerability perfectly:

Cloud/IaaS: It’s all about the workload

Wednesday, April 2nd, 2014

So you think moving your server workloads to the cloud will save you money?

Think again.

The cloud paradigm is no longer new to computing, but even when it was a new computing idea, it was already an old commercial idea. When we move workloads to cloud servers, we are in effect leasing servers rather than buying them.

How is that relevant?

I can buy a big screen TV at my local electronics shop, or I can lease one next door. If I lease it, initially it will seem less costly, but over time, leasing will cost more than buying. The same is true with cars: I can buy one and drive it into the ground, over 10 or 15 years, or I can lease one, and replace it with a newer model every couple of years. Leasing will definitely cost more.

So the lesson here is that cloud == leasing, on-premise == buying. Leasing costs more but has the benefits of offloading administration to someone else, plus the opportunity to replace the product or service with a newer version quite frequently. In other words, with leasing, I pay more, and I get more.

IaaS and SaaS are the same. You don’t buy the compute capacity – you lease it. You don’t buy the hosted app – you lease access to it. Someone else manages it and someone else upgrades it once in a while. It costs more in the long run, but you get those benefits.

So does this mean that IaaS is definitely more costly than purchased compute capacity? Mostly, yes. The main exception to this rule is where the workload is sporadic. If I have a VM that needs to run for just 1 hour per day and I buy the capacity, then I’ve effectively purchased 24x as much capacity as I needed, so even the buy-vs-lease cost savings won’t help me. It’s better to lease the capacity for just the time windows when I need it.
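
A back-of-envelope comparison, with entirely made-up prices just to illustrate the break-even logic (the real numbers depend on your provider and your own data-centre costs):

    # Hypothetical prices, for illustration only.
    iaas_rate_per_hour = 0.20     # assumed on-demand rate
    owned_cost_per_month = 90.0   # assumed amortized hardware + power + admin
    days_per_month = 30

    for hours_per_day in (24, 8, 1):
        iaas_cost = hours_per_day * days_per_month * iaas_rate_per_hour
        print(f"{hours_per_day:>2}h/day: IaaS ${iaas_cost:6.2f}/mo vs. owned ${owned_cost_per_month:.2f}/mo")

With these invented numbers, an always-on workload costs $144/month leased versus $90/month owned, while a one-hour-per-day workload costs only $6/month leased – provided it really is shut down the other 23 hours.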

What does this mean in practice?

IaaS is cost effective for specific workloads — those that are only run on demand, and are shut down most of the time. Training systems. Demo systems. Peak capacity web farms. POC and lab environments. Testing systems used infrequently. There are lots of workloads where – if you have the discipline to shut things off when not in use – you can save money by moving the runtime platform to the cloud.

But who has the discipline? That’s the real problem. Human users move sporadic workloads to the cloud, forget to shut their VMs down when they are not in use, and wind up leasing much more capacity than they really need.

This is where Hitachi ID can help. Our Privileged Access Manager can be used to “check out” a whole machine (not just a privileged account), which has the desirable side effect of turning the machine on. The subsequent check-in, whether manual or due to a timeout, suspends the same VM, effectively stopping the billing until the machine is needed again.

This can amount to a huge cost savings for IaaS used to host sporadic workloads.
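
As a rough sketch of the underlying mechanic – this is not Hitachi ID product code – the check-out and check-in events simply need to drive the provider’s start/stop API; here using the AWS boto3 SDK and a made-up instance ID:

    # Sketch only: tie VM power state to check-out/check-in so an idle
    # machine stops accruing compute charges. The instance ID is made up.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical

    def on_check_out():
        # Power the VM on when someone checks the machine out.
        ec2.start_instances(InstanceIds=[INSTANCE_ID])

    def on_check_in():
        # Stop the VM on check-in (manual or timeout); a stopped instance
        # no longer accrues compute charges.
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])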

If your IaaS usage fits this pattern, call us — we can save you money.