Hitachi ID Systems Blogs

Bait-and-switch, buyer beware

May 2nd, 2014

It seems that this industry can never stop with the bait-and-switch sales strategy. It never ceases to amaze me what some of our competitors will offer, or what some customer organizations will believe.

A couple of recent examples:

  • Last summer, we were talking to a global enterprise interested in replacing a legacy, out-of-support identity management product. They had a really unrealistic timeline: 6 weeks to implement and deploy a replacement. We told them that it would simply not be that fast. One of our competitors assured them that it would be no problem. Our competitor won the business, on the basis of ludicrously false promises. Now, in retrospect, reality has set in. 8 months later (not 1.5 months!), they are about half way done with the replacement effort. Good times.
  • Recently, a software vendor in our space started offering “free” software. Of course, it would only come with limited integrations, and limited features, and just one day of consulting, and would only be free for the first year. But hey – it’s free! And if any organization is silly enough to adopt this “free” software, why they would be hard pressed to walk away when the time comes to start paying for things, or adding useful features.

This is not new – we’ve seen companies promising “enterprise deployment” of privileged access management systems, with thousands of integrations and full business process, in under 30 days. Seriously? Most organizations will need that time just to define their requirements, never mind deploy, test, migrate to production, retest, document and hand-off to operations.

But there must be money in it, because people keep offering things that are clearly too good to be true (and so are not true).

Windows 8.x: Is Microsoft in Trouble?

April 22nd, 2014

I recently bought a new laptop. As is usual these days, it came with Windows 8. This was my first “Windows 8” computer, so I was a bit surprised by the experience, despite what others have reported.

Here’s what it came to:

  • Remove adware, trialware and other junk: 1 hour
  • Apply all available patches from Windows Update: 2 hours
  • Install useful apps (e.g., firefox, acrobat reader, skype, etc.): 1 hour
  • Upgrade to Windows 8.1: 12 hours (overnight – 3.5GB download!)
  • Fix up Windows settings to make Windows 8.1 less annoying (that whole start screen / metro thing is an abomination on non-touch devices): 1 hour

Total to make the Windows 8.1 system usable: 17 elapsed hours, of which about 2 hours required my involvement.

No wonder people buy Macs!

I’m a frequent Linux user, so next I shrank one of the partitions and installed Ubuntu 14.04:

  • Shrink Windows partition: 5 minutes
  • Download Ubuntu 14.04 image: 5 minutes
  • Burn the image to a USB flash drive: 5 minutes
  • Install Linux, including a full set of apps: 10 minutes

Total time to add Linux plus all the usual apps to this laptop: 25 minutes.

The irony is that the Linux install is (I think) much more user friendly. None of that Metro/Start Screen/Modern junk with jarring transitions between full-screen and Windowed apps.

I’m biased, sure, but in both directions:

  • Our company’s products install on and integrate well with Windows — I want Microsoft to succeed, at least in the corporate space.
  • I’ve been a long-term Unix user, so I’m probably more productive and certainly more comfortable with a shell than a GUI.

Still – the difference in installation experience and in the quality of the resulting desktop environment is significant, and not in Microsoft’s favour.

Ultimately, people will choose what they are familiar or comfortable with. Most users will buy a computer based on price point and will live with whatever junk it came with. Compatibility with hardware and apps is a big decision factor, and I expect that support for shiny-and-new printers and scanners is probably a point in favour of Windows. But for the average user, whose PC is used to browse the web, access e-mail, write the occasional document, etc. — Linux looks to me like a much nicer option than Windows.

That’s not good for Microsoft.

Patches, mobile devices and network equipment

April 14th, 2014

The recent ‘heartbleed’ vulnerability should teach us something, and that is to pay more attention to patch processes.

A recap: this vulnerability is due to a bug in certain versions of the open source OpenSSL library, where implementation of the ‘heartbeat’ part of the TLS protocol does not include proper bounds checking. Consequently, a malicious client can effectively ask a server to dump the contents of its memory space, or a malicious server can ask for a copy of a client’s memory.

The patch is trivial (likely 1 line of code). Most organizations have been pretty good about patching their web sites. So far so good.
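The missing check is easy to see in miniature. Below is a toy Python model of the heartbeat handler — the real code is C inside OpenSSL, and the function names and flat “adjacent memory” here are simplifications of mine, though the two-byte attacker-supplied length field mirrors the actual protocol:

```python
# A miniature model of the Heartbleed bug (Python standing in for
# OpenSSL's C; names and memory layout are illustrative only).

def handle_heartbeat(request, adjacent_memory):
    """Vulnerable handler: echoes back as many bytes as the request
    *claims* to contain, trusting the attacker-supplied length."""
    claimed_len = int.from_bytes(request[:2], "big")  # attacker-controlled
    payload = request[2:]
    # No bounds check: reads past the real payload into adjacent memory.
    return (payload + adjacent_memory)[:claimed_len]

def handle_heartbeat_patched(request, adjacent_memory):
    """Patched handler: the trivial fix is a length check."""
    claimed_len = int.from_bytes(request[:2], "big")
    payload = request[2:]
    if claimed_len > len(payload):   # the missing bounds check
        return b""                   # silently discard the request
    return payload[:claimed_len]

# A malicious heartbeat: claims 1000 bytes but actually sends 4.
secret = b"-----BEGIN RSA PRIVATE KEY----- ..."
evil_request = (1000).to_bytes(2, "big") + b"ping"

leaked = handle_heartbeat(evil_request, secret)
assert secret in leaked              # server memory leaks out
assert handle_heartbeat_patched(evil_request, secret) == b""
```

The fix really is that small: compare the claimed length against what actually arrived, and drop the request on a mismatch.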

The challenge is that OpenSSL seems to have been embedded, over the years, in many products. This includes hardware devices such as routers and phones. This has happened both because it is free and because it is useful and well made. I know that, in general, I would rather trust the security of OpenSSL than RSA BSAFE, as the latter has been explicitly compromised by the NSA.

The problem is that while we, as an industry, have gotten pretty good at patching servers and PCs, we have absolutely no handle on the process for patching phones, tablets, network devices and even some apps. There is no clean, standard, autonomous process for patching firmware on corporate devices – it’s a one-device-at-a-time effort. Things get worse on consumer devices: home users won’t know that they should patch or how. Phones are a problem too, because – iOS and Google Nexus devices aside – telcos control the patch process and they are very slow to push patches to consumer devices.

So think of ‘Heartbleed’ as a call to arms to vendors: patch your products quickly and automatically, please. This should include embedded software in hardware products, as well as operating systems and apps on phones and tablets. These segments of the market need to catch up with modern operating systems, which can download and apply patches automatically.

Heartbleed bug – actual impact, history and shoddy journalism

April 10th, 2014

By now, I imagine everyone has read about the “heartbleed” bug in the OpenSSL library.

If you haven’t read about it yet:

  • There is a whole web site dedicated to it here: heartbleed.com
  • The short form is this:
    • OpenSSL is probably the most popular SSL library in the world.
    • There is a bug in OpenSSL such that an attacker can retrieve segments of server memory from an OpenSSL-protected web server.
    • This memory may contain the server’s private SSL certificate, user passwords or anything else.
    • If someone attacks a server this way, the server would not log the attack – the compromise is silent.

So what does this all mean?

  • If you operate a web site protected by HTTPS, with the S — SSL — implemented using the OpenSSL code, you obviously have to patch immediately.
  • Anything on affected web servers may have been compromised. The most worrying bit is the server’s private SSL certificate. Why is that a problem? Because someone who steals that private SSL certificate could subsequently impersonate your web site without alerting users visiting his fake server that they are not communicating with the legitimate site. This is a man-in-the-middle attack.
  • If you sign into a web site that has been compromised this way, then a man-in-the-middle attack such as the above may have been used to steal your password (in plaintext – it does not matter how good your password was). This is especially problematic in public spaces like coffee shops or airports, where a man-in-the-middle attack would be much easier to carry out than – say – when you sign into systems from your home or office.
  • A server that was compromised might also leak password data. Very few systems store plaintext passwords these days, and large web sites would not store password hashes on the front-end web server anyways, so this is mainly a concern if you sign into a smallish web site, which stores password hashes locally on the web server, and if your password was a fairly easy to guess one.
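To see why a leaked hash plus a weak password is dangerous, here is a deliberately simplified sketch of a dictionary attack against an unsalted SHA-256 hash — real sites should use salted, deliberately slow hashes such as bcrypt or scrypt, and the password and word list here are invented:

```python
import hashlib

# Illustrative only: cracking a leaked, unsalted SHA-256 password hash
# with a tiny word list. Weak passwords fall instantly; strong ones
# survive a dictionary attack.

def sha256_hex(password):
    return hashlib.sha256(password.encode()).hexdigest()

def crack(target_hash, candidates):
    """Return the first candidate whose hash matches, else None."""
    for guess in candidates:
        if sha256_hex(guess) == target_hash:
            return guess
    return None

leaked_hash = sha256_hex("letmein")   # what the attacker recovered
wordlist = ["123456", "password", "letmein", "qwerty"]

assert crack(leaked_hash, wordlist) == "letmein"       # weak: cracked
assert crack(sha256_hex("xK9#qLm2vTz"), wordlist) is None   # strong: survives
```

The asymmetry is the point: the attacker only needs your password to appear in a word list, and a fast, unsalted hash lets them test millions of guesses per second.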

Do we know about anyone who has actually been “hacked” this way? The short answer is NO. There is a great risk of compromise here, but I have not heard of any actual compromised web sites, server certificates or passwords. Be careful, but don’t panic, in other words.

How are the media handling this? As you might expect, with lots of sensational and misleading nonsense. I read an article in the local newspaper that was particularly shocking — i.e., the quality and accuracy of the coverage was about as poor as I’ve seen in recent years. If you want a chuckle, read this:

Worst technology reporting in recent memory at calgaryherald.com.

More seriously, Theo de Raadt of the OpenBSD project recently pointed out that all this could have been avoided if a security measure in libc had not been bypassed. I recommend following Theo – he’s an awfully smart guy (and lives a stone’s throw away from my office to boot).

So what to do?

  • Do you operate an affected web site? Patch your OpenSSL library and get/install a fresh certificate, because there is a risk that your old cert was compromised. Good business for the certificate authorities here.
  • Do you sign into an affected web site? (You should assume yes, since OpenSSL is so common). Wait a few days and change your password. Make sure your web browser checks for revoked certificates. On my Firefox instance, I checked about:config and found that app.update.cert.checkAttributes was true. That’s good.

Addendum: as one might expect, the always-brilliant XKCD explains the vulnerability perfectly.

Cloud/IaaS: It’s all about the workload

April 2nd, 2014

So you think moving your server workloads to the cloud will save you money?

Think again.

The cloud paradigm is no longer new to computing, but even when it was a new computing idea, it was already an old commercial idea. When we move workloads to cloud servers, we are in effect leasing servers rather than buying them.

How is that relevant?

I can buy a big screen TV at my local electronics shop, or I can lease one next door. If I lease it, initially it will seem less costly, but over time, leasing will cost more than buying. The same is true with cars: I can buy one and drive it into the ground, over 10 or 15 years, or I can lease one, and replace it with a newer model every couple of years. Leasing will definitely cost more.

So the lesson here is that cloud == leasing, on-premise == buying. Leasing costs more but has the benefits of offloading administration to someone else, plus the opportunity to replace the product or service with a newer version quite frequently. In other words, with leasing, I pay more, and I get more.

IaaS and SaaS are the same. You don’t buy the compute capacity – you lease it. You don’t buy the hosted app – you lease access to it. Someone else manages it and someone else upgrades it once in a while. It costs more in the long run, but you get those benefits.

So does this mean that IaaS is definitely more costly than purchased compute capacity? Mostly, yes. The main exception is sporadic workloads. If I have a VM that needs to run for just 1 hour per day and I buy the capacity, then I’ve effectively purchased 24x as much capacity as I need, so no buy-vs-lease savings can make up the difference. It’s better to lease that capacity for just the time windows when I need it.
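A back-of-envelope comparison makes the point. All of the prices below are invented for illustration, not real quotes:

```python
# Monthly cost of an owned server (purchase amortized over its life,
# plus operations) versus an IaaS VM billed by the hour.
# Every number here is an illustrative assumption.

purchase_price = 6000.0      # server hardware
lifetime_months = 36         # amortization period
ops_per_month = 150.0        # power, space, admin time
owned_monthly = purchase_price / lifetime_months + ops_per_month

iaas_hourly = 0.50           # on-demand VM rate

def iaas_monthly(hours_per_day):
    return iaas_hourly * hours_per_day * 30

for hours in (24, 8, 1):
    cloud = iaas_monthly(hours)
    cheaper = "IaaS" if cloud < owned_monthly else "owned"
    print(f"{hours:2d} h/day: IaaS ${cloud:7.2f} vs owned ${owned_monthly:6.2f} -> {cheaper}")
```

With these assumed numbers, the always-on workload is cheaper on owned hardware, while the 8-hour and 1-hour workloads are cheaper on IaaS — exactly the buy-vs-lease pattern described above.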

What does this mean in practice?

IaaS is cost effective for specific workloads — those that are only run on demand, and are shut down most of the time. Training systems. Demo systems. Peak capacity web farms. POC and lab environments. Testing systems used infrequently. There are lots of workloads where – if you have the discipline to shut things off when not in use – you can save money by moving the runtime platform to the cloud.

But who has the discipline? That’s the real problem. Human users forget to shut down their VMs, so they might move to the cloud to host sporadic workloads, but then they forget to turn things off, and wind up leasing much more capacity than they really need.

This is where Hitachi ID can help. Our Privileged Access Manager can be used to “check out” a whole machine (not just a privileged account), which has the desirable side-effect of turning the machine on. The subsequent check-in, which might be manual or due to a timeout, will suspend the same VM, effectively stopping the billing until the machine is needed again.

This can amount to a huge cost savings for IaaS used to host sporadic workloads.
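The check-out/check-in lifecycle described above can be sketched as a toy state machine. This is my own illustrative model, not Privileged Access Manager’s actual implementation; a real deployment would call the cloud provider’s start/stop API where resume() and suspend() appear:

```python
import time

class ManagedVM:
    """Toy model of check-out/check-in driving a VM's power (and
    billing) state. resume()/suspend() are stand-ins for a cloud
    provider's start/stop API calls."""

    def __init__(self, max_checkout_seconds):
        self.max_checkout_seconds = max_checkout_seconds
        self.running = False          # running == accruing charges
        self.checked_out_at = None

    def resume(self):
        self.running = True

    def suspend(self):
        self.running = False

    def check_out(self):
        self.checked_out_at = time.monotonic()
        self.resume()                 # side effect: machine powers on

    def check_in(self):
        self.checked_out_at = None
        self.suspend()                # side effect: billing stops

    def enforce_timeout(self):
        """Periodic job: force a check-in once the checkout expires."""
        if (self.checked_out_at is not None
                and time.monotonic() - self.checked_out_at > self.max_checkout_seconds):
            self.check_in()

vm = ManagedVM(max_checkout_seconds=8 * 3600)
vm.check_out()
assert vm.running          # billing while checked out
vm.check_in()
assert not vm.running      # billing stopped at check-in
```

The timeout path is what saves money in practice: even if the user forgets, the periodic enforce_timeout() job checks the machine back in and suspends it.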

If your IaaS usage fits this pattern, call us — we can save you money.

Modern day IAG delusions

March 27th, 2014

“HR is the source of truth” –> Really? Are they reliable? Timely? Do they know about contractors? Vendors? Are they a willing participant in non-HR processes (such as access management)?

“Job title determines role” –> Really? Who defines job titles? What governance process determines what titles are valid and who can get which ones? How are they updated? Is the level of granularity of the random string of text on my business cards really the same as my access rights?

“Just define and assign roles, then all the access rights problems will be solved.” –> Really? You think the access rights of back-office workers are easily compartmentalized, well defined and static, so that they can be trivially assigned via roles?

Consumer credit card data breaches

January 14th, 2014

Another day, another breach, or so it seems.

Both Target and Neiman Marcus have been victims of large scale compromise of customer data, including credit card data.

Aside from the sheer size of these compromises — tens of millions of payment card numbers — what stands out is that they seem to have been carried out in the physical retail environment.

For a long time, the pattern of breaches reported in the press has been compromises of web sites or back office operations, so consumers have probably come to believe that if they were at risk at all (probably not many worry about this, given the volume of online purchases), it was when shopping on-line rather than in person.

The reality, however, is that a lot of fraud and identity theft happens in the physical world. Low tech attacks include “dumpster diving” to get personal information (discarded bank statements and the like), telephone based “social engineering” attacks (I call your bank or a retailer and pretend to be you) and in-person attacks (I visit the bank and try to impersonate you or I use a stolen truck to literally break off and haul away an entire ATM).

Now we are seeing mixed attacks. Point of sale systems are under attack, but sophisticated technology (such as RAM scrapers and code that sends stolen data home) is used as well.

This means that corporations have a much larger physical perimeter to protect — including their retail operations and “road warrior” users. However, the defenses have not really changed. They begin with physical security. In this case, that means hardened devices and locked server rooms, including in the retail world. Electronic defenses are the same as they have been for years — encrypt filesystems, authenticate/authorize/audit both regular and privileged users, encrypt storage and transmission, deploy and maintain anti-malware and patches, etc.

The payment card industry actually has excellent standards for this stuff. The “Payment Card Industry Data Security Standard v2” (PCI-DSS v2) is clear, reasonable and explicit.

One would hope that these retailers, and anyone else that touches credit card data, actually complies with these standards.

For those that need help, we do offer some assistance:

  • Hitachi ID Privileged Access Manager to secure access to root, admin, DBA and service accounts.
  • Hitachi ID Identity Manager to ensure users get appropriate access rights and have that access deactivated promptly and reliably when they leave the organization (a big deal in retail!)
  • Hitachi ID Password Manager to securely and efficiently manage corporate credentials, lowering the risk of a user’s (weak) password being compromised and that user’s access then being abused.

The bad guys have upped their game. The good guys must follow suit.

Adobe hack

October 30th, 2013

Reports are circulating today that a recent hack of Adobe and exfiltration of customer data was larger than thought – data about 38 million active users was compromised:

nakedsecurity.sophos.com

This raises some interesting questions:

  • There is a fundamental risk to a subscription-based business model, which is what Adobe.com has moved to. If you want to charge your customers monthly, like a utility, to use your products or services, then necessarily you have their contact info, credit card numbers, etc. That makes for quite an attractive target for compromise!
  • Clearly the data in question should be secured very carefully — encrypted, access controlled (e.g., using a privileged access management system), monitored, etc. Something in these controls clearly failed at Adobe.

This is a warning to customers to beware of sharing credit card and similar data with firms that retain it indefinitely. It is also a warning to firms with such practices to be incredibly careful.

PCI-DSS includes lots of good guidelines about how to protect such data — I wonder which rules Adobe managed to not follow?

Finger prints again

September 23rd, 2013

Interesting. How long has the iPhone 5S been on the market? 2 weeks?

Unsurprisingly, the finger print scanner has already been “hacked” — meaning that if someone can take a photo of your fingerprint, for example from your beer glass, they can photo manipulate it and cover it in latex or just plain glue to make a working pattern that will sign them into your phone.

The Guardian

Chaos Computer Club

This is no big deal – most and perhaps all consumer grade finger print scanners are vulnerable to this kind of thing. It’s just evidence that:

  • A finger print scanner is all about convenience, not about security
  • If you want security, combine multiple authentication factors.

I wonder if that basic advice shows up anywhere on Apple’s marketing material or user guides? Probably not.

Fingerprint scanners: a sign of the end of growth?

September 12th, 2013

Finger print scanners may have seemed high tech once upon a time, but they became commodity technology years ago. In fact, for years PC makers were adding bells and whistles, and it was around the time that they ran out of useful ideas (and added finger print scanners) that growth in the PC business seems to have come crashing to a halt.

Now the PC makers weren’t doing anything wrong — it’s just that the market had saturated and they ran out of useful things to offer, with finger print scanners being the last, mostly-useless gadget they could think of to throw in for minimal incremental cost. By the time these things showed up, laptops were powerful, had lots of disk, CPU and RAM, had built-in gigabit Ethernet and Wifi, speakers and microphones, webcams, etc. i.e., quite nice machines, for not much money.

Apple just released new iPhones today, and one of them has a fingerprint scanner. I think that marks the end of growth in the smart phone hardware market, just as it did for PCs. Smart phones today are nice — high resolution colour screens, decently fast CPUs, lots of RAM and storage, WiFi, GSM, LTE, tethering, apps, music, video, document processing, GPS/navigation, accelerometers, light sensors, response to speech input, front and back cameras, etc.

I don’t think there’s all that much left to add – just slightly better, faster and cheaper with each generation.

This is a big problem for the phone manufacturers, as their volumes will flat-line (or perhaps already have) and their margins will compress.

The only growth left is to saturate developing country markets – China, India, etc. That won’t be easy for the major players, as China at least has quite strong domestic manufacturers who play well in a market where relationships with the telcos matter a lot and where consumers are very price conscious.

So I’ll stick my neck out and make some predictions:

  • Apple revenues will stay flat and they will become a utility, as Microsoft, Cisco and Intel have before.
  • Samsung has a bit more runway (better product mix and geographic diversity) but in a couple of years they will flat line too.
  • We won’t see any major innovations in smart phones for years.
  • Maybe others will pick up on the finger print gimmick, and maybe not – I don’t think anyone cares.

By the way, this is only peripherally an identity-related blog entry. :-) Finger print scanners are a biometric authentication device, so fair game. But really, it’s about the rapid maturation and saturation of the smart phone market, which is interesting in its own right.
