Hitachi ID Systems Blogs

Archive for April, 2014

Windows 8.x: Is Microsoft in Trouble?

Tuesday, April 22nd, 2014

I recently bought a new laptop. As is usual these days, it came with Windows 8. This was my first “Windows 8” computer, so I was a bit surprised by the experience, despite what others have reported.

Here’s what it came to:

  • Remove adware, trialware and other junk: 1 hour
  • Apply all available patches from Windows Update: 2 hours
  • Install useful apps (e.g., Firefox, Acrobat Reader, Skype): 1 hour
  • Upgrade to Windows 8.1: 12 hours (overnight – 3.5GB download!)
  • Fix up Windows settings to make Windows 8.1 less annoying (that whole start screen / metro thing is an abomination on non-touch devices): 1 hour

Total to make the Windows 8.1 system usable: 17 elapsed hours, of which about 2 hours required my involvement.

No wonder people buy Macs!

I’m a frequent Linux user, so next I shrank one of the partitions and installed Ubuntu 14.04:

  • Shrink Windows partition: 5 minutes
  • Download Ubuntu 14.04 image: 5 minutes
  • Burn the image to a USB flash drive: 5 minutes
  • Install Linux, including a full set of apps: 10 minutes

Total time to add Linux plus all the usual apps to this laptop: 25 minutes.

The irony is that the Linux install is (I think) much more user-friendly. None of that Metro/Start Screen/Modern junk, with jarring transitions between full-screen and windowed apps.

I’m biased, sure, but in both directions:

  • Our company’s products install on and integrate well with Windows — I want Microsoft to succeed, at least in the corporate space.
  • I’ve been a long-term Unix user, so I’m probably more productive and certainly more comfortable with a shell than a GUI.

Still – the difference in installation experience and in the quality of the resulting desktop environment is significant, and not in Microsoft’s favour.

Ultimately, people will choose what they are familiar or comfortable with. Most users will buy a computer based on price point and will live with whatever junk it came with. Compatibility with hardware and apps is a big decision factor, and I expect that support for shiny-and-new printers and scanners is probably a point in favour of Windows. But for the average user, whose PC is used to browse the web, access e-mail, write the occasional document, etc. — Linux looks to me like a much nicer option than Windows.

That’s not good for Microsoft.

Patches, mobile devices and network equipment

Monday, April 14th, 2014

The recent ‘heartbleed’ vulnerability should teach us something, and that is to pay more attention to patch processes.

A recap: this vulnerability is due to a bug in certain versions of the open source OpenSSL library, where the implementation of the ‘heartbeat’ extension of the TLS protocol does not include proper bounds checking. Consequently, a malicious client can ask a server to return chunks of its process memory (up to 64KB per request), or a malicious server can do the same to a client.
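
To make the bug concrete, here is a toy simulation of the logic error, written in Python rather than the C the real code is written in. It only illustrates the missing bounds check, not the TLS wire protocol, and the ‘memory’ contents are made up:

    # Toy simulation of the Heartbleed logic error. The real bug is in
    # OpenSSL's C code; this only illustrates the missing bounds check.

    # Pretend this is the server's process memory: the received heartbeat
    # payload sits right next to unrelated secrets.
    process_memory = bytearray(b"PING" + b"...secret key material...user:hunter2...")

    def heartbeat_response(claimed_length, actual_payload_length=4):
        # Buggy behaviour: trust the length field in the request and copy
        # that many bytes, never checking it against the bytes actually sent.
        return bytes(process_memory[:claimed_length])

    def heartbeat_response_fixed(claimed_length, actual_payload_length=4):
        # Fixed behaviour: drop requests whose claimed length exceeds the
        # payload actually received (essentially what the official patch does).
        if claimed_length > actual_payload_length:
            return b""
        return bytes(process_memory[:claimed_length])

    print(heartbeat_response(4))         # b'PING' - an honest request
    print(heartbeat_response(40))        # leaks adjacent 'memory' - the bug
    print(heartbeat_response_fixed(40))  # b'' - the over-long request is dropped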

The patch is trivial (likely 1 line of code). Most organizations have been pretty good about patching their web sites. So far so good.

The challenge is that OpenSSL seems to have been embedded, over the years, in many products. This includes hardware devices such as routers and phones. This has happened both because OpenSSL is free and because it is useful and well made. I know that, in general, I would rather trust the security of OpenSSL than RSA BSAFE, as the latter has been explicitly compromised by the NSA.

The problem is that while we, as an industry, have gotten pretty good at patching servers and PCs, we have absolutely no handle on the process for patching phones, tablets, network devices and even some apps. There is no clean, standard, automated process for patching firmware on corporate devices – it’s a one-device-at-a-time effort. Things get worse on consumer devices: home users won’t know that they should patch or how. Phones are a problem too, because – iOS and Google Nexus devices aside – telcos control the patch process and they are very slow to push patches to consumer devices.

So think of ‘Heartbleed’ as a call to arms to vendors: patch your products quickly and automatically, please. This should include embedded software in hardware products, as well as the operating systems and apps on phones and tablets. These segments of the market need to catch up with modern desktop operating systems, which can download and apply patches automatically.
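
To illustrate that last point, a minimal unattended update check might look like the sketch below. The vendor URL and file names are hypothetical, and a real updater should verify a cryptographic signature rather than a bare checksum; the point is only that the check needs no human in the loop:

    # Sketch of an unattended update check. The endpoint and file names are
    # hypothetical; verification here uses a SHA-256 checksum for brevity,
    # though a real updater should check a cryptographic signature.
    import hashlib
    import urllib.request
    from typing import Optional

    BASE_URL = "https://updates.example-vendor.com/device-x"  # hypothetical

    def check_and_fetch_update() -> Optional[bytes]:
        # Fetch the published checksum (a small text file).
        expected = urllib.request.urlopen(BASE_URL + "/firmware.sha256").read().decode().strip()

        # Fetch the firmware image itself.
        image = urllib.request.urlopen(BASE_URL + "/firmware.bin").read()

        # Verify integrity before applying anything.
        if hashlib.sha256(image).hexdigest() != expected:
            return None  # corrupt or tampered download: refuse to apply
        return image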

Heartbleed bug – actual impact, history and shoddy journalism

Thursday, April 10th, 2014

By now, I imagine everyone has read about the “heartbleed” bug in the OpenSSL library.

If you haven’t read about it yet:

  • There is a whole web site dedicated to it here: heartbleed.com
  • The short form is this:
    • OpenSSL is probably the most popular SSL library in the world.
    • There is a bug in OpenSSL such that an attacker can retrieve segments of server memory from an OpenSSL-protected web server.
    • This memory may contain the private key for the server’s SSL certificate, user passwords or anything else.
    • If someone attacks a server this way, the server would not log the attack – the compromise is silent.

So what does this all mean?

  • If you operate a web site protected by HTTPS, with the ‘S’ (SSL) implemented using the OpenSSL code, you obviously have to patch immediately.
  • Anything on affected web servers may have been compromised. The most worrying bit is the private key for the server’s SSL certificate. Why is that a problem? Because someone who steals that private key could subsequently impersonate your web site, without alerting users visiting the fake server that they are not communicating with the legitimate site. This is a man-in-the-middle attack.
  • If you sign into a web site that has been compromised this way, then a man-in-the-middle attack such as the above may have been used to steal your password (in plaintext – it does not matter how good your password was). This is especially problematic in public spaces like coffee shops or airports, where a man-in-the-middle attack would be much easier to carry out than – say – when you sign into systems from your home or office.
  • A server that was compromised might also leak password data. Very few systems store plaintext passwords these days, and large web sites would not store password hashes on the front-end web server anyway, so this is mainly a concern if you sign into a smallish web site that stores password hashes locally on the web server, and your password was fairly easy to guess.

Do we know about anyone who has actually been “hacked” this way? The short answer is NO. There is a great risk of compromise here, but I have not heard of any actual compromised web sites, server certificates or passwords. Be careful, but don’t panic, in other words.

How are the media handling this? As you might expect, with lots of sensational and misleading nonsense. I read an article in the local newspaper that was particularly shocking — i.e., the quality and accuracy of the coverage was about as poor as I’ve seen in recent years. If you want a chuckle, read this:

Worst technology reporting in recent memory at calgaryherald.com.

More seriously, Theo de Raadt of the OpenBSD project recently pointed out that all this could have been avoided if a security measure in libc had not been bypassed. I recommend following Theo – he’s an awfully smart guy (and lives a stone’s throw away from my office to boot).

So what to do?

  • Do you operate an affected web site? Patch your OpenSSL library, then revoke your old certificate and get/install a fresh one, because there is a risk that your old private key was compromised. Good business for the certificate authorities here.
  • Do you sign into an affected web site? (You should assume yes, since OpenSSL is so common.) Wait a few days and change your password. Make sure your web browser checks for revoked certificates. On my Firefox instance, I checked about:config and confirmed that security.OCSP.enabled was set to 1, i.e., OCSP revocation checking is on. That’s good. A small sketch for inspecting a site’s certificate follows below.
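
Relatedly, a quick way to check whether a site has deployed a fresh certificate (before you bother changing your password) is to look at the certificate’s issue date. A minimal Python sketch, using only the standard library; example.com is a placeholder hostname:

    # Fetch a site's certificate and print its validity dates, to confirm
    # the cert was reissued after the site patched. 'example.com' is a
    # placeholder hostname.
    import socket
    import ssl

    def show_cert_dates(host, port=443):
        context = ssl.create_default_context()  # verifies the chain by default
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        print("subject:  ", cert.get("subject"))
        print("notBefore:", cert.get("notBefore"))  # should postdate the patch
        print("notAfter: ", cert.get("notAfter"))

    show_cert_dates("example.com")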

Addendum: as one might expect, the always-brilliant XKCD explains the vulnerability perfectly: xkcd.com/1354.

Cloud/IaaS: It’s all about the workload

Wednesday, April 2nd, 2014

So you think moving your server workloads to the cloud will save you money?

Think again.

The cloud paradigm is no longer new to computing, but even when it was new, it was already an old commercial idea. When we move workloads to cloud servers, we are in effect leasing servers rather than buying them.

How is that relevant?

I can buy a big screen TV at my local electronics shop, or I can lease one next door. If I lease it, initially it will seem less costly, but over time, leasing will cost more than buying. The same is true with cars: I can buy one and drive it into the ground, over 10 or 15 years, or I can lease one, and replace it with a newer model every couple of years. Leasing will definitely cost more.

So the lesson here is that cloud == leasing, on-premise == buying. Leasing costs more but has the benefits of offloading administration to someone else, plus the opportunity to replace the product or service with a newer version quite frequently. In other words, with leasing, I pay more, and I get more.

IaaS and SaaS are the same. You don’t buy the compute capacity – you lease it. You don’t buy the hosted app – you lease access to it. Someone else manages it and someone else upgrades it once in a while. It costs more in the long run, but you get those benefits.

So does this mean that IaaS is definitely more costly than purchased compute capacity? Mostly, yes. The main exception to this rule is where the workload is sporadic. If I have a VM that only needs to run for 1 hour per day and I buy the capacity, then I’ve effectively purchased 24x as much capacity as I need, so even the buy-vs-lease cost savings won’t help me. It’s better to lease capacity for just the time windows I need it. A back-of-the-envelope calculation follows below.
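
All of the prices in this calculation are made-up assumptions, so substitute your own figures:

    # Back-of-the-envelope buy-vs-lease comparison for one server's worth
    # of compute. All prices are illustrative assumptions, not quotes.
    purchase_cost = 6000.0   # server hardware, amortized over its life
    lifetime_years = 4
    lease_rate = 0.50        # assumed IaaS price per VM-hour

    def yearly_cost(hours_used_per_day):
        # Buying: you pay for all 8,760 hours per year, used or not.
        buy = purchase_cost / lifetime_years
        # Leasing: you pay only for the hours the VM is actually running.
        lease = lease_rate * hours_used_per_day * 365
        return buy, lease

    for hours in (24, 8, 1):
        buy, lease = yearly_cost(hours)
        winner = "lease" if lease < buy else "buy"
        print(f"{hours:>2} h/day: buy ${buy:,.0f}/yr vs lease ${lease:,.0f}/yr -> {winner}")

With these (assumed) numbers, a VM that runs around the clock is far cheaper to own, but the sporadic 1-hour-per-day workload costs roughly an eighth as much to lease as to buy.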

What does this mean in practice?

IaaS is cost effective for specific workloads — those that are only run on demand, and are shut down most of the time. Training systems. Demo systems. Peak capacity web farms. POC and lab environments. Testing systems used infrequently. There are lots of workloads where – if you have the discipline to shut things off when not in use – you can save money by moving the runtime platform to the cloud.

But who has the discipline? That’s the real problem. Human users move sporadic workloads to the cloud, forget to turn their VMs off, and wind up leasing much more capacity than they really need.

This is where Hitachi ID can help. Our Privileged Access Manager can be used to “check out” a whole machine (not just a privileged account), which has the desirable side-effect of turning the machine on. The subsequent check-in, which might be manual or due to a timeout, will suspend the same VM, effectively stopping the billing until the machine is needed again.
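
The shape of the idea is easy to sketch. Below, hypothetical check-out and check-in event handlers call an equally hypothetical IaaS REST API; this illustrates the pattern, and is not Privileged Access Manager’s actual connector code:

    # Sketch of tying privileged-access check-out/check-in to VM power
    # state. The REST endpoint and paths are hypothetical stand-ins for
    # whatever API your IaaS provider exposes.
    import requests

    IAAS_API = "https://iaas.example.com/api/v1"  # hypothetical endpoint

    def on_checkout(vm_id, token):
        # Check-out grants access to the machine, so power it on first.
        requests.post(IAAS_API + "/vms/" + vm_id + "/start",
                      headers={"Authorization": "Bearer " + token},
                      timeout=30).raise_for_status()

    def on_checkin(vm_id, token):
        # Check-in (manual or timeout) revokes access; suspend the VM so
        # the hourly meter stops until the next check-out.
        requests.post(IAAS_API + "/vms/" + vm_id + "/suspend",
                      headers={"Authorization": "Bearer " + token},
                      timeout=30).raise_for_status()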

This can amount to a huge cost savings for IaaS used to host sporadic workloads.

If your IaaS usage fits this pattern, call us — we can save you money.
