Archive for 2011

IE6 is finally going to go away!

Tuesday, December 20th, 2011

Just read this:

Looks like Microsoft is going to use Windows Update to push out a “patch” that finally drives a nail into the coffin of IE6/IE7. Hooray! The world will be a better place with that ancient piece of junk gone. Just think of all the HTML and CSS hacks that we can all retire. 😉

SCADA system hacked

Friday, November 18th, 2011

It used to be that nobody in their right mind would connect mission-critical SCADA systems to the Internet. That was certainly the “accepted best practice” many years ago when I used to do pen tests.

It seems that some people these days place convenience over security, and the result is predictable: hacked SCADA systems and disruption to physical infrastructure.

Attack on City Water Station Destroys Pump

Given the fun the Iranians have been having with Stuxnet, you’d think people would be smarter than that…

Ian Glazer’s talk: Developing a Business-Centric Identity and Access Management Strategy

Thursday, November 17th, 2011

I just came home from San Diego, where I was at Gartner’s Identity and Access Management conference.

A good event, actually. Lots of smart people in one place, all passionate about identity and access management and sharing their real-world experiences.

One session that really struck a chord for me was Ian Glazer’s talk titled “Developing a Business-Centric Identity and Access Management Strategy.” This may seem self-serving, but it struck a chord because a lot of what he was saying was so closely aligned with what we’ve been doing here at Hitachi ID for some time.

For example, he proposes an incremental approach to identity management projects, which we also recommend. I think that’s pretty widely accepted these days. “Boil the ocean” never worked for anyone, in IT or other disciplines.

More importantly, he offers a nice, practical way to help organizations figure out how to break down their monolithic identity management objectives into bite-sized pieces. Basically: get your stakeholders to sit down together and fill in a worksheet with the following columns:

  • Constituent:
      • Kind
      • Source of truth
      • Opportunity for life cycle automation
      • Life cycle event
  • Target system:
      • Name
      • Value
      • Fulfillment volume

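To make the worksheet concrete, here is a rough sketch of one row as a Python dict, with a naive prioritization score attached. The field names, entitlements and numbers are my own shorthand for illustration, not Ian’s exact column labels:

```python
# One hypothetical worksheet row, modeled as a plain dict.
# All names and numbers are illustrative, not from Ian's talk.
worksheet_row = {
    "constituent": {
        "kind": "Contractor",
        "source_of_truth": "Contractor database",
        "lifecycle_automation_opportunity": True,
        "lifecycle_event": "termination",
    },
    "target_system": {
        "name": "Active Directory",
        "value": "high",            # business value of automating this change
        "fulfillment_volume": 250,  # changes per month (made up)
    },
}

def priority(row):
    """Naive prioritization: high-value, high-volume rows first."""
    value_rank = {"low": 1, "medium": 2, "high": 3}
    ts = row["target_system"]
    return value_rank[ts["value"]] * ts["fulfillment_volume"]

print(priority(worksheet_row))  # → 750
```

Sort the rows by a score like this and the top of the list is your first bite-sized deliverable.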
This is very similar to what we already do in workshops that we hold with customers before starting an engagement. Our terminology is slightly different, but basically it boils down to:

  • User community (e.g., US employees, EU contractors, etc.)
  • Data source (e.g., HR, contractor database, manager input, etc.)
  • Data quality and timeliness (e.g., is HR providing data early enough to trigger automation?)
  • Change type (e.g., hire, move, fire, etc.)
  • Volume (e.g., how often does this change happen?)
  • Application (e.g., where does the organization want to create, modify or delete access?)
  • Impact (e.g., how important is it to get these changes made quickly?)

Do you see the similarity? It’s almost identical! Gotta love it when different practitioners arrive at the same solution.

We work with customers to collect this information and prioritize. Ian structures this stuff in a table and each row is a potential deliverable, organized into a priority sequence. Nice and clear!

But there’s more! Ian was promoting the value of an “entitlement catalog.” I’ve been convinced for some time that it’s important to assign human-legible descriptions, help links and metadata to groups, accounts, roles and other resources assigned to users. The list of these objects and their metadata is exactly Ian’s entitlement catalog.
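For illustration, here is roughly what one entry in such a catalog might look like — every field name and value below is made up, not a product schema:

```python
# Illustrative entitlement-catalog entry: a human-legible description and
# metadata attached to an otherwise cryptic group name. All names invented.
catalog = {
    "AD:GRP-FIN-0417": {
        "display_name": "Finance - Accounts Payable (read/write)",
        "description": "Grants write access to the AP ledger in the ERP.",
        "help_url": "https://intranet.example.com/entitlements/GRP-FIN-0417",
        "owner": "ap-manager@example.com",
        "risk": "high",
    },
}

def describe(raw_name):
    """Show a friendly name where we have one; fall back to the raw name."""
    entry = catalog.get(raw_name)
    return entry["display_name"] if entry else raw_name

print(describe("AD:GRP-FIN-0417"))
```

The payoff comes at request and certification time: a manager deciding whether to approve “Finance - Accounts Payable (read/write)” has a fighting chance; one approving “GRP-FIN-0417” does not.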

Enough similarity? No! Ian also gives a nice and clear model of manual vs. automatic processes. He structures it as follows:

  • Lifecycle Events
  • Access Policy Management
  • Fulfillment
  • Identity repository
  • Entitlement catalog

This maps 1:1 to our own terminology:

  • Requests portal (a.k.a. manual input of lifecycle events).
  • Automatic administration engine (a.k.a. automatic input of lifecycle events from an HR feed or similar).
  • Approvals workflow (a.k.a. manual access policy).
  • SoD and other automated policy checks (a.k.a. automatic access policy).
  • Transaction manager and connectors (a.k.a. automatic fulfillment).
  • Implementers workflow (a.k.a. manual fulfillment).
  • Profile attributes (a.k.a. identity repository).
  • Resource attributes (a.k.a. entitlement catalog).

Man. A love-fest with Ian Glazer?

Well, that would be just a bit too weird, so I have to disagree with him on at least one thing!

Ian suggests that “Access Governance” should be decoupled from “Identity Administration.”

I agree with the need for almost all the functions in both of these buckets, but these two sets of features share so much data (remember the identity repository and entitlement catalog? policy stores? change history?). Whenever two products share a ton of data the natural question to ask is: “are they really two products, or just multiple features in the same product?” I think these features should actually live in a single product for just this reason. Plus customers expect connectors with their “governance” user interface and portal. Customers expect a usable request portal, approvals process and access certification with their connectors. I think customer expectations are reasonable.

That’s why our Identity Manager includes:

  • Auto-discovery of identity and entitlement data on the applications where it already exists.
  • Connectors to create, modify and delete users and entitlements.
  • Automation to create workflow requests as a consequence of detected changes.
  • A web portal for users to access one another’s profiles and submit workflow requests interactively.
  • Policy engines for things like SoD checks and approvals routing.
  • Roles for simplified entitlement definitions.
  • Workflow processes to invite human beings to implement approved changes.
  • Workflow processes and screens to invite human beings to review and certify or remediate entitlements.
  • Reports and dashboards to monitor all of that.

How is this different from Ian’s model? It’s all in a single product, with a single back-end database, a single UI and a uniform set of processes. I think the market will follow us in this integrated direction. The decoupling between “access governance” and “user provisioning” should be conceptual, not technical.

Fun and interactive password strength calculator

Wednesday, October 5th, 2011

Sometimes we forget how fast things change…

Wednesday, September 28th, 2011

I noticed an advertisement in an in-flight magazine yesterday. It was for an HP tablet. I checked the magazine – it was dated September. Just think: this ad was purchased by HP very recently, and in the very short time between advertising purchase and when I picked up the magazine, HP dumped its inventory for $100/unit and killed off the whole division.

The dichotomy between the normal timelines of print advertising and the pace at which IT companies are forced to make and execute major, strategic decisions couldn’t have been more clearly illustrated.


VMware Workstation 6.5.5 on Ubuntu 11.04 “Natty Narwhal”

Tuesday, September 20th, 2011

I had the pleasure of upgrading the OS on my laptop the other day. And I use the term “pleasure” very loosely, because I had to move around about 150GB of data – so much fun.

Anyways, I was not surprised to see that the most recent patchlevel of VMware Workstation 6.5.x didn’t want to install on my new OS. That’s been my experience with VMware over the past few years – it doesn’t work out of the box on Ubuntu. Insert grumbling noises.

A Google search didn’t turn up anything, and it was a weekend job, so I just left it.

Good thing too – it turns out that while nobody has published a patch to make VMware Workstation 6.5.5 work on a 64-bit install of Ubuntu 11.04, one of the guys at work had already done the work. He asked to remain slightly anonymous – but I’ll share his initials – JN. Nice work, JN.

So in case you are reading this and want to install VMware Workstation 6.5.5 on Ubuntu 11.04 64-bit, please try this patch:

And everyone say thank you to JN for making it (a) work and (b) easy. 🙂

— Idan

SPML is dead … long live “SPML envelope” ?

Saturday, September 17th, 2011

I just got a demo from our engineers of an integration they completed for a higher-ed customer. The customer is using a prominent ERP for higher ed and needed to send onboarding and deactivation requests, in real time, to our identity manager. This was to be done using an SPML gateway.

The demo went great – the web services gateway does, indeed, send messages to our system to provision and deactivate students, faculty and staff in real time. Everything works nicely, especially once our guys and the customer’s team worked through crashes in the J2EE app server hosting the software that sends messages to our system.

Did I mention that J2EE sucks and major J2EE servers are crashy junk? Let’s leave that for another day.

Just one hitch: when I had a look at the message format, I discovered that while the message envelope was indeed SPML, the message body was not. Indeed, the message body was clear and human-legible, something nobody would accuse SPML of. There are plenty of sample SPML messages out there (google for SPML example if you’re curious) but the key point is that they are nasty, overburdened XML not suitable for human eyes.

Our friendly higher-ed ERP vendor clearly wanted (a) to be seen as a leader adopting standard protocols, such as SPML and (b) to deliver a developer-friendly, legible web service. (a) and (b) are not compatible, so they seem to have found a sneaky way to do both – use the header from SPML but send nice content in the body.


We had to write a bit of custom code to parse the message body, but hey – it wasn’t anything like the spaghetti required to parse real SPML, so no complaints.
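Our actual parsing code isn’t public, but here is a rough sketch of the idea in Python: an SPML 2.0-style envelope carrying a plain, vendor-specific body. The element and field names below are invented for illustration – this is not the customer’s real message format:

```python
import xml.etree.ElementTree as ET

# Hypothetical message: an SPML 2.0 envelope (addRequest) whose payload is
# plain, readable vendor XML rather than SPML's own heavyweight body schema.
# All element names in the body are illustrative.
message = """\
<addRequest xmlns="urn:oasis:names:tc:SPML:2:0">
  <studentRecord xmlns="">
    <id>jdoe42</id>
    <firstname>Jane</firstname>
    <lastname>Doe</lastname>
    <action>onboard</action>
  </studentRecord>
</addRequest>"""

root = ET.fromstring(message)
record = root.find("studentRecord")  # the plain body carries no SPML namespace
fields = {child.tag: child.text for child in record}
print(fields["id"], fields["action"])  # → jdoe42 onboard
```

A dozen lines of standard-library XML handling, versus the schema gymnastics a real SPML body would demand – which is rather the point.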

This is the first time we’ve ever had an actual, honest-to-goodness use case for SPML at a living customer – not a demo for an analyst firm or trade show, so I thought we’d get to exercise more of the standard. I guessed wrong.

SPML *could* be used to manage security in an app, but I’ve yet to meet an app that supports inbound SPML instead of a proprietary API.

SPML *could* be used to notify a user provisioning system of events in a system of record (as happened in our deployment here), in real time, but I’ve yet to meet an application that does that using actual SPML message bodies.

I guess SPML is so ugly that a bunch of cloud vendors invented a simpler alternative – SCIM. I don’t think there is anything particularly “cloud” about SCIM – it’s just that the people pushing it are SaaS vendors and the word “cloud” is sexy these days.

Hopefully SCIM succeeds where SPML failed – perhaps by having a clear schema and simple syntax that humans can read unaided.
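For comparison, here is roughly what a SCIM-style user record looks like, loosely based on the early SCIM core schema drafts (details may differ from whatever the spec settles on):

```python
import json

# A SCIM-style user record, sketched after the early core schema drafts.
# Notice how flat and obvious it is -- readable without a spec in hand,
# which is exactly what SPML bodies are not.
user = {
    "schemas": ["urn:scim:schemas:core:1.0"],
    "id": "2819c223-7f76-453a-919d-413861904646",
    "userName": "jdoe",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.edu", "primary": True}],
    "active": True,
}

print(json.dumps(user, indent=2))
```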

Here’s to hoping.

— Idan

Last time: SocGen, this time: UBS

Thursday, September 15th, 2011

Lovely news today – a massive loss due to unauthorized trading at UBS:

I find this one more distressing than the last time this happened, at Société Générale, for a couple of reasons:

1) Didn’t anyone learn anything from the last incident?
2) UBS? Seriously? I bank with these guys!

The solutions to prevent this sort of thing are both technical and business ones.

The technical solutions are good controls over access to sensitive systems, including segregation of duties policy enforcement, to ensure that it takes at least two people to do something stupid.
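Stripped to its essentials, segregation of duties is just a set of “toxic combinations” that no one person may hold. Here is a toy sketch – the rule names and entitlements are invented, not from any real bank or product:

```python
# Minimal segregation-of-duties check: toxic combinations of entitlements
# that no single person may hold. All names are illustrative.
SOD_RULES = [
    {"name": "trade-and-settle", "toxic": {"execute_trades", "settle_trades"}},
    {"name": "trade-and-book", "toxic": {"execute_trades", "adjust_risk_limits"}},
]

def sod_violations(user_entitlements):
    """Return the names of SoD rules the user's entitlements violate."""
    held = set(user_entitlements)
    return [r["name"] for r in SOD_RULES if r["toxic"] <= held]

print(sod_violations({"execute_trades", "settle_trades"}))  # → ['trade-and-settle']
```

Run a check like this at request time (block the grant) and at certification time (catch what slipped through), and a rogue trader needs an accomplice rather than just a quiet afternoon.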

Of course, I’m biased – we make software that can help with the technical part of the fix.

The business part is more contentious, but perhaps more important. I think part of the problem with controls is the ridiculous volume of transactions that investment banks make, hoping to turn a profit on super-fast trades and arbitrage. I don’t think that stuff actually does the economy at large any good — it’s just a part of the casino mentality in the financial industry. One rule I’d impose, if I magically got the power to do so tomorrow, would be to force entities who purchase any kind of financial instrument to hold it for a while. Say for an hour. Or a day.

That doesn’t sound like a big deal to anyone who is a retail investor, but I bet it sends cold shivers down the spine of big institutional investors. What? I can’t buy some stock and sell it again 20 milliseconds later? You’re kidding?

Oh well. I can’t fix the business problems, so I’ll stick to making technology that helps enforce some basic controls: privileged access management, session recording, segregation of duties enforcement, access certification, approvals workflows, etc. You know, the easy stuff.

— Idan

Tablets are to laptops as TV is to computers…

Friday, August 19th, 2011

I got a tablet a while back, and I’ve been thinking about what it means for the IT business. This has taken on more significance in light of HP’s announcement that they will exit the tablet and phone businesses.

What I notice with the tablet is that while it’s a convenient device for consuming media – watching movies and TV shows, listening to music, browsing a few web pages, playing simple games, etc. — the user experience is brutal when I try to input text.

I think I’ve gotten pretty good at using the capacitive touch screen for text input on both the tablet and my (very large screen) phone. Still, I doubt I could sustain more than about 3 words per minute of text input into these things. By comparison, once upon a time, probably 20 years ago, I clocked myself at 120 words per minute of text input using a keyboard.

So for me, a keyboard is about 40 times more effective than a touch screen. If I have more than a few words to enter, I’ll reach for the PC or laptop, thank you very much. My pain threshold for glacial user input just isn’t that high.

So the tablet is basically a media consuming device. A step up from a portable DVD player, if you will. Doesn’t that make it more or less equivalent to a television, where we all sit on the couch, numbly consuming dumbed-down content? I think the analogy is pretty compelling.

To imagine where tablets will go in the future, look no further than the evolution of TVs. They sold (and continue to sell) like hotcakes. Millions of units shipped every year. Fancy technology (think huge LCD flat screens, 3D TV, etc.) all dedicated to pushing visually stunning but largely dumbed down content to numb consumers.

Whereas a PC is an interactive device where people actually contribute something — you know, write documents, send e-mails, heck – even play interactive games with their friends — TVs are just numbing. I think the portable version of a PC is a laptop, and the portable version of a TV is a tablet.

This means that tablets will continue to be a huge commercial success for their manufacturers, but their social impact will resemble that of the TV…

As for the “business use” of tablets … what business use? Reading e-mails on the go? Watching movies while flying to a sales meeting? I think business users want tablets-as-toys, and the “business use” of tablets is just a made up justification to get the company to buy the toy. Maybe I’m just cynical.

I think this even impacts the “Web 2.0” movement. Remember that? It was supposed to mean that users contribute content to the web – rather than just reading static web pages. I think tablets are not “Web 2.0 compatible” — they are really “Web 1.0” devices.

Funny, that.

Google buys Motorola Mobility – So What?

Monday, August 15th, 2011

Interesting news today about Google buying Motorola’s mobile products division:

So what does this mean and who should care?

First, why would Google buy Motorola Mobility? I tend to agree with other opinions out there, that this was basically a purchase of a patent portfolio and a mobile products company was attached to the deal but wasn’t the real target. I don’t think Google is particularly interested in the company they just bought — they wanted a war chest of patents.

There is a patent war brewing in the mobile phone market and Google needed the ammunition to threaten Apple and Microsoft with counter-suits as they became increasingly litigious.

Of course, Google will try to keep Motorola Mobility profitable, to help pay for their acquisition of a bunch of patents.

This just highlights the foolishness of software / business method patents. They are incredibly wasteful of capital and add nothing to the economy. Google had to spend $12B to add no shareholder or customer value, just to defend themselves against a bunch of pointless lawsuits.

So what happens next?

First, it seems reasonable to assume that Google will want to continue to nurture its Android partner ecosystem. These partners will be understandably worried now that their OS supplier will compete with them in the hardware space. That’s quite the unfair advantage.

Google doesn’t want to scare off the Android ecosystem, so they will presumably run the acquired company independently of the main Google corporation. It will probably have no special advantages (such as early access to new OS versions) as compared to other Android partners. Google can then transfer the patents from this new subsidiary or organizational unit to its Android business unit, to be used as a defensive asset.

I would expect Google to use the patents to help defend its existing Android partners against suits by Apple, Microsoft and others. Google partners actually benefit from today’s transaction in that sense.

Does this mean that the formerly-Motorola business unit will continue with business as usual? Probably not quite. I would think that Google will make phones that are less full of crappy third party add-ons and “enhancements.” I would expect to see a clean OS and a clean UI on new Motorola phones, probably starting to show up in 6-12 months.

I don’t think Google is interested in the relatively small revenue streams generated by pre-installing junk and teaser software on phones. They are much more interested in a healthy Android ecosystem, which will drive future revenue growth on their ad platform, as more people search for more content from their phones. Google strikes me as a company with a long-term strategy, willing to sacrifice short-term revenue to win the long-term game.

This can only be good for users, especially as the other phone manufacturers are forced to clean up their OS distributions and stop filling their phones with junk, in order to compete with new Google/Motorola phones that have a cleaned up UI.

Presumably this is bad for Apple on at least two fronts:

  • Google and other Android partners can counter-sue Apple for patent infringement, effectively neutering Apple’s litigation strategy.
  • Google will force the entire Android ecosystem to make more user-friendly phones with fewer annoying add-ons, erasing any UI advantage Apple might enjoy today, at least as compared to non-rooted Android phones.

Apple may continue with a litigation-heavy strategy to compete with Android, in which case they will likely get shot down, or they may change strategies and focus on innovation instead. That would be better for everyone, including Apple.

Now that Google has bought Motorola, will Microsoft follow suit and buy RIM or Nokia? There is certainly buzz about that and both of those stocks bounced today.

Microsoft hasn’t traditionally been (a) acquisitive or (b) interested in the hardware business. They already have an extensive patent portfolio, so they can’t be too interested in RIM or Nokia’s patent portfolios. My bet is that they don’t make any acquisitions in response to today’s news, especially not RIM, whose platform is not really compatible with Microsoft’s future direction.

Interesting times we live in, but at least today the consumer came out as the big winner.