Hitachi ID Systems Blogs

Keep it short and to the point, please

March 26th, 2015

Part of my job is to review responses to requests for proposal (RFPs) that we receive from current and prospective customers. The idea is pretty simple:

  • An organization wishes to procure an IAM system.
  • They find some vendors who make products in the space, perhaps by searching the web with Google or by contacting an analyst firm like Gartner, Forrester or KuppingerCole.
  • They either independently or with the aid of a consultant write down a wish list of features, integrations and other capabilities.
  • They send this wish list to all the candidate vendors, who respond in writing, indicating whether and how they can comply.
  • Based on these responses, they down-select vendors to follow up with — via presentations, demos, a POC deployment, etc.

Sounds good in theory. We used the same process, more or less, to procure VoIP, e-mail and CRM services over the past couple of years.

But the process can go horribly wrong, and I’ve seen it do that more often than I care to think about:

  • Ask too many questions, and you may just get what you wished for. I just reviewed our response to an RFP with over 400 requirements and over 200 pages. Imagine 10 responses like that. Who will read 2000 pages of response material? Who can even comprehend so much information?
  • Ask lots of internal stakeholders to submit questions, and blindly aggregate them. Some of these requirements will be silly, others mutually contradictory, others off-topic. The person assembling the RFP should understand and consolidate the requirements, not just blindly merge them!
  • Ask questions with yes/no answers. Guess what? Every vendor will just answer “yes” to every question, and you will learn nothing.

So what’s the right way to do this?

  • Don’t ask about every conceivable requirement. Figure out which requirements you think are either critical or hard to hit, and focus on just those. If you’ve asked 100 questions, then you’ve probably asked too many and won’t be able to digest the responses.
  • Engage in a conversation with the vendors and any integrators or other third parties. Ask their advice. Maybe your requirements are ill-conceived? Maybe there is a better way to solve the problem? You’ll never know if you stick to a formal, no-discussions-allowed process!
  • Invite vendors to give presentations and product demos before issuing an RFP. You’ll get some ideas this way, including how to refine your requirements and which vendor approaches you like. You can then send an RFP to fewer vendors, with more targeted questions.
  • Hire someone to help. I hear Gartner does that. Other analyst firms will as well. Integrators have lots of good ideas, especially if they are vendor-neutral. One caution though: be careful of integrators that are strongly affiliated with a particular vendor. For example, I hear that Deloitte likes to push Oracle products, because they get lots of business from Oracle and frankly because Oracle products require huge amounts of consulting to deploy. This is great for the integrator, but terrible for the customer.
  • Figure out how the market has segmented features into product categories. Only ask about one product category in a single RFP. If you have requirements that span multiple categories – fine – send out multiple RFPs, probably to different, though likely overlapping, lists of vendors.

Good luck out there! Keep it short and simple, if you can!

Can your product do “X?”

March 15th, 2015

Frequently I get this question – from customers, prospects, partners and even internally: “can you do X?” This is a deceptively simple question and often exactly the wrong thing to ask.

Why is it wrong? It’s not because I can’t or won’t answer. I often answer “yes” and sometimes “no” and I usually elaborate. That’s not the trouble.

The trouble is that people are trying to solve a problem. Their thought process goes something like this: (a) they have a problem; (b) they have identified a possible solution; (c) their solution requires some feature “X” so (d) they go shopping for “X.”

It doesn’t matter what “X” is here – any feature in any product will do. The problem is that by asking for a particular feature, the person doing the asking is not revealing the problem (a), which is what actually needs to be solved. Perhaps there is a better solution to (a) and the subject matter expert (sometimes – that might be me!) will point it out. Sometimes the proposed solution (b) has subtle problems that haven’t been anticipated yet. Again – if I don’t know about it, I can’t warn the person I’m responding to about that or help them find a better approach.

What I wind up having to do in such conversations is try to figure out (a) and (b) – the problem and proposed solution – by inferring what the person I’m talking to really wants when they ask for “X.” That works out fine (if a bit time-consuming) in a voice conversation. It’s a bit slower but still yields the same result in e-mail threads. In a formal RFP process, however, there is no real conversation. There is a single broadcast set of requirements, and a single collected set of responses. There may be a single exchange of questions and answers in between those two bookends. What there isn’t, really, is an interactive conversation, but that’s where everyone learns the most. That’s a real weakness of formal RFPs, incidentally – no conversations.

It would be so much better for current and prospective customers and partners to include some background in their questions. What problems are they trying to solve? How do they propose to solve these problems? Transparency and conversations create value – vendors and customers are not adversaries and withholding information does not create advantage in some conflict.

Sometimes, there aren’t even problems to be solved, but rather an aggregate of features from one or more vendors. I wish people would stop asking for things they don’t need, just because some vendor somewhere says they can do it. Products should solve problems, not compete in some checklist match. But that’s a rant for another day.

So what do we call this thing, anyways?

March 5th, 2015

Every vendor in the privileged access management (PAM) market seems to refer to the product category using a different name and acronym. Some analysts simply refer to the market as PxM in recognition of this situation.

I’d like to put forward the argument, here, that the most appropriate term is PAM, as above. Our own product, Hitachi ID Privileged Access Manager (HiPAM), connects authorized and authenticated users, temporarily, to elevated privileges. This may be accomplished through a shared account, whose password is stored in the HiPAM credential vault and may be periodically randomized. It may also be via privilege escalation, such as temporarily assigning the user’s pre-existing (directory) account to security groups or temporarily placing the user’s SSH public key in a trusted keys file of a privileged account. In all cases, HiPAM assigns privileged access, to users, temporarily.
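The temporary SSH trust mechanism described above can be sketched in a few lines. This is an illustrative toy, not HiPAM’s implementation: the file path, demo key and the grant/revoke names are all hypothetical, and a real product would also lock the file, enforce the time window and audit every change.

```python
from pathlib import Path
import os
import tempfile

def grant(authorized_keys: Path, pubkey: str) -> None:
    # Append the requester's public key, enabling login to the privileged account.
    with authorized_keys.open("a") as f:
        f.write(pubkey.rstrip() + "\n")

def revoke(authorized_keys: Path, pubkey: str) -> None:
    # Remove the key again at check-in, ending the trust relationship.
    kept = [line for line in authorized_keys.read_text().splitlines()
            if line.strip() != pubkey.strip()]
    authorized_keys.write_text("\n".join(kept) + ("\n" if kept else ""))

# Demo against a temporary file rather than a real ~/.ssh/authorized_keys:
path = Path(tempfile.mkstemp()[1])
key = "ssh-rsa AAAAB3...demo checked-out-by@example"
grant(path, key)
assert key in path.read_text()
revoke(path, key)
assert key not in path.read_text()
os.unlink(path)
```

The shared-account alternative (vaulted password, periodically randomized) needs no change to the endpoint’s trust configuration at all, which is one reason both mechanisms coexist.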

Following are other names for this product category, along with the reasons that I think each is incomplete or simply wrong:

  • Privileged Identity Management (PIM):

    Identity management means creating and deleting identities — such as login accounts — on managed endpoint systems. No PAM product actually performs this function. Rather, a PAM system connects authorized, pre-existing users to managed, pre-existing accounts for short time periods, subject to authentication, authorization and audit. There is simply no identity management, in the sense of Create/Update/Delete (CrUD) operations, in the current generation of PAM products.

  • Identity and Access Management (IAM):

    IAM systems can, in principle, manage the lifecycle of privileged accounts. In practice, there is rarely a need to do so, as most privileged accounts come pre-installed on the device, operating system, hypervisor, database, or other system that must be managed. The main IAM use case relating to privileged accounts is to create and track metadata, such as account ownership.

    Architecturally, typical IAM systems can scale to a few thousand endpoints. Enterprise PAM deployments, on the other hand, scale to hundreds of thousands of managed endpoints. Few IAM products, even with lots of custom code to close the functional gap, could deploy at PAM scale.

    In short, IAM is complementary to PAM, but the two product categories address distinct problem categories with distinct functionality at different scales.

  • Privileged User Management (PUM):

    The same argument presented vis-a-vis PIM above holds. User management — of privileged or other accounts — is simply not what PAM products available today do.

  • Privileged Password Management (PPM):

    Hitachi ID Systems previously used this term, before offering other methods to connect users, temporarily, to privileges. While the label may still apply to some products, today HiPAM also allows for temporary group membership and temporary SSH trust relationships, making the term obsolete as a description of HiPAM.

  • Privileged Account Management (PAM):

    For some products in the market, this is probably an accurate description, since authorized users are connected to specific, pre-existing, privileged accounts for defined time intervals. This is also a fair description of one of the methods that HiPAM uses to connect users to privileges. Since HiPAM also supports temporary trust and temporary group membership (i.e., privilege escalation), this description would be incomplete for Hitachi ID.

  • Privileged Session Management (PSM):

    Some analyst firms refer to this as a distinct product category — software that establishes sessions connecting users to shared accounts on managed endpoints, typically via a proxy appliance. It’s not clear to me how such a product could function independently of a credential vault, presumably complete with a password change process. In short, this is a subset of what HiPAM actually provides and I’m pretty sure it’s a subset of what our competitors do too. Not a real product.

  • Application to Application Password Management (AAPM):

    Another subset of HiPAM functionality — allowing programs to be modified to eliminate passwords embedded in scripts and configuration files, and instead be fetched on demand from a secure, authenticating vault. Not a product category: just a subset of functionality.

  • Superuser Privilege Management (SUPM):

    A somewhat complementary product category, where a central policy server controls what commands a user can issue, to execute as root or Administrator, on individual Unix, Linux or Windows systems. HiPAM accomplishes much the same thing through a combination of temporary group membership, along with local policies linking groups to commands (via Linux sudo or Windows GPOs). In the Unix/Linux environment, I’m not sure I buy that products like this are actually effective. If you give me the ability to run something like sed or grep on a Linux box, as root, then I can do pretty much anything. Many programs would let me shell out to run a sed or grep command, so I suspect that this whole product category is more about optics than actual security. YMMV.
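For example, the kind of local policy that links a group to a small set of commands might look like the following sudoers fragment (the group name and command are hypothetical):

```
# Members of the 'webops' group may restart the web server as root,
# and run nothing else via sudo:
%webops ALL=(root) NOPASSWD: /usr/sbin/service apache2 restart
```

Combined with temporary group membership, granting and revoking this right becomes a matter of adding users to and removing them from ‘webops’.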

So what do you think? Anyone care to refute my ideas here, and support use of a different category name and acronym? :-)

Zero days and cyber-warfare

December 18th, 2014

The ongoing saga of the Sony hack is quite interesting. Some notable developments:

  • It is quite clear that the attack on Sony was at least ordered and funded by a state actor (North Korea). They may have hired out the actual work to a criminal gang (in case they did not have adequate domestic talent to carry this off) but they certainly are behind it.
  • Sony has decided to pull the release of their movie about the assassination of North Korea’s dictator. In effect, the aggressor in this case has won; the victim has capitulated. This is really unprecedented.
  • If you define cyber-warfare as aggressive action by a nation state, causing economic or physical harm, through disruption of digital systems, then this is clearly cyber-war. The only comparable case I can think of is the Stuxnet worm used by Americans and Israelis to disrupt Iranian centrifuges.

So how did the attack succeed so spectacularly? Presumably there was some combination of zero-day vulnerabilities and perhaps spear-phishing attacks against Sony’s network and people. This would get the attacker into the Sony network, but not compromise the entire house of cards.

From there, one assumes some combination of packet sniffing, DNS spoofing and/or keylogging was used to allow the attackers to expand their scope of influence. This is where a privileged access management system would have come in handy – to slow down the advance of the hackers. At the same time, there appears to have been a total failure to detect the ongoing attack. A SIEM system would presumably have helped to detect that something was amiss, as would a Data Loss Prevention (DLP) system.

Why all of these systems either were not in place to protect Sony, or did not function, is anybody’s guess. If nothing else, this incident should be a wake-up call to other organizations, reminding them that:

  • You are a target, whether you like it or not.
  • Zero-day exploits and spear-phishing attacks can be used to get inside your perimeter. Only an air gap will keep your perimeter 100% secure, and that’s not compatible with an ongoing business.
  • Security infrastructure, including patch management, perimeter defense, privileged access management, SIEM and DLP, is not optional. You need all of these components, and vigilance, to mitigate the risk inherent in attackers getting into your network.

Stay safe everyone!

ShellShock: the big security scare over nothing

September 29th, 2014

The security advisory business must be getting desperate.

Evidence: this latest Shell Shock bug. Security vendors and the media are making it out to be a serious threat to every organization. Organizations are in a panic, asking every software and services vendor to confirm that their solutions are either unaffected or patched.

Let’s break down this bug, shall we?

  • Bash is a command shell – the Unix/Linux equivalent of cmd.exe on Windows.
  • There is a bug in Bash that has likely been there for years. With this bug, text placed after a function definition in an environment variable is executed when bash starts, though it should not be.
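The bug is easy to probe for. The snippet below runs the canonical Shellshock test (CVE-2014-6271) from Python; it assumes bash lives at /bin/bash. A patched bash treats the crafted variable as inert data; an unpatched one executes the trailing echo while importing the environment.

```python
import subprocess

# Canonical Shellshock probe: the variable looks like an exported
# function definition with a command smuggled in after the body.
result = subprocess.run(
    ["/bin/bash", "-c", "echo probe-ran"],
    env={"x": "() { :; }; echo VULNERABLE"},
    capture_output=True,
    text=True,
)
print("VULNERABLE" if "VULNERABLE" in result.stdout else "patched")
```

Whether you see “VULNERABLE” or “patched” depends entirely on the patch level of the bash on your machine.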

As a user, I use Bash daily. With this exploit, I can get Bash to run programs as me. Hmm. I do that anyways. Not much of an exploit. I can’t gain new privileges – just cause it to do stuff that it would do anyways.

So what’s the problem? Well, what if I can get another program – one which normally runs bash on my behalf – to run bash with environment variables I control? OK, that’s more interesting. I could then get that other program to run commands for me – a legitimate exploit.

But what programs run bash?

Mostly sshd. SSH is the program I use to connect to another computer on the network. Usually, I sign into sshd with my own account (ID/password or public/private key). The SSH daemon (sshd) then runs bash and lets me type commands to run on the remote computer. Great. With this Bash bug I can … get bash to do illicitly what it already does for me legitimately. So what? So nothing.

What else? Well, in rather unusual circumstances, you can configure sshd to run just a few, limited commands on behalf of many people. For example, I might set up an account on a firewall called ‘monitor’, set a password on the account, configure it to only show firewall log records and nothing else, and share that account with many people. In this context, people who do have legitimate access to the ‘monitor’ account would be able to break out of the ‘command jail’ and run more commands on the firewall. This is an actual vulnerability, but not a major one — after all, ‘monitor’ is likely not all that privileged an account, and these are people I gave a password to in the first place, so hopefully they won’t do anything naughty.
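That kind of ‘command jail’ is typically built with sshd’s ForceCommand directive (or a command= option in authorized_keys). A minimal, illustrative sshd_config fragment – the account name and log path are hypothetical:

```
# Restrict the shared 'monitor' account to a single log-viewing command:
Match User monitor
    ForceCommand /usr/bin/tail -n 100 /var/log/firewall.log
```

This is the Shellshock-relevant case because sshd passes the client’s requested command to the account’s shell via an environment variable, which is exactly where the bug bites.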

The big fear is that there are web exploits. The idea here is that someone writes a web application in Bash. Now that’s just bizarre – using a command shell to provide content to a web site. In this case, anyone who can connect to the web site could cause the command shell to run arbitrary commands. Again, these commands would run as the web server’s designated user ID (usually a very unprivileged account), but this is a more serious exploit, because the set of possible attackers is large and they could come from anywhere. OK – now we’re talking about a real security problem, but wait – who writes web applications in Bash anyways? It turns out, almost nobody. It’s just not the right tool for the job. That’s like asking: “who hammers nails with a screwdriver?” I’m sure it happens, but it’s not exactly easy to do and is therefore unusual.

Bottom line: this ‘Shell Shock’ security bug is a legitimate security bug, but with near-zero impact in the wild.

So why all the panic?

It must have been a slow news week last week.

Advanced search in an IAM system: a privacy threat

September 15th, 2014

XKCD posted an amusing comic about the intersection of SQL and data privacy a while ago, here – xkcd.com/1409/.

This is interesting in that it highlights the threat to privacy posed by an innocuous-seeming search feature. Never mind the SQL syntax in the comic – imagine that you can search for users whose ‘scheduled term date’ is in the next 30 days, or whose ‘most recent performance review’ was a low grade. Even if the IAM system refuses to show you the values of those fields, the presence or absence of a user in a search result set would compromise privacy and possibly corporate security.

What to do?

You have two options:

  • Eliminate search on sensitive attributes entirely; or
  • Ensure that the IAM system filters out search results which were included on the basis of the values of sensitive attributes.

I imagine that most IAM products and deployments out there opt for the former. You can’t search on ‘scheduled term date’ and the like. That’s fine – it’s safe, I guess, but it’s also extremely limiting. What if, as a manager, I want to run that query and see which of my subordinates, some of whom are contractors, are about to reach the end of their work term? I might then wish to request extensions for some of them, because their projects are still active. Alternately, I might request to turn some contractors into employees.

In other words, simply refusing to search on these things is not a satisfactory solution – it leaves out too much useful functionality.

That brings us to the second option — build a search engine smart enough to figure out that a given record should not be included in the result set, because this particular requester should not be able to see the sensitive attribute value for this particular user. That’s hard, but it creates much more value for the end user.
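A sketch of that second option in Python. All names here (can_read, the sensitive-attribute list, the toy policy) are hypothetical and not Hitachi ID’s API; the point is only that a record is dropped when the match depended on an attribute the requester may not read:

```python
# Attributes whose values (and hence search matches) must be protected:
SENSITIVE = {"scheduled_term_date", "last_review_grade"}

def can_read(requester, user, attr):
    # Toy policy: managers may read sensitive attributes of direct reports;
    # non-sensitive attributes are readable by anyone.
    return attr not in SENSITIVE or user.get("manager") == requester

def search(requester, users, attr, predicate):
    # Include a record only if it matches AND the requester is entitled
    # to see the attribute the match was based on.
    return [u["id"] for u in users
            if predicate(u.get(attr)) and can_read(requester, u, attr)]

users = [
    {"id": "alice", "manager": "carol", "scheduled_term_date": "2015-04-01"},
    {"id": "bob",   "manager": "dave",  "scheduled_term_date": "2015-04-15"},
]
# Carol only sees her own report in the result set, even though both
# records match the predicate:
print(search("carol", users, "scheduled_term_date", lambda d: d is not None))
```

Both users match the date filter, but bob is silently excluded for carol, so the result set leaks nothing about users she isn’t entitled to evaluate.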

This is the approach we opted for at Hitachi ID. Hopefully our customers like it. ;-)

My CC was hacked – but where?

September 4th, 2014

We recently got a call from the bank, asking to verify that a series of transactions were valid. As it turned out, nothing was amiss and no action (e.g., new CC) had to be taken.

But this kind of call gets you thinking: why did the bank call? Presumably because there was a bulk compromise of some or all transactions at some retailer that the bank deals with. Recent news releases (Target, Home Depot, etc.) make this seem likely.

As a consumer, I really want to know. Which vendor got hacked? At all locations or just one (physical or virtual) retail outlet? I want to know because, frankly, I might be more careful doing business with that retailer or outlet in the future.

The banks don’t release this information today. There is a hack, they know about it, but they don’t tell me who was hacked, when, or where. This is presumably because they don’t want to anger (embarrass?) their merchant customer. I get that, but I’m their customer too, and so are millions of other consumers. I think the banks should disclose what they know to consumers, as this would reduce the total cost of the impact of the hack. It would also give merchants a much stronger incentive to lock things down, and over time may reduce the cost of attacks.

At the end of the day, the bank did nothing wrong, the merchant had an error of omission, not commission (inadequate protection) and the consumer did nothing wrong. Can’t we just all be open and transparent about the event, to help work together to keep out the bad guys?

Another day, another hack

September 2nd, 2014

This time, it’s the “Fappening” – titillating name, that. A bunch of young starlets had compromising photos lifted from their iCloud accounts and posted online.

Apple claims innocence, and they are likely telling the truth. Why would only a few dozen (or a few hundred? thousand?) iCloud accounts have been hacked? That’s not consistent with a systemic security failure at Apple.

So how did these “hackers” get in? Likely they found a stash of e-mail addresses and passwords from some other breach, on-line somewhere. There have been plenty of large scale breaches in the past year or two. They would then have looked for persons of interest in the stash of IDs/passwords and, having found some, tried some of these same login credentials on the Apple site.

Simple enough – no technical skills required – just persistence.

Are there lessons to be learned here? Sure!

  • If your account on one system has been hacked, change your passwords everywhere, not just on that one site.
  • Heck, change all your passwords once in a while, on the theory that some of them may have been hacked and you were not notified.
  • Keep different passwords on different systems, or at least on systems that have different security profiles, both in terms of how securely you suspect they are managed and in terms of how much you would care if they were compromised. Don’t use the same password on Facebook and your bank, for example.
  • Don’t store sensitive, personal data in plaintext on systems or media you don’t physically control. Putting nude pictures of yourself on the cloud? Not so smart.

Mind you, nobody ever seems to learn. I’m sure this sort of thing will happen again, soon.

Do you still limit daily password changes?

July 30th, 2014

Once upon a time, password history enforcement used code that stored hashes of the “last N” passwords for each user.  N was typically a small integer – often less than 10.  Some users would figure out what N was and, when the time came to change their password, made N+1 password changes, where the last password reverted to their original password.

Such users are both smart and stupid.  Smart, because they figured out how the underlying security system worked and how to circumvent it.  Stupid, because they opted for static passwords and thus weakened the security of their own accounts.

Some organizations figured out that they had users using the “N+1 trick,” and found a low-tech way to make it painful for the offending users.  They limited the number of times a user could change his own password in the course of a single day, typically to just 1.  A user who wanted to use the “N+1 trick” would be obliged to do so over N+1 days, which pretty much eliminated the attractiveness of the scheme.

Unfortunately, limiting the number of password changes per day is unfriendly to users.  Say I change my password, then decide that the new password is not so nice after all, and want to change it again, to a better value.  What if I change my password, forget it and need to reset it?  These are legitimate use cases, and users should be able to change their passwords as often as needed.

Fast forward to the present, where N can either be very large (say 100) or simply open-ended (Password Manager supports this).  In either case, we get the same benefit as the old “max once daily” rule, but without annoying users who just want to change their password again.
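A minimal sketch of open-ended history. This is illustrative only: a real system would use per-user random salts and a slow hash such as bcrypt or scrypt, not bare SHA-256 with a fixed salt as below.

```python
import hashlib

def hash_pw(password: str, salt: bytes = b"demo-salt") -> str:
    # Fixed salt + SHA-256 for brevity; do NOT do this in production.
    return hashlib.sha256(salt + password.encode()).hexdigest()

class PasswordHistory:
    """Open-ended history: keep a hash of every past password,
    so the 'N+1 trick' never works."""
    def __init__(self):
        self._seen = set()

    def try_change(self, new_password: str) -> bool:
        h = hash_pw(new_password)
        if h in self._seen:
            return False  # reuse of any earlier password: reject
        self._seen.add(h)
        return True

hist = PasswordHistory()
assert hist.try_change("Correct-Horse-1")
assert hist.try_change("Battery-Staple-2")
# Cycling back to the first password fails, no matter how many
# intermediate changes the user makes:
assert not hist.try_change("Correct-Horse-1")
```

Because nothing ever ages out of the set, there is no N to cycle past, and daily rate limits add nothing.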

In this context, “max once daily” loses any conceivable benefit. There is no security advantage to preventing an authenticated user from changing his own password as often as he likes.  There is no benefit in the sense of stronger measures against password reuse, if the password history data is large or open ended.  There is only down-side.

I raise this because we still, occasionally, meet organizations that insist on enforcing this rule.  Get over it – open-ended history is a better solution.  This rule should be relegated to a curiosity of history, removed from enforced password security policies.  Use a product like Password Manager to enforce open-ended history, or just set N to a large number.

Default passwords strike again!

June 11th, 2014

Amusing article in the Winnipeg Sun. A couple of “computer whiz” grade 9 kids used a search engine to find an operator’s manual for the ATM at their local grocery store. In the manual, there are instructions for signing into the ATM as an admin, along with a default password.

Lo and behold, the BMO bank machine still used the default admin password, so the kids got in. Now these are nice kids, so they visited the local branch, explained the problem and made sure that it was fixed. No harm done; rather, a good deed.

What’s interesting here is that in this day and age, a *bank* was so lax about security as to leave a *cash machine*, which is protected by exactly *zero* physical security and is installed in a public place, with a *default administrator password*.

I can’t think of a more clear-cut use case for deploying a Privileged Access Management solution. This password should have been a long, pseudo-random string, changed daily — not a factory default.
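For illustration, generating such a password takes a couple of lines with Python’s secrets module (the length and alphabet are arbitrary choices here, not anything a particular PAM product mandates):

```python
import secrets
import string

# 32 characters drawn from a 94-symbol alphabet: roughly 210 bits of
# entropy, regenerated on whatever schedule the vault enforces.
alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(32))
print(len(password))
```

The hard part is not generating the string — it is pushing it to the endpoint over an authenticated channel and storing it in an encrypted vault, which is what a PAM product actually provides.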
