Archive for 2009

Security threats and defense measures

Monday, December 28th, 2009

It’s been interesting to read and watch TV reports about the recent terrorist attempt on a KLM flight to Detroit.

The attempted attack was interesting in several ways:

  • The attacker was not some kid from the slums — he was, like the 9/11 terrorists, a kid from a wealthy background.
    • These attackers seem to fit the profile of radicalized kids from middle-class or wealthy families.
    • Given what these kids are sacrificing (everything) and what they can possibly hope to gain (nothing), they’re clearly idiots.
  • The security infrastructure failed completely:
    • The kid’s dad raised an alarm with authorities that his son was a threat.
    • The kid’s name showed up in various watch lists.
    • The kid got a US entry visa, despite all this.
    • The kid got an explosive device past at least one airport security checkpoint, and possibly 2 or 3 of them.
  • The attack failed because passengers, since 9/11, are vigilant and there is no way one dumb kid can overpower a plane full of people who want to land safely.

The response has been predictable — more security theatre:

  • No moving about the cabin in the last hour of flights.
    • So what’s a terrorist to do? Perhaps blow something up sooner?
  • Nothing (laptop, blanket, etc.) on your lap during the last hour of flights.
    • So what’s a terrorist to do? Perhaps blow something up in the bathroom?
  • All passengers and baggage screened again at the gate.
    • Are they doing pat-downs? Strip searches? If not, then the new measures wouldn’t have stopped this particular dumb kid — he wore his explosives inside his pants, apparently.
  • Less personal baggage allowed (one item rather than two).
    • Do they think this helps because terrorists need two bags to hold their bombs? Or perhaps it’s because they don’t have enough time to perform the above (pointless) search?

It seems to me that it’s not all that hard to blow up airplanes, if you’re willing to die for your cause. Airplanes aren’t falling out of the skies because there just aren’t enough people out there who are both dumb enough to want to blow themselves up and yet wealthy and cunning enough to pull it off. I think we’re safe, mostly, because that combination of wealthy, fanatic, idiot is pretty rare!

But let’s assume that the threat really is very serious — let’s imagine that there are thousands of these idiots just lining up to blow themselves up in the air, and kill hundreds of innocent bystanders in the process. What security measures would actually prevent this sort of thing?

I would argue that most of the measures that inconvenience travelers now are just theatre — they are designed to reassure travelers that the government cares about their safety, but they don’t actually stop the bad guys.

I suspect that better bomb detectors, with swabs taken from all travelers and all baggage, would be helpful.

I also propose that putting armed security staff on all flights, or at least on flights with a minimum number of people on them (say 100 or 150 passengers), would make us safer, by having a trained professional take out the kid with a bomb rather than leaving that to a volunteer passenger.

More and better fire suppression systems on planes would be good too.

And maybe, just maybe, we should pay attention to who is boarding these planes? And not issue visas so quickly to people whose own families think they are a security threat?

I’m sure even more could be done — but it’s all expensive, and not anywhere near as theatrical as what we actually get for our tax dollars.

It’s all about cached passwords…

Thursday, November 5th, 2009

One would think that, as IT infrastructure in every organization evolves
and more and more applications take advantage of a central LDAP directory
(in practice, this usually means Active Directory), the need for managing
passwords, at least at work, would gradually decline.

Gone are the days when every application has its own password — right?

And if that’s the case, a simple system to manage AD passwords, with
things like enrollment of security questions and self-service password
reset should be all that any company needs — right?

Wrong.

Increasingly, it seems that password management is becoming more
complicated, rather than less. And the complexity is not happening on
the back end, where indeed many applications are learning to externalize
at least user identification and authentication to Kerberos, LDAP,
SAML or other mechanisms.

The problem is happening on the client side of the equation. Consider:

* Mobile Windows users continue to have cached credentials on their PC.

If they forget their password and get a password reset from the help
desk, they still won’t be able to sign on until they reconnect to the
network and to the domain, because the reset changes the directory,
not the local credential cache.

* Lotus Notes users now increasingly deploy the Notes SSO client.
Guess what – it caches the user’s password too, so a password change
or reset made over the network, from a web browser or by the help desk,
will invalidate the SSO cache.

* What about VPN software? Most users have that too, and guess what,
most VPN clients also cache passwords on the PC.

* What about full disk encryption software? Not only is the password
cached, but it is used to protect the HDD encryption key on the master
boot record or a similar location.

I’m sure there are more scenarios that I haven’t thought of off-hand.
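
To make the fan-out concrete, here is what a client-side password-change
agent ends up doing, sketched in Python. Every function in it is
hypothetical, standing in for a platform-specific integration rather
than a real API:

    # A minimal sketch of a client-side password-change agent. Every
    # updater below is hypothetical: a stand-in for a platform-specific
    # integration, not a real library call.
    CACHE_UPDATERS = []

    def cache_updater(fn):
        """Register one more local cache that must track password changes."""
        CACHE_UPDATERS.append(fn)
        return fn

    @cache_updater
    def update_windows_cached_credentials(user, old_pw, new_pw): ...

    @cache_updater
    def update_notes_sso_cache(user, old_pw, new_pw): ...

    @cache_updater
    def update_vpn_client_store(user, old_pw, new_pw): ...

    @cache_updater
    def rewrap_disk_encryption_key(user, old_pw, new_pw):
        # The old password protects the HDD encryption key, so the key
        # must be decrypted and re-encrypted under the new password.
        ...

    def on_password_change(user, old_pw, new_pw):
        # A reset that only touches the directory leaves all of these
        # caches stale; the client agent must visit each one in turn.
        for update in CACHE_UPDATERS:
            update(user, old_pw, new_pw)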

So what does this mean?

For starters, any “enterprise-scale” password management system needs
to include increasingly sophisticated client software to perform tasks
ranging from assisting a locked-out user at the login prompt to updating
cached Windows and Lotus Notes passwords to decrypting and re-encrypting
the key used to encrypt the hard disk.

And this client software should work with the bewildering range of client
software variations used by enterprise users – everything from Windows
2000 to Windows 7, at various patch levels, in 32- and 64-bit versions.
And terminal services. And Citrix servers. And that’s just the Windows
platform!

And what vendors can help with this?

Not the usual “enterprise identity management” players, unfortunately.
Sun does not offer any client-side software at all. Oracle and IBM
offer a bit (a GINA DLL for locked-out users) but nowhere near enough
for the challenges described here. Microsoft? They *wrote* the OS,
but they are only now offering a GINA DLL, and it’s pretty weak —
nothing for cached passwords, certainly nothing for Lotus Notes, etc.

Maybe I’m biased. Here at Hitachi ID we’ve been working on these problems
for years. They aren’t easy to solve and maintaining functionality every
time Microsoft, IBM/Lotus or others change the platform is a real pain.
But it’s *our* job to deal with it, not our customers’ job.

Which authentication factor makes sense?

Wednesday, September 23rd, 2009

We’ve been discussing strong authentication options with one of our
customers lately, and I thought I’d post some thoughts about the
strengths, weaknesses and suitability of a couple of technology options.

The business driver our customer is facing is a corporate mandate for
strong authentication — and in particular “stronger than passwords.”
I won’t delve deeper into their needs than that — presumably they know
what they want and why they want it.

In the past, this particular organization, which happens to be a company
with world-wide operations, has deployed smart cards to authenticate
users into their Windows PCs and into the corporate VPN.

Today, there is some question about whether to continue with the smart
cards. This is really for two reasons:

* The cost of ownership of smart cards has been and continues to be
relatively high.

This is presumably due to a combination of factors: the need to
acquire physical cards and readers for each PC; the need to initialize
those cards for new hires and to collect/deactivate cards (and the PKI
certificates they carry) at termination time; and (relative to
passwords) complex and costly processes to support users whose cards
are lost or stolen or who have forgotten their PIN.

* While the smart cards work quite well for users signing into corporate
PCs (desktop or laptop running Windows and a member of the corporate
AD domain), they don’t work at all when a user wishes to access
an application using a smart phone, their home PC or (conceivably,
anyways) an Internet kiosk.

So what are the alternatives, if the basic requirement is “strong”
authentication and the practical meaning of that is “two factors?”

Really the only other technology out there is one-time password tokens,
such as RSA SecurID or Vasco Digipass. Tokens have strengths and
weaknesses of their own:

Strengths of tokens:

* While you do have to provision tokens to users, there is no associated
“reader” hardware.

* Eliminating the reader reduces hardware and integration costs.

* Eliminating the reader also means that tokens work well from smart
phones, home PCs, kiosks, etc.

But everything comes at a price. The weakness of tokens is that they
are not really suitable for mobile users signing into their laptop when
it’s not connected to the network. This is because a server is needed
to validate the token’s current pass-code, and if a user is off-line,
the user’s PC cannot contact the server to do that.
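
The offline limitation is easy to see in the protocol itself. RSA
SecurID’s algorithm is proprietary, but the open HOTP standard (RFC
4226) illustrates the same principle: token and server share a secret,
and only a party holding that secret can compute or verify a pass-code.
A minimal sketch in Python:

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """Compute an RFC 4226 one-time pass-code from a shared secret
        and a moving counter (time-based variants work the same way)."""
        msg = struct.pack(">Q", counter)                  # 8-byte counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The server must run the very same computation to validate a login,
    # which is exactly why a disconnected laptop cannot check a token code.
    assert hotp(b"12345678901234567890", 0) == "755224"   # RFC 4226 test vector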

There are other technologies, of course. For example, there are “1 1/2”
factor systems where a user clicks around an image. There are also
systems where a server sends an SMS message to a user trying to sign
into an application; the message contains a one-time, random PIN that
the user must type to sign on.

These kinds of solutions are cool, but they don’t generally work well on
smart phones — for example, would you have to hang up your smart phone
web application login to read the SMS PIN, and then relaunch the micro
web browser to sign on again? That sounds awful!

So really we have two technologies, each of which is suitable for a
different scenario:

* Smart cards: more expensive to deploy and manage, optimal for
signing into the corporate PC.

* Tokens: somewhat less expensive to deploy and manage, optimal
for VPN access, home PCs and apps accessed from a smart phone.

It seems like the best solution is actually to deploy *both*. The
trouble with that, of course, is cost. There are certainly combination
devices out there that function as both a smart card and OTP device:

http://rsa.com/experience/sid800/RSA_SID800_Final.html
http://www.globalsmart.com/VASCO_Launches_New_Version
http://www.thefreelibrary.com/Aladdin+Introduces+Software-Based+Smartcard+and+OTP+Authentication…-a0198208471

One way to save money is to look at USB-attached devices that
electronically act like smart cards but don’t require a reader.

Another way to manage costs is to automate management processes:

* Onboarding new users — and automatically allocating a device,
streamlining how it’s initialized, etc.

* Supporting users who forgot their smart card and/or token PIN —
using self-service.

* Supporting users whose device was lost or stolen, through self-service
access to temporary emergency access passwords.

Ultimately, organizations have to ask themselves if all this cost and
complexity is really warranted by the improved security. Yes, these
authentication technologies are more robust than passwords. But are
passwords really so bad, if they are complex and changed regularly?
The cost/benefit calculation may not be as compelling as one might
first think.

XML APIs? Yes. XML in the DB? No!

Wednesday, September 23rd, 2009

Dave Kearns writes about the XML infrastructure in Microsoft’s new IAM product:

http://www.networkworld.com/newsletters/dir/2009/092109id2.html?source=NWWNLE_nlt_security_identity_2009-09-23

I don’t think XML is *necessarily* a good thing!

Yes, it’s a great way to package APIs for communication between two systems. That’s what XML was always designed for and it does a great job.

But some vendors in the IAM space and elsewhere (and I think this includes Microsoft’s Forefront Identity product) store XML “blobs” in the database to represent complex data structures, such as users along with their identity attributes, roles, accounts and group memberships.

Storing XML objects in the database is actually *terrible*. It can kill performance — try running a search through a table where a given attribute of an XML blob in some column has a given value!

It also pretty much eliminates the possibility of using third-party tools to run reports against the data. You want to use Crystal Reports, for example, to see what users are in some group and have some role? Too bad for you! That data is encoded in XML blobs that the reporting program can’t parse.
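
A toy illustration of both problems, using SQLite from Python (the schema here is invented for this example, and is certainly not Forefront’s actual design). Finding members of a group in a normalized table is one indexed query; the blob version forces the application to fetch and parse every row:

    import sqlite3
    import xml.etree.ElementTree as ET

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users_blob (id INTEGER PRIMARY KEY, profile TEXT)")
    db.execute("CREATE TABLE user_groups (user_id INTEGER, grp TEXT)")
    db.execute("CREATE INDEX ix_grp ON user_groups (grp)")

    db.execute("INSERT INTO users_blob VALUES (1, ?)",
               ("<user><name>alice</name><group>admins</group></user>",))
    db.execute("INSERT INTO user_groups VALUES (1, 'admins')")

    # XML blob: the database cannot index inside the column, so the
    # application fetches and parses EVERY row just to test one value.
    admins = [uid for uid, blob in db.execute("SELECT id, profile FROM users_blob")
              if any(g.text == "admins"
                     for g in ET.fromstring(blob).iter("group"))]

    # Normalized schema: one indexed lookup, and one that a third-party
    # reporting tool can run directly against the database.
    admins2 = [uid for (uid,) in
               db.execute("SELECT user_id FROM user_groups WHERE grp = 'admins'")]

    assert admins == admins2 == [1]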

So yeah, having a web services, XML-based API is great. But storing XML in the database is nothing more than the product of lazy programmers who can’t be bothered to update the database schema to reflect the evolving needs of their app.

Windows domain controllers – buggy by design?

Tuesday, August 18th, 2009

We’ve been working on our Privileged Password Manager lately, and have
been thinking about how to securely and reliably manage the passwords
for service accounts on Windows.

Some background information may be in order at this point. Unlike on
many other operating systems, services on Windows are executed using
both a login ID and a password. That is, services either run in the
context of a built-in OS user called SYSTEM, which has an extremely
broad range of privileges, or they run in the context of a named user,
who may be assigned more restricted security rights. The Windows Service
Control Manager (SCM), which is responsible for starting and stopping
service programs, is given the login ID **and password** of each such
user — called a service account — when a service is first installed,
so that it can launch processes as that user.

We start running into problems when we try to change passwords for
service accounts. Clearly, if a service account password is changed,
SCM has to be notified of the change. If we forget to do that, the next
time SCM tries to start the service, it will fail, because it will try
to use the old password.

This problem extends beyond just services. Scheduled Tasks and IIS
web sites have the same problem. IIS actually knows how to change
the password for its “anonymous user” accounts. The Windows Scheduler
does not. Third party programs often have the same requirement —
they need to know the password for certain accounts that are part of
their software.

Starting with Windows Server 2008 R2, there is a formal concept called
a managed service account, and SCM can change the password for these
accounts. So long as SCM is the *only* thing that needs the password
for a given service account, this works great. Unfortunately, if a
third-party application also needs that password, the managed service
account construct is not helpful.

The situation gets more complicated when we consider services that
execute on domain controllers.

On a Windows 2000, 2003 or 2008 domain controller, there is no concept
of a local user. All users are AD domain users. That means that any
service that runs as a user other than SYSTEM does so with the login ID
and password of an Active Directory user. If you change that AD user’s
password, you also have to tell SCM on the DC the new password.

That’s fine … but what if services on a thousand different DCs run with
the same AD service account? Suddenly, changing a service account’s
password becomes a project — you have to have a comprehensive list of
DCs and you have to check every DC to see if it has services running
as that user. If so, you have to tell SCM on that DC the new password.
If you fail to connect to a DC — for example, if the network connection
to the site where the DC lives is temporarily down or if the DC has
a hardware problem — you must retry connecting to that DC until you
succeed.
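
Concretely, the coordination job ends up looking something like the
sketch below, built around the standard sc.exe command line. The
account matching is naive and the error handling minimal; treat it as
an outline, not production code:

    import subprocess

    def services_running_as(dc: str, account: str) -> list[str]:
        """List services on one DC whose logon account matches `account`.
        Assumes admin rights and that sc.exe can reach \\dc."""
        out = subprocess.run(["sc", f"\\\\{dc}", "query", "type=", "service",
                              "state=", "all"],
                             capture_output=True, text=True, check=True).stdout
        names = [line.split(":", 1)[1].strip()
                 for line in out.splitlines() if line.startswith("SERVICE_NAME")]
        matches = []
        for name in names:
            qc = subprocess.run(["sc", f"\\\\{dc}", "qc", name],
                                capture_output=True, text=True).stdout
            if account.lower() in qc.lower():   # SERVICE_START_NAME line
                matches.append(name)
        return matches

    def push_new_password(dcs: list[str], account: str, new_pw: str) -> list[str]:
        """Tell SCM on every DC the new password; return the DCs that
        failed and therefore must be retried until they succeed."""
        failed = []
        for dc in dcs:
            try:
                for svc in services_running_as(dc, account):
                    subprocess.run(["sc", f"\\\\{dc}", "config", svc,
                                    "password=", new_pw], check=True)
            except (subprocess.CalledProcessError, OSError):
                failed.append(dc)
        return failed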

In other words, running services on multiple DCs with the same AD user
creates serious problems and makes it very difficult to change passwords
for that user.

This problem only arises because AD DCs don’t have a notion of a
“local user” as distinct from a “domain user.” That’s weird, because
even Windows workstations differentiate between domain and local login
accounts.

In other words – not having local users separate from domain users on
DCs is BROKEN BY DESIGN.

To avoid this problem, organizations that run services on a Windows DC
with a non-SYSTEM service account should create an account for just that
one service, on just that one DC. If they run the same service on a
different DC, they should create a separate service account for that DC,
and so on.

This may sound like a pain, but it’s better than the problems that arise
when trying to coordinate password changes to shared service accounts.

Identity 101 — build a consolidated, clean view?

Thursday, July 30th, 2009

I’m at the Burton Group’s Catalyst conference this week in San Diego,
where interesting conversations about identity management, entitlement
management, role management and more are everywhere.

One conversation got me thinking about what now seems like an old
strategy: building a “master system of record” as a pre-requisite to
deploying an enterprise identity management system. We used to work with
our customers to do just that, back when our company was called M-Tech.
I take it that most of our software vendor peers and our collective
systems integrator partners did the same thing.

A couple of years ago, when we were designing the then-next/now-current
generation of our identity management suite, we examined this idea more
closely, and came away with the conclusion that it wasn’t really necessary.

Yes, you read that right. It’s not necessary to consolidate all your
systems of records and construct a “gold standard” directory of users,
from which your identity management processes will flow.

Our conclusion is based on a few ideas:

First, it’s expensive and time-consuming to build this database … so
if we want rapid deployment of an enterprise IDM system, it’s preferable
to skip the huge data cleansing project.

Second, there’s the observation that the user population is a moving
target and it’s very difficult to get a perfect data set about all
those users, since they keep getting hired, fired, moved, name-changed,
responsibility-changed and so on.

So if we don’t put together a “gold standard” data set, what will happen?
To answer that, we should first think about how that data set is intended
to be used. Mostly, I think it’s useful for automated provisioning,
de-provisioning and identity synchronization between systems. If you have
a pure and clean HR feed, for example, you might synchronize it with
your corporate AD, RACF system, SAP user base and so on — and then those
would be perfect too.

But really – you don’t want to look at every user on every system on every
pass through the automation process. That’s expensive – and slow. Instead,
it makes more sense to look at changes to a system of record — who got hired
today? Who got fired today? Who changed jobs? Whose surname got changed?
Those changes are a much smaller data set, and each of them presumably
represents a state change — either to correct an error in the data or to
update data that used to be correct to its new state.

So if we capture changes and propagate them out into the enterprise,
we’ll gradually improve the quality of data on all systems, but without
having to clean everything up in one fell swoop.

And for that matter, what’s so special about systems of record? Why
don’t we monitor *every* system for changes that were initiated outside
of the IDM system? Any change could merit a response — it could
be a new e-mail address set in the Exchange server, which we should
replicate to HR. It might be a change in the user’s cubicle number
in a white pages application, which might be handy to copy to AD.
It might even be an unauthorized addition of a user to the Administrators
group in AD, which could trigger automatic removal and a security alert.

It’s not just HR we should be watching…

This approach is auto-provisioning based on propagating change events,
rather than directory synchronization based on comparing the full state
of two systems. It’s also the difference between our old ID-Compare
automation engine and our new ID-Track engine.
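
The distinction is easy to express in code. In this minimal sketch,
the connector objects and their methods (list_users, changes_since and
so on) are hypothetical interfaces invented for the example, not our
product’s actual APIs:

    def full_state_sync(source, target):
        """Directory synchronization: compare the complete state of two
        systems and reconcile any differences. Expensive and slow,
        because every user is examined on every pass."""
        src = source.list_users()   # {user_id: attributes}
        dst = target.list_users()
        for uid, attrs in src.items():
            if dst.get(uid) != attrs:
                target.update_user(uid, attrs)

    def propagate_changes(source, targets, cursor):
        """Event propagation: ask only for what changed since the last
        poll (hires, terminations, renames, ...) and push each change
        out. Data everywhere gets gradually cleaner as events flow."""
        for event in source.changes_since(cursor):
            for target in targets:
                target.apply(event)
        return source.latest_cursor()   # remember where we left off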

This strategy means that you can deploy a system that makes data cleaner at
every step, without having to make data perfect before you go live.

Another way to think about it is that “perfect” is the enemy of “good,”
and I think what we really want is “good,” without the pre-requisite of
“perfect.”

Export controls, crypto as a munition

Thursday, July 23rd, 2009

OK – this blog post may be 20 years too late, but it’s interesting that
this nonsense is still going on.

We just completed an internal review of our corporate export controls.
While the premise that “cryptography is a munition” is idiotic, the
fines that companies can face if they are caught exporting it without
a license are nothing to sneeze at.

In the process, we learned that the US – and a variety of other countries
– still have strong export and import controls over cryptographic
technology. The biggest offenders are the US, France, Israel and Hong Kong
(oddly enough).  China is also threatening to require that all crypto
in their country be “Chinese-origin” (and presumably Chinese-back-door).

Now of course this is nonsense. Algorithms don’t explode, so classifying
cryptography as a “munition” only makes sense to the mentally
challenged. Moreover, if a criminal/terrorist/bad guy really wants to
protect their communication, of course they will do so – SSH, PGP and
other tools all support strong encryption and are all free and easy to
download. The Windows OS and even Blackberry phones also incorporate
strong crypto – world-wide.

This means that “we want to keep crypto out of the hands of bad guys”
is a completely nonsense argument. Nobody buys it.

And if keeping crypto out of the hands of bad guys is not the objective,
then the only remaining possibility is that export controls are intended
to keep cryptography out of the hands of law-abiding citizens.

Think about that for a minute. The US government wants to make it
more difficult for the citizens of other countries to communicate
securely. Since bad guys will presumably use crypto anyways, this
means that what they really want is to violate citizens’ privacy –
domestically and abroad.

With that in mind, consider what happened in Iran recently – popular
unrest and a violent government crack-down. One would think that what the
US government really wants is exactly the opposite of that: to empower
citizens everywhere to communicate freely and safely, without fearing
government interception. That wouldn’t endanger benign government (like
the US?) but it would definitely cause headaches for dictatorships like
those in Iran and China.

Maybe the protesters in Iran would have had an easier time organizing
if they all had mail clients that embedded PGP. That would surely be in
the US national interest!

So export controls on cryptography are backwards!  If US foreign policy
interests are really the motivation, then the US should *promote* strong
cryptography for citizens everywhere.  That wouldn’t cause real harm to
US law enforcement or intelligence services, since the objects of their
surveillance already have strong crypto. On the other hand, it would
cause harm to those governments where the US (and the West in general)
would like to see regime change.

So what’s wrong with this picture? When will this old, cold-war thinking
finally give way to a pragmatic realization that crypto-for-all is good
for the US?

Copyright laws in Canada

Wednesday, July 22nd, 2009

Today I learned that the Canadian federal government is seeking public
input in order to refresh copyright law. So I offered some — copied
here.

If we are to update copyright legislation in Canada, the first question
to ask is what social good the law is supposed to promote.

I think everyone will agree that copyright exists to make it possible
for creative people and organizations to be (financially and otherwise)
compensated for their effort. If I write a book, I expect to be able
to profit from its publication. If I write and play a song, the same
is true.

I don’t think this principle is any different in the digital age.
What has changed is the technical ease with which consumers of
digitally-encoded content can break copyright law and redistribute
content in a manner which does not reward the original creative person
or organization.

This has caused content brokers (e.g., music distributors, movie studios)
to panic, because it threatens their business model. Note that music
is not threatened, and it’s likely that movies aren’t either — it’s
the companies that “buy content wholesale and sell it retail” who are
threatened.

While it’s important to reward content creators, it would be much
harder to argue that a broker, who merely purchases content from its
creators, promotes that content (i.e., marketing) and sells it, deserves
the same protection by law.

The Internet has a powerful effect of removing friction from the
marketplace. It replaces business models based on brokerage with
ones based on adding value to a product or service.

Real estate agents, stock brokers and travel agents have all already
learned that in the Internet world, they must add value to a transaction
or go out of business. The same should be true of book, music and
movie distributors.

Consumers have rights too. I think it’s unreasonable to have to
pay twice for the same content, if I want to use it in a different way.
For example, if I purchase a CD, I should not have to pay again to
listen to it on my MP3 player. If I buy a movie, I should not have
to pay again to back it up, play it on a different device or invite a
friend over to watch it with me.

So what can we conclude from all these observations?

* Copyright is a useful tool and should continue.
– But … it should protect the author, not the broker.
– This means that it should be reasonably short — say 10 or 20 years,
and definitely no longer than the life of the author.

* Copy protection is an impediment to fair use. While content
publishers and brokers should be free to try to sell content that
has been encumbered by copy protection, they should not be
encouraged by the legal framework to do so:
– Publishers should be required to advertise what sorts of
copy protection are embedded in their products, so that
consumers can make an informed choice about whether they
want to buy such encumbered products. Publishers may
quickly learn that consumers don’t like being restricted!
– Mechanisms that allow consumers to bypass such copy protection
should be explicitly legal. This is sort of the opposite
of the DMCA – an ill-conceived US law.

* Some use cases will continue to be complicated and we’ll have to
figure out, as a society, how to support them:
– How does a library work in a DRM-free digital age? If I borrow
a CD from the library and encode it into an MP3 so that I can
play it on my commute, that seems like a legal use case. But if
I return the CD and keep the MP3, that’s a copyright violation.
How will we, as a society, persuade people to not do that?
– If I’m an artist and I embed audio and video snippets into
an A/V compilation video, that should be legal. But if I embed
a whole 3.5 minute song, or 10 minutes of video from a commercial
movie, that probably shouldn’t be allowed, unless I pay for
the rights. Where is the threshold between these scenarios?

With updated copyright legislation, content authors should be protected
by law (not by technology!). Content brokers will probably have to
come up with new business models, where they truly add value to a product,
or else go out of business.

Business models should never be based on “we will prevent you from doing
X” — they should always be based on “we will enable you to do Y.”

Using the Internet, content authors are already able to survive and
thrive without large, corporate-style brokers in any case — the argument
that the sky is falling because music labels have declining revenue is
complete bunk, as anyone who searches for independently-published music
online can see.

Just my $0.02. 🙂

Pass phrases – the illusion of security?

Tuesday, June 30th, 2009

Apparently one of our partners is looking at replacing their various
internal system and application passwords, which are subject to password
strength policies and regular expiration, with a universal passphrase,
which must be somewhat long, but which users can keep unchanged for about
a year at a time.

This is an interesting approach to the age-old password management
problem that most organizations face and it got me thinking about just
how secure passphrases really are.

Others have written on this subject, so I’ll try not to repeat too much:

http://en.wikipedia.org/wiki/Passphrase

http://technet.microsoft.com/en-us/library/cc512613.aspx

I also talked to a friend of mine, who happens to be a linguist and knows
a thing or two about entropy in English-language text.

The bottom line is pretty simple:

* Users will most likely choose a series of words for passphrases.
Perhaps a sentence, which means something to them.

* There aren’t that many commonly used words in the English language.
I ran an analysis against all my mail folders, and found fewer than
20,000 distinct words (i.e., letter sequences that appear at least twice
in message text); a rough sketch of that analysis appears after this list.

* If we assume a 5-word passphrase, the size of the search
space is at most 20k ^ 5 or 3.2 * 10^21 — sounds secure!

* BUT … the 100 most popular words in my mail folder represented
over 50% of the word occurrences, so the effective search space is more
like 200^5 or 300^5 — 3.2 * 10^11 to 2.43 * 10^12.

* This doesn’t even take into account grammar, which should make
some word pairs much more likely than others. I’d take 3*10^11
as a definite upper bound on the security of an English passphrase!

* My linguist friend suggested that the average entropy of a letter
in an English word is no more than 1.5 bits — if it were higher,
English would be too hard for us to learn. Since English words average
about 5 letters, a 5-word passphrase carries roughly 1.5 * 5 * 5 = 37.5
bits of entropy, for a search space of about 2^37.5 = 1.9 * 10^11,
nicely consistent with the estimate above.
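
As promised above, here is a rough sketch of the mail-folder word count
in Python (it assumes plain-text message files, and the tokenizer is
deliberately crude):

    import re
    from collections import Counter

    def word_stats(paths):
        """Count distinct words (appearing at least twice) across a set of
        plain-text mail files, and the share held by the top 100 words."""
        counts = Counter()
        for path in paths:
            with open(path, encoding="utf-8", errors="ignore") as f:
                counts.update(re.findall(r"[a-z']+", f.read().lower()))
        repeated = Counter({w: n for w, n in counts.items() if n >= 2})
        total = sum(repeated.values())
        top100 = sum(n for _, n in repeated.most_common(100))
        print(f"distinct words seen at least twice: {len(repeated)}")
        print(f"top-100 share of all occurrences: {top100 / total:.0%}")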

In comparison, consider an 8 character password, with mixed case,
digits and 3 possible punctuation marks. Assume it’s really random —
password choice is subject to a policy engine which prevents the use of
dictionary words, etc. Such passwords have a search space of something
like (26+26+10+3)^8 = 3.2 * 10^14.
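
All of the arithmetic above fits in a few lines of Python, which makes
it easy to play with the assumptions (vocabulary size, phrase length,
character classes):

    import math

    common_words = 20_000        # distinct words found in the mail corpus
    effective_words = 200        # top-100 words cover >50% of occurrences
    phrase_len = 5

    naive_space = common_words ** phrase_len         # ~3.2e21
    realistic_space = effective_words ** phrase_len  # ~3.2e11

    alphabet = 26 + 26 + 10 + 3  # mixed case, digits, 3 punctuation marks
    password_space = alphabet ** 8                   # ~3.2e14

    for label, n in [("naive 5-word passphrase", naive_space),
                     ("realistic 5-word passphrase", realistic_space),
                     ("random 8-char password", password_space)]:
        print(f"{label}: {n:.2e} (~{math.log2(n):.0f} bits)")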

This makes the whole passphrase proposition sound a bit fishy to me.
Organizations should either use a *really long* passphrase, or still
require mixed case, special characters, etc. in their passphrase.
But if they do that — will users really benefit from passphrases?
Won’t they just be really really long passwords, which users still hate
and are even more likely to write down?

Of course, organizations could just stick with the “evil” they know
— modest-length passwords, that are subject to complexity rules and
change every 2-3 months. This structure has been analyzed to death and
we have a pretty good idea of how secure they are (or aren’t, depending
on the rules, etc.).

Hello World!

Tuesday, June 16th, 2009

Hello World!

Or perhaps that should be printf("Hello, World!\n"); — because that
reflects my technical background.

I’m probably a bit late to the game, as this is my first blog post,
but the common wisdom seems to be that it’s better late than never.

About me

For those who don’t already know me, my name is Idan
Shoham and for the past 17 years I’ve been the CTO of a company now
called Hitachi ID Systems, Inc. (you may be more familiar with our old
name: M-Tech). At Hitachi ID we write software that helps medium to
large organizations better manage the identities, security privileges,
passwords and other authentication factors of their users, both internal
(employees and contractors) and external (partners and customers).

This blog

The point of launching this blog is for me to share ideas and sometimes
rants — about identity management technology and projects, about
how to develop and deploy software in general, about how to secure
authentication, authorization and audit processes and more.

Hopefully this will be useful information to you — our visitors. This
blog should shed some light on our thought processes here at Hitachi
ID and on the direction we are taking with new products and services.

Why you should read this

I hope to make this blog interesting enough to attract return
visits. If you’re in the middle of an identity management project or
just starting one, you should find the posts helpful.
If you are developing or refreshing your organization’s identity
management strategy, I hope to help you choose a direction.

Feedback

This blog is open to feedback from readers. Initially, I’ve left the
“moderated” setting on, but the parameters for accepting or rejecting
comments are pretty simple. Feedback will be allowed by default, and
rejected if and only if it is offensive, SPAM or wildly off-topic.

Hopefully you will post lots of feedback and we can make this a
conversation, rather than a publication.