Automatically generate LDAP entries with OpenDJ make-ldif




Do you need to generate a large number of LDAP entries for benchmark testing? I was all set to write a utility to do this when a colleague pointed me to make-ldif, which comes with OpenDJ.

In a nutshell, make-ldif uses template files to create sample LDIF data, which can then be imported into your LDAP server. make-ldif can generate random data and/or use various patterns (for example, selecting from a list of cities, phone numbers, etc.).
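
For reference, here is a rough sketch of what a template might look like. This is modelled on the example.template that ships with OpenDJ, but the branch, attribute values and entry count below are purely illustrative:

define suffix=dc=example,dc=com

branch: [suffix]

branch: ou=People,[suffix]
subordinateTemplate: person:1000

template: person
rdnAttr: uid
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
givenName: <first>
sn: <last>
cn: {givenName} {sn}
uid: user.<sequential:0>
mail: {uid}@example.com
telephoneNumber: <random:telephone>
userPassword: password

Generating the LDIF is then something along the lines of the following (check make-ldif --help for the exact option names in your version):

make-ldif --templateFile example.template --ldifFile generated.ldif --randomSeed 0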

Check out the documentation




Fun with Active Directory

As we all know, Active Directory can cause quite a few head-scratching moments. Here I'm trying to collect some of my findings, so that others won't have to suffer through the same issues.

Creating user fails with LDAP 53

LDAP 53 is UNWILLING_TO_PERFORM. There are two main candidate reasons for this to happen during an ADD operation:

  • SSL isn't used. AD will not process LDAP operations that involve setting/modifying the unicodePwd attribute if the request was made over a non-secure connection. Always make sure you use LDAPS when connecting to AD.
  • The provided password does not satisfy the password policies configured in AD. This is a bit harder to identify, but for the sake of testing try to use complex passwords and see if the user gets created like that.

Changing password fails

As stated already, AD requires an SSL connection when sending the unicodePwd attribute. As if that weren't enough, you also have to enclose the password value in double quote characters, and the password must be UTF-16LE encoded. A sample code snippet would look something like:

public byte[] encodePassword(String password) {
    // wrap the password in double quotes and encode it as UTF-16LE (Charset is java.nio.charset.Charset)
    return ("\"" + password + "\"").getBytes(Charset.forName("UTF-16LE"));
}

By the way, there are two main ways to change passwords (both are sketched in code after this list):

  • A non-administrative password change: you BIND as the user whose password needs to be changed, and then you send a ModifyRequest with a DELETE and an ADD ModificationItem containing the old and the new password respectively (in the encoded format of course).
  • Administrative password reset: in this case you BIND as an admin user and then send a ModifyRequest with a REPLACE ModificationItem only containing the new password value.
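
To make the two flavours a bit more concrete, here is a minimal sketch using plain JNDI. The ctx, userDn, oldPassword and newPassword variables are placeholders, and encodePassword() is the helper shown above; the context is assumed to be already bound over LDAPS (as the user for the change, as an admin for the reset):

import javax.naming.directory.BasicAttribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.ModificationItem;

// Non-administrative change: DELETE the old value, then ADD the new one
ModificationItem[] change = new ModificationItem[] {
    new ModificationItem(DirContext.REMOVE_ATTRIBUTE,
        new BasicAttribute("unicodePwd", encodePassword(oldPassword))),
    new ModificationItem(DirContext.ADD_ATTRIBUTE,
        new BasicAttribute("unicodePwd", encodePassword(newPassword)))
};
ctx.modifyAttributes(userDn, change);

// Administrative reset: REPLACE with the new value only
ModificationItem[] reset = new ModificationItem[] {
    new ModificationItem(DirContext.REPLACE_ATTRIBUTE,
        new BasicAttribute("unicodePwd", encodePassword(newPassword)))
};
ctx.modifyAttributes(userDn, reset);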

Easy, right?

UPDATE

It appears that the DELETE/ADD combination can fail with the following error:

Constraint Violation: 0000052D: AtrErr: DSID-03190F80, #1:
	0: 0000052D: DSID-03190F80, problem 1005 (CONSTRAINT_ATT_TYPE), data 0, Att 9005a (unicodePwd)

In our case the root cause was the default minimum password age policy in AD (1 day). Once I ran the following command from an Administrator CMD:

net accounts /MINPWAGE:0

AD started to like my ModifyRequests.

ForgeRock OpenAM and Google Authenticator: Will it blend?




OpenAM provides built-in support for OATH authentication (not to be confused with OAuth, which is a different kettle of fish altogether).


OATH defines an open standard for One Time Password (OTP) generators. These can be HMAC counter based (HOTP) or time based (TOTP).


Google Authenticator is a free application that you can download for your Android or iOS device that provides an implementation of the OATH TOTP standard.   It turns out to be surprisingly easy to configure Google Authenticator to work with OpenAM.


Let's walk through the steps.

We will configure this in a realm called "test". Realms are a kick-butt feature of OpenAM that allow us to create isolated administration, data store and policy domains. A common use would be to configure separate environments for customers and employees, but realms are also great for creating test environments.


Navigate to your test realm, click on the "Authentication Tab". Under "Modules" edit the OATH module. It should look something like this:




The key attributes we need to set for Google Authenticator are:
  • Auth Level.  This is a higher strength multi-factor module, so we assign a value of 10 here. 
  • One Time Password Length: This is the length of the OTP that will be displayed by the Google Authenticator application. Six is the default for Authenticator. 
  • Minimum secret key length: I used 8 for this example, which is too short for production. This is the length (in hex characters) of the encoded secret. 
  • Secret Key Attribute: This is the name of the ldap attribute where the secret key is stored. For this example I am using the "title" attribute. This isn't a great choice, and for production you would extend your ldap schema with a dedicated attribute. 
  • OATH Algorithm:  TOTP for Google Authenticator
  • Last Login Time Attribute:  The OATH TOTP module needs to store the last login timestamps (UNIX long time) in this attribute.  I am using "description" but again you should extend your schema with a dedicated attribute.
You will want to ensure that the attributes chosen above are fetched by OpenAM. Navigate to your datastore configuration for the realm and make sure that "LDAP User Attributes" contains the two attributes you have configured in the OATH module.

Now that we have configured the OATH module, we need to modify the default ldapService chain to use the new module:


The above chain will first authenticate the user against the Datastore (i.e. ldap username/password), and if that is successful will pass them on to the OATH module. Both modules are "REQUIRED" - so the user must pass through both to authenticate. 

Registering the Shared Secret 


Before we can test this out, we need to seed our LDAP with the shared secret. The OpenAM server and Google Authenticator must share a common secret to authenticate the user. Briefly, the way this works is that the Authenticator client will hash the current time using the shared secret to obtain an OTP. The server will do the same and compare the result with the OTP entered by the user.
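
To illustrate the idea, here is a rough sketch of the standard TOTP calculation (RFC 6238 / RFC 4226) that both sides perform. This is not OpenAM's code, just a minimal example using the JDK's HMAC-SHA1 support, with "hello" as the placeholder secret used later in this post:

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpSketch {

    // Compute a 6-digit TOTP for the given raw secret using 30-second time steps.
    public static String totp(byte[] secret, long epochSeconds) throws Exception {
        long counter = epochSeconds / 30;
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) {              // counter as an 8-byte big-endian value
            msg[i] = (byte) (counter & 0xFF);
            counter >>= 8;
        }
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);
        int offset = hash[hash.length - 1] & 0x0F;  // dynamic truncation (RFC 4226)
        int binary = ((hash[offset] & 0x7F) << 24)
                | ((hash[offset + 1] & 0xFF) << 16)
                | ((hash[offset + 2] & 0xFF) << 8)
                | (hash[offset + 3] & 0xFF);
        return String.format("%06d", binary % 1000000);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(totp("hello".getBytes(StandardCharsets.US_ASCII),
                System.currentTimeMillis() / 1000));
    }
}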

For convenience the easiest way to register a secret with the Authenticator application is to use a QR code. The user simply scans the generated QR code to setup the account. The same secret is registered with the LDAP server.  

Ideally we would have a nice registration page that handles this - but for now we will do this manually.

The tricky bit is the encoding of the secret for storage.  The secret key stored in LDAP is the Base16 encoded value of the secret. For generating a QR code for scanning, we must use the Base32 encoding. 

Using the example below:
Shared Secret: "hello" (without the quotes)
Base16 encoded value: 68656C6C6F (http://online-calculators.appspot.com/base16/ to generate this)
Base32 encoded value: NBSWY3DP (http://online-calculators.appspot.com/base32/ )
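
If you would rather compute these values in code than with an online converter, something along these lines (using Apache Commons Codec, purely as an example) produces the same results:

import org.apache.commons.codec.binary.Base32;
import org.apache.commons.codec.binary.Hex;

public class SecretEncoder {
    public static void main(String[] args) {
        byte[] secret = "hello".getBytes();                          // the shared secret
        String base16 = Hex.encodeHexString(secret).toUpperCase();   // store this in LDAP
        String base32 = new Base32().encodeToString(secret);         // put this in the otpauth:// URL
        System.out.println(base16);   // 68656C6C6F
        System.out.println(base32);   // NBSWY3DP
    }
}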

To register the QR code we create a special URL of the form:

otpauth://totp/test@forgerock.com?secret=NBSWY3DP

Where the secret is the Base32 encoded value that we determined above. This URL can be pasted into a QR generator (http://goqr.me/ for example) and scanned into your Google Authenticator application. Here is an example:





We can use an LDAP client such as Apache Directory Studio to register the Base16 encoded secret:
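
If you prefer the command line, an equivalent ldapmodify operation would look something like this (the DN is just a placeholder for the test user's entry, and we are writing the Base16 value into the "title" attribute configured earlier, so adjust the host, port, credentials and DN for your own directory):

dn: uid=test,ou=people,dc=example,dc=com
changetype: modify
replace: title
title: 68656C6C6F

ldapmodify -h opendj.example.com -p 1389 -D "cn=Directory Manager" -w password -f secret.ldif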





Let's give it a whirl!  Navigate to the login page for your realm. For example http://openam.example.com:8080/openam/XUI/#login/test

You will first see the LDAP module challenge:


Followed by the OTP challenge:



Enter the OTP displayed by your Google Authenticator Application.  You should be authenticated and redirected to the profile landing page.

Troubleshooting

If you run into problems, enable the debug log at message level (Configuration -> Servers and Sites -> your server url -> Debugging). Look for messages with "oath" in the string.

The logs are under your openam config directory (openam-config/openam/debug).



The Evolution of Identity & Access Management

Identity and access management is going through a renaissance. Organisations, both public and private, have spent thousands of hours (and dollars) implementing and managing infrastructure that handles the creation of identity information, as well as the authentication and authorization tasks associated with those identities. Many organisations do this stuff because they have to. They're too large to perform these tasks manually, or perhaps have external regulations that require them to have a handle on the users who access their key systems. But how and why is all this changing?



The Enterprise and The Perimeter

Changing Identities
15 years ago, identity and access management was focused on stuff that happened within the corporate firewall. Employees joined the company, were entered into the payroll system, and 'IT' set them up on the necessary systems they needed. That setup process was often manual, inconsistent and perhaps involved several different application and system owners and administrators. IT, being IT, would look to automate that account creation process. This was driven partly by business benefits (new employees don't need to wait 3 days to get working) and partly by the cost savings associated with migrating manual tasks to a centralised provisioning system.


Cloud, Services & The Modern Enterprise

Organisations are not the same as they were 15 years ago. I talked about this recently with the onset of the 'modern' enterprise. What does that mean? Due to economic changes and changes in working patterns, organisations are now multifaceted, complex beasts. No one team or department can be associated with a single process or business function. Supply chains are now swollen by outsourced providers, all rapidly engaged and critical to short term product launches or business deliverables. These business changes rely heavily on an agile identity management and authentication infrastructure that can not only quickly engage new partners or suppliers, but also track, authorize, audit and remove users when they are no longer required or a partner contract expires.

Continually Connected

Identity in the consumer sense has also altered. More and more individuals have an identity context online. That could be something like a Facebook or LinkedIn account, right through to personal email, banking and ecommerce, as well as consumer outsourced services such as Spotify, Kindle books or iTunes. Individuals are embracing applications and services that can give them non-physical access to experiences or data stores, all centred about their own identity. These online consumer identities are of course only valid if the identity owner is able to connect to those services and sites. That connectivity is now ubiquitous, making life experiences richer, whilst increasing demands for consumer scale infrastructure.

Standards and More Standards

I recently watched the Gartner on-demand catch-up series from the recent Catalyst event, which was neatly titled the "Identity Standards Smackdown". A panel of 5 leading identity go-getters represented some of the emerging and long-standing IAM standards, promoting their worth in the current landscape. The five represented were OAuth2, SCIM, XACML, OpenID Connect and SAML2. The details of each are varied and there are numerous pros and cons to each. What is interesting is that we are now at a point where all of these standards are playing a part in both public and private enterprise adoption, acting as catalysts for new service offerings by service and software vendors, as well as acting as a yardstick to aid comparisons, maturity metrics, interoperability and more.

The standards all play slightly different parts in the provisioning, authentication and authorization life cycle, but the healthy debate goes to show that both end user and vendor interest in this space is as hot as it has ever been.

By Simon Moffatt


Identity Repositories

Identity Repositories (or Data Stores if you'd like) are arguably the most important part of OpenAM. They enable you to manage identities within your organization (to a certain degree), and are involved in many processes in general:

  • during authentication it is often necessary to contact the IdRepo to get information about the user (like the e-mail address, so an OTP code can be sent out)
  • after authentication takes place the user's profile is looked up (for example to check that the user account is not inactive)
  • once you have a session you can easily configure a Policy Agent (PA) to retrieve profile attributes in HTTP headers
  • when you authenticate via SAML, the IdRepo is contacted to retrieve the values for the SAML attributes in the assertion
  • etc

Some (boring) background about IdRepos

As you can see IdRepo is quite an important part of the product, so let’s try to get a better understanding of it.
In general, a given identity repository basically represents a single source of identity data. In this case identity can mean users and groups (and with ODSEE it can even mean roles and filtered roles). While it is common to use a regular relational database to store application data, in enterprise systems it is much more common to use an LDAP-capable directory server (OpenDJ, Active Directory, etc.) instead. Since OpenAM is meant to be used as an enterprise solution it has been designed to work mainly with directory servers, but it is not limited to those (there is also a Database IdRepo implementation, and with some patience any other sort of implementation can be written just as well). To keep it simple, let's only talk about directory servers for now…

So as you can already guess, OpenAM has an IdRepo implementation named LDAPv3Repo, which should be able to communicate with any LDAPv3-compliant directory server. While this sounds great, it also means that the implementation has grown quite big over time. There are parts, for example, where special handling is required for a given directory server type (yes, I'm talking about AD)…
In versions prior to 10.2.0, the LDAPv3Repo implementation used the quite old Netscape LDAP SDK to perform LDAP operations. Although this solution works, I must say that support for the Netscape SDK is pretty much non-existent (and on top of that the whole SDK has been basically repackaged within OpenAM), so when it comes to debugging, it can become quite difficult to track down a given problem (again, we are talking about legacy code here). Because of this, and the fact that the OpenDJ team has started to develop its own LDAP SDK (which by the way ROCKS!), we decided to drop Netscape from the product. The good news is that this change already starts with 10.2.0 (though unfortunately there will still be parts of OpenAM using the old SDK, but hopefully not for too long).

So how do IdRepos work?

You can define any number of Identity Repositories (Data Stores) within a realm; the only caveat is that all of the configured IdRepos will be contacted whenever an operation needs to be performed. You may ask why; well, here it is:
OpenAM uses IdRepos to abstract away the differences between identity repositories, and within a realm it uses AMIdentityRepository to abstract away the multiple-IdRepo concept. This basically means that OpenAM only knows about the single AMIdentityRepository, which then knows everything about the different IdRepo configurations within the realm. To complicate things, AMIdentityRepository uses IdServices to access all the IdRepo implementations, so let's see the 4 different IdServices implementations for reference:

  • IdServicesImpl – the most basic implementation, this one just looks through the IdRepo list and executes the given operation on all of them. At the end it combines the results from the different IdRepos.
  • IdCachedServicesImpl – this implementation stores operation results in a cache, and when a given requested data cannot be found in the cache (which by the way is invalidated based on persistent search results), it just asks IdServicesImpl to contact the IdRepos for it.
  • IdRemoteServicesImpl – a remote implementation which is using JAX-RPC to contact an OpenAM server (which in turn will use Id(Cached)ServicesImpl) to perform operations remotely
  • IdRemoteCachedServicesImpl – as its name suggests: caches on the client side and asks OpenAM via IdRemoteServicesImpl if there is a cache miss.

Because IdRepos are implemented this way, you can hopefully see why we say that OpenAM wasn’t meant to be used for identity management. Creating/reading/updating/deleting entries from all the configured IdRepos cannot always fulfill all the requirements, and the expected behavior is poorly defined (for example what happens if you run a delete on an entry that only exists in one IdRepo?).
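
To picture this fan-out behaviour, here is a purely conceptual sketch. This is not the actual OpenAM code and the interface below is invented for illustration; it only shows the "ask every repo and merge the results" idea described above:

import java.util.*;

// Invented interface, just to illustrate the concept
interface IdRepo {
    Map<String, Set<String>> getAttributes(String username);
}

class FanOutSketch {
    // IdServicesImpl-style read: ask every configured repo and merge whatever comes back
    static Map<String, Set<String>> getAttributes(List<IdRepo> repos, String username) {
        Map<String, Set<String>> combined = new HashMap<>();
        for (IdRepo repo : repos) {
            repo.getAttributes(username).forEach((attr, values) ->
                combined.computeIfAbsent(attr, k -> new HashSet<>()).addAll(values));
        }
        return combined;
    }
    // A delete, by contrast, would simply be attempted against every configured repo,
    // which is why the outcome is poorly defined when the entry only exists in one of them.
}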

Alright, alright, but what’s new in OpenAM 10.2.0?

We have implemented a new IdRepo, which uses only the OpenDJ LDAP SDK to perform LDAP operations, keeping in mind that the behavior should be similar to the old Netscape-based IdRepo implementation. The main changes, however, are:

  • The LDAP failover code has been rewritten, and it should be much more reliable with the new release.
  • In the old version the user search attribute was misused: it was basically handled as the naming attribute of the entry (i.e. the first attribute name in the DN). This has been changed, so now the user search attribute is only used when performing search operations in the IdRepo, and the already existing naming attribute setting is used as the naming attribute.
  • Two new options have been introduced: heartbeat interval and heartbeat timeunit. These settings control the interval for sending out heartbeat search requests on idle connections. This means that if the connection is not idle, there won't be any heartbeat request either.
  • Because of this, the idle timeout setting has been removed. When there is a network component closing idle connections after a certain period of time, you should just configure the heartbeat interval correctly, and that should basically prevent the connection from becoming idle.
  • Data store level cache has been removed (not implemented).

I think we can all agree on the fact that 10.2.0 will be an exciting release. :)

ForgeRock Open Identity Summit comes to Europe…

Join us for the Open Identity Stack Summit Europe, on 14-16 October 2013 at the Domaine de Béhoust, France.

We will be gathering at ForgeRock’s luxe Chateau, Domaine de Béhoust (just outside Paris), where our Open Identity Stack community will delve into OpenAM, OpenIDM, and OpenDJ best practices, use cases, how-tos, and more.

We’ve been saying for a long time that identity & access management (IAM) must be reconstructed to adapt to today’s problems. Modern APIs, standards, scale, speed, and modular architecture are all needed for successful modern IAM deployments. The agenda will include dynamic working sessions addressing the latest IAM developments, including mobility, identity bridge, and customer case studies.

A call for papers is open. If you are doing something interesting with the Open Identity Stack and you would like to share the experience by presenting a session at the summit, send your proposal by September 4.

ForgeRock's chateau is large, but registration is limited. Therefore, I encourage you to reserve your spot and register quickly!

If you want to get a feel for the atmosphere of the conference, check the photo album from the first ForgeRock Open Identity Summit or get a glimpse of the skills of one of our keynote speakers:
I hope to see you at ForgeRock's chateau in October!

 



Java EE agent internals

Not so long ago I was working on the JBoss v7 agent, so let me try to sum up some of my findings about Java EE agents in general.
A Java EE agent basically consists of four main components:

  • Installer
  • Container specific JAAS integration
  • Java EE Filter implementation
  • Agent application

Let’s go through each one of these and see how they really work.

Agent installer

As its name suggests, the agent installer helps to install the Policy Agent (PA) on the chosen Java EE container. The main behavior of the installer is defined within the config/configure.xml file. This file is basically a big XML descriptor of the following:

  • interactions – what should be asked from the user when performing install/custom-install/migrate/uninstall
  • tasks – what tasks need to be performed as part of the agent installation, things like creating backups, creating the directory layout, etc.

So basically this XML describes how the installer should work in general. Now let's go even deeper…

Interactions

During an agent installation all user input is stored in a thing called IStateAccess (well, sometimes there is a persistent="false" interaction which isn't persisted, but that part I don't fully grasp yet :) ).
As part of an interaction it is possible to set up different validators (even custom ones); this makes it possible, for example, to verify that a given agent is being installed on a supported container version. With JBoss we run the "JBOSS_HOME/bin/instancename.(bat|sh) --version" command to get back the version number of the JBoss instance, parse it, and if it matches, we let the installer proceed.

Tasks

Tasks are usually just a well-defined "change" that needs to be performed by the installer. Implementing a task is not just about performing a given change; rollback is just as important a part of it. In order to make an agent work, it is very likely that some container-specific configuration file needs to be modified (for example to configure/enable the JAAS integration); the necessary file paths are all calculated based on the user input gathered during the interactions (IStateAccess).
Usually an agent installer performs the following tasks:

  • creating backups for the container’s configuration files
  • perform modifications to the container configuration
  • create agent directory layout (i.e. creating the Agent_00x folder structure)
  • encrypt the agent profile’s password and save it in the agent bootstrap config
  • generate the configuration files based on the values provided to the installer
  • in certain scenarios: deploy agentapp.war in the container
  • in case of custom-install: create agent profile if needed

As you can probably see there is nothing super magical about the agent installer; its sole purpose is basically to make the deployer's life easier by automating this whole process.
I feel we've heard enough about the installer for now, so let's go a bit further and see what the JAAS integration is really all about.

Container specific JAAS integration

JAAS in general isn't that new really; it has been part of the JRE since 1.4, so let's look at it very briefly (here is a longer version):

  • You can implement a custom authentication module
  • You can define authentication “chains” using built-in/custom modules in a configuration file

Well, hopefully this concept doesn't sound too new to you, since the whole OpenAM authentication system is based on JAAS. ;)
As an extra, application servers tend to have the concept of a "JAAS realm", which is basically a collection of users (to put it as simply as possible). Unfortunately there is not much of a standard around implementing JAAS support for application servers, so this part of the agents is different per container (and sometimes even across container versions).

Advantages of JAAS

Well, the main advantage of JAAS is that it can help make the OpenAM integration as unintrusive as possible, but there are others as well:

  • Retrieving the logged-in user's name is quite simple: you can call request#getRemoteUser() or request#getUserPrincipal().
  • By setting up Java EE Security it is possible to define roles in the application’s descriptor files and later on in your application you can just simply check against these using request#isUserInRole(String) (authorization).
  • If you later decide to use an LDAP-based JAAS module instead, the idea is that you can simply switch out the JAAS login module (of course you'll need to configure it to work similarly to OpenAM) without actually modifying any part of your application code.
  • While this is already good enough, I think JAAS really pays off when EJBs are used in applications. In that case you can use the @DeclareRoles and @RolesAllowed annotations and magically your EJB method will be protected: only those in the configured role can actually invoke the method (anything else results in failure), which sounds quite neat (see the sketch after this list). :)
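
To make the EJB point a bit more concrete, here is a minimal sketch; the bean, the role names and the method are made up purely for illustration:

import javax.annotation.security.DeclareRoles;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

@Stateless
@DeclareRoles({"employee", "manager"})
public class PayrollService {

    // Only callers in the "manager" role may invoke this method; the container
    // rejects everyone else with an EJBAccessException.
    @RolesAllowed("manager")
    public void approveSalaryChange(String employeeId, double newSalary) {
        // business logic goes here
    }
}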

So what about agents?

The Java EE agents support JAAS: they integrate well with the container, and within the agent configuration it is possible to set up role mapping based on pretty much any arbitrary data (session property, profile attribute). To enable JAAS support, though, you must ensure that you configure the agent in J2EE_POLICY or ALL mode. While it is possible to define security-constraints in web.xml to protect access to pages, that's usually a bit cumbersome; instead it's probably simpler to just define OpenAM policies (i.e. use the agent in ALL mode) and only use JAAS for the previously described goodies.

Java EE Filter implementation

The agent itself is implemented as a Java EE filter, which is (obviously) standard, so this part is common for all the agents and as such it is part of the agent SDK. When the filter intercepts a given request, it will basically go through a list of “TaskHandlers” and execute them in a predefined order. If a given TaskHandler returns with an AmFilterResult, then the processing stops and the result will be handled. Here is a small example subset of the available TaskHandlers:

  • CDSSOTaskHandler – detects if the user needs to authenticate using CDSSO and redirects to CDCServlet if necessary
  • CDSSOResultTaskHandler – processes the incoming LARES response and creates the cookie on the application’s domain
  • URLPolicyTaskHandler – checks the currently visited URL against configured URL policies in OpenAM, and blocks access if necessary
  • InitialPDPTaskHandler – saves POST data if the user hasn’t authenticated yet, so later on the POST can be resubmitted once authentication took place.
  • XSSDetectionTaskHandler – checks incoming request parameters for malicious characters

Since the agent is implemented as a filter, you can see how it can alter the outcome of a given incoming request. If the user doesn't have a session yet, it will prevent further processing of the page and redirect straight to OpenAM instead. This, however, means that in order to protect the application fully, you must set the agent's filter as the first one; otherwise a filter earlier in the chain could alter the response or return early, which could prevent the agent from protecting the application properly.

Agent application

And here is the last component, which is pretty simple: the agent application. The purpose of this application is to provide a unique context root for the agent, so the agent can receive notifications from OpenAM, and also LARES responses in the case of CDSSO. In fact, the agentapp only has the agent's filter defined, and that basically handles everything. If there is an incoming notification, then the filter just checks whether the requested URL is actually the notification URL, and processes it if so.
As a side note, I would mention that deploying agentapp.war is not necessary when using a global web.xml with Tomcat 6. That is simply because the global web filter already includes the agent filter, hence any incoming request to the /agentapp context will already be caught by the filter.

Conclusion

Agents are fun, but that doesn’t mean they are not complex. :)

2-Factor Is Great, But Passwords Still Weak Spot

The last few months have seen a plethora of consumer-focused websites and services all adding two-factor authentication systems in order to improve security. The main focus of these additional authentication steps generally involves a secondary one-time password being sent to the authenticating user, either via a previously registered email address or mobile phone number. This moves the authentication process away from something the user knows (username and password) to something the user has - either an email address or mobile phone. Whilst these additional processes certainly go some way to improving security, and reduce the significance of the account password, they highlight a few interesting issues, mainly that password-based authentication is still a weak link.




Consumers Accept New Security

Two factor authentication solutions have been around for a number of years, either in the form of hard tokens (RSA for example) or physical proximity cards used with a PIN to access a controlled physical site. However, many have been used for high-security enterprise or internal scenarios, such as access to data centers or perhaps dialing into a secure network from an insecure location. The interesting aspect today is that many of these SMS-based 'soft' approaches to two-factor authentication are being made available to consumers accessing standard web applications and sites. The services those sites offer, whilst containing identity data or personal information, are not particularly life threatening or business critical. It is interesting to see websites taking a risk with regards to user convenience in order to implement greater security. As a security professional, even just from an awareness perspective this is a positive move. Many end users, most of whom are non-technical, now willingly accept these additional steps in order to reduce the risk of their account being hacked.


Password Security is Fundamentally Weak

But why the increased use of two-factor, and why are users happy to accept this new level of security?  The main underlying point is that simple password-based authentication is not, and never really will be, a totally secure way of protecting resources.  I've blogged on this topic several times in the past 18 months (Passwords And Why They're Going Nowhere - March 2013, The Problem With Passwords (again, still) - Oct 2012, The Password Is Dead (long live the password!) - Feb 2012), but the situation still remains: passwords have numerous weaknesses.  Some arise from the end user side (use of non-complex passwords, password sharing between sites, passwords being written down) and some from the custodian side, especially with regards to password storage (use of clear text - yes really!, symmetric encryption as opposed to hashing) and password transit (use of non SSL / HTTPS communication).  Password hacking techniques are also pretty mature; automated tooling, pre-compiled hashing tables and harvesting engines all make an application protected by just a username and password a risky proposition.

Biometrics - Face Recognition

Ok, so everyone knows passwords are weak.  So what are the options?  Due to the rise of mobile technology - both smart phones and tablets - the raw hardware technology available to most end users is considerably more capable than it was, say, 5 years ago.  Most devices will have high resolution cameras and touch screens that can be used for additional authentication checks, without the need for additional costly hardware.  Facial recognition is available on many Android and iOS handsets, when used alongside a secondary PIN.  Most facial recognition systems either use an algorithm to analyze the relative position of things like the nose, eyes and mouth, or perhaps analyse a selection of facial images to create a normalized view.  This area is certainly developing, but can perhaps be circumvented by pictorial replays or other savvy attacks.  Google has certainly taken a lead in this area, recently announcing a patent based on facial authentication.


Biometrics - Voice Recognition

Another area of interest is that of voice or speech based authentication.  On a similar front to facial recognition, this focuses on the premise that something you are is certainly a lot more secure than something you know (password), and even more so than something you own (token).  Vocal recognition requires the 'printing' of the user's voice in order to identify the unique characteristics of the individual.  This is akin to a fingerprint, and when measured accurately using the amplification levels of key frequencies and other pause factors, makes for an arguably unique view of a user's voice, similar to a DNA sample.  At login time, a user is asked to repeat a certain phrase that was used at registration time in order to identify a match.

Any biometric method will raise questions about practicality (accuracy of technology, avoidance of poor type I and type II error rates for example), as well as managing the privacy concerns of holding individual biological data.  The latter part however, could probably be overcome by holding simple hashes of key checking metrics as opposed to raw data.

Either way, passwords may at last be on the long goodbye away from centre stage.

By Simon Moffatt
