Infosecurity Europe 2013: Hall of Fame Shlomo Kramer & Mikko Hypponen

London, 23rd April 2013 - For the last five years the information security world's medal of honour has been presented to speakers of high renown through the 'Hall of Fame' at Infosecurity Europe. Voted for by fellow industry professionals, the recipients of this most prestigious honour stand at the vanguard of the technological age, and this year both Shlomo Kramer and Mikko Hypponen will be presented with the honour on Wednesday 24 April 2013, 10:00 - 11:00, in the Keynote Theatre at Infosecurity Europe, Earl's Court, London.


Shlomo Kramer is the CEO and a founder of Imperva (NYSE: IMPV); prior to that, he co-founded Check Point Software Technologies Ltd. (NASDAQ: CHKP) in 1993. Kramer has participated as an early investor and board member in a number of security and enterprise software companies including Palo Alto Networks (NYSE: PANW), Trusteer, WatchDox, Lacoon Security, TopSpin Security, SkyFence, Worklight, Incapsula and SumoLogic.

Shlomo Kramer commented: “I am delighted to have been chosen by Infosecurity for the 'Hall of Fame' in 2013 – it's a great honour to be recognised for the work that I have done in the IT security industry as a founder of companies such as Check Point and Imperva. I love nothing more than creating and fostering successful enterprise IT-focused businesses and will continue to put my energy into combating the ever increasing onslaught from the cyber-criminal world.”

Mikko Hypponen is the Chief Research Officer of F-Secure in Finland. He has been working in computer security for over 20 years and has fought the biggest virus outbreaks on the net.  He is also a columnist for the New York Times, Wired, CNN and BBC. His TED Talk on computer security has been seen by over a million people and has been translated into over 35 languages. Mr. Hypponen sits on the advisory boards of the ISF and the Lifeboat Foundation.

"I've worked in the industry for 22 years and haven't had a boring day yet. I'm honoured to be inducted to the hall of fame", commented Mikko Hypponen. "The enemy is changing all the time so we must keep up."

Previous speakers have included some of the world’s leading thinkers in information security including Professor Fred Piper, Professor Howard Schmidt, Bruce Schneier, Whitfield Diffie, Paul Dorey, Dan Kaminsky, Phil Zimmerman, Lord Erroll, Eugene Kaspersky, Charlie McMurdie, Stephen Bonner and Ed Gibson. To view all previous speakers, along with a short biography, you can visit the Infosecurity website:  http://www.infosec.co.uk/Education-Programme/fame/

The 2013 Hall of Fame will be conducted in the Keynote Theatre, where Shlomo Kramer and Mikko Hypponen will join Professor Fred Piper in a panel chaired by Raj Samani of the CSA, addressing fellow industry professionals in what always proves to be a compelling and exhilarating event.
The speakers inducted into the Hall of Fame have met the following criteria:
  • Be an internationally recognised and respected Information Security practitioner or advocate 
  • Have made a clear and long-term contribution to the advancement of Information Security 
  • Have provided intellectual or practical input that has shifted the advancement of Information Security 
  • Be an engaging and revolutionary thought leader in Information Security 
The Hall of Fame has proven to be the highlight of previous shows and this year is no different. Setting the standard for other industry professionals and defining contemporary issues, the Hall of Fame speakers aim to challenge conventional thought with a mix of pragmatism and provocation. It really is the must-see event of the year.

Microsoft Security Intelligence Report Volume 14

Yesterday, Microsoft released volume 14 of its Security Intelligence Report (SIRv14), which includes new threat intelligence from over a billion systems worldwide.  The report focuses on the third and fourth quarters of 2012.
One of the most interesting threat trends to surface in the enterprise environment was the decline in network worms and the rise of web-based attacks.  The report found:



  • The proportion of Conficker and Autorun threats reported by enterprise computers each decreased by 37% from 2011 to the second half of 2012.
  • In the second half of 2012, 7 out of the top 10 threats affecting enterprises were associated with malicious or compromised websites.
  • Enterprises were more likely to encounter the iFrame redirection technique than any other malware family tracked in 4Q12.
  • One specific iFrame redirection family, called IframeRef, increased fivefold in the fourth quarter of 2012 to become the number one malicious technique encountered by enterprises worldwide.
  • IframeRef was detected nearly 3.3 million times in the fourth quarter of 2012.

The report also takes a close look at the dangers of not using up-to-date antivirus software in an article titled “Measuring the Benefits of Real-time Security Software.”  New research showed that, on average, computers without AV protection were five and a half times more likely to be infected.

The study also found that 2.5 out of 10 computers, an estimated 270 million worldwide, were not protected by up-to-date antivirus software.

Whilst many of the findings surrounding real-time protection seem obvious, the numbers are startling.  Although security is often best implemented using a defence-in-depth, or rings, approach, anti-virus or real-time malware detection seems to be taking a back seat.  For mobile devices, or devices based on Linux, this can become a significant issue, especially if those devices carry email destined for Microsoft-based machines.

By Simon Moffatt

Who Has Access -v- Who Has Accessed

The certification and attestation part of identity management is clearly focused on the 'who has access to what?' question.  But access review compliance is really identifying failings further upstream in the identity management architecture.  Reviewing previously created users, or previously created authorization policies, and finding excessive permissions or misaligned policies shows failings in the access decommissioning process or the business-to-authorization mapping process.



The Basic Pillars of Identity & Access Management


  • Compliance By Design
The creation and removal of account data from target systems falls under a provisioning component.  This layer is generally focused on connectivity infrastructure to directories and databases, either using agents or native protocol connectors.  The tasks, for want of a better word, are driven either by static rules or business logic, generally encompassing approval workflows.  The actual details and structure of what needs to be created or removed are often generated elsewhere - perhaps via roles, or end user requests, or authoritative data feeds.  The provisioning layer helps fulfill the system-level accounts and permissions that need creating.  This could be described as compliance by design and would be seen as a panacea deployment, with quite a proactive approach to security, based on approval before creation.
  • Compliance By Control
The second area could be the authorization component.  Once an account exists within a target system, there is a consumption phase, where an application or system uses that account and associated permissions to manage authorization.  The 'what that user can do' part.  This may occur internally, or more commonly, leverage an external authorization engine, with a policy decision point and policy enforcement point style architecture.  Here there is a reliance on the definition of authorization policies that can control what the user can do.  These policies may include some context data such as what the user is trying to access, the time of day, IP address and perhaps some business data around who the user is - department, location and so on.  These authorization 'policies' could be as simple as the read, write, execute permission bits set within a Unix system (the policy here is really quite implicit and static), or something more complex that has been crafted manually or automatically and is specific to a particular system, area and organisation.  I'd describe this phase as compliance by control, where the approval emphasis is on the authorization policy (a minimal policy-evaluation sketch follows this list).
  • Compliance By Review
At both the account level and authorization level, there is generally some sort of periodic review.  This review could be for internal or external compliance, or simply to help align business requirements with the underlying access control fulfillment layer.  This historically would be the 'who has access to what?' part.  This would be quite an important - not to mention costly from a time and money perspective - component for disconnected identity management infrastructures.  It normally requires a centralization of identity data that has been created and hopefully approved at some point in the past.  The review is to help identify access misalignment, data irregularities or controls that no longer fulfill the business requirements.  This review process is often marred by data analysis problems, complexity, a lack of understanding with regards to who should perform reviews, or perhaps a lack of clarity surrounding what should be certified or what should be revoked.
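
To make the 'compliance by control' idea above a little more concrete, here is a minimal sketch of a policy decision point evaluating a request against a simple authorization policy that mixes a subject attribute with some request context (time of day). The class, resources, attribute names and rules are illustrative only, not taken from any particular product.

import java.time.LocalTime;
import java.util.Map;
import java.util.Set;

// Illustrative policy decision point (PDP): grants read access only if the
// subject's department is allowed for the resource and the request arrives
// inside business hours. Resources, attributes and rules are hypothetical.
public class SimplePolicyDecisionPoint {

    // resource -> departments allowed to read it
    private static final Map<String, Set<String>> READ_POLICY = Map.of(
            "/finance/reports", Set.of("finance", "audit"),
            "/hr/records", Set.of("hr"));

    public boolean permitRead(String resource, Map<String, String> subjectAttributes,
                              LocalTime requestTime) {
        Set<String> allowedDepartments = READ_POLICY.get(resource);
        if (allowedDepartments == null) {
            return false; // deny by default: unknown resources are not readable
        }
        boolean departmentOk = allowedDepartments.contains(subjectAttributes.get("department"));
        boolean withinHours = !requestTime.isBefore(LocalTime.of(8, 0))
                && !requestTime.isAfter(LocalTime.of(18, 0));
        return departmentOk && withinHours;
    }

    public static void main(String[] args) {
        SimplePolicyDecisionPoint pdp = new SimplePolicyDecisionPoint();
        // A policy enforcement point (PEP) would ask the PDP before serving the resource.
        boolean decision = pdp.permitRead("/finance/reports",
                Map.of("department", "finance"), LocalTime.of(10, 30));
        System.out.println("Access permitted: " + decision);
    }
}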

SIEM, Activities and Who Has Accessed What?

One of the recent expansions of the access review process has been to marry together security information and event monitoring (SIEM) data with the identity and access management extracts.  Being able to see what an individual has actually done with their access can help to determine whether they still need certain permissions.  For example, if a line manager is presented with a team member's directory access which contains 20 groups, it could be very difficult to decide which of those 20 groups are actually required for that individual to fulfill their job.  If, on the other hand, you can quickly see that out of the 20 groups, twelve were not used within the last 12 months, that is a good indicator that they are no longer required on a day-to-day basis and should be removed.
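
A minimal sketch of that idea, assuming we already have the set of groups assigned to a user and the set of groups actually seen in the activity (SIEM) logs over the review period; the group names and data structures are illustrative.

import java.util.HashSet;
import java.util.Set;

// Flags assigned groups that never appear in the activity log for the review
// period; these become candidates for revocation in the access review.
public class UnusedAccessFinder {

    public static Set<String> unusedGroups(Set<String> assignedGroups, Set<String> groupsSeenInLogs) {
        Set<String> unused = new HashSet<>(assignedGroups);
        unused.removeAll(groupsSeenInLogs); // assigned but never exercised
        return unused;
    }

    public static void main(String[] args) {
        Set<String> assigned = Set.of("finance-read", "finance-write", "hr-read", "vpn-users");
        Set<String> seen = Set.of("finance-read", "vpn-users");
        // "finance-write" and "hr-read" were never used in the review period:
        // good revocation candidates for the line manager to consider.
        System.out.println("Revocation candidates: " + unusedGroups(assigned, seen));
    }
}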

There is clearly a big difference between what the user can access and what they actually have accessed.  Getting this view requires quite low-level activity logging within a system, as well as the ability to collect, correlate, store and ultimately analyse that data.  SIEM systems do this well, with many now linking to profiling and identity warehouse technologies to help create this meta-warehouse.  This is another move towards the generally accepted view of 'big data'.  Whilst this central warehouse is now very possible, the end result is still really only speeding up the process of finding failures further up the identity food chain.

Movement to Identity 'Intelligence'

I've talked about the concept of 'identity intelligence' a few times in the past.  There is a lot of talk about moving from big data to big intelligence, and security analytics is jumping on this bandwagon too.  But in reality, intelligence in this sense is really just helping to identify the failings faster.  This isn't a bad thing, but ultimately it's not particularly sustainable or actually going to push the architecture forward to help 'cure' the identified failures.  It's still quite reactive.  A more proactive approach is to apply 'intelligence' at every component of the identity food chain to help make identity management more agile, responsive and aligned to business requirements.  I'm not advocating what those steps should be, but it will encompass an approach and mindset more than just a set of tools, and rest heavily on a graph-based view of identity.

By analyzing the 'who has accessed' part of the identity food chain, we can gain yet more insight into who and what should be created and approved within the directories and databases that underpin internal and web-based user stores.  Ultimately this may make the access review component redundant once and for all.

By Simon Moffatt

Protect Data Not Devices?

"Protect Data Not Devices", seems quite an intriguing proposition given the increased number of smart phone devices in circulation and the issues that Bring Your Own Device (BYOD) seems to be causing, for heads of security up and down the land.  But here is my thinking.  The term 'devices' now covers a multitude of areas.  Desktop PC's of course (do they still exist?!), laptops and net books, smart phones and not-so-smart phones, are all the tools of the trade, for accessing the services and data you own, or want to consume, either for work or for pleasure.  The flip side of that is the servers, mainframes, SAN's, NAS's and cloud based infrastructures that store and process data.  The consistent factor is obviously the data that is being stored and managed, either in-house or via outsourced services.


Smarter the Device, The More Reliant We Become

This is a pretty obvious statement and doesn't just apply to phones.  As washing machines became more efficient and dishwashers became cheaper and more energy saving, we migrated in droves, allowing our time to be spent on other essential tasks.  The same is true for data-accessing devices.  As phones morphed into micro desktop PCs, we now rely on them for email, internet access, gaming, social media, photography and so on.  Some people even use this thing called the telephone on them.  Crazy.  As the features and complexity ramp up, we no longer need another device for listening to music, taking pictures or accessing Facebook.  Convenience and service provision increase, as does the single-point-of-failure syndrome and our reliance on them being available 99.999% of the time, up to date and online.

Smarter the Device, The Less Important It Becomes

Now this next bit seems a bit of a paradox.  As devices become smarter, greater emphasis is placed on the data and services those devices access.  For example, a fancy Facebook client is pretty useless if only 100 people use Facebook.  A portable camera is just that, unless you have a social outlet through which to distribute the images.  The smartness of the devices themselves is actually driven by the services and data they need to access.  Smartphones today come with a healthy array of encryption features, remote backup, remote data syncing for things like contacts, pictures and music, as well as device-syncing software like Dropbox.  How much data is actually specifically related to the device?  In theory nothing.  Zip.  Lose your phone and everything can be flashed back down in a few minutes, assuming it was set up correctly.  Want to replace a specific model and brand with a model of equivalent specification from a different vendor?  Yep, you can do that too, as long as you can cope with a different badge on the box.  Feature differentiation is becoming smaller as the technology becomes more complex.

Data Access versus Data Storage

As more and more services become outsourced (or, to use the buzzword, moved to the 'cloud'), the storage part becomes less of a worry for the consumer.  The consumer could easily be an individual or an organisation.  Backup, syncing, availability, encryption and access management all become the responsibility of the outsourced data custodian.  Via astute terms and conditions and service level agreements, the consumer shifts responsibility across to the data custodian and service provider.

The process of accessing that data then starts to fall partly on the consumer.  How devices connect to a network, how users authenticate to a device and so on, all fall to the device custodian.  Access traffic encryption will generally require a combination of efforts from both parties.  For example, the data custodian will manage SSL certificates on their side, whilst the consumer has a part to play too.

So, to slightly contradict my earlier point (!), this is where the device is really the egress point to the data access channel, and therefore requires strong security controls around access to the device.  The device itself is still only really a channel to the data at the other end, but once an individual (or piece of software, malicious or not) has access to a device, they can in turn potentially open access channels to outsourced data.  It is the device access that should be protected, not necessarily the tin itself.

As devices become smarter and service providers more complex, that egress point moves substantially away from the old private organisational LAN or equivalent.  The egress point is the device regardless of location on a fixed or flexible network.

Data will become the ultimate prize, not necessarily the devices that are used to access it.

By Simon Moffatt


Passwords And Why They’re Going Nowhere

Passwords have been the bane of security implementers ever since they were introduced, yet still they are present on nearly every app, website and system in use today.  Very few web based subscription sites use anything resembling two-factor authentication, such as one-time-passwords or secure tokens.  Internal systems run by larger organisations implement additional security for things like VPN access and remote working, which generally means a secure token.


Convenience Trumps Security

Restricting access to sensitive information is part of our social make-up.  It doesn't really have anything to do with computers.  It just so happens that for the last 30 years they have been the medium we use to access and protect that information.  Passwords came before the user identity and were simply a cheap (in cost and time) method of preventing access by those without the 'knowledge'.  Auditing and better user management approaches resulted in individual identities, coupled with individual passwords, providing an additional layer of security.  All sounds great.  What's the problem then?  Firstly, users aren't really interested in the implementation of the security aspect.  They want the stuff secure; they don't care how that is done, or perhaps more importantly, don't realise the role they play in the security life cycle.  A user writing down a password on a post-it is a classic sysadmin complaint.  But the user is simply focused on convenience and performing their non-security-related, revenue-generating business role at work, or accessing a personal site at home.


Are There Alternatives & Do We Need Them?

The simple answer is yes, there are alternatives, and in some circumstances, yes, we do need them.  There are certainly aspects of password management that can help with security if alternatives or additional approaches can't be used or aren't available.  Password storage should go down the 'hash, don't encrypt' avenue, with some basic password complexity requirements in place - although making those requirements too severe often results in the writing-it-down-on-a-post-it issue...
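
As a minimal sketch of the 'hash, don't encrypt' approach, the snippet below derives a salted PBKDF2 hash and verifies a candidate password against it. The iteration count, key length and algorithm choice are illustrative defaults, not a recommendation for any specific system.

import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Salted, slow password hashing: store the salt and the hash, never the password.
public class PasswordHasher {

    private static final int ITERATIONS = 100_000;   // illustrative work factor
    private static final int KEY_LENGTH_BITS = 256;

    public static String[] hash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);           // unique salt per user
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_LENGTH_BITS);
        byte[] digest = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        Base64.Encoder enc = Base64.getEncoder();
        return new String[] { enc.encodeToString(salt), enc.encodeToString(digest) };
    }

    public static boolean verify(char[] password, String saltB64, String expectedHashB64) throws Exception {
        byte[] salt = Base64.getDecoder().decode(saltB64);
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_LENGTH_BITS);
        byte[] digest = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        // Constant-time comparison of the recomputed hash and the stored one.
        return java.security.MessageDigest.isEqual(digest,
                Base64.getDecoder().decode(expectedHashB64));
    }
}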

Practical alternatives seem to be few and far between (although feel free to correct me on this).  By practical I'm referring to both cost (time and monetary) and usability (good type-I and type-II error rates, convenient).  Biometrics have been around a while - stuff like iris and fingerprint scanning as well as facial recognition.  All three are pretty popular at most large-scale international airports, mainly as the high investment levels can be justified.  But what about things like web applications?  Any use of biometric technology at this level would require quite a bit of outlay for new capture technology and quite possibly introduces privacy issues surrounding how that physical information is stored or processed (although hashes of the appropriate data would probably be used).

There are also things like one-time-passwords, especially using mobile phones instead of tokens.  But is the extra effort in deployment and training enough to warrant the outlay and potential user backlash?  This would clearly boil down to a risk assessment of the information being protected, which the end user could probably not articulate.
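
For illustration, here is a minimal sketch of the time-based one-time-password scheme (RFC 6238) that most phone-based OTP apps implement: HMAC the current 30-second time step with a shared secret and truncate the result to six digits. A real deployment would use a vetted library rather than this hand-rolled version.

import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Time-based OTP (RFC 6238) sketch: both the phone app and the server derive
// the same six-digit code from a shared secret and the current time step.
public class Totp {

    public static String generate(byte[] sharedSecret, long unixTimeSeconds) throws Exception {
        long timeStep = unixTimeSeconds / 30;
        byte[] counter = ByteBuffer.allocate(8).putLong(timeStep).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        byte[] hash = mac.doFinal(counter);

        // Dynamic truncation as described in RFC 4226.
        int offset = hash[hash.length - 1] & 0x0f;
        int binary = ((hash[offset] & 0x7f) << 24)
                | ((hash[offset + 1] & 0xff) << 16)
                | ((hash[offset + 2] & 0xff) << 8)
                | (hash[offset + 3] & 0xff);
        return String.format("%06d", binary % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes("US-ASCII"); // demo secret only
        System.out.println(generate(secret, System.currentTimeMillis() / 1000));
    }
}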


Why We Still Use Them...

Passwords aren't going anywhere for a long time, for several reasons.  Firstly, they're cheap.  Secondly, they're well known by developers, frameworks and libraries, but most importantly by the end user.  Even a total IT avoider is aware of the concept of a password.  If that awareness changes, there is suddenly an extra barrier to entry for your new service, application or website to be successful.  No one wants that.

Thirdly, there are several 'bolt on' approaches to using a username and password combination.  Think of things like step-up authentication and knowledge-based authentication.  If a site, or a resource within a site, is deemed to require additional security and a certain risk threshold is breached, further measures can be taken that don't necessarily require a brand new approach to authentication.

As familiarity with password management matures, even the most non-technical of end users will become used to using passphrases, complex passwords, unique passwords per application and so on.  As such, developers will become more familiar with password hashing and salting, data splitting and further storage protection.  Whilst all are perhaps sticking-plaster approaches, the password will be around for a long time to come.

By Simon Moffatt


Optimized Role Based Access Control

RBAC.  It's been around a while - pretty much since access control systems were embedded into distributed operating systems.  It often appears in many different forms, especially at an individual system level, in the form of groups, role-based services, access rules and so on.  Ultimately, the main focus is the grouping of people and their permissions, in order to accelerate and simplify user account management.


Enterprise RBAC
Enterprise role management has become quite a mature sub-component of identity and access management in the last few years.  Specialist vendors developed singularly focused products that acted as extensions to the provisioning tooling.  These products developed features such as role mining, role approval management, segregation of duties analysis, role request management and so on.  Their general feature set was that of an 'offline' identity analytics database that could help identify how users and their permissions could be grouped together, either based on business and functional groupings or on system-level similarities.  Once the roles had been created, they would then be consumed either by a provisioning layer or via an access request interface.  The underlying premise was that access request management would be simplified, due to business-friendly representations of the underlying permissions, and that the account creation and fulfillment process would be accelerated.

The Issues & Implementation Failures
The process of developing an RBAC model was often beset with several problems.  IAM encompasses numerous business motives and touch points - which is why many still argue identity management is a business enabler more than a security topic - and developing roles across multiple business units and systems is time consuming and complex.  A strong and detailed understanding of the underlying business functions, teams, job titles and processes is required, as well as the ability to perform analysis of the required permissions for each functional grouping.  Whilst this process is certainly mature and well documented, implementation is still an effort-laden task, requiring multiple iterations and sign-off before an optimal model can be released.  Once a model is available for use, it requires continual adaptation as systems alter, teams change, job titles get created and so on.  Another major issue with RBAC implementation is the often mistaken view that all users and all system permissions must be included in such an approach.  Like any analytic model, exceptions will exist and they will need managing as such, rather than being forced into the RBAC approach.

Speeding up Role Creation
Role creation is often accomplished using mining or engineering tools.  These tools take offline data such as human resources and business functional mappings, as well as system account and permissions data.  Using that data, the process is to identify groupings at the business level (known as top-down mining) as well as identifying similarities at the permissions level (known as bottom-up mining).  Both processes tend to be iterative in nature, as the results are often inconsistent, with difficulties surrounding agreement on user-to-function mapping and on function-to-permission mapping.

One of the key ways of speeding up this process is to use what is known as 'silent migration'.  This approach allows roles to be created and used without change to the users' underlying permission sets, which instantly removes the need for continual approval and iteration in the initial creation step.  The silent migration consists of initially mapping users into their functional grouping - for example, job title, team, department and so on.  The second step is to analyse the system permissions for users in each functional grouping only.  Any permissions identified across all users in the grouping are applied to the role.  No more, no less.  No changes are therefore made to the users' permissions: the process is simply an intersection of each user's permission set.

Focus on the Exceptions
Once the users of each functional grouping have had their permissions migrated into the role, it's important to identify any user-associated permissions that are left over.  These can simply be known as exceptions, or permissions of high risk.  They're high risk because they are only assigned to specific individuals and not the group.  This association could well be valid - a line manager, for example, may have different permissions - but as a first pass they should be reviewed.  To identify the exceptions, a simple subtraction can be done between the user's current permissions (as identified by their target system extract) and the permissions associated with their functional grouping.  Anything left needs reviewing, as sketched below.
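
A minimal sketch of both steps - the intersection that forms the role during silent migration, and the subtraction that surfaces the per-user exceptions - using plain Java sets. The accounts and permission names are illustrative.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Silent migration sketch: the role gets the intersection of every member's
// permissions; whatever is left over per user is an exception needing review.
public class RoleMiner {

    public static Set<String> rolePermissions(Map<String, Set<String>> memberPermissions) {
        Set<String> role = null;
        for (Set<String> perms : memberPermissions.values()) {
            if (role == null) {
                role = new HashSet<>(perms);
            } else {
                role.retainAll(perms); // keep only permissions held by everyone
            }
        }
        return role == null ? Set.of() : role;
    }

    public static Set<String> exceptions(Set<String> userPermissions, Set<String> rolePermissions) {
        Set<String> leftOver = new HashSet<>(userPermissions);
        leftOver.removeAll(rolePermissions); // user-specific grants to review
        return leftOver;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> payrollClerks = Map.of(
                "alice", Set.of("payroll-read", "payroll-run", "hr-read"),
                "bob", Set.of("payroll-read", "payroll-run"),
                "carol", Set.of("payroll-read", "payroll-run", "finance-admin"));
        Set<String> role = rolePermissions(payrollClerks);
        System.out.println("Role 'payroll clerk': " + role);
        System.out.println("Exceptions for carol: " + exceptions(payrollClerks.get("carol"), role));
    }
}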

This approach can also help with the acceleration of access review strategies.    Instead of looking to review every user, every permission and every functional grouping, simply analyse anything which is anomalous, either via peer comparison or functional grouping.

RBAC is a complex approach, but it can provide value in many access review and access request use cases.  It just isn't a catch-all, or an approach for every system and user.  Specific application, using a more simplified approach, can reap rewards.

By Simon Moffatt


Single Sign-On – the basic concepts

What is Single Sign-On?

A good buzzword at least, but on top of that it’s a solution which lets users authenticate at one place and then use that same user session at many completely different applications without reauthenticating over and over again.
To implement SSO, OpenAM (like most other SSO solutions) uses HTTP cookies (RFC 6265 [1]) to track the user's session. Before we go into any further details on the how, let's step back and first get a good understanding of cookies. Please bear with me here; believe me when I say that being familiar with cookies will pay off in the end.

Some important things to know about cookies

By using cookies an application can store information at the user-agent (browser) across multiple different HTTP requests, where the data is stored in a basic name=value format. From the Cookie RFC we can see that there are many different extra fields in a cookie, but out of those you will mostly only run into these:

  • Domain – the domain of the cookie where this cookie can be sent to. In case the domain is not present, it will be handled as a host based cookie, and browsers will only send it to that _exact_ domain (so no subdomains! Also beware that IE may behave differently…[2]).
  • Max-Age – the amount of time the cookie should be valid. Of course IE doesn’t support this field, so everyone uses “Expires” instead with a GMT timestamp.
  • Path – a given URL path where the cookie applies to (for example /openam)
  • Secure – when used, the cookie can be only transferred through HTTPS, regular HTTP requests won’t include this cookie.
  • HttpOnly – when used, the cookie won’t be accessible through JavaScript, giving you some protection against XSS attacks

Let's go into a bit more detail (a short servlet-side sketch follows this list):

  • If you create a cookie with Domain "example.com", then that cookie will be available for an application sitting at foo.example.com as well, and basically for all other subdomains, BUT that very same cookie won't be available at badexample.com nor at foo.badexample.com, because badexample.com does not match the example.com cookie domain. Browsers will only send cookies with requests made to the corresponding domains. Moreover, browsers will only set cookies for domains where the response actually came from (i.e. at badexample.com the browser will discard Set-Cookie headers with an example.com Domain).
  • Browsers will discard cookies created by applications on TLDs (Top Level Domains, like .co.uk or .com); the same happens with cookies created for IP addresses, or for non-valid domains (like "localhost", or "myserver"). The Domain has to be a valid FQDN (Fully Qualified Domain Name) if present.
  • If you do not specify Max-Age/Expires, the cookie will be valid until the browser is closed.
  • In order to clear out/remove a cookie you need to create a cookie with the same name (value can be anything), and set the Expires property to a date in the past.
  • In case you request a page, but a Set-Cookie is coming out of a subsequent request (i.e. a resource on the page - frame/iframe/etc), then that cookie is considered a third-party cookie. Browsers may ignore third-party cookies based on their security settings, so watch out.
  • Containers tend to handle the cookie spec a bit differently when it comes to special characters in the cookie value. By default, when you create a cookie value with an "=" sign in it for example, the cookie value should be enclosed in quotes ("). This should be done by the container when it generates the Set-Cookie header in the response, and in the same way, when there is an incoming request with a quoted value, the container should remove the quotes and only provide the unquoted value through the Java EE API. This does not always work as expected (you may get back only a portion of the actual value, due to stripping out the illegal characters), hence you should stick to allowed characters in cookie values.
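
To tie these attributes together, here is a minimal servlet-side sketch that creates a domain-wide, Secure, HttpOnly cookie and later clears it. The cookie name and domain are illustrative; the calls themselves are the standard javax.servlet.http.Cookie API (setHttpOnly requires Servlet 3.0 or later).

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Illustrative cookie handling with the standard Servlet API.
public class CookieExamples {

    // Create a session-tracking cookie visible to every *.example.com host.
    public static void addSsoCookie(HttpServletResponse response, String tokenValue) {
        Cookie cookie = new Cookie("mySession", tokenValue);
        cookie.setDomain("example.com");  // available to foo.example.com, sso.example.com, ...
        cookie.setPath("/");              // not just /openam
        cookie.setMaxAge(-1);             // browser-session cookie: gone when the browser closes
        cookie.setSecure(true);           // only ever sent over HTTPS
        cookie.setHttpOnly(true);         // not readable from JavaScript (Servlet 3.0+)
        response.addCookie(cookie);
    }

    // Removing a cookie means re-sending it with an expiry in the past.
    public static void clearSsoCookie(HttpServletResponse response) {
        Cookie cookie = new Cookie("mySession", "");
        cookie.setDomain("example.com");
        cookie.setPath("/");
        cookie.setMaxAge(0);              // expire immediately
        response.addCookie(cookie);
    }
}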

Funky IE problems

  • IE does not like cookies created on domain names that do not follow the URI RFC [3]
  • IE9 may not save cookies if that setting happened on a redirect. [4]

How is all of this related to SSO?

Well, first of all, as mentioned previously, OpenAM uses a cookie to track the OpenAM session, so when you do regular SSO the sequence flow would be something like this:

SSO Authentication Flow (diagram)


And here is a quick outline of the above diagram (a simplified agent-side sketch follows the list):

  • User visits application A at foo.example.com, where the agent or other component realizes that there is no active session yet.
  • User gets redirected to OpenAM at sso.example.com, where quite cleverly the cookie domain was set to example.com (i.e. the created cookie will be visible at the application domain as well)
  • User logs in, as configured OpenAM will create a cookie for the example.com domain.
  • OpenAM redirects back to the application at foo.example.com, where the app/policy agent will see the newly created session cookie and validate the session; upon success it will show the protected resource, since there is no need to log in any more.
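
For a rough idea of what the agent side of that flow does, here is a simplified sketch of a servlet filter protecting foo.example.com: look for the session cookie, validate it, and redirect to the SSO server when it is missing or invalid. The cookie name, URLs and the validateSession() placeholder are hypothetical and deliberately simplified - a real policy agent does considerably more.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical policy-agent-style filter protecting an application such as foo.example.com.
public class SsoEnforcementFilter implements Filter {

    private static final String SESSION_COOKIE = "mySession";                    // illustrative name
    private static final String SSO_LOGIN_URL = "https://sso.example.com/login"; // illustrative URL

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String token = findSessionCookie(request);
        if (token != null && validateSession(token)) {
            chain.doFilter(req, res); // valid session: serve the protected resource
        } else {
            // No (valid) session: send the user to the SSO server and ask it to come back here.
            response.sendRedirect(SSO_LOGIN_URL + "?goto=" + request.getRequestURL());
        }
    }

    @Override
    public void destroy() {
    }

    private String findSessionCookie(HttpServletRequest request) {
        if (request.getCookies() == null) {
            return null;
        }
        for (Cookie cookie : request.getCookies()) {
            if (SESSION_COOKIE.equals(cookie.getName())) {
                return cookie.getValue();
            }
        }
        return null;
    }

    // Placeholder: a real agent would validate the token against the SSO server's
    // session service; returning false here simply forces the redirect in this sketch.
    private boolean validateSession(String token) {
        return false;
    }
}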

Oh no, a redirect loop!

In my previous example there are some bits that could go wrong and essentially result in a redirect loop. A redirect loop happens 90+% of the time because of cookies and domains - i.e. misconfiguration. So what happens is that OpenAM creates the cookie for its domain, but when the user is redirected back to the application, the app/agent won't be able to find the cookie in the incoming request; authentication is still required, so it redirects back to OpenAM again. But wait, AM already has a valid session cookie on its domain, no need for authentication, let's redirect back to the app, and this goes on and on and on. The most common reasons for redirect loops:

  • The cookie domain for OpenAM does not match at all with the protected application.
  • The cookie domain is set to sso.example.com, instead of example.com, and hence the cookie is not available at foo.example.com .
  • The cookie is using Secure flag on the AM side, but the protected application is listening on HTTP.
  • Due to some customization possibly, the AM cookie has a path “/openam” instead of the default “/”, so even if the cookie domain is matching the path won’t match at the application.
  • You run into one of the previously listed IE quirks. :)
  • Your cookie domain for OpenAM isn’t actually an FQDN.

So how does OpenAM deal with applications running in different domains? The answer is CDSSO (Cross Domain Single Sign-On). But let’s discuss that one in the next blog post instead. Hopefully this post will give people a good understanding of the very basic SSO concepts, and in future posts we can dive into the more complicated use-cases.

References

Custom Auth module – Configuration basics

If you want to create a custom OpenAM auth module or a service, then you are probably going to end up writing a configuration XML. This XML describes to OpenAM what kind of UI elements to render on the admin console, and what values should be stored in the configuration store for the given module.
Let’s take a look at the following sample XML:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ServicesConfiguration
PUBLIC "=//iPlanet//Service Management Services (SMS) 1.0 DTD//EN"
"jar://com/sun/identity/sm/sms.dtd">

<ServicesConfiguration>
    <Service name="sunAMAuthMyModuleService" version="1.0">
        <Schema
serviceHierarchy="/DSAMEConfig/authentication/sunAMAuthMyModuleService"
            i18nFileName="amAuthMyModule"
            revisionNumber="1"
            i18nKey="sunAMAuthMyModuleServiceDescription">

            <Organization>
                <AttributeSchema name="sunAMAuthMyModuleAuthLevel"
                                 type="single"
                                 syntax="number_range" rangeStart="0"
                                 rangeEnd="2147483647"
                                 i18nKey="a500">
                    <DefaultValues>
                        <Value>0</Value>
                    </DefaultValues>
                </AttributeSchema>

                <SubSchema name="serverconfig" inheritance="multiple">
                    <AttributeSchema name="sunAMAuthMyModuleAuthLevel"
                                     type="single"
                                     syntax="number_range" rangeStart="0"
                                     rangeEnd="2147483647"
                                     i18nKey="a500">
                        <DefaultValues>
                            <Value>0</Value>
                        </DefaultValues>
                    </AttributeSchema>
                </SubSchema>
            </Organization>
        </Schema>
    </Service>
</ServicesConfiguration>

What you should know about authentication module service XMLs:

  • According to this piece of code the service name HAS to start with either iPlanetAMAuth or sunAMAuth, and HAS to end with Service (like sunAMAuthMyModuleService in our case)
  • the serviceHierarchy attribute is /DSAMEConfig/authentication/<service-name>
  • in the i18nFileName attribute you need to add the name of a properties file, which is on the OpenAM classpath (like openam.war!WEB-INF/classes). This internationalization file will be used by OpenAM to look up the i18nKeys for the various items in the XML.
  • You should put the module options into an Organization Schema; this will make sure that the module is configurable per realm.
  • All the attribute schema definitions should also be listed under a SubSchema element (this will allow you to set up two module instances based on the same module type with different configurations).

The AttributeSchema element contains information about a single configuration item (what name OpenAM should use to store the parameter in the config store, what kind of UI element needs to be rendered, what restrictions the attribute has, etc.).

Available types for an attribute:

  • single -> singlevalued attribute
  • list -> multivalued attribute
  • single_choice -> radio choice, only one selectable value
  • multiple_choice -> checkboxes, where multiple items can be selected
  • signature -> unknown, possibly not used
  • validator -> if you want to use a custom validator for your attribute, you need to include the validator itself (beware of OPENAM-974); a minimal validator sketch follows this list
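
If you do go down the validator route, the class you reference has to implement OpenAM's service attribute validator interface. Below is a minimal sketch assuming the com.sun.identity.sm.ServiceAttributeValidator interface with its single validate(Set) method - double-check the interface against the OpenAM sources you are building against. The length rule itself is just an example.

import java.util.Set;
import com.sun.identity.sm.ServiceAttributeValidator;

// Assumed interface: com.sun.identity.sm.ServiceAttributeValidator#validate(Set).
// Rejects empty values and values longer than 64 characters; the rule is only
// an example of what a custom validator might enforce.
public class MaxLengthValidator implements ServiceAttributeValidator {

    public boolean validate(Set values) {
        for (Object value : values) {
            String text = (String) value;
            if (text == null || text.isEmpty() || text.length() > 64) {
                return false; // the console will refuse to save the attribute
            }
        }
        return true;
    }
}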

Available uitypes for an attribute:

  • radio -> radiobutton mostly for displaying yes-no selections
  • link -> this is going to be rendered as a link, where the href will be the value of the propertiesViewBeanURL attribute
  • button -> unknown, not used
  • name_value_list -> generates a table with add/delete buttons (see Globalization settings for example)
  • unorderedlist -> a multiple choice field in which you can dynamically add and remove values. The values are stored unordered.
  • orderedlist -> a multiple choice field in which you can dynamically add and remove values. The values are stored ordered.
  • maplist -> a multiple choice field in which you can add/remove key-value pairs
  • globalmaplist -> same as maplist, but it allows the key to be empty.
  • addremovelist -> basically a palette where you can select items from the left list and move them to the right list

Available syntaxes for an attribute:

  • boolean -> can be true or false
  • string -> any kind of string
  • paragraph -> multilined text
  • password -> this will tell the console that it should mask the value when it's displayed
  • encrypted_password -> same as the password syntax
  • dn -> valid LDAP DN
  • email
  • url
  • numeric -> its value can only contain numbers
  • percent
  • number
  • decimal_number
  • number_range -> see rangeStart and rangeEnd attributes
  • decimal_range -> see rangeStart and rangeEnd attributes
  • xml
  • date

NOTE: Some of these syntaxes aren't really used within the product, so choose wisely.

Other than these, there is also the i18nKey attribute, which basically points to i18n keys in the referred i18nFile configured for the service. This is used when the config is displayed on the admin console.
This should cover the basics for authentication module service configuration I think. ;)

Session Quota basics

What is Session Quota?

Session Quota provides a way to limit the number of active sessions for a given user. Basically, in the session service you can configure how many active sessions a user can have. For simplicity let's say that the active session limit is set to 5. If a user were to log in for the 6th time, then based on the session quota settings a QuotaExhaustionAction gets executed, which decides what happens to the new and the existing sessions.

How is session quota evaluated?

When session quota is enabled, OpenAM tracks the user sessions mapped by universal IDs, hence it can easily detect the number of active sessions for a given user (this mapping is only available when session quota is enabled or SFO is being used). There are three ways the session quota can be evaluated:

  • Single Server Mode: if there is only one OpenAM node (with or without a site), then the quota will be evaluated per that one server.
  • Local Server Only Mode: When the advanced property “openam.session.useLocalSessionsInMultiServerMode” is set to true, OpenAM will only evaluate session quota per local server (so a user can end up having number of servers * number of allowed concurrent sessions). This option has been introduced to provide a certain level of support for multiserver environments without SFO (see OPENAM-875)
  • Session Failover Mode: In multiserver setup when using SFO it is possible to enforce deployment-wide session quota

So we now know how OpenAM detects if the session quota is being exceeded by a given user, but we don’t know yet what to do when that happens. Here comes QuotaExhaustionAction into the picture. This interface allows deployers to customize the behavior of OpenAM for these cases. To understand this a bit more let’s look at the built-in session quota exhaustion actions:

  • DENY_ACCESS: basically it will deny the access for the new session, but will keep the old session alive.
  • DESTROY_NEXT_EXPIRING: destroys the next expiring session (based on min(full, idle) for all the sessions), and lets the new session live.
  • DESTROY_OLDEST_SESSION: destroys the session with the lowest expiration time, and lets the new session live.
  • DESTROY_OLD_SESSIONS: destroys all the existing sessions, and lets the new session live.

As you can see there are many different ways to handle session quota exhaustion, and it really depends on the requirements which method fits best. In case none of these are good for you, you can simply implement a custom QuotaExhaustionAction; for that you can find the built-in implementations here, and there is also a sample GitHub project (a minimal sketch follows below).
Once you are ready with your custom implementation, follow the installation steps in the GitHub project README.
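
For illustration, here is a minimal custom exhaustion action that simply refuses the new session, modelled loosely on the built-in DENY_ACCESS behaviour. The interface shape shown (QuotaExhaustionAction with a single action(InternalSession, Map) method returning true to deny the new session) is my assumption based on the built-in implementations, so verify it against the sources or the sample GitHub project for your OpenAM version.

import java.util.Map;
import com.iplanet.dpro.session.service.InternalSession;
import com.iplanet.dpro.session.service.QuotaExhaustionAction;

// Assumed interface shape: action(...) returns true when the NEW session should
// be denied, false when it may go ahead. Check your OpenAM version's sources.
public class DenyNewSessionAction implements QuotaExhaustionAction {

    public boolean action(InternalSession newSession, Map<String, Long> existingSessions) {
        // existingSessions maps session IDs to their expiration times; we leave
        // them all untouched and simply refuse the new login.
        return true;
    }
}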

How to enable session quota?

On the admin console go to Configuration -> Global -> Session page and:

  • Set the number of “Active User Sessions” to your chosen value.
  • Turn ON “Enable Quota Constraints”.
  • Select the preferred exhaustion action under the “Resulting behavior if session quota exhausted” option.

Quite possibly you will need to restart OpenAM for the changes to take effect, but after that it should work just as you would want it to. ;)

Implementing remember me functionality – part 2

In my last post we were trying to use the built-in persistent cookie mechanisms to implement remember me functionality. This post tries to go beyond that, so we are going to implement our own persistent cookie solution using a custom authentication module and a post authentication processing hook. We need these hooks, because:

  • The authentication module verifies that the value of the persistent cookie is correct and figures out the username that the session should be created with.
  • The post authentication processing class makes sure that when an authentication attempt is successful a persistent cookie is created. It will also clear the persistent cookie when the user logs out.

In order to demonstrate this implementation, I've created a sample project on GitHub so it's easier to explain; the full source is available at:
https://github.com/aldaris/openam-extensions/tree/master/rememberme-auth-module
You most likely want to open up the source files as I’m going through them in order to see what I’m referring to. ;)

Let's start with the Post Authentication Processing (PAP) class, as that is the one that actually creates the persistent cookie. In the PAP onLoginSuccess method, I first check whether the request is available (for REST/ClientSDK authentications it might not be!), then I try to retrieve the "pCookie" cookie from the request. If the cookie is not present in the request, then I start to create a string that holds the following information:

  • username – good to know who the user actually is
  • realm – in which realm did the user actually authenticate (to prevent accepting persistence cookies created for users in other realms)
  • current time – the current timestamp to make the content a bit more dynamic, and it also gives a means to make sure that an expired cookie cannot be used to reauthenticate

After constructing such a cookie (the separator is '%'), I encrypt the whole content using OpenAM's symmetric key and create the cookie for all the configured domains. The created cookie will follow the cookie security settings, so if you've enabled Secure/HttpOnly cookies, the created cookie will adhere to these settings as well.
In the onLogout method of the PAP I make sure that the persistent cookie gets cleared, so this way logged out users will truly get logged out.

On the other hand the authentication module’s job is to figure out whether the incoming request contains an already existing “pCookie”, and if yes, whether the value is valid. In order to do that, again, we check whether the request is available, then try to retrieve the cookie. If there is no cookie, then there is nothing to talk about, otherwise we will decrypt the cookie value using OpenAM’s symmetric key.
The decrypted value will then be tokenized on the '%' character, and we first check whether the current realm matches the cookie's realm. If yes, then we check the validity interval against the stored timestamp. If things don't add up, then this is still a failed authentication attempt. However, if everything is alright, then we can safely say that the user is authenticated, and the username comes from the decrypted content.
In case there was some issue with the cookie, then we will just simply remove the “pCookie”, so hopefully we won’t run into it again.
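
To make the payload handling concrete, here is a minimal sketch of composing and validating the username%realm%timestamp value described above. Encryption and decryption with OpenAM's symmetric key are left out (the methods work on the clear-text payload), and the one-week validity window is an illustrative choice rather than what the sample project uses.

import java.util.concurrent.TimeUnit;

// Sketch of the pCookie payload logic only: composition and validation of the
// "username%realm%timestamp" string. Encryption/decryption with OpenAM's
// symmetric key would wrap these calls in the real module.
public class PersistentCookiePayload {

    private static final long VALIDITY_MILLIS = TimeUnit.DAYS.toMillis(7); // illustrative window

    public static String compose(String username, String realm, long nowMillis) {
        return username + "%" + realm + "%" + nowMillis;
    }

    // Returns the username if the payload is valid for this realm, otherwise null.
    public static String validate(String decryptedPayload, String expectedRealm, long nowMillis) {
        String[] parts = decryptedPayload.split("%");
        if (parts.length != 3) {
            return null;                       // malformed cookie
        }
        if (!expectedRealm.equals(parts[1])) {
            return null;                       // cookie was issued for a different realm
        }
        long issuedAt;
        try {
            issuedAt = Long.parseLong(parts[2]);
        } catch (NumberFormatException e) {
            return null;
        }
        if (nowMillis - issuedAt > VALIDITY_MILLIS) {
            return null;                       // expired: force a normal login
        }
        return parts[0];                       // authenticated username
    }
}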

Limitations

There are a couple of limitations with this example module though:

  • when the PAP is part of the authentication process, it will always create a persistent cookie for every single user (but only when the cookie doesn't already exist).
  • the validity interval and the cookie name are hardcoded; moreover, every single realm will use the same cookie, which can be a problem in certain deployments.

If you are looking for installation details, then check out the Github project README ;)