Implementing remember me functionality – part 2

In my last post we looked at using the built-in persistent cookie mechanisms to implement remember me functionality. This post goes beyond that: we are going to implement our own persistent cookie solution using a custom authentication module and a post authentication processing hook. We need both hooks, because:

  • The authentication module verifies that the value of the persistent cookie is correct and figures out the username that the session should be created with.
  • The post authentication processing class makes sure that a persistent cookie is created when an authentication attempt is successful. It also clears the persistent cookie when the user logs out.

In order to demonstrate this implementation, I’ve created a sample project on GitHub, so it’s easier to explain; the full source is available at:
https://github.com/aldaris/openam-extensions/tree/master/rememberme-auth-module
You will most likely want to open up the source files as I go through them, so you can see what I’m referring to. ;)

Let’s start with the Post Authentication Processing (PAP) class, as that is the one that actually creates the persistent cookie. In the PAP’s onLoginSuccess method, I first check whether the request is available (for REST/ClientSDK authentications it might not be!), then try to retrieve the “pCookie” cookie from the request. If the cookie is not present in the request, I construct a string that holds the following information:

  • username – good to know who the user actually is
  • realm – in which realm did the user actually authenticate (to prevent accepting persistence cookies created for users in other realms)
  • current time – the current timestamp makes the content a bit more dynamic, and it also provides a means to ensure that an expired cookie cannot be used to reauthenticate

After constructing the cookie value (the separator is ‘%’), I encrypt the whole content using OpenAM’s symmetric key and create the cookie for all the configured domains. The created cookie follows the cookie security settings, so if you’ve enabled Secure/HttpOnly cookies, the created cookie will adhere to these settings as well.
In the onLogout method of the PAP I make sure that the persistent cookie is cleared, so logged out users truly get logged out.
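The cookie construction can be sketched in plain Java. Note that this is a standalone illustration only: the real module encrypts with OpenAM’s own symmetric key through its internal crypto utilities, while here a hard-coded AES key stands in, and the class/method names are mine, not the project’s.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Standalone sketch of the persistent cookie creation described above.
// NOTE: illustrative only; AES with a hard-coded key stands in for
// OpenAM's symmetric key, and all names here are hypothetical.
public class PersistentCookieSketch {

    // The payload layout used by the sample module: username%realm%timestamp
    static String buildPayload(String username, String realm, long timestamp) {
        return username + '%' + realm + '%' + timestamp;
    }

    // Encrypts the payload and Base64url-encodes it so it is safe to use
    // as a cookie value.
    static String encrypt(String payload, byte[] key16) {
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key16, "AES"));
            byte[] raw = cipher.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8);
        String payload = buildPayload("demo", "/", System.currentTimeMillis());
        System.out.println("pCookie=" + encrypt(payload, key));
    }
}
```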

On the other hand, the authentication module’s job is to figure out whether the incoming request contains an existing “pCookie”, and if so, whether its value is valid. To do that we again check whether the request is available, then try to retrieve the cookie. If there is no cookie, there is nothing to talk about; otherwise we decrypt the cookie value using OpenAM’s symmetric key.
The decrypted value is then tokenized on the “%” character, and we first check whether the current realm matches the cookie’s realm. If it does, we check the validity interval against the stored timestamp. If things don’t add up, this is still a failed authentication attempt. However, if everything is all right, we can safely say that the user is authenticated, with the username taken from the decrypted content.
If there was some issue with the cookie, we simply remove the “pCookie”, so hopefully we won’t run into it again.
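The validation logic can be sketched the same way. Again, this is a standalone illustration with a stand-in AES key and hypothetical names, not the actual module code.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Standalone sketch of the validation performed by the authentication module.
// Same caveat as before: AES with a hard-coded key stands in for OpenAM's
// symmetric key, and the class/method names are hypothetical.
public class PersistentCookieValidator {

    static String encrypt(String payload, byte[] key16) {
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key16, "AES"));
            byte[] raw = cipher.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Decrypts and validates the cookie. Returns the username on success,
    // or null if the realm does not match, the cookie has expired, or the
    // content is malformed (i.e. a failed authentication attempt).
    static String validate(String cookieValue, byte[] key16, String expectedRealm,
            long validityMillis, long now) {
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key16, "AES"));
            byte[] raw = cipher.doFinal(Base64.getUrlDecoder().decode(cookieValue));
            String[] parts = new String(raw, StandardCharsets.UTF_8).split("%");
            if (parts.length != 3) {
                return null;                                   // malformed content
            }
            if (!expectedRealm.equals(parts[1])) {
                return null;                                   // realm mismatch
            }
            if (now - Long.parseLong(parts[2]) > validityMillis) {
                return null;                                   // cookie expired
            }
            return parts[0];                                   // the username
        } catch (Exception e) {
            return null;                                       // bad cookie -> fail
        }
    }

    public static void main(String[] args) {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8);
        String cookie = encrypt("demo%/%" + System.currentTimeMillis(), key);
        // Validate within a 5 minute window against the top level realm.
        System.out.println(validate(cookie, key, "/", 5 * 60 * 1000L,
                System.currentTimeMillis()));
    }
}
```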

Limitations

There are a couple of limitations with this example module though:

  • when the PAP is part of the authentication process, it will always create a persistent cookie for every single user (but only when the cookie doesn’t already exist).
  • the validity interval and the cookie name are hardcoded; moreover, every single realm will use the same cookie, which can be a problem in certain deployments.

If you are looking for installation details, check out the GitHub project README ;)

Implementing remember me functionality – part 1

It’s quite a common requirement to have long-running sessions with OpenAM, so I’m going to enumerate the different ways to achieve “persistent” sessions and provide some comparison.
There are two main ways to achieve long-running sessions:

  • Using built-in functionality
  • Implementing custom authentication module + post processing class

Today we are only going to deal with the built-in persistent cookie features, the next article will describe the more user-specific solution.

Setting Expires attribute on session cookie

By default OpenAM issues session cookies without an Expires attribute, meaning the cookie is only stored until the browser is stopped/restarted. This solution is especially handy if you have large session timeout values set, because in any other case your session cookie would be saved in the browser for a good while even though the underlying session could have already timed out.
Now there are two ways to enable this mode:

  • Enable persistent cookies globally for all sessions:
    Go to Configuration -> Servers and Sites -> Default Server Config -> Advanced tab, then add the following properties:

    openam.session.persist_am_cookie=true
    com.iplanet.am.cookie.timeToLive=100
    
    After doing so you will probably need to restart the container for the changes to take effect. Any new sessions created afterwards will set the Expires attribute on the session cookie.
    When using DAS you need to set these properties in the DAS configuration file instead.

  • Allow to have persistent sessions on-demand:
    Go to Configuration -> Servers and Sites -> Default Server Config -> Advanced tab, then add the following properties:

    openam.session.allow_persist_am_cookie=true
    com.iplanet.am.cookie.timeToLive=100
    

    In this case, to get a persistent cookie you need to include the openam.session.persist_am_cookie=true parameter in the Login URL (so unfortunately you can’t put this on the Login form).

    When using DAS you need to set these properties in the DAS configuration file instead.
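For illustration, an on-demand persistent cookie login URL would look something like this (host, port and deployment URI are placeholders for your own setup):

```
http://openam.example.com:8080/openam/UI/Login?openam.session.persist_am_cookie=true
```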

NOTE: the timeToLive value is in minutes.
NOTE 2: these properties only fully work on the core servers if you have OPENAM-1280 integrated, but if you use DAS these should work just fine.
NOTE 3: the timeToLive value only tells the browser how long the cookie should be stored; it has absolutely NO influence on the actual session timeout intervals.

Creating a long living non-session cookie

When using this option OpenAM creates a second cookie, “DProPCookie”, next to the iPlanetDirectoryPro cookie, with encrypted content. This encrypted cookie stores the information necessary to “recreate” a session when a user comes to OpenAM without a valid session cookie but with a valid persistent cookie. OpenAM then decrypts the cookie and, based on the stored information, creates a new session.
In order to enable this persistent cookie mode you need to go to the Access Control -> realm -> Authentication -> All Core Settings page and do the following:

  • Enable the “Persistent Cookie Mode” option
  • Set “Persistent Cookie Maximum Time” option

After this, when you include the “iPSPCookie=Yes” parameter in the login URL, or you have an actual checkbox on your Login.jsp submitting the exact same parameter during login, OpenAM will issue the DProPCookie persistent cookie with the configured timeout.
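For example (again, host, port and deployment URI are placeholders):

```
http://openam.example.com:8080/openam/UI/Login?iPSPCookie=Yes
```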
This method does not seem to be implemented for DAS.
NOTE: OpenAM will delete the persistent cookie as well, when the user logs out.
NOTE 2: due to a possible bug (OPENAM-1777) OpenAM may not invoke Post Authentication Processing classes when it recreates the user session.

Conclusion

As you probably noticed, the Expires-on-session-cookie method is only really useful when you have large timeouts; it is also a global option, meaning that once you enable it, it is set for all the different realms.
On the other hand DProPCookie is configurable per realm, but it does not seem to work when using DAS.

That’s all folks, this is how the built-in persistent cookies work. If these do not seem to match your requirements, stay tuned for the next post.

Configuring Oracle OIF for Salesforce.com SAML SSO


Here is a quick how-to on configuring Oracle Identity Federation (OIF) as the SAML Identity Provider for Salesforce.com.

This turns out to be surprisingly easy to set up. For pre-requisites you should have the following in place:

  • It is easiest if your OIF instance is configured to use the POST SAML binding by default. You can override this on a per-provider basis, but most often you will use POST, so it makes sense to set it as the default.
  • You need a salesforce.com account. Developer accounts are free and support SAML.
  • Create a user in your LDAP for testing SAML SSO. We will match on the user's "mail" attribute. Set this to a relevant value (testuser@example.com). The mail attribute does not need to match the salesforce.com account id.
  • This example assumes we are using OAM as the authentication engine for OIF, and that both reference the same LDAP server.


Step 1 is to import your OIF IdP cert into salesforce.com. The cert is available at http://example.com/fed/idp/cert (where example.com is replaced by your install domain/port).



Save this cert to a text file, and import it into salesforce.com.  The salesforce SAML SSO setup is under the Security Controls left hand nav bar:







On the salesforce SSO setting page perform the following:

  • Import the saved certificate file.
  • Set the issuer to the provider id for OIF. This is available from the OIF console under "Identity Provider" properties. It is usually of the form: http://yourserver:7499/fed/idp. Note that the provider id must be in URL format - but the id itself is opaque. In my example OIF is actually not available on port 7499 - but that's OK. The provider id is just for matching purposes.
  • Check the SAML id user type and location as shown above
  • Enter "mail" as the attribute name
After saving your changes, export the salesforce.com metadata and save it to a file.

At this time you should also set the salesforce.com Federation Id for your test user to match your LDAP user's email attribute. This is under the salesforce.com menu:
  • Personal Setup -> My Personal Information -> Personal Information -> Edit. 

You want to set the Federation Id under Single Sign On Information:





On your OIF console, navigate to Admin -> Federations and import the saved meta data:





If your defaults in OIF are correct, you are most likely "done".  Test SSO out by going to the following url:

http://yourserver.com/fed/idp/initiatesso?providerid=https://saml.salesforce.com


Pro tip: The Firefox SAML tracer plugin lets you view what is being sent in the assertion.

If federation is not working the most likely cause is that the federation id does not match, or OIF is not sending the right attribute.

You can edit the salesforce.com provider settings in OIF to fiddle with this. For example:


You can also add an explicit attribute mapping (hit the "Edit" button above to add a mapping)

With my OIF defaults it "just worked". YMMV



Rotating debug logs

Since OpenAM 10.0.0 it is possible to enable time-based debug log rotation to make your log files a bit more manageable. Time-based, as in: the logs are rotated periodically after a configurable amount of time has passed.
Now the problem with debug logs is that they get written to in the very early phase of the startup process: just think about the fact that an error can occur while trying to connect to the configuration store. This means that we can’t use the service config system to store the rotation config, as by the time those settings are loaded, files have already been written. To overcome this problem you instead need to modify/create files in the WAR file, making these configurations accessible at startup as well.
To enable debug log rotation, you need to create/modify the WEB-INF/classes/debugconfig.properties file:

org.forgerock.openam.debug.prefix=mylogs-
org.forgerock.openam.debug.suffix=-MM.dd.yyyy-kk.mm
org.forgerock.openam.debug.rotation=1440

The properties in detail:

  • org.forgerock.openam.debug.prefix (optional): an arbitrary string that will be used as a filename prefix for every debug log
  • org.forgerock.openam.debug.suffix (optional): a valid Java date format string. Please note that if you set up rotation for every 5 minutes and you don’t include minutes in the format string, you won’t really rotate the debug log every 5 minutes (the logs won’t get overwritten, you’ll just have one log file with all the content).
  • org.forgerock.openam.debug.rotation: the rotation interval for the debug logs, in minutes. If the rotation interval is greater than 0, rotation is enabled.

Since this has been implemented in the shared Debug class, you can put this properties file on the classpath of any OpenAM component that uses the ClientSDK, so things like DAS and the Java EE agents too.
Obviously after creating/modifying the properties file, you have to restart the container for the changes to take effect.

Enabling SSL Termination for OAM and OAAM



Some components of the identity stack need to verify the user connected via SSL.

If you are proxying connections through OHS to your OAM servers, you can set up OHS to terminate SSL and pass the connection through to OAM (usually running on port 14100). But a little trick is needed to tell WebLogic that the connection is secure, even though it may be coming in over a non-secure port (14100).

Chris Johnson has a comprehensive write up on SSL offloading which covers the more complex scenario where an external load balancer is doing the termination.

The recipe I describe here is for the simpler case where OHS is terminating SSL and forwarding the connections to OAM via the mod_weblogic plugin.

First validate that your Weblogic domain has the Weblogic Plugin enabled (see Chris's article above). I found that it was enabled by default. YMMV.

Login to /oamconsole and navigate to

System Configuration -> Access Manager -> Access Manager Settings


Edit your load balancer settings to enable SSL. Here is an example:




Note that the server host and port are your OHS instance (not the OAM server / port number).

Restart the oam_server1 managed server for this change to take effect.

Now edit your mod_wl_ohs.conf in your OHS instance and set the "WLProxySSL" to ON for OAM and OAAM:

<Location /oaam_server>
 SetHandler weblogic-handler 
 WebLogicPort 14100 
 WLProxySSL ON 
</Location>
<Location /oam>
 SetHandler weblogic-handler
 WebLogicPort 14100 
 WLProxySSL ON 
</Location>


Restart your OHS instance.

Try to go to a protected resource. You should be redirected to the OAM login page over an SSL connection.






Let’s revive this blog ;)

I know, I know… It’s been a while… Believe me I feel the guilt… I should really share more with the community, so here I am begging for your mercy, and also for your help.
Recently I often find myself a bit clueless, as I don’t know what to write about any more; there are just too many subjects around OpenAM. :)
So I think the best way to resolve this issue is to ask the readers themselves: what would you like to read about? Are there any areas where the documentation could be enhanced? Tell me. I’ve created a very simple Google Form here:
https://docs.google.com/spreadsheet/viewform?formkey=dGVXeDZ2SlQxbmtIS1dpT3ZZb1F5QlE6MQ.

By filling this out you can help me to figure out:
  • what subjects people are most interested in
  • how many active readers I might have :)

Guaranteed to take less than 5 minutes!

Thanks in advance.

SAML Federation in OAM 11g R2



Oracle Access Manager 11g R2 adds SAML Relying Party support as a native feature. You no longer need to stand up and integrate OIF if you want to federate with another IdP.

SAML IdP support didn't quite make it into the first OAM R2 release - so you will still need OIF. This is on the roadmap - so stay tuned.

In this article I will show how easy it is to set up OAM as a SAML relying party.

I am using OIF configured as the sample IdP. See my previous article on setting up OIF to self federate (handy for experimenting). Assuming you have OIF configured you should be able to bring up the test SP SSO page:  http://demo:7499/fed/user/testspsso


You will be challenged for credentials. After logging in you will see this:




Great. Now we have a working IdP we can proceed to setting up OAM as a relying party.

As a pre-requisite make sure you have federation services enabled in OAM 11G (System Configuration -> Available Services)

Bring up the OAM Console and navigate to System Configuration -> Identity Federation

Click on “Identity Providers” and select the new icon.







Next we need to load the SAML IdP meta data from OIF. You can export it from the em console or bring up and save this URL:

http://demo:7499/fed/idp/metadata

Select and import this metadata file into OAM. You should also select “Create Authentication Scheme and Module” at this time. Save your changes. You now have OAM configured as a Relying Party.


We still need to configure OIF to know about OAM as the service provider. To do this, export OAM’s SP meta data (under Federation Settings), and import it into OIF (Admin -> Federations ):







Edit the new federation in OIF:

In the edit screen, click on “Enable Attributes in Single Sign-on”.

  • Check the “X509 Subject Name” and “Email Address” checkboxes and click the Apply button.
  • Then click the Edit button next to “Attribute Mapping and Filters”.
  • In the Name mapping tab, add two mappings:
    • User Attribute Name: givenname – Assertion Attribute Name: givenname
    • User Attribute Name: title – Assertion Attribute Name: role
  • Make sure “Send With SSO Assertion” is enabled.

Back in OAM, navigate to Policy -> Authentication Schemes. You will see a new Authentication Scheme has been created:





You can use this Authentication Scheme in a policy - just like any other (LDAP, Kerberos, etc.).

To test your federation, create a test directory under your ohs1 instance and protect this URL with the federation Authentication Scheme. The OHS folder is something like:

/app/oracle/fmw/Oracle_WT1/instances/instance1/config/OHS/ohs1/htdocs

For this example we have created a federation/index.html file under htdocs.

Bring up the policy domain for ohs1, and create an Authentication Policy that uses the Federation Scheme:







Now create a resource and protect it with the policy:







Clear your browser’s cookies and bring up the URL:

http://demo:7777/federation/index.html


Because this URL is protected by the Federation Scheme, OAM will initiate federation with the configured IdP. You should see the IdP logon page. After logging on you will be redirected back to the protected page.


Pro Tip: Install the SAML tracer Firefox plugin so you can watch the SAML message exchange!




OAM R2 REST APIs for Policy Management



Oracle Access Manager 11g R2 provides several new REST APIs. This continues a trend to expose key functionality via Web Services.

The OAM Mobile and Social service provides APIs for Authentication, Authorization and User Profile services. I will cover those APIs in a future article (have a look here for examples) - but today I want to focus on the policy management APIs.

The Policy Administration API enables you to interact with OAM to create a variety of policy objects such as Application Domains, Resources, AuthN Schemes, and AuthN/AuthZ policies. The policy model is shown below:




For example, if you want to retrieve all of the resources in an Application Domain you can perform a GET against the /resource URI:


curl -u USER:PASSWORD 'http://<SERVER>:<PORT>/oam/services/rest/11.1.2.0.0/ssa/policyadmin/resource?appdomain=IAM%20Suite'

Note: The port above is where the OAM Admin Server is deployed (often 7001). It is NOT the managed server (oam_server1 - 14100 by default). 
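If you'd rather call the API from Java than curl, the only fiddly parts are URL-encoding the appdomain value (note the space in "IAM Suite") and building the Basic auth header. Here is a minimal sketch using only the JDK; the server name, port, credentials and helper names are all placeholders of mine:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of building the same policyadmin request from Java.
// SERVER/PORT/credentials are placeholders; the helper names are mine.
public class PolicyAdminRequest {

    // Builds the resource listing URL; the appdomain value must be
    // URL-encoded before being placed in the query string.
    static String resourceUrl(String server, int port, String appDomain) {
        return "http://" + server + ":" + port
                + "/oam/services/rest/11.1.2.0.0/ssa/policyadmin/resource?appdomain="
                + URLEncoder.encode(appDomain, StandardCharsets.UTF_8);
    }

    // The Authorization header curl derives from -u USER:PASSWORD.
    static String basicAuth(String user, String password) {
        return "Basic " + Base64.getEncoder().encodeToString(
                (user + ":" + password).getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(resourceUrl("oam.example.com", 7001, "IAM Suite"));
        System.out.println(basicAuth("USER", "PASSWORD"));
    }
}
```

You would pass the returned header value with the GET request via any HTTP client.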

These APIs are useful for anyone who wants to automate policy creation. To provide an example: Let's assume we want to automate the process of bringing new applications online. Each application will have a resource URL to protect (example: /financeapp/**) and an LDAP group which should be used to enforce access to the application (example: FinanceManagers).

To demonstrate this functionality I have created a small Netbeans project that you can download here.

The demo uses the Jersey client to invoke OAM's policy management API.

The XML schema is used to generate JAXB bindings for the various policy objects. The schema can be found in your OAM deployment directory (run a find ..../domains/IAM -name *.xsd -print to locate it). A copy of the schema is included in the project file, but this may change with subsequent releases.

The demo does not cover every use case - but it should give you the general idea. Feedback is welcome!






Subject to change – JAAS to JASPI

The move from JAAS to JASPI subtly changes how we interact with identities. In the world of JAAS we deal with Subjects, the entities making a request (typically a user), whilst Java EE deals with Principals, the representation of that entity, such as a username. The difference may not seem great, but a Subject may have several Principals, and this has caused some headaches when using JAAS, since determination of the relevant Principal is left to the implementation.
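The multiple-Principals situation is easy to demonstrate with just the JDK’s JAAS classes. This is an illustrative sketch; the NamedPrincipal class and the example principal values are mine, not from any container:

```java
import java.security.Principal;
import javax.security.auth.Subject;

// Demonstrates the ambiguity described above: a JAAS Subject may carry
// several Principals, and the API does not say which one is "the" caller.
// NamedPrincipal and the example values are purely illustrative.
public class SubjectPrincipals {

    static final class NamedPrincipal implements Principal {
        private final String name;
        NamedPrincipal(String name) { this.name = name; }
        @Override public String getName() { return name; }
    }

    static Subject demoSubject() {
        Subject subject = new Subject();
        // A single authenticated user can easily end up with all of these:
        subject.getPrincipals().add(new NamedPrincipal("jdoe"));             // login name
        subject.getPrincipals().add(
                new NamedPrincipal("uid=jdoe,ou=people,dc=example,dc=com")); // LDAP DN
        subject.getPrincipals().add(new NamedPrincipal("employees"));        // group
        return subject;
    }

    public static void main(String[] args) {
        // Which of the three is the caller principal? JAAS leaves that to the
        // implementation; JASPI's CallerPrincipalCallback makes it explicit.
        demoSubject().getPrincipals().forEach(p -> System.out.println(p.getName()));
    }
}
```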

The days of JAAS have long been numbered however, and JSR-196 (also known as JASPI or JASPIC) is emerging at last; inclusion in JEE6 has definitely helped to push JASPI beyond just Glassfish support.

One of the changes is the CallerPrincipalCallback, which an authentication module uses to tell the container which Principal is applicable; that Principal is then available from the ServletRequest via getUserPrincipal(…).

Some background music for mulling over Subjects and Principals: Subject’s theme from Aldo Nova

Stupid Oracle vktm tricks to improve VirtualBox performance



In the process of creating a demo VirtualBox image running OEL 6 and the Oracle database 11.2.0.3.0 I noticed the idle CPU consumption was quite high (8% on the guest, 35% on the host).

The culprit turned out to be the Oracle database vktm process. This is a time keeping process - and it calls gettimeofday() *very* frequently.  This can have a negative performance impact in virtualized environments.

A colleague who is a database whiz suggested the following trick:


sqlplus / as sysdba
alter system set "_high_priority_processes"='LMS*' scope=spfile; 

This removes the vktm process from the list of high priority processes.

After this change (you need to bounce the database) the idle CPU consumption comes down to 1-2% or so. A nice improvement!

It goes without saying that this is:

a) Totally unsupported
b) Probably dangerous. This will most certainly break things in the database - such as statistics, auditing, etc.
c) For demo/development use only. If you care about your data don't do this!

YMMV.