Impersonation Authentication module for OpenAM

Introduction

Support for impersonation is useful in enterprise use cases where designated administrators must act on behalf of a user in certain scenarios. By impersonating another user, an administrator, if authorized to do so, gains access to a restricted view of the user’s profile in the system. This is helpful in situations such as password reset, request-based access, and profile updates. However, such a system must be designed with controls that actively restrict access to the user’s entitlements from the outset. This can be achieved using step-up authentication for gaining access to private user data, and by using the OpenAM policy engine for advanced resource-based decisioning.

Configuration

An OpenAM custom authentication module was written to enable impersonation support. The module requires the username of the end user being impersonated and the administrator’s credentials. After the username and password are submitted, the admin account is first authenticated and then authorized to complete the impersonation request, using REST calls to a specified OpenAM policy endpoint. This policy can be either local or external, as we shall examine further. The impersonated user is also validated as active in the system. If all checks pass, the administrator is permitted to impersonate, and OpenAM creates a session for the impersonated user. The module is configured using the following settings:

  1. The resource set you want to check policy for. This resource set is nothing but a special URL that invokes policy evaluation for impersonation.
  2. The authentication realm you want the administrator to authenticate in. The authentication module allows for realm-specific authentication.
  3. The OpenAM server where the policy resides, the realm where the policy resides, and the policy-set name. The policy does not need to be local and can be on a remote policy host.
  4. Whether you also want the administrator to be a member of a local group, in addition to the external policy authorization.
A step-by-step account of the workings of the module follows.
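Taken together, the checks above reduce to a single allow/deny decision. The sketch below is purely illustrative (the function and parameter names are assumptions, not the module’s actual code); each argument is the boolean outcome of one of the checks just described:

```python
def may_impersonate(admin_authenticated, in_local_group, policy_allows,
                    target_user_active, require_group=True):
    """Hypothetical sketch combining the module's checks into one decision.

    Arguments correspond to: admin authentication, optional local group
    membership, external policy evaluation, and the impersonated user's
    active status.
    """
    if not admin_authenticated:
        return False
    if require_group and not in_local_group:
        return False  # local group membership required but not present
    if not policy_allows:
        return False  # external policy evaluation denied the request
    return target_user_active
```

The `require_group` flag mirrors setting 4 above: group membership is only enforced when the module is configured to check it.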

Development

Configuration read from Module Instance

options -> {iplanet-am-auth-check-group-membership=[True], iplanet-am-auth-impersonation-hash-enabled=[true], iplanet-am-auth-authentication-realm=[authn], iplanet-am-auth-impersonation-auth-level=[1], iplanet-am-auth-resource-set=[http://openam:8080/openam/index.html], moduleInstanceName=impersonate, iplanet-am-auth-impersonation-id=[Enter the user id to impersonate?], iplanet-am-auth-impersonation-group-name=[impersonation], iplanet-am-auth-openam-server=[http://openam:8080/openam], iplanet-am-auth-policy-realm=[impersonation], iplanet-am-auth-policy-set-name=[impersonation]}

Authorize the administrator locally

In our test scenario, ‘user.0’ is an administrative user that has been granted membership of the group named ‘impersonation’, as configured in the module (see above).

We build an AMIdentity object for the group and validate membership.

[AMIdentity object: id=impersonation,ou=group,o=impersonation,ou=services,dc=openam,dc=forgerock,dc=org]
value of attribute: uid=user.0,ou=People,dc=forgerock,dc=com
userName to check: user.0
match found! admin: user.0 allowed to impersonate user: user.1
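The membership match in the log above boils down to comparing the uid RDN of each group member DN with the administrator’s username. A minimal sketch of that comparison (the helper name is illustrative, not the module’s actual code):

```python
def dn_matches_user(member_dn, username):
    """Return True if a group member DN such as
    'uid=user.0,ou=People,dc=forgerock,dc=com' refers to `username`.
    Illustrative sketch of the membership check logged above."""
    rdn = member_dn.split(",", 1)[0]      # first RDN, e.g. 'uid=user.0'
    attr, _, value = rdn.partition("=")
    return attr.strip().lower() == "uid" and value.strip() == username
```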

Authorize the Administrator

Get the SSO token for the admin who is trying to impersonate by authenticating them to the realm specified in the configuration; this token is then used as the subject in the policy call:

json/authn/authenticate response-> {"tokenId":"AQIC5wM2LY4Sfcxokjvdayf3ig0oDuQITXRTWT9B_3hq72A.*AAJTSQACMDEAAlNLABI1ODk0Nzg1NTEyNDUzNzcxNDI.*","successUrl":"/openam/console"}
tokenId-> AQIC5wM2LY4Sfcxokjvdayf3ig0oDuQITXRTWT9B_3hq72A.*AAJTSQACMDEAAlNLABI1ODk0Nzg1NTEyNDUzNzcxNDI.*
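The tokenId can be lifted straight from the JSON body returned by the authenticate endpoint. A minimal sketch (the function name is an assumption made for the example):

```python
import json

def extract_token_id(authn_response_body):
    """Pull the SSO tokenId out of the /json/<realm>/authenticate
    response, whose shape is shown in the log output above."""
    return json.loads(authn_response_body)["tokenId"]
```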

 

Build the second policy REST call, using the resource set, OpenAM server, policy set, and policy realm from the configuration passed to the module.

stringentity-> {"resources": ["http://openam:8080/openam/index.html"],"application":"impersonation", "subject": {"ssoToken":"AQIC5wM2LY4Sfcxokjvdayf3ig0oDuQITXRTWT9B_3hq72A.*AAJTSQACMDEAAlNLABI1ODk0Nzg1NTEyNDUzNzcxNDI.*"}}
json/impersonation/policies?_action=evaluate response-> [{"advices":{},"actions":{"POST":true,"PATCH":true,"GET":true,"DELETE":true,"OPTIONS":true,"PUT":true,"HEAD":true},"resource":"http://openam:8080/openam/index.html","attributes":{"uid":["user.0"],"cn":["Javed Shah"],"roleName":["timeBoundAdmin"]}}]

Custom response attributes can be passed back to the module for further evaluation if needed. For example, a statically defined roleName=timeBoundAdmin could be used to further restrict this impersonation request to the time window specified in the ‘timeBoundAdmin’ control. This example is only given to seed the imagination: the module does not currently restrict the impersonation session to a time window, but it could be extended to do so.

Parse the JSON response from Policy Evaluation

jsonarray-> {"resource":"http://openam:8080/openam/index.html","attributes":{"uid":["user.0"],"cn":["Javed Shah"],"roleName":["timeBoundAdmin"]},"advices":{},"actions":{"POST":true,"PATCH":true,"GET":true,"DELETE":true,"OPTIONS":true,"HEAD":true,"PUT":true}}
If the action values returned for GET and POST are both true, the admin is permitted to impersonate. This could be extended to include other actions as necessary. Finally, destroy the admin session, now that it is no longer needed, and return the impersonated user as the Principal for constructing an OpenAM session.
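That final decision can be sketched as follows; this is illustrative parsing of the response shape shown above, not the module’s actual code:

```python
import json

def impersonation_permitted(policy_response_body, resource):
    """Return True if the policy evaluation response grants both GET
    and POST on the configured resource set."""
    for entry in json.loads(policy_response_body):
        if entry.get("resource") == resource:
            actions = entry.get("actions", {})
            return bool(actions.get("GET")) and bool(actions.get("POST"))
    return False
```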

Demo

Our short demo begins with the administrator being asked for the username of the user they want to impersonate.
Next, the module asks for the admin credentials.
If the administrator is unable to authenticate, or does not belong to the local group, or fails external policy evaluation, the following error screen is shown.
If all checks pass, the administrator is granted the user’s session and logs in to OpenAM.

Source

This article was first published on the OpenAM Wiki Confluence site: Impersonation in OpenAM

Configuring OpenAM IDP Proxy with ADFS and Salesforce

Introduction

This post describes how OpenAM can be configured as a hosted SAML Identity Provider Proxy, with Salesforce acting as the Service Provider and Active Directory Federation Services 2.0 as the Identity Provider. Note that to a Service Provider, an IdP Proxy looks like an ordinary IdP; likewise, to an Identity Provider, an IdP Proxy looks like an SP. An IdP Proxy thus combines the capabilities of both an IdP and an SP.

The following table is lifted from Spaces. Like a Web (HTTP) Proxy, an IdP Proxy delivers increased efficiency, security, and flexibility.

             Web Proxy                        IdP Proxy
Efficiency   cache web pages                  cache attributes
Security     controlled access to web pages   controlled access to federation IdPs
Flexibility  HTTP request/response filtering  SAML request/response filtering

Presented here is the IdP Proxy flow:

  1. A browser client requests a web resource protected by a SAML SP (Salesforce). If a security context for the principal already exists at Salesforce, skip to step 14.
  2. The client is redirected to the IdP component of the IdP Proxy (OpenAM-IdP0), which is protected by the SP component of the IdP Proxy (OpenAM-SP1).
  3. The client makes a SAML AuthnRequest to the SSO service at OpenAM-IdP0. If a security context for the principal already exists at OpenAM-IdP0, skip to step 10.
  4. The AuthnRequest is cached and the client is redirected to the terminal IdP (ADFS). By default, ADFS presents a Basic Authentication prompt.
  5. The client makes a SAML AuthnRequest to the SSO service at ADFS. If a security context for the principal does not exist, ADFS identifies the principal.
  6. ADFS updates its security context for this principal, issues one or more assertions, and returns a response to the client.
  7. The client submits the response to the assertion consumer service at OpenAM-SP1. The assertion consumer service validates the assertions in the response.
  8. OpenAM-SP1 updates its security context for this principal and redirects the client to OpenAM-IdP0.
  9. The client makes a SAML AuthnRequest to OpenAM-IdP0, the same AuthnRequest made at step 3.
  10. OpenAM-IdP0 updates its security context for this principal, issues a single assertion, and returns a response to the client. The response may also contain the assertions issued by ADFS at step 6.
  11. The client submits the response to the assertion consumer service at Salesforce. The assertion consumer service validates the assertions in the response.
  12. Salesforce updates its security context for this principal and redirects the client to the resource.
  13. The client requests the resource, the same request issued at step 1.
  14. The resource makes an access control decision based on the security context for this principal and returns the Salesforce landing page to the client.

 

For starters, please refer to Victor’s excellent post about preparing the metadata files for a similar scenario at SAMLv2 IDP Proxy Part 1.

Follow steps 1-4 in that post to prepare your metadata.

Federation Entities in OpenAM

In this section we will survey the entities you have imported in OpenAM so far:

Circle of Trust

 

Remote Service Provider: Salesforce

Your settings should be very similar to those presented here:

Signing and encryption can be turned off, if not needed.

This screen shows a critical setting related to the IDP Proxy. Ensure your ADFS 2.0 Entity ID is correctly defined in the list.

Remote Identity Provider: ADFS 2.0

Hosted IDP Proxy

IDP Section

Set “test” as the signer certificate in the IDP section of the Hosted IDP/SP proxy entity.

IDP Section continued..

 

SP Section

The first page..

Assertion processing screen..

The mapping shown below is critical. Here we map the ADFS credential to an internal (anonymous) user; in our case it is “demo”. It could be “anonymous” if such a user is present in your user repository.

Since ADFS does not support the SAML Scoping element, this integration also requires a custom Service Provider adapter that removes the Scoping element from the SAML AuthnRequest sent to ADFS.

SP Section Continued..

Add the Entity ID for Salesforce here:

 

Preliminary Steps: Configure OpenAM

1. Import certificates into OpenAM keystore and Java keystore:

# Import the Salesforce and ADFS certificates into the OpenAM keystore
/usr/java/jdk1.7.0_45/bin/keytool -importcert -alias sfdc -file SelfSignedCert_09Mar2014_053347.crt -keystore keystore.jks
/usr/java/jdk1.7.0_45/bin/keytool -importcert -alias adfs -file adfscert.cer -keystore keystore.jks

# Export the certificates so they can also be added to the Java trust store
/usr/java/jdk1.7.0_45/bin/keytool -exportcert -alias adfs -file adfscert.crt -keystore keystore.jks
/usr/java/jdk1.7.0_45/bin/keytool -exportcert -alias sfdc -file sfdc.crt -keystore keystore.jks

# Import both certificates into the JVM-wide trust store (cacerts)
/usr/java/jdk1.7.0_45/bin/keytool -importcert -alias adfs -file ccgadfs.crt -trustcacerts -keystore /usr/java/jdk1.7.0_45/jre/lib/security/cacerts
/usr/java/jdk1.7.0_45/bin/keytool -importcert -alias sfdc -file sfdc.crt -trustcacerts -keystore /usr/java/jdk1.7.0_45/jre/lib/security/cacerts

2. In OpenAM, after importing the metadata files, add the Federation Authentication Module under the /local realm.

Step 5: Creating the Single Sign On settings in Salesforce

In Salesforce, under Security Controls -> Single Sign On Settings, create a new “SAML Single Sign-On Setting”, and fill in the Identity Provider Login URL, and Logout URLs from the metadata file “machineb.idpproxy.com-idp-meta.xml” in Step 4.a.

Step 6: Importing the Service Provider descriptor from the IdP Proxy into ADFS 2.0

On the Windows server, start the AD FS 2.0 Management utility and create a new relying party trust by clicking “Add Relying Party Trust”.

Select “Import data about the relying party from a file”, and use the “machineb.idpproxy.com-sp-meta.xml” you created in Step 4.c. Call it “Salesforce via OpenAM IDP Proxy” and finish.

Select the newly created relying party and ensure the settings match the screenshots presented here.

For example, change the default SAML ACE from Artifact to POST.

Also, change the secure hash algorithm to SHA-1 as shown here.

Click on “Edit Claim Rules” and follow instructions given in OpenAM and ADFS2 configuration to create the first rule.

Create a custom claim rule using the following script:

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"] => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/namequalifier"] = "<entity-id of ADFS 2.0>", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/spnamequalifier"] = "<entity-id of your IDP proxy>");

You should see two rules now:

Click ok to finish editing claim rules.

Optional ADFS 2.0 configuration

You can configure ADFS to not encrypt or sign SAML responses. Follow these steps if necessary:

  1. Use Windows Power Shell to check for installed ADFS snap-in: Get-PSSnapin -Registered
  2. You should be able to see: Microsoft.Adfs.PowerShell 1.0 “This powershell snap-in contains cmdlets used to manage Microsoft Identity Server resources”
  3. Now proceed to add it : Add-PSSnapin Microsoft.Adfs.Powershell
  4. Configure ADFS to not encrypt the SAML response: Set-ADFSRelyingPartyTrust -TargetName "Salesforce via OpenAM IDP Proxy" -EncryptClaims $False
  5. If you get an erroneous SAML StatusCode “Responder” error in OpenAM during testing, run these commands to turn off certificate revocation checks in ADFS:
    1. Set-ADFSRelyingPartyTrust -TargetName "Salesforce via OpenAM IDP Proxy" -EncryptionCertificateRevocationCheck 'None'
    2. Set-ADFSRelyingPartyTrust -TargetName "Salesforce via OpenAM IDP Proxy" -SigningCertificateRevocationCheck 'None'

Testing

Navigate to your Salesforce SSO URL; you will immediately be taken to the ADFS Basic Authentication prompt:

 

Enter your ADFS domain credentials here and hit Log In. If all is well, you should be taken to your Salesforce landing page.

OpenAM 2FA using the TeleSign PhoneID Score API

Introduction

TeleSign PhoneID Score combines predictive data from multiple sources, including telecom data, traffic patterns, and reported fraud, to assess the risk level of a phone number. The product provides a reputation score ranging from 0 to 1000, a risk level, and a recommendation based on a user’s phone number, all in real time. The scoring algorithm is dynamically updated to match ever-changing fraud trends: whenever a user’s activities change, so does the score. More information can be found at TeleSign.

The TeleSign Verify SMS API can be used to authenticate a known user, verify a transaction, or block fraudsters from opening thousands of accounts on your site. It verifies users in real time by sending a one-time verification code via SMS to their mobile phone. TeleSign delivers SMS to more than 200 countries. More info can be found here.

Demo

A custom authentication module receives the user’s telephone number, and uses TeleSign PhoneID Score API to evaluate the risk level of the phone number. If the phone number is of acceptable risk (low or moderate), then the authentication module uses the TeleSign Verify SMS API to send a verification code to the phone number entered.

The API returns a risk level and recommendation for the phone number. If the recommendation is “allow”, the module sends a random verification code via the TeleSign Verify SMS API and, once the user enters it, uses the same API to verify the code.

If the recommendation from the Score API is “block”, OpenAM shows a denied screen to the user. A deny response looks like this:

{"reference_id":"####","sub_resource":"score","errors":[],"status":{"updated_on":"2015-03-10T22:48:59.139040Z","code":300,"description":"Transaction successfully completed"},"numbering":{"original":{"phone_number":"####","complete_phone_number":"####","country_code":"92"},"cleansing":{"sms":{"phone_number":"####","country_code":"92","min_length":9,"max_length":12,"cleansed_code":105},"call":{"phone_number":"####","country_code":"92","min_length":9,"max_length":12,"cleansed_code":105}}},"risk":{"level":"high","score":959,"recommendation":"block"}}

If the user entered an incorrect code, the TeleSign Verify SMS API will indicate as such, and OpenAM will reject the login attempt.

If the verification code validates correctly, the user simply has to enter their username to log in to AM.

A low-risk response from the TeleSign PhoneID Score API looks like this:

{"reference_id":"####","sub_resource":"score","errors":[],"status":{"updated_on":"2015-03-10T22:52:17.603141Z","code":300,"description":"Transaction successfully completed"},"numbering":{"original":{"phone_number":"####","complete_phone_number":"####","country_code":"1"},"cleansing":{"sms":{"phone_number":"####","country_code":"1","min_length":10,"max_length":10,"cleansed_code":100},"call":{"phone_number":"####","country_code":"1","min_length":10,"max_length":10,"cleansed_code":100}}},"risk":{"level":"low","score":1,"recommendation":"allow"}}

Note that TeleSign uses a global phone directory, so if you do not prefix your country code, it will take the first digit(s) of the phone number and use them to identify the country.
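The risk handling described above can be sketched as a small decision helper. The function name and the ‘deny’/‘send-code’ outcomes are illustrative assumptions; the field names follow the sample responses shown earlier:

```python
import json

def telesign_decision(score_response_body):
    """Map a PhoneID Score response to the module's outcome:
    'block' recommendations are denied; low or moderate risk levels
    proceed to SMS verification via the Verify SMS API."""
    risk = json.loads(score_response_body).get("risk", {})
    if risk.get("recommendation") == "block":
        return "deny"
    return "send-code" if risk.get("level") in ("low", "moderate") else "deny"
```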

Source Code

Github

 

This article was first published on the OpenAM Wiki Confluence site: OpenAM and TeleSign Integration

New version of ForgeRock Identity Platform™

This week, we announced the release of the new version of the ForgeRock Identity Platform, which brings new services in the following areas:

  • Continuous Security at Scale
  • Security for Internet of Things (IoT)
  • Enhanced Data Privacy Controls


This is also the first identity management solution to fully implement the User-Managed Access (UMA) standard, making it possible for organizations to address expanding privacy regulations and establish trusted digital relationships. See the article that Eve Maler, VP of Innovation at ForgeRock and Chief UMAnitarian, posted to explain UMA and what it can do for you.

A more in depth description of the new features of the ForgeRock Identity Platform has also been posted.

The ForgeRock Identity Platform is available for download now at https://www.forgerock.com/downloads/

In future posts, I will detail what is new in the Directory Services part, built on the OpenDJ project.



OpenDJ Security Advisory #201508

Two security vulnerabilities have been discovered in all released versions of OpenDJ.

This advisory provides guidance on how to ensure your deployments can be secured.  Workarounds or patches are available for the issues, which will also be included in the forthcoming OpenDJ 2.6.4 maintenance release.

The severity of the issues in this advisory is Medium. Deployers should take steps as outlined in this advisory and apply the relevant update at the earliest opportunity.

The recommendation is to deploy the relevant patch or to upgrade to OpenDJ 2.6.4 when it becomes available.

Combined patches fixing all OpenDJ security advisories are available to customers for OpenDJ 2.6.0 – 2.6.3 from BackStage. Customers with other deployed patches should contact the support organization to obtain an updated patch. Customers running earlier releases need to upgrade. The fixes are also present in the community “trunk” nightly builds.

Issue #201508-01: LDAP read entry controls reveal protected attributes.
Product: OpenDJ
Affected versions: 2.4.0 – 2.4.6, 2.5.0-Xpress1, 2.6.0 – 2.6.3
Fixed versions: n/a
Component: Core Server
Severity: Medium

OpenDJ supports controls allowing an LDAP user to read and return the target entry of an update operation as part of the update operation itself. If the update operation succeeds, the target entry attributes should be returned subject to access control checks. These access control checks were not performed by OpenDJ, and the server would incorrectly return any attribute from the target entry.

The vulnerability can be exploited if the LDAP user performing the update has all of the following:

  • allowed access to use either the 1.3.6.1.1.13.1 or 1.3.6.1.1.13.2 controls;
  • allowed access to update (add/modify/delete/rename) an entry;
  • denied access to reading certain attributes on the entry being updated.

By default the impact is low because in OpenDJ anonymous users may not use these controls. By default authenticated users may only update their own entries, and anonymous users are read-only. By default users are prevented from reading only a few operational attributes from their own entry.

Customers with customized access control policies may wish to review them with ForgeRock support.

Workaround:

To prevent the vulnerability from being exploited, a simple solution is to temporarily restrict permission to use the two controls to trusted users until the patch is deployed. Ideally this would be done using the dsconfig command to identify the global ACI that allows the use of the two controls, and then remove those two controls from that ACI’s targetcontrol list. Instructions for using dsconfig are in the OpenDJ Administration Guide.

A simple alternative would be to temporarily restrict the use of the controls to RootDN users using the following ldapmodify command. Replace the connection parameters (host, port, bind DN, password file) as appropriate:

ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -j passwd.txt
dn: dc=example,dc=com
changetype: modify
add: aci
aci: (targetcontrol="1.3.6.1.1.13.1 || 1.3.6.1.1.13.2")
 (version 3.0; acl "ForgeRock Security advisory 201508";
 deny(read) userdn="ldap:///anyone";)
-

Note: These controls are rarely used but you must test your applications to make sure they will not be affected. OpenAM does not use these controls and will not be affected. OpenDJ’s REST interfaces use these controls if the “readOnUpdatePolicy” configuration for an endpoint is set to “controls”.

Resolution:

Update/upgrade to a fixed version or deploy the relevant patch.

Issue #201508-02: OpenDJ Administration Connector doesn’t reject anonymous operations.
Product: OpenDJ
Affected versions: 2.4.0 – 2.4.6, 2.5.0-Xpress1, 2.6.0 – 2.6.3
Fixed versions: n/a
Component: Core Server
Severity: Medium

OpenDJ has a global configuration parameter called “reject-unauthenticated-requests” that can be set to disallow any non-authenticated request. This provides an additional layer of protection in the server beyond the normal access control protection. The parameter applies to LDAP and LDAPS connection handlers (e.g. on ports 389 and 636); however, it was not applied to the administration connector interface, which typically listens on port 4444.

The parameter is set to “false” by default.

The bug’s impact is low, as access controls should always be used to enforce basic security and restrict the ability of non-authenticated connections to read or modify data.

Workaround:

Access controls should always be used to limit the data that non-authenticated connections can access. System-level firewall rules could be used to restrict access to the administration connector from only selected systems.

Resolution:

Update/upgrade to a fixed version or deploy the relevant patch.

Distributed Authentication in ForgeRock OpenAM

This blog post was first published @ www.fedji.com, included here with permission.

Let me start with a word of caution. I made a screencast to demonstrate Distributed Authentication in ForgeRock OpenAM, and you’ll find it embedded in this post. Some of my actions in it are questionable and should never be attempted even in a development environment, such as setting a redirect URL for successful authentication directly in the OpenAM Administration Console. The video is solely intended to give a hint of where the Distributed Authentication UI sits in an OpenAM deployment topology; several other things, such as network/firewall configuration and post-authentication processing, that go hand in hand with Distributed Authentication in OpenAM were beyond the scope of this short screencast. I really hope you get an idea of what Distributed Authentication in OpenAM is expected to achieve.

The following illustration might give you an idea of what’s demonstrated in the video. We have a client network that cannot (or is not supposed to) access the OpenAM Server in a different network directly (say, for security reasons). So in a Demilitarized Zone (DMZ), or perimeter network, we have a server that offers a Distributed Authentication UI to clients from the ‘untrusted network’. That way, clients see the OpenAM UI by accessing the server in the DMZ, which in turn talks to the OpenAM Server through a trusted channel. As one can imagine, network configuration such as firewall rules plays an important role in such a deployment, but sadly that’s all beyond the scope of our mini demonstration.

So if you have ~10 minutes to spare, watch it.

Enjoy!

Thanks: ForgeRock Documentation on OpenAM

OpenAM Security Advisory #201507

Security vulnerabilities have been discovered in OpenAM components. These issues may be present in versions of OpenAM including 12.0.x, 11.0.x, 10.1.0-Xpress, 10.0.x, 9.x, and possibly previous versions.

This advisory provides guidance on how to ensure your deployments can be secured. Workarounds or patches are available for all of the issues.

The maximum severity of issues in this advisory is Critical. Deployers should take steps as outlined in this advisory and apply the relevant update(s) at the earliest opportunity.

The recommendation is to deploy the relevant patches. Patch bundles are available for the following versions (in accordance with ForgeRock’s Maintenance and Patch availability policy):

  • 10.0.2
  • 11.0.2
  • 11.0.3
  • 12.0.0
  • 12.0.1
  • 12.0.2

Customers can obtain these patch bundles from BackStage.

Issue #201507-01: Business Logic Vulnerability

Product: OpenAM
Affected versions: 11.0.0-11.0.3, 12.0.1-12.0.2
Component: Core Server, Server Only
Severity: Critical

A specific type of request to /openam/frrest/oauth2/token endpoint can expose user tokens to another user.

Workaround:

Block all access to the /openam/frrest/oauth2/token endpoint.

Resolution:
Use the workaround or deploy the relevant patch bundle.

Issue #201507-02: Cross Site Scripting

Product: OpenAM
Affected versions: 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.2
Component: Core Server, Server Only
Severity: High

OpenAM is vulnerable to cross-site scripting (XSS) attacks which could lead to session hijacking or phishing.
Affecting 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3 and 12.0.0-12.0.2:

  • /openam/ccversion/Masthead.jsp

Affecting 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3 and 12.0.0-12.0.2:

  • /openam/oauth2c/OAuthProxy.jsp

Workaround:

Protect the listed endpoints with the container (for example using the mod_security Apache module) or filter external requests until a patch is deployed.

Resolution:
Use the workaround or deploy the relevant patch bundle.

OpenAM Security Advisory #201506

Security vulnerabilities have been discovered in OpenAM components. These issues are present in versions of OpenAM including 12.0.x and 11.0.x.

This advisory provides guidance on how to ensure your deployments can be secured. Workarounds or patches are available for all of the issues, which are also included in the 12.0.2 maintenance release.

The maximum severity of issues in this advisory is Critical. Deployers should take immediate steps as outlined in this advisory and apply the relevant update(s) at the earliest opportunity.

The recommendation is to upgrade to OpenAM 12.0.2 or deploy the relevant patches. Patch bundles are available for the following versions:

  • 11.0.3
  • 12.0.0
  • 12.0.1

Customers can obtain these patch bundles from BackStage.

Issue #201506-01: Thread-safety issues with CTS when encryption is enabled

Product: OpenAM
Affected versions: 11.0.0-11.0.3 and 12.0.0-12.0.1
Fixed versions: 12.0.2
Component: Core Server, Server Only
Severity: Critical

When the Core Token Service token encryption is enabled and the system is under a heavy load, it is possible that incorrect session/SAML/OAuth2 tokens are returned by the CTS.

Workaround:

Disable token encryption by setting the following property to false:

com.sun.identity.session.repository.enableEncryption

in the OpenAM console via Configuration -> Servers and Sites -> Default Server Settings -> Advanced or via ssoadm:

ssoadm update-server-cfg --servername default --adminid amadmin --password-file /tmp/pwd.txt --attributevalues com.sun.identity.session.repository.enableEncryption=false

This setting is false by default.

Note:

By changing this setting, any existing encrypted tokens stored in CTS will become unreadable by OpenAM.

Resolution:
Use the workaround or update/upgrade to a fixed version or deploy the relevant patch bundle.

Issue #201506-02: Possible user impersonation when using OpenAM as an OAuth2/OIDC Provider

Product: OpenAM
Affected versions: 10.1.0-Xpress, 11.0.0-11.0.3, 12.0.0-12.0.1
Fixed versions: 12.0.2
Component: Core Server, Server Only
Severity: High

When using multiple realms, it is possible for an authenticated user in realmA to acquire OAuth2 and OpenID Connect tokens that correspond to realmB.

Workaround:

None.

Resolution:
Update/upgrade to a fixed version or deploy the relevant patch bundle.

techUK: Securing the IoT – Workshop Review

This week saw techUK host a workshop on securing the Internet of Things and overcoming the risks associated with an increasingly connected world. The event (#IoTSecurity) attracted a variety of speakers from the public and private sectors and brought up some interesting topics, and further questions, on this ever-changing landscape.

Embedded Device and Host Device Life Cycle Disparity

Stephen Pattison from ARM introduced the event and brought up an interesting view of the challenge of keeping IoT devices up to date, whether with firmware, software, or hardware improvements. He observed that there is often a disparity in life span between small, inexpensive sensor, actuator, or controller components and the host device. For example, a car may last 15 years, whilst a tracking component may last 36 months. The rip-and-replace nature of general consumerism has subtle implications for the IoT landscape, where the re-provisioning of new embedded devices, or the improvement of existing devices, is often overlooked.

IoT Security Issues versus Opportunities

Duncan Brown, European Security Research Director at IDC, outlined some of the key problems facing the IoT landscape from a security perspective. The main factors contributing to the security issue can be broken down into the number of physical devices and the amount of data those devices generate. The sheer volume of connected devices opens up a new attack vector, with the network these devices operate on often only as secure as its weakest link. That weakest link is often a low-powered and poorly protected device, which allows a land-and-expand, pivot-style attack that, if successful, can quickly lead to attacks on more powerful computing resources. The second main factor is the yottabytes (a trillion terabytes!) of data IoT devices are capable of collecting. That data needs to be protected in transit and at rest, where transparent access control and sharing protocols need to be applied. These issues are, of course, opening up new sub-industries for which security assessments, device certifications, software audits, and consultancy practices can provide services.

As with many consumer-related interactions, IoT also creates an 'elastic security compromise': of enjoyable user experience, low risk and low cost, you can seemingly only ever have two.

Indirect Attacks

David Rogers, CEO of Copper Horse Solutions, drew on his specialism in mobile security to describe how some of the challenges facing telco operators over the last 10 years can now be applied to the IoT space. With many newly manufactured cars expected to contain SIM technology by 2017, the attack vectors, data collection and data sharing aspects of driving will increase substantially. David made a subtle observation about how IoT attacks could develop.

Whilst many laugh off the prospect of their digital fridge or washing machine being hacked as a gimmick, the net result of a large-scale attack on home automation doesn't necessarily place the individual home owner as the victim. The attacker in this case could well be targeting the insurance market, which would face a deluge of claims if, for example, washing machines suddenly flooded en masse.

Privacy Challenges

Sian John, Security Strategist at Symantec, then focused on the IoT standards and privacy landscape. She argued that IoT is rapidly becoming the 'Internet of Everything', where increased connectivity is being applied to every aspect of everyday life. Whilst this may deliver better services or more convenient experiences, it also opens up new security vulnerabilities and issues around consumer data privacy. Whilst the IoT ecosystem is clearly focused on physical devices, Sian argued that there is in fact a triad of forces at work: people, things and data (albeit I prefer 'people, data and devices...'). Often, the weakest link is the people aspect: users are concerned about personal data privacy, but lack the knowledge or understanding when it comes to terms and conditions, consent questions or device configuration.

Sian also pointed out that many consumers have a deep distrust of both technology vendors and social network operators when it comes to personal data privacy.

Overall, the discussions seemed focused on the need for a strong and varied security ecosystem, one that covers the entire 'chip to cloud' life cycle of IoT data, and in which the identity of both the devices and the people associated with those devices is strongly managed.

By Simon Moffatt