Using OpenAM as a Trusted File Authorization Engine

A common theme in the DevOps world, or any containerization-style infrastructure, is the need to verify which executables (or files in general) can be installed, run, updated or deleted within a particular environment, image or container.  There are numerous ways this could be done.  Consider a use case where EXEs, Android APKs or other third-party compiled files […]

Storing JSON objects in LDAP attributes…

Until recently, the only way to store a JSON object in an LDAP directory server was to store it as a string (either a Directory String, i.e. a sequence of UTF-8 characters, or an Octet String, i.e. a blob of octets).

Now OpenDJ, the open source LDAP directory services in Java, supports two new syntaxes: one for JSON objects and one for JSON queries. Along with the JSON query syntax, a couple of matching rules, which can easily be customised and extended, have been defined.

To use the syntax and matching rules, first extend the LDAP schema with one or more new attributes, and use these attributes in object classes. For example:

dn: cn=schema
objectClass: top
objectClass: ldapSubentry
objectClass: subschema
attributeTypes: ( 1.3.6.1.4.1.36733.2.1.1.999 NAME 'json'
SYNTAX 1.3.6.1.4.1.36733.2.1.3.1 EQUALITY caseIgnoreJsonQueryMatch SINGLE-VALUE )
objectClasses: (1.3.6.1.4.1.36733.2.1.2.999 NAME 'jsonObject'
SUP top MUST (cn $ json ) )

Just copy the LDIF above into config/schema/95-json.ldif, and restart the OpenDJ server. Make sure you use your own OIDs when defining schema elements. The ones above are samples and should not be used in production.

Then, you can add entries in the OpenDJ directory server like this:

$ ldapmodify -a -D "cn=directory manager" -w secret12 -h localhost -p 1389

dn: cn=bjensen,ou=people,dc=example,dc=com
objectClass: top
objectClass: jsonObject
cn: bjensen
json: { "_id":"bjensen", "_rev":"123", "name": { "first": "Babs", "surname": "Jensen" }, "age": 25, "roles": [ "sales", "admin" ] }

dn: cn=scarter,ou=people,dc=example,dc=com
objectClass: top
objectClass: jsonObject
cn: scarter
json: { "_id":"scarter", "_rev":"456", "name": { "first": "Sam", "surname": "Carter" }, "age": 48, "roles": [ "manager", "eng" ] }

The very nice thing about the JSON syntax and matching rules is that OpenDJ understands how the values of the json attribute are structured, making it possible to run specific queries using the JSON query syntax.

Let’s search for all jsonObjects that have a json value with a specific _id:

$ ldapsearch -D "cn=directory manager" -w secret12 -h localhost -p 1389 -b "dc=example,dc=com" -s sub "(json=_id eq 'scarter')"

dn: cn=scarter,ou=people,dc=example,dc=com
objectClass: top
objectClass: jsonObject
json: { "_id":"scarter", "_rev":"456", "name": { "first": "Sam", "surname": "Carter" }, "age": 48, "roles": [ "manager", "eng" ] }
cn: scarter

We can run more complex queries, still using the JSON Query Syntax:

$ ldapsearch -D "cn=directory manager" -w secret12 -h localhost -p 1389 -b "dc=example,dc=com" -s sub "(json=name/first sw 'b' and age lt 30)"

dn: cn=bjensen,ou=people,dc=example,dc=com
objectClass: top
objectClass: jsonObject
json: { "_id":"bjensen", "_rev":"123", "name": { "first": "Babs", "surname": "Jensen" }, "age": 25, "roles": [ "sales", "admin" ] }
cn: bjensen

For a complete description of the query filter expressions, please refer to the ForgeRock Common REST (CREST) Query Filter documentation.

The JSON matching rule supports indexing, which can be enabled using dsconfig against the appropriate attribute index. By default, all JSON fields of the attribute are indexed.
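As a sketch of what enabling that index can look like, the following dsconfig command creates an equality index for the json attribute, assuming the default userRoot backend and the same connection parameters as the examples above (on older OpenDJ releases the subcommand is create-local-db-index instead of create-backend-index):

dsconfig -D "cn=directory manager" -w secret12 -h localhost -p 4444 -n -X \
    create-backend-index \
    --backend-name userRoot \
    --index-name json \
    --set index-type:equality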

In a followup post, I will give more advanced configuration of the JSON Syntax, detail how to customise the matching rule to index only specific JSON fields, and will outline some best practices with the JSON syntax and attributes.


This blog post was first published @ ludopoitou.com, included here with permission.

Protect Bearer Tokens Using Proof of Possession

Bearer tokens are the cash of the digital world.  They need to be protected: whoever gets hold of them can basically use them as if they were you.  Pretty much the same as cash.  The shop owner only really checks that the cash is real; they don’t check that the £5 note you produced from your wallet is actually your £5 note.

This has been an age old issue in web access management technologies, both for stateless and stateful token types, OAuth2 access and refresh tokens, as well as OpenID Connect id tokens.

In the hyper connected Consumer Identity & Access Management (CIAM) and Internet (Identity) of Things worlds, this can become a big problem.

Token misuse, perhaps via MITM (man in the middle) attacks, or even resource server misconfiguration, could result in considerable data compromise.

However, there are some newer standards that look to add some binding ability to the tokens – that is, glue them to a particular user or device based on some simple crypto.

The unstable nightly source and build of OpenAM has added a proof of possession capability to the OAuth2 provider service. (Perhaps the first vendor to do so? Email me if you see other implementations...)

The idea is that the client makes a normal request for an access_token from the authorization service (AS), but also adds another parameter to the request containing some crypto material the client has access to – basically the public key of an asymmetric key pair.

This key, which could be ephemeral for that request, is then baked into the access_token.  If the access_token is a JWT, the JWT contains this public key and is then signed by the authorization service.  If using a stateful access_token, the AS token introspection endpoint can relay the public key back to the resource server at lookup time.

This basically gives the RS the option to run a challenge-response style interaction with the client to see if it is in possession of the corresponding private key – thus proving it is the correct recipient of the originally issued access_token!

 

The basic flow sees the addition of a new parameter to the access_token request to the OpenAM authorization service, under the name of “cnf_key”.  This is a confirmation key that the client is in possession of.  In this example, it would be a base64-encoded JSON Web Key representation of a public key.

So, for example, a POST request to the endpoint ../openam/oauth2/access_token would now take the parameters grant_type, scope and also cnf_key, with an authorization header containing the OAuth2 client id and secret as normal.  A cnf_key could look something like this:

eyJqd2siOnsKICAiYWxnIjogIlJTMjU2IiwKICAiZSI6ICJBUUFCIiwKICAibiI6ICJ2TDM0UXh5bXdId1dEOVpWTDljaU42Yk5ybk91NTI0cjdZMzRvUlJXRkpjWjc3S1dXaHB1Si1iSlZXVVNUd3ZKTGdWTWlDZmFxSTZEWnIwNWQ2VGdONTNfMklVWmtHLXgzNnBFbDZZRWs1d1ZnX1ExelFkeEZHZkRoeFBWajJ3TWNNcjFyR0h1UUFEeC1qV2JHeGRHLTJXMXFsVEdQT253SklqYk9wVm1RYUJjNHhSYndqenNsdG1tcndzMmZNTUtNTDVqbnFwR2RoeWRfdXlFTU0wdHpNTGFNSVN2M2lmeFM2UUw3c2tpZTZ5ajJxamxUTUd3QjA4S29ZUEQ2QlVPaXd6QWxkUmJfM3k4bVA2TXY5cDdvQXBheTZCb25pWU8yaVJySzMxUlRaLVlWUHRleTllSWZ1d0ZFc0RqVzNES0JBS21rMlhGY0NkTHEyU1djVWFOc1EiLAogICJrdHkiOiAiUlNBIiwKICAidXNlIjogInNpZyIsCiAgImtpZCI6ICJzbW9mZi1rZXkiCn19Cg==

Running that through base64 -d on bash, or via an online base64 decoder, shows something like the following: (NB this JWK was created using an online tool for simple testing)

{
  "jwk": {
    "alg": "RS256",
    "e": "AQAB",
    "n": "vL34QxymwHwWD9ZVL9ciN6bNrnOu524r7Y34oRRWFJcZ77KWWhpuJ-bJVWUSTwvJLgVMiCfaqI6DZr05d6TgN53_2IUZkG-x36pEl6YEk5wVg_Q1zQdxFGfDhxPVj2wMcMr1rGHuQADx-jWbGxdG-2W1qlTGPOnwJIjbOpVmQaBc4xRbwjzsltmmrws2fMMKML5jnqpGdhyd_uyEMM0tzMLaMISv3ifxS6QL7skie6yj2qjlTMGwB08KoYPD6BUOiwzAldRb_3y8mP6Mv9p7oApay6BoniYO2iRrK31RTZ-YVPtey9eIfuwFEsDjW3DKBAKmk2XFcCdLq2SWcUaNsQ",
    "kty": "RSA",
    "use": "sig",
    "kid": "smoff-key"
  }
}
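Putting that together, a token request carrying the cnf_key could look something like the following sketch; the client id/secret, hostname and scope are placeholders, the grant type is used purely for illustration, and the cnf_key value is the base64 string shown above (truncated here):

curl --request POST \
  --user "myClientId:myClientSecret" \
  --data "grant_type=client_credentials" \
  --data "scope=profile" \
  --data "cnf_key=eyJqd2siOnsKICAiYWxnIjogIlJTMjU2Ii..." \
  "https://openam.example.com/openam/oauth2/access_token"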

The authorization service should then return the normal access_token payload.  If using stateless OAuth2 access_tokens, the access_token will contain the new embedded cnf_key attribute, containing the originally submitted public key.  The resource server can then leverage the public key to perform an out-of-band challenge-response exchange with the client when the client comes to present the access_token later.

If using the more traditional stateful access_tokens, the RS can call the ../oauth2/introspect endpoint to find the public key.
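As a sketch, that introspection call is a standard OAuth2 token introspection request; the resource server credentials and hostname below are placeholders:

curl --request POST \
  --user "myResourceServerId:myResourceServerSecret" \
  --data "token=<access_token presented by the client>" \
  "https://openam.example.com/openam/oauth2/introspect"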

The powerful use case is to validate that the client submitting the access_token is in fact the same as the original recipient when the access_token was issued.  This can help reduce MITM and other basic token misuse scenarios.

This blog post was first published @ http://www.theidentitycookbook.com/, included here with permission from the author.

OpenAM as an identity provider for Office 365 (WSFed)

This post will run through the steps necessary to configure OpenAM 13.5 to be an identity provider for Office 365 and Azure using WS-Federation.

One of the new features in OpenAM 13.5 is support for WS-Federation Active Requestor Profile.  This will enable OpenAM to support a greater range of Office 365 rich clients and Azure authentication scenarios when acting as an IDP.

Why WS-Federation?

 Office 365 and Azure support WS-Federation, SAML2 and in some cases OpenID Connect for integration with third party identity providers. While SAML2 (specifically SAML2 ECP) can be used for federation, it is only supported in newer Microsoft rich clients. WS-Federation must be used to support slightly older products such as Lync, Outlook 2011 (Mac) and Office 2010 (Windows). In particular, to support Lync and support adding an email account in Outlook 2010/2011, WS-Federation active requestor profile must be enabled.

What Works

Using my completely informal testing procedure on my two laptops, an iPad and a Nexus tablet, I’ve managed to get the following results with the configuration below.

Platform                   Test                                                    Result
MacOS 10.11.5              Office 2011 Office setup from Word                      ok
                           Lync for Mac 2011 SignIn                                ok
                           Outlook 2011 E-Mail account setup & signin              ok
                           Office 2011 Document Connection                         ok
                           Office 2016 SignIn                                      ok
                           Chrome web Sign In                                      ok
                           Safari Web Sign In                                      ok
Windows 8.1                Office 2013 CTR Setup                                   ok
                           Skype For Business Sign In                              ok
                           Outlook 2013 E-Mail account setup                       ok
                           OneDrive SignIn                                         ok
                           IE web Sign In                                          ok
                           FireFox web Sign In                                     ok
                           Chrome web Sign In                                      ok
Apple iPad IOS 9.3.5       Microsoft Word SignIn                                   ok
                           Skype For Business Sign In                              ok
                           Outlook E-Mail account setup                            ok
                           OneDrive for business Sign In                           ok
                           Safari Web Sign In                                      ok
                           Apple Mail, contacts, calendar account setup & signin   ok
Nexus Tablet Android 5.3   Microsoft Word SignIn                                   ok
                           Outlook E-Mail account setup                            ok
                           OneDrive for business Sign In                           ok
                           Chrome web Sign In                                      ok
                           Gmail, calendar, contacts, device management            ok
                           Skype For Business Sign In                              ok

Overview of the steps

I’ll go through everything that is needed to configure OpenAM 13.5 to work with Office 365.
  1. Configuring a DNS zone for Office 365.
  2. Setting up the required DNS records for that zone.
  3. Making sure that the active directory domain is properly configured with the right UPN suffixes for the user accounts.
  4. Setting up Windows Desktop SSO (Integrated Windows Authentication) to work with OpenAM.
  5. Using PowerShell to configure Office365 to use an external identity provider.
  6. Setting up OpenAM with the WS-Federation entities for Office 365.
  7. Setting up an account in Office365 and setting it to “federated” mode.

Configuration Overview

In this configuration, our public DNS zone is test365.forgepoc.com and we’ll have users with the email address something@test365.forgepoc.com. They’ll have accounts in an internal Active Directory domain. In common with best practice for Active Directory, the DNS zone of the AD domain in this exercise uses a subdomain of the public DNS zone, test365corp.test365.forgepoc.com.

You don’t need Active Directory to make this work, you could use any database such as OpenDJ. But as Office 365 is typically used on Windows desktops in an Active Directory domain, I’m using it here.

The reverse proxy will be configured with a public trusted SSL certificate (required for WS-Federation active requestor profile to work with Office 365) and act as an SSL termination point for OpenAM. The host will be called login.test365.forgepoc.com.

You don’t need a proxy in order to use OpenAM as an IDP for Office 365, but it is highly recommended.

Requirements

  1. A Windows Active Directory domain (see my blog post on setting up one of these).
  2. An OpenAM instance on any supported operating system, configured to use Active Directory for authentication and profile attributes (see my blog post here on setting up an Active Directory dataStore). OpenAM should be set up with SSL internally.
  3. A reverse proxy in between OpenAM and the internet, capable of supporting SSL termination (I’m using NginX).
  4. A Windows Active Directory domain configured with network connectivity to the OpenAM instance.
  5. A business Office 365 subscription capable of federating with third-party identity providers. In this example, I used a Business Premium subscription.
  6. An SSL certificate issued by a publicly trusted certificate authority such as GoDaddy.
  7. A public DNS zone which you can configure with the DNS records required for use with Office 365.
  8. Windows machines and devices for testing.
Note: Currently, WSFed Active Requestor profile is only supported in the top level realm in OpenAM; there is an open issue for this here. Alternatively, you can set up OpenAM to federate with Office 365 using SAML2 ECP, but this only works with newer MS rich clients.

Configuring a DNS zone for Office 365

Microsoft makes this extremely easy using the admin section of the Office 365 dashboard. Here I’m choosing to set this up myself because I already own the domain forgepoc.com and I have other stuff on it. However, it is much easier to allow Office 365 to act as your DNS service; doing so automatically configures the required DNS records. To do it the manual way, log in to your Office 365 subscription and select the admin center:

From the left hand menu select settings > Domains.
Click “add a domain” and enter your domain name.

In common with obtaining SSL certificates, you have to verify ownership of your DNS domain by adding a specific TXT record to it.

Now add the required DNS records. It’s important that these are accurate, but luckily Microsoft provide a test tool to verify they are set correctly. Here is a screenshot of the records for my DNS Zone test365.forgepoc.com from my DNS provider:

CNAME Records
Host Name               Points To    
lyncdiscover            webdir.online.lync.com    
msoid                   clientconfig.microsoftonline-p.net    
sip                     sipdir.online.lync.com    
enterpriseregistration  enterpriseregistration.windows.net    
enterpriseenrollment    enterpriseenrollment.manage.microsoft.com    
autodiscover            autodiscover.outlook.com    

MX Records                
Host Name               Points To                                           Priority    
@                       test365-forgepoc-com.mail.protection.outlook.com    0    

SRV Records
Host Name               Points To                                           Port    Weight    Priority    
_sip._tls               sipdir.online.lync.com                              443     1        100    
_sipfederationtls._tcp  sipfed.online.lync.com                              5061    1        100    

TXT Records
Host Name               Value    
@                       v=spf1 include:spf.protection.outlook.com -all
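Once the records are in place, you can sanity-check them from any machine with dig (or nslookup) before running Microsoft’s verification tool, for example:

dig +short CNAME lyncdiscover.test365.forgepoc.com
dig +short MX test365.forgepoc.com
dig +short TXT test365.forgepoc.com
dig +short SRV _sip._tls.test365.forgepoc.com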

Configuring your proxy

A proxy is not required to get Office 365 up and running with OpenAM, but it is recommended. You’ll need some sort of proxy configuration if you plan to use multiple OpenAM servers in an HA deployment behind a load balancer.

For WSFED to work correctly behind a proxy, we need to set the host header on the proxy. Here is my NginX site configuration:

 

location / {
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    proxy_pass https://login.test365corp.test365.forgepoc.com:8443;
    proxy_redirect default;
}
In the Apache web server, you can achieve the same thing by setting the ProxyPass, ProxyPassReverse and ProxyPreserveHost directives.
<VirtualHost *:443>
    ServerName login.test365.forgepoc.com:443
    ProxyRequests off
    ProxyPass / https://login.test365corp.test365.forgepoc.com:8443/
    ProxyPassReverse / https://login.test365corp.test365.forgepoc.com:8443/
    ProxyPreserveHost On

    ...

</VirtualHost>

Configuring Active Directory UPN suffixes – optional, but recommended

If you have an existing Active Directory domain then it is likely that this will be configured already and you can skip this section. However, if you’ve set up an AD domain to do some basic tests and integration work, then read on.

Let’s say I make a user in Active Directory for Charlie Brown. I use the wizard in Active Directory Users and Computers and give him first name Charlie, last name Brown, username charlie.brown. Charlie will then be able to enter any of the following into the username prompt when logging on to a domain-joined machine:

  • samAccountName: charlie.brown
  • cn: charlie brown
  • dn: CN=charlie brown,OU=user accounts,OU=test365corp,DC=test365corp,DC=test365,DC=forgepoc,DC=com
  • UPN: charlie.brown@test365corp.test365.forgepoc.com

Active Directory has a concept of user principal names (UPN) – a means of allowing a user account to be referenced by an email address style username that can have a different domain part to the DNS domain of the active directory domain. A UPN consists of the samAccountName and a DNS domain specified by an administrator.

Now let’s say Charlie wants to log on to his laptop with his email address, charlie.brown@test365.forgepoc.com. To do that, the Windows sysadmin needs to define an additional UPN suffix in the domain.

In Office 365, if users authenticate to services directly against Office 365 (WSFed active profile), they have to authenticate with their UPN. Therefore it makes sense to set the user’s Active Directory UPN to be the same as their Office 365 UPN, which would normally be their email address. That way, users only need to remember one username.

In this example, we’ll configure a UPN suffix for the external DNS domain test365.forgepoc.com. Open Active Directory Domains and Trusts (domain.msc) and right click on the root node in the left pane:

Add the UPN suffix:
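If you prefer the command line to the GUI, the same thing can be done with the ActiveDirectory PowerShell module; this is only a sketch using the forest and suffix from this example environment:

# Add the additional UPN suffix to the forest, then assign it to a user.
Import-Module ActiveDirectory
Get-ADForest | Set-ADForest -UPNSuffixes @{Add="test365.forgepoc.com"}
Set-ADUser charlie.brown -UserPrincipalName "charlie.brown@test365.forgepoc.com"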

Quickstart OpenAM configuration using ssoadm batch commands

If you know OpenAM well, here are some ssoadm batch commands that will get you setup quickly. Use the metadata files from below and skip the remaining sections on configuring OpenAM.
create-datastore -e / -m ActiveDirectory -t LDAPv3ForAD -a "sun-idrepo-ldapv3-config-ldap-server=svr1.test365corp.test365.forgepoc.com:636" "sun-idrepo-ldapv3-config-authid=CN=ldapUser,CN=Users,DC=test365corp,DC=test365,DC=forgepoc,DC=com" "sun-idrepo-ldapv3-config-authpw=SOMEPASSWORD" "sun-idrepo-ldapv3-config-connection-mode=LDAPS" "sun-idrepo-ldapv3-config-organization_name=DC=test365corp,DC=test365,DC=forgepoc,DC=com" "sun-idrepo-ldapv3-config-people-container-name=ou" "sun-idrepo-ldapv3-config-people-container-value=test365corp" "sun-idrepo-ldapv3-config-psearchbase=DC=test365corp,DC=test365,DC=forgepoc,DC=com"
create-auth-instance -e / -t AD -m ActiveDirectoryModule
update-auth-instance -e / -m ActiveDirectoryModule -a "iplanet-am-auth-ldap-bind-dn=cn=ldapAuth,cn=users,DC=test365corp,DC=test365,DC=forgepoc,DC=com" "iplanet-am-auth-ldap-bind-passwd=SOMEPASSSWORD" "iplanet-am-auth-ldap-server=svr1.test365corp.test365.forgepoc.com:636" "openam-auth-ldap-connection-mode=LDAPS" "iplanet-am-auth-ldap-user-naming-attribute=cn" "iplanet-am-auth-ldap-base-dn=DC=test365corp,DC=test365,DC=forgepoc,DC=com" "iplanet-am-auth-ldap-user-search-attributes=mail" "iplanet-am-auth-ldap-user-search-attributes=cn" "iplanet-am-auth-ldap-return-user-dn=true"
create-auth-cfg -e / -m employeeChain
update-auth-cfg-entr -e / -m employeeChain -a "ActiveDirectoryModule|REQUIRED"
set-realm-svc-attrs -e / -s iPlanetAMAuthService -a "iplanet-am-auth-org-config=employeeChain"
import-entity -e / -c wsfed -m /home/centos/idpMeta.xml -x /home/centos/idpMetaExtended.xml
import-entity -e / -c wsfed -m /home/centos/spMeta.xml -x /home/centos/spMetaExtended.xml
create-site -s site1 -i https://login.test365.forgepoc.com:443/openam
add-site-members -s site1 -e https://login.test365corp.test365.forgepoc.com:8443/openam

Configure OpenAM to work behind a proxy

Use the deployment menu in OpenAM 13.5 to add a site to the deployment. Make the site URL the URL of your proxy. Once done, add your OpenAM server(s) to that site.

Configuring OpenAM authentication services

I’m assuming here that you’ve already got an Active Directory dataStore setup in the top level realm.
Head over to Authentication and create an Active Directory authentication module. Make sure to configure this as described here; it is required for WS-Federation active requestor profile to work later on.
Set up the module as described below. Here, I’ve allowed users to log on with mail as well as their usual login method, assuming that the mail attribute in the user account matches the UPN of the account set up in Office 365.

However, if you have UPN suffixes set up in your AD domain (as described in my instructions above), you’ll probably want to use userPrincipalName instead of mail:

Add this module to a chain and set it to be the default organisation login chain for your realm:

Create your WS-Federation hosted IDP and Remote SP

Manually creating WSFED entities in OpenAM is a bit tedious, so I’ve provided some ready-made entities for you to use here. I am assuming these will be added to the top level realm – you’ll need to adjust your endpoints to match your realm if you’ve used something different. I am also assuming that we’ll use the default OpenAM test certificate for token signing; you should use something different in production.

Go to the Federation section in OpenAM and create a circle of trust called cot:

Import these entities using the import entity button, making sure to add them to the correct realm:

You should now have a list of entities that looks something like this:

Set up your Azure tenant using PowerShell

If you haven’t done it already, install the Azure Powershell cmdlets on a Windows machine that you have access to.
Authenticate using Connect-MsolService:

Then use the Set-MsolDomainAuthentication cmdlet to setup your domain and make sure to set the signing certificate to the cert you are signing your assertions with:

$BrandName   = "ForgeRock test365"
$dom         = "test365.forgepoc.com"

$IssuerUri   = "urn:uri:test365forgepocemployeestlr"
$PassiveUri  = "https://login.test365.forgepoc.com:443/openam/WSFederationServlet/metaAlias/wsidp"
$ActiveUri   = "https://login.test365.forgepoc.com:443/openam/WSFederationServlet/sts/metaAlias/wsidp"
$MexUri      = "https://login.test365.forgepoc.com:443/openam/WSFederationServlet/ws-trust/mex/metaAlias/wsidp"

$Protocol    = "WsFed"

$SigningCert = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

Set-MsolDomainAuthentication `
-Authentication Managed `
-DomainName $dom

Set-MsolDomainAuthentication `
-Authentication Federated `
-DomainName $dom `
-FederationBrandName $BrandName `
-PassiveLogOnUri $PassiveUri `
-ActiveLogOnUri $ActiveUri `
-MetadataExchangeUri $MexUri `
-SigningCertificate $SigningCert `
-IssuerUri $IssuerUri `
-LogOffUri $PassiveUri `
-PreferredAuthenticationProtocol $Protocol

Enabling “modern authentication” on the Azure Exchange and Skype services

Until recently, I had all Windows, MacOS, iOS and Android rich clients working, apart from one: Skype for Business for Android. I was about to start analysing the network traffic when my colleague Peter Major directed me to a community forum question from the user Steven Van Geel.

Even though Steven’s question was related to using Android Skype for Business with OpenAM as an Office 365 IDP using SAML2 ECP, the fix he describes also corrects the behaviour of the Android app with a WsFed Active profile IDP. Turning on Microsoft “modern authentication” on the Skype for Business online tenant allows the Skype client to authenticate.

I have not had a chance to look at why this is and why it only affects the Android Skype client; I will update this blog post when I find out. It is quite possible that this may be required for other clients in future as Microsoft gradually moves services onto their “Modern Authentication” standard. Note that modern authentication can be turned on for Skype and for Exchange (Outlook).

For now, here are some brief instructions for enabling this. These are pretty similar to the steps for setting up the Azure online domain. First you need to install the Skype for Business PowerShell management cmdlets.

Then run the following to authenticate to your Skype Online (aka Lync Online) tenant:

 

$credential = Get-Credential
$session = New-CsOnlineSession -Credential $credential
Import-PSSession $session
Get-Module
Then turn on Modern Authentication as described in this article:
Set-CsOAuthConfiguration -ClientAdalAuthOverride Allowed

Setting up your first user

All that is left to do now is to configure a user to sign on to Office 365. Office 365 requires that accounts which federate with Office 365 also have an account entry set up in Office 365 itself. Certain properties have to be set on that account, such as the UPN and which licenses are assigned to the user. Microsoft provide the tools DirSync and Azure AD Connect, which automatically synchronise on-premises Active Directory with Office 365. These are cut-down versions of their identity management solution, “Forefront Identity Manager” (FIM – now EOL). ForgeRock have an identity management product, OpenIDM, which has a powerful PowerShell connector. This can be configured to automatically provision accounts to Office 365 (see example scripts here and the IDM trunk docs here), but this is outside the scope of this post.

Here, I’ll show how to manually set up an account.

Assuming you have the following user in Active Directory:

The following PowerShell will set up that user in office 365 by copying the attributes from the same user in Active Directory.
$user = Get-ADUser charlie.brown
New-MsolUser `
-DisplayName $user.Name `
-FirstName $user.GivenName `
-ImmutableId ([System.Convert]::ToBase64String(($user.ObjectGUID).ToByteArray())) `
-LastName $user.Surname `
-LicenseAssignment (Get-MsolAccountSku | select -ExpandProperty AccountSkuId) `
-UsageLocation GB `
-UserPrincipalName $user.UserPrincipalName

So how does this PowerShell work? We get the charlie.brown user from AD then pass the properties from it to the New-MsolUser method. We use the Get-MsolAccountSku method to find the SKU of the license we need to assign to the user. We also convert the Active Directory ObjectGUID property into the base64 format expected in Office 365.

Note: If the UPN of your user doesn’t match the name that they may fill in on something like the email account setup in Outlook, then you may want to change the UserPrincipalName to use the mail attribute instead.

 

Conclusion

You should now be able to log on to Office 365 using all of the methods described above. There are some issues that you may encounter with this approach; you can track our progress on them here.

What next? You may want to look at integrating this with Integrated Windows Authentication (aka Kerberos, Windows Desktop SSO). I’ve done a blog post on that.

In a future blog post, I’ll look at doing the same thing with SAML2 ECP and the pros/cons of using that instead.

This blog post was first published @ http://authntoz.blogspot.no/, included here with permission from the author.

Debugging OpenAM and OpenIDM

My background is in what I call the world of legacy identity, so it took me a little while to get used to the world of ForgeRock, REST APIs and the like.


If you come from that world, you may find that debugging implementation issues with ForgeRock is a little different, so I wanted to write up a short guide to share what I have learned.

Have you checked the logs?

Some things never change and your first port of call to debug any issues should be the log files.

OpenAM

There are actually two places to check when debugging OpenAM:


Note: <openam_install_dir> is the directory in which OpenAM was installed, not the web container.


<openam_install_dir>openam/debug



This is where you find the debug logs. Generally very detailed logs for all the different components of OpenAM.
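For example, to follow authentication-related debug output while reproducing an issue (the exact file names under the debug directory vary between OpenAM versions; Authentication is one of the standard ones):

tail -f <openam_install_dir>openam/debug/Authentication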


<openam_install_dir>openam/log


This is where you find access and error logs. Detailing access requests and the result of those requests.


For example, if we look at access.csv:



You can see the result of my last login as amadmin.

Configuring log levels

If you don’t see anything, you may need to change the logging configuration.


Navigate to: http://localhost.localdomain.com:18080/openam/Debug.jsp




This interface is fairly straightforward.



In the above example I have set the policy component to Message level.


Just hit confirm and the change will be made immediately without a restart required.

OpenIDM

Again, there are actually two places to check when debugging OpenIDM:


<openidm_install_dir>openidm/logs


The main OpenIDM log files can be found here. OpenIDM uses JDK logging, and the configuration file can be found here if you need to make changes to logging levels:


<openidm_install_dir>openidm/conf/logging.properties


There is a helpful guide on how to do that here: http://www.javapractices.com/topic/TopicAction.do?Id=143
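As a sketch, raising the level for specific OpenIDM packages in conf/logging.properties uses standard JDK logging syntax; the package names below are illustrative, so pick the component you are actually interested in:

# Default level for everything
.level = INFO
# More detail for reconciliation and synchronisation while debugging
org.forgerock.openidm.recon.level = FINER
org.forgerock.openidm.sync.level = FINER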


So far that is all fairly standard; however, if you do not find anything in the logs, then you may want to examine the REST services.

Debugging REST services

As I have said a few times on this blog, the ForgeRock platform is completely underpinned by REST services.


The User Interfaces for OpenAM and OpenIDM both make extensive use of REST service calls in their functioning.


When debugging issues, especially anything that results in an error visible in the UI, you should take a look at the requests and responses.


I use Firefox Developer Tools to do this, but I know there are equivalents for other browsers.


It’s as simple as turning on Developer Tools and examining the network requests whilst using OpenAM and OpenIDM.



So let’s try making a new authentication chain in OpenAM.



What we need to find is the POST request for the creation of the chain. If you browse up and down the list, you should find it pretty quickly. On the right you can see the headers in the request, the parameters and, importantly, the response code:




And the response:


So let’s see what that looks like when we have an error:



Generally you will see something like the above, and if you check the actual response, you should see a more detailed message that can help you debug the issue.
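If you want to replay a call outside the browser, the same REST endpoints can be exercised with curl. For example, a sketch of authenticating to OpenAM over REST, using the hostname from the examples above and placeholder credentials:

curl --request POST \
  --header "Content-Type: application/json" \
  --header "X-OpenAM-Username: amadmin" \
  --header "X-OpenAM-Password: password" \
  "http://localhost.localdomain.com:18080/openam/json/authenticate"

The returned tokenId can then be passed on subsequent requests, mirroring what the UI does under the hood.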

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

OpenDJ: Monitoring Unindexed Searches…

OpenDJ, the open source LDAP directory services, makes use of indexes to optimise search queries. When a search query doesn’t match any index, the server will cursor through the whole database to return the entries, if any, that match the search filter. These unindexed queries can require a lot of resources: I/Os, CPU… In order to reduce the resource consumption, OpenDJ rejects unindexed queries by default, except for the Root DNs (i.e. for cn=Directory Manager).

In previous articles, I’ve talked about privileges for administrative accounts, and also about analyzing search filters and indexes.

Today, I’m going to show you how to monitor for unindexed searches by keeping a dedicated log file, using the traditional access logger and filtering criteria.

First, we’re going to create a new access logger named “Searches” that will write its messages under “logs/search”.

dsconfig -D "cn=directory manager" -w secret12 -h localhost -p 4444 -n -X \
    create-log-publisher \
    --set enabled:true \
    --set log-file:logs/search \
    --set filtering-policy:inclusive \
    --set log-format:combined \
    --type file-based-access \
    --publisher-name Searches

Then we define filtering criteria that restrict what is logged to that file: let’s log only search operations that are marked as unindexed and take more than 5000 milliseconds.

dsconfig -D "cn=directory manager" -w secret12 -h localhost -p 4444 -n -X \
    create-access-log-filtering-criteria \
    --publisher-name Searches \
    --set log-record-type:search \
    --set search-response-is-indexed:false \
    --set response-etime-greater-than:5000 \
    --type generic \
    --criteria-name "Expensive Searches"

Voila! Now, whenever a search request is unindexed and takes more than 5 seconds, the server will log the request to logs/search (on a single line) as below:

$ tail logs/search
[12/Sep/2016:14:25:31 +0200] SEARCH conn=10 op=1 msgID=2 base="dc=example,
dc=com" scope=sub filter="(objectclass=*)" attrs="+,*" result=0 nentries=
10003 unindexed etime=6542

This file can be monitored and used to trigger alerts to administrators, or simply used to collect and analyse the filters that result in unindexed requests, in order to better tune the OpenDJ indexes.
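For example, a quick way to summarise which filters are responsible for the unindexed searches recorded in the log file created above (a simple shell sketch):

grep -o 'filter="[^"]*"' logs/search | sort | uniq -c | sort -rn | head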

Note that sometimes it is a good option to leave some requests unindexed (the cost of indexing them outweighs the benefits of the index): for example, when the requests are infrequent, run by specific administrators for reporting reasons, and the results are expected to contain a lot of entries. If so, a best practice is to have a dedicated replica for administration and run these expensive requests there. Also, it is better if the client applications are tuned to expect these requests to take a long time.


This blog post was first published @ ludopoitou.com, included here with permission.

A Beginners Guide to OpenIDM – Part 4 – Mappings

Overview

Last time we configured a connector to process data from a data source. In this blog we start to bring everything together and will define a mapping that creates objects using the connector.

Mappings define the relationships at the attribute level between data in connector data sources and objects. They also define rules to create attributes where they do not already exist and transform attributes if required.

Mappings

Architecture

In OpenIDM mappings consist of a number of logical components:
  • Properties: Defines attribute mappings between source attributes and target attributes, and scripts to create and transform attribute values. Also defines a link qualifier, required to link a single record in the source to multiple different records in the target, which might be needed in some situations, e.g. event-based HR data feeds where the same user record may appear multiple times (see the configuration sketch after this list).
  • Association: Queries, record validation scripts and association rules that work together to define how a source account is linked to a target account, i.e. how OpenIDM knows that Anne’s account in the target system maps to Anne’s account in the source system, and not Bob’s account.
  • Behaviors: Defines the rules that tell OpenIDM what to do in different situations, e.g. if an account doesn’t exist in the target system, you might configure a behaviour to create it. This is incredibly powerful and we will explore it further.
  • Scheduling: Schedules for reconciliation operations, e.g. run the mapping every hour, day, week etc. This is not the same as livesync (which we touched on last time), which syncs changes between systems as they occur.
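For reference, everything configured through the Admin UI in this post ends up in conf/sync.json. The following is only a rough sketch of that structure for the source and target used in this series – the exact object-type path, attribute names and policies depend on your connector configuration, so treat it as illustrative rather than a drop-in file:

{
  "mappings" : [ {
      "name"       : "systemUserLoadCSVAccount_managedUser",
      "source"     : "system/UserLoadCSV/account",
      "target"     : "managed/user",
      "properties" : [
          { "source" : "__NAME__",  "target" : "userName" },
          { "source" : "firstname", "target" : "givenName" }
      ],
      "policies"   : [
          { "situation" : "ABSENT", "action" : "CREATE" }
      ]
  } ]
}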

Mappings Example

Creating the Mapping

As always with this blog, the best way to learn is to work through an example. We will build upon the existing work done in blogs 1-3 so you will need to complete those first.
Log in as administrator (openidm-admin), navigate to Configure then Mappings:
Then New Mapping:
You should see the following:
A mapping requires a source and a target and data will flow from the source to the target.
You can see there are a number of selectable options on this screen that you can assign as either a source or target:
  • Connectors: Any connectors you have configured, e.g. flat file, database, LDAP.
  • Managed Objects: The managed objects defined in OpenIDM.
So for this example, we want to use the CSV connector configured previously to create new managed users in OpenIDM – effectively using the CSV data to create user identities.
Click Add to Mapping on the UserLoadCSV connector, this is going to be our source.
Similarly, now click Add to Mapping on the User managed object, this is our target.
You should see the following:
You have probably noticed that the CSV connector has a selectable drop-down. There are some connectors where you might be interested in mapping more than one type of object, e.g. accounts and groups in the case of LDAP. We have only configured the default _ACCOUNT_ object in this example.
 
Click Create Mapping.
 
 
Mapping Name: You can enter a more friendly mapping name if you like.
Linked Mapping: If you are creating two mappings in either direction, you should use this to link your second mapping to your first.
Click OK and the mapping should be created:

Configuring the Mapping

We have a mapping, now we need to configure it.
Navigate to the Properties tab and examine the Attribute Grid.
 
 
Start by clicking Add Missing Required Properties, this will automatically add those target properties which the connector definition or managed object specify must be populated. Make sure you press Save Properties after doing this.
You should see the required attributes for the managed user schema:
Now we need to configure the source attributes for each of the following target attributes.
Select the userName row:
You should see the following popup:
There are a few tabs here worth spending a moment on:
  • Property List: The source property to map from.
  • Transformation Script: Inline or file-based scripts to transform attributes; a common use of this is to generate LDAP distinguished names.
  • Conditional Updates: Advanced logic that allows you to define rules for the conditional update of certain properties on sync. Useful if you want to ensure that a particular property is never overwritten.
  • Default Values: Simply sets a fixed default value for the target, e.g. “true”, “active”, whatever.
Navigate to the Property List tab and select __NAME__. If you recall this corresponds to the connector we defined in the last blog. Now press Update.
Do the same with the rest of the mappings as below, then press Save Properties. You should end up with something like the below.
We can quickly check whether this makes sense before we synchronize: look at Sample source, set the Link Qualifier to default and try entering one of the user IDs from the CSV:
You should see a preview of the mapping result for whichever user you entered.
Next, we need to configure an Association Rule. Navigate to Association then look at Association Rules. Select Correlation Queries
Then Add Correlation Query:
Set the Link Qualifier to default, and look at Expression Builder.
 
 
Press + and select userName:
 
 
Then press Submit, and Save.
Select Save and Reconcile.
You should see the following:
Expand out In Progress, to see the results of reconciliation:
You can see there were 100 Absent results. You can examine the tooltip to see what this means, but briefly it means that the source object has no matching target, i.e. there is no managed user record for any of the 100 lines in the CSV file. Which makes sense, as this is the first time we have run the reconciliation.
You might have expected that those managed users would now have been created. Navigate to Manage, then User:
You should see No Data:
Which might not be what we expected. However, let’s navigate back to Configure, Mappings and look at the Behaviors tab of our mapping again. Look at Current Policy.
By default, new mappings are set up only to read data and not actually perform any operations, such as user creates.
Look at the policy table:
Now change the Current Policy to Default Actions, and note how the table changes:
Note that the actions have all changed, examine the tooltips to understand what is going on here. I plan to revisit Policies in a later blog because they are incredibly powerful.
For now note that Absent has now changed to Create. So for our situation earlier, we would now expect those 100 Absent users to result in Create operations.
Make sure you press Save before moving on.
Finally, press Reconcile Now to run another reconciliation:
You should see the same results as before, but let’s check out our users again under Manage, User.
 
If we have done everything correctly, you should now see all of the users from the CSV file created as OpenIDM managed users.
That’s all for this blog. Thanks.
 

This blog post was first published @ http://identity-implementation.blogspot.no/, included here with permission from the author.

OpenAM Windows Desktop SSO deep dive – part 1

This post will walk you through the necessary steps to configure OpenAM to authenticate users automatically using Windows Desktop SSO (Integrated Windows Authentication). The OpenAM configuration is quite straightforward, but there are a number of things that need to be in place on the Windows side for everything to work correctly.

Overview of Steps

  1. Set up the required forward and reverse lookup DNS records for OpenAM.
  2. Make a Kerberos principal and keytab file in Active Directory using the ktpass command.
  3. Configure OpenAM for Windows Desktop SSO.
  4. Configure your web browser.
  5. Make sure profiles can be loaded from the Active Directory dataStore.
  6. Configure OpenAM to fallback to a username and password when Windows Desktop SSO fails.

Prerequisites

This guide assumes the following prerequisites are in place:

  1. An OpenAM deployment (I’m using OpenAM 13.5 here) set up on the hostname openam.windom.example.com.
  2. A Windows Active Directory domain controller with Active Directory certificate services installed (this automatically enables LDAPS on port 636).
  3. This server has the hostname: svr1.windom.example.com, using the windows domain windom.example.com.
  4. OpenAM setup with an Active Directory dataStore, with users being able to log on using a username and password.
  5. A test Windows Desktop or a separate session on your test domain controller.

I’ve previously written blog posts on setting up a test Active Directory domain and setting up an Active Directory dataStore. I’ve used these as a starting point for the steps in this post.

Setting up DNS

  1. A user may access OpenAM via a given hostname, which would often point to a load balancer.
  2. The user must be able to resolve the OpenAM IP address from a DNS forward lookup on the hostname.
  3. The hostname returned from a reverse DNS lookup on the OpenAM IP must match the hostname used in the forward lookup.
  4. A keytab file should be generated with a Kerberos principal name, which consists of a hostname and a realm name. The hostname in the principal name must match the hostname by which the user can access OpenAM, even if that is the external load balancer hostname.
  5. The principal name configured in the OpenAM Windows Desktop SSO module must match the principal name of your key in your keytab file.
My steps below will show how to configure the required DNS records in an Active Directory DNS service. If this is anything more than a test Active Directory instance, you’ll want your friendly Windows SysAdmin to carry out these steps for you.
For Windows Desktop SSO to work, the URL accessed by the user must resolve on a forward AND a reverse DNS lookup with the same IP address and host. This is part of the Kerberos spec. If you don’t have a reverse lookup zone configured on a DNS server in your Active Directory domain (one isn’t set up by default), you’ll need to set one up in DNS Management (dnsmgmt.msc):

Now it’s time to specify the subnet for your reverse lookup zone. Note that as part of the DNS standard, only class A, B and C blocks are supported.

The subnet here should contain what is seen to be the IP address of OpenAM from the user’s point of view. This is probably the subnet containing your load balancer address.

Now create a reverse DNS record for OpenAM. If OpenAM is behind a load balancer, use the IP address of the load balancer. If you don’t already have a forward lookup record (an ‘A’ record), you can create both the forward and reverse lookup records at the same time when creating an ‘A’ record by selecting “create associated pointer record”:

If you already have an ‘A’ record for OpenAM, you can assign a PTR record (reverse lookup record) to OpenAM by right clicking on your reverse lookup zone and selecting “New Pointer (PTR)…”

Now that everything is in place, you should be able to perform a forward and reverse DNS lookup on OpenAM and get matching results, like in the following screenshot:
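A quick way to confirm this from a client machine is to run both lookups and check that they agree (the second command takes whatever IP address the first one returns):

nslookup openam.windom.example.com
nslookup <IP address returned by the first lookup>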

Create an account in active directory for your Kerberos principal

Create an account in Active Directory Users and Computers (dsa.msc) to use as your Kerberos principal. Don’t worry about what the password is, you are about to change it in the next step.

Note that I have set the password never expires flag here. What option you choose is up to you (and your security policy) but remember that when the account password expires, the Kerberos key in your keytab file will need reissuing.

OpenAM uses the GSS API for Kerberos, which supports the full 256-bit strength of Kerberos encryption, as long as the Java unlimited strength cryptography policy is installed. Select the option to ensure the account supports 256-bit Kerberos encryption in the account tab:

Creating a KeyTab file

 

Next you will create a Kerberos keytab file. KeyTab is short for key table. A keytab file is a table of keys that map to Kerberos principals. The keys can be used to authenticate to a Kerberos realm. Keytabs are designed to allow services, applications and scripts to authenticate to a Kerberos realm without human interaction using the key(s) stored in the keytab file. This is part of the Kerberos standard and is not unique to Active Directory.

 

OpenAM uses the keytab to authenticate to the Active Directory Kerberos realm. Once authenticated, OpenAM verifies the owner of the Kerberos ticket which is supplied in the SPNEGO process from the user’s web browser. I will cover this in more detail in part two of this blog post, where I will examine the Kerberos communication with a network analyser.

 

In Active Directory, you generate keytab files using the ktpass command. This creates a key for an account in AD (a principal) which is derived from the user’s password. If the password on the account changes, the key becomes invalid.

 

The ktpass command also writes a number of attributes to the user account and it can manage transition between two keys. I will cover how this works in the second part of this blog post.

The following is an example of running the ktpass command on an Active Directory domain controller. The +rndPass and -maxPass options set a random 256-character password on the account, which is then used to derive the key for the Kerberos principal.

ktpass -out fileName.keytab -princ HTTP/hostname.of.openam@KERBEROS.REALM.NAME -pass +rndPass -maxPass 256 -mapuser <userAccount> -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -kvno 0

As it is case sensitive, make sure that the principal name is in exactly the following format:

HTTP/openam.windom.example.com@MYDOMAIN.EXAMPLE.COM

…where openam.windom.example.com is the host name of OpenAM. If OpenAM is behind a load balancer, make this the hostname of the load balancer. MYDOMAIN.EXAMPLE.COM is the name of the Kerberos realm – the FQDN of the Active Directory domain.

In the domain we’ve been building so far, the command will look like the below:
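As a sketch, assuming the service account created above is called openamKerberos and we want the key written to openam.keytab, the command for this domain would be along these lines:

ktpass -out openam.keytab -princ HTTP/openam.windom.example.com@WINDOM.EXAMPLE.COM -pass +rndPass -maxPass 256 -mapuser openamKerberos -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -kvno 0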

You should now have a keytab file:

This file contains sensitive authentication keys. You should store it in a protected file system location, in a similar way to protecting private SSL keys.
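If you have MIT Kerberos client tools available, you can optionally sanity-check the keytab before copying it to OpenAM; this sketch assumes your krb5.conf knows about the WINDOM.EXAMPLE.COM realm:

# List the keys stored in the keytab
klist -k -t openam.keytab
# Try to obtain a ticket using the keytab rather than a password
kinit -k -t openam.keytab HTTP/openam.windom.example.com@WINDOM.EXAMPLE.COM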

Setting up Windows Desktop SSO in OpenAM

 

Copy your keytab file to a secured location on your OpenAM server(s) and set up a Windows Desktop SSO module. If you have based your configuration on my blog post for setting up an Active Directory DataStore, then you’ll want to take the following steps in your employees realm:

 

 

 

Again, make sure the principal name is typed correctly, matching the case below.

 

Once that is complete, you can make a chain in your realm to begin testing the new authentication module.

For now, I’ve made a chain that only contains the Windows Desktop SSO module in order to observe any error messages that may occur.

Configuring the web browser

On a Windows Desktop, Internet Explorer, Edge and Google Chrome take a number of browser security settings from “Internet Options” on the Windows control panel. By default, these settings cause a user to re-enter their Windows domain credentials before the browser will submit a Kerberos ticket as part of SPNEGO (aka Integrated Windows Authentication).

 

 

Authentication should still succeed, but it’s not a very user friendly experience.

 

The security zones’ default settings permit automatic submission of a Kerberos ticket to sites in the Intranet Zone. If you add OpenAM’s URL to this zone, the user will automatically be signed on with their Windows credentials.

 

Again, if you are setting this up on anything more than a test Active Directory instance, you will want your friendly Windows SysAdmin to change your internet options, as these settings will most likely be controlled by group policy.

 

You can find Internet Options on the Windows control panel, but my favourite way to open it (because this works on all versions of Windows since 95) is to run inetcpl.cpl from the command line or the Windows run box (WinKey+R):
 

 

This is likely the only configuration required in the browser settings. However, if you are having problems with IE or Edge specifically, check that “Enable Windows Integrated Authentication” is selected on the advanced tab:

 

 

Also check that the security settings for the intranet zone are set to “automatic logon only in intranet zone”.

 

Now try authenticating using the “wdsso” chain:

http://openam.windom.example.com:8000/am1350/XUI/#login/employees&service=wdsso

Make sure your dataStore can load a profile after authentication

 

You may find that Windows Desktop SSO appears to work correctly now. However, this may apply solely to accounts where the CN matches the Windows account name, such as the domain administrator account.

 

Once OpenAM has authenticated a user using one or more authentication modules, it is normally configured to retrieve a profile from a dataStore – in your case this is Active Directory. In order to retrieve the profile, the authentication modules obtain a name that will be used to lookup the user. The Windows Desktop SSO module provides the windows account name (without the domain component) for this, but the Active Directory type of dataStore is by default configured to search for a user based on the CN, not the “sAMAccountName” attribute, which contains the Windows account name. If the CN does not match the sAMAccountName, as is usually the case in Active Directory, then the profile will not load after authentication. When that happens, you will see the error, “User Requires Profile to Login”.

 

 

If you have followed my previous blog posts on setting up an Active Directory domain and setting up an Active Directory dataStore in OpenAM, this error will occur.  To rectify it, you need to change your dataStore to retrieve the profile using the sAMAccountName attribute:

 

 

If you also wish to use the dataStore authentication module and allow users to login with that username, you can change the “authentication naming attribute” in your dataStore.

 

At this point, you should now be able to authenticate using Windows Desktop SSO. But what happens when the user isn’t using a domain-joined computer?

Configuring fallback to the dataStore module

Now that your Windows Desktop SSO module is working and the user profile is retrieved, you will configure an extra module in the chain to allow a username and password prompt to be supplied if the user’s browser is not configured for Kerberos authentication.

 

Below I’ve set the “wdsso” chain to attempt Kerberos authentication using the Windows Desktop SSO module. Because it is set to “sufficient”, it will only proceed to authentication via the dataStore module if authentication via Kerberos fails.

Note: Versions of OpenAM prior to v.13.0.0 required a custom error page to be implemented, allowing a failed login on the Windows Desktop SSO module to progress to the next module in the chain. This is no longer required.

That’s it!

Windows Desktop SSO should now authenticate users who are logged in on a domain-joined computer to OpenAM.

Part 2 of this blog post will look at the mechanics behind Kerberos authentication by analysing the network traffic and will also address common scenarios that can cause Windows Desktop SSO to fail.

This blog post was first published @ http://authntoz.blogspot.no/, included here with permission from the author.

Setting up an Active Directory DataStore in OpenAM

In this post, I’m going to set up an Active Directory DataStore in OpenAM 13.5. If you are familiar with OpenAM’s authentication and profile store facilities but struggle with the Windows side of things, then this post is for you.

Overview of Steps

  1. Get OpenAM to trust the certificate on an Active Directory LDAPS service.
  2. Create an account in the Windows domain for OpenAM to lookup accounts with (You aren’t using the domain admin account, are you?! ).
  3. Configure a DataStore in a new realm in OpenAM.
  4. Testing login with the default DataStore module

Prerequisites

You’ll need these things in place before you can follow the steps below:

  1. An OpenAM deployment (I’m using OpenAM 13.5 here) set up on the hostname openam.windom.example.com.
  2. A Windows Active directory domain controller with active directory certificate services installed (this automatically enables LDAPS on port 636).
  3. This server has the hostname: svr1.windom.example.com, using the windows domain windom.example.com.

I’ve previously written a blog post on setting up a test active directory domain, including the full installation steps and a script for generating test data. I’ve used this setup as a basis for the steps in this post.

 

Trusting Active Directory LDAPS certificates

If we want OpenAM to use LDAPS to connect to Active Directory, it needs to trust the public SSL certificate for the connection. If you followed my blog post on setting up an Active Directory domain, this certificate will have been issued by Active Directory Certificate Services. So to get OpenAM to trust that certificate, we can get it to trust the Certificate Services CA.

First, let’s use certutil on our Windows domain controller to export the public CA certificate from certificate services:

certutil -ca.cert -f ca.cer

 

Then import the certificate into the Java cacerts truststore on each of the OpenAM servers in your deployment. On CentOS 7, that command would look like this:

 

sudo keytool -import -trustcacerts -file ~/ca.cer -alias windom-ca -keystore /etc/pki/java/cacerts
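To check the import worked, you can optionally verify the LDAPS connection from the OpenAM host; the following sketch uses openssl with the exported CA file and should report a successful certificate verification:

openssl s_client -connect svr1.windom.example.com:636 -CAfile ~/ca.cer < /dev/null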

Create an account in your Active Directory domain

The account we are creating here is used by OpenAM for authentication operations and profile manipulation. If you are using something more established than a simple test Active Directory domain, ask your Windows SysAdmin to do this for you.

In this example, the user is called openamLdap and it is located in an OU called “Service Accounts”. This OU is put in place by the sample data script from my blog post on setting up active directory.

Use “Active Directory Users and Computers” (dsa.msc) to add the openamLdap account:

 

 

Note that I have set the “Password never expires” flag here. What you choose to do here is up to you (and your security policy), but remember that if you choose not to set it, you will need to keep track of password changes for this account.
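
If you prefer PowerShell to the GUI, something like the following creates an equivalent account. This is just a sketch: the OU path and password are assumptions based on the layout from my previous post, so adjust them to match your domain.

# Sketch only: adjust the OU path and the password to match your environment
New-ADUser -Name "openamLdap" `
-SamAccountName "openamLdap" `
-Path "OU=Service Accounts,OU=windomcorp,DC=windom,DC=example,DC=com" `
-AccountPassword (ConvertTo-SecureString "Cangetinldap1" -AsPlainText -Force) `
-PasswordNeverExpires $true `
-Enabled $true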

Once you’ve done that, delegate some admin rights to that account using the Active Directory Delegation of Control Wizard. Below, I’m delegating control of the “User Accounts” OU to the user I’ve just created. Right-click on the OU and select “Delegate Control…”:

 

 

 

Above we are allowing openamLdap to manipulate users in our User Accounts OU. What you choose to do in your setup is up to you.

Once you’ve done that, enable the Advanced Features view in Active Directory Users and Computers (View > Advanced Features):

Then edit your new account, go to the Attribute Editor tab and scroll down to distinguishedName. Copy this value; you’ll need it in a minute for the OpenAM setup:

Creating a DataStore in OpenAM

In OpenAM, create a realm called employees under the top level realm. In the employees realm, remove the default DataStore and add a new Active Directory DataStore:

Add the hostname(s) of AD servers with the port number, and add the DN of the user you created earlier. You can use the end of the user DN to get the LDAP base DN of the domain.

In this example, LDAPS is used (this is required if you want to change properties of accounts in AD). LDAPS is not enabled by default in AD; if you don’t have it set up, note that a quick way of enabling it is to install the Active Directory Certificate Services role and reboot. After having done that, you need to add the public cert of the Windows CA to the Java cacerts file on your OpenAM server.
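
A quick way to check that the LDAPS port is up and presenting the right certificate is to run openssl s_client from the OpenAM server (assuming openssl is installed; this only inspects the connection, it doesn’t prove the Java truststore is correct). Check that the issuer shown matches your certificate services CA:

openssl s_client -connect svr1.windom.example.com:636 -showcerts < /dev/null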

You may wish to alter the User and Groups OUs in the DataStore configuration. By default both of these point to the Users container in active directory, but this is usually not used in production active directory services because you cannot create OUs underneath this container. If you choose to alter the default search filters, remember that Active Directory does not support LDAP extensible match rules.

In my setup, I have set the LDAP organisation DN to point to my windomcorp OU, OpenAM is not concerned with anything outside of this. I’ve changed the default people container naming attribute to OU and my people container value to “User Accounts”. This matches “OU=User Accounts,OU=windomcorp,DC=windom,DC=example,DC=com”, the location of my regular user accounts which were set up by the script from my previous blog post.

I’ve done the same thing in the group configuration. The groups configuration must point to a valid LDAP entry that is accessible from our LDAP user account, otherwise OpenAM will fail to load the profile of a user after authentication.

Set the root DN for persistent search:

Now scroll to the top and hit save. Once you head back to the realm options, you should be able to see your users in the Subjects tab. If you don’t, go back and double-check your settings, and look for any exceptions (and any “caused by” exceptions) in the IdRepo debug log.

If you log out of OpenAM, you should be able to log in as a Windows user from the login page of the employees realm:

http://openam.windom.example.com:8000/am1350/XUI/#login/employees

 

What next?

In the steps above, we hardly touched on OpenAM’s powerful authentication capabilities – we simply used the “DataStore” authentication module that is available by default. If we want our users to log in with something other than the CN, the best option is to use the Active Directory authentication module, which lets you log in with different usernames. For example, you could specify sAMAccountName and mail as usernames.

The Active Directory authentication module also supports the Behera LDAP password policy draft, which allows OpenAM to respond to situations such as account lockouts, expired passwords and passwords that must be changed.

In my next blog post, I’ll go through the steps to configure this OpenAM deployment to authenticate users with Windows Desktop SSO – the Kerberos part of what is commonly referred to as Integrated Windows Authentication.

This blog post was first published @ http://authntoz.blogspot.no/, included here with permission from the author.

Setting up an Active Directory domain for evaluating the ForgeRock stack

This post walks through setting up a single Windows machine that you can use for testing various parts of the ForgeRock stack that integrate with Microsoft products. It is aimed at those who are tech-savvy but new to Microsoft Active Directory.

By the end of the walk through, you should have:

  1. A Windows Active Directory domain
  2. An Active Directory DNS server
  3. An Active Directory LDAP service, running on SSL
  4. Active Directory Certificate Services – a CA.
  5. A kerberos realm.
  6. A vaguely realistic directory layout with sample users.
  7. PowerShell scripts for configuring the above.

The above items should allow you to test:

  1. OpenAM – Active Directory authentication, DataStores, self-service features and Behera password policy support.
  2. OpenAM – Integrated Windows Authentication (which for some reason we call Windows Desktop SSO in ForgeRock)
  3. OpenAM – SmartCard authentication
  4. OpenAM – ADFS federation with WSFED, SAML2 and OIDC.
  5. OpenIDM – Password Synchronisation

This post follows these high-level steps:

  1. Set up a Windows VM or cloud instance.
  2. Give the computer an appropriate hostname.
  3. Run Windows Update.
  4. Install Active Directory Domain Services (ADDS, also known as “promotion to a domain controller”).
  5. Install Active Directory Certificate Services (ADCS). Amongst other things, this is a quick way of installing a certificate on the LDAPS port of a Windows domain controller.
  6. Allow all users to log on locally.
  7. Create a sample AD structure in PowerShell.

 

Set up a Windows Machine

To get hold of a Windows Server instance that you can start playing with, you can either build an instance yourself on your local or on-prem virtualisation software, or rent an instance from the likes of AWS and Azure.

At the time of writing, the most up-to-date edition of Windows Server is 2012 R2, which is what I will use for the remainder of this post.

I find it easiest to perform most of my testing on a local VM on my laptop. If you don’t have an MSDN license or a licensed copy of Windows Server to hand, Microsoft gives away a fully functional 180-day trial of Windows Server 2012 R2.

For testing, a Windows 2012 R2 server will scrape by on 2GB of RAM, but I would give it at least 4GB if you can. And while it will install happily on a 20GB hard disk, don’t expect there to be much room for Windows updates or any other software you may want to install. Some cloud providers offer 20GB images; I would avoid these if you can. Go with at least 40GB to avoid constantly having to juggle things around.

Setting up a VM and automating it

Install your copy of Windows Server from the ISO file. I won’t detail the installation steps on the first few screens; they simply ask for things like localisation, keyboard layout and the password of the administrator account. I find that the Datacenter edition covers all the features I need.

If you want to skip through all of this, you can set up an autounattend.xml file and add it to the root of your Windows Server ISO image. Here is one that I made which works with the 180-day trial images of Server 2012 R2. I made this using the Windows ADK, but you can also use a third-party generator website or just start with another one and manually edit it yourself.

If you choose to use my autounattend.xml, it is set up with the following:

username: administrator
password: Cangetinwin1
hostname: svr1
IE ESC: disabled

I have disabled IE ESC (aka that setting that prevents Internet Explorer from doing anything at all on Windows Server) above for testing purposes, but in production I would avoid doing this.

Using a cloud instance

A Windows machine in the cloud will likely boot straight to the desktop, so there isn’t much you need to do. Make sure you can remotely access it over Remote Desktop (TCP port 3389) from your IP address and make sure you have a remote desktop client handy. Windows obviously comes with one (mstsc.exe), and there are some great ones for macOS and Linux too.

Choose your weapon – server manager or PowerShell

When you first boot to a Windows 2012 R2 desktop, you’ll see four icons on the taskbar. First is the Windows 8 start menu, which I would avoid.

Second is Server Manager. This useful tool provides quick access to almost everything you need to administer Windows Server. I will describe using it to access various tools in the remainder of this post.

Third is PowerShell. I’ll also describe how to do most things in this post with PowerShell.

The fourth icon is good old Windows Explorer, which gives you access to the file system.

Run Windows Update

In production, your Windows updates would be carefully managed via group policy and possibly a private Windows Server Update Services (WSUS) Server. For test and evaluation, it’s up to you whether you want to get updates from Microsoft. For testing, I would run it once and then turn it off.

To access the Windows Update settings in server manager, navigate to Local Server > Windows Update.

Give the computer an appropriate hostname

You can skip this part if you used my autounattend.xml file above; your hostname will already be svr1. Otherwise, in Server Manager, navigate to Local Server > Computer Name.

You can also set the computer name with the Rename-Computer cmdlet in PowerShell (note that a reboot is needed before the new name takes effect):

Rename-Computer -NewName svr1

Install Active Directory Domain Services

Installing Active Directory Domain Services (formerly known as promoting a server to a domain controller) is a two-step process. First, you install the services required for ADDS; second, you configure them. As usual, you can do both with either PowerShell or Server Manager.

The following steps will configure:

  1. A brand new active directory forest containing a single domain called windom.example.com
  2. A single domain controller, running an active directory DNS server and LDAP service.
  3. A Kerberos realm called WINDOM.EXAMPLE.COM (Kerberos realm names are the AD DNS domain name in upper case)

ADDS Using Server Manager

This may seem like a lengthy process, but it mostly consists of clicking next in the installation and configuration wizards. If you are familiar with PowerShell, you may wish to skip to the end of this section and use the PowerShell commands which achieve the same thing.
  1. Open up Server Manager and select “Add roles and Features”

  2. Click Next

  3. You aren’t doing an RDS installation, so just click Next.

  4. The local server should be selected, so click next.

  5. Select the “Active Directory Domain Services” role. This will pop up a dialogue asking you to add some features. Go with the defaults, making sure to install the management tools, then click next. Important: Don’t install certificate services at this point; this should be done AFTER domain services has been installed.
  6. You probably don’t want to install any other features right now, so click next.

  7. There is some general info here about ADDS. Click next.

  8. You can click install at the confirmation stage.

  9. Now that the service has been installed, you have to configure it. Note that you can export an XML file here which contains the options you have specified so far. This is useful if you want to script an unattended deployment. Click “Promote this server to a domain controller” to continue with configuration.

  10. As we’re just creating an AD setup for test and evaluation, click “Add a new forest”. Specify a domain name. If you are testing, make sure you either use a domain that you own or a valid test domain, such as example.com, example.org or example.net. It’s bad practice to use a publicly available DNS domain directly for Active Directory, so choose a subdomain, such as windom.example.com.

  11. Use the default forest and domain functional levels and make sure that you are installing a DNS server. Specify the DSRM password – this is only needed if there is a serious problem with your domain controller that prevents it from booting.

  12. Don’t worry about DNS delegation, you are using a locally installed DNS server.

  13. Use the default NetBIOS domain name.

  14. Use the default filesystem locations.

  15. You can now review what you have done before you start the configuration. Note that you can export a pre-made PowerShell script at this point containing what you have configured.

  16. The prerequisites check should pass with some warnings about default security settings and DNS. This build is just for evaluation, so click install.

  17. After a couple of minutes the machine will reboot.

At this point you should have a working Active Directory domain controller. You will be able to connect to it using LDAP on port 389, but LDAPS is not available yet.
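
If you’d like a quick sanity check that the directory is answering, you can test the LDAP port from PowerShell (Test-NetConnection ships with Server 2012 R2 and later):

Test-NetConnection -ComputerName svr1.windom.example.com -Port 389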

ADDS Using PowerShell

Here is some PowerShell used to configure everything in the screenshots above. The first command uses an XML file that was generated from the “add roles” wizard above. The second command was generated by the configuration wizard.

Install-WindowsFeature -ConfigurationFilePath .\ADDS-DeploymentConfigTemplate.xml

Import-Module ADDSDeployment
Install-ADDSForest `
-CreateDnsDelegation:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainMode "Win2012R2" `
-DomainName "windom.example.com" `
-DomainNetbiosName "WINDOM" `
-ForestMode "Win2012R2" `
-InstallDns:$true `
-LogPath "C:\Windows\NTDS" `
-NoRebootOnCompletion:$false `
-SysvolPath "C:\Windows\SYSVOL" `
-Force:$true
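
Once the server has rebooted, a couple of quick checks will confirm that the forest and the core services are up. This is just a sketch, using the ActiveDirectory PowerShell module that is installed alongside the management tools and the standard Windows service names:

Get-ADDomain | Select-Object DNSRoot, NetBIOSName, DomainMode
Get-Service ADWS, KDC, Netlogon, DNS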

Installing Active Directory Certificate Services

The following section walks through installing Active Directory Certificate Services (ADCS). This is the enterprise grade PKI infrastructure offering from Microsoft, which you can use to generate certificates for strong authentication, for example when implementing SmartCard authentication.

One nice thing about ADCS is that if you install it on a domain controller, it will automatically issue a certificate for LDAPS and configure the domain controller to use it. The default policy that is enabled on Active Directory prevents changes to any object from occurring over plain text LDAP. Therefore, if you want products like OpenAM and OpenIDM to write anything to Active Directory, for example when using self service or account provisioning, then you need to be using LDAPS.

ADCS Using Server Manager

  1. Open server manager again and select “add roles and features”. Select “Active Directory Certificate Services”, accept the suggested features and management tools, then click next.

  2. You don’t need any other features right now, so click next.

  3. There is some useful information here.

  4. For now, just install the certificate authority. You can come back and install further services later on if you wish.

  5. Once again, it’s time to install the service. Just like with domain services, you can export an XML file for automating these steps.

  6. Once installed, click “configure active directory certificate services” to begin the configuration process.

  7. For our testing, using the default domain administrator account is fine.

  8. Select certificate authority and click next.

  9. We want our CA to be integrated with Active Directory so that it can automatically issue certificates to services like LDAP. Select Enterprise CA and click next.

  10. Create a new private key.

  11. I’ve increased the default hashing algorithm here from SHA1 to SHA256, as many applications consider SHA1 to be obsolete.

  12. The distinguished name and CN for the CA are set here. These cannot be changed, so you may want to consider what they should be for your testing. For my testing with the ForgeRock stack, the defaults are sufficient.

  13. You can increase the certificate validity period here if you wish.

  14. For testing, stick with the default file system locations.

  15. Now it’s time to apply the configuration. Unlike installing services like ADDS and ADFS, there is no option here to generate a PowerShell script of your options. I have created a PowerShell command with these options in the next section.

  16. The installation should complete successfully. Now reboot your server. On the next boot you’ll find that you can connect to your active directory server using LDAPS on port 636.

ADCS Using PowerShell

The following two commands will apply the above configuration. The first command requires an XML file that was generated from the “add roles and features” wizard.

Install-WindowsFeature -ConfigurationFilePath .\ADCS-DeploymentConfigTemplate.xml

Install-AdcsCertificationAuthority `
-AllowAdministratorInteraction `
-CAType EnterpriseRootCA `
-HashAlgorithmName SHA256 `
-KeyLength 2048 `
-ValidityPeriod Years `
-ValidityPeriodUnits 10
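
After the reboot, you can confirm that the domain controller is now listening for LDAPS connections (this only checks connectivity, not the certificate itself):

Test-NetConnection -ComputerName svr1.windom.example.com -Port 636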

Allow all users to log on locally

This is something that you should absolutely not do on a production domain controller.

By default, Windows Server allows two simultaneous sessions from different users without having to enable the full Remote Desktop Services role. That means you can log on to the machine as the administrator and remote desktop in as another user for testing purposes. Windows Server usually runs regular desktop applications just fine (it has to for RDS), which makes it ideal for testing services such as Office 365.

The default policies on a domain controller prevent normal users from logging on. Here, we are going to change that. One way to do that is to edit the Default Domain Controllers Policy (gpmc.msc) and, under Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > User Rights Assignment, add your test users (for example, Domain Users) to the “Allow log on locally” right.
To enable remote desktop access, open up Server Manager and navigate to Local Server > Remote Desktop. This will open the “System” control panel applet (sysdm.cpl), where you can configure remote desktop.
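
If you prefer to script the remote desktop part, the PowerShell below is a commonly used equivalent. It isn’t part of the original walkthrough, and it assumes the English-language firewall rule group name:

# Allow incoming Remote Desktop connections and open the firewall for them
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name fDenyTSConnections -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"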

Creating a sample AD structure in PowerShell

I’ve put together a script which generates a predictable list of any number of users and a fairly typical directory layout. The script is largely based on this one by SharePointRyan, only it does a few extra things.
If you wish to use it, copy the script to your machine (you can use remote desktop to copy files). Then execute it:
You should see output indicating that 200 users have been added (this number is adjustable in the script):
The script evenly distributes the users between three OUs representing world regions. It also creates a fairly common directory layout under the OU “windomcorp” (the script uses netbios name + “corp”).
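
I won’t reproduce the full script here, but a minimal sketch of the approach looks like this. The OU names, password and user count below are illustrative assumptions; the real script does more (regions, groups and so on):

# Minimal sketch: create a corp OU, a "User Accounts" OU beneath it, and some test users.
# The OU names and password here are placeholders, not the values used by the real script.
Import-Module ActiveDirectory

$domainDn = (Get-ADDomain).DistinguishedName
$corpOu = New-ADOrganizationalUnit -Name "windomcorp" -Path $domainDn -PassThru
$userOu = New-ADOrganizationalUnit -Name "User Accounts" -Path $corpOu.DistinguishedName -PassThru

1..200 | ForEach-Object {
    New-ADUser -Name ("testuser{0}" -f $_) `
        -SamAccountName ("testuser{0}" -f $_) `
        -UserPrincipalName ("testuser{0}@windom.example.com" -f $_) `
        -Path $userOu.DistinguishedName `
        -AccountPassword (ConvertTo-SecureString "Cangetinusers1" -AsPlainText -Force) `
        -Enabled $true
}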

Conclusion

That’s it. I will use this configuration as a basis for some future blog posts that I have in the works. Next up is integrating OpenAM with Office 365, then I’ll take a technical deep dive into supporting Integrated Windows Authentication with OpenAM.

This blog post was first published @ http://authntoz.blogspot.no/, included here with permission from the author.