Dart’s Async / Await is here. The Future starts now




The latest development release of the Dart Editor includes experimental support for Async / Await.  Check out Gilad's article for an introduction. In the Editor, go to Preferences -> Experimental to enable this feature.

async / await is "syntactic sugar" for what can be accomplished using Futures, Completers and a whack of nested then() closures. But this sugar is oh so sweet (and calorie free!). Your code will be much easier to understand and debug.


Here is a little before and after example using async/await. In this example, we need to perform 3 async LDAP operations in sequence. Using Futures and nested then() closures, we get something like this:



// add mickey to directory
ldap.add(dn, attrs).then(expectAsync((r) {
  expect(r.resultCode, equals(0));
  // modify mickey's sn
  var m = new Modification.replace("sn", ["Sir Mickey"]);
  ldap.modify(dn, [m]).then(expectAsync((result) {
    expect(result.resultCode, equals(0));
    // finally delete mickey
    ldap.delete(dn).then(expectAsync((result) {
      expect(result.resultCode, equals(0));
    }));
  }));
}));


Kinda ugly, isn't it?  And hard to type (did you miss a closing brace somewhere??).  The sequence of operations is just not easy to see. If we add in error handling for each Future operation, it gets even worse.


So let's rewrite using async/await:



// add mickey
var result = await ldap.add(dn, attrs);
expect(result.resultCode, equals(0));
// modify mickey's sn
var m = new Modification.replace("sn", ["Sir Mickey"]);
result = await ldap.modify(dn, [m]);
expect(result.resultCode, equals(0));
// finally delete mickey
result = await ldap.delete(dn);
expect(result.resultCode, equals(0));


The intent of the async/await version is much easier to follow, and we can eliminate the unit test expectAsync weirdness.

If any of the Futures throws an exception, we can handle that with a nice vanilla try / catch block.  Here is a sample unit test where the Future is expected to throw an error:



test('Bind to a bad DN', () async {
  try {
    await ldap.bind("cn=foofoo", "password");
    fail("Should not be able to bind to a bad DN");
  } catch (e) {
    expect(e.resultCode, equals(ResultCode.INVALID_CREDENTIALS));
  }
});

Note the use of the "async" keyword before the curly brace. You must mark a function as async in order to use await.
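To make that concrete, here is a minimal, self-contained sketch of an async function (the file name is arbitrary; only the async marker and the awaits matter):

import 'dart:async';
import 'dart:io';

// The enclosing function is marked async, so its body can use await.
// It implicitly returns a Future<int>.
Future<int> countLines(String path) async {
  var contents = await new File(path).readAsString();
  return contents.split('\n').length;
}

main() async {
  var lines = await countLines('pubspec.yaml');
  print('pubspec.yaml has $lines lines');
}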

Yet another reason to love Dart!





Online-ification: The Role of Identity

The Wikipedia entry for Digital Transformation describes it as "the changes associated with the application of digital technology in all aspects of human society".  That is a pretty broad statement.

An increased digital presence, however, is being felt across all lines of both public and private sector initiatives, reaching everything from being able to pay your car tax online, through to being able to order a taxi based on your current location.  This increased focus on the 'online-ification' of services and content drives a need for a loosely coupled yet strong view of digital identity, whether for an individual or a thing.

Digital Theme I - Physical to Digital

The classic digital transformation approach that many organisations are going through generally focuses on the structural change of delivering online a service that was previously sold in a more face-to-face manner.  Insurance is a simple example.  Twenty years ago (or even less...), brokers were commonplace - high street independents that provided advice and guidance on purchasing insurance for a range of common scenarios, from cars and homes, to health.  Roll forward to 2014, and you can get aggregated quotes in minutes, simply by going through an exchange of personal information with a website.

Whilst this requires greater decision making on the consumer side - being able to analyse and consume complex information - the ability to reach more customers more quickly is now a standard mantra for many CEOs.

The main driver, of course, is competitive advantage.  If you are not providing your service online, somebody else will.  This results in rapidly evolving websites, delivery platforms and profile management solutions that take only months, if not weeks, to implement.  Speed and agility are key here.

Digital Theme II - Increasing Self Service

From a cost perspective, getting existing customers into the mindset of self-service is a game changer: less reliance on call centre staff and branches, and again a narrowing need for face-to-face communication.  Interaction with a physical person brings not only cost but friction when it comes to executing simple information exchange transactions.  If the service or piece of information being exchanged can be commoditized, or at least treated in a repeatable manner, self-service is a no-brainer.  The simplest example is password reset, or letting users update their own profile or details.  But this again requires strong identity verification and authorization processes to be successful.

Digital Theme III - Introducing New Services

Whilst themes I and II are more focused on existing customers - and ultimately keeping them happy and "sticky" towards your organisation, service or good - the introduction of new services is often about expanding and looking for entirely new users and customers. Of course, cross selling and up selling something new into your existing customer base is key, but attracting new users is the "rain making" mantra of sales VPs up and down the land - net new business.  Attracting new customers is one thing; getting them to sign up to a new service is something entirely different.  There need to be simple and transparent registration processes - perhaps reusing identity attributes from social media providers, for example - but also enough of a carrot that the user is happy to exchange their contact and user details in order to buy or consume a service from you.

Identity at the Core

Throughout the above themes, identity is front and centre.  Existing customers need to be set up, transferred or reconciled in order to have a digital presence.  They require notifications, training and login credentials, so they can access goods and services to at least the same level as they did when a face-to-face or physical transaction took place.  Just think about that for a second - 'to at least the same level as they did when a face-to-face or physical transaction took place'.  That is a big statement.  An online service that is replacing something physical cannot be any worse!  It will certainly be different, but it must be at least on parity with, and hopefully an improvement on, the previous ways of working.

There also need to be effective methods not only of registering a user, but of allowing a relatively seamless and transparent way of logging in - single sign-on, context-based login, perhaps taking a device fingerprint - as well as the reuse of existing identity attributes and passwords through the "bring your own identity" concept.

Identity is core, but it is also often taken for granted.  Security aspects need to be simple and robust - the use of things like one-time passwords not only increases security, but should also help reduce risk.

So whilst digital is everywhere, having a strong focus on identity management will not only help fulfil the promise of delivering existing content in a new medium, but also help to attract new users and consumers to your organisation.

By Simon Moffatt

2014 European IRM Summit is only a few days away!

Starting Monday next week at the Powerscourt Estate near Dublin, the European IRM Summit is just a few days away.

I’m polishing the content and demos for the 2 sessions that I’m presenting, one for each product that I’m managing: OpenDJ and OpenIG. Both take place on the Wednesday afternoon in the Technology Overview track.

If you’re still contemplating whether you should attend the event, check the finalised agenda, and hurry over to the Registration site! I’m told there are a few seats remaining, but they might not last long!

I’m looking forward to seeing everyone next week in Ireland.



Filed under: General Tagged: conference, Dublin, ForgeRock, identity, Ireland, IRM, IRMSummit, IRMSummit2014, IRMSummitEurope, opendj, openig, summit

OpenIG: A quick look at decorators


Guillaume recently added a cool new feature for the upcoming release of OpenIG: decorators. To use decorators before the release, take a nightly build, or build OpenIG yourself.

Decorators are objects that extend what other objects can do. For example, OpenIG now has a CaptureDecorator that lets Filters and Handlers capture requests and responses as they pass through, and a TimerDecorator that logs how long it takes for Filters and Handlers to do their processing.

You configure decorators in the heap alongside your other objects.

For example, to configure a CaptureDecorator that also captures the entity when logging a request or response, you add a configuration object that looks like this:

{
    "name": "capture",
    "type": "CaptureDecorator",
    "config": {
        "captureEntity": true
    }
}
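The TimerDecorator mentioned above can be declared in the same way. Here is a minimal sketch, assuming it needs no further configuration for this purpose ("timer" is just the decoration name chosen for this example):

{
    "name": "timer",
    "type": "TimerDecorator"
}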

To add decorations to other objects, you have two options. You can either set default “decorations” for all applicable objects in the heap, or you can decorate individual objects. The scope of the former is everything in the heap to which the decoration applies. The scope of the latter is only the individual object.

You configure a decoration by using its “name” as a top-level property. For example, to add a “capture” decoration to all Filters and Handlers, capturing at all possible capture points, add the “decorations” to the heap like this.

{
    "heap": {
        "objects": [
            {
                "name": "capture",
                "type": "CaptureDecorator",
                "config": {
                    "captureEntity": false
                }
            },
            ... other objects ...,
            {
                "name": "ClientHandler",
                "type": "ClientHandler"
            }
        ],
        "decorations": {
            "capture": "all"
        }
    },
    "handler": "ClientHandler"
}

The configuration shown here results in a lot more capturing than when you use a CaptureFilter. A CaptureFilter captures only the request and response where it is configured in a chain. When you add capture decorations at all capture points to all your Filters and Handlers, then the capture happens for requests and responses as they enter and leave each Filter, and for requests as they enter and responses as they leave Handlers.

Also, notice that “decorations” is a heap property, rather than a global property of the server or the route.

Once you have narrowed down what you want to observe, and find for example that you only want to capture requests as they enter the “ClientHandler” and responses as they leave the “ClientHandler”, comment out “decorations” and decorate only the “ClientHandler”.

{
    "heap": {
        "objects": [
            {
                "name": "capture",
                "type": "CaptureDecorator",
                "config": {
                    "captureEntity": false
                }
            },
            ... other objects ...,
            {
                "name": "ClientHandler",
                "type": "ClientHandler",
                "capture": [ "request", "response" ]
            }
        ],
        "_decorations": {
            "capture": "all"
        }
    },
    "handler": "ClientHandler"
}

Notice what has changed in the configuration. “ClientHandler” now has the “capture” decoration, this time with an array of capture points rather than a single string. The field “_decorations” now employs Guillaume’s famous underscore commenting convention. (When OpenIG does not recognize a JSON field name, it ignores the field. The leading _ is a nice toggle.)

Decorators are likely to take the place of older, more cumbersome ways of configuring some capabilities. The CaptureFilter is deprecated starting in the next release, for example. For more about OpenIG Decorators see the draft OpenIG Reference.


ForgeRock doc tools 2.1.5 released

ForgeRock doc tools 2.1.5 is now available. This is a maintenance release, adding an option to stop the build after pre-processing DocBook XML sources and fixing some bugs. Thanks to Gene and Chris for their fixes, and to Lana for testing.

See the release notes for details about what has changed.

You do not need to make any configuration changes to move to this maintenance release from 2.1.4, except to update the version number in your POM.
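For example, if your project declares the plugin under its usual coordinates, the move is just a version bump. A sketch, assuming the org.forgerock.commons group ID; check the coordinates in your existing POM:

<plugin>
    <groupId>org.forgerock.commons</groupId>
    <artifactId>forgerock-doc-maven-plugin</artifactId>
    <version>2.1.5</version>
    <!-- Existing configuration and executions stay the same. -->
</plugin>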

See the README for more about how to use the doc tools.


POODLE SSL Bug and OpenDJ

A new security issue hit the streets this week: the POODLE SSL bug. We immediately received a question on the OpenDJ mailing list about how to remediate the vulnerability.
While the vulnerability is mostly triggered by the client, it’s also possible to prevent the attack by disabling the use of SSLv3 altogether on the server side. Beware that disabling SSLv3 might break old legacy client applications.

OpenDJ uses the SSL implementation provided by Java, and by default will allow use of all the TLS protocols supported by the JVM. You can restrict the set of protocols for the Java VM installed on the system using deployment.properties (on the Mac, using the Java Preferences Panel, in Advanced Mode), or using system properties at startup (-Ddeployment.security.SSLv3=false). I will let you search through the official Java documentation for the details.

But you can also control the protocols used by OpenDJ itself. If you want to do so, you will need to change settings in several places:

  • the LDAPS Connection Handler, since this is the one dealing with LDAP over SSL/TLS.
  • the LDAP Connection Handler, if the startTLS extended operation is to be used to negotiate SSL/TLS establishment on the LDAP connection.
  • the HTTP Connection Handler, if you have enabled it to activate the RESTful APIs
  • The Crypto Manager, whose settings are used by Replication and possibly the Pass Through Authentication Plugin.
  • The Administration Connector, which is also using LDAPS.

For example, to change the settings in the LDAPS Connection Handler, you would run the following command:

# dsconfig set-connection-handler-prop --handler-name "LDAPS Connection Handler" \
  --add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
  -h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

Repeat for the LDAP Connection Handler and the HTTP Connection Handler.

For the crypto manager, use the following command:

# dsconfig set-crypto-manager-prop \
  --add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
  -h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

And for the Administration Connector:

# dsconfig set-administration-connector-prop \
  --add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
  -h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

All of these changes will take effect immediately, but they will only impact new connections established after the change.
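To double-check the result afterwards, you can read the property back with the corresponding dsconfig get command. A sketch; adjust the connection options to your deployment:

# dsconfig get-connection-handler-prop --handler-name "LDAPS Connection Handler" \
  --property ssl-protocol \
  -h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n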


Filed under: Directory Services Tagged: directory, directory-server, ForgeRock, opendj, poodle, security, ssl, vulnerability

OpenDJ: LDAP Controls

LDAP controls are a standard mechanism for extending basic LDAP operations. For example, you can use a control to ask the LDAP server to sort search results before returning them, or to return search results a few at a time.
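As a quick illustration of the first case, a server-side sort request might look like this with the OpenDJ ldapsearch tool (a sketch assuming the --sortOrder option and the same sample data used in the examples below):

$ ldapsearch \
>  --port 1389 \
>  --baseDN ou=people,dc=example,dc=com \
>  --sortOrder +sn,+givenName \
>  "(objectclass=person)" cn mail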

OpenDJ directory server supports a fairly long list of controls. Let’s take a look at three of them.

“Only do this if…”

The Assertion Control tells the directory server only to process the operation if a specified assertion is true for the target entry. You can specify the assertion as a filter to match.

As an example, let’s replace Babs Jensen’s street address, but only if it is the one we are expecting. Notice the assertion filter passed to the ldapmodify request. If Babs’s street address is not “500 3rd Street”, the request does not have an effect:

$ ldapmodify \
> --port 1389 \
> --bindDN uid=kvaughan,ou=people,dc=example,dc=com \
> --bindPassword bribery \
> --assertionFilter "(street=500 3rd Street)"
dn: uid=bjensen,ou=people,dc=example,dc=com
changetype: modify
replace: street
street: 33 New Montgomery Street

Processing MODIFY request for uid=bjensen,ou=people,dc=example,dc=com
MODIFY operation successful for DN uid=bjensen,ou=people,dc=example,dc=com

“Make the modification, and shut up”

The Permissive Modify Control is handy when you want to make a modification no matter what. It lets you add an attribute that already exists, or delete one that is already gone without getting an error.

As an example, let’s make sure user.0 is a member of a big static group. It doesn’t matter whether user.0 was already a member, but if not, we want to make sure user.0 is added to the group.

$ ldapmodify \
>  --port 1389 \
>  --bindDN uid=user.1,ou=people,dc=example,dc=com \
>  --bindPassword password \
>  --control 1.2.840.113556.1.4.1413
dn: cn=Static,ou=Groups,dc=example,dc=com
changetype: modify
add: member
member: uid=user.0,ou=people,dc=example,dc=com

Processing MODIFY request for cn=Static,ou=Groups,dc=example,dc=com
MODIFY operation successful for DN cn=Static,ou=Groups,dc=example,dc=com

“Delete the children, too”

The Subtree Delete Control lets you delete an entire branch of entries.

As an example, let’s delete ou=Groups,dc=example,dc=com and any groups underneath. The user doing this needs an access to use the tree delete control, as in aci: (targetcontrol="1.2.840.113556.1.4.805") (version 3.0; acl "Tree delete"; allow(all) userdn ="ldap:///uid=user.1,ou=people,dc=example,dc=com";).

$ ldapdelete \
>  --port 1389 \
>  --bindDN uid=user.1,ou=people,dc=example,dc=com \
>  --bindPassword password \
>  --deleteSubtree \
>  ou=Groups,dc=example,dc=com
Processing DELETE request for ou=Groups,dc=example,dc=com

DELETE operation successful for DN ou=Groups,dc=example,dc=com

As mentioned above, OpenDJ directory server supports many LDAP controls. So does OpenDJ LDAP SDK. If you want to use one in your application, see the Dev Guide chapter on Working With Controls.


Introduction to OpenIG (Part 4: Troubleshooting)

As transformations are dictated by the set of filters/handlers in your configuration, and they are not always trivial, it quickly becomes very important to be able to capture the messages at different phases of the processing.

See the flow

The first thing to understand when trying to debug a configuration is "where the hell are all my messages going?" :)

This is achievable simply by activating DEBUG traces in your LogSink heap object:
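A minimal sketch of such a declaration, assuming the default ConsoleLogSink type and its level property:

{
    "name": "LogSink",
    "type": "ConsoleLogSink",
    "config": {
        "level": "DEBUG"
    }
}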

When the DEBUG traces are on, you'll see a new line each time an Exchange comes into a Handler/Filter and each time it flows out of the element (you also get a performance measurement).

Capture the messages (requests/responses)

OpenIG provides a simple way to see the HTTP message (be it a request or a response), including both headers and (optionally) the entity (if it is textual content): the CaptureFilter.

When you install a CaptureFilter, it writes each captured request and response to its log, headers first, followed (optionally) by the entity.

Install a CaptureFilter

Being a filter, it has to be installed as part of a Chain:
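A minimal sketch of such a Chain, assuming the CaptureFilter's file and captureEntity settings (the object names and the log file location are arbitrary):

{
    "heap": {
        "objects": [
            {
                "name": "Chain",
                "type": "Chain",
                "config": {
                    "filters": [ "Capture" ],
                    "handler": "ClientHandler"
                }
            },
            {
                "name": "Capture",
                "type": "CaptureFilter",
                "config": {
                    "file": "/tmp/gateway.log",
                    "captureEntity": true
                }
            },
            {
                "name": "ClientHandler",
                "type": "ClientHandler"
            }
        ]
    },
    "handler": "Chain"
}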

It is usually best placed either as the OpenIG entry point (the first element to be invoked), which helps to see what the User-Agent sends and receives (as perceived by OpenIG), or just before a ClientHandler (which represents a sort of endpoint, usually your protected application).

Capture what you want

CaptureFilter is sufficient for simple capturing needs. When what you want to observe is not contained in the HTTP message, you have to use the OpenIG Swiss Army knife: the ScriptableFilter.

This is a special filter that allows you to execute a Groovy script when traversed by an Exchange.

Here is a sample script that prints the content of the Exchange's session:
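A minimal sketch of such a script, assuming the exchange, logger and next bindings that OpenIG provides to Groovy scripts:

// PrintSessionFilter.groovy
// Log every entry found in the session, then let the exchange continue along the chain.
exchange.session.each { key, value ->
    logger.info("session[" + key + "] = " + value)
}
next.handle(exchange)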

Copy this script into ~/.openig/scripts/groovy/PrintSessionFilter.groovy and configure your heap object:
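The corresponding heap object might look like this (the name is arbitrary; the file path is resolved relative to the scripts directory):

{
    "name": "PrintSessionFilter",
    "type": "ScriptableFilter",
    "config": {
        "type": "application/x-groovy",
        "file": "PrintSessionFilter.groovy"
    }
}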

Seeing the messages on wire

Sometimes none of the previous solutions is applicable, because you want to see the on-wire message content (as opposed to the messages as modelled by OpenIG).

In this case, the only solution is to start OpenIG with a couple of system properties that activate deep traces in the HTTP client library we're using: Apache HTTP Client.

$ bin/catalina.sh -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog \
                  -Dorg.apache.commons.logging.simplelog.log.httpclient.wire=debug \
                  -Dorg.apache.commons.logging.simplelog.log.org.apache.commons.httpclient=debug \
                  run

See the HTTP Client Logging page for more information.

Next

In the next post we'll explain how routes can speed up your configuration preparation.

Towards automating tests for examples in docs

ForgeRock documentation includes many (but never enough) examples. When reading about editing configuration files, you expect to see excerpts of the configuration files. When reading a developer guide, you expect to see code samples. When following a tutorial that involves the command line, you expect to see command line examples for each step. Most of us figure things out a lot more quickly given both a good explanation and also a working example.

Trouble is, examples can go stale and break when the software changes. Unless you have a test harness, this sort of breakage happens silently. If the doc source contains only example input and output, it can also take time to set the software up in order to reproduce the conditions for the example. And yet readers hardly want to search for the relevant part of the example in a mass of scaffolding code and configuration.

Some of that work can be done behind the scenes by quality engineers. They can set up a context that allows them to test examples in the documentation, and indeed some of the quality engineers at ForgeRock like Jean-Charles and Laurent are already starting to solve the problem that way. It would be better for them and for everyone else, however, if it were a lot easier to prepare the context, and not something separate from the examples.

So it would help both to include only the salient excerpts in the doc and also to link to all the material needed to set up the software to try or to test the examples. (Of course everything must be versioned alongside the software. If OpenAM changes between versions 11 and 12, then the examples must change as well.)

With XInclude and JCite support, this is already technically possible for XML and Java samples. We use JCite in the OpenDJ LDAP SDK Developer’s Guide to quote from code samples tested as part of the build. But throughout the docs we have samples that involve neither XML nor Java code.

For all the other samples, we are adding two things to the next major release of the forgerock-doc-maven-plugin: copying arbitrary resources into the documentation, and quoting from any text-based file.

In the next major version, copying arbitrary resources will involve setting <copyResourceFiles> to true. If the resources are not under src/main/docbkx/resources, you point to them by using the <resourcesDirectory> setting. There are some notes on this in the draft README for the nightly build version of the plugin. This way docs can reference long example scripts and other files, without including the entire text in the docs.
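For example, the plugin <configuration> might end up looking something like this (a sketch based on the README notes; the resources directory shown here is just a made-up path):

<configuration>
    <!-- Copy resource files alongside the built documentation. -->
    <copyResourceFiles>true</copyResourceFiles>
    <!-- Only needed when resources do not live under src/main/docbkx/resources. -->
    <resourcesDirectory>${basedir}/src/main/docbkx/my-resources</resourcesDirectory>
</configuration>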

Quoting from any text based file will depend on a new plugin called xcite-maven-plugin. The basic idea is as follows. In the file you want to cite, you either add nothing (if you want to quote the entire file) or you add markers (if you want to quote only part of the file). The markers are just strings. So they could be in comments, but they could also be part of the text. Then in the file where you want the quote to appear, you add a citation, à la JCite. For example: [file.txt:start marker:end marker]

Suppose file.txt looks like this:

# start marker
This is a great quote & stuff.
# end marker

Then in your book.xml file, you might have the citation:

<para>[file.txt:start marker:end marker]</para>

After you run the plugin, the quote ends up in your book.xml file:

<para>This is a great quote &amp; stuff.</para>

The examples above are overly simplified. But you can imagine how this might work to include excerpts of a long shell script that involves some setup and configuration, then a number of example commands.

More to come…


Introduction to OpenIG (Part 3: Concepts)

The previous posts exposed you to OpenIG: use cases and an initial configuration example. Before going further, I would like to introduce the underlying concepts that you should understand.

HTTP Exchanges, Requests and Responses

OpenIG is a specialized HTTP reverse proxy: it only deals with HTTP messages. The purpose of OpenIG is to give you complete control over the messages flowing through it (incoming requests and outgoing responses).

In order to ease handling of the whole message processing (request and response), OpenIG uses the concept of an Exchange: a simple way to link a Request and a Response together. It also contains a Session instance that can be used to store session-scoped properties. The Exchange itself is a Map, so it's a natural container for request-scoped properties.

When a request comes through OpenIG, an Exchange object is created and populated with an initial Request and a Session instance.

A Request is the OpenIG model for an incoming HTTP message: it captures both the message's entity and all of its headers. It also has dedicated accessors for the message's target URI, cookies and form/query parameters.

A Response is the complementary model object for outgoing HTTP messages. In addition to the entity and headers accessors, Response provides a setter for the HTTP status code (20x -> ok, 30x -> redirect, 50x -> server error, ...).

Exchange processing

Now that we have a model of the message's content, what can we do to apply transformations to it?

OpenIG offers a simple, but powerful, API to process exchanges:

  • Handlers, which are responsible for producing a Response object in the Exchange
  • Filters, which can intercept the Exchange as it flows through (both incoming and outgoing)

Handler

What does the doc say about Handler.handle()?

Called to request the handler respond to the request.

A handler that doesn't hand-off an exchange to another handler downstream is responsible for creating the response in the exchange object.

Hmmm, an example would help, right?

OpenIG offers a rich set of Handlers; the most significant one is the ClientHandler. This heavily used component usually ends the Exchange's processing: it simply forwards the Request to the target URI and wraps the returned HTTP message into the Exchange's Response.

In other words, it acts as a client to the protected resource (hence the name).

Filter

Again, what does the doc say about Filter.filter()?

Filters the request and/or response of an exchange.

Initially, exchange.request contains the request to be filtered. To pass the request to the next filter or handler in the chain, the filter calls next.handle(exchange). After this call, exchange.response contains the response that can be filtered.

This method may elect not to pass the request to the next filter or handler, and instead handle the request itself. It can achieve this by merely avoiding a call to next.handle(exchange) and creating its own response object in the exchange. The filter is also at liberty to replace a response with another of its own after the call to next.handle(exchange).

This is easier to understand, I think; everyone is used to interceptors nowadays...

The traditional example of a Filter is the CaptureFilter: this filter simply prints the content of the incoming request, then calls the next handler in the chain, and finally prints the outgoing response's content.

Filters are contained inside a special Handler called a Chain. The chain is responsible for sequentially invoking each of the filters declared in its configuration, before handing the flow to its terminal Handler.

All together: a Chain example

If you have your OpenIG up and running, please shut it down and replace its configuration file (config.json) with the following content:
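A minimal sketch of such a config.json, assuming the HeaderFilter's messageType and add settings (the X-Hello value itself is arbitrary):

{
    "heap": {
        "objects": [
            {
                "name": "Chain",
                "type": "Chain",
                "config": {
                    "filters": [ "HeaderFilter" ],
                    "handler": "ClientHandler"
                }
            },
            {
                "name": "HeaderFilter",
                "type": "HeaderFilter",
                "config": {
                    "messageType": "RESPONSE",
                    "add": { "X-Hello": [ "World" ] }
                }
            },
            {
                "name": "ClientHandler",
                "type": "ClientHandler"
            }
        ]
    },
    "handler": "Chain"
}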

Compared to the previous one, this configuration enhances the response message with an additional HTTP header named X-Hello.

The message flow is depicted in the following diagram:

Exchange Flow

A Filter can intercept both the Request and the Response flows. In this case, our HeaderFilter is configured to act only on the response flow (because the outgoing message is a handy way to observe a filter in action from an outside perspective).

Wrap up

OpenIG provides a low-level HTTP model API that lets you alter HTTP messages in many ways. The message processing is handled through a kind of pipeline composed of handlers and filters.

All the processing logic you want to apply to your messages ultimately depends on the way you compose your handlers and filters together.

Next

In the next post we'll see how to debug your configurations.