IoT World Forum Review: Interop, Data & Security

This week saw the two-day Internet of Things World Forum conference take place in London. There is clearly a general consensus that the IoT market is a multi-trillion-dollar opportunity, realised through items such as consumer wearables, embedded predictive-failure components and data-collecting sensors.



The rapid rise in connected devices and IoT ecosystems is seemingly being driven by several key factors, including the falling cost of both connectivity and data storage. These lowering barriers to entry, coupled with more developer-friendly ecosystems and open platforms, are helping to unlock new revenue-generating business opportunities in multiple verticals, including manufacturing and healthcare.

Matt Hatton from Machina Research started off by discussing the progression from local standalone projects (Intranets of Things) through to more internal or enterprise-focused deployments (Subnets of Things). David Keene from Google extended this further, saying the progression will reach the concept of a Web of Things, where accessibility and 'findability' will be key to managing and accessing data.

It was clear that data aggregation and analytics will be a major component of any successful IoT infrastructure, whether focused on consumer enhancements, such as the Jaguar connected-car project described by Leon Hurst, or on smart healthcare, whether in the form of Fitbits or more advanced medical instrumentation.

APIs and machine processing were certainly referenced more than once. The new, more connected web will provide interaction touch points that only machines can understand, coupled with better data aggregation, distributed data storage and centralised querying. APIs of course need protection too, either via gateways or via token management integration for standards such as OAuth2.

One aspect that was conspicuous in its absence was data privacy, and identity and access management. The IoT landscape is creating vast amounts of data at stream-like speeds. The journey from little data (small devices in isolation) to big data (aggregated in cloud services) requires strong authentication and authorization at the device, service and end-user levels. The ability to share data, and to know transparently where it is being accessed, will be a key concern in the consumer and healthcare spaces.

Dave Wagstaff from BSquare brought up the interesting point that many organisations are now subtly moving away from a product-based business model to a software-and-services-based approach. With the increased capability of devices, organisations can now perform much more in the way of remote monitoring, predictive failure detection and so on, where the end user is really just paying an insurance or subscription fee for their physical thing.

Bernd Heinrichs from Cisco followed a similar pattern, describing the German view of Industry 4.0 (or 4.1...), where innovative production concepts are helping to reduce energy use, increase uptime and generate better component output.

From a new market opportunity perspective, Francois Menuier from Morgan Stanley observed that 6% of all consumers now own a wearable, with 59% of them using that wearable daily. In addition, many wearable owners said that this was an additional purchase and not one to replace existing technology, solidifying the view that new market opportunities are available in the IoT world. However, many consumer wearables generate huge amounts of deeply personal data that needs to be protected and shared securely.

Jon Carter from Deutsche Telekom went through the seven steps for a successful IoT implementation, ending with two main points: apply a minimum-viable-product concept to design, and leverage a secure and open platform.

Dr Shane Rooney from the GSMA focused his thoughts on security within mobile operator networks, including the concepts of device-to-device and device-to-service authentication, as well as the need for greater focus on data privacy.

Overall, an interesting couple of days. Whilst most manufacturers and platforms are focused on interoperability and data management, identity and access management has a strong and critical role in allowing third-party data sharing and interactions to take place. It will be interesting to see whether 2015 and 2016 start to introduce these concepts by default.





Another great resource to get started with OpenIG

I forgot to mention that Guillaume, the lead developer for OpenIG, has also started a blog to discuss middleware and share his experience and thoughts about OpenIG.

He has started a great series of posts introducing OpenIG, its use cases, some terminology…

I encourage you to take a look at it here: In Between – a Blog by Guillaume Sauthier


Filed under: Identity Gateway Tagged: blog, ForgeRock, gateway, identity, identity gateway, openig, opensource

Simplifying OpenIG configuration…

In the article that I posted yesterday, I outlined portions of configuration files for OpenIG. The configuration actually only works with the latest OpenIG nightly builds, as it leverages some of the newest updates to the code.

One piece of feedback we got after we released was that configuring OpenIG was still too complex and verbose. So we've made changes to the model, simplifying it and removing intermediate objects… The result is much smaller and easier-to-understand configuration files that are, more importantly, easier to read back so you can follow the flow they represent.

My colleague Mark has done a great job of describing and illustrating those changes in a few articles:

OpenIG’s improved configuration files (Part 1)

OpenIG: A quick look at decorators

OpenIG’s improved configuration files (Part 2)



Filed under: Identity Gateway Tagged: configuration, ease of use, engineering, ForgeRock, gateway, identity, openig, opensource

Missed the IRM Summit Europe? We’ve got it recorded!

All the sessions from the European IRMSummit that took place early this month in Dublin were recorded, and the videos are now available.

To make it even easier for everyone, our Marketing team has produced playlists according to the agenda:

Enjoy, and I hope this will make you want to join us next year!


Filed under: General Tagged: conference, Dublin, ForgeRock, identity, Identity Relationship Management, IRM, IRMSummit2014, opensource, presentations, summit, videos

API Protection with OpenIG: Controlling access by methods

Usually, one of the first things you want to do when securing APIs is to allow only specific calls to them. For example, you want to make sure that you can only read from specific URLs, or can call PUT but not POST on other ones.
OpenIG, the Open Identity Gateway, has everything you need to do this by default using a DispatchHandler, in which you express the methods that you want to allow as a condition.
The configuration for the coming OpenIG 3.1 version would look like this:

 {
     "name": "MethodFilterHandler",
     "type": "DispatchHandler",
     "config": {
         "bindings": [
         {
             "handler": "ClientHandler",
             "condition": "${exchange.request.method == 'GET' or exchange.request.method == 'HEAD'}",
             "baseURI": "http://www.example.com:8089"
         },
         {
             "handler": {
                 "type": "StaticResponseHandler",
                 "config": {
                     "status": 405,
                     "reason": "Method is not allowed",
                     "headers": {
                         "Allow": [ "GET", "HEAD" ]
                     }
                 }
             }
         }]
     }
 }

This is pretty straightforward, but if you want to allow another method, you need to update both the condition and the rejection headers. And when you have multiple APIs with different methods that you want to allow or deny, you need to repeat this block of configuration or craft a much more complex condition expression.

But there is a simpler way, leveraging the scripting capabilities of OpenIG.
Create a file under your .openig/scripts/groovy named MethodFilter.groovy with the following content:

/**
 * The contents of this file are subject to the terms of the Common Development and
 * Distribution License 1.0 (the License). You may not use this file except in compliance with the
 * License.
 * Copyright 2014 ForgeRock AS.
 * Author: Ludovic Poitou
 */
import org.forgerock.openig.http.Response

/*
 * Filters requests, allowing only the methods listed in the "allowedmethods"
 * argument, supplied using a configuration like the following:
 *
 * {
 *     "name": "MethodFilter",
 *     "type": "ScriptableFilter",
 *     "config": {
 *         "type": "application/x-groovy",
 *         "file": "MethodFilter.groovy",
 *         "args": {
 *             "allowedmethods": [ "GET", "HEAD" ]
 *         }
 *     }
 * }
 */

if (allowedmethods.contains(exchange.request.method)) {
    // Call the next handler. This returns when the request has been handled.
    next.handle(exchange)
} else {
    exchange.response = new Response()
    exchange.response.status = 405
    exchange.response.reason = "Method not allowed: (" + exchange.request.method +")"
    exchange.response.headers.addAll("Allow", allowedmethods)
}

And now in all the places where you need to filter specific methods for an API, just add a filter to the Chain as below:

{
    "heap": [
        {
            "name": "MethodFilterHandler",
            "type": "Chain",
            "config": {
                "filters": [
                    {
                        "type": "ScriptableFilter",
                        "config": {
                            "type": "application/x-groovy",
                            "file": "MethodFilter.groovy",
                            "args": {
                                "allowedmethods": [ "GET", "HEAD" ]
                            }
                        }
                    }
                ],
                "handler": "ClientHandler"
            }
        }
    ],
    "handler": "MethodFilterHandler",
    "baseURI": "http://www.example.com:8089"
}

This solution allows you to filter different methods for different APIs with a single simple configuration element, the “allowedmethods” field, for greater reusability.
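
For example, a second route protecting a write-oriented API could reuse exactly the same script and simply pass a different list of methods. The sketch below follows the same structure as the route above; the base URI is purely illustrative:

{
    "heap": [
        {
            "name": "WriteMethodFilterHandler",
            "type": "Chain",
            "config": {
                "filters": [
                    {
                        "type": "ScriptableFilter",
                        "config": {
                            "type": "application/x-groovy",
                            "file": "MethodFilter.groovy",
                            "args": {
                                "allowedmethods": [ "POST", "PUT", "DELETE" ]
                            }
                        }
                    }
                ],
                "handler": "ClientHandler"
            }
        }
    ],
    "handler": "WriteMethodFilterHandler",
    "baseURI": "http://api.example.com:8089"
}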


Filed under: Identity Gateway Tagged: access control, API, ForgeRock, methods, openig, opensource, Tips

OpenIG’s improved configuration files, part II

Guillaume, Matt, and Violette’s recent work has made OpenIG configuration a lot easier to read and write. To try this at home, grab the latest nightly build of OpenIG. (The changes in the server are backwards compatible, too, so you don’t have to move right away. But you will want to.)

Imagine that you are working with OpenIG 3.0.0, debugging one of your basic routes. You want to capture requests and responses going through your route to see what is going on. Your route definition looks something like this:

{
    "heap": {
        "objects": [
            {
                "name": "LoginChain",
                "type": "Chain",
                "config": {
                    "filters": [
                        "CaptureFilter",
                        "LoginRequest"
                    ],
                    "handler": "ClientHandler"
                }
            },
            {
                "name": "CaptureFilter",
                "type": "CaptureFilter",
                "config": {
                    "file": "/tmp/gateway.log"
                }
            },
            {
                "name": "LoginRequest",
                "type": "StaticRequestFilter",
                "config": {
                    "method": "POST",
                    "uri": "http://www.example.com",
                    "form": {
                        "username": [
                            "george"
                        ],
                        "password": [
                            "costanza"
                        ]
                    }
                }
            },
            {
                "name": "ClientHandler",
                "type": "ClientHandler",
                "config": {}
            }
        ]
    },
    "handler": "LoginChain"
}

Fast forward to last night’s build of OpenIG, taking into account Guillaume’s inlining and capture decorator, Violette’s removal of empty “config” settings, Matt’s streamlining of the “heap” when it is not necessary, and updates to the way decorations can be done.

Here is that same route, seriously improved.

{
    "handler": {
        "type": "Chain",
        "config": {
            "filters": [
                {
                    "type": "StaticRequestFilter",
                    "config": {
                        "method": "POST",
                        "uri": "http://www.example.com",
                        "form": {
                            "username": [
                                "george"
                            ],
                            "password": [
                                "costanza"
                            ]
                        }
                    }
                }
            ],
            "handler": {
                "type": "ClientHandler"
            }
        }
    },
    "capture": "all"
}

Well, almost the same route. You now capture requests and responses at many more points in the flow.

Notice that the very first thing in the configuration is the “handler” that produces a response.

The “Chain” is much easier to read, too.

When you start writing scripts or get stuck with a complex configuration, add a “CaptureDecorator” definition with "captureExchange": true for even more debugging information at each capture point.
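
For example, a heap entry along these lines should do it. This is a minimal sketch: it assumes the decorator is declared under the name "capture" so that the "capture": "all" decoration above picks it up, so double-check the exact option names against the nightly reference:

{
    "name": "capture",
    "type": "CaptureDecorator",
    "config": {
        "captureExchange": true
    }
}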

You can now go utterly minimalist in your default route, assuming you define a “ClientHandler” in the top-level config.json file.

{ "handler": "ClientHandler" }
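
The top-level config.json itself can stay small. A minimal sketch, assuming you dispatch requests to your routes with a Router as is commonly done, might look like this:

{
    "heap": [
        {
            "name": "ClientHandler",
            "type": "ClientHandler"
        }
    ],
    "handler": {
        "type": "Router"
    }
}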

For more on what’s happening in the configuration and around OpenIG, see the draft, in-progress What’s New chapter of the release notes.


OpenIDM: Trying the new Admin UI

One of the cool features in the upcoming release of OpenIDM is the new Admin UI. Jake Feasel demonstrated this to several people last week. It already looks like a major improvement for newbies over editing configuration files.

If, like me, you have been out of the loop for a while, it is reassuring to see that OpenIDM still installs like a dream when you are just getting started. Download, unzip, and ./startup.sh.

Here is how you might start OpenIDM with an existing sample. This uses sample2, which is one-way synchronization with OpenDJ. You are not required to start with the samples, but they can quickly bootstrap your evaluation, without requiring you to read much doc or to think through the initial configuration.

$ cd /path/to && mv ~/Downloads/openidm . && cd openidm
$ ./startup.sh -p samples/sample2
Executing ./startup.sh...
Using OPENIDM_HOME:   /path/to/openidm
Using PROJECT_HOME:   /path/to/openidm/samples/sample2
Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m
Using LOGGING_CONFIG: -Djava.util.logging.config.file=/path/to/openidm/samples/sample2/conf/logging.properties
Using boot properties at /path/to/openidm/samples/sample2/conf/boot/boot.properties
OpenIDM version "3.1.0-RC3-SNAPSHOT" (revision: 4297) jenkins-OpenIDM-3746 null
-> OpenIDM ready

OpenIDM’s web-based UI is ready for HTTPS out of the box, but it seems you can still use HTTP for evaluation.

For example, you can visit http://localhost:8080/openidmui/ and log in as openidm-admin:openidm-admin.

OpenIDM’s UI helps you avoid default passwords by prompting you to change your password the first time you log in.

[Screenshot: openidm-first-login]

You find the Admin UI at http://localhost:8080/admin/. The screenshot below shows the view when running with the sample2 configuration.

[Screenshot: openidm-admin-home]

The new Admin UI offers a wizard-like approach to setting up provisioning. If you follow sample2, set up OpenDJ with some sample data before you get started. The sample comes with a mapping from OpenDJ accounts to managed/user.

[Screenshot: openidm-add-properties]
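
Under the hood, that mapping is defined in the sample's conf/sync.json. Here is an abridged sketch of what it contains; the mapping name and the property list are from memory and trimmed, so check the sample file for the exact contents:

{
    "mappings": [
        {
            "name": "systemLdapAccounts_managedUser",
            "source": "system/ldap/account",
            "target": "managed/user",
            "properties": [
                { "source": "uid", "target": "userName" },
                { "source": "sn", "target": "familyName" },
                { "source": "mail", "target": "mail" }
            ]
        }
    ]
}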

The sample also comes with a configuration for what to do in different situations during synchronization. Most of the policies are defaults.

[Screenshot: openidm-sync-config]

To run reconciliation and synchronize your source and target, either click Reconcile Now or schedule reconciliation on the Sync tab of the Mappings page. When reconciliation completes, you should have a bunch of new managed users. If you schedule reconciliation, subsequent runs might not encounter any changes.

[Screenshot: openidm-recon]

Click the User View link at the upper left of the page and then the Users tab to view all your managed users.

[Screenshot: openidm-users-list]

When you change a mapped attribute in the source, in this case OpenDJ, reconciliation updates it in the target, in this case the managed/user. For example, Babs Jensen’s original mail address is bjensen@example.com.

[Screenshot: openidm-babs-before]

After changing the mail address in OpenDJ to babs.jensen@example.com, reconciliation updates her corresponding managed/user in OpenIDM’s repo. Refreshing the page after reconciliation, you can see the change.

[Screenshot: openidm-babs-after]

The OpenIDM Admin UI is quite a leap forward, and promises to make it much easier for all of us to create and edit resources and mappings, and to arrange and schedule synchronization. Hats off to the OpenIDM team!


The new ForgeRock Community site

Earlier this week, a major new version of the ForgeRock Community site was pushed to production.

ForgeRock.org

Besides a cleaner look and feel and a long-awaited reorganisation of content, the new version enables better collaboration around the open source projects and initiatives. You will find Forums, for general or project-specific discussions, and new Groups around specific topics like UMA or IoT. We’ve also added a calendar with different views, so that you can find or suggest events, conferences and webinars touching the projects and IRM at large.
Great work, Aron and Marius, on the new ForgeRock.org site! Thank you.

[Image: Venn of Authorization with UMA]
And we’ve also announced a new project, OpenUMA. If you haven’t paid attention to it yet, I suggest you do now. User-Managed Access (UMA) is an OAuth-based protocol that enables an individual to control the authorization of data sharing and service access made by others. The OpenUMA community shares an interest in informing, improving, and extending the development of UMA-compatible open-source software as part of ForgeRock’s Open Identity Stack.



Filed under: General Tagged: collaboration, community, ForgeRock, forgerock.org, identity, opensource, projects

Highlights of IRMSummit Europe 2014…

[Photo: Powerscourt hotel]
Last week at the nice Powerscourt Estate, outside Dublin, Ireland, ForgeRock hosted the European Identity Relationship Management Summit, attended by over 200 partners, customers, prospects and users of ForgeRock technologies. What a great European IRMSummit it was!

If you weren’t able to attend, here are some highlights:

I heard many talks and discussions about identity being the cornerstone of the digital transformation of enterprises and organizations, shifting identity projects from cost centers to revenue generators.

There was lots of focus on consumer identity and access management, with some perspectives on current identity standards and what is going to be needed from IRM solutions. We also heard from security and analytics vendors, demonstrating how ForgeRock’s Open Identity Stack can be combined with the network security layer or with analytics tools to increase security and context awareness when controlling access.

User-Managed Access is getting more and more real, as the specifications are getting close to being finalised, and ForgeRock announced the OpenUMA initiative to foster ideas and code around it. See forgerock.org/openuma.

[Photo: Chris and Allan around an Internet-connected coffee machine, powered by ARM]
There were many talks about the Internet of Things, and especially demonstrations around defining the relationship between a Thing and a User, and securing access to the data produced by the Thing. We saw a door lock being unlocked with an NFC-enabled mobile phone, by provisioning the appropriate credentials over the air, and a smart coffee machine able to identify the coffee type and the user, push the data to a web service, and ask the user for consent to share it. There’s a common understanding that all the things will have identities and relations with other identities.

There were several interesting discussions and presentations about Digital Citizens, illustrated by reports from deployments in Norway, Switzerland and Nigeria, and by the European Commission cross-border authentication initiatives STORK and eIDAS.

Half a day was dedicated to ForgeRock products, with introductory training and demonstrations of coming features in OpenAM, OpenDJ, OpenIDM and OpenIG. On Wednesday afternoon, I gave two presentations: one on OpenIG, demonstrating the ease of integrating OAuth 2.0 and OpenID Connect to protect applications and APIs, and one on OpenDJ, demonstrating the flexibility and power of the REST to LDAP interface.

All presentations and materials are available online as PDFs, and now as videos on ForgeRock’s YouTube page. You can also find here a short summary of the Summit in a video produced by Markus.

[Photos: Powerscourt Estate house and gardens]
The summit wouldn’t be such a great conference if there were no plans for social interaction and fun. This year we had a nice dinner in the Powerscourt house (aka the Castle), followed by live music in the pub. The band was great, but became even better when Joni and Eve joined them for a few songs, to the great pleasure of all the guests.


[Photo: the band]

[Photo: Sláinte]
Of course, I have to admit that the best part of the IRM Summit in Ireland was the pints of Guinness!

To all attendees, thank you for your participation, the interesting discussions and the input to our products. I’m looking forward to seeing you again next year for the 2015 edition. Sláinte!

As usual, you can find the photos that I took at the Powerscourt Estate on Flickr. Feel free to copy them for non-commercial use, and if you do republish them, I would appreciate getting credit for them.

[Updated on Nov 11] Added link to the highlight video produced by Markus
[Updated on Nov 13] Added link to the slideshare folder where all presentations have been published
[Updated on Nov 24] Added link to all the videos on ForgeRock’s YouTube page


Filed under: Identity Tagged: conference, ForgeRock, identity, IRM, IRMSummit2014, IRMSummitEurope, openam, opendj, openidm, openig, summit

OpenAM: Policy management improvements

Last week’s entry summarized new OAuth 2.0 and OpenID Connect-related features that you can try in the nightly builds of OpenAM. But those aren’t the only important changes for the next major release. Among the many others are changes to policy management, including a slick new UI for creating and editing policies.

Rest assured that your existing agents and existing policies still work fine after you upgrade your OpenAM servers. Policy evaluation works as it did before. What’s been done is to take full advantage of the newer policy engine implementation that is already in use behind the scenes today, but was not made visible in OpenAM 11.

The new policy editor (re)introduces the notion of application. An application serves as a template for policies, and helps you to group policies. When you open the policy editor in the OpenAM console, the first thing you see is the list of applications for the realm where you are managing policies. The default application in the top-level realm, geared for web and Java EE policy agents, is named (with a bit of nostalgia for the old folks) iPlanetAMWebAgentService.

[Screenshot: PolicyEditor]

You can then drill down to create or edit a policy that belongs to this application.

[Screenshot: PolicyEditorApplication]

Notice the policy for the OAuth 2.0 authorization server & OpenID Provider. It is when you start creating or editing the policies that you see the strongest features of the new UI.

[Screenshot: PolicyEditorPolicy]

As you see in this excerpt (conditions and response attributes are not shown), the policy editor works like a wizard, guiding you through the steps to build your policy. You can build up subject and environment conditions using the logical operations. The image shows a subject condition for all authenticated users but not the demo user.

One thing to keep in mind when creating your own applications, especially in sub-realms: OpenAM can now direct policy agents straight to an application, so you don’t always need referrals (unless, for example, you want to have a policy agent protecting multiple applications). This is done with a couple of new properties in the policy agent profile. Here’s a snapshot of the Policy Client Service section of a web policy agent profile screen showing the new properties.

[Screenshot: PolicyClientService]

These properties are not actually used by the policy agent, but instead by OpenAM, when it directs policy decision requests to the right realm and application.

Underlying the new UI is a full REST API for policy management and policy evaluation. The REST API opens up additional application types, letting you describe applications where the verbs are not necessarily those of HTTP, and write custom policy enforcement points for such applications.
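
As a rough sketch of what that looks like in the current drafts, a policy evaluation request sends a JSON payload along these lines to the policies endpoint; the values below are illustrative, and the exact endpoint and field names may still change before release:

{
    "resources": [ "http://www.example.com/index.html" ],
    "application": "iPlanetAMWebAgentService",
    "subject": {
        "ssoToken": "demo-sso-token-goes-here"
    }
}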

There are still a few changes forthcoming in the UI, and the documentation is catching up with the code, but this is something you can start playing with already in the latest builds.