Stream application logs to FireEye TAP using rsyslog File Monitoring

Introduction to FireEye TAP

The FireEye Threat Analytics Platform is a cloud-based solution that enables security teams to identify and effectively respond to cyber threats by layering enterprise-generated event data with real-time threat intelligence from FireEye. The platform increases the overall visibility into the threat landscape by leveraging the FireEye Threat Prevention Platforms’ rich insights into threat actor profiles and behavior. More details can be found here:

FireEye Threat Analytics Platform

Use Cases

One business need this integration addresses is the concept of an "Identity Explorer", with which administrators and case analysts can review identity-related incidents from across the enterprise. The ForgeRock-FireEye TAP solution helps heighten the sense of security, especially around BYOD events such as new mobile device registrations.


A sample case for detecting fraudulent device registrations is documented here. In this typical scenario, a user registers a new device, or logs in with a new device, from an unknown location; such a login may be deemed fraudulent. The key to correctly detecting fraud in this case is knowing that the new location is not one the user would normally log in from.

Sample rsyslog Configuration

$ModLoad imfile
$InputFilePollInterval 10
$PrivDropToGroup adm
$WorkDirectory /var/spool/rsyslog

$InputFileName /home/ec2-user/openam12/openam/debug/Authentication
$InputFileTag debugAuth:
# The state file name must be unique for each file being polled
$InputFileStateFile stat-debugAuth12-access
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor

$InputFileName /home/ec2-user/openam12/openam/log/amSSO.access
$InputFileTag amSSO:
# The state file name must be unique for each file being polled
$InputFileStateFile stat-amSSO12-access
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor

$InputFileName /opt/demo/tomcat7b/bin/access.log
$InputFileTag tomcat7baccess:
# The state file name must be unique for each file being polled
$InputFileStateFile stat-tomcat7baccess12-access
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor
# Template for file events in the TAP ingest format
$template TAPFormatFile,"<%pri%>%protocol-version% %app-name% %procid% %msgid% %msg%\n"

# Forward matching file events to the Communications Broker, then discard them.
# Replace <broker-host> with the address of your FireEye Communications Broker.
if $programname == 'debugAuth' then @@<broker-host>:516;TAPFormatFile
& ~
if $programname == 'amSSO' then @@<broker-host>:516;TAPFormatFile
& ~
if $programname == 'tomcat7baccess' then @@<broker-host>:516;TAPFormatFile
& ~
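The TAPFormatFile template produces a simple RFC 5424-style line: priority, protocol version, app name, procid, msgid and the raw message. As a rough illustration only (not FireEye tooling - the function names and test host are invented), a few lines of Python can build the same message shape and push a one-off test event at the broker over TCP to check the pipeline end to end:

```python
import socket

def format_tap_message(pri, app_name, procid, msgid, msg, version=1):
    """Build a line matching the TAPFormatFile template:
    <pri>version app-name procid msgid msg, newline-terminated."""
    return f"<{pri}>{version} {app_name} {procid} {msgid} {msg}\n"

def send_test_event(host, port=516, message=""):
    """Push one pre-formatted message at the broker over plain TCP."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(message.encode("utf-8"))

if __name__ == "__main__":
    # pri 14 = facility user (1) * 8 + severity info (6)
    line = format_tap_message(14, "debugAuth", "-", "-",
                              "amAuth: test login event")
    print(line, end="")
```

If the event shows up in TAP under the expected program name, the broker and parser are wired up correctly and you can let rsyslog take over.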

OpenAM Debug Logging

Enable debug logging for Category: Authentication in /openam/Debug.jsp

FireEye Communications Broker Setup

You set up FireEye's proprietary Communications Broker software on a Unix server; it listens on TCP port 516 and routes incoming data to the FireEye TAP servers.
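Before pointing rsyslog at the broker, it can help to confirm the listener is actually reachable from the log source. A minimal sketch, assuming only that the broker accepts plain TCP connections on port 516:

```python
import socket

def broker_reachable(host: str, port: int = 516, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, host unreachable, or timed out
        return False
```

A False result usually points at a firewall rule or the broker service not running, which is worth ruling out before debugging the rsyslog side.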

Viewing Parsed Log Messages in TAP

Search for class:forgerock (this would be the name of your integration, as agreed upon with FireEye) and for program:amauth. Other examples are program:amsso and program:ampolicy.

If parsing is working correctly, the TAP administrator will see messages corresponding to the program name show up. In this screenshot the client’s IP is hidden. The next step is to create alerts that key off certain field values parsed out of the logs.

Here is a sample alert for a user logging on from an unknown location:


The following screenshot shows a list of locations that the user User.120 has signed on from over the past month.


The logins from Tokyo, Frankfurt and Singapore could be deemed anomalous, and corresponding logs added to a new incident to investigate this behavior.

Here is the device information shown in TAP:

Here I show how logs from TAP can be added to an existing or newly created incident.

The analyst assigned to service the alert and incident would need to log in to TAP, investigate using session parameters such as timestamp, device name and OpenAM server name, and possibly create a request to revoke or temporarily disable access for User.120 in OpenAM.

Happy Christmas (This isn’t a Scam)

It really isn't - just a simple note to wish all the Infosec Pro readers a relaxing festive break, for yourself, friends and family.

2013 has been an interesting year yet again in the Infosec world.  Connectivity has been the buzz, with topics such as the 'Internet of Things', 'Relationship Management' and 'Social Graphing' all producing great value and enhanced user experiences, but they have brought with them some tough challenges with regards to authentication, context-aware security and privacy.

The government surveillance initiatives on both sides of the Atlantic have brought home the seemingly omnipresent nature of snooping, hacking and eavesdropping.  Whilst not new (anyone read Spycatcher?), the once private and encrypted world of email, SMS and telephony may now never be seen in the same light again.

Snowden continues to grab the headlines, playing an elusive game of cat and mouse between the Russians and the United States.  If, as believed, he has released only 1% of the material he has access to, 2014 could certainly be more interesting.

But what will 2014 bring?  Certainly the same corporate issues that have faced many organisations for the last 3 or 4 years have not been solved.

BYOD, identity assurance and governance, SIEM management, context-aware authentication and the ever-present 'big security data' challenges still exist.  I can only see these issues becoming of greater importance, more complex and more costly to solve.  The increasingly connected nature of individuals, things and consumers is bringing organisations closer to their respective market audiences, but requires interesting platforms, bringing together data warehousing, identity management, authentication and RESTful interfaces.  2014 may just be the year where security goes agile.  We can hope.

By Simon Moffatt

Who Has Access -v- Who Has Accessed

The certification and attestation part of identity management is clearly focused on the 'who has access to what?' question.  But access review compliance is really identifying failings further upstream in the identity management architecture.  Reviewing previously created users, or previously created authorization policies, and finding excessive permissions or misaligned policies shows failings with the access decommissioning process or the business-to-authorization mapping process.

The Basic Pillars of Identity & Access Management

  • Compliance By Design
The creation and removal of account data from target systems falls under a provisioning component.  This layer is generally focused on connectivity infrastructure to directories and databases, either using agents or native protocol connectors.  The tasks, for want of a better word, are driven either by static rules or business logic, generally encompassing approval workflows.  The actual details and structure of what needs to be created or removed are often generated elsewhere - perhaps via roles, end user requests, or authoritative data feeds.  The provisioning layer helps fulfill what system-level accounts and permissions need creating.  This could be described as compliance by design and would be seen as a panacea deployment, with quite a proactive approach to security, based on approval before creation.
  • Compliance By Control
The second area could be the authorization component.  Once an account exists within a target system, there is a consumption phase, where an application or system uses that account and associated permissions to manage authorization.  The 'what that user can do' part.  This may occur internally or, more commonly, leverage an external authorization engine, with a policy decision point and policy enforcement point style architecture.  Here there is a reliance on the definition of authorization policies that can control what the user can do.  These policies may include some context data, such as what the user is trying to access, the time of day, IP address and perhaps some business data around who the user is - department, location and so on.  These authorization 'policies' could be as simple as the read, write, execute permission bits set within a Unix system (the policy here is really quite implicit and static), or something more complex that has been crafted manually or automatically and is specific to a particular system, area and organisation.  I'd describe this phase as compliance by control, where the approval emphasis is on the authorization policy.
  • Compliance By Review
At both the account level and the authorization level, there is generally some sort of periodic review.  This review could be for internal or external compliance, or simply to help align business requirements with the underlying access control fulfillment layer.  This historically would be the 'who has access to what?' part.  This would be quite an important - not to mention costly, from a time and money perspective - component for disconnected identity management infrastructures.  It normally requires a centralization of identity data that has been created and hopefully approved at some point in the past.  The review is to help identify access misalignment, data irregularities or controls that no longer fulfill the business requirements.  This review process is often marred by data analysis problems, complexity, a lack of understanding with regards to who should perform reviews, or perhaps a lack of clarity surrounding what should be certified or what should be revoked.
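The policy decision point / policy enforcement point split described under compliance by control can be sketched in a few lines.  This is purely illustrative - the policy shape and attribute names here are invented, not taken from any particular product - but it shows the core idea: the decision point evaluates a request's attributes (resource, action, business context) against a list of policies and falls back to deny when nothing matches.

```python
def decide(request: dict, policies: list) -> str:
    """Minimal policy decision point: first matching policy wins, default deny.

    A policy matches when every condition key/value equals the
    corresponding attribute on the request.
    """
    for policy in policies:
        if all(request.get(k) == v for k, v in policy["conditions"].items()):
            return policy["effect"]
    return "deny"  # compliance by control: nothing matched, so deny

# Example: allow finance staff to read the ledger during business hours
policies = [
    {"conditions": {"resource": "ledger", "action": "read",
                    "department": "finance", "business_hours": True},
     "effect": "allow"},
]
request = {"resource": "ledger", "action": "read",
           "department": "finance", "business_hours": True}
print(decide(request, policies))
```

The enforcement point would sit in front of the application, pass each request's attributes to `decide`, and act on the returned effect; the hard part in practice is crafting and maintaining the policies themselves.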

SIEM, Activities and Who Has Accessed What?

One of the recent expansions of the access review process has been to marry together security information and event management (SIEM) data with identity and access management extracts.  Being able to see what an individual has actually done with their access can help to determine whether they still need certain permissions.  For example, if a line manager is presented with a team member's directory access which contains 20 groups, it could be very difficult to decide which of those 20 groups are actually required for that individual to fulfill their job.  If, on the other hand, you can quickly see that 12 of the 20 groups were not used within the last 12 months, that is a good indicator that they are no longer required on a day-to-day basis and should be removed.
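The reasoning above - compare what was granted against what was actually used within a time window - is essentially a set difference.  A hypothetical sketch (the event field names are invented, not from any particular SIEM):

```python
from datetime import datetime, timedelta

def stale_groups(granted, access_events, now, window_days=365):
    """Return granted groups with no recorded use inside the window."""
    cutoff = now - timedelta(days=window_days)
    used = {e["group"] for e in access_events if e["timestamp"] >= cutoff}
    return sorted(set(granted) - used)

now = datetime(2013, 12, 1)
granted = [f"group{i}" for i in range(1, 21)]            # 20 groups held
events = [{"group": f"group{i}", "timestamp": datetime(2013, 6, 1)}
          for i in range(1, 9)]                          # only 8 used recently
print(len(stale_groups(granted, events, now)))           # 12 revocation candidates
```

The output is a shortlist for the line manager to revoke or justify, rather than a definitive answer - some permissions are legitimately used only once a year.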

There is clearly a big difference between what a user can access and what they actually have accessed.  Getting this view requires quite low-level activity logging within a system, as well as the ability to collect, correlate, store and ultimately analyse that data.  SIEM systems do this well, with many now linking to profiling and identity warehouse technologies to help create this meta-warehouse.  This is another movement toward the generally accepted view of 'big data'.  Whilst this central warehouse is now very possible, the end result is still only really trying to speed up the process of finding failures further up the identity food chain.

Movement to Identity 'Intelligence'

I've talked about the concept of 'identity intelligence' a few times in the past.  There is a lot of talk about moving from big data to big intelligence, and security analytics is jumping on this bandwagon too.  But in reality, intelligence in this sense is really just helping to identify the failings faster.  This isn't a bad thing, but ultimately it's not particularly sustainable or actually going to push the architecture forward to help 'cure' the identified failures.  It's still quite reactive.  A more proactive approach is to apply 'intelligence' at every component of the identity food chain to help make identity management more agile, responsive and aligned to business requirements.  I'm not advocating what those steps should be, but it will encompass an approach and mindset more than just a set of tools, and rest heavily on a graph-based view of identity.

By analysing the 'who has accessed' part of the identity food chain, we can gain yet more insight into who and what should be created and approved within the directories and databases that underpin internal and web-based user stores.  Ultimately this may make the access review component redundant once and for all.

By Simon Moffatt