ForgeRock OpenIDM is built around the concept of flexibility, and that in itself provides a powerful platform for solving all kinds of identity provisioning problems that organizations and enterprises face. Whether it's internal-facing users such as employees or contractors, or external-facing identities such as customers or partners leveraging services your company exposes, OpenIDM is the ideal platform to address these challenges.
However, with this flexibility comes a great deal of potential pitfalls that a reckless implementer can fall into, and since I recently discussed this at the ForgeRock UnSummit in San Francisco, I thought I'd highlight some of these in my blog.
- Isolate each piece of the deployment (e.g. source system, target system, IDM repository, etc.) and measure its performance.
- The weakest link will be the limiting factor for recon performance.
Yes! We all know that mainframes integrated via screen-scraping perform horribly. Don't expect anything different.
- Ensure recon source/target queries return full objects instead of using the default of query-all-ids. This has a dramatic impact on overall recon time, since once the initial source/target queries have executed, the recon doesn't need to query those systems again until the target objects are updated.
- For very large source datasets, use paging with recon source queries (the mapping sketch below illustrates both settings).
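To make this concrete, here is a rough sketch of what such a mapping in sync.json might look like. The mapping name, source system, and page size are made up for illustration, and the paging property names (reconSourceQueryPaging / reconSourceQueryPageSize) reflect later OpenIDM releases, so treat them as an assumption and check the Integrator's Guide for your version:

```json
{
  "name": "systemLdapAccounts_managedUser",
  "source": "system/ldap/account",
  "target": "managed/user",
  "sourceQuery": {
    "_queryFilter": "true"
  },
  "targetQuery": {
    "_queryFilter": "true"
  },
  "reconSourceQueryPaging": true,
  "reconSourceQueryPageSize": 1000,
  "properties": [
    { "source": "uid", "target": "userName" },
    { "source": "mail", "target": "mail" }
  ]
}
```

The point is simply that a query filter of "true" returns complete entries up front, whereas the default query-all-ids forces an extra read per object later in the recon.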
- Isolate audit data within its own database
Audit can be useful and needed, but if it isn't needed, limit the amount of audit data that is written. Avoid using the repo for audit data storage, as this puts stress on the repo and will slow performance; always give audit its own database. If your use case does not require audit at all but does require very high performance, turn audit off altogether.
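As a rough illustration, an OpenIDM 3.x-style audit.json excerpt with the repository log target removed might look something like the snippet below. The exact property names differ between versions (newer releases use audit event handlers instead, including a JDBC handler you can point at a dedicated audit database), so treat this as a sketch rather than a drop-in config:

```json
{
  "logTo": [
    {
      "logType": "csv",
      "location": "audit"
    }
  ]
}
```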
- Be wary of circular updates when using bi-directional mappings.
This is a dangerous one! It's the scenario where I essentially have two authoritative sources for the same piece of identity data, e.g. a database containing a set of identity attributes and the same attributes being stored in OpenIDM as part of the managed user. When changes occur in the database they should propagate back to OpenIDM, and when changes occur in OpenIDM they should propagate to the database, implemented through the use of our hooks, e.g. onUpdate. If you're not careful, you can easily end up in a circular, cascading chain of updates! A sketch of keeping the two mappings symmetric follows below.
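Here is a minimal sketch of two such mappings; the system name (hrdb) and the attributes are hypothetical. The idea is to keep the transforms symmetric and idempotent, so that when a change flows in one direction, the reverse mapping computes exactly the value that is already stored and OpenIDM has nothing left to push back:

```json
{
  "mappings": [
    {
      "name": "systemHrdbAccount_managedUser",
      "source": "system/hrdb/account",
      "target": "managed/user",
      "properties": [
        { "source": "email", "target": "mail" },
        { "source": "phone", "target": "telephoneNumber" }
      ]
    },
    {
      "name": "managedUser_systemHrdbAccount",
      "source": "managed/user",
      "target": "system/hrdb/account",
      "properties": [
        { "source": "mail", "target": "email" },
        { "source": "telephoneNumber", "target": "phone" }
      ]
    }
  ]
}
```

Transforms or onUpdate scripts that always change the object (timestamps, re-hashed values, reformatted strings) break that symmetry, and that is exactly what triggers the cascade.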
- Be wary of mappings which always result in a modified target object. For example, when mapping password -> userPassword via the LDAP Connector, the userPassword attribute in the provisioner MUST have the NOT_RETURNED_BY_DEFAULT and NOT_READABLE flags set; otherwise OpenIDM will compare the clear-text password value to the hashed userPassword value returned by LDAP and always push a change (see the provisioner excerpt below).
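In the LDAP provisioner configuration (provisioner.openicf-ldap.json) this looks roughly like the following excerpt; the surrounding object-type structure is trimmed down for brevity:

```json
{
  "objectTypes": {
    "account": {
      "properties": {
        "userPassword": {
          "type": "string",
          "nativeName": "userPassword",
          "nativeType": "JAVA_TYPE_GUARDEDSTRING",
          "flags": [
            "NOT_RETURNED_BY_DEFAULT",
            "NOT_READABLE"
          ]
        }
      }
    }
  }
}
```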
- In a cluster, always set schedules to persisted and disable concurrent execution (a sample schedule config follows below).
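A recon schedule along these lines is sketched here; the mapping name and cron expression are placeholders:

```json
{
  "enabled": true,
  "persisted": true,
  "concurrentExecution": false,
  "type": "cron",
  "schedule": "0 0 2 * * ?",
  "invokeService": "sync",
  "invokeContext": {
    "action": "reconcile",
    "mapping": "systemLdapAccounts_managedUser"
  }
}
```

Persisted schedules are stored in the repository and coordinated across the cluster, and disabling concurrent execution prevents two nodes (or two overlapping firings) from running the same recon at once.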
- Always use the LDAP Connector over the .NET Connector unless it is impossible to do otherwise. LDAP Connector performance is far greater than that of the .NET Connector.
If you are managing only users in Active Directory, without the need to do more sophisticated things like running scripts, PowerShell cmdlets, setting ACLs, etc., the LDAP integration approach is the most suitable and much better for performance.
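For reference, a hedged sketch of pointing the generic LDAP connector at AD is shown below; the bundle version, host, and DNs are placeholders, and the exact bundle coordinates depend on your OpenIDM/OpenICF release:

```json
{
  "connectorRef": {
    "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
    "bundleVersion": "[1.4.0.0,2.0.0.0)",
    "connectorName": "org.identityconnectors.ldap.LdapConnector"
  },
  "configurationProperties": {
    "host": "ad.example.com",
    "port": 636,
    "ssl": true,
    "principal": "CN=idm-svc,CN=Users,DC=example,DC=com",
    "credentials": "change-me",
    "baseContexts": ["CN=Users,DC=example,DC=com"]
  }
}
```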
- Always use immutable IDs for Managed Objects and remote System Objects
Although it is easier when playing with the REST API through tools such as curl to use simplified IDs such as human-readable clear-text names, this is generally a bad idea outside of demoing capabilities, development, etc. Leverage the auto-generated UUIDs each object gets in OpenIDM and you will avoid problems later.
Should you have experienced issues or pitfalls that are worth describing, feel free to drop me a note!