BPM interoperability: Look Ma, no hands

I have been tuning in to various rants on LinkedIn regarding the complexity and cost of legacy BPM software suites.

The idea of an “all-singing, all-dancing” software suite goes back a number of years, but it started “feature wars” across vendors, with the not-unanticipated result that many of these systems have collapsed, or are about to collapse, under their own weight.

My approach to BPM was that we needed a) a process mapping capability, b) a compiler, c) a run-time orchestration/governance environment and d) interoperability. End of.

At the time we did not have a clear notion of how to get our run-time environment to communicate seamlessly with multiple internal and external systems, where each, in the worst case, could be working with a different subset of the available data and using a different data transport format. But we did avoid getting onto the “all-in-one” slippery slope.

Many software vendors I talk to opted to build parsers and formatters into their software suites. The problem with this approach is that there is no end of new and updated data transport formats, so the result has been repeated releases of the suites themselves, which is costly.

The approach we took was to have a separate engine that could detect incoming files, parse them, and format outbound data to various standards.
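The post does not describe the engine's internals, but the “detect incoming files” step can be sketched as a polled inbox directory with a parser registry. The registry keys, parser names, and extension-based detection below are assumptions for illustration only; a production engine would likely inspect file contents, not just extensions.

```python
import os

# Hypothetical registry mapping a detected format to a parser name. The actual
# formats handled by the engine are not named in the post.
PARSERS = {".hl7": "hl7_parser", ".x12": "x12_parser", ".csv": "csv_parser"}

def detect_format(path):
    # Simplest possible detection: the file extension. A real engine might
    # sniff magic bytes or envelope headers instead.
    return os.path.splitext(path)[1].lower()

def poll_inbox(inbox, seen):
    # Scan the inbox once, yielding (path, parser_name) for files not yet
    # handled. A real engine would loop on a timer or use OS file-change
    # notifications rather than a single pass.
    for name in sorted(os.listdir(inbox)):
        path = os.path.join(inbox, name)
        if path in seen:
            continue
        seen.add(path)
        yield path, PARSERS.get(detect_format(path), "unknown")
```

Keeping this dispatch logic outside the BPM suite is what lets new formats be added without re-releasing the suite itself.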

The approach that seemed logical to me was to purchase the various format specifications and write some code. Our developers thought otherwise: no specifications were needed, they said, and no ongoing programming would be needed as and when those specifications might change.

Their only requirement was to get a properly formatted run-time file and declare that to be the template for each format. Obviously, the choice of run-time file required finding a “full-feature” file exercising most or all of the possible constructs allowed by the format “standard”.

They proceeded to develop a smart parser that was able to read lines in the sample template and strip out the data – this gave them their “specification”. As and when changes were made to specifications, often without any advance notice from the publishers, the parser would temporarily throw up its hands, but the only required remedy was to add a couple of new database fields, and things would settle down to no-error processing.
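The template-derived “specification” idea can be illustrated with a delimited flat file. The post does not name the actual transport formats, so the pipe delimiter and field names below are assumptions: the field list is learned from the sample template's header line, and extra columns appearing in later files (the case where the parser “throws up its hands”) surface as candidates for new database fields.

```python
def learn_template(sample_text, delimiter="|"):
    # Derive the field list (the "specification") from the first line of a
    # full-feature sample file; no published spec document is consulted.
    return sample_text.splitlines()[0].split(delimiter)

def parse_record(line, fields, delimiter="|"):
    # Split an incoming record against the learned fields. Columns beyond the
    # learned specification are returned separately; in this sketch, those are
    # the unannounced format changes that become new database fields.
    values = line.split(delimiter)
    return dict(zip(fields, values)), values[len(fields):]
```

For example, learning from a sample header `PATIENT_ID|NAME|DOB` and then parsing a record carrying a fourth column would return that extra column separately, flagging it for a new field rather than failing the run.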

The first high-profile project we used our parser/formatter software on was an e-Hub application where 100+ organizations needed to consolidate data – in this particular case, patient data. The entire project took less than six weeks to get “off the ground” and has yet to encounter any hurdles.

About kwkeirstead@civerex.com

Management consultant and process control engineer (MSc EE) with a focus on bridging the gap between operations and strategy in the areas of critical infrastructure protection, connect-the-dots law enforcement investigations, healthcare services delivery, job shop manufacturing and b2b/b2c/b2d transactions. (C) 2010-2018 Karl Walter Keirstead, P. Eng. All rights reserved. The opinions expressed here are those of the author, and are not connected with Jay-Kell Technologies Inc, Civerex Systems Inc. (Canada), Civerex Systems Inc. (USA) or CvX Productions.
This entry was posted in Business Process Management, Data Interoperability, MANAGEMENT. Bookmark the permalink.
