Extending the reach of your BPMs

No BPMs includes all of the functionality you are likely to need to process data along instances of process templates.

Here are two approaches you can use to extend the functionality of your BPMs without the need to customize:

1) Export your data to an external environment, carry out processing, generate a file, then attach the file to the database record that has the focus in your BPMs.

The disadvantage of this option is that your data will not be available to flow along process instances. Re-keying data onto process instance data collection forms, if required, is tedious.

Partially offsetting this disadvantage, as and when you revisit an attached file for editing/versioning, your operating system will automatically launch the appropriate 3rd party application.
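Option 1 can be sketched in a few lines. This is a minimal illustration only; `export_record`, `attach_file`, and the choice of JSON as the export format are assumptions for the sketch, not a real BPMS API.

```python
import json

def export_record(record: dict) -> str:
    """Serialize the focused record's data so an external
    application can process it (JSON chosen for illustration)."""
    return json.dumps(record, indent=2)

def attach_file(record: dict, filename: str) -> dict:
    """Register the generated file against the record that has the
    focus; in a real BPMS this would be an attachment API call."""
    record.setdefault("attachments", []).append(filename)
    return record
```

Note that the attached file is opaque to the BPMs: its contents do not flow along process instances, which is exactly the disadvantage described above.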

2) You can avoid having separate data streams by setting up “wait” rules in your BPMs to block processing along process instances until such time as external instance-specific data becomes available.
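A "wait" rule of this kind is easy to picture: the instance stays blocked at a step until every expected external data item has arrived. A minimal sketch (the field names and the `waiting`/`running` states are illustrative assumptions, not any particular BPMS's rule syntax):

```python
def wait_rule(instance: dict, required_keys: list) -> bool:
    """Return True when all external, instance-specific data
    items have arrived and processing may resume."""
    return all(instance.get(k) is not None for k in required_keys)

def try_advance(instance: dict, required_keys: list) -> str:
    """A scheduler would call this on a poll or inbound-data event."""
    instance["state"] = "running" if wait_rule(instance, required_keys) else "waiting"
    return instance["state"]
```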

Here is the sequence of tasks that must be performed to process data at 3rd party applications:

a) exporting data at a process step,

b) formatting the data for transport,

c) transporting the data to any number of 3rd party apps,

d) importing the data at the remote systems,

e) carrying out processing at the 3rd party apps,

f) exporting the processed data from the 3rd party apps,

g) transporting the data back to the BPMs,

h) importing the processed data into the BPMs.
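The round trip above can be sketched end to end. Everything here is a stand-in: JSON for the transport format, an identity function for the transport channel, and a toy remote step; a real deployment would substitute a message queue, sFTP drop, or HTTP call.

```python
import json

def format_for_transport(data: dict) -> str:
    # b) serialize to a transport format (JSON for illustration)
    return json.dumps(data)

def transport(payload: str) -> str:
    # c)/g) stand-in for a queue, sFTP drop, or HTTP call
    return payload

def remote_process(payload: str) -> str:
    # d)-f) the 3rd party app imports, processes, and re-exports
    data = json.loads(payload)
    data["processed"] = True
    return json.dumps(data)

def round_trip(step_data: dict) -> dict:
    # a) export at a process step ... h) import back into the BPMs
    outbound = transport(format_for_transport(step_data))
    inbound = transport(remote_process(outbound))
    return json.loads(inbound)
```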

Placement of a generic Data Exchanger on the outbound side of your BPMs addresses all of the stated needs and provides the following benefits:

Ideally, your BPMs can automatically export, in real time, all of the data needed by all external apps.

1. The data required by each app can be mapped once and only once at the Data Exchanger such that apps are able to read data using their respective native data element naming conventions.

2. The specific sub-set of exported data required by each app can be accommodated simply by not providing a data element cross-reference (no mapping means no need for the data).

3. Data can be “pushed” to individual apps or picked up, depending on the capabilities of such apps.
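Benefits 1 and 2 come down to one mapping table per app, defined once at the Data Exchanger. A minimal sketch, assuming hypothetical app names and field names; unmapped fields are simply never sent, which is benefit 2:

```python
def translate(record: dict, mapping: dict) -> dict:
    """Rename BPMs data elements to one app's native names;
    fields with no mapping entry are dropped from the export."""
    return {native: record[ours]
            for ours, native in mapping.items() if ours in record}

# One mapping per external app, defined once at the Data Exchanger:
APP_MAPPINGS = {
    "crm": {"cust_name": "CustomerName", "cust_id": "AccountNo"},
    "erp": {"cust_id": "PARTNER_ID", "order_total": "NET_VALUE"},
}

def export_to_apps(record: dict) -> dict:
    """Produce one per-app payload from a single BPMs export."""
    return {app: translate(record, m) for app, m in APP_MAPPINGS.items()}
```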

Additional Requirements:

* a formatter is required to accommodate each format that a new trading partner identifies,

* a parser is required to accommodate each format that a new trading partner identifies,

* encryption/decryption software is typically required to protect data during transport.
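The formatter/parser requirement naturally becomes a registry: one entry per trading-partner format, with conversion as parse-then-format. A sketch using Python's standard `json` and `csv` modules for two illustrative formats:

```python
import csv
import io
import json

def _to_csv(rows: list) -> str:
    """Emit a list of flat dicts as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# One formatter and one parser per trading-partner format:
FORMATTERS = {
    "json": lambda rows: json.dumps(rows),
    "csv": _to_csv,
}
PARSERS = {
    "json": lambda text: json.loads(text),
    "csv": lambda text: list(csv.DictReader(io.StringIO(text))),
}

def convert(text: str, src: str, dst: str) -> str:
    """Parse inbound data in one partner's format and re-emit
    it in another's."""
    return FORMATTERS[dst](PARSERS[src](text))
```

Adding a new trading partner then means registering one new formatter/parser pair rather than writing a bespoke point-to-point bridge.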

Extend the capabilities of your BPMs! The world is your oyster.

About kwkeirstead@civerex.com

Management consultant and process control engineer (MSc EE) with a focus on bridging the gap between operations and strategy in the areas of critical infrastructure protection, major crimes case management, healthcare services delivery, and b2b/b2c/b2d transactions. (C) 2010-2019 Karl Walter Keirstead, P. Eng. All rights reserved. The opinions expressed here are those of the author, and are not connected with Jay-Kell Technologies Inc, Civerex Systems Inc. (Canada), Civerex Systems Inc. (USA) or CvX Productions.

1 Response to Extending the reach of your BPMs

  1. Karl-Walter, great post! The necessity of having simple, clean and consistent data within the process is too often forgotten, especially when the process flow is the one and only perspective that many take. You can have perfectly designed processes which turn to rubbish with less than perfect data (and content, for that matter).

    We take the approach to let the IT department create user-understandable data objects (type, states and attributes) and define how they are being read and written from/to the silos. For the business user it is now as simple as to select such a data object and place it into the case/process/task container. From this point on this data object is accessible on all levels. That includes automatic/default form generation, natural language rules, goal definitions, and reading and writing data from and to content.

    Most of the above is heavy coding/scripting to be done by IT for EVERY process definition in most BPMS. Clearly, as soon as a data object is defined once in the Papyrus platform it can be reused for any kind of case or process requirement. While we focus on the format that the silo supplies, and on making it clear to the user that he is dealing with data from another system, it is easy to take data from one silo and map it to another within a single data object. What you do not want is the user having to wonder why he has TWO CUSTOMER objects for the same person with different data!

    Thanks for bringing up this important point.

