The real purpose of a BPMS


I have a client who has been with us for years. They have been using one of our healthcare products for Case Management but somehow never got around to using the workflow management module.

Recently, they started up a new division and asked us to help streamline its operations.

This included mapping out processes, assigning roles to process steps, attaching data collection forms to steps, compiling the maps and rolling out processes to a run-time environment where BPM was able to provide background orchestration and ACM was able to provide governance via global rule sets.

We rolled out each process, invited a group of stakeholders to take on featured Roles and piano-play a few process template instances. As anticipated, we got feedback on the process logic and forms (steps with bad Role encoding, steps in the wrong sequence, wrong forms at certain steps). The versioning turnaround time in most cases was immediate, i.e. change the Roles, change the process logic, change the process step forms, then re-compile and roll out a new version.

Since the mapping was done in real time with the active participation of stakeholders and since the environment lent itself to close to “instant gratification”, it did not take a long time to “improve” the processes.

Unfortunately, for most organizations, the BPM story ends here with delivery of a mapped, modeled, improved process on paper.

The real purpose of BPM, in our view, becomes evident when you put processes in a run time Case environment where you are able to achieve orchestration from BPM and achieve governance from the Case environment.

You need automated resource allocation, leveling and balancing (R.A.L.B.) capabilities at the Case environment so that process steps from multiple instances post automatically at User InTrays as and when these steps become current. The workload typically works out to a Case load of 20-50 patients in healthcare, or possibly 100 Day Order interventions per day for a worker in a job shop manufacturing setting.

Without RALB, workers do not have an easy time prioritizing (micro-scheduling) process steps that post to their individual InTrays nor do supervisors have an easy time leveling and balancing workload across workers.

The other thing often missing is Interoperability – if your Case environment is not able to export data to local and remote systems and applications, and import data from these, your Case environment becomes an “ivory tower” where decisions are made in the absence of current data that could have an important bearing on such decisions.

Being able to manage workflow at Cases is all-important because a Case automatically gives you a history of all interventions at that Case. You get to see data, as it was, in reverse chronological order, on the form versions that were in service at the time the data was collected. Case plus interoperability allows workers to make decisions based on past and current data, and if we add in predictive analytics, workers have the wherewithal to carry out “Case Management”.

It’s hard to imagine how e-Case Management would be incapable of providing at least a 30% improvement in productivity.

If we add to this the increased throughput that results from having next-in-line steps post immediately as current steps are committed, the decreased errors that result from in-line, step-specific rule sets, and the improved compliance that results from Case-level governance, it is hard to understand why so many BPM initiatives quit at the paper process map stage.

A possible explanation is that BPM imposes rigidity in respect of the performance of work. This makes it difficult to get workers on board and even more difficult to sustain your BPM initiative.

The facts are, if you inventory all of your BPM processes at a Services Menu and then include in this inventory a reasonable sub-set of individual process steps as “processes of one step each”, your software users will never feel constrained by BPM – they can, at any time, simply by selecting a menu item, revisit already-committed workflow steps, insert steps not in the process template, and record data at steps not yet current along a BPM workflow.

The final discovery, once you get staff on board with e-Cases, is that with Case-level governance you may find one worker performing interventions using a process template while another worker, for whatever reason, performs the same scope of work via a number of ad hoc interventions that seem unrelated to everyone except that worker.

Since both know what they are doing (i.e. we hire knowledge workers with the presumption that they know what to do and how to do it), it’s not surprising that both are able to reach the same goals/objectives.

The lesson here is clear – make sure you go “the extra mile” with BPM. You can do this by combining BPM, ACM, RALB in an e-Case environment that accommodates interoperability.

The elevator pitch to CEOs/CIOs is easy – workflow management environments provide orchestration (the center-line guidelines along a highway) plus governance (guardrails along the sides of the highway).

The benefits are increased staff efficiency, increased throughput, decreased errors, improved compliance with internal and external rules and regulations, all of which lead to improved outcomes and increased competitive advantage. With a potential 30% productivity improvement, ROIs should turn positive in 12-18 months, with clear sailing beyond this.

The punch line is – “when you have ways and means of doing the right things, the right way, at the right place, at the right time, using the right resources, there is not much that is likely to fall between the cracks”.

Posted in Adaptive Case Management, Automated Resource Allocation, Business Process Improvement, Business Process Management, Case Management, Data Interoperability, R.A.L.B.

So you think you can multi-task?

Before you say yes, check out the video.

Seriously, you have probably noticed that multi-tasking seems routine to some, yet very difficult to others.

You may or may not be able to change the way you are, but here are some ideas that will let you increase your productivity when you are a member of a team.

Rule #1 – do what you do well, let others do what you are not good at (i.e. don’t try to be your own plumber is a variation of this rule many of us have learned to follow).

Rule #2 – when you have a task to perform, take a bit of time to consider a few options before starting to work on the task (i.e. look before you leap).

Rule #3 – avoid elegant solutions to the wrong problem (i.e. Is the problem really a problem? If it is, does it need a solution, or is it possible the problem may go away on its own? Is your proposed approach to the problem likely to yield a solution?)

Rule #4 – remember the effect of “S” curves – when you have to suspend work on a task, you incur time unwinding from the task and when you come back to the task there is a “learning curve” that you typically have to go through.

Rule #5 – when you finish a task, communicate this to other members of your team – everything we do involves the creation of outputs from inputs (i.e. your output becomes someone else’s input, and if you don’t communicate, they won’t be able to do a good job planning their contribution).


Posted in Productivity Improvement

Where, Oh Where, have my Documents Gone?

Decision-making is the art of converting information into action. It’s an art because it requires a blending of experience, knowledge, wisdom, intuition, and information. All of these except information are acquired or intrinsic capabilities.

The problem with information is that it is difficult to bring together all of the content needed to make decisions. The result is many decisions are made in the absence of key information.

If you think about it, the information needed to make most decisions is likely to be spread across multiple “documents” (e.g. MS Word .doc files, .pdfs, spreadsheets, PowerPoint presentations, web pages etc.). These documents are likely to be all over the place. Each document type requires special software for viewing/editing.

Pulling all of these together and keeping them together as documents go through multiple revisions is tedious and time-consuming with no guarantee that all of the information will be where you need it to be at the time you need to make a decision.

Another consideration is that some of the information needed to make decisions relating to one initiative will be common to other initiatives. Corporate Policy & Procedure is a good example. This means that if you want to consolidate information by initiative, you need to duplicate certain documents.

Free-form search Knowledgebases such as Civerex’s CiverManage™ are a solution to the problem of consolidating information and being able to prioritize initiatives that compete for the same scarce resources.

Here’s how you can set up a Kbase that can consolidate all of your documents and organize these for decision-making.

#1 Start by finding a Kbase environment that allows bulk import of the content of your Windows Directories and files located across various servers/PCs. You want to be able to import entire Directories, not individual files.

You will want the documents to be imported into your Kbase’s underlying database management system. Don’t settle for a system that uses links back to source Directories (i.e. if someone moves the files, the links will break; if someone opens a document from a Directory, edits the document and does a save-as, your Kbase will be pointing to the wrong version of the document).

#2 Now, organize your documents the way you wish as “things-within-things-within-things”, same as you would organize physical files, within file folders, within drawers, within physical filing cabinets.

#3 Once your documents are all in one place, create as many alias documents as you like and park these under as many separate “filing cabinets” as you wish. The way alias documents work is that edits done at any alias document operate on the source document for that alias.

This leaves editing to be discussed along with free-form searches.

When you click on a stored document for editing, the Kbase environment needs to automatically do a save to the database before any editing starts so that you end up with a full audit trail of your document content. Since your documents are IN your Kbase, file names are no longer needed (the node name, key words / mirrored content are all you need to find a document).
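The save-before-edit behavior can be sketched as follows. This is a minimal, hypothetical in-memory store (illustrative names only, not the Civerex implementation) that snapshots the stored version of a document before each edit is applied, which yields a reverse-chronological audit trail of content:

```python
import datetime

class DocumentStore:
    """Toy store that snapshots a document BEFORE every edit,
    giving a reverse-chronological audit trail (hypothetical sketch)."""

    def __init__(self):
        self.current = {}   # doc_id -> latest content
        self.history = {}   # doc_id -> [(timestamp, content), ...], newest first

    def save_edit(self, doc_id, new_content):
        # Snapshot the existing version before the edit takes effect.
        if doc_id in self.current:
            stamp = datetime.datetime.now(datetime.timezone.utc)
            self.history.setdefault(doc_id, []).insert(0, (stamp, self.current[doc_id]))
        self.current[doc_id] = new_content

    def audit_trail(self, doc_id):
        # Prior versions, newest first.
        return [content for _, content in self.history.get(doc_id, [])]
```

With this arrangement, no edit ever overwrites history: the database always holds every version that was in service.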

Free-Form Searches

The big advantage of Kbases becomes apparent when you engage a search.

Whereas traditional SQL searches tell you what a query was able to find (providing you ask the software to look in the right place), a free-form Kbase search tells you visually what the software was able to find and what the software was NOT able to find. This greatly increases the amount of information returned per search.

A search at a hospital Kbase can give a patient information regarding which facilities have EMRs but the same search can give a vendor information regarding which facilities do not have EMRs.

If you try to look for a phone number in an address field using traditional SQL you will get back “not found” – the same search in a free-form Kbase will find the phone number even if it is embedded in a memo field.
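The contrast can be illustrated with a minimal Python sketch (hypothetical field names; not how any particular Kbase engine is built): a field-restricted lookup misses the phone number because it only looks where it was told to, while a free-form scan across every field finds it inside the memo field.

```python
import re

# Simple North American phone pattern, e.g. 450-458-5601.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def sql_style_lookup(record, field):
    """Field-restricted lookup: finds a phone number only if you
    asked the software to look in the right field."""
    return PHONE_RE.findall(record.get(field, ""))

def free_form_search(record):
    """Free-form search: scans every field, memo fields included."""
    hits = []
    for field, value in record.items():
        for match in PHONE_RE.findall(str(value)):
            hits.append((field, match))
    return hits

record = {
    "name": "J. Smith",
    "address": "123 Main St",
    "memo": "Patient asked us to call 450-458-5601 after 5 pm.",
}
```

The field-restricted query on the address field returns nothing; the free-form scan reports both where the number was found and what was found.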

It’s not all clear sailing – for fast searches, it pays to mirror document content to plain memo fields, but in respect of, say, PowerPoint presentations, images, etc., you need to assign key words at document nodes so that you can find content. Videos are typically too large to store IN Kbases, so these are exceptions to the “no-links” rule.

With the exception of videos, you can, following import, move original documents to backup and, hopefully, never have to refer to these again.

A Practical Scenario

Consider a filmmaker trying to put together a documentary – there will typically be a screenplay, a project schedule, a project budget, promotional material, financing plans/requests/responses, equipment inventories, multiple video clips, actor/model release forms, copyright permission royalty agreements, etc. If several projects are being worked on at the same time, some of the content may need to be used across projects.

File Folders (paper or electronic) make it difficult to find content.

Large corporations can easily have hundreds of active initiatives plus a number of initiatives under consideration, each with its own set of documents. In respect of initiatives under consideration, most of these initiatives compete for scarce resources. This gives rise to a need to prioritize initiatives which is next to impossible to do looking at one initiative at a time.

Knowledgebases help with information consolidation (products, products under development, asset inventories, staff, internal policy/procedure, external rules/ regulations, markets, competitors) and since there are no physical boundaries to Kbases, it is possible using “Russian doll” data hiding techniques to have any number of project\folders\nodes at one screen with free-form search facilities across the entire space.

The Kbases our group works with usually have in excess of 1,000 documents, sometimes 10,000 documents, all visible and accessible from one screen. Try that on your PC desktop and you will appreciate the value of a Kbase.

If you are not using Kbases, you or your assistants probably spend 1-2 hours a day hunting for information. Cost this out over a year and you have the potential to save a lot of money. The quality of decisions you make will improve as a result of having more complete information available if you migrate your documents to a Kbase.

If you are in the market for a Kbase, pick a LinkedIn discussion topic that has received 200-300 comments and ask the vendor to put together a demo Kbase comprising the comments, organized respondent by respondent, in reverse chronological order. Try to identify a sub-theme that two respondents have found to be particularly interesting and engage a search. Your search results will automatically thread together the dialog between the two respondents.

Here below is a screenshot of a Satellite Inventory, comprising some 6,000 documents relating to three Entities (Satellites, Launch Sites, and Launch Vehicles).

  • A search for a Launch Site, will give you site demographics plus all satellites launched from that site (all countries, all years) plus all launch vehicles used.
  • A search for a Satellite will give you demographics for that satellite and simultaneously highlight Launch Sites and Launch Vehicles used. You will be able to see clearly whether a country used a particular Launch Site for a period of time and then switched to a different site. If that country uses one site for commercial launches and a different site for military satellite launches, you will see it.
  • A search for a Launch Vehicle will give you all satellites/sites that have used this launch vehicle.


Need to find needles in haystacks? Start using a Kbase today!

If you have any questions re your use of Kbases to improve decision-making, call me at 1 450 458 5601.


The setup for investigative searches such as encountered in major crimes case management is different. Here, you will want a mix of “things-within-things-within-things” plus networked documents (i.e. multiple hierarchical trees) as you “connect-the-dots”.

Posted in Enterprise Content Management, Decision Making, Database Technology, Major Crimes Case Management

Case – why you need it (Part III of III)

Here is part III/III on Case Management (ACM, BPM, RALB, ECM, CRM, and FOMM) highlighting data storage issues at Cases plus the need for interoperability and non-subjective ways and means of assessing progress toward meeting Case objectives.

Watch the 11-minute video at

Posted in Business Process Management, Adaptive Case Management, Data Interoperability, Database Technology, Case Management, R.A.L.B.

Case – why you need it (Part II of III)

Here is part II on Case, ACM, BPM, RALB, ECM, CRM, and FOMM.

Watch the video at

Posted in Adaptive Case Management, Automated Resource Allocation, Business Process Improvement, Business Process Management, Case Management, Customer Centricity, Data Interoperability, Enterprise Content Management, R.A.L.B.

How much money should you spend on a promo video?

We live in an era where “also ran” no longer works. The other thing is that viewers of web videos have very short attention spans.

Any small business owner has to decide up front what level of sophistication they want for, to take one example, a promo video.

Four options I can think of are PowerPoint, talking heads, animation sequences and interview style.

All can be absolutely dreadful when done wrong. The latter, IMO, is the most effective when done right.

Here is how we pitch video to our clients as a way toward improving competitive advantage.

The business owner has to understand that for interview style there has to be a script (that must not be referred to during the recording), a good camera (preferably two), good lighting and good sound. The video should not run more than 3-5 minutes.

The script has to immediately answer viewer questions that include

1) should I continue to view this video?

2) is the message addressing a need that I have?

3) does it cause me to want to contact the owner/company to contract for products/services or at least get more information?

Budgets will dictate whether the owner can afford to hire a video production company to hopefully generate good value for money or whether they should try to do the video themselves using consumer level technology.

In respect of the latter, there is no telling whether a do-it-yourself video will be any better than no video at all.

You may have noticed that the media in some countries are shifting away from the use of pro staff/equipment for news event coverage. The difference in quality is dramatic, but the argument against my recommendations is “do contemporary audiences actually care?”

Posted in Video Production

Case – why you need it (Part I of III)

Here is an 11 minute video on Case, ACM, BPM, RALB, ECM, CRM, and FOMM.

No ads, no waiting until the last few frames to find out what the video is all about.

No clue what RALB or FOMM stand for?

Listen to the video to find out.

Posted in Adaptive Case Management, Business Process Management, Case Management, Customer Centricity, Data Interoperability, Enterprise Content Management, Interoperability, R.A.L.B.

Check your Business Rules on the way in and out

All successful business activity takes place under the control of Business Rules.

There are two types of Business Rules: those that issue warnings and those that cause hard stops.

In highly-automated systems such as automobiles, an audible or visual warning is typically issued when a rule is violated (e.g. engine overheating). If the warning is ignored, a state of affairs may be reached where the system causes a shutdown.

A well designed BPM process behaves in much the same way – the process provides behind-the-scenes orchestration and the BPMS environment provides in-line, real-time governance.

Some prefer to use the terms guidelines and guardrails (i.e. center lines on a highway / physical barriers along the sides).

The big question becomes “where do we park business rules?”

If we consider a sequence of linked steps, we know that data flows along template instances.  We can, at any step, test data that is being carried forward as well as data being collected at the step.

We can, for example, prevent alpha characters from being input at numeric fields, reject end dates that are earlier than start dates, and carry out range checks on data values (i.e. percent complete must be between 0 and 100).

The obvious place to park rules is at data collection forms at template instance steps.

If you are at a step, recording data, there is a presumption that you are an appropriate/authorized resource to be at the step and that the step is being performed at the right time.

BPM process maps where steps have plan-side routings take care of some of this but do not take into consideration whether an instance is on the right sub-path (i.e. we engaged processing of an adult healthcare patient only to discover that the patient is actually an adolescent).

Accordingly, whereas process map logic may tell us that a step has become “current” it may be prudent to test via pre-condition rules that it is OK to access the step and initiate processing.

The rule set placement in this case needs to be upstream from the step and this is easily accommodated by a precursor auto-exec step with a routing of “system”.

Users don’t see the step but if the results of processing at the step indicate a problem, a prompt will be issued or a hard stop will occur. If the outcome of the auto-step is a hard stop, processing never gets to the next-in-line step unless/until the problem is fixed or a supervisor inputs an override.

In respect of soft stops, an auto-exec step can be more lenient. Suppose a step asks for a street address for an individual or an organization. Whereas the workflow designer’s preference was to pick up a street address at a particular stage of the processing, an address is not actually needed until such time as a form is submitted, a letter is sent, or a worker goes on the road to that address. Downstream rules will apply a hard stop as and when appropriate.

We can see from the above that auto-exec process control template instance steps are no different from ordinary input/output conversion steps so no special constructs are needed – these steps auto-commit instead of requiring manual commits, their routing is “system” instead of some user skill category.
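As a sketch, an auto-exec “system” step of this kind might look like the following. The Case fields are hypothetical; a hard stop blocks progress until the problem is fixed or a supervisor inputs an override, while a soft stop merely warns and lets processing continue:

```python
HARD_STOP, SOFT_STOP, PASS = "hard_stop", "soft_stop", "pass"

def autoexec_precondition(case):
    """Hypothetical auto-exec step guarding entry to an adult-pathway step.
    Hard-stop if the instance is on the wrong sub-path; soft-stop if
    nice-to-have data is missing; otherwise pass."""
    if case.get("age", 0) < 18:
        # Wrong sub-path: an adolescent patient on an adult pathway.
        return HARD_STOP, "route to adolescent pathway or obtain supervisor override"
    if not case.get("street_address"):
        # Address not needed until a mail-out; warn only.
        return SOFT_STOP, "street address missing; collect before mail-out"
    return PASS, ""
```

Users never see this step; only a fail or a warning surfaces, exactly as described above.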

How do we provide governance at ad hoc steps?

Whereas we can declare an ad hoc step to be a process of one step, if the step form consists of a single memo field, there is not much that we can do. We don’t, in the normal course of events, need to be concerned about what is recorded at memo fields but, you never know, so there is no harm in auto-parsing the data to see if there are any key words/contexts that should be flagged.

Suppose, however, someone wants to ship a prototype.

Global rules say “no shipments unless a product has been through QA”, so the generic solution to this and other unwanted behavior is to isolate users from directly streaming records on templates by presenting the user with a menu of services.

The menu can include all workflow templates plus a replication of all steps within each of these workflow templates as ad hoc steps.

“Build-test-ship” expands to “build”, “ship”, “test” (alpha listing) and in respect of “ship” we would reasonably have, behind the scenes, a two-step template, the first of which is an auto-exec step that asks “has this product been tested?” – if the answer is no, the “ship” ad hoc intervention will fail.
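A minimal sketch of that behind-the-scenes two-step template follows (hypothetical product record; not an actual Civerex construct). The auto-exec governance step runs first and fails the ad hoc “ship” intervention if QA has not been done:

```python
def ship(product):
    """Ad hoc 'ship' menu item backed by a hidden two-step template:
    an auto-exec step asks 'has this product been tested?' before the
    shipping step itself is allowed to run (illustrative sketch)."""
    # Step 1: auto-exec governance step (routing "system").
    if not product.get("tested"):
        return "fail: no shipments unless the product has been through QA"
    # Step 2: the shipping step proper.
    product["shipped"] = True
    return "ok"
```

The user simply sees the menu item fail with a reason; the global rule is enforced without the user ever touching the template.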


We can implement governance at a Case for work that is made up of any mix of structured /unstructured activity without having to jump through hoops.

Simply park your rules at steps or set up links at steps to regional rule sets (avoids any need for 1:1 coupling between any rule set and a process step).

A final refinement: once at a process step, if the processing calls for engaging or reaching out to an algorithm, extend your rules implementation model by preceding the algorithm with pre-conditions and, just to make sure, inserting an auto-exec post-condition step immediately “downstream” from the algorithm.

Do the same as you leave process steps via next-in-line auto-exec “system” steps that include loopbacks in the event of a fail.

Here is an Eiffel code fragment (adapted from Bertrand Meyer’s ACCOUNT class) that illustrates “require”, “do” and “ensure”:

class ACCOUNT create
    make

feature

    -- Attributes as before:
    -- balance, minimum_balance, owner, open ...

    deposit (sum: INTEGER) is
            -- Deposit sum into the account.
        require
            sum >= 0
        do
            add (sum)
        ensure
            balance = old balance + sum
        end

    withdraw (sum: INTEGER) is
            -- Withdraw sum from the account.
        require
            sum >= 0
            sum <= balance - minimum_balance
        do
            add (-sum)
        ensure
            balance = old balance - sum
        end

    may_withdraw ... -- As before

feature {NONE}

    add ... -- As before

    make (initial: INTEGER) is
            -- Initialize account with balance initial.
        require
            initial >= minimum_balance
        do
            balance := initial
        ensure
            balance >= minimum_balance
        end

end -- class ACCOUNT

Apologies to Civerex customers who have never seen nor had to worry about computer code. Rest assured, we will not be putting out upgrades to our software suites that require coding, same as we have not, in the past, required customers to worry about building and maintaining database tables\fields.

Most of the heavy lifting for the approach used in Civerex workflow management software suites comes from research done by Dr. Bertrand Meyer, inventor of the Eiffel language. We are indebted to Dr. Meyer and his team for their important contribution in the area of workflow management.

Civerex was the Canadian distributor of IES for a number of years.


Posted in Case Management, Software Design

Employee Engagement

The best methodologies and working tools will get you nowhere if your employees are not working efficiently and effectively.

Here is a mini-plan to increase employee engagement within your organization.

#1. Hurdle one is to make sure none of the employees hate their jobs. If you get to step #7 with an employee and they are not on board, let both sides move on. It’s critical to have everyone in the boat, rowing in the same direction.

#2. Explain the big picture (why we are all here).

#3. Next, give the employees proper tools to do the work you are asking them to do and train them in the use of these tools.

#4. Show them what’s in it for them to use these tools effectively (e.g. less difficulty performing the work with the tools than not working with the tools, reduced stress).

#5. Give them some responsibility to make decisions.

#6. Mentor employees along a career path tailored to their interests, desires and capabilities.

#7. Get each employee to sign off on their engagement plan.

#8. Do a review every 6 months.

#9. Do daily/weekly walkabouts.

Posted in Organizational Development

Working with KPIs V – Measuring Performance

This is the final post in the series “Working with KPIs”, where operations meets strategy.

In the screenshot below, we have workflows consolidating data in real time to the “3D Printing Corporation” Kbase.

Rule sets at process template steps upload data to the Kbase, where expressions of the type x = x + 1 increment counters and record details of events (e.g. QA defect details).

If a KPI is, for example, a moving average, then algorithms at the KBase can carry out the required additional calculations.
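As an illustration, the counter-plus-moving-average arrangement might be sketched like this (hypothetical interface; a real Kbase would do this via rule sets and Kbase-side algorithms rather than Python):

```python
from collections import deque

class KpiCounter:
    """Sketch of Kbase-side KPI upkeep: rule sets at process steps call
    record_event(), which performs the x = x + 1 increment; a moving
    average over the last n sessions is computed on demand."""

    def __init__(self, window=3):
        self.total = 0
        self.sessions = deque(maxlen=window)  # per-session event counts

    def record_event(self):
        self.total += 1          # the x = x + 1 style counter

    def close_session(self):
        # Roll the counter into the session history and reset it.
        self.sessions.append(self.total)
        self.total = 0

    def moving_average(self):
        if not self.sessions:
            return 0.0
        return sum(self.sessions) / len(self.sessions)
```

For example, with defect counts of 2, 4, 6 and 8 across four sessions and a three-session window, the moving average reflects only the last three sessions.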

Direct data import from local and remote systems and applications can take place at a Kbase without the need to initially import data to your run time environment and, from there, upload the data to the Kbase.


As expected, the Kbase will auto-version all incoming data by “session” so all free-form searches need to be constrained by a time window (e.g. T2 current, T1 last 3 months, T0 last 6 months).


As with a workflow management run-time environment that auto-exports transaction data to

a) a history,
b) a Kbase,
c) a data exchanger,

a Kbase can be configured to export session data for upload to a graphic environment to facilitate trending.

Bottom line, you need 360-degree methodologies to narrow the gap between operations and strategy.

Traditional row-and-column data reporting has gone the way of buggy whips in today’s fast-paced business climate.

“What you don’t see may end up being more important than what you are currently looking at.”

The combination of traditional relational database technology plus advanced graphic capabilities allows you to look at the past, manage the present and get a better idea of what is likely to take place in the future.

To review the entire sequence of operations -> strategy blog posts see the following:

Case Management Tips and Tricks

I    Work Scheduling

II  Decision Support/Work Performance

III Data Exchange

IV Consolidation of Case Data to Executive Dashboards . . .


Working with KPIs

I    Taking Stock

II  Formulating Strategy

III Setting Goals/Objectives

IV Defining KPIs

V   Measuring Performance (this post)


Posted in Enterprise Content Management, Strategic Planning