Closing the Business Planning, Monitoring, Control Circle

As specialty practitioners continue to focus on specific operational methodologies and tools and argue about which are “best”, the fundamental point is that managing a business is about building, maintaining and enhancing competitive advantage.

Any organization that is not able to build/maintain competitive advantage will fade. And, any organization that is unable to enhance its competitive advantage runs the risk of being leapfrogged by disruptive technologies (e.g. fuel cell powered automobiles that can act as portable electrical generators during power outages).

It all starts with strategy formulation.

We all know strategy is typically developed in isolation and then “communicated” to operational units, who are left to interpret it. It’s very easy to identify organizations that have a good strategy with poor operational performance, and there are many examples of organizations that are super-efficient at building products customers do not want.

The notion that “everything is a process” at the operational level is not sustainable in organizations where most of the staff members are knowledge workers. There are few end-to-end processes with convenient end point objectives in these organizations. Instead, we find “process fragments” that people, machines and software thread together at run time.

The range of operational methodologies/tools is such that no amount of buying “best in class” rankings will deter enterprising organizations from finding more innovative/cost effective “solutions” to their problems.

Buyers can “save time and spend” or “spend time and save”.

It’s a bad idea, actually, to wait for problems to be identified and to then only start looking for “solutions”.

Some capabilities need to be classified as “strategic corporate assets”, IT infrastructure being a prime example.

Picking an IT Infrastructure – one of the most important corporate decisions you will make.

If an organization subscribes to the notion that the objective of any business is to remain in business, it is important to pick an Information infrastructure that has the potential to address unanticipated future needs, problems and opportunities.

Easily said, not so easy to do.

But failure to adopt technology that is “future-proof” results in a need to return to the marketplace for new technology in 2-3 years, long before the ROI for the old technology has had time to run its course.

No one understands this better than video production companies who, having recently gone to HD, are currently tripping over each other to upgrade to 4K.

Meanwhile, manufacturers are getting ready to roll out 8K cameras but, wait, Canon just announced a new sensor that is 60 times sharper than 1080p HD.

If an organization wants to be successful, it needs methodologies and tools that are capable of allowing the organization to close the gap between strategy and operations.

If you are a subscriber to my collection of rants, you will have seen comments on the different information architectural needs of transaction processing applications versus applications/environments needed to carry out strategic planning.

Transaction processing applications pull/push messages from/to engines that provide very specific services. A Case-based run-time environment that showcases seamless interoperability has the wherewithal to provide decision support, collect operational level data, share data and build up a history of transactions, Case by Case. All you need in such an environment is the ability to establish a cursor position in an RDBMS and engage processing.

Knowledgebase applications require a different IT architecture.

Here, we have a need for graphic display of thousands (sometimes tens of thousands) of records, organized in multi-root hierarchical trees, with free-form search capabilities across all records. The records typically come from multiple Entities (e.g. resources, customers, products, competitors, policy/procedure/legislation) where, in some organizations, there is no duplication of data elements across Entities.

The focus can be on a particular structured data element value, text, key words, even images, and the scope of any search is likely to extend to current and historical data as opposed to current data only.

The 2nd difference between Kbase application IT architectures and the usual structures found in traditional RDBMS applications is that in a Kbase you need to be able to put a simultaneous focus on all records, not just the “hits”.

Users are likely, some of the time at least, to be more interested in what a search did NOT find than in what it found (e.g. McDonald's today wants to put an outlet at a location where Wendy's is; tomorrow it may want to put an outlet where Wendy's is not).

The 3rd and final difference is that the “structure” of data in a Kbase is likely to change on-the-fly (e.g. I have records organized by City – following a search I may want to cluster some of these, temporarily or permanently, by type of business, keeping the City structure intact).

It’s easy to understand that in a Kbase application you need to load record stubs for all records into memory, carry out searches and then build your KMap out of memory.
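The stub-and-KMap approach can be sketched in a few lines. This is a minimal illustration, not Civerex's actual architecture; the `Stub` fields, the `KMap` class and the sample records are all invented:

```python
from dataclasses import dataclass

@dataclass
class Stub:
    """Lightweight in-memory record stub: id, parent links, searchable text."""
    rec_id: str
    parents: list   # multi-root: a stub may hang off several trees at once
    text: str

class KMap:
    def __init__(self, stubs):
        self.stubs = {s.rec_id: s for s in stubs}

    def search(self, term):
        """Free-form search across ALL stubs; returns (hits, misses) --
        the misses matter as much as the hits."""
        term = term.lower()
        hits = [s for s in self.stubs.values() if term in s.text.lower()]
        misses = [s for s in self.stubs.values() if term not in s.text.lower()]
        return hits, misses

    def cluster(self, key_fn):
        """Re-structure on the fly: group stubs under new temporary roots
        without disturbing the original parent links."""
        clusters = {}
        for s in self.stubs.values():
            clusters.setdefault(key_fn(s), []).append(s)
        return clusters

# Records organized by City, then re-clustered by type of business
stubs = [
    Stub("r1", ["Ottawa"], "fast food outlet"),
    Stub("r2", ["Ottawa"], "pharmacy"),
    Stub("r3", ["Montreal"], "fast food outlet"),
]
kmap = KMap(stubs)
hits, misses = kmap.search("fast food")
by_type = kmap.cluster(lambda s: s.text)
```

Note that `search` hands back the misses alongside the hits (the “where Wendy's is not” scenario), and `cluster` re-groups records while leaving the original City structure intact.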

Different User mindset, different User Interface, different architecture.

See “It’s Time For You To Get Your Big Data Organized”

Stay tuned for more information on “Cases of Cases” in a subsequent blog post.

Posted in Strategic Planning, Operational Planning, Organizational Development, Decision Making, Case Management, Competitive Advantage

The potential of telemedicine for reducing the cost of healthcare and for improving quality of life

Telehealth has the potential to greatly reduce the cost of healthcare services delivery and to greatly increase the quality of life.

In respect of returning military personnel, John Liebert, MD and William JJ Birnes, PhD, JD published in 2013 a book called “Wounded Minds”, in which they highlighted the impact of the inefficiency of traditional treatment approaches ($32.2 billion in annual expenditure for anxiety disorders alone).

These two authors state (page 254), in respect of the use of new suicide prevention initiatives, that “Technology aimed at augmenting therapy is another strategy, one designed to overcome some access to care issues in remote areas. Virtual reality and telemedicine are examples”.

Anxiety is just one area of medicine that can benefit from telehealth; substance abuse, depression and other conditions can benefit as well.

It would, in my view, be a mistake to limit telehealth to the behavioral subset of conditions that patients can present with, or to restrict it to care access in remote areas.

It’s my view we have hardly begun to scratch the surface here.

We need to remind ourselves that the approach to medicine as currently practiced (i.e. fixing problems) is far less efficient than encouraging lifestyles that help to prevent problems from developing (i.e. wellness).

We can use telehealth in the area of treatment planning/monitoring as well as in the area of promoting wellness, with the caveat that no single approach/methodology/technique should replace all others.

The increasing availability of medical devices in the field brings us back to telemetry, a technology that is absolutely pervasive in industry, with origins in the 19th century (a data transmission link between the Russian Tsar’s winter palace and army headquarters was developed in 1845).

My area of interest in healthcare is in continuity of care (i.e. doing the right things, the right way, using the right resources, at the right places and times).

The foundation for this is twofold:

a) there cannot be ten best ways to do something nor should there be only one.

b) healthcare resources are scarce so we need to make efficient use of these.

We can talk on and on about telehealth, but it is an area with many moving parts, and these all have to fit together smoothly and seamlessly if we are to make effective and efficient use of this important technology.

Civerex has been a pioneer in providing infrastructure for telehealth.

We had in place in the early 2000’s telehealth treatment (Tx) planning/monitoring software for use in the treatment of anxiety disorders. Communication at the time was purely via telephone, but with call centers in one time zone, providers in another and patients in yet a third, it was important for the appointment booking module used by call center staff to make sure that providers and patients would “meet” at the right time.
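The core of that booking problem – making sure everyone “meets” at the same instant – comes down to careful time-zone arithmetic. A sketch using Python's standard `zoneinfo` module; the zones and the appointment slot are illustrative only:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

def local_times(appointment_utc, zones):
    """Render one UTC appointment slot in each participant's local zone, so
    call-center staff, provider and patient all 'meet' at the same instant."""
    return {z: appointment_utc.astimezone(ZoneInfo(z)).strftime("%H:%M")
            for z in zones}

# One slot, booked in UTC, shown to three parties in three time zones
slot = datetime(2015, 1, 15, 15, 0, tzinfo=ZoneInfo("UTC"))
times = local_times(slot, ["America/New_York", "America/Chicago",
                           "America/Los_Angeles"])
# In mid-January (standard time): 10:00 New York, 09:00 Chicago, 07:00 Los Angeles
```

Booking in UTC and converting for display sidesteps the classic failure mode of storing “local” times with no zone attached.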

Civerex’s current focus is to provide customers with efficient ways of recording 1:1 telehealth sessions and consolidating video/voice recordings at patient EHRs. We are looking to accommodate live video broadcasts of in-home sessions carried out by clinical staff so that senior staff back at clinics can tune into these broadcasts and provide real-time advice/assistance on the administration of treatment plan protocols.


About Civerex

The owners of Civerex developed in the late 1980’s a software product called RapidTox for the diagnosis of instances of poisoning. We were inspired by the work of Robert Driesbach, MD, who published the 1st edition of Handbook of Poisoning back in 1955.

The foundation of our work on RapidTox was a diagnostic algorithm that was able to identify candidate poisons on the basis of symptoms/signs. Selection of a poison gave the user a list of modalities (treatments that worked) with goals/objectives plus the ability to carry out differential diagnoses.

Our current suite of behavioral/medical software products continues to include the diagnostic algorithm, in addition to putting a focus on encouraging consistent use of “best practices” protocols via background orchestration, with accommodation for deviating from these as and when deemed appropriate/necessary, subject to governance. The two core methodologies we use are BPM (business process management) and ACM (adaptive case management).

Another area of interest is promotion of the concept that discharge planning should start with the first incoming phone call.

We have spent a lot of time on providing seamless interoperability by and between our products and local and remote 3rd party systems and applications, and we promote for general use a product called CiverExchange that addresses this need.

Any groups interested in collaborating with Civerex should contact us at 450 458 5601 to highlight areas of interest, and should be prepared to apply for research grants to fund any collaborative initiatives that Civerex may agree to.

We are happy to provide “private label” versions of our software for loading content likely to be of interest to different communities of prospective users.


Posted in Adaptive Case Management, Business Process Management, Data Interoperability, FIXING HEALTHCARE, Interoperability, Telehealth, Video Production

What do Process Maps and Suicide Sheep have in Common?

I didn’t expect you to “get it”.

Check out this Factual Facts post: “In 2005 in Turkey, a suicide sheep jumped off a cliff and 1500 sheep followed the first one”.

Now, before you go away, I need to very quickly make an important connection.

Many process maps get prepared on paper, are then filed, and are never referred to again.

If this is what happens to the maps you prepare, you might as well prepare them and throw them off a cliff.

My point is. . . .

For any process that has complex steps, connected sometimes in complex ways, with multiple decision box branching points, where different skilled persons must perform different steps, where different steps require the collection of different data, where you want to carry out data mining after-the-fact . . . .

there is NO way instances of your process templates can be “managed” by staring at a paper map and NO way, however long staff might stare at such maps, that you will be able to have process instances performed properly (doing the right things, the right way, at the right time, using the right resources, collecting the right data).

Remedy . . . .

Map your process maps on an e-canvas, compile them, roll them out for modeling/simulation, improve your process maps, then roll them out again so they can provide real-time orchestration within a Case run-time environment.

Otherwise, see you at the bottom of the cliff.


Posted in Adaptive Case Management, Business Process Management, Case Management, Nonsense

What’s your action plan for BPM for 2015?

We all understand that organizations need best practices and that, in the case of complex processes, these need to be in-line, not on-line and certainly not off-line.

In theory, you can ask data mining software to run through your event logs and build up an inventory of all processes. If your processes are in-line you will have an event log, otherwise you probably won’t.
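In rough terms, that kind of mining boils down to grouping an event log by case, ordering by timestamp and counting the distinct activity sequences. A toy sketch, with invented log fields (case id, timestamp, activity) standing in for what real process-discovery tooling would consume:

```python
from collections import Counter

def discover_variants(event_log):
    """Group events by case id, order by timestamp, and count each distinct
    activity sequence -- a minimal stand-in for process-discovery software."""
    cases = {}
    for case_id, timestamp, activity in sorted(event_log,
                                               key=lambda e: (e[0], e[1])):
        cases.setdefault(case_id, []).append(activity)
    return Counter(tuple(seq) for seq in cases.values())

# A tiny event log: three cases, two distinct process variants
log = [
    ("c1", 1, "receive"), ("c1", 2, "triage"), ("c1", 3, "treat"),
    ("c2", 1, "receive"), ("c2", 2, "treat"),
    ("c3", 1, "receive"), ("c3", 2, "triage"), ("c3", 3, "treat"),
]
variants = discover_variants(log)
```

The point stands regardless of tooling: no in-line processes means no event log, and no event log means nothing for this kind of analysis to chew on.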

So, in the absence of an existing complete inventory of in-line processes, you have little choice but to drag key stakeholders into a room or a virtual room and facilitate process mapping.

Do this and you will be at a stage where how you get to mapped, improved, modeled, compiled and rolled-out processes will be mostly up to you.

You can use an e-canvas to drag and drop process steps as fast as stakeholders say “and then we do this”, click on your compiler and be done OR you can take notes, go away for 1-2 days and then reconvene to discuss your paper process map.

You can easily end up staying away for several weeks if you elect to write computer code to get to where you can “piano-play” your process so that stakeholders can identify missing steps, steps that are improperly sequenced, and steps that have the wrong attached forms or wrong routings.

If you are a consultant trying to stretch out your engagement, this “buggy whip” strategy will only work until your client finds out how long process mapping should take, using appropriate technology.

If you are a BA/BI staff member, undue dragging-on of process mapping initiatives will tax the patience of your stakeholders and put the initiative at risk – most of these folks would rather be somewhere else.

Don’t know where to start with BPM? Google “BPM” – you will see some 68,600,000 results.

Another option is to browse my 235 blog articles, written over a period of four years.


Posted in Business Process Improvement, Business Process Management, Operations Management, Process Management, Process Mapping

Get Your Story Right

If you want to become a management consultant, it’s unreasonable to expect to ease into this the day after you graduate from university.

It takes time to build up the expertise to walk in off the street and facilitate strategy development sessions within a corporation, and time to work out ways and means of helping corporations to align operations with strategy.

Once you are ready, unless you plan to rely on word of mouth, you need to get out a story that allows prospective clients to move beyond the notion that all you will do for them is borrow their watch to tell them what time it is.

Most consultancies provide consulting services only, leaving it up to prospective customers to do most of the hands-on heavy-lifting that follows receipt of a study report.

Civerex followed a different path – we have been an operating company for most of our 30-year history and only recently started to offer consulting services to clients.

Our clients quickly discover that whereas we offer non-product-specific advice/assistance, we have a range of products that can be used to move forward from study reports to implementation of ideas/concepts.

All of these products have a stronger focus on application system development as opposed to plug-in “one-size-fits-all” solutions. They can be configured for different industry areas/application situations, either by us, or by independent consultants/implementers or, if a client feels up to it, by internal staff that Civerex mentors.

Here, for the record, is the Civerex History.

Make sure your story is not too short, nor too long and that it reads well.


About Civerex

2015 is our 30th anniversary . . .

We started off as Jay-Kell Marketing Services in 1985, organizing high-tech seminars out of Singapore on satellite communications, military radio, LANs/WANs and object-oriented programming.

In 1990 we moved the business to Canada, continuing with seminars, but adding sales/support for Canada for a range of 4GL and O-O software products manufactured by mdbs inc., a Lafayette, IN based software company.

In 1992 we started to develop software applications on our own, thanks to a grant from the Ontario Hospital Association to develop an expert system for the diagnosis of psychiatric disorders. Our first customer was the Royal Ottawa Hospital.

We did a spin-off of software distribution/support to a new business entity, Civerex Systems Inc., in 1994.

Civerex became a sole-source supplier to DOJ/FBI in the late 1990’s and spent a number of years developing software for the management of critical incidents in the areas of hostage, barricade and suicide situations. We developed for the FBI a software application called L.E.N.S. (law enforcement negotiation system).

During the early 2000’s we formed a joint-venture with an aerospace engineering company called Infinity Technologies and ran our US operations out of Huntsville, AL for a number of years. Infinity was eventually bought out, Infinity/Civerex LLC was shut down, and Civerex (Canada) took over the ICLLC customer base.

Jay-Kell Technologies today continues to own all of the intellectual property for some eight (8) software products (knowledgebases, e-mapping, entity record management, portal, and data exchange) and has responsibility for private-label licensing. The way we see it, having two out of three competitors proposing our technology on an RFP initiative is a good thing.

In 2010 we started to provide management consulting services with a specific focus on bridging the gap between strategy and operations.

In 2012 we added a video production unit to address the growing demand for the development of advanced sales/promotion approaches and at-a-distance-customer training.

As you can see, it’s not our first rodeo.

We can bring a wide range of international hands-on expertise to bear on problems/issues you may need help with.

Call 800 529 5355 (USA) or 1 450 458 5601 elsewhere for more information.



Posted in Adaptive Case Management, Business Process Improvement, Business Process Management, Competitive Advantage, Compliance Control, Customer Centricity, Data Interoperability, Enterprise Content Management, MANAGEMENT, Operational Planning, Operations Management, Organizational Development, Planning, Productivity Improvement, Strategic Planning

How certain can we be about uncertainty?

The answer is 100% – anything we map out as a plan for the future will be characterized by risk and uncertainty.


We can quantify risk, but under many scenarios the only thing we can say about uncertainty is that it will always be on the horizon.

What do we do when an initiative is impacted by a significant negative event?

A practical example of an unanticipated negative event is a product development initiative using a particular technology that gets hit by a leapfrog technology.

E.g. You might be in the process of perfecting a new type of gasoline engine and simultaneously be hit with an oil price drop (this week) plus an announcement by Toyota (this week) that they plan to focus on powering automobiles via fuel cells.

The fact that cars with fuel cells can provide an electrical feed that can handle some of your household electrical needs for a couple of days is bad news for the makers of residential standby generators. It does mean, though, that you pretty much have to avoid driving until the power company grid comes back on-line, unless you have two cars.

I wonder how long residential standby generator manufacturers have known about the possibility of using a car as a backup generator.

Adaptive Case Management is a methodology that allows knowledge workers to change course on short notice. ACM is not, however, particularly helpful for predicting uncertainty.

Building and maintaining KnowledgeBases can help you to keep an eye on open options, on technology and on the competition.

If you take the trouble to include KPIs in your Kbase and take the time to challenge trends in your KPIs by carrying out free-form searches across your Kbase, you may have a leg up on your competitors.

How do we find people who are good at predicting the future?

Answer: trying is pretty much pointless.

They are difficult to find – most of them retire to fortified islands in the Caribbean and disconnect their phones.


Posted in Decision Making, Operational Planning, Strategic Planning

Do the Right Things the Right Way and you will be OK

I am including here a link to an interesting and thought-provoking article called “How to Make a Decision on Just About Anything” by Craig Reid, CEO of the Process Improvement Group.

I commented on his article as follows:

As Señor Wences might have said “Easy for you, difficult for us”

Making the right decisions has hurdles every step of the way.

  1. All options – the ones to worry about are the ones you did not think of.
  2. Eliminate all options that don’t work – you often don’t know if an option will work/not work until you try it. Going down multiple garden paths takes time and money.
  3. List your criteria – good move, but who says the criteria you list are in your or your corporation’s best interest?
  4. Give the criteria weightings – again, good idea. But how do you do this non-subjectively, taking into account risk, uncertainty? Not easy to be certain about uncertainty.
  5. Review all options from the start – you can loop again and again and make the same errors/omissions each time. There is no guarantee any of this advances your ability to make a good decision. The best advice here is ‘include what you forgot to include’.
  6. Adding up the scores – somewhat less difficult but consider the number of spreadsheets out there with faulty calculation rules that are being used to “steer the ship”.
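The arithmetic in steps 3–6 is the easy, mechanical part – a few lines suffice – which underlines that the real difficulty lies in the subjective inputs. A sketch in which the criteria, weights and ratings are all invented for illustration:

```python
def score_options(options, weights):
    """Weighted-criteria scoring (steps 3-6 above). The weights and ratings
    are subjective inputs; the arithmetic itself is trivially mechanical."""
    return {
        name: sum(weights[criterion] * rating
                  for criterion, rating in ratings.items())
        for name, ratings in options.items()
    }

# Invented criteria weightings (step 4) and per-option ratings
weights = {"cost": 0.5, "risk": 0.3, "fit": 0.2}
options = {
    "A": {"cost": 8, "risk": 6, "fit": 9},
    "B": {"cost": 6, "risk": 9, "fit": 7},
}
scores = score_options(options, weights)
best = max(scores, key=scores.get)  # the 'winner' -- only as good as its inputs
```

Which is exactly the spreadsheet problem restated: the totals look authoritative, but a faulty weight or a forgotten option corrupts the answer invisibly.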

Donald Rumsfeld did a pretty good job of highlighting what needs to be considered when making decisions.

I would have loved to have been there to see the expressions on the faces of those who were present when he said the following:

“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know”

When we examine some of the bad decisions made over time by governments and industry, it’s clear there is a fourth category that needs to be considered, i.e. “unknown knowns” – information we have (somewhere) but that may not be available to us at the time we need it.

Get a good handle on all four categories and you have the basis for making good decisions.


Posted in Decision Making

Case Cloning – For Better Outcomes

Remember Dolly?

Let’s start with the basics.

CASE is a place.

It’s where you collect data, make decisions, manage work and workload, record progress toward meeting objectives, and it’s a place where you make data available for use by local and remote 3rd party systems and applications.

It’s not a methodology.

We manage Cases using various methodologies such as background BPM (“best practices” orchestration), background ACM (flexibility, governance), RALB (Resource Allocation, Leveling and Balancing) and FOMM (Figure of Merit Matrices).

Clearly, we want consistently good outcomes at Cases.

Cloning Cases

It follows that if some Cases have better outcomes, it makes sense to use one of these as a template for a new Case as opposed to starting from scratch with process fragment templates and then adapting the Case by adding ad hoc interventions as the new Case evolves.

All of which brings us to the notion that cloning Cases that had good outcomes “is a good thing”. Martha Stewart would agree.

Our starting position is to identify Cases that had good outcomes.

Not much guidance is available here – you have to define a set of KPIs (key performance indicators) at Cases, then consolidate and weigh these over time across multiple instances to where you can “rank” Cases according to outcome. The rest (i.e. picking a high-ranking Case to clone) becomes relatively easy and non-subjective.
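A minimal sketch of such KPI-based ranking; the KPI names, weights and Case data are invented for illustration, not taken from any actual Case system:

```python
def rank_cases(cases, kpi_weights):
    """Rank closed Cases by weighted KPI score, best first; the top-ranked
    Case becomes the cloning candidate."""
    def score(kpis):
        return sum(kpi_weights[k] * v for k, v in kpis.items())
    return sorted(cases, key=lambda c: score(c["kpis"]), reverse=True)

# Hypothetical KPIs, each already normalized to a 0..1 scale
kpi_weights = {"outcome": 0.6, "cost_efficiency": 0.2, "timeliness": 0.2}
cases = [
    {"id": "case-17",
     "kpis": {"outcome": 0.9, "cost_efficiency": 0.7, "timeliness": 0.8}},
    {"id": "case-23",
     "kpis": {"outcome": 0.6, "cost_efficiency": 0.9, "timeliness": 0.9}},
]
ranked = rank_cases(cases, kpi_weights)
clone_candidate = ranked[0]["id"]  # highest weighted score wins
```

Once the weights are agreed, the pick is mechanical – which is the point: the subjectivity is confined to defining the KPIs and weights, not to choosing among Cases.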

Except that if we pick an area like healthcare, where it can seem that each patient is unique, we need a few ground rules.

In healthcare you rarely see end-to-end “best practices”.

What we see instead are process fragments that healthcare professionals, robots and software thread together.

Whereas we used to have “the chart”, we now have e-charts, and the idea is you go there before you make any decision regarding the next intervention for a patient.

Not surprisingly, above and beyond process fragment templates, we see ad hoc interventions at healthcare Cases based on actor experience, judgment, peer consultations, and, again, e-chart reviews.

How / what do we clone under this scenario?

  1. Clone a “good” Case.
  2. Mark up the Clone to show the sub-pathways that the Patient who was the focus in the source Case navigated through.
  3. Purge out information that would allow you to identify the source Patient but keep data like symptoms/signs, diagnoses, treatment plans, goals/objectives.
  4. Stream your new Patient onto the Cloned Case’s pathways (including ad hoc steps which you can consider to be “pathways of one step each”).
  5. Pale out the inherited data from the source Case.

Now, as you proceed to process the new Patient, post the forms at each step as steps become current and record, for example, symptoms/signs just as you would starting from an empty Case. In respect of paled-out data, treat the data as you would when doing a differential diagnosis (e.g. does the new Patient perhaps have “night terrors”?).

If the patient says yes, affirm the inherited data; if they say no, do nothing or expressly null out the data. The processing rules will automatically drop any un-affirmed data as you leave each form and commit the step.
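The affirm/null behavior described above can be sketched as a simple commit rule; the field names and the `inherited`/`affirmed` flags here are hypothetical, not actual product fields:

```python
def commit_step(form_data):
    """On leaving a form, keep newly recorded data and expressly affirmed
    inherited ('paled out') data; silently drop un-affirmed inherited data."""
    return {
        field: entry["value"]
        for field, entry in form_data.items()
        if not entry.get("inherited") or entry.get("affirmed")
    }

# One form on the cloned Case: two inherited values, one newly recorded
form = {
    "night_terrors": {"value": True, "inherited": True, "affirmed": True},
    "insomnia":      {"value": True, "inherited": True, "affirmed": False},
    "headache":      {"value": True, "inherited": False},
}
committed = commit_step(form)
# night_terrors survives (affirmed); headache survives (newly recorded);
# insomnia falls away (inherited but never affirmed for the new Patient)
```

The design choice worth noting: inherited data defaults to *dropped*, so nothing from the source Patient leaks into the new record unless a clinician actively confirms it.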

At decision boxes, reach a decision re which way to branch in the usual way but, again, take note of the source Case branching. Do not allow the branching choice in the previous Case to bias your decision-making, but remember that the reason the source Case had a good outcome is that processing at the reference Case went this way instead of that way.

You can see that this type of shadowing of your new Case gives you something of a real-time “second opinion”.

Not into healthcare services delivery?

No problem – the same approach can be used for manufacturing, insurance and B2B.

In fact, if each new Case does not lean too far toward once-off, you might be able to skip the manual decision-making and consider “Meta Case” management where software picks the best reference Cases and automatically updates “best practice” templates.

We will steer clear of Meta Cases in the healthcare domain for the time being.

Posted in Adaptive Case Management, Business Process Improvement, Business Process Management, Case Management, Decision Making, FIXING HEALTHCARE, R.A.L.B.

Where do we go from here with BPM?

I recently participated in an ABPMP LinkedIn group discussion, “What do you think are the most demanded BPM services (for 2015 and beyond)?”

We started off listing the “services” as “discovery, mapping, modeling, analysis, improvement, automation”.

From my experience, most BPM initiatives that fail, fail at “automation”; this, therefore, is the service area that should be in demand.

By “automation” I don’t mean having all process steps performed by robots/software. Automation here refers to “orchestration” (guidance) and “governance” (guardrails).

When you have complex processes comprising steps, often connected in complex ways, where different skills are needed to perform different steps, it’s unreasonable for any BPM initiative to quit with “publication of a paper process map”.

The real life scenario is you will have multiple process templates, with multiple instances of each at each Case, with deviations at certain steps along instances (skipping over a step, re-visiting already committed steps, inserting steps not in the template, recording data and partially completing steps that are not yet current along an instance).

The other reality is there usually are scarce resources and competing priorities with the result that the time to run through an instance is greater than the sum of step durations.

The fact that users often are forced to multi-task results in extra time above and beyond wait times between steps. Each time a worker suspends work at a step, there is an exit time and then a re-entry time when he/she comes back to the step.
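Put as arithmetic, elapsed time for an instance is step work time plus inter-step waits plus exit/re-entry overhead from multi-tasking. A toy illustration with invented figures:

```python
def instance_elapsed(steps, wait_between, exit_reentry, suspensions):
    """Elapsed time for a process instance: step work time, plus wait time
    between steps, plus exit/re-entry overhead each time a worker suspends
    a step to multi-task. All figures here are illustrative only."""
    work = sum(steps)                         # sum of step durations
    waits = wait_between * (len(steps) - 1)   # queuing between steps
    switching = suspensions * exit_reentry    # context-switch overhead
    return work + waits + switching

# Three 2-hour steps take far more than 6 hours end-to-end:
hours = instance_elapsed(steps=[2, 2, 2], wait_between=4,
                         exit_reentry=0.5, suspensions=6)
# 6 hours of work + 8 hours waiting + 3 hours of exit/re-entry = 17 hours
```

Even with made-up numbers, the shape of the result is the real-world one: the work itself is a minority of the elapsed time.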

So, it follows that organizations need automated resource allocation, leveling and balancing (RALB).

We can look to background BPM and RALB to provide resource allocation, provided there is an environment to host RALB.

Since workers are not robots and do not like to be treated as robots, organizations need to empower workers to micro-schedule their task loads (leveling).

Since priorities can change, supervisors need to be able to balance workload across workers (balancing).
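Allocation and balancing can be sketched as simple queue operations (leveling – a worker re-ordering his/her own queue – is just a list re-sort). The worker names, skills and step names below are invented:

```python
def allocate(step, workers):
    """Allocation: post a step to the least-loaded worker holding the
    required skill."""
    eligible = [w for w in workers if step["skill"] in w["skills"]]
    chosen = min(eligible, key=lambda w: len(w["queue"]))
    chosen["queue"].append(step["name"])
    return chosen["name"]

def balance(workers, step_name, from_name, to_name):
    """Balancing: a supervisor moves a queued step between workers as
    priorities change."""
    src = next(w for w in workers if w["name"] == from_name)
    dst = next(w for w in workers if w["name"] == to_name)
    src["queue"].remove(step_name)
    dst["queue"].append(step_name)

workers = [
    {"name": "rn_1", "skills": {"RN"}, "queue": ["intake"]},
    {"name": "rn_2", "skills": {"RN"}, "queue": []},
    {"name": "md_1", "skills": {"MD"}, "queue": []},
]
# Auto-allocation picks the least-loaded RN
assigned_to = allocate({"name": "assessment", "skill": "RN"}, workers)
# Supervisor re-balances: move 'intake' off rn_1
balance(workers, "intake", "rn_1", "rn_2")
```

Real RALB sits on top of much richer data (priorities, availability, locations), but the division of labor is the same: software allocates, workers level, supervisors balance.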

But, we are not done – we need “governance” in addition to “orchestration” and this, hopefully, comes from the Case environment that is hosting RALB and background BPM.

Next, there is the issue of managing Cases – you can very easily be doing things the right way but not doing the right things.

Following a template can guarantee that you perform the work the right way, but it does not mean that use of the template will advance the state of the Case.

In ACM/BPM, unlike straight-through mostly linear processes, there are no convenient objectives at the end of a template.

What counts is progress toward Case-level objectives. Any work that does not directly or indirectly advance the state of a Case should not be performed.

Finally, we have the issue when defining Cases that Case objectives should at all times be supportive of strategy and conceptually, at least, contribute to “competitive advantage”.

A worthwhile focus for any management consultant for 2015 is to help corporations become more aware of the extent to which their KPIs are relevant in the context of where they want to be 1-2-3 years from now.

Too often, I find they are looking at the wrong KPIs because of failure to keep strategies current and failure to close the gap between operations and strategy.


Posted in Adaptive Case Management, Business Process Management, Case Management, Process Management, R.A.L.B., Strategic Planning

Decisions, Decisions, Decisions

One of the main benefits of BPM is its potential to provide background real-time decision support at steps along “best practices” process template instances.

Prerequisites to providing functional unit end users with real-time decision-support include:

  1. Ability to roll out compiled process maps to a run-time environment (process steps post automatically to the attention of workers who have the appropriate operational skills and who are available to perform such steps).
  2. Real-time data collection at process steps.
  3. Auto-buildup of data (auto-posting of run-time data to a repository).
  4. Easy user access to current and historical data (data, organized intervention by intervention, in reverse chronological order).
  5. Easy access to algorithms that enrich data (internal as well as local and remote 3rd party algorithms).
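Prerequisites 3 and 4 can be sketched as a minimal data repository. This is an illustrative Python sketch only, not any particular BPMs API; the names (`Intervention`, `CaseHistory`) and fields are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Intervention:
    step: str              # process step where the data was collected
    recorded_at: datetime  # run-time timestamp
    data: dict

@dataclass
class CaseHistory:
    """Auto-buildup repository: run-time data posts here automatically."""
    interventions: List[Intervention] = field(default_factory=list)

    def post(self, step: str, data: dict,
             recorded_at: Optional[datetime] = None) -> None:
        # Auto-posting of run-time data to the repository.
        self.interventions.append(
            Intervention(step, recorded_at or datetime.now(), data))

    def timeline(self) -> List[Intervention]:
        # Data organized intervention by intervention,
        # in reverse chronological order.
        return sorted(self.interventions,
                      key=lambda i: i.recorded_at, reverse=True)
```

The point of the sketch is the `timeline()` view: users get current and historical data, most recent intervention first.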

The terminology is important – we are looking for “guidance” or assistance but without the burden of rigidity, the expectation being that users will want/need to deviate from best practices where appropriate. Any advice and assistance provided must not encumber users/software/robots.

Let’s see how Decision Support works at BPM process steps.

Decision Support at Process Steps

We need the ability to have decision support at all steps on pathways/sub-pathways.

The range of steps includes ordinary steps, gateway steps, special constructs called “Branching Decision Boxes”, loopback steps and BPM process merge points.

A Process step needs to know, plan side, when it should/can become “current” (BPM flowgraph logic takes care of this), what skill(s) are required for execution of the step, what data display/data collection forms are needed at the step and, optionally, what local instructions may be needed at the step for users who may be new to performance of a step.

Our starting point for this discussion on decision support for BPM will be data collection forms.

Here, rules local to a form at a process step can carry out range checks on data (e.g. flagging an end time that is earlier than its start time), block “bad” data (e.g. the date “13/13/2014”), issue alerts when there is missing data, etc.
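A minimal sketch of such form-local rules in Python; the field names (`visit_date`, `start_time`, `end_time`, `patient_id`) are hypothetical, invented for illustration:

```python
from datetime import datetime

def check_form(form: dict) -> list:
    """Form-local rules: range checks, bad-data blocks, missing-data alerts."""
    problems = []

    # Block "bad" data, e.g. an impossible date like "13/13/2014".
    try:
        datetime.strptime(form.get("visit_date", ""), "%m/%d/%Y")
    except ValueError:
        problems.append("visit_date is missing or not a valid date")

    # Range check: an end time earlier than its start time is flagged.
    start, end = form.get("start_time"), form.get("end_time")
    if start is not None and end is not None and end < start:
        problems.append("end_time is earlier than start_time")

    # Alert when required data is missing.
    for required in ("patient_id", "start_time", "end_time"):
        if form.get(required) in (None, ""):
            problems.append(f"missing required field: {required}")

    return problems
```

An empty result means the form passes; anything else is an alert the user sees before the step can complete.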

Routings (i.e. Nurse Practitioner, MD, civil engineer, solicitor) take care of performance skill requirements. It is important to point out that individuals can have several skills, some of which are exercised at specific locations.

Local instructions to process steps can be plain text, a diagram or a video.

Decision Branching Boxes along Process Template Instances

Branching Decision Boxes accommodate selective branching to one or more downstream sub-pathways at certain steps along process map templates.

Not all Process Steps need to be “performed” – some steps do nothing more than cause branching to one or more sub-pathways downstream, and we call these steps “Branching Decision Boxes”. In some environments, a decision box can be formed simply by lassoing two or more ordinary process steps and making certain that each has the capability of resolving to TRUE under certain conditions.

Clearly, you need a Boolean variable at each step that starts off with a value of FALSE and can evaluate to TRUE under the conditions detailed in a Rule Set.

One easy setup involves parking a Boolean variable at a Form that is attached to each process step and relying on an embedded Rule Set to “fire” steps within the Decision Box (i.e. one step only, several steps, or all steps).

Here is a practical example:



If a>2 then j=j+1

If b > 5 then j=j+1

If j=2 then c=TRUE

Rule sets such as this one are used extensively when scoring questionnaires. Individual questions can use forward, reverse or null scoring, and thresholds (2 and 5 in this example) determine whether responses at clusters of questions meet a cutoff.

If Question “a” has a score of 3 or more, we increment a counter called “j”. If Question “b” has a score of 6 or more, we increment “j” again.

A final rule is needed to resolve “c” to TRUE if the responses to questions “a” and “b” both exceed their respective thresholds of “2” and “5”.
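The three-rule set above translates directly into code; this sketch simply restates it as a function, with “c” as the return value:

```python
def resolve_c(a: int, b: int) -> bool:
    """Questionnaire-scoring rule set from the text:
    c resolves to TRUE only when both question scores
    exceed their thresholds (2 and 5)."""
    j = 0
    if a > 2:        # Question "a" scored 3 or more
        j += 1
    if b > 5:        # Question "b" scored 6 or more
        j += 1
    return j == 2    # c = TRUE only when both rules fired
```

A TRUE result is what “fires” the step inside the Decision Box and engages its downstream sub-pathway.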

When you are constructing BPM process workflows, you will not get far without Branching Decision Boxes. There are very few straight-through processes in today’s B2B world.

A relatively “simple” process map with only a few Branching Decision Boxes can easily yield hundreds/thousands of permutations and combinations. It’s important to “test drive” rolled-out process templates thoroughly prior to production release.

Here are some practical tips.

There are three (3) main types of Decision Boxes.

1. Enabling Decision Boxes (branching)

Enabling Decision Boxes have two or more options – if you have outcomes for two options, i.e. #1 and #2, it is always a good idea to include another, say, “#3 – Neither #1 nor #2”, for modeling purposes.

Enabling Decision Boxes can be of the sub-type “manual” or “automatic” as well as “single-pick” or “multi-pick”.

“Manual” requires that the user pick an option.

“Automatic” self-executes as and when a decision box becomes current along an instance of a template.

As explained earlier in this article, a simple approach to Enabling Decision Box construction is to use ordinary nodes that you lasso to form a physical Decision Box.

When a Decision Box option or Node does not evaluate to TRUE, the entire sub-path downstream from that Node is dropped from the process template instance.

In order to avoid giving Case Managers the notion that certain pathways/sub-pathways are “disappearing”, make sure each Rule Set includes a Node that evaluates to TRUE when the condition “none of the above” is met. When such Nodes evaluate to TRUE your Case log will show termination of a pathway/sub-pathway.

A Rule Set that needs to cause branching for negative or positive values of a variable called “a” can function with the following:

If a <0 then engage sub-pathway “A”

If a >0 then engage sub-pathway “B”

but may cause confusion to users when a =0.

What is missing is

If a =0 then engage “EXIT”

where EXIT is a “debug” sub-pathway of one step that posts a warning reading “Exited at Node ___”.
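The complete rule set, EXIT included, can be sketched as a single branching function (the sub-pathway names are just the labels used in the text):

```python
def branch(a: float) -> str:
    """Enabling Decision Box: pick the sub-pathway to engage.
    The EXIT rule keeps a == 0 from silently matching nothing."""
    if a < 0:
        return "sub-pathway A"
    if a > 0:
        return "sub-pathway B"
    # Debug sub-pathway of one step: posts "Exited at Node ___"
    return "EXIT"
```

Without the final rule, an instance arriving with a = 0 would drop every downstream sub-pathway with no trace in the Case log.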

Some environments require special constructs for decision branching.

2. Gatekeeper “Decision Box” Steps (inhibiting)

Gatekeeper “decision boxes” have a routing of “system” (i.e. no users see them).

Their sole purpose is to block processing along a pathway/sub-pathway until such time as a set of conditions have been fulfilled.

The quotes highlight the fact that a Gatekeeper decision box has only one included step, so there is no need to lasso/construct a physical decision box.

In most BPMs, gatekeeper steps have distinctive icons.

A reasonable question to ask here is what if the set of conditions never gets fulfilled?

Two answers:

One, you may optionally request that each time the rule set at a gatekeeper decision box is tested, an error message will post to the attention of the System Manager if the Rule Set resolves to FALSE.

Two, in the absence of such tracking, Case Managers typically will have set milestones along pathway/sub-pathway instances, with the result that the software system will track lack of progress along the path/sub-path and the manager will take corrective action.

Another question relating to gatekeeper decision boxes is how do they acquire the data that allows the rule set to resolve to TRUE?

The answers are a) direct data entry by some user, b) auto-enrichment of data by an algorithm and c) import of data to the BPMs.
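A gatekeeper check of this kind might be sketched as follows. The `conditions` dict and the `notify_system_manager` callback are assumptions for illustration, not a specific BPMs interface:

```python
def gatekeeper(conditions: dict, notify_system_manager) -> bool:
    """Gatekeeper step with routing "system": blocks the pathway until
    every condition is fulfilled, and optionally alerts the System
    Manager each time the rule set still resolves to FALSE."""
    fulfilled = all(conditions.values())
    if not fulfilled:
        notify_system_manager(
            "Gatekeeper rule set resolved to FALSE, waiting on: "
            + ", ".join(k for k, v in conditions.items() if not v))
    return fulfilled  # TRUE releases the pathway for downstream processing
```

Each time the rule set is tested and still resolves to FALSE, the System Manager gets a message naming the unfulfilled conditions; once all conditions hold, processing along the pathway resumes.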

3. Embedded (hidden) “Decision Boxes”

 Embedded decision boxes are typically invisible, unless, for some reason, you want them to be otherwise.

In the normal course of events, there is no need to indicate to a user that a checkbox, for example, on a data collection form, causes a “link to pathway”.

If the initial step on the linked pathway has a routing that does not match that of the user at the step/data collection form, the user will not see the effects of the processing they engage by clicking at the checkbox.
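The hidden-link behavior can be sketched like this; the pathway, its step names and the skills are invented for illustration only:

```python
def on_checkbox(checked: bool, user_skill: str, case: list) -> list:
    """Embedded (hidden) decision box: ticking a checkbox on a data
    collection form links a pathway onto the Case. Steps routed to
    other skills never appear in this user's view."""
    # Hypothetical linked pathway: (step name, required routing/skill)
    LINKED_PATHWAY = [("review intake", "nurse"), ("approve plan", "md")]
    if checked:
        case.extend(LINKED_PATHWAY)  # the "link to pathway" fires silently
    # The user only sees steps whose routing matches their own skill.
    return [step for step, routing in case if routing == user_skill]
```

A nurse who ticks the box sees only the nurse-routed step; the md-routed step posts to someone else’s InTray, which is exactly the “invisible” behavior described above.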

4. Other

 In some BPMs adding a new Case results in streaming of the Case onto a process template instance.

Here, a default pathway set in the BPMs profile causes the behavior, so we can include new-Case streaming under hidden decision boxes.


The presumption is that BPM practitioners are motivated to guide their clients along a maturity path where complex processes are mapped, improved and rolled out to a run-time environment capable of providing orchestration & governance (Case level decision support).

Neither you nor your clients will get to this level by staring at paper process maps.

Additional Reading

See “Check your Business Rules on the Way In and Out”

