If you are a BPM practitioner who has moved beyond end-to-end process management, you are probably familiar with “Adaptive Case Management” or ACM.
ACM is practiced in a run-time environment called a “Case” (not to be confused with a “Use Case”), capable of hosting any number of data elements and data values for an Entity, including, as attachments, PDF, MS Word, and Excel documents, plus digital images and video recordings.
Examples of Entities are Corporate Assets, Customers, Staff, Suppliers, Orders, Projects . . . the list can go on and on.
Each Entity, of course, needs its own set of data elements with unique internal IDs.
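As a rough sketch of the data model just described, a Case can be thought of as a container keyed to one Entity, holding data elements with unique internal IDs plus a list of attachments. The class and field names below are illustrative, not taken from any specific ACM product:

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:
    element_id: str   # unique internal ID within the Entity's element set
    name: str
    value: object

@dataclass
class Case:
    entity: str                                       # e.g. "Orders", "Customers"
    case_id: str
    elements: dict = field(default_factory=dict)      # element_id -> DataElement
    attachments: list = field(default_factory=list)   # PDFs, Word docs, images, video

    def record(self, element_id, name, value):
        self.elements[element_id] = DataElement(element_id, name, value)

case = Case(entity="Orders", case_id="ORD-0001")
case.record("E-100", "customer_name", "Acme Corp")
case.attachments.append("purchase_order.pdf")
```

Nothing here is specific to any vendor; the point is simply that each Entity carries its own element set, and attachments ride along with the structured data.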
Clearly, an Order in an Orders Entity will generate data traffic to a Suppliers Entity; then, as and when the Supplier(s) ship against the Order, there will be data traffic back to the Orders Entity.
In IT parlance, a “Case” is nothing more than a primary cursor position in a post-relational database.
A visit to your run-time Case Management environment will show users, robots plus local and remote systems and applications streaming Case Records onto discovered/mapped/improved/rolled-out BPM process templates to yield private instances of such templates.
The actors perform “Case Interventions”, most of which capture data and write it out to a Case History, where each intervention carries a date/timestamp plus an actor “signature”. Usually, a parallel data stream is output to a data warehouse for data-mining purposes, and a copy of the data goes to a Data Exchanger for sharing with possibly large numbers of subscribers, each wanting a different subset of the data.
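The commit-and-fan-out pattern above can be sketched in a few lines. This is a minimal illustration, assuming in-memory lists stand in for the Case History, the warehouse feed, and the exchanger feed; the function and stream names are hypothetical:

```python
import datetime

case_history = []       # system of record: timestamped, signed interventions
warehouse_stream = []   # parallel stream for data mining
exchanger_stream = []   # copy handed to the Data Exchanger for subscribers

def commit_intervention(case_id, actor, data):
    entry = {
        "case_id": case_id,
        "actor": actor,   # the actor "signature"
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "data": dict(data),
    }
    # one commit, three destinations
    case_history.append(entry)
    warehouse_stream.append(entry)
    exchanger_stream.append(entry)
    return entry

commit_intervention("ORD-0001", "jsmith", {"qty_shipped": 40})
```

The essential point is that the Case History write happens at commit time, so downstream steps can rely on the data being there immediately.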
Most interventions at Cases contribute to reaching Case Objectives. Some are purely supportive of the contributors, but these are no less important in terms of Case Management.
The rubber hits the road when organizations realize that any Case Manager is likely to be overseeing multiple concurrent instances of a BPM process template, worst case, all at different steps along their template instances, plus multiple concurrent instances of other BPM process templates.
It gets worse: whereas BPM process templates are capable of providing Orchestration (i.e. do this, then do that), with complex business processes it is unlikely that any template will be capable of covering all eventualities.
Accordingly, users need to be able to skip steps, re-visit already committed steps, insert ad hoc steps not in the template and sometimes record data at not-yet-current steps.
Moreover, users are likely to be working on multiple Cases at the same time, so overall orchestration at Cases is best left to Case Managers/users, not automated protocols.
The flexibility just described obviously needs Governance to “rein in” extreme, unwanted variations away from “best practices”, be they mapped or not mapped.
Governance is provided by Rule Sets at process steps, between process steps, at data exchanger import engines as well as at the Case level itself.
Since Cases are presumed, at all times, to support and contribute to building, maintaining and enhancing competitive advantage, we typically see strategic Rule Sets generating events at Cases (e.g. launching an auto-audit when advancement toward Case Objectives seriously lags the planned Case timeline).
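A strategic rule of that kind reduces to a simple comparison of planned versus projected dates. The sketch below is illustrative only; the threshold value and the event name are assumptions, not part of any standard:

```python
import datetime

LAG_THRESHOLD_DAYS = 14  # assumed tolerance; would be tuned per Case type

def evaluate_timeline_rule(planned_finish, projected_finish):
    """Fire an auto-audit event when the Case seriously lags its planned timeline."""
    lag = (projected_finish - planned_finish).days
    if lag > LAG_THRESHOLD_DAYS:
        return {"event": "launch_auto_audit", "lag_days": lag}
    return None  # within tolerance: no event

planned = datetime.date(2024, 6, 1)
projected = datetime.date(2024, 6, 30)
event = evaluate_timeline_rule(planned, projected)  # 29-day lag -> audit event
```

In practice such a rule would run at the Case level on every commit or on a schedule, which is what makes the auditing automatic rather than left to someone noticing.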
So, Case Managers manage Cases, background BPM provides orchestration along process templates at Cases, and Rule Sets provide governance at Cases.
See “It’s all about that Case . . .”
The remaining pieces of the business process management puzzle are workload management (i.e. allocating resources, leveling and balancing workload within and across Cases), data collection, assessment of progress toward Case Objectives, data consolidation and data exchange.
Most process steps require specific performance capabilities or specific equipment/supplies, so it makes sense, plan-side, to define resource requirements at process steps. Next-in-line steps along a template instance then post immediately following a commit at the last step/sub-path directly upstream. Exceptions to this rule are steps that carry an imposed plan-side delay (e.g. pour concrete, wait 4 days).
In the interest of rapid advancement toward Case Objectives, steps post to the attention of actors who are both available and have the performance skills the step requires. We want the first available actor to “take” the step and “own” it, the presumption being that he/she will promptly perform the required action and commit the step. Otherwise, the actor should clearly document work performed and return the step to the resource pool (e.g. when going off shift with an in-progress intervention).
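The take-and-own protocol can be sketched as a first-eligible-actor match on availability plus skill. All names here are made up for illustration:

```python
actors = [
    {"name": "alice", "skills": {"welding"}, "available": False},   # off shift
    {"name": "bob",   "skills": {"welding", "rigging"}, "available": True},
    {"name": "carol", "skills": {"rigging"}, "available": True},    # wrong skill
]

def take_step(step, actors):
    """Offer a posted step; the first available actor with a skill match owns it."""
    for actor in actors:
        if actor["available"] and step["skill"] in actor["skills"]:
            step["owner"] = actor["name"]   # first eligible actor takes ownership
            actor["available"] = False      # works it until commit or return to pool
            return actor["name"]
    return None  # no match: step stays posted in the resource pool

step = {"id": "S-7", "skill": "welding", "owner": None}
owner = take_step(step, actors)  # "bob": alice is unavailable, carol lacks the skill
```

Returning the step to the pool would simply clear `owner` and restore the actor's availability, with the partial work documented in the Case History.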
Once an actor has “taken” a step, we expect some micro-scheduling (e.g. setting aside a specific time of day to work on the step, re-scheduling certain steps to tomorrow, etc.). Most people like to perform a mix of short-term and long-term tasks so as not to compromise progress in either category.
Whereas steps typically post with a priority indication, things change, so supervisors need to be able to offload steps from one actor and specifically assign these to other actors.
As steps post, a reasonable expectation is that instructions and all required data collection forms be easily accessible. The key to overcoming resistance is to make it easier for staff to use the software system than to not use it.
Clicking on a step should cause instructions and forms to post for easy data recording. Once data entry is complete, we want a one-click commit in the run-time environment, with routing to the Case History, to the Data Warehouse and to the Data Exchanger. Data posting to the Case History should be done in real time (because the next-in-line step may need some of the data just collected).
If you are in the market for a Case Management System, make sure you understand the difference between Case Logs and Case Histories. Nothing short of the ability to view data, as it was, at the time it was collected, on the form versions that were in service at that time, will do. Case Logs can carry the detail you find in Case Histories; most do not.
Assessment of progress toward Case Objectives
It’s not easy to avoid subjective assessments of progress toward Case Objectives. FOMM (Figure of Merit Matrices) at Cases provide a framework for consistent, automated assessment of progress toward Case Objectives. If all steps have plan-side standard performance times and your software system is able to track actual/projected step times, simple manhours-to-go calculations may suffice.
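A manhours-to-go figure of that kind is just the sum of standard times for steps not yet committed, substituting tracked actual/projected times where the system has them. A minimal sketch, with made-up step IDs and hours:

```python
standard_hours = {"S-1": 8, "S-2": 4, "S-3": 6, "S-4": 2}   # plan-side standards
committed = {"S-1"}                                         # steps already done
projected_hours = {"S-2": 5}                                # tracked overrides

def manhours_to_go(standard_hours, committed, projected_hours):
    """Sum remaining work, preferring projected times over plan-side standards."""
    return sum(projected_hours.get(step, hours)
               for step, hours in standard_hours.items()
               if step not in committed)

remaining = manhours_to_go(standard_hours, committed, projected_hours)  # 5 + 6 + 2 = 13
```

This is the objective complement to a FOMM: the matrix weights qualitative criteria, while the hours calculation tracks raw remaining effort.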
Since most work within corporations is funded by annual operating budgets or ROI submissions, there will be KPIs at the strategy level that look to operational (Case) data for trending. Your best bet here is to include your KPIs in a knowledge base so that senior management can challenge reported trends.
In many run-time environments most of the data recorded at Cases comes from local and remote external systems and applications.
It is unreasonable to expect alignment of data element names across multiple systems and applications.
Accordingly, corporations need a way for publishers to post data using their own data element naming conventions and for subscribers to read data using theirs. The data exchanger must be capable of filtering data so that subscribers see data strictly on a need-to-know basis.
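The publish/subscribe translation just described amounts to two alias maps around a need-to-know filter. The sketch below is illustrative, with hypothetical element names and party names; a real exchanger would add versioning, queuing and access control:

```python
# Publisher's names -> canonical element names maintained by the exchanger
publisher_aliases = {"cust_nm": "customer_name", "ord_qty": "order_quantity"}

# Each subscriber declares its own names and its need-to-know subset
subscribers = {
    "billing":   {"aliases": {"customer_name": "client"},
                  "need_to_know": {"customer_name"}},
    "logistics": {"aliases": {"order_quantity": "qty"},
                  "need_to_know": {"order_quantity"}},
}

def exchange(published_record):
    # 1. normalize the publisher's element names to the canonical set
    canonical = {publisher_aliases.get(k, k): v for k, v in published_record.items()}
    out = {}
    for name, sub in subscribers.items():
        # 2. filter down to the subscriber's need-to-know subset
        visible = {k: v for k, v in canonical.items() if k in sub["need_to_know"]}
        # 3. rename into the subscriber's own conventions
        out[name] = {sub["aliases"].get(k, k): v for k, v in visible.items()}
    return out

result = exchange({"cust_nm": "Acme Corp", "ord_qty": 40})
# billing receives {"client": "Acme Corp"}; logistics receives {"qty": 40}
```

Neither party ever sees the other's naming conventions, and neither receives fields outside its declared subset, which is precisely the decoupling the exchanger exists to provide.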