Performance-based reimbursement – coming soon to a place near you



Healthcare services delivery in the USA is out of control.

Costs have skyrocketed, facilities are overloaded, doctors are suffering burnout and government intervention has, under the guise of improving patient safety and outcomes, yielded only modest improvements.

MU (Meaningful Use) is largely responsible for the current alarming state of affairs.

It takes longer to process patients than it did before MU, and the focus seems to have shifted away from treating individual patients toward long-term outcomes data collection.

The remedy, after billions of dollars spent, is to rewind and set the focus on quality, efficiency and effectiveness of healthcare services delivery, something that should have been the focus of MU from the start. Better late than never.

The problem is going to be with implementation.

Current EHRs were not designed to generate performance data. Replacing what is currently in use will be expensive, and we can expect several rounds of false starts as vendors shift into a feeding frenzy to crank out “new” and “improved” EHRs using, in many cases, the same old database architectures invented in the 1960s. Customers will be buying pigs with lipstick.

Strangely, the methodologies to do things the right way are readily available. We need four methodologies (BPM, RALB, ACM and FOMM) to make performance-based reimbursement a success.

And, there are two hurdles that need to be sorted.

One is “not invented here” and the other is “resistance to change”. Both of these are cultural hurdles.

NIH is particularly well entrenched in healthcare so it will be difficult to port BPM / RALB / ACM / FOMM. The easy solution for NIH is to get over it.

As for Resistance to Change, there is an easy fix that does not require making changes in the way we manage work.

If you think about it, all of us, each day, come into our places of work and immediately take note of our fixed time appointments. No one has a problem with a calendar. No change, no resistance.

Following calendar inspection, we look at our to-do list and we micro-schedule to-do tasks to fit between fixed time commitments.

If you have a half hour appointment at 0900 hours and another at 1100 hours, you may reasonably pick a couple of small tasks to clear off your desk between 0930 and 1045. Or, you may prefer to make progress on one large to-do task. Up to you, and no obligation to stick with one approach or the other from one day to the next.

Resistance to change in healthcare can be minimized so long as the pitch is right.

The thing is case management has been at the core of medicine since the 1600s. Accordingly, healthcare workers have no problem going to a patient chart prior to meeting with a patient so transitioning to an e-chart that looks the same as the old manila folder is not a problem.

The other thing is that the concept of “best practices” is well understood in healthcare.

BPM excels at building and enhancing best practices, but it has a reputation for imposing rigid protocols. BPM and ACM together remove that rigidity: structured protocols are applied where these make sense, and unstructured or ad hoc interventions are accommodated where appropriate. No rigidity, no resistance.

The other positive attribute of BPM is that it lets agency functional unit staff document their workflows featuring existing agency forms, so healthcare professionals see their own workflows posting their own forms. No change, no resistance.

All of these scheduling maneuvers are eminently handled by RALB (i.e. 3-tier scheduling or Resource Allocation, Leveling and Balancing).
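
To make the leveling part of this concrete, here is a minimal sketch in Python of slotting to-do tasks into the gaps between fixed-time appointments. The appointments, tasks, durations and buffer value are illustrative assumptions, not part of any particular RALB implementation.

```python
# A minimal sketch of "micro-scheduling": slot to-do tasks into the gaps between
# fixed-time appointments. All data and names are invented for the example.
from datetime import datetime, timedelta

appointments = [  # fixed-time commitments (start, end)
    (datetime(2016, 3, 1, 9, 0), datetime(2016, 3, 1, 9, 30)),
    (datetime(2016, 3, 1, 11, 0), datetime(2016, 3, 1, 11, 30)),
]

todo = [  # (task name, estimated duration)
    ("Review lab results", timedelta(minutes=20)),
    ("Sign off discharge summary", timedelta(minutes=30)),
    ("Return pharmacy call", timedelta(minutes=15)),
]

def micro_schedule(appointments, todo, buffer=timedelta(minutes=15)):
    """Greedily place to-do tasks in the free time between appointments."""
    schedule = []
    # free windows are the gaps between consecutive appointments, less a buffer
    gaps = [(end + buffer, nxt_start - buffer)
            for (_, end), (nxt_start, _) in zip(appointments, appointments[1:])]
    remaining = list(todo)
    for gap_start, gap_end in gaps:
        cursor = gap_start
        for task in list(remaining):
            name, duration = task
            if cursor + duration <= gap_end:
                schedule.append((cursor, name))
                cursor += duration
                remaining.remove(task)
    return schedule, remaining  # remaining tasks roll over to another gap or day

plan, carryover = micro_schedule(appointments, todo)
for start, name in plan:
    print(start.strftime("%H:%M"), name)
```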

FOMM (Figure of Merit Matrices) is also not new and is easy to implement. Basically, it’s all about non-subjective assessment of progress toward meeting Case objectives. You could do it on the back of an envelope, but it’s a lot faster/easier if you append a spreadsheet template to each Case Record and follow the methodology.
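
For illustration, here is a back-of-the-envelope FOMM calculation in Python: weight each Case objective, score current progress against it, and roll everything up to a single figure of merit. The objectives, weights and scores are invented for the example; a real spreadsheet template attached to a Case Record would carry the same structure.

```python
# A back-of-the-envelope Figure of Merit Matrix (FOMM): weighted Case objectives
# rolled up to one non-subjective progress figure. Data is invented for the example.
fomm = [
    # (objective, weight, progress score 0-100)
    ("Blood pressure within target range",   0.40,  75),
    ("Medication adherence",                 0.30,  60),
    ("Patient completes education module",   0.20, 100),
    ("Follow-up appointment booked",         0.10,   0),
]

def figure_of_merit(matrix):
    total_weight = sum(w for _, w, _ in matrix)
    return sum(w * score for _, w, score in matrix) / total_weight

print(f"Progress toward Case objectives: {figure_of_merit(fomm):.1f}%")  # 68.0%
```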

None of these tried and true healthcare services delivery methods will work if the software User Interface is not right.

Here again, no change will result in no resistance.

So, let’s go forward with a UI consisting of a split screen featuring two constructs everyone is familiar with (a calendar and a to-do list) and let’s make things such that using the UI requires less effort than not using it. No resistance here.

OK, how does this get us to performance-based?

This is where IT comes in – with end users in the driver’s seat, building and enhancing their own workflows, IT will have time to focus on predictive analytics. As users perform interventions and record data, the data will flow to the EHR (as it does now) but with a parallel feed to a data warehouse where all manner of analytics can take place.

The final piece of the puzzle is not to simply crank out after-the-fact statistical and tabular reports but to analyze data in real-time and improve decision-making in respect of healthcare services delivery to individual patients.

Reporting on measures is the easy part.

=======================================================================

BPM: Business Process Management

ACM: Adaptive Case Management

RALB: Resource Allocation, Leveling and Balancing

FOMM: Figure Of Merit Matrices

 


Is Social BPM a failure?


BPM.com is a great place to hang out.

Peter Schooff asked the question above and I recommend you take a look at the range of comments received.


http://bpm.com/bpm-today/in-the-forum/do-you-consider-social-bpm-a-failure

My comment was . . .


So many diverse comments here on this one discussion topic.

 

In healthcare it’s “no verbal orders”.

For a child, at any step along a best practice pathway, you are likely to get a call from the parent re “Why are you doing this?” or “I see on the internet that beet root is a better treatment modality, why are you not using this?”

For an adult, same thing plus a desire to go to a portal and gain access to their EMR file (the law says they have the right to access information in their file).

The healthcare log needs to have a record of each of these “ad hoc” interventions – not just date/time and caller but any data that was recorded, at the time it was recorded, on the form versions in service OR an audio recording OR a video telehealth encounter recording.

No way we would allow data flows to patients/caregivers using Facebook, Twitter, e-mail because of the risk of disclosure of patient information and possibility of heavy fines.

In respect of portal accesses, you want the user to be able to log in and see a menu of services (trimmed to what this user is allowed to see/request). A message goes from the portal to a webserver engine, and it is the engine alone that links to the backend db server, indexes to the right record, retrieves the data, passes it back to the engine and pushes the info out to the user at the portal. Any suspicious incoming data stream diverts to a healthcare professional/admin person who will probably call and say “if you really need this amount of information, how about you come into the clinic to pick it up?”
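
As a rough illustration of that engine-in-the-middle pattern, the Python sketch below shows a portal handing requests to an engine that alone reaches the backend, trims the menu by role, and diverts suspicious requests to a person. The service names, roles and threshold are made-up assumptions, not a description of any particular system.

```python
# A minimal sketch of the portal -> engine -> database pattern: the portal never
# touches the database, the engine alone does, and suspicious requests are diverted.
ALLOWED_SERVICES = {            # menu trimmed per role
    "patient":   {"view_results", "request_refill"},
    "caregiver": {"view_results"},
}
SUSPICIOUS_RECORD_LIMIT = 50    # arbitrary threshold for this sketch

def backend_fetch(patient_id, service):
    """Stand-in for the only code path allowed to reach the db server."""
    return {"patient_id": patient_id, "service": service, "payload": "..."}

def engine_handle(user_role, patient_id, service, record_count=1):
    if service not in ALLOWED_SERVICES.get(user_role, set()):
        return {"status": "denied"}                     # not on this user's menu
    if record_count > SUSPICIOUS_RECORD_LIMIT:
        return {"status": "diverted to healthcare admin for follow-up call"}
    data = backend_fetch(patient_id, service)           # engine -> db -> engine
    return {"status": "ok", "data": data}               # engine -> portal -> user

print(engine_handle("patient", "P-1001", "view_results"))
print(engine_handle("patient", "P-1001", "view_results", record_count=500))
```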

Bottom line, no “social” in healthcare, and if you are building generic platforms for healthcare (hospitals/clinics interacting with patients), for manufacturing (organizations like Lockheed interacting with suppliers), or for b2b (a job shop operation interacting with a customer), why not use the same approach?

I am with Emiel Kelly: “. . . some type of processes rely more on social interactions than other ones”.

I think we could avoid the controversies by saying “. . . some type of processes rely more on ad hoc interactions than other ones”.


Where is professional videography headed?


 

Thinking of going to 4K? You might want to wait for 8K assuming you want to be able to film what people a mile across a valley are eating for lunch.


Viewing 8K is another matter. Few residences have an attached cinema hall. The likely deal when you buy an 8K TV is you get a discount on the renovation contractor who enlarges your living room – after you have negotiated a variance with the local planning committee to allow your house to be closer to your property line.

I can’t get excited about 8K at all and, until recently, have not had a lot of motivation to trade up to 4K, other than for the ability to crop to HD and still have a good image.

For me, the order of things is the storyline, audio, lighting, camera.

Why? Because you lose the audience if the storyline is bad, next in line comes the audio and, if the lighting is no good, you quickly get to the end of the line i.e. the camera.

My challenges in videography have been to track stage performers as they rapidly glide across a stage, often going from front of stage to back of stage at the same time. To do this right you need a smooth tripod/head that can be managed with one finger and you pretty much have to know where the performers are about to move to so you can track them. For this reason, I go once to take notes, then a 2nd time to practice tracking and yet a 3rd time to do the recording. Imagine the savings if I could lock on faces and have the camera do the tracking.

Out of doors, the technique needs to vary. Here, we also have fast moving objects, often not close together, so you have to master smooth pan and zoom.

Auto-focus makes things a lot easier but it seems to me, based on testing I did a couple of years ago, that unless you move through an arc that lets the camera focus whilst you are panning/zooming, you may, depending on the camera, be in for a longer than desirable settling-in time.

The experimentation I did with my AG-AC160A gave me very fast settling-in-times or rather slow settling-in-times depending on the panning arc I chose to go from a near object to a church a mile or so across and down a river. Imagine the savings if I could lasso a target area on my monitor and have the camera track to the target (panning and zooming along a reasonable arc).

A very recent technology advance, for me, is the DJI Osmo.

Sure, you have to live with the wide angle lens, and low light issues (physics rules), but think of the footage you are likely to get relative to missing an event entirely because of setup time with a big camera/tripod.

Adapting to the DJI Osmo is not likely to be a picnic.

In the ads, you only see the camera and monitor on top of a handle, but I can see things starting to look a lot less mobile once you start loading down the camera and yourself with accessories. You probably need to add two crew members, one with a portable mixer/recorder and one with a boom mic.

Quite a transition to make for a one-trick pony like me.


Success Factors with BPM


If you are thinking about the potential benefits of Business Process Management or want to fast track your current BPM initiative, here are a few “must haves” for success.

0. “First the problem then the solution”, meaning no point mapping/implementing processes if the organization does not have a mission and has not evolved a set of strategies.

1. a reasonable subset of the business activity to be “managed” involves the performance of tasks in logical sequence.

2. the work will be performed more than once (otherwise use Critical Path Method).

3. no work should be performed that does not either directly or indirectly support strategy.

4. the benefits vary such that for a large initiative it is advisable to go through the formality of an ROI or SROI.

5. the more complex the sequencing, the more specialized the tasks (requiring specific skill sets), the larger number of silos that a process must overarch, the more beneficial it becomes to go beyond paper mapping to achieve in-line implementation of a process (as opposed to an off-line or on-line implementation).

6. too low a level of detail (i.e. splitting one short term task to be performed exclusively by one person into three tasks) is bad; too high a level of summary makes monitoring/control difficult (i.e. one task comprising several tasks, to be performed by several people, over an extended period of time).

7. the run-time environment hosting instances of templates (i.e. compiled rolled-out flowgraphs) needs to be able to accommodate re-visiting already committed tasks, recording data at tasks not yet current along their instances, and insertion of ad hoc interventions at the environment.

8. usual run-time services to support the processing of instances include R.A.L.B (three-tier scheduling); a formal History (committed tasks, with date/timestamped user “signatures”, with recall of data, as it was, at the time it was entered, on the form versions that were in service at the time); data logging for possible machine real-time predictions OR after-the-fact data mining to allow process owners to improve their processes; data import/export to increase the reach of the run time environment.

9. reasonable accommodation for deviating from instances, but with governance from rule sets at the environment to “rein in” extreme, unwanted deviations i.e. guidance from BPM, governance from the environment. [the highway example of center lines to provide guidance and guardrails on both sides for governance].

10. the environment selected must have a simple UI, otherwise the initiative will fail – i.e. none of these assumptions will increase productivity, increase throughput, decrease errors, improve compliance with internal and external regulations or improve outcomes if the User Interface at the run-time environment fails to improve the user experience (avoid having to say to users ” easy for me, difficult for you”).

11. adequate training must be provided – the best results are obtained when the facilitator kicks off the 1st mapping session by giving the mouse to a stakeholder and saying “let’s map out one of your processes”.

12. many processes are dynamic, they must be maintained and occasionally targeted as candidates for improvement.

13. Wraparound BPM (360 degree BPM) is achieved when work performed under the guidance of BPM results in data that can be consolidated to KPIs at the strategy level.

Hurdles that need to be overcome

a) “you can manage complex processes by staring at paper process maps” – not true.

b) except for end-to-end processes, objectives belong at Cases hosting BPM/ACM, not at end points along flowgraphs – many times there are no end points (i.e. process fragments) – users, machines and software thread process fragments at run-time. In theory each Case can be different.

c) Cases can only be closed by Case Managers (“it ain’t over until the fat lady sings”).

d) Case Managers need decision support (from rules at tasks along flowgraph template instances, from the Case History, from rules global to the run-time environment, and from FOMM (Figure of Merit Matrices)) to avoid subjective assessment of progress toward Case goals/objectives.

Management needs to exercise reasonable patience; you can’t change a corporate culture overnight.


How to achieve quick wins with BPM


Quick wins definitely are the preferred business development approach for consultants, compared to wasting time responding to RFPs.

Here is the pitch we have perfected over the past two decades.

How to Quick Start BPM in your organization

Let’s face it.

BPM and its direct antecedents (flowgraphs) have been around for a long time.

  • The methodology is not well known; we encounter business people daily who have never heard of BPM.
  • Another subset has heard of BPM but feels that what they are doing presently (or not doing) is sufficient.
  • A third group has tried to implement BPM only to end up as members of a not-so-elite group that, according to some, experiences failure rates of 70%.

We need to break out of this mold.

Technology alone is not going to help, so this leaves leadership and user onboarding to master.

If an organization wants BPM and users cooperate, it should be able to achieve liftoff. Here is how to fast-track your BPM initiative.


Pilot Phase

1. Go for low-hanging fruit – pick an initiative with not too long a timeframe, not too high a risk, with the potential to demonstrate quantifiable benefits.

2. Pick a pilot project process that is confined to one or two silos.

3. Go to the trouble of preparing an ROI (you will want to document before/after to get support for other initiatives).

Make sure you document the “before” (i.e. how long it takes to do work, how consistent the outcomes are).

Desired State of Affairs: e.g. the new process reduces the time to analyze a claim by 30%, the level of customer satisfaction increases from 2.5 to 4.5.

4. Bring in a facilitator to graphically map out the process in real-time.

Forget notations and UMLs – most processes only need nodes, directional arcs, branching decision boxes, imposed delays, loopbacks, exit points.

Facilitators lose much of their “magic” when they force a group of ordinary users to watch them build processes with notations, languages.

5. Park images of data collection forms needed at process steps on your mapping canvas so you can drag and drop form images at steps as you build your process.

Make sure the images post to forms that include a memo field – you will want at run time to be able to take quick note of complaints from stakeholders that the process logic is wrong, the forms in use are wrong, the performing roles are wrong, etc.

6. Do not slow down the project by programming rule sets during the first cycle.

Instead, describe rules in narrative terms only and make the branching decision boxes manual.

You can build rule sets and convert decision boxes to auto off-line (see the sketch following this list for what such a rule might look like).

7. Assign actual imposed delays to process steps that need these, but use a run-time environment that allows a temporary override down to 1-2 minutes for testing purposes.

8. Encode process steps with their correct Routings but allow a temporary override of the parent Routing of all Routings so that one person can run through the entire process without having to log out/in under different user accounts.

9. Compile your mapped process, roll it out, get a small group of stakeholders to piano-play the process, incorporate their suggestions/comments/corrections, re-map, roll out again etc.

If you cannot get through all of the listed steps in 1 ½ hours, your SOW for today is larger/more complex than it should be.

Only map in one session what you can roll out and test, update, roll out and test again. You can advance the process next session. Your users want/expect “instant gratification”.
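
Picking up on step 6 above, here is a minimal sketch (in Python, with invented claim fields, thresholds and routing labels) of what a narrative rule looks like once it has been formalized so that a decision box can be flipped from manual to auto.

```python
# A sketch of "converting a decision box to auto" once the narrative rule has been
# formalized. The claim data, field names and threshold are illustrative only.
def claim_routing_rule(claim):
    """Narrative rule: 'fast-track claims under $1,000 with no prior flags'."""
    if claim["amount"] < 1000 and not claim["prior_flags"]:
        return "fast_track"
    return "manual_review"

claim = {"claim_id": "C-2045", "amount": 480, "prior_flags": []}
print(claim_routing_rule(claim))   # fast_track
```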

Production Phase

1. Replace the imaged forms with real forms, build rule sets, put branching at decision boxes to auto, reset imposed delays to their plan-side values.

2. Collect run-time data (should be automatic in the environment you are using) for statistical analysis/machine analysis to improve your process.

3. Blend in FOMM (Figure of Merit Matrix) constructs at Cases so you can more easily track progress toward meeting Case objectives.

The overall objective for your “Quick Results” BPM project is on-time/on-budget completion, before/after documentation, user testimonials that it is easier to do their work with the system than without it.

Overall, you should be able to see increased productivity, increased throughput (be it in the area of processing patients, settling insurance claims, or completing MRO on a Blackhawk helicopter), reduction in processing errors, increased compliance with internal and external rules/regulations, all of which contribute to better outcomes and increased competitive advantage.


Success in Business – Reach for the Top With ACM 360


Success in business is all about building and enhancing competitive advantage.

Strategy “rules” – if you don’t know where you want to be, you won’t get there.


No point evolving plans that don’t make efficient/effective use of scarce resources. No point having plans that never get implemented. No point implementing plans that are not monitored.

Plan, monitor, control.

We knew that, but how do we actually make it happen?

  1. Go for the big picture for strategy formulation/management

If you can’t see it, you can’t include it or exclude it.

Seeing the big picture allows you to assess and prioritize initiatives.

Graphic free-form search Knowledgebases give you the big picture.

Kbases – don’t try to evolve/manage strategy without these.

Use the following model:

Corporate assets inventory -> strategy -> KPIs -> candidate initiatives -> ROIs -> Authorized Initiatives

     2. Go with Adaptive Case for operations management

Case is capable of hosting any mix of ad hoc and structured interventions. Look to background BPM to provide guidance re structured interventions. Look to Case level rule sets to provide governance.

A Case can store any object including data element values, .pdf, .doc, .txt, video/audio recordings and spreadsheets.

The latter are capable of providing a framework for assessing progress toward meeting ROI goals/objectives. The methodology of choice for non-subjective assessment of progress is FOMM (Figure of Merit Matrices).

Case environments automatically build longitudinal histories with date/timestamped, user-“signed” hyperlinks that allow viewing of data as it was, at the time it was collected, on the form versions that were in service at the time.

Case provides dynamic decision support in respect of the performance of work that derives from annual budget authorizations and work that is related to initiatives funded via ROIs.

Case – don’t go to the office without it.

Use the following model:

Initiatives -> Case setup -> monitoring -> data consolidation to Kbases -> KPI trending

     3. Face Up to the Dynamics and Bridge The Gap between Strategy and Operations.

In today’s world where 5-year plans have, under many scenarios, been compressed to 1 ½ years . . .

strategy can change, initiative priorities can change, goals and objectives can change.

Enter ACM 360 (Adaptive Case Management 360), for bridging the gap between Strategy and Operations, because of the wraparound that can be achieved by:

  1.  launching Cases for individual ROIs;
  2.  setting up permanent “bucket” Cases\Sub-Cases for the many different database record types organizations need (Corporate Assets, Land, Plant, Equipment, Tools, Customers, Customer Orders, Inventory In/Out, Supplier Orders, Shipments, etc);
  3.  managing operations, and;
  4.  consolidating data to corporate Kbases\KPIs.

 

 


Risk vs Uncertainty . . . again


I’m not sure people in general understand the difference between risk and uncertainty so here is an update on an article dated 2012.

Barry Ritholtz does a good job in the article, quoting Michael Mauboussin:

http://ritholtz.com/2012/12/defining-risk-versus-uncertainty/

Risk: We don’t know what is going to happen next, but we do know what the distribution looks like.

Uncertainty: We don’t know what is going to happen next, and we do not know what the possible distribution looks like.

The distinction is important in the area of strategic planning.

ROIs for initiatives should always include a Risk Assessment (worst case, expected, best case).

Approvers of ROIs are well advised not to expect to end up too close to the bottom or too close to the top.

In respect of uncertainty, ROIs should also always include exit strategies.


Patient Portals versus APIs for Patient Access to Healthcare Information


Back in November 2015, Health Data Management published an article called “Challenges Ahead for Portals”.

This is an interesting article because it indirectly describes the effect of too much regulatory involvement in healthcare services delivery.


http://www.healthdatamanagement.com/news/challenges-ahead-for-portals-51630-1.html

In the article, Raj Ratwani, scientific director of MedStar Health’s National Center for Human Factors in Healthcare states that patient portals “.. do not present information in a manner that is understandable and useful”.

It’s likely that views regarding the inappropriateness of existing patient portals led to the inclusion, in the Stage 3 objective for Patient Electronic Access, of the patient’s need to “view their health information, download their health information, transmit their health information to a third party and access their health information through an API”.

My point is it’s fine for regulatory agencies to set incentive objectives but not to narrowly specify the means by which such objectives should be met.

Whether a patient gains access to PHI via a portal or via an API should be a decision best left to stakeholders who have a close connection to patients.

Under this scenario, if a vendor implements a portal that does not address patient needs, the patients will move to another healthcare service provider who either has a better portal implementation or an API that works well for such patients, and the provider, supposedly, would pick up on this and move to a different vendor.

Accordingly, portal/API selection should be the responsibility of vendors first, then healthcare service providers, picking solutions they feel their patients will find acceptable.

Vendor -> Provider Selection -> Patient Needs

The way things go when there is too much regulation is that regulators impose demands on vendors; healthcare service providers then select, from a reduced set of options, solutions they feel will address internal/patient needs; and the patients then decide whether the “solutions” meet their needs. I doubt very much whether the regulators consulted patients before reaching the conclusion that patients would be best served via APIs.

See how far away the patient is from the regulators under this alternative scenario.

Regulatory Authority -> Vendor -> Restricted Solution Selection for Providers -> Patient Needs

The reality is you can deliver patient healthcare information to patients using a number of technologies, one of which is an API at a Patient Portal (i.e. a hybrid solution). This avoids the need for the patient to download and install an API on the various devices they may want to use to access their healthcare information. All they need with a portal/API is to type in a URL and enter a user name/password.

The danger with the phraseology in the Stage 3 Final Rule is that software systems that do not have a traditional API could be categorized as not meeting the Stage 3 Final Rule.


Are your BPM process steps being served?


Anyone remember the British sitcom “Are You Being Served?” broadcast between 1972 and 1985 on BBC1?

No one needs to ask this in respect of BPM workflow steps because rules ensure that as soon as one structured step is committed, the “next-in-line” steps immediately post to the InTrays of staff with the skill sets required to perform these steps.

Clearly, we don’t want steps with imposed delays to post immediately, nor do we want steps under the control of behind-the-scenes “gatekeeper” steps being released willy-nilly, but, otherwise, immediately means immediately.

What about ad hoc interventions at Cases?

Since the system doesn’t know what the next ad hoc intervention is going to be at a Case, management needs to rely on analytics at Cases for decision support, e.g. when 30% of the man-hours have been spent, we should be no less than 20% complete on Case objectives. Senior management also needs analytics relating to overall corporate KPI (Key Performance Indicator) trending, e.g. if a trended KPI is more than 4% off target, management needs to investigate.
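
For what it's worth, both checks reduce to a couple of lines of code. The sketch below (in Python, with invented numbers) flags a Case where 30% of the man-hours are spent but less than 20% of the objectives are met, and flags a KPI trending more than 4% off target.

```python
# Two tiny decision-support checks: the 30%-spent / 20%-complete Case alert and
# the "more than 4% off target" KPI alert. Numbers are examples only.
def case_progress_alert(hours_spent, hours_budgeted, pct_complete):
    pct_spent = 100.0 * hours_spent / hours_budgeted
    return pct_spent >= 30 and pct_complete < 20     # True means "look at this Case"

def kpi_alert(actual, target, tolerance_pct=4.0):
    return abs(actual - target) / target * 100.0 > tolerance_pct

print(case_progress_alert(hours_spent=36, hours_budgeted=100, pct_complete=15))  # True
print(kpi_alert(actual=91.5, target=96.0))                                       # True (about 4.7% off)
```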

Thirty pct/twenty pct metrics are interesting for work where advancement follows “S” curves.

I recall “early risers”, “slow risers”, “fast risers”, etc. models in construction work but you don’t need to have worked in construction to discover that progress is typically slow at the start of most initiatives, followed by rapid advancement, only to get to where it seems you are at 90% complete but only 50% done along the timeline.

I also recall how staff tried to work project management systems to their advantage – each time a project started to look bad and management complained, staff would re-wire their flow graphs to include shortcuts and compress the durations of downstream work such that in the absence of good rule sets and vigilant PMO staff, project float would immediately go from -10 weeks to +12 weeks, only to catch up with reality when it became too late to do anything about cost/time/performance overruns.

Our project management team quickly learned that we could trend the data such that project re-sets could be dampened (the same way moving averages are used in the commodity markets) to anticipate “real” expected time/cost overruns. Needless to say, we did not enjoy a high level of popularity.
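
As a rough illustration, the Python sketch below applies a simple moving average to reported project float so that a single optimistic re-plan cannot swing the forecast from -10 weeks to +12 weeks overnight. The float figures and window size are invented for the example.

```python
# Dampening reported project float with a moving average so one optimistic re-plan
# does not instantly erase a negative float history. Figures are invented.
reported_float_weeks = [-8, -9, -10, +12, +11]   # last two follow a re-plan

def dampened_float(history, window=4):
    recent = history[-window:]
    return sum(recent) / len(recent)

for week in range(1, len(reported_float_weeks) + 1):
    print(week, round(dampened_float(reported_float_weeks[:week]), 1))
# The smoothed value climbs slowly, so the PMO still sees trouble after the re-plan.
```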

Bottom line here, nothing changes when expanding once-through projects to handle b2b – Case Managers need to manage their Cases, and the toolsets required are easily described:

  1. A Case History for making decisions.
  2. Some non-subjective means of assessing/projecting progress toward Case objectives (e.g. Figure of Merit Matrices)
  3. A run-time environment that supports RALB (resource allocation, leveling, balancing) because users in some areas will be working on 20 or more Cases on any given day.

As for the right time to close a Case? Yogi Berra summed it up nicely – “It ain’t over till it’s over”.

For Cases, what this translates to is “Cases are closed by Case Managers”.

 


Life in the BPM fast lane . . .


If you are a BPM practitioner who has moved beyond end-to-end process management, you are probably familiar with “Adaptive Case Management” or ACM.

ACM is practiced in a run-time environment called “Case” (not “Use Case”), capable of hosting any number of data elements and data values for an Entity, including, as attachments, .pdf, MS Word and Excel documents, plus digital images and video recordings.

Examples of Entities are Corporate Assets, Customers, Staff, Suppliers, Orders, Projects . . . the list can go on and on.

Each Entity, of course, needs its own set of data elements with unique internal IDs.

Clearly an Order in an Orders Entity will result in data traffic to a Suppliers Entity, then, as and when the Supplier(s) ship on the Order, there will be data traffic back to the Orders Entity.

In IT parlance “Case” is nothing more than a primary cursor position in a post-relational database.

A visit to your run-time Case Management environment will show users, robots plus local and remote systems and applications streaming Case Records onto discovered/mapped/improved/rolled-out BPM process templates to yield private instances of such templates.

The actors perform “Case Interventions”, most of which capture data and write out the data to a Case History where each intervention has a date/timestamp plus an actor “signature”. Usually, a parallel data stream is output to a data warehouse for data mining purposes. And, a copy of the data goes to a Data Exchanger for sharing with possibly large numbers of subscribers, each wanting a different subset of the data.
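
A minimal sketch of that triple write, in Python, is shown below. The store objects, field names and the actor “signature” format are placeholders for illustration, not any specific product's API.

```python
# Each committed Case intervention is appended to the Case History with a
# date/timestamp and actor "signature", with parallel copies to a data warehouse
# and a data exchanger. All structures here are placeholders.
from datetime import datetime, timezone

case_history, data_warehouse, data_exchanger = [], [], []

def commit_intervention(case_id, actor, form_version, data):
    entry = {
        "case_id": case_id,
        "actor_signature": actor,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "form_version": form_version,   # so the History shows data on the form version in service
        "data": data,
    }
    case_history.append(entry)          # longitudinal Case History
    data_warehouse.append(entry)        # for mining / analytics
    data_exchanger.append(entry)        # for filtered sharing with subscribers
    return entry

commit_intervention("CASE-77", "rn.jones", "vitals_v3", {"bp": "128/82", "pulse": 71})
print(len(case_history), len(data_warehouse), len(data_exchanger))   # 1 1 1
```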

Most interventions at Cases contribute to reaching Case Objectives. Some are purely supportive of the contributors but are no less important in terms of Case Management.

The rubber hits the road when organizations realize that any Case Manager is likely to be overseeing multiple concurrent instances of a BPM process template, worst case, all at different steps along their template instances, plus multiple concurrent instances of other BPM process templates.

It gets worse – whereas BPM process templates are capable of providing Orchestration (i.e. do this and then do that), it’s unlikely with complex business processes that any templates will be capable of covering all eventualities.

Accordingly, users need to be able to skip steps, re-visit already committed steps, insert ad hoc steps not in the template and sometimes record data at not-yet-current steps.

Moreover, users are likely to be working on multiple Cases at the same time – overall orchestration at Cases is best left to Case Managers/users not automated protocols.

The flexibility just described obviously needs Governance to “rein-in” extreme, unwanted variations away from “best practices”, be they mapped or not mapped.

Governance is provided by Rule Sets at process steps, between process steps, at data exchanger import engines as well as at the Case level itself.

Since Cases are presumed, at all times, to be supportive of and contribute to building, maintaining and enhancing competitive advantage, we typically see strategic Rule Sets generating events at Cases (i.e. launch an auto-audit, if, as and when advancement toward Case objectives, for example, seriously lags the planned Case timeline).

So, Case Managers manage Cases, background BPM provides orchestration along process templates at Cases, and Rule Sets provide governance at Cases.

See “It’s all about that Case . . . “ at

https://kwkeirstead.wordpress.com/2015/09/17/its-all-about-that-case-bout-that-case-no-trouble/

The remaining pieces of the business process management puzzle are workload management (i.e. allocating resources, leveling and balancing workload within and across Cases), data collection, assessment of progress toward Case Objectives, data consolidation and data exchange.

Allocating Resources

Most process steps require specific performance capabilities or specific equipment / supplies so it makes sense, plan-side, to define resource requirements at process steps. This results in posting of next-in-line steps along a template instance immediately following a commit at the last step/sub-path immediately upstream from such steps. Exceptions to this rule are steps that have an imposed plan-side delay (i.e. pour concrete, wait 4 days).

In the interest of rapid advancement toward Case Objectives, steps post to the attention of actors who are both available and have a match on required performance skill at steps. We want the 1st available actor to “take” the step and “own” it, the presumption being that he/she will promptly perform the required action and commit the step. Otherwise, the actor should clearly document work performed and return the step to the resource pool (e.g. when going off shift with an in-progress intervention).
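
Here is a small Python sketch of that allocation step: offer a posted step to every available actor with a matching skill, and let the first qualified taker own it. The actors, skills and step are invented for the example.

```python
# When a step becomes current, post it to the InTrays of available actors whose
# skills match; the first qualified taker owns it. Data is invented for the example.
actors = [
    {"id": "rn.lee",   "skills": {"triage", "vitals"}, "available": True},
    {"id": "md.patel", "skills": {"prescribe"},        "available": True},
    {"id": "rn.gomez", "skills": {"vitals"},           "available": False},
]

def post_step(step_skill):
    """Return the actors whose InTrays should receive this step."""
    return [a["id"] for a in actors if a["available"] and step_skill in a["skills"]]

def take_step(step, candidates, taker):
    """First qualified taker owns the step; it disappears from the other InTrays."""
    if taker in candidates:
        return {"step": step, "owner": taker}
    raise ValueError("actor not qualified/available for this step")

candidates = post_step("vitals")        # ['rn.lee']
print(take_step("record vital signs", candidates, "rn.lee"))
```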

Leveling Workload

Once an actor has “taken” a step, we expect some micro-scheduling (i.e. setting aside a specific time of day to work on the step, re-scheduling certain steps to tomorrow, etc). Most people like to perform a mix of short-term tasks and long-term tasks so as to not compromise progress with either category of task.

Balancing Workload

Whereas steps typically post with a priority indication, things change, so supervisors need to be able to offload steps from one actor and specifically assign these to other actors.

Data Collection

As steps post, a reasonable expectation is that instructions and all required data collection forms be easily accessible. The key to avoiding resistance is to make it easier for staff to use the software system than to not use it.

Clicking on a step should cause instructions and forms to post for easy data recording. Once data entry is complete, we want a one-click commit in the run-time environment, with routing to the Case History, to the Data Warehouse and to the Data Exchanger. Data posting to the Case History should be done in real time (because the next-in-line step may need some of the data just collected).

If you are in the market for a Case Management System make sure you understand the difference between Case Logs and Case Histories. Nothing short of the ability to view data, as it was, at the time it was collected, on the form versions that were in service at that time, will do. Case Logs can have the detail that you find in Case Histories, most do not.

Assessment of progress toward Case Objectives

It’s not easy to avoid subjective assessments of progress toward Case Objectives. FOMM (Figure of Merit Matrices) at Cases provide a framework for consistent and automated assessment of progress toward Case Objectives. If all steps have plan-side standard performance times and your software system is able to track actual/projected step times, simple manhours-to-go calculations may suffice.
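
For illustration, a manhours-to-go calculation can be as simple as the Python sketch below, which compares plan-side standard times against actual/projected times for committed and remaining steps. The step names and hours are made up.

```python
# A simple manhours-to-go view of Case progress. All figures are illustrative.
steps = [
    # (step, standard_hours, actual_or_projected_hours, committed?)
    ("intake assessment",   2.0, 2.5, True),
    ("care plan drafted",   4.0, 3.5, True),
    ("specialist consult",  3.0, 3.0, False),
    ("discharge planning",  2.0, 2.0, False),
]

hours_to_go  = sum(proj for _, _, proj, done in steps if not done)
hours_spent  = sum(proj for _, _, proj, done in steps if done)
plan_total   = sum(std for _, std, _, _ in steps)
pct_complete = 100.0 * hours_spent / (hours_spent + hours_to_go)

print(f"{hours_to_go} h to go, {pct_complete:.0f}% complete vs a {plan_total} h plan")
# 5.0 h to go, 55% complete vs a 11.0 h plan
```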

Data Consolidation

Since most work within corporations is funded by annual operating budgets or ROI submissions, there will be KPIs at the strategy level that look to operational (Case) data for trending. Your best bet here is to include your KPIs in a knowledgebase so that senior management can challenge reported trends.

Data Exchange

In many run-time environments most of the data recorded at Cases comes from local and remote external systems and applications.

It is unreasonable to expect alignment of data element names across multiple systems and applications.

Accordingly, corporations need a way for publishers to post data using their own data element naming conventions and, for subscribers, to read data using their own data element naming conventions. The data exchanger must be capable of filtering data so that subscribers only see data on a strict need-to-know basis.
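
A minimal sketch of that publish/subscribe mapping appears below. The element names, the canonical mapping and the subscriber entitlements are invented for the illustration; the point is only that the publisher writes under its own names, each subscriber reads under its own names, and the exchanger drops anything a subscriber is not entitled to see.

```python
# Publisher posts under its own element names; each subscriber reads under its own
# names; the exchanger filters on a strict need-to-know basis. Mappings are invented.
PUBLISHER_TO_CANONICAL = {"pt_sys_bp": "systolic_bp", "pt_dia_bp": "diastolic_bp",
                          "pt_ssn": "ssn"}

SUBSCRIBERS = {
    "research_warehouse": {"allowed": {"systolic_bp", "diastolic_bp"},   # no SSN
                           "names":   {"systolic_bp": "SBP", "diastolic_bp": "DBP"}},
    "billing_system":     {"allowed": {"ssn"},
                           "names":   {"ssn": "member_id"}},
}

def exchange(published_record):
    canonical = {PUBLISHER_TO_CANONICAL[k]: v for k, v in published_record.items()}
    out = {}
    for sub, cfg in SUBSCRIBERS.items():
        visible = {cfg["names"][k]: v for k, v in canonical.items() if k in cfg["allowed"]}
        out[sub] = visible          # strict need-to-know: everything else is dropped
    return out

print(exchange({"pt_sys_bp": 128, "pt_dia_bp": 82, "pt_ssn": "000-11-2222"}))
# {'research_warehouse': {'SBP': 128, 'DBP': 82}, 'billing_system': {'member_id': '000-11-2222'}}
```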
