Process Improvement – Under the Hood


Here’s the deal . . .

You are in a corporation that

1) “thinks” BPM,

2) has mapped processes,

3) has a run-time workflow/workload Case platform whose Cases have stated objectives,

4) has non-subjective means of assessing progress toward meeting Case objectives,

5) has a compiler that is capable of carving up your process maps into run-time BPM templates,

6) has a way to stream Cases onto BPM templates,

7) has good Case Managers.

All good, except that essential to requisite #3 (the run-time workflow/workload Case platform) is the ability to auto-build a Case History.

Each user log-in that results in any data change or data augmentation needs to trigger auto-recording of the “session”: a system-applied date/timestamp, a user “signature”, and all data as it was at the time it was entered, on the form versions that were in service at the time.

Once data is in, the platform must not allow any changes to it. Errors, omissions and late data are handled by posting copies of Forms, allowing edits to those copies, and recording new “sessions”.

Since not all data needed at a Case can be recorded at the moment it becomes known to a user, all Forms at process steps (structured or ad hoc) must accommodate a reference date-of-capture, which can precede the Case History session date by hours, days, even weeks.
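To make the mechanics concrete, here is a minimal Python sketch of an append-only Case History under the assumptions above; the field names (user_signature, form_version, capture_date) are illustrative, not the schema of any particular platform.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Dict, List, Optional

@dataclass(frozen=True)  # frozen: once recorded, a session cannot be altered
class HistorySession:
    case_id: str
    user_signature: str            # who entered the data
    form_id: str
    form_version: str              # the form version in service at the time
    data: Dict[str, Any]           # the data exactly as it was entered
    session_timestamp: datetime    # system-applied date/timestamp
    capture_date: Optional[datetime] = None  # reference date-of-capture, may precede the session

class CaseHistory:
    """Append-only log: corrections are posted as new sessions, never edits."""
    def __init__(self) -> None:
        self._sessions: List[HistorySession] = []

    def record(self, session: HistorySession) -> None:
        self._sessions.append(session)

    def sessions(self) -> List[HistorySession]:
        return list(self._sessions)   # callers get a copy, never the log itself

# Example: late data captured three days earlier, recorded today
history = CaseHistory()
history.record(HistorySession(
    case_id="CASE-001",
    user_signature="jsmith",
    form_id="INTAKE",
    form_version="v3",
    data={"blood_pressure": "120/80"},
    session_timestamp=datetime(2017, 3, 10, 14, 30),
    capture_date=datetime(2017, 3, 7, 9, 0),
))
```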

The cardinal rule in some industry sectors is that data not in the system “does not exist”. In practice, protocol requires users to visit the Case History before taking any current decisions/actions; if the data is not in the History, there is a good chance the decision will be made only on the basis of what is there. Who knew what, and when, is all-important in many Case audits.

So, how now do you go about improving decision-making at Cases and improving processes dynamically?

First comes data analytics.

Unless you are trying to post big-screen notices to individual shoppers at malls based on Internet searches they did last night, data analytics for improved dynamic decision-making is not complicated.

A small change at branching decision boxes allows analytics to provide a hint to the user as to which branching options have been the most popular (e.g. users went that way 60% of the time).

Clearly, your sample size must be sufficient, and you may need or want to filter your statistics by timeframe, especially for initiatives that anticipate different seasonal outcomes or where legislation has changed recently.
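A rough sketch of how such a hint could be computed, assuming the run-time log can be queried for past routing choices at a decision box; the record shape, the window, and the minimum-sample cutoff are illustrative assumptions.

```python
from collections import Counter
from datetime import datetime, timedelta
from typing import Dict, List, Tuple

# (decision_box_id, branch_taken, timestamp) pulled from the run-time log -- illustrative shape
RoutingRecord = Tuple[str, str, datetime]

def branch_hints(log: List[RoutingRecord], decision_box: str,
                 window_days: int = 365, min_sample: int = 30) -> Dict[str, float]:
    """Return branch -> share of traffic for one decision box, filtered to a timeframe."""
    cutoff = datetime.now() - timedelta(days=window_days)
    taken = [branch for box, branch, ts in log if box == decision_box and ts >= cutoff]
    if len(taken) < min_sample:          # sample too small to be worth hinting
        return {}
    counts = Counter(taken)
    return {branch: n / len(taken) for branch, n in counts.items()}

# e.g. {"approve": 0.6, "escalate": 0.4} -> tell the user "60% went this way"
```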

As for dynamic process improvement, the best approach I have been able to think of is to post the process map and then overlay it to show skips, jumps, loopbacks and ad hoc insertions, with stats where possible. Ad hoc interventions should be noted as well, particularly in terms of their timing (i.e. if a specific ad hoc intervention is inserted each time step #1024 is skipped, that is a good indication the map should be updated to show the ad hoc intervention in lieu of the skipped step).
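One way to compute the overlay, sketched below on the assumption that each completed Case instance logs the ordered list of steps actually committed: steps in the template but missing from the log are skips, and steps in the log but not in the template are ad hoc insertions.

```python
from collections import Counter
from typing import Dict, List

def overlay_stats(template_steps: List[str],
                  case_logs: List[List[str]]) -> Dict[str, Counter]:
    """Count skips and ad hoc insertions across completed Case instances."""
    template = set(template_steps)
    skips: Counter = Counter()
    ad_hoc: Counter = Counter()
    for log in case_logs:
        performed = set(log)
        skips.update(template - performed)    # mapped steps never committed
        ad_hoc.update(performed - template)   # steps inserted outside the map
    return {"skipped": skips, "ad_hoc": ad_hoc}

# If "step_1024" is skipped in most instances and "manual_review" shows up
# as an ad hoc insertion just as often, the map probably needs updating.
```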

I am not prepared to say that any of the above, aside from the mall scenario, is AI, but for me it provides a pretty good way of improving processes.

 

 

 

Posted in Adaptive Case Management, Business Process Improvement, Business Process Management, Case Management, Decision Making, FOMM, Process auditing, Risk Analysis

Take it to the limit with BPM



The transition BPM aficionados are going through today, moving from neat-looking structured workflows (end-to-end processes) to arbitrary clusters of small workflows (process fragments) and pure ad hoc interventions, is proving to be a difficult one.

It all begins with the discovery that once you embrace process fragments, there is no convenient “process end node”.

 

The result is that you no longer have, plan-side, a place to park process objective(s).

Like it or not, objectives need to move from plan-side to run-time side.

Fortunately, someone, somewhere, figured out that in order to thread together process fragments you need a workspace.

They figured “Case” was the right place to host background BPM (workflow), the right place to host three-tier scheduling (workload), and the right place to position algorithms for assessing progress toward meeting Case objectives (i.e. measures).

As indicated above, Case is a workspace. If you are modeling, it’s a sandbox. Technically it’s just a cursor position in an RDBMS.

If you can navigate to a Case, you can get at everything relating to that Case and this includes structured data (entryfield, checkbox, radio button, list, grid, etc.) plus unstructured data (memo, image, and video/audio). The secret sauce is to put everything IN the Case, not all over the place with “links” to objects.

Objectives are easily accommodated at Cases.

Sold on Case?  If yes, then let’s do a stress test.

The thing is, I have been telling folks for years that Case lets you manage a mix of structured/unstructured data and that the mix can be 5/95% or 95/5%. But can we go to 0/100% or 100/0%?

And, if we can go to 0/100% structured/unstructured, are we still practicing BPM?

I say, yes, today.

The reason it has taken eight years at this blog to “take it to the limit” is that we only recently started to see, in the real world, Cases where BPM can remain core in the absence of any workflows (i.e. all of the process fragments are “processes” of one step).

You can visualize this by taking any process you may have and deleting all of the directional arrows.

Such is the starting scenario for the electronic version of Practical Homicide Investigation – Checklist and Field Guide (e-Checklist), where we are taking on the task of “improving investigative outcomes” using “smart software” at smartphones and tablets.

The fact is, the smartphones/tablets really aren’t that “smart” in this app.

It’s the investigators who are smart and these folks are more than capable of deciding, across the crime scene investigation team, who should do what, when, how, and why.

Except that we cannot have two different accounts from the same witness re the same event, and we cannot have support staff putting down chalk lines and markers before the crime scene photographer has done his/her initial walkabout.

The command and control center laptop (e-hub) actually has a lot of “smarts” that guide workflow and help to reduce errors and omissions.

If one investigator ‘takes’ a checklist (i.e. a best practice protocol), the software shows the checklist as busy but grants read-only access to others.
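A minimal sketch of that take/read-only behaviour, assuming a single e-hub process arbitrates access to checklists; the class and method names are illustrative, not the e-Checklist API.

```python
from typing import Dict

class ChecklistRegistry:
    """One investigator 'takes' a checklist; everyone else gets read-only access."""
    def __init__(self) -> None:
        self._owners: Dict[str, str] = {}   # checklist_id -> investigator who holds it

    def take(self, checklist_id: str, investigator: str) -> bool:
        if checklist_id in self._owners:
            return False                    # already busy; caller falls back to read-only
        self._owners[checklist_id] = investigator
        return True

    def release(self, checklist_id: str, investigator: str) -> None:
        if self._owners.get(checklist_id) == investigator:
            del self._owners[checklist_id]

    def access_mode(self, checklist_id: str, investigator: str) -> str:
        owner = self._owners.get(checklist_id)
        return "read-write" if owner in (None, investigator) else "read-only"
```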

The fact that the “system” accommodates near real-time data consolidation means that dynamic decision making at crime scenes is enhanced.  Parking a generic data exchanger near the e-hub allows an investigator to send a facial image to a remote facial recognition database and get back “hits”.

In the area of evidence-gathering, it helps to have ready access to protocols for finding, collecting, bagging and tagging evidence, because there are more than 60 evidence protocols and errors or omissions routinely result in evidence getting thrown out of court.

Take a look . . .

I won’t go on, because most of you are not into homicide investigations, so your task for today is to try to identify references to BPM in the 10-minute overview to “e-Checklist”.

You won’t find any.

However, BPM is absolutely core to this app. Police departments, as they adapt e-Checklist to filter out checklists that are not relevant to a particular crime scene case (e.g. the victim is an adult, so no need to consult the “Sudden Infant Death Syndrome” checklist), will gradually put in place process fragments of more than one step each and will evolve rules for preserving, for example, “chains of custody” for evidence.

Another thing you need to know is that e-Checklist (an encapsulation of 1,500,000 lines of complex computer code across four interconnected modules) required little more than minor changes in terminology. The workflows were built with the standard version of our process mapping environment; the Case dbms is standard; the checklists were built with the standard CiverIncident Form Painter; the Portal forms that post at smartphones/tablets are push-outs of back-end server forms; and the Generic Data Exchanger, as you might have guessed, needed only a few minor enhancements. So what you have here is a low-code BPM app where, nominally, 100% of the workflow is unstructured.

Did we take BPM to the limit in this app?

Let us know if you agree. Let us know if you disagree.

“Take It to the Limit”, The Eagles, “Live at the Summit: Houston, 1976”

Posted in Adaptive Case Management, Business Process Management, Case Management, Data Interoperability, Database Technology, Decision Making, Major Crimes Case Management

KPPs and BPM



You are probably familiar with KPIs (Key Performance Indicators), but this article is about KPPs (Key Process Points).

KPPs are wait-state auto-commit steps along BPM pathways.

They have predecessors and successors like all other BPM steps: a KPP becomes current when the last of its immediate predecessors has been committed, and its immediate successors become current when the KPP commits.

The performing resource is “System” and KPPs post to user System’s InTray.

KPPs have Forms, the same as all other BPM steps, except that a KPP’s Form carries a mandatory Form Rule that reads data and “fires” when the rule evaluates to TRUE.

For performance reasons, it is important that each KPP have a cycle timer so that the Form Rule is evaluated at reasonable times along the process timeline (e.g. every hour, every day, once a week).
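A sketch of a KPP evaluation pass, assuming the platform calls it on each cycle-timer tick; the rule is just a predicate over Case data, and the names are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Any, Callable, Dict, List, Optional

@dataclass
class KPP:
    step_id: str
    rule: Callable[[Dict[str, Any]], bool]   # the mandatory Form Rule
    cycle: timedelta                          # how often the rule is re-evaluated
    successors: List[str]
    last_checked: Optional[datetime] = None
    committed: bool = False

def evaluate_kpp(kpp: KPP, case_data: Dict[str, Any], now: datetime) -> List[str]:
    """Re-evaluate the rule no more often than the cycle timer allows.
    Returns the successor steps to make current when the KPP auto-commits."""
    if kpp.committed:
        return []
    if kpp.last_checked and now - kpp.last_checked < kpp.cycle:
        return []                             # not yet time to look again
    kpp.last_checked = now
    if kpp.rule(case_data):                   # rule evaluates to TRUE -> fire
        kpp.committed = True
        return kpp.successors
    return []

# Example: wait for an inbound shipment notice, checking once an hour
kpp = KPP("wait_for_shipment", lambda d: d.get("shipment_notice") is not None,
          timedelta(hours=1), successors=["receive_goods"])
```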

KPPs are becoming more and more important as BPM processes depend on data external to their run-time Case Management platforms.

They typically require a generic data exchanger so that data pulled from local and remote applications and systems to the data exchanger, or data posted to the exchanger, can be mapped to data element names that individual trading partner Case Management platforms recognize. Most organizations use a pull protocol with some of their data trading partners and a push protocol with others.

OK, so if we have KPPs waiting for incoming data, why don’t we have KPPs waiting to output data for push to local and remote systems and applications, or for pull by local and remote systems and applications?

The reason is that this data management task is handled by the data exchanger. For pull, the subscriber decides the frequency of pickup; for push, again, the subscriber works out a strategy that suits them.
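The heart of a generic data exchanger is a per-trading-partner map from the partner’s data element names to the names the local Case Management platform recognizes; a minimal sketch, with invented partner and element names:

```python
from typing import Any, Dict

# Per-trading-partner maps from their element names to ours -- illustrative content
PARTNER_MAPS: Dict[str, Dict[str, str]] = {
    "acme_supplies": {"ship_dt": "shipment_date", "po_no": "purchase_order_id"},
    "metro_labs":    {"result_val": "lab_result", "pt_id": "patient_id"},
}

def translate(partner: str, payload: Dict[str, Any]) -> Dict[str, Any]:
    """Rename incoming (pull) or outgoing (push) data elements for one partner.
    Unmapped elements are passed through unchanged."""
    mapping = PARTNER_MAPS.get(partner, {})
    return {mapping.get(name, name): value for name, value in payload.items()}

# translate("acme_supplies", {"ship_dt": "2017-05-01", "po_no": "PO-778"})
# -> {"shipment_date": "2017-05-01", "purchase_order_id": "PO-778"}
```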

If you don’t have/use KPPs, your BPMs is an island and you are basically working with a model of a process rather than a real-life mirror of one.

Posted in Case Management, Operations Management, Uncategorized

Can your BPMs improve operational effectiveness and efficiency?


Apparently not, based on Wikipedia’s definition – “a BPMS is a technological suite of tools designed to help the BPM professionals accomplish their goals”.


The conclusion is easily explained. . .

Operational efficiency, for sure, can be achieved by automating business processes but to achieve effectiveness it is Case goals, not the goals of BPM professionals, that need to receive the focus.

If the Wikipedia definition is right, corporations need, in addition to a BPMs, a run-time Case Management System capable of hosting compiled process maps as background BPM templates.

And, the Case “goals” reasonably come from ROIs requesting funds to undertake the initiative(s) that is/are the focus of each Case.

Clearly, not all Cases need ROIs – options are 1) annual functional department budgets for small initiatives typically confined within one functional department and 2) collaborative low-cost initiatives that span two or more functional departments.

Need more proof?

Consider a knowledge worker in a large corporation: unless this person is totally focused on some large initiative (i.e. one Case), chances are they will be working on multiple Cases, which takes Case Management of each process template instance out of the hands of BPM.

The worker will typically see multiple tasks relating to multiple Cases posting to his/her InTray and the worker will want to micro-schedule his/her work across some or all assigned Cases.

On top of this, the worker’s supervisor will, from time to time, receive re-prioritization requests and may impose deadlines on the worker for certain tasks/Cases. The supervisor may also offload tasks/Cases from the worker.

The micro-scheduling/re-prioritization activity is called RALB (resource allocation, leveling, and balancing) and is not what most folks consider to be part of BPM.
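A rough sketch of the RALB idea, assuming tasks post to an InTray with a priority that the worker (micro-scheduling) and the supervisor (re-prioritization, offloading) can both adjust; the field names are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class InTrayTask:
    task_id: str
    case_id: str
    assignee: str
    priority: int                     # lower number = work on it sooner
    deadline: Optional[datetime] = None

class InTray:
    def __init__(self, tasks: List[InTrayTask]) -> None:
        self.tasks = tasks

    def worklist(self, user: str) -> List[InTrayTask]:
        """Worker's micro-scheduled view: priority first, then deadline."""
        mine = [t for t in self.tasks if t.assignee == user]
        return sorted(mine, key=lambda t: (t.priority, t.deadline or datetime.max))

    def reprioritize(self, task_id: str, new_priority: int) -> None:
        """Supervisor override of a task's priority."""
        for t in self.tasks:
            if t.task_id == task_id:
                t.priority = new_priority

    def offload(self, task_id: str, new_assignee: str) -> None:
        """Supervisor levels/balances workload across workers."""
        for t in self.tasks:
            if t.task_id == task_id:
                t.assignee = new_assignee
```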

Next, we have the need for automated data exchange between Case Management Systems and local and remote 3rd party systems and applications. Without this, your Case becomes an island. As we connect more and more to the IoT, we need ways and means of exporting/importing data and, for both efficiency and effectiveness, much of this data flow needs to be automated.

Example: You have a workflow that is expecting receipt of a supplier notice of shipment. The notice can come in the form of an EDI message, but you certainly don’t want this to come to you as an e-mail attachment. The happy scenario is that your supplier uploads the message to a generic data exchanger and your Case Management environment polls the data exchanger at set time intervals, looking for the shipment confirmation. Pathway rules then allow the processing to move forward.

Lastly, if your Case is the least bit complex, it will have several goals, and you will not want to rely on subjective assessments by Case Managers as to whether a set of goals has or has not been reached. It is important to remember that, left on their own, “Case Managers close Cases”; their power at Cases is pretty much absolute. Figure of Merit Matrices (FOMM) make the decision-making less arbitrary.
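FOMM implementations vary, but the essence is a weighted matrix of goal measures that yields a numeric figure of merit instead of a gut-feel assessment; a minimal sketch under that assumption, with invented goal names and weights:

```python
from typing import Dict

def figure_of_merit(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted average of per-goal scores (each 0.0-1.0). Illustrative only."""
    total_weight = sum(weights.values())
    return sum(scores[g] * w for g, w in weights.items()) / total_weight

goals = {"on_budget": 0.9, "on_schedule": 0.6, "quality_targets_met": 0.8}
weights = {"on_budget": 2.0, "on_schedule": 1.0, "quality_targets_met": 3.0}

fom = figure_of_merit(goals, weights)   # 0.8 here; close the Case only above an agreed threshold
```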

Bottom line, here is the scorecard . . . .

 

Performance Requirement    BPMs    Case
Efficiency                 Y       Y
Effectiveness              N       Y
Business Goal Focus        N       Y
RALB                       N       Y
Data Exchange              N       Y
FOMM                       N       Y

 

Conclusion

BPMs                  necessary, not sufficient
Case/BPM              necessary, not sufficient (1)
RBV/Case/BPM          necessary, sufficient (2)

(1)

The reason Case/BPM gets a “not sufficient” tag is that Case/BPM are operational tools – we don’t get 360-degree coverage unless/until the organization subscribes to RBV (Resource-Based View), or equivalent, for strategy development.

Absent RBV, the organization lacks an orderly means of allocating scarce resources to Cases so any goals defined at Cases are not likely to be met.

(2)

See

Theories of the Firm – Expanding RBV using 3D free-form search Kbases

http://wp.me/pzzpB-Ms

 

 

Posted in Adaptive Case Management, Business Process Management, Case Management, Data Interoperability, Decision Making, FOMM, R.A.L.B.

2016 Recap – Basic Requirements for Success with BPM


Once again, with minor updates/consolidations, here is my list of basic requirements for success with BPM.


1. Some of the work to be performed involves carrying out tasks in a logical sequence.

2. The work will be performed more than once (otherwise use Critical Path Method).

3. The benefits vary; for a large initiative, it is advisable to prepare an ROI or SROI.

4. The more complex the sequencing and the more specialized the tasks (requiring specific skill sets), the more beneficial it becomes to go beyond paper mapping to an in-line implementation of a process (as opposed to an off-line or on-line implementation).

5. The run-time environment hosting instances of templates (i.e. compiled flowgraphs) needs to be able to accommodate re-visiting already committed tasks, recording data at tasks not-yet-current along their instances, and ad hoc intervention insertions at the environment.

6. Usual essential services to support the processing of instances include:

a) R.A.L.B. (three-tier scheduling);
b) a non-subjective approach to assessing progress toward attainment of Case goals/objectives such as F.O.M.M. (Figure of Merit Matrices),
c) a formal History (committed tasks, with date/timestamped user “signatures”, with recall of data, as it was, at the time it was entered, on the form versions that were in service at the time);
d) data logging for possible machine analysis, allowing process owners to improve their processes;
e) data import/export to increase the reach of the run-time environment.

7. Reasonable accommodation for deviating from the sequencing of steps, but with governance from rule sets along instance pathways and at the environment (typically Case) to “rein in” extreme, unwanted deviations from “best practice protocols” – i.e. guidance from BPM, governance from the environment. [The highway analogy is helpful: center lines provide guidance, guardrails on both sides provide governance.]

8. The environment selected must have a simple User Interface, otherwise the initiative will fail.

9. Adequate training must be provided.

10. If, for whatever reason (lack of time, inability to “think” process), in-house staff cannot do the work, bring in a facilitator for a short period of time. On-site visits may be necessary with some clients (1-2 days), but the balance of the work should reasonably be doable as a series of several one-hour GoToMeeting or equivalent sessions per week.

11. Advanced capabilities include: bi-directional data exchange with local and remote 3rd party systems and applications; predictive analytics for improved decision-making; and consolidation of run-time data to a free-form-search corporate Knowledge Base that hosts corporate assets, strategies and KPIs.

Posted in Adaptive Case Management, Automated Resource Allocation, Business Process Improvement, Business Process Management, Case Management, Compliance Control, Data Interoperability, FOMM, Operations Management, Process Management, Process Mapping, Productivity Improvement

Managing Source Code using Kbases


After years of looking for better and better ways and means of managing source code, we turned to one of our own software suites, a graphic free-form-search Knowledgebase, to manage O-O code that is used across ten commercial software products that we develop, maintain and support.

We found that the “must-have” features for managing source were:

a) auto-version control;

b) node aliasing, because source code units are typically used across several, sometimes all, products;

c) free-form-search facilities, so you can pick any code fragment and immediately see where it is used, pick any developer to find out which units they have worked on, etc.

In the screenshot below we have a code set relating to a custom app that we built for a customer in 1998, with 250+ source code units and 16 database tables.

[Screenshot: Kbase view of the code set]

Clicking on any node reveals the source; drilling down lets you browse the versions of the source; and searching for any code fragment or table construct (latest version or all versions) causes all primary-node “hits” to light up.
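Under the hood, the where-used lookup amounts to an inverted index from source units to the products that include them; a simplified sketch of the idea (the node/alias machinery of the graphic Kbase is out of scope), with hypothetical unit and product names:

```python
from typing import Dict, List, Set

# Illustrative data only -- unit and product names are hypothetical
WHERE_USED: Dict[str, Set[str]] = {       # source unit -> products that include it
    "date_utils": {"product_a", "product_b", "product_c"},
    "edi_mapper": {"product_c"},
}
VERSIONS: Dict[str, List[str]] = {        # source unit -> version history, oldest first
    "date_utils": ["1.0", "1.1", "2.0"],
}

def products_using(unit: str) -> Set[str]:
    """Node aliasing in effect: one unit, however many products link it in."""
    return WHERE_USED.get(unit, set())

def search_fragment(fragment: str, sources: Dict[str, str]) -> List[str]:
    """Free-form search: return the units whose latest source contains the fragment."""
    return [unit for unit, code in sources.items() if fragment in code]

# e.g. search_fragment("ParseDate(", {"date_utils": "... ParseDate(s) ..."}) -> ["date_utils"]
```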

Posted in Database Technology, Software Source Control

Steering the ship – the new business management reality


Increased global competition has more corporations chasing after the same opportunities.

The impact on strategic planning has been a reduction from 5-year planning cycles, with annual reviews, to 18-month cycles, with quarterly reviews.

The playing field has changed in other ways as well.

Corporations used to strive for “satisfied” customers.  Today, they need “delighted” customers.

In certain cases, the primary focus needs to be on future customers, not on current customers.

Behind the scenes, the corporate mission remains unchanged – i.e. building, protecting and enhancing competitive advantage, but with shorter-term initiatives characterized by increased risk and uncertainty.

The traditional role of steering the ship, i.e. “plan->monitor->control”, needs upgrading to “plan->monitor->re-assess”, where “monitor” now includes advanced decision support and predictive analytics.

Whereas initiatives traditionally were allowed to run their course, “plan->monitor->re-assess” means more are at risk of being the focus of budget cuts or outright termination.

SWOT (Strengths, Weaknesses, Opportunities and Threats) continues to be front-and-center, except that the number of corporate assets likely to be impacted by an initiative has increased.

Corporations are finding it more difficult to build competitive advantage based on one or two key assets.

Success comes from innovation in the way clusters of assets are put into service.

Here is a partial list of corporate assets that need to be under constant review:

(Capital, Access to Capital, Land, Equipment, Tools, Premises, Staff, Intellectual Property/Knowhow, Current Products/Services, Products / Services Under Development, Projects Awaiting Approval, Technology Trends, Changing Legislation, Competitors)

Decisions, Decisions, Decisions

It’s not surprising that the process of decision-making has changed.

Whereas, in the past, decisions were often made on the basis of information with a heavy dose of experience and intuition, today’s decision makers look for ways and means of rapidly converting knowledge into information.

Free-Form-Search Kbases are the environment of choice for rapid conversion of knowledge to information, for the following reasons:

  1. Ability to see the big picture at a graphic User Interface,
  2. Built-in connect-the-dots facilities,
  3. Availability of e-map/e-build environments for rapid roll out of operational processes,
  4. Real-time data collection and uploading / consolidation of operational data to Kbases.

The changes I have described here have not been without casualties:

  • Traditional BPM, with its focus on end-to-end processes, is now a core capability under Case that provides background orchestration and governance in respect of the performance of work,
  • Senior management no longer just stares at executive dashboards featuring KPIs, leaving operations to do what they like. Senior management is now able to sandbox their environment and not only challenge trends but also challenge KPIs themselves,
  • CRM has been absorbed into Case,
  • ECM is now embedded in Case.

Survivors include flow-graphing (mid 1950s) and F.O.M.M. (1960s).

Both are alive and well and pretty much “must-haves” for anyone working in business today.

So, what’s next?

  • IoT interconnectivity.
  • Interoperability by and between local and remote 3rd party systems and applications.
  • Predictive analytics.
  • Fewer decision makers as AI kicks in, but the ones who survive will appear to be “smarter”.
Posted in Competitive Advantage, Decision Making, FOMM, Risk Analysis, Strategic Planning

What you don’t know will hurt you


Donald Rumsfeld did us a big favor by describing three categories of knowledge.

“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.” – Donald Rumsfeld, U.S. Department of Defense (DoD) news briefing, February 12, 2002.

He left out one category: “unknown knowns” (i.e. knowledge that organizations have but cannot find or access when needed).

If you are looking to improve competitive advantage, you do not have much control over unknown unknowns, but you need a good handle on the other three categories.

Known knowns and known unknowns are the result of hard work.

The problem of unknown knowns, on the other hand, does not require much more than putting in place a free-form-search Knowledge Base.

FFS Kbases basically ensure that if you have it, you will be able to find it.

As usual, there are no free lunches. FFS Kbases require daily contributions. Otherwise, they quickly become useless. You also have to use them.

The starting position is to acquire FFS Kbase software – make sure you do your homework.
Posted in Decision Making, Knowledge Bases

Where are the ‘Easy’ buttons in BPM?


If you are looking for success with BPM, there are, by my count, 19 hurdles. Different consultants use different approaches; their approaches are effective within some customer cultures and not so effective in others; and the tools used by the consultant and the customer vary.

At the end of the day what counts is the customer journey.


To contain the scope of the discussion, let’s isolate the following as not being part of “BPM” . . .

  • formulating corporate strategies
  • ranking candidate initiatives
  • selecting initiatives in the context of scarce resources, risk and uncertainty
  • authorizing implementation of initiatives

The question becomes which of the following areas of BPM expertise are “easy” at the customer level, which are not . . .

Go ahead and rank these and feel free to critique any you feel should not be part of BPM and add others you feel should be part of BPM.

Straight away I can identify a 20th, which is “ability to carry out CEM within your BPM run-time environment so that you are on the lookout for, and responsive to, customer touch points”.

E = Easy, M = Moderate, D = Difficult

  1. mapping out processes (concept level);
  2. transitioning concept maps to production-level detail;
  3. improving processes prior to rollout;
  4. selecting an appropriate run-time environment (i.e. Case, unless you can suggest something better);
  5. rollout of improved processes to the run-time environment (compiling graphic maps to run-time templates);
  6. setting up Case-level governance;
  7. setting Case objectives;
  8. streaming Case records onto instances of run-time templates;
  9. threading together process fragments;
  10. managing workflow at Cases (skill performance roles);
  11. managing workload at Cases (users prioritizing tasks);
  12. insertion of ad hoc steps (processes of one step, if you like);
  13. interoperability (people, machines, software, at various places);
  14. managing workload across Cases (by supervisors);
  15. assessing progress toward meeting objectives at Cases;
  16. consolidating Case data to KPIs;
  17. challenging KPI trends, KPIs, initiatives, strategies;
  18.  real-time decision support at Cases;
  19. data mining for the purpose of auto-improvement of processes.

o o o

 

Posted in Business Process Improvement, Business Process Management, Case Management, Customer Experience Management

Mini-firestorm at BPM.COM


What Is the Best Way to Build an Executable Process Model?

From a comment E. Scott Menter made on this discussion where he wrote: “Flowcharts (including IMHO BPMN) are simply not a great way to build an executable process model.” What do you think?

As of this morning, we are at response #19, with no clear consensus.

#19 Karl Walter Keirstead

[Reading over the material at this discussion, I think we need a few more rounds before the traditional stampede over to a new question takes place.

Scott riled up a bunch of us by stating “Flowcharts (including IMHO BPMN) are simply not a great way to build an executable process model.”

At the 1st post, Emiel threw one spanner in the works, asking what “executing” means.

Here we are at response #19.

My experience is we can execute a model or we can execute a “best practice” template.

The former is a best practice under construction (albeit at a higher level of summary) whereas the latter is, (graphically, for those who relate to flowgraphs), the “best practice”.

We cannot “execute” graphs, but we can compile/interpret/scan/transform them by carving them up into discrete run-time steps that have various attributes, such as the performance skill level needed, the data collection forms needed to record data, and some mechanism for attesting to the completion or commitment of individual steps.

Scott says the “really fun part” is “press the compile button”. I agree – a lot of behind the scenes things take place when you do this.

The objective is to be in a position, within some run-time environment, where software, machines and/or people can post steps to user InTrays for their attention/action.

Clearly, as one step is completed, we need the environment to provide orchestration by posting the next-in-line steps to the appropriate classes of users.
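A sketch of that orchestration step, assuming each compiled template step carries a skill/role attribute and steps post to an InTray per role; the names are illustrative.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TemplateStep:
    step_id: str
    role: str                  # skill class that should perform the step
    successors: List[str]

def commit_step(step_id: str,
                steps: Dict[str, TemplateStep],
                in_trays: Dict[str, List[str]]) -> None:
    """On commit, post the next-in-line steps to the InTray of the right role."""
    for succ_id in steps[step_id].successors:
        succ = steps[succ_id]
        in_trays.setdefault(succ.role, []).append(succ.step_id)

steps = {
    "triage": TemplateStep("triage", "nurse", ["order_labs"]),
    "order_labs": TemplateStep("order_labs", "physician", []),
}
in_trays: Dict[str, List[str]] = {}
commit_step("triage", steps, in_trays)   # in_trays -> {"physician": ["order_labs"]}
```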

These users really need orchestration – in healthcare, a nurse can easily need to provide services to 20-50 patients per day; each of these patients will typically be at a different step along a private care pathway (read: best practice template), and many patients will be on different pathways at different steps.

Nurses are indeed not robots; it’s totally unrealistic to think that all possible interventions for any patient could be “guided” by a best practice template.

What this means is the nurse needs to be able to insert an ad hoc step at any point.

They also need the ability to skip steps in the best practice, re-visit an already completed step, and record data at a not-yet-current step (i.e. the data has become available) – no point writing the data down on a piece of paper, then waiting for the step to become current and, only then, recording the data in the patient Electronic Health Record.

Lastly, the nurses need to be able to micro-schedule their work, supervisors need to be able to set priorities, supervisors need to level and balance workload across, in this example, nurses.

Can one do all of this on a cellphone? In principle, yes.

Can the run-time system work without orchestration? No. That would mean the organization has no way to manage its best practices.

Is a flowgraph yes/no “a great way” to build an executable process model? You tell me.

How long does it take to build a flowgraph that captures the steps/directional arrows etc.?

How long does it take to build a best practice some other way? What does the result look like?

If you don’t like building flowgraphs, then even if another way takes five times longer, do it, providing a) you have the time and b) the customer is prepared to pay for the end result.

The main reference I drew on in participating in this discussion was the advice given by Dr. Russell Ackoff (The Art of Problem Solving, 1978).

The advice was along these lines: “Decisions involve choices, and if you can’t see the likely outcome of a choice then you cannot make decisions.”

My comment . . . no flowgraphs, no way to see likely outcomes, no way to make decisions.

Except that maybe there IS (+1, Scott) an alternative to flowgraphs, but we need to see/hear what that is.

Linear task lists clearly are the way to go at run-time. Amit uses them, my group uses them.

A live/batch chat capability is a no-brainer (improves the customer experience or journey).

Once you admit that chats are “a good thing” you are admitting to ad hoc steps and your feet are firmly planted in “Adaptive Case Management” (a run-time environment with governance, with background BPM, with embedded CEM, with RALB or auto resource allocation, leveling and balancing, plus lots of other capabilities) that lets you “manage work”.

I have found a few folks who hold the view that “process management” means building, testing, updating paper process maps. OK, that works for a silo whose output is a paper process map.

For others, the end is a new beginning and the next thing to do is roll out the best practice in a run-time format so as to manage, not the process, but the Case that is hosting various (typically) process fragments and ad hoc insertions.

And the purpose of “managing” a Case is to support or contribute to corporate strategy/initiatives.

And the purpose of formulating strategies/initiatives is to build, maintain and enhance competitive advantage.

All of this in support of Peter Drucker’s various statements along the lines of “the purpose of a business is to stay in business”.

A small problem is that for some, the purpose has changed to “let’s buy this company, fire management, put some lipstick on the pig, flip the company and make a lot of quick money”.

]

Posted in Adaptive Case Management, Business Process Improvement, Business Process Management, Case Management, Competitive Advantage, Customer Centricity, Customer Experience Management, Process Management, Process Mapping, R.A.L.B.