I found a post at the LinkedIn “Workflow/Business Process Management” discussion group, “The Alignment of Six Sigma and BPM”, that asks: “There has been a lot of conversation that you need to choose either Six Sigma or BPM. My premise is that the two can be a very powerful combination when used in a complementary fashion.”
My response was this:
Neither is dead, and they do complement each other, BUT unless an organization is content with conceptual implementations of improved processes, you need to augment Six Sigma plus BPM with ACM (Adaptive Case Management).
The reality of today’s marketplaces is that most organizations have to be able to handle a mix of structured and unstructured work. It can range from 95/5 structured/unstructured, in which case Six Sigma + BPM will be fine, to 5/95 structured/unstructured, where ACM might, on its own, provide satisfactory results, but with diminished control over process fragments (the 5% in theory, more than this in practice).
So, why not make the effort to combine ACM+BPM+6S and be in a position to handle ANY mix of work?
For this you need:
a) a Case environment as a repository and as a decision-support environment;
b) an automated resource allocation, leveling and balancing run-time environment, because no organization works on one process at a time.
The typical day-to-day work scenario is multiple processes where many instances of each are active, and where progress of work is held up by scarce resources (people, equipment, funding).
Each of the instances can have varying priorities that can change on the fly, which is why we distinguish between resource allocation, resource leveling and resource balancing, even re-balancing.
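To make the distinction concrete, here is a minimal sketch of the run-time side: many process instances compete for a scarce resource, the highest-priority instance is served first, and a priority change on the fly triggers a re-balance. All names (`Scheduler`, `submit`, `rebalance`) are illustrative, not the API of any real BPM or ACM product.

```python
import heapq

class Scheduler:
    """Toy run-time scheduler: many active process instances compete
    for one scarce resource; the top-priority instance is served first."""

    def __init__(self):
        self._queue = []    # (negated priority, sequence, instance_id)
        self._seq = 0       # tie-breaker so equal priorities stay FIFO
        self._priority = {}

    def submit(self, instance_id, priority):
        """Resource allocation: an instance joins the queue for the resource."""
        self._priority[instance_id] = priority
        heapq.heappush(self._queue, (-priority, self._seq, instance_id))
        self._seq += 1

    def rebalance(self, instance_id, new_priority):
        """Priorities can change on the fly; rebuild the heap to reflect that."""
        self._priority[instance_id] = new_priority
        self._queue = [(-self._priority[i], s, i) for (_, s, i) in self._queue]
        heapq.heapify(self._queue)

    def next_instance(self):
        """Hand the scarce resource to the current top-priority instance."""
        return heapq.heappop(self._queue)[2] if self._queue else None
```

For example, if "audit-9" starts at priority 1 but is re-balanced to 10, it jumps the queue and is allocated the resource ahead of instances submitted earlier with higher original priorities.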
Everything starts with a set of best practice protocols as templates.
However many decision branch points you build into your workflows, there will ALWAYS be exceptions.
Staff need to be able to skip steps, re-visit completed steps, jump ahead to steps that are not yet current in the workflow, and perform steps that are not in the workflow at all.
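A sketch of what that flexibility might look like in a workflow engine: the best-practice template is just a guide, users can skip, redo, jump ahead, or act ad hoc, and every deviation is recorded so it stays auditable. The class and method names here are hypothetical.

```python
class WorkflowInstance:
    """Illustrative model of a best-practice template that users may
    deviate from: skip a step, redo a completed one, jump ahead, or
    perform a step the template never mentioned."""

    def __init__(self, template_steps):
        self.template = list(template_steps)
        self.done = set()
        self.history = []   # audit trail of (action, step)

    def perform(self, step):
        # The step may be a template step (performed in any order),
        # a redo of a completed step, or entirely ad hoc.
        if step in self.done:
            action = "redo"
        elif step in self.template:
            action = "template"
        else:
            action = "ad-hoc"
        self.done.add(step)
        self.history.append((action, step))

    def skip(self, step):
        """Deliberately pass over a template step; record the deviation."""
        self.history.append(("skip", step))

    def deviations(self):
        """Everything that departed from straight template execution."""
        return [(a, s) for (a, s) in self.history if a != "template"]
```

The point of the `deviations()` view is that the template is encouraged, not imposed: nothing stops the user, but management can always see where and how practice departed from the protocol.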
So, the management approach needs to be to encourage, but not impose, consistency in the use of best practices. Anyone who wants to argue with this has only to use worst practices all of the time to see where that takes them.
I like to consider ad hoc interventions at a Case as “processes of one step”. This makes for a simple model where ad hoc work does not have to receive special handling.
At the end of the day, we arrive at the puzzle of how we can, on the one hand, encourage consistency in the use of best practices, yet, on the other, effectively say “follow the best practices but, if you like, don’t follow them”.
This makes for a scenario where you might be following a best practice, whilst the person sitting next to you is performing the same scope of work as a series of seemingly unrelated ad hoc steps.
Why worry about this if the two workers end up with the same outcome?
The big question is how do you get the same outcome with two opposite approaches to the performance of work?
Answer: in-line background checkpoints at key process points, where software puts up a roadblock if things are too far out of line during processing of an instance.
An example from medicine is the chart review. We know that it is impossible to get a roomful of MDs to agree on how a particular patient should be managed, but we can, at chart review time, have a software system consult a hidden checklist that says: “you need 9 deliverables to complete the chart review; you only have 7, so fix up the deficiency and only then will we move forward.”
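The checkpoint itself can be very simple software. A minimal sketch, assuming the chart's deliverables and the hidden checklist are both lists of identifiers (the function name and message format are made up for illustration):

```python
def chart_review_gate(required, present):
    """In-line checkpoint: compare deliverables attached to the chart
    against a hidden checklist and block progress on any deficiency.
    Returns (ok, message); ok is False when the roadblock goes up."""
    missing = sorted(set(required) - set(present))
    if missing:
        return (False,
                f"{len(missing)} deliverable(s) missing: {', '.join(missing)}")
    return (True, "chart review complete; case may move forward")
```

Note that the gate says nothing about *how* each deliverable was produced, by template steps or by ad hoc ones; it only checks that the required outcomes are present, which is exactly why two workers with opposite working styles can still converge on the same result.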