Where do we go from here with BPM?


I recently participated in an ABPMP LinkedIn group discussion, “What do you think are the most demanded BPM services (for 2015 and beyond)?”

We started off listing the “services” as “discovery, mapping, modeling, analysis, improvement, automation, discovery”.

In my experience, most BPM initiatives that fail do so at “automation”, so this is the service area that should be in demand.

By “automation” I don’t mean having all process steps performed by robots/software. Automation here refers to “orchestration” (guidance) and “governance” (guardrails).

When you have complex processes comprising steps that are often connected in complex ways, where different skills are needed to perform different steps, it’s unreasonable for any BPM initiative to stop at “publication of a paper process map”.

The real-life scenario is that you will have multiple process templates, with multiple instances of each at each Case, and deviations at certain steps along those instances (skipping over a step, re-visiting already committed steps, inserting steps not in the template, recording data at and partially completing steps that are not yet current along an instance).

The other reality is there usually are scarce resources and competing priorities with the result that the time to run through an instance is greater than the sum of step durations.

The fact that users often are forced to multi-task results in extra time above and beyond wait times between steps. Each time a worker suspends work at a step, there is an exit time and then a re-entry time when he/she comes back to the step.

So, it follows that organizations need automated resource allocation, leveling and balancing (RALB).

We can look to background BPM and RALB to provide resource allocation, provided there is an environment to host RALB.

Since workers are not robots and do not like to be treated as robots, organizations need to empower workers to micro-schedule their task loads (leveling).

Since priorities can change, supervisors need to be able to balance workload across workers (balancing).

But, we are not done – we need “governance” in addition to “orchestration” and this, hopefully, comes from the Case environment that is hosting RALB and background BPM.

Next, there is the issue of managing Cases – you can very easily be doing things the right way but not doing the right things.

Following a template can guarantee that you get the right result but it does not mean that use of the template will advance the state of the Case.

In ACM/BPM, unlike straight-through mostly linear processes, there are no convenient objectives at the end of a template.

What counts is progress toward Case-level objectives. Any work that does not directly or indirectly advance the state of a Case should not be performed.

Finally, when defining Cases, there is the issue that Case objectives should at all times be supportive of strategy and, conceptually at least, contribute to “competitive advantage”.

A worthwhile focus for any management consultant for 2015 is to help corporations become more aware of the extent to which their KPIs are relevant in the context of where they want to be 1-2-3 years from now.

Too often, I find they are looking at the wrong KPIs because of failure to keep strategies current and failure to close the gap between operations and strategy.



Decisions, Decisions, Decisions


One of the main benefits of BPM is its potential to provide background real-time decision support at steps along “best practices” process template instances.

Prerequisites to providing functional unit end users with real-time decision-support include:

  1. Ability to roll out compiled process maps to a run-time environment (process steps post automatically to the attention of workers who have the appropriate operational skills and who are available to perform such steps).
  2. Real-time data collection at process steps.
  3. Auto-buildup of data (auto-posting of run-time data to a repository).
  4. Easy user access to current and historical data (data, organized intervention by intervention, in reverse chronological order).
  5. Easy access to algorithms that enrich data (internal as well as local and remote 3rd party algorithms).

The terminology is important – we are looking for “guidance” or assistance but without the burden of rigidity, the expectation being that users will want/need to deviate from best practices where appropriate. Any advice and assistance provided must not encumber users/software/robots.

Let’s see how Decision Support works at BPM process steps.

Decision Support at Process Steps

We need the ability to have decision support at all steps on pathways/sub-pathways.

The range of steps includes ordinary steps, gateway steps, special constructs called “Branching Decision Boxes”, loopback steps and BPM process merge points.

A Process step needs to know, plan side, when it should/can become “current” (BPM flowgraph logic takes care of this), what skill(s) are required for execution of the step, what data display/data collection forms are needed at the step and, optionally, what local instructions may be needed at the step for users who may be new to performing it.

Our starting point for this discussion on decision support for BPM will be data collection forms.

Here, rules local to a form at a process step can carry out range checks on data (e.g. an end time earlier than the start time), block “bad” data (e.g. a date of “13/13/2014”), issue alerts when there is missing data, etc.
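
As an illustration only – rule syntax differs from one BPMs to another – here is a minimal Python sketch of this kind of form-level rule set; the field names and the validate_form helper are invented for the example.

```python
from datetime import datetime

def validate_form(fields):
    """Run local form rules: range checks, bad-data blocks, missing-data alerts."""
    errors, alerts = [], []

    # Range check: end time must not precede start time.
    if fields.get("start_time") and fields.get("end_time"):
        if fields["end_time"] < fields["start_time"]:
            errors.append("End time is earlier than start time.")

    # Block "bad" data, e.g. an impossible date such as "13/13/2014".
    raw_date = fields.get("visit_date", "")
    try:
        datetime.strptime(raw_date, "%d/%m/%Y")
    except ValueError:
        errors.append(f"'{raw_date}' is not a valid date.")

    # Alert (but do not block) when expected data is missing.
    for required in ("patient_id", "visit_date"):
        if not fields.get(required):
            alerts.append(f"Missing value for '{required}'.")

    return errors, alerts

# Example usage
errors, alerts = validate_form({
    "patient_id": "P-1001",
    "visit_date": "13/13/2014",
    "start_time": datetime(2014, 12, 1, 10, 0),
    "end_time": datetime(2014, 12, 1, 9, 30),
})
print(errors)   # two errors: reversed times, invalid date
print(alerts)   # no missing fields in this example
```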

Routings (e.g. Nurse Practitioner, MD, civil engineer, solicitor) take care of performance skill requirements. It is important to point out that individuals can have several skills, some of which are exercised at specific locations.

Local instructions to process steps can be plain text, a diagram or a video.

Branching Decision Boxes along Process Template Instances

Branching Decision Boxes accommodate selective branching to one or more downstream sub-pathways at certain steps along process map templates.

Not all Process Steps need to be “performed” – some steps do nothing more than cause branching to one or more sub-pathways downstream, and we call these steps “Branching Decision Boxes”. In some environments, a decision box can be formed simply by lassoing two or more ordinary process steps and making certain that each has the capability of resolving to TRUE under certain conditions.

Clearly you need a Boolean variable at each step that starts off with a value of FALSE, with the ability to evaluate to TRUE under certain conditions as detailed in a Rule Set.

One easy setup involves parking a Boolean variable at a Form that is attached to each process step and relying on an embedded Rule Set to “fire” steps within the Decision Box (one step only, several steps, or all steps).

Here is a practical example:

c = FALSE

j = 0

If a > 2 then j = j + 1

If b > 5 then j = j + 1

If j = 2 then c = TRUE

Rule sets such as this one are used extensively when scoring questionnaires. Individual questions can use forward, reverse or null scoring, and thresholds (2 and 5 in this example) are used to determine whether responses at clusters of questions meet a target.

If Question “a” has a score of 3 or more, we increment a counter called “j”. If Question “b” has a score of 6 or more, we increment “j” again.

A final rule is needed to resolve “c” to TRUE if the responses to questions “a” and “b” both exceed their respective thresholds of “2” and “5”.
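
For readers who prefer code to prose, here is the same rule set as a small Python function; the variable names and thresholds are taken directly from the example above.

```python
def decision_box_rule(a, b):
    """Return TRUE (fire the step) only when both question scores clear their thresholds."""
    c = False
    j = 0
    if a > 2:          # Question "a" scored 3 or more
        j += 1
    if b > 5:          # Question "b" scored 6 or more
        j += 1
    if j == 2:         # both thresholds met
        c = True
    return c

print(decision_box_rule(3, 6))  # True  -> the step "fires"
print(decision_box_rule(3, 4))  # False -> the step stays dormant
```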

When you are constructing BPM process workflows you will not get far without Branching Decision Boxes. There are very few straight-through processes in the b2b world of today.

A relatively “simple” process map with only a few Branching Decision Boxes can easily yield hundreds or thousands of permutations and combinations. It’s important to “test drive” rolled-out process templates thoroughly prior to production release.

Here are some practical tips.

There are three (3) main types of Decision Boxes, plus a catch-all.

1. Enabling Decision Boxes (branching)

Enabling Decision Boxes have two or more options – if you have outcomes for two options, say #1 and #2, it is always a good idea to include another, say “#3 – Neither #1 nor #2”, for modeling purposes.

Enabling Decision Boxes can be of the sub-type “manual” or “automatic” as well as “single-pick” or “multi-pick”.

“Manual” requires that the user pick an option.

“Automatic” self-executes as and when a decision box becomes current along an instance of a template.

As explained earlier in this article, a simple approach to Enabling Decision Box construction is to use ordinary nodes that you lasso to form a physical Decision Box.

When a Decision Box option or Node does not evaluate to TRUE, the entire sub-path downstream from that Node is dropped from the process template instance.

In order to avoid giving Case Managers the notion that certain pathways/sub-pathways are “disappearing”, make sure each Rule Set includes a Node that evaluates to TRUE when the condition “none of the above” is met. When such Nodes evaluate to TRUE your Case log will show termination of a pathway/sub-pathway.

A Rule Set that needs to cause branching for negative or positive values of a variable called “a” can function with the following:

If a < 0 then engage sub-pathway “A”

If a > 0 then engage sub-pathway “B”

but may cause confusion to users when a = 0.

What is missing is

If a = 0 then engage “EXIT”

where EXIT is a “debug” sub-pathway of one step that posts a warning that reads “Exited at Node ___”.
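
Putting the three rules together, here is a rough sketch; the engage helper and the Node name are placeholders for illustration, not any particular BPMs API.

```python
def engage(sub_pathway, node="N42"):
    """Placeholder for whatever your BPMs does to activate a sub-pathway."""
    if sub_pathway == "EXIT":
        print(f"Warning: exited at Node {node}")  # debug sub-pathway of one step
    else:
        print(f"Engaging sub-pathway {sub_pathway}")

def branch_on_a(a):
    if a < 0:
        engage("A")
    elif a > 0:
        engage("B")
    else:               # the case users find confusing when it is left out
        engage("EXIT")

for value in (-1, 1, 0):
    branch_on_a(value)
```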

Some environments require special constructs for decision branching.

2. Gatekeeper “Decision Box” Steps (inhibiting)

Gatekeeper “decision boxes” have a routing of “system” (i.e. no users see them).

Their sole purpose is to block processing along a pathway/sub-pathway until such time as a set of conditions have been fulfilled.

The quotes highlight the fact that Gatekeeper decision boxes have only one included step; accordingly, there is no need to lasso/construct a physical decision box.

In most BPMs, gatekeeper steps have distinctive icons.

A reasonable question to ask here is what if the set of conditions never gets fulfilled?

Two answers:

One, you may optionally request that each time the rule set at a gatekeeper decision box is tested, an error message will post to the attention of the System Manager if the Rule Set resolves to FALSE.

Two, in the absence of such tracking, Case Managers typically will have set milestones along pathway/sub-pathway instances, with the result that the software system will track lack of progress along the path/sub-path and the manager will take corrective action.

Another question relating to gatekeeper decision boxes is how do they acquire the data that allows the rule set to resolve to TRUE?

The answers are a) direct data entry by some user, b) auto-enrichment of data by an algorithm and c) import of data to the BPMs.
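
Here is a minimal sketch of the gatekeeper idea, assuming a simple callback that posts to the System Manager whenever the rule set resolves to FALSE; the condition names and function signature are illustrative only.

```python
def gatekeeper(conditions, notify_system_manager=None):
    """
    Re-evaluated each time the step is tested: returns True (open the gate)
    only when every condition in the rule set is fulfilled.
    """
    unmet = [name for name, met in conditions.items() if not met]
    if unmet:
        if notify_system_manager:
            notify_system_manager(f"Gatekeeper blocked; unmet conditions: {unmet}")
        return False
    return True

conditions = {
    "lab_results_received": True,
    "consent_form_signed": False,   # filled by data entry, an algorithm, or a data import
}
opened = gatekeeper(conditions, notify_system_manager=print)
print("Gate open" if opened else "Gate still closed")
```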

3. Embedded (hidden) “Decision Boxes”

 Embedded decision boxes are typically invisible, unless, for some reason, you want them to be otherwise.

In the normal course of events, there is no need to indicate to a user that a checkbox, for example, on a data collection form, causes a “link to pathway”.

If the initial step on the linked pathway has a routing that does not match that of the user at the step/data collection form, the user will not see the effects of the processing they engage by clicking the checkbox.
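
A minimal sketch of the idea, assuming a checkbox on a committed form quietly links in a pathway whose first step is only visible to users with a matching routing; the step and skill names are invented.

```python
def on_form_commit(form_data, current_user_skills):
    """Hidden decision box: a checkbox quietly links in a downstream pathway."""
    if form_data.get("order_biopsy"):              # the checkbox
        first_step = {"name": "Schedule biopsy", "routing": "surgical_booking"}
        # The user who ticked the box only sees the effect if their routing matches.
        visible = first_step["routing"] in current_user_skills
        return first_step, visible
    return None, False

step, visible = on_form_commit({"order_biopsy": True}, {"nurse"})
print(step, "visible to this user:", visible)   # linked in, but not visible to a nurse
```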

4. Other

 In some BPMs adding a new Case results in streaming of the Case onto a process template instance.

Here, a default pathway set in the BPMs profile causes the behavior, so we can include new-Case streaming under hidden decision boxes.

Recap

The presumption is that BPM practitioners are motivated to guide their clients along a maturity path where complex processes are mapped, improved and rolled out to a run-time environment capable of providing orchestration & governance (Case level decision support).

Neither you nor your clients will get to this level by staring at paper process maps.

Additional Reading

See “Check your Business Rules on the Way In and Out”

http://wp.me/pzzpB-yz



Patient Continuity of Care – Integrating Patient Information Across Healthcare Services Delivery Facilities


The healthcare industry has a poor track record of facilitating the integration of Patient Information.

Patients have EMRs, typically one at each healthcare services delivery organization where they receive services.

A patient can be receiving services from a General Practitioner, a Specialist, and one or more hospitals.

Rule #1 within any single healthcare services delivery entity is that a record of each patient encounter/service performed needs to go into the patient EMR.

The reason is simple – in the absence of being able to reach anyone who knows the Patient History, a provider has to rely on what is in the EMR for decision-making relating to the Patient.

Clearly, when a Patient is receiving services from more than one healthcare facility, there is a need to consolidate encounter information to avoid duplication of services and reduce medical errors.

How to Route Encounter Information to Patient EMRs

Most healthcare organizations have best practice protocols, and if service delivery to patients involves the receipt of services from multiple providers (internal or external), it is essential to have best practices in-line (not online, not offline).

Providers are simply too busy to look up information.

But, they are trained to consult “the Chart” and can be relied upon to do so, if they are given easy access to it.

RALB

“In line” means there is an RALB (Resource Allocation, Leveling and Balancing) environment that provides orchestration plus governance (i.e. taking care of what, who, why, where and when).

 

Orchestration

Orchestration comes from background BPM (Business Process Management) best practice flowgraph templates (i.e. the system automatically posts steps to user InTrays for attention/action and when one step along a patient care path is committed, the next-in-line step posts automatically to the attention of staff who have the requisite skills to perform such steps).

As each step is committed, a recording is made in the patient EMR (i.e. data, as it was, at the time it was recorded, on the form versions that were in service at that time).
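
As a sketch only, the commit record might look something like the following; the field names are assumptions, the point being that each committed step appends an immutable entry capturing the data as recorded and the form version in service at the time.

```python
from datetime import datetime, timezone

def commit_step(emr_log, patient_id, step_name, form_version, data):
    """Append-only: the EMR keeps the data as it was, on the form version in service."""
    emr_log.append({
        "patient_id": patient_id,
        "step": step_name,
        "form_version": form_version,
        "data": dict(data),                      # copy, so later edits don't rewrite history
        "committed_at": datetime.now(timezone.utc).isoformat(),
    })

emr_log = []
commit_step(emr_log, "P-1001", "Vital signs", "vitals_v3",
            {"bp": "120/80", "pulse": 72})
print(emr_log[-1])
```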

 

Governance

Whereas best practices are “best” most of the time, each patient is different. The number of permutations and combinations of possible interventions is too vast to expect all eventualities to be covered by best practice templates.

Providers often need to skip steps, perform steps in a different sequence, insert steps not in any template, re-visit already committed steps and record data at steps that are not yet current. Rule sets are needed to “rein in” extreme variations away from best practices. As with BPM, the underlying methodology, in this case ACM (Adaptive Case Management), needs to sit in the background.

 

Collaboration

It is common practice for healthcare professionals to discuss “next steps” for patients, and they need easy ways to do this.

Time does not always permit telephone conversations or face-to-face meetings.

The best solutions are Instant Messaging (IM) and secure Point-of-Service (POS) e-mail.

IM clearly is the best approach but it is only “instant” if the sender and receiver are both logged into the software environment at the same time. Otherwise, “instant” could be hours or even days.

POS is an extremely attractive option because each outgoing message is situational/ context appropriate. If your current focus is to carry out a diagnostic assessment and you either have a question or are looking for a “second opinion”, the ideal scenario is to be able to send out a message from the diagnostic assessment step and get back a response at the diagnostic assessment step.

The benefit of POS messaging is you don’t have to provide a lot of information – many times “take a look and give me your opinion” will be sufficient because the software system has a direct link to the patient\pathway\step. All the recipient of a POS message needs to do is log in and he/she will see the step that is the focus of the POS e-mail.

It’s important to point out that in any 24×7 organization, a healthcare professional may ask a question, and then go off shift.

There is no point in the recipient of an e-mail message sending a response back to the healthcare professional; the message clearly needs to go to the POS (i.e. the process step of focus) so that a night-shift healthcare professional who may have seen neither the request nor the response will be able to see both and take appropriate action.

POS messaging can be important even for organizations that run one shift because handoffs are frequent.
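
A rough sketch of why “take a look and give me your opinion” is enough: the message is keyed to patient\pathway\step, so anyone who opens that step – including the night-shift professional who saw neither the request nor the response – sees the whole thread. The addressing scheme below is an assumption, not any particular product’s API.

```python
pos_messages = {}   # keyed by (patient, pathway, step)

def send_pos_message(patient, pathway, step, author, text):
    """Post a message at the point of service; anyone opening that step sees the thread."""
    pos_messages.setdefault((patient, pathway, step), []).append(
        {"author": author, "text": text})

def read_step_thread(patient, pathway, step):
    return pos_messages.get((patient, pathway, step), [])

send_pos_message("P-1001", "Oncology intake", "Diagnostic assessment",
                 "Dr. Day", "Take a look and give me your opinion.")
send_pos_message("P-1001", "Oncology intake", "Diagnostic assessment",
                 "Dr. Evening", "Agree with the working diagnosis; order MRI.")

# A night-shift clinician who saw neither message still sees both at the step.
for msg in read_step_thread("P-1001", "Oncology intake", "Diagnostic assessment"):
    print(f"{msg['author']}: {msg['text']}")
```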

 

Interoperability

A final “must have” capability for Continuity of Care is the ability for any healthcare entity to update Patient EMRs with data from the other healthcare service delivery organizations their patients may be dealing with.

Here, an e-hub is the best solution.

There is no reason why we should not today have patient data transparency, but the healthcare industry has been preoccupied with details relating to message formats as opposed to embracing the concept of generic data exchangers where publishers and subscribers can each read/write data using their own native data element names.

There is no need to standardize on EMR systems or on data transport formats.

e-Hubs of the type designed by CHM of California accommodate data consolidation from any number of clinics, hospitals and labs, and distribution of encounter information on a need-to-know basis for import to healthcare service providers’ patient EMRs.
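
A toy sketch of the generic-exchanger idea: publishers and subscribers each keep their native data element names and the hub maps between them on a need-to-know basis (the element names and mappings below are invented for illustration).

```python
# Each participant registers a mapping between its native element names and hub names.
publisher_map  = {"PT_ID": "patient_id", "ENC_DT": "encounter_date", "DX": "diagnosis"}
subscriber_map = {"patient_id": "MRN", "encounter_date": "VisitDate", "diagnosis": "Dx"}

def publish(record, native_to_hub):
    """Publisher writes with its own element names; the hub stores canonical names."""
    return {native_to_hub[k]: v for k, v in record.items() if k in native_to_hub}

def deliver(hub_record, hub_to_native, need_to_know):
    """Subscriber receives only the elements it is entitled to, under its own names."""
    return {hub_to_native[k]: v for k, v in hub_record.items()
            if k in hub_to_native and k in need_to_know}

clinic_record = {"PT_ID": "P-1001", "ENC_DT": "2015-01-12", "DX": "Type 2 diabetes"}
hub_record = publish(clinic_record, publisher_map)
hospital_view = deliver(hub_record, subscriber_map,
                        need_to_know={"patient_id", "diagnosis"})
print(hospital_view)   # {'MRN': 'P-1001', 'Dx': 'Type 2 diabetes'}
```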

Isn’t it time healthcare joined the 21st century?

 

 


Three-Tier Scheduling and why you need it for ACM/BPM


Most people hate their jobs.

Organizations supposedly hire knowledge workers because of a feeling that workers generally know what to do, how to do it, who has to do the work, why it needs to be done, leaving where and when as run-time issues because of scarce resources and changing customer priorities.

Organizations then often tie down these knowledge workers with bureaucracy/red tape and poor tools.

No wonder motivation and engagement decrease.

Seems to me an infrastructure where a) strategic objectives are at least semi-quantitative and where b) operations activity is at all times supportive of strategy, is a simple formula for success.

It’s hard to get to this level of maturity if you have no strategies, no infrastructure at the operations level for stating objectives and tracking progress toward meeting these, inefficient processes, and improper tools.

Once these hurdles have been overcome, it should all be about 3-tier scheduling – software initially allocates resources based on process map templates, workers micro-schedule their work, supervisors level and balance workload across workers.

And, EVERYBODY maintains a local focus on “project/case” objectives and a wider focus on making sure that no operational activities are undertaken that do not directly or indirectly contribute to strategic objectives.

People like success and will be motivated when they know that their contributions count and engaged when they have venues/tools/support networks that allow them to be efficient and effective.

Here is how most workdays unfold for knowledge workers and their supervisors. Study this carefully and put in place an environment that is capable of supporting the following three (3) levels of scheduling, monitoring and control.

Tier 1

Auto schedulers are able to allocate process steps when steps have predefined Routings. Clearly, we don’t want names of workers as Routings but rather skill categories (e.g. nurse, nurse practitioner, tax accountant, purchasing agent, master electrician). BPM logic can keep track of the order of posting steps to user InTrays and the ideal approach, if you have, say, five (5) shift nurses, is to broadcast any “nurse” step to all five.

The 1st to “take” the step “owns” the step and is expected to complete it. If a hand-off is necessary (e.g. an in-process task at the end of a shift), a specific re-assignment can be made. In some cases, an in-process task may need to go back to the resource pool for pickup by another worker.
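
A bare-bones sketch of the broadcast-then-claim behavior described above; the class and pool structure are assumptions for illustration.

```python
class StepBroadcast:
    """Tier 1: post a step to everyone with the routing; the first to take it owns it."""
    def __init__(self, step_name, routing, pool):
        self.step_name = step_name
        self.owner = None
        self.intrays = [w for w in pool if routing in pool[w]]  # broadcast to matching skills

    def take(self, worker):
        if self.owner is None and worker in self.intrays:
            self.owner = worker          # first taker owns the step
            return True
        return False

    def release(self):
        self.owner = None                # e.g. end-of-shift handoff back to the pool

pool = {"Ann": {"nurse"}, "Bob": {"nurse"}, "Cam": {"tax accountant"}}
step = StepBroadcast("Breakfast medications", "nurse", pool)
print(step.intrays)          # ['Ann', 'Bob'] -- the step posts to all matching workers
print(step.take("Bob"))      # True  -- Bob owns it now
print(step.take("Ann"))      # False -- already taken
```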

Tier 2

At any given moment during any workday, a knowledge worker is likely to have a) fixed-time commitments (e.g. meetings) plus b) a to-do list that may include time-sensitive tasks (e.g. again taking an example from healthcare, “breakfast medications”).

As for the balance of a worker’s to-do tasks, some may be more urgent than others but the only way for a worker to know which tasks have high priority is to have tags at such tasks that indicate the level of priority (e.g. “urgent”). The tags may have been placed at the tasks by the worker, by software, or by supervisors.

The key point is workers must have the ability to micro-schedule their tasks. If a worker has a 9AM meeting that lasts 30 minutes, with another at 10:30 AM, the worker may decide to finish off several small to-do items or try to advance the state of a single large item.
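
As a toy illustration of that micro-scheduling decision, here is a sketch that fills a gap between fixed commitments with urgent items first, then the shortest to-do items; the task names, durations and tags are invented.

```python
def fill_gap(todos, gap_minutes):
    """Pick to-do items that fit a gap, urgent ones first, then shortest first."""
    plan, remaining = [], gap_minutes
    for task in sorted(todos, key=lambda t: (t["tag"] != "urgent", t["minutes"])):
        if task["minutes"] <= remaining:
            plan.append(task["name"])
            remaining -= task["minutes"]
    return plan

todos = [
    {"name": "Update care plan",      "minutes": 40, "tag": "normal"},
    {"name": "Return pharmacy call",  "minutes": 10, "tag": "urgent"},
    {"name": "Sign off lab results",  "minutes": 15, "tag": "normal"},
    {"name": "Draft referral letter", "minutes": 55, "tag": "normal"},
]
# a 60-minute gap between two fixed meetings
print(fill_gap(todos, gap_minutes=60))   # ['Return pharmacy call', 'Sign off lab results']
```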

Tier 3

While individual worker InTrays are being populated with structured tasks by software and with ad hoc tasks invented by workers, supervisors will generally be aware of workload across workers and may elect to level and balance workload by re-assigning some tasks that are pending in one worker’s InTray to another worker.
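
And a sketch of the Tier 3 supervisor view, assuming the simplest possible policy of moving one pending task from the most loaded InTray to the least loaded one.

```python
def rebalance(intrays):
    """Tier 3: move one pending task from the most loaded worker to the least loaded."""
    busiest = max(intrays, key=lambda w: len(intrays[w]))
    lightest = min(intrays, key=lambda w: len(intrays[w]))
    if len(intrays[busiest]) - len(intrays[lightest]) > 1:
        task = intrays[busiest].pop()        # supervisor picks a pending (not started) task
        intrays[lightest].append(task)
        return f"Moved '{task}' from {busiest} to {lightest}"
    return "Workload already balanced"

intrays = {"Ann": ["triage", "discharge summary", "med review"], "Bob": ["triage"]}
print(rebalance(intrays))
print(intrays)
```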

It’s easy to see that task management is greatly simplified in an RALB (Resource allocation, leveling and balancing) environment.

It’s easy to see how difficult task management is in the absence of an RALB environment.

Almost done.

We indicated there are two success factors: scheduling of work and, next, setting and maintaining a focus on objectives.

Unless you have end-to-end processes, objectives obviously cannot conveniently be “parked” at a virtual end point that all processes dovetail into.

In many businesses today, we no longer have ‘processes’. What we have are ‘process fragments’ that get threaded together by people, machines and software at run-time. Process has become a late run time phenomenon – you may only have a “process” the moment a Case is closed.

Real work is made up of a mix of structured and unstructured (i.e. ad hoc) interventions at Cases, so we need a way to ensure that everybody maintains a local focus on “project/case” objectives.

If you research this, you will find various algorithms (percent complete, remaining man hours) but none of these work when you are unable to assign durations to forward tasks within a Case. Aside from not knowing what some of the tasks are until later, we have the problem of guesstimating how long each is likely to take.

A suitable, flexible approach to assessing and tracking progress toward Case objectives is a methodology called FOMM (Figure of Merit Matrices).
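
The mechanics of FOMM are covered at the link below; the general idea, sketched very roughly here under my own simplifying assumptions, is a weighted matrix of Case objectives scored on a common scale, yielding a composite figure of merit plus a list of laggards.

```python
def figure_of_merit(criteria):
    """
    Illustrative only: score each Case objective 0-10, weight it, and report a
    composite figure of merit plus the laggards that need attention.
    """
    total_weight = sum(c["weight"] for c in criteria)
    composite = sum(c["weight"] * c["score"] for c in criteria) / total_weight
    laggards = [c["objective"] for c in criteria if c["score"] < 5]
    return round(composite, 1), laggards

criteria = [
    {"objective": "Medically stable",        "weight": 5, "score": 8},
    {"objective": "Home support in place",   "weight": 3, "score": 4},
    {"objective": "Follow-up visits booked", "weight": 2, "score": 9},
]
print(figure_of_merit(criteria))   # (7.0, ['Home support in place'])
```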

Bottom line, if you want to practice ACM/BPM, make sure your work environment accommodates RALB and get up to speed on FOMM.

http://wp.me/pzzpB-iF


 


I Only Have “Delighted” Clients


In today’s competitive environment it is no longer sufficient to have “satisfied customers”, you need “delighted customers”.

One of my night jobs is video production (corporate spots, stage events) and each time I quote a job I tell the client Rule #1 is they pay nothing if they are not happy with the result.

There is a Rule #2 that says we do as many takes as I say to get it right. More, if the client wants a couple beyond this.

Our group recently recorded content for a 45-second spot for national TV with a well-known personality and it took 22 takes over a couple of hours to get everything right.

What was fascinating was discovering that the subject did not get impatient or roll his eyes as we made fine adjustments to the lighting and sound and were very critical of what he said, how he stood, his gestures, etc.

When we were done, he said “How about we do a couple more?”. The promo has not yet aired, so no idea how audiences will react but we may have picked up a “delighted” customer.

One area I steer clear of is “weddings” – no amount of money, in my opinion, is worth an encounter with a Bridezilla.

Hats off to those who do weddings.

I can easily imagine that most of the people you might encounter in hospital emergency rooms are wedding videographers.



On Accountable Care – Data or Patients, Which Should Come First?


I saw an article on this today – the answer is a no-brainer.

Patient first, data needed for decision-making at the patient level, then other data, in that order.

Aside from all of the mandated data collection, internal healthcare facility needs for data, and the information needed to make bedside decisions, we should never lose focus on the fact that the patient wants out as soon as practicable, with the proviso of no near-term relapse/return.

Does this not tell us that the Case level focus from the time the pt presents should be on discharge planning? (pt gets to go home, beds become available, providers are able to focus on other pts).

The problem of course is that each pt is unique in terms of when they can/should be discharged.

So, we clearly need at the Case level a non-subjective, in-your-face means of tracking progress toward pre-defined discharge objectives (this pt can be discharged because they are in a residence with a 24 x 7 nurse for emergency contact purposes, this pt can go home because they have access to a visiting nurse, this one because they have a caregiver, NOT this one because they live alone and have no support system).

When setting up a set of discharge objectives, some related, some not, it is NOT always essential that all get met, or that all of a short list of pre-defined objectives get met.

The right approach is Cases get closed when Case Managers close them, not when some algorithm posts a discharge advisory.

The Rand Corporation figured out how to manage conflicting needs back in the 1960s.

Adapting this from their application area (missile range, accuracy, payload) to healthcare discharge planning required a bit of out-of-the-box thinking but my group now promotes FOMM (Figure of Merit Matrix) as a default form (spreadsheet actually) at patient Cases.

Want to start using FOMM? No need for anything other than the ability to attach a default-on-load spreadsheet to your EHR.

You can have FOMM working for you by the end of any working day.

Read all about FOMM at

http://wp.me/pzzpB-iF



Streamlining BPM Modeling – Reduce Initial BPM Process Rollout Time by 80%


The objective of modelling a BPM process is to test process templates prior to rollout to a production environment.

Process templates consist of steps, sometimes interconnected in complex ways, where each step requires an indicated level of performance skill and, usually, the collection of context-situation appropriate data to validate performance of steps and for use downstream from such steps.

Some process steps carry out calculations on recorded data, accordingly, modeling includes validation of calculation rules.

Other steps found in process templates include branching decision boxes which require a selection by a user. Some branching decision boxes can be automated in which case modeling includes testing and validation of rule sets at decision box options.

Usual defects in process templates include unnecessary steps, missing steps, steps connected in the wrong order, steps with bad routings, bad forms, bad calculation routines and bad rules.

Other modeling objectives include testing of the User Interface. No point putting in place facilities for doing the right things, the right way, at the right time, using the right resources if users refuse to use the run time software made available to them.

Process modeling can take up a lot of time.

Here is a tip on how to dramatically reduce the time and cost of the initial rollout of BPMs processes for modeling purposes.

Whereas it takes ½ an hour to one hour, or more (depending on complexity), to “paint” a data collection form for production use, you can image 20-40 forms per hour using an ordinary camera on a tripod and use the images for modeling your processes.

When you substitute images of forms for real forms in a 40-50 step workflow that has, say, 20 forms, it takes 2 hours to build the workflow and put in place form images, instead of 12 hours to build/roll out a template that features real forms. This gives you an initial 80% rollout time savings, with no extra cost beyond the cost of a camera and copy stand.

Prove it to yourself using the following protocol:

  1. Drag and drop steps on your canvas/sheet and connect these using directional arcs to build your test workflow.
  2. Assign routings at each step.
  3. Attach to each step a data collection form that features an image control plus a memo control.
  4. Do not spend time automating branching decision boxes – instead, attach a placeholder form to each decision box option and describe the rules that will later be used by a rule set when you automate the processing at the decision box.
  5. Park one or more of your data collection and placeholder forms at each workflow step.
  6. Compile your workflow and piano-play one or more instances of your template in a test run time environment with a small group of stakeholders.
  7. As stakeholders comment on the sequencing of steps, routings, forms at steps, record their comments at the step forms and then immediately update and re-issue your template.
  8. Now replace your forms with data collection/branching decision box forms and calculation algorithms/rules, recompile your template, repeat the piano play.

Compare your time/cost for items #1 through #7 with the turnaround time for initial modelling using the traditional approach of generating paper maps, marking these up by hand, going away to generate new versions of your paper maps, and then hosting another session with the process documentation team.

N.B. Actual initial rollout time using traditional approaches to modeling is a lot higher because of the need to host multiple mapping sessions with stakeholders. Each additional session requires a “settling in” time. You are likely to find that months can be reduced to a few days using “two-pass” rollout of processes.



What simple software can I use in strategy consulting?


It’s a lot easier to respond to specific questions as opposed to trying to invent hypothetical questions and then answer them.

I got the above today from MosaicHub of which apparently I am a member.

My response appears below . . . .

Did I give good advice or bad advice?

What simple software can I use in strategy consulting?

Lots of experience in this area over decades trying out different options.

The thing about strategic planning is you need data on assets, inventory, tools/equipment, subsidiaries, divisions, units, staff, customers, existing products, products under development, competitors, technology trends, changing legislation, sometimes country profiles etc.

Each of these “entities” has data that is specific but some of it will be common, duplicative, contradictory, false, out of date.

What to do?

You need a free-form multi-tree hierarchical dbms where the data is accessible from one screen (10,000 documents, more, it does not matter).

Having the data is the start of your problems.

You can take the route of mining the data, which goes on and on, OR you use a “connect-the-dots” approach that law enforcement uses, which can be very effective.

Your Kbase will be out of date as soon as you figure you have in it what you need, so it’s important to keep it up to date on an ongoing basis.

This is why, however many objects you may be tracking, you need versioning – no point looking at an ROI written up 3 years ago and trying to analyze how things went using current data. You may need to look at the entire timeline.

If you are a consultant, you can get a copy of our CiverManage Kbase, free, but you might need some hand-holding which will cost you a bit. No need to commit to this in the absence of a no-obligation, 1/2 hour web session, where we give you the mouse and let you build a mini Kbase of your choice to see what you are getting into.

Many consultants I work with prefer post-its and whiteboards. I cannot see how I could do a session in Europe in the morning, one in North America in the afternoon and yet another in the Far East at night. For me, it’s on-line, real-time or I spend my time elsewhere.

I am easily reachable, just type Karl Walter Keirstead (200+ blog posts on strategy, operations and bridging the gap).



Tell me an answer, I’ll give you a question


Every day I talk to people in charge of organizations re new initiatives they have in mind.

Before discussions get too involved, I ask them about their “old initiatives”.

  • Do they have in place a formal mechanism for authorizing the allocation of scarce resources to a new initiative?
  • Is there in place a means for assessing risk and uncertainty?
  • Do they have in place mechanisms for prioritizing competing proposed initiatives and selecting from these, the one or two that will give the greatest “bang for the buck”?

Next, when will the new initiative start, how long will it run, do they have the people needed to plan, implement, monitor and control the initiative? What is their Plan B for the scenario where the initiative fails or starts to fail?

The client quickly gets the message that the questions will not stop during the implementation phase of any new initiative.

For initiatives that manage to get going in the absence of a particular tool/methodology, the client expects to hear the same questions, slightly re-phrased i.e. “Do you now have in place …. ?”

Bottom line, if you want to be successful as a consultant, you have to ask a lot of questions.



Back to the future with ACM


I hear complaints on a regular basis regarding long lead times getting BPM processes into production, rigidity in the performance of tasks allegedly “imposed” by BPM processes, and high rates of failure of BPM initiatives.

It’s easy to understand that long lead times on any initiative reduce motivation, particularly if initial estimates are overly optimistic.

Rigidity in the performance of tasks results in staff refusing or being unable to follow process logic.

If either or both of these do not result in failure of an initiative, lack of a means of sustaining the orchestration that BPM is capable of providing probably is the final nail in the coffin.

We can do something about long lead times.

Let functional unit members map out their own processes in real-time, with the help of a facilitator, at first, with on-call participation of IT or BI for rule set construction, bridging processes across functional units and attending to data import/export, but otherwise cut out any middlemen. Do this and units become the prime movers, able to build, own and manage their processes.

Clearly, you won’t get very far with functional units by telling them they have to learn a new language or notation, so steer clear of any “solutions” that only add to the problem.

Complaints about rigidity go away when you say to staff . . .

“Here is a set of your best practices, consistent use of these will give better outcomes but we hire knowledge workers on the premise that you know “what” to do, “how”, “why”. Accordingly, you can deviate from the best practices as you see fit, with the proviso that environment-level governance will rein-in deviations that are not in the best interest of the organization, customers and staff”.

BPM, in conjunction with Resource Allocation, Leveling and Balancing (RALB), addresses “when” and “where”, leaving us to find an environment in which to perform and manage tasks.

That environment is CASE, and all we really need here is a way for workers to be mindful of their fixed-time tasks and to have the capability, in respect of floating-time tasks (“to-do” items, if that better describes these), to micro-schedule these in between fixed-time tasks.

Clearly, part of task performance includes data collection plus declarations of completion at tasks; otherwise, workers downstream from completed tasks will not be able to plan their workloads.

You need to build up a repository (log) of completed tasks in order for any system to provide advice and assistance in respect of the performance of tasks.

The rest of “sustain” involves providing a User Interface where going there to report on task performance is easier than not going there. Workers who tell the software what is going on do not have to entertain multiple phone calls asking “what did you do, what are you doing, when can my task start?”

CASE accommodates any mix of structured and unstructured work and builds a repository. ACM is concerned with the management of Cases.

Looking back at the origins of BPM, Critical Path Method basically had nodes, directional arcs, branching, merge points, milestones plus the ability to anticipate arrival at project end points.

Seems to me we lost a lot of that going from CPM to BPM.

ACM brought us back to the future.


 
