Three-Tier Scheduling and why you need it for ACM/BPM

Most people hate their jobs.

Organizations hire knowledge workers on the premise that these workers generally know what to do, how to do it, who has to do the work, and why it needs to be done, leaving where and when as run-time issues driven by scarce resources and changing customer priorities.

Organizations then often tie down these knowledge workers with bureaucracy, red tape, and poor tools.

No wonder motivation and engagement decrease.

It seems to me that an infrastructure where a) strategic objectives are at least semi-quantitative and b) operational activity is at all times supportive of strategy is a simple formula for success.

It’s hard to get to this level of maturity if you have no strategies, no infrastructure at the operations level for stating objectives and tracking progress toward meeting these, inefficient processes, and improper tools.

Once these hurdles have been overcome, it should all be about 3-tier scheduling – software initially allocates resources based on process map templates, workers micro-schedule their work, supervisors level and balance workload across workers.

And, EVERYBODY maintains a local focus on “project/case” objectives and a wider focus on making sure that no operational activities are undertaken that do not directly or indirectly contribute to strategic objectives.

People like success and will be motivated when they know that their contributions count and engaged when they have venues/tools/support networks that allow them to be efficient and effective.

Here is how most workdays unfold for knowledge workers and their supervisors. Study this carefully and put in place an environment that is capable of supporting the following three (3) levels of scheduling, monitoring and control.

Tier 1

Auto schedulers are able to allocate process steps when steps have predefined Routings. Clearly, we don’t want names of workers as Routings but rather skill categories (e.g. nurse, nurse practitioner, tax accountant, purchasing agent, master electrician). BPM logic can keep track of the order of posting steps to user InTrays, and the ideal approach, if you have, say, five (5) shift nurses, is to broadcast any “nurse” step to all five.

The first worker to “take” the step “owns” the step and is expected to complete it. If a hand-off is necessary (e.g. an in-process task at end of shift), a specific re-assignment can be made. In some cases, an in-process task may need to go back to the resource pool for pickup by another worker.
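This broadcast/take/release pattern can be sketched in a few lines. Everything here (class names, worker names, the shape of an InTray) is an illustrative assumption, not taken from any particular BPM product:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Step:
    name: str
    skill: str                       # routing is a skill category, never a worker name
    owner: Optional[str] = None

class Tier1Scheduler:
    def __init__(self, workers: Dict[str, str]):
        self.workers = workers                        # worker name -> skill category
        self.intrays: Dict[str, List[Step]] = {w: [] for w in workers}

    def broadcast(self, step: Step):
        # post the step to every worker holding the required skill
        for name, skill in self.workers.items():
            if skill == step.skill:
                self.intrays[name].append(step)

    def take(self, worker: str, step: Step) -> bool:
        # the first worker to "take" the step owns it; withdraw it from all others
        if step.owner is not None:
            return False
        step.owner = worker
        for name, tray in self.intrays.items():
            if name != worker and step in tray:
                tray.remove(step)
        return True

    def release(self, step: Step):
        # hand an in-process task back to the pool for pickup by another worker
        step.owner = None
        for tray in self.intrays.values():
            if step in tray:
                tray.remove(step)
        self.broadcast(step)
```

With five shift nurses registered under the skill "nurse", one `broadcast` posts a step to all five InTrays, and the first `take` removes it from the other four.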

Tier 2

At any given moment during any workday, a knowledge worker is likely to have a) fixed-time commitments (e.g. meetings) plus b) a to-do list that may include time-sensitive tasks (e.g., again taking an example from healthcare, “breakfast medications”).

As for the balance of a worker’s to-do tasks, some may be more urgent than others, but the only way for a worker to know which tasks have high priority is to have tags on those tasks that indicate the level of priority (e.g. “urgent”). The tags may have been placed on the tasks by the worker, by software, or by supervisors.

The key point is workers must have the ability to micro-schedule their tasks. If a worker has a 9AM meeting that lasts 30 minutes, with another at 10:30 AM, the worker may decide to finish off several small to-do items or try to advance the state of a single large item.
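Micro-scheduling of this kind amounts to a gap-filling pass: slot to-do items, highest priority first, into the free time between fixed commitments. A minimal sketch, where all times (minutes past 9:00 AM), task names, durations and the priority scale are illustrative assumptions:

```python
def micro_schedule(meetings, todos):
    """meetings: list of (start, end); todos: list of (name, minutes, priority).
    Times are minutes past 9:00 AM; the workday ends at 8 * 60 = 5:00 PM."""
    PRIORITY = {"urgent": 0, "normal": 1, "low": 2}
    # collect the free gaps between fixed-time commitments
    gaps = []
    cursor = 0
    for start, end in sorted(meetings):
        if start > cursor:
            gaps.append([cursor, start])
        cursor = max(cursor, end)
    gaps.append([cursor, 8 * 60])                    # the rest of the workday
    # place the highest-priority items first, each into the earliest gap it fits
    plan = []
    for name, minutes, tag in sorted(todos, key=lambda t: PRIORITY[t[2]]):
        for gap in gaps:
            if gap[1] - gap[0] >= minutes:           # task fits in this gap
                plan.append((name, gap[0], gap[0] + minutes))
                gap[0] += minutes
                break
    return plan
```

With a 9:00 AM half-hour meeting and another at 10:30 AM, an urgent 20-minute task lands at 9:30, and a 45-minute task that no longer fits before the second meeting slides to after it.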

Tier 3

While individual worker InTrays are being populated with structured tasks by software and with ad hoc tasks invented by workers, supervisors will generally be aware of workload across workers and may elect to level and balance workload by re-assigning some tasks that are pending in one worker’s InTray.
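One hedged way to sketch the supervisor’s leveling-and-balancing move: shift pending tasks from the fullest InTray to the emptiest until the spread is within tolerance. The data shapes and tolerance are illustrative:

```python
def rebalance(intrays, tolerance=1):
    """intrays: {worker: [task, ...]}. Mutates intrays in place.
    A tolerance of at least 1 prevents ping-ponging a single task forever."""
    while True:
        busiest = max(intrays, key=lambda w: len(intrays[w]))
        lightest = min(intrays, key=lambda w: len(intrays[w]))
        if len(intrays[busiest]) - len(intrays[lightest]) <= tolerance:
            return intrays
        # re-assign one pending task from the busiest worker to the lightest
        intrays[lightest].append(intrays[busiest].pop())
```

A real supervisor would also weigh skills and deadlines before moving a task; this sketch only balances counts.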

It’s easy to see that task management is greatly simplified in an RALB (Resource allocation, leveling and balancing) environment.

It’s easy to see how difficult task management is in the absence of an RALB environment.

Almost done.

We indicated there are two success factors: scheduling of work and, next, setting and maintaining a focus on objectives.

Unless you have end-to-end processes, objectives obviously cannot conveniently be “parked” at a virtual end point that all processes dovetail into.

In many businesses today, we no longer have ‘processes’. What we have are ‘process fragments’ that get threaded together by people, machines and software at run-time. Process has become a late run time phenomenon – you may only have a “process” the moment a Case is closed.

Real work is made up of a mix of structured and unstructured (i.e. ad hoc) interventions at Cases, so we need a way to accommodate the principle that everybody maintains a local focus on “project/case” objectives.

If you research this, you will find various algorithms (percent complete, remaining man hours) but none of these work when you are unable to assign durations to forward tasks within a Case. Aside from not knowing what some of the tasks are until later, we have the problem of guesstimating how long each is likely to take.

A suitable, flexible approach to assessing and tracking progress toward Case objectives is a methodology called FOMM (Figure of Merit Matrices).

Bottom line, if you want to practice ACM/BPM, make sure your work environment accommodates RALB and get up to speed on FOMM.



Posted in Adaptive Case Management, Automated Resource Allocation, Business Process Management, Case Management, Operational Planning, Process Management, R.A.L.B., Scheduling

I Only Have “Delighted” Clients

In today’s competitive environment it is no longer sufficient to have “satisfied customers”, you need “delighted customers”.

One of my night jobs is video production (corporate spots, stage events) and each time I quote a job I tell the client Rule #1 is they pay nothing if they are not happy with the result.

There is a Rule #2 that says we do as many takes as I say to get it right. More, if the client wants a couple beyond this.

Our group recently recorded content for a 45-second spot for national TV with a well-known personality and it took 22 takes over a couple of hours to get everything right.

What was fascinating was discovering that the subject did not get impatient or roll his eyes as we made fine adjustments to the lighting and sound while also being very critical of what he said, how he stood, his gestures, etc.

When we were done, he said, “How about we do a couple more?”. The promo has not yet aired, so we have no idea how audiences will react, but we may have picked up a “delighted” customer.

One area I steer clear of is “weddings” – no amount of money, in my opinion, is worth an encounter with a Bridezilla.

Hats off to those who do weddings.

I can easily imagine that most of the people you might encounter in hospital emergency rooms are wedding videographers.


Posted in Case Management, Video Production

On Accountable Care – Data or Patients, Which Should Come First?

I saw an article on this today – the answer is a no-brainer.

Patient first, data needed for decision-making at the patient level, then other data, in that order.

Aside from mandated data collection, a healthcare facility’s internal need for data, and the information needed to make bedside decisions, we should never lose focus on the fact that the patient wants out as soon as practicable, with the proviso of no near-term relapse/return.

Does this not tell us that the Case level focus from the time the pt presents should be on discharge planning? (pt gets to go home, beds become available, providers are able to focus on other pts).

The problem of course is that each pt is unique in terms of when they can/should be discharged.

So, we clearly need at the Case level a non-subjective, in-your-face means of tracking progress toward pre-defined discharge objectives (this pt can be discharged because they are in a residence with a 24 x 7 nurse for emergency contact purposes, this pt can go home because they have access to a visiting nurse, this one because they have a caregiver, NOT this one because they live alone and have no support system).

When setting up a set of discharge objectives, some related, some not, it is NOT always essential that all get met, or that all of a short list of pre-defined objectives get met.

The right approach is Cases get closed when Case Managers close them, not when some algorithm posts a discharge advisory.

The Rand Corporation figured out how to manage conflicting needs back in the 1960s.

Adapting this from their application area (missile range, accuracy, payload) to healthcare discharge planning required a bit of out-of-the-box thinking, but my group now promotes FOMM (Figure of Merit Matrix) as a default form (a spreadsheet, actually) at patient Cases.
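The post does not spell out the FOMM mechanics, but one plausible rendering is a weighted scoring matrix: each discharge objective carries a weight and a 0-10 score, rolled up into one composite figure of merit. The objectives, weights and advisory threshold below are illustrative assumptions, and the output is an advisory only; as noted above, the Case Manager, not an algorithm, closes the Case:

```python
def figure_of_merit(matrix):
    """matrix: list of (objective, weight, score 0-10). Returns a 0-10 composite."""
    total_weight = sum(w for _, w, _ in matrix)
    return sum(w * s for _, w, s in matrix) / total_weight

# illustrative discharge objectives; not all need to be fully met
discharge_matrix = [
    ("vital signs stable for 24h",         5, 10),
    ("caregiver or support system ready",  4,  5),
    ("medications reconciled",             3, 10),
    ("follow-up appointment booked",       2,  0),
]

fom = figure_of_merit(discharge_matrix)                    # weighted average, 0-10
advisory = "review for discharge" if fom >= 7 else "not yet"
```

Because the composite tolerates unmet objectives (the follow-up appointment scores 0 here yet the matrix still clears the threshold), it fits the earlier point that it is not always essential that every pre-defined objective get met.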

Want to start using FOMM?  No need for anything other than the ability to attach a default on load spreadsheet to your EHR.

You can have FOMM working for you by the end of any working day.

Read all about FOMM at


Posted in Case Management, Decision Making, Discharge Planning, Uncategorized

Streamlining BPM Modeling – Reduce Initial BPM Process Rollout Time by 80%

The objective of modeling a BPM process is to test process templates prior to rollout to a production environment.

Process templates consist of steps, sometimes interconnected in complex ways, where each step requires an indicated level of performance skill and, usually, the collection of context-situation appropriate data to validate performance of steps and for use downstream from such steps.

Some process steps carry out calculations on recorded data, accordingly, modeling includes validation of calculation rules.

Other steps found in process templates include branching decision boxes which require a selection by a user. Some branching decision boxes can be automated in which case modeling includes testing and validation of rule sets at decision box options.

Usual defects in process templates include unnecessary steps, missing steps, steps connected in the wrong order, steps with bad routings, bad forms, bad calculation routines and bad rules.

Other modeling objectives include testing of the User Interface. No point putting in place facilities for doing the right things, the right way, at the right time, using the right resources if users refuse to use the run time software made available to them.

Process modeling can take up a lot of time.

Here is a tip on how to dramatically reduce the time and cost of the initial rollout of BPMs processes for modeling purposes.

Whereas it takes ½ an hour to one hour, or more (depending on complexity), to “paint” a data collection form for production use, you can image 20-40 forms per hour using an ordinary camera on a tripod and use the images for modeling your processes.

When you substitute images of forms for real forms in a 40-50 step workflow that has, say, 20 forms, it takes 2 hours to build the workflow and put in place form images, instead of 12 hours to build/roll out a template that features real forms. This gives you an initial 80% rollout time savings, with no extra cost beyond the cost of a camera and copy stand.
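The arithmetic behind that claim, worked through with the article’s own figures (the exact savings come out near 83%, which the article rounds to 80%):

```python
# figures from the article: a 40-50 step workflow with 20 data collection forms
forms = 20
build_workflow_hours = 2          # drag-and-drop steps plus attaching form images
paint_per_form_hours = 0.5        # low end of the half-hour-to-one-hour estimate

# traditional rollout: build the workflow AND paint every form for real
traditional_hours = build_workflow_hours + forms * paint_per_form_hours   # 12 hours

# image-based rollout skips the painting entirely during modeling
savings = 1 - build_workflow_hours / traditional_hours                    # ~0.83
```

At the high end of the painting estimate (one hour per form) the traditional figure grows to 22 hours and the savings exceed 90%, so 80% is a conservative claim.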

Prove it to yourself using the following protocol:

  1. Drag and drop steps on your canvas/sheet and connect these using directional arcs to build your test workflow.
  2. Assign routings at each step.
  3. Attach to each step a data collection form that features an image control plus a memo control.
  4. Do not spend time automating branching decision boxes – instead, attach a placeholder form to each decision box option and describe the rules that will later be used by a rule set when you automate the processing at the decision box.
  5. Park one or more of your data collection and placeholder forms at each workflow step.
  6. Compile your workflow and piano-play one or more instances of your template in a test run-time environment with a small group of stakeholders.
  7. As stakeholders comment on the sequencing of steps, routings, forms at steps, record their comments at the step forms and then immediately update and re-issue your template.
  8. Now replace your forms with data collection/branching decision box forms and calculation algorithms/rules, recompile your template, repeat the piano play.

Compare your time/cost for items #1 through #7 with the turnaround time for initial modeling using the traditional approach of generating paper maps, marking these up by hand, going away to generate new versions of your paper maps, and then hosting another session with the process documentation team.

N.B. Actual initial rollout time using traditional approaches to modeling is a lot higher because of the need to host multiple mapping sessions with stakeholders. Each additional session requires a “settling in” time. You are likely to find that months can be reduced to a few days using “two-pass” rollout of processes.


Posted in Business Process Management, Process auditing, Process Mapping

What simple software can I use in strategy consulting?

It’s a lot easier to respond to specific questions as opposed to trying to invent hypothetical questions and then answer them.

I got the above today from MosaicHub of which apparently I am a member.

My response appears below . . . .

Did I give good advice or bad advice?

What simple software can I use in strategy consulting?

Lots of experience in this area over decades trying out different options.

The thing about strategic planning is you need data on assets, inventory, tools/equipment, subsidiaries, divisions, units, staff, customers, existing products, products under development, competitors, technology trends, changing legislation, sometimes country profiles etc.

Each of these “entities” has data that is specific to it, but some of the data will be common, duplicative, contradictory, false, or out of date.

What to do?

You need a free-form, multi-tree hierarchical DBMS where the data is accessible from one screen (10,000 documents or more, it does not matter).

Having the data is the start of your problems.

You can take the route of mining the data, and that goes on and on, OR you can use the “connect-the-dots” approach that law enforcement uses, which can be very effective.

Your Kbase will be out of date as soon as you figure you have in it what you need, so it’s important to keep it up to date on an ongoing basis.

This is why, however many objects you may be tracking, you need versioning – no point looking at an ROI written up 3 years ago and trying to analyze how things went using current data. You may need to look at the entire timeline.

If you are a consultant, you can get a copy of our CiverManage Kbase, free, but you might need some hand-holding which will cost you a bit. No need to commit to this in the absence of a no-obligation, 1/2 hour web session, where we give you the mouse and let you build a mini Kbase of your choice to see what you are getting into.

Many consultants I work with prefer post-its and whiteboards. I cannot see how I could do a session in Europe in the morning, one in North America in the afternoon and yet another in the Far East at night. For me, it’s on-line, real-time or I spend my time elsewhere.

I am easily reachable – just search for Karl Walter Keirstead (200+ blog posts on strategy, operations and bridging the gap).


Posted in Competitive Advantage, Decision Making, Enterprise Content Management, Organizational Development, Planning, Strategic Planning

Tell me an answer, I’ll give you a question

Every day I talk to people in charge of organizations re new initiatives they have in mind.

Before things get too involved in discussions I ask them about their “old initiatives”.

  • Do they have in place a formal mechanism for authorizing the allocation of scarce resources to a new initiative?
  • Is there in place a means for assessing risk and uncertainty?
  • Do they have in place mechanisms for prioritizing competing proposed initiatives and selecting from these, the one or two that will give the greatest “bang for the buck”?

Next, when will the new initiative start, how long will it run, do they have the people needed to plan, implement, monitor and control the initiative? What is their Plan B for the scenario where the initiative fails or starts to fail?

The client quickly gets the message that the questions will not stop during the implementation phase of any new initiative.

For initiatives that manage to get going in the absence of a particular tool/methodology, the client expects to hear the same questions, slightly re-phrased i.e. “Do you now have in place …. ?”

Bottom line, if you want to be successful as a consultant, you have to ask a lot of questions.


Posted in MANAGEMENT, Operational Planning, Organizational Development, Project Planning, Strategic Planning

Back to the future with ACM

I hear complaints on a regular basis regarding long lead times getting BPM processes into production, rigidity in the performance of tasks allegedly “imposed” by BPM processes and high rates of failure of BPM initiatives.

It’s easy to understand that long lead times on any initiative reduce motivation, particularly if initial estimates are overly optimistic.

Rigidity in the performance of tasks results in staff refusing or being unable to follow process logic.

If either or both of these do not result in failure of an initiative, lack of a means of sustaining the orchestration that BPM is capable of providing probably is the final nail in the coffin.

We can do something about long lead times.

Let functional unit members map out their own processes in real-time, with the help of a facilitator, at first, with on-call participation of IT or BI for rule set construction, bridging processes across functional units and attending to data import/export, but otherwise cut out any middlemen. Do this and units become the prime movers, able to build, own and manage their processes.

Clearly, you won’t get very far with functional units by telling them they have to learn a new language or notation, so steer clear of any “solutions” that only add to the problem.

Complaints about rigidity go away when you say to staff . . .

“Here is a set of your best practices, consistent use of these will give better outcomes but we hire knowledge workers on the premise that you know “what” to do, “how”, “why”. Accordingly, you can deviate from the best practices as you see fit, with the proviso that environment-level governance will rein-in deviations that are not in the best interest of the organization, customers and staff”.

BPM in conjunction with Resource Allocation Leveling and Balancing (RALB), addresses “when” and “where”, leaving us to find an environment in which to perform and manage tasks.

That environment is CASE and all we really need here is a way for workers to be mindful of their fixed-time tasks and have the capability in respect of floating-time tasks (“to-do” items, if that better describes these) to micro-schedule these in between fixed-time tasks.

Clearly, part of task performance includes data collection plus declarations of completion at tasks; otherwise, workers downstream from completed tasks are not able to plan their workloads.

You need to build up a repository (log) of completed tasks in order for any system to provide advice and assistance in respect of the performance of tasks.

The rest of “sustain” involves providing a User Interface where going there to report on task performance is easier than not going there. Workers who tell the software what is going on do not have to entertain multiple phone calls asking “what did you do, what are you doing, when can my task start?”

CASE accommodates any mix of structured and unstructured work and builds a repository. ACM is concerned with the management of Cases.

Looking back at the origins of BPM, the Critical Path Method basically had nodes, directional arcs, branching, merge points, milestones plus the ability to anticipate arrival at project end points.

Seems to me we lost a lot of that going from CPM to BPM.

ACM brought us back to the future.



Posted in Adaptive Case Management, Business Process Management, Case Management, Process Management

Wraparound BPM with CiverOrders and CiverSupplier (promotional content)

Business Process Management (BPM) is a term that picks up process discovery, mapping, modeling and, in the case of complex processes, rollout of improved mapped processes to a run-time environment that supports BAM (Business Activity Monitoring).

If your organization has complex processes and your BPM/BAM journey ends with “communication” of mapped paper processes, your organization is not “managing” its processes.

Consider two travelers starting on a trip by automobile, one using a paper roadmap for navigation, the other a GPS. The traveler with the GPS is using a roadmap that has been “rolled out” to a runtime environment. The GPS provides orchestration. Orchestration greatly increases sustainability of your processes – without orchestration, staff receives training on processes, then slowly reverts to old ways. There is no sustainability.

Some BPMs (Business Process Management systems) accommodate rollout without the need for customization.

You map your processes in a graphic environment using drag-and-drop (steps, connecting arcs, loopbacks), with attributes that include step names, data collection forms needed at steps, skill designations for step performance. No need for difficult-to-learn “languages”, no need to write computer code.

One click compiles your map and deploys or rolls it out (i.e. the software carves up your process into discrete steps and posts these according to the logic of your flowgraph to the InTrays of the right users).

You are now in a run-time “Case” environment that provides automatic resource allocation with your BPM providing background orchestration.

Users see tasks post at their InTrays, they micro-schedule tasks, perform tasks and commit these as they complete tasks. Supervisors level and balance workload across users.

Only when you reach this level of process maturity are you “managing” your processes.

What does “perform” a task mean?

It means . . . .

  1. Taking action at a task (i.e. in a manufacturing setting a task might post as “assemble parts 120A and 120B”),
  2. Recording data on one or more data collection forms at the task (ranging from a simple “done” check-mark to recording various measurements/observations), then,
  3. Declaring the task to be complete (so that the BPMs can post the next-in-line step(s) to the appropriate users).

The software automatically builds a History (date, time, process, instance, step, user, data), and automatically posts the next-in-line task to the appropriate user(s).
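The History-plus-auto-posting behavior can be sketched as an append-only log whose commit step also posts the next-in-line task. Field names follow the article (date/time, process, instance, step, user, data); the function shapes are illustrative assumptions:

```python
import datetime

HISTORY = []      # append-only log: who did what, when, at which step, with what data
INTRAYS = {}      # pending task lists per user

def post_to_intray(user, step):
    INTRAYS.setdefault(user, []).append(step)

def commit(process, instance, step, user, data, next_step=None, next_user=None):
    """Declare a task complete: log one History record, then
    automatically post the next-in-line step to the appropriate user."""
    HISTORY.append({
        "timestamp": datetime.datetime.now().isoformat(),
        "process": process, "instance": instance,
        "step": step, "user": user, "data": data,
    })
    if next_step and next_user:
        post_to_intray(next_user, next_step)
```

Because every commit lands in `HISTORY`, a downstream worker (or supervisor) can see which tasks along a workflow have been completed without phoning anyone.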

The contribution of BPM is to provide background orchestration so that the right things get done the right way, using the right resources, at the right places and the right times.

Important considerations for managing processes are:

  • Users need to consult the History prior to taking action at tasks (the history allows users to see which tasks along a workflow have been completed and which tasks are ‘current’ along a workflow).
  • Users will typically be working on multiple instances of a process (i.e. there may be several active customer orders where there is a need to “assemble parts 120A and 120B”). Users may also be performing work on several processes at the same time, each with more than one instance. One worker may be performing all tasks along a workflow or different tasks may need to be performed by different workers with different skill capabilities. In some settings there may be a need to “hand off” in-progress tasks to other workers (e.g. at change of shift).
  • However much time is spent mapping and improving processes, circumstances arise where it becomes necessary to deviate from “best practice” protocols (i.e. processes). A user may need to re-visit an already committed process step, record data at a not-yet-current step, skip a step, or insert a step that was not part of the process.

Clearly, we develop processes with the expectation that users will make consistent use of these, but with the understanding that deviations away from “best practices” may be required under specific situations and contexts. We need some control over deviations (i.e. global rule sets within the run-time environment as opposed to rule sets that are local to a particular process or process fragment).

Your BPMs ends up providing orchestration (i.e. guidelines provided by BPM process templates) and governance (i.e. guardrails provided by the run time environment).

With reference to our “highway map/GPS” analogy, we could say center lines on a highway provide orchestration whereas the guardrails on both sides of the highway provide governance.

The big question becomes, what type of run-time environment is needed to accommodate structured BPM activity and unstructured “ad hoc” interventions?

  1. You need a User Interface that allows users to see pending tasks, micro-schedule these, and commit tasks. Supervisors need to be able to level and balance workload across users. It turns out the UI can consist of a single split screen, ½ consisting of an InTray, the other ½ consisting of a traditional calendar.

    If you think about it, that is all any of us needs to perform work. We come into the office each day and take note of our fixed-time appointments (calendar) and we then schedule ToDo items (InTray) for performance before or after our fixed-time appointments. We all do this, all day long, every day.

  2. You need a way to set a focus on a particular customer (i.e. establish a cursor position in a relational database management system).
  3. You need a means of auto-recording “user sessions” in a History (i.e. date, time, user, process, instance, step, form data) with a recall capability that allows anyone at the customer record to view data as it was, at the time it was collected, on the form versions that were in service at the time.
  4. Your run-time environment needs to be able to accept data streams from outside the environment and deliver data to local and remote 3rd-party systems and applications.

What is “wraparound”, and how do you achieve it?

Wraparound (i.e. the cycle of process discovery, mapping, modeling, rollout, monitoring, and renewed discovery) around an entity (for example, a customer in business, a patient in healthcare, a case under investigation in law enforcement, an insurance claim) is achieved by consulting the History before engaging processing at process steps.

At a higher level (i.e. across customers, patients, . . .) wraparound is achieved by data mining with the objective of discovering ways and means of improving “best practices”.

If you are looking for wraparound BPM, consider CiverOrders (a drag-and-drop graphic process mapping environment plus compiler) and CiverSupplier (a Case based run-time environment that accommodates any mix of structured versus unstructured work, featuring RALB, CRM, ECM).

Call 1+ 800 529 5355 for more info or e-mail us at

A 5-simultaneous-user run-time system costs as little as $1,500 per year for annual service and maintenance, inclusive of free software updates.


RALB: Resource allocation, leveling and balancing

CRM: Customer Relations Management

ECM: Enterprise Content Management

Run-Time Environment: A user/supervisor workspace where all process step posting and data collection takes place.

Drag-and-drop Graphic Process Mapping environment: a no-coding process map builder* and compiler.

Customization versus Configuration: Customization involves writing computer code that is translated into computer programs and building/maintaining database tables and fields; configuration involves selecting options that an application system uses to address the specific needs of a customer. The differences are twofold: customization takes longer and requires technical staff (i.e. computer programmers), whereas configuration, in the main, can be done by functional unit staff.

*some process steps require “rule sets” consisting of an organized set of “if-then” statements (e.g. “if a > 1 then b = TRUE”). Some vendors call this coding, others say it is not coding. Civerex does not consider rule sets to be “coding”, but not all functional unit members are able to build and maintain rule sets without some assistance from, say, IT department or business analyst staff. Users of CiverOrders/CiverSupplier are otherwise able to map out processes and design custom data collection forms without coding or the need to build database tables/fields.
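A rule set in the “if a > 1 then b = TRUE” style can be sketched as data plus a tiny evaluator, which is what keeps it closer to configuration than to coding. The rules and field names below are illustrative assumptions, not Civerex syntax:

```python
# each rule pairs a condition on the Case context with one assignment
rules = [
    (lambda ctx: ctx["a"] > 1,         ("b", True)),                 # if a > 1 then b = TRUE
    (lambda ctx: ctx["total"] >= 500,  ("route", "manager_approval")),
]

def apply_rules(ctx, rules):
    """Evaluate every rule against the context; fire the ones whose condition holds."""
    for condition, (field, value) in rules:
        if condition(ctx):
            ctx[field] = value
    return ctx
```

A business analyst maintains the rule table; only the generic evaluator ever needs a programmer.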


Posted in Adaptive Case Management, Business Process Management, Case Management

Good Decision-making – The basics and beyond

The overall mission of any organization is to build, preserve and improve competitive advantage.

Each day thousands of corporations start up in many sectors. Many fail and a major reason is bad decisions.

The ones that succeed do so because they leverage their infrastructures. They use their starting equity to acquire plant, equipment and tools. They hire competent and innovative staff, design and build innovative products and services,  study the competition, track market trends and track changing legislation.

Above all, they manage their cost base such that some of the profits can be reinvested.

The members of this group make good decisions.

How do corporations make good decisions consistently?

In order to answer the question we need to know what decision-making is – it’s all about converting information into action.

Except that the information has to be available and it has to be accessible.

Simple decisions are easy to make. You have three people at your front desk; you get a CRM system that allows you to go down to two people. Your ROI says the initiative will reach break-even in nine months. After that, you are generating positive cash flow.

Not so easy to decide which of several proposed initiatives, all competing for the same scarce resources, should be allowed to go forward. Here, each will have a different ROI, a different risk profile, a different uncertainty level. You need a way to identify the more promising of your competing initiatives, select one or more and then prioritize implementation for the ones you select.

Do we go with two smaller, shorter ROIs or, do we go for one longer ROI that has higher risk but a better potential payoff?
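One hedged way to frame the small-versus-large comparison is to risk-adjust each payoff and then rank by speed of return. The initiatives, payoffs, probabilities and the formula itself are illustrative assumptions, not a prescription from this post:

```python
# candidate initiatives competing for the same scarce resources (illustrative figures)
initiatives = [
    {"name": "CRM front desk",   "payoff": 200_000, "p_success": 0.9, "months": 9},
    {"name": "new product line", "payoff": 900_000, "p_success": 0.5, "months": 24},
]

for i in initiatives:
    i["expected"] = i["payoff"] * i["p_success"]     # risk-adjusted payoff
    i["per_month"] = i["expected"] / i["months"]     # speed of return

# rank the more promising initiatives first
ranked = sorted(initiatives, key=lambda i: i["per_month"], reverse=True)
```

Here the shorter, safer ROI edges out the bigger bet once risk is priced in; with a higher success probability, the larger initiative would win instead, which is exactly the judgment call the question above describes.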

There are two types of decisions in organizations – strategic and operational. Let’s see what they have in common.

Background – Mission to Strategy to Initiatives

Most organizations have a mission which is the foundation of the organization i.e. “This is why we are here, this is how we spend our money, this is where we expect to be in, say, three years”.

Linked to the mission statement we typically find several strategic pillars and, attached to each of these, one or more initiatives.

Competing initiatives at a strategic pillar make for easier decision-making than making decisions regarding initiatives that compete for scarce resources across different strategic pillars.

Suppose an organization wants to become a world leader in remote Medical Devices. One strategy might be to make their devices multifunction i.e. measure blood pressure, heart rate, and blood sugar levels.

Another strategy might be to add educational content to the existing set of Medical Devices based on the notion that intellectual challenges decrease the onset of dementia for some patients.

Which of these will contribute more to increasing competitive advantage?

Getting back to our definition of decision-making, it’s important to understand that most decisions are made once, at a particular point in time. Things change.

The information that was on hand for making a particular decision or, should I say, the information that was accessible at the time the decision was made will be different today.

As stated earlier in this article, the information needed to make good decisions must be available and accessible.

How can we consolidate information for effective decision-making by top management (strategic decision making)?

A stack of file folders comprising position papers, spreadsheets and statistical reports probably is the worst way to try to consolidate information.

Digitizing information and putting it in Windows file folders is not much better.

The environment of choice for consolidating information is a free-form search Knowledge Base (KBase).

Two reasons:

  • You can access any of, say, 10,000 documents from a single computer screen.
  • You can engage searches and see what each search was able to find and what the search did not find.

Most software search algorithms only tell you what was found.

Moreover, you have to tell the software where to find what you are looking for. If an address was inadvertently entered into a comment field instead of into the address field, you’re not likely to find it.

Suppose McDonald’s is looking to add a new retail outlet.

One approach is to look for a location where Wendy’s already has an outlet. Another is to look for a location where Wendy’s does not currently have an outlet. A free-form search knowledge base will answer both of these questions in one search.
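As a rough illustration of that two-sided answer, a free-form search can report both the documents that match a term and those that do not. The location records below are invented; only the matched/unmatched split matters.

```python
# Minimal sketch of a free-form search over a document set that reports
# both the documents that match a query term and those that do not --
# the "found" and "not found" answers in a single search.

locations = {
    "Springfield": "Wendy's outlet at 12 Main St, drive-through",
    "Shelbyville": "No fast-food presence; zoning approved for retail",
    "Ogdenville": "Wendy's franchise opened 2012 near the mall",
}

def search(docs, term):
    """Return (documents containing the term, documents that do not)."""
    term = term.lower()
    hits = {name for name, text in docs.items() if term in text.lower()}
    misses = set(docs) - hits
    return hits, misses

has_wendys, no_wendys = search(locations, "wendy's")
```

The point is that `misses` falls out of the same pass as `hits`; most search tools discard it, but for site selection the "where the competitor is absent" list is the interesting one.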

KBases help you connect the dots at the strategy level.

If Knowledge Bases are that useful, why don’t all corporations have them?

Answer – like many other things, they need care and feeding. If you don’t recognize the value of a knowledge base, you won’t set one up. If you set up a knowledge base and don’t use it, you won’t be inclined to maintain it. If you don’t maintain it, you will not be able to use it when you feel you need it.

So, we have a knowledge base, tools that allow us to prioritize initiatives, we give an initiative the green light, what comes next?

Implementing Initiatives

You need an implementation plan for all major initiatives.

If there are multiple steps, connected in complex ways, where different people need to attend to these different steps, you need CPM (Critical Path Method). Managing projects with CPM is all about managing “float”. If you don’t manage “float” you are likely to be late and over budget.
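For readers who want to see what managing “float” means concretely, here is a minimal CPM sketch over an invented four-task plan: a forward pass computes earliest start/finish times, a backward pass computes latest times, and total float is the difference. Tasks with zero float sit on the critical path; any slip there delays the whole project.

```python
# Minimal Critical Path Method sketch over an invented task graph.
# Tasks must be listed in topological order (predecessors first).

tasks = {  # task: (duration, predecessors)
    "design": (3, []),
    "build":  (5, ["design"]),
    "train":  (2, ["design"]),
    "deploy": (1, ["build", "train"]),
}

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for t, (dur, preds) in tasks.items():
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + dur

project_end = max(ef.values())

# Backward pass: latest finish (lf) and latest start (ls).
lf, ls = {}, {}
for t in reversed(list(tasks)):
    succs = [s for s, (_, preds) in tasks.items() if t in preds]
    lf[t] = min((ls[s] for s in succs), default=project_end)
    ls[t] = lf[t] - tasks[t][0]

# Total float: slack before a task delays the project; zero = critical path.
float_ = {t: ls[t] - es[t] for t in tasks}
```

In this example "train" carries three days of float; "design", "build", and "deploy" carry none, so they are the tasks a project manager must watch.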

Hopefully your ROI has a timeline that extends beyond “ribbon cutting”, such that you have resources for training, plus funding to support ongoing maintenance of your initiative. The way to ensure this is to indicate clearly in ROI preparation procedures that ROIs must include all costs and that they will receive periodic reviews.

How do you make decisions regarding operational activity (operational decisions)?

As you move forward to production mode following implementation of an initiative, it pays to have software do most of the heavy lifting.

If you map out your processes in the right environment, you’ll have one-click access to a compiler that can carve up your graph into discrete steps and auto-post those steps as they become current.

If you take care to indicate, plan-side, the resources needed to perform each process step, the data collection forms needed at each step, and any instructional content, RALB (Resource Allocation, Leveling and Balancing) will automatically fill worker InTrays.
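A minimal sketch of that routing-to-InTray step, assuming a simple skill-category match. The worker names, skills, and step text are invented; the point is that a step is routed by skill category, never by a worker’s name, and is broadcast to every matching worker.

```python
# Sketch of plan-side routing feeding worker InTrays: a current step is
# routed by skill category and broadcast to all workers with that skill.
from collections import defaultdict

workers = {"Ann": "nurse", "Raj": "nurse", "Mei": "tax accountant"}
intrays = defaultdict(list)

def post_step(step, routing):
    """Broadcast a current step to every worker whose skill matches the routing."""
    for name, skill in workers.items():
        if skill == routing:
            intrays[name].append(step)

post_step("take blood pressure", routing="nurse")
```

Both nurses receive the step; whichever takes it first would normally cause it to be withdrawn from the other’s InTray.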

When you are set up to do the right things, the right way, at the right time and place, using the right resources, you know you are in good shape.

Now we get to an important part of our discussion i.e. operational decision making.

The first thing to understand is that knowledge workers have different ways of working efficiently. Some like to start their day by finishing off a few small tasks; others prefer to make progress on one large task and work on small tasks later. There may be no consistency from one day to the next, even for the same knowledge worker.

It follows that workers need to be able to micro-schedule their workloads. If you fail to enable this, the ‘A’ (Allocation) in RALB will fail.

Next, supervisors who are sensitive to changing customer priorities and to variations in workload for individual knowledge workers need to be able to level (“L”) and balance (“B”) workload across knowledge workers.
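Leveling and balancing might look something like the sketch below, which repeatedly moves a queued step from the deepest InTray to the shallowest until the spread is at most one step. This is an invented heuristic for illustration, not a description of any particular RALB engine, and it assumes all the workers share the same skill category.

```python
# Invented supervisor-side balancing heuristic (the "B" in RALB):
# shift queued steps from the most-loaded to the least-loaded worker
# until InTray depths differ by at most one step.

def balance(intrays):
    """Even out InTray depth across workers sharing one skill category."""
    while True:
        heavy = max(intrays, key=lambda w: len(intrays[w]))
        light = min(intrays, key=lambda w: len(intrays[w]))
        if len(intrays[heavy]) - len(intrays[light]) <= 1:
            return intrays
        intrays[light].append(intrays[heavy].pop())

trays = {"Ann": ["s1", "s2", "s3", "s4"], "Raj": ["s5"]}
balance(trays)
```

A real supervisor would also weigh customer priority and step deadlines before moving anything, which is why leveling and balancing stays a human decision supported by software.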

RALB – don’t go to the office without it.

Tracking Progress Toward Operational Objectives

So, we have a mission, strategies, implemented initiatives, plus ways and means of allocating resources.

Tracking progress toward attainment of objectives is the next hurdle.

Don’t expect to do this using “manhours-to-go” or “percent-complete” along an end-to-end process.

Most work today is not capable of being orchestrated using end-to-end processes. The usual scenario is to have a series of process fragments that people, machines, and software must thread together at run-time.

Since each customer order, insurance claim, or patient has different needs, you need an environment that supports following “best practices”, with facilities for accommodating deviations from best practices.

The environment of choice for workflow management is Case.

A Case, basically, is a “bucket” that allows you to “park” any and all information relating to a particular customer order, insurance claim, or patient. This sets the stage for easier decision-making.

Your typical Case is likely to have multiple objectives. Not all of these have the same relative importance. You may be able to close your Case with only a sub-set of objectives having been met.

Clearly, Case managers need a non-subjective means of determining the status of each Case (i.e. is the Case going to achieve its originally-stated objectives? On time? Within budget?).

FOMM (Figure of Merit Matrices) work well here.
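One plausible reading of a Figure of Merit Matrix, with invented weights and completion scores: each Case objective carries a relative weight, each gets a 0–1 completion score, and the weighted sum is the Case’s figure of merit, a non-subjective status number a Case manager can trend.

```python
# Illustrative Figure of Merit calculation for one Case.
# Objectives, weights, and completion scores are invented.

objectives = [  # (objective, relative weight, completion 0.0-1.0)
    ("claim documented",  0.5, 1.0),
    ("payment issued",    0.3, 0.5),
    ("customer surveyed", 0.2, 0.0),
]

figure_of_merit = sum(weight * score for _, weight, score in objectives)
```

Here the Case scores 0.65. Because the weights differ, the Case might legitimately close once only the high-weight objectives are met, which matches the point above that not every objective needs attaining.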

KPIs – the Final Frontier

As you have probably noticed, there’s a considerable disconnect between strategy and operations in many organizations. Strategy is the domain of one group of people, operations another.

Rule #1 for success in an organization is that operations must, at all times, be supportive of strategy. If the mission/strategy says the organization builds airplanes, we don’t want money being spent building racing cars.

Since top management does not have time to micro-manage operations, they have to rely on ROIs (don’t authorize initiatives that are not supportive of strategy) and they also need to periodically trend KPIs (Key Performance Indicators).

It’s not too difficult to work out ways and means of consolidating operational data to KPIs. However, it’s best to update KPIs in real-time, not rely on readings based on historical data. And, guard against watching the wrong KPIs.
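As an illustration of real-time updating, a KPI such as “average days to close a Case” can be refreshed incrementally on each new observation instead of being recomputed from historical data in batch. The class and the readings below are invented.

```python
# Sketch of a KPI maintained in real time via an incremental mean,
# rather than periodic recomputation over historical records.

class RunningKpi:
    """Maintains a mean that is refreshed on every new observation."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count  # incremental mean
        return self.mean

days_to_close = RunningKpi()
for reading in [10, 12, 8, 14]:
    current = days_to_close.update(reading)
```

Each closed Case nudges the KPI immediately, so the trend line top management sees reflects today’s operations, not last quarter’s extract.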

The big question is where do you keep your KPIs?

How about back at the KBase?

This closes the strategy -> operations -> strategy loop and gives top management one additional, very powerful benefit: the ability to look at KPIs and engage in what-if and connect-the-dots processing. This can lead to a realization that KPIs need changing (i.e. the KPIs are no longer valid measures of current strategy).

ROIs on the way down (submit/approve), and KPIs on the way up, are the way to bridge the gap between Strategy and Operations.



Patient Experience: $100M in Free Revenue for Your Hospital

Paul raises some excellent points!

Originally posted on Disrupting Patient Access & Experience:

Did you know that fifty percent of patients who have selected a provider will seek a second opinion? That means that fifty percent of the people who chose your hospital will want information from another hospital or physician. It also means that fifty percent of the people who chose a hospital other than yours will be seeking a second opinion, which means many of them will be contacting your institution.

However you do the math, seeking a second opinion means that over the course of the year your hospital will be hearing from as many as tens of thousands of people seeking a second opinion. And what is the question they are really trying to answer?  They are trying to answer the question—should I buy my healthcare for this situation from your hospital?

For a moment, let’s consider the office of the Chief Financial Officer. That person’s job is to make…

