Trans-Pacific Partnership Agreement

If you are a citizen of one of the Trans-Pacific Partnership member countries (Brunei, Chile, New Zealand, Singapore, Australia, Canada, Japan, Malaysia, Mexico, Peru, the United States and Vietnam), you can now download and read the agreement's 6,000 pages across 30 chapters on one computer screen.

As usual with trade agreements involving multiple countries, there are differences in terminology, exceptions and special circumstances that each country will want to cross-compare during ratification.

I don't see analysts scrolling through thousands of pages in MS Word.

When the text is hosted in a free-form-search knowledge base, things get a lot easier.

In the example view below, typing the name of a member country highlights all references to that country and allows a user to selectively browse the agreement. Since the Chapter Forms are customizable, it's very easy to add a "Comments" field and browse the content as part of any connect-the-dots undertaking.
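For readers who like to see the mechanics, here is a minimal sketch, in Python, of the kind of "type a country name, highlight every reference" search described above. The chapter titles and text are placeholders, and the code is illustrative only – it is not the knowledge base product itself.

# Minimal sketch: find and highlight every reference to a member country
# across a set of chapters. The chapter text below is placeholder text,
# not the actual agreement.
import re

chapters = {
    "Chapter 1 - Initial Provisions": "Placeholder text mentioning Chile and Japan.",
    "Chapter 2 - National Treatment": "Placeholder text mentioning Peru.",
}

def highlight_country(chapters, country):
    """Return, per chapter, each sentence that mentions the country,
    with the country name wrapped in >> << markers."""
    pattern = re.compile(re.escape(country), re.IGNORECASE)
    hits = {}
    for title, text in chapters.items():
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if pattern.search(s)]
        if sentences:
            hits[title] = [pattern.sub(lambda m: ">>" + m.group(0) + "<<", s) for s in sentences]
    return hits

for title, sentences in highlight_country(chapters, "Chile").items():
    print(title)
    for s in sentences:
        print("  ", s)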



The importance of pre-conditions and post-conditions in the “new” BPM

Traditional BPM had little need for pre-conditions and post-conditions at process steps.

The combination of flow graph logic and the data collection checks and balances put in place at process steps by BPM flow graph designers provided a reasonable expectation of no-fault processing along BPM processes at run time.


Pre-conditions are needed on arrival at steps; post-conditions are needed on exit from steps.

Photo Credit: Ignacio Evangelista

The situation changed dramatically when the industry needed to start accommodating "process fragments" in addition to traditional end-to-end processes, especially process fragments made up of a single step.

In a run-time Case environment, if I stream a new manufacturing order onto a workflow that has as its first step "ship final product", the Case hosting the processing needs a way to determine whether the usual "design-build-test-QA" steps have been skipped over.

Traditional BPM did not have to worry about this because it was able to rely on an end-to-end process comprising “design-build-test-QA-ship”. On arrival at “ship”, all of the data attesting to completion of upstream steps would typically be on hand.

Not so in Case, where users can do what they like, including not following an end-to-end BPM process and instead undertaking to achieve the same objective by performing a seemingly random (except to the performer) sequence of steps, where the only inferred link between steps is their timing.

It follows that we need pre-conditions at the initial step of key process fragments. In the above example, the processing engine will ask “Do you have a QA certificate?” and refuse to go forward in the event of a “No” response.

Once process designers get used to putting pre-conditions at process fragment start steps, they quickly see no reason not to put pre-conditions at intermediate and final steps along process fragments as well.

Pre-conditions add robustness at process steps that may be impacted by data imports to Cases (e.g. the manufacturer had a fire in the QA lab, the product was sent to an outside lab for QA certification, the results came in via an external transaction, but the data was not streamed onto the process fragment because this type of extraordinary processing was not anticipated in the BPM process map).

You might ask why, with pre-conditions in place, we would also need post-conditions.

The reality is that BPM process maps rarely cover all eventualities, so there can be data at a Case that a process fragment does not have direct access to. Here, the generic fix at the Case is to accommodate both pre-conditions and post-conditions at any process step (end-to-end processes, process fragments, ad hoc steps).
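Here is a minimal sketch, in Python, of pre- and post-conditions attached to a process step. The Case is reduced to a dictionary, and the step name and the QA-certificate check come from the shipping example above; none of this is a reference to any particular BPM/Case product.

# Minimal sketch: attach pre- and post-conditions to a process step.
# The Case is just a dict of data collected so far; a "ship" step
# refuses to run unless a QA certificate is already on hand.

class StepBlocked(Exception):
    pass

def run_step(case, name, action, pre=None, post=None):
    if pre and not pre(case):
        raise StepBlocked(f"Pre-condition failed on arrival at '{name}'")
    action(case)
    if post and not post(case):
        raise StepBlocked(f"Post-condition failed on exit from '{name}'")

case = {"order_id": "MO-1234"}          # no QA certificate yet

try:
    run_step(
        case,
        "ship final product",
        action=lambda c: c.update(shipped=True),
        pre=lambda c: c.get("qa_certificate") is not None,   # "Do you have a QA certificate?"
        post=lambda c: c.get("shipped") is True,
    )
except StepBlocked as e:
    print(e)   # the engine refuses to go forward: the design-build-test-QA steps were skipped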

Pre-conditions and post-conditions are central to a software correctness methodology called "Design by Contract", invented by Dr. Bertrand Meyer in the mid-1980s and implemented in the Eiffel programming language.

For more information on Design by Contract™ see

The author has no commercial affiliation with Eiffel Software.


Patient-Centric Care – Basic EHR Needs

When I was a child, our family doctor did house calls. Visits were particularly remarkable during snowstorms, when the good doctor would arrive with horse and sleigh.

We will never get back to the “good old days” but it is worth documenting the essentials of patient-centric care within the context of today’s “healthcare factories”, where things have, in my view, gone over the edge.

The first requirement is that your family doctor have an EMR.

This is nothing more than an electronic version of the old paper chart, except that it is more accessible and the information can be shared. Assuming interoperability, that is.

The purpose of the e-chart is unchanged relative to its paper-chart precursor. The default view remains a reverse-chronological record of interventions, each with a date/timestamp and a performing-resource "signature". If you click on a hyperlink, a reasonable expectation is to be able to see the data as it was at the time it was collected, on the form versions that were in service at that time.

The reasons for consulting the e-chart are to determine what the most recent intervention was and to provide decision support for current and future interventions.

One way to simplify decision support is to put in place “best practice” protocols to guide the processing of patients.

Clinics/hospitals need best-practice templates, with facilities for streaming patients onto private instances of these templates. Each template needs to consist of an orderly sequence of steps/interventions, with instructions at steps, context-appropriate data collection forms, and routing parameters that indicate which classes of users have the skill sets to perform the steps.
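A minimal sketch of what such a template might look like as a data structure follows. The step names, form names and role names are invented for illustration.

# Minimal sketch: a best-practice template as an ordered list of steps,
# each with instructions, a data-collection form and a routing role.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    instructions: str
    form: str            # name of the context-appropriate data collection form
    route_to: str        # class of users with the skill set to perform the step

@dataclass
class Template:
    name: str
    steps: list = field(default_factory=list)

intake_template = Template(
    name="Initial assessment",
    steps=[
        Step("Intake interview", "Record presenting complaint", "IntakeForm_v3", "intake nurses"),
        Step("Vitals", "Record BP, pulse, temperature", "VitalsForm_v2", "day shift nurses"),
        Step("Physician assessment", "Examine and record findings", "AssessmentForm_v5", "physicians"),
    ],
)

# Streaming a patient onto the template creates a private instance of it:
patient_instance = Template(intake_template.name, list(intake_template.steps))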

The logic for "best practices" is that there cannot be ten best ways to manage a patient with a given set of symptoms/signs and a broadly similar history of interventions.

Except that rigid imposition of protocols (i.e. cookie-cutter medicine) does not work.

Accordingly, clinicians must have the freedom to deviate from best practices. This means "best practices" become guidelines, and the care environment needs to be able to provide governance to prevent extreme, unwanted deviations from those guidelines. Rule sets at the Case level are capable of taking care of this.

So, with the above infrastructure in place (i.e. guidance and governance), the clinic/hospital is in a position to do the right things, the right way, using the right resources. Assuming interoperability, that is.

What’s missing?

Well, two things: Location and Timing.

No point scheduling an intervention that needs a particular piece of equipment or a particular skill when the organization has neither. And, for good outcomes, we need timely interventions, which assumes availability of infrastructure and availability of skilled resources.

For this reason, patient care systems need RALB (Resource Allocation, Leveling and Balancing) capabilities that take best-practice steps and ad hoc interventions and assign them to specific healthcare professionals (e.g. Dr. Jones, in Examination Room 307, with patient Martha Bloggs, on Friday the 16th, 2015, at 1000 hours).

Clearly, in a facility with various teams of skilled resources we need tasks to post to the attention of “day shift radiologists”, not Dr. Jones.

The way RALB works is to post the "next" step along a patient care pathway to the attention of, say, "day shift radiologists"; if there are three of these on shift, the first to "take" the order "owns" the step and is expected to perform it or put it back in the resource pool.

If Dr. Jones takes the order, he/she needs to be able to micro-schedule it in the context of other time demands, so a 1000 hrs scheduled appointment could start at 1025 hrs or 1045 hrs; the point is that the task does not fall between the cracks.

In a large facility, there may be schedulers who offload work from clinicians to other clinicians, so we end up with 3-tier scheduling (allocation, leveling, balancing).

It’s not enough to be efficient in the processing of individual steps along patient care pathways.

On top of this, organizations need to ensure that as one task along a patient care pathway is completed, the next-in-line task takes place without unwanted delay.

So, in a Case Management environment, tasks post to clinician InTrays; following completion of a task, that task clears from the InTray of the clinician who "took" it, and the next-in-line task, as per the flowgraph template instance, posts to the InTrays of the appropriate clinicians.
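Here is a minimal sketch of the post/take/put-back/complete mechanics described above. The roles and task names are hypothetical, and a real RALB engine does considerably more (leveling and balancing in particular).

# Minimal sketch: a task posts to a role (e.g. "day shift radiologists");
# the first clinician to "take" it owns it, can put it back, or complete it,
# at which point the next-in-line task on the care pathway is posted.

class Task:
    def __init__(self, name, role):
        self.name, self.role, self.owner = name, role, None

class InTrays:
    def __init__(self):
        self.queues = {}                                  # role -> unowned tasks

    def post(self, task):
        self.queues.setdefault(task.role, []).append(task)

    def take(self, task, clinician):
        if task.owner is None:                            # first to take it owns it
            task.owner = clinician
            self.queues[task.role].remove(task)
            return True
        return False

    def put_back(self, task):                             # return it to the resource pool
        task.owner = None
        self.post(task)

    def complete(self, task, next_task=None):             # clear it, post the next-in-line step
        if next_task is not None:
            self.post(next_task)

trays = InTrays()
xray = Task("Chest x-ray for Martha Bloggs", "day shift radiologists")
trays.post(xray)
trays.take(xray, "Dr. Jones")                             # Dr. Jones now owns the step
trays.complete(xray, Task("Radiology report review", "attending physicians"))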

Now, the organization is doing the right things, the right way, using the right resources, at the right places and at the right times.

There are two faults with the “best practices” focus.

One is that efficiency at the individual patient care pathway level does not necessarily lead to overall agency-level efficiency. Accordingly, it’s reasonable to expect overrides to best practices “for the greater good”.

A second fault with “best practices” is that effectiveness will be decreased by overhead related to regulatory “long term outcomes” data collection. We need to look to automation to minimize the impact.


“Big Data” poses some challenges for healthcare – Find out how to circumvent these.

A HIMSS discussion group on LinkedIn recently posted the following question: "Smarter, Safer, Cheaper. What's the true promise for healthcare big data?"

Here was my response:

Smarter, safer, cheaper for sure, but there are some challenges, and there are ways of making analytics seamless or of making analytics very difficult.

Rule #1 is that you cannot analyze data you did not collect.

Rule #2 is that you need easy ways of displaying the results of analytics as overlays on best-practice protocols.

Rule #3 is that you cannot follow Rule #2 if you don't have best-practice protocols (preferably in the form of flowgraphs as opposed to written policy/procedure).

In industrial Case Management (one example being military helicopter MRO), no two units that come in have the same configuration, so it's important to have a Case History of all interventions, organized by "session" and presented in reverse chronological order, with the ability to present the data by system/sub-system. The key to MRO is knowing when a particular system/sub-system needs periodic inspection and having data on hand to help predict problems before they actually materialize.

You end up with a lot of data but the data collection is effortless because all work is guided by flowgraphs (do this and then do that), data collection is automated and analytics routinely examine the data to identify ways and means of improving the flowgraphs.

It's no different in healthcare, except that it is much more complicated: the number of permutations and combinations is such that you cannot expect to have end-to-end flowgraphs capable of handling the processing of most individual patients.

So, we end up with best-practice process fragments that healthcare professionals have to thread together manually, with some assistance from software and machines.

The following capabilities are needed:

#a. Skip a step along a best practice protocol.

#b. Revisit an already committed step.

#c. Forward record data at steps not yet current.

#d. Insert a step not in the original protocol.

In all of this it is important to be able to capture the data, so you need a workflow management software environment that automatically logs all data as it is entered, session by session, at the various computer screens used at process steps and at ad hoc intervention steps (i.e. map out your processes plan-side, roll them out, manage tasks via a run-time workflow management environment at Cases, and have data logging in place at the Case Hx with a parallel flow to a "data warehouse" that analytics tools can easily link to).
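A minimal sketch of that kind of Case-level logging follows; the field names and the "data warehouse" hand-off are illustrative assumptions, not a description of any specific product.

# Minimal sketch: every action at a Case - structured step, skip, revisit,
# forward-record or ad hoc insert - is appended to the Case history and
# mirrored to a store that analytics tools can query.
import datetime, json

case_history = []          # the Case Hx (reverse chronological when displayed)
warehouse = []             # stand-in for the parallel "data warehouse" flow

def log(case_id, step, action, data=None, user=None):
    entry = {
        "case": case_id,
        "step": step,
        "action": action,                  # "commit", "skip", "revisit", "ad_hoc_insert", ...
        "data": data or {},
        "user": user,
        "at": datetime.datetime.now().isoformat(timespec="seconds"),
    }
    case_history.append(entry)
    warehouse.append(json.dumps(entry))    # analytics link to this store, not the live Case

log("P-001", "step 5", "skip", user="rn_smith")
log("P-001", "step 3", "ad_hoc_insert", {"note": "repeat bloodwork"}, user="dr_jones")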

The challenge then becomes detecting, via analytics, patterns in the logged data. For a 1-2-3-4-5-6 … 12 workflow, examples include the following (see the sketch after this list):

* Step 5 is almost always skipped. Why then should it continue to be in the protocol as a mandatory step? Either leave it as an optional step, or eliminate the step.

* Step 3 is almost always repeated via an ad hoc intervention following a commit at step 8. The process template probably needs to have a loopback added.

* One or more ad hoc steps are almost always inserted at step 12; why not update the protocol to make these part of the best-practice template?
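Continuing the logging sketch above, detecting those three patterns amounts to counting actions per step across many Cases. The 90% threshold below is an arbitrary placeholder.

# Minimal sketch: scan the logged actions across many Cases and flag
# steps that are almost always skipped, repeated, or followed by inserts.
from collections import Counter

def flag_patterns(entries, total_cases, threshold=0.9):
    skips, repeats, inserts = Counter(), Counter(), Counter()
    for e in entries:
        if e["action"] == "skip":
            skips[e["step"]] += 1
        elif e["action"] == "revisit":
            repeats[e["step"]] += 1
        elif e["action"] == "ad_hoc_insert":
            inserts[e["step"]] += 1
    def frequent(counter):
        return [s for s, n in counter.items() if n / total_cases >= threshold]
    return {
        "make_optional_or_drop": frequent(skips),      # e.g. step 5 almost always skipped
        "add_loopback": frequent(repeats),             # e.g. step 3 repeated after step 8
        "fold_into_template": frequent(inserts),       # e.g. ad hoc steps inserted at step 12
    }

# Usage (hypothetical): flag_patterns(all_logged_entries, total_cases=1000)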

It is helpful to inventory all of the best practices as graphic flowgraphs for easy updating.

Analytics should ideally be able to build virtual meta-Cases that show protocol variations across Cases relating to patients with similar presentations, yielding statistics along the lines of "across 1,000 cases, the cases branched this way 5% of the time, that way 12% of the time, and did not branch 83% of the time".


It’s all about that Case, ‘bout that Case, no Trouble

I figured Meghan Trainor would make it easier to attract folks to my blog.

In healthcare services delivery, your Case Managers need Case Management software. Seems obvious, doesn't it? Why, then, do clinics/hospitals have EMRs that do not have a Patient History as their core pillar?

If you want to provide continuity of care to individual patients you need, first of all, "best practices" workflows to guide intake, assessment, diagnosis, tx planning, tx monitoring and discharge planning.

And, yes, the Government says you also need long-term outcomes data collection so that your patients' great-grandchildren will also have good outcomes.

Clearly, the team of caregivers does not, will not, have the time to stare at paper process maps.

Accordingly, best-practice maps need to be prepared and, for practical purposes, carved up into individual interventions that are automatically posted to the right staff at the right time.

The maps go a long way toward getting the “right” things done, by the “right” people, the “right” way, at the “right” time.

None of this can be done in thin air – “Case” is the environment of choice to host process steps.

Case accommodates any mix of structured and ad hoc interventions.

None of this will come as a surprise to practitioners who are old enough to remember paper charts.

Paper charts used “Case” and there still is no shortage of “Case Managers”, but look at the hoops they have to jump through to do what used to be easy.

What is perplexing is why some designers of EMRs forgot to carry forward the notion of "Case".

Meghan Trainor – All About That Bass, 'bout that Bass, no Treble


Why are we still having problems in the area of Healthcare software?

The question was raised in a recent LinkedIn HIMSS discussion group: "At what point does an EHR implementation become too large an endeavor?"

My response was that this happens at the time you pick a bad EHR "solution". The usual outcome is a need for extensive customization, which extends the timeline and the cost of acquisition.

We don’t hear many stories about how easy/difficult it was for an organization to “customize”, say, MS Excel.

The reason, of course, is that setting up rules at spreadsheet cells is typically referred to as “configuration” as opposed to “customization” and you can configure different spreadsheet “books” to process data in quite different ways.

With EHRs, you can basically look at the software as having four modules (Clinical, Scheduling, Billing, Data Interchange). Not all that different from spreadsheets (i.e. different "books" that perform different functions), right?

The statement “. . the level of effort is mind boggling” seems indeed to be the case with many EHRs, but not all.

My view is that the origin of the mess the healthcare industry is in can be traced back to the way many of the EHRs were designed (requiring customization instead of configuration).

If your EHR ships with a proper scheduler designed for healthcare and a billing engine designed for healthcare, the only things you really need to worry about are Clinical and Data Exchange.


In most Clinical modules you quickly get to the practical problem Dr Liebert raises in respect of clinical work, which, if I am reading it right, is a statement that cookie-cutter medicine does not work.

Practical input like this from experts could have allowed EHR software designers to predict that one-size-fits-all would not work (” . . . the system needs you to do it this way”).

Some went to the other extreme where the customer is allowed to start with a blank sheet and map out clinical best practices as process maps, but with the caveat ” . . . make sure you anticipate all possible eventualities”.

This, too, was “not close, no cigar”.

The lights go on when your Clinical piece has its foundation in ACM (Adaptive Case Management) because, here, you start with a blank sheet and use BPM (Business Process Management) to map your processes to cover most eventualities, without having to stay up nights worrying about all eventualities. The environment takes care of the need to accommodate ad hoc interventions.

BPM provides orchestration; ACM accommodates (via global rule sets) eventualities you did not include in your BPM process maps.

Think of a highway: BPM is the yellow center line (the guidelines); ACM provides the guardrails on each side of the road. You can stray from the guidelines, but extreme excursions are reined in by the guardrails.
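Here is a minimal sketch of the guideline/guardrail split. The mapped sequence and the two rules are invented for illustration; real Case-level rule sets would be far richer.

# Minimal sketch: BPM supplies the mapped sequence (the yellow center line);
# a small set of Case-level rules acts as the guardrails that rein in
# extreme excursions, whatever order the clinician actually works in.

mapped_sequence = ["intake", "assessment", "diagnosis", "tx_plan", "treatment", "discharge"]

guardrails = [
    # (description, predicate over the Case state) - illustrative only
    ("no discharge without a treatment plan",
     lambda case: "discharge" not in case["done"] or "tx_plan" in case["done"]),
    ("no treatment without consent on file",
     lambda case: "treatment" not in case["done"] or case.get("consent") is True),
]

def check_guardrails(case):
    return [desc for desc, ok in guardrails if not ok(case)]

case = {"done": {"intake", "treatment"}, "consent": False}
print(check_guardrails(case))   # -> ['no treatment without consent on file']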

You get to have your cake and eat it.

Don't go out and try to buy an "ACM EHR". ACM is a methodology, not a piece of software.

Data Exchange

The wrong turn with Data Exchange was to reckon that EHRs need to be standardized so that they can exchange data.

One generic standalone Data Exchanger is all you really need and, simply stated, most of what it needs to be able to do is take in data from publishers and make different subsets of that data available to different subscribers.

Publishers clearly want to post using their native data element naming conventions; subscribers want to read using their own native data element naming conventions.

In respect of data types, character strings are fine for most data and you can post complex objects such as video files with an indication of their file type so that subscribers know what program is needed to access/open the file.

Data Exchange under the hood is nowhere near as simple as I have described it here – you need long descriptors for data elements so that potential subscribers can know what it is they are subscribing to. You don't, as a publisher, want to make your data available on other than a strict need-to-know basis. Different EHRs require different data formats.

You end up writing formatters to get data into the Data Exchanger and parsers to get the data out of it, but, guess what? There are only so many different EHRs, so each new formatter/parser gets a little easier.
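A minimal sketch of the publish/subscribe core, with one hypothetical formatter/parser pair, follows. The EHR names, element names and mappings are made up for illustration.

# Minimal sketch: publishers post with their own element names, the exchanger
# stores a neutral copy, and each subscriber reads through its own parser
# that maps back to its native naming conventions, on a need-to-know basis.

class DataExchanger:
    def __init__(self):
        self.records = []
        self.subscriptions = {}     # subscriber -> set of elements it may read

    def publish(self, publisher, record, formatter):
        self.records.append({"source": publisher, **formatter(record)})

    def subscribe(self, subscriber, elements):
        self.subscriptions[subscriber] = set(elements)

    def read(self, subscriber, parser):
        allowed = self.subscriptions.get(subscriber, set())
        visible = [{k: v for k, v in r.items() if k in allowed} for r in self.records]
        return [parser(r) for r in visible if r]

# Hypothetical formatter/parser pair - one per EHR, and each new one gets easier.
ehr_a_formatter = lambda rec: {"patient_id": rec["PtID"], "systolic_bp": rec["BP_SYS"]}
ehr_b_parser    = lambda rec: {"PatientIdentifier": rec.get("patient_id"),
                               "SystolicPressure": rec.get("systolic_bp")}

exchange = DataExchanger()
exchange.publish("EHR-A", {"PtID": "12345", "BP_SYS": "128"}, ehr_a_formatter)
exchange.subscribe("EHR-B", ["patient_id", "systolic_bp"])
print(exchange.read("EHR-B", ehr_b_parser))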

I think the healthcare business could significantly increase efficiency, increase throughput, decrease errors and improve compliance with internal and external rules and regulations, all of which would improve outcomes (i.e. individual patient outcomes and data collection for long term outcomes across different patient populations).

The only way this is going to happen is for buyers of EHRs to spec out what they want/need and accept nothing less.

Remember what you really need is a solution capable of addressing current as well as unanticipated future needs, not just current needs.


What’s your corporate IT strategy?

In the beginning, IT reported to the CFO because the main focus was on automating accounting functions, with help provided to other functional units, time permitting, to address their computing needs. None of the application systems were able to easily exchange data.

In manufacturing and construction, things progressed to where we saw “all-singing-all-dancing” materials management/workflow management applications, with data transfers to accounting.

The focus, at that time, shifted from addressing functional unit needs to addressing the needs of the enterprise, and CIOs started to report to the CEO, not the CFO.

The problem was that strategic planners recognized the need for business analysis capabilities, so they proceeded independently to set up Business Analysis/Business Process Improvement units. These units did not have the skills to evolve sophisticated mind maps capable of consolidating operational-level data up to KPIs, so they did most of their work using spreadsheets.

Meanwhile, most IT departments, focused on filling orders and keeping the lights on rather than following the state of the art in mind maps, had neither the time nor the inclination to acquire the skills needed to select enterprise solutions.

Consolidating the organization’s Business Analysis/Business Improvement capabilities with those of traditional IT became possible as corporations moved to “cloud” solutions, relieving IT of the burden of “keeping the lights on”.

Not all of these transitions went smoothly.

The combined talent pool of internal BA/BI/IT proved, in many cases, to be a poor substitute for a couple of days of senior management consultant on-site time. The competitive advantage of these folks is that they have seen multiple good/bad strategy development/implementation initiatives across different industry areas over several decades.

In the absence of expert advice/assistance, the tendency remains to go for one of the following three options.

Internal Build – the undertaking here is to build the next-generation Ferrari in one's garage. Not impossible – Bill Gates did it and Steve Jobs did it, so why not us?

One-stop-shop – the idea here is to find a system that addresses one's current needs to the extent possible and then proceed to "customize" the system. Vendors love this option. The core problem, of course, is that corporations really don't want/need software that addresses current needs – they need software that can address unanticipated future needs.

Best-of-Breed Integration – this makes a lot of sense PROVIDED that all of the applications you pick have a demonstrable capability of exporting their data and importing data from local and remote third-party systems and applications. Any slip-ups here and you join the queue of customers pleading with vendors for customization.

So, what to do? . . . .

Hire a good consultant for a couple of days on-site.

Pick a reputable, busy, hands-on consultant who starts getting ready to leave from the moment he/she arrives on site.

Their focus will be on analysis/assessment and on training your staff so that they can take over once the consultant has gone off-site. Do your homework in advance. Expect the consultant to also do a lot of homework such that on arrival he/she has a reasonable understanding of your business, your strategy and the current state of your operations.

Do not hire a consultant who borrows your watch to tell you what time it is.

What’s the best possible outcome?

The best possible outcome is likely to come out of generic software that allows operations level staff to build, own and manage their own processes.

They should only have to go to BA/IT for rule-set construction and for interconnectivity between all of the systems you already have in place and do not particularly want to cast aside.

The usual great deficiency in organizations is an inability to provide continuity across silos.

Adaptive Case Management software plus Business Process Management software go a long way toward bridging the gap. Graphic enterprise free-form-search knowledge bases go a long way toward helping corporate stakeholders evolve strategies and track performance at KPIs.

Yes, my group promotes these solution sets and we also use them internally.

A good starting position is to recognize that every organization has "best practices" – it does not matter whether these exist only in the minds of staff or whether they are on paper, on-line or in-line. The main point is that these are their "best practices" and will remain so until they are replaced with improved practices.

The other thing they need to keep in mind is that their Job #1 is to build, maintain and enhance “competitive advantage”. This responsibility is shared across the Board, senior management and all staff. It’s all about “rowing in the same direction”.

What next? Read this blog post again and then again.

Then, take action.


How to fund process improvement initiatives

Every organization has processes; these processes are either documented or undocumented. They may exist only in the minds of workers or be on paper, online, or in-line.

Every organization has “best practices”. In the absence of any declared best practices, the practices in use are the organization’s “best practices” until these are replaced by better practices.

Processes are key for building and enhancing competitive advantage. The singular purpose of a process is to convert inputs to outputs.

All good, except that conversion requires time, money and resources that are typically in short supply.

The question that therefore often arises is which processes should be improved, to what extent, and how should process improvement initiatives be funded?


Option 1 – Fix your broken processes

If a process has bottlenecks, shuts down frequently or produces inconsistent outcomes it’s a prime candidate for improvement. Automated or manual inspection of process run-time logs will typically reveal bottlenecks and shutdowns.

If a step or series of steps show inconsistent outcomes, consider automating these.

A simple financial analysis is usually sufficient to justify fixing broken processes. Some need to be replaced outright.

When fixing processes, consider updating and improving these.

Option 2 – Go for the low-hanging fruit when looking to improve processes

If your organization has a “continuous process improvement” unit, bear in mind that most change is disruptive and that too much tweaking of some processes takes you to diminishing returns. Guard against having a “hammer looking for nails” mentality within such units.

With respect to the identification of candidate processes for improvement it makes sense to focus on easy, inexpensive, fast-track, low-risk, high return initiatives.

Some of these span silos and can have an important impact on an organization. Others do not.

Most processes in this category can be funded out of annual operating budgets.

Option 3 – Manage significant change

Wider, more complex initiatives require ROI submissions.

Objectives need to be clearly defined, benefits need to be stated, resources, timelines and funding need to be put in place.

Periodic monitoring is needed with, at times, re-setting of objectives.

Why many ROI-based initiatives fail.

It sounds simplistic, but going through the motions of preparing an ROI and then not bothering to monitor performance, time, cost and outcomes over the life cycle of the initiative means the benefits declared in the ROI stand a good chance of not being attained.

The reason is that things change between the time an ROI is prepared and the time the initiative is implemented – if things have changed to the point where the projected ROI is trending negative, it is important to "know when to hold and when to fold".

Worst case scenarios include being leapfrogged by a competitor before “new technology” gets to market. It may be best to shut down an initiative and put your scarce resources to initiatives with more promising outcomes.


Where's The Beef? – An under-the-hood look at "seamlessly integrated" BPMs

I keep reading about "seamlessly integrated" BPMs (Business Process Management Systems), but when I start to look under the hood, it quickly becomes obvious that many content authors have no clue regarding the needs of organizations in the area of process management.

The reality is they should be talking about "hay-wired" BPM modules.

Most of the product pitches start with a description of a process mapping environment. Some of the descriptions go on and on about languages/notations. This tells us that the environment being promoted is not end-user oriented.

No end-user wants to learn a new language/notation nor do they want to hire or have to liaise with a team of process experts or software development intermediaries.

The real experts for any process are the folks who work with that process day in/day out. Chances are you need facilitators to drag them out of their silos but with minor prompting, end-users can build, own and manage their processes.

Next in the list of capabilities we learn that there are “interfaces” that convert the graphic process map into, it seems, a run-time template that is “not rigid”.

What "interface" exactly would one possibly want other than a "rollout" button? If there is more to it than this, that is a dead giveaway that the setup is too complicated and unworthy of the tag "seamlessly integrated".

Same for "not rigid" – given that we know managing any Case requires being able to deal with a random mix of structured and ad hoc interventions.

Any detailed explanation about a BPM template not being “rigid” is a smokescreen for inadequacy in this area.

We all know that at any step along any process template you may be required to reach out to a customer (or some outside system/application) or accept input from a customer or outside system/application. If "rigidity" has to be highlighted, other than in a historical account of the evolution of BPM, the setup is too complicated.

Strike three!

I could quit here but if any readers are still with me, I am not yet done with the rant.

Here goes – at any process template step it’s a given that users will need to collect data.

These software systems therefore need at least one form at each step/intervention and, from an end-user perspective, the form needs to be local to the step.

Same for all manner of .txt, .doc, .ppt, .pdf and .xls files, and even video/audio recordings, that may relate to a step. All of these need to be attributes of steps, not off in some "shared" repository.
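A minimal sketch of a step that carries its own form and attachments as attributes, rather than pointers into a shared repository, follows. The names and file types are illustrative.

# Minimal sketch: the form and any .doc/.pdf/video attachments live on the
# step itself; clicking a .doc attachment would hand off to MS Word rather
# than re-implementing a word processor inside the Case environment.
from dataclasses import dataclass, field

@dataclass
class Attachment:
    filename: str
    opens_with: str          # external program needed to access/open the file

@dataclass
class ProcessStep:
    name: str
    form: dict = field(default_factory=dict)          # local data collection form
    attachments: list = field(default_factory=list)   # attributes of the step, not a shared repo

step = ProcessStep(
    name="Review shipment paperwork",
    form={"carrier": None, "waybill_no": None},
    attachments=[Attachment("packing_instructions.doc", "MS Word"),
                 Attachment("loading_dock_video.mp4", "video player")],
)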

What end-users want/need is real “seamless integration” and a modest amount of “interfacing.”

Clearly, when they click on a .doc attribute at a step, they want interfacing with MS Word, not replication of word processor functionality within the Case environment.

Why multiple references to Case?

The thing is, we want the primary focus to be on reaching objectives, and if "work" has become a combination of ad hoc and structured interventions, we pretty much need to host all of this in a run-time environment that lets users decide what to do next, when and how (ad hoc intervention capability), with some "guidance" provided by background BPM process templates. Clearly, it's not all about BPM.

We also need to look to the environment to "rein in" extreme excursions away from policy/procedure (global case-level rule sets, if you like). Otherwise you have no "governance".

Case provides all of the necessary functionality, including data exchange plus auto-buildup of a longitudinal history of interventions (how otherwise would a user be able to decide what the next intervention should be at a Case?).

The real icing on the cake in some of these nonsensical pitches is references to “process objectives”.

If you no longer have “processes” (what we have today are “process fragments” that get threaded together by users, robots and software at run time), how can/should we expect to find objectives at process template end points?

No processes, no convenient end points, no objectives.

The answer is objectives have gone from being plan-side attributes of process maps to run-time attributes of Cases.

Once you venture into areas where there is a need to accommodate a random mix of ad hoc and structured interventions (i.e. most business areas today, except where we may find a few not-yet-automated end-to-end processes), it is the Case Manager who decides what the objectives are for a Case, and it is they, not BI analysts or IT, who park these objectives at each Case.

Case Managers also monitor progress toward meeting Case objectives, and this typically requires fairly complex, non-subjective decision-making, not a simple count of the remaining steps along a process template pathway.

See posts I have made on FOMM (Figure of Merit Matrices).
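For readers who have not seen those posts, here is a heavily simplified sketch of figure-of-merit style scoring against Case objectives. The dimensions, weights and ratings are invented for illustration and do not reproduce the FOMM approach in detail.

# Heavily simplified sketch: score progress toward Case objectives on several
# weighted dimensions instead of counting remaining template steps.
# Dimensions, weights and scores below are invented for illustration.

weights = {"clinical_progress": 0.4, "patient_satisfaction": 0.2,
           "cost_vs_budget": 0.2, "timeline": 0.2}

def figure_of_merit(scores, weights):
    """scores: dimension -> 0..10 rating supplied by the Case Manager."""
    return sum(weights[d] * scores.get(d, 0) for d in weights)

case_scores = {"clinical_progress": 7, "patient_satisfaction": 9,
               "cost_vs_budget": 5, "timeline": 6}
print(round(figure_of_merit(case_scores, weights), 2))   # -> 6.8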

Just last week I read some promotional material announcing a ‘transition’ to “Case”.

I pointed out to the authors that Case is not new, that it is actually a term borrowed from the legal profession, and that it was alive and well in UK medicine from as early as 1603.

They have not thus far responded to my objections.

It’s easy to determine within healthcare whether there is a focus on Cases.

Just walk into a clinic/hospital and ask if you can talk to a Case Manager. You will probably have a roomful of choices and some of these folks have been doing Case Management for decades. They have job titles, job descriptions and really perform “Case Management” day-in/day-out.

Most of my readers, aside from members of the flat-earth society, are starting to get this. Except that the way things have been going lately, we may very well soon have a flat earth, so they may end up having the last laugh.

“Case” – not new, not close, no cigar.


3D Strategic Planning – What you need to know about it

Strategic Planning is a "connect-the-dots" exercise that usually starts off like this . . .

In order to develop strategy you need information on multiple Entities relating to your business (e.g. Capital, Access to Capital, Land, Equipment, Tools, Premises, Staff, Current Products/Services, Products/Services Under Development, Projects Awaiting Approval, Technology Trends, Changing Legislation, Competitors).

We know that decision-making involves the transformation of information into action. The problem is you cannot make decisions if you cannot easily find and access the information needed for such decisions.

For any given opportunity, arriving at a go/no-go decision will impact one or more of these Entities.

One added complexity is the way information belonging to one Entity interacts with information belonging to another Entity.

This is where we make the link between traditional strategic planning (deciding how to make best use of scarce resources on initiatives that track with corporate mission) and the “connect-the-dots” approach used by law enforcement for investigations.

The key point is the law enforcement “connect-the-dots” approach can be “borrowed” for corporate strategic planning purposes.

Here is a typical practical scenario:

An opportunity has been identified; the sponsors present to the Board, and the Board now has to assess the opportunity, assign a ranking, assign a priority and, if the project makes its way through these filters, allocate funds to allow the initiative to move forward.

Different opportunities impact different Entities in different ways.

It follows that if you are consolidating information relating to corporate Entities you may need to provisionally allocate assets/resources to several competing opportunities.

All in the interest of making better decisions.

One way to do this is to consolidate all Entity information for the corporation at a graphic knowledge base and then alias the information relevant to each opportunity for local consultation at that opportunity. This allows you to toggle between the "big picture" and individual opportunities, with each opportunity able to "see" competition from others for the use of scarce resources.

If you find your favorite proposed initiative has a ranking below another initiative you can perhaps do more research on the merits/benefits of your initiative and improve its ranking.

The more you are able to “connect-the-dots” between initiatives and their “draws” on scarce resources, the greater the potential quality of your decision-making at and across initiatives.
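Here is a minimal sketch of ranking competing opportunities against their draws on scarce resources. The entities, capacities and benefit scores are placeholders.

# Minimal sketch: each opportunity declares its draw on scarce resources;
# rank by benefit score and provisionally allocate in rank order, flagging
# anything that would over-commit a resource.

capacity = {"capital": 1_000_000, "engineers": 12}

opportunities = [
    {"name": "Opportunity A", "benefit": 8.5, "draw": {"capital": 600_000, "engineers": 7}},
    {"name": "Opportunity B", "benefit": 7.0, "draw": {"capital": 500_000, "engineers": 8}},
]

ranked = sorted(opportunities, key=lambda o: o["benefit"], reverse=True)

committed, funded = {r: 0 for r in capacity}, []
for opp in ranked:                                   # provisional allocation in rank order
    if all(committed[r] + opp["draw"][r] <= capacity[r] for r in capacity):
        funded.append(opp["name"])
        for r in capacity:
            committed[r] += opp["draw"][r]

print(funded)   # -> ['Opportunity A']  (B would over-commit both capital and engineers)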

Why 3D?

Well, you will discover soon enough that trying to build a single hierarchy comprising, say, 500,000 information points on one computer screen requires the use of 3D "Russian Doll" or "Cascading Folder" mapping, as illustrated in the US State Dept Country Profiles demo database (all countries; business, travel, law enforcement, narcotics, terrorism, etc.).

Try that on paper or whiteboard with post-its.

What you need is a graphic free-form search knowledge base capability that accommodates any number of separate hierarchies, with “connect-the-dots” facilities and with the ability to quickly zoom in from the “big picture” to low-level detail and back.
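A minimal sketch of the "Russian Doll" idea – nested hierarchies you can search and drill into from the big picture down to the detail – follows. The hierarchy contents are placeholders, loosely echoing the Country Profiles example.

# Minimal sketch: a nested ("Russian Doll") hierarchy with a free-form search
# that returns the drill-down path to every matching node.

knowledge_base = {
    "Countries": {
        "Chile": {"Business": "…", "Travel": "…", "Law Enforcement": "…"},
        "Japan": {"Business": "…", "Travel": "…"},
    },
    "Corporate": {
        "Entities": {"Capital": "…", "Equipment": "…", "Staff": "…"},
    },
}

def search(node, term, path=()):
    """Yield the path from the top level down to every node whose label matches."""
    for label, child in node.items():
        here = path + (label,)
        if term.lower() in label.lower():
            yield " > ".join(here)
        if isinstance(child, dict):
            yield from search(child, term, here)

print(list(search(knowledge_base, "chile")))   # -> ['Countries > Chile']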

At the end of the day, it’s all about how you like to look at your corporation – you really only have two choices . . .

Like this . . .

. . . or like this.
Think about this article the next time you go to a meeting pushing a cart with a 3-foot pile of reports, papers and spreadsheets.

Key Words: Strategic planning, Connect-the-dots, knowledge bases
