Outside-In (O/I) consulting engagements are, in my view, discovery/connect-the-dots exercises.
They don’t lend themselves to procedural laundry-list approaches, particularly when you go from one industry to another.
In many organizations, your typical client has notions regarding who their customers are and what they like/don’t like. Parts are accurate; other parts are wishful thinking; and yet other parts are dated perceptions of their customers.
Most organizations don’t know who their customers should be (would a different mix be in their better interest?), and they certainly don’t know, in most cases, who their customers of the future are going to be.
So, the consultant has to make sure that the fundamentals of all aspects of the organization are documented as a prelude to any Outside/In consulting engagement, preferably in such a way that the information can be examined and tested in multiple ways.
Areas of interest/analysis include: past customers and their characteristics; current customers; future customers; current products/services; in-the-works future products/services; policy/procedure; processes; strategies; legislation; plus competitors and what they are doing.
A practical example:
In respect of new legislation that may have appeared in a particular industry area, we need to know where in the organization’s current policy and procedure a particular clause in the legislation is addressed. And, from the opposite side, where in the new legislation does it say the organization has to do something that is already part of current policy/procedure? Maybe some of the things the organization has been doing no longer need to be done?
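This two-way cross-check amounts to a traceability map between legislation clauses and policy/procedure items. A minimal sketch of the idea follows; the clause and procedure identifiers are hypothetical, invented purely for illustration:

```python
# Hypothetical clause and procedure IDs for illustration only.
legislation_clauses = {"45.1", "45.2", "45.3"}   # clauses in the new legislation
procedures = {"P-100", "P-200", "P-300"}          # current policy/procedure items

# Which procedure(s) address which clause.
clause_to_procedure = {
    "45.1": {"P-100"},
    "45.2": {"P-200"},
}

def unaddressed_clauses():
    """Clauses in the legislation with no supporting policy/procedure."""
    return legislation_clauses - set(clause_to_procedure)

def unmapped_procedures():
    """Procedures not required by any clause -- candidates for retirement."""
    required = set().union(*clause_to_procedure.values())
    return procedures - required

print(unaddressed_clauses())   # clause 45.3 is not yet addressed
print(unmapped_procedures())   # P-300 may no longer need to be done
```

Running both queries over the same map answers both questions at once: gaps the organization must close, and activities it may be able to drop.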
At the end of the day, you have infrastructure, human resources, processes, and customers. All customer experiences derive from processes; processes transform inputs to outputs; infrastructure and human resources are needed to support the processes; and processes need to support strategy. Everything has to fit, so it follows that if you ignore any one dimension, the end result will not be right.
Currently, I have a project to help organizations pre-certify themselves for ARRA Meaningful Use (Healthcare) financial incentives. Here, all of a sudden, there are two customers. The new “customer” is entirely known (it’s the 45 CFR legislation), but the issue is: given an MU-certified EMR system, how does the organization need to change to be in a position, during an audit, to demonstrate that it is making meaningful use of its MU EMR system? (Fail the audit and the government stops the incentive payments.)
This requires knowing the legislation, knowing the EMR product the organization has purchased, and then upgrading policy/procedure and processes to make sure that, at the operations level, the organization is meeting the MU criteria.
In this particular exercise there is a presumption that if healthcare agencies make meaningful use of MU software, the quality of care to the core “customer” will improve. I am not convinced.
So, I recommend to my clients that, whilst in pursuit of MU incentives, they keep their eye on wider ways and means of improving patient satisfaction by continuing to make improvements in areas that increase staff efficiency, increase throughput, and decrease admin/medical errors, given that these are known ways to improve patient outcomes (and satisfaction).
I believe that all multi-dimensional problem/solution investigative exercises greatly benefit from a KnowledgeBase – the alternative is to leaf through hundreds/thousands of documents.
Why work this way when you can have all of the data relating to a corporation accessible from one screen, with the ability to take multiple views of any subset depending on the context?
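The “one screen, multiple views” idea can be sketched very simply: every fact is a record tagged with dimensions, and a view is just a filter over the same underlying records. All record names and fields below are hypothetical, invented for illustration:

```python
# Hypothetical records spanning the dimensions discussed above:
# customers, processes, procedures, legislation, strategy.
records = [
    {"type": "process",   "name": "Intake",    "supports": "Strategy-A"},
    {"type": "procedure", "name": "P-100",     "addresses": "Clause 45.1"},
    {"type": "customer",  "name": "Segment-X", "era": "current"},
    {"type": "customer",  "name": "Segment-Y", "era": "future"},
]

def view(**criteria):
    """Return the subset of records matching all given field=value criteria."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# The same data, examined in different contexts:
current_customers = view(type="customer", era="current")
future_customers  = view(type="customer", era="future")
all_processes     = view(type="process")
```

The point is not the implementation but the shape: one store of facts, examined and tested from whatever angle the current question demands, rather than a shelf of documents read one at a time.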