Success in Business – Reach for the Top With ACM 360


Success in business is all about building and enhancing competitive advantage.

Strategy “rules” – if you don’t know where you want to be, you won’t get there.


No point evolving plans that don’t make efficient/effective use of scarce resources. No point having plans that never get implemented. No point implementing plans that are not monitored.

Plan, monitor, control.

We knew that, but how do we actually make it happen?

  1. Go for the big picture for strategy formulation/management

If you can’t see it, you can’t include it or exclude it.

Seeing the big picture allows you to assess and prioritize initiatives.

Graphic free-form search Knowledgebases give you the big picture.

Kbases – don’t try to evolve/manage strategy without these.

Use the following model:

Corporate assets inventory -> strategy -> KPIs -> candidate initiatives -> ROIs -> Authorized Initiatives

     2. Go with Adaptive Case for operations management

Case is capable of hosting any mix of ad hoc and structured interventions. Look to background BPM to provide guidance re structured interventions. Look to Case level rule sets to provide governance.

A Case can store any object including data element values, .pdf, .doc, .txt, video/audio recordings and spreadsheets.

The latter are capable of providing a framework for assessing progress toward meeting ROI goals/objectives. The methodology of choice for non-subjective assessment of progress is FOMM (Figure of Merit Matrices).

Case environments automatically build longitudinal histories with date-timestamped, user-“signed” hyperlinks that allow viewing of data as it was, at the time it was collected, on the form versions that were in service at the time.
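By way of illustration only (the class and field names below are mine, not taken from any particular Case Management product), a Case History boils down to an append-only log whose entries each carry a timestamp, an actor “signature” and a reference to the form version that was in service at the time:

    # Minimal sketch of an append-only Case History (illustrative names only).
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class CaseHistoryEntry:
        case_id: str
        step_name: str
        actor_signature: str      # "signed" by the user who committed the step
        form_id: str
        form_version: str         # lets a viewer re-display data on the form version in service at the time
        data: dict
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class CaseHistory:
        """Append-only longitudinal history; newest entries shown first."""
        def __init__(self):
            self._entries = []

        def append(self, entry: CaseHistoryEntry):
            self._entries.append(entry)   # never updated or deleted, only appended

        def timeline(self):
            return sorted(self._entries, key=lambda e: e.timestamp, reverse=True)

    history = CaseHistory()
    history.append(CaseHistoryEntry("CASE-001", "Site survey", "jsmith",
                                    "FORM-SURVEY", "v3", {"acreage": 12.5}))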

Case provides dynamic decision support in respect of the performance of work that derives from annual budget authorizations and work that is related to initiatives funded via ROIs.

Case – don’t go to the office without it.

Use the following model:

Initiatives -> Case setup -> monitoring -> data consolidation to Kbases -> KPI trending

     3. Face Up to the Dynamics and Bridge The Gap between Strategy and Operations.

In today’s world where 5-year plans have, under many scenarios, been compressed to 1 ½ years . . .

strategy can change, initiative priorities can change, goals and objectives can change.

Enter ACM 360 (Adaptive Case Management 360) for bridging the gap between Strategy and Operations, because of the wraparound that can be achieved by:

  a) launching Cases for individual ROIs;
  b) setting up permanent “bucket” Cases\Sub-Cases for the many different database record types organizations need (Corporate Assets, Land, Plant, Equipment, Tools, Customers, Customer Orders, Inventory In/Out, Supplier Orders, Shipments, etc.);
  c) managing operations; and
  d) consolidating data to corporate Kbases\KPIs.



Risk vs Uncertainty . . . again


I’m not sure people in general understand the difference between risk and uncertainty, so here is an update of an article dated 2012.

Barry Ritholtz does a good job in the article, quoting Michael Mauboussin:

http://ritholtz.com/2012/12/defining-risk-versus-uncertainty/

Risk: We don’t know what is going to happen next, but we do know what the distribution looks like.

Uncertainty: We don’t know what is going to happen next, and we do not know what the possible distribution looks like.

The distinction is important in the area of strategic planning.

ROIs for initiatives should always include a Risk Assessment (worst case, expected, best case).

Approvers of ROIs are well advised not to expect to end up too close to the bottom or too close to the top.

In respect of uncertainty, ROIs should also always include exit strategies.


Patient Portals versus APIs for Patient Access to Healthcare Information


Back in November 2015, Health Data Management published an article called “Challenges Ahead for Portals”.

This is an interesting article because it indirectly describes the effect of too much regulatory involvement in healthcare services delivery.


http://www.healthdatamanagement.com/news/challenges-ahead-for-portals-51630-1.html

In the article, Raj Ratwani, scientific director of MedStar Health’s National Center for Human Factors in Healthcare states that patient portals “.. do not present information in a manner that is understandable and useful”.

It’s likely that views regarding the inappropriateness of existing patient portals led to the inclusion, in the Stage 3 objective for Patient Electronic Access, of the requirement that patients be able to “view their health information, download their health information, transmit their health information to a third party and access their health information through an API”.

My point is it’s fine for regulatory agencies to set incentive objectives but not to narrowly specify the means by which such objectives should be met.

Whether a patient gains access to PHI via a portal or via an API is a decision best left to stakeholders who have a close connection to patients.

Under this scenario, if a vendor implements a portal that does not address patient needs, patients will move to another healthcare service provider who has either a better portal implementation or an API that works well for them, and the original provider, presumably, would pick up on this and move to a different vendor.

Accordingly, portal/API selection should be the responsibility of vendors first, then healthcare service providers, picking solutions they feel their patients will find acceptable.

Vendor -> Provider Selection -> Patient Needs

The way things go when there is too much regulation is that regulators impose demands on vendors; healthcare service providers then select, from a reduced set of options, solutions they feel will address internal/patient needs; and patients then decide whether the “solutions” meet their needs. I doubt very much that the regulators consulted patients before reaching the conclusion that patients would be best served via APIs.

See how far away the patient is from the regulators under this alternative scenario.

Regulatory Authority -> Vendor -> Restricted Solution Selection for Providers -> Patient Needs

The reality is you can deliver patient healthcare information to patients using a number of technologies, one of which is an API at a Patient Portal (i.e. a hybrid solution). This avoids the need for the patient to download and install an API on the various devices they may want to use to access their healthcare information. All they need with a portal/API is to type in a URL and enter a user name/password.

The danger with the phraseology in the Stage 3 Final Rule is that software systems that do not have a traditional API could be categorized as not meeting the Stage 3 Final Rule.


Are your BPM process steps being served?


Anyone remember the British sitcom “Are You Being Served?”, broadcast between 1972 and 1985 on BBC1?

No one needs to ask this in respect of BPM workflow steps because rules ensure that as soon as one structured step is committed, the “next-in-line” steps immediately post to the InTrays of staff with the skill sets required to perform these steps.

Clearly, we don’t want steps with imposed delays to post immediately, nor do we want steps under the control of behind-the-scenes “gatekeeper” steps being released willy-nilly, but, otherwise, immediately means immediately.

What about ad hoc interventions at Cases?

Since the system doesn’t know what the next ad hoc intervention is going to be at a Case, management needs to rely on analytics at Cases for decision support (e.g. when 30% of the man-hours have been spent, we should be no less than 20% complete on Case objectives). Senior management also needs analytics relating to overall corporate KPI (Key Performance Indicator) trending (e.g. if a trended KPI is more than 4% off target, management needs to investigate).
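A rough sketch of those two checks, using the 30%/20% and 4% figures from the paragraph above (everything else, including the function names, is illustrative):

    # Sketch of the two decision-support checks described above.
    # The 30%/20% and 4% thresholds come from the text; the rest is illustrative.
    def case_progress_alert(manhours_spent, manhours_budgeted, pct_complete):
        """Flag a Case when 30% of man-hours are spent but less than 20% of objectives are met."""
        pct_hours = manhours_spent / manhours_budgeted * 100
        return pct_hours >= 30 and pct_complete < 20

    def kpi_trend_alert(trended_kpi, target_kpi, tolerance_pct=4):
        """Flag a corporate KPI that is more than 4% off target."""
        deviation = abs(trended_kpi - target_kpi) / target_kpi * 100
        return deviation > tolerance_pct

    print(case_progress_alert(manhours_spent=300, manhours_budgeted=1000, pct_complete=15))  # True -> investigate
    print(kpi_trend_alert(trended_kpi=91.0, target_kpi=96.0))                                # True -> investigate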

Thirty pct/twenty pct metrics are interesting for work where advancement follows “S” curves.

I recall “early risers”, “slow risers”, “fast risers”, etc. models in construction work, but you don’t need to have worked in construction to discover that progress is typically slow at the start of most initiatives, followed by rapid advancement, only to reach a point where it seems you are 90% complete when you are only 50% of the way along the timeline.

I also recall how staff tried to work project management systems to their advantage – each time a project started to look bad and management complained, staff would re-wire their flow graphs to include shortcuts and compress the durations of downstream work. In the absence of good rule sets and vigilant PMO staff, project float would immediately go from -10 weeks to +12 weeks, only to catch up with reality when it became too late to do anything about cost/time/performance overruns.

Our project management team quickly learned that we could trend the data so that project re-sets were dampened (the same way moving averages are used in the commodity markets) and anticipate “real” expected time/cost overruns. Needless to say, we did not enjoy a high level of popularity.
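For the curious, here is a minimal sketch of that kind of dampening: a plain moving average applied to reported project float, so that a sudden re-set from -10 weeks to +12 weeks moves the trend only gradually (the numbers are made up):

    # Sketch: dampen sudden project "re-sets" with a simple moving average,
    # the same way moving averages smooth price data in the commodity markets.
    def moving_average(series, window=4):
        out = []
        for i in range(len(series)):
            chunk = series[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    # Reported float (weeks) per period: steady slippage, then a suspicious jump to +12.
    reported_float = [-2, -4, -7, -10, 12, 12]
    print([round(x, 1) for x in moving_average(reported_float)])
    # [-2.0, -3.0, -4.3, -5.8, -2.2, 1.8]  <- the dampened trend still says "investigate"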

Bottom line here: nothing changes when expanding once-through projects to handle b2b – Case Managers need to manage their Cases, and the toolsets required are easily described:

  1. A Case History for making decisions.
  2. Some non-subjective means of assessing/projecting progress toward Case objectives (e.g. Figure of Merit Matrices)
  3. A run-time environment that supports RALB (resource allocation, leveling, balancing) because users in some areas will be working on 20 or more Cases on any given day.

As for the right time to close a Case? Yogi Berra summed it up nicely – “It ain’t over till it’s over”.

For Cases, what this translates to is “Cases are closed by Case Managers”.



Life in the BPM fast lane . . .


If you are a BPM practitioner who has moved beyond end-to-end process management, you are probably familiar with “Adaptive Case Management” or ACM.

ACM is practiced in a run-time environment called “Case” (not “Use Case”), capable of hosting any number of data elements and data values for an Entity, including, as attachments, .pdf, MS Word and Excel documents, plus digital images and video recordings.

Examples of Entities are Corporate Assets, Customers, Staff, Suppliers, Orders, Projects . . the list can go on and on.

Each Entity, of course, needs its own set of data elements with unique internal IDs.

Clearly, an Order in an Orders Entity will result in data traffic to a Suppliers Entity; then, as and when the Supplier(s) ship on the Order, there will be data traffic back to the Orders Entity.

In IT parlance “Case” is nothing more than a primary cursor position in a post-relational database.

A visit to your run-time Case Management environment will show users, robots plus local and remote systems and applications streaming Case Records onto discovered/mapped/improved/rolled-out BPM process templates to yield private instances of such templates.

The actors perform “Case Interventions”, most of which capture data and write out the data to a Case History where each intervention has a date/timestamp plus an actor “signature”. Usually, a parallel data stream is output to a data warehouse for data mining purposes. And, a copy of the data goes to a Data Exchanger for sharing with possibly large numbers of subscribers, each wanting a different subset of the data.

Most interventions at Cases contribute to reaching Case Objectives. Some are purely supportive of the contributors but are no less important in terms of Case Management.

The rubber hits the road when organizations realize that any Case Manager is likely to be overseeing multiple concurrent instances of a BPM process template, worst case, all at different steps along their template instances, plus multiple concurrent instances of other BPM process templates.

It gets worse – whereas BPM process templates are capable of providing Orchestration (i.e. do this and then do that), it’s unlikely with complex business processes that any templates will be capable of covering all eventualities.

Accordingly, users need to be able to skip steps, re-visit already committed steps, insert ad hoc steps not in the template and sometimes record data at not-yet-current steps.

Moreover, users are likely to be working on multiple Cases at the same time – overall orchestration at Cases is best left to Case Managers/users not automated protocols.

The flexibility just described obviously needs Governance to rein in extreme, unwanted variations away from “best practices”, be they mapped or not mapped.

Governance is provided by Rule Sets at process steps, between process steps, at data exchanger import engines as well as at the Case level itself.

Since Cases are presumed, at all times, to support and contribute to building, maintaining and enhancing competitive advantage, we typically see strategic Rule Sets generating events at Cases (e.g. launch an auto-audit if, as and when advancement toward Case objectives seriously lags the planned Case timeline).
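A small sketch of what such a strategic Rule Set might look like; the 15-point lag threshold and all of the names are mine, purely for illustration:

    # Sketch of a Case-level strategic rule that generates an event
    # (names and the 15-point lag threshold are illustrative).
    def elapsed_timeline_pct(days_elapsed, planned_duration_days):
        return min(100.0, days_elapsed / planned_duration_days * 100)

    def audit_rule(case):
        """Fire an auto-audit when progress lags the planned timeline by more than 15 points."""
        lag = elapsed_timeline_pct(case["days_elapsed"], case["planned_days"]) - case["pct_complete"]
        if lag > 15:
            return {"event": "LAUNCH_AUTO_AUDIT", "case_id": case["id"], "lag_points": round(lag, 1)}
        return None

    case = {"id": "CASE-042", "days_elapsed": 60, "planned_days": 100, "pct_complete": 35}
    print(audit_rule(case))   # {'event': 'LAUNCH_AUTO_AUDIT', 'case_id': 'CASE-042', 'lag_points': 25.0}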

So, Case Managers manage Cases, background BPM provides orchestration along process templates at Cases, and Rule Sets provide governance at Cases.

See “It’s all about that Case . . . “ at

https://kwkeirstead.wordpress.com/2015/09/17/its-all-about-that-case-bout-that-case-no-trouble/

The remaining pieces of the business process management puzzle are workload management (i.e. allocating resources, leveling and balancing workload within and across Cases), data collection, assessment of progress toward Case Objectives, data consolidation and data exchange.

Allocating Resources

Most process steps require specific performance capabilities or specific equipment/supplies, so it makes sense, plan-side, to define resource requirements at process steps. This results in posting of next-in-line steps along a template instance immediately following a commit at the last step/sub-path immediately upstream from such steps. Exceptions to this rule are steps that have an imposed plan-side delay (e.g. pour concrete, wait 4 days).

In the interest of rapid advancement toward Case Objectives, steps post to the attention of actors who are both available and have a match on required performance skill at steps. We want the 1st available actor to “take” the step and “own” it, the presumption being that he/she will promptly perform the required action and commit the step. Otherwise, the actor should clearly document work performed and return the step to the resource pool (e.g. when going off shift with an in-progress intervention).
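Here is a minimal sketch of the posting and “take/own” mechanics just described; the class and field names are illustrative, not taken from any particular product:

    # Sketch of skill-matched posting with first-come "take and own" semantics
    # (class and field names are illustrative).
    class Step:
        def __init__(self, name, required_skill):
            self.name = name
            self.required_skill = required_skill
            self.owner = None

    class InTray:
        def __init__(self):
            self.steps = []

    def post_next_in_line(step, actors):
        """Post a committed step's successor to every available actor with the required skill."""
        for actor in actors:
            if step.required_skill in actor["skills"] and actor["available"]:
                actor["intray"].steps.append(step)

    def take(step, actor):
        """First actor to take the step owns it; for everyone else it is too late."""
        if step.owner is None:
            step.owner = actor["name"]
            return True
        return False

    actors = [{"name": "rita", "skills": {"welding"}, "available": True, "intray": InTray()},
              {"name": "sam",  "skills": {"welding"}, "available": True, "intray": InTray()}]
    weld = Step("Weld sub-assembly", "welding")
    post_next_in_line(weld, actors)
    print(take(weld, actors[1]), take(weld, actors[0]))   # True False -> sam owns the step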

Leveling Workload

Once an actor has “taken” a step, we expect some micro-scheduling (i.e. setting aside a specific time of day to work on the step, re-scheduling certain steps to tomorrow, etc). Most people like to perform a mix of short-term tasks and long-term tasks so as to not compromise progress with either category of task.

Balancing Workload

Whereas steps typically post with a priority indication, things change, so supervisors need to be able to offload steps from one actor and specifically assign these to other actors.

Data Collection

As steps post, a reasonable expectation is that instructions and all required data collection forms be easily accessible. The key to overcoming resistance is to make it easier for staff to use the software system than to not use it.

Clicking on a step should cause instructions and forms to post for easy data recording. Once data entry is complete, we want a one-click commit in the run-time environment, with routing to the Case History, to the Data Warehouse and to the Data Exchanger. Data posting to the Case History should be done in real time (because the next-in-line step may need some of the data just collected).
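A bare-bones sketch of that one-click commit and its three-way routing (the three sinks are plain lists here; in a real system they would be services):

    # Sketch of a one-click commit that routes collected data three ways.
    from datetime import datetime, timezone

    case_history, data_warehouse, data_exchanger = [], [], []

    def commit_step(case_id, step_name, actor, form_version, data):
        record = {"case_id": case_id, "step": step_name, "actor": actor,
                  "form_version": form_version, "data": data,
                  "timestamp": datetime.now(timezone.utc).isoformat()}
        case_history.append(record)     # real time, so the next-in-line step can use the data
        data_warehouse.append(record)   # parallel stream for data mining
        data_exchanger.append(record)   # onward sharing with subscribers, filtered elsewhere
        return record

    commit_step("CASE-007", "Record vitals", "nurse_ng", "v2", {"bp": "120/80"})
    print(len(case_history), len(data_warehouse), len(data_exchanger))   # 1 1 1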

If you are in the market for a Case Management System, make sure you understand the difference between Case Logs and Case Histories. Nothing short of the ability to view data, as it was, at the time it was collected, on the form versions that were in service at that time, will do. Case Logs can have the detail that you find in Case Histories; most do not.

Assessment of progress toward Case Objectives

It’s not easy to avoid subjective assessments of progress toward Case Objectives. FOMM (Figure of Merit Matrices) at Cases provide a framework for consistent and automated assessment of progress toward Case Objectives. If all steps have plan-side standard performance times and your software system is able to track actual/projected step times, simple manhours-to-go calculations may suffice.
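By way of illustration only (FOMM implementations vary, and the weights and scores below are invented), a figure-of-merit style assessment reduces to a weighted sum of per-objective scores that can then be compared against man-hours consumed:

    # Illustration only: a figure-of-merit style weighted progress score,
    # compared against man-hours consumed. Weights and scores are invented.
    def figure_of_merit(objectives):
        """objectives: list of (weight, score_0_to_100); weights need not sum to 1."""
        total_weight = sum(w for w, _ in objectives)
        return sum(w * s for w, s in objectives) / total_weight

    objectives = [(0.5, 80),   # foundation poured
                  (0.3, 40),   # framing
                  (0.2, 0)]    # roofing not started
    progress = figure_of_merit(objectives)              # 52.0
    manhours_pct = 700 / 1000 * 100                     # 70% of budgeted hours spent
    print(progress, manhours_pct, "lagging" if progress < manhours_pct else "on track")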

Data Consolidation

Since most work within corporations is funded by annual operating budgets or ROI submissions, there will be KPIs at the strategy level that look to operational (Case) data for trending. Your best bet here is to include your KPIs in a knowledgebase so that senior management can challenge reported trends.

Data Exchange

In many run-time environments most of the data recorded at Cases comes from local and remote external systems and applications.

It is unreasonable to expect alignment of data element names across multiple systems and applications.

Accordingly, corporations need a way for publishers to post data using their own data element naming conventions and for subscribers to read data using their own data element naming conventions. The data exchanger must be capable of filtering data so that subscribers only see data on a strict need-to-know basis.
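A minimal sketch of such a data exchanger; all data element names and the mapping tables are invented for illustration:

    # Sketch of a data exchanger: publishers post under their own data element
    # names, subscribers read under theirs, and filtering enforces need-to-know.
    NEUTRAL_NAMES = {            # publisher name -> exchanger's neutral name
        "pt_dob":  "date_of_birth",
        "DOB":     "date_of_birth",
        "sys_bp":  "systolic_bp",
    }
    SUBSCRIBER_VIEWS = {         # per-subscriber: neutral name -> subscriber name
        "billing":  {"date_of_birth": "dob"},                       # billing never sees blood pressure
        "clinical": {"date_of_birth": "birth_date", "systolic_bp": "bp_systolic"},
    }

    def publish(record):
        return {NEUTRAL_NAMES[k]: v for k, v in record.items() if k in NEUTRAL_NAMES}

    def deliver(neutral_record, subscriber):
        view = SUBSCRIBER_VIEWS[subscriber]
        return {view[k]: v for k, v in neutral_record.items() if k in view}

    neutral = publish({"pt_dob": "1950-03-14", "sys_bp": 120})
    print(deliver(neutral, "billing"))    # {'dob': '1950-03-14'}
    print(deliver(neutral, "clinical"))   # {'birth_date': '1950-03-14', 'bp_systolic': 120}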


Trans-Pacific Partnership Agreement


If you are a citizen of one of the Trans-Pacific Partnership member countries (Brunei, Chile, New Zealand, Singapore, Australia, Canada, Japan, Malaysia, Mexico, Peru, the United States, Vietnam), you can now download and read 6,000 pages across 30 chapters at your computer screen.

As usual with trade agreements impacting multiple countries, there are differences in terminology, exceptions and special circumstances that each country will want to cross-compare during ratification.

I don’t see analysts scrolling thousands of pages in MS Word.

When the text is hosted in a free-form search Knowledge base, things get a lot easier.

In the example view below, typing the name of a member country highlights all references to that country and allows a user to selectively browse the agreement.  Since the Chapter Forms are customizable, it’s very easy to add a “Comments” field and browse the content as part of any connect-the-dots undertaking.



The importance of pre-conditions and post-conditions in the “new” BPM


Traditional BPM had little need for pre-conditions and post-conditions at process steps.

The combination of flow graph logic and data collection checks and balances put in place at process steps by BPM flow graph designers provided reasonable expectation of no-fault processing along BPM processes at run time.


Pre-conditions are needed on arrival at steps. Post-conditions are needed on exit from steps.

Photo Credit: Ignacio Evangelista
https://www.facebook.com/ignacio.evangelista.photo/

The situation changed dramatically when the industry started to need to accommodate “process fragments” in addition to traditional end-to-end processes, especially process fragments made up of a single step.

In a run-time Case environment, if I stream a new manufacturing order onto a workflow that has as its first step “ship final product”, the Case hosting the processing needs a way to determine that the usual “design-build-test-QA” steps have been skipped over.

Traditional BPM did not have to worry about this because it was able to rely on an end-to-end process comprising “design-build-test-QA-ship”. On arrival at “ship”, all of the data attesting to completion of upstream steps would typically be on hand.

Not so in Case, where users can do what they like, including not following an end-to-end BPM process, undertaking instead to achieve the same objective by performing a seemingly random (except to the performer) sequence of steps where the only inferred link between steps is their timing.

It follows that we need pre-conditions at the initial step of key process fragments. In the above example, the processing engine will ask “Do you have a QA certificate?” and refuse to go forward in the event of a “No” response.

Once process designers get used to putting in pre-conditions at process fragment start steps they quickly see no reason for not putting pre-conditions at intermediate and final steps along process fragments.

Pre-conditions add robustness at process steps that may be impacted by data imports to Cases. (i.e. the manufacturer had a fire in the QA lab, the product was sent to an outside lab for QA certification, the results came in via an external transaction but the data was not streamed onto the process fragment because this type of extraordinary processing was not anticipated in the BPM process map).

You might ask why, with pre-conditions, would we also need post-conditions?

The reality is that BPM process maps rarely cover all eventualities so there can be data at a Case that a process fragment does not have direct access to. Here, the generic fix at the Case is to accommodate both pre-conditions and post-conditions at any process step (end-to-end processes, process fragments, ad hoc steps).
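Here is a rough sketch of pre- and post-conditions wrapped around a step, in the spirit of the “ship final product” example above (the data element names are illustrative):

    # Sketch of pre-/post-conditions at a process step, in the spirit of the
    # "ship final product" example above (data element names are illustrative).
    class PreConditionFailure(Exception): pass
    class PostConditionFailure(Exception): pass

    def perform_step(case_data, action, pre=None, post=None):
        if pre and not pre(case_data):
            raise PreConditionFailure("step refused: pre-condition not met")
        action(case_data)
        if post and not post(case_data):
            raise PostConditionFailure("step not committed: post-condition not met")

    def ship(case_data):
        case_data["shipped"] = True

    case_data = {"qa_certificate": None}
    try:
        perform_step(case_data, ship,
                     pre=lambda c: c.get("qa_certificate") is not None,   # "Do you have a QA certificate?"
                     post=lambda c: c.get("shipped") is True)
    except PreConditionFailure as e:
        print(e)   # step refused: pre-condition not met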

Pre-conditions and post-conditions are central to a software correctness methodology called “Design by Contract”, invented by Dr. Bertrand Meyer in the mid-1980s and implemented in the Eiffel programming language.

For more information on Design by Contract™ see

https://www.eiffel.com/values/design-by-contract/

The author has no commercial affiliation with Eiffel Software.


Patient-Centric Care – Basic EHR Needs


When I was a child, our family doctor did house calls. Visits were particularly remarkable during snowstorms when the good doctor would arrive with horse and sleigh.

We will never get back to the “good old days” but it is worth documenting the essentials of patient-centric care within the context of today’s “healthcare factories”, where things have, in my view, gone over the edge.

The first requirement is that your home doctor have an EMR.

This is nothing more than an electronic version of the old paper-chart except that it is more accessible and the information can be shared. Assuming inter-operability, that is.

The purpose of the e-chart is unchanged relative to its paper chart precursor. The default view remains a reverse chronological record of interventions, each with a date/timestamp and performing resource “signature”. If you click on a hyperlink, a reasonable expectation is to be able to see data, as it was, at the time it was collected, on the form versions that were in service at that time.

The reasons for consulting the e-chart are to determine what the most recent intervention was and provide decision support in respect of the current and future interventions.

One way to simplify decision support is to put in place “best practice” protocols to guide the processing of patients.

Clinics/hospitals need best practices templates, with facilities for streaming patients onto private instances of these templates. Each template needs to consist of an orderly sequence of steps/interventions, with instructions at steps, context/situation-appropriate data collection forms and routing parameters that indicate the required classes of users with the skill sets to perform steps.

The logic for “best practices” is that there cannot be 10 best ways to manage a patient with a set of symptoms/signs and a somewhat similar history of interventions.

Except that rigid imposition of protocols (i.e. cookie cutter medicine) does not work.

Accordingly, clinicians must have the freedom to deviate from best practices. This means “best practices” become guidelines and the care environment needs to be able to provide governance to prevent extreme, unwanted deviations from the “best practices”. Rule sets at the Case level are capable of taking care of this.

So, with the above infrastructure in place (i.e. guidance and governance), the clinic/hospital is in a position to do the right things, the right way, using the right resources. Assuming inter-operability, that is.

What’s missing?

Well, two things: Location and Timing.

No point scheduling an intervention that needs a particular piece of equipment or a particular skill when the organization has neither. And, for good outcomes, we need timely interventions, which assumes availability of infrastructure and availability of skilled resources.

For this reason, patient care systems need RALB (Resource Allocation, Leveling and Balancing) facilities that take best practice steps and ad hoc interventions and assign these to specific healthcare professionals (Dr. Jones, in Examination Room 307, with Patient Martha Bloggs, on Friday 16th 2015 at 1000 hours).

Clearly, in a facility with various teams of skilled resources we need tasks to post to the attention of “day shift radiologists”, not Dr. Jones.

The way RALB works is to post the “next” step along a patient care-pathway to the attention of “day shift radiologists” and if there are, say, three of these on shift, the first to “take” the order “owns” the step and is expected to perform it or put it back in the resource pool.

If Dr. Jones takes the order, he/she needs to be able to micro-schedule the order in the context of other time demands, so a 1000 hrs scheduled appointment could start at 1025 hrs, or 1045 hrs, but the point is the task does not fall between the cracks.

In a large facility, there may be schedulers who offload work from clinicians to other clinicians, so we end up with 3-tier scheduling (allocation, leveling, balancing).

It’s not enough to be efficient in the processing of individual steps along patient care pathways.

On top of this, organizations need to ensure that as soon as one task along a patient care pathway is complete, the next-in-line task will take place without unwanted delay.

So, given a Case Management environment, tasks post to clinician InTrays, then, following completion of a task, that task clears from the InTray of the clinician who “took” the task and the next-in-line task as per the flowgraph template instance posts to the InTray of the appropriate clinicians.

Now, the organization is doing the right things, the right way, using the right resources, at the right places and at the right times.

There are two faults with the “best practices” focus.

One is that efficiency at the individual patient care pathway level does not necessarily lead to overall agency-level efficiency. Accordingly, it’s reasonable to expect overrides to best practices “for the greater good”.

A second fault with “best practices” is that effectiveness will be decreased by overhead related to regulatory “long term outcomes” data collection. We need to look to automation to minimize the impact.


“Big Data” poses some challenges for healthcare – Find out how to circumvent these.


A discussion group at HIMSS in LinkedIn recently posted the following question “Smarter, Safer, Cheaper. What’s the true promise for healthcare big data”.

Here was my response:

Smarter, safer, cheaper for sure, but there are some challenges, and ways of making analytics seamless or making analytics very difficult.

Rule #1 is you cannot analyze what you did not collect in terms of data.

Rule #2 is you need easy ways of displaying the results of analytics as overlays to best practice protocols.

Rule #3 is you cannot follow Rule #2 if you don’t have best practice protocols (preferably in the form of flowgraphs as opposed to written policy/procedure).

In industrial Case Management (one example being military helicopter MRO), no two units that come in have the same configuration so it’s important to have a Case History of all interventions, organized by “session” and presented in reverse chronological order, with the ability to present the data by system/sub-system. The key to MRO is knowing when a particular system/sub-system needs periodic inspection and having data on hand to help with predicting problems before they actually materialize.

You end up with a lot of data but the data collection is effortless because all work is guided by flowgraphs (do this and then do that), data collection is automated and analytics routinely examine the data to identify ways and means of improving the flowgraphs.

It’s no different in healthcare, except much more complicated owing to the number of permutations and combinations such that you cannot expect to have end-to-end flowgraphs capable of handling the processing of most individual patients.

So, we end up with best practice process fragments that healthcare professionals have to thread together manually with some assistance by software and machines.

The following capabilities are needed:

#a. Skip a step along a best practice protocol.

#b. Revisit an already committed step.

#c. Forward record data at steps not yet current.

#d. Insert a step not in the original protocol.

In all of this it is important to be able to capture the data so you need a workflow management software environment that automatically logs all data as entered session by session at various computer screens at process steps and at ad hoc intervention steps. (i.e. map out your processes plan-side, roll them out, manage tasks via a run-time workflow management environment at Cases, have in place data logging at the Case Hx with a parallel flow to a “data warehouse” that analytic tools can easily link to).

The challenge becomes detecting, via analytics, patterns in the data logging. Examples, for a 1-2-3-4-5-6 …. 12 workflow, include the following (see the sketch after this list):

* Step 5 is almost always skipped. Why then should it continue to be in the protocol as a mandatory step? Either leave it as an optional step, or eliminate the step.

* Step 3 is almost always repeated via an ad hoc intervention following a commit at step 8. The process template probably needs to have a loopback added.

* One or more ad hoc steps are almost always inserted at step 12, why not update the protocol to make these part of the best practice template?
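As promised, a small sketch of how analytics might count those three patterns across Case step logs (the sample logs and the ad hoc marker "A" are invented for illustration):

    # Sketch: scan Case step logs for the three patterns listed above.
    # Each case is just the ordered list of step numbers performed ("A" = ad hoc).
    cases = [
        [1, 2, 3, 4, 6, 7, 8, 3, 9, 10, 11, 12, "A"],   # step 5 skipped, 3 revisited after 8, ad hoc at end
        [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, "A"],
        [1, 2, 3, 4, 5, 6, 7, 8, 3, 9, 10, 11, 12],
    ]

    def pct(count, total):
        return round(100 * count / total)

    skipped_5   = sum(1 for c in cases if 5 not in c)
    revisit_3   = sum(1 for c in cases if 8 in c and 3 in c[c.index(8):])
    adhoc_at_12 = sum(1 for c in cases if 12 in c and "A" in c[c.index(12):])

    n = len(cases)
    print(f"step 5 skipped in {pct(skipped_5, n)}% of cases")        # 67%
    print(f"step 3 revisited after step 8 in {pct(revisit_3, n)}%")  # 67%
    print(f"ad hoc step after step 12 in {pct(adhoc_at_12, n)}%")    # 67%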

It is helpful to inventory all of the best practices as graphic flowgraphs for easy updating.

Analytics should ideally be able to build virtual meta cases that show protocol variations across Cases relating to patients with similar presentations, yielding statistics along the lines of “across 1,000 cases, the cases branched this way 5% of the time, 12% of the time that way, with no branching 83% of the time”.


It’s all about that Case, ‘bout that Case, no Trouble


I figured it would be easier to let Meghan Trainor attract folks to my blog.

In healthcare services delivery, your Case Managers need Case Management software. Seems obvious, doesn’t it? Why then do clinics/hospitals have EMRs that do not have a Patient History as their core pillar?

If you want to provide continuity of care to individual patients you need, first of all, “best practices” workflows to guide intake, assessment, diagnosis, tx planning, tx monitoring and discharge planning.

And, yes, the Government says you also need long term outcomes data collection so that your patient’s great-grandchildren will also have good outcomes.

Clearly, the team of caregivers does not, will not, have the time to stare at paper process maps.

Accordingly, best practices maps need to be prepared and, for practical purposes, be carved up into individual interventions and automatically posted to the right staff at the right time.

The maps go a long way toward getting the “right” things done, by the “right” people, the “right” way, at the “right” time.

None of this can be done in thin air – “Case” is the environment of choice to host process steps.

Case accommodates any mix of structured and ad hoc interventions.

None of this will come as a surprise to practitioners who are old enough to remember paper charts.

Paper charts used “Case” and there still is no shortage of “Case Managers”, but look at the hoops they have to jump through to do what used to be easy.

What is perplexing is why some designers of EMRs forgot to carry forward the notion of “Case”.

Meghan Trainor – All About That Bass, .. that Bass, no Treble
