What’s your corporate IT strategy?


In the beginning, IT reported to the CFO because the main focus was on automating accounting functions, with other functional units’ computing needs addressed as time permitted. None of the application systems could easily exchange data.

In manufacturing and construction, things progressed to where we saw “all-singing-all-dancing” materials management/workflow management applications, with data transfers to accounting.

The focus, at that time, shifted from addressing functional unit needs to addressing the needs of the enterprise, and CIOs started to report to the CEO, not the CFO.

The problem was that strategic planners recognized the need for business analysis capabilities, so they proceeded independently to set up Business Analysis/Business Process Improvement units. These units did not have the skills to evolve sophisticated mind maps capable of consolidating operational-level data up to KPIs, so they did most of their work in spreadsheets.

Meanwhile, most IT departments, with a prior focus on filling orders and keeping the lights on rather than following the state of the art in mind maps, had neither the time nor the inclination to acquire the skills needed to select enterprise solutions.

Consolidating the organization’s Business Analysis/Business Process Improvement capabilities with those of traditional IT became possible as corporations moved to “cloud” solutions, relieving IT of the burden of “keeping the lights on”.

Not all of these transitions went smoothly.

The combined talent pool of internal BA/BI/IT proved, in many cases, to be a poor substitute for a couple of days of on-site time from a senior management consultant. The competitive advantage of these folks is that they have seen multiple good/bad strategy development/implementation initiatives across different industries over several decades.

In the absence of expert advice/assistance, the tendency remains to go for one of the following three options.

Internal Build – the undertaking here is to build the next-generation Ferrari in one’s garage. Not impossible, given that Bill Gates did it and Steve Jobs did it, so why not us?

One-stop-shop – the idea here is to find a system that addresses one’s current needs to the extent possible and then proceed to “customize” the system. Vendors love this option. The core problem, of course, is that corporations really don’t want/need software that addresses current needs – they need software that can address unanticipated future needs.

Best of Breed Integration – this makes a lot of sense, PROVIDED all of the applications you pick have a demonstrable capability of exporting their data and importing data from local and remote third-party systems and applications. Any slip-ups here and you join the queue of customers pleading with vendors for customization.

So, what to do? . . . .

Hire a good consultant for a couple of days on-site.

Pick a reputable, busy, hands-on consultant who starts getting ready to leave from the time he/she arrives on site.

Their focus will be on analysis/assessment and on training your staff so that they can take over once the consultant has gone off-site. Do your homework in advance. Expect the consultant to also do a lot of homework such that on arrival he/she has a reasonable understanding of your business, your strategy and the current state of your operations.

Do not hire a consultant who borrows your watch to tell you what time it is.

What’s the best possible outcome?

The best possible outcome is likely to come out of generic software that allows operations level staff to build, own and manage their own processes.

They should only have to go to BA/IT for rule-set construction and for interconnectivity between all of the systems you already have in place and do not particularly want to cast aside.

The great deficiency in most organizations is an inability to provide continuity across silos.

Adaptive Case Management software plus Business Process Management software go a long way toward bridging the gap. Graphic, free-form-search enterprise knowledge bases go a long way toward helping corporate stakeholders evolve strategies and track performance via KPIs.

Yes, my group promotes these solution sets and we also use them internally.

A good starting position is to recognize that every organization has “best practices” – it does not matter whether these exist only in the minds of staff or whether they are on paper, on-line or in-line.   The main point is they are your “best practices” and will remain as such until you improve them.

The other thing to keep in mind is that your Job #1 is to build, maintain and enhance “competitive advantage”. This responsibility is shared across the Board, senior management and all staff. It’s all about “rowing in the same direction”.

What next? Read this blog post again and then again.

Then, take action.


How to fund process improvement initiatives


Every organization has processes; the processes are either documented or undocumented. They may exist only in the minds of workers or be on paper, online, or in-line.

Every organization has “best practices”. In the absence of any declared best practices, the practices in use are the organization’s “best practices” until these are replaced by better practices.

Processes are key for building and enhancing competitive advantage. The singular purpose of a process is to convert inputs to outputs.

All good, except that conversion requires time, money and resources that are typically in short supply.

The question that therefore often arises is which processes should be improved, to what extent, and how should process improvement initiatives be funded?

 

Option 1 – Fix your broken processes

If a process has bottlenecks, shuts down frequently or produces inconsistent outcomes it’s a prime candidate for improvement. Automated or manual inspection of process run-time logs will typically reveal bottlenecks and shutdowns.
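
A minimal sketch of what automated inspection might look like, assuming run-time logs can be reduced to (case, step, duration) records; all step names and numbers below are invented for illustration:

```python
# Minimal sketch: flag bottleneck steps from process run-time logs.
# Log records are assumed reducible to (case_id, step, minutes taken);
# all names and numbers are invented for illustration.
from collections import defaultdict
from statistics import median

log = [
    ("case-1", "approve", 30), ("case-2", "approve", 35),
    ("case-3", "approve", 240),            # outlier: likely a bottleneck
    ("case-1", "invoice", 10), ("case-2", "invoice", 12),
    ("case-3", "invoice", 11),
]

durations = defaultdict(list)
for _case, step, minutes in log:
    durations[step].append(minutes)

for step, mins in durations.items():
    m = median(mins)
    slow = [x for x in mins if x > 3 * m]   # crude outlier rule
    if slow:
        print(f"'{step}' looks like a bottleneck: median {m} min, outliers {slow}")
```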

If a step or series of steps shows inconsistent outcomes, consider automating it.

A simple financial analysis is usually sufficient to justify fixing broken processes. Some need to be replaced outright.

When fixing processes, consider updating and improving these.

Option 2 – Go for the low-hanging fruit when looking to improve processes

If your organization has a “continuous process improvement” unit, bear in mind that most change is disruptive and that too much tweaking of some processes takes you to diminishing returns. Guard against having a “hammer looking for nails” mentality within such units.

With respect to identifying candidate processes for improvement, it makes sense to focus on easy, inexpensive, fast-track, low-risk, high-return initiatives.

Some of these span silos and can have an important impact on an organization. Others do not.

Most processes in this category can be funded out of annual operating budgets.

Option 3 – Manage significant change

Wider, more complex initiatives require ROI submissions.

Objectives need to be clearly defined, benefits need to be stated, resources, timelines and funding need to be put in place.

Periodic monitoring is needed with, at times, re-setting of objectives.

Why many ROI-based initiatives fail.

It sounds simplistic, but going through the motions of preparing an ROI and then not bothering to monitor performance, time, cost and outcomes over the life cycle of the initiative means the benefits declared in the ROI stand a good chance of not being attained.

The reason is that things change between the preparation of an ROI and the implementation of the initiative – if things have changed to where the projected ROI is trending negative, it is important to “know when to hold and when to fold”.

Worst case scenarios include being leapfrogged by a competitor before “new technology” gets to market. It may be best to shut down an initiative and put your scarce resources to initiatives with more promising outcomes.


Where’s The Beef? – An under the hood look at “seamlessly integrated” BPMs


[image: wheres_the_beef_from_Pexels] I keep reading about “seamlessly integrated” BPMs (Business Process Management Systems), but when I start to look under the hood, it quickly becomes obvious that many content authors have no clue regarding the needs of organizations in the area of process management.

The reality is that they should be talking about “hay-wired” BPMs modules.

Most of the product pitches start with a description of a process mapping environment. Some of the descriptions go on and on about languages/notations. This tells us that the environment being promoted is not end-user oriented.

No end-user wants to learn a new language/notation nor do they want to hire or have to liaise with a team of process experts or software development intermediaries.

The real experts for any process are the folks who work with that process day in/day out. Chances are you need facilitators to drag them out of their silos but with minor prompting, end-users can build, own and manage their processes.

Next in the list of capabilities we learn that there are “interfaces” that convert the graphic process map into, it seems, a run-time template that is “not rigid”.

What “interface” exactly would one possibly want other than a “rollout” button? If there is more to it than this, this is a dead giveaway that the protocol is too complicated and unworthy of receiving a tag of “seamlessly integrated”.

Same for “not rigid” – we know that managing any Case requires being able to deal with a random mix of structured and ad hoc interventions.

Any detailed explanation about a BPM template not being “rigid” is a smokescreen for inadequacy in this area.

We all know that at any step along any process template you may be required to reach out to a customer (or some outside system/application) or accept input from one. If “rigidity” has to be highlighted, other than in a historical account of the evolution of BPM, the setup is too complicated.

Strike three!

I could quit here but if any readers are still with me, I am not yet done with the rant.

Here goes – at any process template step it’s a given that users will need to collect data.

These software systems therefore need at least one form at each step/intervention and, from an end-user perspective, the form needs to be local to the step.

Same for all manner of .txt, .doc, .ppt, .pdf, .xls, even video/audio recordings that may relate to a step. All of these need to be attributes of steps, not off in some “shared” repository.

What end-users want/need is real “seamless integration” and a modest amount of “interfacing.”

Clearly, when they click on a .doc attribute at a step, they want interfacing with MS Word, not replication of word processor functionality within the Case environment.
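
None of this dictates a particular data model, but here is a minimal sketch of the idea that forms and documents are attributes of steps rather than entries in a shared repository; all field names are invented for illustration:

```python
# Hypothetical sketch: forms and documents as attributes of a Case step,
# not entries in a shared repository. All names are invented.
from dataclasses import dataclass, field

@dataclass
class CaseStep:
    name: str
    form_fields: dict = field(default_factory=dict)   # data captured at this step
    attachments: list = field(default_factory=list)   # files local to this step

step = CaseStep(
    name="Initial assessment",
    form_fields={"height_cm": None, "weight_kg": None},
)
step.attachments.append("intake_notes.doc")   # clicking this would open MS Word
step.form_fields.update(height_cm=178, weight_kg=74)

print(step)
```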

Why multiple references to Case?

The thing is, we want the primary focus to be on reaching objectives, and since “work” has become a combination of ad hoc and structured interventions, we pretty much need to host all of this in a run-time environment that lets users decide what to do next, when, and how (ad hoc intervention capability), with some “guidance” attributable to background BPM process templates. Clearly, it’s not only all about BPM.

We also need to look to the environment to “rein in” extreme excursions away from policy/procedure (global Case-level rule sets, if you like). Otherwise you have no “governance”.

Case provides all of the necessary functionality, including data exchange plus auto-buildup of a longitudinal history of interventions (how else would a user be able to decide what the next intervention at a Case should be?).

The real icing on the cake in some of these nonsensical pitches is references to “process objectives”.

If you no longer have “processes” (what we have today are “process fragments” that get threaded together by users, robots and software at run time), how can/should we expect to find objectives at process template end points?

No processes, no convenient end points, no objectives.

The answer is objectives have gone from being plan-side attributes of process maps to run-time attributes of Cases.

Once you venture into areas where there is a need to accommodate a random mix of ad hoc and structured interventions (i.e. most business areas today, except where we may find a few not-yet-automated end-to-end processes), it is the Case Manager who decides what the objectives are for a Case, and they, not BI analysts nor IT, park these objectives at each Case.

Case Managers also monitor progress toward meeting Case objectives and this typically requires fairly complex non-subjective decision making, not simple counting of the number of remaining steps along a process template pathway.

See posts I have made on FOMM (Figure of Merit Matrices).

Just last week I read some promotional material announcing a ‘transition’ to “Case”.

I pointed out to the authors that Case is not new, that it is actually a term borrowed from the legal profession and was alive and well in UK medicine as early as 1603.

They have not thus far responded to my objections.

It’s easy to determine within healthcare whether there is a focus on Cases.

Just walk into a clinic/hospital and ask if you can talk to a Case Manager. You will probably have a roomful of choices and some of these folks have been doing Case Management for decades. They have job titles, job descriptions and really perform “Case Management” day-in/day-out.

Most of my readers, aside from members of the flat-earth society, are starting to get this. Except that the way things have been going lately, we may very well soon have a flat earth, so they may end up having the last laugh.

“Case” – not new, not close, no cigar.


3D Strategic Planning – What you need to know about it


Strategic Planning is a “connect-the-dots” exercise that usually starts off like this . . .
[image: big_data_mess_full_size – a tangled mass of data points]

In order to develop strategy you need information on multiple Entities relating to your business (e.g. Capital, Access to Capital, Land, Equipment, Tools, Premises, Staff, Current Products/Services, Products/Services Under Development, Projects Awaiting Approval, Technology Trends, Changing Legislation, Competitors).

We know that decision-making involves the transformation of information into action. The problem is you cannot make decisions if you cannot easily find and access the information needed for such decisions.

For any given opportunity, arriving at a go/no-go decision will impact one or more of these Entities.

One added complexity is the way information belonging to one Entity interacts with information belonging to another Entity.

This is where we make the link between traditional strategic planning (deciding how to make best use of scarce resources on initiatives that track with corporate mission) and the “connect-the-dots” approach used by law enforcement for investigations.

The key point is the law enforcement “connect-the-dots” approach can be “borrowed” for corporate strategic planning purposes.

Here is a typical practical scenario

An opportunity has been identified, the sponsors present to the Board, and the Board now has to assess the opportunity, assign a ranking, assign a priority and, if the project manages to make its way through these filters, allocate funds to allow the initiative to move forward.

Different opportunities impact different Entities in different ways.

It follows that if you are consolidating information relating to corporate Entities you may need to provisionally allocate assets/resources to several competing opportunities.

All in the interest of making better decisions.

One way to do this is to consolidate all Entity information for the corporation in a graphic knowledge base and then alias the information relevant to each opportunity for local consultation at that opportunity. This allows you to toggle between the “big picture” and individual opportunities, with each opportunity able to “see” competition from others for the use of scarce resources.

If you find your favorite proposed initiative has a ranking below another initiative you can perhaps do more research on the merits/benefits of your initiative and improve its ranking.

The more you are able to “connect-the-dots” between initiatives and their “draws” on scarce resources, the greater the potential quality of your decision-making at and across initiatives.
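
The aliasing mechanics are not spelled out above, so here is a minimal sketch under the assumption that each opportunity holds references to shared Entity records rather than copies, making provisional allocations visible across competing opportunities; all names and figures are invented:

```python
# Minimal sketch: opportunities alias shared Entity records instead of
# copying them, so provisional allocations are visible everywhere.
# Entities, opportunities and figures are invented for illustration.

entities = {
    "engineers": {"capacity": 12, "provisionally_allocated": 0},
    "capital_musd": {"capacity": 5.0, "provisionally_allocated": 0.0},
}

class Opportunity:
    def __init__(self, name, draws):
        self.name = name
        self.draws = draws  # entity name -> amount; references, not copies

    def reserve(self):
        for ent, amount in self.draws.items():
            entities[ent]["provisionally_allocated"] += amount

plant_upgrade = Opportunity("Plant upgrade", {"engineers": 8, "capital_musd": 3.0})
new_product = Opportunity("New product line", {"engineers": 6, "capital_musd": 2.5})

plant_upgrade.reserve()
new_product.reserve()

for name, e in entities.items():  # both opportunities "see" the contention
    over = e["provisionally_allocated"] - e["capacity"]
    print(name, "over-committed by", over if over > 0 else 0)
```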

Why 3D?

Well, you will discover soon enough that trying to build a single hierarchy comprising, say, 500,000 information points on one computer screen requires the use of 3D “Russian Doll” or “Cascading Folder” mapping, as illustrated in the US State Dept Country Profiles demo database (all countries: business, travel, law enforcement, narcotics, terrorism, etc.).

Try that on paper or whiteboard with post-its.

What you need is a graphic free-form search knowledge base capability that accommodates any number of separate hierarchies, with “connect-the-dots” facilities and with the ability to quickly zoom in from the “big picture” to low-level detail and back.

At the end of the day, it’s all about how you like to look at your corporation – you really only have two choices . . .

Like this . . .

[image: big_data_orderly – an orderly, structured view of corporate data]

Or like this . . .

[image: big_data_mess_full_size – a tangled, unstructured mass of data points]

Think about this article the next time you go to a meeting pushing a cart with a three-foot pile of reports, papers and spreadsheets.

Key Words: Strategic planning, Connect-the-dots, knowledge bases


Policy, procedure, KPIs – how to run a business


Corporations have infrastructures with various asset classes (capital, access to capital, plant, equipment, people, existing products/services, new products/services, customers, and partners).

The ones that have “secret sauces” succeed, most of the others fail.

The rules for success are quite simple – it’s important to build and maintain each asset class individually.

Then, aside from the risk of being leapfrogged, you need to also enhance your assets to keep ahead of the competition.

Not all assets have the same relative strategic value, so when making decisions about the commitment/deployment of assets you need a way to study each funding request in the light of its potential to support strategy.

It pays to maintain a reserve in each asset class. If, for instance, you commit all of your staff to a large project you may need to refuse new opportunities and if your “all-eggs-in-one-basket” initiative fails you will find yourself in damage control mode.

The traditional approach to “management” has been to standardize and streamline.

Policy provides governance, procedure provides guidance, KPIs allow CEOs to steer the ship.

The problem is people don’t keep policy in mind, they don’t read procedure, and, because things change quickly, it’s very easy to be looking at the wrong KPIs.

This is why we have BPM (Business Process Management) and ACM/BPM (Adaptive Case Management layered over BPM).

A Bit of History

I have always maintained the position that BPM had its origins in CPM (nodes, arcs, end node objective).

CPM goes back to 1957 (possibly earlier), with flow graphing hitting the streets both in the military (the Polaris Program) and in the construction business (E. I. du Pont de Nemours).

Media coverage of flow graphing peaked in 1962 with DOD/NASA’s PERT/Cost. I don’t recall seeing much media frenzy over CPM, but the “critical path” methodology has evolved over time to where few people considering launching a large project would do so without it.

The main contribution introduced by BPM was content-driven decision-box branching and loopbacks.

It is worth pointing out that branching itself was part of GERT (an invention of General Electric), which recognized the need to avoid having to engage all pathways in a flow graph. The difference was that GERT’s branching was evidence-based (i.e. set plan-side), whereas BPM branching is content-sensitive, with rule-set engagement occurring at run-time.

BPM Today

The problem with BPM came when most of the low-hanging fruit (i.e. mostly linear, straight through, end-to-end processes) had been harvested.

This is resulting in an exodus from traditional BPMs to Case where objectives can no longer be conveniently parked at process end points.

ACM/BPM

The thing about Case is that it can host objectives and allow background BPM to continue to provide Orchestration.

Governance comes from rule-sets at the Case level. We call all of this ACM/BPM where ACM stands for “Adaptive Case Management”. Some just call the hybrid approach ACM but we need both Orchestration and Governance plus a few other tools.

Corporations that embrace ACM/BPM end up with the best of two worlds i.e. a flexible environment in which structured “best practices” in the form of process fragments are available for users, machines and software to thread together at run time, and where the ability exists at any stage during the lifecycle of a Case to insert ad hoc interventions.

Contrast this with the shaky foundation of the claim that, with BPM alone, you can map out all eventualities. It does not take a lot of analysis to realize that in a relatively simple flow graph, the closer the branching decision-boxes sit to the start node, the greater the number of permutations and combinations. It is easy to identify flow graphs where the number of permutations and combinations runs into the hundreds of thousands.

ACM/BPM wins hands down because there are no restrictions at any point in the lifecycle of a Case regarding the engagement of specific processing.

Bridging the gap between operations and strategy

It’s fine to be practicing ACM/BPM at the operations level, but it’s a trees-versus-forest wheel-spinning exercise unless/until you have a way to evolve strategy, put in place ways and means of assigning priorities (few companies have the wherewithal to authorize all funding requests submitted), and then monitor approved funding requests to make sure work is proceeding on time, on budget and within specification.

Strategy -> Priority Setting -> Objectives <- ROIs <- Cases <- Case objectives

Narrowing the gap between operations and strategy requires ensuring that Case objectives are at all times supportive of strategic objectives.

i.e.  Case objectives -> KPIs -> Strategic objectives

The main tools I have used over time to bridge the gap between operations and strategy are:

a) shifting from ROIs to SROIs (1) at the funding level.

b) use of FOMM (Figure of Merit Matrices) during monitoring/tracking as a non-subjective means of assessing progress toward meeting Case objectives (a minimal sketch follows this list).

c) finding ways to consolidate operations level data to a free-form search knowledge base environment that is able to simultaneously host KPIs.
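
As a preview of b), here is a minimal sketch of how a weighted figure-of-merit assessment might be computed; the criteria, weights and scores are invented for illustration:

```python
# Hypothetical sketch of a Figure of Merit Matrix (FOMM) assessment.
# Criteria, weights and scores are invented for illustration.

criteria = {
    # criterion: (weight, score 0-10 at this review)
    "deliverables complete":  (0.40, 6),
    "budget variance":        (0.25, 8),
    "schedule variance":      (0.20, 7),
    "stakeholder acceptance": (0.15, 5),
}

def figure_of_merit(criteria):
    """Weighted sum of scores; 0-10 scale, higher is better."""
    return sum(weight * score for weight, score in criteria.values())

print(f"Case progress figure of merit: {figure_of_merit(criteria):.1f} / 10")
```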

I will re-visit these in future blog posts.

 

(1)

SROI is “Socio-Economic Return on Investment” – basically, it takes the sharp edge off purely financial assessments. The complexity of business today requires taking a wider view than financial returns alone.

https://en.wikipedia.org/wiki/Social_return_on_investment

 


Your BPMs shopping list


If you are in the market for a BPMs, you may be better off looking for a BPFMs (Business Process Fragment Management System).

[image: shopping_cart] Please don’t make this into a new acronym – we don’t need more acronyms alongside “Intelligent Business Process Management”, “Agile Business Process Management” and “Dynamic Business Process Management”. I am here to simplify things, not complicate them.

The thing is in today’s business world there are very few remaining end-to-end Business Processes to map.

Corporations long ago automated their continuous and end-to-end processes, with the result that most of what we have left are “Process Fragments”.

What’s the difference between a Business Process and a Business Process Fragment?

Basically, it’s the presence or absence of objectives at flow graph end nodes.

Business Process flow graphs conveniently dovetail into a terminal node which can accommodate an objective. Get to the end node and you have reached the objective.

Business Process Fragment flow graphs, on the other hand, have no plan-side objectives.

You get to the end node of one process fragment and a robot, human or software threads the end node to another Process Fragment.

Of course, you could thread process fragments together plan side but this would require that you anticipate all possible eventualities and that you have in place rule sets to guide the branching down specific pathways.

Do the math. The higher the number of decision branching points toward the start of a flow graph, the higher the number of permutations and combinations. It is easy, in a relatively simple flow graph, to get to 500,000.
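
To make the arithmetic concrete, here is a minimal sketch that assumes the simplest possible case: a flow graph in which each of n decision boxes is an independent binary branch and every combination of choices yields a distinct pathway:

```python
# Minimal sketch: pathway count in a flow graph where each of n decision
# boxes is an independent binary branch. Real flow graphs converge and
# loop, but the exponential growth is the point.

def pathway_count(decision_boxes: int, branches_per_box: int = 2) -> int:
    return branches_per_box ** decision_boxes

for n in (10, 15, 19, 20):
    print(f"{n} binary decision boxes -> {pathway_count(n):,} pathways")

# 19 binary decision boxes already give 524,288 pathways - past the
# 500,000 figure quoted above.
```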

It’s OK to let your knowledge workers do some of the decision branching. The usual reason for hiring knowledge workers is that they know what to do, when, how and why – and, if you deploy them properly, where.

If you are worried about allowing knowledge workers to pick the branching at run time, you probably should not have hired them.

Under the new era of Business Process Fragments, how do we know, plan side, when we are done?

Answer: you don’t.

Objectives move to run-time Case environments where a Case Manager decides when objectives have been met and it is OK to close the Case. (i.e. “It ain’t over until the fat lady sings”)

Now, it’s obvious from the above that our running list of “must haves” includes a) a graphic process mapping facility, b) a compiler so you can roll out templates of your graphic process maps, and c) a run-time environment capable of providing workload management across orders and users in the context of scarce resources.

You need d) global Case-level rules so that as and when users deviate from “best practice” protocols (i.e. skip over steps, perform steps out of sequence, perform ad hoc steps not in any process fragment, or thread process fragments together in ways that are less than ideal), these users will be tripped up by such rules (a minimal sketch of one such rule appears after the list below).

You also need e) a repository so you can look back over time and see who did what, how, when and where.

And, you need f) the capability to import data and export data from/to 3rd party local and remote systems, including customer systems and applications.

So, there you have it, your complete shopping list for a BPMs.

  • Process mapping
  • Compiler
  • Run-time Case management environment w/ workload management facilities
  • Global Case-level rule sets
  • Data Repository
  • Data Exchanger

By my count, that makes six (6) “must haves”.
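
The list says nothing about how item d) might be implemented; here is a minimal sketch, assuming each global Case-level rule is a predicate over the Case’s intervention history, checked whenever a user tries to start a step (the rule, Case structure and step names are invented for illustration):

```python
# Hypothetical sketch of global Case-level rules (item d above).
# A rule is a predicate over the Case's intervention history; a user
# action that violates a rule is blocked or flagged.

completed = []  # ordered history of completed step names for one Case

def rule_consent_before_treatment(history, next_step):
    """Example rule: 'treatment' may not start before 'consent' is done."""
    return next_step != "treatment" or "consent" in history

RULES = [rule_consent_before_treatment]

def try_start(step: str) -> bool:
    violations = [r.__name__ for r in RULES if not r(completed, step)]
    if violations:
        print(f"Step '{step}' blocked by: {violations}")
        return False
    completed.append(step)
    return True

try_start("treatment")   # blocked: consent not yet recorded
try_start("consent")
try_start("treatment")   # now allowed
```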

Keep this list handy when you book a seat at a webinar or register for a seminar on BPMs.

The generic name for the type of system you should be looking for is ACM/BPM (Adaptive Case Management/Business Process Management).


Pick a BPMs, any BPMs


We too easily settle for various states of affairs, only to find that outcomes could have been more favorable given more research, better use of logic and less attention to paid “reviews”.

Hardly a day goes by without a new LinkedIn invite to attend a web demo or seminar on the “best” BPMs (business process management software suite).

“Best” for whom? The answer typically is “best” for the vendor.

Nowhere in these presentations is there much of an attempt to itemize essential functionality of a BPMs followed by a demonstration of how a product being showcased provides such functionality.

Most of these presentations are “show and tell” which translate to “. . . see what we can do with this fantastic product”.

If you have read my “Spend and Save” blog post, you can easily understand where many of the web demo/seminar attendees are coming from.

Unlike Sue in “Along Came Jones”, they don’t need fixes for their problems, because they have not done an analysis of what their problems are – so, no need to bring anyone in to fix them.

“Along Came Jones”, by the way, was a hit song written by Jerry Leiber and Mike Stoller and originally recorded by The Coasters in 1959. The song tells of the interaction between a stock gunslinger villain, “Salty Sam”, and a ranch owner, “Sweet Sue”. Sam makes various attempts, the first of which is at an abandoned sawmill, to get Sue to give him the deed to her ranch, but is each time outdone by Jones.

Remember the Coasters?

I did not expect you would, but here is a link (great saxophone!)

https://www.youtube.com/watch?v=MrGaoSB0Eus

Now, at a more practical level, stay tuned for my checklist of “must have” functions for a BPMs. Don’t sign up for any BPM web demos or BPM seminars until you have this checklist and don’t be shy about asking the presenters how/where in their product offerings they address these functions.


The Importance of Continuity of Patient Care


Not so long ago, when you saw your GP and they referred you to a specialist, you would have to start all over again with your demographics and health history.

Most of us see several healthcare providers at different agencies over time; we visit ERs on weekends, and we have tests done at labs.

A reasonable patient expectation is that your need-to-know patient information will be available to healthcare providers/facilities you visit. But, don’t count on it just yet. In the absence of automation, providers are simply too busy to consolidate your patient data to their EHRs.

It’s not too much of a stretch to expect that if you are on vacation in a foreign country important information relating to you would also be available, on demand, at facilities you are visiting for the first time (again, on a strict need-to-know basis). Don’t count on it just yet.

I notice, at LinkedIn discussions, distinctions being drawn between interoperability and interconnectivity.

I hope we don’t go down the path where all EHRs end up being “standardized”. What is needed is the ability for agencies to exchange data (i.e. interconnectivity).

It seems that every second LinkedIn post on interoperability/interconnectivity at some stage cautions readers with “… but we are not there yet”. I agree “we” are not yet there in most cases, but I maintain that we could easily be there, now, and in some cases we are there.

An easy way to demonstrate interconnectivity and how it should work goes like this:

A managed care company that has member clinics can easily ask its members to upload daily progress notes, results of lab tests received etc. to a hub. The clinics already have connectivity with the MCO for submittal of claims. No big deal to set up a separate upload.

It’s also easy for these same agencies toward the end of each working day to upload a list of patients they will be seeing the next day and get back visit reports from other agencies for import to their EHRs.

If a particular EHR cannot dovetail incremental third-party visit data, there is nothing wrong with logging into the hub on a second screen to at least view patient activity at third-party agencies.
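
Nothing above prescribes a transport; here is a minimal sketch of a clinic pushing a daily progress note to an MCO hub, assuming a hypothetical REST endpoint and JSON payload (the URL, token and field names are all invented):

```python
# Hypothetical sketch of a clinic pushing a daily progress note to an
# MCO hub. The endpoint, token and payload fields are invented; a real
# deployment would use an agreed transport and authentication scheme.
import json
import urllib.request

HUB_URL = "https://hub.example-mco.org/api/progress-notes"  # hypothetical

note = {
    "clinic_id": "clinic-042",
    "patient_id": "P-31415",          # hub-assigned identifier
    "visit_date": "2015-06-01",
    "note": "Follow-up visit; BP stable; renewed prescription.",
}

req = urllib.request.Request(
    HUB_URL,
    data=json.dumps(note).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <clinic-token>"},  # placeholder
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print("Hub accepted note:", resp.status)
```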

What if you are not a member of an MCO? Here, we need State or Federal hubs.

It’s probably not a good idea to have one central hub but, even with 50 hubs, these could be connected in a ring. If you are in your home town, your healthcare record is going to be at the local hub; if you are traveling, it’s not a big issue to link to your home-base hub and download/upload.

There is a second type of needed interconnectivity.

Here, we need to interconnect staff and physical resources so that things do not fall between the cracks moving from one step to the next along a patient care pathway. Bearing in mind that a typical hospital must be able to process several hundred patients per day, there is an added need to prioritize tasks, then level and balance workload across staff, all in the context of scarce resources.

Traditional EHRs don’t do a very good job of “managing” workflow, for the simple reason that they lack RALB (Resource Allocation, Leveling and Balancing). The solution is to put it in. We have known how to manage serial and parallel tasks since 1957 (i.e. CPM), even before this.
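
RALB internals are a topic for another post, but here is a minimal sketch of the leveling/balancing idea, assuming each task needs one skill and workload is measured in pending minutes (staff names, skills and durations are invented):

```python
# Minimal sketch of workload leveling/balancing (the "LB" in RALB):
# route each task to the least-loaded staff member holding the skill.
# Staff, skills and durations are invented for illustration.

staff = {
    "nurse_a": {"skills": {"triage", "bloodwork"}, "load_min": 90},
    "nurse_b": {"skills": {"triage"},              "load_min": 30},
    "tech_a":  {"skills": {"bloodwork", "xray"},   "load_min": 60},
}

def assign(task: str, skill: str, minutes: int) -> str:
    qualified = {name: s for name, s in staff.items() if skill in s["skills"]}
    if not qualified:
        raise ValueError(f"no one qualified for {skill}")
    name = min(qualified, key=lambda n: qualified[n]["load_min"])
    staff[name]["load_min"] += minutes
    print(f"{task} ({skill}, {minutes} min) -> {name}")
    return name

assign("ER intake #1", "triage", 15)      # goes to nurse_b (lighter load)
assign("lab draw #7", "bloodwork", 10)    # goes to tech_a
```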

Healthcare Data Interconnectivity 2015

I don’t buy into the notion that it’s “difficult” to export, package, ship, receive and import patient data. What we should be saying is that many software manufacturers “make it difficult”, for customer-retention purposes. Given the amount of money these systems cost, it’s not a bad strategy to run some of these “bad” suppliers out of town.

The world has had structured data exchange in the area of international shipping since the early 1960s.

Yes, healthcare is different in that much of the data is unstructured, but given a generic data exchanger you can allow any number of publishers and subscribers to each read/write using their own native data element naming conventions.

Of course, each publisher has to make available a “long name” per data element so that subscribers know what they are subscribing to. Once that is in place, the only remaining hurdle is data transport formats that certain individual subscribers are not able to read.
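
Here is a minimal sketch of how a generic data exchanger might resolve native naming conventions through publisher-declared long names; all element names are invented for illustration:

```python
# Minimal sketch of a generic data exchanger resolving native element
# names via publisher-declared "long names". All names are invented.

# Publisher registers: native name -> long name
publisher_map = {
    "sbp": "systolic blood pressure (mmHg)",
    "dbp": "diastolic blood pressure (mmHg)",
}

# Subscriber registers: long name -> its own native name
subscriber_map = {
    "systolic blood pressure (mmHg)": "BP_SYS",
    "diastolic blood pressure (mmHg)": "BP_DIA",
}

def exchange(record: dict) -> dict:
    """Translate a publisher record into the subscriber's vocabulary."""
    out = {}
    for native, value in record.items():
        long_name = publisher_map.get(native)
        target = subscriber_map.get(long_name)
        if target:                      # unmapped elements are skipped
            out[target] = value
    return out

print(exchange({"sbp": 128, "dbp": 82}))  # {'BP_SYS': 128, 'BP_DIA': 82}
```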

It follows that the worst-case scenario is the one where a formatter has to be written for a particular subscriber and a parser has to be written for the publisher, should that subscriber need to push data back to the publisher. As the number of “standard” data transport formats increases, the demand for custom formats will decrease.

Are formatters/parsers easy or difficult to write? Not difficult at all, if you have software that can “sniff” a new data transport format and generate the code to get data in/out of a generic data exchanger. The software industry has come a long way: from programmers writing code, to programmers adapting code written by others (including themselves), to code written by programs.

So there you have it, a rational and practical solution for interconnectivity in healthcare.

End of . . . .


Successful Engineering Design Strategies


You may have read about Toyota’s announced focus on hydrogen fuel-cell powered automobiles.

This is a win-win-lose strategy (a win for Toyota, a win for those of us who like living on this planet, but maybe a loss for manufacturers of home generators).

http://www.toyota-global.com/innovation/environmental_technology/fuelcell_vehicle/

The thing is, it seems that if you were to have one of the proposed Toyota cars, you could, provided you don’t need to drive off anywhere with that car, power up essential services at your home for a couple of days.

Clearly this hypothesis needs to be tested but if the promise is kept, small home generators are going to be a lot harder to sell than currently.

My point re engineering design in general is you need three (3) designs:

  1. The one you have in production
  2. Another, under test, that you could roll out fairly quickly.
  3. A disruptive design, in concept or on the drawing board, with the potential to sideline your designs 1 and 2.

No guarantee, of course, you won’t be leapfrogged by one of your current competitors.

The reality here is you might stand a greater chance of being leapfrogged by an organization that currently is not one of your competitors or by an organization that does not yet exist.

If this causes you not to sleep at night, read more of my blog articles.

My close friends maintain they are a sure remedy for insomnia.

 


“Save Time and Spend” versus “Spend Time and Save”


When you go into any marketplace looking for a technology solution to address a complex problem, you basically have two options.

You can look at “best of class” rankings and pick near the top – this will save you time but it may cost you a lot of money.

The other option is to carry out an exhaustive search and hopefully use non-subjective filtering to find a “best fit”, spending time but saving money.

 

“Best of Class” – How it often works.

The first criterion of importance when you want to make a “best of class” product selection decision is the integrity and capability of the ranking service. Have the vendor products actually been evaluated, or is the ranking a function of how much money vendors have paid to have their products “evaluated”?

Next, we have to worry about what type of prospective customer the ranking was done for.

A high-ranking system that handles a certain volume of transactions but is not scalable may not work for you.

 

“Best Fit” – How it usually really works

For organizations that want to do their own ranking, the usual approach is to prepare and issue an RFP. Corner-cutting in this area usually results in the manufacture of a features list that may or may not reflect the needs of end-users.

A better approach is to consult users to find out what they actually need and write up the RFP on that basis.

Neither of these approaches results in a shopping list that requests a solution for unanticipated future needs. For this, you have to go to an experienced consultant, bearing in mind that most consultants with the ability to consistently predict the future will be difficult to lure out of the private Caribbean islands they have retired to.

This puts your solution search somewhere between “Save Time and Spend” and “Spend Time and Save”, in that preparing an RFP is a non-trivial exercise both time- and money-wise. The folks who prepare RFPs do not always know, or bother to find out, what their users want or need, so they consult product literature and prepare an inventory of features across a large number of vendors.

The vendors invest heavily in responding, and the one with the most features often gets the contract, all other things being equal. Except when the buyer has decided in advance who is to get the contract, in which case the future “also-ran” vendors really should not spend a lot of time and money responding to the RFP.

RFPs often start “feature wars” where any vendor capable of understanding the Scope of Work (SOW) will traipse through the checklists and respond positively.

The problem with “best fit” is that users usually need only 10% of the features requested, so you can quickly see that a vendor high up on the feature-count list might score very poorly on the few features that users actually need. It would be helpful if features could be ranked on the basis of their usefulness, but that seems to be difficult, particularly when buyers go into the marketplace without bothering to consult users about what is and is not important to them.

Sound depressing?

‘Fraid so.

The easy route ends up being expensive and the difficult route takes a long time.


 
