Where’s The Beef? – An under-the-hood look at “seamlessly integrated” BPMs


I keep reading about “seamlessly integrated” BPMs (Business Process Management Systems), but when I start to look under the hood, it quickly becomes obvious that many content authors have no clue regarding the needs of organizations in the area of process management.

The reality is they should be talking about “haywired” BPM modules.

Most of the product pitches start with a description of a process mapping environment. Some of the descriptions go on and on about languages/notations. This tells us that the environment being promoted is not end-user oriented.

No end-user wants to learn a new language/notation nor do they want to hire or have to liaise with a team of process experts or software development intermediaries.

The real experts for any process are the folks who work with that process day in/day out. Chances are you need facilitators to drag them out of their silos but with minor prompting, end-users can build, own and manage their processes.

Next in the list of capabilities we learn that there are “interfaces” that convert the graphic process map into, it seems, a run-time template that is “not rigid”.

What “interface” exactly would one possibly want other than a “rollout” button? If there is more to it than that, it’s a dead giveaway that the protocol is too complicated and unworthy of the “seamlessly integrated” tag.

Same for “not rigid” – given we know that managing any Case requires being able to deal with a random mix of structured versus ad hoc interventions.

Any detailed explanation about a BPM template not being “rigid” is a smokescreen for inadequacy in this area.

We all know that at any step along any process template you may be required to reach out to a customer (or some outside system/application) or accept input from a customer or outside system/application.   If “rigidity” has to be highlighted, other than in a historical account of the evolution of BPM, the setup is too complicated.

Strike three!

I could quit here but if any readers are still with me, I am not yet done with the rant.

Here goes – at any process template step it’s a given that users will need to collect data.

These software systems therefore need at least one form at each step/intervention and, from an end-user perspective, the form needs to be local to the step.

Same for all manner of .txt, .doc, .ppt, .pdf, .xls, even video/audio recordings that may relate to a step. All of these need to be attributes of steps, not off in some “shared” repository.
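
To make “local to the step” concrete, here is a minimal sketch (illustrative Python, not any particular vendor’s object model) of a step that carries its own form fields and its own attachments:

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class Step:
    """One step/intervention along a process template instance."""
    name: str
    form_fields: dict = field(default_factory=dict)   # data collected at this step
    attachments: list = field(default_factory=list)   # .doc, .pdf, recordings, etc.

@dataclass
class Case:
    case_id: str
    steps: list = field(default_factory=list)

# The .doc is an attribute of the step, not off in a shared repository;
# clicking on it should hand off to MS Word, not re-implement Word.
intake = Step(name="Intake interview",
              form_fields={"referral_source": None, "presenting_issue": None})
intake.attachments.append(Path("intake_consent.doc"))

case = Case(case_id="2015-0042", steps=[intake])
```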

What end-users want/need is real “seamless integration” and a modest amount of “interfacing.”

Clearly, when they click on a .doc attribute at a step, they want interfacing with MS Word, not replication of word processor functionality within the Case environment.

Why multiple references to Case?

The thing is we want the primary focus to be on reaching objectives, and if “work” has become a combination of ad hoc and structured interventions, we pretty much need to host all of this in a run-time environment that lets users decide what to do next, when and how (ad hoc intervention capability), with some “guidance” attributable to background BPM process templates. Clearly, it’s not only all about BPM.

We also need to look to the environment to “rein in” extreme excursions away from policy/procedure (global Case-level rule sets, if you like). Otherwise you have no “governance”.

Case provides all of the necessary functionality, including data exchange plus auto-buildup of a longitudinal history of interventions (how otherwise would a user be able to decide what the next intervention should be at a Case?).
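
As a bare-bones sketch of that auto-built longitudinal history (nothing more than an append-only, time-stamped log per Case; class and field names are invented for illustration):

```python
from datetime import datetime, timezone

class CaseHistory:
    """Append-only longitudinal record of interventions at a Case (illustrative only)."""

    def __init__(self, case_id):
        self.case_id = case_id
        self._log = []

    def record(self, actor, intervention, ad_hoc=False):
        # Every structured or ad hoc intervention is time-stamped and kept,
        # so the Case Manager can look back before deciding what to do next.
        self._log.append({
            "when": datetime.now(timezone.utc),
            "actor": actor,
            "intervention": intervention,
            "ad_hoc": ad_hoc,
        })

    def timeline(self):
        return sorted(self._log, key=lambda entry: entry["when"])

history = CaseHistory("2015-0042")
history.record("nurse_jones", "Intake interview completed")
history.record("dr_smith", "Phoned pharmacy to confirm dosage", ad_hoc=True)
```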

The real icing on the cake in some of these nonsensical pitches is references to “process objectives”.

If you no longer have “processes” (what we have today are “process fragments” that get threaded together by users, robots and software at run time), how can/should we expect to find objectives at process template end points?

No processes, no convenient end points, no objectives.

The answer is objectives have gone from being plan-side attributes of process maps to run-time attributes of Cases.

Once you venture into areas where there is a need to accommodate a random mix of ad hoc and structured interventions (i.e. most business areas today, except where we may find a few not-yet-automated end-to-end processes), it is the Case Manager who decides what the objectives are for a Case, and it is they, not BI analysts nor IT, who park these objectives at each Case.

Case Managers also monitor progress toward meeting Case objectives and this typically requires fairly complex non-subjective decision making, not simple counting of the number of remaining steps along a process template pathway.

See posts I have made on FOMM (Figure of Merit Matrices).
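
As a rough illustration only (criteria, weights and scores below are invented, and this is not the full FOMM method), a figure of merit can be read as a weighted score against the maximum possible score:

```python
def figure_of_merit(criteria):
    """criteria: list of (weight, score_0_to_10) tuples for one Case objective."""
    earned = sum(weight * score for weight, score in criteria)
    possible = sum(weight * 10 for weight, _ in criteria)
    return earned / possible if possible else 0.0

# Hypothetical objective: "ready for discharge", three weighted criteria.
discharge_readiness = [
    (3, 8),   # symptom stability, weight 3, scored 8 out of 10
    (2, 5),   # medication adherence
    (1, 9),   # follow-up appointments booked
]
print(f"Progress toward objective: {figure_of_merit(discharge_readiness):.0%}")  # 72%
```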

Just last week I read some promotional material announcing a ‘transition’ to “Case”.

I pointed out to the authors that Case is not new, that it is actually a term borrowed from the legal profession and was alive and well in UK medicine from as early as 1603.

They have not thus far responded to my objections.

It’s easy to determine within healthcare whether there is a focus on Cases.

Just walk into a clinic/hospital and ask if you can talk to a Case Manager. You will probably have a roomful of choices and some of these folks have been doing Case Management for decades. They have job titles, job descriptions and really perform “Case Management” day-in/day-out.

Most of my readers, aside from members of the flat-earth society, are starting to get this. Except that the way things have been going lately, we may very well soon have a flat earth, so they may end up having the last laugh.

“Case” – not new, not close, no cigar.


3D Strategic Planning – What you need to know about it


Strategic Planning is a “connect-the-dots” exercise that usually starts off like this . . . [Image: big data, a mess]

In order to develop strategy you need information on multiple Entities relating to your business (e.g. Capital, Access to Capital, Land, Equipment, Tools, Premises, Staff, Current Products/Services, Products/Services Under Development, Projects Awaiting Approval, Technology Trends, Changing Legislation, Competitors).

We know that decision-making involves the transformation of information into action. The problem is you cannot make decisions if you cannot easily find and access the information needed for such decisions.

For any given opportunity, arriving at a go/no-go decision will impact one or more of these Entities.

One added complexity is the way information belonging to one Entity interacts with information belonging to another Entity.

This is where we make the link between traditional strategic planning (deciding how to make best use of scarce resources on initiatives that track with corporate mission) and the “connect-the-dots” approach used by law enforcement for investigations.

The key point is the law enforcement “connect-the-dots” approach can be “borrowed” for corporate strategic planning purposes.

Here is a typical practical scenario:

An opportunity has been identified, the sponsors present to the Board and the Board now has to assess the opportunity, assign a ranking, assign a priority and if the project manages to make its way through these filters, allocate funds to allow an initiative to move forward.

Different opportunities impact different Entities in different ways.

It follows that if you are consolidating information relating to corporate Entities you may need to provisionally allocate assets/resources to several competing opportunities.

All in the interest of making better decisions.

One way to do this is to consolidate all Entity information for the Corporation at a graphic knowledge base and then alias information relevant to each opportunity for local consultation at each opportunity.   This allows you to toggle between the “big picture” and individual opportunities, with each opportunity being able to “see” competition from others on the use of scarce resources.
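
A minimal sketch of that aliasing idea: each opportunity holds references to the corporation’s Entity records rather than copies, so a provisional allocation made against one opportunity is immediately visible from every other one (names and numbers below are invented):

```python
class Entity:
    """One corporate Entity (e.g. a staff pool) shared by reference across opportunities."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.provisional = {}   # opportunity name -> provisionally allocated amount

    def allocate(self, opportunity, amount):
        self.provisional[opportunity] = amount

    def remaining(self):
        return self.capacity - sum(self.provisional.values())

engineers = Entity("Staff: engineers", capacity=40)

# Two competing opportunities alias the same Entity object (no copies).
opportunity_a = {"name": "New product line", "entities": [engineers]}
opportunity_b = {"name": "Plant retrofit", "entities": [engineers]}

engineers.allocate(opportunity_a["name"], 25)
engineers.allocate(opportunity_b["name"], 20)
print(engineers.remaining())   # -5: the over-commitment is visible from both opportunities
```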

If you find your favorite proposed initiative has a ranking below another initiative you can perhaps do more research on the merits/benefits of your initiative and improve its ranking.

The more you are able to “connect-the-dots” between initiatives and their “draws” on scarce resources, the greater the potential quality of your decision-making at and across initiatives.

Why 3D?

Well, you will discover soon enough that trying to build a single hierarchy comprising, say, 500,000 information points on one computer screen requires the use of 3D “Russian Doll” or “Cascading Folder” mapping, as illustrated in the US State Dept Country Profiles demo database (all countries: business, travel, law enforcement, narcotics, terrorism, etc.).

Try that on paper or whiteboard with post-its.

What you need is a graphic free-form search knowledge base capability that accommodates any number of separate hierarchies, with “connect-the-dots” facilities and with the ability to quickly zoom in from the “big picture” to low-level detail and back.
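
In its simplest form that is just nodes that can hold child nodes, plus a free-form search that returns the drill-down path from the big picture to the detail. A toy sketch (node names invented):

```python
def find(node, term, path=()):
    """Yield the drill-down path to every node whose name matches the search term."""
    here = path + (node["name"],)
    if term.lower() in node["name"].lower():
        yield here
    for child in node.get("children", []):
        yield from find(child, term, here)

knowledge_base = {
    "name": "Corporation",
    "children": [
        {"name": "Capital", "children": [{"name": "Access to capital"}]},
        {"name": "Staff", "children": [
            {"name": "Engineering", "children": [{"name": "Hydraulics team"}]},
        ]},
    ],
}

for hit in find(knowledge_base, "hydraulics"):
    print(" > ".join(hit))   # Corporation > Staff > Engineering > Hydraulics team
```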

At the end of the day, it’s all about how you like to look at your corporation – you really only have two choices . . .

Like this

[Image: big data, orderly]

Or like this

[Image: big data, a mess]

Think about this article the next time you go to a meeting pushing a cart with a 3-foot pile of reports, papers and spreadsheets.

Key Words: Strategic planning, Connect-the-dots, knowledge bases


Policy, procedure, KPIs – how to run a business


Corporations have infrastructures with various asset classes (capital, access to capital, plant, equipment, people, existing products/services, new products/services, customers, and partners).

The ones that have “secret sauces” succeed; most of the others fail.

The rules for success are quite simple – it’s important to build and maintain each asset class individually.

Then, aside from the risk of being leapfrogged, you also need to enhance your assets to keep ahead of the competition.

Not all assets have the same relative strategic value, so when making decisions regarding the commitment/deployment of assets you need a way to study each funding request in the light of its potential to support strategy.

It pays to maintain a reserve in each asset class. If, for instance, you commit all of your staff to a large project you may need to refuse new opportunities and if your “all-eggs-in-one-basket” initiative fails you will find yourself in damage control mode.

The traditional approach to “management” has been to standardize and streamline.

Policy provides governance, procedure provides guidance, KPIs allow CEOs to steer the ship.

The problem is people don’t keep policy in mind, they don’t read procedure and because things change quickly, it’s very easy to be looking at the wrong KPIs.

This is why we have BPM (Business Process Management) and ACM/BPM (Adaptive Case Management).

A Bit of History

I have always maintained the position that BPM had its origins in CPM, the Critical Path Method (nodes, arcs, an end-node objective).

CPM goes back to 1957 (possibly earlier), with flow graphing hitting the streets both in the military (the Polaris Program) and in the construction business (E. I. du Pont de Nemours).

Media coverage of flow graphing peaked in 1962 with DOD/NASA’s PERT/Cost. I don’t recall seeing much media frenzy over CPM, but the “critical path” methodology has evolved over time to the point where few considering launching a large project would do so without it.

The main contribution introduced by BPM was content-driven decision-box branching and loopbacks.

It is worth pointing out that branching itself was a part of GERT (an invention of General Electric), which recognized the need to avoid having to engage all pathways in a flow graph. The difference in GERT was that the branching was evidence-based (i.e. plan side), whereas BPM branching is content-sensitive, causing rule set engagement to occur at run time.

BPM Today

The problem with BPM came when most of the low-hanging fruit (i.e. mostly linear, straight through, end-to-end processes) had been harvested.

This is resulting in an exodus from traditional BPMs to Case where objectives can no longer be conveniently parked at process end points.

ACM/BPM

The thing about Case is that it can host objectives and allow background BPM to continue to provide Orchestration.

Governance comes from rule-sets at the Case level. We call all of this ACM/BPM where ACM stands for “Adaptive Case Management”. Some just call the hybrid approach ACM but we need both Orchestration and Governance plus a few other tools.

Corporations that embrace ACM/BPM end up with the best of two worlds i.e. a flexible environment in which structured “best practices” in the form of process fragments are available for users, machines and software to thread together at run time, and where the ability exists at any stage during the lifecycle of a Case to insert ad hoc interventions.

Contrast this with the shaky foundation that maintains that you can, with BPM alone, map out all eventualities. It does not take a lot of analysis to realize that in a relatively simple flow graph, the closer the branching decision boxes are to the start node, the greater the number of permutations and combinations. It is easy to identify flow graphs where the number of permutations and combinations runs into the hundreds of thousands (a series of just 19 independent two-way decision boxes already yields 2^19 = 524,288 combinations of branch choices).

ACM/BPM wins hands down because there are no restrictions at any point in the lifecycle of a Case regarding the engagement of specific processing.

Bridging the gap between operations and strategy

It’s fine to be practicing ACM/BPM at the operations level, but it’s a trees-versus-forest wheel-spinning exercise unless/until you have a way to evolve strategy, put in place ways and means of assigning priorities (few companies have the wherewithal to authorize all funding requests that are submitted) and then monitor approved funding requests to make sure work is proceeding on time, on budget and within specification.

Strategy -> Priority Setting -> Objectives <- ROIs <- Cases <- Case objectives

Narrowing the gap between operations and strategy requires ensuring that Case objectives are at all times supportive of strategic objectives.

i.e. Case objectives -> KPIs -> Strategic objectives

The main tools I have used over time to bridge the gap between operations and strategy are a) shifting from ROIs to SROIs at the funding level, b) use of FOMM (Figure of Merit Matrices) during monitoring/tracking as a non-subjective means of assessing progress toward meeting Case objectives, and c) finding ways to consolidate operations-level data to a free-form search knowledge base environment that is able to simultaneously host KPIs.

I will re-visit these in future blog posts.

 


Your BPMs shopping list


If you are in the market for a BPMs, you may be better off looking for a BPFMs (Business Process Fragment Management System).

Please don’t make this into a new acronym – we don’t need more acronyms on top of “Intelligent Business Process Management”, “Agile Business Process Management” and “Dynamic Business Process Management”. I am here to simplify things, not complicate them.

The thing is, in today’s business world there are very few remaining end-to-end Business Processes to map.

Corporations long ago automated their continuous and end-to-end processes, with the result that most of what we have left are “Process Fragments”.

What’s the difference between a Business Process and a Business Process Fragment?

Basically, it’s the presence or absence of objectives at flow graph end nodes.

Business Process flow graphs conveniently dovetail into a terminal node which can accommodate an objective. Get to the end node and you have reached the objective.

Business Process Fragment flow graphs, on the other hand, have no plan-side objectives.

You get to the end node of one process fragment and a robot, human or software threads the end node to another Process Fragment.

Of course, you could thread process fragments together plan side but this would require that you anticipate all possible eventualities and that you have in place rule sets to guide the branching down specific pathways.

Do the math. The higher the number of decision branching points toward the start of a flow graph, the higher the number of permutations and combinations. Easy in a relatively simple flow graph to get to 500,000.
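
Here is the math, assuming nothing fancier than independent two-way decision boxes in series:

```python
# Each independent two-way decision box doubles the number of possible
# combinations of branch choices: n boxes give 2**n combinations.
for n in (10, 15, 19, 20):
    print(f"{n} decision boxes -> {2 ** n:,} combinations")
# 19 two-way decisions already put you past 500,000 (524,288).
```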

It’s OK to let your knowledge workers do some of the decision branching. The usual reason for hiring knowledge workers is that they know what to do, when, how and why and, if you deploy them properly, where.

If you are worried about allowing knowledge workers to pick the branching at run time, you probably should not have hired them.

Under the new era of Business Process Fragments, how do we know, plan side, when we are done?

Answer: you don’t.

Objectives move to run-time Case environments where a Case Manager decides when objectives have been met and it is OK to close the Case. (i.e. “It ain’t over until the fat lady sings”)

Now, it’s obvious from the above that our running list of “must haves” includes a) a graphic process mapping facility, b) a compiler so you can roll out templates of your graphic process maps, and c) a run-time environment capable of providing workload management across orders and users in the context of scarce resources.

You need d) global Case-level rules so that as and when users deviate from “best practice” protocols (i.e. skip over steps, perform steps out of sequence, perform ad hoc steps not in any process fragment, or thread together process fragments in ways that are less than ideal), these users will be tripped up by such rules (a sketch of one such rule appears after the list below).

You also need e) a repository so you can look back over time and see who did what, how, when and where.

And, you need f) the capability to import data and export data from/to 3rd party local and remote systems, including customer systems and applications.

So, there you have it, your complete shopping list for a BPMs.

  • Process mapping
  • Compiler
  • Run-time Case management environment w/ workload management facilities
  • Global Case-level rule sets
  • Data Repository
  • Data Exchanger

By my count, that makes six (6) “must haves”.
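
By way of illustration of item d), here is a toy global Case-level rule that trips up a user who performs a step without its mandatory predecessor (the step names and the rule itself are invented):

```python
def check_consent_before_treatment(completed_steps):
    """completed_steps: step names in the order they were actually performed."""
    if "treatment" in completed_steps:
        if "consent" not in completed_steps:
            return "RULE VIOLATION: treatment recorded with no consent step"
        if completed_steps.index("consent") > completed_steps.index("treatment"):
            return "RULE VIOLATION: consent recorded after treatment"
    return "OK"

print(check_consent_before_treatment(["intake", "treatment"]))             # violation
print(check_consent_before_treatment(["intake", "consent", "treatment"]))  # OK
```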

Keep this list handy when you book a seat at a webinar or register for a seminar on BPMs.

The generic name for the type of system you should be looking for is ACM/BPM (Adaptive Case Management/Business Process Management).


Pick a BPMs, any BPMs


We too easily settle for various states of affairs, only to find that outcomes could be more favorable given more research, better use of logic and less attention to paid “reviews”.

Hardly a day goes by without a new LinkedIn invite to attend a web demo or seminar on the “best” BPMs (business process management software suite).

“Best” for whom? The answer typically is “best” for the vendor.

Nowhere in these presentations is there much of an attempt to itemize essential functionality of a BPMs followed by a demonstration of how a product being showcased provides such functionality.

Most of these presentations are “show and tell”, which translates to “. . . see what we can do with this fantastic product”.

If you have read my “Spend and Save” blog post, you can easily understand where many of the web demo/seminar attendees are coming from.

Unlike Sue in “Along Came Jones”, they don’t need fixes for their problems because they have not done an analysis of what their problems are, so they see no need to bring anyone in to fix them.

“Along Came Jones”, by the way, was a hit song written by Jerry Leiber and Mike Stoller and originally recorded by The Coasters in 1959. The song tells of the interaction between a stock gunslinger villain, “Salty Sam”, and a ranch owner, “Sweet Sue”. Sam makes various attempts, the first at an abandoned sawmill, to get Sue to give him the deed to her ranch but is, each time, outdone by Jones.

Remember the Coasters?

I did not expect you would, but here is a link (great saxophone!)

https://www.youtube.com/watch?v=MrGaoSB0Eus

Now, at a more practical level, stay tuned for my checklist of “must have” functions for a BPMs. Don’t sign up for any BPM web demos or BPM seminars until you have this checklist and don’t be shy about asking the presenters how/where in their product offerings they address these functions.


The Importance of Continuity of Patient Care


Not so long ago, when you saw your GP and they referred you to a specialist, you would have to start all over again with your demographics and health history.

Most of us see several healthcare providers at different agencies over time, we visit ERs on weekends, and we have tests done at labs.

A reasonable patient expectation is that your need-to-know patient information will be available to healthcare providers/facilities you visit. But, don’t count on it just yet. In the absence of automation, providers are simply too busy to consolidate your patient data to their EHRs.

It’s not too much of a stretch to expect that if you are on vacation in a foreign country important information relating to you would also be available, on demand, at facilities you are visiting for the first time (again, on a strict need-to-know basis). Don’t count on it just yet.

I notice, at LinkedIn discussions, distinctions being drawn between interoperability and interconnectivity.

I hope we don’t go down the path where all EHRs end up being “standardized”. What is needed is the ability for agencies to exchange data (i.e. interconnectivity).

It seems that every second LinkedIn post on interoperability/interconnectivity at some stage cautions readers with “. . . but we are not there yet”. I agree “we” are not yet there in most cases, but I maintain that we could easily be there, now, and in some cases we are there.

An easy way to demonstrate interconnectivity and how it should work goes like this:

A managed care company that has member clinics can easily ask its members to upload daily progress notes, results of lab tests received etc. to a hub. The clinics already have connectivity with the MCO for submittal of claims. No big deal to set up a separate upload.

It’s also easy for these same agencies toward the end of each working day to upload a list of patients they will be seeing the next day and get back visit reports from other agencies for import to their EHRs.

If a particular EHR cannot dovetail incremental third-party visit data, nothing wrong with logging into the hub on a second screen to at least view patient activity at third-party agencies.
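
A sketch of that daily clinic-to-hub cycle (the hub address, endpoint paths and payloads are all invented; a real exchange would of course add authentication, encryption and consent handling):

```python
import json
import urllib.request

HUB = "https://hub.example.org"   # hypothetical MCO or state hub

def post(path, payload):
    """POST a JSON payload to the hub and return the JSON reply."""
    request = urllib.request.Request(
        HUB + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# End of day: push today's progress notes and lab results to the hub.
post("/upload/visit-data", {"clinic": "clinic-17", "notes": ["..."]})

# Also end of day: send tomorrow's patient list, get back visit reports
# recorded at other member agencies for those same patients.
reports = post("/request/visit-reports",
               {"clinic": "clinic-17", "patients": ["MRN-1001", "MRN-2002"]})
```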

What if you are not a member of an MCO? Here, we need State or Federal hubs.

It’s probably not a good idea to have one central hub but, even with 50 hubs, these could be connected in a ring. If you are in your home town, your healthcare record is going to be at the local hub; if you are traveling, it’s not a big issue to link to your home-base hub and download/upload.

There is a second type of needed interconnectivity.

Here, we need to interconnect staff and physical resources so that things do not fall between the cracks moving from one step to the next along a patient care pathway. Bearing in mind that a typical hospital must be able to process several hundred patients per day, there is an added need to prioritize tasks, then level and balance workload across staff, all in the context of scarce resources.

Traditional EHRs don’t do a very good job “managing” workflow for the simple reason that they lack RALB (Resource Allocation, Leveling and Balancing). The solution is to put it in. We have known how to manage serial and parallel tasks since 1957 (i.e. CPM), and even earlier.
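
The “leveling and balancing” part of RALB, reduced to its simplest possible form, is: hand each task, in priority order, to the least-loaded staff member who has the required skill. This is only a sketch with made-up tasks and staff; a real scheduler would also respect task precedence (the CPM part), shifts, rooms and equipment:

```python
def balance(tasks, staff):
    """Greedy leveling: assign each task, by priority, to the least-loaded qualified person."""
    load = {person: 0 for person in staff}
    assignment = {}
    for task in sorted(tasks, key=lambda t: t["priority"]):
        eligible = [p for p in staff if task["skill"] in staff[p]]
        person = min(eligible, key=lambda p: load[p])
        load[person] += task["minutes"]
        assignment[task["name"]] = person
    return assignment, load

staff = {"RN Lee": {"triage", "meds"}, "RN Cruz": {"meds"}, "Tech Kim": {"bloodwork"}}
tasks = [
    {"name": "Triage pt 104", "skill": "triage",    "minutes": 15, "priority": 1},
    {"name": "Meds pt 88",    "skill": "meds",      "minutes": 10, "priority": 2},
    {"name": "Bloods pt 62",  "skill": "bloodwork", "minutes": 20, "priority": 3},
    {"name": "Meds pt 104",   "skill": "meds",      "minutes": 10, "priority": 4},
]
print(balance(tasks, staff))
```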

Healthcare Data Interconnectivity 2015

I don’t buy into the notion that it’s “difficult” to export, package, ship, receive and import patient data. What we should be saying is many software manufacturers “make it difficult”, for customer retention purposes. Given the amount of money these systems cost, it’s not a bad strategy to run some of these “bad” suppliers out of town.

The world has had structured data exchange in the area of international shipping since the early 1960s.

Yes, healthcare is different in that much of the data is unstructured, but given a generic data exchanger you can allow any number of publishers and subscribers to each read/write using their own native data element naming conventions.

Of course, each publisher has to make available a “long name” per data element so that subscribers can know what they are subscribing to. Other than that, the only hurdle is data transport formats that certain individual subscribers are not able to read.
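
A toy illustration of matching on published “long names” while each side keeps its own native data element names (all of the names here are invented):

```python
# What the publisher advertises: native name -> published long name.
publisher_catalog = {
    "pt_dob": "Patient date of birth (ISO 8601)",
    "pt_a1c": "Hemoglobin A1c result, percent",
}

# What the subscriber maps each long name onto in its own system.
subscriber_map = {
    "Patient date of birth (ISO 8601)": "DOB",
    "Hemoglobin A1c result, percent": "LAB_HBA1C",
}

def translate(record):
    """Rename a publisher record into the subscriber's native element names."""
    out = {}
    for native_name, value in record.items():
        long_name = publisher_catalog[native_name]
        out[subscriber_map[long_name]] = value
    return out

print(translate({"pt_dob": "1950-03-07", "pt_a1c": 7.2}))
# {'DOB': '1950-03-07', 'LAB_HBA1C': 7.2}
```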

It follows that the worst case scenario is the one where a formatter has to be written for a particular subscriber and a parser has to be written for the publisher should there be any need for that subscriber to push back data to the publisher. As the number of “standard” data transport formats increases, the demand for custom formats will decrease.

Easy/difficult to write formatters/parsers? Not at all, if you have software that can “sniff” a new data transport format and generate code to get this in/out of a generic data exchanger.  The software industry has come a long way from having programmers write code, to adapting code written by others (including themselves) to code written by programs.

So there you have it, a rational and practical solution for interconnectivity in healthcare.

End of . . . .


Successful Engineering Design Strategies


You may have read about Toyota’s announced focus on hydrogen fuel-cell powered automobiles.

This is a win, win, lose strategy (for Toyota, for those of us who like to live on this planet, but maybe not for manufacturers of home generators).

http://www.toyota-global.com/innovation/environmental_technology/fuelcell_vehicle/

The thing is, it seems that if you were to have one of the proposed Toyota cars, you could, providing you don’t need to drive off anywhere with that car, power up essential services at your home for a couple of days.

Clearly this hypothesis needs to be tested but if the promise is kept, small home generators are going to be a lot harder to sell than currently.

My point re engineering design in general is you need three (3) designs:

  1. The one you have in production
  2. Another, under test, that you could roll out fairly quickly.
  3. A disruptive design, in concept or on the drawing board, with the potential to sideline your designs 1 and 2.

No guarantee, of course, you won’t be leapfrogged by one of your current competitors.

The reality here is you might stand a greater chance of being leapfrogged by an organization that currently is not one of your competitors or by an organization that does not yet exist.

If this causes you not to sleep at night, read more of my blog articles.

My close friends maintain they are a sure remedy for insomnia.

 


“Save Time and Spend” versus “Spend Time and Save”


When you go into any marketplace looking for a technology solution to address a complex problem, you basically have two options.

You can look at “best of class” rankings and pick near the top – this will save you time but it may cost you a lot of money.

The other option is to carry out an exhaustive search and hopefully use non-subjective filtering to find a “best fit”, spending time but saving money.

 

“Best of Class” – How it often works.

The first criterion of importance when you want to make a “best of class” product selection decision is the integrity and capability of the ranking service. Have the vendor products actually been evaluated, or is the ranking a function of how much money vendors have paid to have their products “evaluated”?

Next, we have to worry about what type of prospective customer the ranking was done for.

A high-ranking system that handles a certain volume of transactions but is not scalable may not work for you.

 

“Best Fit” – How it usually really works

For organizations that want to do their own ranking, the usual approach is to prepare and issue an RFP. Corner cutting in this area usually results in the manufacture of a features list that may or may not be reflective of the needs of end-users.

A better approach is to consult users to find out what they actually need and write up the RFP on that basis.

Neither of these approaches results in a shopping list that requests a solution for unanticipated future needs. For this, you have to go to an experienced consultant, bearing in mind that most consultants with the ability to consistently predict the future will be difficult to lure out of the private Caribbean islands they have retired to.

This puts your solution search between “Save Time and Spend” and “Spend Time and Save”, in that preparing an RFP is a non-trivial exercise both time- and money-wise. The folks who prepare the RFPs do not always know, or bother to find out, what their users want or need, so they consult product literature and prepare an inventory of features across a large number of vendors.

The vendors invest heavily in responding and the one with the most features often gets the contract, all other things being equal. Except when the buyer has decided in advance who is to get the contract, in which case the future “also-ran” vendors really should not spend a lot of time and money responding to the RFP.

RFPs often start “feature wars” where any vendor capable of understanding the Scope of Work (SOW) will traipse through the checklists and respond positively.

The problem with “best fit” is that users usually only need 10% of the features requested, so you can quickly see that a vendor high up on the feature-count list might score very poorly on the few features that the users actually need. It would be helpful if features could be ranked on the basis of their usefulness, but that seems to be difficult, particularly when the buyers go into the marketplace without bothering to consult the users regarding what is/is not important to them.

Sound depressing?

‘Fraid so.

The easy route ends up being expensive and the difficult route takes a long time.


Hello, our name is . . .


This post is for readers who are new to this blog.

Civerex is a 22-year veteran of the healthcare EHR wars, with occasional forays into the areas of law enforcement, knowledge base building, b2b, workflow management and data exchange.

We are not looking for investment capital, we want to help you to invest in yourself by private-labeling our software to give you the product you have been dreaming about and wanting to put on the market.

We have eight (8) software products (CiverWeb, CiverMind, CiverManage, CiverPsych, CiverMed, CiverOrders, CiverMail, CiverExchange), all of which seamlessly interconnect and cover the full spectrum of strategy development, setup of KPIs, process mapping, modeling, simulation, rollout, monitoring, data consolidation and monitoring of performance.

Over time, we have put together 1,500,000 lines of source code that can be private-labeled as a distinctive new product in virtually any business sector/application area, with a few changes in terminology. Your business sector/your application, the one you have been dreaming of.

Our product portfolio has been developed by one team of developers, not eight. The reason for this outcome is that each time a problem presented itself, we held back until such time as we could work out a generic solution. Better one than eight.

Now, the thing about private-labeling with the right code base is that aside from being able to speak different languages, you can, with one code base, support multiple products where each install is different, the service directories are different, the workflows are different, the data collection forms in service are different and the rule sets in place are different.

We leaped over the hurdle of translating our application shells into different languages by putting language translation facilities in the software, such that a partner in a different country/corporate culture can run the software we distribute, overtype the English and then, when no more English expressions can be found, ship back to us a language file which we compile to a new executable and send back as a translated application.
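
Leaving aside how our software actually does it, the “overtype the English” idea, reduced to its bare bones, looks something like this: every user-visible string goes through a lookup, and anything not yet translated falls back to English and gets queued for the next language file:

```python
translations = {}      # filled from the partner's language file
untranslated = set()   # what still needs overtyping

def tr(english):
    """Return the partner's translation, falling back to the English string."""
    if english in translations:
        return translations[english]
    untranslated.add(english)
    return english

translations["Open Case"] = "Ouvrir le dossier"
print(tr("Open Case"))       # Ouvrir le dossier
print(tr("Close Case"))      # Close Case (falls back, queued for translation)
print(sorted(untranslated))  # ['Close Case']
```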

Our clients like “hands-on”.

They do not fancy having to build database tables/fields. We automated all of this long ago. They do like building, owning and managing their own workflows as an alternative to having to contract with us for “customization” or having to hire an expert consultant to help them build workflows and roll these out to a production environment. They don’t like learning languages or notations. They just want solutions to their problems.

In the area of data exchange, where the challenge is to get disparate systems to talk to each other, we used to build parsers/formatters to allow trading partners to exchange data. We found this to be tedious and decided to write “sniffers” that could scan an incoming data file and, to an extent, save the programmer from having to write code to yield some minor variation of an earlier data transport file format.

There are a few things we have not yet figured out how to do and, of course, we have not done much about solving problems we have not yet heard about.

Here’s the deal.

If you want to put a new application on the market in a particular industry area, you can do this easily by inventing the next-generation Ferrari in your garage. Bill Gates did it. Steve Jobs did it. Why not you?

Another option that will give you “instant gratification” is to become a partner of ours and configure a new product, branded the way you want. The turnoff is the fees you will need to pay us to have us brand a product and provide ongoing support/maintenance, or the times-five fees you will have to pay to buy a copy of our source code, but only an ROI can say whether the garage option or the private-label option might work for you.

You won’t get much help from us in the way of “Cheshire cat” smiling sales rep “assistance” – we are rather unique in that we have a management consulting division, so for each question you put to us you are likely to get ten questions back. We don’t hesitate to tell you that you might be better off with another software vendor, because we have found it’s a lot more pleasant to deal with “delighted customers” as opposed to “disgruntled former customers”.

Finally, if you have a concept you need to promote to others, we have a Video Production Unit that can put a good spin on any set of ideas you feel you need to communicate to your peers, top management or other stakeholders.

Video recording sessions with us tend to be painful. We make you take, re-take and re-take, until the end result “looks good” – we can afford to do this because you pay by the hour but you will like the end result. If you want an inexpensive promo, talk to a teenager who has an iPad. You might end up with an award-winning video. Ask the tooth fairy if I am giving out good advice here.

Enough said.

Bottom line, you will never know whether this is a “good deal” unless/until you pick up the phone and call our Managing Director, Karl Walter Keirstead. Call 450 458 5601.

Who is Karl Walter Keirstead?

Just Google the name and then do some homework before you make your call. It’s a lot easier to talk to someone when you know who they are.


Healthcare – The chickens finally have come home to roost


Make your day by clicking on the link here below and then read this blog post.

http://www.politico.com/story/2015/02/data-fees-health-care-reform-115402.html#ixzz3TR4UwsA2

If you feel healthcare in the USA is “too expensive”, write to Rep. Michael Burgess (R-Texas), a physician who leads the House Energy and Commerce trade subcommittee and is drawing up a bill to enforce data sharing, and tell him he can have interoperability simply by taking some of the members of the “Electronic Health Records Association” to court.

My comments at “story” were:

[quote:

Who buys EMR software that is incapable of exporting its data? Who subscribes to a cloud EMR service that has no option for exporting the data?

And if you must acquire/subscribe to a system that charges ‘extra’ to unlock data export, why are doctors suffering sticker shock?

Is it because they bought a “car” without checking to see whether a motor/transmission was included and if not, how much extra?

Seems to me HIPAA says healthcare service providers are custodians of patient data. How can you be a custodian when you don’t have custody?

Thank goodness, at least one person has it right i.e. “Interoperability is what makes an EHR useful,” said Rep. Michael Burgess (R-Texas) – no surprise to see that Rep Burgess is an MD, not an IT person going into the marketplace to find software that “meets the needs” of clinical staff who have never been consulted re their needs.

And, if you are reading this and think that “interoperability” is “difficult” – think again.

My group builds software for healthcare, law enforcement, manufacturing, b2b.

Aside from healthcare, none of these other sectors could function without seamless interconnectivity. For this reason we built a Data Exchanger that lets any system talk to any other system.

For healthcare, we even built an e-hub that allowed 100+ clinics, all using different EMRs, to exchange data. It ran in pilot mode for about 12 months, consolidating more than 120,000,000 data elements without any significant hiccup.

Each time we found a new set of trading partners who could not use one of our “standard” data transport formats, we wrote a parser/formatter.

We found over time that the number of new formats slowed, but, we had, by that time, grown weary of writing parsers/formatters, so we developed a “sniffer” that could scan an incoming document, figure out pretty much on its own what was new/different and greatly reduce the amount of custom programming needed.

None of this was “rocket science”.

It may be time for a class action suit against the big “x”  EMR vendors. No need to include Civerex in this pack, our EMRs have included import/export facilities since 1995.

My take . . . .. I think many of the players in this game deserve each other.

Do we really need an act of congress to provide relief for victims of self-imposed stupidity?

As my grandmother used to say, “Well . . . I never”.

]
