Policy, procedure, KPIs – how to run a business

Corporations have infrastructures with various asset classes (capital, access to capital, plant, equipment, people, existing products/services, new products/services, customers, and partners).

The ones that have “secret sauces” succeed; most of the others fail.

The rules for success are quite simple – it’s important to build and maintain each asset class individually.

Then, aside from the risk of being leapfrogged, you also need to enhance your assets to keep ahead of the competition.

Not all assets have the same relative strategic value, so, when making decisions re the commitment/deployment of assets, you need a way to assess each funding request in light of its potential to support strategy.

It pays to maintain a reserve in each asset class. If, for instance, you commit all of your staff to a large project you may need to refuse new opportunities and if your “all-eggs-in-one-basket” initiative fails you will find yourself in damage control mode.

The traditional approach to “management” has been to standardize and streamline.

Policy provides governance, procedure provides guidance, KPIs allow CEOs to steer the ship.

The problem is that people don’t keep policy in mind, they don’t read procedure and, because things change quickly, it’s very easy to be looking at the wrong KPIs.

This is why we have BPM (Business Process Management) and its hybrid with Adaptive Case Management, ACM/BPM.

A Bit of History

I have always maintained the position that BPM had its origins in CPM (nodes, arcs, end node objective).

CPM goes back to 1957 (possibly earlier), with flow graphing hitting the streets both in the military (the Polaris Program) and in the construction business (E. I. du Pont de Nemours).

Media coverage of flow graphing peaked in 1962 with DOD/NASA’s PERT Cost. I don’t recall seeing much of a media frenzy over CPM, but the “critical path” methodology has evolved over time to where few would consider launching a large project without it.

The main contribution introduced by BPM was content-driven decision-box branching and loopbacks.

It is worth pointing out that branching itself was a part of GERT (an invention of General Electric), which recognized the need to avoid having to engage all pathways in a flow graph. The difference with GERT was that the branching was evidence-based (i.e. plan side), whereas BPM branching is content-sensitive, causing rule set engagement to occur at run time.

BPM Today

The problem with BPM came when most of the low-hanging fruit (i.e. mostly linear, straight through, end-to-end processes) had been harvested.

This is resulting in an exodus from traditional BPMs to Case environments, where objectives can no longer be conveniently parked at process end points.


The thing about Case is that it can host objectives and allow background BPM to continue to provide Orchestration.

Governance comes from rule sets at the Case level. We call all of this ACM/BPM, where ACM stands for “Adaptive Case Management”. Some just call the hybrid approach ACM, but we need both Orchestration and Governance, plus a few other tools.

Corporations that embrace ACM/BPM end up with the best of two worlds i.e. a flexible environment in which structured “best practices” in the form of process fragments are available for users, machines and software to thread together at run time, and where the ability exists at any stage during the lifecycle of a Case to insert ad hoc interventions.

Contrast this with the shaky foundation that maintains that you can, with BPM alone, map out all eventualities. It does not take a lot of analysis to realize that, in a relatively simple flow graph, the closer the branching decision-boxes are to the start node, the greater the number of permutations and combinations. It is easy to identify flow graphs where the number of permutations and combinations runs into the hundreds of thousands.
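To make the arithmetic concrete, here is a minimal Python sketch; the decision-box counts and branch factors below are invented purely for illustration:

```python
# Illustrative only: count the distinct end-to-end pathways you would have to
# map plan-side when every decision box multiplies the alternatives.

def pathway_count(branch_factors):
    """Distinct pathways through a flow graph whose decision boxes have the
    given numbers of outgoing branches."""
    total = 1
    for branches in branch_factors:
        total *= branches
    return total

# Ten decision boxes with four branches each already exceed a million pathways:
print(pathway_count([4] * 10))  # → 1048576
```

Ten boxes, four branches each, and you are past a million distinct pathways – well beyond anything anyone would map plan-side.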

ACM/BPM wins hands down because there are no restrictions at any point in the lifecycle of a Case re engaging specific processing.

Bridging the gap between operations and strategy

It’s fine to be practicing ACM/BPM at the operations level, but it’s a trees-versus-forest wheel-spinning exercise unless/until you have a way to evolve strategy, put in place ways and means of assigning priorities (few companies have the wherewithal to authorize all funding requests that are submitted), and then monitor approved funding requests to make sure work is proceeding on time, on budget and within specification.

Strategy -> Priority Setting -> Objectives <- ROIs <- Cases <- Case objectives

Narrowing the gap between operations and strategy requires ensuring that Case objectives are at all times supportive of strategic objectives.

i.e.  Case objectives -> KPIs -> Strategic objectives

The main tools I have used over time to bridge the gap between operations and strategy are a) shifting from ROIs to SROIs at the funding level, b) use of FOMM (Figure of Merit Matrices) during monitoring/tracking as a non-subjective means of assessing progress toward meeting Case objectives, and c) finding ways to consolidate operations-level data into a free-form search knowledge base environment that is able to simultaneously host KPIs.
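To give a taste of b), a Figure of Merit Matrix can be reduced to a weighted score; the criteria, weights and scores below are invented for illustration only:

```python
# Hypothetical FOMM sketch: score progress toward Case objectives against
# weighted criteria so that assessment is numeric rather than subjective.

def figure_of_merit(weights, scores):
    """Weighted average in [0, 1]; weights need not be pre-normalized."""
    total_weight = sum(weights.values())
    return sum(weights[c] * scores[c] for c in weights) / total_weight

weights = {"on_time": 3, "on_budget": 2, "within_spec": 5}        # relative importance
scores  = {"on_time": 0.8, "on_budget": 1.0, "within_spec": 0.6}  # current progress
print(round(figure_of_merit(weights, scores), 2))  # → 0.74
```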

I will re-visit these in future blog posts.


Posted in Adaptive Case Management, Business Process Management, Case Management, Competitive Advantage, MANAGEMENT, Strategic Planning

Your BPMs shopping list

If you are in the market for a BPMs, you may be better off looking for a BPFMs (Business Process Fragment Management System).

Please don’t make this into a new acronym – we have enough acronyms with “Intelligent Business Process Management”, “Agile Business Process Management” and “Dynamic Business Process Management”. I am here to simplify things, not complicate them.

The thing is, in today’s business world, there are very few end-to-end Business Processes left to map.

Corporations long ago automated their continuous and end-to-end processes, with the result that most of what we have left is “Process Fragments”.

What’s the difference between a Business Process and a Business Process Fragment?

Basically, it’s the presence or absence of objectives at flow graph end nodes.

Business Process flow graphs conveniently dovetail into a terminal node which can accommodate an objective. Get to the end node and you have reached the objective.

Business Process Fragment flow graphs, on the other hand, have no plan-side objectives.

You get to the end node of one process fragment and a robot, human or software threads the end node to another Process Fragment.
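A minimal sketch of that run-time threading, with hypothetical fragment names and a trivial rule standing in for the human/robot/software chooser:

```python
# Each fragment is just an ordered list of steps; no fragment carries an
# objective of its own. The chooser decides, at run time, what comes next.

fragments = {
    "intake":  ["collect data", "verify identity"],
    "assess":  ["review data", "assign priority"],
    "fulfill": ["perform work", "record outcome"],
}

def run_case(start, choose_next):
    """Thread fragments together until the chooser returns None
    (i.e. a Case Manager decides the objective has been met)."""
    done, current = [], start
    while current is not None:
        done.extend(fragments[current])
        current = choose_next(current, done)
    return done

# A trivial rule set standing in for a knowledge worker's run-time decisions:
order = {"intake": "assess", "assess": "fulfill", "fulfill": None}
print(run_case("intake", lambda frag, history: order[frag]))
```

Swap the lambda for a person or a rule engine and the threading logic does not change – which is the whole point.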

Of course, you could thread process fragments together plan side but this would require that you anticipate all possible eventualities and that you have in place rule sets to guide the branching down specific pathways.

Do the math. The higher the number of decision branching points toward the start of a flow graph, the higher the number of permutations and combinations. In a relatively simple flow graph, it is easy to get to 500,000.

It’s OK to let your knowledge workers do some of the decision branching. The usual reason for hiring knowledge workers is that they know what to do, when, how and why and, if you deploy them properly, where.

If you are worried about allowing knowledge workers to pick the branching at run time, you probably should not have hired them.

Under the new era of Business Process Fragments, how do we know, plan side, when we are done?

Answer: you don’t.

Objectives move to run-time Case environments where a Case Manager decides when objectives have been met and it is OK to close the Case. (i.e. “It ain’t over until the fat lady sings”)

Now, it’s obvious from the above that our running list of “must haves” includes a) a graphic process mapping facility, b) a compiler so you can roll out templates of your graphic process maps, and c) a run-time environment capable of providing workload management across orders and users in the context of scarce resources.

You need d) global Case-level rules so that, as and when users deviate from “best practice” protocols (i.e. skip over steps, perform steps out of sequence, perform ad hoc steps not in any process fragment, or thread together process fragments in ways that are less than ideal), these users will be tripped up by such rules.

You also need e) a repository so you can look back over time and see who did what, how, when and where.

And, you need f) the capability to import data and export data from/to 3rd party local and remote systems, including customer systems and applications.

So, there you have it, your complete shopping list for a BPMs.

  • Process mapping
  • Compiler
  • Run-time Case management environment w/ workload management facilities
  • Global Case-level rule sets
  • Data Repository
  • Data Exchanger

By my count, that makes six (6) “must haves”.
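Item d), the global Case-level rule sets, can be sketched as a simple deviation check; both rules below are invented purely for illustration:

```python
# Hypothetical Case-level rules: users are free to skip or reorder steps,
# but deviations from "best practice" trip a warning at the Case level.

def check_case_rules(performed_steps):
    """Return a list of warnings; an empty list means no rule was tripped."""
    warnings = []
    # Rule 1: consent must be recorded before treatment begins.
    if "treat" in performed_steps:
        before_treat = performed_steps[:performed_steps.index("treat")]
        if "consent" not in before_treat:
            warnings.append("treatment started before consent recorded")
    # Rule 2: a Case should not close without a documented outcome.
    if performed_steps and performed_steps[-1] != "record_outcome":
        warnings.append("Case closing without a recorded outcome")
    return warnings

print(check_case_rules(["intake", "treat", "consent", "record_outcome"]))
# → ['treatment started before consent recorded']
```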

Keep this list handy when you book a seat at a webinar or register for a seminar on BPMs.

The generic name for the type of system you should be looking for is ACM/BPM (Adaptive Case Management/Business Process Management).

Posted in Adaptive Case Management, Business Process Management, Case Management, Data Interoperability, Operational Planning, Process Mapping, R.A.L.B., Scheduling, Software Design

Pick a BPMs, any BPMs

We too easily settle for various states of affairs, only to find that outcomes could be more favorable given more research, better use of logic and less attention to paid “reviews”.

Hardly a day goes by without a new LinkedIn invite to attend a web demo or seminar on the “best” BPMs (business process management software suite).

“Best” for whom? The answer typically is “best” for the vendor.

Nowhere in these presentations is there much of an attempt to itemize essential functionality of a BPMs followed by a demonstration of how a product being showcased provides such functionality.

Most of these presentations are “show and tell” which translate to “. . . see what we can do with this fantastic product”.

If you have read my “Spend and Save” blog post, you can easily understand where many of the web demo/seminar attendees are coming from.

Unlike Sue in “Along Came Jones”, they don’t need fixes for their problems because they have not done an analysis of what their problems are, so there is no need to bring anyone in to fix them.

“Along Came Jones”, by the way, was a hit song written by Jerry Leiber and Mike Stoller and originally recorded by The Coasters in 1959. The song tells of the interaction between a stock gunslinger villain, “Salty Sam”, and a ranch owner, “Sweet Sue”. Sam makes various attempts, the first of them at an abandoned sawmill, to get Sue to hand over the deed to her ranch but is, each time, outdone by Jones.

Remember the Coasters?

I did not expect you would, but here is a link (great saxophone!)


Now, at a more practical level, stay tuned for my checklist of “must have” functions for a BPMs. Don’t sign up for any BPM web demos or BPM seminars until you have this checklist and don’t be shy about asking the presenters how/where in their product offerings they address these functions.

Posted in Business Process Management, Case Management, TECHNOLOGY

The Importance of Continuity of Patient Care

Not so long ago, when you saw your GP and they referred you to a specialist, you would have to start all over again with your demographics and health history.

Most of us see several healthcare providers at different agencies over time, we visit ERs on weekends, and we have tests done at labs.

A reasonable patient expectation is that your need-to-know patient information will be available to healthcare providers/facilities you visit. But, don’t count on it just yet. In the absence of automation, providers are simply too busy to consolidate your patient data to their EHRs.

It’s not too much of a stretch to expect that if you are on vacation in a foreign country important information relating to you would also be available, on demand, at facilities you are visiting for the first time (again, on a strict need-to-know basis). Don’t count on it just yet.

I notice, at LinkedIn discussions, distinctions being drawn between interoperability and interconnectivity.

I hope we don’t go down the path where all EHRs end up being “standardized”. What is needed is the ability for agencies to exchange data (i.e. interconnectivity).

It seems that every second LinkedIn post on interoperability/interconnectivity at some stage cautions readers with “… but we are not there yet”. I agree “we” are not yet there in most cases, but I maintain that we could easily be there now, and in some cases we already are.

An easy way to demonstrate interconnectivity and how it should work goes like this:

A managed care company that has member clinics can easily ask its members to upload daily progress notes, results of lab tests received etc. to a hub. The clinics already have connectivity with the MCO for submittal of claims. No big deal to set up a separate upload.

It’s also easy for these same agencies, toward the end of each working day, to upload a list of the patients they will be seeing the next day and get back visit reports from other agencies for import into their EHRs.

If a particular EHR cannot dovetail incremental third-party visit data, there is nothing wrong with logging into the hub on a second screen to at least view patient activity at third-party agencies.
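Here is a toy sketch of that hub exchange; the hub is an in-memory stand-in and the clinic/patient identifiers are invented:

```python
from collections import defaultdict

hub = defaultdict(list)   # patient id -> visit reports from all member agencies

def upload(agency, reports):
    """End-of-day upload of progress notes / lab results to the hub."""
    for patient_id, note in reports:
        hub[patient_id].append((agency, note))

def tomorrows_packet(agency, roster):
    """Visit reports filed by *other* agencies for tomorrow's patients."""
    return {pid: [(a, n) for a, n in hub[pid] if a != agency] for pid in roster}

upload("clinic_a", [("p1", "lab results received"), ("p2", "progress note")])
upload("er_downtown", [("p1", "weekend ER visit")])
print(tomorrows_packet("clinic_a", ["p1"]))
# → {'p1': [('er_downtown', 'weekend ER visit')]}
```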

What if you are not a member of an MCO? Here, we need State or Federal hubs.

It’s probably not a good idea to have one central hub but, even with 50 hubs, these could be connected in a ring. If you are in your home town, your healthcare record is going to be at the local hub; if you are traveling, it’s not a big issue to link to your home-base hub and download/upload.

There is a second type of needed interconnectivity.

Here, we need to interconnect staff and physical resources so that things do not fall between the cracks moving from one step to the next along a patient care pathway. Bearing in mind that a typical hospital must be able to process several hundred patients per day, there is an added need to prioritize tasks, then level and balance workload across staff, all in the context of scarce resources.

Traditional EHRs don’t do a very good job “managing” workflow, for the simple reason that they lack RALB (Resource Allocation, Leveling and Balancing). The solution is to put it in. We have known how to manage serial and parallel tasks since 1957 (i.e. CPM), and even before that.
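For the record, the core of that 1957-vintage CPM calculation fits in a few lines; the tasks and durations below are hypothetical:

```python
# Earliest finish times for serial/parallel tasks (the forward pass of CPM).

def earliest_finish(tasks):
    """tasks: {name: (duration, [predecessors])} -> {name: earliest finish}."""
    finish = {}
    def ef(name):
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max((ef(p) for p in preds), default=0)
        return finish[name]
    for name in tasks:
        ef(name)
    return finish

care_pathway = {
    "triage":    (1, []),
    "labs":      (3, ["triage"]),    # runs in parallel with imaging
    "imaging":   (2, ["triage"]),
    "diagnosis": (1, ["labs", "imaging"]),
}
print(earliest_finish(care_pathway)["diagnosis"])  # → 5 (labs is on the critical path)
```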

Healthcare Data Interconnectivity 2015

I don’t buy into the notion that it’s “difficult” to export, package, ship, receive and import patient data. What we should be saying is many software manufacturers “make it difficult”, for customer retention purposes. Given the amount of money these systems cost, it’s not a bad strategy to run some of these “bad” suppliers out of town.

The world has had structured data exchange in the area of international shipping since the early 1960s.

Yes, healthcare is different in that much of the data is unstructured, but given a generic data exchanger you can allow any number of publishers and subscribers to each read/write using their own native data element naming conventions.

Of course, each publisher has to make available a “long name” per data element so that subscribers can know what they are subscribing to. Beyond that, the only hurdle is data transport formats that certain individual subscribers are not able to read.

It follows that the worst-case scenario is the one where a formatter has to be written for a particular subscriber, and a parser for the publisher should that subscriber ever need to push data back to the publisher. As the number of “standard” data transport formats increases, the demand for custom formats will decrease.
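A minimal sketch of the publisher “long name” mapping described above; every field name below is invented:

```python
publisher_long_names = {        # publisher's native name -> published long name
    "pt_dob": "patient date of birth",
    "pt_nm":  "patient full name",
}
subscriber_bindings = {         # long name -> subscriber's native name
    "patient date of birth": "DateOfBirth",
    "patient full name":     "PatientName",
}

def translate(record):
    """Re-key a publisher record into the subscriber's own naming convention;
    elements the subscriber has not bound to are simply dropped."""
    out = {}
    for native, value in record.items():
        long_name = publisher_long_names.get(native)
        if long_name in subscriber_bindings:
            out[subscriber_bindings[long_name]] = value
    return out

print(translate({"pt_dob": "1970-01-01", "pt_nm": "Jane Doe"}))
# → {'DateOfBirth': '1970-01-01', 'PatientName': 'Jane Doe'}
```

Neither side has to abandon its own naming conventions; the long names are the only shared vocabulary.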

Is it easy or difficult to write formatters/parsers? Not difficult at all, if you have software that can “sniff” a new data transport format and generate code to get the data into/out of a generic data exchanger. The software industry has come a long way: from having programmers write code, to programmers adapting code written by others (including themselves), to code written by programs.

So there you have it, a rational and practical solution for interconnectivity in healthcare.

End of . . . .

Posted in Case Management, Interconnectivity, Interoperability

Successful Engineering Design Strategies

You may have read about Toyota’s announced focus on hydrogen fuel-cell powered automobiles.

This is a win, win, lose strategy (for Toyota, for those of us who like to live on this planet, but maybe not for manufacturers of home generators).


The thing is, it seems that if you were to have one of the proposed Toyota cars, you could, provided you don’t need to drive off anywhere with that car, power up essential services at your home for a couple of days.

Clearly this hypothesis needs to be tested but if the promise is kept, small home generators are going to be a lot harder to sell than currently.

My point re engineering design in general is you need three (3) designs:

  1. The one you have in production
  2. Another, under test, that you could roll out fairly quickly.
  3. A disruptive design, in concept or on the drawing board, with the potential to sideline your designs 1 and 2.

No guarantee, of course, you won’t be leapfrogged by one of your current competitors.

The reality here is you might stand a greater chance of being leapfrogged by an organization that currently is not one of your competitors or by an organization that does not yet exist.

If this causes you not to sleep at night, read more of my blog articles.

My close friends maintain they are a sure remedy for insomnia.


Posted in Engineering Design, Manufacturing Operations, TECHNOLOGY

“Save Time and Spend” versus “Spend Time and Save”

When you go into any marketplace looking for a technology solution to address a complex problem, you basically have two options.

You can look at “best of class” rankings and pick near the top – this will save you time but it may cost you a lot of money.

The other option is to carry out an exhaustive search and hopefully use non-subjective filtering to find a “best fit”, spending time but saving money.


“Best of Class” – How it often works.

The first criterion of importance when you want to make a “best of class” product selection decision is the integrity and capability of the ranking service. Have the vendor products actually been evaluated, or is the ranking a function of how much money vendors have paid to have their products “evaluated”?

Next, we have to worry about what type of prospective customer the ranking was done for.

A high-ranking system that handles a certain volume of transactions but is not scalable may not work for you.


“Best Fit” – How it usually really works

For organizations that want to do their own ranking, the usual approach is to prepare and issue an RFP. Corner cutting in this area usually results in the manufacture of a features list that may or may not reflect the needs of end-users.

A better approach is to consult users to find out what they actually need and write up the RFP on that basis.

Neither of these approaches results in a shopping list that requests a solution for unanticipated future needs. For this, you have to go to an experienced consultant, bearing in mind that most consultants with the ability to consistently predict the future will be difficult to lure out of the private Caribbean islands they have retired to.

This puts your solution search between “Save Time and Spend” and “Spend Time and Save”, in that preparing an RFP is a non-trivial exercise, both time- and money-wise. The folks who prepare the RFPs do not always know, or bother to find out, what their users want or need, so they consult product literature and prepare an inventory of features across a large number of vendors.

The vendors invest heavily in responding and the one with the most features often gets the contract, all other things being equal. Except when the buyer has decided in advance who is to get the contract, in which case the future “also-ran” vendors really should not spend a lot of time and money responding to the RFP.

RFPs often start “feature wars” where any vendor capable of understanding the Scope of Work (SOW) will traipse through the checklists and respond positively.

The problem with “best fit” is that users usually need only 10% of the features requested, so you can quickly see that a vendor high up on the feature-count list might score very poorly on the few features the users actually need. It would be helpful if features could be ranked on the basis of their usefulness, but that seems to be difficult, particularly when buyers go into the marketplace without bothering to consult the users re what is/is not important to them.

Sound depressing?

‘Fraid so.

The easy route ends up being expensive and the difficult route takes a long time.



Posted in TECHNOLOGY, Uncategorized

Hello, our name is . . .

This post is for readers who are new to this blog.

Civerex is a 22-year veteran of the healthcare EHR wars, with occasional forays into the areas of law enforcement, knowledgebase building, b2b, workflow management and data exchange.

We are not looking for investment capital, we want to help you to invest in yourself by private-labeling our software to give you the product you have been dreaming about and wanting to put on the market.

We have eight (8) software products (CiverWeb, CiverMind, CiverManage, CiverPsych, CiverMed, CiverOrders, CiverMail, CiverExchange), all of which seamlessly interconnect and cover the full spectrum of strategy development, setup of KPIs, process mapping, modeling, simulation, rollout, data consolidation and monitoring of performance.

Over time, we have put together 1,500,000 lines of source code that can be private-labeled as a distinctive new product in virtually any business sector/application area, with a few changes in terminology. Your business sector/your application, the one you have been dreaming of.

Our product portfolio has been developed by one team of developers, not eight. The reason for this outcome is that each time a problem presented itself, we held back until we could work out a generic solution. Better one than eight.

Now, the thing about private-labeling with the right code base is that, aside from being able to speak different languages, you can, with one code base, support multiple products where each install is different: the service directories are different, the workflows are different, the data collection forms in service are different and the rule sets in place are different.

We leaped over the hurdle of translating our application shells into different languages by putting language translation facilities into the software. A partner in a different country/corporate culture can run the software we distribute, overtype the English and then, when no more English expressions can be found, ship back to us a language file, which we compile to a new executable and send back as a translated application.
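The overtype-and-fall-through scheme amounts to a simple lookup; the strings below are invented for illustration:

```python
# Strings a partner has overtyped live in the language file; anything not yet
# translated falls through to the English original.

language_file = {
    "Open Case":  "Ouvrir le dossier",
    "Close Case": "Fermer le dossier",
}

def ui_text(english):
    """Partner's translation if one has been supplied, else the English."""
    return language_file.get(english, english)

print(ui_text("Open Case"))     # → Ouvrir le dossier
print(ui_text("Print Report"))  # → Print Report (not yet overtyped)
```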

Our clients like “hands-on”.

They do not fancy having to build database tables/fields. We automated all of this long ago. They do like building, owning and managing their own workflows as an alternative to having to contract with us for “customization” or having to hire an expert consultant to help them build workflows and roll these out to a production environment. They don’t like learning languages or notations. They just want solutions to their problems.

In the area of data exchange, where the challenge is to get disparate systems to talk to each other, we used to build parsers/formatters to allow trading partners to exchange data. We found this to be tedious and decided to write “sniffers” that could scan an incoming data file and, to an extent, save the programmer from having to write code to yield some minor variation of an earlier data transport file format.

There are a few things we have not yet figured out how to do and, of course, we have not done much about solving problems we have not yet heard about.

Here’s the deal.

If you want to put a new application on the market in a particular industry area, you can do this easily by inventing the next-generation Ferrari in your garage. Bill Gates did it. Steve Jobs did it. Why not you?

Another option that will give you “instant gratification” is to become a partner of ours and configure a new product, branded the way you want. The turnoff is the fees you will need to pay us to have us brand a product and provide ongoing support/maintenance, or the five-times fees you will have to pay to buy a copy of our source code. Only an ROI can say whether the garage option or the private-label option might work for you.

You won’t get much help from us in the way of “Cheshire cat” smiling sales rep “assistance” – we are rather unique in that we have a management consulting division, so for each question you put to us you are likely to get ten questions back. We don’t hesitate to tell you that you might be better off with another software vendor, because we have found it’s a lot more pleasant to deal with “delighted customers” than with “disgruntled former customers”.

Finally, if you have a concept you need to promote to others, we have a Video Production Unit that can put a good spin on any set of ideas you feel you need to communicate to your peers, top management or other stakeholders.

Video recording sessions with us tend to be painful. We make you take, re-take and re-take, until the end result “looks good” – we can afford to do this because you pay by the hour but you will like the end result. If you want an inexpensive promo, talk to a teenager who has an iPad. You might end up with an award-winning video. Ask the tooth fairy if I am giving out good advice here.

Enough said.

Bottom line, you will never know whether this is a “good deal” unless/until you pick up the phone and call our Managing Director, Karl Walter Keirstead. Call 450 458 5601.

Who is Karl Walter Keirstead?

Just Google the name and then do some homework before you make your call. It’s a lot easier to talk to someone when you know who they are.

Posted in Decision Making, Organizational Development, Software Acquisition, Strategic Planning, TECHNOLOGY

Healthcare – The chickens finally have come home to roost

Make your day by clicking on the link below and then reading this blog post.


If you feel healthcare in the USA is “too expensive” write to Rep. Michael Burgess (R-Texas), a physician who leads the House Energy and Commerce trade subcommittee and is drawing up a bill to enforce data sharing, and tell him he can have interoperability simply by taking some of the members of the “Electronic Health Records Association” to court.

My comments on the story were:


Who buys EMR software that is incapable of exporting its data? Who subscribes to a cloud EMR service that has no option for exporting the data?

And if you must acquire/subscribe to a system that charges ‘extra’ to unlock data export, why are doctors suffering sticker shock?

Is it because they bought a “car” without checking to see whether a motor/transmission was included and if not, how much extra?

Seems to me HIPAA says healthcare service providers are custodians of patient data. How can you be a custodian when you don’t have custody?

Thank goodness, at least one person has it right i.e. “Interoperability is what makes an EHR useful,” said Rep. Michael Burgess (R-Texas) – no surprise to see that Rep Burgess is an MD, not an IT person going into the marketplace to find software that “meets the needs” of clinical staff who have never been consulted re their needs.

And, if you are reading this and think that “interoperability” is “difficult” – think again.

My group builds software for healthcare, law enforcement, manufacturing, b2b.

Aside from healthcare, none of these other sectors could function without seamless interconnectivity. For this reason we built a Data Exchanger that lets any system talk to any other system.

For healthcare, we even built an e-hub that allowed 100+ clinics, all using different EMRs, to exchange data. It ran in pilot mode for about 12 months, consolidating more than 120,000,000 data elements without any significant hiccup.

Each time we found a new set of trading partners who could not use one of our “standard” data transport formats, we wrote a parser/formatter.

We found over time that the arrival of new formats slowed but, by that time, we had grown weary of writing parsers/formatters, so we developed a “sniffer” that could scan an incoming document, figure out pretty much on its own what was new/different, and greatly reduce the amount of custom programming needed.
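A hedged sketch of the “sniffer” idea – guess the transport format from simple structural cues and route to the matching parser; the cues here are illustrative, not our production logic:

```python
import csv
import io
import json

def sniff_and_parse(payload):
    """Guess the transport format of an incoming document and parse it."""
    text = payload.strip()
    if text.startswith("{") or text.startswith("["):
        return "json", json.loads(text)
    if text.startswith("<"):
        return "xml", text          # would be handed off to an XML parser
    rows = list(csv.reader(io.StringIO(text)))
    return "csv", rows

fmt, records = sniff_and_parse('[{"pt_nm": "Jane Doe"}]')
print(fmt, records)  # → json [{'pt_nm': 'Jane Doe'}]
```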

None of this was “rocket science”.

It may be time for a class action suit against the big “x” EMR vendors. No need to include Civerex in this pack; our EMRs have included import/export facilities since 1995.

My take . . . I think many of the players in this game deserve each other.

Do we really need an act of Congress to provide relief for victims of self-imposed stupidity?

As my grandmother used to say, “Well . . . I never”.


Posted in FIXING HEALTHCARE, Interoperability

Closing the Business Planning, Monitoring, Control Circle

As specialty practitioners continue to focus on specific operational methodologies and tools and argue about which are “best”, the fundamentals are that managing a business is about building, maintaining and enhancing competitive advantage.

Any organization that is not able to build/maintain competitive advantage will fade. And, any organization that is unable to enhance its competitive advantage runs the risk of being leapfrogged by disruptive technologies (e.g. fuel cell powered automobiles that can act as portable electrical generators during power outages).

It all starts with strategy formulation.

We all know strategy is typically developed in isolation and then “communicated” to operational units who are left to interpret the strategy. It’s very easy to identify organizations that have a good strategy with poor operational performance and there are many examples of organizations that are super-efficient at building products customers do not want.

The notion that “everything is a process” at the operational level is not sustainable in organizations where most of the staff members are knowledge workers. There are few end-to-end processes with convenient end point objectives in these organizations. Instead, we find “process fragments” that people, machines and software thread together at run time.

The range of operational methodologies/tools is such that no amount of buying “best in class” rankings will deter enterprising organizations from finding more innovative/cost effective “solutions” to their problems.

Buyers can “save time and spend” or “spend time and save”.

It’s a bad idea, actually, to wait for problems to be identified and to then only start looking for “solutions”.

Some capabilities need to be classified as “strategic corporate assets”, IT infrastructure being a prime example.

Picking an IT Infrastructure – one of the most important corporate decisions you will make.

If an organization subscribes to the notion that the objective of any business is to remain in business, it is important to pick an Information infrastructure that has the potential to address unanticipated future needs, problems and opportunities.

Easily said, not so easy to do.

But, failure to adopt technology that is “future-proof” results in a need to return to the marketplace for new technology in 2-3 years, long before the ROI for the old technology has had time to run its course.

No one understands this better than video production companies who, having recently gone to HD, are currently tripping over each other to upgrade to 4K.

Meanwhile, manufacturers are getting ready to roll out 8K cameras but, wait, Canon just announced a new sensor that is 60 times sharper than 1080p HD.


If an organization wants to be successful, it needs methodologies and tools that are capable of allowing the organization to close the gap between strategy and operations.

If you are a subscriber to my collection of rants, you will have seen comments on the different information architectural needs of transaction processing applications versus applications/environments needed to carry out strategic planning.

Transaction processing applications pull/push messages from/to engines that provide very specific services. A Case-based run-time environment with seamless interoperability can provide decision support, collect operational-level data, share data and build up a history of transactions, Case by Case. All you need in such an environment is the ability to establish a cursor position in an RDBMS and engage processing.
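As a rough illustration of “establish a cursor position and engage processing,” here is a minimal sketch using SQLite with an invented two-table schema (the table and column names are hypothetical, not Civerex’s actual design): the runtime positions on a Case and appends each processing step to that Case’s history.

```python
import sqlite3

# Hypothetical schema: one row per Case, plus an append-only history table
# that accumulates transactions Case by Case.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (case_id INTEGER PRIMARY KEY, patient TEXT)")
conn.execute("CREATE TABLE case_history (case_id INTEGER, step TEXT, ts TEXT)")
conn.execute("INSERT INTO cases VALUES (1, 'J. Doe')")

def engage(conn, case_id, step):
    """Position a cursor on a Case, then record one processing step."""
    cur = conn.execute("SELECT case_id FROM cases WHERE case_id = ?", (case_id,))
    if cur.fetchone() is None:
        raise KeyError(f"no such case: {case_id}")
    conn.execute(
        "INSERT INTO case_history VALUES (?, ?, datetime('now'))",
        (case_id, step),
    )

engage(conn, 1, "intake form completed")
engage(conn, 1, "triage assessment")
history = [row[0] for row in conn.execute(
    "SELECT step FROM case_history WHERE case_id = 1 ORDER BY rowid")]
```

The point of the sketch is only that the Case’s state lives in the database and each engine interaction is one more row in its history.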

Knowledgebase applications require a different IT architecture.

Here we have a need for graphic display of thousands (sometimes tens of thousands) of records, organized in multi-root hierarchical trees, with free-form search capabilities across all records. The records typically come from multiple Entities where, in some organizations, there is no duplication of data elements across Entities (e.g. resources, customers, products, competitors, policy/procedure/legislation).

The focus can be on a particular structured data element value, text, key words, even images, and the scope of any search is likely to extend to current and historical data as opposed to current data only.

The 2nd difference between Kbase application IT architectures and the usual structures found in traditional RDBMS applications is that in a Kbase you need to be able to put a simultaneous focus on all records, not just the “hits”.

Users are likely, some of the time at least, to be more interested in what a search did NOT find than in what it found (e.g. McDonalds today wants to put an outlet at a location where Wendys is; tomorrow they may want to put an outlet where Wendys is not).
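The McDonalds/Wendys point can be sketched in a few lines: a search over in-memory record stubs returns both the hits and their complement, so the user can flip between the two views. The location data below is invented for illustration.

```python
# Toy dataset: each location stub lists the brands already present there.
outlets = {
    "Main & 5th":    {"Wendys"},
    "Airport Rd":    {"McDonalds", "Wendys"},
    "Harbour Front": set(),
    "College Ave":   {"McDonalds"},
}

def search(records, brand):
    """Return (hits, misses): the misses are the 'what was NOT found' view."""
    hits = {loc for loc, brands in records.items() if brand in brands}
    misses = set(records) - hits
    return hits, misses

where_wendys_is, where_wendys_is_not = search(outlets, "Wendys")
```

Because all stubs are in memory, computing the complement costs nothing extra, which is exactly why a Kbase keeps simultaneous focus on all records rather than only the hits.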

The 3rd and final difference is that the “structure” of data in a Kbase is likely to change on-the-fly (e.g. I have records organized by City – following a search I may want to cluster some of these, temporarily or permanently, by type of business, keeping the City structure intact).

It’s easy to understand that in a Kbase application you need to load record stubs for all records into memory, carry out searches and then build your KMap from memory.

Different User mindset, different User Interface, different architecture.

See “It’s Time For You To Get Your Big Data Organized”


Stay tuned for more information on “Cases of Cases” in a subsequent blog post.

Posted in Case Management, Competitive Advantage, Decision Making, Operational Planning, Organizational Development, Strategic Planning

The potential of telemedicine for reducing the cost of healthcare and for improving quality of life

Telehealth has the potential to greatly reduce the cost of healthcare services delivery and greatly increase the quality of life.

In respect of returning military personnel, John Liebert, MD and William JJ Birnes, PhD, JD published in 2013 a book called “Wounded Minds” in which they highlighted the cost of inefficient traditional treatment approaches ($32.2 billion in annual expenditure for anxiety disorders alone).

These two authors state (page 254), in respect of the use of new suicide prevention initiatives, that “Technology aimed at augmenting therapy is another strategy, one designed to overcome some access to care issues in remote areas. Virtual reality and telemedicine are examples.”

Anxiety is just one area of medicine that can benefit from telehealth (others include substance abuse, depression, etc.).

It would, in my view, be a mistake to limit telehealth to the behavioral subset of conditions patients can present with, or to restrict it to improving care access in remote areas.

It’s my view we have hardly begun to scratch the surface here.

We need to remind ourselves that the approach to medicine as currently practiced (i.e. fixing problems) is far less efficient than encouraging lifestyles that help to prevent problems from developing (i.e. wellness).

We can use telehealth in the area of treatment planning/monitoring as well as in the area of promoting wellness, with the caveat that no single approach/methodology/technique should replace all others.

The increasing availability of medical devices in the field brings us back to telemetry, a technology that is absolutely pervasive in industry, with origins going back to the 19th century (data transmission between the Russian Tsar’s winter palace and army headquarters, developed in 1845).

My area of interest in healthcare is in continuity of care (i.e. doing the right things, the right way, using the right resources, at the right places and times).

The foundation for this is twofold:

a) there cannot be ten best ways to do something nor should there be only one.

b) healthcare resources are scarce so we need to make efficient use of these.

We can talk on and on about telehealth, but it is an area with many moving parts, and these all have to fit together smoothly and seamlessly if we are to make effective and efficient use of this important technology.

Civerex has been a pioneer in providing infrastructure for telehealth.

We had telehealth treatment (Tx) planning/monitoring software in place in the early 2000s for use in the treatment of anxiety disorders. Communication at the time was purely via telephone but, with call centers in one time zone, providers in another and patients in yet a third, it was important for the appointment-booking module used by call center staff to make sure that providers and patients would “meet” at the right time.
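The time-zone problem the booking module had to solve reduces to storing each appointment as a single instant and rendering it in each party’s local zone. A minimal sketch, with three illustrative zones (not the actual ones involved):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# One appointment stored once, as an unambiguous UTC instant.
appointment_utc = datetime(2024, 3, 15, 18, 0, tzinfo=ZoneInfo("UTC"))

# Each party sees the same instant rendered in their own local time.
parties = [("call centre", "America/Montreal"),
           ("provider",    "America/Chicago"),
           ("patient",     "America/Los_Angeles")]

local_times = {}
for party, zone in parties:
    local = appointment_utc.astimezone(ZoneInfo(zone))
    local_times[party] = f"{local:%H:%M %Z}"
```

Storing the instant once and converting at display time (rather than storing three local times) is what guarantees everyone “meets” at the same moment, even across daylight-saving transitions.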

Civerex’s current focus is to provide customers with efficient ways of recording 1:1 telehealth sessions and consolidating video/voice recordings at patient EHRs. We are looking to accommodate live video broadcasts of in-home sessions carried out by clinical staff so that senior staff back at clinics can tune into these broadcasts and provide real-time advice/assistance on the administration of treatment plan protocols.


About Civerex

The owners of Civerex developed in the late 1980’s a software product called RapidTox for the diagnosis of instances of poisoning. We were inspired by the work of Robert Driesbach, MD, who published the 1st edition of Handbook of Poisoning back in 1955.

The foundation of our work on RapidTox was a diagnostic algorithm that was able to identify candidate poisons on the basis of symptoms/signs. Selection of a poison gave the user a list of modalities (treatments that worked) with goals/objectives plus the ability to carry out differential diagnoses.

Our current suite of behavioral/medical software products continues to include the diagnostic algorithm, in addition to putting a focus on encouraging consistent use of “best practices” protocols via background orchestration, with accommodation for deviating from these as and when deemed appropriate/necessary, subject to governance. The two core methodologies we use are BPM (business process management) and ACM (adaptive case management).

Another area of interest is promotion of the concept that discharge planning should start with the first incoming phone call.

We have spent a lot of time on providing seamless interoperability by and between our products and local and remote 3rd party systems and applications and we promote for general use, a product called CiverExchange that addresses this need.

Any groups interested in collaborating with Civerex should contact us at 450 458 5601 to highlight areas of interest and be prepared to apply for research grants to fund any proposed initiatives that Civerex may agree to in respect of collaborative undertakings.

We are happy to provide “private label” versions of our software, loaded with content likely to be of interest to different communities of prospective users.


Posted in Adaptive Case Management, Business Process Management, Data Interoperability, FIXING HEALTHCARE, Interoperability, Telehealth, Video Production