How the EU’s GDPR (General Data Protection Regulation) impacts your business



GDPR (General Data Protection Regulation, EU 2016/679) took effect one week ago (May 25, 2018).

The Regulation and two related Directives, EU 2016/680 and EU 2016/681, deal with the processing of personal data relating to natural persons who are citizens of, or reside in, one or more of the 28 member states.

The first thing to note is that if you are a corporation headquartered outside the EU and you host any personal data relating to EU citizens/residents, you are also subject to EU Regulation 2016/679.

Article 4 of EU 679 defines “processing” to include “collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction”.

The only way to understand the far-reaching impact of the legislation is to read the legal texts – there is one overarching Regulation and two Directives (680 & 681), but each of the 28 member states has the right to publish its own versions of EU 2016/679, 680 and 681.

If you do the math, you get 3 legal texts x 28 states x 150 explanatory texts/articles, for a possible total of 12,600 notes/articles.

If you have operations in more than one EU state, your policy/procedure re personal data relating to natural persons may require adjustments.

Know and Understand the legislation

A graphic free-form-search knowledgebase where you can simultaneously view all texts/articles for a search phrase facilitates the task.

You can view the texts/articles this way (i.e. without a knowledge base),

[screenshot: raw legal text]

or this way (i.e. with a knowledge base),

[screenshot: knowledge base search view]

The problem for organizations running transaction-based software applications is that current systems are not likely to meet GDPR minimum requirements. There will be disclosures, and fines will be imposed.

If you do a risk assessment, or bring in a consultant to carry out a risk assessment, the opinion is likely to be that in the event of a disclosure you could receive a fine of up to 4% of your total worldwide revenue (Article 83).

A search at my EU 679/680/681 knowledgebase for “fine” highlights all “hits” down to the individual text/clause, allowing you quick/easy access to, in this case, Article 83.

[screenshot: knowledgebase search for “fine”, highlighting Article 83]

GDPR sets the bar at a new, higher level in the area of data protection for natural persons.

Be aware that “rule sets”, typically pervasive in transaction processing software, have great difficulty protecting personal “documents” (e.g. images, audio recordings, video recordings, documents, memo field entries, etc.).

For this reason, only share “documents” via a data exchanger that handles data sharing at transaction application process instance steps, using PCPs (process control points) that require manual intervention/clearance rather than across-the-board user permissioning.

Here are three (3) high-level recommendations re the design and configuration of transaction processing software systems to handle data relating to natural persons.

1.      Restrict sharing of data relating to natural persons.

Only share data relating to natural persons on a need-to-know basis – the more data you share, the greater the risk of deliberate or inadvertent data breaches.

2.      Maintain formal data repositories.

Your core transaction processing software systems need to feature consolidating Case Histories that post a system-generated date/timestamp, a user “signature”, and the data, as it was, on the form versions that were in service at the time (who, what, where, when) at any process step you commit. Simple “logs” where you see accesses, without precise tracking of what was accessed, when, by whom and from where, are not good enough.

Postings to data histories must be automated, as opposed to selective, and must include visits to all forms that host personal data, even if no data is recorded on these forms.

When a user leaves a process step form that has been visited and edited, the software suite must immediately lock down the new History posting.

Following this, no changes may be made to the new posting (or to any previous postings), except that authorized users can look up a form that is in the History, clone the form with its data, carry out edits on the form copy and then save the form template and data as a new session in the History (sketched below).
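
By way of illustration, here is a minimal Python sketch of an append-only Case History along the lines described above. The class and field names are mine (illustrative only, not taken from any particular product): postings carry a system-applied timestamp, a user “signature” and the form version, and corrections are made by cloning a posting’s data and committing the copy as a new session.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Dict, List


@dataclass(frozen=True)
class HistoryPosting:
    """One locked-down Case History session: who, what, where, when."""
    case_id: str
    step_id: str
    form_id: str
    form_version: str        # the form version in service at the time
    data: Dict[str, Any]     # data exactly as it was when the step was committed
    user_signature: str      # "who"
    recorded_at: datetime    # system-applied date/timestamp, never user-supplied


class CaseHistory:
    """Append-only repository: postings can be added and read, never edited."""

    def __init__(self) -> None:
        self._postings: List[HistoryPosting] = []

    def commit_step(self, case_id: str, step_id: str, form_id: str,
                    form_version: str, data: Dict[str, Any], user: str) -> HistoryPosting:
        posting = HistoryPosting(case_id, step_id, form_id, form_version,
                                 dict(data),                      # defensive copy
                                 user, datetime.now(timezone.utc))
        self._postings.append(posting)                            # locked down on commit
        return posting

    def clone_for_edit(self, posting: HistoryPosting) -> Dict[str, Any]:
        """Corrections never touch an existing posting: clone its data, edit the
        copy, then commit the copy as a new session via commit_step()."""
        return dict(posting.data)

    def postings_for(self, case_id: str) -> List[HistoryPosting]:
        return [p for p in self._postings if p.case_id == case_id]
```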

3.      Strictly control access to stored personal data

All access to personal data for purposes of viewing or data sharing should be via “official” interfaces only (i.e. direct log in, portal access, data exchanger).

a)      Data being directly entered into the system by logged-in ordinary users should be controlled by user name and password, with permissions down to the form level, i.e. a newly-added user should have no access to any record in the system. Access to the records that contain data relating to natural persons should be granted by “role”. Some records (e.g. ‘VIP’ records) should be blocked from access/viewing by other than specifically designated users. VIP records should be excluded from searches, from directory listings and from Reports that list data across records.

b)     Casual and external users should only be able to access data relating to natural persons via a portal login using a user name and password, preferably with two-factor authentication.

One possible configuration for portals is to use a portal-facing IIS server that pushes out a user-specific “menu of services” itemizing authorized service requests. The content of the “menu of services” should be role-based (i.e. the only people who get to see the data for a database record are those actively involved in processing that data). Clicking on a portal user menu line-item invokes a database engine at the portal-facing server. The database engine alone logs into the back-end server, retrieves the requested forms/data and posts the form(s) and data to the user in-tray at the portal.

c)      Batch data streams from/to local or remote 3rd party systems and applications should only post to a standalone data exchanger that accommodates a mapping of need-to-know data elements per subscriber (typically a local or remote system). Parsers and formatters are the responsibility of subscribers and publishers, and data streams must be encrypted for transport. The data exchanger should accommodate rule sets that operate on incoming data (point-of-origin checks, range checks, boilerplate pick-list lookup verification) and tag problematic incoming data for manual clearance, as sketched below.
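
As a sketch of the incoming-data rule checks described in (c), here is some illustrative Python. The trusted origins, pick list and field names are assumptions for the example; the point is that records failing any check are tagged for manual clearance rather than forwarded automatically.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Iterable, List, Optional

# Each rule returns an error message when the record fails the check, else None.
Rule = Callable[[Dict[str, Any]], Optional[str]]

TRUSTED_ORIGINS = {"erp.example.com", "crm.example.com"}   # assumed publisher endpoints
COUNTRY_PICKLIST = {"FR", "DE", "IE", "NL"}                # assumed boilerplate pick list


def origin_check(record: Dict[str, Any]) -> Optional[str]:
    return None if record.get("origin") in TRUSTED_ORIGINS else "unknown point of origin"


def age_range_check(record: Dict[str, Any]) -> Optional[str]:
    age = record.get("age")
    return None if isinstance(age, int) and 0 <= age <= 120 else "age out of range"


def country_picklist_check(record: Dict[str, Any]) -> Optional[str]:
    return None if record.get("country") in COUNTRY_PICKLIST else "country not in pick list"


@dataclass
class ScreenedRecord:
    record: Dict[str, Any]
    problems: List[str]

    @property
    def needs_manual_clearance(self) -> bool:
        return bool(self.problems)


def screen_incoming(record: Dict[str, Any],
                    rules: Iterable[Rule] = (origin_check, age_range_check, country_picklist_check),
                    ) -> ScreenedRecord:
    """Apply each rule; tag problematic incoming data for manual clearance."""
    problems = [msg for rule in rules if (msg := rule(record)) is not None]
    return ScreenedRecord(record=record, problems=problems)
```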

Bottom line, if your current transaction processing systems/apps do not restrict sharing on a strict need-to-know basis, if your data repositories do not build/maintain formal Histories, or if they allow access to natural persons’ data via other than official interfaces, you need to replace, re-engineer or adapt your systems/apps.

Many corporations view such undertakings as requiring a huge amount of capital/time/disruption.

They fail to realize that task execution/tracking (the major source of natural person data) can be carried out under a Case platform that can link to existing systems/apps via a generic data exchanger, giving the corporation improved control over workflow/workload, with the option of later adapting/re-engineering/replacing legacy systems/apps.

Here is a list of caveats in respect of sharing data relating to natural persons.

  • Avoid e-mail systems (too easy to inadvertently reference the wrong addressee, proliferation of storage of personal data).
  • Avoid cloud document-sharing facilities (better to have everything relating to a dbms record “at” the dbms record).
  • Avoid use of APIs or RPA apps that build transaction logs complete with personal data (i.e. determine where these apps are, who has access to them and what the retention strategy of each app is) – your Case Histories are your permanent record of each transaction.

kwkeirstead

https://kwkeirstead.wordpress.com


How Low Can You Go? –  Part IV – Essentials for building, sustaining & improving corporate competitive advantage.



This article attempts to bridge the gap between operational methods such as ACM/BPM/CPM/RALB and FOMM and methods used to evolve business strategy. The mission of all strategic and operational initiatives is to build, sustain and augment competitive advantage.

See “Theories of the Firm – Expanding RBV using 3D free-form search Kbases” at https://wp.me/pzzpB-Ms

RBV (Resource-Based View) allows organizations to assign priorities to promising initiatives that are competing for scarce resources. Proper implementation, in theory, leads to increased competitive advantage.

The mechanism for bridging the gap between an initiative and its practical implementation within, typically, a Case, is to have operations prepare an ROI submission, citing benefits and objectives. The latter can be parked at a Case in a Figure of Merit Matrix.  Periodic assessment of progress toward meeting the objectives of the Case with regard for the ROI really is all you need to bridge the gap.

Things often don’t go this way because of the bizarre habit of some corporations of authorizing initiatives and then not bothering to carry out periodic monitoring/auditing on progress.

In extreme cases, the strategy is changed yet the initiative continues.  The practical equivalent of this in government is to have an agency in charge of taking care of monuments that no longer exist.

“Must haves” for bridging the gap between operations and strategy/between strategy and operations include

  • RBV
  • Case w/FOMM

Note, from the “Theories of the Firm . . .” article, that for large corporations RBV does not work very well if you don’t have access to 3D free-form-search knowledgebases.

For me, all of this boils down to “. . . you cannot manage what you cannot see”.

Photo by: Anneli Salo


How Low Can You Go? – Part III – Case Management Essentials


This article explores the contribution of three (3) methodologies to the effectiveness of work.


The three methodologies are:

  • ACM (Adaptive Case Management)
  • RALB (Resource Allocation, Leveling and Balancing)
  • FOMM (Figure of Merit Matrices)

ACM (Adaptive Case Management)

ACM builds on BPM in the area of work performance by providing a run-time platform for the management of  “work” (i.e. management of workflow and workload).

Here, the logical strategy is to have background BPM at Cases provide orchestration for the performance of process steps or tasks, with the option for various actors to deviate from what might otherwise end up being a rigid sequence of tasks.

“Must-haves” for ACM include giving users a workspace or platform that consists of nothing more at the primary user interface than a split screen with a user “InTray” on one side and a traditional Calendar on the other.

The Calendar hosts fixed-time tasks.

The InTray hosts floating-time tasks that post as and when process steps become “current” along BPM process pathways. InTray tasks post automatically due to user skill attribute tagging at process steps, plan-side.

Context/situation-appropriate data information display/data collection Forms similarly post automatically as a result of plan-side encoding of Forms at process steps.

Run-time efficiency starts with posting of each “current” step to the attention of all users with the right skill classification. The first to “take” a task causes the task to lock down (the user becomes the exclusive owner of that task for editing purposes). Other users with the same skill classification have read-access only.
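
A minimal Python sketch of the “first to take it owns it” behaviour described above; the class and method names are illustrative, not from any specific product.

```python
import threading
from typing import Dict, Optional, Set


class TaskBoard:
    """Posts each 'current' step to all users with the matching skill classification;
    the first user to take a task becomes its exclusive owner, others get read-only access."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._owners: Dict[str, str] = {}           # task_id -> owning user
        self._intrays: Dict[str, Set[str]] = {}     # user -> task_ids visible in the InTray

    def post_current_step(self, task_id: str, users_with_skill: Set[str]) -> None:
        for user in users_with_skill:
            self._intrays.setdefault(user, set()).add(task_id)

    def take(self, task_id: str, user: str) -> bool:
        """Return True if this user acquired exclusive edit ownership of the task."""
        with self._lock:
            if task_id in self._owners:
                return False                        # already taken: read-only for this user
            self._owners[task_id] = user
            return True

    def owner(self, task_id: str) -> Optional[str]:
        return self._owners.get(task_id)
```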

In the normal course of events, most users are likely to be working on several projects (i.e. Cases) at any given time. Accordingly, the run-time platform must be capable of accommodating multi-tier scheduling which leads us to a discussion of a 3rd methodology called RALB (Resource Allocation Leveling and Balancing).

Bottom line, re ACM: it removes task-performance rigidity and accommodates fine-grained management of tasks via RALB.

ACM is nothing more than a workspace.

Technically it is a cursor position at an RDBMS (relational database management system),  which means an organization can have as many Cases as needed (i.e. 100; 1,000; more than 1,000).

Case is the place where users practice ACM.

Unlike BPM, Case/ACM has only a run-time side. There is no plan side to Case: although a Case can be set up as a clone of another Case, for all intents and purposes each Case is likely to end up unique by the time it is closed by its Case Manager.

“Must-haves” for Case/ACM include a workspace per user plus a Case Log where each intervention captures data, as it was, at the time it was recorded, on the form versions that were in service at the time, with a system-applied date/timestamp and user “signature”.

ACM also requires the ability at a Case to auto-export data to local and remote systems and applications and to auto-import data from these. ACM handles who, what, how, where plus when.

RALB (Resource Allocation, Leveling and Balancing)

RALB impacts efficiency at Cases both at the individual user level as well as efficiency across users.

You may be surprised to learn that all of us basically work the same way – we come into our places of work and immediately take note of fixed-time commitments (e.g. meetings, etc.). We take note of time intervals between commitments and make decisions regarding tasks to initiate, advance or complete.

If the time between one meeting and the next is long, a user is likely to focus on advancing the state of one large task. Otherwise, the user may try to complete several small tasks.

For these reasons, users need to be able to micro-schedule tasks. There are, of course, exceptions, one being “breakfast meds” in healthcare that reasonably cannot be deferred from their usual schedule.

Bottom line, users want/need to be able to adjust the timing of tasks, including re-scheduling of certain tasks to another day.

Supervisors also need to be able to adjust the timing of their users’ tasks and their own tasks on the basis of changing customer priorities, sometimes removing tasks from one user and assigning these to other users.

FOMM (Figure of Merit Matrices)

FOMM impacts the effectiveness of work at Cases.

FOMM was invented, it seems, by the RAND Corporation – I recall an article dealing with non-subjective decision-making relating to the range, accuracy and payload of ballistic missiles.

Our adaptation of FOMM has been to provide a means of non-subjective assessment of progress toward meeting Case-level objectives/goals.

The value of FOMM at Cases is easily explained – most Cases have multiple objectives. Some objectives are more important than others. The essential contribution of FOMM is to “weight” objectives and calculate progress toward Case completion.

Progress assessments at Cases are subject to “S” curve behavior where progress is characteristically slow at the onset, followed by rapid progress, only to slow down typically at the 90% complete stage.

Anyone who works with once-through projects using CPM is familiar with “S” curves. The usual project control strategy is to calculate the critical path every few days and shift to a “punch list” once the project is at the 90% complete stage.

“Must-haves” for FOMM are a means of structuring objectives/goals, assigning weights to objectives and making available facilities for recording incremental progress.

All of this can be provided by common spreadsheets and since Cases can accommodate any type of data, FOMM spreadsheets are easily accommodated at Cases themselves, making them easy to access.
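
For concreteness, here is a small Python sketch of a FOMM-style weighted-progress calculation of the kind such a spreadsheet might carry; the objectives, weights and progress figures are invented for the example.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Objective:
    name: str
    weight: float       # relative importance of the objective
    progress: float     # 0.0 .. 1.0, recorded incrementally at the Case


def fomm_progress(objectives: List[Objective]) -> float:
    """Weighted progress toward Case completion, as a fraction of 1.0."""
    total_weight = sum(o.weight for o in objectives)
    if total_weight == 0:
        return 0.0
    return sum(o.weight * o.progress for o in objectives) / total_weight


# Example: three objectives of unequal importance.
case_objectives = [
    Objective("Reduce intake time", weight=5, progress=0.9),
    Objective("Improve data quality", weight=3, progress=0.5),
    Objective("Train staff", weight=2, progress=0.2),
]
print(f"Case completion: {fomm_progress(case_objectives):.0%}")   # -> Case completion: 64%
```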

Related articles:

How Low Can You Go? – Part I

How Low Can You Go? – Part II – BPM essentials

 How Low Can You Go? – Part III – Case Management essentials

How Low Can You Go? –  Part IV – Essentials for building, sustaining & improving corporate competitive advantage.

Photo by: Anneli Salo

How Low Can You Go? – Part II – BPM Essentials


This article explores the contribution of a methodology called BPM (Business Process Management) to the “management of work” and lists essential features for background BPM services at Cases.

BPM is core to workflow. It also contributes, to an extent, to workload management. It contributes directly to efficiency and contributes indirectly to effectiveness.

In the overall scheme of things, BPM deals with what, how, who, and where.

It’s one thing to embrace a methodology and another to make good use of that methodology, so let’s go over the essentials or “must-haves” for BPM.

Your takeaway will hopefully be that you will see BPM as core to the planning and management of work, but relatively easy to implement.

BPM has a plan-side dimension (process mapping capability) and a run-time-side dimension (background orchestration capability at Cases).

One BPM “must-have” is the ability to document “as-is” processes.

This is best accomplished within a graphic space where users are able to put down tasks or process steps and interconnect these with directional arrows.

Extending an “as-is” process map to a “to-be” process map requires the ability to re-arrange and extend process steps (e.g. easy insertion of steps between existing steps, easy disconnecting and re-connecting of directional arcs between steps).

The transition from a “to-be” process map to a run-time “best practice” process is typically automatic following several process improvement iterations.

I will detail below four (4) “must-have” features of run-time BPM out of a total list of 13.

See . . . .

“Success Factors with BPM ”

https://wp.me/pzzpB-JH

1 BPM Compilers

Auto-generation of a run-time process template is the first BPM “must-have” feature for the simple reason that different steps must be performed by different people with different skill sets.

A BPM compiler solves the problem by carving up process maps into discrete steps for posting to the InTrays of the right people at the right time.
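
Here is a hedged Python sketch of what such a compiler does conceptually: it walks the plan-side map (steps plus directional arrows) and produces run-time step records, each carrying the skill tag and form needed to post it to the right InTrays at the right time. The data structures are illustrative only.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class MapStep:
    step_id: str
    skill: str        # skill classification tagged at the step, plan-side
    form_id: str      # data-collection form encoded at the step, plan-side


@dataclass
class ProcessMap:
    steps: Dict[str, MapStep]
    arrows: List[Tuple[str, str]]     # (from_step_id, to_step_id) directional arcs


@dataclass
class RuntimeStep:
    step_id: str
    skill: str
    form_id: str
    predecessors: List[str] = field(default_factory=list)
    successors: List[str] = field(default_factory=list)


def compile_map(pm: ProcessMap) -> Dict[str, RuntimeStep]:
    """'Carve up' the graphic map into discrete run-time steps that know their
    predecessors/successors, skill tag and form."""
    template = {sid: RuntimeStep(sid, s.skill, s.form_id) for sid, s in pm.steps.items()}
    for src, dst in pm.arrows:
        template[src].successors.append(dst)
        template[dst].predecessors.append(src)
    return template


def initially_current(template: Dict[str, RuntimeStep]) -> List[str]:
    """Steps with no predecessors become 'current' as soon as a Case streams onto the template."""
    return [sid for sid, step in template.items() if not step.predecessors]
```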

2 Run-time BPM (branching decision box construct)

A second “must-have” feature is the ability along a process pathway (process map template instance) to engage processing along sub-pathways. The required construct for this is a “branching decision box”, roughly equivalent to a fork in the road except that in an electronic environment two or more sub-pathways, or all sub-pathways, can be contemporaneously engaged.

Two types of branching decision boxes are needed: one manual, where the user selects from available options; the other automated, where the software system makes selections from available options based on data present at each decision box.
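
A small Python sketch of the two constructs, assuming guards expressed as predicates over Case data (field names invented for the example); note that the automated box may engage several sub-pathways at once.

```python
from typing import Any, Callable, Dict, List, Tuple

# Each sub-pathway is guarded by a predicate over data present at the decision box.
Guard = Callable[[Dict[str, Any]], bool]


def automated_branch(case_data: Dict[str, Any],
                     sub_pathways: List[Tuple[str, Guard]]) -> List[str]:
    """Automated decision box: engage every sub-pathway whose guard evaluates TRUE.
    Unlike a fork in the road, two or more (or all) sub-pathways may be engaged."""
    return [pathway_id for pathway_id, guard in sub_pathways if guard(case_data)]


def manual_branch(offered: List[str], user_selection: List[str]) -> List[str]:
    """Manual decision box: the user selects from the available options."""
    return [p for p in user_selection if p in offered]


# Example with invented guards and field names:
engaged = automated_branch(
    {"order_total": 12000, "customer_tier": "gold"},
    [
        ("credit_review", lambda d: d["order_total"] > 10000),
        ("loyalty_bonus", lambda d: d["customer_tier"] == "gold"),
        ("standard_path", lambda d: True),
    ],
)
# engaged -> ['credit_review', 'loyalty_bonus', 'standard_path']
```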

3 Run-time BPM (loopback construct)

A third “must-have” feature is a “loopback”. This allows a portion of a workflow to be processed two or more times (e.g. call a telephone number and connect or try later up to, say, three times and then take some alternative action if communication has not been established).
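
A minimal Python sketch of the loopback idea, using the telephone example above; dial() is an assumed callable supplied by the caller.

```python
import time
from typing import Callable


def call_with_loopback(dial: Callable[[str], bool], number: str,
                       max_attempts: int = 3, retry_delay_seconds: float = 0.0) -> str:
    """Loopback construct: re-run a portion of the workflow (the call attempt)
    up to max_attempts times, then take an alternative action."""
    for attempt in range(1, max_attempts + 1):
        if dial(number):                       # dial() returns True on a successful connection
            return f"connected on attempt {attempt}"
        time.sleep(retry_delay_seconds)        # wait before looping back
    return "escalate: take alternative action" # loopback exhausted without success
```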

4 Run-time BPM (link to pathway construct)

The fourth “must have” feature is “link to pathway”.

In the absence of “link to pathway”, BPM practitioners have no way to allow managers, staff or robots to thread together “process fragments”. A process fragment is any non-end-to-end sequence of steps that does not have a definable objective at the “last” step in that sequence.

To summarize, organizations need BPM to help plan and manage workflow, manage workload and achieve operational efficiency.

This leaves operational effectiveness as a topic for a subsequent article.

Related articles:

How Low Can You Go? – Part I

How Low Can You Go? – Part II – BPM  essentials

 How Low Can You Go? – Part III – Case Management essentials

How Low Can You Go? –  Part IV – Essentials for Building, sustaining & improving corporate competitive advantage.

Photo by: Anneli Salo

How Low Can You Go?  – Part I


I’ve been following/participating in various recent BPM.com discussions relating to no-code/low-code.

Lots of posturing by vendors and product champions re run-time platforms they are promoting or using, as well as posturing re process mapping environments they are promoting/using.

Clearly, if you are a software vendor whose software suite requires a lot of coding, you are not going to jump onto a low-code/no-code bandwagon. Your pitch is going to be “coding gives flexibility”.

If you are a vendor of a no-code/low-code software suite you are going to focus on attracting customers who can live with whatever limitations come with your environment. Your pitch is going to be  . . . “Look, Ma, no hands!”

Bottom line, the world of BPM divides into two groups, both happy with BPM, both much better off than those who have not yet “discovered” BPM.

Problems and fails arise when a corporation picks a no-code/low-code environment and later identifies “must-have” needs that the environment cannot handle.

And problems and fails arise when a corporation picks a “high-code” environment but finds that it is unable to manage the environment.

The question becomes “How low can you go?” (i.e. what are the minimum needs for practicing BPM?)

We will explore, in upcoming blog posts, what most corporations need in the area of BPM process mapping, followed by Case Manager/knowledge worker needs in the area of ACM (Adaptive Case Management) and, finally, what corporate planners and corporate governance boards need for building, sustaining and improving corporate competitive advantage.

Clearly we want BPM to support ACM, and we want ACM to contribute to improving competitive advantage.

Don’t look for a direct link between operational initiatives management and corporate competitive advantage building, sustaining and improvement – there is none.

How Low Can You Go?  – Part II – BPM essentials

How Low Can You Go?  – Part III – Case Management essentials

How Low Can You Go?  – Part IV – Essentials for building, sustaining & improving corporate competitive advantage.

Photo by: Anneli Salo

Work, Workflow, Workload and BPM



Work, as we know it today, is about to change.

Most of us will soon have automobiles that drive themselves. 3-D printing already allows assemblies to be built that are not assemblies. AI seems to be making a comeback and we’re starting to see machines capable of self-diagnosis and preventive maintenance. In the area of healthcare, science and technology are causing a shift away from fixing problems toward preventing problems. The list is by no means exhaustive.

Work itself is not going away but we will see significant changes in the balance of work done by people, software, machines and robots.

Let’s take a look at “work”.

Our first observation is that work takes time and costs money. This tells us that work needs to be purposeful. In business, “purposeful” means that any work done sustains or improves competitive advantage.

We know, in general, what to do.

“How” brings us to “best practices”. What are “best practices”, anyway?

Fundamentally, the ways you do things currently are your “best practices”.

If you haven’t paid much attention to improving your “best practices” this means you can achieve significant gains in efficiency and possibly effectiveness by moving to “better” practices. Unless, of course, you don’t care about efficiency or effectiveness, in which case your “best practices” will remain as they are, whilst competitors rush past or leapfrog you.

If you have paid a lot of attention to improving your “best practices”, your best practices will be better than those of your competitors.  This is a good thing in one sense, but a not-so-good side effect is that you may be at a stage of diminishing returns or at a stage where small improvements can be highly disruptive.

Now, given this article is about BPM, where does BPM (Business Process Management) fit into all of this?

Simply stated, BPM is a business mindset and a methodology that allows organizations to improve efficiency and, to an extent, effectiveness.

BPM allows you to map out your practices, transition these to “best practices” and, with the help of a few other concepts/methods (i.e. Case, plus Resource Allocation, Leveling and Balancing or RALB, plus Figure of Merit Matrices or FOMM), makes it easy for people, software, machines and robots to make consistent good use of best practices.

Fine, but how?

“How” is the hard part . . . .

A good starting point is to point out that publishing a paper process diagram with the expectation that people will improve efficiency and effectiveness by staring at the map will not work.

You need, first of all, a graphic canvas on which you map out each process; a compiler that can carve up your maps into steps for automatic posting to people, software, machines and robots as steps become current along processes; and a run-time platform (i.e. Case) capable of hosting compiled BPM processes (i.e. process templates) in order to provide orchestration and governance in respect of the performance of work.

The platform itself must also be capable of providing Case-level background governance.

Governance in the form of rules at BPM steps and governance in the form of rules at the Case-level are essential to prevent extreme, unwanted, excursions away from best practices, given that actors at Cases are basically free to do what they like, how they like, and when they like.

Fortunately, all Cases have Case Managers who have the last say (i.e. Cases only get closed by Case Managers).

The right mix of orchestration and governance can have a highly positive impact on efficiency (i.e. doing the right things, the right way, using the right resources).

This leaves two additional dimensions to work that need discussion.

The first of these is the ability to do things at the right time and our only hope here is to carry out data mining plus data analysis to get to where we have a better idea of which sub-pathways users are likely to take and what forward task timings are likely to be as Case Managers focus on achieving Case objectives.

Lastly, we need a way to non-subjectively assess progress toward meeting Case objectives and here, the best we have in the way of methods is FOMM (Figure of Merit Matrices).

FOMM is all important in that it allows Case Managers to focus on efficiency and effectiveness. It pays not to confuse these terms.  You can be inefficient, yet effective. You can be efficient, but not effective. But, if you are ineffective, it does not matter whether you are efficient or inefficient.

Read this material over a few times and you will have the foundation for increasing efficiency through process improvement  and increasing effectiveness through good Case Management.

When you have a sustained focus on efficiency and effectiveness, outcomes improve.

The math goes like this:

Case + BPM + RALB + FOMM -> Better Outcomes

 

UPDATE

There are four dimensions to “work”.

Dimension       Workflow    Workload    Efficiency    Effectiveness
Methodology     BPM         BPM/RALB    BPM/ACM       ACM/FOMM

where

BPM      Business Process Management

RALB      Resource Allocation, Leveling and Balancing

ACM      Adaptive Case Management

FOMM  Figure of Merit Matrices

Notice the contribution of BPM at three of the four dimensions of “work”.

Note the difference between workflow (i.e. the sequencing of tasks) compared to workload management (i.e. the allocation, leveling and balancing of scarce resources to tasks).

Note also the difference between efficiency compared to effectiveness.  The domain of efficiency is the perceived best use of resources in the performance of work whereas the domain of effectiveness is the result of the performance of work.

Organizations can be efficient and effective. They can be somewhat efficient yet effective but it is a matter of little consequence whether they are efficient or inefficient if they are ineffective.


Process Improvement – Under the Hood


Here’s the deal . . .

You are in a corporation that

1) “thinks” BPM,

2) has mapped processes,

3) has a run-time workflow/workload Case platform, where Cases have declared objectives,

4) has non-subjective means of assessing progress toward meeting Case objectives,

5) has a compiler that is capable of carving up your process maps into run-time BPM templates,

6) has a way to stream Cases onto BPM templates to generate private instances of said templates for each Case,

7) has good Case Managers.

All good, except that essential to requisite #3 (the run-time workflow/workload Case platform) is the ability to auto-build a Case History.

Each user log-in that augments data needs to result in auto-recording of the “session” by way of a system-applied date/timestamp, complete with a user “signature”, plus all data, as it was, at the time it was entered, on the form versions that were in service at the time.

Once in, the platform must not allow any changes to the data. Errors/omissions and late data are handled by loading/posting copies of Forms, allowing edits to these Forms with new “session” recordings.

Considering that not all data needed at a Case can be recorded precisely at the time the data becomes known to a user, all Forms at process steps (structured or ad hoc) must accommodate a reference date-of-capture, which can precede the Case History session date by hours, days, even weeks.
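
A short Python sketch of such a session record, assuming illustrative field names: the timestamp is always system-applied, while the reference date-of-capture travels with the data and may be earlier.

```python
from dataclasses import dataclass
from datetime import date, datetime, timezone
from typing import Any, Dict


@dataclass(frozen=True)
class HistorySession:
    form_id: str
    form_version: str
    data: Dict[str, Any]
    user_signature: str
    recorded_at: datetime       # system-applied, never user-supplied
    date_of_capture: date       # reference date; may precede recorded_at by days or weeks


def record_session(form_id: str, form_version: str, data: Dict[str, Any],
                   user: str, date_of_capture: date) -> HistorySession:
    """Late data is still recorded 'now', but carries the earlier date-of-capture."""
    return HistorySession(form_id, form_version, dict(data), user,
                          datetime.now(timezone.utc), date_of_capture)
```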

The cardinal rule in some industry sectors is that data not in the system “does not exist” – the interpretation of this rule is that protocol requires users to visit the Case History prior to taking any current decisions/actions. If the data is not in the Hx, there is a good chance that the decision will be made only on the basis of what is in the Case History. Who knew what, when is all-important in many Case audits.

So, given a Case History, how do you now go about improving decision-making at Cases and dynamically improving your inventory of processes that run across Cases?

First comes data analytics.

Unless you are trying to use crowd facial recognition to post big screen notices to individual buyers at shopping malls re Internet searches they did last night, data analytics for improved dynamic decision making is not complicated.

A small change at branching decision boxes along workflows allows analytics to provide a hint to the user as to which branching options have been the most popular (i.e. they went that way 60% of the time).

Clearly, your data sampling size must be sufficient and you may need/want to filter your statistics according to a timeframe, especially for initiatives that anticipate different seasonal outcomes or initiatives where legislation may have changed recently.
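
A hedged Python sketch of that kind of hint, with an invented event-log shape: it computes each branch’s share of traffic at one decision box, optionally filtered by date, and stays silent when the sample is too small.

```python
from collections import Counter
from datetime import date
from typing import Dict, List, Optional, Tuple

# Each event: (decision_box_id, branch_taken, date_taken) -- illustrative shape.
BranchEvent = Tuple[str, str, date]


def branch_hints(events: List[BranchEvent], decision_box_id: str,
                 since: Optional[date] = None, min_sample: int = 30) -> Dict[str, float]:
    """Return {branch: share_of_traffic} for one decision box, e.g. {'B': 0.6, 'A': 0.4},
    so the UI can hint 'they went that way 60% of the time'.
    Returns {} when the sample is too small to be meaningful."""
    taken = [branch for box, branch, when in events
             if box == decision_box_id and (since is None or when >= since)]
    if len(taken) < min_sample:
        return {}
    counts = Counter(taken)
    return {branch: n / len(taken) for branch, n in counts.most_common()}
```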

As for dynamic process improvement, the best approach I have been able to think of is to post the process map and then overlay it to show skips, jumps and loopbacks, with stats where possible.

Ad hoc interventions should be noted as well, particularly in terms of their timing (e.g.  each time step #1024 is skipped, we observe that a specific ad hoc intervention is inserted, possibly giving a good indication that the map needs to be updated to show the ad hoc intervention in lieu of the skipped step).

 

 

 

 


Take it to the limit with BPM



The transition BPM aficionados are going through today, moving from neat-looking structured workflows (end-to-end processes) to arbitrary clusters of small workflows (process fragments) and pure ad hoc interventions, is proving to be a difficult one.

It all begins with the discovery that once you embrace process fragments, there is no convenient “process end node”.

 

The result is you no longer have, plan side, a place to park process objective(s).

Like it or not, objectives need to move from plan-side to run-time side.

Fortunately, someone/somewhere, figured out that in order to be able to thread together process fragments you need a workspace.

They figured “Case” was the right place to host background BPM (workflow), the right place to host three-tier scheduling (workload), and the right place to position algorithms for assessing progress toward meeting Case objectives (i.e. measures).

As indicated above, Case is a workspace. If you are modeling, it’s a sandbox. Technically it’s just a cursor position in an RDBMS.

If you can navigate to a Case, you can get at everything relating to that Case and this includes structured data (entryfield, checkbox, radio button, list, grid, etc.) plus unstructured data (memo, image, and video/audio). The secret sauce is to put everything IN the Case, not all over the place with “links” to objects.

Objectives are easily accommodated at Cases.

Sold on Case?  If yes, then let’s do a stress test.

The thing is, I have been telling folks for years that Case lets you manage a mix of structured/unstructured data and that the mix can be 5%/95% or 95%/5%, but can we go to 0%/100% or 100%/0%?

And, if we can go to 0/100% structured/unstructured, are we still practicing BPM?

I say, yes, today.

The reason it has taken 8 years at this blog to “take it to the limit” is that we only recently started to see, in the real world, Cases where BPM can continue to be core in the absence of any workflows (i.e. all of the process fragments are “processes” of one step).

You can visualize this by taking any process you may have and proceeding to delete all of the directional arrows.

Such is the starting scenario for the electronic version of Practical Homicide Investigation – Checklist and Field Guide (e-Checklist), where we are taking on the task of “improving investigative outcomes” using “smart software” at smartphones and tablets.

Facts are, the “smartphones/tablets” really aren’t that “smart” in this app.

It’s the investigators who are smart and these folks are more than capable of deciding, across the crime scene investigation team, who should do what, when, how, and why.

Except that we cannot have two different accounts from the same witness re the same event, and we cannot have support staff putting down chalk lines and markers before the crime scene photographer has done his/her initial walkabout.

The command and control center laptop (e-hub) actually has a lot of “smarts” that guide workflow and help to reduce errors and omissions.

If one investigator ‘takes’ a checklist (i.e. a best practice protocol), the software shows the checklist as busy but grants read-only access to others.

The fact that the “system” accommodates near real-time data consolidation means that dynamic decision making at crime scenes is enhanced.  Parking a generic data exchanger near the e-hub allows an investigator to send a facial image to a remote facial recognition database and get back “hits”.

In the area of evidence-gathering, it helps to have ready access to protocols for finding, collecting, bagging and tagging evidence, because there are more than 60 evidence protocols and any errors or omissions routinely result in the evidence getting thrown out of court.

Take a look . . .

I won’t go on, because most of you are not into homicide investigations, so your task for today is to try to identify references to BPM in the 10-minute overview to “e-Checklist”.

You won’t find any.

However, BPM is absolutely core to this app, and police departments, as they adapt e-Checklist to filter out checklists that are not relevant to a particular crime scene case (e.g. the victim is an adult, so there is no need to consult the “Sudden Infant Death Syndrome” checklist), will gradually put in place process fragments of more than one step each and will evolve rules for preserving, for example, “chains of custody” for evidence.

Another thing you need to know is that “e-Checklist” (an encapsulation of 1,500,000 lines of complex computer code across four interconnected modules) required little more than minor changes in terminology – the workflows were built with the standard version of our process mapping environment; the Case dbms is standard; the checklists were built with the standard CiverIncident Form Painter; the Portal forms that post at smartphones/tablets are pushouts of back-end server forms; and the Generic Data Exchanger, as you might have guessed, did not need other than a few minor enhancements. So what you have here is a low-code BPM app where, nominally, 100% of the workflow is unstructured.

Did we take BPM to the limit in this app?

Let us know if you agree. Let us know if you disagree.

“Take it to the Limit”, The Eagles, “Live at the Summit: Houston, 1976”


KPPs and BPM



You are probably familiar with KPIs (Key Performance Indicators), but this article is about KPPs (Key Process Points).

KPPs are wait-state auto-commit steps along BPM pathways.

They have predecessors and successors like all other BPM steps. They become current when the last of their immediate predecessors has been committed and their immediate successors become current at the time KPPs commit.

The performing resource is “System” and KPPs post to user System’s InTray.

KPPs have Forms, same as all other BPM steps, with, in the case of KPP steps, a mandatory Form Rule that reads data and “fires” when the rule evaluates to TRUE.

For performance reasons, it is important that each KPP have a cycle timer so that the Form Rule is evaluated at reasonable intervals along the process timeline (e.g. every hour, every day, once a week).
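
A minimal Python sketch of a KPP evaluation loop under these assumptions; the read/rule/commit callables stand in for the platform’s own Form and commit machinery, and the names are illustrative only.

```python
import time
from typing import Any, Callable, Dict


def run_kpp(read_form_data: Callable[[], Dict[str, Any]],
            form_rule: Callable[[Dict[str, Any]], bool],
            on_commit: Callable[[], None],
            cycle_seconds: float = 3600.0,
            max_cycles: int = 1000) -> bool:
    """Wait-state auto-commit step: on each cycle, read the KPP's form data and
    evaluate its Form Rule; when the rule evaluates TRUE, commit the step
    (which makes the immediate successors current) and stop polling."""
    for _ in range(max_cycles):
        if form_rule(read_form_data()):
            on_commit()
            return True
        time.sleep(cycle_seconds)     # cycle timer: e.g. hourly, daily, weekly
    return False                      # rule never fired within the allotted cycles
```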

KPPs are becoming more and more important as BPM processes depend on data external to their run-time Case Management platforms.

They typically require a generic data exchanger so that data pulled from local and remote applications and systems to the data exchanger or data posted to the exchanger can be mapped to data element names that individual trading partner Case Management platforms recognize. Most organizations use pull protocol with some of their data trading partners and use push protocol with other of their data trading partners.

OK, so if we have KPPs waiting for incoming data, why don’t we have KPPs waiting to output data for push to local and remote systems and applications, or for pull by local and remote systems and applications?

The reason is this data management task is handled by the data exchanger. For pull, the subscriber can decide the frequency of pickup and, for push, again, the subscriber can work out a strategy that suits the subscriber.

If you don’t have/use KPPs, your BPMs is an island and you are basically working with a model of a process as opposed to a real-life mirror of a process.


Can your BPMs improve operational effectiveness and efficiency?


Apparently not, based on Wikipedia’s definition – “a BPMS is a technological suite of tools designed to help the BPM professionals accomplish their goals”.


The conclusion is easily explained. . .

Operational efficiency, for sure, can be achieved by automating business processes, but to achieve effectiveness it is Case goals, not the goals of BPM professionals, that need to receive the focus.

If the Wikipedia definition is right, corporations need a run-time Case Management System that is capable of hosting background BPM compiled process maps as templates in addition to a BPMs.

And, the Case “goals” reasonably come from ROIs requesting funds to undertake the initiative(s) that is/are the focus of each Case.

Clearly, not all Cases need ROIs – options are 1) annual functional department budgets for small initiatives typically confined within one functional department and 2) collaborative low-cost initiatives that span two or more functional departments.

Need more proof?

Consider a knowledge worker in a large corporation – unless this person is totally focused on some large initiative (i.e. one Case), chances are they will be working on multiple Cases – and this takes Case Management of each process template instance out of the hands of BPM.

The worker will typically see multiple tasks relating to multiple Cases posting to his/her InTray and the worker will want to micro-schedule his/her work across some or all assigned Cases.

On top of this, the worker’s supervisor will, from time to time, receive re-prioritization requests and may impose deadlines on the worker for certain tasks/Cases. The supervisor may also offload tasks/Cases from the worker.

The micro-scheduling/re-prioritization activity is called RALB (resource allocation, leveling, and balancing) and is not part of what most folks consider to be part of BPM.

Next, we have the need for automated data exchange between Case Management Systems and local and remote 3rd-party systems and applications. Without this, your Case becomes an island. As we connect more and more to the IoT, we need ways and means of exporting/importing data and, for both efficiency and effectiveness, much of this data flow needs to be automated.

Example: You have a workflow that is expecting receipt of a supplier notice of shipment. The notice can come in the form of an EDI message, but you certainly don’t want this to come to you as an e-mail attachment. The happy scenario is that your supplier uploads the message to a generic data exchanger and your Case Management environment polls the data exchanger at set time intervals, looking for the shipment confirmation. Pathway rules then allow the processing to move forward.
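
A hedged Python sketch of that polling pattern; poll_exchanger stands in for whatever client the data exchanger actually exposes, and the message type, interval and limits are invented for the example.

```python
import time
from typing import Any, Callable, Dict, Optional


def await_shipment_confirmation(poll_exchanger: Callable[[str, str], Optional[Dict[str, Any]]],
                                case_id: str,
                                poll_interval_seconds: float = 900.0,
                                max_polls: int = 96) -> Optional[Dict[str, Any]]:
    """Poll a generic data exchanger at set intervals for a supplier's EDI
    shipment notice; when it arrives, return it so pathway rules can let the
    process move forward. poll_exchanger returns the mapped message, or None
    if nothing has been published yet."""
    for _ in range(max_polls):
        message = poll_exchanger(case_id, "ASN")     # "ASN" = advance ship notice (illustrative)
        if message is not None:
            return message
        time.sleep(poll_interval_seconds)
    return None                                      # not received: escalate to the Case Manager
```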

Lastly, if your Case is the least bit complex, it will have several goals and you will not want to rely on subjective assessments by Case Managers as to whether a set of goals has or has not been reached. It is important to remember that, left on their own, “Case Managers close Cases”. Their power at Cases is pretty much absolute. Figure of Merit Matrices (FOMM) make the decision-making less arbitrary.

Bottom line, here is the scorecard . . . .

 

Performance Requirement    BPMs    Case
Efficiency                 Y       Y
Effectiveness              N       Y
Business Goal Focus        N       Y
RALB                       N       Y
Data Exchange              N       Y
FOMM                       N       Y

 

Conclusion

BPMs                   necessary, not sufficient

Case/BPM               necessary, not sufficient (1)

RBV/Case/BPM           necessary, sufficient (2)

(1)

The reason Case/BPM gets a “not sufficient” tag is that Case/BPM are operational tools – we don’t get 360-degree coverage unless/until the organization subscribes to RBV (Resource-Based View), or equivalent, for strategy development.

Absent RBV, the organization lacks an orderly means of allocating scarce resources to Cases so any goals defined at Cases are not likely to be met.

(2)

See

Theories of the Firm – Expanding RBV using 3D free-form search Kbases

http://wp.me/pzzpB-Ms

 

 
