Crime Reduction Initiatives Implementation Guide


Success with Crime Reduction Initiatives involves identifying high return/low risk candidate initiatives, focusing on a small number of them, setting goals/objectives, and then updating and rolling out Departmental Policy & Procedure.

Crime Reduction – Drug Distribution/Use

Phase I (method: RBV)
• Identification of high return/low risk candidate initiatives
• Selection of one or more initiatives
• Goal/objective setting

The starting position for implementing a Crime Reduction Initiative is to use a strategy-building method such as RBV (Resource-Based View): consult local, state, and federal crime statistics, browse available publications/documents, identify candidate initiatives, select one or more initiatives for implementation, and set goals/objectives.

Phase II (method: BPM)
• Updating Policy & Procedure (P&P), rollout of P&P

Implementation of a Crime Reduction Initiative includes updating P&P and rolling out P&P to staff for day-to-day guidance/governance. Here, the method of choice is BPM (Business Process Management): detailing the sequence of needed interventions, guiding the processing of interventions, and preventing deviation away from protocol using rules.

Phase III (method: ACM)
• Day-to-day incident/case management
• Periodic assessment of progress to goals/objectives

ACM (Adaptive Case Management) is the method of choice for incident response/case management: background BPM templates provide orchestration, in-line rules provide governance, and ACM accommodates recording of data at protocol steps and auto-building of Incident/Case Histories.

Researching Initiatives
In the CiverMind™ demo system screenshot below, we have set up links to various “resources” from select resource publishers, one of which is National Public Safety Participation (NPSP).

Depending on a PD’s setup for CiverMind, a CiverMind sheet could host links to 20,000 or more resources from various Publishers. For this demo, we will be featuring 3,000 resource links.

It is important to be able to research/select initiatives within a 3D knowledgebase such as CiverMind because the amount of information searching/prioritization needed can otherwise be overwhelming – a 3D Kbase user interface allows all searching/prioritization of candidate initiatives to be carried out at ONE computer screen.

[Screenshot: CiverMind Crime Reduction – 3D Kbase]

We can run keyword searches such as “drug, juvenile” to find resources to include in a Crime Reduction Initiative for “drug use by juveniles”. The search gave 487 hits in the demo.

A revised search phrase consisting of “drug juvenile NPSP” narrowed the hits to a more manageable 100.

Notice that NPSP provides a “Toolkit” for extraction of copies of NPSP resources – CiverMind accommodates buildup of links to resources of multiple hosted publishers.

Updating Policy/Procedure
As and when initiatives are identified for implementation, there will almost always be a need to update Policy/Procedure (i.e. preparing new P&P, updating existing P&P).

The usual scenario in respect of P&P involves updating of multiple P&P documents.
P&P management is greatly simplified when a Department hosts all of its P&P in a Kbase like CiverMind (i.e. auto-revisioning, auto-storage, easy access to hundreds or thousands of protocols from one computer screen, sophisticated search/masking of protocols with no “hits”).

Start with an initial search across your current P&P to find, for the above example, existing P&P relating to drugs/juveniles to avoid unnecessary proliferation of new P&P documents.

Most PDs have some 200 Protocols; some run only 1-2 pages, others run 20 or more pages.

Our demo Kbase includes P&P for various major city PDs. This is a dramatization – in the normal course of events a PD will only need/want to host its own Policy and Procedure.

Rollout of Policy and Procedure to Member Smartphones/Tablets
Once you have an updated set of your P&P, your choices for rollout are:

a) Off-line publication (i.e. printed manuals)
b) On-line publication (e-books)
c) In-line publication, where narrative P&P texts for process steps are embedded in flowgraphs as checklists, with auto-posting of steps to user InTrays at smartphones and tablets.

The benefits of in-line publication with orchestration and governance are as follows:

• Greatly reduced errors and omissions
• Real-time data collection, improved dynamic decision making at Incidents /Cases
• Improved outcomes

Not convinced you need 3D Kbases to manage 5,000 or more document links?

Here is a view of the same data featured above, minus the organizational capability that is native to 3D Kbase platforms . . .

[Screenshot: CiverMind Crime Reduction – expanded flat view]

Stay tuned for an upcoming post on “BPM for Law Enforcement Incident Response/Case Management”.


Must-Have Features in a Run-Time Case Management Platform


Case Managers spend their time managing Cases and leveling and balancing resources across Cases.

Popular examples of Cases are Patients in healthcare and Investigations in law enforcement but, in general, a Case is just a cursor position at a post-relational database management system. So, we can have “Cases” where the focus is on a supplier, a customer, an insurance claim, or a helicopter receiving periodic maintenance.

The methodology of choice for Case Management is ACM (Adaptive Case Management).

ACM recognizes that a Case ends up as a mix of structured protocols (i.e. process fragments) plus ad hoc interventions. Each Case is typically unique. Each has goals and objectives, and their presence is essential to closing any Case.

If you are planning on developing a Case Management software suite, or planning to acquire one, here is a list of fifteen (15) “must-have” features (a minimal sketch of a few of these follows the list):

  • Official interfaces only (normal users, casual users, import/export engine)
  • Case Hx (longitudinal view)
  • Case Hx (workflow view)
  • Case Goals/Objectives
  • FOMM (Figure of Merit Matrices) for assessing progress toward Case Goals/Objectives
  • Post-relational dbms
  • Menu of Services (for selecting “best practices” protocols)
  • User Workspace (InTray, capable of hosting “best practices” protocol template instances)
  • Background orchestration from BPM process template instances
  • 3-Tier Scheduling (system, users, supervisors)
  • Skip/Jump at Process Steps
  • Insert an ad hoc intervention at a Case
  • Break Glass (for emergency takeover of an in-progress intervention)
  • Re-assign/take back a process step that is not being worked on
  • Advanced Security (who can do what, when)
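
By way of illustration, here is a minimal Python sketch of two of the features above – the Case Hx in its longitudinal and workflow views. It assumes nothing more than an append-only store of postings; all class and field names are invented and imply no particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Posting:
    """One Case History entry: who did what, at which workflow step, and when."""
    user: str
    step: str
    data: dict
    at: datetime

class CaseHx:
    """Append-only Case History offering the two listed views."""
    def __init__(self) -> None:
        self._postings: list[Posting] = []

    def post(self, user: str, step: str, data: dict) -> None:
        self._postings.append(Posting(user, step, dict(data), datetime.now(timezone.utc)))

    def longitudinal_view(self) -> list[Posting]:
        # Everything that happened at the Case, in time order.
        return sorted(self._postings, key=lambda p: p.at)

    def workflow_view(self) -> dict[str, list[Posting]]:
        # The same postings, grouped by workflow step.
        view: dict[str, list[Posting]] = {}
        for p in self._postings:
            view.setdefault(p.step, []).append(p)
        return view
```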

Whether you decide to build or buy, don’t drop any of the above items from your shopping list without careful consideration of the consequences.

Feel free to call Civerex at 800 529 5355 if you have questions.

 

Posted in Adaptive Case Management, Case Management, Software Acquisition, Software Design | Tagged | 4 Comments

Closing the gap between strategy and operations is as easy as A, B, C, D, E


Corporations need two pillars (ECM and DCM) plus three meta-methods to build, sustain and augment competitive advantage.

Absent any one of these, the corporation is likely to fail at closing the gap between strategy and operations (and between operations and strategy).

Let’s highlight these “must haves” in reverse order (E-D-C-B-A).

ECM (Enterprise Content Management) [strategic pillar]

The inventory of corporate resources/capabilities and management thereof (land, plant, equipment, tools, methods, staff, suppliers, customers etc.), along with tentative and funded initiatives that consume/use resources and capabilities over specified timespans.

DCM (Document Content Management) [operational level pillar]

The inventory of “documents” within an organization comprising text, .pdf, .doc, spreadsheet, image, video, and audio files; includes templates (data elements, layouts, rules) plus instances of templates that have received data element values, typically directly at Cases or via remote system and application data imports.

CPM (Critical Path Method)

The method of choice for planning, monitoring and controlling once-through initiatives.

BPM (Business Process Management)

The corporation’s inventory of “best practices” protocols (workflows, comprising steps, with attached data presentation/data collection forms and performance roles).

ACM (Adaptive Case Management)

A run-time environment that accommodates management of initiatives (hosting BPM templates that provide orchestration, accommodating ad hoc interventions, goals/objectives, plus a means of non-subjective assessment of progress toward goals/objectives).


Strategy Development – Same old, or entirely new?


Recently, I was asked whether Edith Penrose’s “RBV” (Resource-Based View) is obsolete today.

Fair question, given that RBV goes back to 1959.

This led to a question as to whether Porter’s Competitive Advantage method has evolved toward “sustainable competitive advantage” versus “temporary competitive advantage”.

Here is a link to one leading article on the topic, “Sustainable competitive advantage or temporary competitive advantage: Improving understanding of an important strategy construct” (T. O’Shannassy, 2008):

https://www.emeraldinsight.com/doi/full/10.1108/17554250810926357

A key statement in the article is “. . .in many industries for many firms competitive advantage is only a temporary outcome due to the influence of environmental uncertainty.”

It seems to me that uncertainty (not just environmental uncertainty) has always been present, just as any initiative is, and always has been, characterized by risk.

I would add that all competitive advantage is, and always has been, temporary (Yogi Berra probably would have said “You have it until you don’t have it”), so my question is: why have people spent time and money picking at Porter’s method?

If we go along with the distinction, a “temporary” CA could be the result of landing a big contract, but the more likely scenario is that building CA takes a lot of time and money; once you have CA, you need to pause to consolidate/sustain it, and then try to augment it.

One thing we can say, for sure, is that anyone who subscribes to RBV and follows the method

knows that if strategy changes, in-progress initiatives need to be reviewed and, in some instances, terminated (e.g. a competitor leapfrogs your innovative product, no point continuing with that implementation).

RBV requires that all work performed be supportive of strategy; accordingly, the timeline for a strategic initiative must exceed the implementation time for that initiative.

Since most important initiatives get authorized by way of ROI submissions, the timeline for an initiative is the time to breakeven, hopefully a bit longer.

Bottom line: long-running initiatives work for long strategies, short-running initiatives are needed for short-duration strategies, and all competitive advantage is temporary.

Part of identifying/prioritizing initiatives to build/sustain/augment CA is to have a mix of short-lifecycle strategies and long-lifecycle strategies.

Anything new in all of the above?

Yes.

What is “new” is the ability to view all corporate resources, view all initiatives, view all prioritized initiatives, view the status of all initiative implementation and view the status of all Cases at a graphic free-form-search knowledge base. The icing on the cake is the ability to change resource allocations and priorities of tentative initiatives and make real-time assessments in respect of in-progress Cases.

Nice alternative to arriving at a strategy planning meeting pushing a cartload of studies, reports, supporting documents and spreadsheets.

 


How the EU’s GDPR (General Data Protection Regulation) impacts your business



GDPR (General Data Protection Regulation, EU 2016/679) took effect one week ago (May 25, 2018).

The Regulation and two related Directives, EU 2016/680 and EU 2016/681, deal with the processing of personal data relating to natural persons who are citizens of one of the 28 member states or persons residing in one or more of the 28 member states.

The first thing to note is that if you are a corporation whose headquarters is outside of the EU and you host any personal data relating to EU citizens/residents, you are also subject to EU Regulation 2016/679.

Article 4 of EU 679 defines “processing” to include “collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction”.

The only way to understand the far-reaching impact of the legislation is to read the legal texts – there is one overarching Regulation and two Directives (680 & 681), but each member state (28 of them) has the right to publish its own versions of EU 2016 679/680/681.

If you do the math, you get 3 legal texts x 28 states x 150 explanatory texts/articles, for a possible total of 12,600 notes/articles.

If you have operations in more than one EU state, your policy/procedure re personal data relating to natural persons may require adjustments.

Know and Understand the legislation

A graphic free-form-search knowledgebase where you can simultaneously view all texts/articles for a search phrase facilitates the task.

You can view the texts/articles this way (i.e. with no knowledge base) . . .

[Screenshot: flat listing of the GDPR texts/articles]

or this way (i.e. with a knowledge base) . . .

[Screenshot: the same texts/articles in a 3D knowledge base]

The problem with organizations running transaction-based software applications is that current systems are not likely to meet GDPR minimal requirements. There will be disclosures, and fines will be imposed.

If you do a risk assessment, OR bring in a consultant to carry out a risk assessment, the opinion is likely to be that, in the event of a disclosure, you could receive a fine of up to 4% of your total worldwide revenue (Article 83).

A search at my EU 679/680/681 knowledgebase for “fine” highlights all “hits” down to the individual text/clause, allowing quick/easy access to, in this case, Article 83.

[Screenshot: knowledgebase search for “fine”, with hits highlighted down to Article 83]

GDPR sets the bar at a new, higher level in the area of natural persons’ data protection.

Be aware that “rule sets”, typically pervasive in transaction processing software, have great difficulty protecting personal “documents” (e.g. images, audio recordings, video recordings, documents, and memo field recordings).

For this reason, only share “documents” via a data exchanger that handles data sharing at transaction application process instance steps, using PCPs (process control points) that require manual intervention/clearance rather than across-the-board user permissioning.

Here are three (3) high-level recommendations re the design and configuration of transaction processing software systems to handle data relating to natural persons.

1.      Restrict sharing of data relating to natural persons.

Only share data relating to natural persons on a need-to-know basis – the more data you share, the greater the risk of deliberate or inadvertent data breaches.

2.      Importance of having formal data repositories.

Your core transaction processing software systems need to feature consolidating Case Histories that post system-generated date/timestamps and a user “signature”, along with data, as it was, on the form versions that were in service at the time (who, what, where, when), at any process step you commit. Simple “logs” where you see accesses, without precise tracking of what was accessed, when, by whom, and from where, are not good enough.

Postings to data histories must be automated, as opposed to selective, and must include visits to all forms that host personal data, even if no data is recorded on these forms.

When a user leaves a process step form that has been visited and edited, the software suite must immediately lock down the new History posting.

Following this, no changes must be allowed to the new posting (or to any previous postings), except that authorized users can look up a form that is in the History, clone the form with its data, carry out edits on the form copy, and then save the form template and data as a new session in the History.
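
A minimal Python sketch of that lock-down/clone-and-edit behavior, assuming nothing more than an append-only store of “sessions”; all class and field names are invented for illustration.

```python
import copy
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)   # frozen: a committed session cannot be edited afterward
class Session:
    """One locked-down History posting: form version, data as recorded, when, by whom."""
    form_id: str
    form_version: int
    data: dict
    user: str
    at: datetime

class CaseHistory:
    """Append-only repository of sessions; postings lock down on commit."""
    def __init__(self) -> None:
        self.sessions: list[Session] = []

    def commit(self, form_id: str, form_version: int, data: dict, user: str) -> Session:
        s = Session(form_id, form_version, dict(data), user, datetime.now(timezone.utc))
        self.sessions.append(s)
        return s

    def clone_and_edit(self, original: Session, edits: dict, user: str) -> Session:
        # Authorized correction: clone the old form and its data, edit the copy,
        # and save the result as a NEW session; the original posting is untouched.
        data = copy.deepcopy(original.data)
        data.update(edits)
        return self.commit(original.form_id, original.form_version, data, user)
```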

3.      Strictly control access to stored personal data

All access to personal data for purposes of viewing or data sharing should be via “official” interfaces only (i.e. direct log in, portal access, data exchanger).

a)      Data being directly entered into the system by logged-in ordinary users should be controlled by user name and password, with permissions down to the form level (i.e. a newly-added user should have no access to any record in the system). Access to the records that contain data relating to natural persons should be granted by “role”. Some records (i.e. ‘VIP’ records) should be blocked from access/viewing for all but specific designated users. VIP records should be excluded from searches, from directory listings, and from Reports that list data across records.

b)     Casual and external users should only be able to access data relating to persons via a portal login using a user name and password, preferably with dual-factor authentication.

One possible configuration for portals is to use a portal-facing IIS server that pushes out a user-specific “menu of services” itemizing authorized service requests. The content of the “menu of services” should be “role-based”. (i.e. the only people who get to see the data for a database record are those actively involved in processing that data). Clicking on a portal user menu line-item invokes a database engine at the portal-facing server. The database engine alone logs into the back-end server, retrieves requested forms /data and posts the form(s) and data to the user in-tray at the portal.

c)      Batch data streams from/to local or remote 3rd-party systems and applications should only post to a standalone data exchanger that accommodates a mapping of need-to-know data elements per subscriber (typically a local or remote system). Parsers and formatters are the responsibility of subscribers and publishers, and data streams must be encrypted for transport. The data exchanger should accommodate rule sets that operate on incoming data (point-of-origin checks, range checks, boilerplate pick-list lookup verification) and tag problematic incoming data for manual clearance (sketched below).
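
Here is a minimal Python sketch of the kind of rule set a data exchanger might run over incoming data; the rules, field names and values are invented for illustration.

```python
# Hypothetical rule set for a data exchanger: incoming values are checked
# against per-element rules; anything problematic is tagged for manual clearance.

ALLOWED_ORIGINS = {"app-eu-01", "app-eu-02"}           # point-of-origin check
RANGES = {"age": (0, 130)}                             # range checks
PICKLISTS = {"status": {"open", "closed", "pending"}}  # boilerplate pick-list lookups

def screen(record: dict) -> tuple[dict, list[str]]:
    """Return the record plus a list of problems needing manual clearance."""
    problems = []
    if record.get("origin") not in ALLOWED_ORIGINS:
        problems.append(f"unknown origin: {record.get('origin')}")
    for key, (lo, hi) in RANGES.items():
        if key in record and not (lo <= record[key] <= hi):
            problems.append(f"{key} out of range: {record[key]}")
    for key, allowed in PICKLISTS.items():
        if key in record and record[key] not in allowed:
            problems.append(f"{key} not in pick list: {record[key]}")
    return record, problems

record, problems = screen({"origin": "app-xx-09", "age": 212, "status": "open"})
if problems:
    print("tag for manual clearance:", problems)
```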

Bottom line: if your current transaction processing systems/apps do not restrict sharing on a strict need-to-know basis, if your data repositories do not build/maintain formal Histories, or if they allow access to natural persons’ data via other than official interfaces, you need to replace, re-engineer or adapt your systems/apps.

Many corporations view such undertakings as requiring a huge amount of capital/time/disruption.

They fail to realize that task execution/tracking (the major source of natural-person data) can be carried out under a Case platform that links to existing systems/apps via a generic data exchanger, giving the corporation improved control over workflow/workload, with the option of later adapting/re-engineering/replacing legacy systems/apps.

Here is a list of caveats in respect of sharing data relating to natural persons.

  • Avoid e-mail systems (too easy to inadvertently reference the wrong addressee, proliferation of storage of personal data).
  • Avoid cloud document-sharing facilities (better to have everything relating to a dbms record “at” the dbms record).
  • Avoid use of APIs or RPA apps that build transaction logs complete with personal data (i.e. determine where these apps are, who has access to them, and what the retention strategy of each app is) – your Case Histories are your permanent record of each transaction.

kwkeirstead

https://kwkeirstead.wordpress.com


How Low Can You Go? – Part IV – Essentials for building, sustaining & improving corporate competitive advantage.



This article attempts to bridge the gap between operational methods such as ACM/BPM/CPM/RALB and FOMM and the methods used to evolve business strategy. The mission of all strategic and operational initiatives is to build, sustain and augment competitive advantage.

See “Theories of the Firm – Expanding RBV using 3D free-form search Kbases” at https://wp.me/pzzpB-Ms

RBV allows organizations to assign priorities to promising initiatives that are competing for scarce resources. Proper implementation, in theory, leads to increased competitive advantage.

The mechanism for bridging the gap between an initiative and its practical implementation (typically within a Case) is to have operations prepare an ROI submission citing benefits and objectives. The latter can be parked at a Case in a Figure of Merit Matrix. Periodic assessment of progress toward meeting the objectives of the Case, with regard for the ROI, really is all you need to bridge the gap.

Things often don’t go this way because of the bizarre habit some corporations have of authorizing initiatives and then not bothering to carry out periodic monitoring/auditing of progress.

In extreme cases, the strategy is changed yet the initiative continues.  The practical equivalent of this in government is to have an agency in charge of taking care of monuments that no longer exist.

“Must haves” for bridging the gap between operations and strategy (and between strategy and operations) include:

  • RBV
  • Case w/FOMM

Note, from the “Theories of the Firm . . .” article, that for large corporations RBV does not work very well if you don’t have access to 3D free-form-search knowledgebases.

For me, all of this boils down to “. . . you cannot manage what you cannot see”.

Photo by: Anneli Salo


How Low Can You Go? – Part III – Case Management Essentials


This article explores the contribution of three (3) methodologies to the effectiveness of work.


The three methodologies are:

• ACM (Adaptive Case Management)
• RALB (Resource Allocation, Leveling and Balancing)
• FOMM (Figure of Merit Matrices)

ACM (Adaptive Case Management)

ACM builds on BPM in the area of work performance by providing a run-time platform for the management of “work” (i.e. management of workflow and workload).

Here, the logical strategy is to have background BPM at Cases provide orchestration for the performance of process steps or tasks, with the option for various actors to deviate from what might otherwise end up being a rigid sequence of tasks.

“Must-haves” for ACM include giving users a workspace or platform that consists of nothing more at the primary user interface than a split screen, with a user “InTray” on one side and a traditional Calendar on the other.

The Calendar hosts fixed-time tasks.

The InTray hosts floating-time tasks that post as and when process steps become “current” along BPM process pathways. InTray tasks post automatically due to user skill attribute tagging at process steps, plan-side.

Context/situation-appropriate data information display/data collection Forms similarly post automatically as a result of plan-side encoding of Forms at process steps.

Run-time efficiency starts with the posting of each “current” step to the attention of all users with the right skill classification. The first to “take” a task causes the task to lock down (that user becomes the exclusive owner of the task for editing purposes); other users with the same skill classification have read access only.
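
A minimal Python sketch of that take-and-lock mechanic, assuming an in-memory task object; all names are invented and imply no particular run-time platform.

```python
import threading

class FloatingTask:
    """An InTray task posted to every user with the right skill classification.
    The first user to 'take' it becomes its exclusive owner; others read only."""
    def __init__(self, name: str, skill: str):
        self.name = name
        self.skill = skill
        self.owner: str | None = None
        self._lock = threading.Lock()

    def take(self, user: str, user_skill: str) -> bool:
        if user_skill != self.skill:
            return False                      # wrong skill classification
        with self._lock:                      # first taker wins, atomically
            if self.owner is None:
                self.owner = user
                return True
        return False                          # already taken: read access only

task = FloatingTask("Review intake form", skill="intake-nurse")
print(task.take("alice", "intake-nurse"))   # True  -> alice owns the task
print(task.take("bob", "intake-nurse"))     # False -> bob gets read access only
```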

In the normal course of events, most users are likely to be working on several projects (i.e. Cases) at any given time. Accordingly, the run-time platform must be capable of accommodating multi-tier scheduling, which leads us to a discussion of a 3rd methodology called RALB (Resource Allocation, Leveling and Balancing).

Bottom line re ACM: it removes task performance rigidity and accommodates micro-management of tasks via RALB.

ACM is nothing more than a workspace.

Technically, it is a cursor position at an RDBMS (relational database management system), which means an organization can have as many Cases as needed (i.e. 100; 1,000; more than 1,000).

A Case is the place where users practice ACM.

Unlike BPM, Case/ACM only has a run-time side. There is no plan-side to Case: whereas a Case can be set up as a clone of another Case, for all intents and purposes each Case is likely to end up unique at the time it is closed by its Case Manager.

“Must-haves” for Case/ACM include a workspace per user plus a Case Log where each intervention captures data, as it was, at the time it was recorded, on the form versions that were in service at the time, with a system-applied date/timestamp and user “signature”.

ACM also requires the ability at a Case to auto-export data to local and remote systems and applications and to auto-import data from these. ACM handles who, what, how, where plus when.

RALB (Resource Allocation, Leveling and Balancing)

RALB impacts efficiency at Cases both at the individual user level as well as efficiency across users.

You may be surprised to learn that all of us basically work the same way – we come into our places of work and immediately take note of fixed-time commitments (e.g. meetings). We take note of time intervals between commitments and make decisions regarding tasks to initiate, advance or complete.

If the time between one meeting and the next is long, a user is likely to focus on advancing the state of one large task. Otherwise, the user may try to complete several small tasks.

For these reasons, users need to be able to micro-schedule tasks. There are, of course, exceptions, one being “breakfast meds” in healthcare that reasonably cannot be deferred from their usual schedule.

Bottom line, users want/need to be able to adjust the timing of tasks, including re-scheduling of certain tasks to another day.

Supervisors also need to be able to adjust the timing of their users’ and their own tasks on the basis of changing customer priorities, sometimes removing tasks from one user and assigning these to other users.

FOMM (Figure of Merit Matrices)

FOMM impacts the effectiveness of work at Cases.

FOMM was invented, it seems, by the RAND Corporation – I recall an article dealing with non-subjective decision-making relating to the range, accuracy and payload of ballistic missiles.

Our adaptation of FOMM has been to provide a means of non-subjective assessment of progress toward meeting Case-level objectives/goals.

The value of FOMM at Cases is easily explained: most Cases have multiple objectives, and some objectives are more important than others. The essential contribution of FOMM is to “weight” objectives and calculate progress toward Case completion.

Progress assessments at Cases are subject to “S” curve behavior where progress is characteristically slow at the onset, followed by rapid progress, only to slow down typically at the 90% complete stage.

Anyone who works with once-through projects using CPM is familiar with “S” curves. The usual project control strategy is to calculate the critical path every few days and shift to a “punch list” once the project is at the 90% complete stage.

“Must-haves” for FOMM are a means of structuring objectives/goals, assigning weights to objectives and making available facilities for recording incremental progress.

All of this can be provided by common spreadsheets and, since Cases can accommodate any type of data, FOMM spreadsheets are easily accommodated at Cases themselves, making them easy to access.
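
By way of illustration, the spreadsheet arithmetic reduces to a weighted average. Here is a minimal Python sketch, with invented objectives, weights and progress readings:

```python
# FOMM as spreadsheet logic: weight each Case objective, record incremental
# progress (0..1), and report overall progress toward Case completion.
# Objectives and values below are invented for illustration.

objectives = {
    "stabilize symptoms":   {"weight": 0.5, "progress": 0.9},
    "complete assessments": {"weight": 0.3, "progress": 0.5},
    "discharge planning":   {"weight": 0.2, "progress": 0.1},
}

total_weight = sum(o["weight"] for o in objectives.values())
score = sum(o["weight"] * o["progress"] for o in objectives.values()) / total_weight

print(f"Case completion: {score:.0%}")   # -> Case completion: 62%
```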

Related articles:

How Low Can You Go? – Part I

How Low Can You Go? – Part II – BPM essentials

How Low Can You Go? – Part III – Case Management essentials

How Low Can You Go? – Part IV – Essentials for building, sustaining & improving corporate competitive advantage.

Photo by: Anneli Salo

How Low Can You Go? – Part II – BPM Essentials


This article explores the contribution of a methodology called BPM (Business Process Management) to the “management of work” and lists essential features for background BPM services at Cases.

BPM is core to workflow. It also contributes, to an extent, to workload management. It contributes directly to efficiency and contributes indirectly to effectiveness.

In the overall scheme of things, BPM deals with what, how, who, and where.

It’s one thing to embrace a methodology and another to make good use of that methodology so let’s go over essentials or “must haves” for BPM.

Hopefully, your takeaway will be that BPM is core to the planning and management of work, yet relatively easy to implement.

BPM has a plan-side dimension (process mapping capability) and a run-time-side dimension (background orchestration capability at Cases).

One BPM “must have” is the ability to document “as-is” processes.

This is best accomplished within a graphic space where users are able to put down tasks or process steps and interconnect these with directional arrows.

Extending an “as-is” process map to a “to-be” process map requires the ability to re-arrange and extend process steps (e.g. easy insertion of steps between existing steps, easy disconnecting and re-connecting of directional arcs between steps).

The transition from a “to-be” process map to a run-time “best practice” process is typically automatic following several process improvement iterations.

I will detail below four (4) “must-have” features of run-time BPM, out of a total list of 13.

See “Success Factors with BPM”:

https://wp.me/pzzpB-JH

1 BPM Compilers

Auto-generation of a run-time process template is the first BPM “must-have” feature for the simple reason that different steps must be performed by different people with different skill sets.

A BPM compiler solves the problem by carving up process maps into discrete steps for posting to the InTrays of the right people at the right time.
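
By way of illustration only, here is a toy Python sketch of that carving-up-and-posting idea; the process map, roles and user names are invented and imply no particular BPM compiler.

```python
# A toy 'BPM compiler' run-time: walk a process map (steps + directional arrows)
# and, as each step becomes current, post it to the InTrays of users whose
# skill matches the step's performance role.

process_map = {
    "triage": {"role": "nurse",  "next": ["assess"]},
    "assess": {"role": "doctor", "next": ["treat"]},
    "treat":  {"role": "doctor", "next": []},
}

users = {"alice": "nurse", "bob": "doctor", "carol": "doctor"}
intrays: dict[str, list[str]] = {u: [] for u in users}

def post_step(step: str) -> None:
    """Post one current step to every user holding the required skill."""
    role = process_map[step]["role"]
    for user, skill in users.items():
        if skill == role:
            intrays[user].append(step)

post_step("triage")   # step becomes current -> lands in alice's InTray
print(intrays)        # {'alice': ['triage'], 'bob': [], 'carol': []}
```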

2 Run-time BPM (branching decision box construct)

A second “must-have” feature is the ability, along a process pathway (process map template instance), to engage processing along sub-pathways. The required construct for this is a “branching decision box”, roughly equivalent to a fork in a road, except that in an electronic environment two or more sub-pathways, or all sub-pathways, can be contemporaneously engaged.

Two types of branching decision boxes are needed: one manual, where the user selects from available options; the other automated, where the software system makes selections from available options based on data present at each decision box.
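
A minimal Python sketch of both types; the options, rules and data are invented for illustration.

```python
# Branching decision box, sketched two ways: manual (the user selects) and
# automated (rules select from the data present at the box). Unlike a fork
# in a road, several sub-pathways may be engaged at the same time.

def manual_branch(options: list[str], user_choices: list[str]) -> list[str]:
    # The user picks one or more of the available sub-pathways.
    return [o for o in options if o in user_choices]

def automated_branch(options: dict, data: dict) -> list[str]:
    # The system evaluates each option's rule against the data at the box.
    return [name for name, rule in options.items() if rule(data)]

paths = automated_branch(
    {"lab_workup": lambda d: d["temp"] >= 38.0,
     "discharge":  lambda d: d["temp"] < 38.0,
     "notify_md":  lambda d: d["temp"] >= 40.0},
    data={"temp": 40.2},
)
print(paths)   # ['lab_workup', 'notify_md'] -- two sub-pathways engaged at once
```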

3 Run-time BPM (loopback construct)

A third “must-have” feature is a “loopback”. This allows a portion of a workflow to be processed two or more times (e.g. call a telephone number and connect, or try later up to, say, three times, and then take some alternative action if communication has not been established).
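
A minimal Python sketch of a loopback, using the telephone example; the attempt function is a random stand-in for a real call.

```python
import random

def place_call() -> bool:
    """Stand-in for a real telephone attempt (succeeds ~30% of the time)."""
    return random.random() < 0.3

def loopback(attempt, max_tries: int, fallback) -> None:
    # Re-process the workflow portion up to max_tries times, then fall back.
    for n in range(1, max_tries + 1):
        if attempt():
            print(f"connected on try {n}")
            return
        print(f"try {n} failed; looping back")
    fallback()

loopback(place_call, max_tries=3, fallback=lambda: print("send a letter instead"))
```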

4 Run-time BPM (link to pathway construct)

The fourth “must have” feature is “link to pathway”.

In the absence of “link to pathway”, BPM practitioners have no way of allowing managers, staff or robots to thread together “process fragments”. A process fragment is any non-end-to-end sequence of steps that does not have a definable objective at the “last” step in that sequence.

To summarize, organizations need BPM to help plan and manage workflow, manage workload and achieve operational efficiency.

This leaves operational effectiveness as a topic for a subsequent article.

Related articles:

How Low Can You Go? – Part I

How Low Can You Go? – Part II – BPM essentials

How Low Can You Go? – Part III – Case Management essentials

How Low Can You Go? – Part IV – Essentials for building, sustaining & improving corporate competitive advantage.

Photo by: Anneli Salo

How Low Can You Go? – Part I


I’ve been following/participating in various recent BPM.com discussions relating to no-code/low-code.

There is lots of posturing by vendors and product champions re the run-time platforms they are promoting or using, as well as posturing re the process mapping environments they are promoting/using.

Clearly, if you are a software vendor whose software suite requires a lot of coding, you are not going to jump onto a low-code/no-code bandwagon. Your pitch is going to be “coding gives flexibility”.

If you are a vendor of a no-code/low-code software suite, you are going to focus on attracting customers who can live with whatever limitations come with your environment. Your pitch is going to be . . . “Look, Ma, no hands!”

Bottom line, the world of BPM divides into two groups, both happy with BPM, both much better off than those who have not yet “discovered” BPM.

Problems and fails arise when a corporation picks a no-code/low-code environment and later identifies “must-have” needs that the environment cannot handle.

And problems and fails arise when a corporation picks a “high-code” environment but finds that it is unable to manage the environment.

The question becomes “How low can you go?” (i.e. what are the minimum needs for practicing BPM?)

We will explore, in upcoming blog posts, what most corporations need in the area of BPM process mapping, followed by Case Manager/knowledge worker needs in the area of ACM (Adaptive Case Management) and, finally, what corporate planners and corporate governance boards need for building, sustaining and improving corporate competitive advantage.

Clearly we want BPM to support ACM, and we want ACM to contribute to improving competitive advantage.

Don’t look for a direct link between operational initiatives management and corporate competitive advantage building, sustaining and improvement – there is none.

How Low Can You Go? – Part II – BPM essentials

How Low Can You Go? – Part III – Case Management essentials

How Low Can You Go? – Part IV – Essentials for building, sustaining & improving corporate competitive advantage.

Photo by : Anneli Salo

Work, Workflow, Workload and BPM



Work, as we know it today, is about to change.

Most of us will soon have automobiles that drive themselves. 3-D printing already allows assemblies to be built that are not, strictly speaking, assemblies. AI seems to be making a comeback, and we’re starting to see machines capable of self-diagnosis and preventive maintenance. In healthcare, science and technology are causing a shift away from fixing problems toward preventing problems. The list is by no means exhaustive.

Work itself is not going away but we will see significant changes in the balance of work done by people, software, machines and robots.

Let’s take a look at “work”.

Our first observation is that it takes time and costs money. This tells us that work needs to be purposeful. In business, “purposeful” means that any work done sustains or improves competitive advantage.

We know, in general, what to do.

“How” brings us to “best practices”. What are “best practices”, anyway?

Fundamentally, the ways you do things currently are your “best practices”.

If you haven’t paid much attention to improving your “best practices”, this means you can achieve significant gains in efficiency, and possibly effectiveness, by moving to “better” practices. Unless, of course, you don’t care about efficiency or effectiveness, in which case your “best practices” will remain as they are whilst competitors rush past or leapfrog you.

If you have paid a lot of attention to improving your “best practices”, your best practices will be better than those of your competitors. This is a good thing in one sense, but a not-so-good side effect is that you may be at a stage of diminishing returns, or at a stage where small improvements can be highly disruptive.

Now, given this article is about BPM, where does BPM (Business Process Management) fit into all of this?

Simply stated, BPM is a business mindset and a methodology that allows organizations to improve efficiency and, to an extent, effectiveness.

BPM allows you to map out your practices, transition these to “best practices” and, with the help of a few other concepts/methods (i.e. Case, plus Resource Allocation, Leveling and Balancing or RALB, plus Figure of Merit Matrices or FOMM), makes it easy for people, software, machines and robots to make consistent good use of best practices.

Fine, but how?

“How” is the hard part . . . .

A good starting point is to point out that publication of a paper process diagram, with an expectation that people will improve efficiency and effectiveness by staring at the map, will not work.

You need, first of all, a graphic canvas on which you map out each process; a compiler that can carve up your maps into steps for automatic posting to people, software, machines and robots as steps become current along processes; and a run-time platform (i.e. Case) capable of hosting compiled BPM processes (i.e. process templates) in order to provide orchestration and governance in respect of the performance of work.

The platform itself must also be capable of providing Case-level background governance.

Governance in the form of rules at BPM steps, and governance in the form of rules at the Case level, is essential to prevent extreme, unwanted excursions away from best practices, given that actors at Cases are basically free to do what they like, how they like, and when they like.

Fortunately, all Cases have Case Managers who have the last say (i.e. Cases only get closed by Case Managers).

The right mix of orchestration and governance can have a highly positive impact on efficiency (i.e. doing the right things, the right way, using the right resources).

This leaves two additional dimensions to work that need discussion.

The first of these is the ability to do things at the right time. Our only hope here is to carry out data mining plus data analysis, to get to where we have a better idea of which sub-pathways users are likely to take and what forward task timings are likely to be as Case Managers focus on achieving Case objectives.

Lastly, we need a way to non-subjectively assess progress toward meeting Case objectives and here, the best we have in the way of methods is FOMM (Figure of Merit Matrices).

FOMM is all-important in that it allows Case Managers to focus on efficiency and effectiveness. It pays not to confuse these terms. You can be inefficient, yet effective. You can be efficient, but not effective. But if you are ineffective, it does not matter whether you are efficient or inefficient.

Read this material over a few times and you will have the foundation for increasing efficiency through process improvement and increasing effectiveness through good Case Management.

When you have a sustained focus on efficiency and effectiveness, outcomes improve.

The math goes like this:

Case + BPM + RALB + FOMM -> Better Outcomes

 

UPDATE

There are four dimensions to “work”.

Dimension        Methodology
workflow         BPM
workload         BPM/RALB
efficiency       BPM/ACM
effectiveness    ACM/FOMM

where

BPM    Business Process Management
RALB   Resource Allocation, Leveling and Balancing
ACM    Adaptive Case Management
FOMM   Figure of Merit Matrices

Notice the contribution of BPM at three of the four dimensions of “work”.

Note the difference between workflow (i.e. the sequencing of tasks) and workload management (i.e. the allocation, leveling and balancing of scarce resources across tasks).

Note also the difference between efficiency and effectiveness. The domain of efficiency is the perceived best use of resources in the performance of work, whereas the domain of effectiveness is the result of the performance of work.

Organizations can be efficient and effective. They can be somewhat efficient yet effective. But it is a matter of little consequence whether they are efficient or inefficient if they are ineffective.

Posted in Adaptive Case Management, Automated Resource Allocation, Business Process Management, Case Management, Competitive Advantage, Compliance Control, Decision Making, FOMM, Operational Planning, Operations Management, Process Management, Productivity Improvement, R.A.L.B., Risk Analysis, Uncategorized | 2 Comments