How to Install BPM in a Company



It is difficult to recommend a “best” way to install BPM in a company.

The approach needs to vary with the consultant, and vary with the client.

One size fits all ends up, predictably, fitting none of the players.

 

Photo Credit: Jill Wellington

What works for our group goes like this:

1. Do a live session with a few stakeholders where you build a small workflow (10-20 steps) of the client’s choice (steps, directional arrows, all steps with the same placeholder Form, all steps with the same routing, i.e. “all”).

Make sure the workflow has at least one manual branching-decision-box.

Compile the workflow and roll it out to an ACM/BPM workflow/workload platform.

As the BPM engine posts steps according to the logic of your plan-side workflow, record some data at the placeholder Form at each step, then commit the step.

Go to the History to show that the platform records an auto date/time stamp per step, along with the data that was entered at the placeholder Form at each step.

[quick build – this session must not run more than 1 hour]
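
For readers who want to see what the engine is recording behind the scenes, here is a minimal sketch in Python (not any particular vendor’s platform – step, Form and routing names are purely illustrative) of steps carrying a routing attribute and a placeholder Form, plus a History that auto-stamps date/time at each committed step:

from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class Step:
    name: str
    routing: str = "all"          # quick build: every step routed to "all"
    form: str = "placeholder"     # same placeholder Form at every step

@dataclass
class HistoryEntry:
    step: str
    committed_at: str             # auto date/time stamp recorded by the engine
    form_data: Dict[str, str]     # whatever was keyed at the placeholder Form

class MiniWorkflow:
    def __init__(self, steps: List[Step]):
        self.steps = steps
        self.history: List[HistoryEntry] = []

    def commit(self, step_name: str, form_data: Dict[str, str]) -> None:
        # the engine stamps date/time at commit; the user only supplies Form data
        self.history.append(
            HistoryEntry(step_name, datetime.now().isoformat(timespec="seconds"), form_data))

# ten-step demo workflow, all steps routed to "all", all with the placeholder Form
wf = MiniWorkflow([Step(f"step_{i:02d}") for i in range(1, 11)])
wf.commit("step_01", {"memo": "data keyed at the placeholder Form"})
wf.commit("step_02", {"memo": "second step committed"})
for entry in wf.history:
    print(entry.committed_at, entry.step, entry.form_data)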

2. Now, repeat #1, where some of the steps have different Routings (i.e. replace “all” with, e.g., “contracts”, “design”, “production”, “shipping”).

[illustrate Routing as an attribute of workflow steps]

3. Replace the placeholder Forms with images of the Forms needed at each step in #1, assuming the client can provide these.

Make sure there is an additional “memo” Form field at each Form. This is very useful during modeling to capture notes (e.g. wrong Form, missing Fields, wrong Routing, wrong sequencing of steps, etc.).

[illustrate Forms as an attribute of workflow steps]

4. Upgrade your workflow by adding a Rule Set that automates your branching decision box step, i.e. a Yes/No decision box where an upstream Form picks up a Yes/No value at, say, a combobox. Set the routing at the branching decision box to “system”.

[introduce the client to workflow branching/re-merging]
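
The Rule Set behind a Yes/No branching decision box amounts to very little logic. Here is a hedged sketch in Python (the “credit_approved” Form field and the branch names are assumptions for illustration, not anything the client will actually have):

def branch_decision(case_data: dict) -> str:
    # "system"-routed decision box: route on the Yes/No value captured upstream
    # at, say, a "credit_approved" combobox (field name is an assumption)
    value = str(case_data.get("credit_approved", "")).strip().lower()
    if value == "yes":
        return "proceed_to_production"    # Yes branch
    if value == "no":
        return "return_to_contracts"      # No branch
    # no usable value yet: hold at the decision box until the upstream Form is committed
    return "wait_for_upstream_form"

print(branch_decision({"credit_approved": "Yes"}))   # -> proceed_to_production
print(branch_decision({"credit_approved": "No"}))    # -> return_to_contracts
print(branch_decision({}))                           # -> wait_for_upstream_form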

5. Show the client how to build a custom workflow step Form and encode this to one or more steps.

Make sure there are two steps that feature the same step Form, to illustrate that data recorded at the 1st step flows along the workflow to the 2nd step.

[upgrade of #3]
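
Conceptually, “data flows along the workflow” just means that both steps read and write the same Case-level Form record. A minimal sketch, with invented Form and Field names:

# one Case-level record per Form; two workflow steps "encode" the same Form
case_forms = {"site_survey": {}}      # Form name is an assumption

def commit_step(form_name: str, entries: dict) -> None:
    # data recorded at the earlier step is merged into the Case-level Form record,
    # so the later step that features the same Form opens it pre-populated
    case_forms[form_name].update(entries)

commit_step("site_survey", {"address": "14 Main St", "access": "rear gate"})   # 1st step
print(case_forms["site_survey"])      # 2nd step sees the data recorded at the 1st step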

6. Now compile, roll out and re-run your latest upgrade of your workflow, where you skip a step, re-visit an already completed step, and insert a step that is not in your workflow.

Mention that Rules can be put in place at steps to guard against extreme, unwanted excursions away from “best practices”.

[demonstrate that your solution accommodates excursions away from “best practices”]

7. Illustrate re-direction of steps, where a supervisor overrides the auto-routing by assigning a step to a named user.

[demonstrate escalation as part of RALB (Resource Allocation, Leveling and Balancing)]

8. Explain that “Case Managers are the folks who close Cases” and that some non-subjective means is needed to show progress along a workflow, typically along an “S” curve.

[illustrate non-subjective decision-making for Case closing]
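
FOMM implementations vary, but the core idea can be sketched in a few lines: weight the workflow steps, score what has been completed, and compare the score against an expected “S” curve so the Case Manager works from a number rather than an opinion. The step names, weights and curve parameters below are all assumptions:

import math

# weights per workflow step (illustrative); completed steps so far
step_weights = {"contracts": 2, "design": 5, "production": 8, "shipping": 3}
completed = {"contracts", "design"}

def progress_score() -> float:
    # non-subjective progress: weighted share of completed steps
    done = sum(w for s, w in step_weights.items() if s in completed)
    return done / sum(step_weights.values())

def expected_progress(week: int, total_weeks: int = 12) -> float:
    # logistic "S" curve: slow liftoff, rapid middle, slow finish
    midpoint = total_weeks / 2
    return 1 / (1 + math.exp(-0.8 * (week - midpoint)))

print(f"actual {progress_score():.0%} vs expected at week 5 {expected_progress(5):.0%}")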

 

On-site vs. GoToMeeting or equivalent web-based sessions?

If you consult across countries, try to avoid having to go to the client site for small projects.

For remote sessions, make sure you can accomplish a few of the #2-#8 steps in no more than one hour. Combine steps where possible, but avoid long, drawn-out remote sessions, as some stakeholders have short attention spans.

 

Onboarding stakeholders

During client sessions, let a stakeholder take over your mouse/keyboard. If the client is willing, let them do some of the processing at steps #1 and #2. The more stakeholders participate, the greater their confidence and the more sustained the onboarding.

 

Increasing the chances of a successful ACM/BPM implementation

Once the company is on board with ACM/BPM, make sure that top management understands that a separate set of methods/tools/software/platforms can (and should) be put in place for building, sustaining and augmenting strategic competitive advantage (infrastructure/resource inventory, risk assessment, vulnerabilities, remediation, funding of initiatives).

Explain that the way to narrow the gap between strategy->operations and operations->strategy is for top management/operations to build initiatives, fund the more promising of these, and set the focus at Cases on Initiatives, not processes.

Do not hard-sell this to the point of over-complicating and losing the operational efficiency/effectiveness engagement.

 

See 300+ blog posts that go back to 2010 – many relate to BPM.

http://www.kwkeirstead.wordpress.com


Corporate Agility – where the rubber meets the road . .


Corporate agility is essential today for building, sustaining and augmenting corporate strategy and for operational workflow/workload management (i.e. achieving operational efficiency & effectiveness).

It seems that today, as a result of the persistence of BPM (Business Process Management) practitioners over 20 years, corporations are in good shape to achieve agility in the operations area (no-code/low-code process mapping and re-mapping; run-time workflow/workload management platforms; and, to an increasing extent, real-time predictive-analytics decision support).

With minor exceptions, things are not at all as advanced in the area of strategy building.

Consider the need for corporations to evolve strategy and operational response to cyber and terrorist attacks. If you go to Wikipedia, you will see that in respect of the latter there are 2,000+ incidents per year sourced from 4,000+ terrorist groups.

Suppose you have mining operations in a country like Burkina Faso, West Africa. Your strategic variables are available national infrastructure protection plus corporate-specific infrastructure protection plans and event/incident response tactics.

Most corporations lack the resources/skills to evolve comprehensive Infrastructure Protection Plans on their own, so they have to rely on external security consultants.  Most of these groups are selling man-hours, so their preference is to arrive and never leave.

Civerex and its partners HNCT/GI take corporations through the same readiness/resilience process, but our end objective is tech transfer that allows our clients, going forward, to build/own/manage their own strategic and tactical plans for threat identification, threat assessment, vulnerability identification/remediation, readiness and response.

Most of the work involves no code, no database construction, no application system building, but significant semi-real-time data collection setup and frequent tweaks by clients to their strategy and operating procedures.

An Infrastructure Protection Plan (IPP) Kbase can easily comprise 50,000 data points, inclusive of protocols for emergency response to chemical, biological and radiological attacks.

Worldwide 3D Kbase IPP template plus corporate-specific template.


Engaging a search for “Burkina Faso” – notice that the search results show the country of interest, nearby countries, terrorist groups operating within the country of interest, and incidents at the country of interest, and that filtering has reduced the number of entities of interest from 9 to 4.

Right-clicking on the search node (green node) takes the user to a listbox of the “hits” across the highlighted entities for easy browsing of the node content and any linked URLs and linked files.


Corporate Infrastructure Protection


Corporations have finite Infrastructure and scarce Resources.


Corporations need methods to evolve strategic internal initiatives from their Infrastructure and Resources.

Corporations need methods to protect their Infrastructure and Resources from external initiatives (i.e. unwanted events and incidents).

The purpose of this new series of posts is to demonstrate that corporate strategic planning and corporate infrastructure protection planning can/should be carried out within the same e-platform.

It will immediately become obvious that the methods/steps for Corporate Strategic Planning (CSP) are different from the methods/steps needed for Corporate Infrastructure Protection Planning (CIPP).

Not so obvious is that an e-platform that is capable of hosting RBV (Resource-Based View) is the platform of choice for practicing both CSP and CIPP.

Strategic Planning

The usual approach to strategic planning is

Infrastructure/Resources Inventory ->

   Strategy ->

      Initiatives ->

         Competitive Advantage

Reading between the lines, the process unfolds as follows:

  1. Corporations have finite infrastructure and scarce resources.
  2. Infrastructure/resources decay over time.
  3. The only way to increase competitive advantage is to evolve initiatives that allocate Infrastructure/resources and make good use of such infrastructure/resources.
  4. Each initiative needs a risk rating, ROI, goals/objectives, and a timeline. It is important to avoid biased mixes of initiatives across the range from “low risk, high return, short timeline” to “high risk, low return, long timeline”.
  5. Strategies/initiatives/goals/objectives need periodic monitoring and, where needed, adjustment.

Corporate Infrastructure Protection Planning

The model for Corporate Infrastructure Protection planning is

External Events Inventory ->

   Risk Assessment of Possible Events ->

      Vulnerability Assessment/Remediation ->

         Event Detection ->

            Incident Avoidance/Counter Action(s)

If avoidance/counteraction tactics fail, the corporation moves forward to

Incident Management ->

   Recovery

Reading between the lines, the process unfolds as follows:

  1. In the area of Infrastructure Protection, events come from the outside. Usually, there is no advance warning of an event.
  2. Events quickly escalate to incidents and then to damage/destruction.
  3. Corporations need to have an inventory of possible Events and a ready set of ways and means to prevent events from becoming incidents, plus, when this fails, ways and means of managing incidents and recovering from incidents.
  4. Corporate assets can have strong links – if one link in a value chain breaks, an entire set of assets can become dysfunctional or be destroyed.
  5. Events/incidents need to be pre-assessed for risk.
  6. Time is of the essence – some incidents require counteraction within seconds of the detection of events.
  7. Event detection is a 24 x 7 job.

Strategic Planning is key to building competitive advantage (i.e. strategy -> internal initiatives -> competitive advantage), whereas Infrastructure Protection Planning puts a focus on readiness and resilience (i.e. external events awareness, events, incidents, recovery).

What is remarkable is that corporations that subscribe to RBV (Resource Based View) can carry out strategic planning and infrastructure protection planning at the same e-platform.

The obvious starting benefit is a common inventory of corporate assets (i.e. assets to be deployed, assets to be protected).

Stay tuned to learn how you can build strategy and infrastructure protection plans in an e-platform (i.e. 3D free-form-search Knowledgebases).


Unlocking The Secrets To Building and Sustaining Competitive Advantage


Competitive Advantage is the result of better use of available Resources.

NOTE: In this article “Resources” picks up Infrastructure plus Renewable Resources plus Non-Renewable Resources.

The range of Resources for any corporation can include:

Capital, Access to Capital, Land, Equipment, Tools, Premises, Staff, Current Products/Services, Products / Services Under Development, Projects Awaiting Approval, Technology Trends, Changing Legislation, Competitors.

We know from RBV (Resource Based View) that corporations that are able to “view” all of their Resources tend to make better decisions regarding the build-up of a proper mix of initiatives that draw on these resources (i.e. avoid high-risk/low-return initiatives; avoid initiatives that tie up key resources for too long a period of time; terminate or cancel initiatives that are non-performing).

Clearly, Operations needs to put a dual focus on work that advances the state of initiatives and work that is supportive of ongoing initiatives (i.e. maintaining compliance with external rules and regulations).

A problem arises when Operations puts too sharp a focus on, for example, processes.

There is no direct path between “continuous process improvement” and success from the implementation of corporate initiatives. Whereas process improvement impacts efficiency, it only impacts effectiveness marginally.

The direct path from work to competitive advantage is as detailed below:

It’s not that difficult for an organization to transition to this model.

1. Actors who perform the work and oversee the progress of work need a workspace (commonly called a “Case”).

2. The workspace must have an undercurrent comprising

  • orchestration from background BPM,
  • governance at the Case/Initiative level,
  • workload management, i.e. RALB (Resource Allocation, Leveling, Balancing),
  • non-subjective assessment of progress toward meeting goals/objectives, i.e. FOMM (Figure of Merit Matrices).

See some 300+ articles on the importance of orchestration from workflows, governance, workload management and non-subjective approaches to decision-making, both for strategy development and for the achievement of operational efficiency and effectiveness:

https://kwkeirstead.wordpress.com/

Photo Credit: benwhitephotography


Problems with Throwaway Code in Transaction Processing Systems


It seems pointless in any transaction processing system to . . .

1) go to the trouble of only allowing data to be recorded via Official Interfaces,

2) go to the trouble of building Transaction Histories that allow recall of Sessions and viewing of data as it was, at the time it was collected, on the Form versions that were in service at the time,

then process transactions using code you build on-the-fly, and then throw away the code.

Part of the purpose of a Transaction History is to be able to re-trace the processing in the event of errors.

The problem goes beyond throwaway code – it extends to any local or remote system or application where you do not have absolute control over source code changes (i.e. an archive of all versions of the source).

So, the only practical option when accepting data from any local or remote system that is not part of your main transaction processing app is to carry out reasonableness checks on incoming data that confirm processing results are within range.

This is difficult. (e.g. if the temperature today is 32 degrees F and an app that maps your reading to degrees C returns 300 degrees C, you know something is not right; but if it returns 1 degree C instead of the correct 0 degrees C, the processing is also wrong, and a range check will not flag it).

A partial remedy in a BPMS is to position pre-processing and post-processing rules at process steps to carry out real-time “audits” on outgoing and incoming data.

Pre-processing: “Is it OK to engage processing at this step?”

Post-processing: “Are the calculated results from the local or remote external app within boundary conditions?”
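
Returning to the temperature example, a post-processing rule cannot prove that the remote conversion is correct, but it can flag results that fall outside boundary conditions. A minimal sketch in Python (field names and ranges are illustrative assumptions):

def ok_to_engage(case_data: dict, required=("reading_f", "sensor_id")) -> bool:
    # pre-processing rule: "Is it OK to engage processing at this step?"
    return all(case_data.get(f) not in (None, "") for f in required)

def within_bounds(celsius: float, low: float = -60.0, high: float = 60.0) -> bool:
    # post-processing rule: "Are the calculated results within boundary conditions?"
    # a 300 C reply to a 32 F reading fails this audit; a wrong-but-plausible 1 C
    # reply passes, which is why these checks are only a partial remedy
    return low <= celsius <= high

case = {"reading_f": 32.0, "sensor_id": "A-17"}        # field names are assumptions
if ok_to_engage(case):
    remote_result = 300.0                              # value returned by the external app
    print("accept" if within_bounds(remote_result) else "reject: out of range")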

Comment:

Note the reference to “official interfaces” above.

No organization today will allow data to be “poked” into its data structures – the reason is that such actions bypass in-line security.

It follows that the only acceptable approach to shipping/receiving transactions between any two systems involves use of an agreed-upon data transport envelope that publishers generate and subscribers import, presumably invoking appropriate pre-processing rules.
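
The envelope itself can be as simple as an agreed-upon JSON wrapper that the publisher generates and the subscriber validates before anything touches its own data structures. A hedged sketch – the field names below are assumptions, not a standard:

import json
from datetime import datetime, timezone

def publish(payload: dict, source: str, transaction_type: str) -> str:
    # the publisher wraps its data in the agreed-upon envelope
    envelope = {
        "source": source,
        "type": transaction_type,
        "sent_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    return json.dumps(envelope)

def subscribe(raw: str) -> dict:
    # the subscriber imports via the official interface and applies pre-processing
    # rules; nothing gets "poked" directly into its data structures
    envelope = json.loads(raw)
    assert envelope["type"] in {"new_order", "status_update"}, "unknown transaction type"
    assert envelope.get("payload"), "empty payload rejected"
    return envelope["payload"]

message = publish({"order_id": "A-1001", "qty": 4}, source="shop_floor",
                  transaction_type="new_order")
print(subscribe(message))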

The usual range of “official interfaces” includes:

  1. Direct keying at traditional User Interfaces (user logs in, loads an app, engages whatever processing options are available at the application menu bar and icon bar, then logs out).
  2. Portal access using a range of devices that are able to get to the Internet via a browser.
  3. Data exchange in/out of a generic data exchanger (using “push” or “pull”).

Augmenting Decision Support at Adaptive Case Management (ACM) Platforms


Traditional ACM(1) augments background BPM(2) decision-support at Cases via two methods: RALB(3) and FOMM(4).

  1. Allowing Users to micro-schedule assigned tasks (RALB).
  2. Allowing Case Managers to periodically assess progress toward meeting Case goals and objectives (FOMM).

Statistical Overlays

Augmented decision support can be provided at active Cases via statistical overlays of data mined across completed Cases:

  1. Provisional assignment of durations for not-yet-current tasks.
  2. Engagement of a CPM (Critical Path Method) algorithm that calculates actual/expected dates at the Case end node.

In practical use, this setup would provide advice/assistance as follows: engage this sub-pathway and get to Case closure in eight (8) weeks; engage another sub-pathway and get to Case closure in six (6) weeks.
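
A toy version of the overlay, in Python: mined average durations are provisionally assigned to not-yet-current tasks and a forward pass over each candidate sub-pathway yields an expected time to the Case end node. (Task names and durations are invented for illustration; a real Case graph is far messier.)

# average task durations (weeks) mined from completed Cases, assigned provisionally
durations = {"assess": 1, "design_a": 3, "build_a": 3,
             "design_b": 2, "build_b": 2, "close": 1}

# two candidate sub-pathways from the current task to Case closure
pathways = {
    "sub_pathway_A": ["assess", "design_a", "build_a", "close"],
    "sub_pathway_B": ["assess", "design_b", "build_b", "close"],
}

for name, tasks in pathways.items():
    # forward pass over a simple serial pathway: expected time to the Case end node
    print(name, sum(durations[t] for t in tasks), "weeks")
# -> sub_pathway_A 8 weeks, sub_pathway_B 6 weeks: the overlay advises taking B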

Caveat

Traditional CPM assumes a merge of all pathways to a single end node. Cases typically have multiple end nodes.

It follows that unless users are prompted to indicate one or more successor nodes at each Case end node, at the time the node is declared to be “complete”, the CPM algorithm will not be able to calculate the “critical path”.

Probabilistic Branching Overlays

Given that, unlike CPM, engagement of some divergent sub-pathways under ACM is optional, data mining can further augment decision support at ACM-platform branching decision boxes via probabilistic overlays (i.e. users chose option “A” 40% of the time and option “B” 60% of the time).

Clearly, some filtering is required when data mining (i.e. exclude Cases that did not go to successful closings; exclude branching decision box probabilistic overlay options that have low rates of reported use).

Note that if seasonal filtering is in effect for data mining, a 40/60% overlay for “summer” can easily display/shift to 20/80% or 80/20% for “winter”, depending on the focus of a Case.
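
A hedged sketch of how such an overlay could be mined – Cases that did not close successfully are filtered out, and the counts can be re-cut by season (option labels and records below are invented for illustration):

from collections import Counter

# mined records of the option taken at one branching decision box (invented data)
mined = [
    {"option": "A", "season": "summer", "closed_ok": True},
    {"option": "B", "season": "summer", "closed_ok": True},
    {"option": "B", "season": "summer", "closed_ok": True},
    {"option": "B", "season": "winter", "closed_ok": True},
    {"option": "A", "season": "winter", "closed_ok": False},   # filtered out below
]

def overlay(season: str) -> dict:
    # exclude Cases that did not go to successful closings, then re-cut by season
    picks = Counter(r["option"] for r in mined
                    if r["closed_ok"] and r["season"] == season)
    total = sum(picks.values())
    return {opt: round(100 * n / total) for opt, n in picks.items()}

print("summer", overlay("summer"))   # e.g. {'A': 33, 'B': 67}
print("winter", overlay("winter"))   # e.g. {'B': 100}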

Recommendation
If you do not currently have an initiative to improve Decision Making at your ACM platform, my recommendation is to study the potential of statistical overlays and probabilistic overlays before jumping onto the RPA and AI “bandwagons”.

 

Terms used in this Blog Post:

(1) ACM – Adaptive Case Management

(2) BPM – Business Process Management

(3) RALB – Resource Allocation, Leveling and Balancing

(4) FOMM – Figure of Merit Matrices


Is this where you want to be?


This is a pitch to consultants and business managers who subscribe to and use BPM (Business Process Management).

It turns out there are two flavors of BPM – “B(P)M” (business management that receives orchestration from process templates, process fragment templates, users, software and machines) and “(BP)M” (management by business processes).

Any similarity between the two ends here.

If you put a focus on B(P)M, this means you are subscribing to the use of BPM to provide background workflow orchestration at Cases – this impacts both efficiency and effectiveness.

(BP)M, on the other hand, puts too sharp a focus on processes – you impact efficiency, but only minimally impact effectiveness, unless your processes are all end-to-end processes.

If you are up to it, and your clients are on board, clearly, B(P)M is where you will want to be.

Transitioning from (BP)M to B(P)M requires a change in mindset, so here are a few tips & tricks.

Most (BP)M consultants come from a background of end-to-end processes. Mapped end-to-end processes are easy to roll out to production environments. You have one start task and one end task (e.g. “cutting the ribbon”). The objective is to get to the end task. Your process map details what, why, who and, to an extent, where and when.

B(P)M is different.

Much of the “process management” assistance that clients are looking for today is not in the area of end-to-end processes.

What we have today are “process fragments” that get threaded together at run time by workers, software and machines. Process fragments do not have plan-side objectives.

Under B(P)M, objectives become a property of the run-time Case, i.e. a patient, an insurance claim, a helicopter under MRO. For discussion purposes here, Case is not equivalent to “use case”.

Process fragments continue the tradition of what, why, who, where and when, except that some of the interventions are now ad hoc interventions. Tasks at Cases become a mix of structured and unstructured interventions – this adds flexibility to “process management” (i.e. Case management, really), but it also adds variability to where and when.

“Ribbon-cutting” at a Case under B(P)M takes place when the Case Manager closes the Case – no exceptions!

As and when you transition to B(P)M, your legacy BPMS will need major surgery.

As explained, your new BPMS will need to accommodate any mix of structured and unstructured Case interventions. Fortunately, once you realize that a process of one step still is a process, no accommodation is needed so long as your BPMS provides workflow and workload functionality.

Seamless threading of process fragments adds a bit of complexity.

Since you can no longer rely on logic connections between tasks to guide all processing, each process fragment needs a rule set at its start task so that the task can report “OK to engage processing” or “NOT OK to engage processing”.
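
In code terms, such a rule set is just a function over Case data that the engine consults before threading the fragment in. A minimal sketch, with invented field names:

def start_task_rule_set(case_data: dict) -> str:
    # rule set at the start task of a process fragment; field names are assumptions
    if not case_data.get("claim_registered"):
        return "NOT OK to engage processing"    # prerequisite fragment has not run yet
    if case_data.get("claim_amount", 0) <= 0:
        return "NOT OK to engage processing"    # nothing to adjudicate
    return "OK to engage processing"

print(start_task_rule_set({"claim_registered": True, "claim_amount": 1200}))
print(start_task_rule_set({"claim_registered": False}))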

Your tasks become more data-driven under B(P)M.

However, manual override by a user/Case Manager is always an option, i.e. skip the task – at some risk/peril to all stakeholders . . .

It’s worthwhile here to elaborate on the term “data-driven”.

Whereas, with end-to-end BPM and legacy BPMSs, data flows take place along process pathways, under B(P)M all non-instance-specific data at a Case can be accessed by a process fragment rule set.

Rule sets in B(P)M are pervasive – they can be found upstream from tasks, at tasks, and immediately downstream from tasks. They can be found at branching decision boxes and are essential at most loop back constructs to prevent churning.

Finally, your BPMS needs R.A.L.B. and F.O.M.M.

You can’t properly manage work at Cases without R.A.L.B. (Resource Allocation, Leveling and Balancing) – absent R.A.L.B., for anything beyond a moderately complex workflow you become unable to carry out workload management.

Re F.O.M.M. (Figure of Merit Matrices) – you won’t get consistency across Cases if Case Managers cannot get a “second opinion” from F.O.M.M. The unique contribution of F.O.M.M. is to make decision-making non-subjective.

Now, before leaving this space, click on the link below – you will get to a hard-to-find music video that I feel is fantastic.

Title Inspiration:

Kacey Musgraves & Willie Nelson “Are you sure this is where you want to be”

https://music.youtube.com/watch?v=cDCFjYVKAkY&feature=share


How do we get to Excellence from Efficiency and Effectiveness?


In business, efficiency is the easiest journey; the journey to effectiveness is more difficult.

Achieving excellence requires all-hands-on-deck, in my view, and is therefore more difficult to achieve.

One thing I recall from my days at GE was my boss encouraging all design engineers to do daily walkabouts, to see firsthand the impact of their designs on production-line work/workers.

Walkabouts are helpful for discovering reasons for inconsistencies in finished products and, in a few cases, to discover and explain consistencies.

In the plant where I worked, one of the products was watt-hour meters and these, at the time, required magnets.

For years HQ could not understand why the quality of the magnets was so high, relative to other watt-hour production plants.

A walkabout revealed that the blacksmith (yes, they had a blacksmith in those days), whose shift was 8 to 4, same as everyone else’s, actually came in at 6 AM to start up his hearth and stayed on the job until 6 PM to clean up.

The effect was a steady state by 8 AM and readiness to start the next day by 6 PM.

I have been a fan of walkabouts (along production facilities and in the office) since that time.


The Nature of Strategic Decision Making


Much of what I read about “Business Management/Decision Making” seems to be written by folks who have little experience in business management.

Business Management is all about Decision Making (but not only about decision making).

The purpose of this article is to put a focus on strategic decision-making – specifically how strategic decisions are made in real corporate life.

Corporations evolve strategies and allocate resources to Initiatives by way of ROI (Return on Investment) submissions/ authorizations.

All Initiatives have goals/objectives. All Initiatives have time spans. If you see one that appears to go on and on, you are looking at an initiative that receives extensions to previous allocations of resources.

There is no point doing work that does not contribute to advancing the state of an Initiative toward its goals/objectives.

Progress toward initiative goals/objectives is non-linear. Generally, it follows an “S” curve (slow to achieve liftoff, followed by rapid progress, only to slow down toward the end of implementation).

Work involves steps, different steps require different resources and most work benefits from consistent use of “best practice” protocols. Some work is unstructured.

Decisions along Initiative timelines must be made before steps, at steps and after steps in order to maintain forward momentum of Initiatives.

Decision-making is the transformation of information into action.

I count six (6) sources of information for decision-making (knowledge, experience, intuition, wisdom, data/analytics, and rule sets/algorithms).

Good decisions are generally the result of reliance on more than one of the six (6) sources of information.

  1. Knowledge maps easily to information providing the decision-maker understands what specific knowledge he/she has access to (i.e. known knowns, known unknowns, unknown knowns, unknown unknowns).
  2. Experience maps to information when such experience was gained dealing with initiatives similar to the one that has the focus.
  3. Intuition maps to information when the decision-maker has a good track record relying on intuition at prior initiatives.
  4. Wisdom is a state of maturity that some people reach – in respect of decision making it has two manifestations, i.e. knowing what to do and knowing what not to do.
  5. Data/analytics maps to information when the data is good and the analysis is sound.
  6. Rule-sets map well to initiatives when data is within the boundary conditions of the rule sets or when an algorithm working on the same type/quality of data has yielded good decisions.

Final points . . . .

Decisions typically get made when they need to be made.

Many decisions are made in the absence of adequate information, without consideration of associated risk/uncertainty and without consideration of the amount of resources they tie up (i.e. from low risk/short timeline/high return to high risk/long timeline/low return).

When you are making decisions, bear in mind Donald Rumsfeld’s 4Ks (known knowns, known unknowns, unknown knowns, unknown unknowns). What you don’t know will hurt you!

Another good piece of advice is to bear in mind that if you cannot see the resources you are committing to initiatives, the quality of any decisions you make will be diminished.

The core message of the RBV (Resource Based View) methodology is “… it is difficult to make decisions when you cannot see the resources that will be impacted by such decisions”.

See “Decisions, Decisions, Decisions” (2014-12-02) for an operational perspective on decision-making.

https://wp.me/pzzpB-CX

 


Protect and Serve – The search for efficiency and effectiveness


Police Departments have the same overall focus as private sector corporations:

1) evolving strategies that are supportive of a mission, then defining and putting in place initiatives that make good use of available scarce resources.

2) achieving operational efficiency and effectiveness.


Photo Jan-Gottweis

 

Whereas corporations bridge the gap between operations and strategy using ROI requests/approvals, PDs strive to eliminate/avoid any gap between operations and strategy by way of operational adherence to published policy and procedures.

In order for this to happen, Policy and Procedure (P&P) must exist and be readily available to all members of any individual or team response to incidents and be readily available for staff tasked with managing Cases.

P&P can be evolved using a range of Document Management Systems.

Assuming a common set of services across same-size-city PDs, three approaches can be used for writing/distributing P&P.

a) independent research.

b) reference to policy models (i.e. IACP).

c) construction of a Kbase featuring P&P from other same-size-city PDs.

Our preference in providing consulting services to PDs is option c), where, subject to copyright approval, we provide our clients with a Kbase comprising full-text P&P content from 10-20 PDs. The client can then extract their own P&P from the DMSs they are using (e.g. PowerDMS), add it to the Kbase, and proceed to carry out full-text searches across the content of the Kbase.

Typical questions are:

  1. Do we have P&P for terrorist drone attacks?
  2. What is covered and to what level of detail?
  3. Are any updates to our P&P appropriate for “terrorist drone attacks”?

Regarding availability of P&P, there are currently three options for rollout:

a) “off-line” (printed manuals),

b) “on-line” (portal access),

c) “in-line” (rollout of P&P in the form of checklists, or real-time posting of P&P content task-by-task as tasks become current along the incident or Case timeline, in both cases with data capture facilities).

Our preference for rollout is, again, the last listed option (i.e. “in-line”).

This involves mapping P&P narratives to workflows, followed by software carving up the workflows into tasks according to skill contribution or administrative level of approval. A workload management engine then posts individual tasks to the attention of staff for information and action as these tasks become “current” along the workflow timeline.
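
A minimal sketch of the “in-line” idea – P&P narrative mapped to tasks, each task tagged with the skill or approval level it needs, and tasks posted to the matching staff queue only as they become current (task names, skills and the queue structure are illustrative assumptions):

# P&P narrative mapped to tasks, each tagged with the skill (or approval level) it needs
pp_workflow = [
    {"task": "secure perimeter",        "skill": "patrol"},
    {"task": "log evidence",            "skill": "forensics"},
    {"task": "approve media statement", "skill": "command"},
]

queues = {"patrol": [], "forensics": [], "command": []}

def post_current_task(index: int) -> None:
    # the workload management engine posts a task to the matching staff queue
    # only when it becomes current along the incident/Case timeline
    task = pp_workflow[index]
    queues[task["skill"]].append(task["task"])

post_current_task(0)     # incident opens: the first task becomes current
print(queues)            # {'patrol': ['secure perimeter'], 'forensics': [], 'command': []}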

“In-Line” improves incident response and Case decision-making (performing the right tasks, at the right time, using the right resources, using the right forms), with auto-consolidation of all data from all tasks to a command and control Incident/Case Management platform and Incident/Case History.

Errors and omissions decrease dramatically as a result of orchestration (i.e. auto-task posting) plus governance (i.e. rule sets that operate on any data that is input by staff).

As usual, methods are NOT totally portable from the private sector to the public sector.

Whereas in a private sector setting governance plays the role of accommodating deviation away from P&P so long as extreme, unwanted deviation does not take place, governance in the public sector needs to be tighter.

Secondly, whereas corporations find it useful to consolidate and trend key indicators to a Kbase, Police Departments want full-text Case content at Kbases so that staff can “connect-the-dots” across active and cold Cases.

Here is a screenshot of a Policy and Procedure Kbase that has links to various resources plus links to 10 published major-city Police Department P&P data sets (about 5,000 documents in all, with the potential to go to 15,000 documents).

For more information on Kbase construction and the use of 3D free-form-search knowledgebases to increase operational efficiency and effectiveness within police departments, call Civerex at 1+ 800 529 5355 (USA) or 1+ 450 458 5601 (elsewhere).

