Avoiding Transformation Program Fails


McKinsey & Company claimed in 2013 that 70% of transformation programs fail (AIPMM, September 2013).

https://www.slideshare.net/aipmm/70-26633757

The McKinsey breakdown of the causes of failure was as follows: Employee Resistance 39%; Lack of Management Support 33%; Inadequate Resources or Budget 14%; Other 14%.

I don’t see much evidence that this has changed over these past seven years.

My first takeaway from viewing the slide presentation was that the failures were mostly about not taking the time up-front to model the run-time platform users would eventually be using.

Since work is all about converting information into action, system designers should reasonably look to a generic workflow/workload management platform where organizational processes can be mapped, rolled out and made available to users for modeling purposes.  If the user perception is that work is easier under the new system than under the current system, employee resistance is likely to decrease.

The mapping should be done by the users, with a facilitator present.  Clearly, having to learn a notation or language in order to be able to map out a process is a turn-off.  The same goes for having to build/maintain database tables, build data display/data collection Forms at process steps, and put in place any required rule sets.

The solution here is to have an initial modeling phase that uses images of eventual forms and summary narrative descriptions of rule sets. Real forms and rule sets can come later.
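
By way of illustration only (the class and field names below are invented, not any particular platform's schema), a modeling-phase step needs nothing more than a pointer to a form image and a plain-language summary of its rule set:

```python
from dataclasses import dataclass

@dataclass
class ModelingStep:
    """A process step as captured during the initial modeling phase."""
    name: str
    form_image: str       # path/URL to an image of the eventual Form
    rule_narrative: str   # plain-language summary; the real rule set comes later
    routing: str = "all"  # placeholder routing until skills are assigned

intake = ModelingStep(
    name="Record customer request",
    form_image="images/intake_form_mockup.png",
    rule_narrative="If the request is for an existing product, route to Sales; otherwise route to Engineering.",
)
```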

My second takeaway was that a good portfolio management  platform (proposed initiatives, threat assessments, ROI/SROI and explicit funding/project auditing mechanisms) relieves Management of the need to provide ongoing “support”.  That said, the fast pace of business today requires that Management continuously scan the horizon for opportunities/problems and monitor the progress of initiatives during implementation phases.

Stay tuned for a description of orderly processes for planning and managing Transformation Programs.

“Success Evolving Strategies (methods, tools, platforms)”

https://wp.me/pzzpB-13d

Then, a second article

“Success Achieving Operational Efficiency and Effectiveness (methods, tools, platforms)”

https://wp.me/pzzpB-13i


How to Install BPM in a Corporation



It is difficult to recommend a “best” way to install BPM in a corporation.

The approach needs to vary with the consultant, and vary with the client.

One-size-fits-all ends up, predictably, fitting none of the players.

Photo Credit: Jill Wellington

Here below is the script we recommend for our consultants:

1.

Do a live session with a few stakeholders where you build a small workflow (10-20 steps) of the client’s choice (steps, directional arrows, all steps using the same placeholder Form, all steps with the same routing i.e. “all”).

Make sure the workflow has at least one (1) manual branching-decision-box.

Compile the workflow and roll it out as a process template to an ACM/BPM workflow/workload platform.

Stream a Case record onto a template instance to generate a private instance.

As the BPM engine posts tasks or steps according to the logic of your template instance, record some data at the placeholder Form at each step, then commit the step.

Make it clear to the stakeholders that background workflow automation causes the next step along the template instance to post automatically as soon as the current step is “complete”.

Go to the History to show the stakeholders that the platform contemporaneously records the data entered at the placeholder Form at each step, complete with a system-generated date/time stamp.

[quick build – this session should not run more than 1 hour]
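
For readers who like to see moving parts, here is a minimal sketch of what the quick-build exercise amounts to (class and field names are invented for illustration and are not the API of any specific ACM/BPM product): a compiled template is a list of steps, streaming a Case creates a private instance, and committing the current step writes the Form data to the History with a date/time stamp and posts the next step.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Step:
    name: str
    routing: str = "all"      # in session #1 every step uses the same routing

@dataclass
class CaseInstance:
    case_id: str
    steps: list               # private copy of the compiled template
    current: int = 0
    history: list = field(default_factory=list)

    def commit_current(self, form_data: dict):
        """Record the placeholder-Form data, time-stamp it, then auto-post the next step."""
        step = self.steps[self.current]
        self.history.append({
            "step": step.name,
            "routing": step.routing,
            "data": form_data,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        self.current += 1     # background workflow automation posts the next step

template = [Step("Receive order"), Step("Check stock"), Step("Ship")]
case = CaseInstance("CASE-001", steps=list(template))
case.commit_current({"memo": "order received by phone"})
case.commit_current({"memo": "stock confirmed"})
print(case.history)           # the History: data per step, with system-generated date/time stamps
```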

2.

Now, repeat #1, where each template instance step has been assigned a “skill” Routing (i.e. replace “all” with  “contracts”, “design”, “production”, or “shipping”).

Compile, rollout and model the workflow with the stakeholders.

[illustrate Routing as an attribute of workflow steps]
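
Continuing the sketch (illustrative names only), a skill Routing simply means a posted step is offered only to users whose skill set matches the step's routing attribute:

```python
def tasks_for_user(posted_steps, user_skills):
    """Offer a posted step only to users whose skills match its routing attribute."""
    return [s["name"] for s in posted_steps
            if s["routing"] == "all" or s["routing"] in user_skills]

posted = [{"name": "Draft contract", "routing": "contracts"},
          {"name": "Pack order", "routing": "shipping"}]
print(tasks_for_user(posted, {"shipping"}))   # ['Pack order'] - the contracts step stays hidden
```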

3.

Assuming the stakeholders have provided these in advance, replace the placeholder Forms along the workflow with images of the Forms needed at each step in #1.

Make sure there is an extra “memo” Form field at each Form. This is very useful during modeling to capture notes (e.g. wrong Form, missing Fields, wrong Routing, wrong sequencing of steps, etc.).

[illustrate Forms as an attribute of workflow steps]

4.

Upgrade your workflow by adding a Rule Set that automates your branching decision box step. Set the routing at the branching decision box to “system”.

Install a rule at each option within the Yes/No decision box so that a Yes/No value picked up at, say, a combobox on an upstream Form automatically resolves the decision box when it becomes “current” along the template instance.

Compile, rollout. . . .

[introduce the client to workflow branching/re-merging]
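
A hedged sketch of what such a rule might look like (the field name and branch labels are made up for illustration): when the decision box becomes current, the rule reads the Yes/No value captured at an upstream Form and resolves the branch with no user involvement.

```python
def resolve_decision_box(case_data: dict) -> str:
    """System-routed Yes/No decision box: pick the branch from an upstream combobox value."""
    answer = case_data.get("credit_check_passed")   # value recorded at an upstream Form
    if answer == "Yes":
        return "branch_approve"
    if answer == "No":
        return "branch_reject"
    raise ValueError("Upstream combobox value missing - the decision box cannot auto-resolve")

print(resolve_decision_box({"credit_check_passed": "Yes"}))   # -> branch_approve
```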

5.

Show the stakeholders how to build a custom workflow step Form and encode this to one or more steps.

Make sure there are at least two steps that feature the same step Form to illustrate that data recorded at the upstream step flows along the workflow to downstream step(s).

Compile, rollout . .

[upgrade of #3]
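
One way to picture the "data flows along the workflow" behaviour (a sketch, not any product's actual storage model): fields recorded on a step Form are written to Case-level storage, so a downstream step carrying the same Form simply reads back what the upstream step wrote.

```python
case_fields = {}                      # Case-level storage shared by all steps

def commit_step(step_name: str, form_values: dict):
    """Write this step's Form values to the Case so downstream steps can see them."""
    case_fields.update(form_values)

commit_step("Quote", {"unit_price": 125.0, "quantity": 40})
commit_step("Invoice", {})            # same Form downstream: price/quantity already populated
print(case_fields["unit_price"] * case_fields["quantity"])   # 5000.0
```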

6.

Now, compile, roll out and process a template instance for your latest upgrade of your workflow where you skip a step, re-visit an already completed step, and insert a step not in your workflow.

Point out that Rules can be put in place at steps to guard against extreme, unwanted excursions away from “best practices”.

[demonstrate that your solution accommodates excursions away from “best practices”]

7.

Illustrate re-direction of steps where a supervisor overrides the auto-routing by assigning a step to a named user.

[demonstrate escalation as part of RALB (Resource Allocation, Leveling and Balancing)]

8.

Explain that “Case Managers are the folks who close Cases” and that some non-subjective means is needed to show progress along a workflow, typically along an “S” curve.

[illustrate non-subjective decision-making for Case closing]
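
As a loose illustration of what "non-subjective" can mean here (this is not a description of FOMM itself; the weights and the planned figure are invented), progress can be computed as the weighted fraction of committed steps and compared against the planned value read off the "S" curve:

```python
# Each workflow step carries a weight; progress is the weighted share of committed steps.
steps = [
    {"name": "Intake",       "weight": 1, "committed": True},
    {"name": "Assessment",   "weight": 3, "committed": True},
    {"name": "Intervention", "weight": 4, "committed": False},
    {"name": "Close-out",    "weight": 2, "committed": False},
]

progress = sum(s["weight"] for s in steps if s["committed"]) / sum(s["weight"] for s in steps)
planned_at_week_4 = 0.55     # read off the planning "S" curve - invented figure
print(f"{progress:.0%} complete vs {planned_at_week_4:.0%} planned")   # 40% vs 55% - behind plan
```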

On-site vs. GoToMeeting or equivalent web-based sessions?

Make sure with remote sessions that you can accomplish a few of the #2-#8 steps in no more than one hour. Combine steps where possible but avoid long drawn-out remote sessions as some stakeholders have short attention spans.

Onboarding stakeholders

During client sessions, let a stakeholder take over your mouse/ keyboard.  If the client is willing, let them do some of the processing at steps #1 and #2. The more stakeholders participate, the greater their level of confidence, sustained onboarding, etc.

Increasing the chances of a successful ACM/BPM implementation

Once the corporation is on board with ACM/BPM, make sure that top management understands that a separate set of methods/tools/software/platforms can (and should) be put in place for building, sustaining and augmenting strategic competitive advantage (infrastructure/resource inventory, risk assessment, vulnerabilities, remediation, funding initiatives).

Explain that the way to narrow the gap between strategy->operations and operations->strategy is for top management/operations to assess initiatives, fund the more promising of these and set the focus at Cases on Initiatives, not processes.

Do not hard-sell this to the point of over-complicating and losing the operational efficiency/effectiveness engagement.

See 300+ blog posts that go back to 2010 – many relate to BPM.

http://www.kwkeirstead.wordpress.com


Corporate Agility – where the rubber meets the road . .


Corporate agility is essential today for building, sustaining and augmenting corporate strategy and for operational workflow/workload management (i.e. achieving operational efficiency & effectiveness).

It seems that today, as a result of the persistence of BPM (Business Process Management) practitioners over 20 years, corporations are in good shape to achieve agility in the operations area (no-code/low-code for process mapping and re-mapping; run-time workflow/workload management platforms; and, to an increasing extent, real-time predictive analytics decision-support).

With minor exceptions, things are not at all as advanced in the area of strategy building.

Consider the need for corporations to evolve strategy and operational responses to cyber and terrorist attacks. If you go to Wikipedia, you will see that, in respect of the latter, there are 2,000+ incidents per year from 4,000+ terrorist groups.

Suppose you have mining operations in a country like Burkina Faso, West Africa. Your strategic variables are available national infrastructure protection plus corporate-specific infrastructure protection plans and event/incident response tactics.

Most corporations lack the resources/skills to evolve comprehensive Infrastructure Protection Plans on their own, so they have to rely on external security consultants.  Most of these groups are selling man-hours, so their preference is to arrive and never leave.

Civerex and its partners HNCT/GI take corporations through the same readiness/resilience process, but our end objective is tech transfer that allows our clients, going forward, to build/own/manage their own strategic and tactical plans for threat identification, threat assessment, vulnerability identification/remediation, readiness and response.

Most of the work involves no code, no database construction, no application system building, but significant semi-real-time data collection setup and frequent tweaks by clients to their strategy and operating procedures.

An Infrastructure Protection Plan (IPP)  Kbase can easily comprise 50,000 data points, inclusive of protocols for emergency response to chemical, biological and radiological attacks.

Worldwide 3D Kbase IPP template plus corporate-specific template.


Engaging a search for “Burkina Faso” – notice that the search results show the country of interest, nearby countries, terrorist groups operating within the country of interest and incidents at the country of interest, and that filtering has reduced the number of entities of interest from 9 to 4.

Right-clicking on the search node (green node) takes the user to a listbox of the “hits” across the highlighted entities for easy browsing of the node content and any linked URLs and linked files.


Corporate Infrastructure Protection


Corporations have finite Infrastructure and scarce Resources.


Corporations need methods to evolve strategic internal initiatives from their Infrastructure and Resources.

Corporations need methods to protect their Infrastructure and Resources from external initiatives (i.e. unwanted events and incidents).

The purpose of this new series of posts is to demonstrate that corporate strategic planning and corporate infrastructure protection planning can/should be carried out within the same e-platform.

It will immediately become obvious that the methods/steps for Corporate Strategic Planning (CSP) are different from methods/steps needed for Corporate Infrastructure Protection Planning (CIPP).

Not so obvious is that an e-platform that is capable of hosting RBV (Resource-Based View) is the platform of choice for practicing both CSP and CIPP.

Strategic Planning

The usual approach to strategic planning is

Infrastructure/Resources Inventory ->

   Strategy ->

      Initiatives ->

         Competitive Advantage

Reading between the lines, the process unfolds as follows:

  1. Corporations have finite infrastructure and scarce resources.
  2. Infrastructure/resources decay over time.
  3. The only way to increase competitive advantage is to evolve initiatives that allocate Infrastructure/resources and make good use of such infrastructure/resources.
  4. Each initiative needs a risk rating, ROI, goals/objectives, and a timeline. It is important to avoid biased mixes of initiatives within the range “low risk, high return, short timeline” and “high risk, low return, long timeline” (see the sketch after this list).
  5. Strategies/initiatives/goals/objectives need periodic monitoring and, where needed, adjustment.
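
A toy illustration of point 4 (the initiative records and the 50% threshold are invented): classify each initiative by risk, return and timeline, then flag a portfolio that is skewed toward one end of the range.

```python
initiatives = [
    {"name": "ERP upgrade",    "risk": "high", "roi": 0.08, "months": 24},
    {"name": "New e-store",    "risk": "low",  "roi": 0.35, "months": 6},
    {"name": "Plant retrofit", "risk": "high", "roi": 0.10, "months": 30},
]

high_risk_share = sum(i["risk"] == "high" for i in initiatives) / len(initiatives)
if high_risk_share > 0.5:
    print(f"Portfolio skewed: {high_risk_share:.0%} of initiatives are high risk / long timeline")
```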

Corporate Infrastructure Protection Planning

The model for Corporate Infrastructure Protection planning is

External Events Inventory ->

   Risk Assessment of Possible Events ->

      Vulnerability Assessment/Remediation ->

         Event Detection ->

            Incident Avoidance/Counter Action(s)

If avoidance/counteraction tactics fail, the corporation moves forward to

Incident Management ->

   Recovery

Reading between the lines, the process unfolds as follows:

  1. In the area of Infrastructure Protection, most, but not all, events come from the outside. Usually, there is no advance warning of an event.
  2. Events quickly escalate to incidents and then to damage/destruction.
  3. Corporations need to have an inventory of possible Events and a ready set of ways and means to prevent events from becoming incidents, plus, when this fails, ways and means of managing incidents and recovering from incidents.
  4. Corporate assets can have strong links – if one link in a value chain breaks, an entire set of assets can become dysfunctional or be destroyed.
  5. Events/incidents need to be pre-assessed for risk.
  6. Time is of the essence – some incidents require counteraction within seconds of the detection of events.
  7. Event detection is  a 24 x 7 job.

Strategic Planning is key to building competitive advantage (i.e. strategy -> internal initiatives -> competitive advantage), while Infrastructure Protection Planning puts a focus on readiness and resilience (i.e. external events awareness, events, incidents, recovery).

What is remarkable is that corporations that subscribe to RBV (Resource Based View) can carry out strategic planning and infrastructure protection planning at the same e-platform.

The obvious starting benefit is a common inventory of corporate assets (i.e. assets to be deployed, assets to be protected).

Stay tuned to learn how you can build strategy and infrastructure protection plans in an e-platform (i.e. 3D free-form-search Knowledgebases).


Unlocking The Secrets To Building and Sustaining Competitive Advantage


Competitive Advantage is the result of better use of available Resources.

NOTE: In this article “Resources” covers Infrastructure plus Renewable Resources plus Non-Renewable Resources.

The range of Resources for any corporation can include:

Capital, Access to Capital, Land, Equipment, Tools, Premises, Staff, Current Products/Services, Products / Services Under Development, Projects Awaiting Approval, Technology Trends, Changing Legislation, Competitors.

We know from RBV (Resource Based View) that corporations that are able to “view” all of their Resources tend to make better decisions regarding building up a proper mix of initiatives that draw on these resources (i.e. avoid high risk / low return initiatives; avoid initiatives that tie up key resources for too long a period of time; terminate initiatives that are non-performing).

Clearly, Operations needs to put a dual focus on work that advances the state of initiatives and work that is supportive of ongoing initiatives (i.e. maintaining compliance with external rules and regulations).

A problem arises when Operations puts too sharp a focus on, for example, processes.

There is no direct path between “continuous process improvement” and success from the implementation of corporate initiatives.  Whereas process improvement impacts efficiency, it only impacts effectiveness marginally.

The direct path from work to competitive advantage is as detailed below:

It’s not that difficult for an organization to transition to this model.

1.
Actors who perform the work and oversee the progress of work need a workspace (commonly called “Case”).

2.
The workspace must have an undercurrent comprising

  • orchestration from background BPM,
  • governance at the Case\Initiative level,
  • workload management, i.e. RALB (Resource Allocation, Leveling, Balancing) – see the sketch following this list,
  • non-subjective assessment of progress toward meeting goals/objectives i.e. FOMM (Figure of Merit Matrices).
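
Purely as a generic sketch of the workload-leveling part of RALB (not the actual algorithm of any platform; names and counts are invented), a newly posted task can be offered to the least-loaded user who carries the required skill, with escalation when nobody qualifies:

```python
def assign_task(task_skill, users):
    """Level workload: give the task to the least-loaded user holding the required skill."""
    qualified = [u for u in users if task_skill in u["skills"]]
    if not qualified:
        raise LookupError(f"No user holds skill '{task_skill}' - escalate to a supervisor")
    return min(qualified, key=lambda u: u["open_tasks"])["name"]

users = [{"name": "Ana", "skills": {"contracts", "design"}, "open_tasks": 4},
         {"name": "Raj", "skills": {"contracts"},           "open_tasks": 2}]
print(assign_task("contracts", users))   # Raj - fewer open tasks
```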

See some 300+ articles on the importance of orchestration from workflows, governance, workload management and non-subjective approaches to decision-making, both for strategy development and for the achievement of operational efficiency and effectiveness:

https://kwkeirstead.wordpress.com/

Photo Credit:
benwhitephotography


Problems with Throwaway Code in Transaction Processing Systems


It seems pointless in any transaction processing system to . . .

1) go to the trouble of only allowing data to be recorded via Official Interfaces,

2) go to the trouble of building Transaction Histories that allow recall of Sessions and viewing of data, as it was, at the time it was collected, on the Form versions that were in service at the time,

then process transactions using code you build on-the-fly and then throw away the code.

Part of the purpose of a Transaction History is to be able to re-trace the processing in the event of errors.

The problem goes beyond throw-away-code – it extends to any local or remote system or application where you do not have absolute control over source code changes (i.e. an archive of all versions of the source).

So, the only practical option when accepting data from any local or remote systems, not part of your main transaction processing app, is to carry out incoming data reasonableness checks that confirm that processing results are within range. 

This is difficult (e.g. if the temperature today is 32 degrees F and an app that maps your reading to degrees C returns 300 degrees C, you know something is not right; but if it returns 1 degree C, the processing is also wrong – the correct value is 0 degrees C – and the error is much harder to spot).

A partial remedy in a BPMS is to position pre-processing and post-processing rules at process steps to carry out real-time “audits” on outgoing and incoming data.

Pre-processing: "Is it OK to engage processing at this step?"

Post-processing: "Are the calculated results from the local or remote external app within boundary conditions?"
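
A minimal sketch of the two audit points, using the 32 degrees F example from above (the boundary values and tolerance are assumptions): the pre-processing rule decides whether the step may engage at all, and the post-processing rule checks that what comes back from the external app is plausible.

```python
def pre_process_ok(reading_f: float) -> bool:
    """Is it OK to engage processing at this step? Reject readings outside a plausible range."""
    return -80.0 <= reading_f <= 140.0            # plausible outdoor air temperatures, deg F

def post_process_ok(reading_f: float, returned_c: float, tolerance: float = 0.5) -> bool:
    """Are the external app's results within boundary conditions? Recompute and compare."""
    expected_c = (reading_f - 32.0) * 5.0 / 9.0
    return abs(returned_c - expected_c) <= tolerance

print(post_process_ok(32.0, 300.0))   # False - obviously wrong
print(post_process_ok(32.0, 1.0))     # False - subtly wrong (the correct value is 0 degrees C)
```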

Comment:

Note the reference to “official interfaces” above.

No organization today will allow data to be “poked” into its data structures – the reason is that such actions bypass in-line security.

It follows that the only acceptable approach to shipping/receiving transactions between any two systems involves use of an agreed-upon data transport envelope that publishers generate and subscribers import, presumably invoking appropriate pre-processing rules.
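
As a sketch of what an agreed-upon envelope might carry (field names are illustrative; a real interface would be defined jointly by publisher and subscriber): the publisher wraps the payload with provenance and a schema version, and the subscriber validates the wrapper before handing the payload to its pre-processing rules.

```python
import json
from datetime import datetime, timezone

def publish(payload: dict, source_system: str) -> str:
    """Publisher side: wrap the payload in the agreed-upon transport envelope."""
    envelope = {
        "schema_version": "1.0",
        "source": source_system,
        "sent_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    return json.dumps(envelope)

def subscribe(raw: str) -> dict:
    """Subscriber side: validate the envelope before invoking pre-processing rules."""
    envelope = json.loads(raw)
    if envelope.get("schema_version") != "1.0":
        raise ValueError("Unsupported envelope version - reject; never poke data into tables")
    return envelope["payload"]

print(subscribe(publish({"order_id": 42, "qty": 7}, source_system="remote-ERP")))
```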

The usual range of “official interfaces” includes:

  1. Direct keying at traditional User Interfaces (user logs in, loads an app, engages whatever processing options are available at the application menu bar and icon bar, then logs out).
  2. Portal access using a range of devices that are able to get to the Internet via a browser.
  3. Data exchange in/out of a generic data exchanger (using "push" or "pull").

Augmenting Decision Support at Adaptive Case Management (ACM) Platforms


Traditional ACM(1) augments background BPM(2) decision-support at Cases via two methods: RALB(3) and FOMM(4).

  1. Allowing Users to micro-schedule assigned tasks (RALB).
  2. Allowing Case Managers to periodically assess progress toward meeting Case goals and objectives (FOMM).

Statistical Overlays

Augmented decision support at active Cases can be provided via statistical overlays of data mined across completed Cases:

  1. Provisional assignment of durations for not-yet-current tasks.
  2. Engagement of a CPM (Critical Path Method) algorithm that calculates actual/expected dates at the Case end node.

In practical use, this setup would provide advice/assistance as follows: engage this sub-pathway and get to Case closure in eight (8) weeks; engage another sub-pathway and get to Case closure in six (6) weeks.
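
As a rough sketch of the overlay (the task graph and durations below are invented; in practice the durations would be statistical overlays mined from completed Cases), a plain forward pass over the remaining tasks yields the expected finish at the Case end node, and running it once per candidate sub-pathway gives the "eight weeks versus six weeks" style of advice.

```python
# Forward pass (the core of CPM): earliest finish at each node of a small task graph.
durations = {"A": 5, "B": 10, "C": 7, "End": 0}                 # days - invented figures
predecessors = {"A": [], "B": ["A"], "C": ["A"], "End": ["B", "C"]}

earliest_finish = {}
for node in ["A", "B", "C", "End"]:                             # topological order
    start = max((earliest_finish[p] for p in predecessors[node]), default=0)
    earliest_finish[node] = start + durations[node]

print(earliest_finish["End"])    # 15 - expected days to reach the Case end node
```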

Caveat

Traditional CPM assumes a merge of all pathways to a single end node. Cases typically have multiple end nodes.

It follows that unless users are prompted to indicate one or more successor nodes at each Case end node, at the time the node is declared to be “complete”, the CPM algorithm will not be able to calculate the “critical path”.

Probabilistic Branching Overlays

Given that, unlike CPM, ACM engagement of some divergent sub-pathways is optional, data mining can further augment decision support at ACM platform branching decision boxes via probabilistic overlays (i.e. users chose option “A” 40% of the time, users chose option “B” 60% of the time).

Clearly, some filtering is required when data mining (i.e. exclude Cases that did not go to successful closings; exclude branching decision box probabilistic overlay options that have low rates of reported use).

Note that if seasonal filtering is in effect for data mining, a 40/60% overlay for “summer” can easily display/shift to 20/80% or 80/20% for “winter”, depending on the focus of a Case.
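
A sketch of how such an overlay could be mined (the record layout, filters and figures are assumptions): count how often each option was chosen at a given branching decision box across successfully closed Cases, optionally restricted by season.

```python
from collections import Counter

# Decision records mined from completed Cases - invented data for illustration.
records = [
    {"box": "ship-method", "option": "A", "closed_ok": True,  "season": "summer"},
    {"box": "ship-method", "option": "B", "closed_ok": True,  "season": "summer"},
    {"box": "ship-method", "option": "B", "closed_ok": True,  "season": "summer"},
    {"box": "ship-method", "option": "B", "closed_ok": False, "season": "summer"},
]

def branch_overlay(records, box, season=None):
    """Percent of successfully closed Cases choosing each option at a branching decision box."""
    hits = [r["option"] for r in records
            if r["box"] == box and r["closed_ok"] and (season is None or r["season"] == season)]
    counts = Counter(hits)
    total = sum(counts.values())
    return {opt: round(100 * n / total) for opt, n in counts.items()}

print(branch_overlay(records, "ship-method", season="summer"))   # {'A': 33, 'B': 67}
```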

Recommendation

If you do not currently have an initiative to improve Decision Making at your ACM platform, my recommendation is to study the potential of statistical overlays and probabilistic overlays before jumping onto the RPA and AI “bandwagons”.

 

Terms used in this Blog Post:

(1) ACM – Adaptive Case Management

(2) BPM – Business Process Management

(3) RALB – Resource Allocation, Leveling and Balancing

(4) FOMM – Figure of Merit Matrices


Is this where you want to be?



This is a pitch to consultants and business managers who subscribe to and use BPM (Business Process Management). 

It turns out there are two flavors of BPM  –  “B(P)M” (business management that receives orchestration from process templates, process fragment templates, users, software and machines) and “(BP)M” (management by business processes).

Any similarity between the two ends here.

If you put a focus on B(P)M, what this means is that you are subscribing to the use of BPM to provide background workflow orchestration at Cases (i.e. initiatives) – this impacts both efficiency and effectiveness.

(BP)M, on the other hand, puts a focus on managing an entire business via processes, not via managing a portfolio of initiatives. Here, you end up impacting efficiency, but only minimally impacting effectiveness.

If you are up to it, and your clients are on board, clearly, B(P)M is where you will want to be.

Change Management is Essential

Transitioning from (BP)M to B(P)M requires a change in mindset, so here are a few tips & tricks.

Most (BP)M consultants come from a background of end-to-end processes.  Mapped end-to-end processes are easy to roll out to production environments. You have one start task and one end task (e.g. “cutting the ribbon”). The objective is to get to the end task. Your process map details what, why, who and, to an extent, where and when.

B(P)M is different.

Much of the “process management” assistance that clients are looking for today is not in the area of end-to-end processes.

What we have today are “process fragments” that get threaded together at run time by workers, software and machines. Process fragments do not have plan-side objectives.

Under B(P)M, objectives become a property of the run-time Case, e.g. a patient, an insurance claim, a helicopter under MRO. For discussion purposes here, Case is not equivalent to “use-Case”.

Process fragments continue the tradition of what, why, who, where and when, except that some of the interventions are now ad hoc interventions. Tasks at Cases become a mix of structured and unstructured interventions – this adds flexibility to “process management” (i.e. Case management, really), but it also adds variability to where and when.

“Ribbon-cutting” at a Case under B(P)M takes place when the Case Manager closes the Case – no exceptions!

As and when you transition to B(P)M, your legacy BPMS will need major surgery.

As explained, your new BPMS will need to accommodate any mix of structured and unstructured Case interventions. Fortunately, once you realize that a process of one step still is a process, no accommodation is needed so long as your BPMS provides workflow and workload functionality.

Seamless threading of process fragments adds a bit of complexity.

Since you can no longer rely on logic connections between tasks to guide all processing, each process fragment needs a rule set at its start task so that the task can report “OK to engage processing” or “NOT OK to engage processing”.

Your tasks become more data-driven under B(P)M.

However, manual override by a user/Case Manager is always an option (i.e. skip the task) – at some risk/peril to all stakeholders. . .

It’s worthwhile here to elaborate on the term “data-driven”.

Whereas with end-to-end BPM and legacy BPMSs data flows take place along process pathways, under B(P)M all non-instance-specific data at a Case can be accessed by a process fragment rule set.

Rule sets in B(P)M are pervasive – they can be found upstream from tasks, at tasks, and immediately downstream from tasks. They can be found at branching decision boxes and are essential at most loop back constructs to prevent churning.
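
One small illustration of a rule at a loop-back construct (the cap of three revisions is an invented threshold): allow re-entry into the loop only while the revision count stays under the cap, so the Case cannot churn indefinitely.

```python
def may_loop_back(case_data: dict, max_revisions: int = 3) -> bool:
    """Guard rule at a loop-back: permit re-entry only while the revision count is under the cap."""
    return case_data.get("revision_count", 0) < max_revisions

print(may_loop_back({"revision_count": 1}))   # True - loop back for another revision
print(may_loop_back({"revision_count": 3}))   # False - escalate instead of looping again
```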

Finally, your BPMS needs R.A.L.B. and F.O.M.M.

You can’t properly manage work at Cases without R.A.L.B. (Resource Allocation, Leveling and Balancing) – absent R.A.L.B., for anything beyond a moderately complex workflow, you become unable to carry out workload management.

Re F.O.M.M. (Figure of Merit Matrices) – you won’t get consistency across Cases if Case Managers cannot get a “second opinion” from F.O.M.M. The unique contribution of F.O.M.M. is to make decision-making non-subjective.

Now, before leaving this space, click on the link below – you will get to a hard-to-find music video that I feel is fantastic.

Title Inspiration:

Kacey Musgraves & Willie Nelson “Are you sure this is where you want to be”


How do we get to Excellence from Efficiency and Effectiveness?


In business, efficiency is the easiest journey; the journey to effectiveness is more difficult.

Achieving excellence requires all hands on deck, in my view, and is therefore the most difficult of the three.

One thing I recall from my days at GE was my boss encouraging all design engineers to do daily walkabouts, to see firsthand the impact of their designs on production-line work/workers.

Walkabouts are helpful for discovering reasons for inconsistencies in finished products and, in a few cases, for discovering and explaining consistencies.

In the plant where I worked, one of the products was watt-hour meters and these, at the time, required magnets.

For years, HQ could not understand why the quality of the magnets was so high relative to other watt-hour meter production plants.

A walkabout revealed that the blacksmith (yes, they had a blacksmith in those days), whose shift was 8 to 4, same as everyone else’s, actually came in at 6 AM to start up his hearth and stayed on the job until 6 PM to clean up.

The effect was a steady state by 8 AM and readiness to start the next day by 6 PM.

I have been a fan of walkabouts (along production facilities and in the office) since that time.


The Nature of Strategic Decision Making


Much of what I read about “Business Management/Decision Making” seems to be written by folks who have little experience in business management.

Business Management is all about Decision Making (but not only about decision making).

The purpose of this article is to put a focus on strategic decision-making – specifically how strategic decisions are made in real corporate life.

Corporations evolve strategies and allocate resources to Initiatives by way of ROI (Return on Investment) submissions/ authorizations.

All Initiatives have goals/objectives. All Initiatives have time spans. If you see one that appears to go on and on, you are looking at an initiative that receives extensions to previous allocations of resources.

There is no point doing work that does not contribute to advancing the state of an Initiative toward its goals/objectives.

Progress toward initiative goals/objectives is non-linear. Generally, it follows an “S” curve (slow to achieve liftoff, followed by rapid progress, only to slow down toward the end of implementation).

Work involves steps, different steps require different resources and most work benefits from consistent use of “best practice” protocols. Some work is unstructured.

Decisions along Initiative timelines must be made before steps, at steps and after steps in order to maintain forward momentum of Initiatives.

Decision-making is the transformation of information into action.

I count six (6) sources of information for decision-making (knowledge, experience, intuition, wisdom, data/analytics, and rule sets/algorithms).

Good decisions are generally the result of reliance on more than one of the six (6) sources of information.

  1. Knowledge maps easily to information provided that the decision-maker understands what specific knowledge he/she has access to (i.e. known knowns, known unknowns, unknown knowns, unknown unknowns).
  2. Experience maps to information when such experience was gained dealing with initiatives similar to the one that has the focus.
  3. Intuition maps to information when the decision-maker has a good track record relying on intuition at prior initiatives.
  4. Wisdom is a state of maturity that some people reach – in respect of decision making it has two manifestations i.e. knowing what to do, knowing what not to do.
  5. Data/analytics maps to information when the data is good and the analysis is sound.
  6. Rule-sets map well to initiatives when data is within the boundary conditions of the rule sets or when an algorithm working on the same type/quality of data has yielded good decisions.

Final points . . . .

Decisions typically get made when they need to be made.

Many decisions are made in the absence of adequate information, without consideration of associated risk/uncertainty and without consideration of the amount of resources they tie up (i.e. from low risk/short timeline/high return to high risk/long timeline/low return).

When you are making decisions, bear in mind Donald Rumsfeld’s 4K’s (known knowns, known unknowns, unknown knowns, unknown unknowns).  What you don’t know will hurt you!

Another good piece of advice is to bear in mind that if you cannot see the resources you are committing to initiatives, the quality of any decisions you make will be diminished.

The core message of the RBV (Resource Based View) methodology is “… it is difficult to make decisions when you cannot see the resources that will be impacted by such decisions”.

See “Decisions, Decisions, Decisions” (2014-12-02) for an operational perspective on decision-making.

https://wp.me/pzzpB-CX

 
