The shift towards specialist chatbots has highlighted how organisations are vulnerable to decision distortions and deceptions in the way their people work deep within the organisation.
This vulnerability stems from inherent weaknesses within repetitive tasks that rely upon individuals making accurate decisions. The scope is extensive, as decisions are embedded within the very fabric of everyday work activities.
Each of these activities requires contextual knowledge to support informed decision-making.
Sometimes there is a single decision with limited options, such as yes, no or not sure. At other times, decisions flow in different directions because of the complex permutations of choices, pathways and outcomes involved.
By their very nature, decisions sometimes go wrong simply because of human shortcomings. Behavioural economics shows that any decision involving an element of risk is subject to human biases.
Let’s take a simplistic case, where a service agent pressures a customer into giving a rating of at least 8 out of 10 when they receive a survey, claiming that otherwise the agent could get into ‘trouble’. This is a clear attempt to distort a customer decision, and it can result in two types of deception:
1. If this type of behaviour is repeated frequently across multiple service agents, the contaminated customer insights can distort strategic decisions taken in good faith. If the distorted insights are then reported to stakeholders, the deception is amplified.
2. The customer insights for each service agent are used to appraise their performance, which is then linked to bonuses, salary increases and even promotion to supervisor. An inappropriate promotion of an agent to supervisor risks amplifying the customer feedback distortion and deception, as the new supervisor puts pressure on their own team of service agents.
This simplistic case powerfully illustrates how employee behavioural incentives can result in decision distortions and deceptions that contaminate the brand, strategy and tactics of the organisation.
The vulnerability of decision distortions and deceptions is pervasive because it affects decision-making processes embedded within activities deep inside the organisation. For example, consider that most workflows include one or more data input forms that require decisions to be made as part of the input. Typically, these forms are supported by procedures to guide the decision-making, but the absence of a decision audit trail means there is no transparency or traceability of the actual decision process applied. Frequently, these forms are completed by people using subjective decision-making based on their experience. Flawed decisions can lead to unintended consequences that negatively impact revenues, costs and risks, and sometimes contaminate the brand. Misselling of regulated products is a common example.
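The missing decision audit trail described above can be illustrated with a minimal sketch. This is an assumption-laden example, not a prescribed implementation: the field names and the form-processing scenario are hypothetical, chosen only to show how capturing each decision as a structured record restores transparency and traceability.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one entry in a decision audit trail.
@dataclass
class DecisionRecord:
    form_id: str        # which input form the decision belongs to
    question: str       # the decision point presented to the person
    options: list       # choices offered, e.g. ["yes", "no", "not sure"]
    choice: str         # the option actually selected
    rationale: str      # why the person selected it
    decided_by: str     # who made the decision

def record_decision(trail: list, **details) -> DecisionRecord:
    """Append a decision to the audit trail so it can later be traced."""
    entry = DecisionRecord(**details)
    trail.append(entry)
    return entry

# Example: a single decision with limited options, as in the text.
trail = []
record_decision(
    trail,
    form_id="claim-042",
    question="Is the claim within policy limits?",
    options=["yes", "no", "not sure"],
    choice="not sure",
    rationale="Policy wording ambiguous; escalated to supervisor",
    decided_by="agent-17",
)
```

Even a record this simple makes the decision process reviewable after the fact, which is exactly what the subjective, form-driven workflows described above lack.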
As knowledge grows in volume and complexity, poor-quality decisions in the way people work are amplified. Leaders should first target the crucial decision-making processes that are most vulnerable, especially where organisational exposures relate to negligence, errors, false positives, false negatives and handoffs. With these priorities set, the right tools make it relatively easy to identify the weaknesses within the decision flow. From there, it is a small step to rapidly prototype chatbots that strengthen the decision flow and deliver conversations-as-a-service, with the full benefit of transparency and traceability.
Organisations cannot afford to ignore the human factor deep within their decision-making processes, where so many activities lead to non-productive interactions.
#chatbot; #freddiemcmahon; #chatbotauthor; #df2020; #knowledge; #decisions