The AI Blind Spot: Unnoticed, Unexpected Impact of Enterprise AI
AI has become a dominant part of the value delivery process, buoyed by the incredible advancements and experiences delivered by consumer tech in areas such as search, e-commerce, and social networking. With this, enterprises are accelerating the adoption of AI technologies into their value generation processes. As the trend takes a foothold, we will increasingly be surrounded by enterprise products and services that apply AI to facilitate and improve how those products are used.
There has been a lot of public debate about the dangers associated with AI – most famously, the discourse between Elon Musk and Mark Zuckerberg. The debate really focuses on how dangerous AI can become and whether it could lead to a world war or even the extinction of mankind. There is a growing body of data and resources on both sides, supporting or refuting these projections. The debate is hardly settled and will continue to grow.
The AI blind spot in the AI process
The real, immediate danger of AI lies in the AI blind spot that exists in the enterprise. To understand the blind spot, let’s take a few minutes to unpack the enterprise AI process: the general sequence of steps for developing an AI model to solve a specific problem, integrating that model into a product or service, and then delivering the product or service to the ultimate customer or user.
Step 1: Signal Collection – The process where signals are collected and extracted from various data sets, data-producing systems, and sources internal and external to the enterprise.
Step 2: Signal Ranking – The process of ranking the collected signals to determine the ones that are best suited to solve the problem at hand.
Step 3: Model Development (and Improvement) – The process of converting the signals (and the associated data) into a model that can make predictions for new, unseen data.
Step 4: Product-Model Integration – The process of integrating the AI model into the product/service workflow to facilitate a customer/user request/decision/action.
Step 5: Quality Evaluation – The process of measuring and determining the quality of the AI model in real-world usage by customers.
Step 6: Go to Step 1 – Rinse and Repeat.
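To make the loop concrete, here is a skeletal sketch of the six steps in code. This is an illustration only: every function, data shape, and number below is a made-up placeholder, not a reference to any real library or enterprise system.

```python
# A skeletal sketch of the six-step loop above. Every function body is a
# trivial placeholder standing in for real enterprise machinery.

def collect_signals(sources):                  # Step 1: signal collection
    return [signal for source in sources for signal in source()]

def rank_signals(signals, top_k=3):            # Step 2: signal ranking
    return sorted(signals, key=lambda s: s["relevance"], reverse=True)[:top_k]

def train_model(signals):                      # Step 3: model development
    average = sum(s["value"] for s in signals) / len(signals)
    return lambda _features: average           # a stand-in "model"

def integrate_and_evaluate(model, requests):   # Steps 4 and 5: integrate, then
    predictions = [model(r) for r in requests] # measure quality in real usage
    return sum(predictions) / len(predictions)

# Step 6: in production, the loop restarts at Step 1 with refreshed sources.
sources = [lambda: [{"relevance": 0.9, "value": 10},
                    {"relevance": 0.2, "value": 99}]]
model = train_model(rank_signals(collect_signals(sources)))
print(integrate_and_evaluate(model, requests=[{}, {}]))  # 54.5
```

The point of the sketch is structural: everything downstream of Step 1 – ranking, training, and even the quality metrics – can only see what Step 1 collected.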
The flaw at the root of the AI blind spot
The flaw in the AI process is built into Step 1 – the quality of an AI model is directly dependent on the signals that are collected in this step. When the most relevant signals are collected and ranked, a high-quality AI model can be created.
However, if high-quality, highly relevant signals are missed or never collected, the quality of the model suffers in one of two ways:
- Either the AI model ends up not being good at making accurate predictions.
- Or it impairs the enterprise’s ability to understand how the AI model behaves in the real world – and, in turn, the short- and long-term effects the model has on the enterprise’s value supply chain and the market in which the enterprise operates.
The second item above is the AI blind spot.
Consumer tech is not leading the way
Across the board, consumer tech has led the development and resurgence of AI, paving the way through contributed code, technology, data, and models for enterprises to adopt AI in how they operate, build, and deliver value. Often, the easiest strategy for an enterprise to get started with AI is to find parallels to its problems on the consumer side, find a consumer tech company that has solved that problem, and attempt to recreate the solution on the enterprise side.
However, when it comes to the AI blind spot, consumer tech is not leading the way and is not expected to in the future, for several reasons:
- Consumer tech often provides a lot of value for free (indirectly costing users their data and privacy), so the bar for compensating users when AI goes rampant is very low.
- Consumer tech companies often have a massive user base, operate like virtual monopolies, and benefit from a strong network effect – making it harder for users to react to bad AI.
- Consumer tech scenarios are quite narrow, making it harder for poor AI to do a lot of damage at a personal level.
The hacking of the US election through the manipulation of the Facebook newsfeed or Google/Facebook/Twitter ads is an example of AI going rampant. As the user provides feedback through their actions (like, comment, share, bookmark, etc.), the AI model “learns” the user’s preferences and tries to find and surface more similar content. This AI can be “hacked” by inducing the user to take a few actions, after which the pattern is established and reinforced.
Consumer tech companies’ reaction to this “hacking” has been to hire humans to monitor and track these manipulations and to use manual horsepower to stem the attacks. However, this is not enough, for several reasons:
- AI cannot be reset: An AI model in continuous-learning mode does not retain its past states, and hence cannot be rewound to an arbitrary earlier point in time. The only options are to remove the model entirely or to revert to an explicitly saved, previously trained version. Resetting a model to behave exactly as it did before a specific event or time period is not available (see the toy sketch after this list).
- AI cannot unlearn: AI models build the ability to detect patterns incrementally, i.e., the intelligence begins to associate certain patterns with certain labels, and the more it sees a specific pattern, the better it recognizes it. However, once trained, it is not possible to make the model unlearn a specific example, data point, or event.
- Human verification cannot be scaled: In an environment such as Google or Facebook, a human verification step is not feasible because of the sheer amount of data being generated. It would require an incredible number of people who would not only have to verify the quality of the AI but also act and behave consistently and uniformly – something that has proven very hard to achieve.
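The first two points can be made concrete with a toy example. The sketch below uses a deliberately simple “model” (an online running average) and made-up data; the class name and values are illustrative, not any real system:

```python
# Why "resetting" a continuously learning model needs explicit snapshots:
# without a saved checkpoint, past states of the model simply do not exist.
import copy

class RunningAverageModel:
    """A toy continuously-learning 'model': an online running average."""
    def __init__(self):
        self.mean, self.count = 0.0, 0

    def learn(self, value: float) -> None:
        self.count += 1
        self.mean += (value - self.mean) / self.count

model = RunningAverageModel()
checkpoints = {}
for day, value in enumerate([10.0, 12.0, 11.0, 500.0, 480.0]):  # days 3-4: poisoned data
    model.learn(value)
    checkpoints[day] = copy.deepcopy(model)  # a snapshot exists only if we saved it

# After noticing the poisoning, we can revert to an older saved version...
model = checkpoints[2]
print(model.mean)  # 11.0, the state as of day 2
# ...but we cannot surgically "unlearn" days 3-4 from the live model, and
# without checkpoints the only remaining option is discarding it entirely.
```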
This creates a massive problem for the enterprise. Typical non-consumer enterprises do not benefit from network effects, stickiness, or forgiving customers. The impact of low-quality AI is more pronounced for an enterprise in a B2B setting, and the fallout from mistakes can be astronomical.
‘AI gone rogue’ does not leave the enterprise many options, which is why it is critical that the enterprise invest in and design guardrails for its AI-driven experiences to contain the damage from such AI. Some techniques that enterprises can use as safeguards are:
- Invest in an ensemble technique where multiple AI models are polled for an answer, and the final answer is the one the majority determines (see the sketch after this list).
- Invest in ceiling and floor functions that clamp actions, decisions, or behaviors to acceptable bounds (also shown in the sketch below).
- Invest in detecting biases and assumptions in their training data by using multiple people to label, verify, select and process data.
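As a minimal sketch of the first two safeguards, the code below shows majority voting across an ensemble and a floor/ceiling clamp. The models, labels, and bounds are hypothetical placeholders:

```python
# Two guardrail techniques from the list above: ensemble majority voting,
# and floor/ceiling clamping of a model-driven decision.
from collections import Counter
from typing import Callable, Sequence

def ensemble_predict(models: Sequence[Callable[[dict], str]], features: dict) -> str:
    """Poll every model and return the majority answer."""
    votes = Counter(model(features) for model in models)
    answer, _count = votes.most_common(1)[0]
    return answer

def clamp(value: float, floor: float, ceiling: float) -> float:
    """Bound a model-driven decision (e.g., a price) to a safe range."""
    return max(floor, min(ceiling, value))

# Usage: three hypothetical classifiers vote; a runaway price gets clamped
# to policy bounds no matter what the model proposed.
models = [lambda f: "approve", lambda f: "approve", lambda f: "reject"]
print(ensemble_predict(models, {"amount": 120}))       # -> "approve"
print(clamp(value=7400.0, floor=50.0, ceiling=500.0))  # -> 500.0
```

The design intuition is defense in depth: the ensemble reduces the chance that a single compromised or drifting model drives the decision, while the clamp hard-limits the damage even when the whole ensemble is wrong.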
The AI blind spot of the ‘unknown unknown’ type
The biggest risk from AI to the enterprise is not that AI eventually grows to destroy the world and humanity, but that the AI completely misses substantial or consequential chunks of the environment in which it needs to operate: the complex value supply chains it is part of, and its relationships with the other AI models that inadvertently get added to the mix.
Step 1 of the AI design process can often miss key signals for a multitude of reasons:
- Business context not transferred from the domain expert to the data scientist.
- The domain experts themselves might have a flawed or biased view of the world containing incorrect assumptions.
- Key signals are unmonitored and not available for analysis.
- Available signals only provide coverage on the first-level value network.
At the same time, the signals available in Step 1 are often the only ones that can be monitored as part of any feedback loop. This means that quality measurement and quality metrics are also based on flawed or incomplete signals, causing the quality measurement and improvement efforts to harbor the same AI blind spot.
The ‘unnoticed, unexpected impact’ of the AI blind spot
The most potent of AI blind spots is the unnoticed, unexpected impact of AI in the real world. In an environment of complex value supply chains, the real-world impact of AI is not only unexpected but likely to go unnoticed, eventually surfacing as shifts in the market, in customer preferences, in customer behavior, and in customer needs. In addition, the behavior of the independent AI models within a complex value supply chain itself needs to be monitored to avoid leaked biases, compounding quality issues, and cascading errors.
Take the example of a travel aggregator that uses AI to set prices. The AI is designed to maximize profits, so at the hint of an increase in demand, it increases prices. This continues for multiple seasons, with prices creeping up every season if there are limited service providers in the market. Eventually, prices reach a point where they begin to affect travel decisions: travelers change their plans, adjust their budgets, or decide not to travel at all. A subtle change in behavior now can balloon into a significant market shift, with the impact felt across all connected, complementary, and supplementary industries.
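A toy simulation makes the loop visible. Every number here (growth rate, demand curve, thresholds) is invented purely for illustration:

```python
# A toy simulation of the pricing feedback loop described above.
# All constants are made up; the point is the shape of the dynamics.
price, demand = 100.0, 1000.0
for season in range(1, 11):
    if demand > 800:   # the AI reads healthy demand...
        price *= 1.08  # ...and nudges the price up to maximize profit
    # unmonitored signal: travelers quietly drop out as the price climbs
    demand = 1000.0 * max(0.0, 1.5 - price / 200.0)
    print(f"season {season}: price={price:6.2f} demand={demand:7.1f}")
# Several seasons of small, individually "profitable" increases quietly erode
# demand by roughly a quarter before the loop stalls. The model never
# attributes the loss to its own pricing, because traveler attrition was
# never one of the collected signals.
```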
Uber’s surge pricing is another example of where AI-driven pricing can backfire. During extraordinary events, Uber’s surge pricing has multiplied prices many times over – making the service affordable to only a select few users. This has caused considerable backlash against Uber, with users abandoning the service and cities and governments turning on the company – a far more significant impact on its bottom line.
AI impact that goes unnoticed can trigger cascading effects that fundamentally alter the market and the environment where the enterprise operates. AI impact that is unexpected is often a function of an imperfect understanding of the operating environment and market. Unnoticed and unexpected AI impact can severely influence the health of an enterprise, its business, and its customers, and thus needs to be carefully monitored.
Removing AI blind spots
Because enterprises have a lot to lose from rogue AI, they have to work several times harder than consumer tech to protect themselves, their businesses, and their customers. Enterprises need to ensure that the AI they build has guardrails to protect them. Since no AI can be built with 100% recall on all relevant signals, enterprises need to invest as much effort in tracking the AI and its behavior post-release as they do while building it. They need to conduct a constant search for new relevant signals that capture the unnoticed, unexpected impact of their AI, rapidly determine when their AI has gone rogue, and devise strategies to rein it in.
Designing, building, and delivering AI-enabled products and services requires that enterprises cultivate discipline in their signal design and selection capabilities – bringing in multiple perspectives to increase the likelihood of covering the maximum possible set of relevant signals. This typically means having a multi-disciplinary team participate in the signal discovery phase and repeating that phase periodically. In addition, enterprises should develop a deep understanding of their value supply chain and leverage it to define new signals that help them track the long-term impact of their AI models and how various AI models interact with and influence each other.