
Don’t bite off more AI than you can trust

Many, perhaps most, companies are actively considering using AI in their business, but few are making much progress. Some of this is due to a poorly defined approach. Some is due to confusion about what AI means: mixing conversational AI (a user experience technology) with decision-making AI (an extension of predictive analytics and machine learning). But one of the most common concerns is trust: how can we trust an AI that we build?

This is a real issue for AI algorithms, which tend to be black boxes, opaque about how they arrived at their conclusions. Much good work is being done to make AI algorithms, and opaque analytic models such as neural networks, more transparent. Frameworks such as LIME provide plausible explanations of individual model results, while vendors in this space are investing heavily in techniques to explain the decisions their models make.
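As a rough illustration of the kind of transparency these frameworks aim for, here is a minimal sketch using the open-source LIME package to explain a single prediction from an otherwise opaque model. The synthetic data and loan-style feature names are purely illustrative assumptions, not a real scorecard.

```python
# Minimal sketch: explaining one prediction from a "black box" model with LIME.
# Assumes the lime and scikit-learn packages; the data and feature names are
# synthetic and illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for historical loan data (hypothetical feature names).
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history_years",
                 "recent_delinquencies", "loan_amount"]

# Train an opaque model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Build an explainer over the training data distribution.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["decline", "approve"],
    mode="classification",
)

# Explain a single prediction: which features pushed the score up or down?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a per-feature weighting for one case, which is exactly the kind of local, human-readable evidence reviewers and regulators ask for.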

But most companies make the problem worse than it needs to be by biting off too big an AI problem: they try to use a single AI algorithm to make a whole business decision, such as whether to approve a claim or originate a loan. This makes problems of trust much more serious, because you are totally reliant on the AI for the decision. It also makes verifying the AI more complex, as the entire claim or loan has to be reviewed by a human to confirm the AI is behaving as expected. And it means that new products or services cannot be addressed at all by your automation: the AI needs a certain amount of data before it can do anything useful, and for new loan or policy types you have no data yet.

Working with clients, we have found that the solution involves thinking about the decision first. Instead of trying to solve the whole problem with AI, break the decision down into pieces. For each piece, decide whether you can solve it with business rules (transparent, easy to manage, easy to add for new products), predictive analytics (data-driven but reasonably transparent), or AI. Using decision modeling and our DecisionsFirst approach, you can do exactly this.
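To make the idea of decomposition concrete, here is a minimal, hypothetical sketch in Python. The sub-decisions, thresholds, and stubbed models are illustrative assumptions, not a description of DecisionsFirst or any client system; the point is simply that each piece of the decision can use the technology that suits it.

```python
# Hypothetical decomposition of a loan-origination decision into sub-decisions.
# All names, thresholds, and models are illustrative assumptions.

def eligibility_rules(applicant: dict) -> bool:
    """Sub-decision 1: explicit, auditable business rules."""
    return (
        applicant["age"] >= 18
        and applicant["income"] > 0
        and applicant["requested_amount"] <= 5 * applicant["income"]
    )

def risk_score(applicant: dict) -> float:
    """Sub-decision 2: a data-driven but reasonably transparent predictive score.
    Stand-in for a scorecard or logistic regression trained on historical data."""
    return 0.6 * min(applicant["income"] / 100_000, 1.0) - 0.4 * applicant["debt_ratio"]

def fraud_signal(applicant: dict) -> float:
    """Sub-decision 3: a narrowly scoped AI/ML model (e.g. anomaly detection).
    Stubbed here; in practice this would call a trained model."""
    return 0.05  # pretend probability of fraud

def originate_loan(applicant: dict) -> str:
    """Top-level decision assembled from the sub-decisions."""
    if not eligibility_rules(applicant):
        return "DECLINE: failed eligibility rules"
    if fraud_signal(applicant) > 0.5:
        return "REFER: flagged by fraud model for manual review"
    if risk_score(applicant) < 0.2:
        return "REFER: low risk score, manual underwriting"
    return "APPROVE"

print(originate_loan({
    "age": 34, "income": 80_000, "debt_ratio": 0.25, "requested_amount": 200_000,
}))
```

Because the eligibility rules are explicit, a new loan product could be supported on day one by editing the rules, while the predictive and AI pieces can be added or refined later as data accumulates.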

This decomposition has three key benefits:

  • A more focused role for AI makes it easier to build trust, get business buy-in, manage change and get regulators on board.
  • You get business results faster, because reducing the scope of the AI you are building makes it easier to get it into production. It’s more practical work, less science experiment.
  • New products and services can be supported in your decisions quickly and easily using explicit decision rules. As you gather data, your analytics and AI can be updated to incrementally improve results.

Plus, if you want, you can drive toward fully automated decisions, because much of the decision is being handled in ways that can be demonstrably compliant.

Our DecisionsFirst approach ensures you don’t choke on a huge bite of AI. By using decision modeling to find where the value of AI is highest, and by integrating AI with other decision-making technologies (rules, analytics), we help you create a practical, trusted solution. Contact us to learn more.