This is episode three in the Accern Podcast Series. Our host is Accern Co-Founder and CTO Anshul Pandey. Here he speaks with Capgemini Data Solutions Strategy Head Arun Prasad. They discuss the role of end-user experience in natural language processing. Arun lays out three trends in artificial intelligence (AI). These include data, modeling, and user experience. They also consider the role of AI in verifying insurance claims. A common thread throughout the conversation is the importance of no-code and low-code platforms to empower users.
On three trends in AI
Arun says, “It's a combination of three steps. The first one is all about data: bringing as much relevant information as possible to the problem at hand, and sourcing it through the right mechanics of picking the right variables and picking the right sources. Making sure that there is no model drift around it, that the data is current, and that the currency of the data is good. And also bringing in historical values, mixing non-traditional insights with traditional insights. So that's problem number one in AI.
“Problem number two is modeling itself. How can we make better models, more adaptable models, with the ability to reinforce the learning? Making sure that it's bias-free, and ensuring that it's relevant to the use cases. Because in a fast-changing environment, you don't want to go back and keep changing your models every time. So there's a resiliency aspect around the model itself.
“And then the third is user experience: thinking about how the end user will perceive these things. Whether you're talking about underwriters in an insurance organization, claims adjusters, a product person who's evolving new products in the insurance industry, or the end consumer, who's probably the insured themselves, looking at their score, they want to know more about what's going on behind the scenes. And that's why there's a huge buzz these days about low-code/no-code platforms: enabling the end user to actively give inputs and then have a combined experience at the delivery stage so that they can actually consume it.”
On the role of AI in verifying insurance claims
Arun says, “In the pandemic, most insurance companies had a big spike in the number of business interruption claims. The volume of these claims was so large that you didn't have the facility to figure out which ones were false, which ones were right, and so on. And if you're a product owner, you always want to know: are there any coverage gaps? Are there any exclusions that are explicit? You also have to worry about the interpretability of the exclusions.
“Case in point: we were working with an insurance company, and we had to go through some 1,200-plus forms for a single product line. That's one product line, to figure out whether there was any possibility of misinterpretation, whether there were any gaps in coverage that were not priced accordingly, or whether there was a potentially better choice of language for the upcoming iteration. It's a very sensitive topic, and it's also a very important topic for a product owner.
“Today there needs to be a system in the insurance industry which can actually collate and highlight things that could be misinterpreted. That's why the prior company I worked for, ISO, has ISO forms that are validated and vetted; there are no other forms out there that they can actually utilize. But manuscript forms go through a lot of pain, you know, when such massive action happens. So it's a very good opportunity for AI systems to help the product owners, the underwriters, and the claims adjusters in a massive way.”
On empowering end users through no-code
According to Arun, “When you're looking at low-code/no-code platforms, you're essentially changing the game quite a bit. You can empower the end user. They say, ‘Oh, I can actually change the behavior of this dashboard by myself,’ or ‘I can change the way the system is behaving by myself’: going to the admin section, changing certain parameters, and making sure that it's behaving the way you want it to. Without needing a big budget. Without needing to go through development, right? That's a very big enabler. What's stopping you is the ability to empower the end users through low-code/no-code.
“There are other issues, such as availability of data at the right time, having an infrastructure that can support it, and enabling a culture, if you will, with a somewhat ‘open mind’ for adopting newer, non-traditional data insights. Rather than saying, okay, this is how I've always been doing it, and these are the things I've always done. Instead, take that experience, and add to it more information of a non-traditional nature, coming to you just in time. So the accessibility of that is also a very big deal. Today, that's what's stopping consideration of NLP use cases across the board.”