

Machine learning in the cloud is helping businesses innovate




In the past decade, machine learning has become a familiar technology for improving the efficiency and accuracy of processes like recommendations, supply chain forecasting, chatbots, image and text search, and automated customer service, to name a few. Machine learning today is becoming even more pervasive, impacting every market segment and industry, including manufacturing, SaaS platforms, health care, reservations and customer support routing, natural language processing (NLP) tasks such as intelligent document processing, and even food services.

Take the case of Domino’s Pizza, which has been using machine learning tools created to improve efficiencies in pizza production. “Domino’s had a project called Project 3/10, which aimed to have a pizza ready for pickup within three minutes of an order, or have it delivered within 10 minutes of an order,” says Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI. “If you want to hit those goals, you have to be able to predict when a pizza order will come in. They use predictive machine learning models to achieve that.”

The recent rise of machine learning across diverse industries has been driven by improvements in other technological areas, says Saha—not the least of which is the increasing compute power in cloud data centers.

“Over the last few years,” explains Saha, “the amount of total compute that can be thrown at machine learning problems has been doubling almost every four months. That’s 5 to 6 times more than Moore’s Law. As a result, a lot of functions that once could only be done by humans—things like detecting an object or understanding speech—are being performed by computers and machine learning models.”

“At AWS, everything we do works back from the customer and figuring out how we reduce their pain points and how we make it easier for them to do machine learning. At the bottom of the stack of machine learning services, we are innovating on the machine learning infrastructure so that we can make it cheaper for customers to do machine learning and faster for customers to do machine learning. There we have two AWS innovations. One is Inferentia and the other is Trainium.”

The current machine learning use cases that help companies optimize the value of their data to perform tasks and improve products are just the beginning, Saha says.

“Machine learning is just going to get more pervasive. Companies will see that they’re able to fundamentally transform the way they do business. They’ll see they are fundamentally transforming the customer experience, and they will embrace machine learning.”

Show notes and references

AWS Machine Learning Infrastructure

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. This is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is machine learning in the cloud. Across all industries, the exponential increase in data collection demands faster and novel ways to analyze data, but also to learn from it to make better business decisions. This is how machine learning in the cloud helps fuel innovation for enterprises, from startups to legacy players.

Two words for you: data innovation. My guest is Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI. He has held executive roles at NVIDIA and Intel. This episode of Business Lab is produced in association with AWS. Welcome, Bratin.

Dr. Bratin Saha: Thank you for having me, Laurel. It’s great to be here.

Laurel: Off the top, could you give some examples of how AWS customers are using machine learning to solve their business problems?

Bratin: Let’s start with the definition of what we mean by machine learning. Machine learning is a process where a computer and an algorithm can use data, usually historical data, to understand patterns, and then use that information to make predictions about the future. Businesses have been using machine learning to do a variety of things, like personalizing recommendations, improving supply chain forecasting, making chatbots, using it in health care, and so on.

For example, Autodesk was able to use the machine learning infrastructure we have for their chatbots to improve their ability to handle requests by almost five times. They were able to use the improved chatbots to address more than 100,000 customer questions per month.

Then there’s NerdWallet. NerdWallet is a personal finance startup that did not personalize the recommendations they were giving to customers based on the customer’s preferences. They’re now using AWS machine learning services to tailor the recommendations to what a person actually wants to see, which has significantly improved their business.

Then we have customers like Thomson Reuters. Thomson Reuters is one of the world’s most trusted providers of answers, with teams of experts. They use machine learning to mine data to connect and organize information to make it easier for them to provide answers to questions.

In the financial sector, we have seen a lot of uptake in machine learning applications. One company, for example, a payment service provider, was able to build a fraud detection model in just 30 minutes.

The reason I’m giving you so many examples is to show how machine learning is becoming pervasive. It’s going across geos, going across market segments, and being used by companies of all kinds. I have a few other examples I want to share to show how machine learning is also touching industries like manufacturing, food delivery, and so on.

Domino’s Pizza, for example, had a project called Project 3/10, where they wanted to have a pizza ready for pickup within three minutes of an order, or have it delivered within 10 minutes of an order. If you want to hit those goals, you have to be able to predict when a pizza order will come in. They use machine learning models to look at the history of orders. Then they use the machine learning model that was trained on that order history. They were then able to use that to predict when an order would come in, and they were able to deploy this to many stores, and they were able to hit the targets.
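The order-history approach described here can be sketched in a few lines of Python. This is purely illustrative (Domino’s actual models are not public): a naive predictor that forecasts demand for each hour of the day by averaging historical order counts for that hour, which a store could use to start prepping before orders arrive.

```python
from collections import defaultdict

def train_hourly_model(order_history):
    """Average historical order counts per hour of day.

    order_history: list of (hour_of_day, order_count) pairs from past days.
    Returns a dict mapping hour -> predicted order count for that hour.
    """
    totals = defaultdict(list)
    for hour, count in order_history:
        totals[hour].append(count)
    return {hour: sum(counts) / len(counts) for hour, counts in totals.items()}

# Two days of hypothetical (hour, orders) observations for the dinner rush.
history = [(17, 8), (18, 20), (19, 24), (17, 10), (18, 22), (19, 26)]
model = train_hourly_model(history)
print(model[18])  # 21.0 -- expected orders at 6 p.m., so dough can be prepped early
```

A production system would use far richer features (day of week, weather, promotions), but the shape is the same: train on the order history, then use the trained model to predict when orders will come in.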

Machine learning has become pervasive in how our customers are doing business. It’s starting to be adopted in virtually every industry. We have several hundred thousand customers using our machine learning services. One of our machine learning services, Amazon SageMaker, has been one of the fastest growing services in AWS history.

Laurel: Just to recap, customers can use machine learning services to solve a number of problems. Some of the high-level problems would be a recommendation engine, image search, text search, and customer service, but then, also, to improve the quality of the product itself.

I like the Domino’s Pizza example. Everyone understands how a pizza business may work. But if the goal is to turn pizzas around as quickly as possible, to increase that customer satisfaction, Domino’s had to be in a place to collect data, be able to analyze that historic data on when orders came in, how quickly they turned around those orders, how often people ordered what they ordered, et cetera. That was what the prediction model was based on, correct?

Bratin: Yes. You asked a question about how we think about machine learning services. If you look at the AWS machine learning stack, we think about it as a three-layered service. The bottom layer is the machine learning infrastructure.

What I mean by this is when you have a model, you are training the model to predict something. Then the predictions are where you do this thing called inference. At the bottom layer, we provide the most optimized infrastructure, so customers can build their own machine learning systems.

Then there’s a layer on top of that, where customers come and tell us, “You know what? I just want to be focused on the machine learning. I don’t want to build a machine learning infrastructure.” This is where Amazon SageMaker comes in.

Then there’s a layer on top of that, which is what we call AI services, where we have pre-trained models that can be used for many use cases.

So, we look at machine learning as three layers. Different customers use services at different layers, based on what they want, based on the kind of data science expertise they have, and based on the kind of investments they want to make.

The other part of our view goes back to what you mentioned at the beginning, which is data and innovation. Machine learning is fundamentally about gaining insights from data, and using those insights to make predictions about the future. Then you use those predictions to derive business value.

In the case of Domino’s Pizza, there is data around historical order patterns that can be used to predict future order patterns. The business value there is improving customer service by getting orders ready in time. Another example is Freddy’s Frozen Custard, which used machine learning to customize menus. As a result of that, they were able to get a double-digit increase in sales. So, it’s really about having data, and then using machine learning to gain insights from that data. Once you’ve gained insights from that data, you use those insights to drive better business outcomes. This goes back to what you mentioned at the beginning: you start with data and then you use machine learning to innovate on top of it.

Laurel: What are some of the challenges organizations have as they start their machine learning journeys?

Bratin: The first thing is to collect data and make sure it is structured well—clean data—that doesn’t have a lot of anomalies. Then, because machine learning models typically get better if you can train them with more and more data, you need to continue collecting vast amounts of data. We often see customers create data lakes in the cloud, like on Amazon S3, for example. So, the first step is getting your data in order and then potentially creating data lakes in the cloud that you can use to feed your data-based innovation.

The next step is to get the right infrastructure in place. That is where some customers say, “Look, I want to just build the whole infrastructure myself,” but the vast majority of customers say, “Look, I just want to be able to use a managed service because I don’t want to have to invest in building the infrastructure and maintaining the infrastructure,” and so on.

The next is to choose a business case. If you haven’t done machine learning before, then you want to get started with a business case that leads to a good business outcome. Often what can happen with machine learning is that people see it’s cool and do some really cool demos, but those don’t translate into business outcomes, so you start experiments and you don’t really get the support that you need.

Finally, you need commitment because machine learning is a very iterative process. You’re training a model. The first model you train may not get you the results you desire. There’s a process of experimentation and iteration that you have to go through, and it can take you a few months to get results. So, putting together a team and giving them the support they need is the final part.

If I had to put this in terms of a sequence of steps, it’s important to have data and a data culture. It’s important in most cases for customers to choose to use a managed service to build and train their models in the cloud, simply because you get storage a lot easier and you get compute a lot easier. The third is to choose a use case that is going to have business value, so that your company knows this is something that you want to deploy at scale. And then, finally, be patient and be willing to experiment and iterate, because it often takes a little bit of time to get the data you need to train the models well and actually get the business value.

Laurel: Right, because it’s not something that happens overnight.

Bratin: It does not happen overnight.

Laurel: How do companies prepare to take advantage of data? Because, like you said, this is a four-step process, but you still have to have patience at the end to be iterative and experimental. For example, do you have ideas on how companies can think about their data in ways that make them better prepared to see success, perhaps with their first experiment, and then perhaps be a little bit more adventurous as they try other data sets or other ways of approaching the data?

Bratin: Yes. Companies usually start with a use case where they have a history of having good data. What I mean by a history of having good data is that they have a record of transactions that have been made, and most of the records are accurate. For example, you don’t have a lot of empty record transactions.

Typically, we have seen that the level of data maturity varies between different parts of a company. You start with the part of a company where the data culture is a lot more prevalent. You start from there so that you have a record of historical transactions that you stored. You really want to have fairly dense data to use to train your models.

Laurel: Why is now the right time for companies to start thinking about deploying machine learning in the cloud?

Bratin: I think there is a confluence of factors happening now. One is that machine learning over the last five years has really taken off. That is because the amount of compute available has been increasing at a very fast rate. If you go back to the IT revolution, the IT revolution was driven by Moore’s Law. Under Moore’s Law, compute doubled every 18 months.

Over the last few years, the amount of total compute has been doubling almost every four months. That’s five times more than Moore’s Law. The amount of progress we have seen in the last four to five years has been really amazing. As a result, a lot of functions that once could only be done by humans—like detecting an object or understanding speech—are being performed by computers and machine learning models. As a result of that, a lot of capabilities are getting unleashed. That is what has led to this enormous increase in the applicability of machine learning—you can use it for personalization, you can use it in health care and finance, you can use it for tasks like churn prediction, fraud detection, and so on.
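The comparison to Moore’s Law can be checked with quick arithmetic. Doubling every four months versus every 18 months means the doubling rate is 18/4 = 4.5 times faster, which is roughly 8x compute growth per year versus about 1.6x:

```python
def annual_growth(doubling_period_months):
    """Growth factor per year for a quantity that doubles every N months."""
    return 2 ** (12 / doubling_period_months)

ml_compute = annual_growth(4)    # doubles every 4 months  -> 8x per year
moores_law = annual_growth(18)   # doubles every 18 months -> ~1.59x per year

print(ml_compute)            # 8.0
print(round(moores_law, 2))  # 1.59
print(18 / 4)                # 4.5 -- the doubling rate is roughly 5x faster
```

That 4.5x ratio of doubling rates is the figure behind “five times more than Moore’s Law.”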

One reason that now is a good time to get started on machine learning in the cloud is just the enormous amount of progress in the last few years that is unleashing these new capabilities that were previously not possible.

The second reason is that a lot of the machine learning services being built in the cloud are making machine learning accessible to a lot more people. Even if you look at four to five years ago, machine learning was something that only very expert practitioners could do and only a handful of companies were able to do because they had expert practitioners. Today, we have more than a hundred thousand customers using our machine learning services. That tells you that machine learning has been democratized to a large extent, so that many more companies can start using machine learning and transforming their business.

Then comes the third reason, which is that you have amazing capabilities that are now possible, and you have cloud-based tools that are democratizing these capabilities. The easiest way to get access to these tools and these capabilities is through the cloud because, first, it provides the foundation of compute and data. Machine learning is, at its core, about throwing a lot of compute at data. In the cloud, you get access to the latest compute. You pay as you go, and you don’t have to make huge upfront investments to set up compute farms. You also get all the storage and the security and privacy and encryption, and so on—all of that core infrastructure that is needed to get machine learning going.

Laurel: So Bratin, how does AWS innovate to help organizations with machine learning, model training, and inference?

Bratin: At AWS, everything we do works back from the customer and figuring out how we reduce their pain points and how we make it easier for them to do machine learning. At the bottom of the stack of machine learning services, we are innovating on the machine learning infrastructure so that we can make it cheaper for customers to do machine learning and faster for customers to do machine learning. There we have two AWS innovations. One is Inferentia and the other is Trainium. These are custom chips that we designed at AWS that are purpose-built for inference, which is the process of making machine learning predictions, and for training. Inferentia today provides the lowest cost inference instances in the cloud. And Trainium, when it becomes available later this year, will be providing the most powerful and the most cost-effective training instances in the cloud.

We have a number of customers using Inferentia today. Autodesk uses Inferentia to host their chatbot models, and they were able to improve the cost and latencies by almost five times. Airbnb has over four million hosts who welcome more than 900 million guests in almost every country. Airbnb saw a two-times improvement in throughput by using the Inferentia instances, which means that they were able to serve almost twice as many requests for customer support than they would otherwise have been able to do. Another company called Sprinklr develops a SaaS customer experience platform, and they have an AI-driven unified customer experience management platform. They were able to deploy their natural language processing models on Inferentia, and they saw significant performance improvements as well.

Even internally, our Alexa team was able to move their inferences over from GPUs to Inferentia-based systems, and they saw more than a 50% improvement in cost due to these Inferentia-based systems. So, we have that at the lowest layer of the infrastructure. On top of that, we have the managed services, where we are innovating so that customers become a lot more productive. That is where we have SageMaker Studio, the world’s first integrated development environment (IDE) for machine learning, which offers tools like debuggers and profilers and explainability, and a host of other tools—like a visual data preparation tool—that make customers a lot more productive. At the top of it, we have AI services where we provide pre-trained models for use cases like search and document processing—Kendra for search, Textract for document processing, image and video recognition—where we are innovating to make it easier for customers to address these use cases right out of the box.

Laurel: So, there are some benefits, for sure, for machine learning services in the cloud—like improved customer service, improved quality, and, hopefully, increased profit, but what key performance indicators are important for the success of machine learning projects, and why are these particular indicators so important?

Bratin: We are working back from the customer, working back from the pain points based on what customers tell us, and inventing on behalf of the customers to see how we can innovate to make it easier for them to do machine learning. One part of machine learning, as I mentioned, is predictions. Often, the big cost in machine learning in terms of infrastructure is in the inference. That is why we came out with Inferentia, which are today the most cost-effective machine learning instances in the cloud. So, we are innovating at the hardware level.

We also announced Trainium. That will be the most powerful and the most cost-effective training instances in the cloud. So, we are first innovating at the infrastructure layer so that we can provide customers with the most cost-effective compute.

Next, we have been looking at the pain points of what it takes to build an ML service. You need data collection services, you need a way to set up a distributed infrastructure, you need a way to set up an inference system and be able to auto scale it, and so on. We have been thinking a lot about how to build this infrastructure and innovation around the customers.

Then we have been looking at some of the use cases. So, for a lot of these use cases, whether it be search, or object recognition and detection, or intelligent document processing, we have services that customers can directly use. And we continue to innovate on behalf of them. I’m sure we’ll come up with a lot more features this year and next to see how we can make it easier for our customers to use machine learning.

Laurel: What key performance indicators are important for the success of machine learning projects? We talked a little bit about how you like to improve customer service and quality, and of course increase profit, but to assign a KPI to a machine learning model, that’s something a bit different. And why are they so important?

Bratin: To assign the KPIs, you need to work back from your use case. So, let’s say you want to use machine learning to reduce fraud. Your overall KPI is, what was the reduction in fraud detection? Or let’s say you want to use it for churn reduction. You are running a business, your customers are coming, but a certain number of them are churning off. You want to then start with, how do I reduce my customer churn by some percent? So, you start with the top-level KPI, which is a business outcome that you want to achieve, and how to get an improvement in that business outcome.

Let’s take the churn prediction example. At the end of the day, what is happening is you have a machine learning model that is using data and the amount of training it had to make certain predictions around which customer is going to churn. That boils down, then, to the accuracy of the model. If the model is saying 100 people are going to churn, how many of them actually churn? So, that becomes a question of accuracy. And then you also want to look at how well the machine learning model detected all the cases.

So, there are two aspects of quality that you’re looking for. One is, of the things that the model predicted, how many of them actually happened? Let’s say this model predicted these 100 customers are going to churn. How many of them actually churn? And let’s just say 95 of them actually churn. So, you have a 95% precision there. The other aspect is, suppose you’re running this business and you have 1,000 customers. And let’s say in a particular year, 200 of them churned. How many of those 200 did the model predict would actually churn? That is called recall, which is, given the total set, how much is the machine learning model able to predict? So, fundamentally, you start from this business metric, which is what is the outcome I want to get, and then you can convert this down into model accuracy metrics in terms of precision, which is how accurate was the model in predicting certain things, and then recall, which is how exhaustive or how comprehensive was the model in detecting all situations.
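The two quality metrics walked through above can be computed directly from sets of predicted and actual churners. A minimal sketch using the transcript’s numbers (100 customers flagged, 95 of them correct, 200 actual churners); the customer IDs here are made up purely for illustration:

```python
def precision_recall(predicted, actual):
    """Precision and recall from sets of predicted and actual churner IDs."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted)  # of those flagged, how many churned?
    recall = true_positives / len(actual)        # of all churners, how many were flagged?
    return precision, recall

# Hypothetical IDs chosen to match the numbers in the example:
# the model flags 100 customers, 95 of whom are among the 200 real churners.
predicted = set(range(100))   # customers 0-99 flagged by the model
actual = set(range(5, 205))   # customers 5-204 actually churned (200 total)

p, r = precision_recall(predicted, actual)
print(p, r)  # 0.95 0.475
```

So the model is 95% precise, but its recall is only 47.5%: it catches fewer than half of the customers who actually churn, which is exactly the gap the recall metric is designed to expose.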

So, at a high level, these are the things you’re looking for. And then you’ll go down to lower-level metrics. The models are running on certain instances on certain pieces of compute: what was the infrastructure cost and how do I reduce those costs? These services, for example, are being used to handle surges during Prime Day or Black Friday, and so on. So, then you get to those lower-level metrics, which is, am I able to handle surges in traffic? It’s really a hierarchical set of KPIs. Start with the business metric, get down to the model metrics, and then get down to the infrastructure metrics.

Laurel: When you think about machine learning in the cloud in the next three to five years, what are you seeing? What are you thinking about? What can companies do now to prepare for what will come?

Bratin: I think what will happen is that machine learning will get more pervasive. Customers will see that they’re able to fundamentally transform the way they do business. Companies will see that they are fundamentally transforming the customer experience, and they will embrace machine learning. We have seen that at Amazon as well—we have a long history of investing in machine learning. We have been doing this for more than 20 years, and we have changed how we serve customers with Alexa, Amazon Go, and Prime. And now with AWS, we have taken the knowledge we have gained over the past two decades of deploying machine learning at scale and are making it available to our customers. So, I do think we will see a much more rapid uptake of machine learning.

Then we’ll see broad use cases like intelligent document processing: a lot of paper-based processing will become automated, because a machine learning model is now able to scan those documents and infer information from them—infer semantic information, not just the syntax. If you think of paper-based processes, whether it’s loan processing or mortgage processing, a lot of that will get automated. Then, we are also seeing businesses get a lot more efficient through personalization and forecasting—supply chain forecasting, demand forecasting, and so on.

We are seeing a lot of uptake of machine learning in health. GE, for example, uses a machine learning service for radiology. They use machine learning to scan radiology images to determine which ones are more serious, and therefore which patients you want to get in early. We are also seeing potential and opportunity for using machine learning in genomics for precision medicine. So, I do think a lot of innovation is going to happen with machine learning in health care.

We’ll see a lot of machine learning in manufacturing. A lot of manufacturing processes will become more efficient, get automated, and become safer because of machine learning.

So, in the next five to 10 years, pick any domain. In sports, the NFL, NASCAR, and the Bundesliga are all using our machine learning services. The NFL uses Amazon SageMaker to give their fans a more immersive experience through Next Gen Stats. Bundesliga uses our machine learning services to make a range of predictions and provide a much more immersive experience. Same with NASCAR. NASCAR has a lot of data history from their races, and they’re using that to train models to provide a much more immersive experience to their viewers, because they can predict much more easily what’s going to happen. So, sports, entertainment, financial services, health care, manufacturing—I think we’ll see a lot more uptake of machine learning, making the world a smarter, healthier, and safer place.

Laurel: What a great conversation. Thank you very much, Bratin, for joining us on Business Lab.

Bratin: Thank you. Thank you for having me. It was really nice talking to you.

Laurel: That was Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. You can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.


The hunter-gatherer groups at the heart of a microbiome gold rush




The first step to finding out is to catalogue what microbes we might have lost. To get as close to ancient microbiomes as possible, microbiologists have begun studying multiple Indigenous groups. Two have received the most attention: the Yanomami of the Amazon rainforest and the Hadza, in northern Tanzania. 

Researchers have made some startling discoveries already. A study by Sonnenburg and his colleagues, published in July, found that the gut microbiomes of the Hadza appear to include bugs that aren’t seen elsewhere—around 20% of the microbe genomes identified had not been recorded in a global catalogue of over 200,000 such genomes. The researchers found 8.4 million protein families in the guts of the 167 Hadza people they studied. Over half of them had not previously been identified in the human gut.

Plenty of other studies published in the last decade or so have helped build a picture of how the diets and lifestyles of hunter-gatherer societies influence the microbiome, and scientists have speculated on what this means for those living in more industrialized societies. But these revelations have come at a price.

A changing way of life

The Hadza people hunt wild animals and forage for fruit and honey. “We still live the ancient way of life, with arrows and old knives,” says Mangola, who works with the Olanakwe Community Fund to support education and economic projects for the Hadza. Hunters seek out food in the bush, which might include baboons, vervet monkeys, guinea fowl, kudu, porcupines, or dik-dik. Gatherers collect fruits, vegetables, and honey.

Mangola, who has met with multiple scientists over the years and participated in many research projects, has witnessed firsthand the impact of such research on his community. Much of it has been positive. But not all researchers act thoughtfully and ethically, he says, and some have exploited or harmed the community.

One enduring problem, says Mangola, is that scientists have tended to come and study the Hadza without properly explaining their research or their results. They arrive from Europe or the US, accompanied by guides, and collect feces, blood, hair, and other biological samples. Often, the people giving up these samples don’t know what they will be used for, says Mangola. Scientists get their results and publish them without returning to share them. “You tell the world [what you’ve discovered]—why can’t you come back to Tanzania to tell the Hadza?” asks Mangola. “It would bring meaning and excitement to the community,” he says.

Some scientists have talked about the Hadza as if they were living fossils, says Alyssa Crittenden, a nutritional anthropologist and biologist at the University of Nevada, Las Vegas, who has been studying and working with the Hadza for the last two decades.

The Hadza have been described as being “locked in time,” she adds, but characterizations like that don’t reflect reality. She has made many trips to Tanzania and seen for herself how life has changed. Tourists flock to the region. Roads have been built. Charities have helped the Hadza secure land rights. Mangola went abroad for his education: he has a law degree and a master’s from the Indigenous Peoples Law and Policy program at the University of Arizona.



The Download: a microbiome gold rush, and Eric Schmidt’s election misinformation plan




Over the last couple of decades, scientists have come to realize just how important the microbes that crawl all over us are to our health. But some believe our microbiomes are in crisis—casualties of an increasingly sanitized way of life. Disturbances in the collections of microbes we host have been associated with a whole host of diseases, ranging from arthritis to Alzheimer’s.

Some might not be completely gone, though. Scientists believe many might still be hiding inside the intestines of people who don’t live in the polluted, processed environment that most of the rest of us share. They’ve been studying the feces of people like the Yanomami, an Indigenous group in the Amazon, who appear to still have some of the microbes that other people have lost. 

But there is a major catch: we don’t know whether those in hunter-gatherer societies really do have “healthier” microbiomes—and if they do, whether the benefits could be shared with others. At the same time, members of the communities being studied are concerned about the risk of what’s called biopiracy—taking natural resources from poorer countries for the benefit of wealthier ones. Read the full story.

—Jessica Hamzelou

Eric Schmidt has a 6-point plan for fighting election misinformation

—by Eric Schmidt, former CEO of Google and cofounder of the philanthropic initiative Schmidt Futures

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.



Navigating a shifting customer-engagement landscape with generative AI



A strategic imperative

Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.

According to McKinsey, while generative AI will affect most business functions, "four of them will likely account for 75% of the total annual value it can deliver." Among these are marketing and sales, and customer operations. Yet despite the technology's benefits, many leaders are unsure about the right approach to take and mindful of the risks associated with large investments.

Mapping out a generative AI pathway

One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.

The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the duration required for formulating a strategy, conduct necessary training across various teams and functions, and identify the areas of value addition. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.

Ayer cites WNS Triange's collaboration with an insurer to streamline its claims process by leveraging generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle's damage from an accident and make a claims recommendation based on the unstructured data provided by the client. "Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer's ability to satisfy their policyholders and reduce the claims processing time," Ayer explains.

All that, however, would not be possible without data on past claims history, repair costs, transaction data, and other necessary data sets to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.

The benefits of third-party experience

Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another matter. "You start off wanting to make sure you don't repeat mistakes other people have made," says Ayer. An external provider can help organizations avoid those mistakes by bringing best practices, frameworks for testing and defining explainability, and benchmarks for return on investment (ROI).

Using pre-built solutions by external partners can expedite time to market and increase a generative AI program’s value. These solutions can harness pre-built industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”


Copyright © 2021 Seminole Press.