Chatbots are seemingly taking over the internet and mobile applications. First-time visitors to a site or app are now often greeted by a cheery chatbot seeking to answer questions or guide them through an onboarding process.
When site or app users have problems, retailers, banks, and other businesses are asking users if they want to chat rather than call. More often than not, this routes the users into an initial conversation with a chatbot.
Is sending a customer or user to a chatbot something they actually want?
A debate continues to rage about whether chatbots are viewed as annoying or helpful by customers. In fact, both views are true. And the determining factor for whether a user likes interacting with a chatbot — or even prefers it to a human interaction — depends entirely on the context.
Two Paths Diverged in the AI Chatbot Road: T-Mobile’s “No Robots” vs. BofA’s Erica Personal AI
On August 15, 2018, upstart mobile voice and data provider T-Mobile announced it would ban all robots and automated systems from direct interactions with customers on support calls and chats.
“There are no robots or automated phone menus. No getting bounced around from department to department. No shouting ‘representative,’” crowed T-Mobile in a press release.
At the same time, T-Mobile expanded its live customer support hours to 24/7, running counter to the trend of limiting human support agents to working hours. Since the move, T-Mobile claims it has achieved higher levels of customer satisfaction and improved customer retention.
Survey after survey has found people tolerate automated customer support through conversational AI and chatbots but don’t love it and don’t prefer it.
On the other extreme, some large enterprises have taken an alternative path of going all-in on Conversational AI and chatbots. In June 2018, Bank of America launched Erica (a play on the last part of the word America).
The bot resides within Bank of America’s mobile banking app, rolled out to all users. By December 2019, 10 million of Bank of America’s mobile users had activated Erica within their apps and were interacting with the chatbot.
One of the more sophisticated chatbots on the market, Erica lets users ask questions by voice or text message, or simply navigate through tap menus.
Surveys of users showed satisfaction with Erica of over 80%, which is a staggering figure in the often challenging world of financial services NPS scores.
So, which is the right way to deliver a superior Customer Experience in the most efficient manner? T-Mobile’s add-more-humans approach or Bank of America’s deep embrace of expensive but powerful AI?
Do customers prefer to work with bots or humans? The truth is complicated.
Surveys have indicated that customers prefer to speak with humans for support needs.
Conversely, by a variety of metrics, customers are growing more accustomed to chatbots and Conversational AI.
Over time, the reality is chatbots will handle a greater and greater portion of customer interactions and will also become an indispensable tool for human support and sales agents – virtually merging into one support continuum.
Equally important, in technology the true test of adoption is not what surveys say but what users do. Increasingly, that means talking or texting with chatbots rather than waiting to talk to a human.
Consumer Trends Driving Growth of Chatbots
More and more customers are choosing to interact with a chatbot over traditional phone support. This choice is being driven by broad technology user trends.
- Text More Popular Than Voice: We are more used to texting to communicate and many of us prefer it over voice calls; for large swathes of the populace, text is preferred even over live conversations with humans. The rise of WhatsApp and Facebook Messenger, which together have over 3 billion users and are used primarily for chatting (often to avoid SMS fees), has further accelerated the trend.
- More Comfortable Interacting With Machines: We are growing more and more used to interacting with machines to ask questions or make requests. Asking Alexa to turn off the lights, telling Siri to call your mother, or asking an airline chatbot whether any flights into Denver are delayed. Such human-machine interactions are now a normal part of our lives.
- Less Patience: We are becoming less patient. Netflix brings us millions of movie titles in a search window. We use an app to tell our Roomba to clean the house before we get home from work, and get our wish. Technology brings us instant gratification. As a result, we are less willing to spend time on hold or wait for a callback if a chatbot can answer our questions or take care of our problems. Related to this point, chatbots run 24/7 every day of the year; they scale up or down to meet demand and are always available. For consumers used to instant gratification, this is a powerful draw.
- Better Algorithms: The technology behind chatbots, Natural Language Processing (NLP), has progressed in leaps and bounds. This means we impatient humans can more easily interact with chatbots without having to repeat ourselves or resort to multiple phrasings to get a request to register. New models, most notably GPT-3, can reliably and economically be trained to understand specific types of subject matter and respond in ways that are remarkably close to normal human responses.
According to the 2019 AI Index Report, published by a global consortium including Stanford University and Google, NLP systems now match or exceed human performance on some text-comprehension benchmarks. This has allowed more advanced chatbots to handle complex, multi-step support tasks.
Chatbots can provide proactive guidance and even anticipate needs. A flight delay question might prompt an advanced chatbot to offer to book a hotel room for the night because it knows a late flight could mean a missed connection.
Three Simple Questions to Determine Context
Clearly, chatbot usage is growing and users are voting with their texts and their voices, indicating preference. That said, the desire for a human versus a bot remains highly context-dependent.
Context dictates what the chatbot is capable of doing in any given situation. Context is also variable and can shift with the state of the user as they run through a customer or support journey.
Understanding where a user is on the journey, and their context, can inform expectations and proper usage of chatbots. Here are some simple questions to determine whether to use a chatbot, and what the limitations of a chatbot might be in a given situation.
Does the user want to talk to (or likely prefer to talk to) a representative?
This is a no-brainer. If they don’t want to chat, don’t make them chat. Ironically, many companies still push hard to drive users into chat support queues under the thesis that users will learn how to chat and adopt it (and save the company money).
Usually the act of saying “representative” is intentional enough that a company is far better off complying with the customer’s wishes. Here, too, AI can provide a guide.
Over time, companies can gather data about customer preferences and use that to better understand which conversational mode is best for what type of users based on any descriptive characteristics.
Can a chatbot recognize the user?
If a chatbot can recognize the identity of a user, then it can tap profile and historical data about the user to generate more bespoke solutions and conversation. Identifying the user is far easier when the user is on a mobile app or logged into a website or calling from a known phone number.
This question does limit advanced support to existing users rather than new users, for whom there is little history. But when it is possible to recognize the user and match them to a profile, the canvas the chatbot can operate on is much broader, and interactions can be far more detailed rather than limited to simple keyword- and menu-driven exchanges.
Is the user asking a complicated or simple question?
Chatbots can quickly and easily dispatch more personalized answers to many simple questions. “When is my reservation?” or “What is my order status?” are easy to answer when the identity of a user is known and they are operating inside of a controlled environment.
In a similar manner, when the company is using a chatbot to replace a form or other structured information gathering exercise, then chatbots or Conversational AI can operate very effectively.
For more complex questions that involve multiple variables and may not be as easy to understand based on pure keyword analysis, more advanced chatbots that leverage NLP and Conversational AI can increasingly provide back-and-forth support that is on par with or better than that of human agents.
This rides the curve of rapid improvement in AI, as demonstrated by the steady increase in the ability of AI systems to complete ever more complex natural language tasks as well as or better than humans.
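As an illustration, the three questions above can be sketched as a simple triage heuristic. This is a hypothetical sketch, not any vendor's actual routing logic; the field names, tier labels, and complexity cutoff are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Inquiry:
    asked_for_representative: bool  # did the user say "representative"?
    user_profile_id: Optional[str]  # None if the user could not be recognized
    complexity_score: float         # 0.0 (simple FAQ) to 1.0 (multi-step, conditional)

def route(inquiry: Inquiry, complexity_cutoff: float = 0.6) -> str:
    """Decide whether a chatbot or a human agent should take the conversation."""
    # Question 1: if the user wants a human, comply immediately.
    if inquiry.asked_for_representative:
        return "human"
    # Question 2: an unrecognized user limits the bot to simple, menu-driven help.
    if inquiry.user_profile_id is None:
        return "bot-basic"
    # Question 3: complex, multi-variable questions still go to a human,
    # with the bot assisting the agent behind the scenes.
    if inquiry.complexity_score > complexity_cutoff:
        return "human"
    return "bot"
```

In practice, the complexity score would come from an intent classifier rather than a hand-set number, and the cutoff would be tuned against satisfaction and resolution metrics.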
Conclusion: The Future of Customer Conversations Is A Hybrid Between Chatbots and Humans
T-Mobile may claim that it does not force anyone to talk to robots, but in reality its system can automatically recognize whether you are calling from your own device.
Behind the scenes, T-Mobile uses analytics and automation to help customer service agents do their jobs more quickly and efficiently. In this case, the chatbots may not be visible on the front end, but their output and enablement are visible on the back end, with agents acting as the intermediary between the two.
This is the true future of chatbots – a technology that acts as a fluid interface somewhere in the customer journey to provide assistance. The recipient may be a customer talking to a chatbot or a support agent that has a chatbot automatically populating conversational snippets.
In this scenario, a company like T-Mobile can help agents work faster, answer questions more quickly, and breeze through the simplest queries. They can then save more time for the harder customers and questions that internal systems cannot automatically address.
BofA’s Erica can serve as a more forward presence, intercepting and deflecting simple inquiries. When a query grows too complex — and out of context for Erica — then the AI chatbot can easily route the request to a human support agent spending most of their time on tougher cases.
So which is right? Do customers prefer to talk to human agents rather than chatbots, as the surveys indicate? Or do customers prefer using chatbots to waiting to talk to humans, as the usage trends clearly indicate?
The answer is both. If customers are voting with their time spent and their immediate menu choices, they clearly do like intelligent chatbots more than waiting for a human. Meaning, they prefer chatbots and AI, given the right context and the right situation.
On the other hand, humans still and probably always will prefer live support agents when they have complex, nested and conditional questions to resolve.
These types of questions require the most advanced sort of conversational intelligence — one that even agents do better fulfilling when assisted by technology and AI behind the scenes. The irony is that either way, customers are talking to chatbots — directly or indirectly.
The technology to improve the experiences of both T-Mobile and Bank of America customers is invariably the same under the covers. The sooner businesses realize that this is never an either-or equation, the sooner they can determine where AI should sit in their Customer Experience stack.
Fintech Kennek raises $12.5M seed round to digitize lending
London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.
According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.
The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:
“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”
The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:
“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”
The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.
The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.
Fortune 500’s race for generative AI breakthroughs
As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.
Goldman Sachs’ Cautious Approach to Implementing Generative AI
In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.
According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.
One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.
To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.
Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.
UK seizes web3 opportunity simplifying crypto regulations
As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.
Streamlining Cryptocurrency Regulations for Innovation
To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.
The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.
Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.
The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.