

RPA Gets Smarter – Ethics and Transparency Should Be Top of Mind

By Stuart Battersby


The early incarnations of Robotic Process Automation (RPA) technologies followed simple, fixed rules.  These systems were akin to user-interface testing tools in which, instead of a human operator clicking on areas of the screen, software (or a ‘robot’, as it came to be known) would do so.  This freed up user time spent on exceedingly low-level tasks such as scraping content from the screen, copying and pasting, and so on.

Whilst basic in functionality, these early implementations of RPA brought clear speed and efficiency advantages.  The tools evolved to encompass basic workflow automation in the following years, but the processes were rigid, with limited applicability across an enterprise.

Shortly after 2000, automation companies such as UiPath, Automation Anywhere, and Blue Prism were founded (albeit some under different names in their initial incarnations).  With a clear focus on automation, these companies started making significant inroads into the enterprise.

RPA gets smarter

Over the years, the functionality of RPA systems has grown significantly.  No longer are they the rigid tools of their early incarnations; instead, they offer much smarter process automation.  UiPath, for example, lists 20 automation products on its website across groups such as Discover, Build, Manage, Run & Engage.  Its competitors also have comprehensive offerings.

Use cases for Robotic Process Automation are now wide and varied.  For example, with smart technology built-in, rather than just clicking on-screen regions, systems may now automatically extract content from invoices (or other customer-submitted data) and convert this into a structured database format.  These smart features may well be powered by forms of Artificial Intelligence, albeit hidden under the hood of the RPA application itself.  Automation Anywhere has a good example of this exact use case.
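To make this concrete, here is a toy Python sketch of that extraction step, turning free invoice text into a structured record.  The invoice text, field names, and regex patterns are illustrative assumptions; commercial RPA tools use far more robust, often ML-based, extraction.

```python
# Illustrative sketch only: a toy version of "extract invoice fields into a
# structured format". Field names and patterns are hypothetical examples.
import re

INVOICE_TEXT = """
Invoice No: INV-2041
Vendor: Acme Supplies Ltd
Date: 2021-03-14
Total Due: 1,250.00 USD
"""

PATTERNS = {
    "invoice_id": r"Invoice No:\s*(\S+)",
    "vendor": r"Vendor:\s*(.+)",
    "date": r"Date:\s*([\d-]+)",
    "total_due": r"Total Due:\s*([\d,.]+)",
}

def extract_fields(text: str) -> dict:
    """Scrape each field from free text into a structured record."""
    record = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        record[field] = match.group(1).strip() if match else None
    return record

print(extract_fields(INVOICE_TEXT))
```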

Given the breadth of use cases now addressed by RPA technologies across enterprise organizations, it is hard to see a development and product-expansion route that does not add more AI functionality to the RPA tools themselves.  Whilst still being delivered in the package of Robotic Process Automation software, it is likely that this functionality will move from being hidden under the hood, powering specific use cases in the RPA software (such as content extraction), to being functionality in its own right that is accessible to the user.

The blurring of AI & RPA

The RPA vendors will compete with the AI vendors that sell automated machine learning software to the enterprise.  Known as AutoML, these tools enable users with little or no data-science experience (often termed citizen data scientists) to build custom AI models with their own data.  These models are not restricted to specifically defined use cases but can be anything the business users wish to (and have the supporting data to) build.

Returning to the example above, once the data has been extracted from the invoices, why not let the customer build a custom AI model to classify these invoices by priority without bringing in, or connecting to, an additional third-party AI tool?  This is the logical next step in the RPA marketplace; some leaders in the space already have some of this functionality in place.

This blurring of the lines between Robotic Process Automation and Artificial Intelligence is particularly topical right now because, alongside the specialized RPA vendors, established technology companies such as Microsoft are releasing their own low-code RPA solutions to the market.  Taking Microsoft as an example, it has a long history with Artificial Intelligence.  Via Azure, it offers many different AI tools, including tools to build custom AI models and a dedicated AutoML solution.  Most relevant is the push to combine its products into unique value propositions.  In our context here, that means it is likely that low-code RPA and Azure’s AI technologies will be closely aligned.

The evolving discussion of AI ethics

Evolving at the same time as RPA and AI technologies are the discussions of, and in some jurisdictions regulations on, the ethics of AI systems.  Valid concerns are being raised about the ethics of AI and the diversity of the organizations that build it.

In general, these discussions and regulations aim to ensure that AI systems are built, deployed, and used in a fair, transparent and responsible manner.  There are critical organizational and ethical reasons to ensure your AI systems behave ethically.

When systems are built that operate on data representing people (such as in HR, finance, healthcare, or insurance), the systems must be transparent and unbiased.  Even beyond use cases built with people’s data, organizations are now demanding transparency in their AI so that they can effectively assess the operational risks of deploying that AI in their business.

A typical approach is to define the business’s ethical principles, create or adopt an ethical AI framework, and continually evaluate AI systems against both that framework and those principles.
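As one concrete illustration, an automated control that such a framework might include is a simple statistical parity check on a model’s outcomes across groups.  The Python sketch below is minimal; the sample data and the four-fifths threshold are assumptions for demonstration, not a complete fairness assessment.

```python
# Illustrative sketch: compare a model's positive-outcome rate across groups.
# The 80% ("four-fifths") threshold and sample data are demo assumptions.
from collections import defaultdict

def positive_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_check(decisions, threshold=0.8):
    rates = positive_rates(decisions)
    lowest, highest = min(rates.values()), max(rates.values())
    ratio = lowest / highest if highest else 1.0
    return rates, ratio, ratio >= threshold

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates, ratio, passed = parity_check(sample)
print(rates, f"ratio={ratio:.2f}", "PASS" if passed else "REVIEW")
```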

As with RPA, the development of AI models may be outsourced to third-party companies, so evaluating the transparency and ethics of these systems becomes even more important given the lack of insight into the build process.

However, public and organizational discussions of ethics usually occur only in the context of Artificial Intelligence (where media headlines are typically focused).  For this reason, developers and users of RPA systems could feel that these ethical concerns do not apply to them, as they are ‘only’ working with process automation software.

Automation can impact people’s lives

If we go back to the invoice-processing example used before, we saw the potential for a custom AI model within the RPA software to prioritize invoices for payment automatically.  Only a minor technology shift would be needed to change this use case to one in healthcare that prioritized health-insurance claims instead of invoices.

The RPA technology could still extract data from claims documents automatically and translate this into a structured format.  The business could then train a custom classification model (using historical claims data) to prioritize payments, or conversely, flag payments to be put on hold pending review.
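A minimal sketch of that classification step, using scikit-learn on synthetic data, might look like the following.  The features, labels, and model choice are illustrative assumptions, not a description of any vendor’s product.

```python
# Minimal sketch of a claims-prioritization classifier on synthetic data.
# Features, labels, and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical extracted features: e.g. claim amount, days pending, prior claims.
X = rng.normal(size=(500, 3))
# Synthetic historical outcome: 1 = prioritize for payment, 0 = hold for review.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# The ethical concern: each prediction here would gate a real payment.
new_claim = rng.normal(size=(1, 3))
print("prioritize" if model.predict(new_claim)[0] else "hold for review")
```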

However, here the ethical concerns should be very apparent.  The decisions made by this model, held within the RPA software, will directly affect individuals’ health and finances.

As seen in this example, what may seem like relatively benign automation software is actually evolving to reduce (or potentially completely remove) the human in the loop in critical decisions that impact people’s lives.  The technology may or may not be explicitly labeled and sold as Artificial Intelligence; however, the notion of ethics should still very much be top of mind.

We need a different lens

It may be better to see these ethical concerns not through a lens of AI, but through one focussed on automated algorithmic decisioning.

The reality is that it is not just AI technology making decisions that should be of concern, but any automated approach that lacks sufficient human oversight, whether it is powered by a rules-based system, Robotic Process Automation, shallow machine learning, or complex deep learning (a simple sketch of such oversight follows).
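In code, “sufficient human oversight” can be as simple as a confidence gate that routes uncertain automated decisions to a human reviewer rather than acting on them automatically.  The Python sketch below is illustrative; the threshold value is an assumption.

```python
# Sketch of human-in-the-loop gating: act automatically only when the model
# is confident; otherwise route to a human. Threshold is an assumption.
def decide(probability_prioritize: float, threshold: float = 0.9) -> str:
    """Return an action for one claim given the model's confidence."""
    if probability_prioritize >= threshold:
        return "auto-prioritize"
    if probability_prioritize <= 1 - threshold:
        return "auto-hold"
    return "route to human review"  # the human stays in the loop

for p in (0.97, 0.55, 0.04):
    print(f"confidence {p:.2f} -> {decide(p)}")
```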

Indeed, if you look at the UK’s recently announced Ethics, Transparency and Accountability Framework, which is targeted at the public sector, you will see that it is focussed on ‘Automated Decision-Making.’  From the guidance document: “Automated decision-making refers to both solely automated decisions (no human judgment) and automated assisted decision-making (assisting human judgment).”

Similarly, the GDPR has been in force in the European Union for some time now, making clear provisions for individuals’ rights concerning automated individual decision-making.  The European Commission gives the following definition: “Decision-making based solely on automated means happens when decisions are taken about you by technological means and without any human involvement.”

Finally, in 2020 the state of California proposed the Automated Decision Systems Accountability Act, with similar goals and definitions.  Within this Act, Artificial Intelligence (but not Robotic Process Automation explicitly) is called out: “‘Automated decision system’ or ‘ADS’ means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts persons,” with assessment for accuracy, fairness, bias, discrimination, privacy, and security. It is clear, therefore, that the principle of this more general lens is recognized in public policymaking.

Enterprises should apply governance to RPA too

As organizations put in place teams, processes, and technologies to govern the development and use of AI, these must be extended to include all automated decisioning systems.  To reduce the burden and facilitate operation at scale within large organizations, there should not be one set of processes and tools for RPA and another for AI (or indeed, for each AI model).

Such a fragmented approach would result in a huge manual process to gather the relevant information, make it comparable, and map it to the chosen process framework.  Instead, a unified approach should allow for a common set of controls that lead to informed decision-making and approvals.

Nor should this be seen as at odds with the adoption of RPA or AI; clear guidelines and approvals enable teams to go ahead with implementation, knowing the bounds within which they can operate. When using the more general lens, rather than one targeted just at AI, the implication becomes clear: ethics should be top of mind for developers and users of all automated decisioning systems, not just AI, and that includes Robotic Process Automation.

Image Credit: Pixabay; Pexels; thank you!

Stuart Battersby

Chief Technology Officer @ Chatterbox Labs

Dr Stuart Battersby is a technology leader and CTO of Chatterbox Labs. With a PhD in Cognitive Science from Queen Mary, University of London, Stuart now leads all research and technical development for Chatterbox’s ethical AI platform, AIMI.


Fintech Kennek raises $12.5M seed round to digitize lending

By Radek Zielinski


London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.



Fortune 500’s race for generative AI breakthroughs

By Deanna Ritchie


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on its software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has over 20 years of experience in content management and content development.



UK seizes web3 opportunity simplifying crypto regulations

By Deanna Ritchie


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has over 20 years of experience in content management and content development.

