

Is Your Data Good Enough for Your Machine Learning/AI Plans?



Developments in AI are a high priority for businesses and governments globally. Yet a fundamental aspect of AI remains neglected: data quality.

AI algorithms rely on reliable data to generate optimal results; if the data is biased, incomplete, insufficient, or inaccurate, the consequences can be devastating.

AI systems that identify patient diseases are an excellent example of how poor data quality can lead to adverse outcomes. When trained on insufficient or flawed data, these systems produce false diagnoses and inaccurate predictions, resulting in misdiagnoses and delayed treatment. For example, a University of Cambridge study of over 400 tools used for diagnosing Covid-19 found the AI-generated reports entirely unusable because of flawed datasets.

In other words, your AI initiatives will have devastating real-world consequences if your data isn’t good enough.

What Does “Good Enough” Data Mean?

There is quite a debate about what 'good enough' data means. Some say good enough data doesn't exist. Others say the pursuit of good data causes analysis paralysis, while HBR outright states that your machine learning tools are useless if your data is terrible.

At WinPure, we define good enough data as complete, accurate, valid data that can be confidently used for business processes with acceptable risks, the level of which depends on a business's individual objectives and circumstances.

Most companies struggle with data quality and governance more than they admit. Adding to the tension, they are overwhelmed and under immense pressure to deploy AI initiatives to stay competitive. Sadly, this means problems like dirty data are not even part of boardroom discussions until they cause a project to fail.

How Does Poor Data Impact AI Systems?

Data quality issues arise at the very start of the process, when the algorithm feeds on training data to learn patterns. For example, if an AI algorithm is fed unfiltered social media data, it picks up abusive, racist, and misogynistic remarks, as seen with Microsoft's AI bot. The inability of some AI systems to detect dark-skinned faces has likewise been attributed to unrepresentative data.

How is this related to data quality?

The absence of data governance, the lack of data quality awareness, and siloed data views (in which such disparities might otherwise have been noticed) all lead to poor outcomes.

What To Do?

When businesses realize they have a data quality problem, they panic and hire. Consultants, engineers, and analysts are brought in blindly to diagnose, clean up data, and resolve issues as quickly as possible. Unfortunately, months pass before any progress is made, and despite spending millions on staffing, the problems don't disappear. A knee-jerk approach to a data quality problem is hardly helpful.

Actual change starts at the grassroots level.

Here are three crucial steps to take if you want your AI/ML project to move in the right direction.

Creating awareness and acknowledging data quality issues

For starters, evaluate the quality of your data by building a culture of data literacy. Bill Schmarzo, a powerful voice in the industry, recommends using design thinking to create a culture where everyone understands and can contribute to an organization’s data goals and challenges.

In today's business landscape, data and data quality are no longer the sole responsibility of IT or data teams. Business users must be aware of dirty-data problems such as inconsistent and duplicate records, among other issues.

So the first critical step: make data quality training an organizational effort and empower teams to recognize the attributes of poor data.

Here’s a checklist you can use to begin a conversation on the quality of your data.

Data Health Checklist. Source: WinPure
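Several of the checklist items can be turned into quick automated checks. Here is a minimal sketch in plain Python, assuming records arrive as dictionaries; the patient fields and metrics are illustrative examples, not part of WinPure's actual checklist:

```python
# Minimal data health check: per-field completeness and duplicate rate.
# The patient records and field names below are illustrative assumptions.

def health_check(records, required_fields, key_field):
    """Report per-field completeness and the duplicate rate on a key field."""
    n = len(records)
    completeness = {
        field: sum(1 for r in records if r.get(field) not in (None, "")) / n
        for field in required_fields
    }
    keys = [r.get(key_field) for r in records]
    duplicate_rate = 1 - len(set(keys)) / n
    return {"completeness": completeness, "duplicate_rate": duplicate_rate}

patients = [
    {"id": "p1", "age": 42, "diagnosis": "flu"},
    {"id": "p2", "age": None, "diagnosis": "covid"},  # missing age
    {"id": "p2", "age": 37, "diagnosis": "covid"},    # duplicate id
    {"id": "p3", "age": 58, "diagnosis": ""},         # missing diagnosis
]

report = health_check(patients, ["age", "diagnosis"], key_field="id")
print(report["completeness"])   # {'age': 0.75, 'diagnosis': 0.75}
print(report["duplicate_rate"]) # 0.25
```

Even a report this simple gives business users concrete numbers to discuss, rather than a vague sense that the data is "dirty."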

Devise a plan for meeting quality metrics

Businesses often make the mistake of underestimating data quality problems. They hire data analysts for mundane data-cleaning tasks instead of focusing on planning and strategy. Some businesses use data management tools to clean, de-dupe, merge, and purge data without a plan. Unfortunately, tools and talent cannot solve problems in isolation. You need a strategy for meeting data quality dimensions.

The strategy must address data collection, labeling, and processing, and whether the data fits the AI/ML project. For instance, if an AI recruitment program only selects male candidates for a tech role, it's obvious the training data was biased and incomplete (it did not include enough data on female candidates), and therefore inaccurate. This data did not serve the true purpose of the AI project.
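A representation check of the kind that would have flagged the biased recruitment dataset described above can be sketched in a few lines; the gender labels and the 30% minimum share are illustrative assumptions, not a standard threshold:

```python
from collections import Counter

def underrepresented_groups(values, min_share=0.3):
    """Return groups whose share of the training data falls below min_share."""
    counts = Counter(values)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items() if c / total < min_share}

# A training set dominated by one group: 9 male records, 1 female record.
genders = ["male"] * 9 + ["female"]
print(underrepresented_groups(genders))  # {'female': 0.1}
```

Run before training, a check like this turns "the data was biased" from a post-mortem finding into a pre-flight warning.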

Data quality goes beyond the mundane tasks of cleanups and fixes. Setting up data integrity and governance standards before beginning the project is best. It saves a project from going kaput later!

Asking the right questions & setting accountability

There are no universal standards for 'good enough' data or data quality levels. Instead, it all depends on your business's information management system, your data governance guidelines (or the absence of them), your team's knowledge, and your business goals, among numerous other factors.

Here are a few questions to ask your team before kickstarting the project:

  • What’s the origin of our information, and what is the data collection method?
  • What issues affect the data collection process and threaten positive outcomes?
  • What information does the data deliver? Does it comply with data quality standards (i.e., is the information accurate, complete, and consistent)?
  • Are designated individuals aware of the importance of data quality and the costs of poor quality?
  • Are roles and responsibilities defined? For example, who’s required to maintain regular data cleanup schedules? Who’s responsible for creating master records?
  • Is the data fit for purpose?

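The questions above, together with the earlier "acceptable risks" definition, can be made concrete as a quality gate: agree on a minimum value per metric for the project, then block the pipeline when measurements fall short. The metric names and thresholds below are illustrative assumptions:

```python
# Hypothetical quality gate: compare measured metrics against per-project
# minimums agreed with stakeholders. Names and numbers are illustrative.

def quality_gate(metrics, thresholds):
    """Return metrics that miss their agreed minimum; empty means fit for use."""
    return {name: (value, thresholds[name])
            for name, value in metrics.items()
            if value < thresholds[name]}

measured = {"completeness": 0.97, "accuracy": 0.88, "consistency": 0.99}
required = {"completeness": 0.95, "accuracy": 0.90, "consistency": 0.95}

failures = quality_gate(measured, required)
print(failures)  # {'accuracy': (0.88, 0.9)}
```

Assigning a named owner to each failing metric answers the accountability questions directly: whoever owns the metric owns the cleanup schedule.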
Ask the right questions, assign the right roles, implement data quality standards, and help your team address challenges before they become problems!

To Conclude

Data quality isn't just about fixing typos or errors; it's about ensuring AI systems aren't discriminatory, misleading, or inaccurate. Before launching an AI project, it's necessary to address the flaws in your data and tackle data quality challenges. Moreover, initiate organization-wide data literacy programs to connect every team to the overall objective.

Frontline employees who handle, process, and label the data need training on data quality to identify bias and errors in time.

Featured Image Credit: Provided by the Author; Thank you!

Interior Article Images: Provided by the Author; Thank you!

Farah Kim

Farah Kim is a human-centric marketing consultant with a knack for problem-solving and simplifying complex information into actionable insights for business leaders. She’s been involved in tech, B2B, and B2C since 2011.


Fintech Kennek raises $12.5M seed round to digitize lending



London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.



Fortune 500’s race for generative AI breakthroughs


Deanna Ritchie


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.



UK seizes web3 opportunity simplifying crypto regulations


Deanna Ritchie


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.


Copyright © 2021 Seminole Press.