
AI Brings New Capabilities and Risks to Healthcare Data Security


Prashanth Samudrala


Data protection is a critical aspect of properly managing a healthcare organization’s IT environment. Healthcare data remains one of the top targets for cybercriminals because of how sensitive it is: the targeted datasets include personally identifiable information (PII), financial information, and health records.

These organizations can strengthen their systems by introducing periodic updates and applications as part of their DevSecOps strategy. Speed, reliability, and security are all critical aspects of a successful DevSecOps approach, and the tools and processes used to pursue them dictate an organization’s level of success.

That said, while new tools are constantly being released, recent advancements in artificial intelligence (AI) are receiving the most attention. For example, generative AI and large language models (LLMs) are helping workers in various industries expedite processes and offload manual tasks, and the underlying models keep improving.

Developers are discovering that AI tools can quickly produce lines of code from a few simple prompts. This technology is still very young, so it’s unclear how successful these efforts will be, but that isn’t stopping many development teams from diving right in.

Healthcare companies need to retain strict control over their IT infrastructure. So, how do AI tools factor into their requirements?

Generative AI and LLM tools can significantly shorten time to market, but what are the risks? Are the necessary levels of control possible for healthcare DevSecOps teams?

Let’s explore where this technology is currently, what it means for InfoSec teams, and how to utilize these powerful new tools safely.

How Generative AI and LLM Work

Both generative AI and LLM tools work with prompts. A user can ask questions or request a function, and the tool generates a response. These responses are tweaked with further questions or prompts to better suit the user’s needs.

However, there’s a difference between generative AI and LLM. Generative AI describes any type of artificial intelligence that uses learned behavior to produce unique content. It generates pictures and text and encompasses large language models and other types of AI.

On the other hand, LLMs are highly refined forms of generative AI. They are trained on large amounts of data, produce human-like responses, and are more applicable to DevOps practices. Users can input commands asking the program to create a flow or a trigger, for example, and the LLM produces code that matches the request.
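As a rough sketch of that interaction, the Python snippet below sends a natural-language prompt to the OpenAI Chat Completions API and returns the generated code. The model name and the Apex-trigger prompt are illustrative assumptions, not a prescribed setup.

```python
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def generate_code(prompt: str) -> str:
    """Send a natural-language prompt and return the model's reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4",  # illustrative; use whatever model the team has vetted
            "messages": [
                {"role": "system", "content": "You are a coding assistant."},
                {"role": "user", "content": prompt},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Hypothetical request mirroring the flow/trigger example above.
print(generate_code("Write an Apex trigger that logs new Account records."))
```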

Choosing the Right Model

There are a variety of AI models to choose from. Open-source models built on previous releases are retrained with new source material daily. Larger, more popular offerings like Google Bard and OpenAI’s ChatGPT are the most well-known large language models in use.

These tools are trained on websites, articles, and books. The information contained within this source text informs responses to user queries and dictates how the program formulates its responses.

The architecture of generative AI tools is built from multiple layers of attention mechanisms that help the model understand the relationships and dependencies between words and statements, allowing it to be more conversational.

The data fed into an AI model informs the responses. These systems are refined over time by learning from interactions with users as well as new source material. Further training and refinement will make these tools more accurate and reliable.

Learning from user-input data is a great way to expedite the learning process for generative AI and LLM tools. However, this approach can introduce data security risks for DevSecOps teams. But before we dig into the risks, let’s look at what teams stand to gain from implementing generative AI tools.

What Can Generative AI/LLM Do for DevOps?

The available toolset for developers is quickly becoming more specialized. Tools like Einstein GPT have the potential to change the way we look at software development and enable healthcare organizations to decrease the time to market for their software development practices.

Here are a few of the ways LLM tools can benefit DevOps teams.

  1. Increase Release Velocity

Speed is a major benefit for DevOps teams. The ability to quickly introduce a reliable update or application makes an organization more flexible and better able to respond to emerging issues. Healthcare organizations that consistently ship timely releases lead the industry and are more likely to succeed.

LLM tools help developers write large chunks of code in a fraction of the time it would take to write it by hand. Putting the development stage of the application life cycle on the fast track with automated code generation positions teams to produce updates much more quickly.

  2. Reduce Manual Processes

Our team members are our greatest assets, but human error is unavoidable. Introducing new automated tools to the DevOps pipeline goes a long way toward reducing errors and streamlining operations. This is just as true for LLM tools as it is for standard DevOps tools like static code analysis and CI/CD automation.

The ability for developers to input instructions and have the LLM tool perform a large percentage of the coding greatly increases productivity.

Manual, repetitive tasks lead to mistakes. But when developers can offload most of the writing to an LLM, all they need to do is review the code before committing it to the project.
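As a minimal sketch of that review step, the pre-commit gate below refuses a commit when any staged Python file fails to parse. Treat the parse check as a floor, not a substitute for human review; the file filtering and the check itself are illustrative assumptions.

```python
import ast
import subprocess
import sys

def staged_python_files() -> list[str]:
    """Return Python files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    failures = []
    for path in staged_python_files():
        try:
            with open(path, encoding="utf-8") as fh:
                ast.parse(fh.read(), filename=path)  # does it even parse?
        except SyntaxError as exc:
            failures.append(f"{path}: {exc}")
    if failures:
        print("Commit blocked; review the generated code first:")
        print("\n".join(failures))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```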

  3. Provide Reference Material

Confusion leads to lost time. Productivity drops when developers can’t find the answer to a question or encounter a confusing error. Generative AI and LLM tools provide context and answers to specific questions in real-time.

Detailed explanations for programming language documentation, bug identification, and usage patterns are all available at your developers’ fingertips.

Troubleshooting becomes streamlined, allowing your team to get back to work instead of losing hours to dead ends. LLM tools suggest fixes and debugging strategies to keep updates on schedule.

Potential Data Security Risks Associated with AI

Responses to LLM queries are different every time. And while this might work well in a conversational setting, it can lead to issues for developers using the technology to write code. Bad code leads to data security vulnerabilities. For regulated industries like healthcare, every potential vulnerability needs to be examined.

There are still a lot of questions about how the utilization of these tools will play out, but here are a few key considerations:

  1. Unreliable Results

Generative AI and LLM tools are very quick to produce results, but the results may not be high quality. All the results—whether it’s an answer to a question about history or a line of code—come from input data. If that source data contains errors, so will the results the LLM tool provides.

DevOps teams have standards they expect their developers to achieve. The code produced by LLM tools doesn’t automatically adhere to these guidelines.

The resulting code may not perform as intended; it’s simply a response to a prompt. And while these tools are a huge advancement over any query-based tool we’ve seen in the past, they’re still not perfect.
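One hedged way to act on that caution is to treat every generated snippet as untrusted until it passes acceptance checks. The sketch below assumes the team asked an LLM for a Python function named normalize_phone; the function name and test cases are hypothetical, and the exec call should only ever run inside a sandboxed environment.

```python
# LLM output is untrusted until it passes checks: it must parse, define the
# expected function, and reproduce known input/output pairs.
def accept_generated_code(source: str) -> bool:
    try:
        code = compile(source, "<llm-output>", "exec")
    except SyntaxError:
        return False  # reject outright if it does not parse
    namespace: dict = {}
    exec(code, namespace)  # NOTE: only run inside a sandboxed environment
    func = namespace.get("normalize_phone")  # hypothetical requested function
    if func is None:
        return False
    # Known pairs act as a minimal acceptance test for the generated code.
    cases = {"(312) 555-0100": "+13125550100", "312.555.0100": "+13125550100"}
    return all(func(raw) == want for raw, want in cases.items())
```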

  2. Compliance Concerns

Tools like Einstein GPT are so new that there are a lot of questions regarding how they will impact a DevOps pipeline. When it comes to regulatory compliance with data security regulations, industries like healthcare need to get some answers before they can safely and confidently use these tools.

For example, what happens to the code generated by an LLM tool? Are you storing it in a public repository? If so, unprotected source code becomes a serious compliance concern. What would happen if this code were used in a healthcare organization’s production environment?

These tools draw their development knowledge from public sources such as GitHub. It’s impossible to know exactly what went into this training, so security flaws may be baked in, and anyone whose queries are answered with insecure code inherits the same risk.

Regulated industries need to be particularly careful with these tools. Healthcare organizations handle incredibly sensitive information. The level of control needed by regulated industries simply isn’t possible at this point with LLM and generative AI tools.

  3. Implementation Challenges

LLM tools increase the pace at which developers produce code. They remove the bottleneck from the development stage, but that bottleneck simply moves farther down the line. There is a tipping point between moving fast and moving too fast, and maintaining control will be challenging.

A surrounding infrastructure of automated DevOps tools can help ease the strain of expedited development, but it is too much to take on all at once if systems aren’t already in place. These tools are already out there, and developers are using them because of how much easier they make the job. Management might ask teams to avoid them, but limiting usage will be difficult.

How to Prevent These Issues

These tools are quickly growing in popularity. As new LLM tools continue to roll out, DevOps teams don’t have a lot of time to prepare. This means healthcare organizations need to begin preparing today to stay ahead of the potential vulnerabilities associated with these tools.

Here are a few things that can help you avoid the potential downsides of LLM and generative AI tools.

  1. Strengthen Your DevOps Pipeline

An optimized DevOps pipeline will include an array of automated tools and open communication across departmental teams. Enabling team members with automated tools ensures total coverage of a project and reduces manual processes.

These factors will become increasingly necessary as LLM tools boost the speed at which code is written. Harnessing this speed is crucial to ensuring all quality checks complete without creating issues farther down the pipeline.

Implementing and perfecting the usage of these tools sets up teams for success as LLM tools become widely available. Healthcare companies need to be able to control their DevOps pipeline. A surrounding DevOps infrastructure provides the support needed to achieve that control.
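To make the idea of a surrounding infrastructure concrete, here is a minimal sketch of a pipeline gate in Python, where each automated check must pass before the release moves on. The flake8 and pytest commands are illustrative stand-ins for whatever analysis and test stages a team actually runs.

```python
import subprocess
import sys

# Each gate must pass before the next runs; the commands are illustrative.
STAGES = [
    ("static analysis", ["flake8", "src/"]),
    ("unit tests", ["pytest", "-q"]),
]

for name, cmd in STAGES:
    print(f"Running {name}...")
    if subprocess.run(cmd).returncode != 0:
        print(f"{name} failed; halting the pipeline before release.")
        sys.exit(1)

print("All gates passed; the release can move forward.")
```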

  2. Scan Code with Static Code Analysis

The code produced by LLM tools is unreliable. This means your team needs to spend more time on the back end of the development stage to ensure any errors are fixed before the code merges with the master repository.

Static code analysis is a non-negotiable aspect of a healthcare organization’s DevOps toolset. This automated tool checks every line of code against internal rules to flag anything that could result in bugs or errors if left unaddressed.
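As a minimal illustration of one such internal rule, the Python sketch below uses the standard-library ast module to flag string literals assigned to credential-like names. The suspect-name list is illustrative, and production tools apply far broader rule sets.

```python
import ast

class HardcodedSecretRule(ast.NodeVisitor):
    """Flag string literals assigned to names that look like credentials."""

    SUSPECT = ("password", "secret", "api_key", "token")  # illustrative list

    def __init__(self) -> None:
        self.findings: list[str] = []

    def visit_Assign(self, node: ast.Assign) -> None:
        for target in node.targets:
            if (isinstance(target, ast.Name)
                    and any(word in target.id.lower() for word in self.SUSPECT)
                    and isinstance(node.value, ast.Constant)
                    and isinstance(node.value.value, str)):
                self.findings.append(
                    f"line {node.lineno}: hardcoded value assigned to '{target.id}'")
        self.generic_visit(node)

# Example run against a snippet an LLM might plausibly generate.
source = 'api_key = "sk-live-1234"\nretries = 3\n'
rule = HardcodedSecretRule()
rule.visit(ast.parse(source))
print("\n".join(rule.findings) or "clean")
```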

And while it might be tempting to settle for a generic static code analysis tool, generic tools simply don’t provide the coverage needed to achieve consistently high code quality and regulatory compliance.

  3. Offer Continuous Training

Human error is the number one cause of data loss. Mitigate it by leaning on automated tools that reduce manual work and by offering training to new and existing team members. LLM tools are powerful, but their benefits are matched by their risks, and both depend on how they’re used.

To ensure successful implementation, communicate best practices to your team and clearly define your organization’s expectations. These best practices include verifying proper structure in every piece of code that comes from an LLM tool, backing up critical system data, and avoiding unsanctioned tools. Healthcare companies especially need to be careful with how their teams interact with these platforms, given the sensitivity of the data they hold.

Proper Attention Starts Today

Generative AI and LLM tools will only become more prevalent. They offer potentially great benefits, but they also carry significant risks. Healthcare companies must be intentional when building their DevOps approach and, without fail, test every line of code that comes from an LLM tool.

Featured Image Credit: Tima Miroshnichenko; Pexels; Thank you!

Prashanth Samudrala

Vice President of Products – AutoRABIT

Prashanth is the VP of Product Management for AutoRABIT. As a former Salesforce developer and architect, his knowledge of Salesforce DevOps comes from extensive experience. He currently lives in Chicago, IL and is a big fan of its food and beer.


Fintech Kennek raises $12.5M seed round to digitize lending



London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.


Fortune 500’s race for generative AI breakthroughs


Deanna Ritchie


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.


UK seizes web3 opportunity simplifying crypto regulations


Deanna Ritchie


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.
