
Politics

Generative AI: Posing a Risk of Criminal Abuse



The use of generative artificial intelligence (AI) by hackers has become an emerging threat to cybersecurity. Generative AI allows hackers to generate realistic and convincing fake data, such as images, videos, and text, which they can use for phishing scams, social engineering attacks, and other types of cyberattacks.

In this article, we will provide a comprehensive technical analysis of generative AI used by hackers, including its architecture, operation, and deployment.

Different Kinds of Generative AI

Generative AI is a subset of machine learning (ML) that involves training models to generate new data that is similar to the original training data. Hackers can use various types of generative AI models, such as generative adversarial networks (GANs), variational autoencoders (VAEs), and recurrent neural networks (RNNs).

  1. Generative Adversarial Networks (GANs): GANs consist of two neural networks: a generator and a discriminator. The generator generates fake data, and the discriminator distinguishes between real and fake data. The generator learns to create realistic data by receiving feedback from the discriminator. Hackers can use GANs to create fake images, videos, and text.
  2. Variational Autoencoders (VAEs): VAEs are another type of generative AI model that involves encoding input data into a lower-dimensional space and then decoding it to generate new data. VAEs can be used to generate new images, videos, and text.
  3. Recurrent Neural Networks (RNNs): RNNs are a type of neural network that can generate new data sequences, such as text or music. Hackers can use RNNs to generate fake text, such as phishing emails.
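The adversarial loop behind GANs can be sketched in a few lines. The toy example below (pure NumPy; the data distribution, learning rate, and one-parameter generator are illustrative assumptions, not any real attack pipeline) trains a generator that only learns a scalar shift to match a one-dimensional Gaussian, which keeps the gradients simple enough to write by hand:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data comes from N(4, 1). The generator maps noise z ~ N(0, 1)
# to fake samples x = theta + z, so it only has to learn the shift theta.
theta = 0.0          # generator parameter
w, c = 0.0, 0.0      # discriminator: D(x) = sigmoid(w * x + c)
lr, batch = 0.05, 64

for _ in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    fake = theta + rng.normal(0.0, 1.0, batch)

    # Discriminator step: descend the loss -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: descend the non-saturating loss -log D(fake).
    z = rng.normal(0.0, 1.0, batch)
    d_fake = sigmoid(w * (theta + z) + c)
    theta -= lr * np.mean(-(1 - d_fake) * w)

print(f"learned shift: {theta:.2f}")  # drifts toward the real mean of 4
```

The same generator-versus-discriminator dynamic, scaled up to deep networks, is what produces realistic fake images and video.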

Generative AI: The Risk

Generative AI models operate by learning patterns and relationships in the original training data and then generating new data that is similar to the original data.

Hackers can train these models on large datasets of real data, such as images, videos, and text, to generate convincing fake data. Hackers can also use transfer learning to fine-tune existing generative AI models to generate specific types of fake data, such as images of a specific person or fake emails that target a particular organization.

Transfer learning involves taking a pre-trained generative AI model and fine-tuning it on a smaller dataset of new data. Hackers can use a range of machine learning algorithms to generate convincing fake data.
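Mechanically, fine-tuning means freezing most of a pre-trained model's weights and training only a small new component on the new dataset. The sketch below shows the shape of that workflow; as an illustrative assumption, the "pre-trained" extractor is just a frozen random projection rather than a real trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pre-trained model: a frozen feature extractor. In real
# transfer learning this would be the early layers of a large trained network.
W_frozen = rng.normal(size=(8, 4))

def features(x):
    return np.tanh(x @ W_frozen)   # "pretrained" layers, never updated below

# A small new dataset for fine-tuning.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new head (v, b) is trained -- this is the fine-tuning step.
v, b, lr = np.zeros(4), 0.0, 0.5
Z = features(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ v + b)))   # sigmoid of the head's logits
    grad = p - y                              # d(log loss)/d(logit)
    v -= lr * Z.T @ grad / len(X)
    b -= lr * grad.mean()

acc = ((Z @ v + b > 0) == (y == 1)).mean()
print(f"fine-tuned head accuracy: {acc:.2f}")
```

Because only the small head is updated, fine-tuning needs far less data and compute than training from scratch, which is exactly what makes it attractive to attackers repurposing public models.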

In more detail, GANs can be used to generate realistic images and videos by training the generator on a dataset of real images and videos. VAEs can be used to generate new images by encoding them into a lower-dimensional space and then decoding them back into the original space. RNNs can be used to generate fake text, such as phishing emails.

Hackers can train an RNN on a large dataset of legitimate emails and then fine-tune it to generate fake emails that are similar in tone and style to the original emails. These fake emails can contain malicious links or attachments that can infect the victim’s computer or steal sensitive information.

Academic Research: Generative AI for Malicious Activities

Several research papers have explored the use of generative AI in cyberattacks. For example, a paper titled “Generating Adversarial Examples with Adversarial Networks” explored how GANs can be used to generate adversarial examples that can fool machine learning models. Adversarial examples are inputs to machine learning models that have been intentionally designed to cause the model to make a mistake.
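That paper generates adversarial examples with a GAN; the underlying idea is easier to see with the classic fast gradient sign method (FGSM), a simpler gradient-based technique shown here instead. The victim model, weights, and input below are made-up illustrative numbers:

```python
import numpy as np

# A toy linear "victim" classifier: predict class 1 when w . x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, -0.2, 0.3])   # clean input; score = 1.15, so class 1

# FGSM perturbation: for a linear model, the gradient of the score with
# respect to x is just w, so step each feature against sign(w).
eps = 0.4
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # prints "1 0": the prediction flips
```

Each feature moves by at most eps, so the adversarial input stays close to the original while crossing the decision boundary, which is the defining property of an adversarial example.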

Another paper titled “Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN” explored how GANs can be used to generate adversarial malware examples that can evade detection by antivirus software. The paper demonstrated that GANs could be used to generate malware samples that can bypass signature-based detection methods and evade heuristic-based detection methods as well.

In addition to research papers, there are also tools and frameworks available that allow hackers to easily generate fake data using generative AI. For example, DeepFakes is a tool that allows users to create realistic fake videos by swapping the faces of people in existing videos. This tool can be used for malicious purposes, such as creating fake videos to defame someone or spread false information.

Generative AI: Facilitating the Work of Criminal Actors

Hackers now use generative AI models in various ways to carry out cyberattacks. For example, they can use fake images and videos to craft convincing phishing emails that appear to come from legitimate sources, such as banks or other financial institutions.

Criminal actors can also use text generated by large language models, such as OpenAI's GPT models or similar tools, to create convincing phishing emails that are personalized to the victim. These emails can use social engineering tactics to trick the victim into clicking on a malicious link or providing sensitive information.

Generative AI has several use cases for hackers, including:

  1. Phishing attacks: Hackers can use generative AI to create convincing fake data, such as images, videos, and text, to craft phishing emails that appear to come from legitimate sources. These emails can contain links or attachments that install malware on the victim’s computer or steal their login credentials.
  2. Social engineering attacks: Generative AI can be used to create fake social media profiles that appear to be real. Hackers can use these profiles to gain the trust of their targets and trick them into providing sensitive information or clicking on a malicious link.
  3. Malware development: Hackers can use generative AI to create new strains of malware that are designed to evade detection by traditional antivirus software. By generating thousands of variants of a single malware sample, they can create unique versions of the malware that are difficult to detect.
  4. Password cracking: Generative AI can be used to generate new password candidates for brute force attacks on password-protected systems. By training AI models on existing passwords and patterns, hackers can generate candidates that are more likely to succeed.
  5. Fraudulent activities: Hackers can use generative AI to create fake documents, such as invoices and receipts, that appear to be legitimate. They can use these documents to carry out fraudulent activities, such as billing fraud or expense reimbursement fraud.
  6. Impersonation attacks: Generative AI can be used to create fake voice recordings or videos that can be used to impersonate someone else. This can be used to trick victims into providing sensitive information or carrying out unauthorized actions.

Reducing the Risk of Generative AI Misuse by Cybercriminals

With cybercriminals increasingly using generative AI to carry out malicious activities, it has become crucial for individuals, organizations, and governments to take appropriate steps to reduce the risk of its misuse. The following measures can help achieve this goal:

  1. Implement Strong Security Measures: Organizations and individuals should implement strong security measures to protect their systems and data from cyber threats. This includes using multi-factor authentication, strong passwords, and regularly updating software and applications.
  2. Develop Advanced Security Tools: Researchers and security experts should continue to develop advanced security tools that can detect and prevent cyberattacks that use generative AI. These tools should be able to identify and block malicious traffic that uses fake data generated by AI models.
  3. Increase Awareness and Education: It is important to increase awareness and education about the potential risks of generative AI misuse. This includes training employees and individuals on how to identify and avoid phishing attacks, social engineering tactics, and other types of cyber threats.
  4. Strengthen Regulations: Governments and regulatory bodies should strengthen regulations around the use of generative AI to prevent its misuse. This includes setting standards for data privacy and security, as well as monitoring and enforcing compliance.

Reducing the risk of generative AI misuse by cybercriminals requires a collective effort from individuals, organizations, and governments. By implementing strong security measures, developing advanced security tools, increasing awareness and education, and strengthening regulations, we can create a safer and more secure digital world.
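As a concrete example of the first measure, the time-based one-time passwords behind most multi-factor authentication apps can be generated with nothing but the Python standard library. This is a minimal sketch of RFC 6238 (TOTP over HMAC-SHA1); a production deployment would also need secret provisioning, rate limiting, and clock-drift handling:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP over HMAC-SHA1."""
    if t is None:
        t = int(time.time())
    counter = struct.pack(">Q", t // step)            # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59 yields "94287082" for 8 digits.
print(totp(b"12345678901234567890", t=59, digits=8))  # prints "94287082"
```

Because the code changes every 30 seconds and is derived from a shared secret, a phished password alone is no longer enough to take over the account.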

Conclusion

In conclusion, generative AI is a powerful tool that can be used for both legitimate and malicious purposes. While it has many potential applications in fields such as medicine, art, and entertainment, it also poses a significant cybersecurity threat.

Hackers can use generative AI to create convincing fake data that can be used to carry out phishing scams, social engineering attacks, and other types of cyberattacks. It is essential for cybersecurity professionals to stay up-to-date with the latest advancements in generative AI and develop effective countermeasures to protect against these types of attacks.

Featured Image Credit: Graphic Provided by the Author; Thank you!

Jim Biniyaz

CEO and Co-Founder

Jim is CEO and Co-Founder of ResilientX Security and a General Partner in Parrot Media Group. He is passionate about Cyber Security, innovation, and product development. Previously Jim was Co-Founder of DeltaThreat and Next IQ Ltd.


Fintech Kennek raises $12.5M seed round to digitize lending



London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.



Fortune 500’s race for generative AI breakthroughs


Deanna Ritchie


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on its software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has over 20 years of experience in content management and content development.



UK seizes web3 opportunity simplifying crypto regulations


Deanna Ritchie


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!



Copyright © 2021 Seminole Press.