

Defending AI With AI: The AI-Enabled Solutions to Next-Gen Cyberthreats – ReadWrite




The intersection of AI and cybersecurity is a subject of growing concern in the industry, particularly around how AI can be used to mitigate attacks and neutralize threats. Many stakeholders are coming to terms with the fact that AI can be a force for evil too. According to BCG, over 90% of cybersecurity professionals in the US and Japan expect attackers to start using AI to launch attacks. In fact, this is already becoming a reality.

AI presents big opportunities for cyber attackers, allowing them to scale attacks up in speed, volume, and sophistication to massive proportions. According to Alejandro Correa Bahnsen of Cyxtera, AI-based attacks can bypass traditional detection systems more than 15% of the time, whereas an average phishing attack without AI bypasses detection only 0.3% of the time. An example is #SNAP_R, an automated spear-phishing tool.

Defending AI With AI: The AI-Enabled Solutions to Next-Gen Cyberthreats

In addressing this growing threat, it's important to note that an AI-based offense requires an AI-based defense: where deepfakes can trick security systems, for instance, stronger AI-backed authentication should be applied.

Organizations are only just coming to terms with the risks of artificial intelligence, and it is pertinent for businesses to act as quickly as possible to protect their systems against these attacks. WannaCry introduced a whole new level of sophistication to cyber-attacks; imagine that amplified by AI. It shouldn't be allowed to happen.

Risks of AI in conducting cyber attacks

1. Scalability

At the 2016 Black Hat Conference, senior researchers debuted an automated spear-phishing program. Spear phishing is ordinarily laborious and time-consuming: depending on the scope of the attack, the attacker may have to collect large amounts of information about their targets for effective social engineering. Those researchers demonstrated how data science and machine learning can be used to automate and scale spear-phishing attacks.

2. Impersonation

Months ago, experts at the Dawes Centre for Future Crime ranked deepfakes as the most serious AI crime threat. It's not hard to see why. Deepfakes are a tool of disinformation, political manipulation, and deceit. What's more, malicious actors can use deepfakes to impersonate trusted contacts, compromising business emails or conducting voice-phishing financial fraud. Worst of all, they are hard to detect.

The possibility of deepfakes undermines voice biometrics and authentication. Deepfakes will also lead people to distrust audio and visual evidence, which have long been regarded as tamper-proof sources of substantiation.

3. Detection-evasion

One way AI can be used to evade detection is data poisoning. By targeting and compromising the data used to train and configure intelligent threat detection systems, for example by making a system label obvious spam as safe, attackers can move more stealthily and more dangerously.

Research shows that poisoning just 3% of a data set can raise error rates by up to 91%. AI can be used both to evade detection and to adapt to defensive mechanisms.
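The mechanics of data poisoning can be illustrated with a toy keyword-based spam filter. This is a minimal sketch: the corpus, phrases, and poisoning fraction are invented for illustration and far cruder than any production detector.

```python
def train_keyword_filter(emails):
    """Count how often each word appears in spam vs. ham training emails."""
    spam_words, ham_words = {}, {}
    for text, is_spam in emails:
        bucket = spam_words if is_spam else ham_words
        for word in text.split():
            bucket[word] = bucket.get(word, 0) + 1
    return spam_words, ham_words

def classify_as_spam(text, spam_words, ham_words):
    """Label an email spam when its words were seen more often in spam than ham."""
    words = text.split()
    spam_score = sum(spam_words.get(w, 0) for w in words)
    ham_score = sum(ham_words.get(w, 0) for w in words)
    return spam_score > ham_score

# Toy training corpus: (text, is_spam) pairs.
corpus = [("win free prize now", True), ("claim free money", True),
          ("meeting agenda attached", False), ("project status update", False)] * 25

# The clean model catches an obvious spam phrase.
clean = train_keyword_filter(corpus)
print(classify_as_spam("free prize money", *clean))      # True

# Poison the training set: relabel one spam phrase as safe before training.
poisoned_corpus = [(t, False) if t == "claim free money" else (t, y)
                   for t, y in corpus]
poisoned = train_keyword_filter(poisoned_corpus)
print(classify_as_spam("free prize money", *poisoned))   # False: spam now passes
```

Relabeling a single phrase is enough to flip the verdict on a message the clean model caught; in the large datasets the research above refers to, a much smaller poisoned fraction achieves the same effect.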

4. Sophistication

All the points above underscore how AI enhances attacks. AI-driven attacks are more dangerous because of automation and machine learning: automation breaks the limits of human effort, while machine learning lets attack algorithms improve from experience and become more efficient, whether individual attacks succeed or not.

That adaptability means AI-based attacks will only get stronger and more dangerous unless equally strong countermeasures are developed.

Using AI to defend against AI

A. Machine learning for threat detection

In defending AI with AI, machine learning comes into play to help automate threat detection, especially against new threats that traditional antivirus and firewall systems are not equipped to defend against. Machine learning can significantly reduce false positives, a serious menace in traditional threat detection, by 50% to 90%, according to Cybersecurity Intelligence.

Unlike the detection tools of the previous generation, which are signature-based, machine learning can monitor and log network usage patterns among employees in an organization and alert supervisors when it observes anomalous behavior.
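The anomaly-flagging idea can be sketched with only the standard library. The login-hour baseline and the three-sigma threshold below are illustrative choices, not a production detector.

```python
from statistics import mean, stdev

def build_baseline(login_hours):
    """Summarize a user's historical login hours as (mean, std deviation)."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose hour deviates more than `threshold` sigmas from the norm.
    A floor on sigma keeps very regular users from triggering on tiny shifts."""
    mu, sigma = baseline
    return abs(hour - mu) > threshold * max(sigma, 0.5)

# A user who reliably logs in around 8-10 a.m. on workdays.
history = [8, 9, 9, 8, 10, 9, 8, 9]
baseline = build_baseline(history)

print(is_anomalous(9, baseline))   # False: a normal morning login
print(is_anomalous(3, baseline))   # True: a 3 a.m. login gets flagged
```

Real systems model many more signals than the hour of day, but the principle is the same: learn each user's pattern, then alert on deviations instead of matching known signatures.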

Reportedly, 93% of security operations centers (SOCs) now use AI and machine learning tools in threat detection. As more data is generated and cyber-attacks grow more sophisticated, security professionals will have to enhance their detection and defense capabilities with supervised and unsupervised machine learning.

B. Enhancing authentication via AI

Weak authentication is the most common way malicious actors gain unauthorized access to endpoints. And as seen with deepfakes, even biometric authentication no longer seems foolproof. AI increases the sophistication of defenses by adding context to authentication requirements.

Risk-based authentication (RBA) tools use AI-backed behavioral biometrics to identify suspicious activity and prevent endpoint compromise. Authentication thus extends beyond user verification to real-time intelligence: RBA, also called adaptive authentication, assesses details such as location, IP address, device information, and data sensitivity to calculate a risk score and grant or restrict access.

For instance, if a person always logs in through a computer at work on workday mornings and on one occasion, tries to log in through a mobile device at a restaurant on a weekend, that may be a sign of compromise and the system will duly flag it.

With a smart RBA security model, merely knowing the password to a system is not enough for an attacker.
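The risk-scoring logic above can be sketched as follows. The signals, weights, and thresholds here are invented for illustration; real RBA products tune them from data.

```python
def risk_score(login, profile):
    """Combine contextual signals into a rough 0-100 risk score."""
    score = 0
    if login["device_id"] not in profile["known_devices"]:
        score += 40   # unrecognized device
    if login["country"] != profile["home_country"]:
        score += 30   # unusual location
    if login["hour"] not in profile["usual_hours"]:
        score += 20   # outside the normal working pattern
    if login["weekend"] and not profile["works_weekends"]:
        score += 10   # off-schedule day
    return score

def access_decision(score):
    """Map the score to an action: allow, demand a second factor, or deny."""
    if score >= 70:
        return "deny"
    if score >= 40:
        return "step-up"
    return "allow"

profile = {"known_devices": {"work-laptop"}, "home_country": "US",
           "usual_hours": range(8, 18), "works_weekends": False}

# A weekday-morning login from the usual machine sails through...
office = {"device_id": "work-laptop", "country": "US", "hour": 9, "weekend": False}
print(access_decision(risk_score(office, profile)))      # "allow"

# ...but a weekend evening login from an unknown phone is blocked outright.
restaurant = {"device_id": "phone-x", "country": "US", "hour": 20, "weekend": True}
print(access_decision(risk_score(restaurant, profile)))  # "deny"
```

The middle "step-up" tier is what makes RBA practical: rather than hard-blocking every oddity, moderately risky logins simply trigger a second authentication factor.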

In addition to this, AI-powered authentication systems are starting to implement continuous authentication, again using behavioral analytics. Instead of a single login per session, which may be hijacked midway, the system authenticates the user continuously in the background, analyzing the user's environment and behavior for suspicious patterns.

C. AI in phishing prevention

Enhancing threat detection is one way AI can be used to prevent email phishing attacks. It can also do so with simple behavioral analysis. Say you receive an email purportedly from the CEO; AI can analyze the message to spot patterns that are inconsistent with how the actual CEO communicates.

Features such as writing style, syntax, and word choice can reveal inconsistencies and prevent you from falling into the trap.

AI can also scan email metadata to detect altered signatures, even when the email address looks legitimate, and it can scan links and images to verify their authenticity. Unlike traditional anti-phishing tools, which block malicious emails through filters that can be easily bypassed, AI takes the fight directly to the core of phishing: social engineering.

What makes social engineering attacks difficult to overcome is that they are psychological rather than technological. Until now, sheer human cleverness and skepticism have been the main tools for overcoming them. AI extends that vigilance beyond human limits.

By recognizing patterns that are not immediately obvious to human beings, AI can determine when an email is malicious even if it does not contain any suspicious links or code. And it does this at scale using automation.
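A crude sketch of such a stylistic check follows. The feature set, urgency word list, and thresholds are invented for illustration; real systems learn far richer per-sender models.

```python
def style_features(text):
    """Extract simple stylometric features from a message."""
    words = [w.strip("!?.,") for w in text.lower().split()]
    return {
        "avg_word_len": sum(map(len, words)) / len(words),
        "urgency_words": sum(w in {"urgent", "immediately", "now", "asap"}
                             for w in words),
        "exclamations": text.count("!"),
    }

def looks_off(message, history, tolerance=1.5):
    """Flag a message whose style drifts well past the sender's
    historical norm on at least two features."""
    baseline = [style_features(m) for m in history]
    incoming = style_features(message)
    flags = 0
    for key, value in incoming.items():
        norm = sum(f[key] for f in baseline) / len(baseline)
        if value > norm * tolerance + 1:
            flags += 1
    return flags >= 2

# The "CEO's" past emails: calm, punctuation-light requests.
history = ["Please review the quarterly figures before our meeting",
           "Let's schedule time to discuss the roadmap"]

print(looks_off("Please review the figures", history))                  # False
print(looks_off("URGENT!! Wire the funds immediately now!!", history))  # True
```

Note that the flagged message contains no link or attachment at all; it is the mismatch with the sender's established style that gives it away, which is exactly the gap signature filters leave open.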

D. Predictive Analytics

The ultimate benefit of AI in cybersecurity is the ability to predict and build up defenses against attacks before they occur. AI can help human overseers to maintain comprehensive visibility over the entire network infrastructure of an organization and analyze endpoints to detect possible vulnerabilities. In this age of remote working and BYOD policies where IT departments increasingly find endpoint security difficult, AI can make their work much easier.

AI is our best bet against zero-day vulnerabilities, allowing us to quickly build smart defenses before those vulnerabilities are exploited by malicious actors. AI cybersecurity is becoming a sort of digital immune system for our organizations, similar to how antibodies in the human immune system launch offensives against foreign substances.


Last year, some Australian researchers bypassed the famed Cylance AI antivirus without using the common method of data-set poisoning. They simply studied how the antivirus worked and created a universal bypass solution. The exercise called into question the practice of leaving computers to decide what should be trusted, and raised eyebrows about how effective AI really is for cybersecurity.

However, more importantly, that research underscores the fact that AI is not a silver bullet and that human oversight remains necessary for combating advanced cyber threats. What we do know is that human effort alone with legacy cybersecurity tools is not enough to overcome the next generation of cyber threats, powered by AI.

We must use AI as the best offense and defense against AI.

Joseph Chukwube

Entrepreneur, Digital Marketer, Blogger

Digital Marketer and PR Specialist, Joseph Chukwube is the Founder of Digitage, a digital marketing agency for Startups, Growth Companies and SMEs. He discusses Cybersecurity, E-commerce and Lifestyle and he’s a published writer on TripWire, Business 2 Community, Infosecurity Magazine, Techopedia, Search Engine Watch and more. To say hey or discuss a project, proposal or idea, reach him via


Fintech Kennek raises $12.5M seed round to digitize lending




London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.


Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.



Fortune 500’s race for generative AI breakthroughs



Deanna Ritchie

As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has over 20+ years of experience in content management and content development.



UK seizes web3 opportunity simplifying crypto regulations



Deanna Ritchie

As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie



Copyright © 2021 Seminole Press.