
Rediscover trust in cybersecurity



The world has changed dramatically in a short amount of time, and the world of work has changed along with it. The new hybrid remote and in-office work world has ramifications for tech—specifically cybersecurity—and signals that it’s time to acknowledge just how intertwined humans and technology truly are.

Enabling a fast-paced, cloud-powered collaboration culture is critical to rapidly growing companies, positioning them to out-innovate, outperform, and outsmart their competitors. Achieving this level of digital velocity, however, comes with a rapidly growing cybersecurity challenge that is often overlooked or deprioritized: insider risk, when a team member accidentally—or not—shares data or files outside of trusted parties. Ignoring the intrinsic link between employee productivity and insider risk can impact both an organization’s competitive position and its bottom line.

You can’t treat employees the same way you treat nation-state hackers

Insider risk includes any user-driven data exposure event—security, compliance or competitive in nature—that jeopardizes the financial, reputational or operational well-being of a company and its employees, customers, and partners. Thousands of user-driven data exposure and exfiltration events occur daily, stemming from accidental user error, employee negligence, or malicious users intending to do harm to the organization. Many users create insider risk accidentally, simply by making decisions based on time and reward, sharing and collaborating with the goal of increasing their productivity. Other users create risk due to negligence, and some have malicious intentions, like an employee stealing company data to bring to a competitor. 

From a cybersecurity perspective, organizations need to treat insider risk differently than external threats. With threats like hackers, malware, and nation-state threat actors, the intent is clear—it’s malicious. But the intent of employees creating insider risk is not always clear—even if the impact is the same. Employees can leak data by accident or due to negligence. Fully accepting this truth requires a mindset shift for security teams that have historically operated with a bunker mentality—under siege from the outside, holding their cards close to the vest so the enemy doesn’t gain insight into their defenses to use against them. Employees are not the adversaries of a security team or a company—in fact, they should be seen as allies in combating insider risk.

Transparency feeds trust: Building a foundation for training

All companies want to keep their crown jewels—source code, product designs, customer lists—from ending up in the wrong hands. Imagine the financial, reputational, and operational risk that could come from material data being leaked before an IPO, acquisition, or earnings call. Employees play a pivotal role in preventing data leaks, and there are two crucial elements to turning employees into insider risk allies: transparency and training. 

Transparency may feel at odds with cybersecurity. For cybersecurity teams that operate with an adversarial mindset appropriate for external threats, it can be challenging to approach internal threats differently. Transparency is all about building trust on both sides. Employees want to feel that their organization trusts them to use data wisely. Security teams should always start from a place of trust, assuming the majority of employees’ actions have positive intent. But, as the saying goes in cybersecurity, it’s important to “trust, but verify.” 

Monitoring is a critical part of managing insider risk, and organizations should be transparent about it. Consider CCTV cameras in public spaces: they are not hidden; in fact, they are often accompanied by signs announcing surveillance in the area. Leadership should likewise make it clear to employees that their data movements are being monitored—but that their privacy is still respected. There is a big difference between monitoring data movement and reading all employee emails.

Transparency builds trust—and with that foundation, an organization can focus on mitigating risk by changing user behavior through training. At the moment, security education and awareness programs are niche. Phishing training is likely the first thing that comes to mind due to the success it’s had moving the needle and getting employees to think before they click. Outside of phishing, there is not much training for users to understand what, exactly, they should and shouldn’t be doing.

For a start, many employees don’t even know where their organizations stand. What applications are they allowed to use? What are the rules of engagement for those apps if they want to use them to share files? What data can they use? Are they entitled to that data? Does the organization even care? Cybersecurity teams deal with a lot of noise made by employees doing things they shouldn’t. What if you could cut down that noise just by answering these questions?

Training employees should be both proactive and responsive. To change employee behavior, organizations should proactively provide both long- and short-form training modules that instruct and remind users of best practices. Organizations should also respond in the moment with a micro-learning approach: bite-sized videos designed to address highly specific situations. The security team needs to take a page from marketing, focusing on repetitive messages delivered to the right people at the right time.

Once business leaders understand that insider risk is not just a cybersecurity issue, but one that is intimately intertwined with an organization’s culture and has a significant impact on the business, they will be in a better position to out-innovate, outperform, and outsmart their competitors. In today’s hybrid remote and in-office work world, the human element that exists within technology has never been more significant. That’s why transparency and training are essential to keep data from leaking outside the organization.

This content was produced by Code42. It was not written by MIT Technology Review’s editorial staff.


Human creators stand to benefit as AI rewrites the rules of content creation



A game-changer for content creation

Among the AI-related technologies to have emerged in the past several years is generative AI—deep-learning algorithms that allow computers to generate original content, such as text, images, video, audio, and code. And demand for such content will likely jump in the coming years—Gartner predicts that by 2025, generative AI will account for 10% of all data created, compared with 1% in 2022. 

[Image: Screenshot of Jason Allen’s work “Théâtre D’opéra Spatial,” via Discord]

“Théâtre D’opéra Spatial” is an example of AI-generated content (AIGC), created with the Midjourney text-to-art generator program. Several other AI-driven art-generating programs also emerged in 2022, capable of creating paintings from single-line text prompts. The diversity of technologies reflects a wide range of artistic styles and different user demands. DALL-E 2 and Stable Diffusion, for instance, are focused mainly on Western-style artwork, while Baidu’s ERNIE-ViLG and Wenxin Yige produce images influenced by Chinese aesthetics. At Baidu’s deep learning developer conference Wave Summit+ 2022, the company announced that Wenxin Yige has been updated with new features, including turning photos into AI-generated art, image editing, and one-click video production.
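To make “single-line text prompt” concrete, here is a minimal sketch of text-to-image generation using the open-source Stable Diffusion model through Hugging Face’s diffusers library. The checkpoint name, prompt, and settings are illustrative assumptions, not the pipeline behind Midjourney or any product named above.

    # Minimal sketch: generate a painting from one line of text with
    # Stable Diffusion via the diffusers library (illustrative only;
    # not the system behind Midjourney or Wenxin Yige).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # a widely used public checkpoint
        torch_dtype=torch.float16,         # half precision for consumer GPUs
    ).to("cuda")                           # assumes a CUDA-capable GPU

    # The entire creative input is a single line of text.
    prompt = "a grand opera house stage in a baroque space-opera style"
    image = pipe(prompt).images[0]         # returns a PIL image
    image.save("opera_house.png")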

Meanwhile, AIGC can also include articles, videos, and various other media offerings such as voice synthesis. A technology that generates audible speech indistinguishable from the voice of the original speaker, voice synthesis can be applied in many scenarios, including voice navigation for digital maps. Baidu Maps, for example, allows users to customize its voice navigation to their own voice just by recording nine sentences.

Recent advances in AI technologies have also created generative language models that can fluently compose texts with just one click. They can be used for generating marketing copy, processing documents, extracting summaries, and other text tasks, unlocking creativity that other technologies such as voice synthesis have failed to tap. One of the leading generative language models is Baidu’s ERNIE 3.0, which has been widely applied in various industries such as health care, education, technology, and entertainment.
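As a rough illustration of what “one click” text generation looks like in practice, the sketch below calls a small open-source model through Hugging Face’s transformers library. It stands in for generative language models in general; it is not Baidu’s ERNIE 3.0 API, and the model choice and prompt are assumptions for illustration.

    # Minimal sketch: one-call marketing-copy generation with a generic
    # open-source model (gpt2 as a stand-in; not ERNIE 3.0).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Introducing our new smart water bottle:"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])  # prompt plus model continuation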

“In the past year, artificial intelligence has made a great leap and changed its technological direction,” says Robin Li, CEO of Baidu. “Artificial intelligence has gone from understanding pictures and text to generating content.” Going one step further, Baidu App, a popular search and newsfeed app with over 600 million monthly users, including five million content creators, recently released a video editing feature that can produce a short video accompanied by a voiceover created from data provided in an article.

Improving efficiency and growth

As AIGC becomes increasingly common, it could make content creation more efficient by eliminating repetitive, time-intensive tasks for creators, such as sorting source assets, processing voice recordings, and rendering images. Aspiring filmmakers, for instance, have long had to pay their dues by spending countless hours mastering the complex and tedious process of video editing. AIGC may soon make that unnecessary.

Besides boosting efficiency, AIGC could also increase business growth in content creation amid rising demand for personalized digital content that users can interact with dynamically. InsightSLICE forecasts that the global digital creation market will grow an average of 12% annually between 2020 and 2030, reaching $38.2 billion. With content consumption fast outpacing production, traditional development methods will likely struggle to meet such increasing demand, creating a gap that could be filled by AIGC. “AI has the potential to meet this massive demand for content at a tenth of the cost and a hundred times or thousands of times faster in the next decade,” Li says.

AI with humanity as its foundation

AIGC can also serve as an educational tool by helping children develop their creativity. StoryDrawer, for instance, is an AI-driven program designed to boost children’s creative thinking, which often declines as the focus in their education shifts to rote learning. 


The Download: the West’s AI myth, and Musk v Apple



While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen.

The reality? While there have been some contentious local experiments with social credit scores in China, there is no countrywide, all-seeing social credit system with algorithms that rank people.

The irony is that while US and European politicians try to ban systems that don’t really exist, systems that do rank and penalize people are already in place in the West—and are denying people housing and jobs in the process. Read the full story.

—Melissa Heikkilä

Melissa’s story is from The Algorithm, her weekly AI newsletter covering all of the industry’s most interesting developments. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Apple has reportedly threatened to pull Twitter from the App Store
According to Elon Musk. (NYT $)
+ Musk has threatened to “go to war” with the company after it decided to stop advertising on Twitter. (WP $)
+ Apple’s reluctance to advertise on Twitter right now isn’t exactly unique. (Motherboard)
+ Twitter’s child protection team in Asia has been gutted. (Wired $)

2 Another crypto firm has collapsed
Lender BlockFi has filed for bankruptcy, and is (partly) blaming FTX. (WSJ $)
+ The company is suing FTX founder Sam Bankman-Fried. (FT $)
+ It looks like the much-feared “crypto contagion” is spreading. (NYT $)

3 AI is rapidly becoming more powerful—and dangerous
That’s particularly worrying when its growth is too much for safety teams to handle. (Vox)
+ Do AI systems need to come with safety warnings? (MIT Technology Review)
+ This AI chat-room game is gaining a legion of fans. (The Guardian)

4 A Pegasus spyware investigation is in danger of being compromised 
It’s the target of a disinformation campaign, security experts have warned. (The Guardian)
+ Cyber insurance won’t protect you from theft of your data. (The Guardian)

5 Google gave the FBI geofence data for its January 6 investigation 
Google identified more than 5,000 devices near the US Capitol during the riot. (Wired $)

6 Monkeypox isn’t going anywhere
But it’s not on the rise, either. (The Atlantic $)
+ The World Health Organization says it will now be known as mpox. (BBC)
+ Everything you need to know about the monkeypox vaccines. (MIT Technology Review)

7 What it’s like to be the unwitting face of a romance scam
James Scott Geras’ pictures have been used to catfish countless women. (Motherboard)


What’s next in cybersecurity



One of the reasons cyber hasn’t played a bigger role in the war, according to Carhart, is that “in the whole conflict, we saw Russia being underprepared for things and not having a good game plan. So it’s not really surprising that we see that as well in the cyber domain.”

Moreover, Ukraine, under the leadership of Zhora and his cybersecurity agency, has been working on its cyber defenses for years, and it has received support from the international community since the war started, according to experts. Finally, an interesting twist in the online conflict between Russia and Ukraine was the rise of the IT Army, a decentralized, international cyber coalition that scored some significant hacks, showing that future wars can also be fought by hacktivists.

Ransomware runs rampant again

This year, beyond the usual corporations, hospitals, and schools, government agencies in Costa Rica, Montenegro, and Albania all suffered damaging ransomware attacks. In Costa Rica, the government declared a national emergency, the first ever triggered by a ransomware attack. And in Albania, the government expelled Iranian diplomats from the country—a first in the history of cybersecurity—following a destructive cyberattack.

These types of attacks were at an all-time high in 2022, a trend that will likely continue next year, according to Allan Liska, a researcher who focuses on ransomware at cybersecurity firm Recorded Future. 

“[Ransomware is] not just a technical problem like an information stealer or other commodity malware. There are real-world, geopolitical implications,” he says. In the past, for example, the North Korean ransomware WannaCry caused severe disruption to the UK’s National Health Service and hit an estimated 230,000 computers worldwide.

Luckily, it’s not all bad news on the ransomware front. According to Liska, there are some early signs that point to “the death of the ransomware-as-a-service model,” in which ransomware gangs lease out hacking tools. The main reason, he says, is that whenever a gang gets too big, “something bad happens to them.”

For example, the ransomware groups REvil and DarkSide/BlackMatter were hit by governments; Conti, a Russian ransomware gang, unraveled internally when a Ukrainian researcher appalled by Conti’s public support of the war leaked internal chats; and the LockBit crew also suffered the leak of its code.  

“We are seeing a lot of the affiliates deciding that maybe I don’t want to be part of a big ransomware group, because they all have targets on their back, which means that I might have a target on my back, and I just want to carry out my cybercrime,” Liska says. 
