How Apple’s locked-down security gives extra protection to the best hackers

“It’s a double-edged sword,” says Bill Marczak, a senior researcher at the cybersecurity watchdog Citizen Lab. “You’re going to keep out a lot of the riffraff by making it harder to break iPhones. But the 1% of top hackers are going to find a way in and, once they’re inside, the impenetrable fortress of the iPhone protects them.”

Marczak has spent the last eight years hunting those top-tier hackers. His research includes the groundbreaking 2016 “Million Dollar Dissident” report that introduced the world to the Israeli hacking company NSO Group. And in December, he was the lead author of a report titled “The Great iPwn,” detailing how the same hackers allegedly targeted dozens of Al Jazeera journalists.

He argues that while the iPhone’s security is getting tighter as Apple invests millions to raise the wall, the best hackers have their own millions to buy or develop zero-click exploits that let them take over iPhones invisibly. These allow attackers to burrow into the restricted parts of the phone without ever giving the target any indication of having been compromised. And once they’re that deep inside, the security becomes a barrier that keeps investigators from spotting or understanding nefarious behavior—to the point where Marczak suspects they’re missing all but a small fraction of attacks because they cannot see behind the curtain.

This means that even to know you’re under attack, you may have to rely on luck or vague suspicion rather than clear evidence. The Al Jazeera journalist Tamer Almisshal contacted Citizen Lab after he received death threats about his work in January 2020, but Marczak’s team initially found no direct evidence of hacking on his iPhone. They persevered by looking indirectly at the phone’s internet traffic to see who it was whispering to, until finally, in July last year, researchers saw the phone pinging servers belonging to NSO. It was strong evidence pointing toward a hack using the Israeli company’s software, but it didn’t expose the hack itself.
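Citizen Lab hasn’t published the tooling behind this kind of analysis, but the underlying idea of matching a phone’s outbound connections against known-suspicious infrastructure can be sketched in a few lines of Python. The domain names and log format below are hypothetical placeholders, not real indicators of compromise.

```python
# Illustrative sketch: flag network traffic to known-suspicious domains.
# The domains and log format are hypothetical placeholders, not real
# indicators of compromise or Citizen Lab's actual tooling.

SUSPICIOUS_DOMAINS = {
    "example-exploit-server.com",   # placeholder values
    "cdn.example-spyware.net",
}

def flag_suspicious_queries(dns_log_lines):
    """Yield (timestamp, domain) pairs that match a suspicious domain."""
    for line in dns_log_lines:
        # Assume each log line looks like "timestamp domain".
        timestamp, _, domain = line.strip().partition(" ")
        if any(domain == d or domain.endswith("." + d) for d in SUSPICIOUS_DOMAINS):
            yield timestamp, domain

log = [
    "2020-07-19T10:42:01 cdn.example-spyware.net",
    "2020-07-19T10:42:03 api.example.com",
]
for ts, dom in flag_suspicious_queries(log):
    print(f"{ts}: phone contacted {dom} -- possible indicator of compromise")
```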

Sometimes the locked-down system can backfire even more directly. When Apple released a new version of iOS last summer in the middle of Marczak’s investigation, the phone’s new security features killed an unauthorized “jailbreak” tool Citizen Lab used to open up the iPhone. The update locked him out of the private areas of the phone, including a folder for new updates—which turned out to be exactly where hackers were hiding.

Faced with these blocks, “we just kind of threw our hands up,” says Marczak. “We can’t get anything from this—there’s just no way.” 

Beyond the phone

Ryan Stortz is a security engineer at the firm Trail of Bits. He leads development of iVerify, a rare Apple-approved security app that does its best to peer inside iPhones while still playing by the rules set in Cupertino. iVerify looks for security anomalies on the iPhone, such as unexplained file modifications—the sort of indirect clues that can point to a deeper problem. Installing the app is a little like setting up trip wires in the castle that is the iPhone: if something doesn’t look the way you expect it to, you know a problem exists.

But like the systems used by Marczak and others, the app can’t directly observe unknown malware that breaks the rules, and it is blocked from reading through the iPhone’s memory in the same way that security apps on other devices do. The trip wire is useful, but it isn’t the same as a guard who can walk through every room to look for invaders.
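iVerify’s internals aren’t public, but the trip-wire concept itself is simple to illustrate: record a baseline fingerprint of files you care about, then flag anything that changes unexpectedly. The sketch below is a generic illustration with a made-up file path, not iVerify’s actual checks.

```python
import hashlib
from pathlib import Path

# Generic trip-wire sketch: hash a set of files, then re-check later.
# The watched path is a made-up placeholder; iVerify's real checks
# are not public.

def snapshot(paths):
    """Return {path: sha256 hex digest} for every file we can read."""
    return {
        p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
        for p in paths if Path(p).is_file()
    }

def tripped_wires(baseline, current):
    """Report files whose hashes changed or disappeared since the baseline."""
    return [p for p, digest in baseline.items() if current.get(p) != digest]

watched = ["/tmp/example-config.plist"]  # placeholder path
baseline = snapshot(watched)
# ... time passes; malware may or may not have touched the files ...
for path in tripped_wires(baseline, snapshot(watched)):
    print(f"unexpected modification: {path}")
```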

Despite these difficulties, Stortz says, modern computers are converging on the lockdown philosophy—and he thinks the trade-off is worth it. “As we lock these things down, you reduce the damage of malware and spying,” he says.

This approach is spreading far beyond the iPhone. In a recent briefing with journalists, an Apple spokesperson described how the company’s Mac computers are increasingly adopting the iPhone’s security philosophy: its newest laptops and desktops run on custom-built M1 chips that make them more powerful and secure, in part by locking down the computer in the same ways as mobile devices.

“iOS is incredibly secure. Apple saw the benefits and has been moving them over to the Mac for a long time, and the M1 chip is a huge step in that direction,” says security researcher Patrick Wardle.

Meta’s new AI can turn text prompts into videos

Although the effect is rather crude, the system offers an early glimpse of what’s coming next for generative artificial intelligence, and it is the obvious next step after the text-to-image AI systems that have caused huge excitement this year. 

Meta’s announcement of Make-A-Video, which is not yet being made available to the public, will likely prompt other AI labs to release their own versions. It also raises some big ethical questions. 

In the last month alone, AI lab OpenAI has made its latest text-to-image AI system, DALL-E, available to everyone, and AI startup Stability AI launched Stable Diffusion, an open-source text-to-image system.

But text-to-video AI comes with some even greater challenges. For one, these models need a vast amount of computing power. They are an even bigger computational lift than large text-to-image AI models, which are trained on millions of images, because generating just one short video requires hundreds of images. That means that for the foreseeable future, only large tech companies can afford to build these systems. They’re also trickier to train, because there aren’t large-scale data sets of high-quality videos paired with text. 

To work around this, Meta combined data from three open-source image and video data sets to train its model. Standard text-image data sets of labeled still images helped the AI learn what objects are called and what they look like. And a database of videos helped it learn how those objects are supposed to move in the world. The combination of the two approaches helped Make-A-Video, which is described in a non-peer-reviewed paper published today, generate videos from text at scale.
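Meta’s paper describes a far more involved training recipe, but the two-pronged idea (paired text-image data teaches appearance; unlabeled video teaches motion) can be sketched with toy stand-in modules. Everything below is a simplified placeholder, not the actual Make-A-Video architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sketch of the two-pronged training idea described above. All
# modules, shapes, and losses are simplified placeholders, not Meta's
# actual Make-A-Video design.

TEXT_DIM, IMG_DIM, N_FRAMES = 16, 32, 4

appearance = nn.Linear(TEXT_DIM, IMG_DIM)        # stand-in text-to-image model
motion = nn.Linear(IMG_DIM, IMG_DIM * N_FRAMES)  # stand-in temporal layers

# Phase 1: labeled text-image pairs teach what objects look like.
text = torch.randn(8, TEXT_DIM)
image = torch.randn(8, IMG_DIM)
F.mse_loss(appearance(text), image).backward()

# Phase 2: unlabeled video teaches how objects move. The appearance
# model is frozen, so only the temporal layers receive gradients.
appearance.requires_grad_(False)
clips = torch.randn(8, IMG_DIM * N_FRAMES)       # fake "video clips"
F.mse_loss(motion(appearance(text)), clips).backward()
```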

Tanmay Gupta, a computer vision research scientist at the Allen Institute for Artificial Intelligence, says Meta’s results are promising. The videos it’s shared show that the model can capture 3D shapes as the camera rotates. The model also has some notion of depth and understanding of lighting. Gupta says some details and movements are decently done and convincing. 

However, “there’s plenty of room for the research community to improve on, especially if these systems are to be used for video editing and professional content creation,” he adds. In particular, it’s still tough to model complex interactions between objects. 

In the video generated by the prompt “An artist’s brush painting on a canvas,” the brush moves over the canvas, but strokes on the canvas aren’t realistic. “I would love to see these models succeed at generating a sequence of interactions, such as ‘The man picks up a book from the shelf, puts on his glasses, and sits down to read it while drinking a cup of coffee,’” Gupta says. 

How AI is helping birth digital humans that look and sound just like us

Jennifer: And the team has also been exploring how these digital twins can be useful beyond the 2D world of a video conference. 

Greg Cross: I guess the big shift that’s coming right now is the move from the 2D world of the internet into the 3D world of the metaverse. That’s something we’ve always thought about and always been preparing for. Jack exists in full 3D, as a full body, so today we’re building augmented-reality prototypes of Jack walking around on a golf course, and we can go and ask Jack, “How should we play this hole?” These are some of the things we’re starting to imagine in terms of how digital people and digital celebrities will interact with us as we move into the 3D world.

Jennifer: And he thinks this technology can go a lot further.

Greg Cross: Healthcare and education are two amazing applications of this type of technology. And it’s amazing because we don’t have enough real people to deliver healthcare and education in the real world. So you can imagine using a digital workforce to augment and extend the skills and capabilities of real people, not replace them. 

Jennifer: This episode was produced by Anthony Green with help from Emma Cillekens. It was edited by me and Mat Honan, mixed by Garret Lang… with original music from Jacob Gorski.   

If you have an idea for a story or something you’d like to hear, please drop a note to podcasts at technology review dot com.

Thanks for listening… I’m Jennifer Strong.

A bionic pancreas could solve one of the biggest challenges of diabetes

The bionic pancreas, a credit-card-sized device called an iLet, monitors a person’s blood sugar levels around the clock and automatically delivers insulin when needed through a tiny cannula, a thin tube inserted into the body. It is worn constantly, generally on the abdomen. The device determines all insulin doses based on the user’s weight, and the user can’t adjust the doses. 

A Harvard Medical School team has submitted its findings from the study, described in the New England Journal of Medicine, to the FDA in the hopes of eventually bringing the product to market in the US. While a team from Boston University and Massachusetts General Hospital first tested the bionic pancreas in 2010, this is the most extensive trial undertaken so far.

The Harvard team, working with other universities, provided 219 people with type 1 diabetes who had used insulin for at least a year with a bionic pancreas device for 13 weeks. The team compared their blood sugar levels with those of 107 diabetic people who used other insulin delivery methods, including injections and insulin pumps, over the same period. 

The blood sugar levels of the bionic pancreas group, measured as glycated hemoglobin (HbA1c), fell from 7.9% to 7.3%, while the standard-care group’s levels held steady at 7.7%. The American Diabetes Association recommends an HbA1c below 7.0%, a goal met by only about 20% of people with type 1 diabetes, according to a 2019 study.

Other types of artificial pancreas exist, but they typically require the user to input information before they will deliver insulin, such as the amount of carbohydrates in their last meal. The iLet instead needs only the user’s weight and a meal announcement (breakfast, lunch, or dinner), entered via the device’s interface; an adaptive learning algorithm then determines and delivers insulin doses automatically.
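The iLet’s actual dosing algorithm is proprietary and clinically validated, so the following is purely a toy illustration of the interface the article describes (weight and a meal announcement in, a dose out), with invented constants.

```python
# Toy illustration of the interface described above: the device needs
# only body weight and a meal announcement. The constants and formula
# are invented for illustration; the iLet's real adaptive dosing
# algorithm is proprietary and clinically validated.

MEAL_FACTOR = {"breakfast": 1.0, "lunch": 0.8, "dinner": 1.2}  # made-up values

def announce_meal(weight_kg: float, meal: str, adaptation: float = 1.0) -> float:
    """Return a meal-time insulin dose in units (hypothetical formula)."""
    base = 0.05 * weight_kg  # invented weight-based starting point
    return base * MEAL_FACTOR[meal] * adaptation

# The real system adapts over time from observed glucose responses;
# here 'adaptation' just stands in for that learned correction.
print(announce_meal(70, "dinner", adaptation=0.9))
```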
