

How culture drives foul play on the internet, and how new “upcode” can protect us



cover of book, Easy Money

Shapiro’s book arrives just in time for the last gasp of the latest crypto wave, as major players find themselves trapped in the nets of human institutions. In early June, the US Securities and Exchange Commission went after Binance and Coinbase, the two largest cryptocurrency exchanges in the world, a few months after charging the infamous Sam Bankman-Fried, founder of the massive crypto exchange FTX, with fraud. While Shapiro mentions crypto only as the main means of payment in online crime, the industry’s wild ride through finance and culture deserves its own hefty chapter in the narrative of internet fraud. 

It may be too early for deep analysis, but we do have first-person perspectives on crypto from actor Ben McKenzie (former star of the teen drama The O.C.) and streetwear designer and influencer Bobby Hundreds, the authors of—respectively—Easy Money and NFTs Are a Scam/NFTs Are the Future. (More heavily reported books on the crypto era from tech reporter Zeke Faux and Big Short author Michael Lewis are in the works.) 

“If we are committing serious crimes like fraud, it is crucially important that we find ways to justify our behavior to others, and crucially, to ourselves.”

Ben McKenzie, former star of The O.C.

McKenzie testified at the Senate Banking Committee’s hearing on FTX that he believes the cryptocurrency industry “represents the largest Ponzi scheme in history,” and Easy Money traces his own journey from bored pandemic dabbler to committed crypto critic alongside the industry’s rise and fall. Hundreds also writes a chronological account of his time in crypto—specifically in nonfungible tokens, or NFTs, digital representational objects that he has bought, sold, and “dropped” on his own and through The Hundreds, a “community-based streetwear brand and media company.” For Hundreds, NFTs have value as cultural artifacts, and he’s not convinced that their time should be over (although he acknowledges that between 2019 and the writing of his book, more than $100 million worth of NFTs have been stolen, mostly through phishing scams). “Whether or not NFTs are a scam poses a philosophical question that wanders into moral judgments and cultural practices around free enterprise, mercantilism, and materialism,” he writes. 

ABRAMS, 2023

For all their differences (a lawyer, an actor, and a designer walk into a bar …), Shapiro, McKenzie, and Hundreds all explore characters, motivations, and social dynamics much more than they do technical innovations. Online crime is a human story, these books collectively argue, and explanations of why it happens, why it works, and how we can stay safe are human too.

To articulate how internet crime comes to be, Shapiro offers a new paradigm for the relationship between humanity and technology. He relabels technical computer code “downcode” and calls everything human surrounding and driving it “upcode.” From “the inner operations of the human brain” to “the outer social, political, and institutional forces that define the world,” upcode is the teeming ecosystem of humans and human systems behind the curtain of technology. Shapiro argues that upcode is responsible for all of technology’s impacts—positive and negative—and downcode is only its product. Technical tools like the blockchain, firewalls, or two-factor authentication may be implemented as efforts to ensure safety online, but they cannot address the root causes upstream. For any technologist or crypto enthusiast who believes computer code to be law and sees human error as an annoying hiccup, this idea may be disconcerting. But crime begins and ends with humans, Shapiro argues, so upcode is where we must focus both our blame for the problem and our efforts to improve online safety.

McKenzie and Hundreds deal with crypto and NFTs almost entirely at the upcode level: neither has training in computer science, and both examine the industry through personal lenses. For McKenzie, it’s the financial realm, where friends encouraged him to invest in tokens to compensate for being out of work during the pandemic. For Hundreds, it’s the art world, which has historically been inaccessible to most and inhospitable for many—and is what led him to gravitate toward streetwear as a creative outlet in the first place. Hundreds saw NFTs as a signal of a larger positive shift toward Web3, a nebulous vision of a more democratized form of the internet where creative individuals could get paid for their work and build communities of fans and artists without relying on tech companies. The appeal of Web3 and NFTs is based in cultural and economic realities; likewise, online scams happen because buggy upcode—like social injustice, runaway capitalism, and corporate monopolies—creates the conditions.

Constructing downcode guardrails to allow in only “good” intentions won’t solve online crime because bad acts are not so easily dismissed as the work of bad actors. The people who perpetrate scams, fraud, and hacks—or even participate in the systems around them, like speculative markets—often subscribe to a moral rubric as they act illegally. In Fancy Bear, Shapiro cites the seminal research of Sarah Gordon, the first to investigate the psychology of people who wrote computer viruses when this malware first popped up in the 1990s. Of the 64 respondents to her global survey, all but one had developmentally appropriate moral reasoning based on ethics, according to a framework created by the psychologist Lawrence Kohlberg: that is, these virus writers made decisions based on a sense of right and wrong. More recent research from Alice Hutchings, the director of the University of Cambridge’s Cybercrime Centre, also found hackers as a group to be “moral agents, possessing a sense of justice, purpose, and identity.” Many hackers find community in their work; others, like Edward Snowden, who leaked classified information from the US National Security Agency in 2013, cross legal boundaries for what they believe to be expressly moral reasons. Bitcoin, meanwhile, may be a frequent agent of crime but was in fact created to offer a “trustless” way to avoid relying on banks after the housing crisis and government bailouts of the 2000s left many wondering if traditional financial institutions could be trusted with consumer interests. The definition of crime is also upcode, shaped by social contracts as well as legal ones.

cover of book, NFTs are a Scam

MCD, 2023

In NFTs Are a Scam/NFTs Are the Future, Hundreds interviews the renowned tech investor and public speaker Gary Vaynerchuk, or “Gary Vee,” a figure he calls the “face of NFTs.” It was Vee’s “zeal and belief” that convinced Hundreds to create his own NFT collection, Adam Bomb Squad. Vee tells Hundreds that critics “may be right” when they call NFTs a scam. But while some projects may be opportunistic rackets, he hopes the work he makes is the variety that endures. Vee might be lying here, but at face value, he professes a belief in a greater good that he and everyone he recruits (including the thousands of attendees at his NFT convention) can help build—even if there’s harm along the way. 


The Download: COP28 controversy and the future of families




The United Arab Emirates is one of the world’s largest oil producers. It’s also the site of this year’s UN COP28 climate summit, which kicks off later this week in Dubai. 

It’s a controversial host, but the truth is that there’s massive potential for oil and gas companies to help address climate change, both by cleaning up their operations and by investing their considerable wealth and expertise into new technologies.

The problem is that these companies also have a vested interest in preserving the status quo. If they want to be part of a net-zero future, something will need to change—and soon. Read the full story.

—Casey Crownhart

How reproductive technology can reverse population decline

Birth rates have been plummeting in wealthy countries, well below the “replacement” rate. Even in China, a dramatic downturn in the number of babies has officials scrambling, as its population growth turns negative.

So, what’s behind the baby bust and can new reproductive technology reverse the trend? MIT Technology Review is hosting a subscriber-only Roundtables discussion on how innovations from the lab could affect the future of families at 11am ET this morning, featuring Antonio Regalado, our biotechnology editor, and entrepreneur Martín Varsavsky, founder of fertility clinic Prelude Fertility. Don’t miss out—make sure you register now.



Unpacking the hype around OpenAI’s rumored new Q* model




While we still don’t know all the details, there have been reports that researchers at OpenAI had made a “breakthrough” in AI that alarmed staff members. Reuters and The Information both report that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math. According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company’s quest to build artificial general intelligence, a much-hyped concept referring to an AI system that is smarter than humans. The company declined to comment on Q*.

Social media is full of speculation and excessive hype, so I called some experts to find out how big a deal any breakthrough in math and AI would really be.

Researchers have for years tried to get AI models to solve math problems. Language models like ChatGPT and GPT-4 can do some math, but not very well or reliably. We currently don’t have the algorithms or even the right architectures to be able to solve math problems reliably using AI, says Wenda Li, an AI lecturer at the University of Edinburgh. Deep learning and transformers (a kind of neural network), which language models rely on, are excellent at recognizing patterns, but that alone is likely not enough, Li adds.

Math is a benchmark for reasoning, Li says. A machine that is able to reason about mathematics could, in theory, learn to do other tasks that build on existing information, such as writing computer code or drawing conclusions from a news article. Math is a particularly hard challenge because it requires AI models to have the capacity to reason and to really understand what they are dealing with.

A generative AI system that could reliably do math would need to have a really firm grasp on concrete definitions of particular concepts that can get very abstract. A lot of math problems also require some level of planning over multiple steps, says Katie Collins, a PhD researcher at the University of Cambridge, who specializes in math and AI. Indeed, Yann LeCun, chief AI scientist at Meta, posted on X and LinkedIn over the weekend that he thinks Q* is likely to be “OpenAI attempts at planning.”

People who worry about whether AI poses an existential risk to humans, one of OpenAI’s founding concerns, fear that such capabilities might lead to rogue AI. Safety concerns might arise if such AI systems are allowed to set their own goals and start to interface with a real physical or digital world in some ways, says Collins. 

But while math capability might take us a step closer to more powerful AI systems, solving these sorts of math problems doesn’t signal the birth of a superintelligence. 

“I don’t think it immediately gets us to AGI or scary situations,” says Collins. It’s also very important to underline what kind of math problems AI is solving, she adds.



The Download: unpacking OpenAI Q* hype, and X’s financial woes




This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Unpacking the hype around OpenAI’s rumored new Q* model

Ever since last week’s dramatic events at OpenAI, the rumor mill has been in overdrive about why the company’s board tried to oust CEO Sam Altman.

While we still don’t know all the details, there have been reports that researchers at OpenAI had made a “breakthrough” in AI that alarmed staff members. The claim is that they came up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math.

Some at OpenAI reportedly believe this could be a breakthrough in the company’s quest to build artificial general intelligence, a much-hyped concept of an AI system that is smarter than humans.

So what’s actually going on? And why is grade-school math such a big deal? Our senior AI reporter Melissa Heikkilä called some experts to find out how big a deal any such breakthrough would really be. Here’s what they had to say.

This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 X is hemorrhaging millions in advertising revenue 
Internal documents show the company is in an even worse position than previously thought. (NYT $)
+ Misinformation ‘super-spreaders’ on X are reportedly eligible for payouts from its ad revenue sharing program. (The Verge)
+ It’s not just you: tech billionaires really are becoming more unbearable. (The Guardian)
2 The brakes seem to now be off on AI development 
With Sam Altman’s return to OpenAI, the ‘accelerationists’ have come out on top. (WSJ $)
+ Inside the mind of OpenAI’s chief scientist, Ilya Sutskever. (MIT Technology Review)
3 How Norway got heat pumps into two-thirds of its households
Mostly by making it the cheaper choice for people. (The Guardian)
+ Everything you need to know about the wild world of heat pumps. (MIT Technology Review)
4 How your social media feeds shape how you see the Israel-Gaza war
Masses of content are being pumped out, rarely with any nuance or historical understanding. (BBC)
+ China tried to keep kids off social media. Now the elderly are hooked. (Wired $)
5 US regulators have surprisingly little scope to enforce Amazon’s safety rules
As demonstrated by the measly $7,000 fine issued by Indiana after a worker was killed by warehouse machinery. (WP $)
6 How Ukraine is using advanced technologies on the battlefield 
The Pentagon is using the conflict as a testbed for some of the 800-odd AI-based projects it has in progress. (AP $)
+ Why business is booming for military AI startups. (MIT Technology Review)
7 Shein is trying to overhaul its image, with limited success
Its products seem too cheap to be ethically sourced—and it doesn’t take kindly to people pointing that out. (The Verge)
+ Why my bittersweet relationship with Shein had to end. (MIT Technology Review)
8 Every app can be a dating app now 💑
As people turn their backs on the traditional apps, they’re finding love in places like Yelp, Duolingo and Strava. (WSJ $)
+ Job sharing apps are also becoming more popular. (BBC)
9 People can’t get enough of work livestreams on TikTok
It’s mostly about the weirdly hypnotic quality of watching people doing tasks like manicures or frying eggs. (The Atlantic $)
10 A handy guide to time travel in the movies
Whether you prioritize scientific accuracy or entertainment value, this chart has got you covered. (Ars Technica)

Quote of the day

“It’s in the AI industry’s interest to make people think that only the big players can do this—but it’s not true.”

—Ed Newton-Rex, who just resigned as VP of audio at Stability AI, argues in an interview with The Next Web that the idea that generative AI models can only be built by scraping artists’ work is a myth.

The big story

The YouTube baker fighting back against deadly “craft hacks”



September 2022

Ann Reardon is probably the last person you’d expect to be banned from YouTube. A former Australian youth worker and a mother of three, she’s been teaching millions of subscribers how to bake since 2011. 

However, more recently, Reardon has been using her platform to warn people about dangerous new “craft hacks” that are sweeping YouTube, such as poaching eggs in a microwave, bleaching strawberries, and using a Coke can and a flame to pop popcorn.

Reardon was banned because she got caught up in YouTube’s messy moderation policies. In the process, she exposed a failing in the system: how can a warning about harmful hacks be deemed dangerous when the hack videos themselves are not? Read the full story.

—Amelia Tait

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ London’s future skyline is looking increasingly like New York’s.
+ Whovians will never agree on who has the honor of being the best Doctor.
+ How to get into mixing music like a pro.
+ This Japanese sea worm has a neat trick up its sleeve—splitting itself in two in the quest for love.
+ Did you know there’s a mysterious tunnel under Seoul?


Copyright © 2021 Seminole Press.