Mathematicians are deploying algorithms to stop gerrymandering
Published 2 years ago by Drew Simpson
For decades, one of the most prolific users of computerized redistricting tools was Thomas Hofeller, “the Michelangelo of the modern gerrymander,” long the Republican National Committee’s official redistricting director, who died in 2018.
Gerrymandering schemes include “cracking” (scattering one party’s voters across districts, diluting their power) and “packing” (stuffing like-minded voters into a single district, wasting the power they would have elsewhere). The city of Austin, Texas, is cracked, split among six districts (it is the largest US city that does not anchor a district).
In 2010, the full force of the threat materialized with the Republicans’ Redistricting Majority Project, or REDMAP. It spent $30 million on down-ballot state legislative races, with winning results in Florida, North Carolina, Wisconsin, Michigan, and Ohio. “The wins in 2010 gave them the power to draw the maps in 2011,” says David Daley, author of Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy.
That the technology had advanced by leaps and bounds since the previous redistricting cycle only supercharged the outcome. “It made the gerrymanders drawn that year so much more lasting and enduring than any other gerrymanders in our nation’s history,” he says. “It’s the sophistication of the computer software, the speed of the computers, the amount of data available, that makes it possible for partisan mapmakers to put their maps through 60 or 70 different iterations and to really refine and optimize the partisan performance of those maps.”
As Michael Li, a redistricting expert at the Brennan Center for Justice at New York University’s law school, puts it: “What used to be a dark art is now a dark science.” And when manipulated maps are implemented in an election, he says, they are nearly impossible to overcome.
A mathematical microscope
Mattingly and his Duke team have been staying up late writing code that they expect will produce a “huge win, algorithmically”—preparing for real-life application of their latest tool, which debuted in a paper (currently under review) with the technically heady title “Multi-Scale Merge-Split Markov Chain Monte Carlo for Redistricting.”
Advancing the technical discourse, however, is not the top priority. Mattingly and his colleagues hope to educate the politicians and the public alike, as well as lawyers, judges, fellow mathematicians, scientists—anyone interested in the cause of democracy. In July, Mattingly gave a public lecture with a more accessible title that cut to the quick: “Can you hear the will of the people in the vote?”
Misshapen districts are often thought to be the mark of a gerrymander. With the 2012 map in North Carolina, the congressional districts were “very strange-looking beasts,” says Mattingly, who (with his key collaborator, Greg Herschlag) provided expert testimony in some of the ensuing lawsuits. Over the last decade, there have been legal challenges across the country—in Illinois, Maryland, Ohio, Pennsylvania, Wisconsin.
But while such disfigured districts “make really nice posters and coffee cups and T-shirts,” Mattingly says, “the truth is that stopping strange geometries will not stop gerrymandering.” And in fact, with all the technologically sophisticated sleights of hand, a gerrymandered map can prove tricky to detect.
The tools developed simultaneously by a number of mathematical scientists provide what’s called an “extreme-outlier test.” Each researcher’s approach is slightly different, but the upshot is as follows: a map suspected of being gerrymandered is compared with a large collection, or “ensemble,” of unbiased, neutral maps. The mathematical method at work—based on what are called Markov chain Monte Carlo algorithms—generates a random sample of maps from a universe of possible maps, and reflects the likelihood that any given map drawn will satisfy various policy considerations.
The ensemble maps are encoded to capture various principles used to draw districts, factoring in the way these principles interact with a state’s geopolitical geometry. The principles (which vary from state to state) include such criteria as keeping districts relatively compact and connected, making them roughly equal in population, and preserving counties, municipalities, and communities with common interests. And district maps must comply with the US Constitution and the Voting Rights Act of 1965.
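To make the ensemble idea concrete, here is a deliberately simplified sketch of the kind of Markov chain that can generate neutral maps. It uses a basic single-cell “flip” proposal on a toy grid with three districts, checking only contiguity and rough population balance; the Duke team’s actual tool relies on a far more sophisticated multi-scale merge-split proposal and real census geography, so the grid size, district count, and tolerance below are purely illustrative assumptions.

```python
import random

ROWS, COLS, N_DISTRICTS = 6, 6, 3
POP_TOLERANCE = 4  # allowed deviation (in cells) from the ideal district size

def neighbors(cell):
    r, c = cell
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < ROWS and 0 <= c + dc < COLS]

def is_connected(cells):
    """Breadth-first check that a district's cells form one contiguous block."""
    cells = set(cells)
    if not cells:
        return False
    seen, frontier = set(), [next(iter(cells))]
    while frontier:
        cur = frontier.pop()
        if cur in seen:
            continue
        seen.add(cur)
        frontier.extend(nb for nb in neighbors(cur) if nb in cells)
    return len(seen) == len(cells)

def flip_step(assignment):
    """Propose moving one cell to a neighboring district; keep the move only if
    every district stays contiguous and roughly equal in size."""
    cell = random.choice(list(assignment))
    other_districts = {assignment[nb] for nb in neighbors(cell)} - {assignment[cell]}
    if not other_districts:
        return assignment  # interior cell: no boundary to flip across
    proposal = dict(assignment)
    proposal[cell] = random.choice(sorted(other_districts))
    ideal = ROWS * COLS / N_DISTRICTS
    for d in range(N_DISTRICTS):
        members = [c for c, dist in proposal.items() if dist == d]
        if abs(len(members) - ideal) > POP_TOLERANCE or not is_connected(members):
            return assignment  # reject: a constraint was violated
    return proposal

# Start from three vertical stripes and run the chain, sampling every 100 steps.
assignment = {(r, c): c * N_DISTRICTS // COLS for r in range(ROWS) for c in range(COLS)}
ensemble = [dict(assignment)]
for step in range(10_000):
    assignment = flip_step(assignment)
    if step % 100 == 99:
        ensemble.append(dict(assignment))
print(f"Collected {len(ensemble)} sample maps")
```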
With the Census Bureau’s release of the 2020 data, Mattingly and his team will load up the data set, run their algorithm, and generate a collection of typical, nonpartisan district plans for North Carolina. From this vast distribution of maps, and factoring in historical voting patterns, they’ll discern benchmarks that should serve as guardrails. For instance, they’ll assess the relative likelihood that those maps would produce various election outcomes—say, the number of seats won by Democrats and Republicans—and by what margin: with a 50-50 split in the vote, and given plausible voting patterns, it’s unlikely that a neutral map would give Republicans 10 seats and the Democrats only three (as was the case with that 2012 map).
“We’re using computational mathematics to figure out what we’d expect as outcomes for unbiased maps, and then we can compare with a particular map,” says Mattingly.
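As a hedged illustration of that comparison, the sketch below scores a proposed plan against an ensemble of neutral maps using nothing more than Republican seat counts. The ensemble numbers are made-up placeholders, not the Duke group’s results; the 10-seat outcome stands in for the 2012 North Carolina map mentioned above.

```python
# Fraction-of-plans-less-extreme test on seat counts (illustrative numbers only).
from collections import Counter

# Hypothetical Republican seat counts produced by 1,000 neutral ensemble maps,
# all scored against the same historical votes (placeholder data, not real results).
ensemble_rep_seats = [6] * 180 + [7] * 420 + [8] * 310 + [9] * 80 + [10] * 10

enacted_rep_seats = 10  # the outcome under the plan being challenged

less_extreme = sum(1 for seats in ensemble_rep_seats if seats < enacted_rep_seats)
fraction_less_extreme = less_extreme / len(ensemble_rep_seats)

print(Counter(ensemble_rep_seats))
print(f"Neutral maps less extreme than the enacted plan: {fraction_less_extreme:.1%}")
```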
By mid-September they’ll announce their findings, and then hope state legislators will heed the guardrails. Once new district maps are proposed later in the fall, they’ll analyze the results and engage with the public and political conversations that ensue—and if the maps are again suspected to be gerrymandered, there will be more lawsuits, in which mathematicians will again play a central role.
“I don’t want to just convince someone that something is wrong,” Mattingly says. “I want to give them a microscope so they can look at a map and understand its properties and then draw their own conclusions.”

When Mattingly testified in 2017 and 2019, analyzing two subsequent iterations of North Carolina’s district maps, the court agreed that the maps in question were excessively partisan gerrymanders, discriminating against Democrats. Wes Pegden, a mathematician at Carnegie Mellon University, testified using a similar method in a Pennsylvania case; the court agreed that the map in question discriminated against Republicans.
“Courts have long struggled with how to measure partisan gerrymandering,” says Li. “But then there seemed to be a breakthrough, when court after court struck down maps using some of these new tools.”
When the North Carolina case reached the US Supreme Court in 2019 (together with a Maryland case), the mathematician and geneticist Eric Lander, a professor at Harvard and MIT who is now President Biden’s top science advisor, observed in a brief that “computer technology has caught up with the problem that it spawned.” He deemed the extreme-outlier standard—a test that simply asks, “What fraction of redistricting plans are less extreme than the proposed plan?”—a “straightforward, quantitative mathematical question to which there is a right answer.”
The majority of the justices concluded otherwise.
“The five justices on the Supreme Court are the only ones who seemed to have trouble seeing how the math and models worked,” says Li. “State and other federal courts managed to apply it—this was not beyond the intellectual ability of the courts to handle, any more than a complex sex discrimination case is, or a complex securities fraud case. But five justices of the Supreme Court said, ‘This is too hard for us.’”
“They also said, ‘This is not for us to fix—this is for the states to fix; this is for Congress to fix; it’s not for us to fix,’” says Li.
Will it matter?
As Daley sees it, the Supreme Court decision gives state lawmakers “a green light and no speed limit when it comes to the kind of partisan gerrymanders that they can enact when map-making later this month.” At the same time, he says, “the technology has improved to such a place that we can now use [it] to see through the technology-driven gerrymanders that are created by lawmakers.”
Why embracing complexity is the real challenge in software today
Published 2 hours ago on 09/29/2023 by Drew Simpson
Redistributing complexity
The reason we can’t just wish away or “fix” complexity is that every solution, whether a technology or a methodology, redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). In doing so, however, microservices placed new demands on engineering teams; they require greater maturity in terms of practices and processes. This is one of the reasons we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed a tendency to adopt microservices simply because they were fashionable.
This doesn’t mean the solution is poor or defective. It’s more that we need to recognize the solution is a tradeoff. At Thoughtworks, we’re fond of saying “it depends” when people ask questions about the value of a certain technology or approach. It’s about how it fits with your organization’s needs and, of course, your ability to manage its particular demands. This is an example of essential complexity in tech: it can’t be removed, and it will persist however much you want to reach a level of simplicity you find comfortable.
In terms of microservices, we’ve noticed increasing caution about rushing to embrace this particular architectural approach. Some of our colleagues even suggested the term “monolith revivalists” to describe those turning away from microservices back to monolithic software architecture. While it’s unlikely that the software world is going to make a full return to monoliths, frameworks like Spring Modulith—a framework that helps developers structure code in such a way that it becomes easier to break apart a monolith into smaller microservices when needed—suggest that practitioners are becoming more keenly aware of managing the tradeoffs of different approaches to building and maintaining software.
Supporting practitioners with concepts and tools
Because technical solutions have a habit of reorganizing complexity, we need to carefully attend to how this complexity is managed. Failing to do so can have serious implications for the productivity and effectiveness of engineering teams. At Thoughtworks we have a number of concepts and approaches that we use to manage complexity. Sensible defaults, for instance, are starting points for a project or piece of work. They’re not things that we need to simply embrace as a rule, but instead practices and tools that we collectively recognize are effective for most projects. They give individuals and teams a baseline to make judgements about what might be done differently.
One of the benefits of sensible defaults is that they can guard you against the allure of novelty and hype. As interesting or exciting as a new technology might be, sensible defaults can anchor you in what matters to you. This isn’t to say that new technologies like generative AI shouldn’t be treated with enthusiasm and excitement—some of our teams have been experimenting with these tools and seen impressive results—but rather that adopting new tools needs to be done in a way that properly integrates with the way you work and what you want to achieve. Indeed, there is a wealth of approaches to GenAI, from high-profile tools like ChatGPT to self-hosted LLMs. Using GenAI effectively is as much a question of knowing the right way to implement it for you and your team as it is about technical expertise.
Interestingly, the tools that can help us manage complexity aren’t necessarily new. One thing that came up in the latest edition of Technology Radar was risk-based failure modeling, a process used to understand the impact, likelihood, and detectability of the various ways a system can fail. It has origins in failure modes and effects analysis (FMEA), a practice that dates back to the period following World War II and has been used in complex engineering projects in fields such as aerospace. This signals that some challenges endure; while new solutions will always emerge to combat them, we should also be comfortable looking to the past for tools and techniques.
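For a sense of what that kind of analysis involves, here is a minimal sketch of the classic FMEA-style scoring that risk-based failure modeling builds on: each failure mode gets rough 1-10 ratings for severity, likelihood of occurrence, and difficulty of detection, and the product ranks where to focus. The failure modes and ratings below are invented for illustration, not a Thoughtworks template.

```python
# Rank hypothetical failure modes by a simple risk priority number (RPN):
# severity x occurrence x detection difficulty, each rated 1 (low) to 10 (high).
failure_modes = [
    # (description, severity, occurrence, detection difficulty)
    ("Payment service times out under peak load", 8, 5, 4),
    ("Stale cache serves outdated prices",        6, 6, 7),
    ("Nightly batch job silently drops records",  7, 3, 9),
]

for description, sev, occ, det in sorted(
        failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN {sev * occ * det:3d}  {description}")
```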
Learning to live with complexity
McKinsey’s argument that the productivity of development teams can be successfully measured caused a stir across the software engineering landscape. While having the right metrics in place is certainly important, prioritizing productivity in our thinking can cause more problems than it solves when it comes to complex systems and an ever-changing landscape of solutions. Technology Radar called this out with an edition themed “How productive is measuring productivity?” This highlighted the importance of focusing on developer experience with the help of tools like DX DevEx 360.
Focusing on productivity in the way McKinsey suggests can cause us to mistakenly see coding as the “real” work of software engineering, overlooking things like architectural decisions, tests, security analysis, and performance monitoring. This is risky—organizations that adopt such a view will struggle to see tangible benefits from their digital projects. This is why the key challenge in software today is embracing complexity: not treating it as something to be minimized at all costs, but as a challenge that requires thoughtfulness in processes, practices, and governance. The key question is whether the industry realizes this.
This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.
Elon Musk wants more bandwidth between people and machines. Do we need it?
Published 7 hours ago on 09/29/2023 by Drew Simpson
This works because when the subjects imagine saying words, the electrodes measure their motor neurons, whose firing rate contains information about how they are trying to move their tongue and larynx. From these data it is now possible to determine, with surprising accuracy, what words people are thinking of saying. Researchers believe that with more electrodes listening to more neurons, and more bandwidth, they’ll get even better at it.
“We don’t need more electrodes for cursor control, but for speech, we are in a regime where data rate matters a lot,” says Angle. “It’s very clear we need to increase the channel count to make those systems viable. With a thousand electrodes, it will be as good as a cell phone transcribing your speech. So in this situation, yes, you’re increasing the information rate by 10 or a hundred times.”
Bottom line: When it comes to enhancing communication between nondisabled people, my sources were skeptical that more bandwidth matters. The brain’s going to get in the way. But when it comes to restoring function, it does matter. It takes a lot of neurons—and a lot of data—to get a patient back to communicating at that basic 40 bits a second.
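As a rough, back-of-envelope illustration (my numbers here are assumptions, not my sources’): if ordinary conversation runs at about 150 words per minute and carries that basic 40 bits a second, then implant-based typing at the speeds discussed below delivers only a small fraction of that rate.

```python
# Back-of-envelope data rates; the 150 wpm conversational pace is an assumed ballpark.
SPEECH_BITS_PER_SECOND = 40     # the "basic 40 bits a second" cited above
SPEECH_WORDS_PER_MINUTE = 150   # assumed typical conversational speaking rate

bits_per_word = SPEECH_BITS_PER_SECOND * 60 / SPEECH_WORDS_PER_MINUTE
implant_wpm = 18                # brain-to-computer typing record profiled below
implant_bits_per_second = implant_wpm * bits_per_word / 60

print(f"Implied information content: ~{bits_per_word:.0f} bits per word")
print(f"Implant typing at {implant_wpm} wpm: ~{implant_bits_per_second:.1f} bits per second")
```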
Read more from Tech Review’s archive
In 2021, I profiled Dennis DeGray, a paralyzed man who, at that time, was the world record holder for direct brain-to-computer communication. He could type via his thoughts at 18 words a minute. “It’s almost a conversation between the device and myself,” DeGray told me. “It’s a very personal interaction.”
But speed records keep falling. This August, researchers demonstrated that two people who’d lost the ability to speak (one due to a stroke, another because of ALS) were able to quickly utter words through a computer connected to implants placed in their brains. Read the report by Cassandra Willyard here.
A few years back, Adam Piore recounted the bizarre tale of Phil Kennedy, a pioneering brain-implant researcher who took the extreme step of getting an implant installed in his own brain.
From around the web
A second person has received a heart from a gene-modified pig. Lawrence Faucette, a Navy vet with heart failure, underwent transplant surgery on September 20 in Maryland. The previous subject lived two months after the surgery. (Associated Press)
Scientific sleuths are getting better at uncovering rotten research. (WSJ)
New-generation weight-loss drugs were prescribed to 1.7% of Americans in 2023. And you can expect the market for semaglutide to expand fast: more than 40% of Americans are obese. (CNN)
Why the dream of fusion power isn’t going away
Published 22 hours ago on 09/28/2023 by Drew Simpson
With those reactions, fusion reached what’s sometimes called scientific breakeven—a huge milestone by any definition. But, of course, there were caveats.
The lasers in this reactor are some of the most powerful in the world, but they’re also pretty inefficient. In the end, more power was pulled from the grid than the fusion reactions produced. And most experts agree that this version of fusion isn’t super practical for power plants, at least in the near term.
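To see why, here is a rough worked example using the widely reported, approximate figures from the National Ignition Facility’s December 2022 shot; the numbers are ballpark values from public coverage, not from this article.

```python
# Scientific gain vs. wall-plug gain for the December 2022 ignition shot (approximate).
laser_energy_on_target_mj = 2.05   # energy the lasers delivered to the fuel capsule
fusion_yield_mj = 3.15             # energy released by the fusion reactions
grid_energy_for_lasers_mj = 300    # rough energy the laser system drew from the grid

scientific_gain = fusion_yield_mj / laser_energy_on_target_mj   # ~1.5: "breakeven"
wall_plug_gain = fusion_yield_mj / grid_energy_for_lasers_mj    # ~0.01: far from it

print(f"Scientific gain: {scientific_gain:.2f}")
print(f"Wall-plug gain:  {wall_plug_gain:.3f}")
```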
While this was a milestone, it was more symbolic than practical. And it’s notable that in the meantime, the world’s largest and most famous fusion project is languishing—the massive international collaboration ITER (International Thermonuclear Experimental Reactor) has been plagued with delays and exploding costs.
But amid slow progress from national and international research efforts, the private sector has shown a lot of interest in fusion power. Cumulative investment reached $6.2 billion earlier this year. Investors are still putting money into the technology, with many citing the need for innovative climate technologies and recent progress in the private sector.
While no private fusion company has achieved net energy (or at least none has announced it), there have been some milestones to mark. Commonwealth Fusion Systems has broken records for magnetic field strength with its new superconductor materials, a technology that could be the key to making fusion work economically at scale. Other startups, like TAE Technologies, have celebrated plasma temperatures of 75 million °C or even hotter, another key stepping stone toward viable fusion reactors.
I think it’s exciting to see more startups jumping in on fusion energy. There’s a sense of urgency from these companies, because they need to make progress and continue raising money or risk going out of business.