

Better democracy through technology




When Mike Koval, the police chief of Madison, Wisconsin, abruptly resigned on a Sunday in September 2019, the community’s relationship with its men and women in blue was already strained. Use-of-force issues hung over the department after the killing of a Black teenager in 2015. Then, months before Koval left, another Black teenager, in the middle of a mental health crisis, was beaten on the head by an officer while being restrained by three others.

The process of selecting a new police chief followed a standard formula. A five-person team of mayor-appointed, city-council-approved commissioners would make the ultimate decision, allowing for public comment beforehand. But this time, the commissioners wanted that public input to involve more of the local community than just the folks who regularly appeared at town-hall-style meetings. 

To gather more meaningful community feedback based on “lived experiences,” the commission took a new approach in which small groups of citizens—many from Madison’s most underheard neighborhoods—were brought together in a nonthreatening environment. Facilitators guided people who differed in age, ethnicity, gender, and socioeconomic status through intimate discussions on topics including what their own relationships with the police were like; whether they trusted or feared them; how they’d seen officers interact with kids and adults; and what type of training they thought police should receive to deal with stressful situations.


These conversations were recorded as part of an initiative called the Local Voices Network (LVN), which worked closely with the nonprofit Cortico and MIT’s Laboratory for Social Machines (LSM), headed by Professor Deb Roy. What made the process unique—and a potential model for other municipalities—was what happened next.

With help from machine-learning technology that Roy and an interdisciplinary team had developed over the past five years, MIT researchers sifted through hundreds of hours of audio to define topics and summarize larger conversations into snippets of text. By using this technology to augment human listening, the researchers were able to surface the themes of greatest concern from the insights of 48 people across 31 different conversations. The topics that emerged as common concerns became the basis for interview questions asked of the candidates to succeed Koval. Of the six final questions put before the four finalists, three came directly from the community conversations.

The facilitated work in Madison was a natural extension of Roy’s research in social media analytics. The scope of this work was further advanced when, in January 2021, MIT announced that the Laboratory for Social Machines would be expanded into an Institute-wide Center for Constructive Communication (CCC) based within the MIT Media Lab. The center will continue to work closely with Cortico, which Roy currently chairs. The two entities are now working hand in hand on building, as Roy says, “power tools” for democracy. 

In Madison, thanks to tools like those, “we were able to actually uplift the specific concerns of a variety of members of the community,” says Colleen Butler, former director of capacity building at Cortico.

According to Roy, that’s how civic dialogue is supposed to work: various voices learning from each other to bridge divides and inform public policymaking. Instead, what he currently sees is a fragmented, reactive, angry world where vitriol and provocation score more points than conversation and understanding.

“The way we’re speaking with others is fundamentally broken,” he says. “In every measurable way, things are getting more fractured and polarized.”

For more than two decades, Roy has been deeply immersed in studying the complexity of human communication. Today, by combining that study with work on social-impact technology, he hopes to foster more constructive personal connections and enhance civic discourse. His aim is to find much-needed civility and common ground both in person and in social networks. 

Reframing conversation 

Most parents-to-be obsess over necessities like the crib, the bottles, and the pacifiers. Deb Roy had another item on his list: audio equipment.

In 2005, just before his son was born, Roy outfitted his home with 11 video cameras and 14 microphones. Over three years, he collected data—90,000 hours of video, 140,000 hours of audio—on how familial interactions affected his son’s speech development. Dubbed the Human Speechome Project, it built on Roy’s PhD dissertation, which focused on developing machine-learning models of human language. (He gave a TED talk about the experience in 2011.) 

Roy’s key insight from the project was the notion of recurrent shared contexts. Parents don’t generally talk to their infants about objects or people not in the room. To foster language learning, it’s more helpful to use words in reference to something the infants and caregivers can perceive or participate in together. Roy wondered where else that sort of phenomenon might be found. Michael Fleischman, a PhD student in his research group, had an idea: the way people talk about TV. Just a couple of years after Twitter was founded in 2006, Roy and Fleischman discovered that social media users were talking about television shows and commercials in real time as they aired, without even knowing each other. 

Roy gave a wildly viral TED talk about collecting data—90,000 hours of video, 140,000 hours of audio—on his son’s speech development.


“That’s how we ended up looking at tweets and other social media that were about what was on television,” says Roy. “You have this shared context. People tuned in to a live broadcast, and then talked to one another or just broadcasted, into the ether, reactions.”

He and Fleischman thought this was the basis for a good business idea. Advertisers devote large research budgets to figuring out how to connect with would-be consumers. In 2008, Roy took an extended leave from MIT, and the pair founded Bluefin Labs, a social analytics startup, to help companies analyze what everyday people were saying about television programs and advertising. Using algorithms, the startup could pick out millions of online comments made about a show or commercial in the hours immediately after it aired. Seeing that sort of information could then help networks and companies understand what was resonating with audiences, especially in the ever-growing online sphere.

“Companies that figure this out will thrive in the next 10 to 15 years. Companies that don’t will fail,” said a Nielsen executive quoted in a profile of the company published in MIT Technology Review in 2011. 

Bluefin Labs was acquired by Twitter in 2013 for $100 million. For Roy, it served as a jumping-off point to his current work. He took a four-year role as Twitter’s chief media scientist, but he also went back to MIT.

“I knew that my long-term goal was to return to research,” he says. “My interest was to create a new kind of lab which could straddle the incredibly rich environment of doing explanatory and fundamental research with the skill set and all the things we did at Bluefin and Twitter.”

Forget analyzing the semantic patterns of the online world to figure out whether people liked a product being hawked during a commercial break: Roy wanted to take what he had learned at Bluefin, where he’d translated research into practical products and services, and apply those findings for noncommercial societal benefit. That’s when, in 2014, he set up the LSM at the Media Lab, with Twitter as a founding partner and main funder. He tapped Russell Stevens, a friend and previous advisor at Bluefin with a background in media and marketing, to help establish the lab. 

What the researchers discovered this time when they examined tweets and other social media posts was something wholly different from what they’d seen in the world of entertainment TV: a crumbling social context instead of a cohesive one. After the Boston Marathon bombing, rumors spread like wildfire. During the 2016 presidential election, unverified reports were shared widely. Big news events came and went, playing out for all to see, but people reacted differently depending on what they heard and what they believed. 

Through research at the lab, Roy, Stevens, and the LSM team tried to make sense of it—even going so far as to analyze millions of tweets to discern how false news spread through Twitter. (The resulting paper, which Roy coauthored, appeared on the cover of Science in 2018.) But to actually bridge those social divides, collaborators at the lab realized, they had to marry real-life conversations with the computational social science started at Bluefin and further developed at the LSM. 

“If we really wanted to understand why we may be fragmenting into isolated tribes, we actually had to go talk to people,” Stevens says. “That’s the only solution.”

Finding common ground

Bringing conversations in the online world back to earth, so to speak, was Roy’s purpose in creating the Center for Constructive Communication. The announcement that introduced the new center characterized it as an “evolution” of the LSM. Unlike the LSM, though, it has a mandate to reach beyond academia—to bring the tools of data-driven analytics to bear on conversations about society, culture, and politics, and then to see where connections between people can be made.

“A democracy can’t function if the public is so divided and unable to listen to each other,” says Ceasar McDowell, the center’s associate director. “What we find out is that people aren’t as far apart as you think, but they don’t have the space where they feel that they will be heard and listened to in order to find that connection.”

That’s where Cortico comes in. Founded in 2016, with Roy and Stevens as two of the three cofounders, the nonprofit aimed primarily to facilitate on-the-ground conversations—first with the social tools that the LSM was developing, and now with interpersonal technologies being created by CCC and Cortico. CCC, which leads analytics and design research, partners with Cortico to develop research prototypes that can be tested with field partners—often local grassroots organizations. Cortico then integrates findings from successful pilot programs into the LVN platform, which it independently develops and operates. 

Can the marriage of real-life conversations with advanced digital technology put us on the road to becoming better citizens? Professor Deb Roy thinks so.

That platform, Cortico’s core initiative, is where the audio from these types of community conversations gets stored. Analytics tools—similar to what Bluefin Labs pioneered a decade ago—sift through the talk to find the common ground, and then to amplify those representative perspectives. Audio transcripts are made, and as the computer goes through the text, it picks out key points from conversations. Afterward, anyone can go back and listen to a particular segment to get the full context. CCC calls it “sense-making.” 
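In vastly simplified form, the first pass of that sense-making step can be imagined as surfacing the content words that recur across many speakers' snippets. The toy snippets, stopword list, and function below are invented purely for illustration; the actual CCC and Cortico tooling relies on far more sophisticated language models along with human review.

```python
from collections import Counter
import re

# Invented stand-ins for transcript snippets; the real LVN corpus is
# hundreds of hours of facilitated conversation.
snippets = [
    "I want officers trained to handle a mental health crisis",
    "My kids are scared of the police, we need trust again",
    "Training for de-escalation matters more than anything",
    "Trust between neighborhoods and officers has to be rebuilt",
]

STOPWORDS = {"i", "a", "an", "to", "the", "of", "we", "my", "are",
             "for", "and", "has", "be", "than", "more", "need"}

def common_themes(texts, top_n=2):
    """Count content words across all snippets and return the most
    frequent ones as rough 'themes' shared by multiple speakers."""
    words = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                words[word] += 1
    return [word for word, _ in words.most_common(top_n)]

print(common_themes(snippets))  # words raised by more than one speaker
```

Even this crude word count pulls out "officers" and "trust" as the words multiple speakers share, which hints at why frequency-based summarization is a useful starting point before a human listener goes back to the audio for full context.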

To Jacquelyn Boggess, one of the commissioners involved in picking Madison’s police chief, the insights gained this way proved invaluable. Typically, the people who show up at town halls are telling commissioners which person to pick. The conversations with Madison’s citizens, she says, instead gave her a chance to hear how her decision might affect them.

“They’re not telling me who to choose. They’re telling me who they are and what they need,” Boggess says. “People told me stories of their lives and what goes on in their lives, as opposed to telling me who they think I should choose for police chief, and that was much more helpful.”

In late 2020, the LSM and Cortico used the LVN process to connect with citizens in Atlanta during the covid pandemic. As part of a collaboration with the Atlanta-based Task Force for Global Health, Cortico set up virtual group conversations of about six to eight people. They spoke about their fears of the new disease, the questions they had about staying safe, and their concerns about how covid testing was conducted. Cortico and LSM researchers (CCC was still a few weeks away from being announced) shared insights from those conversations with Black ministers, who they hoped could answer those questions for their congregations. In early 2021, LVN came in handy again as vaccines were being rolled out. “As the vaccine gained steam, we were able to tap into what folks were saying on the ground,” says Stevens. The platform gave residents a chance to express any concerns they had about receiving a vaccination; again, the team then spun up the results into messaging that could be distributed by trusted voices in various city neighborhoods.

Kick-starting a revolution

In the future, Roy hopes to expand the capabilities of CCC, Cortico, and LVN. Some of that will be accomplished through hardware designed for use during these group conversations: a portable recording device called a “digital hearth,” which is supposed to be a little more inviting than just a smartphone or microphone sitting in the center of a table. At the same time, Cortico is designing programs to train community organizers and volunteers on how to organize and facilitate local conversations. 

“In general, online spaces, in order to meet certain design objectives and commercial objectives, tend to be disconnected from the in-person world,” Roy says. “We’re interested in weaving these back together.”

If a series of personal conversations could help Madison residents grapple with an issue as contentious as policing, and establish enough common ground to inform the questions asked in the official interviews, it seems to indicate that the process could work.

“I think it allows for greater transparency and community involvement—and, frankly, a more thoughtful process—than the more typical town hall type of meetings can offer,” says Butler.

Kick-starting a revolution in civic discourse is currently at the forefront of Roy’s mind. Right now, CCC is working on a new dashboard feature that would connect to information collected and organized in the LVN platform. A journalist set to moderate a public debate, for example, would be able to craft questions that address what’s on the minds of city residents as opposed to just picking a tweet or online comment at random. In fact, that’s exactly what is starting to happen with a new initiative in Boston.

Roy is careful to hedge his bets on how successful these new approaches can be. “The spaces for what we would call constructive conversation and constructive dialogue are shrinking,” he says. “I guess I know enough to realize it’d be naïve to think we’re going to fix that.”

Still, the tools he’s creating are unquestionably a start. 


AI and data fuel innovation in clinical trials and beyond




Laurel: So mentioning the pandemic, it really has shown us how critical and fraught the race is to provide new treatments and vaccines to patients. Could you explain what evidence generation is and then how it fits into drug development?

Arnaub: Sure. So as a concept, generating evidence in drug development is nothing new. It’s the art of putting together data and analyses that successfully demonstrate the safety and the efficacy and the value of your product to a bunch of different stakeholders, regulators, payers, providers, and ultimately, and most importantly, patients. And to date, I’d say evidence generation consists of not only the trial readout itself, but there are now different types of studies that pharmaceutical or medical device companies conduct, and these could be studies like literature reviews or observational data studies or analyses that demonstrate the burden of illness or even treatment patterns. And if you look at how most companies are designed, clinical development teams focus on designing a protocol, executing the trial, and they’re responsible for a successful readout in the trial. And most of that work happens within clinical dev. But as a drug gets closer to launch, health economics, outcomes research, epidemiology teams are the ones that are helping paint what is the value and how do we understand the disease more effectively?

So I think we’re at a pretty interesting inflection point in the industry right now. Generating evidence is a multi-year activity, both during the trial and in many cases long after the trial. And we saw this as especially true for vaccine trials, but also for oncology or other therapeutic areas. In covid, the vaccine companies put together their evidence packages in record time, and it was an incredible effort. And now I think what’s happening is the FDA’s navigating a tricky balance where they want to promote the innovation that we were talking about, the advancements of new therapies to patients. They’ve built in vehicles to expedite therapies such as accelerated approvals, but we need confirmatory trials or long-term follow up to really understand the evidence and to understand the safety and the efficacy of these drugs. And that’s why that concept that we’re talking about today is so important, is how do we do this more expeditiously?

Laurel: It’s certainly important when you’re talking about life-saving innovations, but as you mentioned earlier, with the coming together of rapid technological innovation and the data being generated and reviewed, we’re at a special inflection point here. So, how have data and evidence generation evolved in the last couple of years, and would this ability to create a vaccine and all the evidence packages have been possible five or 10 years ago?

Arnaub: It’s important to set the distinction here between clinical trial data and what’s called real-world data. The randomized controlled trial is, and has remained, the gold standard for evidence generation and submission. And we know within clinical trials, we have a really tightly controlled set of parameters and a focus on a subset of patients. And there’s a lot of specificity and granularity in what’s being captured. There’s a regular interval of assessment, but we also know the trial environment is not necessarily representative of how patients end up performing in the real world. And that term, “real world,” is kind of a wild west of a bunch of different things. It’s claims data or billing records from insurance companies. It’s electronic medical records that emerge out of providers and hospital systems and labs, and even increasingly new forms of data that you might see from devices or even patient-reported data. And RWD, or real-world data, is a large and diverse set of different sources that can capture patient performance as patients go in and out of different healthcare systems and environments.

Ten years ago, when I was first working in this space, the term “real-world data” didn’t even exist. It was like a swear word, and it was basically one that was created in recent years by the pharmaceutical and the regulatory sectors. So, I think what we’re seeing now, the other important piece or dimension is that the regulatory agencies, through very important pieces of legislation like the 21st Century Cures Act, have jump-started and propelled how real-world data can be used and incorporated to augment our understanding of treatments and of disease. So, there’s a lot of momentum here. Real-world data is used in 85%, 90% of FDA-approved new drug applications. So, this is a world we have to navigate.

How do we keep the rigor of the clinical trial and tell the entire story, and then how do we bring in the real-world data to kind of complete that picture? It’s a problem we’ve been focusing on for the last two years, and we’ve even built a solution around this during covid called Medidata Link that actually ties together patient-level data in the clinical trial to all the non-trial data that exists in the world for the individual patient. And as you can imagine, the reason this made a lot of sense during covid, and we actually started this with a covid vaccine manufacturer, was so that we could study long-term outcomes, so that we could tie together that trial data to what we’re seeing post-trial. And does the vaccine make sense over the long term? Is it safe? Is it efficacious? And this is, I think, something that’s going to emerge and has been a big part of our evolution over the last couple years in terms of how we collect data.

Laurel: That collecting data story is certainly part of maybe the challenges in generating this high-quality evidence. What are some other gaps in the industry that you have seen?

Arnaub: I think the elephant in the room for development in the pharmaceutical industry is that despite all the data and all of the advances in analytics, the probability of technical and regulatory success, as it’s called, for drugs moving forward is still really low. The overall likelihood of approval from phase one consistently sits under 10% for a number of different therapeutic areas. It’s sub 5% in cardiovascular, it’s a little bit over 5% in oncology and neurology, and I think what underlies these failures is a lack of data to demonstrate efficacy. Often, companies submit what the regulatory bodies call a flawed study design or an inappropriate statistical endpoint, or, in many cases, trials are underpowered, meaning the sample size was too small to reject the null hypothesis. So what that means is you’re grappling with a number of key decisions if you look at just the trial itself and some of the gaps where data should be more involved and more influential in decision making.
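That last point about underpowered trials is concrete enough to sketch. As a rough back-of-the-envelope illustration (not Medidata's tooling), the standard normal-approximation formula gives the sample size needed per arm to detect a difference between two response rates; the rates, alpha, and power values below are invented for the example.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-arm sample size to detect a difference between
    two proportions (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n)

# A hypothetical trial expecting a 40% response rate on treatment
# vs. 30% on control needs roughly 350 patients per arm for 80%
# power; demanding 90% power pushes the requirement well higher.
print(sample_size_two_proportions(0.40, 0.30))
print(sample_size_two_proportions(0.40, 0.30, power=0.9))
```

The quadratic dependence on the difference in rates is why trials chasing small effects need so many patients, and why recruiting too few quietly dooms a study before the first patient is enrolled.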

So, when you’re designing a trial, you’re evaluating, “What are my primary and my secondary endpoints? What inclusion or exclusion criteria do I select? What’s my comparator? What’s my use of a biomarker? And then how do I understand outcomes? How do I understand the mechanism of action?” It’s a myriad of different choices and a permutation of different decisions that have to be made in parallel, all of this data and information coming from the real world; we talked about the momentum in how valuable an electronic health record could be. But the gap here, the problem is, how is the data collected? How do you verify where it came from? Can it be trusted?

So, while volume is good, those gaps open the door to a significant chance of bias in a variety of different areas. Selection bias, meaning there are differences in the types of patients who you select for treatment. There’s performance bias, detection bias, a number of issues with the data itself. So, I think what we’re trying to navigate here is how you can put these data sets together in a robust way while addressing some of those key issues around drug failure that I was referencing earlier. Our approach has been to use a curated historical clinical trial data set that sits on our platform to contextualize what we’re seeing in the real world and to better understand how patients are responding to therapy. In theory, and in what we’ve seen in our work, that should help clinical development teams use data in a novel way to design a trial protocol, or to improve some of the statistical analysis work that they do.



Power beaming comes of age




The global need for power to provide ubiquitous connectivity through 5G, 6G, and smart infrastructure is rising. This report explains the prospects of power beaming; its economic, human, and environmental implications; and the challenges of making the technology reliable, effective, wide-ranging, and secure.

The following are the report’s key findings:

Lasers and microwaves offer distinct approaches to power beaming, each with benefits and drawbacks. While microwave-based power beaming has a more established track record thanks to lower cost of equipment, laser-based approaches are showing promise, backed by an increasing flurry of successful trials and pilots. Laser-based beaming has high-impact prospects for powering equipment in remote sites, the low-earth orbit economy, electric transportation, and underwater applications. Lasers’ chief advantage is the narrow concentration of beams, which enables smaller transmission and receiver installations. On the other hand, their disadvantage is the disturbance caused by atmospheric conditions and human interruption, although there are ongoing efforts to tackle these deficits.

Power beaming could quicken energy decarbonization, boost internet connectivity, and enable post-disaster response. Climate change is spurring investment in power beaming, which can support more radical approaches to energy transition. Because sunlight is continuously available in orbit, beaming solar energy directly from space to Earth offers superior yield, averaged over time, compared with land-based solar panels. Electric transportation—from trains to planes or drones—benefits from power beaming by avoiding the disruption and costs caused by cabling, wiring, or recharge landings.

Beaming could also transfer power from remote renewables sites such as offshore wind farms. Other areas where power beaming could revolutionize energy solutions include refueling space missions and satellites, 5G provision, and post-disaster humanitarian response in remote regions or areas where networks have collapsed due to extreme weather events, whose frequency will be increased by climate change. In the short term, as efficiencies continue to improve, power beaming has the capacity to reduce the number of wasted batteries, especially in low-power, across-the-room applications.

Public engagement and education are crucial to support the uptake of power beaming. Lasers and microwaves may conjure images of death rays and unanticipated health risks. Public backlash against 5G shows the importance of education and information about the safety of new, “invisible” technologies. Based on decades of research, power beaming via both microwaves and lasers has been shown to be safe. The public is comfortable living amidst invisible forces like wi-fi and wireless data transfer; power beaming is simply the newest chapter.

Commercial investment in power beaming remains muted due to a combination of historical skepticism and uncertain time horizons. While private investment in futuristic sectors like nuclear fusion energy and satellites booms, the power-beaming sector has received relatively little investment and venture capital relative to the scale of the opportunity. Experts believe this is partly a “first-mover” problem as capital allocators await signs of momentum. It may be a hangover of past decisions to abandon beaming due to high costs and impracticality, even though such reticence was based on earlier technologies that have now been surpassed. Power beaming also tends to fall between two R&D comfort zones for large corporations: it does not deliver short-term financial gain, but it is also not long-term enough to justify a steady financing stream.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.



The porcelain challenge didn’t need to be real to get views




“I’ve dabbled in the past with trying to make fake news that is transparent about being fake but spreads nonetheless,” Durfee said. (He once, with a surprising amount of success, got a false rumor started that longtime YouTuber Hank Green had been arrested as a teenager for trying to steal a lemur from a zoo.)

On Sunday, Durfee and his friends watched as #PorcelainChallenge gained traction, and they celebrated when it generated its first media headline (“TikTok’s porcelain challenge is not real but it’s not something to joke about either”). A steady parade of other headlines, some more credulous than others, followed. 

But reflex-dependent viral content has a short life span. When Durfee and I chatted three days after he posted his first video about the porcelain challenge, he already could tell that it wasn’t going to catch on as widely as he’d hoped. RIP. 

Nevertheless, viral moments can be reanimated with just the slightest touch of attention, becoming an undead trend ambling through Facebook news feeds and panicked parent groups. Stripping away their original context can only make them more powerful. And dubious claims about viral teen challenges are often these sorts of zombies—sometimes giving them a second life that’s much bigger (and arguably more dangerous) than the first.

For every “cinnamon challenge” (a real early-2010s viral challenge that made the YouTube rounds and put participants at risk for some nasty health complications), there are even more dumb ideas on the internet that do not trend until someone with a large audience of parents freaks out about them. 

Just a couple of weeks ago, for instance, the US Food and Drug Administration issued a warning about boiling chicken in NyQuil, prompting a panic over a craze that would endanger Gen Z lives in the name of views. Instead, as BuzzFeed News reported, the warning itself was the most viral thing about NyQuil chicken, spiking interest in a “trend” that was not trending.

And in 2018, there was the “condom challenge,” which gained widespread media coverage as the latest life-threatening thing teens were doing online for attention—“uncovered” because a local news station sat in on a presentation at a Texas school on the dangers teens face. In reality, the condom challenge had a few minor blips of interest online in 2007 and 2013, but videos of people actually trying to snort a condom up their nose were sparse. In each case, the fear of teens flocking en masse to take part in a dangerous challenge did more to amplify it to a much larger audience than the challenge was able to do on its own. 

The porcelain challenge has all the elements of future zombie content. Its catchy name stands out like a bite on the arm. The posts and videos seeded across social media by Durfee’s followers—and the secondary audience coming across the work of those Durfee deputized—are plausible and context-free. 


Copyright © 2021 Seminole Press.