“We started our company knowing that women over 40 are prescribed antidepressants at more than three to four times the rate of men, which has led to one in every five women taking an antidepressant to get through the day,” says Juan Pablo Cappello, cofounder and CEO of the ketamine therapy platform Nue Life, which raised $23 million in April. Ketamine itself is FDA approved as an anesthetic and is prescribed off-label for depression.
Through platforms like Nue Life, or in one of the hundreds of ketamine therapy clinics across the US, patients can take a controlled amount of a psychoactive substance under the careful guidance of a trained clinician to induce an altered state of consciousness (a trip). Having received a great deal of airtime in recent years for its supposed ability to treat PTSD, anxiety, and substance abuse, ketamine is now being studied as a way to alleviate symptoms of postpartum depression as well.
A recent study in the Journal of Affective Disorders suggests that in patients at high risk of postpartum depression, a single dose of ketamine administered before anesthesia during cesarean sections could be effective in preventing it. Another ketamine therapy startup, Field Trip, is also about to start in-person phase 1 clinical trials for FT-104, a psychedelic molecule that’s similar to psilocybin but has a much shorter trip time. (Nikhita Singhal’s father, Sanjay Singhal, an entrepreneur who started audiobooks.com, is an advisor to Field Trip.) “FT-104 has all the characteristics that make psilocybin so interesting and attractive from a therapeutic perspective—safety and efficacy—but with a very short duration of action,” Field Trip cofounder and executive chairman Ronan Levy told me. According to Levy, Field Trip’s existing preclinical studies signal that FT-104 leaves the body within 12 hours, meaning breastfeeding could hypothetically resume within 24 hours of a dose. That claim will eventually need to be validated in human trials and undergo scientific peer review.
Kelsey Ramsden, the former CEO of Vancouver-based psychedelics company Mindcure (which was researching MDMA-assisted psychotherapy to help women with a lack of sexual desire until it shut down earlier this year for lack of funds), also says the postpartum depression market is appealing for psychedelic development because there’s currently only one drug for the condition (Zulresso). Ramsden is a believer in part because psychedelics worked to alleviate her own symptoms after she had her first child. “The change in my lived experience resulted in recurring depressive cycles, and it wasn’t necessarily a hormonal thing that was the ongoing problem,” she says. “It was just the change in my experience as the result of becoming a mother in a society that expected me to be a certain way.” She says she tried SSRIs and traditional therapy at first, but she finally arrived on stable footing after trying psychedelic-assisted psychotherapy.
Ramsden believes that the entire psychedelic industry is still in its earliest days. But she can envision a culture where it is normal for women to openly take psychedelic drugs. When something health-related works for women, she believes, the good news spreads like wildfire.
Allison Feduccia, who has a PhD in neuropharmacology, believes that the best evidence we have of how psychedelics affect women is still mostly anecdotal. For example, there are accounts suggesting that peyote boosts milk production, an idea supported by preliminary research from the 1970s. For years, people have reported ways that psychedelics altered their menstrual cycles, linking them to heavier periods, periods that arrive early, or, conversely, more regular cycles. Research has shown that estrogen intensifies the brain’s dopamine reward pathway, so it’s also possible that a woman’s reaction to a particular drug is more pleasurable during certain phases of her menstrual cycle.
Feduccia posits that psychedelics might be particularly helpful for the “rites of passage” that most women go through. “Psychedelics could bring better perspective when you get your first period, have your first child, and then go through menopause,” she says. “I just hope that women can benefit [from psychedelics] without having to drop $20,000 for a guided approach.”
That guided approach is not only expensive but fraught with ethical concerns. Multiple high-profile cases of abuse in psychedelic therapy have made headlines in recent years. Richard Yensen, an unlicensed therapist who was a sub-investigator for MAPS, was accused of sexually assaulting a PTSD patient during a MAPS clinical trial on MDMA. Allegations of sexual abuse were also made against Aharon Grossbard and his wife, Françoise Bourzat, leaders of a prominent group in the Bay Area that has been practicing psychedelic-assisted therapy for over 30 years.
Meta’s new AI can turn text prompts into videos
Although the effect is rather crude, the system offers an early glimpse of what’s coming next for generative artificial intelligence, and it is the obvious next step from the text-to-image AI systems that have caused huge excitement this year.
Meta’s announcement of Make-A-Video, which is not yet being made available to the public, will likely prompt other AI labs to release their own versions. It also raises some big ethical questions.
In the last month alone, AI lab OpenAI has made its latest text-to-image AI system, DALL-E, available to everyone, and AI startup Stability AI launched Stable Diffusion, an open-source text-to-image system.
But text-to-video AI comes with some even greater challenges. For one, these models need a vast amount of computing power. They are an even bigger computational lift than large text-to-image AI models, which are trained on millions of images, because generating just one short video requires producing hundreds of individual frames. That means it’s really only large tech companies that can afford to build these systems for the foreseeable future. They’re also trickier to train, because there aren’t large-scale data sets of high-quality videos paired with text.
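To make the scale gap concrete, here is a back-of-the-envelope calculation; the clip length and frame rate are illustrative assumptions, not figures from Meta’s paper:

```python
# Rough illustration of why text-to-video is a bigger computational lift
# than text-to-image: an image model produces one frame per sample, while
# a video model must produce every frame of a clip, and each frame has to
# stay visually consistent with its neighbors.

FPS = 24          # assumed frame rate for a generated clip
CLIP_SECONDS = 5  # assumed clip length

frames_per_clip = FPS * CLIP_SECONDS
print(f"One {CLIP_SECONDS}-second clip = {frames_per_clip} frames")
print(f"That is {frames_per_clip}x the output of a single image sample")
```

Even a short clip, under these assumptions, means generating on the order of a hundred coherent frames per sample, which helps explain why only large tech companies can currently afford to train such systems.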
To work around this, Meta combined data from three open-source image and video data sets to train its model. Standard text-image data sets of labeled still images helped the AI learn what objects are called and what they look like. And a database of videos helped it learn how those objects are supposed to move in the world. The combination of the two approaches helped Make-A-Video, which is described in a non-peer-reviewed paper published today, generate videos from text at scale.
Tanmay Gupta, a computer vision research scientist at the Allen Institute for Artificial Intelligence, says Meta’s results are promising. The videos it’s shared show that the model can capture 3D shapes as the camera rotates. The model also has some notion of depth and understanding of lighting. Gupta says some details and movements are decently done and convincing.
However, “there’s plenty of room for the research community to improve on, especially if these systems are to be used for video editing and professional content creation,” he adds. In particular, it’s still tough to model complex interactions between objects.
In the video generated by the prompt “An artist’s brush painting on a canvas,” the brush moves over the canvas, but strokes on the canvas aren’t realistic. “I would love to see these models succeed at generating a sequence of interactions, such as ‘The man picks up a book from the shelf, puts on his glasses, and sits down to read it while drinking a cup of coffee,’” Gupta says.
How AI is helping birth digital humans that look and sound just like us
Jennifer: And the team has also been exploring how these digital twins can be useful beyond the 2D world of a video conference.
Greg Cross: I guess the... the big, you know, shift that’s coming right at the moment is the move from the 2D world of the internet into the 3D world of the metaverse. And that’s something we’ve always thought about and we’ve always been preparing for. Jack exists in full 3D. You know, Jack exists as a full body. So today we’re building augmented reality prototypes of Jack walking around on a golf course, and we can go and ask Jack, how should we play this hole? So these are some of the things that we are starting to imagine in terms of the way in which digital people, the way in which digital celebrities, interact with us as we move into the 3D world.
Jennifer: And he thinks this technology can go a lot further.
Greg Cross: Healthcare and education are two amazing applications of this type of technology. And it’s amazing because we don’t have enough real people to deliver healthcare and education in the real world. So you can imagine how you can use a digital workforce to augment and extend, not replace, the skills and capabilities of real people.
Jennifer: This episode was produced by Anthony Green with help from Emma Cillekens. It was edited by me and Mat Honan, mixed by Garret Lang… with original music from Jacob Gorski.
If you have an idea for a story or something you’d like to hear, please drop a note to podcasts at technology review dot com.
Thanks for listening… I’m Jennifer Strong.
A bionic pancreas could solve one of the biggest challenges of diabetes
The bionic pancreas, a credit-card-sized device called an iLet, monitors a person’s blood glucose levels around the clock and automatically delivers insulin when needed through a tiny cannula, a thin tube inserted into the body. It is worn constantly, generally on the abdomen. The device determines all insulin doses based on the user’s weight, and the user can’t adjust the doses.
A Harvard Medical School team has submitted its findings from the study, described in the New England Journal of Medicine, to the FDA in the hopes of eventually bringing the product to market in the US. While a team from Boston University and Massachusetts General Hospital first tested the bionic pancreas in 2010, this is the most extensive trial undertaken so far.
The Harvard team, working with other universities, provided 219 people with type 1 diabetes who had used insulin for at least a year with a bionic pancreas device for 13 weeks. The team compared their blood sugar levels with those of 107 people with diabetes who used other insulin delivery methods, including injection and insulin pumps, over the same period.
The bionic pancreas group’s glycated hemoglobin levels, a standard measure of average blood sugar, fell from 7.9% to 7.3%, while the standard care group’s levels remained steady at 7.7%. The American Diabetes Association recommends a goal of less than 7.0%, a target met by only approximately 20% of people with type 1 diabetes, according to a 2019 study.
Other types of artificial pancreas exist, but they typically require the user to input information before they will deliver insulin, such as the amount of carbohydrates in their last meal. The iLet instead needs only the user’s weight and the type of meal being eaten (breakfast, lunch, or dinner), entered via the device’s interface; an adaptive learning algorithm then determines and delivers insulin doses automatically.