Google just launched Bard, its answer to ChatGPT—and it wants you to make it better


Google has a lot riding on this launch. Microsoft partnered with OpenAI to make an aggressive play for Google’s top spot in search. Meanwhile, Google blundered straight out of the gate when it first tried to respond. In a teaser clip for Bard that the company put out in February, the chatbot was shown making a factual error. Google’s value fell by $100 billion overnight.

Google won’t share many details about how Bard works: large language models, the technology behind this wave of chatbots, have become valuable IP. But it will say that Bard is built on top of a new version of LaMDA, Google’s flagship large language model. Google says it will update Bard as the underlying tech improves. Like ChatGPT and GPT-4, Bard is fine-tuned using reinforcement learning from human feedback, a technique that trains a large language model to give more useful and less toxic responses.

Google has been working on Bard for a few months behind closed doors but says that it’s still an experiment. The company is now making the chatbot available for free to people in the US and the UK who sign up to a waitlist. These early users will help test and improve the technology. “We’ll get user feedback, and we will ramp it up over time based on that feedback,” says Google’s vice president of research, Zoubin Ghahramani. “We are mindful of all the things that can go wrong with large language models.”

But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google’s AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and she thinks pitching Bard as an experiment “is a PR trick that larger companies use to reach millions of customers while also removing themselves from accountability if anything goes wrong.” 

Google wants users to think of Bard as a sidekick to Google Search, not a replacement. A button that sits below Bard’s chat widget says “Google It.” The idea is to nudge users to head to Google Search to check Bard’s answers or find out more. “It’s one of the things that help us offset limitations of the technology,” says Jack Krawczyk, Google’s senior product director for Bard.

“We really want to encourage people to actually explore other places, sort of confirm things if they’re not sure,” says Ghahramani.

This acknowledgement of Bard’s flaws has shaped the chatbot’s design in other ways, too. Users can interact with Bard only a handful of times in any given session. This is because the longer large language models engage in a single conversation, the more likely they are to go off the rails. Many of the weirder responses from Bing Chat that people have shared online emerged at the end of drawn-out exchanges, for example.   

Google won’t confirm what the conversation limit will be for launch, but it will be set quite low for the initial release and adjusted depending on user feedback.

Bard in action (image: Google)

Google is also playing it safe in terms of content. Users will not be able to ask for sexually explicit, illegal, or harmful material (as judged by Google) or personal information. In my demo, Bard would not give me tips on how to make a Molotov cocktail. That’s standard for this generation of chatbot. But it would also not provide any medical information, such as how to spot signs of cancer. “Bard is not a doctor. It’s not going to give medical advice,” says Krawczyk.

Perhaps the biggest difference between Bard and ChatGPT is that Bard produces three versions of every response, which Google calls “drafts.” Users can click between them and pick the response they prefer, or mix and match between them. The aim is to remind people that Bard cannot generate perfect answers. “There’s the sense of authoritativeness when you only see one example,” says Krawczyk. “And we know there are limitations around factuality.”

The Download: AI films, and the threat of microplastics

Welcome to the new surreal. How AI-generated video is changing film.


The Frost nails its uncanny, disconcerting vibe in its first few shots. Vast icy mountains, a makeshift camp of military-style tents, a group of people huddled around a fire, barking dogs. It’s familiar stuff, yet weird enough to plant a growing seed of dread. There’s something wrong here.

Welcome to the unsettling world of AI moviemaking. The Frost is a 12-minute movie from Detroit-based video creation company Waymark in which every shot is generated by an image-making AI. It’s one of the most impressive—and bizarre—examples yet of this strange new genre. Read the full story, and take an exclusive look at the movie.

—Will Douglas Heaven

Microplastics are everywhere. What does that mean for our immune systems?

Microplastics are pretty much everywhere you look. These tiny pieces of plastic pollution, less than five millimeters across, have been found in human blood, breast milk, and placentas. They’re even in our drinking water and the air we breathe.

Given their ubiquity, it’s worth considering what we know about microplastics. What are they doing to us? 

The short answer is: we don’t really know. But scientists have begun to build a picture of their potential effects from early studies in animals and clumps of cells, and new research suggests that they could affect not only the health of our body tissues, but our immune systems more generally. Read the full story.

—Jessica Hamzelou

Microplastics are everywhere. What does that mean for our immune systems?

In the oceans, bits of plastic can end up collecting various types of bacteria, which cling to their surfaces. Seabirds that ingest them not only end up with a stomach full of plastic, which can starve them, but are also introduced to types of bacteria that they wouldn’t otherwise encounter. This exposure seems to disturb their gut microbiomes.

There are similar concerns for humans. These tiny bits of plastic, floating and flying all over the world, could act as a “Trojan horse,” introducing harmful drug-resistant bacteria and their genes, as some researchers put it.

It’s a deeply unsettling thought. As research plows on, hopefully we’ll learn not only what microplastics are doing to us, but how we might tackle the problem.

Read more from Tech Review’s archive

It is too simplistic to say we should ban all plastic. But we could do with revolutionizing the way we recycle it, as my colleague Casey Crownhart pointed out in an article published last year. 

We can use sewage to track the rise of antimicrobial-resistant bacteria, as I wrote in a previous edition of the Checkup. At this point, we need all the help we can get …

… which is partly why scientists are also exploring the possibility of using tiny viruses to treat drug-resistant bacterial infections. Phages were discovered around 100 years ago and are due a comeback!

Our immune systems are incredibly complicated. And sex matters: there are important differences between the immune systems of men and women, as Sandeep Ravindran wrote in this feature, which ran in our magazine issue on gender.

Welcome to the new surreal. How AI-generated video is changing film.


Fast and cheap

Artists are often the first to experiment with new technology. But the immediate future of generative video is being shaped by the advertising industry. Waymark made The Frost to explore how generative AI could be built into its products. The company makes video creation tools for businesses looking for a fast and cheap way to make commercials. Waymark is one of several startups, alongside firms such as Softcube and Vedia AI, that offer bespoke video ads for clients with just a few clicks.

Waymark’s current tech, launched at the start of the year, pulls together several different AI techniques, including large language models, image recognition, and speech synthesis, to generate a video ad on the fly. Waymark also drew on its large data set of non-AI-generated commercials created for previous customers. “We have hundreds of thousands of videos,” says CEO Alex Persky-Stern. “We’ve pulled the best of those and trained it on what a good video looks like.”

To use Waymark’s tool, which it offers as part of a tiered subscription service starting at $25 a month, users supply the web address or social media accounts for their business, and it goes off and gathers all the text and images it can find. It then uses that data to generate a commercial, using OpenAI’s GPT-3 to write a script that is read aloud by a synthesized voice over selected images that highlight the business. A slick minute-long commercial can be generated in seconds. Users can edit the result if they wish, tweaking the script, editing images, choosing a different voice, and so on. Waymark says that more than 100,000 people have used its tool so far.

The trouble is that not every business has a website or images to draw from, says Parker. “An accountant or a therapist might have no assets at all,” he says. 

Waymark’s next idea is to use generative AI to create images and video for businesses that don’t yet have any—or don’t want to use the ones they have. “That’s the thrust behind making The Frost,” says Parker. “Create a world, a vibe.”

The Frost has a vibe, for sure. But it is also janky. “It’s not a perfect medium yet by any means,” says Rubin. “It was a bit of a struggle to get certain things from DALL-E, like emotional responses in faces. But at other times, it delighted us. We’d be like, ‘Oh my God, this is magic happening before our eyes.’”

This hit-and-miss process will improve as the technology gets better. DALL-E 2, which Waymark used to make The Frost, was released just a year ago. Tools that generate short video clips have been around for only a few months.

The most revolutionary aspect of the technology is being able to generate new shots whenever you want them, says Rubin: “With 15 minutes of trial and error, you get that shot you wanted that fits perfectly into a sequence.” He remembers cutting the film together and needing particular shots, like a close-up of a boot on a mountainside. With DALL-E, he could just call it up. “It’s mind-blowing,” he says. “That’s when it started to be a real eye-opening experience as a filmmaker.”

Copyright © 2021 Seminole Press.