Flexibility is key when navigating the future of 6G
The differences between 5G and 6G are not just about what collection of bandwidths will make up 6G in the future and how users will connect to the network, but also about the intelligence built into the network and devices. “The collection of networks that will create the fabric of 6G must work differently for an augmented reality (AR) headset than for an e-mail client on a mobile device,” says Shahriar Shahramian, a research lead with Nokia Bell Laboratories. “Communications providers need to solve a plethora of technical challenges to make a variety of networks based on different technologies work seamlessly,” he says. Devices will have to jump between different frequencies, adjust data rates, and adapt to the needs of the specific application, which could be running locally, at the edge of the cloud, or on a public cloud service.
“One of the complexities of 6G will be, how do we bring the different wireless technologies together so they can hand off to each other, and work together really well, without the end user even knowing about it,” Shahramian says. “That handoff is the difficult part.”
Although the current 5G network allows consumers to experience more seamless handoffs as devices move through different networks—delivering higher bandwidth and lower latency—6G will also usher in a self-aware network capable of supporting and facilitating emerging technologies that are struggling for a foothold today—virtual reality and augmented reality technologies, for example, and self-driving cars. Artificial intelligence and machine learning technology, which will be integrated into 5G as that standard evolves into 5G-Advanced, will be architected into 6G from the beginning to simplify technical tasks, such as optimizing radio signals and efficiently scheduling data traffic.
“Eventually these [technologies] could give radios the ability to learn from one another and their environments,” two Nokia researchers wrote in a post on the future of AI and ML in communications networks. “Rather than engineers telling … nodes of the network how they can communicate, those nodes could determine for themselves—choosing from millions of possible configurations—the best possible way to communicate.”
Testing technology that doesn’t yet exist
Although this technology is still nascent, it is already complex, so testing will clearly play a critical role in the process. “The companies creating the testbeds for 6G must contend with the simple fact that 6G is an aspirational goal, and not yet a real-world specification,” says Jue. He continues, “The network complexity needed to fulfill the 6G vision will require iterative and comprehensive testing of all aspects of the ecosystem; but because 6G is a nascent network concept, the tools and technology to get there need to be adaptable and flexible.”
Even determining which bandwidths will be used and for what application will require a great deal of research. Second- and third-generation cellular networks used low- and mid-range wireless bands, with frequencies up to 2.6GHz. The next generation, 4G, extended that to 6GHz, while the current technology, 5G, goes even further, adding so-called “mmWave” (millimeter wave) bands up to 71GHz.
To meet the bandwidth requirements of 6G, Nokia and Keysight are partnering to investigate the sub-terahertz spectrum for communication, which raises new technical issues. Typically, the higher the frequency of the cellular spectrum, the wider the available contiguous bandwidths, and hence the greater the data rate; but this comes at the cost of decreased range for a particular strength of signal. Low-power Wi-Fi networks using the 2.4GHz and 5GHz bands, for example, have ranges in the tens of meters, but cellular networks using 800MHz and 1.9GHz have ranges in kilometers. The addition of 24-71GHz in 5G means that associated cells are even smaller (tens to hundreds of meters). And for bands above 100GHz, the challenges are even more significant.
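The frequency-versus-range tradeoff described above follows directly from free-space path loss, which grows with the logarithm of both distance and frequency. The sketch below illustrates this with the standard Friis formula; the specific band choices are illustrative examples, not part of any 6G specification.

```python
import math

C = 299_792_458  # speed of light, m/s


def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB, per the Friis formula:
    FSPL = 20 * log10(4 * pi * d * f / c).
    Every doubling of frequency (or distance) adds ~6 dB of loss."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)


# Loss over the same 100 m link at representative bands
# (illustrative choices, not an official 6G band plan):
for label, f in [("850 MHz (cellular)", 850e6),
                 ("28 GHz (5G mmWave)", 28e9),
                 ("140 GHz (sub-THz candidate)", 140e9)]:
    print(f"{label}: {fspl_db(100, f):.1f} dB")
# ≈ 71.0 dB, 101.4 dB, and 115.4 dB respectively
```

The roughly 44 dB gap between 850 MHz and 140 GHz over the same distance is why sub-terahertz cells must be so much smaller, and why higher-frequency links lean on directional antennas and beamforming to claw back range.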
“That will have to change,” says Jue. “One of the new key disruptors for 6G could be the move from the millimeter bands used in 5G, up to the sub-terahertz bands, which are relatively unexplored for wireless communication,” he says. “Those bands have the potential to offer broad swaths of spectrum that could be used for high data-throughput applications, but they present a lot of unknowns as well.”
Adding sub-terahertz bands to the toolbox of wireless communications devices could open up massive networks of sensing devices, high-fidelity augmented reality, and locally networked vehicles, if technology companies can overcome the challenges.
In addition to new spectrum bands, current concepts for the 6G network call for new network architectures and better methods of ensuring security and reliability. Devices, meanwhile, will need extra sensors and processing capabilities to adapt to network conditions and optimize communications. To do all of this, 6G will require a foundation of artificial intelligence and machine learning to manage the complexities and interactions between every part of the system.
“Every time you introduce a new wireless technology, every time you bring in new spectrum, you make your problem exponentially harder,” Nokia’s Shahramian says.
Nokia expects to start rolling out 6G technology before 2030. Because the definition of 6G remains fluid, development and testing platforms need to support a diversity of devices and applications, and they must accommodate a wide variety of use cases. Moreover, today’s technology may not even support the requirements necessary to test potential 6G applications, requiring companies like Keysight to create new testbed platforms and adapt to changing requirements.
Simulation technology being developed and used today, such as digital twins, will be used to create adaptable solutions. The technology allows real-world data from physical prototypes to be integrated back into the simulation, resulting in future designs that work better in the real world.
“However, while real physical data is needed to create accurate simulations, digital twins would allow more agility for companies developing the technology,” says Keysight’s Jue.
Simulation helps avoid many of the iterative, time-consuming design steps that can slow down development that relies on successive physical prototypes.
“Really, kind of the key here, is a high degree of flexibility, and helping customers to be able to start doing their research and their testing, while also offering the flexibility to change, and navigate through that change, as the technology evolves,” Jue says. “So, starting design exploration in a simulation environment and then combining that flexible simulation environment with a scalable sub-THz testbed for 6G research helps provide that flexibility.”
Nokia’s Shahramian agrees that this is a long process, but the goal is clear: “For technology cycles, a decade is a long loop. For the complex technological systems of 6G, however, 2030 remains an aggressive goal. To meet the challenge, the development and testing tools must match the agility of the engineers striving to create the next network. The prize is significant—a fundamental change to the way we interact with devices and what we do with the technology.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
The Download: how we can limit global warming, and GPT-4’s early adopters
Time is running short to limit global warming to 1.5 °C (2.7 °F) above preindustrial levels, but there are feasible and effective solutions on the table, according to a new UN climate report.
Despite decades of warnings from scientists, global greenhouse-gas emissions are still climbing, hitting a record high in 2022. If humanity wants to limit the worst effects of climate change, annual greenhouse-gas emissions will need to be cut by nearly half between now and 2030, according to the report.
That will be complicated and expensive. But it is nonetheless doable, and the UN listed a number of specific ways we can achieve it. Read the full story.
How people are using GPT-4
Last week was intense for AI news, with a flood of major product releases from a number of leading companies. But one announcement outshined them all: OpenAI’s new multimodal large language model, GPT-4. William Douglas Heaven, our senior AI editor, got an exclusive preview. Read about his initial impressions.
Unlike OpenAI’s viral hit ChatGPT, which is freely accessible to the general public, GPT-4 is currently accessible only to developers. It’s still early days for the tech, and it’ll take a while for it to feed through into new products and services. Still, people are already testing its capabilities out in the open. Read about some of the most fun and interesting ways they’re doing that, from hustling up money to writing code to reducing doctors’ workloads.
Google just launched Bard, its answer to ChatGPT—and it wants you to make it better
Google has a lot riding on this launch. Microsoft partnered with OpenAI to make an aggressive play for Google’s top spot in search. Meanwhile, Google blundered straight out of the gate when it first tried to respond. In a teaser clip for Bard that the company put out in February, the chatbot was shown making a factual error. Google’s market value fell by $100 billion overnight.
Google won’t share many details about how Bard works: large language models, the technology behind this wave of chatbots, have become valuable IP. But it will say that Bard is built on top of a new version of LaMDA, Google’s flagship large language model. Google says it will update Bard as the underlying tech improves. Like ChatGPT and GPT-4, Bard is fine-tuned using reinforcement learning from human feedback, a technique that trains a large language model to give more useful and less toxic responses.
Google has been working on Bard for a few months behind closed doors but says that it’s still an experiment. The company is now making the chatbot available for free to people in the US and the UK who sign up to a waitlist. These early users will help test and improve the technology. “We’ll get user feedback, and we will ramp it up over time based on that feedback,” says Google’s vice president of research, Zoubin Ghahramani. “We are mindful of all the things that can go wrong with large language models.”
But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google’s AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and she thinks pitching Bard as an experiment “is a PR trick that larger companies use to reach millions of customers while also removing themselves from accountability if anything goes wrong.”
Google wants users to think of Bard as a sidekick to Google Search, not a replacement. A button that sits below Bard’s chat widget says “Google It.” The idea is to nudge users to head to Google Search to check Bard’s answers or find out more. “It’s one of the things that help us offset limitations of the technology,” says Google senior product director Jack Krawczyk.
“We really want to encourage people to actually explore other places, sort of confirm things if they’re not sure,” says Ghahramani.
This acknowledgement of Bard’s flaws has shaped the chatbot’s design in other ways, too. Users can interact with Bard only a handful of times in any given session. This is because the longer large language models engage in a single conversation, the more likely they are to go off the rails. Many of the weirder responses from Bing Chat that people have shared online emerged at the end of drawn-out exchanges, for example.
Google won’t confirm what the conversation limit will be for launch, but it will be set quite low for the initial release and adjusted depending on user feedback.
Google is also playing it safe in terms of content. Users will not be able to ask for sexually explicit, illegal, or harmful material (as judged by Google) or personal information. In my demo, Bard would not give me tips on how to make a Molotov cocktail. That’s standard for this generation of chatbot. But it would also not provide any medical information, such as how to spot signs of cancer. “Bard is not a doctor. It’s not going to give medical advice,” says Krawczyk.
Perhaps the biggest difference between Bard and ChatGPT is that Bard produces three versions of every response, which Google calls “drafts.” Users can click between them and pick the response they prefer, or mix and match between them. The aim is to remind people that Bard cannot generate perfect answers. “There’s the sense of authoritativeness when you only see one example,” says Krawczyk. “And we know there are limitations around factuality.”
How AI experts are using GPT-4
LinkedIn cofounder Reid Hoffman got access to the system last summer and has since been writing up his thoughts on the different ways the AI model could be used in education, the arts, the justice system, journalism, and more. In the book, which includes copy-pasted extracts from his interactions with the system, he outlines his vision for the future of AI, uses GPT-4 as a writing assistant to get new ideas, and analyzes its answers.
A quick final word … GPT-4 is the cool new shiny toy of the moment for the AI community. There’s no denying it is a powerful assistive technology that can help us come up with ideas, condense text, explain concepts, and automate mundane tasks. That’s a welcome development, especially for white-collar knowledge workers.
However, it’s notable that OpenAI itself urges caution around use of the model and warns that it poses several safety risks, including infringing on privacy, fooling people into thinking it’s human, and generating harmful content. It also has the potential to be used for other risky behaviors we haven’t encountered yet. So by all means, get excited, but let’s not be blinded by the hype. At the moment, there is nothing stopping people from using these powerful new models to do harmful things, and nothing to hold them accountable if they do.
Chinese tech giant Baidu just released its answer to ChatGPT
So. Many. Chatbots. The latest player to enter the AI chatbot game is Chinese tech giant Baidu. Late last week, Baidu unveiled a new large language model called Ernie Bot, which can solve math questions, write marketing copy, answer questions about Chinese literature, and generate multimedia responses.
A Chinese alternative: Ernie Bot (the name stands for “Enhanced Representation through kNowledge IntEgration”; its Chinese name is 文心一言, or Wenxin Yiyan) performs particularly well on tasks specific to Chinese culture, like explaining a historical fact or writing a traditional poem. Read more from my colleague Zeyi Yang.
Even Deeper Learning
Language models may be able to “self-correct” biases—if you ask them to
Large language models are infamous for spewing toxic biases, thanks to the reams of awful human-produced content they get trained on. But if the models are large enough, they may be able to self-correct for some of these biases. Remarkably, all we might have to do is ask.