Building the necessary skills for digital transformation
Daniela: Absolutely. It's a real driver for innovation, and it also helps create something like a company memory of expertise and knowledge. Because you can bring together, through one single point of entry, a universe of learning opportunities for people.
You have so many great people and organizations that can contribute the latest insights and the topics they want to position and bring to people. We haven't had that in the past. Imagine a company like Siemens, a huge technology company active in so many industries. It means that we need to bring together learning opportunities from, let's say, a functional perspective. So, if you are in finance or in supply chain, we need to complement that with what we call cross-functional learning opportunities, which are topics that are relevant for everybody, like languages or communication. We also have a whole learning landscape available on technology topics, on product-specific topics, on market-specific topics. It's a huge landscape of learning opportunities, and everybody needs a very individual, specialized subset. Being able to tailor it to each person is a huge benefit. And I must also say that such an approach is much more efficient and productive, because it saves time and money. People have access to a whole universe. They don't have to travel, and they don't have to sit through programs where maybe only a certain percentage is relevant to them. It really helps drive overall business success.
Laurel: And part of that business success is digital transformation, right? Adopting and rolling out new technologies like automation and artificial intelligence. This will create a new division of labor between humans and machines, which will disrupt jobs globally. But as these jobs evolve, new roles will be created in which people have specific advantages over machines and AI, such as managing, decision making, communicating, and interacting: all the things that humans are really good at. How can businesses prepare themselves and their employees for this shift toward automation?
Daniela: Yeah, I think it is something that has been with us for quite a few years already. But there, again, the speed and also the level of skills needed have increased so significantly. I would say it's almost like a bouquet of things that you can and should do. As a company, you need to create an identity and, first of all, say that you really think learning and individual growth are super important. It is a priority for the company, and you need to give it a positive spin: it is there for you, it is there to support you, it starts with you. That is why we have initiated a company-wide campaign that we call MyGrowth.
It's much more than a campaign; it's an overall concept and approach. But it is really meant to inspire and engage people to try out the different experiences that we provide, and to help them navigate and orient themselves around what they can and should use. We have also set a target for learning hours, because we really wanted to nudge people and say, “Look, it's important that you take the time and that you treat it as a priority.”
With regard to the specific skills that you were mentioning around automation and digitalization, we can then include specific strategic topics that we push to our people. We drive awareness campaigns through learning opportunities. Those can be targeted at certain audiences, because people also need different skill levels, or we can push them at scale. This is a highly flexible system. If I may give you an example, we have one pocket in our businesses called Digital Industries Software. It fits very nicely with what you were mentioning. The CEO of that business said last year: we are in a software business, so AI is a major driver for everything we are doing. Therefore, my whole organization needs to understand, first of all, what artificial intelligence is, let's say at a very generic level. But people also need to understand how we are using it as a technology internally, and as a driver for our business and software solutions. We then created different learning paths for different expertise layers, and could therefore bring the whole topic in a very comprehensive manner to thousands of people in our Digital Industries business.
Laurel: So, you are doing two things. One, you're pushing out what you think everyone needs to know and learn, artificial intelligence being a big topic. But then how do you also assess people and their skills to identify skill gaps, and align learning programs with the business strategy? Of course, everything does come back to profit, but there is also the return on investment in the employee's time and expertise, because that is something you're growing too.
Daniela: Yes. And the skills topic is a very hot one, I can tell you. It's all over the place, approached through very different lenses and use cases. Technology plays a major role. A platform-based learning ecosystem with a learning experience platform at the core enables you to gain insights that we never had in the past. We can see what interests people. We can see why and for what they are engaging in learning, what they are then actually learning, and what they are not learning and therefore leaving. If you multiply that over the overall workforce, you also see what the hot topics are and what skills are coming on the horizon. You can see that in certain communities. For instance, we have communities that we call digital talents, like tech talents. And there, you already see the next topics that will come on the horizon. Then we, as a learning function, can check: do we already have the right learning opportunities for the topics that are being searched for? That is one thing. But that is more the bottom-up part of it, which is super important.
Google just launched Bard, its answer to ChatGPT—and it wants you to make it better
Google has a lot riding on this launch. Microsoft partnered with OpenAI to make an aggressive play for Google's top spot in search. Meanwhile, Google blundered straight out of the gate when it first tried to respond. In a teaser clip for Bard that the company put out in February, the chatbot was shown making a factual error. Google's market value fell by $100 billion overnight.
Google won’t share many details about how Bard works: large language models, the technology behind this wave of chatbots, have become valuable IP. But it will say that Bard is built on top of a new version of LaMDA, Google’s flagship large language model. Google says it will update Bard as the underlying tech improves. Like ChatGPT and GPT-4, Bard is fine-tuned using reinforcement learning from human feedback, a technique that trains a large language model to give more useful and less toxic responses.
Google has been working on Bard for a few months behind closed doors but says that it’s still an experiment. The company is now making the chatbot available for free to people in the US and the UK who sign up to a waitlist. These early users will help test and improve the technology. “We’ll get user feedback, and we will ramp it up over time based on that feedback,” says Google’s vice president of research, Zoubin Ghahramani. “We are mindful of all the things that can go wrong with large language models.”
But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google’s AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and she thinks pitching Bard as an experiment “is a PR trick that larger companies use to reach millions of customers while also removing themselves from accountability if anything goes wrong.”
Google wants users to think of Bard as a sidekick to Google Search, not a replacement. A button that sits below Bard's chat widget says “Google It.” The idea is to nudge users to head to Google Search to check Bard's answers or find out more. “It's one of the things that helps us offset limitations of the technology,” says Jack Krawczyk, Bard's product lead.
“We really want to encourage people to actually explore other places, sort of confirm things if they’re not sure,” says Ghahramani.
This acknowledgement of Bard’s flaws has shaped the chatbot’s design in other ways, too. Users can interact with Bard only a handful of times in any given session. This is because the longer large language models engage in a single conversation, the more likely they are to go off the rails. Many of the weirder responses from Bing Chat that people have shared online emerged at the end of drawn-out exchanges, for example.
Google won’t confirm what the conversation limit will be for launch, but it will be set quite low for the initial release and adjusted depending on user feedback.
Google is also playing it safe in terms of content. Bard will refuse requests for sexually explicit, illegal, or harmful material (as judged by Google), as well as for personal information. In my demo, Bard would not give me tips on how to make a Molotov cocktail. That's standard for this generation of chatbot. But it would also not provide any medical information, such as how to spot signs of cancer. “Bard is not a doctor. It's not going to give medical advice,” says Krawczyk.
Perhaps the biggest difference between Bard and ChatGPT is that Bard produces three versions of every response, which Google calls “drafts.” Users can click between them and pick the response they prefer, or mix and match between them. The aim is to remind people that Bard cannot generate perfect answers. “There’s the sense of authoritativeness when you only see one example,” says Krawczyk. “And we know there are limitations around factuality.”
How AI experts are using GPT-4
Reid Hoffman got access to the system last summer and has since been writing up his thoughts on the different ways the AI model could be used in education, the arts, the justice system, journalism, and more. In the resulting book, which includes copy-pasted extracts from his interactions with the system, he outlines his vision for the future of AI, uses GPT-4 as a writing assistant to get new ideas, and analyzes its answers.
A quick final word … GPT-4 is the cool new shiny toy of the moment for the AI community. There’s no denying it is a powerful assistive technology that can help us come up with ideas, condense text, explain concepts, and automate mundane tasks. That’s a welcome development, especially for white-collar knowledge workers.
However, it’s notable that OpenAI itself urges caution around use of the model and warns that it poses several safety risks, including infringing on privacy, fooling people into thinking it’s human, and generating harmful content. It also has the potential to be used for other risky behaviors we haven’t encountered yet. So by all means, get excited, but let’s not be blinded by the hype. At the moment, there is nothing stopping people from using these powerful new models to do harmful things, and nothing to hold them accountable if they do.
Chinese tech giant Baidu just released its answer to ChatGPT
So. Many. Chatbots. The latest player to enter the AI chatbot game is Chinese tech giant Baidu. Late last week, Baidu unveiled a new large language model called Ernie Bot, which can solve math questions, write marketing copy, answer questions about Chinese literature, and generate multimedia responses.
A Chinese alternative: Ernie Bot (the name stands for “Enhanced Representation through kNowledge IntEgration”; its Chinese name is 文心一言, or Wenxin Yiyan) performs particularly well on tasks specific to Chinese culture, like explaining a historical fact or writing a traditional poem. Read more from my colleague Zeyi Yang.
Even Deeper Learning
Language models may be able to “self-correct” biases—if you ask them to
Large language models are infamous for spewing toxic biases, thanks to the reams of awful human-produced content they get trained on. But if the models are large enough, they may be able to self-correct for some of these biases. Remarkably, all we might have to do is ask.
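The “just ask” idea can be sketched in a few lines: wrap the user's prompt with an explicit debiasing instruction before it reaches the model. The instruction wording and the `ask_model` callback below are illustrative assumptions, not taken from any specific paper or API:

```python
# Sketch of prompt-based "self-correction": prepend an explicit
# debiasing instruction to the user's prompt before it is sent to a
# language model. The instruction text and the `ask_model` callable
# are hypothetical placeholders, not a real model API.

DEBIAS_INSTRUCTION = (
    "Please answer the following question in a way that avoids "
    "stereotypes and does not rely on a person's gender, race, or age."
)

def with_self_correction(prompt: str) -> str:
    """Wrap a user prompt with the debiasing instruction."""
    return f"{DEBIAS_INSTRUCTION}\n\n{prompt}"

def query(ask_model, prompt: str) -> str:
    """Send the wrapped prompt to any model callable ask_model(text) -> text."""
    return ask_model(with_self_correction(prompt))
```

In practice, `ask_model` would be a call to whatever chat model is in use; the point of the finding is that larger models tend to follow such an instruction and produce measurably less biased completions.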
Texas is trying out new tactics to restrict access to abortion pills online
Texas is trying to limit access to abortion pills by cracking down on internet service providers and credit card processing companies. These tactics reflect the reality that, post-Roe, the internet is a critical channel for people seeking information about abortion or trying to buy pills to terminate a pregnancy—especially in states where they can no longer access these things in physical pharmacies or medical centers.
Texas has long been a laboratory for anti-abortion political tactics, and on March 15, a US district judge heard arguments in a case that seeks to reverse the FDA approval of mifepristone, a drug that can be used to terminate an early pregnancy. If successful, the case would limit online-facilitated abortion and have far-reaching consequences even in states that are not trying to restrict it.
Earlier this month, Republicans in the Texas state legislature introduced two bills to restrict access to abortion pills. The first bill, HB 2690, would require internet service providers (ISPs) to ban sites that provide access to the pills or information about obtaining them. Companies like AT&T and Spectrum would have to “make every reasonable and technologically feasible effort to block Internet access to information or material intended to assist or facilitate efforts to obtain an elective abortion or an abortion-inducing drug.” The bill would also forbid both publishers and ordinary people from providing information about access to abortion-inducing drugs.
The second bill, SB 1440, would make it a felony for credit card companies to process transactions for abortion pills, and would also make them liable to lawsuits from the public.
Blair Wallace, a policy and advocacy strategist at the ACLU of Texas, a nonprofit that advocates for civil liberties and reproductive choice, said the recent developments mark “a new frontier for the ways in which they’re coming for [abortion access],” adding: “It is really terrifying.”
Wallace sees it as a continuation of a strategy that seeks to criminalize whole abortion care networks with the aim of isolating people seeking abortions. More broadly, this strategy of censoring information and language has become a popular tactic in US culture wars in the last several years, and the proposed bill could incentivize platforms to aggressively remove information about abortion access out of concern for legal risk. Some sites, like Meta’s Instagram and Facebook, have reportedly removed information about abortion pills in the past.
So what might the outcome of all the Texas action be? Both the bill that targets ISPs and this week's mifepristone case are unprecedented, and neither is likely to be successful. That said, the tactics are likely to stay. “Will we see it again next session? Will we see parts of this bill stripped down and put into amendments? There's like a million ways that this can play out,” says Wallace. Anti-abortion political strategy is coordinated nationally even though the fights are playing out at a state level, and it's likely that other states will target online spaces going forward.
Online abortion resources can pose risks to privacy. But there are lots of ways to access them more safely. Here are some resources I recommend.