Multi-skilled AI


In late 2012, AI scientists first figured out how to get neural networks to “see.” They proved that software designed to loosely mimic the human brain could dramatically improve existing computer-vision systems. The field has since learned how to get neural networks to imitate the way we reason, hear, speak, and write.

But while AI has grown remarkably human-like—even superhuman—at achieving a specific task, it still doesn’t capture the flexibility of the human brain. We can learn skills in one context and apply them to another. By contrast, though DeepMind’s game-playing algorithm AlphaGo can beat the world’s best Go masters, it can’t extend that strategy beyond the board. Deep-learning algorithms, in other words, are masters at picking up patterns, but they cannot understand and adapt to a changing world.

Researchers have many hypotheses about how this problem might be overcome, but one in particular has gained traction. Children learn about the world by sensing and talking about it. The combination seems key. As kids begin to associate words with sights, sounds, and other sensory information, they are able to describe more and more complicated phenomena and dynamics, tease apart what is causal from what reflects only correlation, and construct a sophisticated model of the world. That model then helps them navigate unfamiliar environments and put new knowledge and experiences in context.

AI systems, on the other hand, are built to do only one of these things at a time. Computer-vision and audio-recognition algorithms can sense things but cannot use language to describe them. A natural-language model can manipulate words, but the words are detached from any sensory reality. If senses and language were combined to give an AI a more human-like way to gather and process new information, could it finally develop something like an understanding of the world?

The hope is that these “multimodal” systems, with access to both the sensory and linguistic “modes” of human intelligence, should give rise to a more robust kind of AI that can adapt more easily to new situations or problems. Such algorithms could then help us tackle more complex problems, or be ported into robots that can communicate and collaborate with us in our daily life.

New advances in language-processing algorithms like OpenAI’s GPT-3 have helped. Researchers now understand how to replicate language manipulation well enough to make combining it with sensing capabilities potentially more fruitful. To start with, they are using the very first sensing capability the field achieved: computer vision. The results are simple bimodal models, or visual-language AI.

In the past year, there have been several exciting results in this area. In September, researchers at the Allen Institute for Artificial Intelligence (AI2) created a model that can generate an image from a text caption, demonstrating the algorithm’s ability to associate words with visual information. In November, researchers at the University of North Carolina, Chapel Hill, developed a method that incorporates images into existing language models, which boosted the models’ reading comprehension.

OpenAI then used these ideas to extend GPT-3. At the start of 2021, the lab released two visual-language models. One links the objects in an image to the words that describe them in a caption. The other generates images based on a combination of the concepts it has learned. You can prompt it, for example, to produce “a painting of a capybara sitting in a field at sunrise.” Though it may have never seen this before, it can mix and match what it knows of paintings, capybaras, fields, and sunrises to dream up dozens of examples.


More sophisticated multimodal systems will also make possible more advanced robotic assistants (think robot butlers, not just Alexa). The current generation of AI-powered robots primarily use visual data to navigate and interact with their surroundings. That’s good for completing simple tasks in constrained environments, like fulfilling orders in a warehouse. But labs like AI2 are working to add language and incorporate more sensory inputs, like audio and tactile data, so the machines can understand commands and perform more complex operations, like opening a door when someone is knocking.

In the long run, multimodal breakthroughs could help overcome some of AI’s biggest limitations. Experts argue, for example, that its inability to understand the world is also why it can easily fail or be tricked. (An image can be altered in a way that’s imperceptible to humans but makes an AI identify it as something completely different.) Achieving more flexible intelligence wouldn’t just unlock new AI applications: it would make them safer, too. Algorithms that screen résumés wouldn’t treat irrelevant characteristics like gender and race as signs of ability. Self-driving cars wouldn’t lose their bearings in unfamiliar surroundings and crash in the dark or in snowy weather. Multimodal systems might become the first AIs we can really trust with our lives.

The US Supreme Court just gutted the EPA’s power to regulate emissions


What was the ruling?

The decision states that the EPA’s actions in a 2015 rule, which included caps on emissions from power plants, overstepped the agency’s authority.

“Capping carbon dioxide emissions at a level that will force a nationwide transition away from the use of coal to generate electricity may be a sensible ‘solution to the crisis of the day,’” the decision reads. “But it is not plausible that Congress gave EPA the authority to adopt on its own such a regulatory scheme.”

Only Congress has the power to make “a decision of such magnitude and consequence,” it continues. 

This decision is likely to have “broad implications,” says Deborah Sivas, an environmental law professor at Stanford University. The court is not only constraining what the EPA can do on climate policy going forward, she adds; this opinion “seems to be a major blow for agency deference,” meaning that other agencies could face limitations in the future as well.

The ruling, which is the latest in a string of bombshell cases from the court, fell largely along ideological lines. Chief Justice John Roberts authored the majority opinion, and he was joined by his fellow conservatives: Justices Samuel Alito, Amy Coney Barrett, Neil Gorsuch, Brett Kavanaugh, and Clarence Thomas. Justices Stephen Breyer, Elena Kagan, and Sonia Sotomayor dissented.

What is the decision all about?

The main question in the case was how much power the EPA should have to regulate carbon emissions and what it should be allowed to do to accomplish that job. That question was occasioned by a 2015 EPA rule called the Clean Power Plan.

The Clean Power Plan targeted greenhouse-gas emissions from power plants, requiring each state to make a plan to cut emissions and submit it to the federal government.

Several states and private groups immediately challenged the Clean Power Plan when it was released, calling it an overreach on the part of the agency, and the Supreme Court put it on hold in 2016. After a repeal of the plan during Donald Trump’s presidency and some legal back-and-forth, the US Court of Appeals for the DC Circuit ruled in January 2021 that the Clean Power Plan did fall within the EPA’s authority.

How to track your period safely post-Roe


3. After you delete your app, ask the app provider to delete your data. Removing the app from your phone does not mean the company has gotten rid of your records; in fact, California is the only state where companies are legally required to delete your data. Still, many companies are willing to delete it upon request. Here’s a helpful guide from the Washington Post that walks you through how to do this.

Here’s how to safely track your period without an app.

1. Use a spreadsheet. It’s relatively easy to re-create the functions of a period tracker in a spreadsheet by listing the dates of your past periods and figuring out the average length of time from the first day of one to the first day of the next (see the short code sketch after this list). You can turn to one of the many templates already available online, like the period tracker created by Aufrichtig and the Menstrual Cycle Calendar and Period Tracker created by Laura Cutler. If you enjoy the science-y aspect of period apps, templates offer the ability to send yourself reminders about upcoming periods, record symptoms, and track blood flow.

2. Use a digital calendar. If spreadsheets make you dizzy and your entire life is on a digital calendar already, try making your period a recurring event, suggests Emory University student Alexa Mohsenzadeh, who made a TikTok video demonstrating the process.

Mohsenzadeh says that she doesn’t miss apps. “I can tailor this to my needs and add notes about how I’m feeling and see if it’s correlated to my period,” she says. “You just have to input it once.” 

3. Go analog and use a notebook or paper planner. We’re a technology publication, but the fact is that the safest way to keep your menstrual data from being accessible to others is to take it offline. You can invest in a paper planner or just use a notebook to keep track of your period and how you’re feeling. 

If that sounds like too much work, and you’re looking for a simple, no-nonsense template, try the free, printable Menstrual Cycle Diary available from the University of British Columbia’s Centre for Menstrual Cycle and Ovulation Research.

4. If your state is unlikely to ban abortion, you might still be able to safely use a period-tracking app. The crucial thing will be to choose one that has clear privacy settings and has publicly promised not to share user data with authorities. Quintin says Clue is a good option because it’s beholden to EU privacy laws and has gone on the record with its promise not to share information with authorities. 
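For readers comfortable with a few lines of code, here is the spreadsheet arithmetic from step 1 expressed as a minimal Python sketch. The dates are made up for illustration; substitute your own.

from datetime import date, timedelta

# Start dates of past periods (made-up dates, for illustration only)
period_starts = [
    date(2022, 3, 2),
    date(2022, 3, 30),
    date(2022, 4, 28),
    date(2022, 5, 26),
]

# A cycle's length is the number of days from the first day of one
# period to the first day of the next.
cycle_lengths = [
    (later - earlier).days
    for earlier, later in zip(period_starts, period_starts[1:])
]

average_cycle = sum(cycle_lengths) / len(cycle_lengths)
predicted_next = period_starts[-1] + timedelta(days=round(average_cycle))

print(f"Average cycle length: {average_cycle:.1f} days")
print(f"Predicted next period: {predicted_next.isoformat()}")

A spreadsheet template does the same thing with date-difference and average formulas; either way, the data stays wherever you choose to keep it.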

Composable enterprise spurs innovation


Overall, 74% of companies accelerated plans to move to the cloud by more than a year, jettisoning legacy technologies and operating models in favor of cloud-based data and applications, according to business analysis firm ZK Research.

A key part of that transformation relied on applications, usually in the cloud, that integrate apps and data with low-code functionality to create more efficient workflows more quickly than ever. Low-code is a software development approach for building processes and functionality with little or no code, which allows people who are not software developers to create applications.

Companies that structure daily workflows around these so-called “composable applications”—often called composable enterprises—have a much tighter relationship between technology and business units and can quickly assemble new applications and services at a fraction of the historical cost.

Composable applications provide an easy way to build on or extend existing applications. Think of building blocks: the foundational work has already been done, and additional functionality can be snapped on top.
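As a rough illustration of that building-block idea, here is a minimal, hypothetical Python sketch; the step names and the compose helper are invented for this example and do not come from any particular vendor’s platform.

# Each "block" is a small, self-contained step with a common interface:
# it takes a record (here, a plain dict) and returns an updated record.

def validate_order(record):
    record["valid"] = bool(record.get("customer_id"))
    return record

def notify_customer(record):
    # Stand-in for an email or SMS integration
    record["notified"] = record.get("valid", False)
    return record

def compose(*steps):
    # Chain independent blocks into a single workflow.
    def workflow(record):
        for step in steps:
            record = step(record)
        return record
    return workflow

# Assemble a new workflow from existing blocks without rewriting them.
order_workflow = compose(validate_order, notify_customer)
print(order_workflow({"customer_id": 42}))

Low-code platforms hide this plumbing behind visual tools, but the underlying idea is the same: standardized pieces with a shared interface that can be rearranged without writing the pieces themselves from scratch.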

That flexibility is necessary for the variability of the current workplace and economy, says Zeus Kerravala, founder and principal analyst at ZK Research. “We’re moving to an era where in any given moment, you could have everyone in the office, no one in the office, or every reasonable combination in between,” Kerravala says. “You could have all your shoppers online, only a few, or—depending on your industry—no shoppers online and every possible combination between. The pandemic has created these dramatic shifts in the way we learn, the way we live, and the way we work, based on forces that are outside of anyone’s control.”

When it comes to cloud infrastructure, companies have often pursued half measures—adopting it in such a way as to reinforce old business models, creating private clouds that mimic their on-premises infrastructure. But composability gives enterprises the ability to adapt to changes in operations and in their markets by creating new applications to support needed workflows without hiring additional or outside software developers to implement the changes.

Composable cloud services further liberate companies from relying on running their own software instances solely to customize the code to their needs. Composable applications bring together cloud, customization, integration, and workflow management, allowing companies to be flexible and innovate quickly.

When businesses suffered pandemic disruptions to critical business functions—such as call centers, IT support, and medical administration—composable applications allowed firms to adapt and continue. In one case, a company needed to extend its call-center system, which was hosted in a controlled environment, to allow access to employees through web browsers running on an Amazon virtual machine, says David Lee, vice president of products at RingCentral, an enterprise communications platform that has focused on composability. “They had to make these changes work overnight at employees’ homes, and that was a great challenge for a lot of organizations,” Lee says. “Companies well-adapted to potential change actually made these transitions very easy by composing new applications and workflows.”

