Connect with us

Tech

Work in Asia’s data age

Published

on

Work in Asia’s data age


Technology forecaster Forrester have found nearly half of Asian managers surveyed expect permanent increases in their full-time remote workforce; many will seek to use AI-enhanced workforce engagement tools to try to increase workplace communication to reduce the new distance this creates. 

As part of the Global AI Agenda 2021 program, in association with Cornerstone OnDemand, MIT Technology Review Insights surveyed more than 1,500 senior decision-makers and technology leaders to understand how AI is being used in organizations in Asia and globally to accelerate revenue growth and digital collaboration, and to augment human resource capabilities.   

AI, top to bottom

Globally, corporates are deploying AI tools and analytics in increasing numbers, to squeeze more productivity out of manufacturing, help employees understand customer requirements more precisely, and support business outcomes. Like many technology adoption strategies, digitally-enabled insight is traditionally seen as a bottom-line tool—for example, more visibility across a supply chain allows a manufacturer to quickly identify places to trim costs. Like many strategic pivots over the last 18 months, the impact of covid-19 has sped this up.

Allan Tate, the executive chair of the MIT Sloan School of Management’s CIO Symposium, refers to this as “the Big Reset: where enterprises undergo two years of digital transformation in two months.” While he concedes that “right now using AI to increase efficiency and reduce costs is probably the most common use case, AI-enabled data usage is quickly becoming a key way of driving revenue for many corporations.”

This view is borne out by our global survey on AI adoption strategies in enterprises: nearly half of our respondents indicate that they have either deployed AI to achieve revenue growth, or are accelerating their efforts to do so. A quarter have plans to step up the use of AI in top-line initiatives, and only 12% indicate that it is a tool only for cost containment.

The perspective from respondents based in Asia largely echoes the global trend, but also reveals a region that is simultaneously behind the curve, and ready to leapfrog it. Asian respondents indicate lower current use of AI for revenue growth than the global average, but are much more likely to undertake “top line” AI initiatives, and over a third have plans to increase its use.

This increasing current emphasis on “top line” AI, which often supports customer-facing teams through increased customer insight, drives business expansion. This, in turn, drives efforts to build capabilities for marketing and business development professionals, such as augmenting their workflows and serving as a catalyst for skills development. Asian respondents, on average, are slightly more aligned toward revenue growth performance in their AI project deployment than the global average (see Figure 2).

Organizations focusing on “bottom line” AI initiatives—which fall into cost efficiency and resource optimization categories—are more likely looking to increase automating functions and drive change in operations, which could lead to task redefinition for operations and internal teams.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Tech

Turning medical data into actionable knowledge

Published

on

Turning medical data into actionable knowledge


PACS remains an indispensable tool for viewing and interpreting imaging results, but leading health care providers are now beginning to move beyond PACS. The new paradigm brings data from multiple medical specialties together into a single platform, with a single user interface that strives to provide a holistic understanding of the patient and facilitate clinical reporting. By connecting data from multiple specialties and enabling secure and efficient access to relevant patient data, advanced information technology platforms can enhance patient care, simplify workflows for clinicians, and reduce costs for health care organizations. This organizes data around patients, rather than clinical departments.

Meeting patient expectations

Health care providers generate an enormous volume of data. Today, nearly one-third of the world’s data volume is generated by the health care industry. The growth in health care data outpaces media and entertainment, whose data is expanding at a 25% compound annual growth rate ,compared to the 36% rate for health care data. This makes the need for a comprehensive health care data management systems increasingly urgent.

The volume of health care industry data is only part of the challenge. Different data types stored in different formats create an additional hurdle to the efficient storage, retrieval, and sharing of clinically important patient data.

PACS was designed to view and store data in the Digital Imaging and Communications in Medicine (DICOM) standard, and a process known as “DICOM-wrapping” is used for PACS to provide access to patient information stored in PDF, MP4, and other file formats. In addition to adding additional steps that impede efficient workflow, DICOM-wrapping makes it difficult for clinicians to work with a file in its native format. PACS users are given what is essentially a screen shot of an Excel file, which makes it impossible to use the data analysis features in the Excel software.

With an open image and data management (IDM) system coupled with an intuitive reading and reporting workspace, patient data can be consolidated in one location instead of in multiple data silos, providing clinicians with the information they need to provide the highest level of patient-centered care. In a 2017 survey by health insurance company Humana, its patients said they aren’t interested in the details of health care IT, but are nearly unanimous when it comes to their expectations, with 97% of patients saying that their health care providers should have access to their complete medical history.

Adapting to clinical needs

To meet patient expectations and needs, health care IT seeks to meet the needs of health care providers and systems by offering flexibility—both in its initial setup and in its capacity to scale to meet evolving organizational demands.

A modular architecture enables health care providers and systems to tailor their system to their specific needs. Depending on clinical needs, health care providers can integrate specialist applications for reading and reporting, AI-powered functionalities, advanced visualization, and third-party tools. The best systems are scalable, so that they can grow as an organization grows, with the ability to flexibly scale hardware by expanding the number of servers and storage capacity.

A simple, unified UI enables a quick learning curve across the organization, while the adoption of a single enterprise system helps reduce IT costs by enabling the consolidation and integration of previously distinct systems. Through password-protected data transfers, these systems can also facilitate communication with patients.

Continue Reading

Tech

Why Big Tech’s bet on AI assistants is so risky

Published

on

Why Big Tech’s bet on AI assistants is so risky


OpenAI unveiled new ChatGPT features that include the ability to have a conversation with the chatbot as if you were making a call, allowing you to instantly get responses to your spoken questions in a lifelike synthetic voice, as my colleague Will Douglas Heaven reported. OpenAI also revealed that ChatGPT will be able to search the web.  

Google’s rival bot, Bard, is plugged into most of the company’s ecosystem, including Gmail, Docs, YouTube, and Maps. The idea is that people will be able to use the chatbot to ask questions about their own content—for example, by getting it to search through their emails or organize their calendar. Bard will also be able to instantly retrieve information from Google Search. In a similar vein, Meta too announced that it is throwing AI chatbots at everything. Users will be able to ask AI chatbots and celebrity AI avatars questions on WhatsApp, Messenger, and Instagram, with the AI model retrieving information online from Bing search. 

This is a risky bet, given the limitations of the technology. Tech companies have not solved some of the persistent problems with AI language models, such as their propensity to make things up or “hallucinate.” But what concerns me the most is that they are a security and privacy disaster, as I wrote earlier this year. Tech companies are putting this deeply flawed tech in the hands of millions of people and allowing AI models access to sensitive information such as their emails, calendars, and private messages. In doing so, they are making us all vulnerable to scams, phishing, and hacks on a massive scale. 

I’ve covered the significant security problems with AI language models before. Now that AI assistants have access to personal information and can simultaneously browse the web, they are particularly prone to a type of attack called indirect prompt injection. It’s ridiculously easy to execute, and there is no known fix. 

In an indirect prompt injection attack, a third party “alters a website by adding hidden text that is meant to change the AI’s behavior,” as I wrote in April. “Attackers could use social media or email to direct users to websites with these secret prompts. Once that happens, the AI system could be manipulated to let the attacker try to extract people’s credit card information, for example.” With this new generation of AI models plugged into social media and emails, the opportunities for hackers are endless. 

I asked OpenAI, Google, and Meta what they are doing to defend against prompt injection attacks and hallucinations. Meta did not reply in time for publication, and OpenAI did not comment on the record. 

Regarding AI’s propensity to make things up, a spokesperson for Google did say the company was releasing Bard as an “experiment,” and that it lets users fact-check Bard’s answers using Google Search. “If users see a hallucination or something that isn’t accurate, we encourage them to click the thumbs-down button and provide feedback. That’s one way Bard will learn and improve,” the spokesperson said. Of course, this approach puts the onus on the user to spot the mistake, and people have a tendency to place too much trust in the responses generated by a computer. Google did not have an answer for my question about prompt injection. 

For prompt injection, Google confirmed it is not a solved problem and remains an active area of research. The spokesperson said the company is using other systems, such as spam filters, to identify and filter out attempted attacks, and is conducting adversarial testing and red teaming exercises to identify how malicious actors might attack products built on language models. “We’re using specially trained models to help identify known malicious inputs and known unsafe outputs that violate our policies,” the spokesperson said.  

Continue Reading

Tech

Child online safety laws will actually hurt kids, critics say

Published

on

Child online safety laws will actually hurt kids, critics say


At the same time, we’ve also seen many states pick up (and politicize) laws about online safety for kids in recent months. These policies vary quite a bit from state to state, as I wrote back in April. Some focus on children’s data, and others try to limit how much and when kids can get online. 

Supporters say these laws are necessary to mitigate the risks that big tech companies pose to young people—risks that are increasingly well documented. They say it’s well past time to put guardrails in place and limit the collecting and selling of minors’ data.

“What we’re doing here is creating a duty of care that makes the social media platforms accountable for the harms they’ve caused,” said Senator Richard Blumenthal, who is co-sponsoring a child online safety bill in the Senate, in an interview with Slate. “It gives attorneys general and the FTC the power to bring lawsuits based on the product designs that, in effect, drive eating disorders, bullying, suicide, and sex and drug abuse that kids haven’t requested and that can be addictive.”

But—surprise, surprise—as with most things, it’s not really that simple. There are also vocal critics who argue that child safety laws are actually harmful to kids because all these laws, no matter their shape, have to contend with a central tension: in order to implement laws that apply to kids online, companies need to actually identify which users are kids—which requires the collection or estimation of sensitive personal information. 

I was thinking about this when the prominent New York–based civil society organization S.T.O.P. (which stands for the Surveillance Technology Oversight Project) released a report on September 28 that highlights some of these potential harms and makes the case that all bills requiring tech companies to identify underage users, even if well intentioned, will increase online surveillance for everyone. 

“These bills are sold as a way to protect teens, but they do just the opposite,” S.T.O.P. executive director Albert Fox Cahn said in a press release. “Rather than misguided efforts to track every user’s age and identity, we need privacy protections for every American.”  

There’s a wide range of regulations out there, but the report calls out several states that are creating laws imposing stricter—even drastic—restrictions on minors’ internet access, effectively limiting online speech. 

A Utah law that will take effect in March 2024, for instance, will require that parents give consent for their kids to access social media outside the hours of 6:30 a.m. to 10:30 p.m., and that social media companies build features enabling parents to access their kids’ accounts. 

Continue Reading

Copyright © 2021 Seminole Press.