A persona is an imaginary figure representing a segment of real people, and it is a communicative design technique aimed at enhancing user understanding. Through several decades of use, personas were static data structures: frameworks of user attributes with no interactivity. A persona was a means to organize data about the imaginary person and to present that information to decision-makers. In most situations, this made personas difficult to act on.
How personas and data work together
With the growth of analytics data, personas can now be generated from big data using algorithmic approaches. This integration of personas and analytics offers impactful opportunities to shift personas from flat files of data presentation to interactive interfaces for analytics systems. These persona analytics systems provide both the empathic connection of personas and the rational insights of analytics. With persona analytics systems, the persona is no longer a static, flat file. Instead, personas become operational modes of accessing user data. Combining personas and analytics also makes the user data less challenging to employ for those lacking the skills or desire to work with complex analytics. Another advantage of persona analytics systems is that one can create hundreds of data-driven personas to reflect the various behavioral and demographic nuances in the underlying user population.
A “personas as interfaces” approach offers the benefits of both personas and analytics systems and addresses the shortcomings of each. Transforming both the persona and analytics creation process, personas as interfaces provide both theoretical and practical implications for design, marketing, advertising, health care, and human resources, among other domains.
This personas-as-interfaces approach is the foundation of the persona analytics system Automatic Persona Generation (APG). In pushing advancements of both persona and analytics conceptualization, development, and use, APG presents a multi-layered, full-stack integration affording three levels of user data presentation: (a) the conceptual persona, (b) the analytical metrics, and (c) the foundational data.
APG generates casts of personas representing the user population, with each segment having a persona. Relying on regular data collection intervals, data-driven personas enrich the traditional persona with additional elements, such as user loyalty, sentiment analysis, and topics of interest, which are features requested by APG customers.
Leveraging intelligence system design concepts, APG identifies unique behavioral patterns of user interactions with products (i.e., products, services, content, interface features, etc.) and then associates these unique patterns with demographic groups based on the strength of association. After obtaining a grouped interaction matrix, we apply matrix factorization or other algorithms to identify latent user interaction patterns. Matrix factorization and related algorithms are particularly suited to reducing the dimensionality of large datasets by discerning latent factors.
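To make the factorization step concrete, here is a minimal sketch using non-negative matrix factorization with Lee–Seung multiplicative updates on a toy group-by-item interaction matrix. The matrix values, the number of latent patterns, and the argmax segment assignment are all invented for illustration; they are not APG's actual implementation.

```python
import numpy as np

def factorize_interactions(V, k, iters=200, seed=0):
    """Approximate a non-negative group-by-item interaction matrix V as W @ H,
    where W maps demographic groups to k latent behavioral patterns and
    H maps patterns to items (Lee-Seung multiplicative updates)."""
    rng = np.random.default_rng(seed)
    g, m = V.shape
    W = rng.random((g, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # Alternately scale H and W so the product W @ H approaches V.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy data: 4 demographic groups x 5 content items (interaction counts).
V = np.array([
    [10, 8, 0, 0, 1],
    [ 9, 7, 1, 0, 0],
    [ 0, 1, 9, 8, 7],
    [ 1, 0, 8, 9, 8],
], dtype=float)

W, H = factorize_interactions(V, k=2)
# Each group's dominant latent pattern becomes a candidate persona segment.
segments = W.argmax(axis=1)
```

With this block-structured toy matrix, the first two groups load on one latent pattern and the last two on the other, so two persona segments emerge.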
How APG data-driven personas work
APG enriches the user segments produced by the algorithms by adding an appropriate name, picture, social media comments, and related demographic attributes (e.g., marital status, educational level, occupation, etc.) via querying the audience profiles of prominent social media platforms. APG has an internal meta-tagged database of thousands of purchased, copyrighted photos that are age, gender, and ethnically appropriate. The system also has an internal database of hundreds of thousands of names that are likewise age, gender, and ethnically appropriate. For example, for a persona of an Indian female in her twenties, APG automatically selects a name that was popular for girls born in India roughly twenty years ago. The APG data-driven personas are then displayed to users in the organization via the interactive online system.
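The enrichment lookup described above can be sketched as a query against a meta-tagged name database. Everything here — the `NAME_DB` structure, its keys, and the names themselves — is invented purely for illustration and is not APG's actual database or API.

```python
# Hypothetical meta-tagged name database, keyed by (country, gender, age bracket),
# with candidate names ordered from most to least popular.
NAME_DB = {
    ("IN", "female", "20s"): ["Priya", "Ananya", "Neha"],
    ("IN", "male", "20s"): ["Rahul", "Arjun", "Vikram"],
}

def pick_persona_name(country, gender, age_bracket, rank=0):
    """Return a demographically appropriate name for a persona,
    most popular first; None if no entry matches."""
    candidates = NAME_DB.get((country, gender, age_bracket), [])
    return candidates[rank] if rank < len(candidates) else None

name = pick_persona_name("IN", "female", "20s")
```

A photo lookup would follow the same pattern, with the meta-tags pointing at image files instead of name strings.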
APG employs the foundational user data that the system algorithms act upon, transforming this data into information about users. This algorithmic processing outcome is actionable metrics and measures about the user population (i.e., percentages, probabilities, weights, etc.) of the type that one would typically see in industry-standard analytics packages. Employing these actionable metrics is the next level of abstraction taken by APG. The result is a persona analytics system capable of presenting user insights at different granularity levels, with levels both integrated and appropriate to the task.
For example, C-level executives may want a high-level view of the users, for which personas are applicable. Operational managers may want a probabilistic view, for which the analytics are appropriate. Implementers need to take direct action on users, such as running a marketing campaign, for which the individual user data is more suitable.
Each level of the APG can be broken down as follows:
Conceptual level: personas. The highest level of abstraction, the conceptual level, is the set of personas that APG generates from the data using the method described above, with a default of ten personas. However, APG can theoretically generate as many personas as needed. The persona has nearly all the typical attributes that one finds in traditional flat-file persona profiles. However, in APG, personas as interfaces allow for dramatically increased interactivity in leveraging personas within organizations. The decision-maker can alter the default number to generate more or fewer personas, with the system currently set for between five and 15. The system also allows for searching a set of personas or leveraging analytics to predict persona interests.
Analytics level: percentages, probabilities, and weights. At the analytics level, APG personas act as interfaces to the underlying information and data used to create the personas. The specific information may vary somewhat by the data source. Still, the analytics level reflects the metrics and measures generated from the foundational user data and used to create the personas. In APG, the personas provide affordance to the various analytics information via clickable icons on the persona interface. For example, APG displays the percentage of the entire user population that a particular persona represents. This analytic insight is valuable for decision-makers in determining the importance of designing or developing for a specific persona, and it helps address the issue of the persona’s validity in representing actual users.
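The population-share metric mentioned above takes only a few lines to compute once each user carries a persona segment label. The segment labels and sample below are hypothetical, included only to show the shape of the calculation.

```python
from collections import Counter

def persona_coverage(user_segments):
    """Percentage of the user population represented by each persona segment."""
    counts = Counter(user_segments)
    total = len(user_segments)
    return {seg: 100 * n / total for seg, n in counts.items()}

# Toy sample: 8 users assigned to three persona segments.
coverage = persona_coverage([0, 0, 1, 2, 1, 0, 0, 1])
# persona 0 covers 50% of this sample, persona 1 covers 37.5%, persona 2 covers 12.5%
```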
User level: individual data. Leveraging the demographic metadata from the underlying factorization algorithm, decision-makers can access the specific user level (i.e., individual or aggregate) directly within APG. The numerical user data (in various forms) are the foundation of the personas and analytics.
The implications of data-driven personas
The conceptual shift of personas from flat files to personas as interfaces for enhanced user understanding opens new possibilities for interaction among decision-makers, personas, and analytics. Using data-driven personas embedded as the interfaces to analytics systems, decision-makers can, for example, imbue analytics systems with the benefit of personas, forming a psychological bond, via empathy, between stakeholders and user data, while still having access to the practical user numbers. There are several practical implications for managers and practitioners. Namely, personas are now actionable, as the personas accurately reflect the underlying user data. This full-stack implementation has not previously been available with either personas or analytics.
APG is a fully functional system deployed with real client organizations. Please visit https://persona.qcri.org to see a demo.
This content was written by Qatar Computing Research Institute, Hamad Bin Khalifa University, a member of Qatar Foundation. It was not written by MIT Technology Review’s editorial staff.
A new training model, dubbed “KnowNo,” aims to address this problem by teaching robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot needs to first understand that it needs to go into the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
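The decide-or-ask logic can be sketched in a few lines. KnowNo's published method calibrates its confidence threshold with conformal prediction over a calibration set; in this sketch the threshold is a fixed illustrative value, and the candidate actions and scores stand in for real LLM outputs.

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def prediction_set(actions, scores, threshold):
    """Keep every action whose confidence clears the threshold.
    In KnowNo the threshold is calibrated via conformal prediction;
    here it is a hand-picked illustrative constant."""
    probs = softmax(scores)
    return [a for a, p in zip(actions, probs) if p >= threshold]

def decide(actions, scores, threshold=0.25):
    keep = prediction_set(actions, scores, threshold)
    if len(keep) == 1:
        return ("act", keep[0])        # one confident option: just do it
    return ("ask", keep)               # ambiguous: ask the human to choose

# "Put the bowl in the microwave" with two plausible bowls on the counter:
actions = ["put metal bowl in microwave",
           "put plastic bowl in microwave",
           "do nothing"]
decision = decide(actions, scores=[2.0, 1.9, -1.0])
```

With two near-tied candidates, the prediction set contains both bowls, so the robot asks for clarification; a clearly dominant score would yield a single-action set and no question.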
—June Kim
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microrobots are finally coming, and that they could be a game changer for a number of serious diseases.
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Alzheimer’s patients have long lacked treatment options. Several new drugs have now been proved to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi by Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. Plus, they’re hard to administer—patients receive doses via an IV and must receive regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.
Solar geoengineering
One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale high-flying tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not really clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.