Sandeep: Sure. Using an example is great because this is such a wide field, both commercial real estate and the application of AI/ML in commercial real estate. In the area of smart buildings, we are focused on enabling three outcomes for our clients: energy, efficiency, and experience. That is: how do they manage their energy usage, how do they get more efficient in everything they do with respect to managing a property, and what is the workplace experience for the employees in a building?
And let me just take efficiency as an example. There was a certain way in which buildings were managed previously. With the application of cloud-native global technology solutions infused with AI/ML, we are now able to manage facilities in a smarter manner, what we call Smart FM. We are able to look at occupancy and clean the environment dynamically rather than having people clean on a fixed schedule, and that dynamic cleaning saves our clients a lot of money. We are able to detect anomalies in how we manage buildings and assets, which further reduces false alarms and the number of truck rolls that need to happen to manage a building. So there are many different ways in which we infuse AI/ML.
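To make the occupancy-driven idea concrete, here is a minimal sketch of how a scheduler might dispatch cleaning only for zones whose traffic crosses a threshold, instead of on a fixed calendar. The zone names, counts, and threshold are all hypothetical, not CBRE's actual system:

```python
# Hypothetical sketch: dispatch cleaning based on measured occupancy
# rather than a fixed schedule. All data and thresholds are illustrative.

def zones_needing_cleaning(occupancy_counts, threshold=100):
    """Return the zones whose cumulative visitor count since the last
    cleaning meets or exceeds the threshold."""
    return [zone for zone, count in occupancy_counts.items()
            if count >= threshold]

# Cumulative badge-in / sensor counts since each zone was last cleaned.
counts = {"lobby": 240, "floor-3": 35, "cafeteria": 180, "floor-7": 12}

print(zones_needing_cleaning(counts))  # → ['lobby', 'cafeteria']
```

Lightly used zones are skipped entirely, which is where the savings over a fixed schedule come from.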
Laurel: That’s really interesting. So according to a 2019 International Energy Agency global status report, the real estate industry contributed 39% of global carbon emissions. Could you offer us an example of how smart technologies, like what you’re talking about now, could boost operational efficiencies and then also help reduce emissions and improve sustainability?
Sandeep: Yeah, absolutely. I think there are two ways in which we look at this space. As you indicated, 39% of carbon emissions are contributed by real estate, and so the industry has a huge role to play. Part of those emissions occur at the time of construction itself, and the remainder over the life cycle of the asset. Right at the time of construction, we’ve built capabilities to design and redesign based on a certain energy emission target for a building, and to select our suppliers against that same target.
And then at the time of managing the building, there are many solutions that offer instant gratification, stick sensors up, light up a building, and they all work well if all you need to do is to light up a building. But in order to meet the scale and the global net-zero targets that our clients have set, our solutions need to be at portfolio scale and need to be multidimensional.
And so what we do is ingest data from various different sources, from sensors, harmonize it, and land it against a standard taxonomy. Then we are able to assess it in many different ways: bringing together different aspects of energy and occupancy, and managing the building based on the occupancy in the building. For example, at one of our clients recently, we were able to stand up those interventions at 25-plus buildings. That led to a reduction in peak energy usage for them, as well as a reduction in reactive maintenance work orders, reducing truck rolls and supporting their energy goals.
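The "land it against a standard taxonomy" step above can be sketched as follows. This is purely illustrative: the vendor names, field mappings, and units are hypothetical, not CBRE's actual schema.

```python
# Hypothetical sketch: harmonize readings from different sensor vendors
# onto one standard taxonomy so they can be analyzed together.
# Vendor names, field mappings, and units are illustrative only.

VENDOR_MAPPINGS = {
    "vendor_a": {"temp_f": "temperature_c", "occ": "occupancy"},
    "vendor_b": {"temperature": "temperature_c", "people_count": "occupancy"},
}

def harmonize(vendor, payload):
    """Map one vendor's raw payload onto the standard taxonomy."""
    mapping = VENDOR_MAPPINGS[vendor]
    out = {}
    for raw_key, value in payload.items():
        if raw_key not in mapping:
            continue  # drop fields that have no place in the taxonomy
        # vendor_a reports Fahrenheit; convert to the standard unit.
        if vendor == "vendor_a" and raw_key == "temp_f":
            value = round((value - 32) * 5 / 9, 1)
        out[mapping[raw_key]] = value
    return out

print(harmonize("vendor_a", {"temp_f": 72, "occ": 14}))
# → {'temperature_c': 22.2, 'occupancy': 14}
```

Once every source lands in the same shape and units, the downstream analytics (energy plus occupancy, anomaly detection) can run across the whole portfolio.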
Laurel: So you also are talking about this on a portfolio level. And CBRE’s own corporate responsibility and environmental social and governance or ESG goals are as follows: scale to a low-carbon future, create opportunities for employees to thrive through diversity, equity, inclusion initiatives and to build trust through integrity. How is CBRE using emerging technologies like artificial intelligence and machine learning to then become more efficient and also meet those ESG goals?
Sandeep: I think a lot of the ESG problem is a data problem. If you talk to most organizations grappling with this problem right now, the questions they’ll raise are: do they have a clear line of sight into, for example, their scope 1, scope 2, and scope 3 emissions? Are they able to capture the data in a reliable manner, audit it in a reliable manner, and then report against it? And while they report against it, can they also manage usage? Because if you are able to look at the data, then you will know where corrective actions are required. Building on the foundation of the data platform we’ve built, which is 100% cloud native, by the way, we can then apply ML models on top of that to detect anomalies. We take a digital-twins perspective to map our data against the buildings and manage the end-to-end lifecycle of that real estate process.
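As a minimal illustration of anomaly detection on usage data of the kind described above (a simple statistical baseline, not CBRE's actual models, and with made-up readings), a trailing z-score can flag readings that break sharply from recent history:

```python
# Hypothetical sketch: flag anomalous daily energy readings with a
# z-score against a trailing window. Real systems would use richer
# ML models; this only illustrates the idea. Data is made up.
from statistics import mean, stdev

def find_anomalies(readings, window=7, z_threshold=3.0):
    """Return the indices of readings that deviate from the trailing
    window's mean by more than z_threshold standard deviations."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Daily kWh readings with one obvious spike on day 7.
usage = [100, 102, 98, 101, 99, 103, 100, 250, 101, 99]
print(find_anomalies(usage))  # → [7]
```

Each flagged index is a candidate for a corrective action, which is the "manage usage" loop Sandeep describes: the data tells you where to intervene.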
These robots know when to ask for help
A new training model, dubbed “KnowNo,” aims to address this problem by teaching robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot needs to first understand that it needs to go into the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
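The candidate-plus-confidence step can be sketched roughly as follows. This is a simplified, hypothetical rendering of the idea, not the KnowNo implementation: the action names, scores, and threshold are invented, and a real system would calibrate the threshold statistically.

```python
# Hypothetical sketch of the KnowNo idea: score candidate next actions,
# keep every option whose confidence clears a threshold, and ask the
# human for help whenever more than one survives. All values invented.
import math

def decide(candidate_scores, threshold=0.5):
    """Return ('act', action) if one candidate dominates, otherwise
    ('ask', options) so the human can disambiguate."""
    # Softmax over raw language-model scores -> confidence per candidate.
    exps = {a: math.exp(s) for a, s in candidate_scores.items()}
    total = sum(exps.values())
    confident = [a for a, e in exps.items() if e / total >= threshold]
    if len(confident) == 1:
        return ("act", confident[0])
    options = confident or list(candidate_scores)
    return ("ask", options)

# Ambiguous: "Put the bowl in the microwave" with two plausible bowls.
print(decide({"place metal bowl": 1.0, "place plastic bowl": 1.0}))
# → ('ask', ['place metal bowl', 'place plastic bowl'])

# Clear instruction: one action dominates, so no question is asked.
print(decide({"open fridge": 4.0, "open oven": 0.5}))
# → ('act', 'open fridge')
```

The key property is the one the article highlights: the robot only interrupts the human when its confidence genuinely fails to single out one action.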
The Download: inside the first CRISPR treatment, and smarter robots
The news: A new robot training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Why it matters: While robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense. That’s something large language models could help to fix, because they have a lot of common-sense knowledge baked in. Read the full story.
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to even the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microbots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.
5 things we didn’t put on our 2024 list of 10 Breakthrough Technologies
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Alzheimer’s patients have long lacked treatment options. Several new drugs have now been proved to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi by Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. Plus, they’re hard to administer—patients receive doses via an IV and must receive regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.
Solar geoengineering
One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale high-flying tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not really clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.