IDC predicts that by 2023 more than 50% of new enterprise IT infrastructure deployed will be at the edge rather than in corporate data centers, up from less than 10% in 2020. By 2024, the number of apps at the edge will have increased 800%. This growth is led by a host of industries: edge computing enables innovations in retail, health care, and manufacturing. For example, retailers can deploy video analytics technologies on an edge computing node, a piece of hardware with storage and networking capabilities located near their store locations, enabling them to predict theft.
“The video analytics system operates at the edge, analyzing customer movements to detect behaviors in real time that are predictive of theft,” says Paul Savill, senior vice president of product management and services at technology company Lumen, which offers an edge computing platform. That workload is unsuited to the public cloud for speed and cost reasons, but there’s no need to deploy edge computing at every retail location. “From one centralized node in one market area, say, the size of Denver, edge computing can serve many more retail locations within five milliseconds,” says Savill.
There can be consumer privacy concerns when it comes to analytics that flag certain behaviors. But with the right practices, such as anonymization, this type of application can be an important tool in the arsenal as many retailers, pinched by the lockdowns and restrictions that followed the 2020 coronavirus pandemic, struggle to find ways to operate profitably.
A mammoth US retailer, with 2019 revenues of $16.4 billion, Gap was an early user of edge computing. One of its biggest edge use cases is at the cash registers or other points of sale at its more than 2,500 retail stores, where millions of transactions are processed. Edge computing allows Gap to get nearly up-to-the-second data on sales performance. And during the pandemic, edge helps the retailer keep track of how many people are in its stores.
“The compliance rules for the number of customers allowed in a store were changing based on how each state and each county were in the situation of the pandemic,” says Shivkumar Krishnan, head of stores engineering at Gap, referring to regulations designed to limit the spread of the deadly disease. “So, to ensure capacity is not exceeded, we had to make sure we were measuring the occupancy in near real time.”
Processing data on an edge node eliminates the many points of failure that exist between the store and the cloud, according to Krishnan: everything from switches and routers to the telecom circuit and the cloud providers themselves. The edge gives the retailer full capability to process all transactions at any store; transactions only go to the cloud if the edge fails. Krishnan can remotely monitor and manage most of the retailer’s more than 100,000 devices used for sales and other store operations.
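This edge-first, cloud-as-fallback pattern can be pictured with a short sketch. The snippet below is purely illustrative and not based on Gap’s actual systems: the endpoint URLs, payload shape, and helper names are hypothetical, and a real point-of-sale integration would add retries, queuing, and reconciliation.

```python
# Hypothetical sketch of "process at the edge, fall back to the cloud."
# The endpoints and payload are invented for illustration only.
import requests

EDGE_NODE = "http://edge-node.store.local/transactions"       # hypothetical in-store node
CLOUD_API = "https://api.example-retailer.com/transactions"   # hypothetical cloud endpoint

def submit_transaction(payload: dict) -> dict:
    """Try the in-store edge node first; use the cloud only if the edge fails."""
    try:
        resp = requests.post(EDGE_NODE, json=payload, timeout=0.5)
        resp.raise_for_status()
        return {"processed_by": "edge", "result": resp.json()}
    except requests.RequestException:
        resp = requests.post(CLOUD_API, json=payload, timeout=5)
        resp.raise_for_status()
        return {"processed_by": "cloud", "result": resp.json()}

if __name__ == "__main__":
    print(submit_transaction({"store": "0042", "sku": "TEE-001", "amount": 24.95}))
```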
The reason we can’t just wish away or “fix” complexity is that every solution—whether it’s a technology or methodology—redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). However, in doing so, microservices placed new demands on engineering teams; they require greater maturity in terms of practices and processes. This is one of the reasons why we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed there was a tendency to adopt microservices simply because they were fashionable.
This doesn’t mean the solution is poor or defective. It’s more that we need to recognize the solution is a tradeoff. At Thoughtworks, we’re fond of saying “it depends” when people ask questions about the value of a certain technology or approach. It’s about how it fits with your organization’s needs and, of course, your ability to manage its particular demands. This is an example of essential complexity in tech—it’s something that can’t be removed and which will persist however much you want to get to a level of simplicity you find comfortable.
In terms of microservices, we’ve noticed increasing caution about rushing to embrace this particular architectural approach. Some of our colleagues even suggested the term “monolith revivalists” to describe those turning away from microservices back to monolithic software architecture. While it’s unlikely that the software world is going to make a full return to monoliths, frameworks like Spring Modulith—a framework that helps developers structure code in such a way that it becomes easier to break apart a monolith into smaller microservices when needed—suggest that practitioners are becoming more keenly aware of managing the tradeoffs of different approaches to building and maintaining software.
Supporting practitioners with concepts and tools
Because technical solutions have a habit of reorganizing complexity, we need to carefully attend to how this complexity is managed. Failing to do so can have serious implications for the productivity and effectiveness of engineering teams. At Thoughtworks we have a number of concepts and approaches that we use to manage complexity. Sensible defaults, for instance, are starting points for a project or piece of work. They’re not things that we need to simply embrace as a rule, but instead practices and tools that we collectively recognize are effective for most projects. They give individuals and teams a baseline to make judgements about what might be done differently.
One of the benefits of sensible defaults is that they can guard you against the allure of novelty and hype. As interesting or exciting as a new technology might be, sensible defaults can anchor you in what matters to you. This isn’t to say that new technologies like generative AI shouldn’t be treated with enthusiasm and excitement—some of our teams have been experimenting with these tools and seen impressive results—but instead that adopting new tools needs to be done in a way that properly integrates with the way you work and what you want to achieve. Indeed, there is a wealth of approaches to GenAI, from high-profile tools like ChatGPT to self-hosted LLMs. Using GenAI effectively is as much a question of knowing the right way to implement it for you and your team as it is a question of technical expertise.
Interestingly, the tools that can help us manage complexity aren’t necessarily new. One thing that came up in the latest edition of Technology Radar was something called risk-based failure modeling, a process used to understand the impact, likelihood, and detectability of the various ways that a system can fail. This has its origins in failure modes and effects analysis (FMEA), a practice that dates back to the period following World War II and has been used in complex engineering projects in fields such as aerospace. This signals that some challenges endure; while new solutions will always emerge to combat them, we should also be comfortable looking to the past for tools and techniques.
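To make the idea concrete, here is a minimal sketch of the arithmetic behind FMEA-style risk-based failure modeling: score each failure mode for severity, likelihood of occurrence, and how hard it is to detect, then rank by the product of the three scores (the risk priority number). The failure modes and scores below are invented for illustration.

```python
# Minimal FMEA-style sketch: rank failure modes by risk priority number (RPN),
# the product of severity, occurrence, and detection scores (each on a 1-10
# scale, where a detection score of 10 means "almost impossible to catch
# before impact"). The failure modes and scores here are made up.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("Payment service loses its database connection",    9, 3, 2),
    ("Stale cache serves yesterday's prices",             5, 6, 7),
    ("Certificate on an internal API silently expires",   7, 4, 9),
]

ranked = sorted(
    ((desc, sev * occ * det) for desc, sev, occ, det in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for description, rpn in ranked:
    print(f"RPN {rpn:4d}  {description}")
```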
Learning to live with complexity
McKinsey’s argument that the productivity of development teams can be successfully measured caused a stir across the software engineering landscape. While having the right metrics in place is certainly important, prioritizing productivity in our thinking can cause more problems than it solves when it comes to complex systems and an ever-changing landscape of solutions. Technology Radar called this out in an edition with the theme “How productive is measuring productivity?”, which highlighted the importance of focusing on developer experience with the help of tools like DX DevEx 360.
Focusing on productivity in the way McKinsey suggests can cause us to mistakenly see coding as the “real” work of software engineering, overlooking things like architectural decisions, tests, security analysis, and performance monitoring. This is risky—organizations that adopt such a view will struggle to see tangible benefits from their digital projects. This is why the key challenge in software today is embracing complexity: not treating it as something to be minimized at all costs, but as a challenge that requires thoughtfulness in processes, practices, and governance. The key question is whether the industry realizes this.
This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.
This works because when the subjects imagine saying words, the electrodes measure the activity of their motor neurons, whose firing rates contain information about how they are trying to move their tongue and larynx. From these data it is now possible to determine, with surprising accuracy, what words people are thinking of saying. Researchers believe that with more electrodes listening to more neurons, and more bandwidth, they’ll get even better at it.
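As a toy illustration of the idea (not the researchers’ actual pipeline), you can treat the firing rates recorded on each electrode during a short window as a feature vector and train an ordinary classifier to map it to the intended word; the synthetic data below stands in for real recordings so the example runs on its own.

```python
# Toy sketch only: decode an "intended word" from per-electrode firing rates.
# Real systems use far more sophisticated models and actual neural recordings;
# here synthetic data stands in so the example is self-contained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_electrodes, n_words, trials_per_word = 128, 5, 40

# Give each word its own average firing-rate pattern, then sample noisy trials.
prototypes = rng.normal(10.0, 3.0, size=(n_words, n_electrodes))
X = np.vstack([
    prototypes[w] + rng.normal(0.0, 2.0, size=(trials_per_word, n_electrodes))
    for w in range(n_words)
])
y = np.repeat(np.arange(n_words), trials_per_word)

decoder = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", decoder.score(X, y))
```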
“We don’t need more electrodes for cursor control, but for speech, we are in a regime where data rate matters a lot,” says Angle. “It’s very clear we need to increase the channel count to make those systems viable. With a thousand electrodes, it will be as good as a cell phone transcribing your speech. So in this situation, yes, you’re increasing the information rate by 10 or a hundred times.”
Bottom line: When it comes to enhancing communication between nondisabled people, my sources were skeptical that more bandwidth matters. The brain’s going to get in the way. But when it comes to restoring function, it does matter. It takes a lot of neurons—and a lot of data—to get a patient back to communicating at that basic 40 bits a second.
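For a rough sense of where a figure like 40 bits a second comes from, here is a back-of-envelope calculation. It assumes each word is an independent pick from a roughly 50,000-word vocabulary and ignores the redundancy of real language, so the numbers are order-of-magnitude estimates only; the 150-words-per-minute speaking rate is an assumption, not a figure from the article.

```python
# Back-of-envelope only: relate words per minute to an information rate in bits
# per second, assuming each word is an independent choice from a ~50,000-word
# vocabulary (real language is far more redundant, so treat these as rough).
import math

def bits_per_second(words_per_minute: float, vocabulary_size: int = 50_000) -> float:
    bits_per_word = math.log2(vocabulary_size)   # roughly 15.6 bits per word
    return words_per_minute / 60 * bits_per_word

print(f"Typing at 18 wpm   : ~{bits_per_second(18):.0f} bits/s")
print(f"Speaking at 150 wpm: ~{bits_per_second(150):.0f} bits/s")
```

By this crude measure, natural speech lands near the 40 bits a second mentioned above, while thought-typing at 18 words a minute (like the record described in the archive item below) works out to only a few bits a second.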
Read more from Tech Review’s archive
In 2021, I profiled Dennis DeGray, a paralyzed man who, at that time, was the world record holder for direct brain-to-computer communication. He could type via his thoughts at 18 words a minute. “It’s almost a conversation between the device and myself,” DeGray told me. “It’s a very personal interaction.”
But speed records keep falling. This August, researchers demonstrated that two people who’d lost the ability to speak, one due to a stroke and another because of ALS, were able to quickly utter words through a computer connected to implants placed in their brains. Read the report by Cassandra Willyard here.
A few years back, Adam Piore recounted the bizarre tale of Phil Kennedy, a pioneering brain-implant researcher who took the extreme step of getting an implant installed in his own brain.
From around the web
A second person has received a heart from a gene-modified pig. Lawrence Faucette, a Navy vet with heart failure, underwent transplant surgery on September 20 in Maryland. The previous subject lived two months after the surgery. (Associated Press)
Scientific sleuths are getting better at uncovering rotten research. (WSJ)
Those new-generation weight-loss drugs were prescribed to 1.7% of Americans in 2023. And you can expect the market for semaglutide to expand fast. That’s because more than 40% of Americans are obese. (CNN)
With those reactions, fusion reached what’s sometimes called scientific breakeven—a huge milestone by any definition. But, of course, there were caveats.
The lasers in this reactor are some of the most powerful in the world, but they’re also pretty inefficient. In the end, more power was pulled from the grid to run them than the fusion reactions produced. And most experts agree that this version of fusion isn’t super practical for power plants, at least in the near term.
While this was a milestone, it was more symbolic than practical. And it’s notable that in the meantime, the world’s largest and most famous fusion project is languishing—the massive international collaboration ITER (International Thermonuclear Experimental Reactor) has been plagued with delays and exploding costs.
While no private fusion company has achieved net energy (or at least, hasn’t announced it), there have been some milestones to mark. Commonwealth Fusion Systems has broken records for magnetic field strength with its new superconductor materials, a technology that could be the key to making fusion work economically at scale. Other startups, like TAE Technologies, have celebrated temperatures of 75 million °C, or even hotter, another key stepping stone to reaching viable fusion reactors.
I think it’s exciting to see more startups jumping in on fusion energy. There’s a sense of urgency from these companies, because they need to make progress and continue raising money or risk going out of business.