The world is in the grip of a major silicon component shortage, and it’s affecting production in nearly every sector. At the start of the pandemic, demand for components soared along with the demand for computers, servers, and gaming consoles.
Social distancing at factories and chip-hoarding among tech giants have led to the most serious “chip famine” in recent memory. The shortage will undoubtedly affect the price and availability of electronic devices through 2021 — but smart home experience pioneer Plume isn’t worried.
You may know Plume as the creator of the world’s first self-optimizing WiFi, but the company is more than a WiFi solutions provider. Lately, Plume has been on the front lines of bringing order and integration to the smart home. Its open-source software has paved the way for an industry standard that could provide some relief during the shortage.
Most remarkably, Plume is finally delivering on the original vision for the smart home — complete interoperability.
Interoperability is the ability of systems, whether computers, software, or devices made by different manufacturers, to exchange information and use it together.
OpenSync Could be the Missing Link for the Connected Home
In 2018, Plume and Samsung, among other industry players, announced OpenSync — the first multi-industry open-source framework — designed to connect in-home hardware to the cloud.
OpenSync is a cloud-agnostic, CPE-agnostic, and silicon-agnostic layer of software that operates across WiFi-enabled devices. It allows Internet service providers to deliver, manage, and support residential cloud services for their customers. Because the software is cloud-agnostic, workloads can move seamlessly between cloud platforms and other infrastructure without operational dependencies getting in the way.
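To make “agnostic” concrete, here is a minimal sketch of the underlying design pattern in Python. It is purely illustrative, not OpenSync’s actual API: every device exposes one common interface to the cloud, no matter which vendor or chipset sits underneath.

```python
from abc import ABC, abstractmethod

class CloudConnector(ABC):
    """Common interface every device exposes to the cloud (illustrative only)."""

    @abstractmethod
    def report_telemetry(self) -> dict:
        """Translate vendor-specific stats into a common schema."""

    @abstractmethod
    def apply_config(self, config: dict) -> None:
        """Apply a cloud-issued configuration to the device."""

class VendorARouter(CloudConnector):
    def report_telemetry(self) -> dict:
        return {"model": "A-1200", "rssi": -42, "clients": 7}

    def apply_config(self, config: dict) -> None:
        print(f"Vendor A router applying {config}")

class VendorBExtender(CloudConnector):
    def report_telemetry(self) -> dict:
        return {"model": "B-300", "rssi": -55, "clients": 2}

    def apply_config(self, config: dict) -> None:
        print(f"Vendor B extender applying {config}")

# The cloud manages a mixed fleet through one uniform interface.
fleet: list[CloudConnector] = [VendorARouter(), VendorBExtender()]
for device in fleet:
    print(device.report_telemetry())
    device.apply_config({"channel": 36, "bandwidth": "80MHz"})
```

Because the cloud only ever sees the common interface, swapping hardware vendors, or mixing them in one home, requires no change on the cloud side.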
Licensing of the Open Source Framework
OpenSync is available under a BSD-3 open-source license. The BSD-3 is the Modified BSD License (3-clause) — which is in the family of permissive free software licenses. This license is compatible with RDK, OpenWRT, and prplWRT. It’s also integrated into the SDKs and designs from the industry’s leading silicon providers.
While OpenSync may create more competition for Plume, it will ultimately safeguard the company’s future through versatility, compatibility, and its ability to scale.
According to Plume CEO Fahri Diner, consumer needs are evolving. We’ve gone from simply needing Internet connectivity to craving faster speeds for entertainment and socialization. Now consumers are looking for personalized cross-device experiences.
Connectivity to Support All Services
Connectivity to the home and services in the home are starting to decouple, says Diner. Companies such as Amazon, Apple, and Google have bypassed Internet providers to offer services directly to the consumer. To stay competitive, providers have realized they must be able to curate, deliver, and support these services.
Liberty Global, Bell Canada, and Comcast have already joined the OpenSync initiative. And why not? It makes it incredibly easy for Communications Service Providers (CSPs) to swiftly deliver the services their customers want. It’s also a natural fit for Plume, who has always sought to partner with—rather than compete against—providers.
How OpenSync Enables Smart Home 2.0
OpenSync doesn’t just help usher service providers into the smart home era. In a big-picture sense, creating a common software layer allows Plume to fulfill the original promise of the smart home. Every device and every service can be fully integrated and controlled from the cloud.
“Consumers today demand choice when bringing products and services into their home that work best for their lifestyle, without being locked into any one ecosystem,” says Samsung Vice President Chanwoo Park.
The Consumer Wants Easy Install and Interoperability for All Smart Home Devices
A smart home user might have an Amazon Echo, a Nest camera, a Samsung smart refrigerator, and Apple TV. Ease and convenience go out the window the minute those things don’t play well together.
With OpenSync, consumers can equip their homes with smart networking gear from many different suppliers, regardless of their CSP. Consumers don’t want to be locked into a single brand of networking devices, and that freedom will become especially relevant as tech companies try to exert more control over the smart home ecosystem.
Demand for the New “Works-With” Generation of Products
Last May, for instance, Google ended “Works With Nest” and transitioned to “Works With Google Assistant.” In a nutshell, Google didn’t want users controlling Nest products through third-party smart home applications. Google also makes its own proprietary WiFi system.
Imagine if the “works-with” generation of products were extended to “Works With Google WiFi,” making it more difficult for consumers with Google WiFi to use IoT devices from other brands.
Open-industry standards supplied by multiple vendors, such as OpenSync, are the best way to fight any attempts at monopolizing the smart home through proprietary WiFi networking.
The Beauty of Plume’s Hardware-Agnostic Solutions
Plume seems to have anticipated that major brands and service providers would soon be vying for consumers’ complete devotion. The company cleverly side-stepped this problem by making its smart home solutions hardware-agnostic. Plume’s adaptive WiFi works seamlessly with a customer’s existing CSP and any OpenSync-enabled hardware.
Plume has always been hardware-agnostic, putting the software required to connect a networking device to the cloud onto any brand of device, with any type of chipset inside. Plume took this to the next level by open-sourcing that software with its partners in 2018.
Total Hardware-Agnostic Capabilities
Working toward total hardware-agnostic capabilities allows not just Plume, but anyone, to put the software on any device with any chipset. Flexibility and availability have been a win for Plume and its partners. Wireless customers can get all the frills of adaptive WiFi and other smart home services without locking themselves into a particular hardware supplier or chipset vendor.
OpenSync gave Plume an opportunity to make inroads with every major provider because it solved a growing problem for CSPs: their customers were rapidly adopting smart home tech, along with a layer of new services.
Until OpenSync came along, providers didn’t have a good solution to help their customers manage this. With OpenSync, the Internet of Things suddenly just works.
Why OpenSync is a Shock Absorber for the Silicon Crisis
An open-source framework could prove even more valuable as the silicon shortage grows more dire. Plume has forged partnerships with major chipset vendors, and the 20 leading WiFi CPEs are supported by OpenSync.
All OpenSync-powered CPEs can coexist on the same network, even devices from different WiFi generations, each utilized optimally in a mixed network. This extends the lifecycle of hardware that would otherwise be rendered obsolete and gives peace of mind to the CSPs who deploy this equipment.
Fully Adaptive WiFi and Smart Home Solutions
In many ways, Plume has positioned itself fully as an adaptive WiFi and smart home solutions provider. It has brought peace and harmony to the device-integration madness without stepping on anybody’s toes. Consequently, over 22 million homes are being powered by Plume.
The silicon shortage will inevitably bring more manufacturing bottlenecks and higher prices for the consumer. Users won’t want to replace their hardware as often, and the smart home will have to adapt. That’s where OpenSync comes in.
When all players are using a common layer of software, users can mix and match to customize their experience.
Image Credit: fauxels; pexels
How Preql is Transforming Data Transformation
More than one million small businesses use ecommerce platform Shopify to reach a global audience of consumers. That includes direct-to-consumer (DTC) all-stars like Allbirds, Rothy’s and Beefcake Swimwear.
But online sellers like these are also ingesting data from platforms like Google Analytics, Klaviyo, Attentive and Facebook Ads, which quickly complicates weekly reporting.
That’s where data transformation comes in.
dbt and Preql
As the name implies, data transformation tools help convert data from its raw format into clean, usable data that enables analytics and reporting. Centralizing and storing data is easier than it’s ever been, but creating reporting-ready datasets requires aligning on business definitions, designing output tables, and encoding logic into a series of interdependent SQL scripts, or “transformations.”
Businesses are making significant investments in data infrastructure tooling, such as ingestion tools, data storage, and visualization/BI, without having the internal expertise to transform their data effectively. But they quickly learn that if they can’t effectively structure their data for reporting, they won’t get value from the data they’re storing, or from the investment they’ve made.
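To make that concrete, here is a toy transformation in Python. The table and column names are hypothetical, and this is a sketch of the general idea rather than Preql’s or dbt’s actual output: raw order events are collapsed into a reporting-ready daily revenue table, with the business definition of “net revenue” encoded explicitly.

```python
import pandas as pd

# Raw, ingested order events (hypothetical schema)
raw_orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "created_at": pd.to_datetime(
        ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-02"]),
    "status": ["complete", "refunded", "complete", "complete"],
    "total": [40.0, 25.0, 60.0, 15.0],
})

# The transformation encodes a business definition:
# "net revenue" counts completed orders only.
completed = raw_orders[raw_orders["status"] == "complete"].copy()
completed["order_date"] = completed["created_at"].dt.date

daily_revenue = (
    completed.groupby("order_date", as_index=False)["total"]
    .sum()
    .rename(columns={"total": "net_revenue"})
)

print(daily_revenue)
#    order_date  net_revenue
# 0  2024-01-01         40.0
# 1  2024-01-02         75.0
```

In a real pipeline that logic would live in versioned SQL scripts of the kind dbt manages, but the shape of the problem is the same: shared definitions in, reporting-ready tables out.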
The space includes an established leader, dbt, and a crop of challenger startups like Preql.
Founded in 2016, dbt “built the primary tool in the analytics engineering toolbox,” as the company says. It is now used by more than 9,000 companies and backed by more than $414 million in funding.
But dbt is a tool for developers at companies with established analytics engineering teams.
Preql, on the other hand, is a startup building a no-code data transformation tool that targets business users who might not have expertise in programming languages but who nevertheless need trusted, accessible data.
Preql’s goal is to automate the hardest, most time-intensive steps in the data transformation process so businesses can be up and running within days as opposed to the six- to 12-month window for other tools.
“We built Preql because the transformation layer is the most critical part of the data stack, but the resources and talent required to manage it make reliable reporting and analytics inaccessible for companies without large data functions,” said Gabi Steele, co-founder and co-CEO of Preql.
The startup is therefore positioning itself as an alternative to hiring full analytics engineering teams solely to model and manage business definitions—especially among early-stage companies that are first building out their data capabilities.
In other words, Preql is the buffer between the engineering team and the people who actually need to use the data.
“Data teams tend to be highly reactive. The business is constantly asking for data to guide decision making, but in the current transformation ecosystem, even small changes to data models require time and expertise. If business users can truly manage their own metrics, data talent will be able to step out of the constant back and forth of fulfilling reporting requests and focus on more sophisticated analyses,” said Leah Weiss, co-founder and co-CEO of Preql.
But that’s not to say dbt and Preql are bitter rivals. In fact, they are part of the same data transformation community—and there’s a forthcoming integration.
“One way to think about it is we want to help the organizations get up and running really quickly and get the time to value from the data they’re already collecting and storing without having to have the specialized talent that’s really well versed in dbt,” Steele added. “But as these companies become more sophisticated, we will be outputting dbt, so they can leverage it if that’s the tool that they’re most comfortable with.”
A Closer Look at Preql
The startup raised a $7 million seed round in May, led by Bessemer Venture Partners, with participation from Felicis.
Preql collects business context and metric definitions and then abstracts away the data transformation process. It helps organizations get up and running with a central source of truth for reporting without having a data team or writing SQL.
Preql reads in data from the warehouse and writes back clean, reporting-ready schemas. It partners with data ingestion tools such as Airbyte and Fivetran, which move data from source applications into the warehouse, and with cloud data warehouses like Snowflake, Redshift, and BigQuery. For businesses that consume data in BI tools, it also partners with Looker, Tableau, and Sigma Computing.
Preql is initially focused on the DTC market, in part because the metrics, such as cost of customer acquisition (CAC), conversion rate, and lifetime value (LTV), are standardized. DTC companies also tend to have lean operations.
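Those metrics really are formulaic. Here is a minimal sketch with made-up numbers and deliberately simplified formulas (real LTV models are more involved, and this is not Preql’s internal logic):

```python
# Hypothetical monthly figures for a DTC brand
marketing_spend = 50_000.0         # total paid acquisition spend
new_customers = 1_000              # customers acquired this month
sessions = 80_000                  # store sessions
orders = 2_400                     # orders placed
avg_order_value = 65.0             # dollars per order
orders_per_customer_lifetime = 4   # average purchases per customer

cac = marketing_spend / new_customers                  # cost of customer acquisition
conversion_rate = orders / sessions                    # share of sessions that convert
ltv = avg_order_value * orders_per_customer_lifetime   # simple lifetime value

print(f"CAC: ${cac:.2f}")                          # CAC: $50.00
print(f"Conversion rate: {conversion_rate:.1%}")   # Conversion rate: 3.0%
print(f"LTV: ${ltv:.2f} (LTV/CAC = {ltv / cac:.1f})")
```

The hard part isn’t the arithmetic; it’s stitching these inputs together from Shopify, ad platforms, and email tools, which is exactly the work the transformation layer automates.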
“We’ve found that these companies are working really hard to download data from disparate sources—third-party platforms that they use, Shopify, their paid marketing platforms—in order to get a sense of even basic business health and performance,” Weiss said.
They also tend to use manual reporting processes, which means “it’s often an operations person who’s downloading data from a bunch of sources, consolidating that in spreadsheets, making a bunch of manual interventions and then outputting weekly reporting or quarterly reporting,” she added.
But much of what these companies want to measure about performance is consistent and a lot of the data sources are structured the same way.
“With Preql, we were able to make some assumptions about what we wanted to measure with the flexibility to customize a few of those definitions that are specific to our business,” added Cynthia Plotch, co-founder at Stix, a women’s health essentials ecommerce site. “Preql gave us clean, usable data for reporting. We were up and running with weekly reporting within days, saving us months of effort if we had to invest in data engineering teams.”
Data Transformation in 2027
Steele and Weiss believe the next five years will be about “delivering on the promise of the modern data stack.”
In other words, answering questions like: Now that we have scalable storage and ingestion, how can we make sure we can actually leverage data for decision making? And how can we build trust in reporting so we can build workflows around it and act on it?
This focus matters because a lot of companies struggle to move on to predictive analytics and machine learning, having never solved the fundamental problem of creating trusted, accessible data.
What’s more, Preql believes the next phase of tools will go beyond building infrastructure to deliver more value as data talent sits closer and closer to the business.
“Data analytics will only get more complicated because the number of data sources is growing, along with their complexity, and the need is becoming more acute for real time results. And the more data you have, the more granular the questions become and even more is expected of it,” Amit Karp, partner at Bessemer Venture Partners added. “I think we’re in the very early innings of what’s going to be a very long wave—five, ten or even 20 years down the road. It’s a giant market.”
Can Traditional Companies Act Like Start-Ups?
Much has been made about the culture clash between older, slower, more traditional companies and younger, more dynamic, faster-moving tech start-ups. Each has advantages and disadvantages, but, generally speaking, it is very hard to reconcile the two approaches, as they are naturally in opposition to each other.
The general motto among start-ups of “move fast and break things” has led to very quick yet massive successes, with some companies, Google and Amazon being the most obvious examples, growing larger than traditional competitors who have been around for decades and decades. But it has also led to a lot of unconsidered damage to traditional industries like transportation and publishing, their ‘disruption’ doing as much harm as good. And, more often than not, start-ups can see millions or even billions in investment being wasted on bad ideas and unproven tech (Theranos, anyone?). “Fake it till you make it” means that, eventually, you actually do need to make it.
Meanwhile, traditional companies, while providing more useful and regular forms of employment, great institutional knowledge, and decades of business experience, have their own problems. Because they often resemble large, inefficient bureaucracies, they are slow to move and respond to change. Old companies can be blind to, and even fearful of, innovation and new technology. This can leave them dead in the water when the future finally arrives. Kodak, for example, went from venerated, dominant business to almost nothing in just a few years because it refused to accept the revolution of digital photography.
But is there a way to integrate the two approaches? To take the best from both cultures and business plans and use those aspects to move into the future? To get big, old businesses to work, at least in some ways, like small, agile, young start-ups? Yes, but it isn’t easy.
Innovation Without Disruption
As stated, one of the greatest fears of traditional companies is having their business, or their entire sector, undercut by a growing start-up. While independent start-ups are expected to disrupt, be change agents, or however you want to put it, more traditional companies are prone to be much more risk averse. Naturally, one of the smartest things that an old company can do to avoid being left behind is to lead the disruption themselves.
Many traditional businesses are currently investing in, and should continue to invest in, the digital transformation of their business model, from top to bottom. This, however, is a slow process, especially in sizable companies. Machine learning, predictive analysis, AI, and other cutting-edge digital tools allow old business models to become more efficient and to respond to shifts in supply, demand, and market tumult in smarter ways. But it isn’t as easy as flipping a switch.
A New Business to Try New Things
Quite a few traditional businesses are spinning out new sectors, tech labs, and other separate silos to do the work of digital innovation for them. This isn’t uncommon. Businesses have, basically forever, had subsidiaries. The problem is that old businesses have trouble actually committing to the idea.
Often, the business that is spun-out is, essentially, a temporary one. The leaders of the core business get cold feet, limit the new project’s mandate, and pull it back in as soon as possible. Such hesitance is limiting in today’s digital world, where the next revolutionary innovation is always just around the corner.
Furthermore, spin-outs with good ideas and potential for growth are frequently allowed to die on the vine, or just as often to go to seed. Or, to put it more plainly, the core business doesn’t invest in the digital spin-out’s success. The great advantage of digital companies is their ability to scale with almost lightning speed. But core businesses have to be ready with resources and support for the scale-up to even happen, let alone work. Otherwise, a grand opportunity will go to waste.
If a business spin-out does well enough, it should be allowed to grow and change as it needs to, provided that it remains successful and worthwhile. Whether the goal is for the new business to simply make money in an area the core business isn’t directly addressing, or to develop digital innovations for the core business to take up, if it works, it works. Don’t get in the way of success just because it is new, or comes in an unfamiliar form. At the same time, core businesses must be careful about how they measure success for these new experiments. Measuring the new company or spin-out with the same metrics as the core business can choke its momentum and give an inaccurate picture. After all, newer, smaller businesses or initiatives shouldn’t be expected to be profitable immediately.
Cultural Change, From the Executive Level On Down
All the innovation in the world won’t mean anything if the people running the business itself refuse to change. Older companies, and older executives, can become set in their ways, dismissive of new technologies and ways of doing business, and ignore the automation and efficiencies of advanced digital tools. We saw this at the beginning of the widespread use of the internet twenty years ago, and we’re seeing it now.
More important than this is the need for people in positions of real power in companies to implement the changes needed for innovation and advancement, and to do so thoroughly and effectively. There must be a willingness to let start-up culture infiltrate and influence the way business is done at every level, or it won’t be effective enough to help.
It is painfully common for large, traditional companies to put money into research and development of new ideas and new technologies, only for executives and other decision makers to ignore what’s in front of them, either because of cost, or risk, or something as simple as a fear of the future.
But the future of business is changing in a digital world. Things move and change with an almost frightening speed. The Covid-19 pandemic is absolute proof of that: it wasn’t just companies with digital tools at the ready that were able to survive. While they had an advantage, it was the companies that acknowledged the rapidly changing situation, and reacted to it quickly and efficiently, that kept things going and, in some cases, even improved their bottom lines.
But It’s More Than Just a Cultural Change
One of the biggest advantages of tech start-up culture is that it is forward-facing. It is an attitude toward business and technology that is not just looking toward the future (every business does that) but actively trying to grapple with it, and even to shape it, if possible. Traditional, legacy businesses need to admit that the world is not static and that they have a responsibility to influence how their industry develops.
Part of that responsibility is letting innovators be innovators. If a large company spins out a business unit to study and improve its digital technology, that company can’t then balk when those innovators recommend widespread change, or create a new idea that could shake the company, or its whole industry, to its core.
To put it as simply as possible, for an older, more traditional company to reap the benefits of adopting a start-up model, it has to actually adopt it. It can’t just make superficial changes, it needs to truly invest. But that kind of investment carries risk, which can make more traditional companies nervous. The work of transformation must actually be done.
That means supporting digital innovations and changes when they make things more efficient. It means letting spin-out businesses actually try new things, and grow to scale when they hit upon something new and successful. It means executives getting out of the way so the forces of change can actually, you know, change things. Otherwise, the ‘traditional’ company will just be the ‘old’ company, sitting around waiting for some new tech upstart to disrupt it into obsolescence.
Understanding Edge Computing and Why it Matters to Businesses Today
The edge computing market is expected to reach $274 billion by 2025, driven by segments like the internet of things, public cloud services, and patents and standards.
Much of that growth is backed by enterprises shifting their data centers to the cloud, which has positioned them to move beyond cloud systems to edge computing and extract the maximum potential from their computing resources.
This blog will provide a closer understanding of edge computing and how it helps businesses in the technology sector.
Understanding edge computing
From a technical standpoint, edge computing is a distributed computing framework that bridges the gap between enterprise applications and data sources, including IoT devices or local edge servers.
In simpler terms, edge computing helps businesses deliver better experiences for people, and better profitability, through improved response times and bandwidth availability.
Why does edge computing matter for businesses?
In the world’s most significant industry zones, such as the GCC region with its heavy focus on cloud services, the transition from cloud technology to edge computing is now more prominent than ever as enterprises look to leverage the technology’s potential.
And with only 3% of businesses at an advanced stage in digital transformation initiatives, the potential of edge computing is up for grabs.
It doesn’t matter if you’re running a mobile app development company, the grocery store next door, or a next-gen enterprise. You need to understand how edge computing helps businesses and invest in the technology.
Edge computing is primarily sought in industries where the failure or loss of high-value assets has a massive impact on the business.
The technology has enabled report delivery systems to send and receive documentation in seconds, a process that used to take days or even weeks.
Consider the oil and gas industry, where some enterprises use edge computing for predictive maintenance, allowing them to proactively manage their pipelines and locate underlying issues before problems accumulate.
Support for remote operations
The pandemic has forced businesses to opt for remote operations, or at least a hybrid work model, with the workforce spread across different geographies.
This drastic shift has driven the adoption of edge apps that give employees secure access to their organization’s servers and systems.
Edge computing helps remote and hybrid teams by reducing the volume of data traveling across networks, providing computing density and adaptability, limiting data redundancy, and helping users meet compliance and regulatory requirements.
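As a rough illustration of how edge processing shrinks network traffic (a hypothetical sketch, not any vendor’s implementation), an edge node can aggregate raw sensor readings locally and forward only a compact summary to the central cloud:

```python
import random
import statistics

def collect_sensor_readings(n: int = 1_000) -> list[float]:
    """Simulate raw telemetry generated at the edge (hypothetical data)."""
    return [random.gauss(mu=50.0, sigma=5.0) for _ in range(n)]

def summarize_at_edge(readings: list[float]) -> dict:
    """Aggregate locally; only this small summary crosses the network."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "min": min(readings),
    }

readings = collect_sensor_readings()
summary = summarize_at_edge(readings)

# 1,000 raw readings are reduced to a four-field payload before upload.
print(f"Raw values held at edge: {len(readings)}")
print(f"Payload sent to cloud: {summary}")
```

The same pattern, compute near the source and ship only what the center needs, is what drives the latency and bandwidth benefits discussed next.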
Faster response time
Businesses can enjoy lower latency by deploying computational processes near edge devices. Employees typically experience delays when corresponding with colleagues on another floor, for instance, because their traffic is routed through a server that may sit anywhere in the world.
An edge computing application, by contrast, would route that data within the office premises, lowering delays and saving considerable bandwidth at the same time.
You can scale this in-office example up to the fact that around 50% of the data businesses create worldwide is created outside the cloud. Put simply, edge computing allows near-instant transmission of data.
Robust data security
According to Statista, global data production is expected to exceed 180 zettabytes by 2025, and data security concerns will grow proportionately.
With businesses producing and relying on data more than ever, edge computing is a solid prospect for processing large datasets more efficiently and securely, close to where the data originates.
When businesses treat the cloud as their sole option for data storage, concentrating everything in a single centralized location, they open themselves up to hacking and phishing risks.
An edge-computing architecture, on the other hand, adds an extra layer of security because it doesn’t depend on a single point of storage or a single application; storage and processing are distributed across many devices.
In case of a hack or phishing attempt, a single compromised component of the network can be disconnected from the rest of the network, preventing a complete shutdown.
Convenient IoT adoption
Global IoT spending is expected to surpass $410 billion by 2025. For businesses that rely on connected technology, especially in the manufacturing sector, the internet of things sits at the center of global industry today.
Such organizations are constantly hunting for ways to increase their computational capacity and probe deeper into IoT through more dedicated data centers.
The adoption of edge computing makes the subsequent adoption of enterprise IoT quite cheap and puts little stress on the network’s bandwidth.
Businesses with computational prowess can leverage the IoT market without adding any major infrastructure expenses.
Lower IT costs
Global IT spending on devices, enterprise software, and communication services rose from $4.21 trillion to $4.43 trillion in 2022. A considerable share of that spending goes to cloud solutions, not least because the pandemic pushed remote operations and hybrid working models even further.
When data stays physically closer to the network’s edge, the cost of sending it to the cloud drops, encouraging businesses to save on IT expenses.
Besides cutting costs, edge computing also contributes to helping businesses increase their ROI through enhanced data transmission speed and improved networks needed to experiment with new models.
How is edge computing different from cloud computing?
Although edge computing and cloud computing are complementary approaches to data storage and distribution, there are some key differences depending on the user’s context.
Edge computing deploys resources at the point where data is generated; cloud computing deploys resources at global locations.
Edge computing operates in a decentralized fashion, while cloud computing is centralized.
Edge deployments are built as tightly integrated systems, while cloud resources are composed of loosely coupled components.
Edge-based resources respond nearly instantaneously, while cloud resources have higher response times.
Edge computing requires lower bandwidth, while its cloud counterpart consumes more.
On paper, these differences make edge computing look like a clear winner for any business. But there’s a catch.
If your business operates at multiple physical locations and you need a low-latency network to promptly serve customers who are far from your on-prem location, then edge computing is the right choice for you.
Top edge computing use cases
Although there are numerous examples of edge computing use cases, I’ll talk about a few that I find the most interesting.
Autonomous vehicles
Autonomous flocking of truck convoys is the most accessible example of edge computing in autonomous vehicles. With the entire fleet traveling close together, saving fuel and limiting congestion, edge computing has the power to eliminate the need for every driver except the one in the front vehicle.
The idea is that the trucks will communicate with one another over low-latency links, as in the sketch below.
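To make the latency requirement concrete, here is a toy sketch in Python. The message format, timing budget, and logic are entirely hypothetical, not a real vehicle-to-vehicle protocol: a follower truck only trusts a lead truck’s broadcast if it arrives within a tight budget that a round trip to a distant cloud would likely blow.

```python
import time
from dataclasses import dataclass

# Hypothetical edge latency budget; a distant cloud round trip
# would typically exceed this.
LATENCY_BUDGET_MS = 20.0

@dataclass
class ConvoyMessage:
    sent_at: float     # seconds since epoch
    speed_kph: float
    braking: bool

def follower_react(msg: ConvoyMessage) -> str:
    """Follower truck decides how to act on the lead truck's broadcast."""
    latency_ms = (time.time() - msg.sent_at) * 1000
    if latency_ms > LATENCY_BUDGET_MS:
        return f"STALE message ({latency_ms:.1f} ms): fall back to onboard sensors"
    return "BRAKE" if msg.braking else f"match speed {msg.speed_kph} kph"

# Lead truck broadcasts a brake event; the follower reacts within budget.
msg = ConvoyMessage(sent_at=time.time(), speed_kph=88.0, braking=True)
print(follower_react(msg))
```

The point of the sketch is the budget check: when the decision loop must close in tens of milliseconds, the computation has to live at the edge, between the vehicles themselves.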
Remote monitoring of oil and gas industry assets
Oil and gas accidents have proved catastrophic throughout the industry’s history. This requires extreme vigilance when monitoring the assets.
Although oil and gas assets sit in remote locations, edge computing enables real-time analytics by processing data closer to the asset, reducing dependency on high-quality connectivity to a centralized cloud.
Smart grids
Edge computing is on course to elevate the adoption of smart grids, enabling enterprises to manage their energy consumption better.
Modern factories, plants, and office buildings use edge platform-connected sensors and IoT devices to observe energy usage and examine their consumption in real-time.
The data from real-time analytics will help energy management companies create suitable, efficient workarounds, such as shifting high-consumption machinery to run during off-peak hours, when electricity demand is lower.
Cloud gaming
Cloud gaming, seemingly the next big thing in the gaming business with services like Google Stadia and PlayStation Now, leans heavily on low latency.
That’s why cloud gaming companies are on a quest to build edge servers as close to gamers as possible, reducing latency and providing a fully immersive, glitch-free experience.
This concludes our discussion on understanding edge computing and how it matters for enterprises worldwide.
Now that you understand the benefits of edge computing and its applications across industries and use cases, it is evident that it’s a strong value proposition for businesses that want to acquire competitive advantages and lead their spaces from the front.
Featured Image Credit: Provided by the Author; Thank you!