People often hear the phrase “edge computing” and assume that it is the adversary of the cloud. Well, I’m here to tell you: that’s not true. The goal of this blog is to clear up some of the common misconceptions and myths surrounding “edge computing.”
What exactly is “the edge”?
In my humble opinion, the industry hasn’t reached consensus on what or where the edge is, or how to properly define it. That said, I don’t think we know precisely what “cloud computing” is either. We have a pretty good idea, but it encompasses many different options: hosted cloud, outsourced data centers, private data centers, pay-as-you-go, and so on. The edge is the same kind of concept: a broad description of distributing computing power closer to the source of data. I see the edge as any computing device that sits outside of the cloud, anything from local servers to IoT devices.
When did “the edge” become a thing?
Over time, computing paradigms have swung back and forth between local and remote technology. Remember when mainframes and terminals were the standard? Well, I don’t, but I’ve read about them in history books. All processing power was centralized: there was no high-speed Internet to rely on, so terminals connected over local networks to the mainframe, which did all the processing. The next major shift was to the Personal Computer (that’s what “PC” stands for, for the kids too young to know). Now all major processing took place locally, on your machine. The PC is effectively edge computing; we just never called it that. For example, every manufacturing line using PCs to operate is making local decisions, on the edge.
The term Edge Computing may be new, but the concept certainly is not. I suspect the term’s rapid growth has to do with recent advances in affordable single-board computers, the most popular being the Raspberry Pi, and the falling price of standard computer hardware. This access to cheap and powerful machines makes realizing the edge far more practical today than it has ever been before.
What about the cloud?
I don’t think the cloud is going anywhere, nor do I want it to. There are those who believe edge computing will bring about the end of the cloud computing era, but I completely disagree. The most promise lies in the union of edge and cloud into a hybrid solution. Call it a tiered approach to processing data: a filter from the bottom up to the cloud. Sensors generate high volumes of high-frequency data all the time. Using the edge to process that data and make real-time decisions, without concern for bandwidth or response time, is vital; it means all critical decisions are handled without any reliance on the Internet or external compute. The edge can also pare data down to a reasonable amount for the cloud to handle: summaries, rollups, averages, whatever computations are necessary for the next level of data science. When you think about it, a lot of small computers can be incredibly powerful at handling simple data streams, and typically more cost effective.
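To make that concrete, here’s a minimal sketch of the pattern in Python: act on each reading locally in real time, and only ship a periodic rollup upstream. The sensor read, the threshold, and the rollup destination are all hypothetical stand-ins, not any specific product’s API.

```python
import random
import statistics
import time

WINDOW_SIZE = 60          # readings per rollup sent upstream
FAN_THRESHOLD_C = 24.0    # real-time decision made entirely on the edge

def read_sensor():
    # Stand-in for a real sensor driver (e.g., a temperature probe in °C).
    return 20.0 + random.uniform(-5.0, 5.0)

def send_rollup(summary):
    # Stand-in for shipping the summary to the cloud; a real edge node
    # might POST this JSON over HTTP when connectivity allows.
    print("rollup ->", summary)

buffer = []
for _ in range(180):  # a few minutes of simulated readings
    reading = read_sensor()
    if reading > FAN_THRESHOLD_C:
        print("threshold exceeded: switching fan on")  # no Internet required
    buffer.append(reading)
    if len(buffer) >= WINDOW_SIZE:
        # Only the summary travels upstream, never the raw stream.
        send_rollup({
            "count": len(buffer),
            "mean": round(statistics.mean(buffer), 2),
            "min": round(min(buffer), 2),
            "max": round(max(buffer), 2),
        })
        buffer.clear()
    time.sleep(0.01)  # a real loop would sample on the sensor's schedule
```

Notice that the critical decision (the fan) never leaves the device; the cloud only ever sees one summary record per window instead of sixty raw readings.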
Where large computing power is needed, the cloud will always be most valuable: Artificial Intelligence (AI), Machine Learning (ML), deep learning, neural networks, data science, and so on. All of these techniques for processing data require high-powered machines. Whether we’re talking about Amazon, Google, or private data centers, the cloud will be needed for the foreseeable future.
What does HarperDB have to say about that?
That’s a big reason we exist. In fact, here’s a direct quote from our patent: “[HarperDB] can be deployed on a device with limited resources (e.g., a Raspberry Pi or other IoT devices) while also being scalable to take advantage of systems with massive cloud storage and processing power.” We’re huge proponents of the hybrid cloud/edge solution; in fact, we’re designed to help weave the data fabric between all compute layers. As the number of data sources continues to grow, the ability to gracefully and affordably handle that data will become more and more necessary. A data fabric is the data layer within the hybrid cloud solution: the tiered processing of data from ingestion to analytical result. HarperDB is that solution.
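To give a flavor of what weaving those layers together might look like, here’s a sketch that speaks HarperDB’s operation-style JSON API over HTTP from both tiers. The URLs, credentials, schema, and table names are hypothetical placeholders; check the current HarperDB documentation for the exact operations and payloads.

```python
import requests

EDGE_DB = "http://localhost:9925"            # HarperDB on the edge device (placeholder)
CLOUD_DB = "https://cloud.example.com:9925"  # HarperDB in the cloud (placeholder)
AUTH = ("user", "password")                  # placeholder credentials

def harper(url, operation):
    # HarperDB accepts a JSON body whose "operation" field names the action.
    resp = requests.post(url, json=operation, auth=AUTH)
    resp.raise_for_status()
    return resp.json()

# Raw, high-frequency readings land in the edge instance...
harper(EDGE_DB, {
    "operation": "insert",
    "schema": "sensors",
    "table": "readings",
    "records": [{"device": "line-4", "temp_c": 22.7}],
})

# ...and only a summarized rollup is forwarded up to the cloud instance.
rollup = harper(EDGE_DB, {
    "operation": "sql",
    "sql": "SELECT AVG(temp_c) AS avg_temp, COUNT(*) AS n FROM sensors.readings",
})
harper(CLOUD_DB, {
    "operation": "insert",
    "schema": "analytics",
    "table": "rollups",
    "records": rollup,
})
```

The point isn’t the specific calls; it’s that the same small database runs at both tiers, so the filter from edge to cloud is simply data moving between instances.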
Does that make sense?
Anyone who knows me knows that I ask that question too much, but I figured it makes for a decent summary headline. In this blog I’ve tried to clear up the misconception that the edge and cloud are adversarial solutions. I don’t believe they are; in fact, combined into a single solution they yield more power than either could separately. Weaving a data fabric between the edge and cloud will bring about the next generation of computing paradigms.
If this concept is not clear or if you’d like to learn more about our data fabric solution, feel free to reach out to us! We love talking about data and architecture, and we’re happy to help.