Over the past two years, I’ve been making the point that near edge and far edge are utilitarian terms at best: they fail to capture some really important architectural and delivery characteristics of edge solutions, such as as-a-service consumption versus purchasing hardware, global networks versus local deployments, and suitability for digital services versus suitability for industrial use cases. These distinctions came into play as I began work on a new report focused on a specific type of edge solution.
The first edge report I wrote was on edge platforms (now edge dev platforms), which was essentially a take on content delivery networks (CDNs) plus edge compute, or in other words a far-edge solution. Within that space, there was a lot of attention on where the edge is, which is irrelevant from a buying perspective. I won’t base a selection on whether a solution is a service provider edge or a cloud edge as long as it meets my requirements, which may involve latency but are more likely to be the ones I mentioned in the opening paragraph.
Near Edge vs. Far Edge
I talked about this CDN perspective in an episode of Utilizing Edge. The conversation, co-hosted by former GigaOm analyst Alastair Cooke, went into the far-edge and near-edge conundrum. Alastair, who wrote the GigaOm Radar for Hyperconverged Infrastructure (HCI): Edge Deployments report (something I didn’t realize until a year later), brought experience from the near-edge perspective, just as I came in with a far-edge background.
One of my takeaways from this conversation is that the difference between CDN-based edges (far edge) and HCI deployments (near edge) comes down to pushing versus pulling. I’m glad I only realized Alastair wrote the Edge HCI report after the fact, because it forced me to work through the push-versus-pull distinction myself. It’s quite obvious in retrospect: a CDN delivers content, so it has always been about web resources hosted centrally somewhere and pushed out to users’ locations. An edge solution deployed on location, on the other hand, has its data generated at the edge, which you can then pull to a central location if necessary.
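To make that distinction concrete, here is a minimal sketch of the two data flows. All class and field names are hypothetical illustrations of the push and pull models, not any vendor’s product or API.

```python
# Illustrative sketch of the push vs. pull flows described above.
# Names are hypothetical; this models the concept, not a real product.

from dataclasses import dataclass, field


@dataclass
class FarEdgePoP:
    """CDN-style far edge: centrally hosted content is pushed toward users."""
    name: str
    cache: dict = field(default_factory=dict)

    def receive_push(self, path: str, content: str) -> None:
        # The origin pushes (or pre-warms) content out to the edge location.
        self.cache[path] = content

    def serve(self, path: str) -> str:
        return self.cache.get(path, "cache miss: fetch from origin")


@dataclass
class NearEdgeSite:
    """HCI-style near edge: data is generated locally, pulled centrally if needed."""
    name: str
    local_data: list = field(default_factory=list)

    def generate(self, reading: dict) -> None:
        # Data originates at the edge site itself.
        self.local_data.append(reading)

    def pull_summary(self) -> dict:
        # Only an aggregate is sent back to the central location.
        return {"site": self.name, "events": len(self.local_data)}


if __name__ == "__main__":
    pop = FarEdgePoP("pop-ams")
    pop.receive_push("/index.html", "<html>hello</html>")  # push: center -> edge
    print(pop.serve("/index.html"))

    site = NearEdgeSite("factory-01")
    site.generate({"sensor": "temp", "value": 21.5})        # data born at the edge
    print(site.pull_summary())                              # pull: edge -> center
```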
So, I made the case to also write a report on the near edge, where we evaluate solutions that are deployed at customers’ preferred locations for local processing and can call back to the cloud when necessary.
Why the Edge?
You may ask yourself, what’s the difference between deploying this type of solution at the edge and just deploying traditional servers? Well, if your organization has edge use cases, you likely have a lot of locations to manage, and a traditional server architecture scales only linearly with the number of sites, as do the time and effort needed to run it.
An edge solution would need to make this worthwhile, which means it must be:
- Converged: I want to deploy a single appliance, not a server, a switch, external storage, and a firewall.
- Hyperconverged: As per the above, but with software-defined resources, namely through virtualization and/or containerization.
- Centrally managed: A single management plane to control all these geographically distributed deployments and all their resources.
- Plug-and-play: The solution should provide everything needed to run applications. For example, I do not want to bring my own operating system and manage it if I don’t have to.
In other words, these must be full-stack solutions deployed at the edge. And because I like my titles to be representative, I’ve called this evaluation “full-stack edge deployment.”
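To make those table stakes a little more tangible, here is a minimal, hypothetical sketch of what a centrally managed, declarative definition of such a deployment might look like. The schema, field names, and endpoint are assumptions for illustration only, not any vendor’s actual configuration format.

```python
# Hypothetical desired-state definition for a full-stack edge fleet, reflecting
# the requirements above: converged appliances, software-defined resources,
# central management, and a plug-and-play stack. Illustrative only.

fleet = {
    "management_plane": "https://edge-control.example.com",  # single central control point
    "sites": [
        {
            "name": "store-0042",
            "appliance": {                  # converged: compute, storage, network in one box
                "cpu_cores": 16,
                "storage_tb": 4,
                "network": "built-in switch and firewall",
            },
            "runtime": {                    # hyperconverged: software-defined resources
                "virtualization": "hypervisor-based VMs",
                "containers": "kubernetes",
            },
            "stack": {                      # plug-and-play: OS and platform ship with the solution
                "os": "vendor-managed",
                "updates": "rolled out centrally",
            },
            "workloads": [
                {"app": "point-of-sale", "source": "marketplace"},
            ],
        },
    ],
}


def rollout(fleet: dict) -> None:
    """Pretend to apply the desired state to every site from the central plane."""
    for site in fleet["sites"]:
        print(f"{fleet['management_plane']} -> {site['name']}: applying desired state")


if __name__ == "__main__":
    rollout(fleet)
```

The point of the sketch is the shape of the operation: one declarative definition, applied from one management plane to many geographically distributed sites, with no per-site assembly of servers, switches, storage, and operating systems.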
Defining Full-Stack Edge
All the bullet points above became the table stakes: features that all solutions in the sector support and that therefore do not materially impact comparative assessment. Table stakes define the minimum acceptable functionality for solutions under consideration in GigaOm’s Radar reports. The biggest change between the initial scoping phase and the finished report was the hardware requirement. I first scoped the report around integrated hardware-software solutions, such as Azure Stack Edge, AWS Outposts, and Google Cloud Edge. I have since dropped the hardware requirement, as long as the solution can run on converged hardware, for two reasons:
- The first reason is that evaluating hardware as part of the report would take focus away from all the other value-adding features I wanted to evaluate.
- The second reason is that we had a lot of engagement from software-only vendors for this report, which is a good retrospective gauge that there is demand in this market for the software component alone. These software-only vendors typically have partnerships with bare metal hardware providers, so there is little to no friction for a customer to procure both at the same time.
The final output of this year-long scoping exercise, the full-stack edge deployment Key Criteria and Radar reports, defines the features and architectural concepts that are relevant when deploying an edge solution at your preferred location.
Simply saying “near edge” will never capture nuances such as an integrated hardware-software solution running a host OS with a type 2 hypervisor, where virtual resources can be defined across clusters and third-party edge-native applications can be provisioned through a marketplace. But full-stack edge deployments will.
Next Steps
To learn more, take a look at GigaOm’s full-stack edge deployment Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
- GigaOm Key Criteria for Evaluating Full-Stack Edge Deployment Solutions
- GigaOm Radar for Full-Stack Edge Deployment
If you’re not yet a GigaOm subscriber, sign up here.