When considering the merits of moving application workloads to an edge computing deployment, it’s necessary to look not just at technical considerations but also overall business drivers and benefits – particularly how those apply to a given organization, application, and user base.
At its simplest level, the business decision around where to place application workloads boils down to whether you are able to cost-effectively deliver the application experience your users expect. Is the edge the right place for this – for your business?
Two Key Factors for Edge
For most organizations, the pandemic kicked digital transformation into overdrive. Businesses have shifted more of their user interactions and engagement to application workloads, and users now expect more from the applications they use. More features, more functionality, more responsiveness, more availability.
These expectations are only growing more demanding, thanks to the ubiquity of mobile and SaaS apps in day-to-day life, and these expectations will likely never reset. Organizations that haven’t come to grips with this new reality will inevitably fall behind.
Satisfying these user experience demands comes down to two key factors: application responsiveness and availability.
Responsiveness
Among other factors, responsiveness is a function of latency, or how long it takes for data to transfer from one point on a network to another. According to a recent survey by Quadrant Strategies, 86% of C-suite executives and senior IT decision makers agree that low-latency applications help differentiate their organizations from the competition.
At the same time, 80% of respondents are concerned that high latency is impacting the quality of their applications. More than 60% of respondents further defined low latency for mission critical apps as 10 milliseconds or less.
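As a rough way to gauge where your own application sits against that 10-millisecond bar, the sketch below times repeated requests and reports the median round trip. It's a minimal illustration, assuming a reachable HTTP endpoint; the URL is a placeholder, and a production measurement would control for DNS and TLS setup.

```python
import statistics
import time
import urllib.request

URL = "https://example.com/health"  # placeholder; use your application's endpoint
SAMPLES = 20

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=5).read()  # full request/response cycle
    latencies_ms.append((time.perf_counter() - start) * 1000)

median = statistics.median(latencies_ms)
print(f"median round-trip latency: {median:.1f} ms")
print("meets the 10 ms low-latency bar" if median <= 10 else "misses the 10 ms bar")
```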
Accounting for your user base and the latency they experience is one of the biggest factors to consider when weighing an edge deployment. Once you've optimized everything else, the only remaining way to improve latency is to physically move workloads and data closer to users – in other words, toward the edge.
The more geographically dispersed your user base, the more important this becomes. For a global user base, for instance, centralized cloud deployments quickly become untenable as workloads scale; the only answer is edge deployment.
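The physics makes this concrete. Light in optical fiber travels at roughly 200,000 km per second, so every 1,000 km between user and workload adds about 10 ms of round-trip propagation delay before any processing even begins. The back-of-the-envelope sketch below, using illustrative distances to a single centralized region, shows why one region cannot hit a 10 ms target for a global audience.

```python
# Light in fiber covers roughly 200 km per millisecond (one way), so
# round-trip propagation delay is about 10 ms per 1,000 km of distance,
# a hard floor that exists before any processing or queuing.
FIBER_KM_PER_MS = 200.0

# Illustrative straight-line distances to a single centralized US East region.
distances_km = {
    "New York": 300,
    "London": 5_600,
    "Singapore": 15_300,
    "Sydney": 16_000,
}

for city, km in distances_km.items():
    rtt_ms = 2 * km / FIBER_KM_PER_MS  # best-case round trip
    verdict = "meets" if rtt_ms <= 10 else "misses"
    print(f"{city:>10}: ~{rtt_ms:5.1f} ms RTT ({verdict} a 10 ms target)")
```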
Availability
Availability is the other side of the experience coin. Unfortunately, any given network will inevitably go down, as the steady stream of headlines about major cloud outages and frustrated users attests.
The way around this is to build redundancy and resiliency into application workloads. Centralized cloud deployments have finite resilience because they depend on a single cloud provider: when that provider's network experiences an outage, so do the applications.
Edge deployments, on the other hand, can readily work around this, provided the deployment isn't tied to a single network operator. Workloads must be broadly distributed across heterogeneous providers, so that if, or when, one goes down, traffic can be routed around the problem to ensure continued application availability.
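As a simplified illustration of that route-around behavior, the sketch below picks the first healthy provider from a preference-ordered pool. The provider names and health probes are hypothetical stand-ins for real health checks:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class EdgeProvider:
    name: str
    is_healthy: Callable[[], bool]  # health probe, e.g. an HTTP check

def pick_provider(providers: list[EdgeProvider]) -> Optional[EdgeProvider]:
    """Return the first healthy provider in preference order (for example,
    sorted by proximity to the user); None only if the whole pool is down."""
    for provider in providers:
        if provider.is_healthy():
            return provider
    return None

# Hypothetical pool spanning independent operators, nearest first.
pool = [
    EdgeProvider("metro-pop-a", lambda: False),  # simulated outage
    EdgeProvider("metro-pop-b", lambda: True),
    EdgeProvider("central-cloud", lambda: True),
]

chosen = pick_provider(pool)
print(f"routing traffic to: {chosen.name if chosen else 'no provider available'}")
```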
Edge Computing Cost Concerns
Determining how cost-effectively the expected experience can be delivered quickly becomes a matter of technical trade-offs: workload scalability, allocation of compute resources, network operations, workload isolation, and data compliance.
There are inevitable pros and cons to each deployment strategy. However, all other things being equal, the distributed edge beats the centralized cloud on the two factors that matter most to user experience: responsiveness and availability.
Overall, there’s a solid case for considering a managed service for edge deployment; you get the benefit of edge deployment for your app workloads, without the added cost of managing or operating your own distributed network.
If you decide to go this direction, be sure to consider whether the edge provider requires new CI/CD (continuous integration and continuous delivery) workflows and proprietary tools to support deployments. A proprietary approach relieves you of managing your own network, but adopting new workflows, tools, and processes can upend your existing DevOps practices.
The Key Consideration for Edge
Many organizations are already rushing to modernize applications with multi-cluster Kubernetes deployments, and the edge is a natural extension of that strategy, delivering significant benefits in performance, scalability, resilience, isolation, and more.
The modern edge provides a significantly better application experience, but only if it is simple and affordable to adopt. In other words, the key is for organizations to be able to deploy applications to the distributed edge while maintaining their existing containerized environments and familiar Kubernetes tooling.
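As one hedged sketch of what that can look like, the same Deployment manifest an organization already uses can be applied across several edge clusters by iterating over kubeconfig contexts with the official Kubernetes Python client. The context names and manifest path here are hypothetical:

```python
# Apply one existing Deployment manifest to several edge clusters.
# Assumes the official `kubernetes` Python client and a kubeconfig that
# already defines a context per cluster; names here are hypothetical.
from kubernetes import config, utils

EDGE_CONTEXTS = ["edge-us-east", "edge-eu-west", "edge-ap-south"]
MANIFEST = "deploy/app.yaml"  # the same manifest used centrally today

for context in EDGE_CONTEXTS:
    api_client = config.new_client_from_config(context=context)
    utils.create_from_yaml(api_client, MANIFEST)
    print(f"applied {MANIFEST} to {context}")
```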
In addition, organizations that can find ways to orchestrate and scale workloads to meet real-time traffic demand and ensure cost-effective low-latency responsiveness for users – no matter how many there are or where they’re located – are most likely to reap the rewards of the distributed edge.
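The scaling logic itself need not be exotic. Kubernetes' Horizontal Pod Autoscaler, for instance, sizes a workload as desired = ceil(current * currentMetric / targetMetric); the sketch below applies that formula per region with requests per second as the metric (the figures are illustrative):

```python
import math

TARGET_RPS_PER_REPLICA = 100  # illustrative scaling target

def desired_replicas(current: int, observed_rps: float) -> int:
    """Horizontal Pod Autoscaler formula:
    desired = ceil(current * currentMetric / targetMetric),
    which reduces here to ceil(observed_rps / target)."""
    per_replica = observed_rps / current
    return max(1, math.ceil(current * per_replica / TARGET_RPS_PER_REPLICA))

# Illustrative real-time demand per edge region: (replicas, observed RPS).
regions = {"us-east": (4, 950.0), "eu-west": (2, 120.0), "ap-south": (3, 40.0)}

for region, (replicas, rps) in regions.items():
    print(f"{region}: scale {replicas} -> {desired_replicas(replicas, rps)} replicas")
```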
In sum, the business case for edge computing is that the edge offers enormous advantages – as long as it can be deployed relatively simply and efficiently.
About the Author:
Stewart McGrath, CEO of Section.