Edge Computing is a Distributed Data Problem
We are told that low latency and imagination are the only prerequisites for building tomorrow’s edge applications. Tragically, that picture is incomplete, and it offers a false hope.
In order for robust, world-changing edge native applications to emerge, we must first solve the very thorny problem of bringing stateful data to the edge. Without stateful data, the edge is doomed to be nothing more than a place to execute stateless code that routes requests, redirects traffic or performs simple local calculations via serverless functions. This would be the technological equivalent of Leonard Shelby in Christopher Nolan’s excellent movie Memento. Like Shelby, these edge applications would be incapable of remembering anything of significance, forced instead to constantly look up state somewhere else (e.g., the centralized cloud) for anything more than the most basic services.
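To make that limitation concrete, here is a minimal sketch (all names are hypothetical, and the central store is simulated in-process) of a stateless edge handler: because it holds no state of its own, every single request forces a round trip back to a centralized store before it can respond.

```python
# Hypothetical illustration: a stateless edge function has no memory,
# so every request triggers a lookup against a central cloud store.

class CentralStore:
    """Stands in for a centralized cloud database, far from the edge."""
    def __init__(self):
        self.data = {"cart:42": ["book"]}
        self.round_trips = 0  # count cross-region lookups

    def get(self, key):
        self.round_trips += 1  # each call would cross the WAN in practice
        return self.data.get(key)

def stateless_edge_handler(store, request_key):
    # Like Memento's Leonard Shelby, the handler remembers nothing:
    # it must ask the central store before it can do anything useful.
    state = store.get(request_key)
    return {"key": request_key, "state": state}

store = CentralStore()
for _ in range(3):
    stateless_edge_handler(store, "cart:42")

# Three requests mean three separate trips back to the central cloud.
print(store.round_trips)
```

The point of the sketch is the counter: with no state at the edge, the number of central round trips grows one-for-one with requests, so the latency win of edge placement is largely given back on every call.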
Edge computing is a distributed data problem. It is more than simply a distributed compute problem, and its full power cannot be realized by simply spinning up stateless compute on the edge. Conventional enterprise-grade database systems cannot deliver geo-distributed databases to the edge while also providing strong consistency guarantees. Conventional approaches fail at large, globally distributed scale because our current database architectures are built around a fundamental tenet: centralizing the coordination of state change and data.