Amsterdam – In a wide-ranging discussion held at KubeCon + CloudNativeCon Europe 2026 in Amsterdam, Lena Hall, Senior Director of Developers & AI Engineering at Akamai Technologies, and Thorsten Hans, the company’s Senior Developer Advocate, offered insights into Akamai’s evolving role in the cloud-native AI landscape. Their conversation, captured by The New Stack Makers, highlighted Akamai’s strategic pivot beyond its traditional Content Delivery Network (CDN) roots to become a comprehensive, developer-centric cloud infrastructure provider poised to support the burgeoning demands of artificial intelligence at the edge.
The annual KubeCon + CloudNativeCon Europe event, a cornerstone for the cloud-native community, convenes thousands of developers, operators, and industry leaders to explore the latest advancements in Kubernetes, cloud-native technologies, and their impact on enterprise IT. This year’s edition in Amsterdam served as a fitting backdrop for Akamai to articulate its vision for distributed computing in the age of AI, emphasizing its extensive global network and its commitment to simplifying complex infrastructure for developers.
Evolving Beyond CDN: Akamai’s AI-Ready Infrastructure
Historically recognized for its robust CDN services, cybersecurity expertise, and software application development technologies, Akamai is now actively positioning itself as a modern, developer-friendly cloud infrastructure business. This transformation aims to cater to the unique requirements of AI workloads, which often demand extremely low latency and distributed processing capabilities.
"There are so many use cases that benefit from really low latency distributed processing, and Akamai has always been known for our services around distributed computing," stated Hall. "So this is why we have developed managed container services for Kubernetes; this technology works fluidly with our low-latency serverless functions and our distributed AI inference platform." This strategic integration signifies Akamai’s intent to leverage its existing strengths in distributed systems to power the next generation of AI applications.
Bringing Compute Closer: Akamai’s Global Distributed Reach
A core tenet of Akamai’s strategy, as explained by Hall and Hans, is the principle of "bringing compute closer" to the end user. This is achieved through an expansive infrastructure footprint that extends far beyond traditional centralized data centers: Akamai operates 41 core data centers across 36 countries, augmented by a network of approximately 4,400 smaller "distributed reach" data centers worldwide. This vast network is designed to minimize latency by processing data and running applications close to where requests originate.
"The intention is to bring compute closer to wherever the user is around the planet in order to reduce latency," Hall elaborated. "But it’s important to remember that there are so many different types of workloads that users like to run. There are those that require really deep thinking and a lot of computing, so this is where centralized data centers do a great job. But when you combine those stacks with distributed edge capabilities, you can deliver faster feedback loops when required."
This hybrid approach, combining the power of centralized resources with the responsiveness of edge computing, is crucial for applications where even millisecond delays can have significant consequences. Use cases such as real-time robotics, instant fraud detection, and highly interactive conversational agents stand to benefit immensely from this architectural paradigm. The ability to achieve faster feedback loops is not merely a performance enhancement but a critical business differentiator in these rapidly evolving fields.
Addressing Concerns of Infrastructure Complexity
The integration of a vast, distributed network of compute resources naturally raises questions about the potential for increased complexity and fragility. Critics might question whether such a decentralized architecture introduces brittle integration points that could undermine the stability of AI workloads.
Hall directly addressed these concerns, asserting that Akamai’s core competency lies precisely in managing such complexity. "Doing this correctly is precisely the infrastructure service layer that Akamai is capable of providing," she explained. "We’re used to delivering this for really large corporations in a managed way with a simplified setup. Users can then move forward to develop new services without having to manage the infrastructure element of the equation and leverage the tools we have."
Akamai’s emphasis on self-service systems and developer enablement is central to this approach. Hall and Hans highlighted a computing landscape where developers are provided with comprehensive toolkits and services, enabling them to rapidly deploy applications and build ecosystems, often with a single command. This democratizes access to powerful infrastructure, allowing developers to focus on innovation rather than infrastructure management.
Underpinning Open Source and Managed Kubernetes
A significant aspect of Akamai’s strategy involves a deep commitment to open-source technologies and managed solutions for popular cloud-native projects. The company’s managed container services for Kubernetes, including its Linode Kubernetes Engine (LKE) offering, are a prime example. Akamai builds on this with an application platform that runs on top of LKE and pre-packages frequently used open-source projects, letting users access those tools through Akamai’s unified interface rather than installing and configuring each component by hand.
This approach streamlines the developer experience, enabling teams to provision and deploy applications with remarkable speed. The vision presented is one where a developer can move from an initial concept ("a blinking cursor") to a globally distributed, live production application on the Akamai cloud in roughly two minutes, a dramatic acceleration that speaks to the power of managed services and integrated developer tooling.
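For teams that would rather script that first step than click through a console, the cluster itself can be created programmatically. Below is a minimal sketch using linodego, the official Go client for the Linode API that underpins Akamai’s cloud; the label, region, node type, and Kubernetes version are illustrative assumptions, not recommendations:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os"

	"github.com/linode/linodego"
	"golang.org/x/oauth2"
)

func main() {
	// Authenticate with an API token taken from the environment.
	tokenSource := oauth2.StaticTokenSource(
		&oauth2.Token{AccessToken: os.Getenv("LINODE_TOKEN")},
	)
	client := linodego.NewClient(&http.Client{
		Transport: &oauth2.Transport{Source: tokenSource},
	})

	// Request a small three-node LKE cluster. The values below are
	// placeholders; consult the API for currently available regions,
	// instance types, and Kubernetes versions.
	cluster, err := client.CreateLKECluster(context.Background(), linodego.LKEClusterCreateOptions{
		Label:      "edge-demo",
		Region:     "nl-ams",
		K8sVersion: "1.31",
		NodePools: []linodego.LKENodePoolCreateOptions{
			{Type: "g6-standard-2", Count: 3},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("LKE cluster %d (%q) is provisioning\n", cluster.ID, cluster.Label)
}
```

Once the cluster reports ready, its kubeconfig can be fetched through the same client and handed to standard Kubernetes tooling, which is where the pre-packaged application platform described above takes over.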
The Rise of Serverless and WebAssembly at the Edge
Developer advocate Thorsten Hans underscored the importance of serverless technologies within Akamai’s platform for developers of all skill levels. Akamai Functions, for instance, is designed to abstract away infrastructure management, allowing developers to build, deploy, and scale applications and AI workloads as WebAssembly (Wasm) functions across Akamai’s distributed cloud.
"We put developers at the center, always," Hans emphasized. Akamai’s involvement with the Cloud Native Computing Foundation (CNCF) and its collaboration on the sandbox project known as Spin, a framework for building and deploying serverless applications in WebAssembly, further illustrates this commitment. This initiative aligns with the broader trend towards "NoOps," where the operational overhead for developers is significantly reduced.
Akamai’s December 2025 acquisition of Fermyon, a cloud-native Wasm Function-as-a-Service company, has been instrumental in accelerating its serverless and WebAssembly capabilities. The Spin team, known for pushing cold start times for Wasm functions below 1 millisecond, also developed SpinKube, a Kubernetes runtime for Spin applications, in 2024. Hans has been a driving force behind the adoption of Wasm within Akamai, advocating for its potential to execute lightweight code efficiently at the edge.
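To ground the Wasm discussion, here is a minimal sketch of an HTTP-triggered Spin component written in Go and compiled to WebAssembly with TinyGo. It follows the handler pattern of Spin’s public Go SDK; the v2 module path shown reflects the SDK at the time of writing and may differ across versions:

```go
package main

import (
	"fmt"
	"net/http"

	spinhttp "github.com/fermyon/spin/sdk/go/v2/http"
)

// init registers the handler with Spin's HTTP trigger. The component is
// instantiated per request, so there is no long-lived server process.
func init() {
	spinhttp.Handle(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		fmt.Fprintln(w, "Hello from a Spin Wasm function at the edge")
	})
}

// main is required by TinyGo but unused; Spin calls the registered handler.
func main() {}
```

Because the compiled artifact is a small Wasm module rather than a container image, the runtime can start a fresh instance for every request, which is what makes the sub-millisecond cold starts cited above plausible.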
Focusing on Logic, Not Logistics
The overarching philosophy guiding Akamai’s platform development is to empower developers by allowing them to concentrate on their application logic and business requirements, rather than getting bogged down in infrastructure logistics. Hans explained that Akamai aims to "meet developers where they are" by providing extensive resources such as tutorials, hands-on labs, and ready-to-use application templates.
"In practice, it’s all about telling developers that they shouldn’t spend time worrying about server provisioning and management; they should be able to see what’s inside the box (of any given Akamai service) and think about how they can apply that to their environment’s requirements," Hans stated.
While acknowledging that some software engineering teams will always require granular control over their underlying infrastructure, Hall and Hans indicated that Akamai’s strategy is primarily focused on providing a higher level of abstraction for the majority of its customer base. That abstraction lets engineers set aside the complexities of server management and concentrate on the core business processes and user functionality they aim to deliver. This shift in focus from infrastructure management to application development and innovation is a key benefit of Akamai’s evolving cloud-native AI strategy.
The convergence of Akamai’s vast global network, its expertise in distributed systems, and its embrace of cutting-edge technologies like Kubernetes and WebAssembly positions the company as a significant player in the future of cloud-native AI. By prioritizing developer experience and simplifying complex infrastructure, Akamai aims to accelerate the deployment and adoption of AI applications across a diverse range of industries.
