At the recent MCP Summit in New York City, a pivotal gathering for developers and stakeholders in the burgeoning field of agentic artificial intelligence, The New Stack engaged in an in-depth conversation with Clare Liguori, Senior Principal Software Engineer at Amazon Web Services (AWS) and a core maintainer of the open-source Model Context Protocol (MCP) project. The discussion provided critical insights into AWS’s substantial contributions to MCP, its current real-world applications, and the strategic trajectory of this foundational technology for AI agent interoperability.
The MCP, first unveiled by Anthropic in late 2024, has rapidly established itself as the de facto standard for enabling AI agents to seamlessly connect with external tools and data sources. This crucial interoperability layer addresses a fundamental challenge in the deployment of advanced AI: ensuring that agents can access and utilize the information and functionalities necessary to perform complex tasks. The significance of this problem was underscored when Anthropic, the protocol’s progenitor, transferred stewardship of MCP to the Linux Foundation in late 2025, signaling a commitment to its open and collaborative development. As agentic AI continues its expansion into enterprise environments, the demand for robust and standardized communication protocols like MCP has intensified. The ability of AI applications to "reach for the fact or shovel that it needs" is no longer a theoretical advantage but a practical necessity for unlocking their full potential.
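To make that interoperability concrete: MCP messages are exchanged as JSON-RPC 2.0, and an agent invokes a server-side tool with a `tools/call` request. The sketch below builds such a message by hand; the tool name and arguments are hypothetical, and a real client would use an MCP SDK rather than raw JSON.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a minimal JSON-RPC 2.0 `tools/call` request as a JSON string.

    A sketch of the wire format only; tool names and argument schemas are
    defined by whichever MCP server the agent is connected to.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool: ask a documentation-search server for context.
msg = make_tool_call(1, "search_docs", {"query": "MCP elicitation"})
print(msg)
```

The standardized envelope is the point: any MCP-aware agent can send this request to any MCP server that advertises the tool, which is what lets an agent "reach for the fact or shovel that it needs."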
Amazon’s Strategic Commitment to the MCP Project
Clare Liguori’s dual role as a senior engineer at AWS and a core maintainer of MCP places her at the nexus of enterprise AI development and open-source innovation. In her capacity as an MCP maintainer, Liguori plays a key role in shaping the protocol’s technical roadmap, influencing decisions on which features are incorporated and which are deferred. She shared with The New Stack that her current focus involves integrating webhooks, events, and notifications into the MCP specification. This emphasis on event-driven architectures is particularly noteworthy, as it signals a shift towards more dynamic and responsive AI agents.
"We’re starting to see," Liguori explained, "especially with things like OpenClaw and some other agent runtimes that are coming about – agents that are always on, agents that are waiting for events to come in, and they will start acting on them." This evolving paradigm moves beyond earlier models where AI agents were often constrained by limited, synchronous interactions. The introduction of asynchronous event handling and continuous listening capabilities within MCP promises to unlock more sophisticated autonomous behaviors and proactive AI systems. This indicates that the development of MCP is far from complete, with ongoing efforts to expand its capabilities to meet the demands of increasingly complex agentic workflows.
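The always-on pattern Liguori describes maps naturally onto JSON-RPC notifications, which MCP already uses: messages that carry no `id` and expect no response. A rough sketch of what an inbound event might look like, with the understanding that the method and payload names below are hypothetical placeholders, not part of the published spec:

```python
import json

def make_event_notification(event_type: str, payload: dict) -> str:
    """Sketch of an event delivered as a JSON-RPC 2.0 notification.

    Notifications omit "id" and expect no reply, which suits agents that
    sit waiting for events and act when one arrives. The method name is
    a hypothetical stand-in for whatever the draft spec settles on.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "notifications/event",  # hypothetical, not in the spec
        "params": {"type": event_type, "data": payload},
    })

# Hypothetical event: a support ticket was created upstream.
print(make_event_notification("ticket.created", {"ticket_id": "T-123"}))
```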
The synergy between AWS’s offerings and the MCP project is particularly strong: AWS provides managed MCP servers, easing adoption and deployment for a wide range of users. AWS has already made significant contributions to MCP, including the development of "Tasks" and "Elicitations." Tasks let the protocol handle long-running requests, accommodating complex AI operations that may require extended processing or interaction. Elicitations, on the other hand, enable AI agents to prompt human users for additional context when needed, a critical feature for disambiguation and for ensuring the accuracy of AI-driven decisions. These contributions highlight AWS’s active role in enhancing the practical utility of MCP for enterprise-grade applications.
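An elicitation reverses the usual direction: the server asks the client to put a question to the human. The field names below follow the elicitation design as published (a message plus a schema describing the expected answer), but they should be checked against the current spec revision, and the question and schema here are purely illustrative.

```python
import json

def make_elicitation(request_id: int, message: str, schema: dict) -> str:
    """Sketch of a server-to-client elicitation request.

    The server supplies a human-readable message and a schema for the
    structured answer it needs back. Treat field names as illustrative
    and verify them against the current MCP spec revision.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "elicitation/create",
        "params": {"message": message, "requestedSchema": schema},
    })

# Hypothetical disambiguation question an agent might need answered.
req = make_elicitation(
    7,
    "Which AWS region should this stack deploy to?",
    {"type": "object", "properties": {"region": {"type": "string"}}},
)
print(req)
```

Because the answer comes back structured rather than as free text, the agent can act on it without another round of interpretation, which is what makes elicitations useful for disambiguation.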
Liguori further elaborated on the collaborative dynamic, describing AWS as an "experimental playground for some of these new and upcoming concepts in MCP that are still in the draft spec, that we’re still tuning and working on with feedback." This approach allows AWS to test and refine nascent MCP features in a real-world environment, providing invaluable feedback to the broader open-source community. By having an official implementation available for developers to experiment with, AWS not only validates new concepts but also accelerates their development and adoption. This model of corporate sponsorship and participation in open-source projects demonstrates a powerful feedback loop: sponsor funding supports the foundational infrastructure maintained by open-source foundations, while the sponsors’ platforms and engineering resources actively shape the evolution of the technologies themselves. This symbiotic relationship is crucial for sustained growth and innovation within the open-source AI ecosystem.
Charting the Future of Agentic AI with MCP
The role of MCP within the broader agentic AI stack is becoming increasingly defined as the essential connective tissue that brings context to the work of AI agents and their underlying models. However, the question remains: how can companies that are not at the forefront of technological innovation, or those lacking dedicated in-house development teams, leverage this powerful technology? Liguori offered a glimpse into AWS’s strategy for democratizing AI development tools, citing the recent broad release of Amazon’s Kiro AI development tool. Initially intended for AWS engineers, Kiro was made available to all roles across all job families within the company, a move whose widespread uptake surprised even Amazon itself.
This expansion of access to AI development tools, coupled with the growing availability of beginner-friendly platforms like Amazon QuickSight that increasingly incorporate MCP, suggests a future where AI is no longer confined to the realm of specialized technology departments. The implications are profound: as AI development tools become more accessible, and protocols like MCP ensure their interoperability and ease of integration, AI-powered automation is poised to extend its reach into even the smallest businesses. This democratization of AI promises to unlock new levels of efficiency and innovation across a diverse range of industries, fundamentally reshaping how businesses operate and compete. The continued evolution of MCP, driven by contributions from industry leaders like AWS and a vibrant open-source community, is therefore central to this shift.
The MCP Summit itself served as a microcosm of this rapid evolution. Held in the dynamic environment of New York City, the event convened key figures from leading technology organizations, research institutions, and the open-source community. Discussions ranged from the technical intricacies of protocol design to the strategic imperatives of widespread adoption. Panels and workshops explored use cases in areas such as enterprise data management, customer service automation, and scientific research, all of which benefit from standardized agentic communication. The presence of major cloud providers like AWS, alongside AI pioneers and foundational open-source projects, underscored the collaborative and rapidly developing nature of the agentic AI landscape.
Supporting Data and Broader Implications
The increasing adoption of agentic AI is not merely a theoretical trend but is supported by growing market data. Industry analysts predict a significant expansion of the agentic AI market, with some projecting it to reach hundreds of billions of dollars within the next decade. This growth is directly tied to the ability of these AI systems to interact with external environments, a capability that MCP is designed to facilitate. The protocol’s focus on standardization addresses a critical market need for interoperability, reducing fragmentation and enabling developers to build applications that can leverage a wider array of AI models and tools without being locked into proprietary ecosystems.
The contribution of large cloud providers like AWS to open-source projects like MCP is a strategic imperative for them as well. By investing in open standards, they can foster an ecosystem that drives innovation and, by extension, increases demand for their underlying cloud infrastructure and managed services. AWS’s role as an "experimental playground" for MCP features means that the protocol benefits from rigorous testing and validation on a massive scale. This not only accelerates the protocol’s maturity but also provides AWS with a competitive advantage by offering cutting-edge AI integration capabilities to its customers.
Looking ahead, the evolution of MCP is likely to focus on several key areas. Enhanced security features will be paramount as AI agents gain access to increasingly sensitive data and systems. The development of robust authentication, authorization, and data privacy mechanisms within MCP will be critical for enterprise adoption. Furthermore, ongoing work on extensibility will allow the protocol to adapt to new types of AI models, tools, and data formats as they emerge. The ongoing efforts to integrate webhooks, events, and notifications, as highlighted by Liguori, are indicative of this trend towards more dynamic and context-aware AI interactions.
The broader impact of MCP’s standardization extends beyond the technical realm. By providing a common language for AI agents, it lowers the barrier to entry for developers and businesses looking to integrate AI into their workflows, with the potential to drive significant economic growth and societal change. As agentic AI becomes more capable and accessible, it can automate complex tasks, augment human capabilities, and open new avenues for creativity and problem-solving across virtually every sector. The continued investment and active participation of organizations like AWS in the open-source development of MCP are therefore crucial to realizing that potential. The summit in New York served as a powerful testament to the collaborative spirit and shared vision driving this technological advancement.
