Researchers from Meta AI and the King Abdullah University of Science and Technology (KAUST) have unveiled a foundational proposal for a new class of digital architecture known as Neural Computers (NCs), marking a significant departure from the von Neumann architecture that has defined computing for over seven decades. In a technical paper titled "Neural Computers," the joint research team outlines a vision for an emerging machine form that unifies computation, memory, and input/output (I/O) within a singular, learned runtime state. Unlike conventional computers that rely on the execution of explicit, human-written programs, or contemporary AI agents that interact with external execution environments, Neural Computers aim to make the neural model itself the running computer. This shift represents a transition from software that is "built" to software that is "learned," potentially redefining the relationship between hardware and intelligence.
The Conceptual Framework of Neural Computers
The central thesis of the Meta AI and KAUST research is the move toward the Completely Neural Computer (CNC). The CNC is envisioned as a mature, general-purpose realization of the NC form, characterized by stable execution, the ability to be explicitly reprogrammed, and the durable reuse of learned capabilities. While the current state of computing separates the processing unit (CPU/GPU) from the memory (RAM/Storage) and the interface (I/O), a Neural Computer collapses these distinctions. In an NC, the "state" of the computer is the internal latent representation of a neural network, and "computation" is the progression of that network through time.
The researchers argue that while current AI developments have produced sophisticated "world models" (which learn environment dynamics) and "agents" (which act within those environments), neither fully achieves the status of a self-contained computer. A world model predicts the future, and an agent navigates it, but a Neural Computer is the environment, the actor, and the logic processor simultaneously. This unification aims to solve the "von Neumann bottleneck"—the latency and energy costs of moving data between processors and memory—by performing all operations within the weights and activations of a single neural structure.
Technical Methodology: Learning from I/O Traces
A critical challenge addressed in the paper is how a machine can learn to be a computer without access to the underlying source code or instrumented program states of existing systems. To address this, the researchers investigated whether early NC primitives could be learned solely from collected I/O traces. This "black box" approach involves training the model on the visible outputs and inputs of a system—such as screen pixels, Command Line Interface (CLI) text, and user keystrokes—rather than the internal logic of the software.
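The black-box setup described above can be pictured with a minimal sketch. The record layout, field names, and helper function below are illustrative assumptions for exposition, not the paper's actual data format:

```python
from dataclasses import dataclass

@dataclass
class IOTraceStep:
    """One observed step of a black-box session: what the user did
    and what the screen showed afterward. No program internals are captured."""
    keystrokes: str   # raw user input, e.g. "ls -la\n"
    screen_text: str  # visible CLI output produced after the input
    frame: bytes = b""  # raw GUI pixels, if the session was graphical

def to_training_pair(history: list[IOTraceStep], step: IOTraceStep):
    """Supervised pair: (all prior visible I/O plus the next user input)
    maps to the next visible screen state."""
    context = "".join(s.keystrokes + s.screen_text for s in history)
    return context + step.keystrokes, step.screen_text

# Example: a two-step terminal session captured purely from the outside.
session = [IOTraceStep("pwd\n", "/home/user\n")]
x, y = to_training_pair(session, IOTraceStep("whoami\n", "user\n"))
```

Note that nothing in the pair references source code or program state: the model only ever sees what a screen-recording of the session would show, which is the essence of the trace-only approach.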
To test this hypothesis, the team instantiated NCs as high-fidelity video models. These models were tasked with "rolling out" screen frames based on instructions, pixel history, and user actions. By training on vast datasets of Graphical User Interface (GUI) and CLI interactions, the models learned to simulate the behavior of a computer operating system. For example, when a user inputs a command in a simulated terminal, the NC does not call an external bash script; instead, it predicts the next visual or textual state that a real terminal would produce, effectively "computing" the result through inference.
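The rollout procedure can be sketched as an autoregressive loop. The `model` here is a stand-in stub (a lookup table playing the role of a learned runtime); the paper's actual models are high-fidelity video networks, and the function signature is our assumption:

```python
def rollout(model, instruction, actions, history):
    """Autoregressive rollout: the model predicts each next screen state
    from the instruction, the accumulated state history, and the latest
    user action. No external shell or renderer is ever invoked."""
    states = list(history)
    for action in actions:
        next_state = model(instruction, states, action)  # pure inference
        states.append(next_state)
    return states

# Stand-in "model": a lookup table standing in for a learned runtime.
fake_terminal = {"echo hi": "hi", "date": "Mon Apr  6 2026"}
model = lambda instr, hist, act: fake_terminal.get(act, "command not found")

states = rollout(model, "simulate a bash terminal", ["echo hi", "date"], [])
```

The point of the sketch is the control flow, not the stub: each "computation" is a single inference step, and the growing `states` list is the only memory the simulated computer has.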
The study reports that these learned runtimes can acquire early interface primitives. Key successes were observed in I/O alignment—the model's ability to produce a consistent visual and logical response to user input—and in short-horizon control. However, the researchers caution that long-term symbolic stability and the ability to update specific "routines" without retraining the entire model remain significant technical hurdles.
Chronology of Development and the Roadmap to CNCs
The publication of the "Neural Computers" paper in April 2026 follows a decade of accelerating progress in deep learning. The timeline of this evolution can be traced through several key milestones in the AI landscape:
- 2017–2022: The Transformer Era. The rise of the Transformer architecture allowed models to process large sequences of data, leading to the development of Large Language Models (LLMs) that could simulate human-like reasoning.
- 2023–2024: Agentic Workflows. Researchers began developing "agents" that could use tools (calculators, browsers, code interpreters) to solve complex tasks, though these remained tethered to traditional silicon-based software.
- 2025: The Rise of World Models. AI labs introduced models capable of simulating physical environments and video-game engines (such as OpenAI’s Sora or Google’s Genie), proving that neural networks could "render" complex logic visually.
- April 2026: The Neural Computer Proposal. Meta AI and KAUST formalize the NC framework, proposing that the simulation is not just a visual trick but a new form of computation that can eventually replace traditional operating systems.
The roadmap toward the Completely Neural Computer (CNC) is divided into several phases. The current phase, "Primitive NCs," focuses on visual consistency and basic command response. The next anticipated phase involves "Durable Capability Reuse," where a model can learn a new skill (like a new programming language or a software tool) and store it as a modular component within its neural architecture. The final stage is the "General Purpose CNC," which would offer the same level of reliability and precision as a modern MacBook or PC but would be entirely driven by neural inference.
Supporting Data and Experimental Results
The researchers provided data from their CLI and GUI simulations to demonstrate the viability of the NC approach. In CLI environments, the NCs achieved an I/O alignment score of 94%, meaning the model’s textual output matched the expected output of a standard terminal in nearly all short-sequence tests. In GUI environments, the models were able to maintain "pixel-perfect" consistency for up to 300 frames of interaction before visual artifacts began to appear—a metric referred to as the "stability horizon."
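The two reported metrics can be made concrete with a small sketch. The formulas below (exact-match fraction for alignment, index of the first divergent frame for the stability horizon) are our reading of the article's description, not the paper's official definitions:

```python
def io_alignment(predicted: list[str], reference: list[str]) -> float:
    """Fraction of model outputs that exactly match a real terminal's outputs
    over the same short input sequence."""
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

def stability_horizon(frame_errors: list[float], threshold: float) -> int:
    """Number of consecutive frames whose pixel error stays below threshold
    before the first visual artifact appears."""
    for i, err in enumerate(frame_errors):
        if err >= threshold:
            return i
    return len(frame_errors)

score = io_alignment(["ok", "42", "err"], ["ok", "42", "done"])       # 2 of 3 match
horizon = stability_horizon([0.01, 0.02, 0.9, 0.95], threshold=0.5)  # artifact at frame 2
```

Under these definitions, the reported 94% alignment would mean 94 of every 100 short-sequence outputs were exact matches, and a 300-frame stability horizon would mean pixel error first crossed the artifact threshold around frame 300.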

| Metric | Traditional Computer | Early Neural Computer (NC) | CNC (Target Goal) |
|---|---|---|---|
| Logic Form | Symbolic / Binary | Probabilistic / Latent | Hybrid / Stable Latent |
| Memory Access | Explicit (RAM) | Implicit (Weights/KV Cache) | Durable / Addressable |
| Program Change | Re-coding | Fine-tuning / Prompting | Explicit Reprogramming |
| Energy Profile | High (Data Movement) | Variable (Inference) | Optimized (In-Memory) |
The data suggests that while NCs are currently less precise than symbolic computers for mathematical tasks, they are significantly more flexible in handling "fuzzy" inputs, such as natural language commands or hand-drawn sketches, which they can translate into functional digital states without intermediate code.
Industry and Academic Reactions
The proposal has sparked a broad spectrum of reactions from the global technology community. Dr. Yann LeCun, Chief AI Scientist at Meta, has long advocated for "World Models," and this paper is seen by many as the practical extension of that philosophy. Sources close to the Meta AI team suggest that the internal sentiment is one of "cautious optimism," with the primary internal debate centering on the energy requirements of running a computer via constant neural inference versus the efficiency of traditional binary logic.
Independent computer scientists have expressed both intrigue and skepticism. "The idea of a computer that learns its own operating system is the ‘Holy Grail’ of software engineering," said one researcher from the Massachusetts Institute of Technology (MIT). "However, the ‘symbolic stability’ problem is massive. If your computer ‘hallucinates’ that 2+2=5 because of a statistical anomaly in its training data, it fails as a computer. We need to see how they intend to bake in the hard logic of a CPU into a fluid neural state."
Hardware manufacturers, including NVIDIA and AMD, are reportedly monitoring the NC roadmap closely. If the industry shifts toward Neural Computers, the demand for traditional CPUs may diminish in favor of massive arrays of Neural Processing Units (NPUs) designed for continuous, high-throughput inference rather than sequential logic.
Analysis of Implications and Broader Impact
The transition to Neural Computers would, if realized, represent one of the most significant shifts in the history of information technology. The implications span hardware design, software development, and user experience:
1. The End of Traditional Software Engineering: In a CNC-dominated world, "programming" might shift from writing lines of code to "teaching" the NC through demonstrations or I/O traces. This would democratize software creation, allowing non-technical users to "reprogram" their devices through natural interaction.
2. Hardware Optimization: Traditional computers are built to minimize errors at the hardware level. NCs, being probabilistic, might allow for "approximate computing" hardware that is significantly faster and more energy-efficient, as it wouldn’t need the 100% precision required by binary logic for every single operation.
3. Durable AI Memory: One of the greatest limitations of current AI is its "forgetfulness" or the need for massive context windows. A CNC would treat memory as a durable, integrated part of its runtime, allowing for truly personalized computing experiences that evolve with the user over decades.
4. Security and Interpretability: The "black box" nature of NCs poses a unique security risk. Unlike traditional software, where a vulnerability can be found in a specific line of code, a vulnerability in an NC might be an "adversarial state" that is difficult to diagnose or patch. Developing "Neural Firewalls" will be a prerequisite for the adoption of CNCs.
As the research moves from Meta AI and KAUST labs into broader peer review, the focus will remain on the roadmap’s identified challenges: routine reuse and symbolic stability. If these hurdles are overcome, the Neural Computer could establish a new computing paradigm that moves beyond the limitations of today’s agents and world models, ushering in an era where the machine and the mind of the machine are one and the same. The technical paper "Neural Computers" (arXiv:2604.06425) serves as the opening chapter of this potential transformation.
