Science · April 3, 2026

This neuromorphic chip runs AI on physics, not code. It's 2,000x more efficient.

By Eli Voss

Loughborough University physicists have built a computer chip that processes information using the physical properties of its own material, not software. In benchmark tests, it consumed up to 2,000 times less energy than conventional software-based approaches for certain time-series tasks. The research, published in Advanced Intelligent Systems, describes a new class of device the team calls "Physical AI."

The chip won't replace your GPU anytime soon. But for a specific and growing category of AI work, processing data that changes over time, it offers an efficiency gain so large it deserves close attention.

What the chip actually does

The device is a memristor, an electronic component that retains information about past electrical signals and uses that history to shape its response to new inputs. This particular memristor is built from nanoporous niobium oxide: a thin film riddled with random nanoscale pores that create multiple electrical pathways through the material.

Those pathways function like the hidden processing layer in a neural network. Instead of running incoming data through software-defined layers on a conventional processor, the chip's own physics transforms the signal. The technique is called reservoir computing, and the Loughborough team, led by Dr. Pavel Borisov, a Senior Lecturer in Physics, has implemented it entirely in hardware.
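To make that division of labor concrete, here is a minimal software sketch of reservoir computing in the echo-state style: a fixed, random recurrent layer (standing in for the chip's nanoporous film) transforms the input, and only a linear readout is ever trained. The names and parameter choices below (`N_RES`, the spectral-radius scaling of 0.9, the ridge penalty) are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, random "reservoir" weights. In the Loughborough device this role
# is played by the physics of the nanoporous film, not by stored numbers.
N_RES, N_IN = 200, 1
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep dynamics stable

def run_reservoir(inputs):
    """Drive the fixed random network with a 1-D signal; record its states."""
    x = np.zeros(N_RES)
    states = np.empty((len(inputs), N_RES))
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states[t] = x
    return states

def fit_readout(states, targets, ridge=1e-6):
    """Train only the linear readout (ridge regression); the reservoir is never trained."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(N_RES),
                           states.T @ targets)
```

Training only the readout is what makes the hardware substitution attractive: the expensive recurrent transformation can be delegated to any physical system whose dynamics are rich enough, which is exactly the role the niobium oxide film plays here.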

"Inspired by the way the human brain forms very numerous and seemingly random neuronal connections between all its neurons, we created complex, random, physical connections in an artificial neural network by designing pores in nanometre-thin films of niobium oxide as part of a novel electronic device," Borisov said in the university's press release.

The research was funded by the Engineering and Physical Sciences Research Council (EPSRC); the paper was co-authored by Professor Sergey Saveliev, a theoretical physicist at Loughborough.

How they tested it

The team ran the chip through several benchmarks. The most notable was the Lorenz-63 system, a mathematical model of chaotic behavior famously tied to the "butterfly effect," where minuscule input changes produce wildly different outcomes. This is exactly the kind of time-dependent, noise-sensitive data that conventional AI systems burn enormous energy processing.
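For readers who want to see the benchmark itself, the Lorenz-63 equations are easy to reproduce. The sketch below continues in software; the integration step, initial condition, and perturbation size are my own choices for illustration, not the paper's test protocol.

```python
import numpy as np

def lorenz63(n_steps, s0=(1.0, 1.0, 1.0), dt=0.01,
             sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 system with a fourth-order Runge-Kutta step."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s = np.array(s0, dtype=float)
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

# Two runs whose starting points differ by one part in a billion drift
# apart by orders of magnitude within a few thousand steps: the butterfly effect.
a = lorenz63(3000)
b = lorenz63(3000, s0=(1.0 + 1e-9, 1.0, 1.0))
print(np.abs(a - b).max(axis=1)[::500])
```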

The memristor-processed data, when fed into a linear model, successfully predicted the short-term behavior of the Lorenz system and reconstructed missing data points. The team also tested it on pixelated digit recognition and basic logic operations. In all cases, a single device handled multiple task types while maintaining functional accuracy.
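Putting the two sketches above together gives a toy version of that prediction task: drive the (software) reservoir with one Lorenz coordinate, fit the linear readout on the first half of the run, and forecast one step ahead on the second half. In the paper, the physical memristor replaces the `run_reservoir` step; the split points and horizon here are illustrative assumptions.

```python
# Toy one-step-ahead prediction of the Lorenz x-coordinate, reusing
# lorenz63, run_reservoir, and fit_readout from the sketches above.
u = lorenz63(4000)[:, 0]
states = run_reservoir(u[:-1])
W_out = fit_readout(states[:2000], u[1:2001])  # train on the first half
pred = states[2000:] @ W_out                   # one-step forecasts on the rest
```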

The 2,000x efficiency figure comes from comparing the chip's energy consumption against standard software-based reservoir computing on equivalent tasks. That number is an upper bound; actual gains vary by application. But even at a fraction of that figure, the implications for edge computing are substantial.

Why it matters for AI infrastructure

AI's energy problem is no longer theoretical. Data centers are straining power grids. A single training run for a large language model can consume as much electricity as a small town uses in a year. And as AI pushes further into real-time applications, like autonomous vehicles, industrial monitoring, and wearable health devices, the compute-per-watt equation becomes critical.

This chip targets that second category: not the massive training jobs, but the always-on inference tasks that need to run for hours or days on minimal power. Think heart rate monitors that detect strokes, engine sensors in vehicles, or environmental monitoring stations in remote locations.

"My end goal would be for this kind of technology to be used in a time-dependent signal. Whether that's in a car, a robot, a nuclear power plant, or in a smart watch," Borisov told Decrypt.

This approach also arrives amid broader momentum in neuromorphic hardware. Just last week, Cambridge researchers published work on hafnium oxide memristors with switching currents a million times lower than conventional devices. Intel's Loihi chip line and BrainChip's Akida platform are pushing similar brain-inspired architectures toward commercial deployment. The Loughborough work is distinct because it leans harder into material physics: the computation happens in the substrate rather than in a more traditional neuromorphic circuit design.

Meanwhile, the pressure on AI energy consumption continues to mount. Starcloud's recent $1.1B raise for orbital data centers is one signal of how seriously the industry is taking the power constraint. Hardware approaches like this one attack the problem from the opposite direction: not more power, but radically less of it.

What we don't know yet

  • The benchmarks used relatively simple tasks. The team has not yet demonstrated performance on the noisier, higher-dimensional data that real-world deployments would require.
  • Scalability is unproven. The paper describes a single device; building networks of these memristors and integrating them into production hardware is a separate engineering challenge.
  • The 2,000x figure is the best-case comparison against software-based reservoir computing specifically, not against all forms of AI hardware. How it stacks up against other neuromorphic chips like Intel's Loihi or BrainChip's Akida remains an open question.
  • There is no timeline for commercial availability or industry partnerships.

What comes next

Borisov's team plans to increase the complexity of the neural networks built from these devices and test them with noisier input data. "We believe this is a scalable and practical approach to creating small, industry-compatible devices for AI applications with much better energy efficiency and offline capabilities," he said.

Professor Saveliev framed the broader significance: "This is a great example of how fundamental physics can contribute to modern computations, avoiding huge computational overheads by using the complexity of physical systems as a high dimensional filter for data."

The paper, titled "Scalable Platform Enabling Reservoir Computing With Nanoporous Oxide Memristors for Image Recognition and Time Series Prediction," is available in Advanced Intelligent Systems.

If the efficiency gains hold at scale, this kind of hardware could become standard in the next generation of edge AI devices, the ones that need to think continuously without being tethered to a data center or a power outlet.

Eli Voss covers science and neuroscience for The Daily Vibe.

This article was AI-generated.
