Neuromorphic to me
Recently Neuron published a short NeuroView piece titled “Neuromorphic Is Dead. Long Live Neuromorphic” by Giacomo Indiveri. It’s a great read and worth your time. I bring it up because it got me thinking, once again, about what “neuromorphic” actually means. It seems to mean something slightly different to everyone you talk to. My best attempt at a definition: it’s about emulating aspects of the brain more closely than standard digital architectures do. Things like threshold dynamics, analog computation, asynchronous spiking, and synaptic plasticity implemented on devices like memristors all fall under the umbrella of neuromorphics. Every one of those aspects is important for advancing our understanding of the brain. Any of them, if scaled and engineered well, could also power the next generation of AI.
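To make “threshold dynamics” and “asynchronous spiking” a little more concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The parameter values and the constant-current input are purely illustrative, chosen by me for this post rather than taken from any particular chip or paper.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. Illustrative parameters only.
dt = 1e-4          # time step (s)
tau = 20e-3        # membrane time constant (s)
v_rest = -65e-3    # resting potential (V)
v_thresh = -50e-3  # spike threshold (V)
v_reset = -70e-3   # reset potential after a spike (V)

def simulate_lif(input_current, r_m=1e7):
    """Leaky integration of the membrane voltage; emit a spike whenever
    the voltage crosses threshold, then reset."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # dv/dt = (-(v - v_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:               # threshold dynamics: all-or-none event
            spike_times.append(t * dt)  # asynchronous output: a spike time, not a value
            v = v_reset                 # reset after the spike
    return spike_times

# A constant 2 nA input for 200 ms produces a regular spike train.
current = np.full(2000, 2e-9)
print(simulate_lif(current))
```

The point is simply that the output is a handful of spike times produced by threshold crossings and resets, rather than a dense vector of activations.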
That broadness is both a strength and a headache. It’s a strength because it lets people with very different goals fall under the same umbrella and cross-pollinate ideas with each other. Some can focus on materials, some on circuits, some on algorithms, and some on theory. Nonetheless, it is a headache because the label becomes a fuzzy marketing term, and we end up comparing apples to nuclear-powered server farms. The neuromorphic community often finds itself measured against very different benchmarks: energy efficiency, biological plausibility, or simply whether a neuromorphic device can compete with the free version of ChatGPT. None of these metrics is wrong, but each highlights a different truth about why we do what we do.
It is an awkward truth that large-scale machine learning systems, LLMs and deep neural networks, are insanely good at their designated jobs. It can feel insurmountable; it is hard to see how neuromorphic approaches will even have a chance to compete. They’ve learned from pretty much the whole internet. They scale with data and compute in a way that neuromorphic systems cannot match right now, if ever. It’s becoming an ever more unavoidable aspect of our lives. I even used an AI to fix the dyslexic grammar errors in this essay’s first draft (the ideas are still mine, I swear). The fact remains that more neurons (or more parameters), more data, and more compute will always win. The “bitter lesson”, that general methods which leverage computation and data outperform methods that bake in human domain knowledge, still stings. It suggests that carefully crafted neuromorphic hardware will be outcompeted by raw scale and the sheer statistical power of LLMs trained on the whole internet.
That reality has pushed the field in two directions. One is the practical, engineering-driven path: show that neuromorphic hardware does something useful with far less energy. Event-based cameras are a poster-child example. They only report changes in brightness, so they consume far less power than a conventional camera that streams full-frame color at a fixed rate. For some tasks, event-based sensing with neuromorphic processing is not just “cool”; it’s measurably more efficient. On the hardware front, there are real gains to be had in domain-specific neuromorphic sensors. These are the wins you can point to when a funder asks, “How will this help industry?”
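As a back-of-the-envelope illustration of why that matters (my own toy numbers, not measurements from any real sensor), compare how much data a frame camera and an event camera would read out from a mostly static scene:

```python
import numpy as np

# Toy comparison of frame-based vs event-based readout.
# Assumptions (illustrative only): a 240x180 sensor, 30 frame intervals,
# and a scene where ~1% of pixels change significantly per interval.
rng = np.random.default_rng(0)
height, width, n_frames = 180, 240, 30
change_fraction = 0.01   # fraction of pixels that change per interval
threshold = 0.1          # log-intensity change needed to emit an event

frame_pixels = height * width * n_frames  # frame camera: every pixel, every frame

events = 0
prev = rng.random((height, width))
for _ in range(n_frames):
    cur = prev.copy()
    idx = rng.random((height, width)) < change_fraction
    cur[idx] += rng.uniform(0.2, 0.5, idx.sum())   # a few pixels brighten
    events += int((np.abs(np.log1p(cur) - np.log1p(prev)) > threshold).sum())
    prev = cur

print(f"frame readout : {frame_pixels} pixel values")
print(f"event readout : {events} events (~{events / frame_pixels:.1%} of the frame data)")
```

In this toy setting the event stream is on the order of one percent of the frame data; that kind of sparsity is what translates into fewer memory accesses and lower power downstream.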
The other direction is more aspirational and sits under the label “neuroAI.” Here the hope is that studying the brain, in all its messy, gory detail, will yield algorithmic ideas as transformative as the transformer. Some people even argue that transformers were indirectly inspired by neuroscience: attention mechanisms have analogues in biological systems, and recurrent networks are, of course, brain-like in various ways. Still, the chain of influence is quite far removed from neuroscience. The ideas have been iterated on, abstracted, and optimized in computer science labs, so it’s hard to say whether the brain really “uses transformers.” Even so, attention and recurrence are kernels of intuition that seeded something powerful in machine learning. NeuroAI is the collective bet that more such kernels exist, waiting in the biology for us to extract and translate into machine learning.
I find that this duality, applied engineering wins versus speculative algorithmic breakthroughs, is unavoidable, and it mirrors a larger tension in science. Do you do research because it might change the world, or do you do it because you want to understand the world? For me, the answer has always leaned strongly toward understanding. I didn’t get into computational neuroscience to build the next commercial product, especially not the next machine learning algorithm. I came into it because I wanted to understand mechanisms: how neurons compute with spikes, how spiking patterns are controlled, and what the underlying dynamical systems actually look like. The “how” question: how do these things actually work? Of course, being funded by taxpayers or industry means occasionally framing that curiosity in terms that highlight potential utility. Still, there’s a qualitative difference between curiosity-driven inquiry and targeted development.
It’s worth being honest: funding incentives pull scientists toward the applied side. Grant panels, industry partnerships, and job prospects all reward work that can point to concrete outcomes. When you need salary lines and graduate student stipends, you have to make pragmatic arguments. But there’s also an intellectual case for preserving work that aims to explain rather than to immediately productize. Fundamental understanding has a long track record of paying off in unexpected ways. Even if a particular line of inquiry turns out to be a dead end for application, it can open conceptual doors that later lead to breakthroughs. The search itself can be worth it.
That’s not a justification for ignoring utility, far from it. It’s just being honest about my (and probably others’) motivations. When I pitch my research, I ask: is this likely to teach us something important, and does it have a credible path to societal value if the ideas pan out? Neuromorphic work often passes that test. That’s why we all gravitate towards the neuromorphic or neuroAI label: it sells well. Whether through lower-power sensors, specialized accelerators, or new algorithmic motifs gleaned from biology, neuromorphic ideas have clear potential societal impact. The question is whether that impact will be incremental (useful, but niche) or structural (changing how we build AI). I’d love the latter, but I’m not placing all my bets there.
A practical reason I keep working in computational neuroscience is that spiking dynamics are simply fascinating. Spikes are not just binary events; they’re embedded in dynamical systems with timescales, thresholds, refractoriness, and feedback. The nonlinearities that come from those dynamics give rise to computation that’s qualitatively different from the feedforward activations people optimize in deep learning. There’s a richness to how networks of spiking neurons switch attractors, route signals, and implement control. That’s what keeps me awake in the middle of the night. The neuromorphic and neuroAI world is just the most likely area where these ideas may pay off.
I also appreciate that the neuromorphic “pie” has many slices. My slice is dynamical systems and theoretical modeling. Someone else’s slice might be materials science, inventing memristors or new nonvolatile devices that behave more like neurons. Another slice is low-power systems engineering, packaging neuromorphic chips with sensors and building real-world products. Yet another slice is cognitive modeling, using spiking networks to explain behavior or pathology. All those slices matter. Progress in one area can enable progress in others. You’ve got to cast a wide net to catch some fish.
So what should the field do? I think we should keep exploring. Keep building small, elegant systems that show clear advantages in energy, sparseness, or latency. Keep probing biological circuits for algorithmic inspiration, but be realistic about the chain of translation from biology to engineering. Celebrate practical wins like event-based vision and low-power sensing, while also nurturing curiosity-driven work that maps the mechanisms of computation in the brain. Funders should recognize both pathways: the ones that promise near-term utility and the ones that promise conceptual breakthroughs, even if those payoffs are uncertain and long-term.
Finally, keep enjoying the work. That pleasure is the kind of fuel that keeps long-term science alive. It’s okay to be motivated by curiosity and by utility at the same time. The interplay between the two is often where the best science happens.
Neuromorphic, to me, is both a toolbox and a set of questions. It’s a promise that brain-inspired dynamics and devices can lead to practical gains, and it’s an invitation to understand how brains compute in the first place. The field will be pulled in many directions, toward engineering, toward algorithms, and toward pure science, and that’s perfect. If we keep our hands in the dirt and keep our curiosity sharp, neuromorphic research will continue to make valuable contributions to neuroscience, computing, and society at large. And if it stumbles into the next “transformer-level” idea along the way, great. If it doesn’t, that’s okay too; the work will still have advanced our understanding, which is reason enough to keep going.
Author: Alexander James White
Paper: Indiveri, G. (2025). Neuromorphic is dead. Long live neuromorphic. Neuron, 113(20), 3311–3314.


