Dynamical Alignment: A Principle for Adaptive Neural Computation
Abstract:
We introduce Dynamical Alignment, a unifying principle that explains how neural networks adapt their computational mode by matching the temporal structure of their inputs to intrinsic neuronal timescales. Contrary to the prevailing assumption that computation is determined primarily by network architecture, we demonstrate that controlling input dynamics steers a fixed spiking neural network (SNN) into two distinct operational modes: a dissipative mode with sparse, energy-efficient coding, and an expansive mode providing dense, high-capacity representations. By converting static data into controlled spatiotemporal trajectories, we reveal a bimodal computational landscape governed by the global phase-space volume change, measured by the sum of Lyapunov exponents (Σλᵢ), rather than by local chaotic sensitivity. This alignment resolves the classical SNN performance–efficiency trade-off, elevating SNNs to the accuracy of artificial neural networks (ANNs) on vision benchmarks while retaining superior energy efficiency. Cross-domain validation, spanning deep vision, reinforcement learning, and feature binding, shows that dynamical alignment functions as "dynamic software" on fixed hardware, enabling task-dependent switching between stability and flexibility. These findings suggest a path toward adaptive, energy-aware neural systems closer to biological computation, and they challenge the field's focus on static architecture in favor of time-conditioned computation.
Citation: Chen, X. (2025). Dynamical Alignment: A Principle for Adaptive Neural Computation. arXiv preprint arXiv:2508.10064.
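
The abstract's governing quantity, Σλᵢ, is the sum of Lyapunov exponents, which determines whether a dynamical system contracts (Σλᵢ < 0, dissipative) or expands (Σλᵢ > 0, expansive) phase-space volume. As a minimal sketch of how this quantity can be estimated, the Python snippet below uses the textbook relation that Σλᵢ equals the time-averaged trace of the flow's Jacobian along a trajectory, with the Lorenz system as a stand-in dynamical system; it is an illustration of the standard definition, not a reproduction of the paper's spiking-network pipeline.

```python
import numpy as np

# For a continuous flow dx/dt = f(x), the sum of Lyapunov exponents
# equals the time average of tr(J) = div(f) along a trajectory:
#   sum(lambda_i) < 0  ->  phase-space volume contracts (dissipative mode)
#   sum(lambda_i) > 0  ->  phase-space volume expands  (expansive mode)

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz flow, used here only as a stand-in dynamical system."""
    return np.array([
        sigma * (x[1] - x[0]),
        x[0] * (rho - x[2]) - x[1],
        x[0] * x[1] - beta * x[2],
    ])

def jacobian_trace(x, sigma=10.0, beta=8.0 / 3.0):
    """Trace of the Lorenz Jacobian (state-independent for this system)."""
    return -(sigma + 1.0 + beta)

def sum_lyapunov_estimate(x0, dt=1e-3, steps=50_000):
    """Estimate sum(lambda_i) as the trajectory average of tr(J)."""
    x, acc = np.asarray(x0, dtype=float), 0.0
    for _ in range(steps):
        acc += jacobian_trace(x)
        x = x + dt * lorenz(x)  # forward-Euler integration step
    return acc / steps

print(sum_lyapunov_estimate([1.0, 1.0, 1.0]))  # ~ -13.67: dissipative regime
```

For Lorenz the Jacobian trace is constant, so the estimate converges immediately to -(σ + 1 + β) ≈ -13.67; in a spiking network the trace varies along the trajectory, and the same time average distinguishes the two operational modes described above.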
