image: Figure 1. Diagram showing an electrically active cell in a neuronal culture and the process of recording its transmembrane potential for further analysis
Credit: Pavel V. Kuptsov, Nataliya V. Stankevich, Reconstruction of neuromorphic dynamics from a single scalar time series using variational autoencoder and neural network map, Chaos, Solitons & Fractals, Volume 191, 2025
Researchers from HSE University in Nizhny Novgorod have shown that a neural network can reconstruct the dynamics of a brain neuron model using just a single set of measurements, such as recordings of its electrical activity. The developed neural network was trained to reconstruct the system's full dynamics and predict its behaviour under changing conditions. This method enables the investigation of complex biological processes, even when not all necessary measurements are available. The study has been published in Chaos, Solitons & Fractals.
Neurons are cells that enable the brain to process information and transmit signals. They communicate through electrical impulses, which either activate neighbouring neurons or slow them down. Each neuron is enclosed by a membrane containing ion channels; the flow of charged particles, known as ions, through these channels generates the electrical impulses.
Mathematical models are used to study how neurons function. These models are often based on the Hodgkin-Huxley approach, which allows for the construction of relatively simple models but requires a large number of parameters and calculations. To predict a neuron's behaviour, several quantities typically have to be measured, including the membrane voltage, ion currents, and the state of the membrane ion channels. Researchers from HSE University and the Saratov Branch of the Kotelnikov Institute of Radioengineering and Electronics of the Russian Academy of Sciences have demonstrated that it is possible to track only a single quantity—the neuron's membrane electrical potential—and use a neural network to reconstruct the missing data.
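To give a sense of what the Hodgkin-Huxley approach involves, the sketch below integrates the classic textbook equations with standard squid-axon parameters (this is a generic illustration, not the specific model used in the study): the membrane voltage V is coupled to three gating variables m, h, n that describe the state of the ion channels.

```python
import numpy as np

def hodgkin_huxley(I_ext=10.0, dt=0.01, t_max=100.0):
    """Forward-Euler integration of the classic Hodgkin-Huxley equations.
    Units: mV, ms, uA/cm^2; standard squid-axon parameters."""
    C = 1.0                                  # membrane capacitance
    gNa, gK, gL = 120.0, 36.0, 0.3           # maximal conductances
    ENa, EK, EL = 50.0, -77.0, -54.387       # reversal potentials

    # opening/closing rate functions for the gating variables m, h, n
    am = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

    V, m, h, n = -65.0, 0.053, 0.596, 0.317  # resting state
    steps = int(t_max / dt)
    trace = np.empty(steps)
    for i in range(steps):
        INa = gNa * m**3 * h * (V - ENa)     # sodium current
        IK = gK * n**4 * (V - EK)            # potassium current
        IL = gL * (V - EL)                   # leak current
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (am(V) * (1.0 - m) - bm(V) * m)
        h += dt * (ah(V) * (1.0 - h) - bh(V) * h)
        n += dt * (an(V) * (1.0 - n) - bn(V) * n)
        trace[i] = V
    return trace

V_trace = hodgkin_huxley()   # with I_ext=10 the model fires repetitive spikes
```

Even this simplest version requires four coupled equations and roughly a dozen parameters per neuron, which is why a data-driven substitute that needs only the voltage trace is attractive.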
The proposed method consisted of two steps. First, changes in the neuron's membrane potential over time were analysed. These data were fed into a neural network—a variational autoencoder—which identified key patterns, discarded irrelevant information, and produced a compact set of characteristics describing the neuron's state. Second, a different type of neural network—a neural network map—used these characteristics to predict the neuron's future behaviour. The neural network effectively took on the role of a Hodgkin-Huxley model, but instead of relying on complex equations, it was trained on the data.
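The data flow of this two-step scheme can be sketched as follows. The networks here use random, untrained weights, all layer sizes are chosen arbitrarily, and the probabilistic part of a real variational autoencoder is omitted; the sketch only shows how a scalar voltage series is cut into windows, compressed to latent characteristics, and advanced in time by a map that also takes a control parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight multilayer perceptron with sigmoid hidden units
    (untrained; a structural stand-in for the networks trained in the study)."""
    Ws = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = 1.0 / (1.0 + np.exp(-(x @ W)))   # sigmoid nonlinearity
        return x @ Ws[-1]                        # linear output layer
    return forward

# Step 0: a scalar membrane-potential series, cut into delay windows.
v = np.sin(0.1 * np.arange(500))                 # placeholder for measured V(t)
window = 32
X = np.stack([v[i:i + window] for i in range(len(v) - window)])

# Step 1: an encoder compresses each window to a few latent characteristics
# (a real VAE would also learn a decoder and a probabilistic latent space).
encoder = mlp([window, 16, 3])
Z = encoder(X)                                   # shape: (n_windows, 3)

# Step 2: a neural network map predicts the next latent state from the
# current one together with a control parameter p (the "rotating switch").
p = 0.5
nn_map = mlp([3 + 1, 16, 3])
Z_next = nn_map(np.hstack([Z, np.full((len(Z), 1), p)]))
```

In the study both networks are trained on the data; once trained, iterating the map plays the role that integrating the Hodgkin-Huxley equations plays in the classical approach.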
Pavel Kuptsov
'With the advancement of mathematical and computational methods, traditional approaches are being revisited, which not only helps improve them but can also lead to new discoveries. Models reconstructed from data are typically based on low-order polynomial equations, such as fourth or fifth order. These models have limited nonlinearity, meaning they cannot describe highly complex dependencies without increasing the error,' explains Pavel Kuptsov, Leading Research Fellow at the Faculty of Informatics, Mathematics, and Computer Science of HSE University in Nizhny Novgorod. 'The new method uses neural networks in place of polynomials. Their nonlinearity is governed by sigmoids: smooth functions ranging from 0 to 1, which correspond to polynomial equations (Taylor series) of infinite order. This makes the modelling process more flexible and accurate.'
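The contrast drawn in the quote can be illustrated numerically (a simple demo, not taken from the paper): fit a fifth-order polynomial to a sigmoid on a finite interval, then evaluate both outside that interval. The truncated polynomial diverges, while the sigmoid stays bounded between 0 and 1.

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Fit a 5th-order polynomial to the sigmoid on [-5, 5].
x = np.linspace(-5.0, 5.0, 200)
poly = np.poly1d(np.polyfit(x, sigmoid(x), deg=5))

# Inside the fitting interval the two agree reasonably well...
err_inside = np.max(np.abs(poly(x) - sigmoid(x)))

# ...but outside it the polynomial blows up, while the
# sigmoid remains bounded in (0, 1).
err_outside = abs(poly(10.0) - sigmoid(10.0))
```

A network built from sigmoids inherits this bounded, saturating behaviour, which is one reason it can represent highly nonlinear dependencies that a low-order polynomial model cannot.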
Typically, a complete set of parameters is required to simulate a complex system, but obtaining it in real-world conditions can be challenging. In experiments, especially in biology and medicine, data is often incomplete or noisy. The scientists demonstrated that their neural network approach makes it possible to reconstruct missing values and predict the system's behaviour even from a limited amount of data.
'We take just one time series, a single example of behaviour, train a model on it, and incorporate a control parameter into it. Imagine it as a rotating switch that can be turned to observe different behaviours. After training, if we start adjusting the switch—i.e., changing this parameter—we will observe that the model reproduces various types of behaviour characteristic of the original system,' explains Pavel Kuptsov.
During the simulation, the neural network not only replicated the dynamical regimes it was trained on but also identified new ones, including a transition from trains of frequent pulses to isolated bursts. Such transitions occur as the parameters change, yet the neural network detected them independently, without having seen any such examples in its training data. This means that the neural network does not merely memorise examples; it recognises hidden patterns.
Natalya Stankevich
'It is important that the neural network can identify new patterns in the data,' says Natalya Stankevich, Leading Research Fellow at the Faculty of Informatics, Mathematics, and Computer Science of HSE University in Nizhny Novgorod. 'It identifies connections that are not explicitly represented in the training sample and draws conclusions about the system's behaviour under new conditions.'
The neural network is currently operating on computer-generated data. In the future, the researchers plan to apply it to real experimental data. This opens up opportunities for studying complex dynamic processes where it is impossible to anticipate all potential scenarios in advance.
The study was carried out as part of HSE University's Mirror Laboratories project and supported by a grant from the Russian Science Foundation.
Journal: Chaos, Solitons & Fractals
Article Publication Date: 2-Feb-2025