Understanding the neural substrates of consciousness is one of the most significant scientific challenges, with both philosophical and theoretical dimensions. The challenge is also clinical, as exemplified by the need to characterize disorders of consciousness and their prognosis (Edlow et al., 2021; Giacino et al., 2018; Sanders et al., 2012). The detection of consciousness independently of sensory processing and motor behavior has recently improved with the development of the Perturbational Complexity Index (PCI), which relies on the analysis of TMS-evoked potentials (Casarotto et al., 2016). Despite its remarkable performance, TMS-EEG can be challenging to implement in clinical practice because it requires complex equipment. In this work, we explored both the dynamics of spontaneous activity and the responsiveness of a large-scale brain model and found strong evidence that common network-level factors govern the two: the same dynamical regime gives rise to both perturbational complexity and complex (fluid) spontaneous activity. Using empirical data, we further showed that spontaneous activity, analyzed through appropriate metrics, can be effective in assessing consciousness in wakefulness and anesthesia.
A large body of evidence highlights a relationship between the complexity of brain signals and conscious experience, as recently reviewed (Sarasso et al., 2021). The first theoretical argument started from a phenomenological description of conscious experience that led to theorizing the expected properties of the underlying neural activity (Tononi and Edelman, 1998; Tononi et al., 1998). In this view, the two building blocks of consciousness are functional integration (each experience is unified as a whole) and functional differentiation (each experience is unique and separated from others). These theoretical considerations led to the development of metrics rooted in information theory to estimate these functional properties from neural data. Among these metrics, the PCI (relying on Lempel-Ziv complexity and entropy) revealed an interpretable and practical link between the spatio-temporal complexity of the propagation of a perturbation in the brain and the level of consciousness.
In parallel, a large literature is dedicated to understanding brain function from the perspective of dynamical systems theory. In this field, the brain is viewed as a complex open system far from equilibrium, composed of a large number of coupled components (e.g., neurons at the microscale or neural masses at the mesoscale) whose interactions are responsible for the emergence of macroscopic patterns. In such systems, drastic changes such as phase transitions or bifurcations can occur spontaneously or under the influence of a global parameter, with major effects on the dynamics. Brain simulations like the one presented in this work support the idea that rich dynamics are possible when the system's parameters are fine-tuned around specific values (Golos et al., 2015). In fact, although debated, one prominent theory in neuroscience posits that the brain self-regulates near a critical regime, in the vicinity of a phase transition (Cocchi et al., 2017; O'Byrne and Jerbi, 2022). Simply put, this property allows the brain to be flexible enough to reconfigure and adapt dynamically to a changing environment, while remaining stable enough to engage in complex sustained activities. Among other things, the critical regime is known to maximize information processing (Beggs, 2008), has been related to cognitive processes (Xu et al., 2022), and is possibly linked to consciousness (Tagliazucchi et al., 2016; Tagliazucchi, 2017; Toker et al., 2022). Dynamical systems theory in neuroscience has also found a strong paradigm in the concept of manifolds: a high-dimensional nonlinear dynamical system can display low-dimensional yet complex behavior, which is equivalent to being constrained to an attractive low-dimensional hypersurface (a manifold). Low-dimensional manifolds have long been studied in nonlinear dynamics, but as a concept for self-organization they were first theorized in synergetics (Haken, 1983), restricted to systems living close to a bifurcation, and later generalized to systems close to symmetries (Pillai and Jirsa, 2017; Jirsa and Sheheitli, 2022), notably symmetry breaking through large-scale connectivity in networks with time delays (Petkoski et al., 2016; Petkoski et al., 2018; Petkoski and Jirsa, 2019; Petkoski and Jirsa, 2022). Recent applications of this concept can be found in the domain of motor control (Brennan and Proekt, 2019) and in sleep/wake studies (Chaudhuri et al., 2019; Rué-Queralt et al., 2021).
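To make the notion of signal complexity concrete, the sketch below illustrates the Lempel-Ziv (1976) counting scheme that underlies PCI-like metrics, applied to a binarized spatio-temporal activity matrix. The binarization around the median, the toy data, and the normalization by the value expected for a random sequence are illustrative simplifications, not the exact pipeline of Casarotto et al. (2016).

```python
import numpy as np

def lz76_complexity(binary_string: str) -> int:
    """Lempel-Ziv (1976) complexity of a binary string (Kaspar-Schuster counting)."""
    n = len(binary_string)
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if binary_string[i + k - 1] == binary_string[l + k - 1]:
            k += 1
            if l + k > n:          # reached the end while copying an existing pattern
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:             # no earlier copy found: close the current word
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

# Toy usage: binarize a (sources x time) matrix around its median, flatten it,
# and scale the raw count by the value expected for a random sequence with the
# same bias (a normalization similar in spirit to the one used for PCI).
rng = np.random.default_rng(0)
activity = rng.standard_normal((10, 200))           # surrogate source activity
binary = (activity > np.median(activity)).astype(int)
s = "".join(map(str, binary.flatten()))
p = s.count("1") / len(s)
source_entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
normalized = lz76_complexity(s) * np.log2(len(s)) / (len(s) * source_entropy)
print(normalized)   # close to 1 for random data, lower for regular patterns
```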
The link between dynamics and complexity is still under theoretical exploration (Jirsa and Sheheitli, 2022), and here we bring an empirical argument in that direction. Indeed, by using a large-scale brain model, we were able to control the dynamical regimes of the network and show that fluidity and responsiveness are maximal within the same parameter range. Fluidity captures the complexity of the manifold of brain activity (i.e., the number of attractors), whereas responsiveness quantifies the complexity of the response to a perturbation. It could be argued that the noise level differs between the two settings, but noise is only present to ensure continuous exploration of the manifold and does not affect its topology (Ghosh et al., 2008). Complexity had already been shown to vary across conscious states (Hudetz et al., 2016), and the 1/f slope of the EEG spectrum was correlated with PCI on the same data we used (Colombo et al., 2019). Our validation results are thus in line with previous work on spontaneous activity and consciousness. In addition, we demonstrate that it is possible to distinguish ketamine-induced unresponsiveness from wakefulness at the individual level with a single metric.
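As a point of reference for such spontaneous-activity markers, the sketch below shows one simple way to estimate a 1/f (spectral) slope from a resting signal: fit a line to the power spectrum in log-log coordinates. The Welch parameters and fitting range are illustrative assumptions, and the procedure of Colombo et al. (2019) is more elaborate (e.g., it discounts oscillatory peaks).

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(signal, fs, fmin=1.0, fmax=40.0):
    """Estimate the 1/f exponent as the slope of a log-log linear fit to the PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return slope

# Toy usage on synthetic noise: integrated white noise has a slope near -2,
# i.e., the direction of change reported under anesthesia relative to wakefulness.
fs = 250
x = np.cumsum(np.random.randn(60 * fs))
print(spectral_slope(x, fs))
```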
Several studies have employed computational modeling approaches to investigate differences in brain dynamics across states of consciousness. These studies present varying degrees of physiological detail and focus on complementary aspects of unconsciousness. They range from simple abstract models (e.g., the Ising model), addressing for example the increased correlation between structural and functional connectivity in anesthesia (Stramaglia et al., 2017), to oscillator-based models (the Hopf model), capturing a brain state-dependent response to simulated perturbation (Deco et al., 2018). More neurobiologically realistic models (Dynamic Mean Field) have also been used to combine multimodal imaging data with receptor density maps to address the macroscopic effects of general anesthesia and their relationship to spatially heterogeneous properties of neuronal populations (Luppi et al., 2022). Similarly, anatomically constrained region-specific parameters have been shown to increase the predictive value of brain network models (Wang et al., 2019; Kong et al., 2021). Furthermore, biophysically grounded mean-field and spiking neuron models (AdEx) make it possible to address phenomena whose effects propagate across multiple scales of description, such as the molecular action of anesthetics targeting specific receptor types (Sacha et al., 2025). Related work has shown that adaptation reproduces dynamical regimes consistent with NREM sleep and wakefulness (Cattani et al., 2023), with correspondingly realistic PCI values (Goldman et al., 2021). Here, we do not address these biological questions but rather provide a proof of concept that large-scale brain models can help understand the dynamics underlying brain function. We used a model derived from QIF neurons (Montbrió et al., 2015) that lacks biophysical parameters such as ion concentrations or synaptic adaptation. Nevertheless, we demonstrate that even the symmetry breaking introduced by the connectome is sufficient to set the global working point of the brain, which in turn links the brain's capacity for generating complex behavior across the two paradigms, that is, rest and stimulation.
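For reference, the exact mean-field equations derived from a heterogeneous population of QIF neurons (Montbrió et al., 2015) describe each region \(i\) by its mean firing rate \(r_i\) and mean membrane potential \(v_i\). Written here with a generic network coupling term (global coupling \(G\) acting on connectome weights \(W_{ij}\)), shown for illustration rather than as the exact implementation used in this work, they read
\[
\begin{aligned}
\tau \dot{r}_i &= \frac{\Delta}{\pi \tau} + 2\, r_i v_i, \\
\tau \dot{v}_i &= v_i^{2} + \bar{\eta} + J \tau r_i + G \sum_{j} W_{ij}\, r_j + I_i(t) - (\pi \tau r_i)^{2},
\end{aligned}
\]
where \(\Delta\) and \(\bar{\eta}\) are the width and center of the distribution of neuronal excitabilities, \(J\) the synaptic weight, \(\tau\) the membrane time constant, and \(I_i(t)\) an external (stimulation) current.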
One major limitation of our study lies in the algorithm we used to assess complexity in the model. The empirical PCI is computed on the average evoked response following stimulation, which lasts only a few hundred milliseconds before activity returns to baseline. In the model, we spanned this calculation over ten seconds, thus capturing slower dynamics than in real data. Nevertheless, we believe this does not affect our conclusions, and in theory the neural mass model we used has no specific timescale. Several other caveats could be addressed in future work. First, the imperfect separation of groups for some of the metrics could be improved by personalized brain modeling. The size of the functional repertoire, for instance, distinguishes wakefulness from anesthesia (including ketamine) but lacks predictive power at the group level; this could be addressed with a personalized model incorporating subject-specific structural information and parameter inference. Second, more realistic mechanisms, such as neuromodulatory pathways (Taylor et al., 2022; Kringelbach et al., 2020), could be included in the models to improve their explanatory power.
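As an illustration of the kind of spontaneous-activity metric discussed above, the sketch below shows one common operationalization of the functional repertoire size: counting the distinct binarized co-activation patterns visited over time. The z-scoring and threshold are illustrative assumptions, not necessarily the exact definition used in this work.

```python
import numpy as np

def functional_repertoire_size(signals: np.ndarray, z_thresh: float = 2.0) -> int:
    """Count unique binarized co-activation patterns across time.

    signals: array of shape (n_regions, n_timepoints).
    Each region is z-scored, thresholded, and the number of distinct
    binary patterns appearing across time points is returned.
    """
    z = (signals - signals.mean(axis=1, keepdims=True)) / signals.std(axis=1, keepdims=True)
    binary = (np.abs(z) > z_thresh).astype(int)
    patterns = {tuple(col) for col in binary.T}   # one pattern per time point
    return len(patterns)

# Toy usage on surrogate data (68 regions, 5000 samples).
rng = np.random.default_rng(1)
x = rng.standard_normal((68, 5000))
print(functional_repertoire_size(x))
```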