Dynamical systems workshop

EMPOWER Centre - GOAL team

October 23, 5:30-7:30pm

Abstracts

Hannah Lim

Dynamics Forecasting Using the Volterra Reservoir Kernel

State-space systems transform sequences of inputs into sequences of outputs and are determined by a state map and a readout map. Reservoir computers, a popular machine learning paradigm in time series forecasting, are state-space systems in which the state map is randomly generated and a functionally simple readout map is trained to proxy the data-generating process of a time series. The canonical example of a reservoir computer is the echo state network. In recent years, various sequential kernels have been proposed for learning time series data. These kernel methods recast high-dimensional time series regression problems -- where one must estimate optimal weights for a large number of covariates -- into kernel regression problems, where the complexity of the optimisation depends on the size of the dataset rather than the number of covariates. In particular, the Volterra reservoir kernel, proposed in [1], is a sequential kernel defined as the inner product of a functional, determined by a particular choice of state map, with states in the double tensor algebra. The Volterra reservoir kernel benefits from a recursive-in-data structure that makes it computationally efficient for time series forecasting and has performed well in numerical experiments. In this talk, we explain how time series forecasting is carried out with both reservoir computing and the Volterra reservoir kernel, and present the results of numerical experiments conducted with both approaches.
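
As an illustration of the reservoir-computing side, the following minimal Python sketch trains an echo state network for one-step-ahead forecasting. The reservoir size, weight scalings, and ridge parameter are illustrative choices, not those of the talk's experiments.

import numpy as np

rng = np.random.default_rng(0)
n_res, ridge = 200, 1e-6                         # reservoir size, readout regularisation
W_in = rng.uniform(-0.5, 0.5, n_res)             # random input weights (fixed, untrained)
W = rng.normal(0, 1, (n_res, n_res))             # random recurrent weights (fixed, untrained)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # rescale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with the scalar input sequence u; return all states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)          # randomly generated state map
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead forecasting of a noisy sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.01 * rng.normal(size=2000)
X, y = run_reservoir(u[:-1]), u[1:]              # states and next-step targets

# Only the linear readout is trained, here by ridge regression.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))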

[1] L. Gonon, L. Grigoryeva, and J.-P. Ortega (2022), "Reservoir kernels and Volterra series," arXiv:2212.14641.
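
To see the kernel recasting at work, here is a generic kernel ridge regression on time-series windows -- a simplified stand-in, not the Volterra reservoir kernel of [1]. The training cost is governed by the size of the n x n Gram matrix, not by the dimension of the implicit covariates.

import numpy as np

def window_kernel(A, B, gamma=1.0):
    """Gaussian kernel between time-series windows (the rows of A and B)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

u = np.sin(np.linspace(0, 8 * np.pi, 400))             # toy series
w = 10                                                 # window length
A = np.stack([u[t:t + w] for t in range(len(u) - w)])  # n training windows
y = u[w:]                                              # one-step-ahead targets

K = window_kernel(A, A)                                # n x n Gram matrix
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(A)), y)  # complexity set by n, not by covariates
print("train MSE:", np.mean((K @ alpha - y) ** 2))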

Florian Rossmannek

Stochastic dynamics learning with state-space systems

State-space systems provide a general modelling framework for dynamic input/output tasks and appear in various modern machine learning methods. We review the mechanisms underpinning these systems from a mathematical, dynamical systems point of view. This leads us to explore the memory of these systems and their ability to create high-dimensional ‘feature’ representations of their inputs, so-called echo states. We embed these notions in the framework of attractors of dynamical systems. Importantly, the resulting theory goes beyond classical studies that considered deterministic or statistically independent inputs, and it permits us to tackle random input time series with challenging probabilistic structures.
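
One way to see the echo-state idea concretely (a numerical sketch under a standard contraction assumption, not the talk's general stochastic setting): two copies of the same reservoir, started at different states and driven by the same input, forget their initial conditions.

import numpy as np

rng = np.random.default_rng(1)
n = 100
W = rng.normal(0, 1, (n, n))
W *= 0.8 / np.linalg.norm(W, 2)     # largest singular value < 1, so the tanh update contracts
w_in = rng.uniform(-1, 1, n)

x_a, x_b = rng.normal(size=n), rng.normal(size=n)   # two arbitrary initial states
for t, u_t in enumerate(rng.normal(size=301)):      # same random input for both copies
    x_a = np.tanh(W @ x_a + w_in * u_t)
    x_b = np.tanh(W @ x_b + w_in * u_t)
    if t % 50 == 0:
        print(f"t={t:3d}  ||x_a - x_b|| = {np.linalg.norm(x_a - x_b):.2e}")
# The gap decays to zero: asymptotically the state depends on the input history
# alone, so it is a well-defined 'echo state' representation of the input.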

Cyrus Mostajeran

Invariant kernels on manifolds

Kernel methods have become a cornerstone of machine learning, enabling the modelling of complex and nonlinear relationships in data. However, applying these methods to non-Euclidean spaces, such as manifolds, presents unique challenges. In this talk, I will explore recent advances in the systematic construction of positive definite kernels on manifolds, focusing on invariant kernels on Riemannian symmetric spaces. By leveraging tools from harmonic analysis, we develop a framework for analysing, constructing, and efficiently computing these kernels. These advances open the door to numerous future applications, particularly in fields where data resides on non-Euclidean domains.
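
As an elementary example in this direction (a sketch only; the talk's harmonic-analysis construction is far more general), the kernel k(x, y) = exp(c <x, y>) on the sphere, one of the simplest Riemannian symmetric spaces, is rotation-invariant and positive definite: it depends only on the angle between x and y, and it is the restriction of a positive definite kernel on Euclidean space.

import numpy as np

rng = np.random.default_rng(2)

def sphere_kernel(X, Y, c=2.0):
    """Gram matrix of k(x, y) = exp(c <x, y>) for unit vectors in the rows of X, Y."""
    return np.exp(c * X @ Y.T)

# Sample points on the sphere S^2 and check positive definiteness numerically.
X = rng.normal(size=(50, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
K = sphere_kernel(X, X)
print("min eigenvalue:", np.linalg.eigvalsh(K).min())   # nonnegative up to round-off

# Invariance under a random rotation Q: the Gram matrix is unchanged.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
print("max |K - K_rotated|:", np.abs(K - sphere_kernel(X @ Q.T, X @ Q.T)).max())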

James McAllister

Reservoir Computing and Neuroscience: A Symbiotic Relationship?

Reservoir Computing is a framework that uses the internal dynamics of recurrent, random, and fixed neural networks to perform complex transformations of input signals, enabling such networks to carry out various computations and tasks. Learning is restricted to the output layer and can be thought of as “reading out” from the dynamical states of the reservoir. Because the internal weights are never trained, reservoirs avoid the costly and difficult training associated with other kinds of deep neural networks. This talk addresses various points where Reservoir Computing and Computational Neuroscience may be of mutual benefit. Firstly, we present how the connectivity of brain networks can inspire sparse, efficient, and robust reservoirs, and how structural features such as clustering, recurrency, and neuron cell type relate to task performance and selectivity. Secondly, we use linearised dynamics and generalised Hebbian learning algorithms to demonstrate how the Reservoir Computing framework can inspire a better understanding and modelling of potential brain mechanisms.
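
As a hint of the first point, here is a hypothetical sketch of a sparse reservoir with an excitatory/inhibitory split of its neurons (a simple cell-type constraint in the spirit of Dale's principle). The density, split, and scaling are illustrative choices, not the networks studied in the talk.

import numpy as np

rng = np.random.default_rng(3)

def sparse_ei_reservoir(n=500, density=0.05, frac_excitatory=0.8):
    """Random reservoir with sparse connectivity and signed (E/I) presynaptic neurons."""
    mask = rng.random((n, n)) < density          # ~5% of connections present
    W = np.abs(rng.normal(0, 1, (n, n))) * mask  # nonnegative weights where connected
    signs = np.where(np.arange(n) < int(frac_excitatory * n), 1.0, -1.0)
    W = W * signs[None, :]                       # each presynaptic neuron is E or I
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # rescale spectral radius
    return W

W = sparse_ei_reservoir()
print("connection density:", (W != 0).mean())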