Xavier Gonzalez, Stanford

State-space models (SSMs) from control theory underlie systems across science. For example, recurrent neural networks (RNNs) in deep learning and Markov chain Monte Carlo (MCMC) in probabilistic modeling can both be cast as nonlinear SSMs. However, nonlinear SSMs have long been treated as “inherently sequential” and thus ill-suited to massively parallel GPU hardware.
In this talk, I will show how I have used mathematical techniques from electrical engineering to parallelize nonlinear SSMs at scale.

In particular, I use quasi-Newton methods from optimization to provide scalable parallelization; Kalman filtering to provide stable parallelization; and the largest Lyapunov exponent (LLE) of the SSM to provide guidance on when we should try to parallelize. Taken together, these results expand and clarify our ability to parallelize nonlinear SSMs.
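
To make the central idea concrete, below is a minimal illustrative sketch (not the speaker's code) of parallelizing a nonlinear recurrence x_t = f(x_{t-1}, u_t). It linearizes f around a guess for the whole trajectory and solves each resulting linear recurrence with a parallel associative scan, repeating until the guess converges. The toy tanh transition, the scalar state, the sequence length, and the iteration count are all assumptions made for illustration, and this sketch uses exact Jacobians (full Newton); the quasi-Newton methods referenced in the abstract would replace those exact Jacobian terms with cheaper approximations.

```python
import jax
import jax.numpy as jnp


def f(x, u):
    # Toy scalar nonlinear transition (a stand-in for an RNN cell or sampler step).
    return jnp.tanh(0.9 * x + u)


def sequential_rollout(x0, us):
    # Reference answer: the usual O(T) sequential evaluation of the recurrence.
    def step(x, u):
        x_next = f(x, u)
        return x_next, x_next
    _, xs = jax.lax.scan(step, x0, us)
    return xs


def parallel_linear_recurrence(a, b, x0):
    # Solve the linear recurrence x_t = a_t * x_{t-1} + b_t for every t at once.
    # Composing the affine maps with an associative scan takes O(log T) depth.
    def combine(e1, e2):
        a1, b1 = e1
        a2, b2 = e2
        return a2 * a1, a2 * b1 + b2
    A, B = jax.lax.associative_scan(combine, (a, b))
    return A * x0 + B


def newton_parallel_rollout(x0, us, num_iters=10):
    # Newton-style parallelization: linearize f around the current trajectory
    # guess, solve the resulting linear recurrence in parallel, and repeat.
    xs = jnp.zeros_like(us)                          # guess for the whole trajectory
    df = jax.vmap(jax.grad(f, argnums=0))            # df/dx at every time step
    for _ in range(num_iters):
        prev = jnp.concatenate([x0[None], xs[:-1]])  # x_{t-1} for every t
        a = df(prev, us)
        b = jax.vmap(f)(prev, us) - a * prev         # affine remainder of the linearization
        xs = parallel_linear_recurrence(a, b, x0)
    return xs


if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    us = 0.1 * jax.random.normal(key, (512,))
    x0 = jnp.zeros(())
    err = jnp.max(jnp.abs(sequential_rollout(x0, us) - newton_parallel_rollout(x0, us)))
    print(err)  # small (near float32 precision) once the iterations converge
```

Each iteration's cost is dominated by the associative scan, whose depth grows only logarithmically in the sequence length, which is why this style of solver maps well to GPUs; the Kalman-filtering and LLE results mentioned in the abstract concern, respectively, keeping such iterations stable and judging when parallelization is worth attempting beyond friendly toy systems like this one.
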
Xavier Gonzalez is a final-year PhD student at Stanford, advised by Scott Linderman. Xavier’s research is in machine learning. He earned his undergraduate degree in Math from Harvard and a Master’s in Statistics from Oxford as a Rhodes Scholar. His dissertation develops techniques to parallelize state-space models for dramatic GPU speed-ups. Xavier is broadly interested in the intersection of algorithms, hardware, and software. Website: https://xaviergonzalez.github.io/