Much recent research effort has gone into modeling dynamic networks, though many problems remain. In particular, we focus on applications of these methods to neuroscience, a domain that presents a number of difficulties for existing methods. The core tension we seek to manage is between improving the interpretability and performance of network models while simultaneously incorporating and improving dimensionality-reduction strategies. These goals are often at odds with one another, and we focus on methods that allow us to achieve all three for the systems under study.
We use generalizations of wavelets to discrete data for dimensionality reduction, as neurons exhibit sparse, multi-scale functional networks. Further, the successful capture of low-dimensional structure with wavelets suggests modeling trajectories in the transformed space implied by this basis. We collect and discuss existing methods from signal processing, dynamical systems, and statistics that allow us to accomplish these tasks. Finally, because these systems can carry significant process and measurement noise, we present recent, encouraging attempts at modeling this noise.
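To make the idea of a wavelet generalized to discrete data concrete, the sketch below computes a spectral graph wavelet transform on a toy ring graph. The graph, the band-pass kernel `g(x) = x * exp(-x)`, and the scale are all illustrative choices, not the constructions used in this work; the point is only that projecting a node signal onto `U g(sΛ) Uᵀ` localizes it in both vertex and frequency.

```python
import numpy as np

def graph_laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def graph_wavelet(adj, signal, scale, kernel=lambda x: x * np.exp(-x)):
    """Spectral graph wavelet transform of `signal` at a single scale.

    Computes U g(s * Lambda) U^T f, where L = U Lambda U^T is the
    eigendecomposition of the Laplacian and g is a band-pass kernel
    with g(0) = 0, so the constant (DC) component is removed.
    """
    lam, U = np.linalg.eigh(graph_laplacian(adj))
    return U @ np.diag(kernel(scale * lam)) @ U.T @ signal

# Ring graph on 6 nodes, a stand-in for a sparse functional network.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

f = np.zeros(n)
f[0] = 1.0                          # unit impulse at one node
coeffs = graph_wavelet(adj, f, scale=1.0)
print(coeffs.round(3))
```

Because the kernel vanishes at the zero eigenvalue, the coefficients sum to zero, and by the ring's reflection symmetry the response is symmetric about the impulse node.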
These methods employ multi-level variational inference to efficiently compute posterior distributions over the evolution of the latent and observed states of the system at hand. We conclude with a discussion of open problems, noting that while the methods discussed constitute an impressive toolkit, they also have many limitations, leaving much room for future work.
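The reparameterization trick underlying such variational methods can be shown in a deliberately small setting. The sketch below fits a Gaussian variational posterior to a single latent variable in a conjugate model (so the exact posterior is available for comparison); the model, step size, and iteration count are illustrative assumptions, not the multi-level schemes discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model:  z ~ N(0, 1),  x_i | z ~ N(z, sigma^2).
# The exact posterior over z is Gaussian, so the variational fit can be
# checked against it.
sigma = 0.5
x = rng.normal(1.0, sigma, size=20)          # simulated noisy observations

# Exact (conjugate) posterior: precisions add, means combine by precision.
post_prec = 1.0 + len(x) / sigma**2
post_mean = (x.sum() / sigma**2) / post_prec

# Variational family q(z) = N(m, exp(log_s)^2), fit by stochastic gradient
# ascent on the ELBO via the reparameterization z = m + s * eps.
m, log_s, lr = 0.0, 0.0, 0.005
m_trace = []
for step in range(8000):
    eps = rng.normal()
    s = np.exp(log_s)
    z = m + s * eps
    dlogp_dz = -z + (x - z).sum() / sigma**2   # d/dz log p(x, z)
    m += lr * dlogp_dz                         # dELBO/dm
    log_s += lr * (dlogp_dz * s * eps + 1.0)   # dELBO/dlog_s (+1 from entropy)
    if step >= 6000:                           # average late iterates
        m_trace.append(m)

m_hat = float(np.mean(m_trace))
print(f"variational mean {m_hat:.3f} vs exact {post_mean:.3f}")
```

With enough steps the variational mean and standard deviation settle near the exact posterior values, which is the sanity check one loses (and must replace with held-out diagnostics) in the non-conjugate state-space models of interest here.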