Diffusion Networks
Diffusion networks are continuous-time, continuous-state recurrent neural networks with a stochastic component; they combine neural networks with stochastic diffusion processes, hence the name. Diffusion networks are related to a wide range of popular algorithms in the neural network and machine learning literature: for example, Boltzmann machines are binary-state versions of diffusion networks, and Kalman filters are linear versions.
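The continuous-time, continuous-state dynamics can be illustrated with a small simulation. The sketch below is a minimal, hypothetical example: it assumes a drift of the form W tanh(x) - x (the parameterization used in the literature is more detailed) and integrates the resulting stochastic differential equation with the Euler-Maruyama method; the function and variable names are illustrative, not taken from any published implementation.

```python
import numpy as np

def simulate_diffusion_network(W, x0, sigma=0.5, dt=1e-3, steps=5000, rng=None):
    """Euler-Maruyama simulation of a toy diffusion-network SDE:
    dX = (W @ tanh(X) - X) dt + sigma dB   (illustrative drift only)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    trajectory = [x.copy()]
    for _ in range(steps):
        drift = W @ np.tanh(x) - x  # deterministic, network-driven part
        noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)  # Brownian increment
        x = x + drift * dt + noise
        trajectory.append(x.copy())
    return np.array(trajectory)

# Example: a 3-unit network with symmetric random weights
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
W = (W + W.T) / 2
print(simulate_diffusion_network(W, x0=np.zeros(3), rng=rng)[-1])
```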
Diffusion networks can be trained with a variety of approaches, including Minimum Velocity, Contrastive Divergence, and standard Maximum Likelihood estimation [1]. The general goal of learning is to capture entire probability distributions. One of the main advantages of this approach over other generative approaches is that, once a distribution has been learned, optimal inference is trivial: clamp the observable units and let the unobservable units settle to stochastic equilibrium.
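To make the clamping procedure concrete, the sketch below is a hypothetical illustration using the same toy drift as above, not the published inference algorithm: it holds the observable units fixed, simulates only the hidden units, and time-averages them after a burn-in period as a crude estimate of their equilibrium mean.

```python
import numpy as np

def clamped_inference(W, x_obs, obs_idx, hid_idx, sigma=0.5, dt=1e-3,
                      burn_in=20000, samples=20000, rng=None):
    """Sample the hidden units while the observable units stay clamped.

    Uses the illustrative drift W @ tanh(x) - x; only the hidden coordinates
    are updated. Returns the time-averaged hidden state after burn-in."""
    rng = np.random.default_rng() if rng is None else rng
    n = W.shape[0]
    x = np.zeros(n)
    x[obs_idx] = x_obs  # clamp the observable units to their given values
    hidden_sum = np.zeros(len(hid_idx))
    for t in range(burn_in + samples):
        drift = W @ np.tanh(x) - x
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
        # update only the hidden coordinates; the observed ones remain clamped
        x[hid_idx] += drift[hid_idx] * dt + noise[hid_idx]
        if t >= burn_in:
            hidden_sum += x[hid_idx]
    return hidden_sum / samples

# Example: clamp unit 0 and estimate the equilibrium mean of units 1 and 2
rng = np.random.default_rng(1)
W = rng.standard_normal((3, 3))
W = (W + W.T) / 2
print(clamped_inference(W, x_obs=[1.0], obs_idx=[0], hid_idx=[1, 2], rng=rng))
```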
References
^ [1] Movellan, J. R. "Contrastive Divergence in Gaussian Diffusion Processes". Neural Computation, in press.