 Bayesian experimental design

Bayesian experimental design provides a general probability-theoretic framework from which other theories of experimental design can be derived. It is based on Bayesian inference to interpret the observations/data acquired during the experiment. This allows accounting both for any prior knowledge of the parameters to be determined and for uncertainties in the observations.
The theory of Bayesian experimental design is to a certain extent based on the theory of making optimal decisions under uncertainty. The aim when designing an experiment is to maximize the expected utility of the experiment outcome. The utility is most commonly defined in terms of a measure of the accuracy of the information provided by the experiment (e.g. the Shannon information or the negative variance), but may also involve factors such as the financial cost of performing the experiment. Which design is optimal depends on the particular utility criterion chosen.
Relations to more specialized optimal design theory
Linear theory
If the model is linear, the prior probability density function (PDF) is homogeneous and observational errors are normally distributed, the theory simplifies to the classical optimal experimental design theory.
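To illustrate this correspondence, here is a minimal sketch assuming a hypothetical linear-Gaussian model y = Xθ + ε with ε ~ N(0, σ²I) and a conjugate normal prior on θ. The posterior covariance then has a closed form, and a Bayesian analogue of the classical D-optimality criterion (the log-determinant of the posterior precision) can be compared across candidate designs:

```python
import numpy as np

# Hypothetical linear model y = X(ξ) θ + ε, ε ~ N(0, σ² I), with a
# conjugate normal prior θ ~ N(m0, S0).  The posterior is normal with
# covariance S = (S0⁻¹ + XᵀX/σ²)⁻¹, so information-based utilities
# reduce to functions of S alone, as in classical optimal design.

def posterior_cov(X, prior_cov, noise_var):
    """Posterior covariance of a linear-Gaussian model."""
    precision = np.linalg.inv(prior_cov) + X.T @ X / noise_var
    return np.linalg.inv(precision)

def bayes_d_criterion(X, prior_cov, noise_var):
    """Bayesian D-optimality: log-determinant of the posterior precision."""
    S = posterior_cov(X, prior_cov, noise_var)
    return -np.linalg.slogdet(S)[1]

# Two candidate designs for a 2-parameter model (intercept, slope):
# three measurements at clustered points vs. at spread-out points.
prior_cov = np.eye(2)
X_clustered = np.array([[1.0, 0.9], [1.0, 1.0], [1.0, 1.1]])
X_spread = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])

# The spread design carries more information about the slope.
assert bayes_d_criterion(X_spread, prior_cov, 1.0) > \
       bayes_d_criterion(X_clustered, prior_cov, 1.0)
```

With a diffuse prior (S0⁻¹ → 0) the criterion reduces to the classical D-optimality log det(XᵀX), recovering the correspondence stated above.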
Approximate normality
In numerous publications on Bayesian experimental design, it is (often implicitly) assumed that all posterior PDFs will be approximately normal. This allows for the expected utility to be calculated using linear theory, averaging over the space of model parameters, an approach reviewed in Chaloner & Verdinelli (1995). Caution must however be taken when applying this method, since approximate normality of all possible posteriors is difficult to verify, even in cases of normal observational errors and uniform prior PDF.
Mathematical formulation
Notation

  θ — parameters to be determined
  y — observation or data
  ξ — design
  p(y | θ, ξ) — PDF for making observation y, given parameter values θ and design ξ
  p(θ) — prior PDF
  p(y | ξ) — marginal PDF in observation space
  p(θ | y, ξ) — posterior PDF
  U(ξ) — utility of the design ξ
  U(y, ξ) — utility of the experiment outcome after observation y with design ξ

Given a vector θ of parameters to determine, a prior PDF p(θ) over those parameters, and a PDF p(y | θ, ξ) for making observation y given parameter values θ and an experiment design ξ, the posterior PDF can be calculated using Bayes' theorem

  p(θ | y, ξ) = p(y | θ, ξ) p(θ) / p(y | ξ),

where p(y | ξ) is the marginal probability density in observation space,

  p(y | ξ) = ∫ p(y | θ, ξ) p(θ) dθ.
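This update can be carried out numerically on a parameter grid. The following sketch, assuming a hypothetical one-parameter model y ~ N(θξ, 1) with a standard normal prior on θ, computes the posterior and the normalizing marginal p(y | ξ) by quadrature:

```python
import numpy as np

# Grid-based posterior update for a hypothetical model y ~ N(θ ξ, 1),
# prior θ ~ N(0, 1).  The marginal p(y | ξ) is the normalizing
# integral of likelihood × prior over θ.

theta = np.linspace(-5, 5, 2001)          # grid over parameter space
dtheta = theta[1] - theta[0]
prior = np.exp(-0.5 * theta**2) / np.sqrt(2 * np.pi)   # p(θ)

def posterior(y, xi):
    """p(θ | y, ξ) = p(y | θ, ξ) p(θ) / p(y | ξ), on the grid."""
    likelihood = np.exp(-0.5 * (y - theta * xi)**2) / np.sqrt(2 * np.pi)
    marginal = np.sum(likelihood * prior) * dtheta     # p(y | ξ)
    return likelihood * prior / marginal

post = posterior(y=1.0, xi=2.0)
print(np.sum(post) * dtheta)   # ≈ 1.0: the posterior integrates to one
```

For this conjugate case the grid result can be checked against the closed form: the posterior is N(ξy/(1 + ξ²), 1/(1 + ξ²)), so with y = 1 and ξ = 2 the posterior mean is 0.4.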
The expected utility of an experiment with design ξ can then be defined as

  U(ξ) = ∫ U(y, ξ) p(y | ξ) dy,

where U(y, ξ) is some real-valued functional of the posterior PDF p(θ | y, ξ) after making observation y using an experiment design ξ.
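Outside conjugate cases this expectation must usually be approximated, e.g. by Monte Carlo: draw θ from the prior, simulate y from p(y | θ, ξ), and average U(y, ξ) over the draws. A sketch, assuming a hypothetical nonlinear model y ~ N(sin(θξ), σ²) and taking the negative posterior variance as the outcome utility:

```python
import numpy as np

# Monte Carlo approximation of U(ξ) = ∫ U(y, ξ) p(y | ξ) dy for a
# hypothetical nonlinear model y ~ N(sin(θ ξ), σ²), prior θ ~ N(0, 1).
# The outcome utility U(y, ξ) is the negative variance of the
# posterior, computed on a parameter grid.

rng = np.random.default_rng(0)
theta = np.linspace(-4, 4, 1601)
dtheta = theta[1] - theta[0]
prior = np.exp(-0.5 * theta**2)
prior /= prior.sum() * dtheta

def outcome_utility(y, xi, sigma=0.1):
    """U(y, ξ): negative variance of the grid posterior p(θ | y, ξ)."""
    lik = np.exp(-0.5 * ((y - np.sin(theta * xi)) / sigma)**2)
    post = lik * prior
    post /= post.sum() * dtheta
    mean = np.sum(theta * post) * dtheta
    return -np.sum((theta - mean)**2 * post) * dtheta

def expected_utility(xi, n=200, sigma=0.1):
    """U(ξ): average of U(y, ξ) over draws from the prior predictive."""
    total = 0.0
    for _ in range(n):
        th = rng.standard_normal()                           # θ ~ p(θ)
        y = np.sin(th * xi) + sigma * rng.standard_normal()  # y ~ p(y|θ,ξ)
        total += outcome_utility(y, xi, sigma)
    return total / n

print(expected_utility(0.5), expected_utility(2.0))
```

Since the expected posterior variance can never exceed the prior variance, the estimates fall between the negative prior variance and zero; the comparison between designs is what matters for choosing ξ.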
Gain in Shannon information as utility
If the utility is defined as the prior-posterior gain in Shannon information,

  U(y, ξ) = ∫ log(p(θ | y, ξ)) p(θ | y, ξ) dθ − ∫ log(p(θ)) p(θ) dθ,
Lindley (1956) noted that the expected utility will then be coordinate-independent and can be written in two forms:

  U(ξ) = ∫∫ log(p(θ | y, ξ)) p(θ | y, ξ) dθ p(y | ξ) dy − ∫ log(p(θ)) p(θ) dθ
       = ∫∫ log(p(y | θ, ξ)) p(y | θ, ξ) dy p(θ) dθ − ∫ log(p(y | ξ)) p(y | ξ) dy,

of which the latter can be evaluated without the need to evaluate individual posterior PDFs p(θ | y, ξ) for all possible observations y. Note that the first term on the second equation line does not depend on the design ξ as long as the observational uncertainty doesn't. Likewise, the integral of p(θ) log p(θ) in the first form is constant for all ξ, so if the goal is to choose the design with the highest utility, that term need not be computed at all. Several authors have considered numerical techniques for evaluating and optimizing this criterion, e.g. van den Berg, Curtis & Trampert (2003) and Ryan (2003).
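The second form lends itself to a nested Monte Carlo estimator that needs only likelihood evaluations: average log p(yᵢ | θᵢ, ξ) − log p(yᵢ | ξ) over prior-predictive draws (θᵢ, yᵢ), approximating each marginal by an inner average over fresh prior draws. A sketch, assuming a hypothetical linear-Gaussian model y ~ N(θξ, σ²) with θ ~ N(0, 1), for which the exact gain is ½ log(1 + ξ²/σ²):

```python
import numpy as np

# Nested Monte Carlo estimate of the expected Shannon information gain:
#   U(ξ) ≈ (1/N) Σ_i [ log p(y_i | θ_i, ξ) − log (1/M) Σ_j p(y_i | θ_j, ξ) ]
# for the hypothetical model y ~ N(θ ξ, σ²), prior θ ~ N(0, 1).

rng = np.random.default_rng(1)

def log_lik(y, th, xi, sigma):
    """log p(y | θ, ξ) for the Gaussian observation model."""
    return -0.5 * ((y - th * xi) / sigma)**2 - np.log(sigma * np.sqrt(2 * np.pi))

def expected_info_gain(xi, sigma=1.0, n_outer=2000, n_inner=2000):
    th = rng.standard_normal(n_outer)                    # θ_i ~ prior
    y = th * xi + sigma * rng.standard_normal(n_outer)   # y_i ~ p(y | θ_i, ξ)
    th_inner = rng.standard_normal(n_inner)              # θ_j ~ prior
    # log p(y_i | ξ) ≈ logsumexp_j log p(y_i | θ_j, ξ) − log M
    ll = log_lik(y[:, None], th_inner[None, :], xi, sigma)
    log_marg = np.logaddexp.reduce(ll, axis=1) - np.log(n_inner)
    return np.mean(log_lik(y, th, xi, sigma) - log_marg)

xi = 2.0
print(expected_info_gain(xi), 0.5 * np.log(1 + xi**2))   # estimate vs. exact
```

The estimator is biased upward for finite inner sample size M, with the bias vanishing as M grows; Ryan (2003) analyzes estimators of this kind.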
See also
 Optimal design
 Active learning
References
 van den Berg; Curtis; Trampert (2003), "Optimal nonlinear Bayesian experimental design: an application to amplitude versus offset experiments", Geophysical Journal International 55 (2): 411–421, http://www.geos.ed.ac.uk/homes/acurtis/Papers/vandenBerg_etal2003.pdf
 Chaloner, Kathryn; Verdinelli, Isabella (1995), "Bayesian experimental design: a review", Statistical Science 10 (3): 273–304, doi:10.1214/ss/1177009939, http://www.stat.uiowa.edu/~gwoodwor/AdvancedDesign/Chaloner%20Verdinelli.pdf
 DasGupta, A. (1996), "Review of optimal Bayes designs", in Ghosh, S. and Rao, C. R., Design and Analysis of Experiments, Handbook of Statistics, 13, North-Holland, pp. 1099–1148, ISBN 0444820612, http://www.stat.purdue.edu/~dasgupta/publications/tr9504.pdf
 Lindley, D. V. (1956), "On a measure of information provided by an experiment", The Annals of Mathematical Statistics 27 (4): 986–1005, doi:10.1214/aoms/1177728069, http://projecteuclid.org/handle/euclid.aoms/1177728069
 Ryan, K. J. (2003), "Estimating Expected Information Gains for Experimental Designs With Application to the Random Fatigue-Limit Model", Journal of Computational and Graphical Statistics 12 (3): 585–603, doi:10.1198/1061860032012