In the last post we looked at identifying the parameters of a system from its output using Bayes' law. Specifically, we estimated the values of a set of coins thrown onto a table, given the sum of the values of all the coins that landed heads up. We saw that as we build up a wealth of information about the system, we are able to increase our confidence in our estimate of the parameter values. In this post, we'll try to get a sense for how our confidence improves as a function of the number of measurements we have taken.
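To make the recap concrete, here is a minimal sketch of the kind of Bayesian update involved. The coin denominations, the hypothesis encoding, and the function names are illustrative assumptions, not the code from the original post; the key idea is that each observed sum multiplies each hypothesis's probability by its likelihood, and hypotheses that cannot produce the observation are eliminated outright.

```python
from fractions import Fraction
from itertools import product

def likelihood(values, observed_sum):
    """P(observed sum | hypothesized coin values): the fraction of the
    2^n equally likely heads/tails outcomes whose heads-up coins
    total observed_sum."""
    hits = sum(
        1
        for mask in product((0, 1), repeat=len(values))
        if sum(v for v, m in zip(values, mask) if m) == observed_sum
    )
    return Fraction(hits, 2 ** len(values))

def bayes_update(prior, observed_sum):
    """One application of Bayes' law: posterior is proportional to
    likelihood times prior. Hypotheses with zero likelihood for the
    observation drop out of the posterior entirely."""
    unnorm = {h: p * likelihood(h, observed_sum) for h, p in prior.items()}
    survivors = {h: p for h, p in unnorm.items() if p > 0}
    total = sum(survivors.values())
    return {h: p / total for h, p in survivors.items()}

# Toy example: two coins, each worth 1 or 5; uniform prior over hypotheses.
prior = {(1, 1): Fraction(1, 3), (1, 5): Fraction(1, 3), (5, 5): Fraction(1, 3)}
posterior = bayes_update(prior, 6)  # a sum of 6 is only possible for (1, 5)
```

In this toy case a single measurement settles the question; the charts below show how long it takes in general.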

In the charts below, we plot the number of remaining potential solutions and our confidence in our prediction for 420 different runs of our simulation and estimation procedure.

In the top chart, we can see that the estimator behaves in one of two general ways. In some cases, it is able to eliminate all but one possibility for the values of the coins, and it emerges with 100% confidence in the answer. In other cases, between 4 and 36 potential solutions remain, but our confidence in one particular solution exceeds the 99.9% threshold at which we decide to commit to an answer.
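The two exit conditions described above amount to a simple stopping rule, which might be sketched as follows (the 99.9% threshold comes from the text; the function and hypothesis names are hypothetical):

```python
def should_commit(posterior, threshold=0.999):
    """Commit to an answer when only one hypothesis survives
    (100% confidence), or when the single best hypothesis's
    posterior probability exceeds the confidence threshold."""
    if len(posterior) == 1:
        return True
    return max(posterior.values()) > threshold

# Either exit condition triggers a commit:
a = should_commit({"A": 1.0})                  # sole survivor
b = should_commit({"A": 0.9995, "B": 0.0005})  # above the 99.9% threshold
c = should_commit({"A": 0.6, "B": 0.4})        # keep collecting data
```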

From the top chart we can also qualitatively describe how the algorithm eliminates potential hypotheses. Most hypotheses are eliminated as the first several datapoints are incorporated into the estimate, and all eliminations (in our sample) occurred within the first 180 datapoints.

In the bottom chart, we see how our confidence in our best guess improves with time. One interesting qualitative feature of this chart is that our confidence does not increase monotonically as we collect new information. Presumably this is due to the randomness of the coin-flipping process: there are certain to be times when the coins repeatedly land in such a way as to suggest a hypothesis other than the actual system, only to be corrected by further measurements. There is a marked persistence in the lower bound of the envelope near the 50% mark, possibly indicating ambiguity between two final hypotheses. We also get a sense of the mean time to converge to a given level of confidence.

The above plot gives a histogram of the number of samples that must be included in the Bayes estimate before our confidence threshold is reached, with an average of 81 samples and a median of 51. In 43 out of 419 cases the estimator reached 100% confidence by eliminating all alternatives, and in the remaining 376 cases it concluded with greater than 99.9% confidence in its best guess.