Determining the optimum allocation of resources for testing Department of Defense (DoD) systems is challenging, primarily due to the lack of an accepted and easily obtained value for test results. Past attempts to quantify test value have focused on prioritization schemes or estimates of the cost savings postulated to occur when problems are found and fixed as early as possible. These methods have not gained traction, largely because of the difficulty of obtaining cost estimates and historical data. Moreover, a cost metric does not capture the true value of DoD testing, which is to reduce technical uncertainty and programmatic risk. We propose a methodology that determines test value by estimating the amount of uncertainty reduction a particular test is expected to provide, using Shannon's information entropy as the basis for the estimate. We apply the methodology to a small aircraft portfolio consisting of five actual DoD flight tests and to a simulated large test portfolio with a single decision maker performing a cost-constrained resource allocation. We conclude that measuring test value by uncertainty reduction is easy to apply, yields intuitively appealing results, and produces portfolios that outperform those selected using the existing subjective DoD process.
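The entropy-based valuation described above can be illustrated with a minimal sketch. This is not the paper's implementation; the binary pass/fail outcome model and the `sensitivity`/`specificity` parameters are assumptions chosen for illustration. The idea is to value a test as its expected reduction in Shannon entropy about an uncertain technical outcome:

```python
# Hypothetical sketch (not the authors' method): value a test as the expected
# reduction in Shannon entropy about a binary technical outcome, e.g. whether
# a system meets a requirement.
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) uncertainty."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_info_gain(prior, sensitivity, specificity):
    """Expected entropy reduction provided by a test.

    prior        -- P(requirement met) before the test (assumed input)
    sensitivity  -- P(test passes | requirement met)   (assumed input)
    specificity  -- P(test fails  | requirement not met) (assumed input)
    """
    # Probability the test passes, marginalized over the true state.
    p_pass = sensitivity * prior + (1 - specificity) * (1 - prior)
    p_fail = 1 - p_pass
    # Posterior belief after each possible test outcome (Bayes' rule).
    post_pass = sensitivity * prior / p_pass if p_pass else 0.0
    post_fail = (1 - sensitivity) * prior / p_fail if p_fail else 0.0
    # Expected posterior entropy, weighted by outcome probabilities.
    expected_posterior = p_pass * entropy(post_pass) + p_fail * entropy(post_fail)
    return entropy(prior) - expected_posterior

# A perfectly informative test removes all prior uncertainty:
print(expected_info_gain(0.5, 1.0, 1.0))  # 1.0 bit
```

Under this sketch, a portfolio could be ranked by the total expected information gain of its candidate tests subject to a cost constraint, which mirrors the cost-constrained allocation the abstract describes.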