This article proposes two methods for deriving a statistical verdict from a null finding, allowing economists to conclude with greater confidence when “not significant” can in fact be interpreted as “no substantive effect.” The proposed methodology extends to a variety of empirical contexts where size and power matter. We demonstrate the method using the Economic Research Service's 2004 Report to Congress, which was charged with statistically identifying any unintended negative employment consequences of the Conservation Reserve Program (the Program). The report failed to identify a statistically significant negative long-term effect of the Program on employment growth, but the authors correctly cautioned that the verdict of “no negative employment effect” was valid only if the econometric test was statistically powerful. We replicate the 2004 analysis and apply new methods of statistical inference to resolve the two critical deficiencies that typically preclude economists from estimating statistical power: 1) positing a compelling effect size, and 2) estimating the variability of an unobserved alternative distribution using simulation methods. We conclude that the test used in the report had high power for detecting employment effects of -1 percent or lower resulting from the Program, equivalent to job losses that would reduce a conservative estimate of the Program's environmental benefits by a third.
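The simulation approach to power estimation described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual procedure: the sample size, standard deviation, one-sided test form, and normal critical value below are all hypothetical assumptions chosen for demonstration, not values taken from the 2004 report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (NOT values from the report):
n = 300          # hypothetical number of sample units (e.g., counties)
sigma = 4.0      # assumed s.d. of employment growth, in percent
effect = -1.0    # posited effect size: a -1 percent employment effect
alpha = 0.05     # test size
crit = -1.645    # one-sided normal critical value for H1: effect < 0
n_sims = 10_000  # number of simulated draws from the alternative

# Simulate the alternative distribution: data generated with the
# posited effect, then count how often the test rejects H0 of no effect.
rejections = 0
for _ in range(n_sims):
    sample = rng.normal(effect, sigma, size=n)
    t_stat = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
    if t_stat < crit:
        rejections += 1

power = rejections / n_sims
print(f"Estimated power at effect = {effect} percent: {power:.3f}")
```

The estimated power is the share of simulated samples in which the null of no effect is rejected when the true effect equals the posited value; a power near one means a null finding is informative evidence against an effect of that size.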