Forecasters’ projections of interest rates vary a great deal. We use a Taylor rule to investigate two possible explanations: do differences arise because forecasters hold different projections for output growth or inflation, or because they follow different guidelines for predicting what the Federal Reserve will do with the federal funds rate? We find evidence for both. Forecasters appear to use very different projections for inflation and output growth, but they also seem to apply dramatically different Taylor rule coefficients.
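To make the two sources of disagreement concrete, a sketch of a standard Taylor rule may help. The function below uses the coefficients and 2 percent benchmarks from Taylor's original 1993 formulation; the parameter names and defaults are illustrative assumptions, not the specification estimated in this paper. Forecasters can disagree through the inputs (`inflation`, `output_gap`) or through the coefficients (`a_pi`, `a_y`, `r_star`, `pi_star`):

```python
def taylor_rate(inflation, output_gap,
                r_star=2.0, pi_star=2.0, a_pi=0.5, a_y=0.5):
    """Federal funds rate (percent) implied by a Taylor-type rule.

    inflation  -- projected inflation rate, percent
    output_gap -- projected output gap, percent of potential output
    r_star     -- equilibrium real rate (Taylor 1993 benchmark: 2.0)
    pi_star    -- inflation target (Taylor 1993 benchmark: 2.0)
    a_pi, a_y  -- response coefficients on the inflation gap and output gap
    """
    return r_star + inflation + a_pi * (inflation - pi_star) + a_y * output_gap


# Same coefficients, different projections:
print(taylor_rate(2.0, 0.0))  # inflation at target, zero gap -> 4.0
print(taylor_rate(3.0, 1.0))  # higher inflation and gap -> 6.0

# Same projections, different coefficients (a more hawkish inflation response):
print(taylor_rate(3.0, 1.0, a_pi=1.0))  # -> 6.5
```

Either channel alone can move the implied funds rate by a similar amount, which is why separating the two empirically requires estimating forecaster-specific rules.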