Climate data are highly correlated through the physics and dynamics of the atmosphere. Model evaluation often involves averaging various quantities over different regions and seasons, which makes it difficult, from a statistical perspective, to quantify the significance of differences between a model and observations. Here we present a strategy that uses a set of perfect-model experiments to quantify the effects of these correlations on model evaluation metrics. This information is incorporated into Bayesian inference through a precision parameter with informative priors. We illustrate these concepts with an example of fitting a line to data containing either uncorrelated or correlated noise, and with the calibration of CAM3.1. The precision parameter may also serve as a strategy for weighting different climate model evaluation metrics within a multivariate normal framework. In the CAM3.1 example, the precision parameter plays a central role in rescaling the estimated parametric uncertainties to better accommodate structural modeling errors.
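The line-fitting illustration can be sketched as follows. This is a minimal toy example, not the paper's actual method: it fits a line by ordinary least squares to data with either white or AR(1)-correlated noise (the AR(1) coefficient `phi = 0.8` and the Gamma prior hyperparameters `a0`, `b0` are assumptions for illustration), then computes the conjugate Gamma posterior for the noise precision under an i.i.d. normal likelihood. With correlated noise the i.i.d. assumption overstates the effective sample size, which is the kind of overconfidence the precision parameter with informative priors is meant to counteract.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
x = np.linspace(0.0, 10.0, n)
true_line = 2.0 + 0.5 * x

# White (uncorrelated) noise vs. AR(1) correlated noise with the same
# marginal variance (innovations scaled by sqrt(1 - phi**2)).
sigma = 1.0
white = rng.normal(0.0, sigma, n)
phi = 0.8  # assumed AR(1) coefficient for illustration
ar1 = np.empty(n)
ar1[0] = rng.normal(0.0, sigma)
for t in range(1, n):
    ar1[t] = phi * ar1[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - phi**2))


def fit_line(x, y):
    """Ordinary least-squares intercept and slope."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [intercept, slope]


def precision_posterior_mean(resid, a0=2.0, b0=2.0):
    """Posterior mean of the noise precision tau under a conjugate
    Gamma(a0, b0) prior and an i.i.d. normal likelihood."""
    a_post = a0 + 0.5 * resid.size
    b_post = b0 + 0.5 * np.sum(resid**2)
    return a_post / b_post


results = {}
for label, noise in [("white", white), ("AR(1)", ar1)]:
    y = true_line + noise
    c0, c1 = fit_line(x, y)
    resid = y - (c0 + c1 * x)
    tau_hat = precision_posterior_mean(resid)
    results[label] = (c1, tau_hat)
    print(f"{label:6s} slope={c1:.3f} posterior-mean precision={tau_hat:.3f}")

# Correlated noise reduces the effective number of independent samples;
# for AR(1) a standard approximation is n_eff = n * (1 - phi) / (1 + phi).
n_eff = n * (1 - phi) / (1 + phi)
print(f"effective sample size under AR(1): {n_eff:.0f} of {n}")
```

Both fits recover a slope near 0.5, but the i.i.d. precision posterior treats all 200 AR(1) points as independent, whereas the effective sample size is closer to 22; an informative prior that deflates the precision accordingly widens the parametric uncertainties, mirroring the rescaling role the precision parameter plays in the CAM3.1 calibration.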