Abstract

Price indexes based on the repeat-sales model are revised all the way back to the beginning of the sample each time a new quarter of information becomes available. Revisions can adversely affect practitioners who rely on the index. In this paper we examine the revision process both theoretically and empirically. The theory behind the repeat-sales method says that revisions should lower the standard error of the estimated indexes; we prove that the revised index is, in fact, more efficient than the original one. This implies that large samples should make revisions trivial. However, our data, and the Freddie Mac-Fannie Mae data, suggest that revisions are large, insensitive to sample size, and systematic: revisions are more likely to be downward than upward. In Los Angeles and Fairfax, revisions are usually downward and statistically significant. This bias in initial repeat-sales estimates is caused by sample selectivity: properties with only one or two years between sales do not appreciate at the same rate as other properties. We hypothesize that these “flips” are improved (possibly cosmetically) between sales. One implication of our analysis is that flips should be removed or downweighted before calculating repeat-sales indexes; the same model estimated without flips appears free of bias. We find only small gains in efficiency from adding up to 4,300 observations to a base of 1,200.
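
To make the mechanics concrete, below is a minimal sketch (in Python with NumPy) of the classic Bailey-Muth-Nourse repeat-sales regression that underlies indexes of this kind. It illustrates why one new period of sale pairs revises every earlier index value, and how a flip filter would be applied before estimation. The function name, the toy sale pairs, and the two-year flip cutoff are illustrative assumptions, not the paper's estimator or data; periods here can be read as years (the paper's indexes are quarterly, while flips are defined by a one-to-two-year holding period).

```python
# A minimal sketch of the Bailey-Muth-Nourse (BMN) repeat-sales regression,
# showing why one new period of data revises the whole index history: the
# added sale pairs enter the same OLS problem, so every coefficient is
# re-estimated. All data below are invented for illustration.
import numpy as np

def repeat_sales_index(pairs, n_periods):
    """OLS estimates of the cumulative log index b_1..b_{T-1}, with b_0 = 0.

    pairs: (first_period, second_period, log_price_ratio) tuples, with
           periods indexed 0..n_periods-1 and period 0 as the base.
    """
    X = np.zeros((len(pairs), n_periods - 1))
    y = np.zeros(len(pairs))
    for i, (t0, t1, dlogp) in enumerate(pairs):
        if t0 > 0:
            X[i, t0 - 1] = -1.0   # -1 dummy in the period of the first sale
        X[i, t1 - 1] = 1.0        # +1 dummy in the period of the second sale
        y[i] = dlogp
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.concatenate(([0.0], beta))  # log index, base period fixed at 0

# Hypothetical sale pairs: (first period, second period, log price change).
initial = [(0, 2, 0.05), (1, 3, 0.07), (0, 3, 0.09), (2, 4, 0.04)]
index_now = repeat_sales_index(initial, 5)

# A new period of data arrives: pairs whose second sale falls in period 5
# join the sample, and re-estimation revises *every* earlier index value.
extended = initial + [(1, 5, 0.12), (3, 5, 0.06)]
index_revised = repeat_sales_index(extended, 6)

print("initial estimates:", np.round(index_now, 4))
print("revised estimates:", np.round(index_revised[:5], 4))  # same periods

# The paper's remedy for the downward-revision bias: remove or downweight
# "flips" (holding periods of two years or less; here, period gaps <= 2)
# before estimating. With a real sample one would then re-run
# repeat_sales_index on no_flips.
no_flips = [p for p in extended if p[1] - p[0] > 2]
print(f"{len(extended) - len(no_flips)} of {len(extended)} pairs are flips")
```

In this toy sample the extended system is overdetermined and internally inconsistent, so least squares spreads the discrepancy across all coefficients; that is exactly the mechanism by which a single new period of sale pairs revises the index back to the start of the sample.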
