MBA: New Quarter, New Record in Foreclosure Activity

The MBA’s quarterly National Delinquency Survey is out today, and it shows that homes entering foreclosure during the second quarter set a record high, beating the previous record set during the first quarter — an indication that the current downturn in housing has yet to reach a bottom.

The rate of loans entering the foreclosure process was 0.65 percent on a seasonally adjusted basis, seven basis points higher than the previous quarter and up 22 basis points from one year ago. This quarter’s foreclosure starts rate is the highest in the history of the survey, with the previous high being last quarter’s rate.

The MBA said that residential delinquency rates stood at 5.12 percent of loans outstanding in the second quarter on a seasonally adjusted basis, up 28 basis points from one quarter ago, and up 73 basis points from one year ago. The delinquency rate does not include loans in the process of foreclosure. From the press statement:

… there is a clear divergence in performance between fixed rate and adjustable rate mortgages due to the impact of rate resets. While the seriously delinquent rate for prime fixed loans was essentially unchanged from the first quarter of the year to the second, and the rate actually fell for subprime fixed rate loans, that rate increased 36 basis points for prime ARM loans and 227 basis points for subprime ARM loans.

Unfortunately, the MBA applied the same flawed analysis to this quarter’s numbers that it used in last quarter’s report, alleging that the foreclosure rate would have “dropped” were it not for the effects of a few states:

“What continues to drive the national numbers, however, is what is happening in the states of California, Florida, Nevada and Arizona. Were it not for the increases in foreclosure starts in those four states, we would have seen a nationwide drop in the rate of foreclosure filings … the problems in these states will continue, and they will continue to drive the national numbers, but they do not represent a national problem.” [attributed to the MBA’s Doug Duncan]

In addition to the fallacy of trying to compartmentalize the real estate problem, the above language suggests that the MBA failed to properly analyze its data before making its claim. I’ve written about this before, but I’ll explain it again. Anyone with a rudimentary exposure to statistics will recognize the concept of a trimmed mean. In plain English: you can’t compare the national foreclosure rate to some “modified” national foreclosure rate that excludes the four highest-rate states and then conclude that there is a difference between the two. That is especially true here, where Doug Duncan is essentially arguing that the four high-rate states are the statistical equivalent of outliers. Instead, you’d need to compare the national foreclosure rate to a “modified” rate calculated as a proper trimmed mean; that is, you’d have to exclude both the highest AND the lowest rate states from the “modified” rate before making any meaningful comparison or drawing any conclusion. (*see note)

I don’t see any evidence that this was done this quarter or last, which is why I’m suspicious of the claims being thrown out there. And as an industry guy, I really hate lumping the MBA in with the NAR, but both have been showing an apparent penchant for sloppy analytical work.

There’s much more in the full press statement; be sure to read all of it.

*NOTE: For any stat-heads out there, yes, there are other methods that can be used to control for the effect of outliers, and there are other measures of central location that can be used as well. The trimmed mean example above is offered mainly for illustrative purposes. The point is that any attempt to control for outliers requires considering outliers both high AND low, not selectively removing the effect of high-end outliers in one period and then comparing that result to a previous period (where the same data points ostensibly weren’t identified as outliers). Doing so leaves little doubt as to what the outcome of such a flawed comparison will be.
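To make the trimmed-mean point concrete, here is a minimal sketch in Python (using NumPy and SciPy). The state-level foreclosure-start rates below are invented purely for illustration; they are not the MBA’s actual figures. It contrasts the flawed “drop only the four highest-rate states” calculation with a proper symmetric trimmed mean that removes the same number of states from both tails.

```python
import numpy as np
from scipy import stats

# Hypothetical state-level foreclosure-start rates (percent); NOT the MBA's data.
rng = np.random.default_rng(0)
rates = rng.normal(loc=0.60, scale=0.10, size=50).clip(min=0.05)
rates[:4] = [1.20, 1.10, 1.05, 1.00]   # stand-ins for the four high-rate states

national = rates.mean()

# Flawed "modified" rate: exclude only the four highest-rate states.
drop_top_four = np.sort(rates)[:-4].mean()

# Proper trimmed mean: cut the same share from BOTH tails
# (8% per tail here, i.e., 4 of 50 states off each end).
trimmed = stats.trim_mean(rates, proportiontocut=0.08)

print(f"national average rate:     {national:.3f}%")
print(f"drop top four only:        {drop_top_four:.3f}%  (biased low by construction)")
print(f"symmetric 8% trimmed mean: {trimmed:.3f}%")
```

Whatever data you feed it, the “drop the top four” figure will come in below the untrimmed national average by construction, which is exactly why that comparison can’t support a claim that the rate “would have dropped.”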
