In recent weeks I have addressed how Stockton is depicted in national “rankings.” I looked at Gallup’s obesity index as well as a study ranking Stockton at the bottom of the “Most Literate Cities” list. In both cases, I argued that the results are misleading: Gallup’s rating varies widely from year to year, and the margin of error is significant given how closely packed the cities are, meaning Stockton probably was not the fattest city in America back in 2009. Similarly, I called into question the “literacy” findings of the Central Connecticut State University study, explaining that its rankings are not comprehensive and do not actually measure literacy. For both of these studies, I concluded that their data and methodology are not necessarily wrong, but that there is an argument to be made that their conclusions are somewhat misleading in how they depict our city. Gallup and CCSU are respected institutions that do good research.
The same could not be said of Forbes and its “Most Miserable Cities” rankings.
There are so many things wrong with this “research” that no one capable of looking objectively at the facts should take this list seriously.
Last year, after Forbes named Stockton the nation’s most miserable city for the second time in three years, I wrote a piece that appeared in the Stockton Record detailing why Stocktonians should put almost no stock in the Forbes article. This year, the magazine rated the city 11th. Because the ranking system Forbes utilized remained essentially the same, I decided to simply repost what I wrote a year ago, as I feel it still accurately explains just how fraudulent Forbes’ methodology is. It is a bit lengthy, but it thoroughly dispels any semblance of credibility these rankings ever held. When someone tries to tell you we live in the “most miserable city,” please forward them this article. Enjoy…
This article originally appeared in the Record on February 12th, 2011.
Imagine for a moment that you came across an academic study that rates the ineffectiveness of drug treatment programs. Strangely, the full results of the study are not available, and only a fraction of the programs in the analysis are listed. Furthermore, this study for some reason does not have a methodology section and provides limited insight into how the ratings were compiled. No formulas are provided, no data released.
Surely, this study would be dismissed as irrelevant; its results would not be highly publicized, nor would they affect any kind of real-life decisions.
But remarkably, this is exactly what Forbes has done with its annual “America’s Most Miserable Cities” article.
Forbes has been able to dress up a simple comparison of uncorrelated data and offer it as “research,” all while keeping the actual data and results largely out of view of the public. This farce would be comedic if the article did not have such far-reaching consequences for the wrongfully implicated cities.
Full disclosure: I am a proud native of Stockton, winner of Forbes’ dubious Most Miserable City title two of the past three years.
A student of policy is taught that any serious research needs to be meticulous, the methodology must be solid, and the data must hold up to scrutiny. Forbes’ list should make any intelligent person cringe: The antithesis of good research, it is arbitrary data strung together to make a list to get more website hits.
The “methodology” provided by Forbes would be laughable if people did not take these rankings seriously. They do. Forbes is a respected publication, and when it publishes such articles, people take notice.
Why is the Forbes methodology unreliable? Because many of its indicators are only loosely related to “misery.” Forbes uses just 10 indicators to determine the level of “misery” in more than 200 metropolitan statistical areas. These indicators include some intuitive factors such as tax levels, unemployment, housing prices, foreclosure rates and violent crime. These make sense for the most part, as studies show that anxiety and mental health can be linked to these factors and are relatively stable across geographic regions.
However, inexplicably, Forbes uses some indicators that seem completely unscientific and arbitrary. For example, the performance of local professional sports teams is used as an indicator of misery.
While this indicator may seem fun and cutesy to the Forbes “methodologists,” it is ultimately useless, especially when attempting to correlate this measure with a smaller metropolitan statistical area such as Stockton, Merced, Modesto, etc.
First, professional sports teams are not present in most of the top 200 MSAs, and Forbes has also thoughtlessly excluded data on NCAA and minor league teams. Set aside the issues that misery and sports are only loosely correlated, that the relationship cannot be generalized across regions (Bostonians are probably more miserable when the Red Sox lose than Kansas Citians are when the Royals lose), and that it is complicated by cities with multiple teams (Mets fans in New York are most likely not less miserable when the Yankees win). Even then, it is reckless to believe that only professional sports teams affect mood, as major NCAA programs and minor-league teams fuel enthusiasm in many MSAs.
Stockton’s University of the Pacific men’s basketball team (NCAA Division I) regularly wins its conference, and the city’s ECHL hockey team has led the league in attendance in all but one year of its existence, details that are lost on Forbes.
Forbes should not include sports as a factor unless it is willing to put in the due diligence to ensure that this indicator is thoroughly measured and not just a number in a column in a spreadsheet, which seems to be about the highest level of “analysis” Forbes is willing to conduct.
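To see concretely why one arbitrary indicator matters so much, consider a toy rank-sum index of the kind Forbes appears to use. Everything below is invented for illustration (the city names, the numbers, even the “sports futility” score); Forbes has never published its actual data or formulas. The point is only that summing per-indicator ranks lets one frivolous column flip the “winner”:

```python
# Toy illustration of how a rank-sum "misery index" can flip when one
# arbitrary indicator is added. All figures are invented for this example;
# Forbes has never released its real data or methodology.

def rank_sum(cities, indicators):
    """Rank each city on each indicator (higher raw value = more miserable)
    and sum the ranks; the highest total is crowned 'most miserable'."""
    totals = {c: 0 for c in cities}
    for ind in indicators:
        # Sort ascending, so rank 1 = least miserable on this indicator.
        ordered = sorted(cities, key=lambda c: cities[c][ind])
        for rank, city in enumerate(ordered, start=1):
            totals[city] += rank
    worst = max(totals, key=totals.get)
    return worst, totals

# Hypothetical data: unemployment %, foreclosure rate %, and a made-up
# "sports futility" score (higher = worse team record).
cities = {
    "City A": {"unemployment": 14.0, "foreclosures": 4.8, "sports": 1},
    "City B": {"unemployment": 13.0, "foreclosures": 4.0, "sports": 8},
    "City C": {"unemployment": 12.0, "foreclosures": 5.0, "sports": 9},
}

worst_core, _ = rank_sum(cities, ["unemployment", "foreclosures"])
worst_with_sports, _ = rank_sum(cities, ["unemployment", "foreclosures", "sports"])
print(worst_core)         # → City A (worst on the economic indicators alone)
print(worst_with_sports)  # → City C (adding sports flips the "winner")
```

On the economic indicators alone, City A is “most miserable”; bolt on the sports column and City C suddenly takes the title, even though nothing about either city’s economy changed. That is the fragility hiding behind an opaque index.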
Another indicator used is weather. Psychologists have linked weather patterns to mood, and seasonal depression is a very real thing.
However, Forbes mentions in its article that this indicator is “less serious.” I have to believe that the good people of the Midwest and Northeast do not consider the weather a “less serious” indicator of their misery as they dig through multiple feet of snow.
All the while, the people of Stockton, Merced, Modesto, Sacramento, Vallejo, Salinas, Bakersfield, Miami and West Palm Beach, Fla., are miserably wearing shorts and lamenting the 50- to 60-degree winter days.
Weather is not a lesser indicator and should be weighted heavily: weather conditions greatly affect an area’s economy, costing millions in cleanup, lost wages and lost productivity, clearly more than just a nuisance. Of course, we do not know the weights Forbes assigned to its indicators.
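And undisclosed weights are not a small omission. A minimal sketch, again with invented city names and scores rather than anything from Forbes, shows that the very same data can crown different “most miserable” cities depending on how heavily weather is weighted:

```python
# With undisclosed weights, identical indicator data can produce opposite
# rankings. All numbers here are hypothetical, invented for illustration.

def weighted_score(values, weights):
    """Weighted sum of normalized indicator scores (higher = more miserable)."""
    return sum(values[k] * weights[k] for k in weights)

# Hypothetical normalized scores on a 0-1 scale (1 = worst).
cities = {
    "City X": {"unemployment": 0.9, "weather": 0.2},
    "City Y": {"unemployment": 0.4, "weather": 0.95},
}

def worst(weights):
    # The city with the highest weighted score "wins" the misery title.
    return max(cities, key=lambda c: weighted_score(cities[c], weights))

# Treat weather as "less serious" and City X is most miserable...
light_weather = {"unemployment": 1.0, "weather": 0.2}
# ...weight weather heavily and City Y takes the title instead.
heavy_weather = {"unemployment": 0.3, "weather": 1.0}

print(worst(light_weather))  # → City X
print(worst(heavy_weather))  # → City Y
```

Without published weights, readers have no way to tell which of these two very different rankings Forbes actually computed, which is exactly why the list cannot be checked or reproduced.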
Lastly, there is an indicator for corruption that takes into account public officials who have been prosecuted. Now, I cannot for the life of me find any literature that links public officials’ corruption to misery. There seems to be not even a shred of evidence to indicate that when a public official is shamed, it contributes to the misery of the population. Even if there were a correlation, we will never know how Forbes uses this measure in its methodology.
Aside from these uninformed choices of indicators, perhaps more troubling are the potential indicators Forbes decided not to use, such as population decline. If these cities are truly miserable, then their populations should be declining. However, eight of the 10 “most miserable” cities have experienced population growth in recent years, with the California cities seeing an exceptional rise in new residents over the past decade.
Stockton’s population rose approximately 19 percent in the past decade, with the other California cities on the list showing similar trends. Meanwhile, Rust Belt and Frost Belt cities are hemorrhaging residents. I would argue that if Stockton were truly miserable, people would be leaving in droves, not continuing to move in from the Bay Area.
How about home affordability rates?
Forbes seems to give considerable weight to housing statistics that show only negative effects (high prices, foreclosures), but what about positive housing stats?
Seventy-nine percent of homes sold in Stockton in the third quarter of 2010 were affordable to families earning the national median income, as opposed to just 4.6 percent in late 2005. If Forbes is going to focus so heavily on housing to determine misery, then housing affordability rates should be just as important a factor.
The biggest omission may be the absence of any kind of survey of the people who actually live in these cities. It does no good to rate “misery” without researching how the residents of these areas feel about their situations. If the people there are happy, are they really living in a miserable city?
Not surprisingly, Forbes never addresses this omission. I presume conducting research with actual people is beyond the skill set of the Forbes “methodologists,” as they barely exhibit an elementary grasp of econometrics and data analysis.
Stockton City Manager Bob Deis was quoted in the Forbes article saying, “I find (the ranking for Stockton) unfair, and it does everybody a disservice. …The data is the data, but there is a richer story here.”
I disagree. This is not data at all.
Unfortunately, employers, businesses and residents will take Forbes’ flimsy excuse for research as fact. Real research is supposed to help analyze social problems and produce practical policy solutions, but this list provides no actual benefit to the academic community, and Forbes is wrong to pass it off as research.
Forbes’ rankings are irresponsible and show a complete lack of academic integrity and an absence of the understanding of basic research design.