The Peril of Chasing Active Mutual Fund Performance Ratings

Mutual fund rating systems really only do a great job of “predicting” the past. Larry Swedroe reviews the research.

The holy grail for mutual fund investors is the ability to identify in advance which of the very few active mutual funds will outperform in the future.

To date, an overwhelming body of academic research has demonstrated that past performance not only fails to guarantee future performance (as the required SEC disclaimer states), but has almost no value whatsoever as a predictor—with the exception that poor performance combined with high expenses predicts future poor performance.

The research has shown that not only is there a lack of persistence beyond the randomly expected among mutual funds, but also among hedge funds and even pension plans—despite their use of high-powered consultants who advise them on identifying the future winners.

The evidence on plan sponsor performance is so strong that a 2008 study by Amit Goyal and Sunil Wahal, “The Selection and Termination of Investment Management Firms by Plan Sponsors,” found that if plan sponsors had remained with the investment managers they regularly fired, their returns would have been larger than those actually delivered by the newly hired managers.

The bottom line is that past performance’s only value seems to be in showing that poor performance tends to persist, with the likely explanation being high expenses.

Superstar Or Superdud?

Jerry Parwada and Eric K.M. Tan contribute to the literature on the predictive value of past performance through their February 2016 study, updated in October 2017, “Superstar Fund Managers: Talent Revelation or Just Glamor?”

The authors examined the performance of funds managed by the winners of Morningstar’s coveted Fund Manager of the Year (FMOY) award. Morningstar selects its FMOY winners based on an expectation of future alpha.

Here is how Morningstar presented its 2016 winners: “To be nominated for Fund Manager of the Year, the manager’s mutual fund must be among the 1,200 that receive Morningstar Analyst Ratings and earn a rating of Gold, Silver, or Bronze. The medal rating indicates that our analysts believe a fund will outperform its category peers and/or benchmark on a risk-adjusted basis over the long haul. Looking at their individual coverage lists, analysts nominate Morningstar Medalist funds that have strong recent and long-term risk-adjusted returns, excellent stewardship practices, and broad shareholder bases. Our asset-class teams whittle down the list to a group of finalists. Then the entire analyst team meets to debate the merits of the finalists in each category, and, following those discussions, analysts vote to determine the winners.”

Flows Chase Ratings

It has already been established in the literature that investors value Morningstar’s ratings, as fund flows tend to follow them. For example, the study “Morningstar Ratings and Mutual Fund Performance,” by Christopher Blake and Matthew Morey, found that an amazing 97% of fund inflows went into four- and five-star funds, while even three-star funds experienced outflows.

Parwada and Tan examined not only the effect of mutual fund managers’ superstar status (which comes with being named FMOY) on money flows, but also on their risk-taking behavior. Their study covered FMOY winners over the period 1995 through 2012 and compared their performance to the performance of the other finalist managers. Following is a summary of their findings:

  • They confirmed that investors respond positively to mutual fund managers who win a prominent fund-manager-of-the-year award based on a proven long-term record. FMOY winners garnered 21% more assets over the 12-month period following the award announcements.
  • Award-winning managers generate positive risk-adjusted performance in the very short term. FMOY winners generated outperformance of 1.6% for the three-month period following award announcements using the Carhart four-factor (beta, size, value and momentum) model. However, that outperformance disappeared when measured during the subsequent six-, nine-, 12-, 24- and 36-month periods. The results were statistically significant at the 1% level.
  • Award-winning managers do not take on increased risks or trade more actively as implied by attention-induced incentives. There was no evidence of managers becoming overconfident (which could negatively impact performance) after receiving the award.
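The Carhart four-factor alpha cited above is the intercept of a time-series regression of a fund's excess returns on market, size, value and momentum factor returns. A minimal sketch using numpy with synthetic data (the fund, the factor series and the 10-basis-point monthly alpha are all illustrative assumptions, not figures from the study):

```python
import numpy as np

def carhart_alpha(fund_excess, factors):
    """Estimate annualized Carhart four-factor alpha via OLS.

    fund_excess: (T,) monthly fund returns minus the risk-free rate
    factors:     (T, 4) columns = market (MKT-RF), size (SMB),
                 value (HML), momentum (UMD) factor returns
    """
    T = len(fund_excess)
    X = np.column_stack([np.ones(T), factors])  # intercept + 4 factors
    coefs, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
    monthly_alpha = coefs[0]
    return monthly_alpha * 12, coefs[1:]  # annualized alpha, factor loadings

# Synthetic example: 10 years of monthly data for a fund that tracks the
# market factor one-for-one plus 10 bp/month of alpha (assumed, for illustration).
rng = np.random.default_rng(0)
factors = rng.normal(0.005, 0.03, size=(120, 4))
fund = 0.001 + 1.0 * factors[:, 0] + rng.normal(0, 0.002, 120)

alpha, loadings = carhart_alpha(fund, factors)
```

Here the regression should recover an annualized alpha near 1.2% and a market loading near 1.0, with size, value and momentum loadings near zero; a real test of significance would also require the standard error of the intercept.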

Success A Red Flag?

An interesting finding is that winning managers generally manage smaller and younger funds when compared to finalist managers. The average fund size of award winners was less than $200 million in assets under management.

Thus, not many investors were benefiting from the winners’ success. This raises the question: Does success sow the seeds of its own destruction? To answer it, consider Parwada and Tan’s finding regarding short-term outperformance, which is consistent with the findings of prior research.

It’s also consistent with the rational expectations equilibrium argument Jonathan Berk and Jules van Binsbergen present in their paper, “Measuring Skill in the Mutual Fund Industry.”

They write that skill exists among superstar fund managers, and investors recognize this skill and reward the managers with capital inflows. The increased capital arbitrages away outperformance due to diseconomies of scale—success does contain the seeds of future erosion.

However, this isn’t the only possible explanation for the short-term outperformance that Parwada and Tan documented. Before concluding there is skill, we should consider other theories.

‘Persistent Flow’

A second theory is that mutual funds’ short-term predictability is driven by stock return momentum. A third explanation for the positive flow/performance relation is what’s called the “persistent-flow” hypothesis. Research has found that investor flow-related buying pushes up stock prices beyond the effect of stock return momentum, and that fund performance owes more to flow-related trades than to managers’ skill.

Because fund flows have been shown to be highly persistent, mutual funds with past inflows (outflows) are expected to receive additional capital (redemptions), expand (liquidate) their existing holdings, and drive up (down) their own performance in subsequent periods. This is a very different explanation than the “smart-money” hypothesis.
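The persistent-flow mechanism can be illustrated with a toy simulation (the flow-persistence and price-impact coefficients below are assumptions chosen for illustration, not estimates from the literature): if flows follow a persistent process and each period's flow mechanically pushes up the prices of the fund's holdings, returns become positively autocorrelated even though the manager has no skill.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 10_000
phi = 0.8      # flow persistence: AR(1) coefficient (assumed)
impact = 0.5   # price impact per unit of flow (assumed)

# Flows are persistent: this period's flow carries over most of last period's.
flows = np.zeros(T)
for t in range(1, T):
    flows[t] = phi * flows[t - 1] + rng.normal(0, 1)

# Fund return = flow-driven price impact + idiosyncratic noise; no skill term.
returns = impact * flows + rng.normal(0, 1, T)

# Persistent flows alone induce positive return autocorrelation, i.e.
# apparent short-term performance persistence without any manager skill.
autocorr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
```

In this setup the first-order return autocorrelation comes out around 0.3, entirely generated by flow persistence; shutting off the flow channel (impact = 0) would drive it to zero.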

In their study “What Drives the ‘Smart-Money’ Effect? Evidence from Investors’ Money Flow to Mutual Fund Classes,” published in the January 2017 issue of the Journal of Empirical Finance, George Jiang and H. Zafer Yuksel found that the flow/performance relationship explains the short-term outperformance.

Before concluding, I’ll review some of the other evidence on Morningstar ratings and future performance.

When You Wish Upon A Morningstar

The November 2009 issue of Morningstar’s FundInvestor provided the following evidence on its five-star funds:

  • The 2004 class of five-star domestic funds had a five-year rating of just 3.2 stars, slightly above average. The average fund underperformed its risk-adjusted benchmark by more than 1%.
  • The 2005 group of five-star funds turned in a three-year rating of just 3.1 stars.
  • The 2006 group had a three-year rating of just 2.9 stars.

The paper “Mutual Fund Ratings and Future Performance” from Vanguard provides further evidence on the ability of star ratings to predict the future.

Authors Christopher Philips and Francis Kinniry Jr. examined excess returns over the three-year period following a given rating.

They chose the three-year period because Morningstar requires at least three years of performance data to generate a rating, and investment committees typically use a three-year window to evaluate the performance of their portfolio managers.

The 2010 study covered the period June 30, 1992 through August 31, 2009. Following is a brief summary of the authors’ findings:

  • 39% of funds with five-star ratings outperformed their style benchmarks for the 36 months following the rating, while 46% of one-star funds did so.
  • All the star-rating groups produced negative excess returns in the succeeding three years. Even worse, the four- and five-star figures were more negative than those of lower-rated groups.

No Signal Of Success

Philips and Kinniry concluded: “Higher ratings in no way ensured that an investor would increase his or her odds of outperforming a style benchmark in subsequent years.”

In fact, they found that “5-star funds showed the lowest probability of maintaining their rating, confirming that sustainable outperformance is difficult. This means that investors who focus on investing only in highly rated funds may find themselves continuously buying and selling funds as ratings change. Such turnover could lead to higher costs and lower returns as investors are continuously chasing yesterday’s winner.”

The bottom line is that using Morningstar ratings to identify future outperformers is like driving forward while looking through the rearview mirror; their ratings system does a great job of “predicting” the past. That applies not just to all rated funds, but even to FMOY winners.

This commentary originally appeared October 25, 2017 on ETF.com.

 


The opinions expressed by featured authors are their own and may not accurately reflect those of the BAM ALLIANCE. This article is for general information only and is not intended to serve as specific financial, accounting or tax advice.

© 2017, The BAM ALLIANCE
