In April 2026, we launched an array of enhancements to the Morningstar Medalist Rating (you can read more about that here). For both our analyst-driven and algorithmic approaches, the changes affect how we derive the overall Medalist Rating from underlying pillar scores and how we evaluate fees and incorporate that evaluation into the overall outcome. For our algorithmically derived ratings, we also changed how we calculate pillar scores (for analyst-driven ratings, the pillar scores continue to be based on Morningstar’s qualitative research).
Simplifying our quantitative ratings was a top priority during this relaunch. While our previous algorithmic approach was effective, we saw an opportunity to improve transparency. Its machine-learning models were designed to mimic analyst decision-making, but their adaptive nature made it harder to explain which data points were driving a particular fund’s pillar rating. In conversations with users of our research, it became clear that they needed to understand the data driving a pillar rating, not just the score.
To achieve this, we’ve replaced the random forest model with rules-based calculations built from specific, observable data points. These are the same kinds of inputs our analysts weigh when assigning pillar scores qualitatively. Every algorithmically assigned pillar score can now be traced clearly back to its underlying data, both through clarified narrative reports and through the inclusion of the underlying data points in our products. The ratings are easier to understand, easier to explain to end investors, and easier to integrate into due diligence and portfolio construction processes.
Here’s what changed in each algorithmic pillar model.
People: Tracking the Manager, Not Just the Fund
The algorithmic People Pillar model now has no data points that directly overlap with those in the Process Pillar. This means our People rating is concerned only with the performance characteristics of the manager or managers running the fund in question. We use Morningstar’s PersonID data point, which tracks the full career of any manager in our database across any vehicle they have run. This allows the model to make forward-looking assessments of management changes by judging how well the new person in charge has done on other funds across their career.
To apply this in the context of the People Pillar, we developed a new proprietary input: Fund Manager Successful Experience. This data point has proved to be one of the strongest predictors of future excess returns in our research and accordingly can carry up to two-thirds of the People Pillar weight. It measures whether the managers running a fund have demonstrated the ability to outperform across their careers. The model considers manager information ratios over one, three, and five years, falling back to equivalent Parent-level information ratios where manager-level track record data is unavailable. It also factors in retention and tenure at the firm level and whether managers have meaningful personal investment in the strategies they run. These inputs capture aspects of manager quality that career track records alone don’t, such as team stability, firm commitment, and alignment of interests with investors.
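The fallback logic described above can be sketched in a few lines. This is a simplified illustration, not Morningstar's implementation: the function name, the equal-weight blend across horizons, and the input shapes are all assumptions; the article specifies only that manager-level information ratios over one, three, and five years are used, with Parent-level equivalents substituted where a horizon is unavailable.

```python
from statistics import mean

def successful_experience_score(manager_irs, parent_irs):
    """Illustrative sketch: blend a manager's information ratios over
    1-, 3-, and 5-year horizons, falling back to Parent-level
    information ratios for any horizon the manager lacks.
    Both inputs are dicts like {"1y": 0.5, "3y": 0.2}."""
    values = []
    for horizon in ("1y", "3y", "5y"):
        if manager_irs.get(horizon) is not None:
            values.append(manager_irs[horizon])
        elif parent_irs.get(horizon) is not None:
            values.append(parent_irs[horizon])
    if not values:
        return None  # no track record at either level
    return mean(values)  # equal weighting is an assumption
```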
For passively managed funds, the algorithmic People Pillar is simplified and defaults to an Average rating, reflecting the limited impact individual managers have on passive results.
Process: Separate Models for Active and Passive
As before, we run distinct Process models for actively and passively managed funds. For active strategies, the model evaluates gross-of-fees information ratios, which carry roughly 70% of the weight when at least five years of data is available. There is an emphasis on longer time horizons, and the model requires at least a one-year information ratio to produce a score. This rewards processes that have added value consistently over time, making it more likely the fund has a durable, repeatable edge rather than just a strong but very short track record.
The model also considers risk-adjusted success at the Parent level, but importantly, this is measured within the fund’s asset class, not firmwide, so a weak equity lineup wouldn’t penalize a strong fixed-income fund. Penalties apply for equity funds with excessive style drift, as this can be a sign of process changes that make past performance less representative of likely future outcomes. Meanwhile, fixed-income and allocation strategies are instead penalized for weak benchmark correlation, which our research found to be a reliable signal of underperformance relative to Morningstar Category peers. The Process Pillar is ultimately focused on identifying funds with a durable edge that can hold up across a full market cycle.
The Passive Process rating uses its own dedicated rules-based model. Currently covering equity and fixed-income passive strategies, it first assesses whether passive tends to outperform active in each category, then evaluates the specific index tracked on representativeness, diversification, and turnover. All funds tracking the same index receive the same Process rating, which promotes consistency.
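The same-index-same-rating property amounts to keying the evaluation on the index rather than the fund. A minimal sketch, with hypothetical names (`rate_index` stands in for the representativeness/diversification/turnover evaluation described above):

```python
def passive_process_ratings(funds, rate_index):
    """Assign one Process rating per tracked index so that every fund
    on the same index receives the same result (illustrative sketch).
    `funds` maps fund_id -> index_id; `rate_index` scores an index."""
    cache = {}    # one evaluation per index, reused across funds
    ratings = {}
    for fund_id, index_id in funds.items():
        if index_id not in cache:
            cache[index_id] = rate_index(index_id)
        ratings[fund_id] = cache[index_id]
    return ratings
```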
Parent: Simplified and Consistent
We also simplified the Parent model, which now evaluates every fund company through a focused, consistent set of data points: fee competitiveness across the firm; risk-adjusted success ratios over three, five, and 10 years; manager retention; average tenure; and fund obsolescence rates. A Low Parent rating caps the overall Medalist Rating at Neutral because serious stewardship problems at the firm level can undermine even strong People and Process scores.
Making Fees Explicit
Previously, there was no explicit Price score; a fund’s fees were deducted outright from its pre-fee score. To improve transparency and comparability, we developed a new measure: the Medalist Rating Price Score, a continuous score from negative 2.5 to positive 2.5 based on a fund’s fee percentile within its category. As an example, scores proceed as follows: 2.50, 2.49, 2.48, …, -2.50. Because it moves in decimalized increments, the Price score prevents cliff-edge changes in ratings owing to small changes in fees, allowing for a gradual increase or decrease that appropriately reflects fee impact relative to category peers. Our research has repeatedly shown that fees are among the best predictors of future relative performance, and this new measure helps ensure we capture fees in an easily understood and effective way.
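A percentile-to-score mapping with those properties can be sketched directly. The linear form below is an assumption; the article specifies only the range (negative 2.5 to positive 2.5), the percentile basis, and the decimalized increments:

```python
def price_score(fee_percentile):
    """Map a fund's fee percentile within its category onto a
    continuous score from +2.5 (cheapest) to -2.5 (most expensive).
    Linear mapping is an assumption for illustration.
    `fee_percentile`: 0.0 = cheapest in category, 1.0 = most expensive."""
    if not 0.0 <= fee_percentile <= 1.0:
        raise ValueError("percentile must be in [0, 1]")
    return 2.5 - 5.0 * fee_percentile
```

Because the score moves continuously with the percentile, a small fee change nudges the score rather than flipping it across a discrete boundary.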
How the Pieces Fit Together
Every rating input now carries a published weight. For actively managed funds, the three fundamental pillars of People, Process, and Parent account for 70% of the overall Medalist Rating. Within that pillar weight, People and Process each account for 45% and Parent for 10%. Fees make up the remaining 30% through the Medalist Rating Price Score. For passive vehicles, the balance shifts: Pillars carry 60% and the Price score 40%. Within those passive pillars, Process dominates at 80%, with People and Parent each at 10%, reflecting how much more critical index construction is than manager skill to passive fund outcomes.
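The published weights combine in a straightforward weighted sum. The sketch below uses the weights from the paragraph above; the assumption that all scores sit on one common scale (e.g., the -2.5 to +2.5 range of the Price score) is ours for illustration:

```python
# Published weights: pillar share of the overall rating, then the
# split among People/Process/Parent within that pillar share.
ACTIVE  = {"pillars": 0.70, "within": {"people": 0.45, "process": 0.45, "parent": 0.10}}
PASSIVE = {"pillars": 0.60, "within": {"people": 0.10, "process": 0.80, "parent": 0.10}}

def overall_score(people, process, parent, price, is_active=True):
    """Combine pillar scores and the Price score using the published
    weights. Assumes all inputs share a common numeric scale."""
    w = ACTIVE if is_active else PASSIVE
    pillar = (w["within"]["people"] * people
              + w["within"]["process"] * process
              + w["within"]["parent"] * parent)
    return w["pillars"] * pillar + (1 - w["pillars"]) * price
```

For an active fund, this works out to effective weights of 31.5% People, 31.5% Process, 7% Parent, and 30% fees.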
Fixed Thresholds Replace the Forced Curve
Finally, we’ve replaced the forced distribution curve with fixed rating thresholds. Previously, the number of Gold, Silver, and Bronze ratings in each category was constrained, meaning one fund’s rating could change solely because another fund moved. Now a fund’s Medalist Rating is determined by its own weighted score alone. If it clears the Gold threshold, it’s Gold regardless of what else is happening in the category.
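Under fixed thresholds, assignment reduces to comparing a fund's own score against static cutoffs, plus the Parent cap noted earlier. The numeric cutoffs below are illustrative placeholders, not Morningstar's published values:

```python
def medalist_rating(score, parent_rating):
    """Assign a rating from the fund's own weighted score against
    fixed thresholds (cutoff values here are illustrative assumptions).
    A Low Parent rating caps the result at Neutral, per the methodology."""
    if score >= 1.5:
        rating = "Gold"
    elif score >= 0.5:
        rating = "Silver"
    elif score >= 0.0:
        rating = "Bronze"
    elif score >= -1.0:
        rating = "Neutral"
    else:
        rating = "Negative"
    # Firm-level stewardship problems cap otherwise-strong funds.
    if parent_rating == "Low" and rating in ("Gold", "Silver", "Bronze"):
        rating = "Neutral"
    return rating
```

Note that no other fund appears in the function signature: one fund's rating can no longer change because a peer moved.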
These changes share a common thread: making the Medalist Rating more transparent, stable, and ultimately more useful. Published weights, defined thresholds, and rules-based pillar models mean that when a rating changes, investors can trace exactly what drove it and why. Complete details can be found in the Morningstar Medalist Rating Methodology.