JUNE 2017

Alice’s Adventures in Volatility-land

Category: Portfolio Management


By Athena Research Team

Modern Portfolio Theory provides us with a simple yet fundamental concept for constructing portfolios for risk-averse investors: for a given level of risk, select the portfolio with the highest expected return. Portfolio risk is estimated by calculating the volatility (typically the standard deviation) of historical portfolio returns. However, calculating and interpreting volatility for a multi-asset-class portfolio is complicated by the many parameters that must be chosen, such as the look-back period length and the differing reporting periodicities of the underlying asset classes. Less informed or naïve interpretations of volatility calculations may lead to poor decisions; looking at volatility calculations with a critical eye is necessary for prudent portfolio management.
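As a concrete sketch of the basic calculation (the return figures are hypothetical, and annualization uses the common assumption that variance scales linearly with time, so volatility scales with its square root):

```python
# Minimal sketch: portfolio risk as the standard deviation of historical
# periodic returns, annualized by the square-root-of-time convention.
import math
import statistics

def annualized_volatility(returns, periods_per_year=12):
    """Sample standard deviation of periodic returns, annualized."""
    periodic_vol = statistics.stdev(returns)           # sample std dev
    return periodic_vol * math.sqrt(periods_per_year)  # sqrt-of-time scaling

# Hypothetical monthly portfolio returns (illustrative only).
monthly = [0.012, -0.008, 0.021, 0.004, -0.015, 0.009,
           0.017, -0.003, 0.006, -0.011, 0.014, 0.002]
print(f"Annualized volatility: {annualized_volatility(monthly):.1%}")
```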

Much like the standard reminder that "past performance is not a guarantee of future results", the same concept holds for volatility: reported historical volatility is no assurance of identical volatility in the future. In planning investment recommendations, historical volatility data need the interpretation of an experienced practitioner to judge whether market conditions are likely to lead to future volatility that is higher than, lower than, or similar to recent history.

Looking at only the trailing two years ending 6/30/2016 measures US Equity volatility at just under 4%, whereas looking back 13 years leads to a more representative US Equity volatility of approximately 15%, nearly four times the shorter calculation. Using the two-year data may lead to the conclusion that the asset class is safer and more stable than it actually is, leaving a client exposed to more risk than their tolerance can bear under stress. (Note that the inverse is also possible: holding less risk than desired because of a “too high” measurement of short-term volatility, but this is typically less of an issue.) Therefore, while we provide volatility data for reference in track records covering what are often short periods of time, we tend to prefer longer volatility windows for making forward-looking portfolio management decisions.

While it may seem obvious that longer look-back periods provide more robust data for drawing conclusions, there is more to the puzzle. As discussed in our recent paper on the subject, in addition to the challenge of choosing an appropriate look-back period, other considerations when calculating portfolio volatility are the differing valuation periodicities of asset classes (such as intraday measurement of liquid exchange-traded securities compared to only quarterly measurement for illiquid private investments) and scenarios with a limited number of observations, which result in a less precise estimate of volatility.
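The look-back window effect can be sketched with fabricated return series: a trailing window that happens to cover only a calm stretch understates the volatility that the full history reveals.

```python
# Sketch (fabricated data): a calm recent period makes a short trailing
# window understate risk relative to the full history.
import statistics

calm_recent   = [0.004, -0.003, 0.005, -0.002, 0.003, -0.004] * 4  # last 24 "months"
stormy_before = [0.06, -0.05, 0.04, -0.07, 0.05, -0.06] * 20       # prior 120 "months"
full_history  = stormy_before + calm_recent

short_window_vol = statistics.stdev(full_history[-24:])  # trailing two years only
long_window_vol  = statistics.stdev(full_history)        # full history

print(f"Trailing 24 months: {short_window_vol:.2%}")
print(f"Full history:       {long_window_vol:.2%}")
```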


Various tools, however, can be used to address these challenges and avoid misleading data:

  • For data look-back we can choose to incorporate a function that decays the weight of observations with time (more recent data points are weighted more heavily, but older data points are still considered). This decay function is especially helpful when modeling funds that tend to turn over their portfolios frequently.
  • For data periodicity, we use the most frequent historical data available between any two investments to calculate variance and correlation. This may include weekly, monthly, and quarterly returns, which we then combine into a single covariance matrix used to estimate volatility.
  • For instances where we have limited data, we can introduce an estimated value, or a “Bayesian prior” value, to approximate the volatility profile of the missing data.
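The decay-weighted approach in the first bullet can be sketched as follows. The decay factor of 0.94 is an illustrative assumption (a common choice in exponentially weighted schemes such as RiskMetrics), not Athena's actual parameter.

```python
# Sketch of decay-weighted volatility: each observation's weight decays
# exponentially with age, so recent returns count more but older ones
# are still considered. Decay factor is an illustrative assumption.
import math

def decayed_volatility(returns, decay=0.94):
    """Exponentially weighted standard deviation; newest return comes last."""
    n = len(returns)
    weights = [decay ** (n - 1 - i) for i in range(n)]  # newest weight = 1
    total = sum(weights)
    mean = sum(w * r for w, r in zip(weights, returns)) / total
    var = sum(w * (r - mean) ** 2 for w, r in zip(weights, returns)) / total
    return math.sqrt(var)
```

With decay = 1.0 this reduces to an equally weighted (population) standard deviation; lower decay factors respond faster to a change in market regime.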


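The "Bayesian prior" idea in the last bullet can be sketched as a simple shrinkage estimator: with few observations, lean on an assumed prior volatility; as observations accumulate, trust the sample more. The prior value and blending rule below are illustrative assumptions, not Athena's actual model.

```python
# Sketch of a shrinkage estimate: blend the sample variance with a prior
# variance, weighting the sample more heavily as observations accumulate.
# prior_vol and prior_strength are illustrative assumptions.
import math

def shrunk_volatility(returns, prior_vol, prior_strength=24):
    """Blend sample variance toward a prior; sample weight grows with n."""
    n = len(returns)
    mean = sum(returns) / n
    sample_var = sum((r - mean) ** 2 for r in returns) / max(n - 1, 1)
    w = n / (n + prior_strength)  # more data -> trust the sample more
    blended_var = w * sample_var + (1 - w) * prior_vol ** 2
    return math.sqrt(blended_var)
```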
Further, many investors who take active risk (in other words, who invest beyond passive index ETFs or similar) are exposed to relative volatility and performance deviations versus their chosen benchmark, a phenomenon measured by “tracking error”. Therefore, for actively managed portfolios, an accurate measure of tracking error is useful for understanding the range of expected outperformance and underperformance versus the benchmark in typical market conditions. Tracking error and volatility pair well together and are important barometers for monitoring risk and constructing portfolios.
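A minimal sketch of the tracking-error calculation just described, using fabricated monthly returns: the standard deviation of active (portfolio minus benchmark) returns, annualized with the same square-root-of-time convention used for volatility.

```python
# Sketch: tracking error as the annualized standard deviation of active
# returns (portfolio minus benchmark). Return figures are fabricated.
import math
import statistics

def tracking_error(portfolio, benchmark, periods_per_year=12):
    """Annualized std dev of active (portfolio minus benchmark) returns."""
    active = [p - b for p, b in zip(portfolio, benchmark)]
    return statistics.stdev(active) * math.sqrt(periods_per_year)

port  = [0.015, -0.004, 0.022, 0.001, -0.012, 0.010]
bench = [0.012, -0.006, 0.018, 0.003, -0.010, 0.008]
print(f"Tracking error: {tracking_error(port, bench):.1%}")
```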

In conclusion, why have we spent the time to dissect volatility and tracking error in a white paper? The primary reason we attempt to estimate volatility and tracking error is that the risk that ultimately matters is the permanent loss of capital. In our experience, permanent loss of capital can occur when investors experience more downside volatility than expected, or more than they are willing to bear, and end up selling emotionally in times of stress. This can lead to significant changes to strategic asset allocations (often selling higher-volatility equities after sustaining an unexpected drawdown in favor of lower-volatility assets), which can cause permanent long-term losses of capital. While it is impossible to predict future volatility and tracking error precisely, we believe there are useful tools (greater periodicity, longer look-back periods, decay functions, and stress testing) to help ensure that portfolio management decisions sustain the intended risk profiles and strategic asset allocations of individual portfolios.



This blog covers the general concepts and highlights of a longer-form, more technical white paper by Athena. To access this white paper on the measurement and reporting of volatility and tracking error, please click here or contact your portfolio manager.
