The Ten Pillars of Quality Investing or: How I Learned to Stop Dumpster Diving for Undervalued Stocks

After recently completing testing on a general method of discounted cash flow (DCF) analysis for estimating a broad basket of stocks’ intrinsic values, I became more concerned with “quality”. While DCFs remain the foundation of any sound business valuation, I discovered they are highly sensitive to the assumptions and data used. Slightly changing a minute detail can drastically influence the result, causing an attractive investment to all of a sudden seem not so attractive, and vice versa. While relative valuation methods were a natural alternative (Wall Street’s preferred choice, in fact) to circumvent the sensitivity issues, I was inclined to believe that an ability to define robust ‘quality factors’ would complement the ideological purity of the discounted cash flow approach much better. The purpose of this discussion is to demonstrate that a good company can indeed also be a good investment.

Outline of the Ten Pillars
The Ten Pillars of Quality Investing I have identified, in no particular order, are as follows:

  1. Acceptable Corporate Risk Levels
    • Financial risks
    • Operating risks
  2. Conservative Accountancy
    • Quality of earnings
    • Special items and special accounts
    • The goodwill account
  3. Strong Profitability
    • Cash flows on assets
    • Operating margins
  4. Operational Efficiency
    • Use of working capital
    • Use of physical assets
    • Use of people
  5. Wide Economic Moat
    • Gross profitability
    • Economies of scale / Margin of safety
    • Barriers to entry / protections on certain assets
  6. High Relative Growth Versus Peers
    • EPS growth and acceleration
    • Sales growth and acceleration
  7. Shareholder Friendly Policies
    • Yield and/or net payout yield
    • Payout policy relative to a peer group
    • Long-term dividend growth
    • Dilution potential
  8. Healthy Expectations Management Policy
    • Earnings surprises and consistency
  9. Positive Smart Money Sentiment
    • Insider buying
  10. Favorable Market Positioning
    • Market share
    • Market conditions

These pillars do not encompass every possible quality-indicative factor. Rather, the factors have been limited for the sake of concision and quantifiability. In addition, use of the pillars will not substitute for good financial planning or a well-thought-through asset allocation strategy. However, used in aggregate, they can help quantitatively evaluate the quality of a firm, and thus a potential equity investment in that firm, based on factors with little to no correlation with traditional value factors. The summary of findings is included at the conclusion of this analysis.

Please note that the following discussion assumes a basic knowledge of accounting concepts and an intermediate knowledge of financial analysis. David Harper of Bionic Turtle has authored perhaps the best primer on the analysis of financial statements. I highly recommend this reading to all levels of investors.

Definition of Terms
In common terminology, the terms value and quality have interrelated meanings. For the intent of this discussion, value denominates price and worth in the same units of quantity (e.g., dollars). Quality is a component of worth, but it does not produce a result which is directly comparable to price.

Some analytical methods are effective when used as stand-alone metrics that can be safely compared across all firms. However, the majority of methods utilized under the Ten Pillars are intended to be most effective when comparing a firm against a peer group of similar firms. A broad peer group typically means comparison at the industry level (via Global Industry Classification Standard (GICS) code). Broad peer group analyses can also be conducted at the sector level (i.e., the first two digits of the GICS code). More specialized analyses sometimes require comparison with a much narrower peer group. A narrow peer group compares a firm with a much smaller group, usually direct competitors or firms operating under similar market conditions. I am unaware of a good classification system for picking a narrow peer group.

Background
The two fundamental drivers of equity value are: (a) shareholders’ residual claim on assets; and (b) discounted cash flows to equity investors.

Since owners of common equity have a residual claim on the value of a firm’s assets, equity can be thought of as a call option on the value of the firm, in which the terminal payoff to shareholders can be expressed as:

{V_E = \max(0, V_F - V_R)}
Where:
{V_E=} The value of a firm’s equity;
{V_F=} The value of a firm;
{V_R=} The residual claims of other parties, including other groups of capital investors;

This abstraction affirms Charlie Munger’s wisdom that “You must value the business in order to value the stock.” [1. Charles Munger. Vice Chairman, Berkshire Hathaway] Since a firm is itself an investment, we now turn to valuing investments in general.

The time value of money principle says that the present value of an investment is equal to the sum of its discounted cash flows (DCF) over the appropriate time period, reflecting an appropriate opportunity cost (i.e., cost of capital). In its simplest form, a DCF looks like:

(1) {V_0 = } \sum_{t=1}^{T} \frac{C_t}{(1+r)^t}
Where:
{V_0 =} The present value of the investment at time zero;
{C_t =} The cash flow expected to be received at some time ‘{t}‘;
{r =} The discount rate, which reflects a real (inflation adjusted) risk-free rate, opportunity costs, and risk premiums that compensate investors for bearing distinct types of risk [1. CFA Institute]; and,
{T =} The terminal time period.

In the case of valuing equities, investors will want to ensure that they are measuring cash flows to equity, which is not necessarily equivalent to reported net income.
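For the quantitatively inclined, formula 1 reduces to a few lines of code. The sketch below is a minimal illustration; the cash flows and discount rate are hypothetical:

```python
def present_value(cash_flows, r):
    """Formula 1: sum of discounted cash flows.

    cash_flows: expected cash flows for periods 1..T
    r: per-period discount rate (e.g., 0.10 for 10%)
    """
    return sum(c / (1 + r) ** t for t, c in enumerate(cash_flows, start=1))

# Hypothetical example: three years of $100 cash flows discounted at 10%
print(round(present_value([100, 100, 100], 0.10), 2))  # 248.69
```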

The basic concept of discounted cash flows can become very complex as each of the parameters is extrapolated. Professor Aswath Damodaran at NYU’s Stern School of Business has posted some excellent working papers and presentations on the topic [2. Damodaran, Aswath. Growth Rates and Terminal Value] [3. Damodaran, Aswath. Valuing Firms with Negative Earnings] [4. Damodaran, Aswath. Estimating Beta] [5. Damodaran, Aswath. Adjusted Net Capital Expenditures]. With some major adjustments, I have borrowed extensively from his writings. While the added complexity adds net value, it is also a source of many sensitivity and robustness issues. Although a properly implemented DCF analysis is and will forevermore be the king of investment valuation tools, placing too much trust in a single number can lead to bad investment decisions.

In addition to the sensitivity issues, my research on discounted cash flow analyses strongly suggests that there is not much difference in terms of future performance between a “modestly priced” investment and an outright “dirt cheap” one. Whether a stock appears to be slightly or majorly undervalued is almost moot; the fact that it is “a value” at all is really all that matters. While it may be no great revelation that stocks don’t need to be “dumpster diving cheap” in order to provide good returns, this finding sparked my personal ‘flight to quality’ which represents a paradigmatic shift in my investment philosophy.

Perhaps the most important aspect of using quality factors is that there is little overlap with traditional value metrics like price-to-earnings and price-to-book ratios. Research has consistently demonstrated that the value effect among stocks is an overwhelmingly powerful force driving future returns [6. Novy-Marx, Robert. The Quality Dimension of Value Investing] [7. Quantpedia. Value (Book-to-Market) Anomaly] [8. Quantpedia. Value Premium in Large Cap Stocks] [9. Quantpedia. Long Term PE Ratio Effect in Stocks]. However, value’s ability to drive returns persists only as long as the investment remains under-valued. Ceteris paribus, an under-valued investment which appreciates in price to the point of its fair value ceases to be a good investment. Conversely, an investment in a well-run and investor-friendly company can appreciate much further and longer as rock star management keeps finding ways to create value. If the quality hypothesis is correct, this would corroborate notable quality-centric investors’ core premise that “you [almost] can’t pay too much for a great company”.

Notable exceptions to value orthodoxy exist. For example, Benjamin Graham, who is usually relegated in history books to value investing orthodoxy, defined in The Intelligent Investor seven rules of thumb for the enterprising investor. Of these seven guidelines, five concern quality and only two deal with traditional value metrics. Perhaps more notably, Philip Fisher codified his “15 Points to Look for in a Common Stock” with an almost single-minded devotion to seeking high-grade investment opportunities in best-of-class enterprises, almost irrespective of their selling price. Fisher’s methods are as relevant now as when they were first published in his seminal work, Common Stocks and Uncommon Profits. Fisher’s points, however, were not designed for the weekend warrior without the resources required to constantly interface with company managers and operators. Warren Buffett is perhaps the most widely renowned quality-centric investor. While Buffett never wrote a complete treatise on his approach, he has had much written about him. In The New Buffettology, Mary Buffett and David Clark set out “Warren Buffett’s Ten Points of Light” for selecting quality investments; not one of the ten points focuses on price paid.

The overarching problem with defining and testing ‘quality factors’ is that quality is extremely difficult to quantify. Clearly, forward-looking and qualitative judgment cannot be avoided if one expects to be able to pick out the truly best-of-breed enterprises. In spite of the challenges, I will attempt to demonstrate that quantification is both possible and useful. While most of the constituent factors of the Ten Pillars are “plain vanilla” and “industry standard” management tools used by industry analysts and business professionals, many have been largely disregarded by the investing public because, on their own, they show little to no predictive power. However, research suggests that when used in aggregate, the whole is truly greater than the sum of its parts.

This quality-centric approach has been designed to hold unique advantages over, and be used in a synergistic fashion with, traditional value-centric analyses. In short, it is meant to identify superior firms. Further analysis would then reveal which of these firms’ equity can be had at a reasonable price. Moreover, I do not doubt that this approach can be used in an additive fashion with many other stylistic approaches (e.g., growth, technical analysis, market timing, sector and asset class allocation, etc…) and systems (e.g., Graham’s Enterprising Investor, Graham’s Defensive Investor, CANSLIM, Greenblatt’s Magic Formula, NASDAQ’s Dirty Dozen, etc…). For some examples of guru-based systems, see my post Grading the Gurus. As a caveat, I stress the need to acknowledge the fundamental uncertainty involved with relying on past results to predict future performance.

1. Acceptable Corporate Risk Levels
Riskier firms demand higher potential returns in order to qualify as sound investments. Risk factors are generally only meaningful when other factors which define the potential rewards are simultaneously considered. Exceptions to this rule tend to be the extreme cases. Most of the time, risk just shifts the potential for rewards from one group to another.

Because risk is such a broad topic, we will focus on just a subset of non-systemic risk factors involving firm-specific operations and financial structure. Broader risks enter the calculus at an earlier stage in the analysis when deciding where to invest and how much to allocate. Sovereign, asset class, and sector risk factor analyses are logical places to start. However, I would like to echo Peter Lynch’s advice, “If you spend… 13 minutes analyzing economic and market forecasts, you’ve wasted 10 minutes”. By allocating too much time to predicting something as unpredictable as the broader economy, investors will likely miss out on some great opportunities. How much time is too much depends on one’s competencies. Personally, I belong to Lynch’s camp.

A firm’s intrinsic risk factors can be divided into operating and financial risks. Operating risks can be subdivided into operating leverage and liquidity risks. Financial risks include financial leverage and solvency risks. Because different businesses operate under different market conditions, it is important to consider these risk factors in context with a firm’s industry and competitors.

  • Financial Leverage

Financial leverage simply amplifies the potential risks and rewards of an investment. It is usually expressed as the ratio of total assets to shareholders’ equity, or:

(2) \\ {Financial \, Leverage = \, } \frac {Total \, Assets}{Shareholder's \, Equity} \\

This can alternatively be expressed as:

(3) \\ {Financial \, Leverage = \, 1+ \,} \frac {Debt \times (1-Rate_{Corporate \, Tax})}{Equity} \\

Formulas 2 and 3 are industry standards, but are not equivalent in the real world. Formula 2 measures balance sheet leverage. Formula 3 measures the leverage of invested capital. Being somewhat agnostic to accounting conventions, I have found more uses for formula 3. Furthermore, experts disagree over whether to utilize market or book values. Market values are appropriate when estimating the weighted average cost of capital (WACC), while book values are more reliable for estimating a firm’s intrinsic risks.

Leverage should always be analyzed in context with a firm’s peer groups (i.e., industry classification, competitors, etc…) to reflect different kinds of market conditions. Firms with a large degree of financial leverage are typically risky, but there are notable exceptions. For example, utilities firms, which are often laden with large amounts of debt relative to equity, are not inherently more risky for carrying this debt because their revenues and costs are predictable. As a result, their borrowing costs as a percentage of assets are often in line with other sectors. Therefore, we define a relative leverage ratio as follows:

(4) \\ {Relative \, Financial\, Leverage = \,} \frac {(Firm \, Financial \, Leverage)}{(Expected \, Industry \, Financial \, Leverage)} \\

Firms that are burdened with a high degree of financial leverage relative to the norm deserve extra scrutiny. All things held equal, highly levered firms require greater expected returns to justify the added risks and the potential drags on returns from interest and financing costs. DuPont Analysis is a useful tool which illustrates the effects of financial leverage by breaking down return on equity (ROE) into several components.
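A minimal sketch of formulas 2 through 4 follows; the figures are hypothetical, purely for illustration:

```python
def financial_leverage(total_assets, shareholders_equity):
    # Formula 2: balance sheet leverage
    return total_assets / shareholders_equity

def invested_capital_leverage(debt, equity, tax_rate):
    # Formula 3: after-tax leverage of invested capital
    return 1 + debt * (1 - tax_rate) / equity

def relative_financial_leverage(firm_leverage, expected_industry_leverage):
    # Formula 4: firm leverage scaled by the industry norm
    return firm_leverage / expected_industry_leverage

# Hypothetical firm (in $M): assets 500, equity 200, debt 250, 30% tax rate
fl = financial_leverage(500, 200)               # 2.5
il = invested_capital_leverage(250, 200, 0.30)  # 1.875
print(relative_financial_leverage(fl, 2.0))     # 1.25 -> more levered than peers
```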

Solvency is a complementary measure of financial risk. Solvent firms are those that can withstand shocks like asset liquidations and write-downs with minimal long-term impact to operations. The canonical measure of solvency, the Altman Z Score, is inherently biased because it is fitted to historical data and it assumes constant and/or predictable earnings. Instead, I recommend an analysis of deep liquidity in order to infer solvency. The ratio of net tangible asset value to total assets provides a decent first inspection of the depth of liquidity. More detailed analyses of deep liquidity would break down a balance sheet’s assets and liabilities into varying levels of liquidity, adjusted for reality if necessary. The effects of off balance sheet items (e.g., pension plans) should be considered whenever possible.
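Reading “net tangible asset value” as total assets less intangibles, goodwill, and total liabilities (my working interpretation; definitions vary), a first-pass deep liquidity check might look like:

```python
def net_tangible_asset_ratio(total_assets, intangibles, goodwill, total_liabilities):
    """First-pass depth-of-liquidity check: net tangible assets over total assets.

    Assumes net tangible asset value = total assets - intangibles - goodwill
    - total liabilities; adjust the definition to taste.
    """
    net_tangible = total_assets - intangibles - goodwill - total_liabilities
    return net_tangible / total_assets
```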

  • Operating Leverage

Operating leverage, which is usually defined as the ratio of fixed to variable costs, is analogous to financial leverage. It can be represented as follows:

(5) \\ {Operating \, Leverage = \, 1+ \,} \frac {Fixed \, Costs}{Variable \, Costs} \\

In the special case where cost of goods sold equals variable costs and SG&A equals fixed costs, operating leverage can be expressed more easily from standard financial database fields as:

(6) \\ {Operating \, Leverage = \, 1+ \,} \frac {((Gross \, Margin) - \, (Operating \, Margin))}{(1 - \, Gross \, Margin)} \\

Because:

(7) \\ {Operating \, Leverage = \, 1+ \,} \frac {((1 - v) - \, (1-v-f))}{(1 - (1-v))} \\
Where:
{v =} The fractional variable costs as a percentage of sales; and,
{f =} The fractional fixed costs as a percentage of sales.
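A quick sketch confirms that formulas 5 and 6 agree under the special-case assumptions; the cost fractions below are hypothetical:

```python
def operating_leverage(fixed_costs, variable_costs):
    # Formula 5: one plus the ratio of fixed to variable costs
    return 1 + fixed_costs / variable_costs

def operating_leverage_from_margins(gross_margin, operating_margin):
    # Formula 6: COGS treated as variable costs, SG&A as fixed costs
    return 1 + (gross_margin - operating_margin) / (1 - gross_margin)

# Hypothetical: variable costs 60% of sales (v), fixed costs 25% of sales (f)
v, f = 0.60, 0.25
print(round(operating_leverage(f, v), 4))                           # 1.4167
print(round(operating_leverage_from_margins(1 - v, 1 - v - f), 4))  # 1.4167
```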

Similarly to financial leverage, operating leverage must always be taken in the broader context of a peer group as follows:

(8) \\ {Relative \, Operating \, Leverage = \,} \frac {(Firm \, Operating \, Leverage)}{(Expected \, Industry \, Operating \, Leverage)} \\

This use of operating leverage tells investors how risky a firm’s operations are on a stand-alone basis and in relation to its peer group. Firms with high fixed costs relative to sales are inherently risky. This form of operating leverage should not be confused with degree of operating leverage (DOL) which is more strictly defined as the ratio between gross profits and operating profits. DOL can be a useful metric to convert estimated revenue growth rates into forward-looking profit growth rates and vice versa. While DOL is by far the superior metric, it is often meaningless in a historical context for firms in the growth stage and whose operating profits are negative.

Measures of liquidity including the current ratio and quick ratio are able to provide further color to the nature of a firm’s operating risks. Firms whose liquid current assets well exceed their current liabilities are more resilient to sudden liquidity crunches which can cripple even the most solvent firms.

2. Accountancy
Firms that utilize conservative accounting methods generally make for good investments. While quantitative methods cannot detect outright fraud, they usually can detect early warning signs. Firms that have slipped into fraudulent accounting practices typically don’t start out that way. Usually the fraud is presaged by aggressive accounting and the use of ‘special accounts’ like goodwill on the balance sheet and extraordinary items on the income statement. Although accounting peculiarities usually have legitimate purposes (e.g., smoothing out earnings volatility), even their well-intentioned use can quickly slide into abuse if a firm with inadequate executive oversight or poor internal controls is unable to deliver on its expectations.

An easy and effective gauge of accounting practices is an analysis of earnings quality which compares balance sheet and income statement accruals. Accrual accounting is usually required for publicly listed companies in order to match revenues to the expenses incurred in procuring those revenues. Accrued assets and revenues result in increases in a balance sheet’s value which are typically non-cash in nature. Under normal circumstances, accrual accounting facilitates an understanding of how costs scale with revenues and allows managers to smooth out cash earnings volatility. However, industry standards and guidance regarding revenue recognition and inventory valuation methods give managers quite a bit of latitude in this area. The line between normalizing earnings and aggressive accounting is thin at best. While accrued earnings can show a profit, cash earnings can tell a different story altogether. The IASB and FASB are collaborating on a proposal to standardize revenue recognition which would improve the reliability of and cross-comparability between various income statements and balance sheets. Until this happens (i.e., when hell freezes over), investors should inspect earnings quality through an analysis of accruals.

There are many approaches to measuring earnings quality through accruals, and most of them perform comparably well. BlackRock Asset Management was an early adopter of using earnings quality in its global portfolio management strategy [10. Novy-Marx, Robert. The Quality Dimension of Value Investing] [11. Kozlov and Petajisto, 2013]. If BlackRock is doing this, it would probably be wise to take notice.

As a first check, I typically use the following method to investigate accruals using two balance sheets from separate reporting periods:

(9) { Accruals_{BS} \, = \, (Assets_T (0) - Cash_T (0) - Liabilities_C (0) ) - (Assets_T (t) - Cash_T (t) - Liabilities_C (t) )}
Where:
{Accruals_{BS} = } Balance sheet accruals;
{Assets_T = } Total assets;
{Cash_T = } The sum of cash and cash equivalents; and,
{Liabilities_C = } Current liabilities.

A complementary approach compares net income to its closest cash flow analogue. The following approach can be used to investigate income accruals by comparing an income statement to a statement of cash flows of the same period:

(10) { Accruals_{IS} \, = \, NetIncome_{BX} - (CashFlows_{Ops} + CashFlows_{Inv} ) }

Where:
{Accruals_{IS} = } Income statement accruals;
{NetIncome_{BX} = } Net income before extraordinary items;
{CashFlows_{Ops} = } Cash flows from operations; and,
{CashFlows_{Inv} = } Cash flows from investing activities.

Formulas 9 and 10 can be scaled a number of ways, including to assets and/or revenues; the choice of scaling factor does not matter much. The key thing to look for is accruals that are low in absolute value (i.e., close to zero). Low accruals indicate that a firm’s reported earnings approximate its true cash generation power. However, strongly negative accruals could also be a red flag signaling that the firm is under-investing and/or withholding too much cash.
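A minimal sketch of formulas 9 and 10, reading time ‘0’ in formula 9 as the more recent balance sheet; the field names are illustrative, not tied to any particular database:

```python
def balance_sheet_accruals(current_bs, prior_bs):
    """Formula 9: change in the non-cash, non-current-liability position.

    Each argument is a dict with total_assets, cash, and current_liabilities.
    """
    def base(bs):
        return bs["total_assets"] - bs["cash"] - bs["current_liabilities"]
    return base(current_bs) - base(prior_bs)

def income_statement_accruals(net_income_bx, cf_operations, cf_investing):
    # Formula 10: net income before extraordinary items vs. its cash analogue
    return net_income_bx - (cf_operations + cf_investing)

# Scale either result by total assets or revenue before comparing across firms
```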

Additional measures of conservative accounting practices include aggregating several years of special items on the income statement and looking out for abusive uses of the goodwill account. A firm recurrently using special items to wave off unprofitable quarters clearly indicates that unprofitability is the norm. A large amount of goodwill, or a large change in goodwill, on the balance sheet can indicate bad acquisition practices and portend upcoming write-downs. All things held equal, asset write-downs are more conservative than increases in the goodwill account under mark-to-market accounting.

3. Profitability
Profitability should be a no-brainer. The two most important measures of profitability are return on assets and operating margins. Return on assets is relatively more important because it is a metric with broad cross-comparability across numerous companies and sectors.

Assets represent a firm’s economic base. This base can exert a “dead-weight” drag if not properly capitalized. Thus, we want to measure its ability to generate value. For measuring return on assets, I prefer to use a cash flow derivation. This is a stylistic preference, but an important one, because investors should prefer firms which generate healthy cash flows from their asset bases.

Operating margins measure profitability and the sustainability of profits. A firm with thinner margins relative to its peers is likely either less efficient or selling a more commoditized product blend. Operating margins are meaningful both in the context of a firm versus the broader market and in context with a peer group. For measuring operating margins, the standard earnings-based approach is appropriate since the accrual method matches revenues to expenses.
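As a minimal sketch, taking the cash flow derivation of ROA to be cash flow from operations over average total assets (one common reading; the exact derivation is a matter of preference):

```python
def cash_flow_roa(cf_operations, avg_total_assets):
    # Cash-flow-based return on assets (one common derivation)
    return cf_operations / avg_total_assets

def operating_margin(operating_income, revenue):
    # Standard accrual-based operating margin
    return operating_income / revenue
```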

4. Operating Efficiency
Measures of operating efficiency attempt to quantify a firm’s ability to make good use of available capital. Measures of invested capital efficiency are fundamentally different in scope. The three areas of operating efficiency correspond to the three primary types of available capital: working capital; physical (i.e., “operating”) assets; and, people.

A comprehensive analysis of working capital efficiency can be performed through a Cash Conversion Cycle (CCC) analysis. Basically, CCC measures the number of days it takes to convert sales into cash flow. Generally, faster is better. The standard formulation on Wikipedia is sufficient as stated. The Cash Conversion Cycle should be analyzed in context with a peer group due to the various market conditions that exist in different types of businesses. For example, government contractors like Boeing typically receive and pay cash on an irregular basis; the US government is a slow payer but it always pays. In addition, the cash conversion cycle formula is prone to unrealistic and screwball results if, for example, a firm delays payments to suppliers. Although this would shorten the cycle, it is not indicative of good business practices. For this reason, I recommend looking for a low absolute cash conversion cycle relative to a narrow peer group.
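A sketch of the standard formulation follows; note that days payables is computed against COGS here, while some sources use purchases:

```python
def cash_conversion_cycle(inventory, cogs, receivables, revenue, payables, days=365):
    """Standard CCC: days inventory + days sales outstanding - days payables."""
    dio = days * inventory / cogs        # days inventory outstanding
    dso = days * receivables / revenue   # days sales outstanding
    dpo = days * payables / cogs         # days payables outstanding (COGS proxy)
    return dio + dso - dpo
```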

Asset efficiency can be measured through asset turnover: simply sales over assets. A better formulation would be to compare sales to physical assets. This alternative formulation gets closer to the heart of the issue by asking how effective PP&E and working capital are at producing revenues. Asset turnover is highly industry specific and should be analyzed in the context of a broad peer group.

People efficiency is extremely hard to determine based solely on standard financial statement information. Good managers realize that people are a firm’s greatest asset and have therefore come up with all sorts of metrics to measure how efficient their people are at their professions. For the sake of simplicity and cross-comparability, I recommend simply looking at sales per employee. The number of employees at a firm can usually be found in its financial statements, and most financial databases record this information.
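Minimal sketches of the efficiency ratios just described; working capital is lumped in with PP&E per the alternative formulation above:

```python
def asset_turnover(revenue, total_assets):
    # Sales generated per dollar of total assets
    return revenue / total_assets

def physical_asset_turnover(revenue, net_ppe, working_capital):
    # Alternative formulation: sales against PP&E plus working capital
    return revenue / (net_ppe + working_capital)

def sales_per_employee(revenue, employees):
    # Simple, cross-comparable gauge of people efficiency
    return revenue / employees
```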

Backward-looking analyses of the efficiency of invested capital (i.e., debt and equity) are not part of the Ten Pillars. While an analysis of invested capital is useful in its own right, previously invested capital represents a sunk cost which can be recovered only at the market price (whether higher or lower). Sunk costs necessarily have nothing to do with a firm’s current operating efficiencies and/or prospects and opportunities for return. Therefore, analyses of invested capital do not really fit into either the “profitability” or “operating efficiency” categories. However, since forward-looking return on capital (ROC) is ultimately what we’re trying to measure, popular methods of measuring ROC do warrant mention [12. Several variations of return on capital (ROC) exist as metrics for a firm’s efficiency at turning investor capital into profit. Some of these variations include: vanilla return on capital (ROC); return on invested capital (ROIC); Joel Greenblatt’s Return on Capital; return on capital employed (ROCE); cash flow return on investment (CFROI); and more. Different measures of ROC are highly specified. The proper metric to use depends on what cash flows are being measured and the type of capital investor they flow to.]. Since ROA already captures most (if not all) of the action in ROC, you won’t miss a lot by omitting backward-looking ROC measurements from your investment analysis.

5. Wide Economic Moat
Possessing a wide economic moat is roughly synonymous with Warren Buffett’s “margin of safety” principle. Morningstar’s methodology on measuring “economic moats” which stand the test of time provides very good guidelines on measuring a firm’s economic margin of safety [12. How Morningstar Measures Moats] [13. Morningstar Investment Glossary. Economic Moats]. Without replicating Morningstar’s work, I would like to briefly focus on three areas which I believe to be the most conducive to quantitative analysis: gross profitability; scale; and, protections.

  • Gross Profitability
    Possessing high gross margins relative to peers indicates that a firm is either a low cost producer or that its products/services have high switching costs. In addition to looking at the gross number relative to a small peer group, it is equally important to look for changes in gross profitability from one reporting period to the next (see the sketch after this list). A significant decline in gross margins can be a serious red flag that competition is eroding pricing power, that operational inefficiencies have crept into the business model, or that the general market environment is deteriorating. In commodity driven businesses, a decline in gross profitability may just be due to cyclical changes in production costs; this is important to check.
  • Scale
    Scale is synonymous with “economies of scale”. Well-run firms of significant scale can employ their larger asset bases and economies to overwhelm competitors and bludgeon their ways into new markets. Scale alone says nothing about the efficacy of a firm and therefore should never be used as a standalone metric to rate the attractiveness of an investment. Although larger firms are able to more heartily weather downturns and their returns are typically less volatile, they tend to produce lower shareholder returns than smaller and nimbler firms. Therefore, scale is an excellent example of a metric where the total is more than the sum of its parts when analyzed in the broader contexts of profitability, efficiency, and other factors. To measure economies of scale, we might turn to:

      • The size of the manufacturing base measured by net operating assets (NOA);
      • The size of the entire enterprise measured by enterprise value (EV); and,
      • Market share measured by a firm’s sales relative to a summation of its competitors’ sales.
  • Protections and intellectual properties
    All things held equal, investors should prefer to partake in enterprises with high barriers to entry and protected value propositions. While the value of intangible assets on a balance sheet is a good place to begin, stated values are often just accounting conventions with little basis in reality. Often goodwill is lumped into intangible assets and should be removed from the calculus. Efforts should also be taken to adjust the book value of intangible properties to their intrinsic and/or market worth.
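A minimal sketch of the gross profitability check referenced above; the margin figures are hypothetical:

```python
def gross_margin(gross_profit, revenue):
    return gross_profit / revenue

def gross_margin_trend(margins):
    """Period-over-period changes in gross margin; large declines are red flags."""
    return [round(b - a, 4) for a, b in zip(margins, margins[1:])]

# Hypothetical: margins slipping from 42% to 35% over three periods
print(gross_margin_trend([0.42, 0.41, 0.35]))  # [-0.01, -0.06] -> investigate
```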


6. High Relative Growth versus Peers
Growth (i.e., fiscal momentum) is a cliché among investment methodologies. While fiscal growth is an important consideration, one reason that growth matters is because people think it matters, due to the breadth and salience of research on the phenomenon [14. Fama and French. Size, Value, and Momentum in International Stock Returns. 2013] [15. Quantpedia. Combining Earnings, Revenue, and Price Momentum]. Despite its overuse, growth still has important implications in the analysis of quality. Firms which have outpaced their peers in terms of earnings and revenue growth have demonstrated their ability to compete and gain market share. Holding all other things equal, a firm which has grown faster than its peers can be expected to continue to outpace them due to the inertial forces of consumer preference, the persistence of a good business strategy and its implementation, and plain old hype.

On a scale of relative importance, earnings-per-share (EPS) growth is more important than sales growth. Strong EPS growth indicates both that the firm has grown and that it did not over-dilute shareholder equity in doing so. Strong sales growth is best when accompanied by strong earnings growth, but sales growth in and of itself can be enough to drive stock prices higher when a firm captures additional market share and increases its competitiveness. On a recent episode of Mad Money airing 6 February 2014, Jim Cramer (whom I respect) proclaimed that this market wants “revenue-fueled growth” [16. Mad Money. 2014 February 6]. Amazon ($AMZN), for example, which has had immense revenue growth, also has no plans to achieve large scale profitability. Yet its valuation continues to soar as investors vie to take part in an enterprise whose goal is to become the player in online shopping. Growth can be measured effectively across short (i.e., year-over-year) and multi-year time horizons.

Another important consideration of a firm’s growth versus its peer group is the acceleration of growth. This holds for measures of both earnings and revenue growth. When measuring acceleration across multi-year spans, it is important to use the annualized growth rate.
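For multi-year spans, the annualized (compound) rate and its change can be sketched as follows; the EPS figures are hypothetical:

```python
def cagr(begin_value, end_value, years):
    """Annualized (compound) growth rate over a multi-year span."""
    return (end_value / begin_value) ** (1 / years) - 1

def growth_acceleration(recent_rate, earlier_rate):
    # Positive when growth is speeding up, negative when it is slowing
    return recent_rate - earlier_rate

# Hypothetical: EPS grew from 1.00 to 1.60 over three years -> ~17% annualized
print(round(cagr(1.00, 1.60, 3), 4))  # 0.1696
```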

A firm’s fiscal growth and the acceleration thereof should be measured relative to its sector or a smaller peer group. Although measuring growth relative to the entire market has as much (if not more) empirical predictive power, this approach shirks the emphasis on quality in favor of a pure growth-centric investment style.

7. Shareholder Friendly Policies
Shareholder friendly policies are indicative of a shareholder conscious management. These policies can be measured by a firm’s total yield to investors, a reasonable dividend payout policy, a consistent growth of dividends, and limited potential dilution to owners of common shares.

  • Yield and Net Payout Yield

Simple yield is expressed as the indicated annual dividend over market price. While a good yield is preferable, simple dividend yield has shown deteriorating predictive power in assessing future returns. Gray and Vogel suggest that net payout yield (NPY) has supplanted simple dividend yield partly due to tax disincentives against regular dividends [17. Gray and Vogel. Dissecting Shareholder Yield.] [18. Quantpedia. Net Payout Yield Effect]. Net payout yield is the net total of monies a firm remits to equity investors (dividends and buybacks, less new issuance).

On the whole, NPY should take precedence over simple yield. When a firm pays a dividend, cash is remitted back to the investor rather than being placed into the retained earnings account of shareholders’ equity. Likewise, when a firm remits monies back to its investors through a combination of regular dividends and share repurchases (i.e., buybacks), it increases EPS and other per-share metrics. A strong NPY is highly indicative that management believes shares are under-valued and that their stock will give a good return on investment. If C-level executives think that their stock is a good deal, it probably is. NPY should be averaged over a longer time horizon to estimate management’s long term commitment to returning shareholder value. I measure NPY as follows:

(11) {NPY = } \frac {\sum_{t=1}^{T} \left[ (Equity_{Purchased} - Equity_{Issued}) + (1- R_{Tax}) \times (Dividends_{Total} - Dividends_{Pfd}) \right] }{Market \, Capitalization}

Where:
{Equity_{Purchased} =} Total common equity purchased during the reporting period (typically found in the Cash Flows from Financing section of the Statement of Cash Flows);
{Equity_{Issued} =} Total common equity issued during the reporting period (typically found in the Cash Flows from Financing section of the Statement of Cash Flows);
{R_{Tax} =} An estimate of the long term corporate percent tax rate on profits (to estimate the disincentivizing double taxation effect);
{Dividends_{Total} =} Total cash dividends paid during the reporting period (can be found either on the Income Statement or in the Cash Flows from Financing section of the Statement of Cash Flows);
{Dividends_{Pfd} =} Total preferred dividends paid during the reporting period (can be found either on the Income Statement or in the Cash Flows from Financing section of the Statement of Cash Flows);
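A minimal sketch of formula 11, assuming the summation spans the entire numerator and that per-period figures are supplied as simple records:

```python
def net_payout_yield(periods, tax_rate, market_cap):
    """Formula 11: multi-period net payout relative to market capitalization.

    periods: iterable of dicts with equity_purchased, equity_issued,
             dividends_total, and dividends_pfd for each reporting period
    """
    total = sum(
        (p["equity_purchased"] - p["equity_issued"])
        + (1 - tax_rate) * (p["dividends_total"] - p["dividends_pfd"])
        for p in periods
    )
    return total / market_cap
```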

  • Dividend Payout Policy

A high yield can be misleading. Normally, a strong yield is indicative of shareholder friendly policies, but a yield that is too high can be a red flag that the firm is under-investing its profits back into itself or even that it is disbursing cash in a manner which it cannot truly sustain. Comparing a firm’s payout ratio (dividends paid over net income) to its industry group’s expected payout ratio is a good check against unsustainable dividend policies.

  • Long Term Dividend Growth

Firms which have steadfastly increased dividends over many years are known as dividend aristocrats. Typically dividend aristocrats are high quality firms which afford investors a wide margin of safety. These aristocrats often command valuations that exceed their peers, and often with good reason. Any measure of long-term dividend growth and consistency of that growth should suffice in locating these enterprises. As with any measure, though, this too can mislead if not taken in context with the broader fundamental picture.

  • Dilution Potential

Dilution is a complicated topic. It can come in many forms: stock options, warrants, private placement in public equities (PIPEs), issuance of new debt, the issuance of vanilla common shares as compensation, or a change in corporate structure. Basically, dilution is any action which decreases the common shareholder’s residual per share claims on cash flows and assets. Normally, some dilution is acceptable and can even be a good thing for shareholders when it incentivizes managers and employees to be successful. In certain situations, dilution can be extremely deleterious to owners of common shares and shows a blatant disregard for common shareholders and long term value creation (e.g., hidden dilution; or, dilution that does not benefit a firm’s value creators).

Some dilution is actually healthy because it incents employees and managers to succeed. If the company does extremely well, then employees and managers get rich off of stock options and other goodies. If common investors can go along for that ride, what difference does a little bit of dilution make?

The difference lies in who the beneficiaries are. If employees and managers stand to benefit from the firm’s success, then the incentives lie in the right place. However, if the beneficiaries of shareholder dilution are external investors, then watch out. The most insidious forms of dilution are usually tucked away in the footnotes and in off-balance-sheet obligations. Discovering them requires actual detective work. Moreover, future dilution in the form of brand new issuances is often unknowable. Due to the limitations of readily available data, empirical data analysis does not support the hypothesis that firms at a high risk for shareholder dilution (inferred by the ratio of Diluted Weighted Average Shares to Basic Weighted Average Shares) under-perform.

Nevertheless, checks against potential dilution deserve a place in the analysis of quality. Used in the broader analysis, metrics like the comparison of basic shares to fully diluted shares can raise red flags for potentially deleterious corporate policies. However, quick checks like this one do not excuse investors from performing in-depth due diligence checks against nefarious corporate policies.
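The quick check mentioned above reduces to a one-line ratio; what counts as “materially above 1.0” is a judgment call:

```python
def dilution_ratio(diluted_wa_shares, basic_wa_shares):
    # Rough red-flag screen: values materially above 1.0 warrant a closer look
    return diluted_wa_shares / basic_wa_shares
```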

8. Expectations Management Policy
Firms which tend to beat Wall Street consensus estimates tend to outperform over both the short and long terms. There has been a lot of research on the connection between earnings surprises and future performance; it is one of the strongest anomalies known in the entire marketplace. While the estimates themselves are external to a company, surprises deserve a place in the analysis of quality because firms which tend to surprise tend to keep surprising. Strong and consistent earnings surprises are extremely strong indicators of an effective expectations management policy.

Anyone can “buck the odds” once or twice. Consistently beating expectations, however, tends to be more than dumb luck. Management that tempers analysts’ expectations tends to be conservative in other areas as well. Conversely, management that hypes expectations tends to disappoint. Markets will not hesitate to punish with extreme prejudice firms that strongly and/or consistently miss analysts’ expectations. Under-promising and over-delivering is not just the right thing to do… it’s good for business.

Quantpedia has compiled numerous studies on this anomaly (membership may be required for some links) [19. Quantpedia. Trading on Earnings Announcements] [20. Quantpedia. Value Combined with Post-Earnings Announcement Drift] [21. Quantpedia. Earnings Revision Strategy] [22. Quantpedia. Reversal in Post-Earnings Announcement Drift] [23. Quantpedia. Institutional Ownership Effect During Earnings Announcements] [24. Quantpedia. Combining Post-Earnings Announcement Drift with Accrual Anomaly] [25. Quantpedia. Earnings Announcement Premium] [26. Quantpedia. Post-Earnings Announcement Effect]. However, an analysis of earnings surprises can sufficiently be performed through a simple study of Standardized Unexpected Earnings (SUE). SUE is safely cross-comparable across all industries and can be used in short-term and longer-term historical studies. SUE has no meaning for firms with little analyst coverage, however.
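As a sketch, one common SUE convention scales the surprise by the dispersion of analyst estimates; other studies scale by the standard deviation of past surprises instead:

```python
def standardized_unexpected_earnings(actual_eps, consensus_eps, estimate_stdev):
    """SUE: earnings surprise scaled by analyst-estimate dispersion.

    Conventions vary; some studies divide by the standard deviation of
    historical surprises rather than the estimate dispersion.
    """
    return (actual_eps - consensus_eps) / estimate_stdev
```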

9. Smart Money Sentiment
Strong and broadly based insider buying is an extremely strong indicator of the sentiment of smart money. Arguably, the smartest money consists of insiders. They should know a lot about their business because they either run it, live it, or already own a large chunk of it. On the whole, healthy levels of insider buying are indicative of a healthy corporate culture and a positive overall attitude towards a firm’s potential. If insiders are buying strongly and broadly, it is definitely time to take notice.

Contrary to what many people think when they hear “insider trading”, the practice is not illegal unless an insider acts on non-public or privileged information. Insiders legally buy and sell shares of the companies they work for all of the time. Anyone with a beneficial ownership in a firm (defined by SEC Rule 16a-1(a)(2)) is an insider and must file an SEC Form 4 anytime he or she causes a significant change to the beneficial ownership of a firm. The data extracted from SEC Forms 4 indicate when insiders are buying and selling stock.

On the whole, insider buying is more telling than insider selling. There are many reasons for an insider to sell a stock (e.g., paying for tuition, refinancing a home loan, planned sales, exercising of stock options, etc…). There is only ONE reason why an insider would buy their stock… because he or she thinks it will go up.

The only caveat is that sometimes insider trading can be a form of financial engineering whereby managements knowingly send false signals to the broader market. These false signals often leave footprints, however. Additional information regarding the nature of the insider transactions can be gleaned by discerning whether insider transactions were done privately, planned, or on the open market.

Signals from insiders, though insiders constitute a much smaller group of individuals, are definitely more reliable than all of Wall Street’s analysts combined. Empirical data shows that analysts, steeped in the finest schools and traditions of finance, can predict stocks about as well as a coin flip; I echo Burton Malkiel: “a blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts.” [27. Rick Ferri. Forbes. Why Smart People Fail to Beat the Market]

Other types of smart money exist. Most notably, institutional investors with more than $100 M in assets under management (AuM) must file Form 13-F with the SEC to disclose their equity holdings on a quarterly basis. NASDAQ has an excellent interface that allows users to access this information in a concise format either by stock or by institution ($AAPL and Susquehanna used as examples). How much of a stock is held by institutions is usually irrelevant; most institutions will under-perform the averages over the long term [28. Bill Barker. Motley Fool. The Performance of Mutual Funds]. However, some institutional investors possess extraordinary alpha; these are the ones to follow.

10. Market Positioning
In 1970, The Boston Consulting Group came up with a way to analyze market share and growth called the Growth-Share Matrix, or BCG Matrix. In its raw form, it has limited use to investors due to its original purpose as an internal management tool. Whereas managers are stuck with their businesses and have to make the most of the cards they are dealt, investors can more nimbly choose the place and time of their battles.

The intended use of the BCG Matrix is best demonstrated graphically:

[Figure: example BCG Matrix plot (image provided by Wikipedia; for informational purposes only)]

Basically, the BCG Matrix subdivides product-lines and even whole firms into one of four categories.

  • Cash Cows – Large market share, low growth industry
  • Dogs – Low market share, low growth industry
  • Question Marks – Low market share, high growth industry
  • Stars – High market share, high growth industry

In its raw form, however, the BCG Matrix is not able to differentiate between good and bad investments because it would seem to suggest that “Stars” are the best investments. While it may be management’s goal for their brands and product lines to attain star status, it is in the investor’s interest to identify which investments stand to benefit the most from sector tailwinds and which ones can most resiliently weather adverse market conditions. In short, we want to identify firms which are the right size and at the right place at the right time.

Given the investor’s flexibility, the most prudent investments on the matrix are “Question Marks” which can attain future star status and “Cash Cows” which can resiliently stand up to slowing or adverse market conditions. Although “Stars” can make for good investments, they often have already peaked. “Dogs” should typically be avoided.
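A minimal sketch of the four-quadrant classification; the cutoffs are illustrative assumptions, not part of the original matrix, and should be calibrated to the peer group:

```python
def bcg_quadrant(relative_market_share, industry_growth,
                 share_cutoff=1.0, growth_cutoff=0.10):
    """Place a firm or product line into a BCG Matrix quadrant.

    relative_market_share: share relative to the largest competitor
    industry_growth: forward-looking industry growth rate
    Cutoffs are illustrative assumptions.
    """
    high_share = relative_market_share >= share_cutoff
    high_growth = industry_growth >= growth_cutoff
    if high_share and high_growth:
        return "Star"
    if high_share:
        return "Cash Cow"
    if high_growth:
        return "Question Mark"
    return "Dog"

print(bcg_quadrant(0.8, 0.15))  # 'Question Mark' -> potential future star
```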

There are, furthermore, some ambiguities in actually implementing a BCG Matrix analysis. First, it is unclear what metric to use for market share. For all intents and purposes, I normally start with market share as a percentage of market capitalization versus a broader peer group (i.e., industry) or a narrower peer group (i.e., direct competitors). However, I also advise balancing this picture out by looking at market share as a percentage of sales and as a percentage of profits. Additionally, I strongly advise against using historical growth rates even though this information is more readily available. Forward looking peer group growth rates can be either weighted means or simple medians. This information can be derived from many financial databases (CompuStat and I/B/E/S seem to be the best; Estimize’s API looks promising). Manually aggregating forward looking sector and peer group growth rates is time prohibitive, but Yahoo! Finance provides this information free of charge (see Yahoo! Finance’s Next 5 Years (per annum) Growth Estimate for Google).

If performed correctly, a BCG Matrix is a capstone achievement to a well-rounded analysis of quality. Used on its own, it holds little benefit. However, when other Pillars of Quality dimensions are added to the calculus, it truly becomes a powerful and totally unique analytical tool.

11. The Qualitative Aspect
Surprise! There’s an eleventh pillar and it’s by far the most important.

Thus far we have been dealing with aspects of a quality investment that are quantifiable using commonly available data from financial statements, websites, and other databases. However, the quantitative aspect is but a microcosm in the overall schema. After all the easy answers have been exhausted, the real due diligence process begins.

The due diligence process requires investors to ask hard questions regarding any number of “known unknowns” and “unknown unknowns”. We might start, though, by considering obvious factors well suited to qualitative analysis, like:

  • Caliber of management
  • Product pipeline
  • Expected return on current investments
  • Internal controls
  • Labor relations
  • Auditors
  • Corporate structure
  • Catalysts for change
  • Etcetera…

Along the way to forming a well-rounded operational picture of a firm, we should also be looking for catalysts which could change a firm’s value or simply the market’s valuation of it. As always, though, answering these initial questions often results in more questions. Rather than reinvent the due diligence process, I advise investors to turn to Philip Fisher’s “15 Points to Look for in a Common Stock” and other writings. Not only do these points provide a solid basis for research, but they further illustrate the improbability of divining truly unique insights by consulting the oracle of readily available data [29. Truly juicy data cannot normally be found on financial statements or the regular internet; often it has to come from unique insights and information sources. Philip Fisher’s Scuttlebutt method offers up creative methods for sourcing this type of information. Familiarity with insider trading laws and other ethical obligations is advised before applying this approach.].

Summary of Findings
The data on the quantifiable aspects of using the Ten Pillars indicates very positive return expectancy across widely ranging parameters. While the precise end results are certainly dependent on a number of factors, the results seem to hold up extremely well for changes in factor weights, time periods, selectable stock universes, liquidity and price constraints, size, re-balancing frequency, and more. In fact, it is quite easy to improve the data by “optimizing” the results (i.e., ‘tweaking’ the weightings). However, optimization is not the goal. Rather, the goal is to demonstrate robustness. To that effect, I believe that the results presented below are approximately the median of a broad range of possibilities. The following graph shows approximately 15 years of historical stock returns ranked in quintiles according to the proprietary “quality score” outlined above:

[Figure: ~15 years of historical stock returns ranked by quality score]

The parameters of this ranking system are as follow:
Period: 01/02/99 – 02/22/14
Rebalance Frequency: Weekly
Ranking Method Percentile: NAs Neutral
Slippage (% of trade amount): 0.0
Transaction Type: Long
Universe: Ex Financials/OTC/MicroCaps
Benchmark: S&P 500
Number of Buckets: 20
Minimum Price: 0.5
Sector: – ALL –

Note that this ranking system was performed using historical data from CompuStat and other databases as provided through the Portfolio123 API. Aggregate weightings for the Ten Pillars were equal. Weightings for individual factors were chosen arbitrarily, reflecting an initial ‘best estimate’. The selectable universe included all exchange-traded stocks (excluding the financial sector) for which fundamental data was available ex ante (i.e., survivorship bias is accounted for in the CompuStat database). Additionally, stock prices must have been at least $0.50 per share, and firms must have had at least $100 M in annual sales volume or at least $50 M in market capitalization. As of 2/24/2014, the ranked universe consisted of 3,441 stocks. Stocks are equally weighted (as opposed to capitalization weighted) in each quintile. Equal-weighting partly explains the large discrepancy between the median quintile return and the long-run S&P 500 return. Equal-weighting may or may not provide a realistic depiction of actual results depending on capacity and AuM constraints. As always, PAST PERFORMANCE IS NOT INDICATIVE OF FUTURE RESULTS.

I would like to remind readers that the above performance included no rankings based on traditional value metrics. I believe that adding the value dimension to this analysis is completely additive to the end result. From this preliminary data, it appears as though “a good company can be a good investment” and that I can quit “dumpster diving” for dirt-cheap stocks.

I invite any discussion, comments, or questions.

– D