My intent is that this post become a living document which houses my own personal magnum opus on asset valuation. Herein and throughout I will posit certain axioms of asset valuation that I believe to be relevant for distinguishing between a thing’s market versus true value. Upon review, one might (correctly) deduce that none of these axioms are my original ideas.
The axioms are summarized through the following postulates:
Postulate (1): The market price is usually the right price.
 More verbosely stated, the right price for an asset can be any of a range of prices which do not result in arbitrage (i.e., a free lunch). The no-arbitrage principle is the pillar of the efficient market hypothesis (EMH) and the fundamental theorem of asset pricing (FTAP). Complete markets are required for strong forms of no-arbitrage to hold; weaker (i.e., statistical) forms of this principle have no such hard stipulation.
 That the market is fairly valued is an extreme tautology if one uses market-based determinants for asset fair valuation. The fair value of a generic asset, $V$, is reflexively its discounted net present value (NPV) which, in turn, is defined as the integrated value of internally generated free cash flows, $FCF(t)$:

$$V = \int_0^{T} FCF(t)\, e^{-rt}\, dt \qquad \text{(eq 1)}$$
in which continuous geometric compounding of the short (i.e., risk-free) rate, $r$, produces the exponential discount factor, $e^{-rt}$.
For $FCF(t)$ growing at a fixed exponential rate, i.e., $FCF(t) = FCF_0\, e^{gt}$, this becomes the well-known growing annuity formula:

$$V = \frac{FCF_0}{r - g}\left(1 - e^{-(r-g)T}\right) \qquad \text{(eq 2)}$$
As $T \to \infty$ (with $r > g$), this becomes the perpetuity equation of the Gordon Growth Model ubiquitous in equity valuation:

$$V = \frac{FCF_0}{r - g} \qquad \text{(eq 3)}$$
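The growing annuity and its perpetuity limit are easy to verify numerically. Below is a minimal Python sketch of the two formulas; the cash flow, discount rate, and growth rate are illustrative assumptions, not figures from the text:

```python
import math

def growing_annuity_pv(fcf0: float, r: float, g: float, T: float) -> float:
    """PV of cash flows FCF0 * e^(g*t), discounted continuously at r over [0, T] (eq 2)."""
    return fcf0 / (r - g) * (1.0 - math.exp(-(r - g) * T))

def gordon_perpetuity_pv(fcf0: float, r: float, g: float) -> float:
    """Limit of eq (2) as T -> infinity: the Gordon Growth perpetuity (eq 3)."""
    assert r > g, "convergence requires r > g"
    return fcf0 / (r - g)

# Assumed, illustrative inputs: $100/yr of free cash flow, 8% discount, 3% growth.
pv_30yr = growing_annuity_pv(100.0, 0.08, 0.03, 30.0)
pv_perp = gordon_perpetuity_pv(100.0, 0.08, 0.03)
print(pv_30yr, pv_perp)  # the finite-horizon PV approaches the perpetuity as T grows
```

Note how the 30-year annuity already captures roughly three-quarters of the perpetuity value, which is why terminal-value assumptions dominate most DCF models.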
If the discount rate, $r$, is defined as the market-weighted value (with capitalization weights $w_i$) of the weighted average cost of equity, $k_i$, then the fair value of the market is a thing which reflexively defines itself:

if: $r = \sum_i w_i k_i$

and: $V_i = \int_0^{T} FCF_i(t)\, e^{-rt}\, dt$,

then: $V_{mkt} = \sum_i V_i$
As long as assumptions regarding fair value are held internally consistent, the market is the capital-weighted reflection of market participants’ consensus regarding the expected values of determinant pricing variables under a generally broad range of possible scenarios.
 In even a weakly efficient market, common “valuation” ratios tell us little of value, but do tell us a lot about the expectations of market participants. If we take seriously the other participants in a market (which we should), there is a justification for their every purchase and sale. As a result, security prices which appear “cheap” — relative to ubiquitous price multiples to earnings, book, and other fundamental factors — reflect expectations about the future performance of the underlying company (and, to some degree perhaps, expectations regarding the stock’s future performance independent of the underlying company’s future performance). A stock which appears “cheap” is not necessarily “a value” — these are not the same thing, but they have been conflated according to the canonical value investor’s worldview. More often than not, an apparently cheap stock actually deserves to be cheap. These value traps are increasingly common as markets become increasingly efficient due to the increasing dispersion of financial data and of knowledge surrounding the natures of mispricings and risk premia among market participants. Yet, the misconception that “cheap = value” persists because apparently “cheap” stocks have in aggregate outperformed the averages over the past century. This outperformance, however, was not necessarily due to a systemic “value” phenomenon. Rather, asymmetric upside and margin of safety were made possible because their prices reflected overwrought pessimism and/or capriciousness. Indeed, the cross-section of “cheap” equities confirms that nearly all excess returns are due to a few outliers which exceeded the market’s low expectations. And of these outliers, the majority resided within the dusty corners of the market.
 Exceeding low expectations is much easier and more likely than exceeding high expectations. From this premise, explanations for market behavior rooted in behavioral economics allow for the relative predictability of some returns without relaxing the EMH. Note that while it is possible for individual securities to be mispriced within a broadly efficient marketplace, abounding inefficiency should not become our base assumption, lest we become overly enthralled with any number of value traps.
 True value investing is a lot more involved than buying cheap.
 Fundamentals which are expected to improve offer information which is distinct from trailing valuation ratios. This effect may be objectively captured, albeit also using trailing data, within the Piotroski Effect. Another possible, but not mutually exclusive, explanation is that Piotroski factors are mistakenly interpreted as value-centric when in reality they are conflated momentum factors.
 The veracity of EMH precludes almost any possibility that easily discerned patterns in widely dispersed data, such as price and volume, have any real and/or sustained predictive power over future price paths. An efficient market is in direct contravention of the causal underpinnings of nearly all forms of technical analysis. Proponents of technical analysis suppose that observed market prices discount all information related to supply and demand. This is not controversial. Proponents further claim that price action results in repeated and predictable patterns which provide signals regarding future areas of supply (i.e., resistance) and demand (i.e., support). That support and resistance exist is also not controversial. However, the ability of past price fluctuations to predict future paths is extremely controversial since it implies a free lunch. Even if a certain pattern did repeat itself in a predictable manner, the barriers to entry for arbitrage based on widely dispersed data are virtually nonexistent. Wary of the potential for free money, arbitrageurs would front-run the expectation, thereby changing the pattern altogether. Although certain technical patterns may contain information about the future, the nature of efficient markets requires that profiting by these patterns rely on proprietary knowledge, data, technology, and/or execution techniques.
 An efficient market does not rule out the possibility that constituent securities within that market may experience mispricings. True mispricings result as a consequence of investors’ errors regarding a security’s true value. Moreover, the very factors that contribute to errors also tend to be those that relate to trading costs and risk premia which may be present. The Fama–French (FF) factors work most strongly within the dusty corners of the market (i.e., within securities which are hard to short, illiquid, undercovered and/or underfollowed, of undercapitalized and/or tightly-held companies, et cetera). As a rule, the relative predictability of these securities’ returns due to errors is not a strong violation of the no-arbitrage principle if we permit that these factors are also explicable barriers to arbitrage. Stated plainly, investor errors are most likely to persist in areas of the market in which: a) smart money (which generally equals big money) cannot or does not care to scale; b) unique information is more likely to exist; and/or, c) regulators are less likely to notice or care about asymmetry. An edge-seeking small investor should (law-abidingly) look in these places first.
 Moreover, the relative predictability of some asset returns due to the presence of risk premia is anticipated by EMH.
Postulate (2): The presence of momentum observed within many marketplaces is undoubtedly the most serious affront to Postulate (1).
 The presence of a predictable return spread which is not related to any kind of risk premia or trading costs looks like a classical free lunch. Momentum, if it truly does exist, utterly violates classical thinking on market efficiency and the no-arbitrage principle of asset pricing. Its presence was popularized in a 1997 paper by Mark Carhart, whereupon it was begrudgingly added to the classical FF factor model. So inconvenient is momentum to EMH that Eugene Fama has called its mere existence “the biggest embarrassment to the theory”. Anyhow, momentum has since been removed from the FF framework in favor of more explicable return spreads. Explicable, within the FF framework, simply means that observed excess return spreads do not really exist after costs and risk premia are factored in. Otherwise, it would be an arbitrage upon which knowledge of its mere existence makes it *poof* disappear. It’s like the old economics joke:
Two economists are eating lunch together and one of them points out to the other what appears to be a $100 bill lying on the ground. The other responds, “don’t worry — if that were a real $100 bill, someone would have already picked it up by now.”
 Kenneth French, whether or not he eschews the core logic of their existence, continues to track anomalies related to portfolios constructed from prior returns within his data library, which includes short-term mean reversion in addition to longer-term momentum. The presence of short-term mean reversion is less problematic because its return spreads are at least partially explicable through first-mover advantages and trading costs. The presence of mean reversion also implies that price deviations due to short-term perturbations (e.g., supply–demand imbalances) are quickly identified and corrected as a thing returns to its equilibrium value. For example, if I am compelled to sell a thing (e.g., due to a margin call; in order to pay taxes or put money down on a house; et cetera) and thereby drive down the price below its equilibrium, taking the other side of that trade should result in a positive expectancy. Short-term mean reversion is therefore not very problematic from the standpoint of EMH. Momentum, however… economists just can’t be having any free lunches.
 The relatively new field of behavioral economics, conceived by Daniel Kahneman and Amos Tversky, has barely begun to unravel the crux of the human condition which compels people to “buy high and buy higher”, which is presumably at the root of the momentum anomaly. As a general aside, the field of behavioral economics is still a blue sky. A graduate student could probably learn most everything that has been written on the topic within a two-year program. On the other hand, students in other general fields of finance and economics could at best only scratch the surface of the massive corpus of literature — thus, specialization.
Postulate (3): The veracity of Postulate (1) does not imply that the market is usually right.
 That the market price is usually the right (i.e., no-arbitrage) price simply means that it is very difficult to beat the market. It does not mean that the current market price is an accurate estimate of the future market price — this is a common misreading. There exist inherent elements of uncertainty in asset prices, virtually indistinguishable from random walks, which will more than likely result in the following inequality:

$$P_t \neq P_{t+\Delta t} \qquad \text{(eq 4)}$$
Rather, changes in asset prices can be thought of as taking a random walk, as in a Wiener process — which, with drift and proportional volatility, is Geometric Brownian Motion — in which changes in price over an interval are the result of the passage of time ($dt$), drift ($\mu$), variance ($\sigma^2$), and a normal random variable ($Z$):

$$dP_t = \mu P_t\, dt + \sigma P_t\, dW_t \qquad \text{(eq 5)}$$

$$P_T = P_t \exp\!\left[\left(\mu - \tfrac{\sigma^2}{2}\right)(T - t) + \sigma\sqrt{T - t}\, Z\right], \quad Z \sim N(0, 1) \qquad \text{(eq 6)}$$
In this proposed random-walk environment, in which asset prices are virtually indistinguishable from semimartingales, the current price is the probabilistic best estimate for future prices. Therefore, under the risk-neutral expectation, $\mathbb{E}^{\mathbb{Q}}$, given by the FTAP — which requires that at least one risk-neutral measure exist equivalent to the probabilistic (i.e., real-world) measure in order for no free lunches to exist — the current price provides the best probabilistic estimate for future prices, i.e.:

$$\mathbb{E}^{\mathbb{Q}}\!\left[e^{-r(T-t)}\, P_T \mid \mathcal{F}_t\right] = P_t \qquad \text{(eq 7)}$$
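The martingale property of discounted prices under the risk-neutral measure can be checked by brute force. The following Python sketch samples terminal prices from a Geometric Brownian Motion whose drift is set equal to the short rate (the defining feature of the risk-neutral measure) and confirms that the discounted average recovers today’s price; all numeric inputs are illustrative assumptions:

```python
import math
import random

def simulate_gbm_terminal(p0, mu, sigma, T, n_paths, seed=0):
    """Sample P_T = p0 * exp((mu - sigma^2/2)*T + sigma*sqrt(T)*Z), per eq (6)."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    return [p0 * math.exp(drift + vol * rng.gauss(0.0, 1.0)) for _ in range(n_paths)]

# Under the risk-neutral measure the drift equals the short rate r, so the
# discounted expectation should recover today's price (eq 7).
p0, r, sigma, T = 100.0, 0.05, 0.20, 1.0
terminal = simulate_gbm_terminal(p0, mu=r, sigma=sigma, T=T, n_paths=200_000)
discounted_mean = math.exp(-r * T) * sum(terminal) / len(terminal)
print(discounted_mean)  # ~100, up to Monte Carlo error
```

Setting `mu` to anything other than `r` breaks the equality, which is exactly the distinction Postulate (3) draws: the real-world drift $\mu$ is unknown, but no-arbitrage pins down the risk-neutral one.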
Postulate (4): In cases in which Postulate (1) is false, there may exist some form of arbitrage.
 Strong forms of arbitrage demand a risk-free return which, in turn, depends on mispricings which are independent of assumptions regarding future price paths and the nature of their randomness. Strong forms of arbitrage are usually fleeting. Most investors will never definitively discover a true risk-free arbitrage opportunity in their lifetimes.
 However, if/when an investor is able to deduce that the nature of uncertainty is not completely random, he/she might deduce the presence of a statistical arbitrage — i.e., an expectation for profit above the risk-free rate of return. Statistical arbitrages may abound especially when and where there exists informational asymmetry. An investor has a statistical arbitrage when/where he/she determines that expectations implied by current market prices are unlikely to reflect revised expectations once other market participants become aware of either their previous misjudgements and/or the presence of contradictory information. Statistical arbitrages do not require complete markets and do not present a clear violation of EMH.
 In reality, success in the market is the result of passively riding the wave, luck, skill, or compensation for assuming (perceived) risk. A 2017 paper entitled Do Stocks Outperform Treasury Bills? on the cross-section of equity returns since 1926 elegantly explains why most active investors underperform their self-selected benchmarks — because nearly all excess equity returns are attributable to the top-performing quintile, underdiversified portfolios (i.e., most actively-managed portfolios) tend, in aggregate, to underperform the passive index. And for any actively-managed portfolio which does indeed outperform, it is exceedingly difficult to discern whether the outperformance was due to luck or skill. See Warren Buffett’s essay on The Superinvestors for a demonstration of the luck-versus-skill paradox involving “a national coin-flipping competition”. If the market is efficient, there is no such thing as a good coin-flipper (i.e., “active investor”) who is not also an arbitrageur. However, it is reasonable for an average investor to expect an excess rate of return over the long run by receiving premia for the assumption of perceived risk in a sufficiently diversified portfolio. The veracity of this so-called smart beta approach indicates that the bankable difference between a skilled investor/arbitrageur and a skilled risk-taker is semantic. In the real world, it is irrelevant whether returns above the risk-free or market rates of return are due to alpha (i.e., returns in excess of beta) or beta (i.e., correlated volatility among other factors).
Postulate (5): Postulate (4) depends on the singular premise that price seeks value.
 The assertion that “price seeks value” is a philosophical one, but it is also supported by the EMH and a lot of data. The EMH supposes that a market’s sole purpose is as a price discovery mechanism. The implication is that current prices reflect the current capital-weighted consensus. As new information regarding the present and future becomes known, prices will seek a new efficient equilibrium.
 Instances of excess return spreads which do not depend on the momentum anomaly (which may be interpreted as a form of massive cognitive dissonance) depend almost exclusively on violations of the time-value of money (TVM) principle. If the expected net present value of the internally generated cash flows of any generic asset can be shown to not equal its market price, then a statistical arbitrage may exist.
If: $V \neq P$, then there is a positive expectancy to a purchase or sale of the asset over the period, $t \to T$, in which price converges toward value. The expected profit received from identifying violations of TVM reflects the wisdom of the famous Warren Buffett axiom: “Price is what you pay. Value is what you get”. But then again, usually, “you get what you pay for”.
 Instances in which prices diverge from value are more readily arbitrageable when there are instruments with which to trade both $P$ and $V$, such as (the weak case of) constructing factor-based long–short portfolios and/or (the strong case of) the dynamic hedging of replicating payoff portfolios. A one-sided arbitrage trade is still risky even if the relationship between the expectation and the market holds; the risk is reduced only when one can transact one side of the expectation while in an (efficiently and cheaply) hedged position.
 Forms of statistical arbitrage which rely on TVM support the use of discounted cash flow analyses (DCF). While nearly all professional security analysts rely on some form of DCF, most fail to incorporate objective measures of uncertainty in their analyses. Furthermore, many analysts set discount rates equal to the weighted average cost of capital (WACC) as implied by the Capital Asset Pricing Model (CAPM). WACC as a proxy for opportunity cost inherent in TVM is not controversial. The utility of CAPM, however, is controversial.
 Stochastic pricing models (e.g., the Black–Scholes derivation for the no-arbitrage value of European-style put and call options) which model the expected value of a terminal, discrete cash flow based on the expected terminal value of an underlying stochastic process have made great progress in dealing with empirical uncertainty. However, this author is not yet aware of a closed-form model which handles the generic case of continuous stochastic cash flows which derive their time-dependent values from an underlying price process which is itself stochastic. In essence, the general formula for a stochastic annuity marries equation (1) on the time-value of an annuity with equation (6) on the expected measure of a random walk:

$$V = \int_0^{T} e^{-rt} \int_{-\infty}^{\infty} FCF\!\left(P_t(z)\right)\, \varphi(z)\, dz\, dt, \quad P_t(z) = P_0\, e^{\left(\mu - \sigma^2/2\right)t + \sigma\sqrt{t}\, z} \qquad \text{(eq 8)}$$

where: $\varphi(z)$ is the standard normal (i.e., Gaussian) probability density function defined by:

$$\varphi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}$$
 Note the analogy between equation (8) and the generic form of a double-integrated volumetric equation (i.e., a stochastic annuity is like the integral of pricing models which estimate the no-arbitrage value of a single, terminal payoff). Also, the dynamics of $P_t$ may not be lognormally distributed as equation (8) implies, and the cash flows may therefore have to be expressed as an expectation of a time-dependent payoff condition contingent upon a more general random process. The generic problem of determining the no-arbitrage fair value of a stochastic annuity holds great promise and should make a honking good graduate thesis.
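While a general closed form may be elusive, the double integral of a stochastic annuity is straightforward to evaluate numerically. The Python sketch below does so by midpoint quadrature for an assumed linear payout, $FCF(p) = c\,p$, a special case chosen because it admits a closed-form benchmark ($\mathbb{E}[FCF(P_t)] = c\,P_0\,e^{\mu t}$); every numeric input is an illustrative assumption:

```python
import math

def phi(z: float) -> float:
    """Standard normal probability density function."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def stochastic_annuity_pv(fcf_of_price, p0, mu, sigma, r, T, nt=400, nz=400):
    """Midpoint-rule evaluation of the double integral in eq (8):
    discounted expected cash flows over the lognormal marginal of a GBM price."""
    dt = T / nt
    dz = 12.0 / nz  # truncate z to [-6, 6]; the tail mass is negligible here
    total = 0.0
    for i in range(nt):
        t = (i + 0.5) * dt
        inner = 0.0
        for j in range(nz):
            z = -6.0 + (j + 0.5) * dz
            p_t = p0 * math.exp((mu - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
            inner += fcf_of_price(p_t) * phi(z) * dz
        total += math.exp(-r * t) * inner * dt
    return total

# Assumed inputs: a 4%-of-price payout, 3% drift, 25% vol, 8% discount, 20 years.
c, p0, mu, sigma, r, T = 0.04, 100.0, 0.03, 0.25, 0.08, 20.0
numeric = stochastic_annuity_pv(lambda p: c * p, p0, mu, sigma, r, T)
closed = c * p0 * (1.0 - math.exp((mu - r) * T)) / (r - mu)
print(numeric, closed)  # the quadrature should agree closely with the closed form
```

Swapping in a nonlinear `fcf_of_price` (e.g., an option-like payout floored at zero) is where the closed form disappears and the numerical double integral earns its keep.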
Postulate (6): Postulate (5) depends on the expectation that markets are rational over the short and long run.
 If markets are indeed rational, inefficiently priced assets will eventually become efficiently priced. However, as Keynes noted in days of yore, “the market can remain irrational longer than you can remain solvent”. Bouts of extended periods of massive cognitive dissonance are well-documented. But only in hindsight do these delusions of the masses become apparent to the masses.
 Apparent irrationality at the group level may actually be rational from the perspective of individuals when such behavior is sanctioned and supported by an artifice of regulatory and/or monetary distortions. Even so, the resultant bubble is still no less a bubble.
Postulate (7): The possibility that Postulate (6) is false indicates that risk management may be needed to avert ruination.
 The possibility that markets can deviate from rational behavior from time to time means that even the most skilled investors must practice risk management in order to avert major, even total, loss. The possibility that extended bouts of irrationality may prevail over the short and long term is especially relevant to investors who attempt to earn an excess rate of return by holding a portfolio which is more focused than the market portfolio. Active investors who hold very broad portfolios are, in essence, closet indexers.
 Hedging is a great idea! However, hedging a concentrated portfolio of equities in which it is difficult (nay, impossible) to trade the underlying assets is problematic. Hedges which utilize options (i.e., derivatives of equities; i.e., derivatives of derivatives) are good tools for risk management. However, the use of derivatives to manage risk is usually costly to put on and costly to adjust in relation to the expected profit. Moreover, self-financing positions (e.g., credit spreads) are typically not well-suited to hedging risk since the credit received is compensation for risk assumed. In practice, the loss-aversion provided through dynamic hedging is not a free lunch. Rather, the efficient compounding of capital over the long run simply demands a minimization of the likelihood of large, debilitating losses.
 The central tendency of large numbers of trials to converge upon the expectation — as in making a large number of individual bets within a given timeframe — is a generally undervalued risk-management approach which does not sacrifice essential return, provided that: a) direct and implied transaction costs are small in relation to the expectancy; and, b) the net expectation for each bet is positive.
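The law-of-large-numbers point can be made concrete with a toy simulation. The sketch below assumes a hypothetical 55/45 edge on unit bets and estimates the probability of finishing a series of bets at a net loss; the edge and bet counts are illustrative assumptions:

```python
import random

def prob_net_loss(edge_p: float, n_bets: int, n_trials: int = 5_000, seed=1) -> float:
    """Estimate the probability that a series of unit bets, each paying +1 with
    probability edge_p and -1 otherwise, ends with zero or negative net profit."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(n_trials):
        pnl = sum(1 if rng.random() < edge_p else -1 for _ in range(n_bets))
        losses += pnl <= 0
    return losses / n_trials

# With a fixed positive expectancy per bet, the chance of ending behind
# collapses as the number of independent bets grows.
for n in (10, 100, 1000):
    print(n, prob_net_loss(0.55, n))
```

At ten bets the gambler still loses nearly half the time; at a thousand bets ruin-by-variance is nearly impossible, which is condition (b) above doing its work.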
 Markowitz’s Modern Portfolio Theory (MPT) demonstrates that diversification averts extreme losses within a mean–variance setting. However, the presence of steady-state covariances in equity markets has not been proven, even if a more generic form of comovement is allowed to persist. The CAPM, an extension of MPT, possesses little utilitarian value aside from demonstrating the benefits of diversification.
 Most practical applications of CAPM are, contrary to canonical interpretation, not supported by the veracity of the Modigliani–Miller Theorem (MM) on the irrelevance of capital structure to firm value. Although Theorem II of MM differentiates between levered and unlevered equity beta, this original sense of beta was in terms of the firm’s assets-to-equity ratio. The CAPM (also called the Sharpe–Lintner–Black model) married the intuitions given by arbitrage pricing theory (APT) with an extension of the MM theorem by positing that equity beta could be directly observed by regressing stock returns against benchmark returns. It is this author’s opinion, however, that stock market returns contain little information on firm-specific risk factors. While the CAPM implies that a more volatile (i.e., “risky”) stock has a relatively greater expected return on equity, there is little empirical evidence which corroborates the intuition that $\mathbb{E}[r_i] = r_f + \beta_i\left(\mathbb{E}[r_m] - r_f\right)$ or that realized returns increase monotonically in $\beta_i$. FF (1993) found that when portfolios are adjusted for size, market beta’s explanatory value falls to zero (and that its utility is not likely salvageable through the remediation of analytical errors). Moreover, the opposite effect has been observed, in which low-volatility and low-beta portfolios have outperformed the averages.
 FF’s entire existence is predicated on disproving the CAPM’s central intuition that the risk factors which drive expected return can be directly observed from the single-factor beta coefficient (i.e., slope) of asset price returns versus an arbitrary benchmark. Still, the FF Three- and Five-Factor Models are merely more sophisticated implementations of APT. The linear dependence on explanatory variables implied by APT is not necessarily indicative of market participants’ views.
 The Kelly criterion, conceived by John Larry Kelly Jr. at Bell Labs in 1956, is a promising avenue for the construction of efficient portfolios in ways that mitigate the risk of ruin while preserving essential return. Professional gamblers have long employed this so-called Fortune’s Formula for sizing bets in relation to their bankrolls. Proper full-Kelly betting minimizes the expected number of bets required to double one’s bankroll; it also maximizes the long-run expected logarithmic growth rate of the bankroll. This particular convergence between the Kelly criterion and the normative discounting (i.e., logarithmic utility) methods used in canonical forms of TVM is indeed striking, but perhaps intentional. The Kelly criterion also anticipated the well-documented “volatility drag” phenomenon whereby the variance ($\sigma^2$) of logarithmic returns literally exerts a drag on the expected arithmetic growth rate ($\mu$), yielding the geometric growth rate ($g$):

$$g = \mu - \frac{\sigma^2}{2} \qquad \text{(eq 9)}$$
 Although this form of drag disappears if one assumes returns are continuous and logarithmic, real-world betting is discrete. In discrete scenarios with limited bankrolls, betting in excess of full-Kelly always carries a probability of ruin which approaches one as the number of bets grows. In other words, excessive leverage always results in an expectation of eventual and inevitable wealth destruction. This wealth-destruction effect, in fact, helps explain the dramatic long-run underperformance of leveraged ETFs, multiple financial crises, among other things.
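Volatility drag needs no randomness to demonstrate: a deterministic alternation of equal-magnitude gains and losses has zero arithmetic mean return yet compounds to a loss. A minimal Python sketch, with the ±30% swing chosen purely for illustration:

```python
import math

def per_period_growth(simple_returns):
    """Return (arithmetic mean return, realized compound growth per period)."""
    n = len(simple_returns)
    arith = sum(simple_returns) / n
    wealth = math.prod(1.0 + r for r in simple_returns)  # terminal wealth multiple
    geo = wealth ** (1.0 / n) - 1.0                      # geometric mean return
    return arith, geo

# Alternating +30% and -30% has zero arithmetic mean, but each round trip
# multiplies wealth by (1.3)(0.7) = 0.91, so the portfolio compounds at
# sqrt(0.91) - 1, i.e. about -4.6% per period: the variance drag of eq (9).
rets = [0.30, -0.30] * 50
arith, geo = per_period_growth(rets)
print(arith, geo)
```

The realized gap between the two means is close to $\sigma^2/2 = 0.045$, matching the eq (9) approximation; the same arithmetic is why a 2x-levered fund on a flat, choppy index bleeds value.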
 In order to exploit the flip side of variance, professional gamblers often employ fractional-Kelly betting in order to further minimize the risk of ruin while preserving an attractive rate of return. For example, half-Kelly betting halves the expected volatility of the bankroll while only decreasing its expected rate of return by 25%. Legendary investor and MIT professor Edward Thorp is a proponent of the Kelly betting system and an editor of The Kelly Capital Growth Investment Criterion, which seeks to identify how investors can utilize Kelly criteria to size bets within a complex investment universe consisting of an arbitrary number of continuous semimartingales. Of note, this special case of the Kelly investment criterion converges with the findings of Stochastic Portfolio Theory (SPT).
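The half-Kelly trade-off quoted above falls directly out of the continuous (diffusion) approximation, where log-growth at exposure $f$ is $g(f) = f\mu - f^2\sigma^2/2$. A short Python sketch, with an assumed, illustrative excess drift and volatility:

```python
def kelly_growth(f: float, mu: float, sigma: float) -> float:
    """Expected log-growth of wealth at exposure fraction f to an asset with
    excess drift mu and volatility sigma: g(f) = f*mu - f^2 * sigma^2 / 2."""
    return f * mu - 0.5 * f**2 * sigma**2

mu, sigma = 0.06, 0.20            # assumed edge and volatility, for illustration
f_star = mu / sigma**2            # full-Kelly exposure maximizes g(f)
g_full = kelly_growth(f_star, mu, sigma)
g_half = kelly_growth(0.5 * f_star, mu, sigma)
print(g_half / g_full)            # 0.75: half-Kelly keeps 75% of the growth rate
print((0.5 * f_star) / f_star)    # 0.5: ...with half the bankroll volatility
```

More generally, betting a fraction $c$ of full Kelly yields $c(2 - c)$ of the maximum growth with volatility scaled by $c$, so the growth penalty is second-order while the risk reduction is first-order: the gambler's rationale for fractional Kelly.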
 The field of finance which extends the continuous Kelly investment criterion and/or SPT in order to optimize expected logarithmic utility under a general case of asset comovement (or even the specific case of covariance) has not been invented yet, to my knowledge. This would make a honking good graduate thesis!
Postulate (8): Increasing information dispersion increases the likelihood that Postulate (1) is true (i.e., contributes to the obsolescence of all other postulates).
 The supposition that markets are trending towards increasing levels of efficiency is increasingly relevant in the information age. Transaction costs — barriers to arbitrage — are far lower now than in the past. Previously exclusive data is now widely dispersed and usually much less expensive. The computing power to crunch that data has also become exponentially cheaper and more available.
 The information regarding how to exploit that data is also now widely available. Quantpedia hosts a compendium of many supposed asset pricing factors and anomalies (i.e., those things which worked in the past). If we believe in even the weakest form of EMH, we would presume that dispersion of such knowledge leads to its obsolescence.
 Many supposed asset pricing anomalies are the expected result of overfitting. A May 2017 NBER study tests the significance of 447 known asset pricing anomalies (57, 68, 38, 79, 103, and 102 variables from the momentum, value-versus-growth, investment, profitability, intangibles, and trading frictions categories, respectively), finding that as many as 64% may be insignificant at the 5% significance level. Of these, many are expected to be spurious. But perhaps more importantly, the authors cite how anomalies’ predictive powers tend to diminish over time, especially following publication. The paper further cites Schwert (2003), which shows that “after anomalies are documented in the academic literature, they often seem to disappear, reverse, or weaken”, and a similar study by McLean and Pontiff (2016) which shows that anomaly-based return spreads decline post-publication. Following the publication of Carhart (1997), return spreads related to prior asset returns (e.g., momentum and mean reversion) have also declined.
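The overfitting mechanism is easy to reproduce: test enough pure-noise "factors" and a predictable fraction will clear the conventional significance bar by chance alone. A small Python sketch, with factor count, sample length, and return volatility all chosen as illustrative assumptions:

```python
import math
import random

def spurious_significant(n_factors: int = 500, n_periods: int = 120, seed=3) -> int:
    """Count pure-noise 'factor' return series (true mean = 0) whose
    t-statistic for a nonzero mean clears the |t| > 1.96 bar."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_factors):
        rets = [rng.gauss(0.0, 0.04) for _ in range(n_periods)]  # zero true mean
        mean = sum(rets) / n_periods
        sd = math.sqrt(sum((x - mean) ** 2 for x in rets) / (n_periods - 1))
        t = mean / (sd / math.sqrt(n_periods))
        hits += abs(t) > 1.96
    return hits

print(spurious_significant())  # roughly 5% of 500 noise factors "work" in-sample
```

Roughly twenty-five of five hundred noise series look like exploitable anomalies, which is why the NBER study's insistence on higher t-statistic hurdles for data-mined factors matters.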
 Evidence and logic overwhelmingly converge: the mere knowledge of an asset pricing inefficiency is its death knell — information dispersion is the alpha-killer. As a result, many historically relevant correlations are expected to become irrelevant going forward. Others share this position. In a recent post, Jason Zweig noted the impending hazard whereby known historical inefficiencies are expected to become future efficiencies once the knowledge thereof becomes widely dispersed, especially in light of the growing popularity of factor investing. Tammer Kamel noted incisively in his post on the Unbearable Transience of Alpha that “Professional allocators will not pay hedge fund fees for the execution of strategies that are on the first year curriculum of any Masters of Finance program.”
 Yet, some asset pricing anomalies appear to be robust despite the widely dispersed knowledge thereof. There are at least four non-exclusive explanations for the persistence of certain factors: a) many apparent mispricings are actually compensation for risk; b) many apparent mispricings do not result in arbitrage once costs are considered; c) many anomalies are fleeting and/or have limited capacity; and/or, d) anomalies rooted in human psychology anticipate investor error. None of these explanations is very problematic to the idea that “the market is hard to beat”. Only the persistence of probabilistic arbitrage due to (d) may be seen as a clear affront to EMH. Anomalies related to investor errors should persist as long as qualitative factors influence asset prices, but may increasingly diminish as humans abdicate reflexive decisions to methodical machine automation.
 In order to assess whether an anomaly is a true mispricing or whether it is spurious correlation, it is helpful to return to a bedrock of principle — in this case, Postulate (1)’s definition of fair value. The following questions may help to frame such attempts:
 How does any given anomaly articulate within the concept of fair present value? I.e., how can it be used to estimate the present value of future money flows, or at least gauge the market’s expectations?
 How likely is a given factor to uncover errors in estimation? Even if a factor does not directly articulate with a mathematically convenient expression of present value, some anomalies may contain information regarding the persistence of investor errors and/or the presence of non-dispersed information.
 How likely is an apparent bargain to turn into a value trap? I.e., what is the likelihood that prices discount better information than I currently possess? Many times, apparently cheap things deserve to be cheap. That said, low expectations are easier to surpass than high ones.
Upon review, one might (correctly) deduce that I offer no original ideas and that I stole everything from the Classical, Austrian, Chicago, and Behavioralist Schools of Economics… in that order, too. I believe that the Classicists laid the foundations; the Austrians showed that economic canon (i.e., the Keynesians) needed more precision than was then possible; the Chicagoans brought rigor to the Austrians; and, now, the Behavioralists are showing the limitations of the Chicagoan efficient market hypothesis with evidence that human beings are NOT mathematically convenient, rational, utility-seeking agents. Then there are the ecclesiastical Keynesians — who by their very natures tend to gravitate to positions of authority — always stirring up the possibility that, in the age of certainty, it might finally be possible to anticipate the precise future ramifications of (de)regulation and of fiscal and monetary policy. We’ll see about that…