Drilling for Value, Epilogue: Putting My Money Where My Math Is

Since late 2014, I’ve been trying to understand how to value upstream oil and gas companies in a way that anticipates future equity returns. Industry-standard practices for financial modeling were illuminating, but they left me unconvinced that I could somehow out-compete smarter, more sophisticated, and better-connected institutional investors at their own game. Moreover, nearly every time I tried to apply conventional (i.e., right-headed) valuation techniques to upstream companies, I came up with valuations that were either zero or far below the current equity market capitalization. This suggested that some heavily discounted bonds would very likely repay par with interest. But seeing as I was full-time employed (and deployed) during those crazy times, I failed to act. Anyway, that boat has sailed…

Despite my lack of early success, I was driven on by a single premise: the lack of differentiation of upstream companies makes them incredibly easy to value once the initial learning curve has been surmounted.

Problem Framing
Given that conventional methods for estimating discounted net present value would systematically result in estimates far beneath market prices, the basic question then became this:

How do you value something that, under the base economic scenario (i.e., “the right price for a commodity is the current market price”), has no value but which could experience significant upside in the case of a commodity price recovery?

Given this problem and the supposition that something with no value cannot lose value, I was drawn to conventional options valuation models as methods for approximating a solution to the general problem. Indeed, the parallels between option valuation and equity valuation are robust. Valuing conventional corporate liabilities was, in fact, an original goal of the earliest options valuation frameworks. However, these methods have failed to gain broader use for valuing underlying assets due to assumptions of risk-neutrality, which would ultimately conclude that a stock price is no different from a random walk with drift equal to the yield of a US Treasury Bill (or to whichever instrument the discount rate is pegged). While the random walk assumption may be sufficient for commodities prices, a thing which derives its value from an underlying random process does not necessarily retain the independent and identically distributed (i.i.d.) properties of the underlying.

Moreover, it is unclear from the literature how to reconcile the multiple-period case of discounted cash flow analysis (i.e., annuities) with options valuation. Given that the time value of money (TVM) is the all-father of econometrics, most analysts rightly rely on discounted cash flow (DCF) analyses for valuation, yet they still fail to incorporate objective measures of uncertainty into their estimates. This failure is endemic within both the buy-side and sell-side communities.

The suppositions that:

  1. The Net Present Value (NPV) of discounted expected future cash flows is the reflexive fair value (V_{t=0}) of any cash-throwing asset;
  2. The first derivative of NPV with respect to time, \frac{dV}{dt}, is equivalent to a continuous and/or finite discounted cash flow;
  3. A thing whose fair value is defined by the integral (summation) of continuous (discrete) discounted cash flows over any finite period is, essentially, an annuity;
  4. Commodities prices are an underlying driver of discounted value; or, \frac{dV}{dt} can be expressed as a partial function of the risk-neutral expected values1 of commodities prices, \mathbb{E}_0^{\mathbb{Q}}[P_t];
  5. Commodities prices are assumed to be random walks, arbitrarily represented as semi-martingales2. Reference Appendix B for an argument in favor of treating commodities prices as semi-martingales. The expected price of an arbitrary semi-martingale at any time, t, can be represented as today’s price accumulated continuously at the force of interest3:
    \mathbb{E}_0^{\mathbb{Q}}[P_t] = P_0*e^{rt}
  6. \frac{dV}{dt} is also a partial function of deterministic functions and/or values for costs, quantities of production, and discount rates; and,
  7. \frac{dV}{dt} has unique terminal and boundary conditions.

…imply that the expected present value of an arbitrarily defined expected cash flow at any time can be expressed as the first derivative of net present value with respect to time4. This, in turn, implies that the fair value of nearly any cash-throwing asset can be defined in probabilistic terms as a continuous stochastic annuity. Because value accrues continuously, the value of a stochastic annuity is essentially equal to the value of an integrated continuous call option5.
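
To make the idea concrete, here is a minimal, hypothetical Python sketch (not part of the original derivation) that values a toy stochastic annuity by brute-force Monte Carlo: the commodity price follows risk-neutral GBM, each period’s cash flow is the optional margin max(0, P·q − K) less an unavoidable charge L (i.e., a flow floored at −L), and the discounted flows are summed. Every parameter below is an illustrative assumption.

```python
import numpy as np

# Illustrative assumptions only -- not calibrated to any real asset.
P0, sigma, r = 50.0, 0.35, 0.03        # spot price, volatility, force of interest
T, steps, n_paths = 10.0, 120, 50_000  # ten-year life, monthly steps
q, K, L = 1.0, 45.0, 2.0               # unit production, avoidable unit cost, unavoidable charge

dt = T / steps
t = np.arange(1, steps + 1) * dt

rng = np.random.default_rng(0)
# Risk-neutral GBM: P_t = P0 * exp((r - sigma^2/2) * t + sigma * W_t)
W = np.cumsum(rng.standard_normal((n_paths, steps)) * np.sqrt(dt), axis=1)
P = P0 * np.exp((r - 0.5 * sigma**2) * t + sigma * W)

# Period cash flow: optional production margin less the unavoidable charge,
# i.e. an option-like pay-off floored at -L in every period.
cash = (np.maximum(0.0, P * q - K) - L) * dt

# Discount along each path, sum over time, then average across paths.
value = np.mean(np.sum(cash * np.exp(-r * t), axis=1))
print(f"Monte Carlo value of the toy stochastic annuity: {value:.2f}")
```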

The special case of a single-period cash flow is tractable using some variation of the Black-Scholes and/or heat equations. However, in the multiple-period or continuous case, risk-neutral expected outcomes and probabilities vary over time. It is simply not accurate to solve for the present value of a deterministic annuity and then apply static expected probabilities and outcomes. Conventional closed-form models fall short of solving this one-step-higher-order problem. At this time, I am not aware of any generalized closed-form solution to the general problem of a stochastic annuity, although one may actually exist7.
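
To illustrate that the relevant probabilities are not static, the short sketch below (my own, with assumed parameters) evaluates the risk-neutral probability that a period’s margin is positive, using the d_2 expression from footnote 5 with q_t = 1 and P_t read as the forward P_0e^{rt}; the probability changes materially as the horizon lengthens.

```python
import numpy as np
from scipy.stats import norm

# Illustrative only: the "in the money" probability Phi(d2) at several horizons.
P0, K, sigma, r = 50.0, 45.0, 0.35, 0.03

for t in (1.0, 3.0, 5.0, 10.0):
    F = P0 * np.exp(r * t)  # risk-neutral forward price at horizon t
    d2 = (np.log(F / K) - 0.5 * sigma**2 * t) / (sigma * np.sqrt(t))
    print(f"t = {t:4.1f}  P(P_t * q_t > K_t) = {norm.cdf(d2):.2f}")
```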

Many scholarly techniques have been devised to account for time- and strike-varying properties without solving any of the core problems associated with continuously paying securities whose pay-offs are partially random. Adjustments to the sampling distribution to account for higher moments (i.e., skew and kurtosis) are, at best, curve-fitting. Moreover, many practical methods for applying DCF under uncertainty rely on very complicated recursive algorithms or specialized, computationally intensive techniques8. Those might be fine, but I utterly protest unnecessary complexity (à la Occam’s Razor). The best methods of which I became aware framed the core problem as simply one of expected value, which, in the simplest form, is defined as a convolution of a risk-neutral probability density with an arbitrary pay-off function6.

Manifold are the benefits of the generalized convolution framework: the mechanics are incredibly intuitive; the probability distribution(s)9, the pay-off function(s), boundary conditions, and terminal conditions may be arbitrarily defined without any loss of generality; it allows for the inclusion of multiple and complex pay-off and terminal conditions; it can be estimated discretely and iteratively; it is a verbose method which can validate the accuracy of reduced-form models; et cetera. But even the convolution framework cannot provide a closed-form solution to the stochastic annuity; methods for estimating expected value rely on various discretization and transform techniques (e.g., Fast Fourier Transforms).
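
As an illustration of the convolution mechanics (a sketch under assumed parameters, using plain quadrature rather than an FFT), the snippet below convolves a single date’s pay-off function with the standard normal density in Z-space and cross-checks the result against the single-period closed form:

```python
import numpy as np
from scipy.stats import norm

# Single-date convolution: expected value = integral of pay-off(P_t(Z)) * density(Z) dZ.
# All parameters are illustrative assumptions.
P0, K, q, sigma, r, t = 50.0, 45.0, 1.0, 0.35, 0.03, 5.0

Z = np.linspace(-8.0, 8.0, 4001)            # integration grid in standard-normal space
dZ = Z[1] - Z[0]
P_t = P0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * Z)  # risk-neutral GBM price
payoff = np.maximum(0.0, P_t * q - K)       # arbitrary pay-off function

expected = np.sum(payoff * norm.pdf(Z)) * dZ           # convolution of density and pay-off
print(f"convolution value: {expected * np.exp(-r * t):.2f}")

# Cross-check: single-period closed form (a Black-Scholes-style call on the forward).
F = P0 * np.exp(r * t)
d1 = (np.log(F * q / K) + 0.5 * sigma**2 * t) / (sigma * np.sqrt(t))
d2 = d1 - sigma * np.sqrt(t)
print(f"closed-form value: {(F * q * norm.cdf(d1) - K * norm.cdf(d2)) * np.exp(-r * t):.2f}")
```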

Even if it may be possible to approximate the multiple-period case, there is still a multitude of assumptions that could drive a significant wedge between reality and expectancy. In order to proceed, one must assume to know a lot of things that one actually cannot know. Some of these hazards may be outlined as follows:

  • The time to expiration (i.e., “terminal time period”) of a stochastic annuity is usually undefined. Assigning an arbitrary lifetime is sub-optimal and may be spurious. Ideally, we could solve for some rational economic limit which is itself deterministic10 (a toy numerical sketch of this follows the list). If the terminal limit is allowed to be a function of expected cash flows (i.e., such that t = T as \frac{dV}{dt} \to 0), then the optimal terminal limit itself will often become a stochastic function.
  • With only a single factor under uncertainty, it is presumed that other parameters are deterministic quantities or functions — this may be a weak assumption, depending on many factors.
  • This all presumes that measurement error will not materially affect the overriding hypothesis.
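
Regarding the first hazard above, the toy sketch below (hypothetical decline and cost parameters of my own choosing) finds a deterministic economic limit numerically: the first time at which the expected period cash flow, E[P_t]·q_t − K_t, turns negative under an assumed exponential decline.

```python
import numpy as np

# Hypothetical decline and cost assumptions, purely illustrative.
P0, r = 50.0, 0.03              # spot price and force of interest (E[P_t] = P0 * exp(r*t))
q0, D = 100.0, 0.25             # initial production rate and exponential decline rate
K_fixed, K_var = 1500.0, 10.0   # fixed cost per period and variable cost per unit

t = np.linspace(0.0, 40.0, 4001)
q_t = q0 * np.exp(-D * t)                                    # declining production
cash = P0 * np.exp(r * t) * q_t - (K_fixed + K_var * q_t)    # expected net cash flow

# Economic limit: first time the expected cash flow turns negative.
idx = np.argmax(cash < 0)
T_limit = t[idx] if cash[idx] < 0 else np.inf
print(f"economic limit ≈ {T_limit:.1f} years")
```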

Implications
Upon an acknowledgement of these hazards, I believe that the implications of the ability to value upstream assets using normative methods of discounted present value are still… titillating11. If it can be shown that there is a probabilistic arbitrage-free price (though some would claim that this is not equivalent to the right price) for a stochastic annuity, then it is possible to identify probabilistic violations of no-arbitrage for the underlying assets themselves. The ability to find probabilistic violations of market efficiency implies that, if/when market prices are sufficiently inefficient, there is an opportunity for a free lunch (i.e., earning a risk-free return above the risk-free rate). The ability to identify efficient prices for many kinds of derivatives in the wake of efficient pricing models since the 1970s may have sparked the exponential growth of the financial industry in the 1980s. Even though the public trust in Wall Street models has been rightly eroded, it’s usually not the models themselves that failed, but rather the analysts’ assumptions.

Although valuing underlying assets and liabilities was, in fact, an original goal of the earliest options valuation frameworks (including Cox, Ross, and Rubinstein; Black-Scholes; Heston; et al.), real-world applications have been almost exclusively reserved for derivatives of the underlying, for which complete markets provide the kind of complete information required to detect and profit from deviations from no-arbitrage12. However, the right assumptions which provide the fair value of a derivative may not be the right assumptions which provide a fair value estimate of the underlying. As a result, strong frameworks for quantifying empirical uncertainty have failed to permeate the thought processes and models of nearly all corporate financial analysts who assess underlying assets and liabilities.

This failure is, I believe, a reflection of the overall framework’s inability to provide a simple and intuitive explanation of randomness; adapting these models to more realistic scenarios has traditionally introduced a level of complexity which is neither actionable nor unique13. Adjustments to account for price processes which are anything but risk-free and symmetrical (à la anything but risk-neutral GBM or a Poisson jump process) are simply too complicated to suggest they represent an elegant encapsulation of the underlying dynamics of uncertainty.

Framing the random changes in securities prices as though they were fluctuating cash flows within stochastic annuities, I believe, directly addresses many shortcomings and failures of conventional frameworks while not deviating at all from established theory on asset pricing. Finally, we can return to a world where fundamental uncertainty remains normally distributed. We can now begin to conceive that the observed non-normal characteristics of securities price evolutions may be the result of complex interactions between shifting boundary and terminal conditions on, and leverage to, normal randomness (and, of course, noise), without presuming that non-normality is itself a fundamental aspect of that randomness.

Concluding Thoughts
If indeed the same framework for valuing derivatives can also be applied to valuing the underlying assets (with some mild assumptions), then weaker (i.e., statistical) forms of arbitrage may yet be found. The basic premise that the upstream business model is largely undifferentiated implies that a generalized valuation approach should be indicative of fair value, especially when applied to existing reservoirs and production in which cost and revenue parameters are minimally uncertain. The implications of the stochastic annuity framework as presented here carry over to any resource-based asset, such as mining and forestry projects. Adapting the framework to other types of assets should involve minimal additional complexity so long as one can rightly presume that one or more underlying processes of periodic or continuous cash flows are, indeed, virtually indistinguishable from a random walk.

I am not suggesting that the framework of a stochastic annuity solves for the magical constant of the universe or constitutes a financial theory of everything. I do not even think it can generate quick, easy, or remotely guaranteed profits. If the markets are totally efficient from here forward, then one should expect to earn the market rate of return less costs (i.e., under-perform the benchmark) in spite of all attempts to the contrary. In fact, most active investors, who are usually weaned from the crème de la crème of academia, under-perform their self-selected benchmarks over any sufficiently long time-frame. Moreover, if anything, markets are trending towards increasing efficiency, further casting a shadow of uncertainty over the future of active investing14. If, however, local inefficiencies are allowed to exist and market prices merely seek fair value15, then one who possesses a statistical arbitrage should expect to do at least marginally better.

Anyway, I think it’s finally time to put my money where my math is.

Footnotes

1. The fundamental theorem of asset pricing (FTAP) tells us that in order for there to exist an efficiently priced asset (à la no free lunch), there must be an arbitrage-free (i.e., risk-neutral) probability measure, \mathbb{Q}, which is equivalent to the real-world probability measure, \mathbb{P}.
2. If we take the efficient market hypothesis (EMH) seriously, commodities prices should be virtually indistinguishable from a random walk. A semi-martingale process is a type of random walk which can be decomposed into the sum of a local martingale and a finite-variation process. In mathematical finance, semi-martingales are often assumed to have the properties of Geometric Brownian Motion (GBM), in which prices evolve over time according to a Wiener Process, W_t. This process can be represented notionally as such:
ln(\frac{P_t}{P_0}) = (\mu - \frac{\sigma_P^2}{2})t + \sigma_P W_t
3. Under the risk-neutral measure, the expected future value of such a semi-martingale is simply today’s price accumulated at the force of interest. The actual mathematical proofs for this intuitive simplification come from the Girsanov theorem and Itô’s lemma, which reflect the ability to create a dynamically replicating portfolio consisting of x units of the underlying and y units of a zero-coupon bond. The ability to replace the drift constant, \mu, with an interest rate, r, greatly simplifies the math and allows us to account for time value (i.e., do economics).
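
A quick Monte Carlo sanity check of this simplification (my own sketch, with illustrative parameters):

```python
import numpy as np

# Check that a risk-neutral GBM price has expectation P0 * exp(r * t).
P0, sigma, r, t, n = 50.0, 0.35, 0.03, 5.0, 1_000_000

rng = np.random.default_rng(0)
Z = rng.standard_normal(n)
P_t = P0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * Z)

print(f"simulated mean: {P_t.mean():.2f}")
print(f"P0 * exp(r*t):  {P0 * np.exp(r * t):.2f}")
```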
4. For example, an expected cash flow may be arbitrarily defined as a function of a bounded expectation of future commodities prices, quantities of production, and costs:

(i) \mathbb{E}_t^{\mathbb{Q}}[\frac{dV_{t}}{dt}] ={\mathbb{E}_t^\mathbb{Q}[max(-L_t, P_t*q_t - K_t - L_t)] } e^{-rt}

= {\mathbb{E}_t^\mathbb{Q}[max(0, P_t*q_t - K_t)]} e^{-rt} - L_te^{-rt}

where: L_t is a time-dependent lower boundary condition on dV/dt; q_t is a time-dependent quantity (i.e., units of production); K_t is a time-dependent cost function; and r is the force of interest.
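
A quick numerical check (my own sketch, with assumed parameters) that the floored cash flow in equation (i) can indeed be split into an option-like pay-off less a discounted L_t:

```python
import numpy as np

# Assumed parameters; P_t simulated under the risk-neutral GBM of footnotes 2-3.
P0, sigma, r, t = 50.0, 0.35, 0.03, 5.0
q, K, L = 1.0, 45.0, 2.0

rng = np.random.default_rng(1)
Z = rng.standard_normal(1_000_000)
P_t = P0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * Z)

floored = np.mean(np.maximum(-L, P_t * q - K - L)) * np.exp(-r * t)                   # floored-at-(-L) form
split = np.mean(np.maximum(0.0, P_t * q - K)) * np.exp(-r * t) - L * np.exp(-r * t)   # option-minus-L form
print(f"floored form: {floored:.3f}   split form: {split:.3f}")  # the two agree
```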

5. Restating this problem in terms where the factor under uncertainty is governed by GBM, the general formula for a continuous annuity may be represented as follows:

(ii) \mathbb{E}^{\mathbb{Q}}_0[V_{t=0}] = \int_0^T \int_{-\infty}^{\infty} \varphi(\frac{Z_t*\sqrt{t}*\sigma+\frac{\sigma^2}{2}*t}{\sigma*\sqrt{t}})*\mathbb{E}^\mathbb{Q}_t[\frac{dV_t}{dt}]\,dZ\,dt

Substituting the arbitrary pay-off function of P_t from equation (i) into \mathbb{E}^\mathbb{Q}_t[\frac{dV_t}{dt}] yields:

(iii) \mathbb{E}^{\mathbb{Q}}_0[V_{t=0}] = \int_0^T \int_{-\infty}^{\infty} \varphi(\frac{Z_t*\sqrt{t}*\sigma+\frac{\sigma^2}{2}*t}{\sigma*\sqrt{t}})*({\mathbb{E}_t^\mathbb{Q}[max(0, P_t*q_t - K_t)]} e^{-rt} - L_te^{-rt})\,dZ\,dt

Finally, recognizing that P_t is Geometric Brownian Motion, driven by Z_t, yields:

(iv) \mathbb{E}^{\mathbb{Q}}_0[V_{t=0}] = \int_0^T \int_{-\infty}^{\infty} \varphi(\frac{Z_t*\sqrt{t}*\sigma+\frac{\sigma^2}{2}*t}{\sigma*\sqrt{t}})*({\mathbb{E}_t^\mathbb{Q}[max(0, P_{0}e^{Z_t*\sqrt{t}*\sigma+rt}*q_t - K_t)]} e^{-rt} - L_te^{-rt})\,dZ\,dt

where: \varphi(Z) is the standard normal (i.e., Gaussian) probability density function, defined by:

\varphi(X) = e^{\frac{-X^2}{2}}\frac{1}{\sqrt{2\pi}}

Note the analogy between equation (iv) and a general volumetric equation (i.e., ‘outcomes’ x ‘probabilities’ x ‘time’).

A distribution of probability densities times a corresponding distribution of outcomes can be further simplified with an equivalent cumulative probability. This method, whereby one takes the equivalent martingale measure (EMM) for a given expected outcome, uses logic from the ‘heat equation’ in engineering to solve for the Black-Scholes model without deriving the Black-Scholes partial differential equation (PDE), which is useful if one does not have access to stochastic calculus. Simplifying, we can now rewrite equation (iv) as such:

(v) \mathbb{E}^{\mathbb{Q}}_0[V_{t=0}] = \int_0^T \left((\mathbb{E}_t^\mathbb{Q}[P_t*q_t] - \mathbb{E}_t^\mathbb{P}[K_t])e^{-rt} - L_te^{-rt}\right)\,dt

where \mathbb{P} is now the price-equivalent probability measure.

Using discretized EMMs, equation (v) can be stated equivalently as follows:

(vi) \mathbb{E}^{\mathbb{Q}}_0[V_{t=0}] = \int_0^T \left((\phi(d_1)(P_t*q_t) - \phi(d_2)(K_t))e^{-rt} - L_te^{-rt}\right)\,dt

where: d_1 = \frac{\ln(\frac{P_t*q_t}{K_t})+(\frac{\sigma^2}{2})t}{\sigma\sqrt{t}};

and, d_2 = \frac{\ln(\frac{P_t*q_t}{K_t})-(\frac{\sigma^2}{2})t}{\sigma\sqrt{t}}.

and, \phi is the cumulative distribution function of the standard normal distribution such that:

\phi(X) = \int_{-\infty}^{X} \varphi(t) \, dt = \int_{-\infty}^{X} e^{\frac{-t^2}{2}}\frac{1}{\sqrt{2\pi}} \, dt

Notice that equation (vi) is analogous to the integrand of the Black-Scholes solution for a call option, except that the time-dependent pay-off function is no longer arbitrarily bounded to [0, \infty).

It is implied throughout this derivation that there exists at least one risk-neutral (i.e., equivalent martingale) measure under which the expected values in these integrals are taken; otherwise, FTAP tells us there would be a free lunch.
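
For what it is worth, equation (vi) is straightforward to discretize. The sketch below (my own, with an assumed exponential decline for q_t, an assumed cost function K_t, a constant L_t, and P_t read as the risk-neutral forward P_0e^{rt} from footnote 3) approximates the integral with a midpoint rule; none of the parameters are meant to describe a real asset.

```python
import numpy as np
from scipy.stats import norm

# Literal discretization of equation (vi) under illustrative assumptions.
P0, sigma, r = 50.0, 0.35, 0.03
T, steps = 10.0, 1200
q0, D = 100.0, 0.25                # exponential production decline for q_t
K_fixed, K_var = 1500.0, 10.0      # assumed cost function K_t
L0 = 100.0                         # assumed constant lower boundary L_t

dt = T / steps
t = (np.arange(steps) + 0.5) * dt  # midpoint rule

q_t = q0 * np.exp(-D * t)
K_t = K_fixed + K_var * q_t
L_t = np.full_like(t, L0)
Pq = P0 * np.exp(r * t) * q_t      # E_t[P_t * q_t], with P_t as the forward

d1 = (np.log(Pq / K_t) + 0.5 * sigma**2 * t) / (sigma * np.sqrt(t))
d2 = (np.log(Pq / K_t) - 0.5 * sigma**2 * t) / (sigma * np.sqrt(t))

integrand = (norm.cdf(d1) * Pq - norm.cdf(d2) * K_t) * np.exp(-r * t) - L_t * np.exp(-r * t)
value = np.sum(integrand * dt)
print(f"discretized equation (vi) value ≈ {value:,.0f}")
```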

6, 8. citation(s) needed
7. However simplified, equation (vi) still falls short of a closed-form solution. A closed-form solution might be possible under the PDE, EMM, and/or convolution framework(s) if there were some way to derive an integrated cumulative probability distribution as a function of time from t = 0 \to t = T. Integrating the heat equation with respect to time may be an equivalent approach.
9. Even though the probability distributions of expected outcomes may be arbitrarily defined under the convolution framework, they should remain risk-neutral so as not to result in expected violations of FTAP.
10. See Appendix A for an explanation of economic limit with a case study.
11. Please reference Appendix C for an explanation of normative discounting.
12. Market data for many financial derivatives is complete, readily available, and easily traded; exchange-traded options are most notable in this respect. Strong-form completeness results in the distinct possibility that strong forms of arbitrage may exist. This is not necessarily true for the underlying assets themselves, although weaker forms of arbitrage may exist for commodities producers, in which a dynamic position in the underlying commodities themselves may replicate the time-dependent pay-off functions of the corporate assets (assuming other parameters of the time-dependent cash flows are approximately known).
13. Unique information is that which differs meaningfully from market expectations, which are, by definition, already priced in.
14. Tammer Kamel, founder of Quandl.com, has penned a fantastic piece on The Unbearable Transience of Alpha — or, the inevitable death of every investment thesis.
15. That price seeks value is a philosophical assertion, but one which is backed by lots of data.