Economic viability of bioenergy projects depends on de-risking feedstock supply. This white paper series helps explain the relevance of the latest developments in biomass supply chain research and analytics to industry developers and lenders. It serves as an introduction to important feedstock supply chain de-risking concepts: from risk analysis methods to actual procurement strategies.
The Quiet Revolution
Here’s why you should care. Every supply chain carries risk, but when the risks, their probability of occurring, and their impact when they do occur are not clearly understood, debt providers add a layer of “perceived risk” on top of the actual risk. This is one of the reasons that bioenergy, biogas, biochemical and biofuel projects tend to be saddled with egregiously high debt costs (and one of the key reasons for slow bio-project development). Think of it as financial deadweight: millions of dollars of extra, needless monetary drag that disappears into debt coverage or interest payments to cover “mirage” risk. Money that would otherwise drop directly to a project’s bottom line and into investors’ pockets.
The Advantage of Advanced Forecasting
Biomass supply chains are complex. Whether it’s getting wood from the forest to the biopower plant, corn stover to the biofuel plant, or organics to the anaerobic digester, the fact of the matter is that biomass supply chains contain large numbers of dependent variables that interact to determine the final cost of feedstock.
What’s interesting and challenging about bio supply chains is that many of these variables are themselves characterized by uncertainty: diesel cost, stumpage and weather, to name only a few. When modelling potential outcomes based on a large set of dependent variables, one has two options: develop a deterministic model based on a potentially large set of fixed-value assumptions, whose validity determines the effectiveness of the model; or develop a probabilistic, or stochastic, model in which each variable is given a range of values based on its probability of occurrence.
Deterministic Versus Stochastic Modelling
Deterministic models determine risk and predict future cost in much the same way as a compass bearing. A single bearing will produce a fix on something along one path. That path is only as good as the variables affecting the ship’s direction, like current location, sea currents and wind speed. That is why a bearing has to be constantly updated, to adjust for earlier errors (or assumptions). The initial direction never leads to the final destination on its own; its path, or projection, is never exactly correct and always has to be adjusted.
So now imagine something different: suppose you have 10,000 bearings. Or rather, imagine that you can produce a grid of 10,000 bearings, or a succession of overlapping grids, each comprised of thousands of bearings. In this situation it is the density of the grid, the volume of the data and, interestingly, often the variance within the data that are the prime determinants of your ability to get an accurate fix. This is the essence of stochastic modelling using computer simulation.
Stochastic modelling using computer simulation allows us to account for the uncertainty in both the occurrence and the impact of events (like diesel cost, drought, weather, quality issues, or insolvency of suppliers) over long periods of time. Stochastic models have the computational power to take into account the hundreds of thousands of possible outcomes that arise when we forecast uncertainty and variation over multiple variables.
Perhaps the most popular type of stochastic modelling is Monte Carlo simulation. Monte Carlo simulation, or probability simulation, is a technique used to understand the impact of risk and uncertainty on forecasting models. The name Monte Carlo comes from a region in Monaco, known for its casinos. Just as casinos rely on probability distributions to make profit, Monte Carlo forecasting methods rely on probability distributions to forecast future values. Because Monte Carlo methods operate with probabilities, the outputs are also probabilistic.
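As a concrete sketch, the Monte Carlo idea can be illustrated in a few lines of Python. The input variables, distributions and cost model below are hypothetical placeholders chosen for illustration, not calibrated industry values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of simulated scenarios ("bearings" in our grid)

# Hypothetical input distributions (illustrative only):
# diesel cost in $/litre, haul distance in km, stumpage in $/tonne.
diesel = rng.normal(loc=1.20, scale=0.15, size=n)
distance = rng.triangular(left=40, mode=80, right=160, size=n)
stumpage = rng.lognormal(mean=np.log(12), sigma=0.2, size=n)

# Hypothetical cost model: delivered cost per tonne.
transport = 0.05 * diesel * distance    # $/tonne, assumed fuel intensity
delivered = stumpage + transport + 8.0  # plus $8/t fixed harvest/handling

# A deterministic model would return a single number; the simulation
# returns a full distribution of possible delivered costs.
print(f"mean delivered cost: ${delivered.mean():.2f}/t")
print(f"90th percentile:     ${np.percentile(delivered, 90):.2f}/t")
```

Because every run draws fresh values from each distribution, the output is itself a probability distribution, exactly as the paragraph above describes.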
One of the main obstacles to lowering debt cost is investors’ understanding of feedstock prices, specifically their future trends. Price forecasting is inherently complex, as feedstock price depends on many interacting variables, such as availability of supply, diesel cost, feedstock quality, competition, etc. Of course, the future trends of any of those variables are also hard to predict. Such multi-variable problems, however, have long been tackled by physicists armed with Monte Carlo methods. Each variable is inherently unpredictable, but that does not mean we cannot estimate the likelihood of the variable reaching a specific value. This is done through an analysis of historical data and the derivation of a probability function for each variable. In other words, we are dealing with probabilities as opposed to fixed values.
Dealing with probabilities, such as the probability of a competitor increasing production levels, is referred to as a stochastic problem. Due to the probability functions embedded in each stochastic problem, each analysis renders different results. Of course, a handful of varying results does not, by itself, help in decision-making.
Fortunately, modern computing power allows us to run hundreds of thousands of analyses for each problem. And this is good news, because the more results we have, the more confident we can be that a certain value is the most likely to occur.
With such an approach we can answer questions such as: “What is the probability that the cost of feedstock will exceed $X?” or “With a risk tolerance of X%, what is the range of future feedstock cost?” These are precisely the answers that investors need. Is it really worth putting that much effort into cost forecasting? Aren’t deterministic forecasting methods simpler and better? The truth of the matter is that most deterministic forecasts are simply wrong.
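Once a simulated distribution of outcomes is in hand, those investor questions reduce to simple percentile queries. A minimal sketch, using a made-up sample standing in for the output of a full supply chain simulation:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for simulation output: 100,000 simulated
# delivered feedstock costs in $/tonne (hypothetical numbers).
costs = rng.normal(loc=26.0, scale=3.0, size=100_000)

# "What is the probability that feedstock cost exceeds $30/t?"
p_exceed = (costs > 30.0).mean()

# "With a 10% risk tolerance, what is the range of future cost?"
low, high = np.percentile(costs, [5, 95])  # central 90% interval

print(f"P(cost > $30/t) = {p_exceed:.1%}")
print(f"90% of scenarios fall between ${low:.2f}/t and ${high:.2f}/t")
```

The dollar thresholds and distribution here are illustrative; the point is that probability-of-exceedance and cost-at-risk-tolerance fall directly out of the simulated sample.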
Having just one line indicating future values has limited applications, as that line, just like the initial compass bearing, will turn out to be incorrect. In contrast, being able to claim with statistical confidence that, for instance, there is a 90% chance that the cost of feedstock will not exceed $X is much more meaningful. Investors and insurers operate on such probabilities, and the bioenergy industry needs to do the same if we are to lower debt costs.