For an example of such a characteristic portfolio strategy, see the YouTube video on slide #56, Dynamic Allocation (video).

Example Python Code

In our toy example, with the goal of constructing a low volatility equity portfolio, our chosen allocation policy is to weight the 30 DJIA stocks according to the ex-ante minimum-variance portfolio, rebalancing at the end of each month. We'll use an expanding historical data window of at least 3 years, apply time-conditioned weights to the observations when estimating the ex-ante distribution, and use a simple form of shrinkage before optimizing. To simulate this sequence of allocations over a 3-year period, we'll use the open source Zipline package.

```python
# (assumes numpy imported as np, pandas as pd, and the blog's helper
#  module imported as rnr -- see the GitHub repo for the full script)

# DJIA constituents and DIA ETF for benchmark
tickers = ['MMM','AXP','AAPL','BA','CAT','CVX','CSCO','KO','DD','XOM','GE','GS',
           'HD','INTC','IBM','JNJ','JPM','MCD','MRK','MSFT','NKE','PFE','PG',
           'TRV','UNH','UTX','VZ','V','WMT','DIS','DIA']
# Set investable asset tickers
asset_tickers = ['MMM','AXP','AAPL','BA','CAT','CVX','CSCO','KO','DD','XOM','GE',
                 'GS','HD','INTC','IBM','JNJ','JPM','MCD','MRK','MSFT','NKE','PFE','PG',
                 'TRV','UNH','UTX','VZ','V','WMT','DIS']

def initialize(context):
    # Turn off the slippage model
    set_slippage(slippage.FixedSlippage(spread=0.0))
    # Set the commission model
    set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
    context.day = -1  # using zero-based counter for days
    context.set_benchmark(symbol('DIA'))
    context.assets = []
    print('Setup investable assets...')
    for ticker in asset_tickers:
        context.assets.append(symbol(ticker))
    context.n_asset = len(context.assets)
    context.n_portfolio = 40  # num mean-variance efficient portfolios to compute
    context.today = None
    context.tau = None
    context.min_data_window = 756  # min of 3 yrs data for calculations
    context.first_rebal_date = None
    context.first_rebal_idx = None
    context.weights = None
    # Schedule dynamic allocation calcs to occur 1 day before month end - note that
    # actual trading will occur on the close on the last trading day of the month
    schedule_function(rebalance,
                      date_rule=date_rules.month_end(days_offset=1),
                      time_rule=time_rules.market_close())
    # Record some stuff every day
    schedule_function(record_vars,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close())

def handle_data(context, data):
    context.day += 1

def rebalance(context, data):
    # Wait for 756 trading days (3 yrs) of historical prices before trading
    if context.day < context.min_data_window - 1:
        return
    # Get expanding window of past prices and compute returns
    context.today = get_datetime().date()
    prices = data.history(context.assets, "price", context.day, "1d")
    if context.first_rebal_date is None:
        context.first_rebal_date = context.today
        context.first_rebal_idx = context.day
        print('Starting dynamic allocation simulation...')
    # Get investment horizon in days, i.e. number of trading days next month
    context.tau = rnr.get_num_days_nxt_month(context.today.month, context.today.year)
    # Calculate HFP distribution
    asset_rets = np.array(prices.pct_change(context.tau).iloc[context.tau:, :])
    num_scenarios = len(asset_rets)
    # Set flexible probabilities using exponential smoothing
    half_life_prjn = 252 * 2  # in days
    lambda_prjn = np.log(2) / half_life_prjn
    probs_prjn = np.exp(-lambda_prjn * (np.arange(0, num_scenarios)[::-1]))
    probs_prjn = probs_prjn / sum(probs_prjn)
    mu_pc, sigma2_pc = rnr.fp_mean_cov(asset_rets.T, probs_prjn)
    # Perform shrinkage to mitigate estimation risk
    mu_shrk, sigma2_shrk = rnr.simple_shrinkage(mu_pc, sigma2_pc)
    weights, _, _ = rnr.efficient_frontier_qp_rets(context.n_portfolio,
                                                   sigma2_shrk, mu_shrk)
    print('Optimal weights calculated 1 day before month end on %s (day=%s)'
          % (context.today, context.day))
    min_var_weights = weights[0, :]
    # Rebalance portfolio accordingly
    for stock, weight in zip(prices.columns, min_var_weights):
        order_target_percent(stock, np.asscalar(weight))
    context.weights = min_var_weights

def record_vars(context, data):
    record(weights=context.weights, tau=context.tau)

def analyze(perf, bm_value, start_idx):
    # note: plot the perf argument, not a global results variable
    pd.DataFrame({'portfolio': perf.portfolio_value, 'benchmark': bm_value})\
        .iloc[start_idx:, :].plot(title='Portfolio Performance vs Benchmark',
                                  figsize=(10, 8))

if __name__ == '__main__':
    from datetime import datetime
    import pytz
    from zipline.algorithm import TradingAlgorithm
    from zipline.utils.factory import load_bars_from_yahoo
    import pandas as pd
    import matplotlib.pyplot as plt
    # Create and run the algorithm.
```
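The minimum-variance portfolio that `rnr.efficient_frontier_qp_rets` returns as its first frontier portfolio can be approximated with the textbook closed form. The sketch below is my own illustration, not the `rnr` helper: it computes the unconstrained global minimum-variance weights and then uses a crude clip-and-renormalise step as a stand-in for the long-only constraint (the real helper solves a quadratic program). The 3-asset covariance matrix is hypothetical.

```python
import numpy as np

def min_variance_weights(sigma2):
    """Closed-form global minimum-variance weights: w = inv(Sigma) 1 / (1' inv(Sigma) 1).
    Clipping negatives and renormalising is a heuristic stand-in for a
    proper long-only QP solve."""
    ones = np.ones(sigma2.shape[0])
    w = np.linalg.solve(sigma2, ones)
    w = w / w.sum()
    w = np.clip(w, 0, None)  # heuristic long-only fix
    return w / w.sum()

# toy 3-asset covariance matrix (hypothetical numbers)
sigma2 = np.array([[0.04, 0.01, 0.00],
                   [0.01, 0.09, 0.02],
                   [0.00, 0.02, 0.16]])
w = min_variance_weights(sigma2)
```

As expected, the lowest-variance asset receives the largest weight.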

" - Warren Buffett, The Oracle of Omaha (born 1930)

In this penultimate leg of the tour we'll be visiting 2 more attractions along Via Meucci, Construction and Execution.

Construction

Portfolio Construction is another yuge! attraction. In addition, we apply 2 common constraints, long-only allocations and full investment of the budget (positive weights that sum to 1), and use a simple shrinkage technique. Also note that Meucci advises it is more general to calculate the estimates in P&L terms (e.g.
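The exact formula behind the `rnr.simple_shrinkage` helper isn't shown in these posts, so here is a hedged sketch of one common "simple shrinkage" scheme: pull the mean toward its cross-sectional average and the covariance toward an identity matrix scaled by the average variance. The function name mirrors the helper, but the formula and the intensities `a` and `b` are my own hypothetical choices.

```python
import numpy as np

def simple_shrinkage(mu, sigma2, a=0.1, b=0.1):
    """Hypothetical stand-in for the post's rnr.simple_shrinkage helper:
    shrink the mean toward its average and the covariance toward a
    scaled identity target."""
    mu_shrk = (1 - a) * mu + a * np.mean(mu) * np.ones_like(mu)
    target = np.mean(np.diag(sigma2)) * np.eye(len(mu))
    sigma2_shrk = (1 - b) * sigma2 + b * target
    return mu_shrk, sigma2_shrk

# toy inputs (hypothetical numbers)
mu = np.array([0.01, 0.02, 0.03])
sigma2 = np.array([[0.04, 0.01, 0.00],
                   [0.01, 0.09, 0.02],
                   [0.00, 0.02, 0.16]])
mu_s, s2_s = simple_shrinkage(mu, sigma2)
```

Note the scheme leaves the average variance and the average expected return unchanged while damping the noisier off-diagonal and cross-sectional estimates.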

" - Salvador Dali, Artist (1904-1989)

Today we'll be visiting 2 sites along Via Meucci, Evaluation and Attribution.

Evaluation

We need some way to measure the goodness of the ex-ante portfolio distribution built from the scenarios in the Aggregation step, and for this Meucci introduces the concept of a satisfaction index. Given the distribution of ex-ante performance, we can compute the satisfaction index (or its opposite, the risk index) of the portfolio in a number of ways:

- expected utility/certainty equivalent: mean-variance, higher moments, prospect theory
- spectral/distortion: VaR (economic capital), CVaR, Wang
- non-dimensional ratios: Sharpe, Sortino, Omega and Kappa ratios

Given that our toy example has the goal of creating a low volatility portfolio, we'll choose the (mathematical) opposite of volatility as our measure of satisfaction, e.g. if the ex-ante portfolio volatility is \$40,000 then the satisfaction index is -\$40,000. In this simple example, the investor is only concerned with risk and not return, i.e.
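Concretely, this satisfaction index can be computed directly from the ex-ante P&L scenarios and their flexible probabilities. A minimal sketch (my own illustration, with made-up scenario numbers):

```python
import numpy as np

def satisfaction_neg_vol(pnl_scenarios, probs):
    """Satisfaction index = minus the ex-ante portfolio volatility,
    computed from probability-weighted P&L scenarios."""
    mu = probs @ pnl_scenarios
    var = probs @ (pnl_scenarios - mu) ** 2
    return -np.sqrt(var)

# two equally likely toy P&L scenarios: lose 1, gain 1
pnl = np.array([-1.0, 1.0])
probs = np.array([0.5, 0.5])
sat = satisfaction_neg_vol(pnl, probs)  # ex-ante vol is 1, so satisfaction is -1
```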

"In the future, instead of striving to be right at a high cost, it will be more appropriate to be flexible and plural at a lower cost." - Edward de Bono, author and thinker extraordinaire (born 1933)

In this third leg of The Checklist tour, we will take 3 more steps: Projection, Pricing and Aggregation.

Quick Recap

- Goal: apply Meucci's The Checklist to create a toy-example low volatility equity portfolio using DJIA stocks, with an investment horizon of 21 days.
- Risk drivers: log of stock prices (values): $X_t = \ln V_t$, where $X$ is the risk driver and $V$ is the stock price (value).
- Invariants: modelled using a simple random walk: $X_{t+1} = X_t + \epsilon_{t+1}$, where $\epsilon$ is the invariant.
- Estimation: historical (simulation) with Flexible Probabilities approach (HFP).

Projection to the Horizon

The goal of this step is to create various possible scenarios (i.e.
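Under the random-walk model above, projecting to a 21-day horizon just means compounding 21 daily invariants. One standard way to generate horizon scenarios, sketched here with synthetic data rather than the blog's DJIA series, is to bootstrap: resample daily invariants with replacement and sum $\tau$ of them per scenario.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_horizon(invariants, tau, n_scenarios, rng):
    """Random-walk projection: resample daily invariants (log-returns)
    with replacement and sum tau of them, giving scenarios for the
    horizon increment X_{t+tau} - X_t."""
    idx = rng.integers(0, len(invariants), size=(n_scenarios, tau))
    return invariants[idx].sum(axis=1)

# synthetic daily log-returns standing in for the estimated invariants
daily = rng.normal(0.0005, 0.01, size=2000)
horizon_scen = project_to_horizon(daily, tau=21, n_scenarios=10000, rng=rng)
```

The scenario volatility scales roughly with the square root of the horizon, i.e. about $\sqrt{21}$ times the daily volatility.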

For more on copula-marginal, go to the ARPM Bootcamp!

Flexible Probabilities - the twist explained...

I put Flexible Probabilities into the must-see category of attractions in Meucci-land. The twist is that estimation becomes probability-weighted, i.e. weighted mean and weighted covariance.

Python Code Examples of Flexible Probabilities

The Python code below shows 2 examples of flexible probabilities:

- Time-conditioned - apply higher weight to more recent observations, with the weights decaying exponentially
- State-conditioned - using the VIX as an example, give weight to past scenarios where the VIX is greater than 20 and no weight otherwise

```python
%matplotlib inline
from pandas_datareader import data
import numpy as np
import datetime
import math
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import seaborn

# Get Yahoo data on 30 DJIA stocks and a few ETFs
tickers = ['MMM','AXP','AAPL','BA','CAT','CVX','CSCO','KO','DD','XOM','GE','GS',
           'HD','INTC','IBM','JNJ','JPM','MCD','MRK','MSFT','NKE','PFE','PG',
           'TRV','UNH','UTX','VZ','V','WMT','DIS','SPY','DIA','TLT','SHY']
start = datetime.datetime(2005, 12, 31)
end = datetime.datetime(2016, 5, 30)
rawdata = data.DataReader(tickers, 'yahoo', start, end)
prices = rawdata.to_frame().unstack(level=1)['Adj Close']
risk_drivers = np.log(prices)
invariants = risk_drivers.diff().drop(risk_drivers.index[0])
T = len(invariants)

# Get VIX data
vix = data.DataReader('^VIX', 'yahoo', start, end)['Close']
vix.drop(vix.index[0], inplace=True)

# Equal probs
equal_probs = np.ones(len(vix)) / len(vix)

# Time-conditioned flexible probs with exponential decay
half_life = 252 * 2  # half life of 2 years
es_lambda = math.log(2) / half_life
exp_probs = np.exp(-es_lambda * (np.arange(0, len(vix))[::-1]))
exp_probs = exp_probs / sum(exp_probs)
# effective number of scenarios
ens_exp_probs = math.exp(sum(-exp_probs * np.log(exp_probs)))

# State-conditioned flexible probs based on VIX > 20
state_probs = np.zeros(len(vix))
state_cond = np.array(vix > 20)
state_probs[state_cond] = 1 / state_cond.sum()

# Plot charts
fig = plt.figure(figsize=(9, 8))
gs = gridspec.GridSpec(2, 2)
ax = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[0, 1])
ax3 = fig.add_subplot(gs[1, 0])
ax4 = fig.add_subplot(gs[1, 1])
ax.plot(vix.index, equal_probs)
ax.set_title("Equal Probabilities (weights)")
ax2.plot(vix.index, vix)
ax2.set_title("Implied Volatility Index (VIX)")
ax2.axhline(20, color='r')
ax3.plot(vix.index, exp_probs)
ax3.set_title("Time-conditioned Probabilities with Exponential Decay")
ax4.plot(vix.index, state_probs, marker='o', markersize=3, linestyle='None', alpha=0.7)
ax4.set_title("State-conditioned Probabilities (VIX > 20)")
plt.tight_layout()
plt.show()
```

There are many other ways to create sets of flexible probabilities (e.g.
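One simple combination, sketched below as my own illustration rather than code from the post, is to blend time- and state-conditioning by multiplying the two probability vectors pointwise and renormalising, then checking the result with the effective-number-of-scenarios diagnostic used above.

```python
import numpy as np

def combine_probs(p_time, p_state):
    """Blend two sets of flexible probabilities by pointwise product,
    renormalised to sum to 1 (a simple heuristic combination)."""
    p = p_time * p_state
    return p / p.sum()

def effective_scenarios(p):
    """Effective number of scenarios: exp of the entropy of p."""
    p = p[p > 0]
    return np.exp(-np.sum(p * np.log(p)))

# toy example: exponential-decay weights plus an indicator on the recent half
n = 100
p_time = np.exp(-(np.log(2) / 50) * np.arange(n)[::-1])
p_time = p_time / p_time.sum()
p_state = np.zeros(n)
p_state[n // 2:] = 1 / (n - n // 2)
p = combine_probs(p_time, p_state)
```

With equal probabilities over n scenarios the diagnostic returns n; any conditioning lowers it.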

If you do, please give this tour guide a kind mention when you sign up!

Toy Example Setup

Since this is a quick trip, we'll keep things relatively simple by constructing the low volatility portfolio using equities only (OK, maybe I'll throw in a bond ETF later to highlight a point), using the constituents of the Dow Jones Industrial Average as the universe. We'll use daily data downloaded from Yahoo and assume a 1-month (21 trading days) investment horizon. I've also created a repo on GitHub where you can download the Python code.

Python Setup

I'm running a Python 3.4 environment on Windows 10, installed via the Anaconda3-4.0.0 64-bit distribution.

Identify the Invariants

Slide #6: take the difference between consecutive risk driver values = change or increment = log return = continuously compounded daily return = invariant (approximately).

Python Code Example

```python
%matplotlib inline
from pandas_datareader import data
import numpy as np
import datetime
import matplotlib.pyplot as plt
import seaborn

# Get Yahoo data on 30 DJIA stocks and a few ETFs
tickers = ['MMM','AXP','AAPL','BA','CAT','CVX','CSCO','KO','DD','XOM','GE','GS',
           'HD','INTC','IBM','JNJ','JPM','MCD','MRK','MSFT','NKE','PFE','PG',
           'TRV','UNH','UTX','VZ','V','WMT','DIS','SPY','DIA','TLT','SHY']
start = datetime.datetime(2005, 12, 31)
end = datetime.datetime(2016, 5, 30)
rawdata = data.DataReader(tickers, 'yahoo', start, end)
prices = rawdata.to_frame().unstack(level=1)['Adj Close']
risk_drivers = np.log(prices)
invariants = risk_drivers.diff().drop(risk_drivers.index[0])

# Plots
plt.figure()
prices['AAPL'].plot(figsize=(10, 8), title='AAPL Daily Stock Price (Value)')
plt.show()
plt.figure()
risk_drivers['AAPL'].plot(figsize=(10, 8),
    title='AAPL Daily Log of Stock Price (Log Value = Risk Driver)')
plt.show()
plt.figure()
invariants['AAPL'].plot(figsize=(10, 8),
    title='AAPL Continuously Compounded Daily Returns (Log Return = Invariant)')
plt.show()
```

Test for Invariance

Attilio also provides a visual test for invariance.
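The visual test compares the distribution of the invariants across subsamples and looks for patterns in the lag-1 scatter. A crude numerical version of the same idea, my own sketch run on synthetic iid data rather than the AAPL series:

```python
import numpy as np

def invariance_check(x):
    """Rough numerical analogue of the visual invariance test: a true
    invariant should show (i) similar location in the two halves of the
    sample and (ii) no lag-1 pattern (near-zero autocorrelation)."""
    half = len(x) // 2
    mean_gap = abs(x[:half].mean() - x[half:].mean())
    ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    return mean_gap, ac1

rng = np.random.default_rng(42)
x = rng.normal(0, 0.01, 5000)  # synthetic iid daily log-returns
gap, ac1 = invariance_check(x)
```

For genuine invariants both numbers should be close to zero; trending or volatility-clustered series fail one or both checks.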

Last week I organised for Attilio Meucci to give a webinar to members of CFA Singapore and NUS Risk Management Institute on an Introduction to “The Checklist” – Ten Steps for Advanced Risk and Portfolio Management.

Checklist Changes

Some key changes I noticed from the former Checklist available at ssrn.com:

- A new step, Dynamic Allocation, comes in at Step 10, and Ex-post Performance Analysis now follows as the “Ultimate Step”, so 10 + 1 steps...
- Evaluation now precedes Attribution
- Optimization is renamed Construction
- The steps are grouped into 3 categories: Market Dynamics, Risk Management and Portfolio Management

Python Code is Coming!

Some great news for Python fans: ARPM Bootcamp attendees will also have access to Python code, which Attilio tells me will be about 10 times more than the code currently available in Matlab. What's more, there will be a 1-day Python Conference happening before the Bootcamp. Currently the code base is in Matlab, downloadable at mathworks.com and at the ARPM website.

I have signed up for Attilio Meucci’s ARPM Bootcamp next month (13-18 July 2015) in NYC http://www.symmys.com/arpm-bootcamp, and need to do quite a bit of prep as it’s going to be a deep-dive... The Advanced Risk and Portfolio Management Bootcamp provides in-depth understanding of buy-side modeling from the foundations to the most advanced statistical and optimization techniques, in 6 intensive days of theory and MATLAB live examples and exercises:

- Market modeling: random walk, ARMA, GARCH, Levy, long memory, stochastic volatility
- Multivariate statistics: non-parametric, non-normal MLE, shrinkage, robust, Bayesian estimation; copula/marginal factorization; location-dispersion ellipsoid
- Factor modeling: theory and pitfalls of time-series and cross-sectional factor models, CAPM, APT, principal components analysis, random matrix theory
- Pricing: full evaluation, Greeks, stress-matrix interpolation; analytical, Monte Carlo, historical
- Risk analysis: diversification, stochastic dominance, expected utility, Sharpe ratio, Omega, Kappa, Sortino, value at risk, expected shortfall, coherent and spectral measures
- Portfolio construction: robust/SOCP optimization, shrinkage/Bayesian allocations, Black-Litterman and beyond; transaction costs, liquidity, market impact; statistical arbitrage; convex/concave dynamic strategies, CPPI, delta-replication

So I thought I would delve into one of his many interesting papers, “Neither ‘Normal’ nor ‘Lognormal’: Modeling Interest Rates Across all Regimes”, co-authored with Angela Loregian.
Here is my port of the code and the resulting chart showing JGB rates, log-rates, and shadow rates (derived from the inverse call transformation):

```r
################################################################################
# load packages
################################################################################
library(RCurl)
library(xts)
library(matlab)
library(pracma)
library(reshape2)
library(ggplot2)
library(gridExtra)
################################################################################
# get JGB data from file
################################################################################
csvfile <- getURLContent(
    "https://docs.google.com/uc?export=download&id=0B4oNodML7SgSZGdVQXYxVWRwMW8",
    ssl.verifypeer = FALSE, followlocation = TRUE, binary = FALSE)
JGB <- read.zoo(textConnection(csvfile), format = "%Y-%m-%d", index = 1,
    header = TRUE, tz = "UTC", sep = ",")
JGB <- as.xts(JGB)
# process data
tau <- as.numeric(gsub("Y", "", names(JGB)[c(1, 2, 3, 5, 7, 10, 11, 12)]))
# rates
y <- coredata(JGB[, c(1, 2, 3, 5, 7, 10, 11, 12)])
Date <- index(JGB)
TimeStep <- 5  # daily (1) / weekly (6) observations
date <- Date[seq(1, NROW(Date), TimeStep)]
y <- t(y[seq(1, NROW(Date), TimeStep), ])
################################################################################
# Inverse Call Transformation function
################################################################################
# this function computes the Inverse Call Transformation and returns shadow
# rates; see A. Meucci, A. Loregian - "Neither 'Normal' nor 'Lognormal':
# Modeling Interest Rates Across all Regimes", to appear (2013)
# Last version of code and article available at http://symmys.com/node/601
#
# INPUT
# rates: matrix containing the time series of the rates to be transformed;
#   size(rates) = length(tau) x t_, where t_ is the length of the time series
# tau: vector containing the times to maturity corresponding to the rows of
#   the rates matrix
# eta, zeta: inverse-call transformation parameters (the smoothing parameter
#   s is obtained as s = eta * exp(zeta * tau))
#
# OUTPUT
# x: shadow rates, computed from rates via inverse-call transformation
################################################################################
InverseCallTransformation <- function(rates, tau, eta, zeta) {
    t_ <- size(rates, 2)
    x <- zeros(size(rates)[1], size(rates)[2])
    s <- eta * exp(zeta * tau)

    # Bachelier call pricing function
    BachelierCallPrice <- function(x, s) {
        # Call function (zero-strike call option price profile according to
        # the Bachelier pricing function); s: smoothing parameter
        C <- x * pnorm(x / s) + s * dnorm(x / s)
    }

    call_fit <- function(tmpX, y, sigma) {
        c <- BachelierCallPrice(tmpX, sigma)
        F <- y - c
    }

    for (v in 1:length(tau)) {
        # inverse call transformation
        # Optimization options.
```
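For readers following the Python thread of this blog, the core of the inverse call transformation is small enough to sketch directly: price a zero-strike call under the Bachelier model, then invert it numerically to recover the shadow rate. This is my own stdlib-only illustration (bisection instead of the R port's optimizer), with hypothetical parameter values.

```python
from math import erf, exp, pi, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    return exp(-z * z / 2.0) / sqrt(2.0 * pi)

def bachelier_call(x, s):
    """Zero-strike call price under the Bachelier model; s is the
    smoothing parameter s = eta * exp(zeta * tau)."""
    return x * norm_cdf(x / s) + s * norm_pdf(x / s)

def inverse_call(c, s, lo=-10.0, hi=10.0):
    """Invert the Bachelier call by bisection (the price is increasing
    in x), recovering the shadow rate from an observed rate."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if bachelier_call(mid, s) < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# round-trip check with a hypothetical shadow rate of 2% and s = 0.01
recovered = inverse_call(bachelier_call(0.02, 0.01), 0.01)
```

Note that for rates well above zero the call price is nearly the rate itself, while near and below zero the transformation lets the shadow rate go negative even though the observed rate stays (almost) non-negative.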

Another hotly anticipated FOMC meeting kicks off next week, so I thought it would be timely to highlight a less well-known working paper, “Stock Returns over the FOMC Cycle”, by Cieslak, Morse and Vissing-Jorgensen (current draft June 2014). Another objective is to evaluate the R package Quantstrat “for constructing trading systems and simulation.”

Data

Although the authors used 20 years of excess return data from 1994 to 2013, we’ll instead use S&P500 ETF (SPY) data from 1994 to March 2015 and the FOMC dates (from my previous post here http://www.returnandrisk.com/2015/01/fomc-dates-full-history-web-scrape.html). As there is not a lot of out-of-sample data since the paper's release in 2014, we’ll use all the data to detect the pattern, and then check the impact of transaction costs on the economic significance of one possible FOMC cycle trading strategy.
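To study returns "in FOMC cycle time" you first need, for each trading day, its position relative to the most recent meeting, which can then be bucketed into cycle weeks. A hedged sketch of that bookkeeping step (my own helper, not code from the paper or the post):

```python
import numpy as np

def fomc_cycle_day(dates, fomc_dates):
    """For each trading date, count business days since the most recent
    FOMC meeting (day 0 = meeting day). Assumes every date is on or
    after the first meeting date supplied."""
    fomc = np.sort(np.array(fomc_dates, dtype="datetime64[D]"))
    d = np.array(dates, dtype="datetime64[D]")
    idx = np.searchsorted(fomc, d, side="right") - 1
    return np.busday_count(fomc[idx], d)

# toy check around the 28 January 2015 meeting
days = fomc_cycle_day(["2015-01-28", "2015-01-29", "2015-02-02"], ["2015-01-28"])
weeks = days // 5  # cycle-week index used to bucket returns
```

Returns bucketed by this week index are what the paper examines across the FOMC cycle.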

In the February 2015 edition of The Journal of Finance, a well-known academic paper, “The Pre-FOMC Announcement Drift”, was finally published, almost 4 years after the working paper was released into the public domain in 2011. Authored by researchers Lucca and Moench at the US Federal Reserve, it documents the tendency for the S&P500 Index to rise in the 24 hours prior to scheduled FOMC announcements.

Data

Lucca and Moench used intra-day S&P500 data (2pm on the day prior to the announcement to 2pm on the day of the announcement) to show that since 1994, the S&P500 Index has on average increased 49 basis points in the 24 hours before scheduled FOMC announcements. As we don’t have ready access to such data, we’ll use daily close data for the S&P500 ETF (SPY) as a proxy.
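With daily closes, the natural proxy for the 24-hour pre-announcement window is the close-to-close return ending on the announcement day. A minimal sketch with synthetic prices (the helper name and numbers are mine, not from the post):

```python
import pandas as pd

def pre_announcement_returns(close, fomc_dates):
    """Daily-data proxy for the pre-FOMC drift: the close-to-close
    return ending on each announcement day."""
    rets = close.pct_change()
    dates = [d for d in fomc_dates if d in rets.index]
    return rets.loc[dates]

# synthetic example: 10 business days of hypothetical SPY closes
idx = pd.bdate_range("2015-01-02", periods=10)
close = pd.Series([100, 101, 100, 102, 103, 102, 104, 105, 104, 106.0], index=idx)
ann = [idx[4]]  # pretend the 5th day is an announcement day
r = pre_announcement_returns(close, ann)
```

Averaging these returns across all scheduled announcements gives the daily-data analogue of Lucca and Moench's 49 basis point figure.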

As I delve into the existing academic research regarding price patterns around US Federal Open Market Committee (FOMC) meetings, it’s clear that I will need more data than I collected in the previous post, FOMC Dates - Scraping Data From Web Pages. Which reminds me of the quote by Google’s Research Director Peter Norvig: We don’t have better algorithms. To do this I decided to use XPath (XML Path Language) and regular expressions in R. I created 4 R functions and saved them in a separate file, “FOMC Dates Functions.R”, which you will need to download from GitHub in order to run the R code below (save the file in your working directory).

```r
## install.packages(c("httr", "XML"), repos = "http://cran.us.r-project.org")
library(httr)
library(XML)
# load fomc date functions
source("FOMC Dates Functions.R")
# extract data from web pages and parse dates
fomcdatespre2009 <- get.fomc.dates.pre.2009(1936, 2008)
fomcdatesfrom2009 <- get.fomc.dates.from.2009()
# combine datasets and order chronologically
fomcdatesall <- do.call(rbind, list(fomcdatespre2009, fomcdatesfrom2009))
fomcdatesall <- fomcdatesall[order(fomcdatesall$begdate), ]
# save as RData format
save(fomcdatesall, file = "fomcdatesall.RData")
# save as csv file
write.csv(fomcdatesall, "fomcdatesall.csv", row.names = FALSE)
# check results
head(fomcdatesall)
```

This will scrape the full history of FOMC meetings from 1936 to the present.
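The regular-expression half of the approach is easy to illustrate in Python as well: the statement URLs embed a YYYYMMDD stamp (as the output later in this series shows), which a single regex can pull out. The helper below is my own sketch, mirroring the idea rather than the R functions.

```python
import re
from datetime import datetime

def parse_statement_date(url):
    """Extract the YYYYMMDD stamp from an FOMC statement URL and parse it
    to a date; returns None if the URL does not match the pattern."""
    m = re.search(r"/monetary/(\d{8})a?\.htm", url)
    return datetime.strptime(m.group(1), "%Y%m%d").date() if m else None

d = parse_statement_date("/newsevents/press/monetary/20090128a.htm")
```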

As a first step in visualizing/exploring the data from my last post, FOMC Dates - Scraping Data From Web Pages, I’ll plot the FOMC announcement dates along with the following price series: 2-Year and 10-Year US Treasury yields, S&P500 ETF (SPY) and USD Index ETF (UUP). I’ll use the quantmod R package to download the price data from the US Federal Reserve Economic Data (FRED) database and Yahoo Finance. For the graphics, I like the cool ggplot2 R package, and although it’s more complicated to code, it’s worth it.

First, install and load the quantmod, reshape2 and ggplot2 R packages.

```r
install.packages(c("quantmod", "reshape2", "ggplot2"),
    repos = "http://cran.us.r-project.org")
library(quantmod)
library(reshape2)
library(ggplot2)
```

Download the price series data and store as xts objects, and load the FOMC announcement dates from the file saved in the last post.

```r
# get 2-year and 10-year US Treasury yields
getSymbols(c("DGS2", "DGS10"), src = "FRED")
DGS2 <- DGS2["2009-01-02/"]
DGS10 <- DGS10["2009-01-02/"]
# get S&P500 ETF prices
getSymbols("SPY", from = "2009-01-02")
# get USD Index ETF prices
getSymbols("UUP", from = "2009-01-02")
# load FOMC announcement dates from file previously saved in working directory
load("fomcdates.RData")
```

Reshape the yield data in order to plot both the 2-Year and 10-Year yields on the same chart. The FOMC announcement dates will be shown as dashed vertical lines.

```r
# prepare yield data
yields <- data.frame(index(DGS2), DGS2, DGS10)
names(yields) <- c("Date", "2Yr Yield", "10Yr Yield")
yieldsmelt <- melt(yields, id.vars = "Date")
# plot yield chart
gp <- ggplot(yieldsmelt, aes(x = Date, y = value)) +
    geom_line(aes(colour = variable)) +
    labs(list(title = "US Treasury Yields with FOMC Dates", x = "", y = "% p.a.
```

those when a statement is published after the meeting) from their web page http://www.federalreserve.gov/monetarypolicy/fomccalendars.htm. At the time of writing, this web page had dates from 2009 onward.

First, install and load the httr and XML R packages.

```r
install.packages(c("httr", "XML"), repos = "http://cran.us.r-project.org")
library(httr)
library(XML)
```

Next, run the following R code.

```r
# get and parse web page content
webpage <- content(GET(
    "http://www.federalreserve.gov/monetarypolicy/fomccalendars.htm"), as = "text")
xhtmldoc <- htmlParse(webpage)
# get statement urls and sort them
statements <- xpathSApply(xhtmldoc, "//td[@class='statement2']/a", xmlGetAttr, "href")
statements <- sort(statements)
# get dates from statement urls
fomcdates <- sapply(statements, function(x) substr(x, 28, 35))
fomcdates <- as.Date(fomcdates, format = "%Y%m%d")
# save results in working directory
save(list = c("statements", "fomcdates"), file = "fomcdates.RData")
```

Finally, check the results by looking at their structures and first few values.

```r
# check data
str(statements)
head(statements)
str(fomcdates)
head(fomcdates)
```

And you should see output similar to this below.

```
## chr [1:49] "/newsevents/press/monetary/20090128a.htm" ...
## [1] "/newsevents/press/monetary/20090128a.htm"
## [2] "/newsevents/press/monetary/20090318a.htm"
## [3] "/newsevents/press/monetary/20090429a.htm"
## [4] "/newsevents/press/monetary/20090624a.htm"
## [5] "/newsevents/press/monetary/20090812a.htm"
## [6] "/newsevents/press/monetary/20090923a.htm"
## Date[1:49], format: "2009-01-28" "2009-03-18" "2009-04-29" "2009-06-24" ...
## [1] "2009-01-28" "2009-03-18" "2009-04-29" "2009-06-24" "2009-08-12"
## [6] "2009-09-23"
```

So what can we do with this data?