In my junior year of college I worked part-time in the school convenience store near Fenway. The store did not sell cigarettes, but it did sell Massachusetts State Lottery tickets. I always found this ironic. It was clear that the decision not to sell cigarettes was made because it was considered wrong, not because it wouldn’t be profitable, yet the university never thought it was wrong to sell lottery tickets. On top of that, of the chain of school stores, the one I worked in was one of the few locations that sold lottery tickets. If it’s not already obvious, students were not the ones buying them.
Yes, I’m playing around with the blog’s title. I never get around to reading about physics like I had planned, so the title wasn’t making much sense. I may change it a bit more, depending on how I feel after a day or two…I know, you’re on the edge of your seat. Try to keep calm during these difficult times.
Anyway, here are the articles/blog posts worth reading from the last few days. If you’re not entirely satisfied with the selection I will refund the full cost of your purchase.
- Russia is learning that economic warfare can, in fact, hurt. Attempts to support the Ruble are not working so far. (Bloomberg)
- The mighty have fallen, or rather, stumbled a little. Google missed Q3 profit and revenue estimates. You may begin guessing what this means about Google’s future now. Please make sure to ignore that it’s only a single isolated data point, more people listen to you that way. (Bloomberg)
- Because no one could have foreseen that the iPad’s problems would be cheaper competitors and limited usefulness. Serious work is still going to be done on a laptop, and my phone can do anything I’d want from a mobile device. Tablets are not universally needed, so…make it thinner and faster, right? (The New York Times – Apple’s iPad Problem)
- Josh Brown was nice enough to let everyone know the Dow is negative for the year. Please ignore his sensible conclusion that this is not so unusual. The correct response is to run in circles screaming about the end of the world and how Wall Street’s plan is about to reach its final objective and we’re all going to die. QED. (The Reformed Broker)
- I don’t have anything to add to this piece by Professor Aswath Damodaran on GoPro’s valuation. I still think it’s insanely overvalued, but I know enough to know it only feels that way to me and “feels” doesn’t mean a God damned thing in this context. In any case, the analysis is high quality and worth reading even if you’re not interested in GoPro because God forbid you learn about something new. (Musings on Markets)
- Yes, only one thing. Yes, it’s the most important thing to know about Ebola. In a stunning show of either irony or a flash of actual rational thought, The Verge has managed to impress me. They take the time to point out that the media sucks at explaining everything, Ebola included. Never forget that the news media exists to make money and there’s no reason to assume that they actually care about the accuracy of their reporting or the consequences of their mistakes. Of course, we’ll continue to ignore that because our favorite source says bad things about the other political party. I may be getting off topic here… (The Verge)
That’s all for this morning’s reading. If I have time I’ll post some afternoon reading. Don’t hold your breath.
This one doesn’t need a huge wall of text to explain.
Here’s a screenshot from this morning.
And here’s the screenshot from this afternoon.
Well, at least not the sort of metaphysical problems that Vox author Matthew Yglesias suggests are a problem.
“Except there’s a huge problem — we’re never going to know which model is correct.
…To anyone who understands probabilities, of course, this is nonsense….If you sit down at the blackjack table and play for a while, you will probably lose money. But you might not. Even the Washington Post’s current forecast that the GOP has a 95 percent chance of obtaining a Senate majority won’t genuinely be debunked by a Democratic hold. Five percent is unlikely, but unlikely things happen….
…But in an epistemological sense, the way we check probabilistic statements is to run the experiment over and over again. Flipping a coin twice doesn’t really prove anything. But if you flip it ten or twenty or a thousand times you’ll see that “it comes up heads half the time” is a good forecasting principle…
…we’re just never going to get the kind of sample sizes that would let us tell whose method of calculation is best.“
Cutting out all the background, that’s the heart of Yglesias’ argument (emphasis added). I’ll start by addressing the “problem” that we’ll never know which model is correct; we do have an answer to that.
“All models are wrong; some are useful.” -George E. P. Box
The correct question to ask is not which model is correct, but which model is more useful. Whether a given model is useful is highly subjective, to say the least. Even when we know that a model is deeply flawed, it may still be considered useful. Take the Black-Scholes option pricing model, for example. We know that the Black-Scholes model has significant problems, all the way down to underlying assumptions that don’t match reality, but it’s still widely used for pricing options1. Why? Because it’s good enough for most investors, and the results are known to be close enough to reality to be useful, even though it is known in advance that the result is wrong.
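Since the post leans on Black-Scholes as the canonical “wrong but useful” model, here’s a minimal sketch of the standard Black-Scholes formula for a European call, using only Python’s standard library (the inputs in the example are made up, not tied to any real option):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call at 20% vol and a 1% rate
print(round(bs_call(100, 100, 1.0, 0.01, 0.2), 2))  # → 8.43
```

Every one of those inputs hides an assumption (constant volatility, lognormal returns, no jumps), which is exactly why the output is “wrong” in a known, tolerable way.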
Now Yglesias is correct that observing an unlikely outcome does not, in itself, prove that a model is worse than another model that happened to predict the correct result this time. Yes, unlikely things do happen in the real world2, but why are you assuming that the assumptions that went into constructing the model are realistic3?
The assumptions underlying any model are simultaneously its strength and its weakness. We use models because we accept that the real world is too complicated to allow us to accommodate every single aspect of the system being modeled. The election forecasts use a poll of a small subset of the voting population to attempt to make predictions about the election in the future. There are two sources of possible error. First, the election happens at some point in the future, and events can, and do, occur that cause a significant number of people to change who they decide to vote for4.
The other possible source of error is that you are only polling a subset of voters and you don’t know whether or not they are representative of the entire population. If you had unlimited resources, you could in theory poll every single voter and likely achieve much greater accuracy (barring unforeseen events between your poll and the election). Needless to say, that’s not practical because that would amount to holding a poll that was effectively an election. Expensive and pointless.
I don’t follow election forecasts so I can’t say what exactly they do to attempt to improve the accuracy of the models. I can say that it is likely easy to find problems with the underlying assumptions of any poll that is 95% sure of the outcome. So the model can be debunked without needing to worry about the epistemological nature of probability. Now, given that such biased polls are put forward by the likes of The Washington Post, I’d still say an argument could be made that the model is useful, even if it’s stupid. After all, it’s making them money, isn’t it?
On a psychological level, most people interpret the forecast probability incorrectly. It doesn’t say that candidate X has a 60% chance of winning the election. It should be read as saying: candidate X would have a 60% chance of winning in the hypothetical universe of the model, based on our observations of the real world and subject to the assumptions of the model. It’s telling you that IF the underlying assumptions hold, then a particular outcome has the given chance of occurring.
So what does this mean as far as how you should view poll-based election forecasts? Honestly, I’d say you should always avoid using any model where you don’t understand the underlying assumptions and the model’s construction. You also need to know where the data used to fit the model parameters came from because that’s another possible source of bias. If you don’t know that much about the model you have no way to interpret what it is telling you, except to trust what others are saying that the model says. Your level of trust should be 0 when dealing with…really anyone who has either a financial interest in the model, or an ideological commitment to a particular result.
Really, if they don’t have a very long answer to the question “what’s wrong with this model?”, then you shouldn’t trust them.
1. Yes, I know the binomial model is more commonly used than Black-Scholes. The underlying assumptions are effectively the same between the two models and, for European-style options at least, the binomial model converges to the Black-Scholes model as the number of steps grows.
2. This glosses over the question of how unlikely something has to be before it’s considered effectively impossible. Like most subjective things, going with your gut is not a good way to answer this question. A royal flush in poker is indeed unlikely, but it’s not so unlikely as to have never happened in history. Contrast that with a perfect bridge deal (assuming a fair deck), which has a probability of about 4.5×10⁻²⁸. As my stats professor put it years ago, “if everyone who ever existed played bridge continuously, the probability of ever seeing a perfect deal is still much less than one millionth of a percent.” I’ll leave it as an exercise for the reader to get a more specific result.
3. Yes, “realistic” is a rather soft term, but it’s accurate. What’s considered realistic has, to my surprise, turned out to be extremely subjective. Of course, from my point of view, I’d say that many people’s idea of realistic has nothing to do with the real world.
4. There’s a difference between uncertainty that can be measured as probability and actual uncertainty: the risk that things we cannot anticipate will occur. You can never entirely eliminate uncertainty, but we try anyway. A great example of this is a psychology experiment I read about a long time ago. There are two urns, A and B, filled with red and blue balls. You first pick a color; it doesn’t matter whether you pick red or blue. You next pick an urn, and you win a prize (say, money) if the ball you pull out is the color you picked. You’re told that urn A has 50 red and 50 blue balls in it. You’re told nothing about urn B other than that it has 100 balls in it. Which urn do you pick?
A majority of people selected urn A, even though there's no advantage to doing so. Mathematically speaking you cannot make an optimal choice because you have no information about the distribution of balls in urn B. We pick urn A because we at least know the odds, even though it doesn't help us to know the odds. We hate uncertainty, something to keep in mind when thinking about probability and more generally when thinking about forecasting.
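For the curious, the probabilities in footnote 2 are easy to compute exactly. A quick sketch (the royal flush figure is per five-card hand; the perfect-deal formula assumes a fair shuffle, with each player dealt a complete suit):

```python
from math import comb, factorial

# Probability a 5-card poker hand is a royal flush:
# one per suit, out of C(52, 5) possible hands.
royal_flush = 4 / comb(52, 5)

# Probability of a "perfect" bridge deal (each player holds a full suit):
# 4! ways to assign the suits to the players,
# out of 52! / (13!)^4 distinguishable deals.
perfect_deal = factorial(4) * factorial(13) ** 4 / factorial(52)

print(f"royal flush:  {royal_flush:.3g}")   # ~1.54e-06
print(f"perfect deal: {perfect_deal:.3g}")  # ~4.47e-28
```

About 22 orders of magnitude apart: both “unlikely,” but only one is effectively impossible.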
When I looked at my email this morning, I noticed that, for the first time in a while I had a request from Quora. The question I was asked to answer is, of course, the question in the title of this post. I can’t say I was ready to answer first thing on Saturday morning and, for technical reasons1 I’m not going to go into, I’ve decided to post my response here rather than on Quora.
There are a lot of things to be said about civil forfeiture, but the video says it better than I can. This is a great example of why there need to be rules between people and easy money. It doesn’t matter whether you’re a police officer or a Wall Street banker or mid-level manager in a major company or whatever else you can think of. There is nothing that makes one of those groups special in this respect.
I’ve added a page, see top menu, of things you can download. As of now, there’s only one thing on the page, a spreadsheet with code I wrote to price options and make graphs of option spreads. It’s a work in progress, so read the disclaimer. Here’s a screenshot of the graphs it can produce.
Alternate title: Is October Really So Bad?
For investors, October has often been the month of doom. It has amassed a rather considerable list of market crashes including:
- The Panic of 1907 – October 22, 1907
- The Crash of 1929:
- Black Thursday – October 24, 1929
- Black Monday I – October 28, 1929
- Black Tuesday – October 29, 1929
- Black Monday II: Because one Black Monday wasn’t enough – October 19, 1987
- Friday the 13th Mini-Crash – October 13, 1989
- The 1997 Mini-Crash – October 27, 1997
And probably a few others I forgot. The history is well established, but whether it means anything is a more difficult question to answer1. In the interest of exploring this question, I decided to look at the distribution of daily S&P 500 returns for each month between 1950 and today and see if there are differences in the distributions of the returns between months2. (You can check my results yourself if you like; here is the raw data: SPX Data. I didn’t use Excel for the analysis, but I did export the results to this spreadsheet: Monthly Return Data – SPX.)
The results are, if nothing else, interesting. As I hope to make clear later, I don’t think the data is saying anything useful for investors, but it is interesting. However, the bigger point I want to make is that even when you have data, it is not always straightforward to interpret the data. In this case, depending on how you want to look at the data, October can be seen as either a more volatile, risky month for the stock market or no different from any other month.
The most popular measure of risk is volatility, or statistically speaking, the standard deviation of the returns. Over the past 64 years3, October has, in fact, been more volatile than any other month.
To get these results I separated all of the data for each month and looked at the distribution of the daily returns (so, for example, there were 1,407 trading days in the time period that were in October, 1,280 for November, and so on…in total there were 16,290 days in the time period).
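For anyone who wants to replicate the grouping without Excel, here’s a sketch of the calculation in Python with pandas. I don’t have the original SPX file handy, so this runs on a synthetic price series; swap in your own series of daily adjusted closes:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the SPX adjusted-close series (seeded for
# reproducibility); replace with real data to reproduce the post's numbers.
rng = np.random.default_rng(0)
dates = pd.bdate_range("1950-01-03", "2014-10-31")
px = pd.Series(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.0097, len(dates)))),
    index=dates,
)

# Daily log returns: r_t = ln(P_t / P_{t-1})
returns = np.log(px / px.shift(1)).dropna()

# Volatility (std of daily log returns) grouped by calendar month
monthly_vol = returns.groupby(returns.index.month).std()
print(monthly_vol.round(4))
```

On synthetic i.i.d. data the twelve numbers come out nearly identical, which is a useful baseline: any month-to-month spread you see there is pure noise.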
Methodology and Excel’s annoying unwillingness to let me put the months in a different order aside, it’s clear that November, September, and October stand out as being more volatile than the other months. October really stands out here, even compared with the next two most volatile months. October also includes both the largest one-day market loss and the largest one-day market gain in the data set. Black Monday (1987) was the largest one-day loss; the largest one-day gain came on 10/13/2008, so I doubt there was much happiness at the time. Clearly, it doesn’t seem to be much of a stretch to say the data suggests October is a bad month. Numbers don’t lie, right?
Well, they may not lie, but nor do they tell the truth. Setting aside the countless problems with using volatility as a proxy for risk (that’ll be another post), there are other numbers to consider here. I next looked at the average daily return and the median return for each month.
First, for those who can’t find the energy to enlarge the picture, the orange bar is the mean and the blue is the median. This tells a different story about October entirely. The average trading day in October has a positive return. The median for October is positive. Every month has a positive median return, that’s not too surprising all things considered. It’s more interesting that four months have negative average returns: August, February, June, and September. Of those, only September had an above average volatility.
We can interpret this in lots of ways. I could argue that, given what we’ve seen so far, October is still a good month for investors because it has a positive average return. Or that it is bad because it has historically been more volatile. Really, the data can say what I want it to say depending on what I choose to present and what I don’t. More to the point, I’d argue that October isn’t really unusual: a number of very extreme outliers happen to have taken place in October, and the deviations between the months are mostly noise. I get there by using what I’ve neglected to share so far.
I can’t know how much readers remember from their elementary school classes on applied statistical analysis, so you may not be familiar with the slightly obscure statistical measures called skewness and kurtosis (I’ve talked about this a bit on Seeking Alpha if you’re interested). I’m a bit lazy, so I’ll give a two-minute explanation here; see the link above for a better one. Skewness tells you, roughly, how asymmetric a distribution is, and kurtosis tells you how heavy the distribution’s tails are (very roughly). A high excess kurtosis (a normal distribution has an excess kurtosis of 0) tells you that extreme events occur much more often than a normal distribution would predict. A strongly positive or negative skewness (say, above +1 or below -1; again, a normal distribution has a skewness of 0) tells you that the right or left tail of the distribution is heavier. With the lecture now over, let’s see the data.
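If you’d like to see those two measures in action, here’s a quick numpy sketch using the textbook sample-moment definitions; a fat-tailed Student’s t sample makes the kurtosis effect obvious:

```python
import numpy as np

def skewness(x) -> float:
    """Sample skewness: third moment of the standardized data."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def excess_kurtosis(x) -> float:
    """Sample excess kurtosis: fourth standardized moment minus 3,
    so a normal distribution scores 0."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

rng = np.random.default_rng(42)
normal_sample = rng.normal(size=100_000)
heavy_tailed = rng.standard_t(df=3, size=100_000)  # fat-tailed Student's t

print(skewness(normal_sample), excess_kurtosis(normal_sample))  # both near 0
print(skewness(heavy_tailed), excess_kurtosis(heavy_tailed))    # kurtosis well above 0
```

The normal sample lands near zero on both measures, while the t-distributed sample shows the large excess kurtosis that crash-prone return series share.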
How do I interpret that? Let me first say that I’ve seen no evidence of the following:
- There were historical reasons why extreme events happened in October that were not random chance.
- If there was a reason so many crashes happened in October, and the reason was not simply random chance, that the causal link is still meaningful today.
Given that, I’d say it is incorrect to make a judgment about months having different risk profiles. So, does that mean I wasted my time? No, a negative result is still a result.
Footnotes:
1. That usually is the hard part, not the math. We have computers for the math.
2. A quick note about the methodology. I used the daily log returns of the adjusted close in all calculations. If P_t is the stock price on day t and P_(t-1) is the price the day before, the log return is given by r_t = ln(P_t / P_(t-1)).
3. Technically 64 years for the months through September and 63 years for the rest of the year.
“We have met the enemy, and he is us” – Walt Kelly
The protest I refer to is the climate protest that took place in New York on Monday where the guy in the polar bear costume was arrested (note: I’m not talking about the larger protest on Sunday). The protest was intended to send a message to Wall Street. They wanted Wall Street to know that it had to stop destroying the environment and instead invest in renewable energy. They wanted Wall Street to know that they’re angry.
Sadly, I think the only message that the protesters actually conveyed went like this:
“We do this to feel better than everyone else.”
This really makes me wonder just how strong the drugs they’re using really are. Alternatively, they could be doing their best to beat Occupy Wall Street’s record level of ineffectiveness. Consider polar bear guy’s bio in the Washington Post piece linked to above.
“Galvin lives in Shelter Cove, Calif. and once worked as a government contract wildlife researcher. He wears the polar bear suit to provoke discussion. (And because the kids love it.) “I want people to think of climate change in a different way,” he said. “We’re in a crisis, and our economy is driving it. We’re all in danger.” Wall Street, he said, needs to invest in renewable energy — the kind that doesn’t “destroy our planet.” After police ordered protesters to disperse Monday, the polar bear parked his fuzzy behind at the intersection of Broadway and Wall Street, a block from the Stock Exchange. He was among about 100 protesters arrested. Twitter noticed.”
I guess the logic looks something like this.
- The greedy assholes on Wall Street fund big oil companies and haven’t invested in renewable energy because they’re greedy/evil.
- They really are so evil that they’ll “fund global destruction” just to make a little more money. They also cause carbon emissions.
- Occupy Wall Street failed to change anything about how Wall Street operates.
- There would be no consequences if Wall Street suddenly stopped funding activity harmful to the environment.
- So we’ll block traffic and some of us will dress as polar bears to protest Wall Street.
- Profit! Wall Street suddenly finds its heart, climate change is stopped, and everyone lives happily ever after.
All this does is make people associate environmentalism with crazy people.
First, in case you don’t know about the Monty Hall problem, here’s a quick explanation. There used to be a game show called “Let’s Make a Deal” where the contestant would have to pick one of three doors. Behind one of the doors was an awesome prize (money, new car…whatever) and there was nothing behind the other two doors.
There are no hints and no way to guess which door hides the prize, so you have a 1 in 3 chance of guessing correctly. So far, so good. Let’s say, for example, you pick door number 2.
It’s important to note that the host knows the location of the prize. Before you can open door 2, the host will open one of the remaining doors that does not have the prize behind it. Let’s assume that the host opens door number 1, showing you that the prize isn’t there. The host gives you a chance to switch doors if you want. So, you can either stay with door 2 or switch to door 3. The Monty Hall problem is whether or not you should switch doors.
Almost everyone who hears this problem automatically assumes that it doesn’t matter if you switch, but that’s wrong. You are more likely to win if you switch than if you don’t.
The reasons have been explained in great detail elsewhere and even on Mythbusters, so I’m not going to go over it here. However, during a class discussion the other day, my professor used a great metaphor to explain why the result makes sense.
Imagine that instead of three doors there are 1,000,000 doors. You have a 1 in 1,000,000 chance of getting it right. Say you pick door 42. The host then opens every single door except door 42 and door 328,791. The only way the prize is not behind door 328,791 is if you guessed correctly in the first place. In other words, there’s only a 1 in 1,000,000 chance that switching loses. The same logic applies with three doors: switching only loses if your original 1 in 3 guess was right, so switching wins 2 times out of 3.
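If you still don’t trust the argument, the problem is easy to simulate. A quick Python sketch of both strategies:

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Estimate the win rate of the stay/switch strategy by simulation."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667
```

One subtlety worth noting: when the contestant’s first pick is the prize, the host has two doors he could open; which one he chooses doesn’t change the win rates above.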