Tuesday, October 15, 2013

I Feel the Need. The Need... For Speed

In which Tacoma tries to solve their traffic problems, and I join the chorus of people telling them to "Jump on it."

There and Back Again

Six hundred seventy-one miles is a long way to drive in a day. I know that seems pretty obvious, but it is the kind of drive that seems longer once you are an hour into it, when the "hundreds of miles left" digit hasn't changed. That kind of drive gets you to thinking. About speeding.

It was on a drive like this that I started thinking about the speed I was driving and what minor changes would mean. I know there are a lot of attitudes about speeding, based on who you are, where you are from, and how much control you feel over your machine. I think I fall into a group with a large number of people who are comfortable at 5-10 mph over the limit but uncomfortable in the 15-20 range. That's not to say I don't enjoy the occasional ride in my friend's Tesla. Or singing along when my pile shakes as I hit 80 on the open road. I do. I just don't think of driving that way most of the time.

But back to my long drive and thoughts on speed. Speed is funny.  We normally use units of miles per hour or meters per second or various other distance-over-time measures. But in practice this time-in-the-denominator thinking is made burdensome by one small truth. We don't say:

"I'll meet you in Tacoma. It's a half hour away, so I'll see you in 30 miles."

Correctly, we say:

"I'll meet you in Tacoma. It is 30 miles away, so I'll see you in a half hour."

We acknowledge the independence of distance and dependence of time, yet give them opposite roles in our measurement. (And we wonder why people have trouble determining which train gets to Chicago first!) All this is made uniquely interesting by freeway speeds that are generally close to 60 miles / h, making for the super easy conversion of 1 min / mile. If you go from Seattle (I-5 exit 163) to Tacoma (I-5 exit 133), you don't need the internets to tell you that the 30 miles between those exits should take about 30 minutes (only to tell you that it is 2.5 h in current traffic). But when you diverge from that easily invertible speed, things get crazy. Twenty mph over (80) saves you 7.5 min, but 20 mph under (40) loses you 15 min. It's a kind of beautiful asymmetry in a hope-you-don't-get-stuck-in-Tacoma sort of way.
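That asymmetry is easy to check with a few lines of code. This is just a sketch of the min-per-mile arithmetic above (the function name is mine, not the spreadsheet's):

```python
def trip_minutes(miles, mph):
    """Minutes to cover `miles` at a steady `mph`."""
    return miles / mph * 60

base = trip_minutes(30, 60)          # the 1 min/mile baseline: 30 min
print(base - trip_minutes(30, 80))   # 20 mph over saves 7.5 min
print(trip_minutes(30, 40) - base)   # 20 mph under loses 15 min
```

The asymmetry falls straight out of the division: speed sits in the denominator, so equal swings up and down don't produce equal swings in time.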


But Does It Make Sense?

So really, back to those six hundred seventy-one miles. At 65 MPH the inverted speed is 0.923 min / mile. Can I save any time by speeding? Is it going to kill me to go slower than the posted speed? The curve is going to look the same, but bigger.


The big prize here would be to cut 5 hours off of the trip by going 125 MPH the whole time. I'm not sure my car can get to 125, let alone maintain it if I hit a hill. So, second prize? What about saving nearly an hour and a half by going 10 MPH over the limit (which my friend from NJ says they aren't even allowed to pull you over for). That seems doable!
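The arithmetic behind those prizes is plain division, sketched here for the curious:

```python
trip = 671  # miles

def trip_hours(mph):
    """Hours to finish the whole trip at a steady speed."""
    return trip / mph

# Grand prize: 125 MPH the whole way saves roughly 5 hours over the 65 MPH limit.
print(round(trip_hours(65) - trip_hours(125), 1))
# Second prize: 10 MPH over saves nearly an hour and a half.
print(round(trip_hours(65) - trip_hours(75), 1))
```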

But what does it get us? Depending on how much you value your time, maybe not much. That hour and a half could be made up by eating lunch in the car and skipping the stop.  It could be made up by leaving a bit early.  It could be made up by not leaving late.

And the downside case? Stuck in light traffic the whole time and losing 5 hours as my speed drops to 45 MPH. So to summarize, this long trip had a small achievable upside and a large achievable downside. What happens when the distance changes? Can we keep any of that time we saved by speeding, and perhaps fit in one extra activity before starting our trip? I know it's been a while, so let's dust off that old spreadsheet and start modeling!

First let us come up with a list of activities:

Time (min)    Activity
0.25              Tie your shoes.
0.33              Tie your shoes (double knotted).
1                   Clap your hands 802 times (expert).
2                   Find your wallet (expert).
5                   Clap your hands 802 times (novice).
10                 Find your wallet (novice).
          or        Clap your hands 802 times (beginner).
22                 Watch an episode of Colbert Report (no commercials).
30                 Watch an episode of Colbert Report (w/ commercials).
60                 Watch an episode of 60 Minutes.

Enter the times in cells A4:A12 of the new sheet. Convert to hours by making cell B4 [=A4/60] (entering the portion [in brackets]), and complete the column (either dragging down or double clicking the bottom right corner).

Don't Forget to Log Out!

Next we need to think about our output. I want to know if I can still make up time given various distances. But 671 miles is very different from 1 mile, so chances are I'll want to use a logarithmic axis to display the data. Since I realize that now, I can choose distances to make the graph look pretty. This can be generalized to any log graph, using the range and number of data points you want. Enter the range in A1. For me that will be [=1000-1] because I'll have the log axis go from 1 to 1000. (Don't try to make it go to zero or you will have problems.) Now decide how many steps you want and use that as the root (the denominator of an exponent). In my case I'll choose 16, so it looks like [=A1^(1/16)] = about 1.5. This is the factor that each distance will be increased by (basically counting in "one-point-fiveary" instead of decimal or binary). Make C1:S1 the numbers 1, 2, 3, 4...17, and make C2 [=1.5^C1] then complete the row. Copy and paste special "values" into C3:S3 then delete all that stuff you needed to get your values. In my case I have the values 1.5, 2.25, 3.375... 985.261253. These values aren't nice and round, but on the graph they will be perfectly spaced on the log axis.
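The same trick outside the spreadsheet, as a sketch: pick a range and a number of steps, and multiply by a constant factor so the points land evenly spaced on a log axis.

```python
# Geometrically spaced distances for a log axis: each value is the previous
# one times a constant factor.
lo, hi, steps = 1, 1000, 16
factor = (hi / lo) ** (1 / steps)   # ~1.54; the post rounds this to 1.5

distances = [factor ** k for k in range(1, steps + 1)]
print(round(factor, 3), round(distances[-1], 3))
```

Using the exact factor rather than the rounded 1.5 lands the last point right on 1000; the rounded version stops near 985, which is why the sheet's biggest value isn't quite round either.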

Type the reference speed in cell A1. Let's start with 65 MPH. Next, fill in cell C4 with [=-C$3/($B4-C$3/$A$1)] which is an algebraic way of solving for the speed at which you can make up that much time in that much distance (vs. your reference speed). Complete the column. Complete the row. (The $ should take care of keeping fixed values in place but I usually spot check. The form should still follow [= - distance / (time - distance / RefSpeed)] in all cells). Immediately you might notice that the output is somewhat nonsensical. There are negative speeds in there! Those occur because of the physical impossibility of making up more time than the drive would take. You can't make up an hour on a 1 h drive, because you'd be going infinitely fast.  You can't make up 2 h either (hence the negative speed).

One way to get this out of the way is to do an "if" gate, where a negative value returns a ridiculously fast speed of, say, 1000 MPH. This value will never appear on our graphs, but gives them the appearance of the infinite asymptote they are approaching. Copy cells B4:B12 and paste special "values" in B14. For cell C14 enter [=IF(C4<0,1000,C4)] and complete the column. Complete the row. Yay! Graph that!
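The formula and the "if" gate together can be sketched as one little function (`catchup_speed` is my own name for it, mirroring [= - distance / (time - distance / RefSpeed)] with the 1000 MPH clamp):

```python
def catchup_speed(miles, hours_saved, ref_mph, cap=1000):
    """Speed needed to save `hours_saved` over `miles`, vs. driving at ref_mph.
    Impossible savings (more than the whole trip takes) return `cap`, the
    stand-in for the infinite asymptote."""
    remaining = miles / ref_mph - hours_saved
    if remaining <= 0:
        return cap
    return miles / remaining

print(catchup_speed(30, 0.125, 60))  # saving 7.5 min over 30 miles takes 80 mph
print(catchup_speed(30, 1.0, 60))    # can't save an hour on a half-hour drive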


Looks like at freeway speeds it is tough to make up time. Anything under 10 miles and you need some major speeding (the kind that can get you pulled over even in NJ) to make up just 2 minutes. Alternatively, drives over 100 miles lead to nice time gains from your speeding, such that you might even catch up on your TV watching.

Things get a bit more interesting on surface streets though. Bump the reference speed down to 30.


The lower speeds here make the curves look much sharper. Driving ten miles at our new reference speed would take about 20 minutes if we hit all green lights, but to shave even 5 minutes off that would require us to increase our speed by a third.  On the other hand, that same level of speeding can save an hour when you go over a hundred miles.

Actually, now that I mention it, stoplights are a much better reason to be late. Forget that double-knotting of your shoe... hitting just one stoplight could put you a minute or two behind schedule. I wanted to see how much time you might lose to stoplights, so I played around with modeling traffic light timing in Excel before realizing:

A) It's hard.
B) It depends on what kind of pattern you have, and
C) It's been done before.  Hundreds of thousands of times.

A much easier path is to make some assumptions:

Assumption 1. The number of lights you pass through increases linearly with distance you drive. (If you drive 10 miles you will pass through twice as many intersections as a 5 mile drive.)

Assumption 2. For every X lights you pass through, you will be stopped for N minutes. (I'll choose 4 intersections, stopping for 1 minute. I don't know which intersections, just total stop time.)

Assumption 3. The spacing for intersections is similar to that in Seattle. Every 5 blocks = major intersection = quarter mile.

This results in something quite interesting. There is no effect of speed on the amount of time you are stopped at lights. Of course this is a little simplistic, and does not account for running yellow lights (which would favor speeding) or timed lights for a given stretch (which discourage speeding), but instead lets us calculate a new ratio of "stoplight time / mile".  With the conditions I mentioned in assumption 2, that ratio would be 1 min/mile. Glancing back up at the 30 MPH chart, I can see that to save 1 min on 1 mile (or 10 min on 10 miles etc.) would be... 60 MPH?

Let me double check that. Ten miles at 30 MPH is 20 min. Adding 1 min/mile of stoplight time would bring the total to 30 min. If we tried to get back down to 20 min total trip time we'd subtract 10 min for sitting at stoplights and try to make the 10 miles in the remaining 10 minutes.  Yup, that's 60 MPH.
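That double check generalizes into a small function (a sketch; `speed_to_offset_lights` is my own name). It also shows why a big enough stoplight penalty leaves nothing to work with:

```python
def speed_to_offset_lights(miles, ref_mph, stop_min_per_mile):
    """Speed needed to match a stoplight-free trip at ref_mph, given a fixed
    stoplight penalty per mile. None means the time is unrecoverable."""
    driving_hours = miles / ref_mph - miles * stop_min_per_mile / 60
    if driving_hours <= 0:
        return None  # stoplight time alone eats the whole time budget
    return miles / driving_hours

print(speed_to_offset_lights(10, 30, 1))  # about 60 mph, as above
print(speed_to_offset_lights(10, 30, 2))  # None: unrecoverable
```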

And the sad thing is that I was trying to be conservative with my estimate of stoplight time. Downtown it could be much higher, and remember, this stoppage is distance independent, so we can't just drive 1000 miles to get efficiency (although after we leave the city the stoplight time/mile should drop significantly). And here's a spooky thought (my hedge against whether or not I get in a Halloween post), what if the stoplight time gets to 2 min/mile?


That time is unrecoverable.

Tuesday, July 16, 2013

Baby You Can Drive My Population Growth

In which we watch the baby-making process over time.  Um, no... in which we analyze how babies are brought into the world.  Nope, still creepy.  In which we stare at data.

Rollin' In Their Graves

The number of people alive today is more than 1/10th of the number of people who ever lived.

I'm not sure where I first heard that curious statistic, but it is exactly the kind of thing I like to repeat.  Is it true?  I mean, I know there are a lot of people in China and India, as well as a bunch of people in the US, Indonesia, Brazil, Pakistan, Nigeria etc, but to compare that to all the people?  Ever?  In all 6,000 30,000 200,000 years of human existence on Earth?  Assuming the zombie apocalypse, would we really just need to destroy 9 zombies each?  That sounds quite doable!  (Side note, it feels really good to write a statistical blog and start a sentence with, "Assuming the zombie apocalypse".)



So our first task today is to see if this is true. If so, will our odds of surviving the zombie apocalypse get better or worse over time?

Oh, The Humanity

One problem we run into is the definition of what it means to have "all of humanity".  It means there was a beginning.  We separated from our closest surviving primate relatives (chimps) a few million years ago, but were still mingling and struggling against other hominids until much more recently. Our small populations at the time make the annual human contribution quite tiny, but the sheer variation in start date magnifies the importance of the question. One solution is to use our pinch point. About 100,000 years ago the population of early humans was reduced to possibly as few as 2,000 individuals. After this, we were clearly our own species, so I will use 100,000 BCE as our start date and 2,000 as our starting population. The rest of the data I pulled off the US Census website, where they have nice global estimates for 10,000 BCE through 1950, at which point they switch to precise annual estimates.

First let's model their data. I put date in Column A (using negatives for BCE) and population (in millions) in Column B.  The initial plot looks kind of exponential.


Replotting on a log scale reveals distinct phases.


You can see how big a role artificial fertilizers played. Since their introduction, the doubling period of global population growth has been 50-ish years. This is obviously not sustainable, but we will come back to that later. 

But we don't want to know how many people there were in any given year. We want to know how many people there were total. To do this we will count the one thing everyone has had... a birth. Even if they only lived a few minutes or are still alive today, at some point they were born. (I hadn't thought about it until now but a significant portion of the zombie apocalypse will be children. It is sad, but it does improve our chances.)  The number of births in a given year can be calculated by the crude birth rate (CBR) which is total births per 1,000 people per year. This number can be found for modern eras and estimated for ancient ones. It ranges from 50-ish in cultures with high infant mortality, limited women's rights, and limited contraception to 10-ish in cultures with the opposite. Both extremes put pressure on society (to care for the young or the elderly, respectively), but if you are the leader of a culture you would probably prefer the latter. 

Anyways, put CBR in Column C and we'll estimate that pre-1950 it is about 37.2 (our earliest data point). If we average the population over our different time spans and multiply by years and CBR we get the approximate number of people born during that time span. We make that equation by typing the [in brackets] portion.  Move everything down so data starts in Row 4 and we have room for calculations.

(Cell D4) [=0.002] Million People Initially
(Cell D5) [=(C5/1000)*AVERAGE(B4:B5)*(A5-A4)]

Complete the column. Column E will be how many people are born up to that point. And Column F will be the ratio of current population to people ever.

(Cell E4) [=sum(D$4:D4)]
(Cell F4) [=B4/E4*100]

Complete the columns.  Cool! We really are about 1/8th of the population ever.  Here are some representative data points.
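The same running-total-of-births idea can be sketched in code on a toy table. The five (year, population-in-millions) checkpoints and the flat CBR below are stand-ins I picked for illustration; the real sheet uses the full Census table, so the numbers here will not match the post's 1/8th figure, only the method:

```python
# Toy version of columns D:F: births per span = (CBR/1000) * avg population * years.
checkpoints = [(-100000, 0.002), (-10000, 5), (1, 300), (1950, 2557), (2013, 7100)]
cbr = 37.2  # births per 1,000 people per year (the sheet's earliest data point)

ever_born = checkpoints[0][1]  # seed with the starting population (Cell D4)
for (y0, p0), (y1, p1) in zip(checkpoints, checkpoints[1:]):
    ever_born += (cbr / 1000) * ((p0 + p1) / 2) * (y1 - y0)  # Cell D5's formula

alive_pct = checkpoints[-1][1] / ever_born * 100  # Cell F4's ratio
print(round(ever_born), round(alive_pct, 1))
```

With so few coarse checkpoints the long early spans get overweighted, which is exactly why the spreadsheet wants every Census row it can get.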


Making the CBR of nomadic hunter-gatherer peoples 20 or 50 (which are both defensible positions without much real data) only changes this by about 3%.  Interestingly, back when I heard the statistic, it might have been 1/10th of the population.  The fraction of "All People" who are currently alive has been steadily growing.

But what will happen in the future?

One Hundred... Billion... Dollars.  What? People?  That Doesn't Make sense.

To answer that, let's first have a little aside.  If you thought the growth of the twentieth century was scary, the following graph should be downright terrifying.


Keeping with the 50-year doubling trend would mean that in 200 years we could be looking at 100 billion people.  Even discounting the Earth's ability to sustain such a population, each person would only have a third of an acre of Earth's land-surface area.  Unless we started building underground.  This is obviously silly, right?

It May Be A Growing Problem

At what point will global growth slow?  Let's look forward to the futuristic date of... 1970.  Yeah, this totally threw me for a loop too. I was taking the second derivative of population with respect to time, you know, like you do, and bam!  It is glaring. Here is an even easier way to look at it. Find the average annual growth rate for each period:

(Cell G5) [=EXP(LN(B5/B4)/(A5-A4))-1]

Complete the column and you see that for most of human history there has been low or no growth in population followed by furious activity in the last thousand (and especially hundred) years.
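That cell formula is just the geometric average annual growth rate between two checkpoints, which is easy to sanity-check in code:

```python
import math

def annual_growth(pop0, pop1, years):
    """The sheet's [=EXP(LN(B5/B4)/(A5-A4))-1]: geometric average annual growth."""
    return math.exp(math.log(pop1 / pop0) / years) - 1

# A 50-year doubling works out to roughly 1.4% growth per year.
rate = annual_growth(1, 2, 50)
print(round(rate * 100, 2))
```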


Let's zoom in.


Correlation is not causation, but it certainly looks like artificial fertilizer helped increase the growth rate.  As for the decline, it could be many things.  China instituted its one-child policy, many economies slowed down, but importantly, lots of countries modernized their view about contraception and women's rights.  If we extrapolate the rate of growth, we eventually get down to zero.  At that point our population would stabilize at right around 10 billion people.


So that slowing is good news for the planet (also confirmed by the US Census and UN independent estimations), but bad news for our odds in the zombie apocalypse.  Even worse is that we start losing our edge even before the population stabilizes.

Undead Reckoning

Let's take a minute to think about what we really want to know and what we already know.  We know there is a ratio of "Current Population" to "All People Ever".  We know this ratio changes.  We want to know when that ratio is going to stop changing in the positive direction, and start going down.  In math or physics, we would call this "finding the maximum of a curve" and would accomplish this by defining the derivative of the ratio with respect to time and finding where it goes to zero.  That is where it stops changing.  If you dislike reading the maths, this might be a good time to skip to the graph.  Basically we are doing a related rates problem:


We already have the rate of current population growth, but to bring this forward, I first need to tell you how I made the extrapolations.  Start by taking a derivative of population growth. This is just the change in population growth over a change in time.

(Cell H6) [=(G6-G5)/(A6-A5)]

Complete the column, and you have the increase or decrease in population growth per-year.  This data  is often very noisy, but recently it has been remarkably stable, with a value of around -0.03%.  The negative means that the rate of population growth is decreasing each year.  To extend it past 2013, I calculate the moving average of the previous 40 years:

(Cell H107) [=AVERAGE(H67:H106)]

When you complete the column (and add a hundred years to column A) you can extrapolate the growth rate:
(Cell G107) [=G106+H107]

And you can increase (or decrease) your future population and total of "All People Ever":

(Cell B107) [=B106*(1+G107)]
(Complete all other rows)
(Add in estimates of CBR for future years)

Now we are back to the graph I showed previously, so let's look at the growth rate for "All People Ever":
(Cell I5) [=EXP(LN(E5/E4)/(A5-A4))-1]

Plotting these two on the same graph demonstrates that the "Current Population" rate passes over the "All People Ever" rate well before the "Current Population" rate reaches zero (and our population stops growing).  


So our best chances are in the year 2059 when 15.21% of All People Ever will currently be alive.  This may seem counterintuitive since "Current Population" will still be growing, but one rationalization is that while "All People Ever" is just based on births, "Current Population" is based on births and deaths, meaning that "All People Ever" will start growing faster than "Current Population".  (And theoretically, integrating the area under the Current Population curve between the "maximum ratio" date and the "population stabilizes" date will yield the number of people who will die after the best zombie apocalypse date but before current population decreases.)

Boneheaded Oversight

Of course, as my wife mentioned to me when looking over my first draft, we shouldn't really be afraid of people who are dead for such a long time that they are just bones.  Those aren't real zombies.  Let's limit it to just people who might still be creepy looking.  What is the ratio of people alive today to people who died in the last 50 years?

(Cell J44:J206) = Crude Death Rate (CDR)
(Cell K44) [=(J44/1000)*AVERAGE(B43:B44)*(A44-A43)]
(Cell L93) [=A93/SUM(K44:K93)]

Complete the columns and graph.


Looking at it this way our odds are much better.  There are currently 2.86 people alive for each person dead in the last 50 years.  On the other hand, our advantage goes away even sooner, due to the stabilization of our CDR.  Here's hoping that the zombie apocalypse is right around the corner?

-----



Wednesday, July 10, 2013

Finding Your Rebalance

In which I use a lot of fancy math to tell you to buy low and sell high.


Aaaaaaand We're Back

So much for sticking to my once-a-week schedule, but thanks for coming back!  I've been thinking about today's post for a long time, but have been too busy to actually model anything until the last couple days.  While I've been teasing with several recent finance posts, this one comes closest to my actual philosophy.  That said...

I am not a financial planner.  You should do your own research and make your own financial decisions based on what is best for you.  Also, I highly recommend making a leveraged buyout offer for Dell Computers.  Everyone I know who has done it is now a billionaire.

Winning!

Anyways, one of the biggest problems with investing is that we don't know the future.  Sure, we can Buy and Hold, but unless someone from the future tells us what to Buy and Hold, we are stuck with our best guesses and a distinct lack of hoverboards.  So we diversify.  By that I mean we make lots of guesses and end up with an average yield.  Most people are of two minds on this.  Their rational risk-averse scared-mammal mind thinks, "Whew, glad I am safe!  And look at all these gains!" while their greedy reward-centered hungry-reptile mind thinks, "Wow, if I had just put a bigger bet on the winners I could have had so much more!"

Well, why don't people put a bigger bet on the winners?  Mainly because, well, most people only have so much money and they don't know the future.  They hedge their bets to mitigate the chance of betting on a loser.  Once you know it is a winner, it is already too late to "Buy Low".

So what if you follow the mantra of "Buy Low, Sell High"?  After all, that is what investing in the markets is all about.  Once your winners are winning, sell high and bet on some potential winners that may go either way.  Rebalancing your assets this way can be dangerous with individual stocks.  If you sell a winner and end up putting all your money on penny stocks that go to zero, your investment hasn't done much for you.  You want assets that are safe and won't go to zero but still show some growth.

One asset that fits the bill is an index fund.  These mutual funds (or exchange traded funds- ETFs) simply follow a basket of stocks (or more recently bonds, real estate, gold, or bitcoins) and reflect the value of the diverse set of underlying assets.  So this is today's challenge: apply the idea of rebalancing to index funds.

A few more notes on index funds.  The oldest ETFs have only been around since the '90s, and most are less than ten years old.  For this analysis I'll only look at the last 10 years of data, and even though I'd love to throw in some alternative assets, the only tradable ones that fit our needs cover things like the NASDAQ (QQQ) and S&P 500 (SPY).  They charge a small fee (~0.05%) to rebalance the stocks inside the fund (not always holding everything, but mimicking the results) and distribute the dividends, but for our purposes we will ignore the slightly different fees and distributions as well as the cost of trading them and tax implications.

Whew!  Let's do some modeling.

An Interesting Start (get it? compound growth? anyone?)

We will start with the NASDAQ and S&P 500 ETFs, which were both assets you could have invested in 10 years ago.  This puts our start date as July 2003, conveniently after the dot-com bust, but far enough back that we should be able to see some divergence.  NASDAQ (QQQ) consists of holdings that are concentrated in the tech sector.  Additionally, it is "market cap weighted" so the bigger companies comprise a larger portion of the assets.  Nearly 20% of it is just Apple and Microsoft, and another 30% is the next 8 largest companies.  Contrast that with S&P 500 (SPY) which consists of holdings spread across more industrial companies.  Yes, they still have Apple and Microsoft, but they also have Exxon, Wells Fargo, General Electric, and Johnson & Johnson.  They are also market cap weighted, but with so many more companies, the top ten don't even make up 20% of the index.  (As an aside, this market cap weighting also means that the ETF only needs to buy or sell stock components when it needs money, as an increase in the price of a company will automatically make it a larger component of the index.  When the ETF does sell assets, selling 0.1% of every company means selling more of the expensive stocks than the cheap ones... already initiating a form of rebalancing!)

So given the information above, which ETF would you have chosen as the best asset for your, say, $10k investment?  The right answer is... you don't know.  Or rather, you didn't know.   You can't predict the future nor should you.  Over that 10 years, the NASDAQ returned 8.2% annually and the S&P 500 returned 5.9%.  If you invested 50:50 in both, you returned just over 7%.


That difference between the two ETFs can be attributed mostly to the growth of two companies: Apple and Google.  While both indices had them, NASDAQ had more.

Oh, how did I make that graph?  You can get the daily close price for QQQ and SPY from July 7, 2003 to July 3, 2013.  This turns out to be over 2500 data points each, which tends to slow down Excel when I try to graph it.  I take the data, crop out the unnecessary bits and sort it in ascending order with:
Column A = Date
Column B = QQQ
Column C = SPY

I then make a Column D which is a series of ascending numbers a fixed number of rows apart.


You can sort by Column D (ascending), and effectively sample your data.  I used every 11 trading days for the graph, but you can easily do every month (21 trading days) or year (252 trading days).  Delete Column D and any excess data, as you won't need it any more.  I also insert three rows at the top to run calculations.  To get the value of each portfolio type the [In Brackets] portion:
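The helper-column sort is Excel's way of keeping every Nth row; in code it's a slice. The rows below are made up stand-ins (real data would be date, QQQ close, SPY close):

```python
# ~10 years of trading days of fake (label, price) rows.
rows = [(f"day {i}", 100 + i * 0.1) for i in range(2520)]

monthly = rows[::21]    # every 21 trading days ~= monthly
yearly = rows[::252]    # every 252 trading days ~= yearly
print(len(monthly), len(yearly))
```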


Complete the columns and graph.  That 50:50 Buy and Hold portfolio is what we aim to beat.  Hopefully we get up to the yield of the NASDAQ, but without the risk of choosing the wrong asset.

When we rebalance, we are trying to get back to our initial distribution of assets because they have grown at different rates.  This only works if the two data sets are not perfectly correlated with each other.  We can check for this in Excel by looking at our full data set and probing with

Correlation Coefficient [=CORREL(B2:B2530,C2:C2530)] = 0.72

A correlation of 0.72 means SPY and QQQ move together strongly, though far from perfectly (strictly speaking, movement in one explains about half the variance in the other).  This makes a lot of sense, as there are several underlying stocks in common between the indices, but not all of them.  Again, this is where some alternative assets would be nice, as they may have even lower correlations.  The other thing you need is for some force other than momentum to be pushing the prices.  It seems silly to add this caveat, but you can imagine that if a stock only went up because it went up the day before, you will never get any benefit from selling that stock to buy one that isn't doing as well.  I don't think this should be a problem.
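Excel's CORREL is a plain Pearson correlation, which is short enough to sketch by hand. The two short price series here are synthetic stand-ins for the 2,500-row QQQ and SPY columns:

```python
import math

def correl(xs, ys):
    """Pearson correlation coefficient, same quantity as Excel's CORREL."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

qqq = [30, 31, 33, 32, 35, 36]   # made-up closes
spy = [100, 102, 101, 104, 106, 105]
print(round(correl(qqq, spy), 2))
```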

Stock of the Month Club

As a first pass, let's sample our data monthly (see above) so that we have Date, QQQ, and SPY in Columns A:C with our 50:50 in Column D.  We can have a simple reporter above them that tells us the (geometric) average annual yield over ten years.

Annual Yield of QQQ [=(EXP(LN(B2533/B5)/10)-1)*100]

Basically you figure out how much growth you have had (final/initial value), spread it over 10 years, and use algebra to find the yield you would need to produce that growth.  For the actual "Monthly Rebalanced" portfolio we will need three columns (E:G).  Each month we will sell all shares, then buy back a balanced portfolio.  (In reality you would only sell the excess shares to buy the lower shares but this is easier to model/explain and gives the same result.)  To do this, put 10000 in cell E5.  Then determine how many shares you will buy using:

(Cell F5) [=E5/2/B5]
(Cell G5) [=E5/2/C5]

We now have equal value in shares of the two ETFs.  Next month we need to sell our shares at that month's current price.
(Cell E6) [=F5*B6+G5*C6]

Complete the columns.  (Completing columns F and G down to row 6 first is the easiest way to do this without getting an error.)  Let's check out the results!
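The sell-everything, buy-back-50:50 loop from columns E:G can be sketched like so. The prices are made-up monthly closes; the mechanics are the point, not the numbers:

```python
def rebalance(prices_a, prices_b, start=10000):
    """Each period: mark the portfolio to market, then buy back a 50:50 split."""
    value = start
    shares_a = value / 2 / prices_a[0]
    shares_b = value / 2 / prices_b[0]
    for pa, pb in zip(prices_a[1:], prices_b[1:]):
        value = shares_a * pa + shares_b * pb   # sell at this period's prices
        shares_a = value / 2 / pa               # buy back equal values
        shares_b = value / 2 / pb
    return value

a = [30, 33, 30, 36]        # fake fund A closes
b = [100, 95, 105, 100]     # fake fund B closes
print(round(rebalance(a, b), 2))
```

With these made-up prices a straight 50:50 buy-and-hold ends at 11,000, so rebalancing comes out ahead here; that's because these fake prices oscillate, which is exactly the condition the post goes on to worry about.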


Confused?  Yeah, I was too.  It just so happens that you get no benefit at all.  But this should work?!?  I'm supposed to make fun of other people's silly financial strategies not disprove my own! 

One way to wrap your head around this is that when we model the Monthly Rebalance method, we aren't doing it smartly.  We really don't know why we rebalance or even if it's necessary.  We could have a completely balanced set of assets that we sell and buy back in the exact same ratio.  Alternatively, we might be missing out on huge imbalances simply because it isn't time to change yet.  When it is time to change, the imbalances may have resolved themselves.  All this means is that the small amount of momentum inherent in the price swings can eat away at our inept rebalancing attempts.  There are two solutions to this.  We can rebalance smarter or more often.

If At First You Don't Succeed, Repeat As Necessary

Let's try more often!  Bring back the full set of data with date, QQQ (daily), and SPY (daily) in Columns A:C.  Make a 50:50 portfolio in Column D as well.  Now apply the same E:G equations and complete the columns.  This is modeling what would happen if we sold everything at the end of the day and immediately bought back a balanced portfolio.  Let's check the results... again!


Now that's what I'm talking about!  Now we were able to take advantage of all the imbalances brought on by the volatile market.  An extra 0.7% yield is nothing to sneeze at, and all it cost us was... wait (doing maths in head: cheap $5 trades, selling all of two ETFs, buying back two ETFs, each of 252 trading days a year... $5*(2+2)*252 = ) a little over $5k a year... to handle our $10k portfolio.  Ouch.  Granted it would still be $5k for a $10mm portfolio (0.05%), but I don't have ten million dollars.

I Can't Believe It's Not Smart Balance

So what if we instead rebalanced smarter.  Only when we needed to.  Only when things were really out of whack.  First, let's define really out of whack.

Out of whack = (Cell A1) = 0.05

So "out of whack" currently is set at 5%.  We'll be changing this later on.  I'll just add this Smart Rebalance to our Daily Rebalance worksheet.  Start out the same way:

(Cell H5) [=10000]
(Cell I5) [=H5/2/B5]
(Cell J5) [=H5/2/C5]
(Cell H6) [=I5*B6+J5*C6]

So you start with the same shares.  Now we throw in some IF statements:

(Cell I6) [=IF(ABS((I5*B6-J5*C6)/H6)>A$1,H6/2/B6,I5)]
(Cell J6) [=IF(ABS((I5*B6-J5*C6)/H6)>A$1,H6/2/C6,J5)]

So now if the difference in our two assets is more than 5% of our portfolio, we rebalance to 50:50, otherwise we keep the shares we have.  Complete the columns.  And the results?
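The threshold logic from columns H:J, sketched with the same fake prices as before (made up for illustration):

```python
def smart_rebalance(prices_a, prices_b, threshold=0.05, start=10000):
    """Rebalance to 50:50 only when the value gap between the two funds
    exceeds `threshold` of the whole portfolio; otherwise hold."""
    shares_a = start / 2 / prices_a[0]
    shares_b = start / 2 / prices_b[0]
    value, trades = start, 0
    for pa, pb in zip(prices_a[1:], prices_b[1:]):
        value = shares_a * pa + shares_b * pb
        if abs(shares_a * pa - shares_b * pb) / value > threshold:
            shares_a = value / 2 / pa
            shares_b = value / 2 / pb
            trades += 1
    return value, trades

a = [30, 33, 30, 36]
b = [100, 95, 105, 100]
final, trades = smart_rebalance(a, b, threshold=0.05)
print(round(final, 2), trades)
```

With these fake prices a 5% threshold trades at every step (matching the daily result), while a 10% threshold never fires and you end up at plain buy-and-hold, which is the stringency trade-off the post explores next.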


Not too bad!  How much did it cost us?  Well, we can make a quick reporter:

(Cell K5) [=SUM(K6:K2533)]
(Cell K6) [=IF(J6=J5," ",1)]

This returns 1 each time we rebalance, then adds them up.  For the 5% threshold, we end up with an average of 1.7 trades per year (doing math again... man, I should really get a computer program that can do this: $5*1.7*(2 sells + 2 buys) =) about $34 a year to manage our portfolio and squeeze another 0.15% out.  Again, this doesn't really help a $10k portfolio (since it only squeezes out $15), but right around a $23k portfolio it does pay off.  Additionally, many brokerage firms give you a few free trades a year or free trades on ETFs if you hold them for more than a month.  Also, if we really just sold our overperformer and bought the underperformer, that cuts our costs in half.  Anyways, you probably noticed by now that the Smart Rebalancing didn't quite get back to the Daily Rebalancing.  This changes if you change your stringency threshold.


As you can see, moving to the 2% range can squeeze out nearly 0.5% extra yield and still stay near one rebalance a month.  Unlike our previous Monthly Rebalance, these Smart Rebalances are distributed over the ten years as necessary.


Though you can't tell it just from this graph (as rebalancing can go either way), NASDAQ fared much better during the '08 collapse (once again, due to Apple and Google being more heavily weighted).  Benefiting from one fund's awesome stretch is what rebalancing does best.

Foreign vs. 'Merican

But as I mentioned before, NASDAQ and S&P 500 have a lot of correlation.  Let's look at some other ETFs that have ten years of data.  I won't bore you with the Excel details: basically I imported the data the same way and ran [=CORREL(B5:B2533,C5:C2533)] on each relationship.
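If your data lives outside Excel, CORREL is just the Pearson correlation coefficient, which is a few lines by hand (the price lists below are tiny stand-ins, not the real ETF series):

```python
from math import sqrt

def correl(xs, ys):
    """Pearson correlation coefficient, the same thing Excel's CORREL computes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# stand-in closing prices for two broad-market funds
spy = [100.0, 101.0, 99.5, 102.0, 103.0]
qqq = [50.0, 50.6, 49.9, 51.2, 51.5]
r = correl(spy, qqq)   # close to 1: the funds move together
```

Run it on each pair of columns and you get the same correlation matrix the spreadsheet produces.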


As you can see, most index funds of stocks (name on the side, ETF ticker symbol on top) have significant correlation.  (As an aside, you can see why the only good use for the Dow Jones is that it predicts movement in the S&P 500.  If only we had an indicator for that... such as the S&P 500.)  One nice low-correlation pairing is the Consumer Discretionary Fund (XLY), which features stocks like Home Depot and Ford, with the Emerging Markets Fund (EEM), featuring stocks like Samsung and China Mobile.


As before, a 50:50 mix gives an average yield between the two individual ETFs that can be significantly improved by Daily Rebalancing.  Interestingly, it takes very little Smart Rebalancing to achieve similar results.  This may be because both these funds are more volatile than the ones we were looking at previously, but the end result is that you can set your threshold high and end up trading only once or twice a year.  This meant that when EEM reached new highs at the beginning of the year, a 10% threshold allowed you to sell (maybe missing the true peak) so that you would instead be invested in XLY when EEM came crashing down in June.

One, Two... Many

Realistically, it is unlikely that you are invested in just two ETFs that you have to balance.  In a way that makes things easier.  Now we are just looking for the huge imbalances where one component is, you guessed it, way out of whack.  Let's try it with all four ETFs.

The 50:50 portfolio and Daily Rebalance portfolio are pretty much the same.  Just change any "/2" to "/4" and you are basically there.  (I guess that technically makes it a 25:25:25:25 portfolio.)  For the Smart Rebalance I'm going to need to get a bit tricky.  Fill in Columns A:E with the date and the four different datasets.  Column F will be our portfolio size, and we will still make F5 [=10000], but instead of implicitly valuing each component, let's make Columns G:J be the value of our four assets.  Initially we can assign G5:J5 [=2500].  Now we need to determine how many shares we have to start and insert them in Columns K:N similar to before:

(Cell K5) [=G5/B5]
(Cell L5) [=H5/C5]
(Cell M5) [=I5/D5]
(Cell N5) [=J5/E5]

The value of the share the next day contributes to the total portfolio:

(Cell F6) [=sum(G6:J6)]
(Cell G6) [=K5*B6]
(Cell H6) [=L5*C6]
(Cell I6) [=M5*D6]
(Cell J6) [=N5*E6]

Now for the IF statement.  We will rebalance only if the difference between the most valuable and least valuable components is more than (A1= out of whack) percent of the portfolio:

(Cell K6) [=IF((MAX(G6:J6)-MIN(G6:J6))/(F6)>A$1,(F6/4/B6),K5)]
(Cell L6) [=IF((MAX(G6:J6)-MIN(G6:J6))/(F6)>A$1,(F6/4/C6),L5)]
(Cell M6) [=IF((MAX(G6:J6)-MIN(G6:J6))/(F6)>A$1,(F6/4/D6),M5)]
(Cell N6) [=IF((MAX(G6:J6)-MIN(G6:J6))/(F6)>A$1,(F6/4/E6),N5)]
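In Python the same max-minus-min trigger generalizes naturally to any number of assets; here's a sketch of my reading of those formulas (prices and threshold are made-up illustrations):

```python
def smart_rebalance_n(price_rows, start=10000.0, threshold=0.05):
    """price_rows: one list of per-asset prices per day.
    Rebalance to equal weights whenever the gap between the most and
    least valuable components exceeds `threshold` of the portfolio."""
    n = len(price_rows[0])
    shares = [start / n / p for p in price_rows[0]]
    for prices in price_rows[1:]:
        values = [s * p for s, p in zip(shares, prices)]
        total = sum(values)
        # the Excel IF: (MAX - MIN) / portfolio > threshold?
        if (max(values) - min(values)) / total > threshold:
            shares = [total / n / p for p in prices]
    return sum(s * p for s, p in zip(shares, price_rows[-1]))
```

The only change from the two-asset version is that "difference between the assets" becomes "most valuable minus least valuable," which is what the MAX/MIN pair in the spreadsheet does.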

Complete the columns!  Let's look at different thresholds:


Now we're talking!  With a 3% threshold, trading just 5 times a year you can beat three of the four components of your portfolio.  While you don't get the huge results of your best ETF, your transaction costs are very reasonable to squeeze out a 0.5% increase over a simple Buy and Hold.  In reality, you aren't going to have an alarm going off when your portfolio starts to go out of whack.  You aren't going to have this exact schedule:


Hopefully you just keep an eye on things.  Hopefully you aren't paying someone else 1% of your portfolio just so they can "beat the market" by 0.5% doing what you could do in 5 minutes every two months.  Over the last 10 years it seems like you could invest quite well just by diversifying and rebalancing.  Just remember, past events may or may not be indicative of future ones.

-----

So apparently Blogger (read: Google Overlords) now allows people to actually subscribe to blogs!  It feels so 2008!  The google groups email list is still active, but if you want to subscribe to the blog now I'd suggest using the link on the right panel.


Saturday, June 22, 2013

Momentum Mo Problems

In which I ask myself if I'd jump off a bridge just because all my friends did.  Wait, what?  All my friends are gone and I'm stuck up here with whatever chased them off a bridge?

The Super Secret Awesomeness Strategy

Fall of 2008 was not a fun time for investors.  Even Buy and Hold believers had their faith questioned as the value of their portfolios dropped by 30-50%.  From out of the shadows came calls for the Super Secret Awesomeness Strategy... able to trade punches with Buy and Hold during good times, yet calmly stand aside when everyone else panics. Why suffer, when you could have results like these:


(As you can guess, that is a very selective date range.)  So this is today's challenger for Buy and Hold's heavyweight belt.  A strategy that had been around for decades, but which gained renewed popularity in the last few years.  In the red squares... the Super Secret Awesomeness Strategy... aka Momentum Investing.

Strength in Numbers

The wisdom of crowds is a curious thing.  In an efficient market setting, it can decide that the price of rice is exactly $500 a metric ton (coincidentally the only size available at Costco).  Or, in a low-information setting, it can decide that the best way to exit a burning theater is to climb over the person next to you.  These two acts are actually very similar.  In each case, there are lots of people doing what they think is best.  The difference is information.

Whether it is farmers weighing an ox, voters choosing a politician, or bidders trying to snag something on e-bay, there is some small piece of information contributed by each member of the crowd.  That's not to say it is good information.  It may be egregiously bad information.  But the hope is that it is balanced out by opposite opinions.  You want to sell me a soda for a dollar?  I want to buy one for a nickel.  We haggle and settle on fifty cents.  If that information is public, the next buyer-seller pair can dispense with the haggling, or maybe fine tune the price to fit their specific desires.  This is all pretty obvious, but what I'm trying to get at is the idea that you can piggy-back on what everyone else is doing and get similar results.  Because the price was decided by others, you can choose to ride along and get your Costco rice at a price that is somewhat reasonable.  (Just be careful that you are actually heading to an exit when someone pulls that fire alarm.)

One would hope that the market works this way too.

Oh, and since we are talking finance, I should probably make my disclaimer here: First off, I should reiterate that I am not a financial planner, and you should make your own financial decisions based on what is best for you.  Secondly, I highly recommend pyramid schemes.  They have gotten a bad rap recently, but I can totally get you in on the ground floor.

Anyways, if you let the masses determine the price of an asset, it will save you a lot of time and effort.  Yes, maybe you have a general feeling that the asset will go up... that would be why you are buying it.  (And if that asset is, say, the whole stock market for the next 30 years, you have a strong case for your thesis.)  How much should you pay for it though?  You could talk to random people who own that asset and haggle with each of them, or you can piggy-back on the billions of shares traded each day (just on the NYSE) that have already worked out their haggling.  Even if they are each contributing a tiny amount of information, the overall market has a very well-informed price (which you may or may not agree with).

Indicator, I Hardly Know Her!

This brings up the two basic types of market trading you can do.  When people trade an individual stock, they might think, "Company X is very valuable and produces earnings that I would like a piece of.  I value those earnings at $10 or over.  I believe that the price is going up.  I like to end sentences with prepositions following."  They then buy a share if the price (which is determined by the crowds) reaches $10 and hold on to it until the price meets their value of the share.  This is investing based on fundamentals.  The idea is that you found an inefficiency in the market where someone disagrees with you on the fundamental value of something.

But what if we believe the market is actually efficient?  An alternative way to invest in an individual stock is to say, "The stock for Company X has risen recently.  Their chart shows an accelerating stock price that should continue for at least the short term."  They then buy a share at the market rate and hold on to it until the technical indicators tell them to sell.  This alternative is usually a risky (and expensive) play for individual investors due to our non-instantaneous information and the high transaction costs.

Instead of using technical indicators to buy and sell individual stocks, today we are going to look at buying and selling the market.  The idea is that there is a momentum to the market that makes the swings larger than would make sense from just the fundamentals.  In late 2008 the "average company" lost about 50% of its value according to the stock market.  Was that really true?  Did General Electric have half as many factories?  Did Microsoft have half as many Windows users?  Did Lehman Brothers have half as much money as they thought they did?  (Oh, wait.)  Anyways, the quick(-ish) rebound in the broad stock market (as measured by the S&P 500) shows that it was mostly the market's momentum that was bringing down prices.  Smart investors like Warren Buffett saw that and kept plowing money back into the market because fundamentally everything looked cheap to him (but not to whoever was selling to him).

Taking the Bear by the Horns

To buy the whole market is quite easy.  There are mutual funds that will let you do it for 0.2% of your portfolio/year and exchange traded funds that will do it for 0.05% (plus a trade fee of $5-$10).  We aren't going to concentrate so much on the "how" for this post.  Let's focus instead on the "when".  One of the easiest things to measure in the market is the 12-month simple moving average (SMA).  For our purposes, we can look at the monthly value of the S&P 500 (available since 1950) and average the last 12 months.  These fit nicely into our spreadsheet by typing the [in brackets] portion:

(A25:A784) = Date
(B25:B784) = Adjusted Close of the S&P 500
(C36) = 12-month SMA [=AVERAGE(B25:B36)] -> complete the column

To round out our data set for times when we can't have a 12-month SMA, we can just set C25:C35 to = B25:B35.  Any strategy dealing with averages over time will run into end effects like this, so we won't worry about it too much.

The technical indicator in this case is when the current value of the S&P 500 passes above or below the 12-month SMA.  If it passes below, we can expect that momentum is artificially pushing stock prices down (like Fall 2008).  If it passes back above, we can expect that momentum is artificially inflating the prices.  Because the S&P 500 has grown (at a 7-8%/yr pace), most of the time we will see the positive momentum.  At that point we should be invested in the market.  If the momentum turns negative we should sell and hold as cash (or equivalent).  Similar to the Sell in May question, this is a binary decision, using the month's information to determine whether to be in stocks or cash for that month.  (Interestingly, this limits us to 12 trades a year, making it a somewhat fiscally achievable strategy.)  To model our binary decision:

(D25) = Market rate for that month [=B26/B25]  -> complete column
(E25) = Decision for rate [=if(B25<C25,1,D25)] -> complete column
(F25) = Starting Portfolio Value [=B25]
(F26) = Portfolio Value [=F25*E25]  -> complete column
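The whole in-or-out decision fits in a few lines of Python (the closes below are invented, not the real S&P series, and I handle the early months with partial averages rather than the spreadsheet's copy-the-price trick):

```python
def momentum_backtest(closes, window=12):
    """Hold the index while its price is at or above the trailing
    `window`-month simple moving average; otherwise sit in cash."""
    portfolio = closes[0]                      # start at the index value
    for i in range(len(closes) - 1):
        lo = max(0, i - window + 1)
        sma = sum(closes[lo:i + 1]) / (i + 1 - lo)
        market_rate = closes[i + 1] / closes[i]
        # below the SMA -> cash (rate 1); otherwise ride the market
        rate = 1.0 if closes[i] < sma else market_rate
        portfolio *= rate
    return portfolio
```

On a made-up downturn like [100, 90, 80] with a 2-month window, the strategy takes the first hit, then moves to cash and skips the second leg down, ending at 90 while Buy and Hold ends at 80.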

Wow!  12-Month Momentum is going toe-to-toe with the champ!


I'd have to call this one a tie.  The strategies were neck and neck from 1950 through the mid-80's, but then increased volatility sent false "sell" signals which allowed Buy and Hold to take the lead.  But next, the 12-Month Momentum strategy showed its pugilistic prowess by calmly sitting out the worst of the two bear markets we've had in 2001-2 and 2008-9.  Unfortunately that meant it sat out much of the recovery of '09, for a final lead of 6% over 63 years.  To put that in perspective, it equates to 0.1% per year.  Actually implementing the strategy would have likely cost an unknown amount, but there is a decent chance it would be more than 0.1% per year.  This strategy does leave the portfolio value more stable, though, so if that has value to you it might be worth pursuing it a bit further.

Round Two

So if the 12-Month SMA challenger is so close, maybe a little fine tuning will unseat the champ.  Going into this analysis, I thought the main drawback to a momentum strategy would be that you spend time (which I previously showed was valuable) out of the market waiting to get back in.  Because the information always lags, and the market on average is growing, you miss some of the "average" returns when the market is turning around.  Let's modify the length of time over which we are averaging to see if we can improve.  One way to do this is go with a longer average, such as a 24-Month SMA.  Another way is to go shorter, such as with a 3-Month SMA.


Ouch!  These two never really get off the ground.  The 24-month SMA portfolio experiences about the same number of transitions to-and-from cash (a little over 1 per year) as the 12-month SMA portfolio, but they are delayed by 1-5 months, so that the portfolio misses just a bit more of the good times and (probably more importantly) feels the downturns for just slightly longer.  This is enough to lose a little over 1% per year compared to the Buy and Hold or 12-month SMA strategies.  The 3-month SMA strategy simply gets too many signals to transition (over 4 per year), which keeps it in cash for an astounding 70 months longer than the 12-month SMA.  That is nearly 6 years of sitting around waiting!  Lest you think that we just haven't refined enough, I went ahead and ran a bunch more timespans:


It looks like 12-months is the optimal timeframe to average over in creating your indicator, and it just barely yields more than the market average.  Actually, if you add up all the months the 12-month SMA tells you to sit out the market, you end up on the sidelines for over 18 years (of the 63 years I'm looking at).  It's remarkable to me that this strategy compounds at all, and it is a testament to the yield you get during the time you are invested... a whopping 10.7%!

Technical Knockout

One final thought for today's post.  It isn't really fair to compare the SMA Technical Indicator Portfolio to the Buy and Hold Portfolio.  We've played around with the length of averaging, but imagine an extremely long SMA (simple moving average).  As the length of averaging time gets longer, we'd get fewer and fewer signals to switch between stocks and cash.  Because that would expose our portfolio to more growth in the market, the dip in yield we saw reverses itself and eventually you get no signals to move to cash.



So, actually, an alternative way to think of Buy and Hold Portfolio is that it is a Technical Indicator Portfolio whose indicator has never triggered a sell signal.

-----

Update!!!

It's Log, It's Log, It's Better Than Bad, It's Good!

As noted by Jared in the comments, showing exponential growth on a linear axis makes for tough comparisons between lines.  I promise I wasn't trying to deceive (well, maybe just a little).  Here is what the Buy & Hold, 12-Month SMA, 24-Month SMA and 3-Month SMA look like in log.



-----

Didn't answer your question?  Feel free to let me know in the comments and I'll include your ideas when I post more on this in the coming weeks.   If you want to get email notifications when new posts go up, send an email to subscribe+overly-complicated-excel@googlegroups.com to subscribe to the mailing list. 

Friday, June 14, 2013

They See Me Rollin'

In which they do a cost-benefit analysis of hatin'.


The Wheels on the Bus

Temperament matters a lot.  Case in point: a few weeks ago, while riding home from work, the traffic suddenly slowed down and from behind us I hear a loud "thwunk."  My bus had just been rear-ended.  Everyone was fine, except for the tiny car whose front just crumpled.  Even its driver seemed ok (or at least aware enough to discreetly slide her phone away).  It was a funny story (made much better in person because I could properly pronounce "thwunk" for you), but as I was sitting there on the bus for an hour I heard two distinct conversations.  One type started, "Dude, this is crazy.  Has this ever happened to you?", while the other type started, "Dude, I'm never riding the bus again.  It's not worth it."  I know, only in Portland would everybody on the bus start their conversation with "Dude", but what it got me thinking about was: how do we know if the bus is "worth it"?

When my wife and I moved to our current apartment, we didn't think about the fact that she would be driving to work and I would be taking the bus.  It was just intuitive.  That said, her work is closer.  She's also more environmentally conscious than I am and the ticket price for her bus route is less than for my route.  All of that seems like it would switch our roles.  So that is today's goal.  Let's add up real costs to see why it makes sense for me to ride the bus and not her.

A Non-Ideal Gas Law

On the surface, it seems like this should be simple.  Does the gas to get there cost more than the cost of the ticket?  For her a round trip looks like this:


And for me:

So as a first pass it looks like it wouldn't make sense for either of us.  Of course, this is the simple case.  Since we know gas isn't the only expense that goes into the car, let's overly complicate it.

Four Car-dinal Virtues

The way I see it there are fixed costs of owning a car, and there are per-mile costs.  Fixed costs include the actual purchase, the license and registration, as well as the insurance.  These don't scale with how much you drive.  (Within reason, of course.  The difference between 10k miles and 12k miles per year is zero.  There is still a difference between 10k miles and 0 miles or 30k miles.)  Instead, these costs scale with how pricey a car you have and how good of a driver you are.  On average these costs end up being ~$16 a day (almost $6000 a year!).  Maybe I'll tackle this another day, but for the current post let's say that we are keeping the car, just deciding if it makes sense to use it for a certain trip.

The flip side of this is the per-mile costs.  By my count you could group these into four categories.  Let's find the {inputs} we will need for each cost.  The first one, gas, we already mentioned.  It will only depend on {car's MPG}, {cost of gas}, and {miles traveled}.  The second cost is what to do with your car once you reach your destination, as there is often an associated parking fee.  This is pretty straightforward, so we'll just spread the {parking fee} over the whole round trip.  The third cost is wear and tear on the car.  As a back-of-the-envelope calculation I'm guessing for every 30k miles you need about $500 of tune-ups, $500 of brake pads and tires, and ten $50 oil changes.  This works out to $0.05 a mile, which is a figure I've seen repeated on several auto maintenance sites (backing out gas prices).  So again we just need {miles traveled}.  Finally, the fourth cost I'll throw in is carbon offset credits.  The market has determined a price of $0.011 per mile for offsetting the carbon emissions from your car.  This isn't quite fair, as they must be assuming you have an average car (23 mpg).  Instead, you can convert that price (just multiplying it by 23 mpg) to get $0.25 per gallon of gas you consume.  Again we need {miles} and {mpg}.  All of these things could be split between multiple {passengers} if you carpool.

We can do the same cost analysis for the bus.  It only has two costs, and the first one is easy: the {ticket cost}.  Once again, we'll go for round trip.  Buses also have a carbon offset, and this one is a bit more tricky.  Buses get lousy gas mileage (about 5.5 mpg once you convert from diesel to regular), but spread it out over many {passengers}.  My commuter bus routinely fills up its 45 seats, but some routes are nearly empty.  By using the {bus mpg} and {miles traveled} then dividing by {people riding} we get gallons of gas used per passenger, which can get us to our carbon offset credit price.  What happens when we add these things up?
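Those {inputs} drop straight into a couple of small functions; this is a sketch of my reading of the model so far (no time value yet), using the $0.05/mile wear and $0.25/gallon carbon figures from above:

```python
def car_trip_cost(miles, mpg=23.0, gas_price=3.77, parking=0.0, people=1):
    """Per-person round-trip cost of driving: gas + parking + wear + carbon."""
    gallons = miles / mpg
    gas = gallons * gas_price
    wear = 0.05 * miles            # tune-ups, brakes, tires, oil changes
    carbon = 0.25 * gallons        # carbon offset, priced per gallon
    return (gas + parking + wear + carbon) / people

def bus_trip_cost(miles, ticket, riders=20, bus_mpg=5.5):
    """Round-trip bus cost: the ticket plus your share of the carbon offset."""
    gallons_per_rider = miles / bus_mpg / riders
    return ticket + 0.25 * gallons_per_rider
```

For a 23-mile round trip at the defaults, the car works out to about $5.17 and a $5 ticket bus to just over $5, so the two really are neck and neck before time enters the picture.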

For My Wife's Commute:
And Mine:

 So now the bus seems to make sense for both of us.  Still not the intuitive result, though.

As an Aside: Taking a Test-Drive Around Portlandia

Before I go any further, one cool thing you can do with this model is look at when it makes sense to take the bus on smaller trips.  Other than parking, the price to make a trip by car scales linearly with the number of miles you go.  The Portland bus system is a fixed price of $2.50 for 2 hours or $5 all day.  So if you know your cost of parking, you could choose your best method of transport based on the following cost curves (assuming one person in the car and average 23 mpg).


Is your trip less than 10 miles with free parking?  Take your car.  Are you going downtown where the parking is a $5 minimum?  Take the bus.  Are you visiting someone 5 (round trip) miles away where the parking is $2?  Take a bus if you plan to stay a short time, but a car if you would need the all-day pass.  Based on my own Portland bus experience, I'm guessing the buses travel at a rate of 15 mph in the city, which means you could just as easily scale the axis by a factor of 2 and call it the "number of one-way minutes traveled," which may be easier to visualize.
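Those cost curves boil down to one break-even line; here's a sketch of that arithmetic under the same assumptions (one rider, 23 mpg, $3.77 gas, and the wear and carbon figures from above):

```python
# per-mile cost of driving: gas + wear + carbon offset, one rider, 23 mpg
PER_MILE = 3.77 / 23 + 0.05 + 0.25 / 23   # roughly $0.22 a mile

def break_even_miles(fare, parking=0.0):
    """Round-trip miles at which driving costs as much as the bus fare."""
    return (fare - parking) / PER_MILE
```

With the $2.50 two-hour fare and free parking the break-even comes out around 11 round-trip miles, which is where the "less than 10 miles, take your car" rule of thumb comes from; add a $2 parking fee and the bus wins after only a couple of miles.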

This chart also explores the idea of a free-ride zone and tiered pricing, in that it doesn't really make sense to pay for a bus ride less than 20 minutes unless there is a parking fee, while an hour and a half ride might be worth $10 to someone.  Downtown parking is in the $5 range, so Portland's recent move to a single price structure has little impact on their overall downtown bus usage.  Oh, and as an aside to this aside, look at how awesome the mpg can be when you get a full bus... it's in the 300 mpg range!

Time is Money

Getting back to my earlier question, why is it that my wife's commute makes no sense by bus?  The answer really comes down to the fact that there is no direct route.  While it certainly is annoying to spend 20 minutes and $6.30 in driving expenses to get to work, this is dwarfed by the 90 minutes it would take by bus.  We need a way to put a value on this time.

One way we can do this is to think about our "real" wage.  This is the value that you put on your time.  By implicit agreement, if you work for money you have agreed that a certain amount of money is worth at least a specific amount of your time (if not more).   If you make $40k a year, your per hour wage is just under $20 an hour for a 40 h workweek.  But if you commute to work, think about work off the clock, or spend any money on clothes/computers/vuvuzelas for work, you don't really make $20/h.  Maybe it is more like $15/h. This is your value of time.  Do you really love your job?  Would you do it for less money?  Maybe your time value is more like $5/h.
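The "real wage" idea is just your salary spread over all the hours work actually consumes.  A quick sketch, using the post's $40k example plus an assumed 10 extra hours a week of commuting/off-the-clock time and $1,000 a year of work expenses (those last two numbers are mine, purely for illustration):

```python
def real_wage(salary, hours_per_week=40.0, extra_hours_per_week=0.0,
              work_expenses_per_year=0.0):
    """Effective $/hour once commute/off-the-clock hours and expenses count."""
    total_hours = 52 * (hours_per_week + extra_hours_per_week)
    return (salary - work_expenses_per_year) / total_hours

nominal = real_wage(40000)   # just under $20/h for a plain 40 h week
real = real_wage(40000, extra_hours_per_week=10, work_expenses_per_year=1000)
```

With those assumptions the nominal $19.23/h shrinks to exactly $15/h, which is the "maybe it is more like $15/h" figure in the text.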

If your time value is $10/h and you spend time not being productive, it costs you $10/h.  This may be fine if you are spending time playing with your kids, reading a book, or watching an episode of dancing with the stars.  You agree that the lost time is worth $10, otherwise you would work and earn yourself $10.  This is obviously simplified, and because your work/life balance is not a free market you could end up mowing the lawn for an hour (for no pay) when you would rather read a book.  Ideally you'd pay someone less than $10 to mow the lawn for you.

So, why is this important again?  Well, imagine your time value is still $10/h.  If your choices are to drive 30 min (and lose $5 of productive time) or ride the bus for 90 min (and lose $15 of productive time), that money needs to go into the car/bus equation.  This can be mediated somewhat by being productive on the bus (or in the car).   If you do some work on the bus, you might be able to be at work for less time.  If you read a book on the bus, that is "productive" time that you don't do at home and still doesn't count against you.  The way you "lose" the money is to sit around doing nothing or doing something you don't want to do.  You wouldn't pay $10 to do that.  Let's apply this to our model.  I haven't shown you how I defined everything, so let's do that now (typing the items [in brackets]).

Oh, and you can follow my work here.

You Details
Time value (per hour) = C5

Car Details
Car route (miles) = C8
Car trip time (min) = C9
Productivity in car (min) = C10
Price of gas ($) = C11  (Currently $3.77 near me)
Mpg = C12     (Average for cars is 23, but we'll allow any value)
Gas used = C13  [=C8/C12]
Parking = C14
Total people = C15   (1, unless you are carpooling)
Cost of gas per person = C16   [=C13*C11/C15]
Cost of parking per person = C17   [=C14/C15]
Cost of wear & tear = C18   [=0.05*C8/C15]
Cost of lost productivity = C19   [=(C9-C10)*(C5/60)]
Cost of carbon offset = C20   [=0.25*C13/C15]
Cost of driving = C21   [=SUM(C16:C20)]


Bus Details
Bus route (miles) = F11
Bus trip time (min) = F12
Productivity on bus (min) = F13
Mpg (equivalence of diesel) = F14 = 5.5
Gas used = F15  [=F11/F14]
Total people = F16   (10 = low use, 20 = moderate use, 45 = seat capacity, 60 can fit with standing)
Effective mpg = F17   [=F11/(F15/F16)]
Cost of (round-trip) ticket = F18
Cost of lost productivity = F19   [=(F12-F13)*(C5/60)]
Cost of carbon offset = F20   [=0.25*F15/F16]
Cost of bus = F21   [=SUM(F18:F20)]

And lets throw one more in:
Walking details
Time = F5
Productivity while walking = F6
Cost of walking = Cost of lost productivity = F8   [=(F5-F6)*(C5/60)]
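Translated out of the spreadsheet, the full comparison looks like this.  It's a sketch of the model as I read it (gas, wear, carbon, parking, and lost productivity at your time value), with every number below purely illustrative:

```python
def car_total_cost(miles, trip_min, productive_min, gas_price, mpg,
                   parking, people, time_value):
    """Round-trip driving cost including lost productive time."""
    gallons = miles / mpg
    money = (gallons * gas_price + parking + 0.05 * miles
             + 0.25 * gallons) / people          # wear $0.05/mi, carbon $0.25/gal
    lost = (trip_min - productive_min) * time_value / 60
    return money + lost

def bus_total_cost(miles, trip_min, productive_min, ticket, riders,
                   time_value, bus_mpg=5.5):
    """Round-trip bus cost: ticket + carbon share + lost productive time."""
    carbon = 0.25 * (miles / bus_mpg) / riders
    lost = (trip_min - productive_min) * time_value / 60
    return ticket + carbon + lost
```

Plug in a 23-mile, 30-minute drive versus a 90-minute bus ride where you can read for an hour, both at a $10/h time value, and the two options land within pennies of each other, which is exactly why the productivity inputs end up deciding the question.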


Running the Model... Or Driving it... Or Taking It On The Bus, I Guess

Now, when you plug in my wife's commute, it seems much more obvious that driving is the economically favored option.


Which is different than for my commute:


Her conditions could equilibrate if she valued her time less (somewhere around $5 would work).  Alternatively, she could make use of more of the time on the bus.  But that is tough!  I already make the assumption that she could work two hours, and with two bus changes each way sleep isn't an option either.  My route is simpler, with one bus change, meaning I can work or read for 90 of my 130 minutes.

Obviously productivity can change a lot with minor input changes.  Does the bus evade traffic?  Do I enjoy driving enough to count it as productive time worth $15/h?  (In short, no.)  Some of this productivity can be conceptualized if you look at walking to work (noted top right but not added to the graph).  It would take me about 7 hours to walk the round-trip.  With a bike I could do it in 3.  I like walking and biking, but only for perhaps the first hour, so after that it is non-productive time.  Am I willing to pay the extra money to get some of that time back?  If you are using this model as a tool for your own circumstances, (and I recommend you try it), keep in mind that some things are flexible (like your love of walking... in the Portland rain) and some things are fixed (like a monthly or yearly bus pass that saves you money).

Interestingly, you can also go backwards in the analysis.  If you know that you like your bus commute at least as much as the drive, you can back-calculate from your lost productivity calculation how much you value your time.  Maybe you realize that you value time at less than $5/h.  It might make you think twice about paying someone $10 to mow your lawn for an hour.

Taking The Long View (past Longview)

One final note.  My wife and I love going to Seattle for the weekend to visit friends and family.  In the past I often thought of it as a low cost outing.  Yeah, the price of gas is a pain, but the total cost isn't much if you have low cost fun like board-games and "hanging out."  When I first started building the model, though, I started having second thoughts.  ("It really costs that much just for my commute?" etc.)  Here is what the trip looks like when compared with a greyhound bus.


It's actually not that bad.  Split between the two of us, the cost of the car is less than $33 round trip (compared to ~$41 for the bus) before lost productivity.  And with two of us in the car, I'd say I get at least two hours of conversation, reading, and other fun activities that I'd count as productive time.  That doesn't hold if it were just one of us going up though.


What can I say... at least I didn't have to walk!