Tyler Caraza-Harter
Previously, we learned how to create matplotlib pie charts and scatter plots by calling Pandas plotting methods for Series and DataFrames.
In this document, we'll also learn how to create line plots and bar plots.
Let's start by doing our matplotlib setup and usual imports:
%matplotlib inline
import pandas as pd
from pandas import Series, DataFrame
For readability, you may also want to increase the default font size at the start of your notebooks. You can do so by copy/pasting the following:
import matplotlib
matplotlib.rcParams.update({'font.size': 16})
We can create a line plot from either a Series (with s.plot.line()) or a DataFrame (with df.plot.line()).
s = Series([0,100,300,200,400])
s
s.plot.line()
The y-values are clearly the values in the Series, but where are the x-values coming from? You guessed it: the Series' index. Let's try the same values with a different index.
s = Series([0,100,300,200,400], index=[1,2,30,31,32])
s
s.plot.line()
Now we see that the plot starts from 1 (instead of 0) and a bigger gap in the index (between 2 and 30) corresponds to a longer line segment along the x-axis.
What happens if our index is not in order?
s = Series([0,100,300,200,400], index=[1,11,2,22,3])
s
s.plot.line()
Oops! That's probably not what we want. 99% of the time, people making a line plot want readers to be able to look up a single y-value (per line) given a point along the x-axis. So even though this line passes through all of our data points, the segments between the points are very misleading.
If your data isn't already sorted, you'll probably want to sort it by the index first:
s.sort_index()
Don't get confused about this function! If we have a Python list L and we call L.sort(), the items in L are rearranged in place and the sort function doesn't return anything. In contrast, if we have a Pandas Series s and we call s.sort_index(), the items in s are not moved; instead, the sort_index function returns a new Series that is sorted. So if we print s again, we see the original (unsorted) data:
s
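If the difference between sorting in place and returning a sorted copy is still fuzzy, here is a tiny side-by-side sketch (throwaway cells that aren't part of the main example; the values are made up for illustration):
# Python lists sort in place: sort() rearranges the list itself and returns None
nums = [3, 1, 2]
print(nums.sort())   # None
print(nums)          # [1, 2, 3]
# a Pandas Series is different: sort_index() returns a new, sorted Series
s2 = Series([300, 100, 200], index=[3, 1, 2])
print(s2.sort_index())   # the sorted copy
print(s2)                # s2 itself is unchanged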
Because sort_index() returns a new Series and we can call .plot.line() on a Series, we can do the following on an unsorted Series s in one step:
s.sort_index().plot.line()
In addition to the Series.plot.line() method, there is also a DataFrame.plot.line() method. Whereas the line function for a Series creates a plot with a single line, the line plot for a DataFrame draws a line for each column in the DataFrame (remember that each column in a DataFrame is essentially just a Series).
Let's try with a DataFrame containing temperature patterns for Madison, WI. The data was copied from https://www.usclimatedata.com/climate/madison/wisconsin/united-states/uswi0411, and contains the typical daily highs and lows for each month of the year.
df = DataFrame({
    "high": [26, 31, 43, 57, 68, 78, 82, 79, 72, 59, 44, 30],
    "low": [11, 15, 25, 36, 46, 56, 61, 59, 50, 39, 28, 16]
})
df
df.plot.line()
Not bad! We can see the temperatures vary throughout the year, with highs correlated with lows. But what is the x-axis? What is the y-axis?
Remember that .plot.line returns an AxesSubplot object. There are AxesSubplot.set_xlabel and AxesSubplot.set_ylabel functions that will help us out here. Just make sure to call them in the same cell where .plot.line is called, or the plot will be displayed before they can have an effect.
ax = df.plot.line()
ax.set_xlabel('Month')
ax.set_ylabel('Temp (Fahrenheit)')
What if we want the plot in Celsius? That's easy enough with some element-wise operations.
c_df = (df - 32) * 5/9
c_df
ax = c_df.plot.line()
ax.set_xlabel('Month')
ax.set_ylabel('Temp (Celsius)')
That's looking good!
One small thing: did you notice the extra output above the plot that says Text(0,0.5,'Temp (Celsius)')? That happened because the call to set_ylabel returned that value, and Jupyter displays the result of the last expression in a cell. We can always put None at the end of our cell to suppress that:
ax = c_df.plot.line()
ax.set_xlabel('Month')
ax.set_ylabel('Temp (Celsius)')
None
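An alternative trick you may see elsewhere (just an IPython/Jupyter convention, equivalent to the None above): ending the last line of a cell with a semicolon also suppresses the printed return value.
ax = c_df.plot.line()
ax.set_xlabel('Month')
ax.set_ylabel('Temp (Celsius)');   # trailing semicolon hides the Text(...) output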
The above plot would be nicer if we saw actual month names along the x-axis. Let's create a DataFrame with the same data, but with month names for the index.
df = DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"],
    "high": [26, 31, 43, 57, 68, 78, 82, 79, 72, 59, 44, 30],
    "low": [11, 15, 25, 36, 46, 56, 61, 59, 50, 39, 28, 16]
})
df = df.set_index("month")
df.head()
Let's try plotting it.
ax = df.plot.line()
ax.set_xlabel('Month')
ax.set_ylabel('Temp (Fahrenheit)')
None
Unfortunately, even though we now have months for the index, matplotlib won't use them for the x-axis unless we specifically tell it to. We can explicitly give matplotlib tick labels with the set_xticklabels method.
# careful, this is an example of a bad plot!
ax = df.plot.line()
ax.set_xticklabels(df.index)
None
Yikes! That's not what we wanted at all. The above plot starts at Feb (instead of Jan), and it only covers half a year. We've set the tick labels, but not the tick positions. Let's take a look at the positions:
ax.get_xticks()
You should read the above as follows: these are the x-positions where matplotlib automatically placed ticks. Our set_xticklabels call simply assigned our twelve month names to those existing ticks, in order, which is why the labels ended up in the wrong places.
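To see the mismatch concretely, here's a throwaway check (not part of the main flow) that pairs each automatic tick position with the month label that got assigned to it; zip stops at whichever sequence is shorter:
# which month label ended up at which tick position?
list(zip(ax.get_xticks(), df.index))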
Fortunately, we can set the tick positions explicitly. The only correct configuration in this case is 0, 1, 2, ..., 11 (one tick position per month). Let's start by setting just the first four positions to see the effect:
ax = df.plot.line()
ax.set_xticks([0, 1, 2, 3])
ax.set_xticklabels(df.index)
None
If we want all twelve positions (counting from 0 to 11), we can use range(len(df.index)).
ax = df.plot.line()
ax.set_xticks(range(len(df.index)))
ax.set_xticklabels(df.index)
None
This plot is correct, but crowded! There are two solutions: (1) make the plot wider or (2) rotate the labels. We'll demo both. We'll also add back the axis labels.
# approach 1: wider plot
ax = df.plot.line(figsize=(8,4)) # this is the (width,height)
ax.set_xticks(range(len(df.index)))
ax.set_xticklabels(df.index)
ax.set_xlabel('Month')
ax.set_ylabel('Temp (Fahrenheit)')
None
# approach 2: rotate ticks
ax = df.plot.line()
ax.set_xticks(range(len(df.index)))
ax.set_xticklabels(df.index, rotation=90) # 90 is in degrees
ax.set_xlabel('Month')
ax.set_ylabel('Temp (Fahrenheit)')
None
In this example, we'll plot the performance of American stocks from 1970 to 2017. Specifically, we'll be looking at S&P 500 index data. The S&P 500 index tracks how much the 500 largest publicly traded American companies are collectively worth (think of it as a weighted average, with more valuable companies weighted more heavily).
We'll get our data from the Wikipedia article on the S&P 500 Index. Take a moment to skim the article.
We're interested in the "Total Annual Return Including Dividends" column of the table in the "Annual returns" section. Investors make money when (1) stock prices rise, or (2) companies pay dividends to shareholders. This column captures the yearly return, considering both these factors.
There are three parts in this example. In part 1, we do some web scraping to collect the data (it's a detailed BeautifulSoup example). In part 2, we'll visualize the data in several ways. In part 3, we'll simulate stock market returns, sampling from the real data in order to explore possible investment outcomes.
As a first step, let's download the wiki page and save it to a file named sp500.html. We check if this file exists before doing the download. If it does, we just use the contents of sp500.html instead of fetching the data again from Wikipedia (it's faster to access data on your computer rather than from a website).
import os, requests
path = "sp500.html"
if not os.path.exists(path):
    r = requests.get('https://en.wikipedia.org/wiki/S%26P_500_Index')
    r.raise_for_status()
    f = open(path, "w")
    f.write(r.text)
    f.close()
f = open(path)
html = f.read()
f.close()
# let's parse the HTML
from bs4 import BeautifulSoup
page = BeautifulSoup(html, 'html.parser')
The page contains six tables. Which one has the data we care about? We can loop over each table, convert the contents to text, and check whether the text contains the term "Total Annual Return" (that's the name of the column with the data we want).
target_column = "Total Annual Return"
tab = None
for curr in page.find_all('table'):
    if curr.get_text().find(target_column) >= 0:
        tab = curr
        break
assert(tab != None)
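As an aside, pandas can often pull HTML tables out directly with pd.read_html, which returns a list of DataFrames and accepts a match= argument to keep only tables containing some text. The following is just a sketch of that alternative; it requires an extra HTML parser such as lxml or html5lib to be installed, and we'll stick with BeautifulSoup below since that's the skill we're practicing:
from io import StringIO
# a possible shortcut (not used in the rest of this example)
tables = pd.read_html(StringIO(html), match=target_column)
sp500_table = tables[0]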
Now we have the table we want. Let's create a list of lists representing the table data. This will be a list of rows, where each row contains td (table data) and th (table header) elements. Both of these elements are used to represent cells in HTML tables.
rows = []
for tr in tab.find_all('tr'):
    rows.append(tr.find_all(['td', 'th']))
# let's print the first three rows to make sure they are what we expect.
rows[:3]
Let's make sure (with asserts) that the 0th and 2nd columns contain year and annual return data. If they do, we'll extract these entries and construct a Series with the year as the index and the annual return as the values.
assert(rows[0][0].get_text().find("Year") >= 0)
assert(rows[0][2].get_text().find("Total Annual Return") >= 0)
index = []
values = []
for row in rows[1:]:
    index.append(row[0].get_text().strip())
    values.append(row[2].get_text().strip())
    if index[-1] == '2018':
        break
returns = Series(values, index=index)
returns.tail()
Let's normalize the data so we can use it to multiply an initial amount of money. For example, we want to convert 4% to 1.04. That way, if we start with $100, we can multiply by 1.04 to compute that we have $104 after a year.
Don't worry about the replace of chr(8722). It's not important to the example.
print("'{}' is a weird dash, not the negative dash '-' that will let us convert to a float.".format(chr(8722)))
mults = returns.str.replace(chr(8722), "-").str.replace("%", "").astype(float) / 100 + 1
mults.head()
We'll save this nicely formatted data to a CSV file. Any analysis of returns can use that directly without needing to repeat this HTML parsing.
df = DataFrame({"year":mults.index, "return":mults.values})
df.to_csv("sp500.csv", index=False)
df.tail()
In the previous step, we generated sp500.csv. Let's read that in and start doing some plotting. There are a few things we want to plot; we'll work through three plots below.
df = pd.read_csv("sp500.csv")
df.tail()
Let's use the year as the index.
df = df.set_index("year")
df.head()
Plot 1: returns each year. We want the year for the x-axis and the return on the y-axis.
df.plot.line()
We see a lot of noise, but the line stays above 1 in most years.
Plot 2: total returns over time. The x-axis will be time, and the y-axis will be total returns. We will assume we started in 1970 with $1000.
In order to get the total money in a given year, we want to multiply the starting money by all the return multiples up through that year (this is called a compounding return). We can use the cumprod method for this.
df['return'].cumprod().head()
For example, the 1973 value of 1.207 came from multiplying 1.0401 * 1.1431 * 1.1898 * 0.8534 (the multiples for 1970 through 1973). Let's plot how much money we have over time, if we start with $1000.
total = 1000 * df['return'].cumprod()
total.head()
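As a quick sanity check (just a throwaway cell using the multiples quoted above), multiplying the 1970-1973 multiples by hand and scaling by the $1000 starting amount should land near the 1973 entry in total.head():
1000 * 1.0401 * 1.1431 * 1.1898 * 0.8534   # roughly 1207, matching the 1973 row above
Now, back to plotting the running total.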
ax = total.plot.line()
ax.set_ylabel('Net Worth')
None
Plot 3: a scatter plot showing the correlation between one year's return and the next year's.
To do this, we'll create two Series, both indexed by year. The first Series we'll pull directly from sp500.csv: the index will be a year, and the corresponding value will be the returns for that year. In the second Series, the index will be a year, and the value will be the returns in the year FOLLOWING the year in the index.
df = pd.read_csv("sp500.csv")
df.head()
df = df.set_index("year")
df.head()
series1 = df['return']
series2 = Series(df['return'].values[1:], index=df['return'].index[:-1])
pairs = DataFrame({"curr":series1, "next":series2})
pairs.head()
As you can see, the next column of the 1970 row contains the curr value of the 1971 row. Let's do a scatter plot to look at the correlation. As a pre-step, we'll subtract 1 from every cell so a 10% loss will be represented as -0.1 (instead of 0.9).
(pairs - 1).head()
(pairs - 1).plot.scatter(x='curr', y='next')
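By the way, Series.shift offers an equivalent way to build the next column without slicing values and indexes by hand: shift(-1) moves every value up one row, so each year gets paired with the following year's return (the last year gets NaN, just as in the DataFrame above). A sketch of that alternative:
pairs2 = DataFrame({"curr": df['return'], "next": df['return'].shift(-1)})
pairs2.head()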
In this section, we're going to explore likely outcomes if one were to invest $1000 in an S&P 500 index fund for 10 years.
df = pd.read_csv("sp500.csv")
df.head()
returns = df['return']
returns.head()
import random
sim = DataFrame()
# do 25 simulations
for i in range(25):
    # sample returns for 10 years (random.choices samples with replacement)
    decade = random.choices(returns, k=10)
    # start with $1000, compute compounded wealth over
    # the course of the decade
    net_worth = 1000 * Series(decade).cumprod()
    # add this simulation as a column in the DataFrame
    sim['sim'+str(i)] = net_worth
sim
Each column in the above DataFrame represents one simulation. The bottom row represents the total wealth after 10 years.
Let's plot each simulation. We'll disable the legend because 25 legend entries is too many.
sim.set_index(sim.index + 1, inplace=True)
ax = sim.plot.line(legend=False, figsize=(8,8))
ax.set_xlabel("Years Elapsed")
ax.set_ylabel("Net Worth ($)")
It appears that doubling one's money (or better) over 10 years is fairly likely. Of course, in some cases wealth increases very little (or worse, decreases). We also observe that the road to wealth is usually bumpy.
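If you want to put a rough number on that claim for your own run (the exact fraction will change every time the notebook is re-run, because random.choices draws a fresh sample), you can ask what fraction of the 25 simulations at least doubled the initial $1000:
# sim.iloc[-1] is the bottom row: net worth after 10 years in each simulation
(sim.iloc[-1] >= 2000).mean()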
Just like line plots, bar plots can be created from either a Pandas Series or a DataFrame. For our example data, let's learn a bit about the fire hydrants around the city of Madison. Data describing each fire hydrant can be found at http://data-cityofmadison.opendata.arcgis.com/datasets/54c4877f16084409849ebd5385e2ee27_6. We have already downloaded the data to a file named "Fire_Hydrants.csv". Let's read it in and preview a few rows.
df = pd.read_csv('Fire_Hydrants.csv')
df.head()
For our first example, let's see which nozzle colors are most common. We can summarize the data by first extracting the nozzle_color column, then using the Series.value_counts() function to produce a summary Series.
df['nozzle_color'].head()
df['nozzle_color'].value_counts()
The above data means, for example, that there are 5810 "blue" nozzles and 1148 "Blue" nozzles. We can already see there is a lot of blue, but we would really like a total count that isn't split up by whether the letters are upper or lower case.
df['nozzle_color'].str.upper().value_counts()
Great! It's not clear what "C" means, but the data is clean enough. Let's plot it with Series.plot.bar.
counts = df['nozzle_color'].str.upper().value_counts()
ax = counts.plot.bar()
ax.set_ylabel("Hydrant Counts")
Is the data reasonable? Try to notice next time you're walking by a hydrant. Consider it a challenge to spot a green nozzle (bonus points for orange!).
One thing we should always do is make all the bars the same color, unless the different colors represent some aspect of the data. Let's do that now:
ax = counts.plot.bar(color="gray")
ax.set_ylabel("Hydrant Counts")
For our second question, let's create a similar plot that tells us which hydrant models are most common. The model is represented by the Style column in the table. The following code is a copy/paste from above, just replacing "nozzle_color" with "Style":
counts = df['Style'].str.upper().value_counts()
counts.plot.bar(color="gray")
Woah! That's way too much data. Let's just consider the top 10 models.
top10 = counts[:10]
top10
How many others are not in the top 10? We should show that in our results too.
others = counts[10:].sum()
top10["others"] = others
top10
Now that looks like what we want to plot.
ax = top10.plot.bar(color="gray")
ax.set_ylabel("Hydrant Counts")
Nice! This shows us what we want. We see Pacer is easily the most common. Some of the longer labels are hard to read vertically, so we also have the option to use .barh instead of .bar to draw the bars horizontally.
top10.plot.barh(color="gray")
I wonder what is up with all those Pacer hydrants? Have they always been so popular with the city? It turns out we can find out, because we also have a column called year_manufactured.
Let's find all the rows for Pacer hydrants and extract the year.
pacer_years = df[df['Style'] == 'Pacer']['year_manufactured']
pacer_years.head()
Let's round to the decade. We can do that by dividing by 10 (integer division), then multiplying by 10 again.
pacer_decades = pacer_years // 10 * 10
pacer_decades.head()
How many Pacers were there each decade?
pacer_decades.value_counts().astype(int)
Let's do the same thing in one step for non-pacers. That is, we'll identify non-pacers, extract the year, round to the decade, and then count how many entries there are per decade.
other_decades = (df[df['Style'] != 'Pacer']['year_manufactured'] // 10 * 10)
other_decades.value_counts()
Let's line up these two Series side-by-side in a DataFrame.
pacer_df = DataFrame({
    "pacer": pacer_decades.dropna().astype(int).value_counts(),
    "other": other_decades.dropna().astype(int).value_counts()
})
pacer_df
That looks plottable!
pacer_df.plot.bar()
That plot shows that the city started getting Pacers in the 90's. Most were from the 2000 decade, and it seems there is finally a shift to other styles.
While this plot is fine, when multiple bars represent a breakdown of a total amount, it's more intuitive to stack the bars on top of each other. This is easy with the stacked= argument.
ax = pacer_df.plot.bar(stacked=True)
ax.set_ylabel("Hydrant Counts")
This data supports all the same conclusions as before, and now one more thing is obvious: although there was steady growth in the number of hydrants over several decades, things seem to have leveled off more recently. Why? Further probing of the data might provide an answer. One explanation is that the 2000 decade contains 10 full years, but we still have a couple of years left in the '10s. Perhaps this decade will still catch up.
After this reading, you should now be ready to create four types of plots: pie charts, scatter plots, line plots, and bar plots.
We saw that both line and bar plots can be created from either a single Series or a DataFrame. When created from a single Series, we end up with either a single line (for a line plot) or one set of bars (for a bar plot).
When we create a plot from a DataFrame, we get multiple lines (one per column) for a line plot, and for a bar plot we get multiple sets of bars. We can control whether those bars are vertical (with .bar) or horizontal (with .barh), as well as whether the bars are stacked or side-by-side.