Using Rpart to figure out who voted for Trump

[This article was first published on d4tagirl, and kindly contributed to R-bloggers.]

It’s been a few days since we witnessed the inauguration of Donald Trump as the 45th President of the United States, whose victory over Hillary Clinton came as a shock to most people. I’m not much into politics (and it is not even my country!), but this result really caught my attention, so I wanted to dig a little into the population characteristics that made him the winner of the election.

I’ve been studying Machine Learning for a while now, and a couple of months ago I discovered the awesome tidyverse world (I can’t believe the way I used to do things :$), so I thought this was a great opportunity to test my skills. On top of that, I’m not a native English speaker, so this first post presents some extra challenges! If you see anything that could be improved, is unclear, or is plainly wrong, please leave a comment or mention me on Twitter.

What I do here is fit a classification tree (CART) to find associations between the winning candidate in each county and the county’s socio-demographic characteristics.
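If you haven’t used rpart before, here is a minimal sketch of its interface on synthetic toy data (made-up variables of my own, not the election model):

```r
library(rpart)  # CART implementation, ships with base R

# Toy sketch: a binary outcome driven by a single predictor.
set.seed(1)
toy <- data.frame(x1 = runif(200), x2 = runif(200))
toy$winner <- factor(ifelse(toy$x1 > 0.5, "Trump", "Clinton"))

fit  <- rpart(winner ~ x1 + x2, data = toy, method = "class")
pred <- predict(fit, toy, type = "class")
mean(pred == toy$winner)  # in-sample accuracy; the tree splits on x1 near 0.5
```

`method = "class"` asks for a classification tree; the same formula interface works with a real data frame like the county data prepared below.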

The data

I recently joined Kaggle, where Joel Wilson gathered the 2016 US Election Results by county along with County QuickFacts from the US Census Bureau. This is the data I use for this analysis.

I start by loading the data and merging it. No mystery here, except that I load it using readr and merge it using dplyr (yay!).

```r
library(readr)
library(dplyr)

url_pop     <- 'https://github.com/d4tagirl/TrumpVsClintonCountiesRpart/raw/master/data/county_facts.csv'
url_results <- 'https://github.com/d4tagirl/TrumpVsClintonCountiesRpart/raw/master/data/US_County_Level_Presidential_Results_12-16.csv'

pop     <- read_csv(url(url_pop))
results <- read_csv(url(url_results))

votes <- results %>%
  inner_join(pop, by = c("combined_fips" = "fips"))
```

It is a clean dataset, but I need to do some modifications for the analysis:

  • There is no election data for the state of Alaska, so I remove those counties.
  • Replace the old ID (X1) with a new one (ID).
  • Delete irrelevant variables.
  • Rename variables to make them interpretable.

All of this taking advantage of dplyr, of course.

```r
votes <- votes %>%
  filter(state_abbr != "AK") %>%
  mutate(ID = rank(X1)) %>%
  select(-X1, -POP010210, -PST040210, -NES010213, -WTN220207) %>%
  rename(age18minus = AGE295214, age5minus = AGE135214, age65plus = AGE775214,
    american_indian = RHI325214, asian = RHI425214, asian_Firms = SBO215207,
    black = RHI225214, black_firms = SBO315207, building_permits = BPS030214,
    density = POP060210, edu_batchelors = EDU685213, edu_highschool = EDU635213,
    firms_num = SBO001207, foreign = POP645213, hisp_latin = RHI725214,
    hispanic_firms = SBO415207, home_owners_rate = HSG445213, households = HSD410213,
    housing_Units = HSG010214, housing_units_multistruct = HSG096213, income = INC910213,
    land_area = LND110210, living_same_house_12m = POP715213, manuf_ship = MAN450207,
    Med_house_income = INC110213, Med_val_own_occup = HSG495213, native_haw = RHI525214,
    native_firms = SBO115207, nonenglish = POP815213, pacific_isl_firms = SBO515207,
    pers_per_household = HSD310213, pop_change = PST120214, pop2014 = PST045214,
    poverty = PVY020213, priv_nofarm_employ = BZA110213, priv_nonfarm_employ_change = BZA115213,
    priv_nonfarm_estab = BZA010213, retail_sales = RTN130207, retail_sales_percap = RTN131207,
    sales_accomod_food = AFN120207, sex_f = SEX255214, travel_time_commute = LFE305213,
    two_races_plus = RHI625214, veterans = VET605213, white = RHI125214, white_alone = RHI825214,
    women_firms = SBO015207,
    Trump = per_gop_2016, Clinton = per_dem_2016, Romney = per_gop_2012, Obama = per_dem_2012)
```

Most of these variables are measured either as the percentage of people in the county with that characteristic (e.g. edu_batchelors and black) or as a county total (e.g. land_area and firms_num).

Note that there is both a white variable and a white_alone variable, because the Census gathers two separate facts here.

  • white refers to race: people having origins in any of the original peoples of Europe, the Middle East, or North Africa. If a person declares white among other races, they are classified as two_races_plus and not as white.

  • white_alone refers to people of white race who also reported no Hispanic or Latino origin.

For further reference on any variable you can go to the Census Bureau’s site.

Building the Response Variable

I create the variable I want to explain: pref_cand_T. It takes the value 1 if Trump got a greater percentage of votes than Clinton in the county, and 0 otherwise. Note that neither candidate needs more than 50% of the votes; it is only required that one has more votes than the other.

```r
votes <- votes %>% mutate(pref_cand_T = factor(ifelse(Trump > Clinton, 1, 0)))

summary <- votes %>% summarize(Trump       = sum(pref_cand_T == 1),
                               Clinton     = n() - Trump,
                               Trump_per   = mean(pref_cand_T == 1),
                               Clinton_per = 1 - Trump_per)

library(knitr)
knitr::kable(summary, align = 'l')
```
Trump   Clinton   Trump_per   Clinton_per
2624    488       0.8431877   0.1568123

Trump got more votes than Clinton in 2624 counties, versus the 488 counties where Clinton got more votes (remember we are talking about counties, not Electoral College votes). The proportion is 84% for Trump and 16% for Clinton.
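Counties are far from equally populated, so the 84/16 split says little about people. A toy sketch (made-up numbers, purely illustrative) of how share of counties and share of population can diverge:

```r
library(dplyr)

# Two small "Trump counties" and one big "Clinton county" (made-up numbers):
toy <- tibble(winner = c("Trump", "Trump", "Clinton"),
              pop    = c(10000, 20000, 500000))

toy %>%
  summarize(county_share_T = mean(winner == "Trump"),                 # 2/3 of counties
            pop_share_T    = sum(pop[winner == "Trump"]) / sum(pop))  # ~6% of people
```

Winning two of three counties here means winning less than 6% of the people, which is why county counts and popular-vote totals tell such different stories.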

Some visualizations of race and origin

I briefly explore some county characteristics related to race and origin by visualizing them. It is time to test my brand new ggplot2 skills!

I generate a plot of the mean of each characteristic across all counties, that is, the simple mean of each county-level proportion, without weighting by county population.

```r
library(tidyr)
library(ggplot2)

# Order in the x-axis
limits <- c("white", "white_alone", "black", "asian", "hisp_latin", "foreign")

total <- votes %>%
  summarize(
    white       = mean(white),
    white_alone = mean(white_alone),
    black       = mean(black),
    asian       = mean(asian),
    hisp_latin  = mean(hisp_latin),
    foreign     = mean(foreign)) %>%
  gather(variable, value) %>%
  ggplot() +
  geom_bar(aes(x = variable, y = value),
           stat = 'identity', width = .7, fill = "#C9C9C9") +
  geom_vline(xintercept = c(4.5, 5.5), alpha = 0.2) +
  scale_x_discrete(limits = limits) +
  labs(title = "Mean of % in counties",
       subtitle = "(Simple mean of % in counties without considering counties' population)") +
  theme_bw() +
  theme(axis.title.x = element_blank(), axis.title.y = element_blank(),
        axis.text.x = element_blank(), axis.line = element_line(colour = "grey"),
        panel.grid.major = element_blank(), panel.border = element_blank())
```

Something beautiful about the tidyverse is the way you can build up a solution. The first part is simply dplyr plus tidyr::gather, which shapes the input for ggplot2. The rest is the ggplot, layer by layer.

It seems like a lot of code for a simple plot, but trust me: once you get the hang of ggplot2, it is magic! The only risk is that you’ll want to customize everything! (And that’s why I have so much code…)

Then I generate the same plot by candidate.

```r
by_candidate <- votes %>%
  group_by(pref_cand_T) %>%
  summarize(
    white       = mean(white),
    white_alone = mean(white_alone),
    black       = mean(black),
    asian       = mean(asian),
    hisp_latin  = mean(hisp_latin),
    foreign     = mean(foreign)) %>%
  gather(variable, value, -pref_cand_T) %>%
  ggplot() +
  geom_bar(aes(x = variable, y = value, fill = pref_cand_T),
           stat = 'identity', position = 'dodge') +
  geom_vline(xintercept = c(4.5, 5.5), alpha = 0.2) +
  scale_fill_manual(values = alpha(c("blue", "red")),
                    breaks = c("0", "1"), labels = c("Clinton", "Trump")) +
  scale_x_discrete(limits = limits) +
  labs(fill = "winner") +
  theme_bw() +
  theme(axis.title.x = element_blank(), axis.title.y = element_blank(),
        axis.line = element_line(colour = "grey"), legend.position = "bottom",
        panel.grid.major = element_blank(), panel.border = element_blank())
```
</span>

Same as before, but excluding the grouping variable pref_cand_T from the gathering to generate the ggplot input.

Next I plot both, using the gridExtra package to display them together. (And yes, that has been a lot of learning these past few months!)

<span class="n">library</span><span class="p">(</span><span class="n">gridExtra</span><span class="p">)</span><span class="w">

</span><span class="n">grid.arrange</span><span class="p">(</span><span class="n">total</span><span class="p">,</span><span class="w"> </span><span class="n">by_candidate</span><span class="p">,</span><span class="w"> </span><span class="n">nrow</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="p">)</span><span class="w">
</span>

plot of chunk plot_race

Among races, the mean percentage of white people is higher in the counties Trump won than in the rest, while the counties Clinton won show a higher mean percentage of Black and Asian people. Clinton also tended to win in counties with a higher mean percentage of people of Hispanic or Latino origin, and of foreign-born population.

In a future post I will dig deeper into these variables to find out more.

The Classification Tree

As a standard practice, I split the data into train and test samples: 70% of the counties for training and the rest for testing. Since I am dealing with unbalanced data, I use the createDataPartition function from the caret package, which preserves the proportion of classes across samples.

<span class="n">library</span><span class="p">(</span><span class="n">caret</span><span class="p">)</span><span class="w">

</span><span class="n">perc_train</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="m">0.7</span><span class="w">
</span><span class="n">set.seed</span><span class="p">(</span><span class="m">3333</span><span class="p">)</span><span class="w">

</span><span class="n">trainIndex</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">createDataPartition</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">votes</span><span class="o">$</span><span class="n">pref_cand_T</span><span class="p">,</span><span class="w"> </span><span class="n">p</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">perc_train</span><span class="p">,</span><span class="w">
                                  </span><span class="n">list</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">,</span><span class="w"> </span><span class="n">times</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">)</span><span class="w">

</span><span class="n">train</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">votes</span><span class="w"> </span><span class="o">%>%</span><span class="w"> </span><span class="n">subset</span><span class="p">(</span><span class="n">ID</span><span class="w"> </span><span class="o">%in%</span><span class="w"> </span><span class="n">trainIndex</span><span class="p">)</span><span class="w">
</span><span class="n">test</span><span class="w"> </span><span class="o"><-</span><span class="w">  </span><span class="n">votes</span><span class="w"> </span><span class="o">%>%</span><span class="w"> </span><span class="n">setdiff</span><span class="p">(</span><span class="n">train</span><span class="p">)</span><span class="w">
</span>

Growing the tree

I estimate the tree using rpart, excluding variables that are not relevant for modelling, as well as those used to build the response variable.

<span class="n">library</span><span class="p">(</span><span class="n">rpart</span><span class="p">)</span><span class="w">

</span><span class="n">pref_cand_rpart</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">rpart</span><span class="p">(</span><span class="n">pref_cand_T</span><span class="w"> </span><span class="o">~</span><span class="w"> </span><span class="n">.</span><span class="p">,</span><span class="w">
                         </span><span class="n">data</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">train</span><span class="p">[,</span><span class="w"> </span><span class="o">-</span><span class="nf">c</span><span class="p">(</span><span class="m">1</span><span class="o">:</span><span class="m">24</span><span class="p">,</span><span class="w"> </span><span class="m">70</span><span class="p">)],</span><span class="w">
                         </span><span class="n">control</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">rpart.control</span><span class="p">(</span><span class="n">xval</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">10</span><span class="p">,</span><span class="w"> </span><span class="n">cp</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">0.0001</span><span class="p">))</span><span class="w">
</span>

This algorithm grows a tree from a root that contains all the observations, splitting binarily to reduce the impurity of its nodes until some stopping rule is met. I set these rules with rpart.control:

  • minsplit = 20 is the default: the minimum number of observations a node must have for a split to be attempted.
  • minbucket = 7 is also the default: the minimum number of observations in any terminal node.
  • cp = 0.0001 is the minimum decrease in lack of fit required for a split to be kept.

This is quite a big tree because I use a very small cp (I’m not showing it for that reason, but you can find it here). Simpler trees are preferred, since they are less likely to overfit the data.
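For reference, these stopping rules can be written out explicitly. This is a minimal sketch, not the author's exact call; it only restates the defaults and the cp used above:

```r
library(rpart)

# Restating the stopping rules discussed above as an explicit control object.
# minsplit and minbucket are the rpart defaults; cp and xval match the call above.
ctrl <- rpart.control(minsplit  = 20,     # min. observations in a node to attempt a split
                      minbucket = 7,      # min. observations in any terminal node
                      cp        = 0.0001, # min. decrease in lack of fit to keep a split
                      xval      = 10)     # 10-fold cross-validation for the CP table
```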

Pruning the tree

Now it’s time to prune the tree. To do this, I keep a split only if it meets a criterion balancing the impurity reduction it achieves (cost) against the increase in tree size (complexity). This trade-off is governed by the cp.

I prune the tree following the 1-SE rule: I choose the simplest model whose cross-validated error is within one standard error of the best model’s.

<span class="n">library</span><span class="p">(</span><span class="n">tibble</span><span class="p">)</span><span class="w">
</span><span class="n">cp</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">as_tibble</span><span class="p">(</span><span class="n">pref_cand_rpart</span><span class="o">$</span><span class="n">cptable</span><span class="p">)</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">filter</span><span class="p">(</span><span class="n">xerror</span><span class="w"> </span><span class="o"><=</span><span class="w"> </span><span class="nf">min</span><span class="p">(</span><span class="n">xerror</span><span class="p">)</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">xstd</span><span class="p">)</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">filter</span><span class="p">(</span><span class="n">xerror</span><span class="w"> </span><span class="o">==</span><span class="w"> </span><span class="nf">max</span><span class="p">(</span><span class="n">xerror</span><span class="p">))</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">select</span><span class="p">(</span><span class="n">CP</span><span class="p">)</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">unlist</span><span class="p">()</span><span class="w">

</span><span class="n">winner_rpart</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">prune</span><span class="p">(</span><span class="n">pref_cand_rpart</span><span class="p">,</span><span class="w"> </span><span class="n">cp</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">cp</span><span class="p">)</span><span class="w">
</span>

dplyr works on data frames, so I use the tibble::as_tibble function to convert the pref_cand_rpart$cptable matrix to a tibble, and then select the cp.
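The same 1-SE selection can be illustrated end-to-end on a small dataset. Here is a hedged sketch using rpart's built-in kyphosis data as a stand-in for the election data, just to show the mechanics:

```r
library(rpart)
set.seed(42)  # cross-validation is random, so fix the seed

# Grow a deliberately deep tree on rpart's built-in kyphosis data
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
             control = rpart.control(cp = 0.0001, xval = 10))

cptab <- as.data.frame(fit$cptable)

# 1-SE rule: the largest CP (i.e. the simplest tree) whose cross-validated
# error is within one standard error of the minimum
threshold <- min(cptab$xerror) + cptab$xstd[which.min(cptab$xerror)]
cp_1se    <- max(cptab$CP[cptab$xerror <= threshold])

pruned <- prune(fit, cp = cp_1se)
```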

And here I have the tree! (If you know some other way to plot nicer trees, please comment!)

<span class="n">library</span><span class="p">(</span><span class="n">rpart.plot</span><span class="p">)</span><span class="w">

</span><span class="n">rpart.plot</span><span class="p">(</span><span class="n">winner_rpart</span><span class="p">,</span><span class="w"> </span><span class="n">main</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"Winner candidate in county"</span><span class="p">,</span><span class="w">
           </span><span class="n">extra</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">104</span><span class="p">,</span><span class="w"> </span><span class="n">split.suffix</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"%?"</span><span class="p">,</span><span class="w"> </span><span class="n">branch</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w">
           </span><span class="n">fallen.leaves</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">,</span><span class="w"> </span><span class="n">box.palette</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"BuRd"</span><span class="p">,</span><span class="w">
           </span><span class="n">branch.lty</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">3</span><span class="p">,</span><span class="w"> </span><span class="n">split.cex</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1.2</span><span class="p">,</span><span class="w">
           </span><span class="n">shadow.col</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"gray"</span><span class="p">,</span><span class="w"> </span><span class="n">shadow.offset</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">0.2</span><span class="p">)</span><span class="w">
</span>

plot of chunk plot_pruned_tree

Starting from the top (the root), the tree splits the population into two subsets according to the question asked (the variable and the cut point), and does the same for every node. If a county meets the criterion it is classified to the left, otherwise to the right. In this case the first split indicates that if the county has less than 50% white_alone population, it goes to the left node (only 12% of the counties), and the rest go to the right (88% of the counties). The higher the percentage of counties in a node that Trump won, the redder the node; nodes associated with Clinton are bluer.

Apparently race is one of the most important characteristics determining the winning candidate: 3 of the 5 splits involve race (we will know more about this when we check the Variable Importance later on). There is also one variable referring to housing structure and another about education.

One key feature of CART is that it allows different characteristics to be relevant in each node resulting from a split. In this case, for counties with less than 50% white_alone population, the winner was also determined by the share of housing units in multi-unit structures (maybe a proxy for urbanization?) and the percentage of persons aged 25 or older holding a Bachelor’s degree or higher. For the rest of the counties, it was determined by other racial characteristics. This is because the algorithm partitions the feature space in two using linear decision boundaries parallel to the axes, and then does the same for each resulting region, over and over.

This can be revealing, since it uncovers underlying interactions between variables for different groups that would be harder to discover with other methods.

Performance Evaluation

Now I evaluate how good this model is. Before writing this post I would not have paid any extra attention to choosing measures that account for how unbalanced the data is. There are many measures to evaluate the fit; I will explore some of them, prioritizing the ones that explicitly deal with unbalanced classes.

These evaluations should always be done on the test sample; otherwise it would be like cheating!

Misclassification Error

One classic way to evaluate the performance of a classifier is to calculate the misclassification error: simply the percentage of times the classifier is wrong.

<span class="n">test</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">test</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">mutate</span><span class="p">(</span><span class="n">pred</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">predict</span><span class="p">(</span><span class="n">winner_rpart</span><span class="p">,</span><span class="w"> </span><span class="n">type</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"class"</span><span class="p">,</span><span class="w"> </span><span class="n">test</span><span class="p">),</span><span class="w">
         </span><span class="n">pred_prob_T</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">predict</span><span class="p">(</span><span class="n">winner_rpart</span><span class="p">,</span><span class="w"> </span><span class="n">type</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"prob"</span><span class="p">,</span><span class="w"> </span><span class="n">test</span><span class="p">)[,</span><span class="m">2</span><span class="p">],</span><span class="w">
         </span><span class="n">error</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">ifelse</span><span class="p">(</span><span class="n">pred</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="n">pref_cand_T</span><span class="p">,</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w"> </span><span class="m">0</span><span class="p">))</span><span class="w">

</span><span class="n">missc_error</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">test</span><span class="w"> </span><span class="o">%>%</span><span class="w"> </span><span class="n">summarize</span><span class="p">(</span><span class="n">missc_error</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">mean</span><span class="p">(</span><span class="n">error</span><span class="p">))</span><span class="w">
</span><span class="n">knitr</span><span class="o">::</span><span class="n">kable</span><span class="p">(</span><span class="n">missc_error</span><span class="p">,</span><span class="w"> </span><span class="n">align</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">'l'</span><span class="p">)</span><span class="w">
</span>
missc_error
0.0782422

But my research made it clear that, in this case, I could not take this measure at face value.

Let’s suppose I have a model that predicts Trump as the winner for all cases. It would have a misclassification error of about 15%, not so bad! But it is definitely not a great model: it would be 100% accurate for counties where Trump won, but wrong for all of the counties where Clinton won. This is known as the Accuracy Paradox, and it is why we need some alternatives to measure how good this tree is at predicting the county’s winner.
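To make the paradox concrete, here is that degenerate always-Trump classifier evaluated by hand, using the test-set class counts implied by the confusion matrix in the next section (146 Clinton counties, 787 Trump counties):

```r
# Test-set class counts, taken from the confusion matrix column totals
n_clinton <- 146
n_trump   <- 787

# A degenerate classifier that predicts Trump everywhere is wrong exactly
# on the Clinton counties:
naive_error    <- n_clinton / (n_clinton + n_trump)
naive_accuracy <- 1 - naive_error

round(naive_error, 3)     # ~0.156, the "15%" mentioned above
round(naive_accuracy, 3)  # ~0.844, which is exactly caret's No Information Rate
```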

Kappa statistic

One good thing to do in any case (whether the data is balanced or not) is to take a look at the confusion matrix. It is the input for many of the performance measures shown next.

<span class="n">test</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">select</span><span class="p">(</span><span class="n">pred</span><span class="p">,</span><span class="w"> </span><span class="n">pref_cand_T</span><span class="p">)</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">table</span><span class="p">()</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">confusionMatrix</span><span class="p">()</span><span class="w">
</span>
## Confusion Matrix and Statistics
## 
##     pref_cand_T
## pred   0   1
##    0  87  14
##    1  59 773
##                                           
##                Accuracy : 0.9218          
##                  95% CI : (0.9026, 0.9382)
##     No Information Rate : 0.8435          
##     P-Value [Acc > NIR] : 6.196e-13       
##                                           
##                   Kappa : 0.6611          
##  Mcnemar's Test P-Value : 2.607e-07       
##                                           
##             Sensitivity : 0.59589         
##             Specificity : 0.98221         
##          Pos Pred Value : 0.86139         
##          Neg Pred Value : 0.92909         
##              Prevalence : 0.15648         
##          Detection Rate : 0.09325         
##    Detection Prevalence : 0.10825         
##       Balanced Accuracy : 0.78905         
##                                           
##        'Positive' Class : 0               
##

Just by looking at this matrix we can get some clues about how good the classifier is for each class. In particular I want to look closer at the Kappa statistic, because it specifically accounts for the imbalance between classes.

Kappa measures the accuracy of the classifier corrected by the probability of agreement by chance. There is not much consensus on what magnitude of Kappa counts as low or high, but it can be interpreted as how far the obtained result is from chance. In this case the expected accuracy (the one that occurs by chance) is 77%, and the perfect accuracy is of course 100%. That leaves a gap of 23%, and this classifier closes 66% of it (an improvement of 15 percentage points!). The higher the Kappa, the better. Some maintain that a Kappa above 60% means good agreement, so we are OK!
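These numbers can be recomputed by hand from the confusion matrix above, which makes the "agreement by chance" correction explicit:

```r
# Confusion matrix from the caret output above (rows = predicted, cols = actual)
cm <- matrix(c(87, 59, 14, 773), nrow = 2)
n  <- sum(cm)

observed_acc <- sum(diag(cm)) / n                    # overall accuracy, ~0.92
expected_acc <- sum(rowSums(cm) * colSums(cm)) / n^2 # agreement by chance, ~0.77

kappa <- (observed_acc - expected_acc) / (1 - expected_acc)
round(kappa, 4)  # 0.6611, matching caret's Kappa
```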

ROC Curve and AUROC

To complement the performance evaluation, I check the ROC curve. It plots the True Positive Rate against the False Positive Rate across many different thresholds of predicted probability. As we are evaluating a tree with only 6 final nodes, there is a limited number of thresholds. (Trying to simplify this explanation, I came across this video, which is very clear if you want to go deeper.)

This measure is great for classification analysis, and it is particularly useful here because it is not affected by unbalanced classes. Luckily I came across this great ggplot2 extension, plotROC, and now I can use my favorite tools to create a pretty nice plot!

<span class="n">library</span><span class="p">(</span><span class="n">plotROC</span><span class="p">)</span><span class="w">

</span><span class="n">roc</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">test</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">select</span><span class="p">(</span><span class="n">pref_cand_T</span><span class="p">,</span><span class="w"> </span><span class="n">pred_prob_T</span><span class="p">)</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">mutate</span><span class="p">(</span><span class="n">pref_cand_T</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">as.numeric</span><span class="p">(</span><span class="n">pref_cand_T</span><span class="p">)</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="m">1</span><span class="p">,</span><span class="w">
         </span><span class="n">pref_cand_T.str</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">c</span><span class="p">(</span><span class="s2">"Clinton"</span><span class="p">,</span><span class="w"> </span><span class="s2">"Trump"</span><span class="p">)[</span><span class="n">pref_cand_T</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="m">1</span><span class="p">])</span><span class="w"> </span><span class="o">%>%</span><span class="w">
  </span><span class="n">ggplot</span><span class="p">(</span><span class="n">aes</span><span class="p">(</span><span class="n">d</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">pref_cand_T</span><span class="p">,</span><span class="w"> </span><span class="n">m</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">pred_prob_T</span><span class="p">))</span><span class="w"> </span><span class="o">+</span><span class="w">
  </span><span class="n">geom_roc</span><span class="p">(</span><span class="n">labels</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="kc">FALSE</span><span class="p">)</span><span class="w">

</span><span class="n">roc</span><span class="w"> </span><span class="o">+</span><span class="w">
</span><span class="n">style_roc</span><span class="p">(</span><span class="n">theme</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">theme_bw</span><span class="p">,</span><span class="w"> </span><span class="n">xlab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"False Positive Rate"</span><span class="p">,</span><span class="w"> </span><span class="n">ylab</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"True Positive Rate"</span><span class="p">)</span><span class="w"> </span><span class="o">+</span><span class="w">
</span><span class="n">theme</span><span class="p">(</span><span class="n">panel.grid.major</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">element_blank</span><span class="p">(),</span><span class="w"> </span><span class="n">panel.border</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">element_blank</span><span class="p">(),</span><span class="w">
      </span><span class="n">axis.line</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">element_line</span><span class="p">(</span><span class="n">colour</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"grey"</span><span class="p">))</span><span class="w"> </span><span class="o">+</span><span class="w">
</span><span class="n">ggtitle</span><span class="p">(</span><span class="s2">"ROC Curve for winner_rpart classifier"</span><span class="p">)</span><span class="w"> </span><span class="o">+</span><span class="w">
</span><span class="n">annotate</span><span class="p">(</span><span class="s2">"text"</span><span class="p">,</span><span class="w"> </span><span class="n">x</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">.75</span><span class="p">,</span><span class="w"> </span><span class="n">y</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">.25</span><span class="p">,</span><span class="w">
         </span><span class="n">label</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">paste</span><span class="p">(</span><span class="s2">"AUROC ="</span><span class="p">,</span><span class="w"> </span><span class="nf">round</span><span class="p">(</span><span class="n">calc_auc</span><span class="p">(</span><span class="n">roc</span><span class="p">)</span><span class="o">$</span><span class="n">AUC</span><span class="p">,</span><span class="w"> </span><span class="m">2</span><span class="p">)))</span><span class="w">
</span>

plot of chunk roc_curve

The AUROC (Area Under the ROC Curve) is the probability that the classifier ranks a randomly chosen positive instance higher than a randomly chosen negative one.
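That probabilistic reading of the AUROC can be checked directly by comparing every (positive, negative) pair. Here is a small sketch with made-up labels and scores, not the election predictions:

```r
# AUROC as the fraction of (positive, negative) pairs where the positive
# instance gets the higher score; ties count as half
auroc_by_pairs <- function(labels, scores) {
  pos <- scores[labels == 1]
  neg <- scores[labels == 0]
  mean(outer(pos, neg, function(p, q) (p > q) + 0.5 * (p == q)))
}

# Toy example: 3 of the 4 positive/negative pairs are ranked correctly
auroc_by_pairs(c(0, 0, 1, 1), c(0.1, 0.4, 0.35, 0.8))  # 0.75
```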

Warning about Classification Trees instability

When you use this kind of Machine Learning algorithm you should be aware that small changes in the data can change the tree considerably. Below is a second tree generated with the same parameters as before, but a different seed for the sampling.

<span class="n">set.seed</span><span class="p">(</span><span class="m">4444</span><span class="p">)</span><span class="w">

</span><span class="n">trainIndex_2</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="n">createDataPartition</span><span class="p">(</span><span class="n">y</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">votes</span><span class="o">$</span><span class="n">...

To leave a comment for the author, please follow the link and comment on their blog: d4tagirl.
