On February 28, 2019, season 11 of RuPaul’s Drag Race premiered with 15 new queens competing for the crown. While some of us are feeling a bit of Drag Race fatigue coming right after All Stars season 4, I’m trying to spice things up with another round of machine learning predictions. This time, however, I’ve translated everything into R (instead of Python), since I primarily use R at work. Below, I re-introduce the algorithms I use and then show the predictions, taking into account the results of episode 1.
Meet the Queens
Let’s begin the challenge by meeting our contestants. First up is Support Vector Machines, a classifier with a pretty intuitive algorithm. Imagine you plot points on a two-dimensional graph: a support vector machine (SVM) attempts to separate the groups defined by the labels using a line or curve that maximizes the distance between that dividing boundary and the closest points. If you have more than two features (as is often the case), the same thing happens, just in a higher-dimensional space.
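As a quick sketch (not the exact code from my analysis), here is how fitting an SVM looks in R with the e1071 package; the training data frame and its column names are made up for illustration:

```r
library(e1071)

# Hypothetical training data: one row per queen, `place` as the label.
# The real features are described in the next section.
train <- data.frame(
  place    = factor(c(1, 1, 2, 2, 3, 3, 4, 4)),
  age      = c(25, 27, 31, 30, 28, 29, 34, 33),
  wins     = c(3, 2, 2, 1, 1, 1, 0, 0),
  lipsyncs = c(0, 0, 1, 1, 1, 2, 2, 3)
)

# Fit an SVM; the (default) radial kernel lets the dividing boundary
# curve while still maximizing the margin to the closest points.
svm_fit <- svm(place ~ ., data = train, kernel = "radial")
predict(svm_fit, newdata = train)
```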
The next to enter the work room is Gaussian Naive Bayes, an algorithm that is not as intuitive as SVM but is faster and simpler to implement. Gaussian naive Bayes assumes that the data for each label are generated from a simple Gaussian (normal) distribution. Using Bayes’ theorem, along with some simplifying assumptions (the “naive” part), the algorithm uses the features and labels to estimate those Gaussian distributions, which it then uses to make its predictions.
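A similarly hedged sketch with e1071’s naiveBayes(), reusing the hypothetical train data from the SVM example:

```r
library(e1071)

# naiveBayes() estimates one Gaussian per numeric feature per class,
# then combines them via Bayes' theorem to score each label.
nb_fit <- naiveBayes(place ~ ., data = train)
predict(nb_fit, newdata = train)
```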
Our third contestant is the Random Forest Classifier. Random forests are aggregations of decision trees (get it!?). Decision trees are classification algorithms composed of a series of decision points, splitting the data at each point to try to properly classify it. Think of a game of Guess Who or Twenty Questions: you ask a series of yes/no questions to sort the possibilities into different bins. Decision trees work the same way, with any number of possible bins. The problem with decision trees is that they tend to overfit the data, meaning that they do a really good job of predicting the training data, but the decision points are so specific to the training data that they aren’t so good at predicting testing data. The solution is to split the training data itself into different subsets, create a decision tree for each subset, and then average those trees together to create a “forest” that typically does a much better job on testing data than any single tree.
The fourth contestant is the Random Forest Regressor, also from the Haus of Random Forests. It works much the same way the classifier does, but rather than predicting unordered categories, it predicts continuous values.
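Both members of the Haus are available from the randomForest package: a factor response gives you the classifier, a numeric response the regressor. A sketch, again on the hypothetical train data:

```r
library(randomForest)

# Classification: with a factor response, each of the 500 trees is grown
# on a bootstrap sample and the forest takes a majority vote.
rf_class <- randomForest(place ~ ., data = train, ntree = 500)

# Regression: with a numeric response, the forest averages the trees'
# predictions into a continuous value.
train_num <- transform(train, place = as.numeric(as.character(place)))
rf_reg <- randomForest(place ~ ., data = train_num, ntree = 500)
predict(rf_reg, newdata = train_num)
```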
Our final contestant is Neural Network. Neural networks are a family of methods that roughly simulate the connections between neurons in a biological brain. The network used here consists of neurons that take some number of values as inputs, apply weights to those values (weights that are adjusted during the learning process), and then pass the weighted sum through a logistic function to produce an output between 0 and 1. Neural networks consist of two or more layers of neurons: an input layer, an output layer, and zero or more hidden layers. The network I use here has one hidden layer consisting of three neurons.
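One way to sketch such a network in R is with the nnet package, whose size argument sets the number of neurons in its single hidden layer (hypothetical data again):

```r
library(nnet)

# One hidden layer with three logistic neurons; with a multi-level
# factor response, nnet() fits a softmax output layer over the classes.
nn_fit <- nnet(place ~ ., data = train, size = 3, maxit = 500)
predict(nn_fit, newdata = train, type = "class")
```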
The Maxi Challenge: Predicting Season 11
The main challenge is to predict the outcome of season 11 of RuPaul’s Drag Race. To do so, we’ll use the following features in our models (a hypothetical data frame with these features is sketched after the list):
Season the queen appeared in
Age of the queen
Whether the queen is Black
Whether the queen is white
Whether the queen is a non-Black person of color
Whether the queen is plus size
The total number of main challenges a queen won during the season
The total number of times a queen was among the top queens for a challenge but did not win
The total number of times a queen was among the worst queens for a challenge but did not have to lip-sync
The total number of times a queen had to lip-sync for her life (including the lip-sync that she sashayed away from)
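As a rough illustration (the column names here are my own invention, not necessarily those in my actual data), one queen-season might look like:

```r
# One row per queen-season; the indicator columns are 0/1 flags.
features <- data.frame(
  season    = 11,  # season the queen appeared in
  age       = 26,  # age of the queen
  black     = 1,   # queen is Black
  white     = 0,   # queen is white
  poc       = 0,   # queen is a non-Black person of color
  plus_size = 0,   # queen is plus size
  wins      = 1,   # main challenge wins over the season
  highs     = 2,   # in the top without winning
  lows      = 1,   # in the bottom without lip-syncing
  lipsyncs  = 0    # lip-syncs for her life (including the last one)
)
```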
For all five algorithms, I rank the predicted ranks, as some algorithms did not predict any queen to place first. Ranking the predicted ranks ensures that at least one queen will be predicted to come in first.
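In R, this is just a call to rank(); the raw model output below is made up for illustration:

```r
predicted_place <- c(2.4, 1.7, 3.3, 1.9)   # hypothetical raw predictions
rank(predicted_place, ties.method = "min") # 3 1 4 2 -- someone is now 1st
```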
The Results
The final predicted place is based on the average of the predicted places across all five algorithms.
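As a sketch of that step, here it is applied to the first three rows of the table below:

```r
# Ranked predictions for A'keria, Mercedes, and Brooke Lynn (see table).
ranks <- cbind(svm = c(2, 1, 9),
               gnb = c(1, 1, 8),
               rfc = c(1, 4, 1),
               rfr = c(2, 3, 1),
               nn  = c(3, 2, 1))
avg <- rowMeans(ranks)         # 1.8, 2.2, 4.0
rank(avg, ties.method = "min") # final predicted places: 1, 2, 3
```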
With one episode down, the algorithms are predicting our top four to be A’keria, Mercedes, Brooke Lynn, and Yvie. The algorithms correctly guessed that Soju would go home first, and they predict a double elimination next week: both Ariel Versace and Scarlet Envy are predicted to go home.
| Name | Place | SVM | GNB | RFC | RFR | NN | Average | Predicted Rank |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A’keria Chanel Davenport | NA | 2 | 1 | 1 | 2 | 3 | 1.8 | 1 |
| Mercedes Iman Diamond | NA | 1 | 1 | 4 | 3 | 2 | 2.2 | 2 |
| Brooke Lynn Hytes | NA | 9 | 8 | 1 | 1 | 1 | 4.0 | 3 |
| Yvie Oddly | NA | 3 | 1 | 4 | 8 | 8 | 4.8 | 4 |
| Vanessa Vanjie Mateo | NA | 10 | 1 | 4 | 5 | 5 | 5.0 | 5 |
| Honey Davenport | NA | 5 | 1 | 4 | 7 | 9 | 5.2 | 6 |
| Ra’jah O’Hara | NA | 6 | 1 | 4 | 6 | 10 | 5.4 | 7 |
| Plastique Tiara | NA | 11 | 1 | 13 | 4 | 4 | 6.6 | 8 |
| Shuga Cain | NA | 4 | 8 | 3 | 10 | 14 | 7.8 | 9 |
| Nina West | NA | 12 | 8 | 4 | 9 | 11 | 8.8 | 10 |
| Silky Nutmeg Ganache | NA | 8 | 8 | 4 | 11 | 15 | 9.2 | 11 |
| Kahanna Montrese | NA | 7 | 8 | 13 | 14 | 6 | 9.6 | 12 |
| Ariel Versace | NA | 13 | 8 | 4 | 12 | 12 | 9.8 | 13 |
| Scarlet Envy | NA | 13 | 8 | 4 | 12 | 12 | 9.8 | 13 |
| Soju | 15 | 15 | 8 | 15 | 15 | 7 | 12.0 | 15 |
Tune in next week to find out whether we get another double elimination! In the meantime, check out the source code for this analysis.