The Library is Open: Season 11

On February 28, 2019, season 11 of RuPaul’s Drag Race premiered, with 15 new queens competing for the crown. While some of us are feeling a bit of Drag Race fatigue coming right on the heels of All Stars season 4, I’m trying to spice things up with another round of machine learning predictions. This time, however, I’ve translated everything into R (instead of Python), since I primarily use R at work. Below, I re-introduce the algorithms I use and then show the predictions, taking into account the results of episode 1.

Meet the Queens

Let’s begin the challenge by meeting our contestants.

First up is the Support Vector Machine, a classifier with a pretty intuitive algorithm. Imagine you plot points on a two-dimensional graph. A support vector machine (SVM) attempts to separate the groups defined by the labels using a line or curve that maximizes the distance between the dividing line and the closest points. If you have more than two features (as is often the case), the same thing happens, just in a higher-dimensional space.

The next to enter the work room is Gaussian Naive Bayes, an algorithm that is not as intuitive as SVM, but faster and simpler to implement. Gaussian naive Bayes assumes that the data for each label is generated from a simple Gaussian (or normal) distribution. Using Bayes’ theorem, along with some simplifying assumptions (which is what makes it naive), the algorithm uses the features and labels to estimate those Gaussian distributions, which it then uses to make its predictions.

Our third contestant is the Random Forest Classifier. Random forests are aggregations of decision trees (get it!?). Decision trees are classifying algorithms composed of a series of decision points, splitting the data at each decision point to try to classify it correctly. Think of a game of Guess Who or Twenty Questions – you ask a series of yes/no questions to sort the possibilities into different bins. Decision trees work the same way, with any number of possible bins. The problem with decision trees is that they tend to overfit: they do a really good job of predicting the training data, but because the decision points are specific to the training data, they aren’t so good at predicting testing data. The solution is to split the training data itself into different subsets, build a decision tree for each subset, and then average those trees together to create a “forest” that typically does a much better job with testing data than a single tree would.

The fourth contestant, the Random Forest Regressor, is also from the Haus of Random Forests. It works much the same way the classifier does, but rather than trying to predict unordered categories, it predicts continuous values.

Our final contestant is the Neural Network. Neural networks are a family of methods that roughly simulate the connections between neurons in a biological brain. The network used here consists of neurons that each take some number of values as inputs, apply weights to those values (weights that are adjusted during the learning process), and pass the result through a logistic function to produce an output between 0 and 1. Neural networks consist of two or more layers of neurons (an input layer, an output layer, and zero or more hidden layers). The network I use here has one hidden layer consisting of three neurons.
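Since everything is in R this season, here’s a minimal sketch of how these five contestants might be fit. The package choices (e1071, randomForest, nnet) and the hypothetical `train` data frame (past queens’ features plus their final `place`, described in the next section) are my assumptions for illustration, not necessarily what the actual source code uses:

```r
library(e1071)         # svm() and naiveBayes()
library(randomForest)  # randomForest(), for both classification and regression
library(nnet)          # nnet(), a single-hidden-layer neural network

# The three classifiers treat final placement as an unordered category
svm_fit <- svm(factor(place) ~ ., data = train)
gnb_fit <- naiveBayes(factor(place) ~ ., data = train)  # Gaussian per numeric feature
rfc_fit <- randomForest(factor(place) ~ ., data = train)

# The random forest regressor predicts placement as a continuous value
rfr_fit <- randomForest(place ~ ., data = train)

# Neural network: one hidden layer of three neurons; the logistic output
# lives in (0, 1), so scale placement to that range first (the ranking
# step later is unaffected by the scaling)
train$place01 <- train$place / max(train$place)
nn_fit <- nnet(place01 ~ . - place, data = train, size = 3, maxit = 500)

# Predicted placements for the new queens (hypothetical `s11` data frame)
rfr_pred <- predict(rfr_fit, newdata = s11)
```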

The Maxi Challenge: Predicting Season 11

The main challenge is to predict the outcome of season 11 of RuPaul’s Drag Race. To do so, we’ll use the following features in our models (a sketch of the corresponding data frame follows the list):
  1. Season the queen appeared in
  2. Age of the queen
  3. Whether the queen is Black
  4. Whether the queen is white
  5. Whether the queen is a non-Black person of color
  6. Whether the queen is Plus Size
  7. The total number of main challenges a queen won during the season
  8. The total number of times a queen was among the top queens for the challenge, but did not win the challenge
  9. The total number of times a queen was among the worst queens for the challenge, but did not lip-sync
  10. The total number of times a queen had to lip-sync for her life (including the lip-sync that she sashayed away from)
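For reference, here’s how one row of that training data might look in R. The column names are hypothetical, invented for this sketch rather than taken from the actual analysis:

```r
# One hypothetical row of training data; numbers are made up for illustration
train <- data.frame(
  season    = 10,  # 1. season the queen appeared in
  age       = 28,  # 2. age of the queen
  black     = 1,   # 3. Black (0/1)
  white     = 0,   # 4. white (0/1)
  poc       = 0,   # 5. non-Black person of color (0/1)
  plus_size = 0,   # 6. plus size (0/1)
  wins      = 2,   # 7. main challenge wins
  highs     = 3,   # 8. in the top without winning
  lows      = 1,   # 9. in the bottom without lip-syncing
  lipsyncs  = 1,   # 10. lip-syncs for her life
  place     = 4    # outcome: final placement in her season
)
```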
For all five algorithms, I rank the predicted placements, as some algorithms did not predict any queens to place first. Ranking the predictions ensures that at least one queen will be predicted to come in first (see the sketch below).
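As a quick illustration, R’s `rank()` with `ties.method = "min"` would reproduce the tied ranks visible in the results table below, though the exact call is my assumption:

```r
# Hypothetical predicted placements for five queens: none is predicted to
# finish first, but ranking the predictions forces at least one first place
predicted_place <- c(3, 3, 5, 8, 2)
rank(predicted_place, ties.method = "min")
#> [1] 2 2 4 5 1
```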

The Results

The final predicted score is based on the average of the predicted places across the five algorithms. With one episode down, the algorithms predict our top four to be A’Keria, Mercedes, Brooke Lynn, and Yvie. The algorithms correctly guessed that Soju would be going home first, and they predict a double elimination next week: Ariel Versace and Scarlet Envy are tied, with both predicted to go home next.
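Here’s that final scoring step applied to the first two rows of the table below, again assuming `rank()` with `ties.method = "min"` for the final ordering:

```r
# Per-algorithm predicted ranks for the first two queens, from the table below
ranks <- rbind(
  akeria   = c(svm = 2, gnb = 1, rfc = 1, rfr = 2, nn = 3),
  mercedes = c(svm = 1, gnb = 1, rfc = 4, rfr = 3, nn = 2)
)
avg <- rowMeans(ranks)          # 1.8 and 2.2 -- the Average column
rank(avg, ties.method = "min")  # final Predicted Rank: 1 and 2
```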
| Name | Place | SVM | GNB | RFC | RFR | NN | Average | Predicted Rank |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A’keria Chanel Davenport | NA | 2 | 1 | 1 | 2 | 3 | 1.8 | 1 |
| Mercedes Iman Diamond | NA | 1 | 1 | 4 | 3 | 2 | 2.2 | 2 |
| Brooke Lynn Hytes | NA | 9 | 8 | 1 | 1 | 1 | 4.0 | 3 |
| Yvie Oddly | NA | 3 | 1 | 4 | 8 | 8 | 4.8 | 4 |
| Vanessa Vanjie Mateo | NA | 10 | 1 | 4 | 5 | 5 | 5.0 | 5 |
| Honey Davenport | NA | 5 | 1 | 4 | 7 | 9 | 5.2 | 6 |
| Ra’jah O’Hara | NA | 6 | 1 | 4 | 6 | 10 | 5.4 | 7 |
| Plastique Tiara | NA | 11 | 1 | 13 | 4 | 4 | 6.6 | 8 |
| Shuga Cain | NA | 4 | 8 | 3 | 10 | 14 | 7.8 | 9 |
| Nina West | NA | 12 | 8 | 4 | 9 | 11 | 8.8 | 10 |
| Silky Nutmeg Ganache | NA | 8 | 8 | 4 | 11 | 15 | 9.2 | 11 |
| Kahanna Montrese | NA | 7 | 8 | 13 | 14 | 6 | 9.6 | 12 |
| Ariel Versace | NA | 13 | 8 | 4 | 12 | 12 | 9.8 | 13 |
| Scarlet Envy | NA | 13 | 8 | 4 | 12 | 12 | 9.8 | 13 |
| Soju | 15 | 15 | 8 | 15 | 15 | 7 | 12.0 | 15 |
Tune in next week to see whether we really do get a double elimination! In the meantime, check out the source code for this analysis.