On February 28, 2019, season 11 of RuPaul’s Drag Race premiered with 15 new queens competing for the crown. While some of us are feeling a bit of Drag Race fatigue, coming right after All Stars season 4, I’m trying to spice things up with another round of machine learning predictions. This time, however, I’ve translated everything into R (instead of Python), since I primarily use R at work. Below I re-introduce the algorithms I use and then show the predictions, taking into account the results of episode 1.
Meet the Queens
Let’s begin the challenge by meeting our contestants. First up is Support Vector Machines, a classifier with a pretty intuitive algorithm. Imagine you plot labeled points on a two-dimensional graph. A support vector machine (SVM) attempts to separate the groups defined by the labels using a line or curve that maximizes the distance between the dividing boundary and the closest points. If you have more than two features (as is often the case), the same thing happens, just in a higher-dimensional space.
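To make the idea concrete, here’s a minimal sketch in Python with scikit-learn (the language earlier posts in this series used); the toy features and labels below are made up for illustration, not the real data:

```python
from sklearn.svm import SVC

# Toy training data: two features per queen (say, wins and bottoms) -- made up.
X = [[4, 0], [3, 1], [1, 3], [0, 4]]
y = [1, 1, 0, 0]  # 1 = finished high, 0 = finished low

# A linear kernel draws the maximum-margin line between the two groups.
clf = SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[4, 1], [0, 3]]))
```

With clearly separated groups like these, the maximum-margin line falls between the two clusters, so new points are classified by which side they land on.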
The next to enter the work room is Gaussian Naive Bayes, an algorithm that is less intuitive than SVM but faster and simpler to implement. Gaussian naive Bayes assumes that the data for each label is generated from a simple Gaussian (or normal) distribution. Using Bayes’ theorem, along with some simplifying assumptions (which is what makes it naive), the algorithm uses the features and labels to estimate those Gaussian distributions and then uses them to make predictions.
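A sketch of the same toy problem with Gaussian naive Bayes (again in Python with scikit-learn, with invented data):

```python
from sklearn.naive_bayes import GaussianNB

# Same made-up toy data: two features and a high/low label.
X = [[4, 0], [3, 1], [1, 3], [0, 4]]
y = [1, 1, 0, 0]

# GaussianNB fits one normal distribution per feature per class,
# then uses Bayes' theorem to pick the most probable class.
gnb = GaussianNB()
gnb.fit(X, y)
print(gnb.predict([[4, 1]]))
```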
Our third contestant is the Random Forest Classifier. Random forests are aggregations of decision trees (get it!?). Decision trees are classifying algorithms composed of a series of decision points, splitting the data at each decision point to try to classify it correctly. Think of a game of Guess Who or Twenty Questions – you ask a series of yes/no questions to sort possibilities into different bins. Decision trees work the same way, with any number of possible bins. The problem with decision trees is that they tend to overfit the data: they do a really good job of predicting the training data, but the decision points are specific to the training data, so they aren’t as good at predicting testing data. The solution is to split the training data itself into different subsets, create a decision tree for each subset, and then average those trees together to create a “forest” that typically does much better with testing data than a single tree.
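The tree-averaging idea can be sketched like this (Python/scikit-learn, toy data again):

```python
from sklearn.ensemble import RandomForestClassifier

# Made-up toy data, as in the examples above.
X = [[4, 0], [3, 1], [1, 3], [0, 4]]
y = [1, 1, 0, 0]

# Each of the 100 trees is fit on a bootstrap sample of the rows; the
# forest averages their votes, smoothing out any single tree's overfitting.
rfc = RandomForestClassifier(n_estimators=100, random_state=0)
rfc.fit(X, y)
print(rfc.predict([[4, 1], [0, 3]]))
```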
The fourth contestant is the Random Forest Regressor, also from the Haus of Random Forests. It works much the same way the classifier does, but rather than trying to predict unordered categories, it predicts continuous values.
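The regressor version looks nearly identical, except the target is a number rather than a class (a sketch with invented data; the "place" values here are illustrative):

```python
from sklearn.ensemble import RandomForestRegressor

# Same made-up features, but now the target is a continuous placement.
X = [[4, 0], [3, 1], [1, 3], [0, 4]]
place = [1.0, 2.0, 3.0, 4.0]

# The forest averages each tree's numeric prediction instead of its vote.
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X, place)
print(rfr.predict([[4, 1]]))  # a continuous value, not a class
```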
Our final contestant is the Neural Network. Neural networks are a family of methods that roughly simulate the connections between neurons in a biological brain. The neural network used here consists of neurons that each take some number of values as inputs, apply weights to those values (weights that are adjusted during the learning process), and then pass the weighted sum through a logistic function to produce an output between 0 and 1. Neural networks consist of two or more layers of neurons: an input layer, an output layer, and zero or more hidden layers. The network I use here has one hidden layer consisting of three neurons.
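A setup roughly matching this description (one hidden layer of three logistic neurons) can be sketched in Python with scikit-learn’s MLPClassifier; the data is the same invented toy set:

```python
from sklearn.neural_network import MLPClassifier

# Made-up toy data, as above.
X = [[4, 0], [3, 1], [1, 3], [0, 4]]
y = [1, 1, 0, 0]

# One hidden layer of three neurons with a logistic activation,
# matching the architecture described in the text.
nn = MLPClassifier(hidden_layer_sizes=(3,), activation="logistic",
                   max_iter=5000, random_state=0)
nn.fit(X, y)
print(nn.predict([[4, 1]]))
```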
The Maxi Challenge: Predicting Season 11
The main challenge is to predict the outcome of season 11 of RuPaul’s drag race. To do so, we’ll use the following features in our models:
Season the queen appeared in
Age of the queen
Whether the queen is Black
Whether the queen is white
Whether the queen is a non-Black person of color
Whether the queen is Plus Size
The total number of main challenges a queen won during the season
The total number of times a queen was among the top queens for the challenge, but did not win the challenge
The total number of times a queen was among the worst queens for the challenge, but did not lip-sync
The total number of times a queen had to lip-sync for her life (including the lip-sync that she sashayed away from)
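Assembled as a table, the features above might look something like this (a pandas sketch; the column names and values are hypothetical, not the actual dataset):

```python
import pandas as pd

# Hypothetical feature rows for two queens -- illustrative only.
features = pd.DataFrame({
    "season": [11, 11],        # season the queen appeared in
    "age": [30, 31],
    "black": [1, 0],
    "white": [0, 1],
    "poc_nonblack": [0, 0],
    "plus_size": [0, 1],
    "wins": [1, 0],            # main challenge wins
    "highs": [0, 1],           # top placements without a win
    "lows": [0, 0],            # bottom placements without a lip-sync
    "lipsyncs": [0, 0],        # lip-syncs for her life
})
print(features.shape)
```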
For all five algorithms, I rank the predicted ranks, as some algorithms did not predict any queens to place first. Ranking the predicted ranks ensures that at least one queen will be predicted to come in first.
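The rank-of-ranks step might look like this (a Python sketch; the raw predictions are invented to show the effect):

```python
from scipy.stats import rankdata

# Suppose a model's raw predicted places for four queens never include a 1st.
predicted_places = [3, 5, 3, 8]

# Ranking the predictions guarantees the best-scoring queen is ranked 1st;
# with method="min", ties share the lowest rank.
ranked = rankdata(predicted_places, method="min")
print(ranked)
```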
The Results
The final predicted rank is based on the average of the predicted places across all of the algorithms.
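The averaging-then-ranking step can be sketched in pandas; the numbers here are taken from the first three rows of the results table below:

```python
import pandas as pd

# Predicted places from each algorithm for three queens
# (values from the first three rows of the results table).
preds = pd.DataFrame(
    {
        "svm": [2, 1, 9],
        "gnb": [1, 1, 8],
        "rfc": [1, 4, 1],
        "rfr": [2, 3, 1],
        "nn": [3, 2, 1],
    },
    index=["Akeria", "Mercedes", "BrookeLynn"],
)

# Average across algorithms, then rank the averages for the final ordering.
preds["average"] = preds.mean(axis=1)
preds["predicted_rank"] = preds["average"].rank(method="min").astype(int)
print(preds[["average", "predicted_rank"]])
```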
With one episode down, the algorithms are predicting our top four to be A’Keria, Mercedes, Brooke Lynn, and Yvie. They correctly predicted that Soju would go home first, and they’re calling a double elimination next week: both Ariel Versace and Scarlet Envy are predicted to go home.
| Name | Place | SVM | GNB | RFC | RFR | NN | Average | Predicted Rank |
|------|-------|-----|-----|-----|-----|----|---------|----------------|
| A’keria Chanel Davenport | NA | 2 | 1 | 1 | 2 | 3 | 1.8 | 1 |
| Mercedes Iman Diamond | NA | 1 | 1 | 4 | 3 | 2 | 2.2 | 2 |
| Brooke Lynn Hytes | NA | 9 | 8 | 1 | 1 | 1 | 4.0 | 3 |
| Yvie Oddly | NA | 3 | 1 | 4 | 8 | 8 | 4.8 | 4 |
| Vanessa Vanjie Mateo | NA | 10 | 1 | 4 | 5 | 5 | 5.0 | 5 |
| Honey Davenport | NA | 5 | 1 | 4 | 7 | 9 | 5.2 | 6 |
| Ra’jah O’Hara | NA | 6 | 1 | 4 | 6 | 10 | 5.4 | 7 |
| Plastique Tiara | NA | 11 | 1 | 13 | 4 | 4 | 6.6 | 8 |
| Shuga Cain | NA | 4 | 8 | 3 | 10 | 14 | 7.8 | 9 |
| Nina West | NA | 12 | 8 | 4 | 9 | 11 | 8.8 | 10 |
| Silky Nutmeg Ganache | NA | 8 | 8 | 4 | 11 | 15 | 9.2 | 11 |
| Kahanna Montrese | NA | 7 | 8 | 13 | 14 | 6 | 9.6 | 12 |
| Ariel Versace | NA | 13 | 8 | 4 | 12 | 12 | 9.8 | 13 |
| Scarlet Envy | NA | 13 | 8 | 4 | 12 | 12 | 9.8 | 13 |
| Soju | 15 | 15 | 8 | 15 | 15 | 7 | 12.0 | 15 |
Tune in next week to find out whether we get another double elimination! In the meantime, check out the source code for this analysis.
It’s that time again! RuPaul’s Drag Race is starting a new season of All Stars. I’ve previously used machine learning to try to predict the outcome of regular seasons of Drag Race. Since All Stars has traditionally had a different format (paired queens in season 1 and Lip Sync for Your Legacy in season 2), I don’t have the data to train machine learning algorithms for All Stars seasons. However, I can try to predict how the queens will do based on their performances in their original seasons. This is a brief post, so to learn more about how all this works, read the first post. Based on the queens that have officially been announced (so this doesn’t include the mystery 10th queen that will be revealed on tonight’s premiere), the outcome of All Stars Season 3 is predicted to be:
So based on the algorithms, Aja will be heading home first. BenDeLa, Shangela, and Kennedy will be our top three, with BenDeLa taking home the crown.
Obviously with a new format and a different mix of competitors, how a queen did in their original season isn’t a great predictor of how they will do in All Stars. Still, it’s fun to see what the data says.
I’m looking forward to what will hopefully be a great season of All Stars! To the queens:
Season 9 of RuPaul’s Drag Race has started! Drag Race is a reality competition show hosted by the legendary drag queen RuPaul to find the next drag superstar. Each season, between 12 and 14 queens compete in challenges, which could consist of sewing together new looks for the runway, acting in parody scenes of gay cult classics, singing, or dancing. At the end of the episode, one queen is named the winner of the challenge and the two queens who fall in the bottom must lip-sync for their lives. They perform a lip-sync to a pre-chosen song on the runway for RuPaul, and whoever impresses most gets to stay, while the other is asked to sashay away.
I’m preparing to reopen the library for season 9 of RuPaul’s Drag Race! When the new season starts airing, I’ll do a comprehensive write-up of what I’ll be doing and the data I’ll be using. For now, if you need a refresher, you can read the original post I wrote last year.
The new cast of the second season of Drag Race All Stars was announced, with ten queens from previous seasons competing to win.
Adore Delano, Alaska, Alyssa Edwards, Coco Montrese, Detox, Ginger Minj, Katya, Roxxxy Andrews, Phi Phi O’Hara, and Tatianna will be joining us again. I wondered what the machine learning algorithms might predict if I trained on all non-All Star queens and then predicted where these queens might rank. The following table shows the results, sorted by the average ranking.
| Name | Season | Place | SVC | GNB | RFC | RFR | NN | Avg |
|------|--------|-------|-----|-----|-----|-----|----|-----|
| Adore Delano | 6 | 2 | 3 | 1 | 1 | 2 | 3 | 2 |
| Ginger Minj | 7 | 2 | 1 | 7 | 1 | 1 | 1 | 2.2 |
| Roxxxy Andrews | 5 | 3 | 2 | 1 | 7 | 3 | 2 | 3 |
| Alaska | 5 | 2 | 3 | 1 | 3 | 4 | 7 | 3.6 |
| Phi Phi O'Hara | 4 | 3 | 5 | 1 | 8 | 5 | 3 | 4.4 |
| Katya | 7 | 5 | 6 | 1 | 5 | 6 | 6 | 4.8 |
| Tatianna | 2 | 4 | 7 | 6 | 4 | 7 | 7 | 6.2 |
| Alyssa Edwards | 5 | 6 | 8 | 8 | 8 | 9 | 5 | 7.6 |
| Detox | 5 | 4 | 8 | 8 | 5 | 8 | 9 | 7.6 |
| Coco Montrese | 5 | 5 | 8 | 8 | 8 | 10 | 9 | 8.6 |
The algorithms favor Adore, with Ginger coming in second and Roxxxy taking third. This strikes me as unlikely, and of course these predictions are based on the queens’ performances during their original seasons. Each has had time to improve and grow as a drag queen, so their actual performance is difficult to predict at this point. I’m looking forward to watching the season.
The script I used can be found on my GitHub. The original series predicting season 8 starts here.
With the finale aired and a new Superstar crowned, I take some time to see how the algorithms perform overall, comparing predictions across all eight seasons. If you missed my earlier blog posts, see the first to understand what’s going on in the rest of this post.