GET();
Miquela Sousa, known as @lilmiquela, is an Instagram influencer with over 1.5 million followers and over 400 posts. She is a computer-generated 3D model who poses as a real person online for her followers, talking and acting like a typical 19-year-old California girl on Instagram. For a brief period in 2019, she promoted a mysterious website called club-404.com, which urged people to sign up with their email for "updates in the future."
Brud, an AI and computer graphics company, created Miquela along with two other robot characters, @blawko22 and @bermudaisbae, and runs all of their profiles across social media.
GET();
Miquela's mysterious website, club-404.com, turned out to simply be a merchandise store selling a few overpriced items like shirts and socks. To us, however, it represented another of Miquela's publicity stunts, designed to get people talking about the mysterious website and, by extension, to drive more social media engagement for the Instagram account.
GET();
Club 405 is intended as an homage to club-404.com. It details our methods and findings from researching Instagram influencers, and how virtual and human influencers deal with online harassment differently.
The design is inspired by the original vaporwave aesthetic design of club-404.com.
POST();
We at the Rutgers University - Camden Digital Studies Center set out with the goal of exploring how online harassment affects virtual beings; specifically, we wanted to examine if and how harassment manifested on these virtual influencers' Instagram posts. Next, we plan to examine two questions: "How do virtual beings resist harassment?" and "How can virtual beings react to being harassed in ways different from how humans can?"
POST();
We analyzed over 300 of LilMiquela's publicly available Instagram posts, and over 10,000 publicly available comments on those posts, looking for different types of harassment and creating categories that each comment falls into.
We coded these comments into 8 categories, and more than one category could be assigned to a single comment. The categories are based on the content of a comment and on which audience it is aimed at. To narrow our data further, we used a sentiment analysis program written in Python to sort the comments into 3 groups: Positive, Neutral, and Negative.
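The text does not name the sentiment tool we used, so as a minimal sketch, here is how a simple lexicon-based classifier could sort a comment into Positive, Neutral, or Negative (the word lists below are hypothetical; a real pipeline would typically use a library such as NLTK's VADER):

```python
# Hypothetical, tiny sentiment lexicons for illustration only --
# not the actual lists or library used in our project.
POSITIVE_WORDS = {"love", "beautiful", "amazing", "queen", "great"}
NEGATIVE_WORDS = {"fake", "creepy", "hate", "ugly", "weird"}

def classify_sentiment(comment: str) -> str:
    """Return 'Positive', 'Neutral', or 'Negative' for one comment."""
    words = comment.lower().split()
    # Net score: positive hits minus negative hits.
    score = sum(w in POSITIVE_WORDS for w in words) \
          - sum(w in NEGATIVE_WORDS for w in words)
    if score > 0:
        return "Positive"
    if score < 0:
        return "Negative"
    return "Neutral"
```

In practice the Negative bucket is the one we drilled into, since that is where harassment concentrates.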
POST();
Our coding categories can be divided into two sections: those defined by the person or group a comment is aimed at, and those defined by how the comment's content can be perceived.
The first group consists of the categories "Harassment of Miquela", "Harassment of other commenters", "General harassment", and "Defense of Miquela". Although we were focusing on comments with "negative" sentiment, plenty of comments defending the account made their way into our data. The most interesting thing about these comments is that they would lift Miquela up while putting other commenters down (usually those who were harassing her), so a single comment would typically fit into both "Defense of Miquela" and "Harassment of other commenters".
The second group dealt with the content of the comments themselves: "Spam", "Body Comments", "Self-promotion", and "Robot Comments". The most common type of comment was "Spam"; unfortunately, this category was only helpful for further refining our dataset after the coding step. Because Miquela is a public female figure, we predicted that there would be many comments about her body and appearance, and we were correct. These body comments, however, were usually split between harassment of Miquela and defense of the account.
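Our scheme is multi-label: one comment can carry several of the eight categories at once, such as a defense of Miquela that also harasses another commenter. As a sketch of how such an assignment could be stored (category names are from our scheme; the function and record shape are illustrative assumptions):

```python
# All eight coding categories from our scheme.
CATEGORIES = [
    "Harassment of Miquela", "Harassment of other commenters",
    "General harassment", "Defense of Miquela",
    "Spam", "Body Comments", "Self-promotion", "Robot Comments",
]

def code_comment(comment_id: str, labels: set[str]) -> dict:
    """Attach a set of category labels to one comment, rejecting unknown labels."""
    unknown = labels - set(CATEGORIES)
    if unknown:
        raise ValueError(f"Unknown categories: {unknown}")
    return {"id": comment_id, "labels": sorted(labels)}

# A single comment can fall into two categories at once:
record = code_comment(
    "c_001", {"Defense of Miquela", "Harassment of other commenters"}
)
```

Storing labels as a set rather than a single value is what lets one comment count toward both a defense tally and a harassment tally.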
POST();
We are still conducting research with our dataset and plan to publish a paper soon.