
Sentiment Analysis

Sentiment toward robots as evidenced by Blade Runner.

Domo Arigato, Mr. Roboto


I initially proposed a research project addressing the question: do movie interactions between robots and humans reflect cultural sentiment toward robots at the time the movie was made? Further, I wanted to know whether the degree of humanoid features made a difference. I had big ideas about psychological datasets, cultural datasets, many film scripts, counting bipedal robots and arms and eyes, and sentiment comparisons. I pulled down script after script of movies found through AMC’s list of robot movies, IMDb’s sci-fi film list, and Paste Magazine’s top robot movie lists. Some movies I considered analyzing included Her, A.I., Interstellar, Moon, Ex Machina, Kronos, and 2001: A Space Odyssey. It turns out I had gotten ahead of myself. This was much too wide a scope, so I narrowed it to a general look at robots in movies, and sentiment toward robots (replicants) in a single movie: Blade Runner.


I used Python and Beautiful Soup to parse the script found on imsdb.com. Luckily, someone had a script on GitHub that did almost exactly what I needed, so after a bit of trial and error I was able to collect the scripts in a nearly usable format. Although this didn’t work for every script, it gave me something to work with. It split lines into character, speech, stage direction, and location, and did so fairly accurately. After a manual look at the data and a few changes, I had a dataset ready to work with.
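For the curious, here is a minimal sketch of that scraping step with Beautiful Soup. The URL pattern and the indentation heuristic for speaker names are assumptions on my part, not the exact logic of the GitHub script I adapted:

```python
import requests
from bs4 import BeautifulSoup

# Assumed URL pattern -- IMSDb script pages generally look like this,
# but verify the exact path for the script you want.
URL = "https://imsdb.com/scripts/Blade-Runner.html"

html = requests.get(URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# IMSDb scripts are typically wrapped in a <pre> block; fall back to the
# full page text if that tag is missing.
pre = soup.find("pre")
raw_script = pre.get_text() if pre else soup.get_text()

# Rough screenplay heuristic: speaker names are deeply indented,
# all-uppercase lines; everything else is dialogue or direction.
parsed = []
for line in raw_script.splitlines():
    stripped = line.strip()
    if not stripped:
        continue
    indent = len(line) - len(line.lstrip())
    if stripped.isupper() and indent >= 20:
        parsed.append(("character", stripped))
    else:
        parsed.append(("text", stripped))
```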


Then I started working in R. Using a number (read: a lot) of different packages, but mostly tidytext, dplyr, and ggplot2, I played around with data structure and visualization. I looked at the data by character, by line, and by word. To work with the data, I joined the character information and tokenized words into a dataframe, removed stop words, and stemmed to associate like words. I combined datasets to identify the character associated with each line and each word. This tutorial proved to be a great foundation for adapting the code and exploring my data. I also combined several of the IMDb datasets based on title ID and filtered down to a list of movies derived from the previously mentioned lists. This brought in rating information and gave me a framework for country, actor, and character data should I choose to look at those in the future.
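The actual work happened in R with tidytext and dplyr, but for anyone following along from the Python scraping step, the same pipeline is easy to sketch with pandas and NLTK. This is an equivalent sketch, not my real code, and the two-line dataframe stands in for the parsed script:

```python
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer
from nltk.tokenize import word_tokenize

# First run may need: nltk.download("punkt"); nltk.download("stopwords")

# Stand-in for the parsed script: one row per line of dialogue,
# with its speaker attached.
df = pd.DataFrame({
    "character": ["DECKARD", "RACHAEL"],
    "line": [
        "Replicants are like any other machine.",
        "I'm not in the business. I am the business.",
    ],
})

# Tokenize: one row per (character, word), mirroring tidytext's unnest_tokens().
tokens = (
    df.assign(word=df["line"].str.lower().apply(word_tokenize))
      .explode("word")
)

# Drop stop words and non-alphabetic tokens.
stops = set(stopwords.words("english"))
tokens = tokens[tokens["word"].str.isalpha() & ~tokens["word"].isin(stops)]

# Stem so "replicant" and "replicants" count as one term.
stemmer = SnowballStemmer("english")
tokens["stem"] = tokens["word"].apply(stemmer.stem)
```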


In R, I made a lot of static charts. These looked at descriptive statistics like counts of lines, words, and characters, but also moved into sentiment analysis. I looked at the sentiment of lines and words in total, and found information about specific words. I used both the Bing and NRC sentiment lexicons to see if I could spot any differences. I made some static charts to get a sense of the kind of information I would run into, but ended up using Tableau to make more interactive charts and an overall dashboard. I landed on Tableau after several distracting detours: learning how to create parallax scrolling for a website, jumping into AWS to host my website, and playing around with CSS/HTML/JavaScript. (Please note: this project did not have a requirement for a website; I distracted myself by creating the presentation and spent a disproportionate amount of time focusing on it for a while. I learned some cool stuff, though.)
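The sentiment scoring itself is just a join between the token table and a word-level lexicon. Continuing the Python sketch (the lexicon here is a tiny stand-in; the real analysis joined against the full Bing and NRC lexicons via tidytext in R):

```python
import pandas as pd

# Tiny stand-in lexicon; the real Bing lexicon maps thousands of words
# to "positive" or "negative".
bing_like = pd.DataFrame({
    "word": ["like", "proud", "trouble", "evil"],
    "sentiment": ["positive", "positive", "negative", "negative"],
})

# `tokens` is the (character, word) table from the previous sketch.
scored = tokens.merge(bing_like, on="word", how="inner")

# Net sentiment per character: positive minus negative word counts.
per_char = (
    scored.groupby(["character", "sentiment"]).size()
          .unstack(fill_value=0)
)
per_char["net"] = per_char.get("positive", 0) - per_char.get("negative", 0)
print(per_char.sort_values("net", ascending=False))
```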

After I pivoted my focus back to the data, I realized that I needed to let it guide the visualization, rather than trying to create some cool robot-shaped visualization and shoehorning data into it. Because I was looking at several groupings of data using several methods, a dashboard seemed appropriate. I’m familiar with Tableau, but I’d only used older versions, so working with the newest desktop version was really exciting. There are more options, like extensions, and more integration between sheets.

While I made my visualization, I tried to keep two things in mind: accessibility and ease of understanding. I chose blue as the dominant color to evoke the blue wash of the film itself. I wanted to make sure that when I used color, the information was still accessible to readers who have difficulty differentiating between colors, so I stayed away from purples and yellows and made sure that where I used gray, it was light enough to stand clearly apart from the dark blue. I also tried to lean on pre-attentive attributes when creating the visualizations, relying heavily on hue and position to make comparisons clear.


 

Although this movie is more than 30 years old, I still feel compelled to mention that there may be (very small) spoilers in the following section.

 

The script I had was for the theatrical release, and so it included the Deckard voice-overs. I looked at this information first, but began to wonder whether anything changed when the voice-over was removed. Luckily, the way the data was structured and parsed let me remove the voice-over lines fairly easily, so I was able to do a quick comparison.
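In the Python sketch, that filter is essentially one line, assuming the parsed script flags voice-over lines next to the speaker. The "(V.O.)" marker is the usual screenplay convention, but check how your own data encodes it:

```python
# `df` is the parsed script dataframe (one row per line, with speaker).
# Voice-over lines are assumed to be flagged like "DECKARD (V.O.)".
is_vo = df["character"].str.contains(r"\(V\.O\.\)", regex=True, na=False)

with_vo = df
without_vo = df[~is_vo]

print(f"Dropped {int(is_vo.sum())} voice-over lines of {len(df)} total")
```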


Voice-over version vs. non-voice-over version

Although it doesn’t sound like a great idea, I started by adding a static word cloud tied to sentiment as an image. I know the issues with word clouds, but I wanted to see if one could help here at all. I thought adding another dimension (color associated with sentiment, with like sentiments grouped together) would let a word cloud show some interpretation, but it didn’t really add much. It vaguely showed a difference in sentiment, but I got the same information from my other charts. Instead, I relied on bar charts, symbols, and one heatmap to show my information.
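A sentiment-colored cloud is easy to reproduce with the wordcloud package if you want to try it. This sketch colors words from the scored token table built earlier; the hex values are stand-ins for my dashboard palette:

```python
from wordcloud import WordCloud

# Word frequencies and polarities from the `scored` table built earlier.
freq = scored["word"].value_counts().to_dict()
polarity = scored.drop_duplicates("word").set_index("word")["sentiment"]

def sentiment_color(word, **kwargs):
    # Dark blue for positive words, light gray for negative ones.
    return "#1f4e79" if polarity.get(word) == "positive" else "#aaaaaa"

cloud = WordCloud(background_color="white", color_func=sentiment_color)
cloud.generate_from_frequencies(freq)
cloud.to_file("sentiment_wordcloud.png")
```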


I wasn’t particularly surprised to see that Tyrell’s sentiment was the most positive; as the creator of the replicants, he was proud of his work and pleased to see Roy, regardless of how that meeting ended. Bryant was actually more positive than I expected. Seeing this makes me wonder whether I put too much emphasis on the way he says things versus the actual words he said, or whether this analysis misses that inflection. I suspect it’s a combination of both. I also found a wide gap in line counts even between the first and second most talkative characters. For example, Rachael, whom I would consider one of the main characters, has a third as many lines as the main character. There seems to be little difference in sentiment between the voice-over and non-voice-over scripts, although the non-voice-over script skews more positive and shows more relative sentiment in general. There is still little-to-no difference in sentiment associated with word count.


I mostly stuck to quantifying words, but I did try to use the text itself in one of the sentiment analyses, sizing each word by its count. It showed differences to a degree, but didn’t add much to the analysis. I left it in, but added a tooltip showing the actual numbers for more fidelity in the comparison. Comparing the Bing lexicon to the NRC lexicon did show some differences, though: Bing surfaced more negative sentiment, while NRC surfaced more positive sentiment. NRC breaks sentiment down more granularly in general, so that may have something to do with the differences.
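That granularity difference matters when comparing the two head to head: NRC tags words with up to ten categories (anger, joy, trust, and so on) alongside positive/negative, while Bing is strictly binary. Continuing the sketch, with another tiny stand-in lexicon:

```python
import pandas as pd

# Stand-in NRC-style lexicon: one row per (word, category) pair.
nrc_like = pd.DataFrame({
    "word": ["like", "proud", "proud", "trouble", "evil"],
    "sentiment": ["positive", "positive", "joy", "fear", "negative"],
})

# Collapse NRC to its positive/negative rows before comparing with Bing.
nrc_polarity = nrc_like[nrc_like["sentiment"].isin(["positive", "negative"])]

# `tokens` and `bing_like` come from the earlier sketches.
comparison = pd.DataFrame({
    "bing": tokens.merge(bing_like, on="word")["sentiment"].value_counts(),
    "nrc": tokens.merge(nrc_polarity, on="word")["sentiment"].value_counts(),
}).fillna(0)
print(comparison)
```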


I then created my slides in Tableau using buttons and actions, with either images or text colored white to blend into the background. I added a scrollytelling extension to vary the way the text is presented and to direct the audience’s focus toward imagining future applications before showing my own ideas.


In the future, it might make sense to look at the data temporally, across many films and many time periods, and compare it to historical events or even historical sentiment. I am also interested in intra-film robot depiction: in Wall-E there is a good robot and a “bad” robot, in Star Trek you have Data and Lore, in modern Star Wars you have BB-8 and evil BB-8, and so on. It would be interesting to compare interactions between characters and the similar robots. Finally, I see a good chance to look at the locations of the movies and filmmakers and try to determine whether there are any cultural factors involved.


Although I didn’t get as deep into the data as I would have liked, I learned a lot about data scraping, conditioning, and visualization. I was still able to pull out some insights, and have a foundation upon which I can continue the analysis. (I also got to watch some cool movies in the name of “research for school”.)


You can go here for a quick demo of the dashboard.

