This was submitted as a project for my Big Data Analytics class in my MS Business Analytics program. The original title is “Exploring the Association of Movie Trailer Performance on YouTube and Box Office Success”. My teammates were Yi Cai, Michael Friscia, and Zheyu Tian. This project was done for educational purposes only. Check out the GitHub page for the files and data set. Due to the-numbers.com's policies regarding their data, that particular data set won't be uploaded.
UPDATE: If you scroll below, you will see that the final accuracy was 82.55%. Using a genetic algorithm with a scikit-learn implementation, the accuracy was later improved to 98.66% (with a final-generation average accuracy of 92.28%). Check out the code in this GitHub repo.
The purpose of this study is to determine if there is a correlation between the performance of trailers on YouTube and Hollywood movie sales.
- By evaluating important predictors from YouTube viewers, studios and agencies can create and publish movie trailers on YouTube more efficiently, thus:
- driving box office ticket sales domestically and globally
- generating more revenue
- Trailer performance can be prioritized and improved if it is shown to correlate with box office/post-release sales
- Data was collected from YouTube, via the YouTube Data API, and from the-numbers.com
- YouTube – trailer performance metrics and comments
- the-numbers.com – movie box office data
- 32.4GB (when comments are expanded into 1 line per comment)
- 1,713 movies
- 5,244 trailers
- 2,979,511 comments
- The ROI variable was not present in either source and had to be derived.
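Since ROI had to be derived, here is a minimal sketch of the derivation in pandas. The column names are hypothetical stand-ins for the actual the-numbers.com extract:

```python
import pandas as pd

# Hypothetical column names; the actual the-numbers.com extract may differ.
movies = pd.DataFrame({
    "title": ["Movie A", "Movie B"],
    "production_budget": [100_000_000, 20_000_000],
    "worldwide_gross": [350_000_000, 15_000_000],
})

# ROI as net return relative to production budget.
movies["roi"] = (
    movies["worldwide_gross"] - movies["production_budget"]
) / movies["production_budget"]
```

A movie that grosses 3.5x its budget gets an ROI of 2.5; one that fails to recoup its budget gets a negative ROI.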
Hypothesis and Rationale
- There is a positive correlation between YouTube movie trailer performance indicators and box office/video sales performance.
- Rationale: “Likes” = Sales
- There is a positive correlation between movie trailer comment sentiment and box office/video sales performance.
- Rationale: If trailers are viewed positively, people will be more likely to watch the movie.
- Data was extracted and transformed using Python; output files were CSV and TXT files.
- Three sentiment models were implemented in the project: two polarity-based models using Bing Liu's opinion lexicon and the Harvard IV-4 dictionary, and the NLTK sentiment analyzer.
- To process part of the sentiment analysis, Apache Spark was used.
- The sentiment scores were also used to help identify the ROI of each movie using a neural network model.
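A polarity-based scorer of the kind described above can be sketched in a few lines. The tiny word lists below are illustrative stand-ins for the actual Bing Liu and Harvard IV-4 lexicons:

```python
# Minimal polarity-based scorer in the spirit of the Bing Liu / Harvard
# models: net count of positive minus negative words per comment.
# These tiny word sets are stand-ins for the real lexicons.
POSITIVE = {"great", "awesome", "love", "amazing", "good"}
NEGATIVE = {"bad", "boring", "terrible", "awful", "hate"}

def polarity_score(comment: str) -> int:
    """Return (# positive words) - (# negative words) in the comment."""
    words = comment.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos - neg
```

A real implementation would also handle punctuation, negation ("not good"), and tokenization, which is part of why the rule-based VADER approach was used as a third model.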
Results and Discussion
Variable Correlation Test
The graph, generated in R, shows the correlations between the independent and dependent variables.
There are three main conclusions based on the graph:
1. The graph shows a positive correlation among view counts, comment counts, and the likes/dislikes ratio.
2. The graph also supports the hypothesis that trailer comment counts and likes are positively correlated with movie box office performance.
3. Unfortunately, the three sentiment models show little correlation with the box office data (e.g., ROI), so the initial sentiment hypothesis was not supported. The two dictionary-based sentiment models even have negative correlations with view counts, comment counts, and likes/dislikes.
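The original correlation test was run in R; the same kind of check can be reproduced with pandas on a toy frame (column names here are assumptions, not the project's actual schema):

```python
import pandas as pd

# Toy illustration of the variable correlation test.
# Column names are assumptions modeled on the variables described above.
df = pd.DataFrame({
    "countsViews":    [1.0, 2.0, 3.0, 4.0],
    "countsComments": [10,  19,  31,  42],
    "roi":            [0.5, 0.9, 1.4, 2.1],
})

corr = df.corr()  # Pearson correlation matrix
print(corr.loc["countsViews", "countsComments"])
```

On the real data this matrix is what the correlation plot visualizes; strongly co-moving columns (views and comments) show coefficients near 1.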
Time Series Analysis
- Interestingly, despite the 2008 financial crisis, overall ROI that year turned out to be good.
- Another interesting finding is that ROI decreased continuously after 2008.
Two approaches to sentiment analysis were implemented:
- polarity-based models using Bing Liu's opinion lexicon and the Harvard IV-4 dictionary, which net the counts of positive and negative words found in each comment, and
- the NLTK sentiment analyzer with the VADER lexicon, which is a rule-based approach
- Scores were scaled to [-1, 1] and centered at zero, so positive sentiment yields scores > 0 and negative sentiment yields scores < 0.
- Comparing the three models, the polarity-based models gravitated toward negative sentiment. This can be explained by the composition of the dictionaries used: if a dictionary contains more negative than positive entries, higher negative-word counts become more likely.
- The NLTK sentiment analyzer, by contrast, scored the comments as more positive overall.
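One plausible way to scale net word counts into [-1, 1], as described above, is to normalize by the total count of sentiment words. The project's exact scaling scheme is not specified, so this is an assumption:

```python
def scaled_polarity(pos: int, neg: int) -> float:
    """Scale a net sentiment-word count to [-1, 1].

    pos/neg are the counts of positive/negative lexicon words in a
    comment. Normalizing by their sum keeps positive comments > 0 and
    negative comments < 0; the project's exact scheme is an assumption.
    """
    total = pos + neg
    if total == 0:
        return 0.0  # no sentiment words -> neutral
    return (pos - neg) / total
```

A comment with three positive and one negative word scores 0.5; an all-negative comment scores -1.0.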
Sentiment Analysis – Movie Studios
- Based on the Harvard sentiment dictionary, Paramount Vantage has the lowest average sentiment score whereas Weinstein Company has the highest.
- The VADER sentiment dictionary determined that Apparition has the highest average sentiment score, while Focus/Gramercy has the lowest.
- The Bing Liu sentiment dictionary ranked Freestyle Releasing lowest and Apparition highest in average sentiment score.
Sentiment Analysis – Genre
- Under both the Bing Liu and Harvard dictionaries, Romantic Comedies have the highest average sentiment score and Documentaries the lowest.
- Interestingly, for the NLTK Analyzer, the Concerts and Performances genre has the lowest average sentiment score, while Romantic Comedy has the highest score.
Clustering (to follow)
Predicting Box Office ROI Performance using Neural Net
- ROI performance was classified using four bins:
- Poor (less than the 25% quantile)
- Passing (between 25% and 50% quantile)
- Ok (between the 50% and 75% quantile)
- Great (above the 75% quantile)
- Neural Net implemented using R
- ROI Performance ~ countsComments + countsViews + Ratio_of_Likes_and_Dislikes + ProdBudget + genre + MPAArating + MovieStudio + BingLiuSentiment + HarvardSentiment + VaderSentiment
- Model Accuracy = 82.55%
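The pipeline above (bin ROI into quartile classes, then fit a neural net on the trailer features) can be sketched with scikit-learn; the project's net was built in R, and the update at the top mentions a later scikit-learn reimplementation. Synthetic data stands in for the real features here, and all names and hyperparameters are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-ins for the real predictors (counts, ratios, sentiment).
X = pd.DataFrame({
    "countsComments": rng.poisson(500, n),
    "countsViews": rng.poisson(50_000, n),
    "likes_dislikes_ratio": rng.uniform(1, 50, n),
    "BingLiuSentiment": rng.uniform(-1, 1, n),
})

# Toy ROI driven by the features, binned into the four quartile classes
# (Poor / Passing / Ok / Great) exactly as described above.
roi = 0.5 * X["BingLiuSentiment"] + X["countsViews"] / 50_000 + rng.normal(0, 0.02, n)
y = pd.qcut(roi, q=4, labels=["Poor", "Passing", "Ok", "Great"])

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))  # training accuracy on the toy data
```

On real data, categorical predictors such as genre, MPAA rating, and studio would need one-hot encoding before being fed to the net, and accuracy should be measured on a held-out set.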
- Given the neural network model's accuracy, studios can estimate a movie's ROI class from YouTube signals, specifically the number of comments, the ratio of likes to dislikes, and the sentiment scores from the three models.
- If trailer performance is managed with these predictors in mind, there is a higher probability of box office success and, in turn, a higher ROI for movie studios and production companies.
- Although the sentiment results differ across the three dictionaries, this suggests that some dictionaries treat relatively neutral words as negative or positive.
- Better alternatives for predicting the sentiment of YouTube movie comments would be domain-specific dictionaries, or machine learning classifiers trained on a labeled comment-sentiment data set.
Scope and Limitations
There are many popular websites and applications where users comment on trailers or movies, such as Rotten Tomatoes, Facebook, and Twitter. However, in this study, YouTube was the only trailer source used.
Trailers are not the only factor that affects box office and video sales. Advertising, the cast, and competition from other movies released at the same time can also influence a movie's box office results; these factors are not included in this study. Further studies could incorporate those variables.
- VADER sentiment analysis:
- Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.