With contributions by Husein Shamshudin and Mark Simmonds.
Overwatch is an intense, 6v6, network-based action game played by millions of gamers in over 190 countries. The game challenges teams to capture more objectives than their opponents against the backdrop of fantasy cityscapes, using an array of tools and characters. In 2018, Activision Blizzard, the publisher of Overwatch, launched the inaugural season of the Overwatch League, an esports league of professional gamers representing city-based teams around the world who compete for millions in prize money and Overwatch League supremacy. The simple question we all want answered is, “Who is the best player?” The speed, strategy, and mechanics of the game create complex data combinations that challenge traditional player rankings.
In 2020, IBM became the official cloud, AI, machine learning, and analytics partner for the Overwatch League. Throughout this multiyear partnership, IBM will engage a global audience in novel ways by bringing AI-based solutions and insights to the league. This same technology can be used within enterprise applications such as banking and healthcare.
Challenge #1: Rankings
In traditional sports such as American football, any given team is segmented into two functional roles, offense and defense. Typically, players on the offensive side play or specialize only in offense, and likewise for the defensive side. As the roles or positions of players emerge and change, determining the best offensive and defensive players becomes harder. For example, in ESPN Fantasy Football, players can be drafted for multiple positions or slots that track with their real-world performance. See how AI was similarly leveraged to build a trade assistant for fantasy football teams. Overwatch takes this combinatorial problem to another level. The combinations of roles, heroes, maps, and team composition can instantaneously morph the capabilities of a player from one second to the next. As a result, determining who the best players are within a segment is extremely challenging.
First, let’s look at the game. Overwatch has three main roles: damage (offense), tank (defense), and support (whose main responsibility is to keep the team alive and in the fight). Typically, a team of six players has two members assigned to each role. Within each role, there are nearly a dozen specific characters from which players can choose. Each character’s primary ability is focused on the specific role of blocking (tank), attacking (damage), or healing (support). Where Overwatch gets complex is that characters can have secondary and, in some cases, tertiary abilities that cross roles. For example, Brigitte (pronounced brih-geet-teh) is a character in the support category who can supply Repair Packs to heal damaged teammates during a match. She also has the Rocket Flail, which she can swing to take out enemies in close-range melees. And to add the cherry on top, Brigitte possesses the Barrier Shield, a frontal energy barrier that can absorb damage from opponents, protecting herself and her teammates from harm. She can also use this shield to stun opponents with Shield Bash, temporarily immobilizing an opponent so she can finish them off. Support, damage, and tank…all in one character!
Figure 1. Brigitte is a support character in Overwatch
Now, with this understanding of Brigitte’s capabilities and the fact that there are six other support characters, each with unique mixes of capabilities, how does the Overwatch League go about determining the best support player in the league? Or damage, or tank…
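To make the cross-role point concrete, here is a minimal, hypothetical Python sketch of how a hero’s kit might be modeled. The hero and ability names come from the game, but the data model itself is purely an illustrative assumption, not the league’s actual schema.

```python
# Illustrative sketch only: a simplified model of how one hero's abilities
# can span multiple roles. Names and role mappings are assumptions for
# illustration, not the Overwatch League's actual data model.
from dataclasses import dataclass
from enum import Enum
from typing import List, Set


class Role(Enum):
    DAMAGE = "damage"
    TANK = "tank"
    SUPPORT = "support"


@dataclass
class Ability:
    name: str
    role: Role  # the role this ability most closely maps to


@dataclass
class Hero:
    name: str
    primary_role: Role
    abilities: List[Ability]

    def roles_touched(self) -> Set[Role]:
        """Return every role this hero's kit contributes to, not just the listed one."""
        return {self.primary_role} | {ability.role for ability in self.abilities}


brigitte = Hero(
    name="Brigitte",
    primary_role=Role.SUPPORT,
    abilities=[
        Ability("Repair Pack", Role.SUPPORT),   # heals damaged teammates
        Ability("Rocket Flail", Role.DAMAGE),   # close-range melee damage
        Ability("Barrier Shield", Role.TANK),   # absorbs incoming damage
        Ability("Shield Bash", Role.DAMAGE),    # stuns an opponent
    ],
)

print(sorted(role.value for role in brigitte.roles_touched()))
# ['damage', 'support', 'tank'] -- one "support" hero, three roles touched
```

Even this toy model shows why a single role label undersells a player’s contribution: the statistics that describe a “support” player can span all three roles.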
Ben Trautman, lead data analyst for the Overwatch League, put it this way, “This is a complex game. There’s a lot going on in any given match. Different roles, different abilities. Countless decisions and interactions. For years, we struggled with the best way to objectively evaluate performance. To give our players, coaches, general managers, and especially, our fans a way to compare players and teams beyond statistics and standings. I used to bang away on my laptop trying to make it work. But, there was just so much data. We worked on a ranking system for years with no luck.”
Ben…you had us at “complex.”
Assembling the team and the tech
There’s a subtle irony in the fact that tackling this challenge, within a 6v6 game, involved enlisting a six-person team of some of IBM’s brightest data scientists, software engineers, and designers. This team, based in Germany, was assembled as an Area 631 project. Area 631 is an incubator program that originated at the IBM Canada Lab in Markham in October 2018. The idea is simple: a team of 6 innovators works for 3 months to create 1 breakthrough. The program has been so successful that Area 631 went global in July 2020.
“The Area 631 mission is to be light on process, heavy on support. Working in a focused, collaborative space (albeit virtually during the pandemic) helps enable teams to efficiently build something new, innovative, and disruptive,” states Steven Astorino, IBM Vice President, Development for Data and AI and Canada Lab Director. “Teams have access to all of the tools they need, as well as a team of executive mentors and sponsors who provide insight and advice throughout the residency.”
Now, the assembled Area 631 team[1] was ready for an esports grand challenge, equipped with a powerful development environment consisting of:
- IBM Cloud Pak for Data: A platform that provides a unified experience to help organizations optimize data and create an information architecture for AI. It simplifies and automates how data is collected, organized, and analyzed, and how AI is infused across the business.
- IBM Watson Studio: A collaborative development environment that helps organizations build, run, and manage AI models, and optimize decisions at scale across hybrid multicloud environments. As part of IBM Cloud Pak for Data, it empowers teams to operationalize AI anywhere, unite around a common workflow, simplify AI lifecycle management, and accelerate time to value with an open, flexible multicloud architecture.
- AutoAI: An automated machine learning tool within IBM Watson Studio that helps organizations with little to no data science expertise automate the end-to-end process of building and maintaining machine learning models.
With this stack, all deployed within Red Hat OpenShift on IBM Cloud, the team began developing a novel ranking system for the Overwatch League.
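As a rough illustration of what an automated tool like AutoAI takes off a team’s plate, the following sketch searches a handful of candidate models and hyperparameters on synthetic data using plain scikit-learn. It is not the AutoAI API, and the candidate list is an arbitrary assumption; it simply shows the kind of end-to-end model selection that AutoAI automates at much larger scale.

```python
# Conceptual sketch of automated model selection and tuning, the kind of work
# an AutoML tool performs. Uses plain scikit-learn on synthetic data so the
# example is self-contained; candidates and grids are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in for tabular player statistics and a performance target.
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Candidate estimators with small hyperparameter grids to search over.
candidates = {
    "ridge": (Ridge(), {"alpha": [0.1, 1.0, 10.0]}),
    "random_forest": (RandomForestRegressor(random_state=42), {"n_estimators": [100, 300]}),
    "gradient_boosting": (GradientBoostingRegressor(random_state=42), {"learning_rate": [0.05, 0.1]}),
}

best_name, best_model, best_score = None, None, float("-inf")
for name, (estimator, grid) in candidates.items():
    search = GridSearchCV(estimator, grid, cv=5)     # cross-validated grid search
    search.fit(X_train, y_train)
    score = search.score(X_test, y_test)             # R^2 of the best pipeline on held-out data
    if score > best_score:
        best_name, best_model, best_score = name, search.best_estimator_, score

print(f"Selected pipeline: {best_name} (R^2 on held-out data: {best_score:.3f})")
```

In the team’s stack, AutoAI plays this role inside Watson Studio rather than through hand-rolled loops like these.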
The team kicked off on September 14, 2020, with a 3-month deadline to discover and implement the breakthroughs needed for an industry-first power ranking. The Overwatch League, a digitally native environment and culture, was running the 2020 season with teams competing from remote locations due to COVID-19. In prior seasons, fans could enjoy league play in large, dedicated arenas, where players competed in front of live crowds with match play projected on Jumbotrons. There’s nothing like being in the midst of a live Overwatch League arena and hearing the shoutcasters!
At this point in the 2020 season, the regular season was nearing its end, and the league operations team was preparing for the post-season playoffs and the Grand Finals. The 631 team built and tuned an evolving ranking system during the climax of the Overwatch League season.
Along with mastering their development tools, the team learned to play the game, either on a PC or a PlayStation, and began a twice-weekly cadence of calls with several members of the Activision Blizzard team, who provided insights into the game, the characters, and the competitors. What the team quickly discovered, to its delight, was that Overwatch is an environment rich with data.
Almost every action in Overwatch is captured. To illustrate the enormity of the data, the game collects 360+ player performance statistics during each action step. These statistics describe a character’s performance in the tank, support, and damage roles, and largely focus on aggregating totals and deriving performance rates based on the total time of the action step (that is, total heals, total healing, healing per second, and healing per 2 seconds); a small sketch of this kind of aggregation follows the list below. The data, originally available to the team through the Overwatch League’s Stats Lab, amounted to:
- 1.8M data points in a match
- 18M data points each week of the regular season
- 410M data points per season
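To picture how those per-action-step totals become the rate statistics described above, here is a minimal, hypothetical pandas sketch on invented sample data. The column names and numbers are assumptions for illustration; the real Stats Lab feed is far richer.

```python
# Hypothetical sketch: turn raw per-action-step totals into rate statistics
# such as healing per second. Columns and sample rows are invented for
# illustration and are not the Stats Lab schema.
import pandas as pd

action_steps = pd.DataFrame(
    {
        "player": ["A", "A", "B", "B"],
        "hero": ["Brigitte", "Brigitte", "Ana", "Ana"],
        "duration_s": [95.0, 120.0, 95.0, 120.0],   # length of each action step
        "healing": [1800.0, 2400.0, 2100.0, 2600.0],
        "damage": [900.0, 1100.0, 300.0, 450.0],
    }
)

# Aggregate totals per player and hero, then derive per-second rates.
per_player = (
    action_steps.groupby(["player", "hero"], as_index=False)[["duration_s", "healing", "damage"]]
    .sum()
    .assign(
        healing_per_s=lambda df: df["healing"] / df["duration_s"],
        damage_per_s=lambda df: df["damage"] / df["duration_s"],
    )
)

print(per_player)
```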
This was the perfect challenge for the team that was armed with the right tools to tackle it.
Connecting the dots
Given the problem statement and the tools used, how do you think we pulled it all together to develop this industry-first ranking system? Here’s a hint: the hypothesis that guided our scientific process toward a solution uses a combination of techniques and technologies, as shown in the following figure.
Figure 2. Candidate high-level architecture for OWL power rankings
Our hypothesis and approach will be covered in the next part of this series, coming after the 2021 season launches on April 16. In the meantime, you can check out the 2021 preseason Overwatch League Power Rankings with Watson to see where your favorite players rank, and explore these free AI resources for developers.
The approach used to create the Overwatch League’s official power ranking can also be applied to enterprise projects. The technology stack of IBM AutoAI, IBM Cloud Pak for Data, and IBM Watson Studio helps agile teams take on data science challenges across diverse industries such as automotive, food, and energy.
Now, let’s enjoy the Overwatch League 2021 season!
[1] The Area 631 team membership included: