Controlling toxic players through Artificial Intelligence



Riot Games, creator of the popular game League of Legends, is using machine learning and artificial intelligence to combat verbal abuse in its game. The judgments now depend on machines, not people.

Insults and verbal harassment have been endemic problems of online communities since their birth, especially in certain video games. The clearest example is League of Legends, a game notorious for its “toxic community”. Teenagers and young adults are the main protagonists of these practices, helped by anonymity and by the adrenaline of a competitive game that rewards victories and penalizes defeats in ranked matches.

Since the game's launch, and it is currently the most popular game in the world by number of daily online players, verbal abuse has been one of the biggest problems for Riot Games. It prevents many players from enjoying their matches, creates unnecessary stress and can drive new players to quit.

It is easy to find new players who, attracted by the game, run into four teammates who insult them, mock them and blame them for the defeat, when in truth helping them would be the likeliest way to win. Yet many players vent their frustration through insults.

Everyday life for many players of League of Legends.

Riot Games has a team of scientists, designers and psychologists working on systems to improve the way players interact in League of Legends. In recent years this team has been experimenting with various systems and techniques, supported by machine learning, designed to monitor how players interact, with the objective of punishing the “toxic” ones and rewarding positive behavior. According to Jeffrey Lin, lead designer of social systems at Riot Games, the results, reported by MIT Technology Review, have been surprisingly positive: of the millions of cases monitored and punished, 92% have not reoffended.

From human work...

Lin, a cognitive neuroscientist, believes that the techniques companies already use can be applied in the online context of video games; indeed, he thinks his team has created a kind of antidote to negative behavior regardless of the context in which it occurs.

Image representing the Tribunal, a group of volunteer players who judge whether reported player behavior is in line with the spirit of the game. It has a name and a representation in keeping with the game's “lore”.

The project began years ago, when the team introduced to the community a system of “judgment” called the Tribunal, meant to punish players whose behavior was hurting the game. Under this system, players reported by others in a match for inappropriate behavior were put on trial: a team of volunteer players decided, by reading chat logs, whether the behavior was acceptable or not. The automated, part-human system worked rather well; according to Lin, 98% of the verdicts were in line with the view of Riot Games' social team.

...to the automatic

Millions and millions of incidents were handled this way, with the help of many players. The League of Legends social team then began to distinguish clear patterns in the toxic language players employed. So, to streamline the process, they decided to apply machine learning techniques to the huge amount of data extracted every hour from the thousands of games played simultaneously around the world.
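Neither Riot nor the article details the model behind this pattern detection, but the general technique, training a text classifier on chat lines labeled by earlier Tribunal verdicts, can be sketched in a few lines. Everything below (the data, the names, the model choice) is a hypothetical illustration, not Riot's actual system:

```python
# Minimal sketch of a toxic-chat classifier, assuming Tribunal-style
# verdicts are available as labels. Hypothetical data and model choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Chat lines with Tribunal-style labels: 0 = acceptable, 1 = toxic.
chat_lines = [
    "gg wp everyone, nice game",
    "you are all useless, uninstall",
    "nice gank, thanks for the help",
    "report this idiot, worst player ever",
]
labels = [0, 1, 0, 1]

# TF-IDF features over word unigrams/bigrams feed a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(chat_lines, labels)

# Score a new line; a probability threshold decides whether to flag it.
prob_toxic = model.predict_proba(["uninstall the game, you feeder"])[0][1]
print(f"toxicity score: {prob_toxic:.2f}")
```

At League of Legends scale the training set would be millions of Tribunal verdicts rather than four lines, but the shape of the pipeline is the same.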

Illustration of machine learning. s6.io

The latest system, introduced earlier this year, is driven by artificial intelligence rather than left to the players' discretion. Riot is now much more efficient at finding toxic players in the game and, more importantly, at offering feedback on what they have just done wrong. Warning and punishing immediately, rather than weeks later as with the Tribunal, is one of the biggest keys to success in the race to eradicate bad behavior.

The system launched four months ago and was announced on Riot's blog. The company is usually very communicative with its players and does not hesitate to show a human side when dealing with these social issues, experimenting openly and sharing both progress and failures.

The system is programmed so that when one player reports another's attitude, the AI determines whether that attitude fits the spirit of the online game or not. Homophobic, racist or sexist remarks, threats and the like are punished with a two-week ban, or even a permanent ban if the offender is a repeat offender. The key is that, being an algorithm, the resolution takes only a few minutes, in this order:

  • The report is checked to see whether or not it is false.
  • The offender's history of violations and chat logs are retrieved.
  • A resolution to the case and an appropriate punishment are generated.
  • The offender is notified.

Finally, the system notifies the player by email, attaching the chat log and explaining what went wrong and why he has been punished. The punishment comes right after the action and comes explained, so the player understands it far more easily than he would weeks later, no longer remembering what he said or in which game.
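The article gives only that outline; a toy, self-contained sketch of such a pipeline, in which every function name, data structure and rule is hypothetical, could look like this:

```python
# Toy sketch of the report-resolution flow described above. Every name
# and rule here is a stand-in, not Riot's actual system.
from dataclasses import dataclass

@dataclass
class Report:
    reporter_id: str
    offender_id: str
    chat_log: list[str]

# Stand-in store of prior offenses; a real system would query a service.
violation_history = {"offender42": 1}

def is_false_report(report: Report) -> bool:
    # Step 1 placeholder: an empty chat log cannot support a verdict.
    return not report.chat_log

def looks_toxic(line: str) -> bool:
    # Placeholder for the trained classifier sketched earlier.
    return any(word in line.lower() for word in ("idiot", "uninstall"))

def notify(player_id: str, verdict: str, evidence: list[str]) -> None:
    # Step 4: mail the verdict together with the offending chat lines.
    print(f"[mail to {player_id}] {verdict}; offending lines: {evidence}")

def resolve(report: Report) -> str:
    if is_false_report(report):                              # step 1
        return "report dismissed"
    priors = violation_history.get(report.offender_id, 0)    # step 2
    toxic = [ln for ln in report.chat_log if looks_toxic(ln)]
    if not toxic:
        return "no violation found"
    # Step 3: the punishments the article describes, two weeks or
    # permanent for repeat offenders.
    verdict = "permanent ban" if priors > 0 else "14-day ban"
    notify(report.offender_id, verdict, toxic)                # step 4
    return verdict

print(resolve(Report("r1", "offender42", ["uninstall the game, idiot"])))
```

The two-week and permanent bans mirror the punishments the article names; everything else is illustrative glue.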

Supported by algorithms rather than by individuals, the new system has dramatically improved Riot's so-called “reform rates”. When a player incurs a deterrent penalty, such as losing chat for dozens of games or being barred from ranked play, he is considered reformed once he goes a period of time without being penalized again.

The main challenge for an algorithm is to discern context. Imagine a football match between friends: in that setting an insult may not offend at all, and sarcasm can be routine in a friendly atmosphere. A machine does not know how to detect sarcasm, and that is the biggest barrier to relying solely on it to judge these reports. A player should not be penalized for a joke made in a context of cooperation and good humor, but should be when he makes threats and insults in a hostile environment. It is still very difficult for algorithms to detect these contexts, just as it is sometimes difficult even for other people.
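A tiny, purely illustrative example of why context is so hard: a keyword-style scorer assigns exactly the same signal to friendly banter and to genuine abuse.

```python
# Toy illustration of the context problem: a keyword scorer cannot tell
# banter between friends from hostility. Hypothetical example.
def naive_toxicity(line: str) -> bool:
    return any(w in line.lower() for w in ("idiot", "uninstall"))

banter = "haha you idiot, that was the funniest play I have ever seen"
abuse = "you idiot, uninstall and never play again"
print(naive_toxicity(banter), naive_toxicity(abuse))  # True True
```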

Lin and his team try to solve these problems with more machine learning systems. For example, even when the system identifies “toxic” behavior, that decision is contrasted with other systems before the final verdict is validated. Each report is weighed against the reporting player's history of upheld reports: many players report without reason and could mislead the judging system.
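The article does not explain how that weighting works; one plausible, purely illustrative scheme is to discount reports from players whose past reports were mostly rejected. All the numbers and names below are made up:

```python
# Illustrative reporter-credibility weighting; not Riot's actual method.
def reporter_weight(confirmed: int, total: int, prior: float = 0.5) -> float:
    """Smoothed fraction of a player's past reports that were upheld."""
    return (confirmed + prior) / (total + 1)

# A reporter with 9 of 10 reports upheld vs. one with 1 of 10.
print(reporter_weight(9, 10))  # ~0.86: report taken nearly at face value
print(reporter_weight(1, 10))  # ~0.14: report heavily discounted

# A verdict could require classifier score * reporter weight past a threshold.
classifier_score = 0.7
flag = classifier_score * reporter_weight(1, 10) > 0.5
print(flag)  # False: a low-credibility report alone triggers no action
```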

There is still no magic formula


What matters most is still that punishments are designed to fit each particular case: chat restrictions for those who constantly insult in games, for example. Good behavior is also rewarded, although it ought to be every player's duty anyway.

Toxic players are not, as a rule, bad people; they are just having a bad day. The curious thing, Lin believes, and the main lesson to be learned from his work with data from millions of players across all demographics, is that “toxic behavior does not necessarily come from bad people; it is usually normal people having a bad day,” he told Justin Reich for MIT Technology Review. “Our strategies to combat it have to address how the anonymity the Internet provides brings out the worst in us.”

There is still no magic formula against those who cause problems from the refuge of Internet anonymity, no constant, infallible solution for everyone who has a bad day and takes it out on others online. It happens inside and outside of gaming. But Riot Games believes in encouraging change through its game, its community and its communication channels, and in experimenting with all the advantages of artificial intelligence and machine learning, even though their true potential is still limited.

