With all the bad press chatbots and fake news have received recently, this new tool could not have arrived at a better time. Twitch is launching a new moderation tool that gives users more control over what they are exposed to during chats.
The new tool is called AutoMod, and it uses both machine learning (ML) and natural language processing (NLP) techniques to filter unwanted content from chat at the user’s request. Twitch programming manager and inclusive group leader, Anna Prosser Robinson, said in a statement, “One of the best ways we can help bring about change is to provide tools and education that empower all types of voices to be heard.” Once switched on, any messages detected as risky will be held for human moderator approval before being sent on to chat.
Users will be able to decide for themselves what content they would like to filter and how strict the filtering should be. AutoMod can even filter out inappropriate symbols. Hopefully, thanks to Twitch, this will make people’s online experiences more enjoyable rather than something they are constantly trying to avoid. Currently, the system is only available in English, but the company plans to add additional languages soon, so keep your eyes and ears peeled.
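The hold-for-review flow described above can be sketched roughly as follows. Everything here is a hypothetical illustration: the class, the term list, and the scoring weights are invented for this sketch, and Twitch’s actual AutoMod relies on ML and NLP models rather than a static word list.

```python
from dataclasses import dataclass, field

# Hypothetical risk terms and weights; the real AutoMod uses
# machine learning and NLP, not a hard-coded list like this.
RISK_TERMS = {"mild_insult": 1, "spam_symbols": 2, "slur": 3}

@dataclass
class AutoModSketch:
    strictness: int = 2                         # higher = more messages held
    held: list = field(default_factory=list)    # queue awaiting a moderator
    chat: list = field(default_factory=list)    # messages visible in chat

    def score(self, message: str) -> int:
        # Toy scoring: sum the weights of any flagged terms present.
        return sum(w for term, w in RISK_TERMS.items() if term in message)

    def submit(self, message: str) -> str:
        # Risky messages are held for human approval instead of posting.
        if self.score(message) >= self.strictness:
            self.held.append(message)
            return "held"
        self.chat.append(message)
        return "posted"

    def approve(self, message: str) -> None:
        # A human moderator releases a held message into chat.
        self.held.remove(message)
        self.chat.append(message)

mod = AutoModSketch(strictness=2)
print(mod.submit("hello everyone"))    # clean message goes straight to chat
print(mod.submit("that was a slur"))   # risky message is held for review
mod.approve("that was a slur")         # moderator lets it through
```

Raising `strictness` in this sketch mirrors the user-configurable filter levels described above: the same message can be posted immediately under a lenient setting and held for review under a strict one.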