What began as a strong rumor was confirmed by Riot Games over the weekend: the company will launch a new evaluation program in which moderators will listen in on random chat rooms for the safety and wellbeing of players.
The program seeks to combat disruptive behavior and call out toxic players in Valorant, but ironically, this first stage is intended to "train" the system (likely AI, to avoid relying on human monitors in the long term) rather than to immediately hold players accountable for their wrongdoing.
Riot Games plans to expand this functionality to the rest of its catalog, such as League of Legends and its derivatives, but decided to start with Valorant, likely because toxicity is strongest there.
The first stage of this evaluation process will launch only in the North American/English market, expanding to other markets and languages over time.
Via Riot Games