YouTube has introduced enhancements to its video-sharing platform in an effort to improve spam and abuse detection in comments and live chat.
The update comes at a time when social media companies are facing increased scrutiny over their inability to limit spam, harassment, and abuse, which is believed to be harming the mental health of users exposed to it.
The company has improved its mechanisms that detect spam in comments and bots in live chats.
YouTube has also introduced a new feature that warns users when a comment posted from their account is found to violate the platform's Community Guidelines.
According to YouTube's official support page, the company has improved its machine-learning models, which are now equipped to detect advanced spamming techniques utilised by malicious users.
YouTube also confirmed that it has removed 1.1 billion spam comments in the first half of 2022 alone.
Meanwhile, the Alphabet-owned video-sharing platform has also improved its automated bot detection systems, which will be deployed during live chats to limit disruption from bot activity.
The most significant update from YouTube is a new mechanism for comment removals, warnings, and timeouts.
The feature will notify users with a warning when an abusive comment posted from their account is found to have violated YouTube's Community Guidelines.
Repeat offenders will be temporarily suspended from commenting for up to 24 hours, YouTube confirmed in its support page post.
These tools are currently limited to users commenting in English.
However, YouTube is planning to expand these capabilities to other languages as well in the coming months.
The platform also recently announced that it would start certifying doctors, nurses, and other health care professionals in an effort to limit health-related misinformation on YouTube.