TikTok is using its platform to combat online bullying.
Under fire for the way users have free rein to verbally assault content creators, the company has rolled out two features designed to curb negativity on the app.
The first is a setting called "Filter all comments," which lets a video creator manually approve every comment on a video. Comments won't be displayed until the account holder has reviewed them. This extends TikTok's existing controls, which allow creators to filter out specific keywords or offensive language.
The second is an AI-based feature that identifies unkind or offensive language in a comment and prompts the user to reconsider before posting it.
Of course, neither feature is foolproof. A commenter can still post a hateful message by overriding TikTok's prompt, and creators with millions of followers could end up overwhelmed by the need to sift through hundreds of thousands of comments. Still, it's a step toward making the app a friendlier and more inclusive environment.
A representative for TikTok said, "We want TikTok to continue to be a place where creativity can thrive, and we hope these new features help to further foster kindness and community."