Content moderators are hired to work behind the scenes of all major social media platforms to remove content that does not adhere to laws or platform guidelines. They also make decisions about individual users, determining whether a user who has failed to adhere to the app's guideline policies should be banned. Content moderation is a much-needed service provided by social media apps to prevent users, especially younger users, from viewing explicit or illegal content on the platform, as well as content such as hate speech and spam (Jackson, 2024).

Kik's community guidelines prohibit 'illegal content, harmful behaviour, and abusive interactions' (Kik, n.d.). Despite this, it is clear that these guidelines are not enforced: Kik's moderation is known for being almost non-existent, allowing illegal, harmful and abusive interactions to thrive on the platform. Moderation on Kik relies largely on users reporting harmful content, and in practice this is mostly limited to spam; reports will only be resolved if the user is able to provide evidence. Kik also offers online chatbots to support its users; however, when asked questions about bullying or reporting abuse, the bot simply offers fun facts and jokes (Kobie, 2016).