When a machine moderates content, it evaluates text and images purely as data, using an algorithm trained on existing data sets. The process for selecting training data has come under fire, as those data sets have been shown to encode racial, gender, and other biases.
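A minimal sketch of what "evaluating text as data" looks like in practice (the keyword rules here are hypothetical, not any real platform's system): the moderator scores surface features only, so a journalist's report about gangs trips the same filter as actual gang content.

```python
# Hypothetical banned terms, standing in for patterns a model learned from training data.
BANNED_TERMS = {"gang", "cartel", "recruitment"}

def moderate(text: str) -> bool:
    """Return True if the text would be removed. No notion of intent or context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BANNED_TERMS.isdisjoint(words)

print(moderate("Join our gang, recruitment open now"))              # removed
print(moderate("Investigative report documenting gang violence"))   # also removed
print(moderate("Photos from the motorcycle rally"))                 # kept
```

Both the recruitment post and the report get removed, because nothing in the pipeline distinguishes *documenting* a subject from *promoting* it — which is the context problem the thread below is complaining about.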
>FAGMAN trains ai to ban content about criminal gang activity
>FAGMAN ai bans journalist documenting criminal gang activity without regard to context because it is a machine
I’m gonna have to say that the ai is correct here. Rather, it’s the entire approach to “content safety” that’s wrong.
Only it leaves pictures of the Hell’s Angels alone. Clearly there’s an issue here.