It’s not for lack of effort. Social media companies invest in programs, many driven by AI, to help keep experiences safe and positive for their users. Many employees are parents themselves or identify with groups that experience online bullying. They certainly didn’t set out to create a breeding ground for harmful content. So why are these efforts falling short?
The short answer is: We’re not leading with empathy.
Too many lines of code, too many scaling and availability issues, too many firmware upgrades. We've gotten so engulfed in the details that we've forgotten the basic human trait of empathy. Behind the code, the long architecture meetings, and the hardware are people, including children, who use these services every day. They care far more about having a safe space than about optimized server use (though the two aren't mutually exclusive).
The advent of AI offered these companies hope of moderating their platforms at scale, something that would…