Amid rising concern over the flood of hateful content on social media, Instagram and Twitter are taking small steps to curb bullying. Not everyone is cheering.
Facebook-owned Instagram this week announced two features flagging offensive content and restricting interaction with certain accounts, while Twitter said it will remove tweets that “dehumanize on the basis of religion” when reported by users or the company.
- After revealing plans to flag tweets from political figures that violate its content rules, Twitter is extending protections to religious groups, removing tweets that dehumanize them.
- The first of Instagram's AI-based features flags comments it considers offensive and asks the user, "Are you sure you want to post this?" with an option to undo. The second lets a user restrict an account considered offensive, allowing comments or posts from that account only if the user approves them.
- While Instagram has taken safety steps over the last three years, including a machine-learning filter applied to images and captions last year, it hasn't outright flagged or blocked accounts.
- Some people were underwhelmed. "It's not a great system," Christopher McKenna, founder of Protect Young Eyes, a company dedicated to defending kids from dangerous online content, said Tuesday during the Protecting Innocence in the Digital World hearing before the Committee on the Judiciary.
- Offenders can create new accounts quickly after content is reported and an account is flagged, making the protections weak, he said. "The very nature of Instagram creates ripe opportunities for exploitation through direct messaging, hashtags," he said.
- Karma Takeaway: Major social networks have a long way to go toward fully protecting users from bullying, a task complicated by the need to balance free speech with safety.