Facebook put the safety of its content moderators at risk after inadvertently exposing their personal details to suspected terrorist users of the social network, the Guardian has learned.
The security lapse affected more than 1,000 workers across 22 departments at Facebook who used the company’s moderation software to review and remove inappropriate content from the platform, including sexual material, hate speech and terrorist propaganda.
A bug in the software, discovered late last year, caused the personal profiles of content moderators to appear automatically as notifications in the activity logs of Facebook groups whose administrators had been removed from the platform for breaching its terms of service. The moderators’ personal details were then viewable to the remaining admins of those groups.
Of the 1,000 affected workers, around 40 worked in a counter-terrorism unit based at Facebook’s European headquarters in Dublin, Ireland. Six of those were assessed to be “high priority” victims of the mistake after Facebook concluded their personal profiles were likely viewed by potential terrorists.
The Guardian spoke to one of the six, who did not wish to be named out of concern for his and his family’s safety. The Iraqi-born Irish citizen, who is in his early twenties, fled Ireland and went into hiding after discovering that seven individuals associated with a suspected terrorist group he banned from Facebook – an Egypt-based group that backed Hamas and, he said, had members who were Islamic State sympathizers – had viewed his personal profile.
Facebook confirmed the security breach in a statement and said it had made technical changes to “better detect and prevent these types of issues from occurring”.
“We care deeply about keeping everyone who works for Facebook safe,” a spokesman said. “As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened.”
Read more: Revealed: Facebook exposed identities of moderators to suspected terrorists | Technology | The Guardian