The security lapse affected more than 1,000 workers across 22 departments at Facebook who used the company’s moderation software to review and remove inappropriate content from the platform, including sexual material, hate speech and terrorist propaganda.
A bug in the software, discovered late last year, caused the personal profiles of content moderators to appear automatically as notifications in the activity logs of Facebook groups whose administrators had been removed from the platform for breaching the terms of service. The personal details of Facebook moderators were then viewable by the remaining admins of those groups.
Of the more than 1,000 affected workers, around 40 worked in a counter-terrorism unit based at Facebook’s European headquarters in Dublin, Ireland. Six of those were assessed to be “high priority” victims of the mistake after Facebook concluded that their personal profiles had likely been viewed by potential terrorists.
The Guardian spoke to one of the six, who did not wish to be named out of concern for his and his family’s safety. The Iraqi-born Irish citizen, who is in his early twenties, fled Ireland and went into hiding after discovering that seven individuals associated with a suspected terrorist group he banned from Facebook – an Egypt-based group that backed Hamas and, he said, had members who were Islamic State sympathizers – had viewed his personal profile.