The French Council of the Muslim Faith said it was suing the French branches of the two tech giants for “broadcasting a message with violent content abetting terrorism, or of a nature likely to seriously violate human dignity and liable to be seen by a minor,” according to the complaint, a copy of which was seen by AFP.
In France, such acts can be punished by three years' imprisonment and a €75,000 fine.
Facebook said it “quickly” removed the live video showing the killing of 50 people by a white supremacist in twin mosque attacks in Christchurch on March 15.
But the 17-minute livestream was shared extensively on YouTube and Twitter, and internet platforms scrambled to take down reposted copies of the gruesome scene.
The CFCM, which represents several million Muslims in France, said it took Facebook 29 minutes after the broadcast began to take it down.
Major internet platforms have pledged to crack down on the sharing of violent images and other inappropriate content through automated systems and human monitoring, but critics say these measures are not working.
Internet platforms have cooperated to develop technology that filters child pornography, but have stopped short of joining forces on violent content.
A US congressional panel last week called on top executives from Facebook and YouTube, as well as Microsoft and Twitter, to explain the online proliferation of the “horrific” New Zealand video.
The panel, the House Committee on Homeland Security, said it was “critically important” to filter the kind of violent images seen in the video.