Facebook trained its AI to block live streams of violence

In the wake of the Christchurch terrorist attack, Facebook trained its AI systems, using "police/military surveillance footage" and other violent material, to detect and block any future attempts to broadcast live shooting sprees.
The emergency exercise, details of which were revealed in company documents leaked by whistleblower Frances Haugen, was internally described as a "watershed moment" for Facebook's live video service.
The attacker was able to live-stream the attack on the two mosques for 17 minutes without being detected by the company's systems, allowing the footage to be quickly replicated online. In the following 24 hours, Facebook deleted 1.5 million uploads of the video.

At the time, Facebook admitted that its artificial intelligence systems had failed to stop the video, which was only removed after New Zealand police alerted the company. “It was clear that Live was a vulnerable surface which can be repurposed by bad actors to cause societal harm,” the leaked review stated. “Since this event, we’ve faced international media pressure and have seen regulatory and legal risks on Facebook increase considerably.” The review also details how Facebook is tackling the problem in an attempt to improve its detection technology. One key element is retraining the company's AI video detection system on a data set of harmful content so it can learn which videos should be flagged and blocked.
"The training data set includes police/military body camera footage, entertainment shots, and simulations," as well as "military videos" obtained from the company's law enforcement outreach team, the internal document said. It also includes video clips from first-person shooters as examples of unblocked content.
As a result of these and other efforts, Facebook believes it has reduced detection times from five minutes to 12 seconds, the documents show. The Christchurch video now receives a score of 0.96 on the company's internal violent-imagery classifier, well above the threshold for intervention.
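The threshold logic described above can be illustrated with a minimal sketch. All names and the threshold value here are hypothetical, since the documents reveal only the Christchurch video's score (0.96), not the actual cutoff or implementation:

```python
# Hypothetical illustration of threshold-based intervention.
# The real threshold is not public; 0.9 is an assumed placeholder.
INTERVENTION_THRESHOLD = 0.9

def should_block(violence_score: float,
                 threshold: float = INTERVENTION_THRESHOLD) -> bool:
    """Flag a video for blocking when its classifier score
    meets or exceeds the intervention threshold."""
    return violence_score >= threshold

# Per the leaked documents, the Christchurch video scores 0.96:
print(should_block(0.96))  # True
```

The point of such a cutoff is that the classifier outputs a continuous confidence score, and only videos scoring above the line trigger automated action.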
Elsewhere, the set of leaked documents shows how eager Facebook is to repair its tarnished image. The company acknowledged that it had previously imposed only "minimal restrictions" on Live. In May 2019, the company announced a "one strike" policy, banning accounts from using Live for 30 days after a single terrorism-related violation.
The change, announced to coincide with the Christchurch summit in Paris, was aimed at removing terrorist content from the web. New Zealand's prime minister, Jacinda Ardern, "used Facebook Live to update her followers after the announcement," which Facebook called "a major public relations win."
