Facebook’s Mark Zuckerberg

Facebook, the giant social media corporation, will hire 3,000 more people over the next year to speed up the removal of videos showing murder, suicide and other violent acts posted on its platform.
The move is an indirect acknowledgement by Facebook that automated software alone is not enough to identify and remove such material.
The problem became more pressing after last year’s introduction of Facebook Live, a service that allows any of Facebook’s 1.9 billion monthly users to broadcast video.
Facebook Chief Executive Mark Zuckerberg announced the hiring plan on Wednesday to counter the threat to the company’s valuable public image, which has been tarnished by violent scenes broadcast on its platform.
The move comes after users reacted to two shocking Facebook live-stream video posts in April: a father in Thailand who killed his 11-month-old baby girl before committing suicide at a deserted hotel, and a man in the United States who broadcast himself shooting a stranger in cold blood before shooting himself in the head after being chased down by police officers.
Researchers say some violence on Facebook is inevitable given its size, and the company has been reproached for its slow response.
In Germany, lawmakers have pressured the company to be quicker and more accurate in removing illegal hate speech, to clamp down on so-called fake news, and have threatened fines.
UK lawmakers have also accused Facebook and other social media companies of doing a ‘shameful’ job of removing child abuse and other illegal material.
However, Facebook has so far escaped similar fallout from US lawmakers. Some in the ad industry have even defended the company, citing the difficulty of policing material from so many users, and police agencies have said Facebook works well with them.
 
Better tools, limits of technology
In addition to investing in more people, Mr Zuckerberg said Facebook is also building better tools to keep the community safe. “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help,” he stated.

Mark Zuckerberg (photo: Mark Zuckerberg / Facebook)
On 3 May, he wrote on his Facebook page, “Over the last few weeks, we’ve seen people hurting themselves and others on Facebook – either live or in video posted later. It’s heartbreaking, and I’ve been reflecting on how we can do better for our community.”
“If we’re going to build a safe community, we need to respond quickly. We’re working to make these videos easier to report so we can take the right action sooner — whether that’s responding quickly when someone needs help or taking a post down.”
“Over the next year, we’ll be adding 3,000 people to our community operations team around the world — on top of the 4,500 we have today — to review the millions of reports we get every week, and improve the process for doing it quickly.”
“These reviewers will also help us get better at removing things we don’t allow on Facebook like hate speech and child exploitation,” Mr Zuckerberg wrote.
Facebook still depends largely on users to report problematic posts. It receives millions of reports each week and relies on thousands of human monitors to review them.

Mr Zuckerberg wrote in an update on 4 May that Facebook’s next focus is building community. He said in the update, “We have a lot more work to build a global community that works for everyone.”
“This quarter we also took a major technology step forward at F8 by opening up the camera to be the first mainstream augmented reality platform. I’m excited to get virtual and augmented reality in more of your hands soon.”
Mr Zuckerberg had earlier spoken about Facebook being a place for ‘raw and visceral’ communication when its live service was first launched in April 2016.
“Because it’s live, there is no way it can be curated,” Zuckerberg had told BuzzFeed News then. “And because of that it frees people up to be themselves. It’s live; it can’t possibly be perfectly planned out ahead of time.”
Sarah Roberts, a professor of information studies at UCLA who studies content monitoring, said in an interview, “Despite industry claims to the contrary, I don’t know of any computational mechanism that can adequately, accurately, 100 percent do this work in lieu of humans. We’re just not there yet technologically.”
The workers who monitor material generally work on contract in places such as India and the Philippines, and they face difficult working conditions because of the hours they spend making quick decisions while sifting through traumatic material, Ms Roberts added.
 
