White House Press Secretary Jen Psaki admitted on Thursday that the administration is contacting Facebook and flagging posts it deems “problematic” based on what the administration claims is “misinformation.”
“This is a big issue of misinformation, specifically on the pandemic. In terms of actions, Alex, that we have taken or we’re working to take I should say from the federal government, we’ve increased disinformation research and tracking within the Surgeon General’s office,” Psaki said.
“We’re flagging problematic posts for Facebook that spread disinformation. We’re working with doctors and medical professionals to connect medical experts … who are popular with their audiences with accurate information and boost trusted content. So we’re helping get trusted content out there.”
WATCH:
https://twitter.com/thejcoop/status/1415726506333122561
U.S. Surgeon General Vivek Murthy issued an advisory earlier in the day and claimed at the press conference that “we live in a world where misinformation poses an imminent and insidious threat to our nation’s health.”
Murthy’s advisory said that tech companies can:
- Assess the benefits and harms of products and platforms and take responsibility for addressing the harms. In particular, make meaningful long-term investments to address misinformation, including product changes. Redesign recommendation algorithms to avoid amplifying misinformation, build in “frictions,” such as suggestions and warnings, to reduce the sharing of misinformation, and make it easier for users to report misinformation.
- Give researchers access to useful data to properly analyze the spread and impact of misinformation. Researchers need data on what people see and hear, not just what they engage with, and what content is moderated (e.g., labeled, removed, downranked), including data on automated accounts that spread misinformation. To protect user privacy, data can be anonymized and provided with user consent.
- Strengthen the monitoring of misinformation. Platforms should increase staffing of multilingual content moderation teams and improve the effectiveness of machine learning algorithms in languages other than English since non-English-language misinformation continues to proliferate. Platforms should also address misinformation in live streams, which are more difficult to moderate due to their temporary nature and use of audio and video.
- Prioritize early detection of misinformation “super-spreaders” and repeat offenders. Impose clear consequences for accounts that repeatedly violate platform policies.
- Evaluate the effectiveness of internal policies and practices in addressing misinformation and be transparent with findings. Publish standardized measures of how often users are exposed to misinformation and through what channels, what kinds of misinformation are most prevalent, and what share of misinformation is addressed in a timely manner. Communicate why certain content is flagged, removed, downranked, or left alone. Work to understand potential unintended consequences of content moderation, such as migration of users to less-moderated platforms.
- Proactively address information deficits. An information deficit occurs when there is high public interest in a topic but limited quality information available. Provide information from trusted and credible sources to prevent misconceptions from taking hold.
- Amplify communications from trusted messengers and subject matter experts. For example, work with health and medical professionals to reach target audiences. Direct users to a broader range of credible sources, including community organizations. It can be particularly helpful to connect people to local trusted leaders who provide accurate information.
- Prioritize protecting health professionals, journalists, and others from online harassment, including harassment resulting from people believing in misinformation.