News and Commentary

Parents Bring ‘Back’ Parkland Shooting Victim Through AI To Push Gun Control
BOSTON, MA - FEBRUARY 13: Manuel and Patricia Oliver, parents of Joaquin Oliver, one of 17 students and staff members killed in the Parkland mass shooting on Valentine's Day of 2018, speak at a press conference in Boston to raise awareness for a new Gun Safety-Certified logo they designed on Feb. 13, 2020. Parkland-based Change the Ref and Boston-based Stop Handgun Violence helped create the symbol for businesses to show their support for background checks for all gun sales and other gun regulation legislation. (Photo by Blake Nissen for The Boston Globe via Getty Images)

The parents of a student killed in the 2018 mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida, are using their dead son’s likeness to push people to vote for politicians who support more restrictions on firearms.

Manuel and Patricia Oliver, the parents of Joaquin Oliver, founded the organization Change The Ref (CTR) after their son and 16 others were killed at Parkland on Feb. 14, 2018. CTR is dedicated to training young people to promote stricter gun control measures through activism.

In their latest campaign, the Olivers released a video on Friday urging people to vote against the “gun lobby” and for politicians who will enact tighter laws surrounding firearms. As part of the video, the Olivers used technology to create a virtual image of their dead son, Joaquin, and made him speak in favor of the initiative.

The video begins with the Olivers introducing themselves before advocating for stronger gun control measures.

“Every day, nearly 100 more families lose someone they love to gun violence. Every single day, we keep telling people it doesn’t have to be like this. They don’t listen. So we found a way to bring back someone that no one will ignore,” Manuel says, referring to Joaquin.

“It’s very hard for me to look at this, so please, please listen to what our son has to say,” Patricia says. The video goes to a black screen that flashes the message, “Joaquin Oliver was shot and killed on Feb. 14, 2018. We used artificial intelligence to bring him back for one last message.”

A virtual Joaquin appears on screen in a hoodie, beanie, and with earbuds dangling from one ear. The video appears to show Joaquin moving and speaking as normal, though the footage was created entirely digitally.

“Yo, it’s me. It’s [Joaquin]. I’ve been gone for two years and nothing’s changed, bro. People are still getting killed by guns. What is that? Everyone knows it, but they don’t do anything,” Joaquin appears to say. “I’m tired of waiting for someone to fix it. The election in November is the first one I could have voted in, but I’ll never get to choose the kind of world I wanted to live in.”

“So you’ve got to replace my vote. Go to unfinished votes dot com, register, then go vote. Vote for politicians who care more about people’s lives than the gun lobby’s money,” Joaquin continues. “Vote for people not getting shot, bro. I mean, vote for me, because I can’t. We’ve got to keep on fighting and we’ve got to end this.”

Videos like the one used by CTR and the Olivers are known as deepfakes. They purport to show their subjects talking and moving as normal, though the words and movements have been invented through technology. Deepfakes have been used to create fake videos of prominent people saying and promoting things they have not supported.

Facebook banned such content in January.

As The Daily Wire reported:

As artificial intelligence advances, the capacity for malicious actors to covertly spread misinformation has also increased. To mitigate this, Facebook has announced that the company will no longer allow “deepfake videos” or “misleading manipulated media” on its platform.

But in order for content to be banned, it must meet strict criteria. First, it must have “been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words they did not actually say,” writes Monika Bickert, the vice president for global policy management at Facebook, in a blog post published on Monday.

In addition, the content must be “the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.” The blog post also notes that the policy “does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”
