How CGI and AI Will Make ‘Fake News’ More Difficult to Spot

“Fake news” isn’t going away. The spread of false information, whether accidental or deliberate, will only become more common as attention to every slice of controversy grows. We already see serious consequences from plain-text tweets and basic news stories, so what happens when tools like computer-generated imagery and artificial intelligence are used to create fake images, videos, and perhaps even entire personalities?

In the video below, researchers assembled a system that allowed them to take existing footage of several political figures and alter the displayed expressions in real time, synced to a human actor in their studio.

Consider that this video is already nearly a year old, and that researchers around the world have made incredible strides in the same technology since its creation. Getting a little concerned yet?

While some argue that this evolution of technology might make us more likely to scrutinize every piece of “news” we see, experience in crisis management tells us it’s highly unlikely the average person will do a deep dive before reacting to and sharing something they find controversial or engaging. We’re in for a bumpy ride on the fake news train, folks, so buckle up.

——————————-
For more resources, see the Free Management Library topic: Crisis Management
——————————-

[Jonathan Bernstein is president of Bernstein Crisis Management, Inc., an international crisis management consultancy, author of Manager’s Guide to Crisis Management and Keeping the Wolves at Bay – Media Training. Erik Bernstein is vice president for the firm, and also editor of its newsletter, Crisis Manager]