Understanding AI: A remedy for your reputation
PR Week
Jul 17, 2023
See more: https://www.prweek.co.uk/article/1830336/understanding-ai-remedy-reputation
Participants
- Yasmine El Tabib, VP of customer success, Limbik
- Josh Levin, cofounder and VP of marketing, Limbik
- Harvey Rañola, director of demand generation and global head of media intelligence, NetBase Quid
The rapid advancement of generative artificial intelligence (AI) can help communicators in so many ways. With those opportunities, however, come real challenges to the brands PR pros work to protect.
During this recent NetBase Quid-hosted webcast, Navigating the Generative AI Minefield: Safeguarding Brand Health for PR Professionals, the panelists shared a wealth of knowledge and strategies to help counter those risks and threats.
It’s more important than ever “to proactively address disinformation campaigns and know what the playbook is to [deal with] them in a way that's consistent with a brand's voice, its message and the way it carries itself in its communications,” says NetBase Quid’s Harvey Rañola.
There are so many ways for stories to emerge that can impact brands and make “one impression or another, either positive or negative,” he continues, noting that AI can be used to derive insights from large datasets to help companies navigate this evolving landscape.
A “post-truth” environment
Weaponized information has also become a real threat to brands.
“A single tweet can have a massive impact on the reputation and valuation of your organization, even to the point of financial, operational, physical harm,” warns Yasmine El Tabib of Limbik, a cognitive AI platform that proactively identifies information threats and optimizes response options.
“In this post-truth world that we're living in,” she continues, “fact and fiction are really no longer what determine consequence. Trial by Twitter can occur based on perceptions and beliefs alone, not just on what's true or false. But if you can predict whether people will find content believable and whether they'll engage with it, you can actually accurately quantify its potential for impact (PFI).”
Understanding the PFI is essential to maintaining the brand’s reputation and value.
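Neither the article nor the webcast spells out how Limbik actually computes PFI. As a purely hypothetical sketch of the idea El Tabib describes, a score could combine a believability estimate, a predicted engagement level and expected reach; every name, number and weighting below is an illustrative assumption, not Limbik's methodology.

    # Hypothetical sketch only: the speakers describe PFI as combining believability
    # and likely engagement. The structure and values here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class ContentSignals:
        believability: float   # 0-1: predicted share of the audience that would find the claim credible
        engagement: float      # 0-1: predicted likelihood of shares and replies relative to a baseline
        audience_reach: int    # estimated number of people likely to see the content

    def potential_for_impact(signals: ContentSignals) -> float:
        # Toy scoring rule: content that is both believable and engaging,
        # and that reaches a large audience, scores highest.
        return signals.believability * signals.engagement * signals.audience_reach

    rumor = ContentSignals(believability=0.7, engagement=0.4, audience_reach=250_000)
    print(f"Illustrative PFI score: {potential_for_impact(rumor):,.0f}")

The point of a score like this, in the speakers' framing, is that truth or falsehood never enters the calculation; only believability, engagement and reach determine how much damage a piece of content can do.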
“If organizations, companies, NGOs, whomever aren't preparing themselves and building internal resiliency to these threats, they're going to suffer the consequences,” asserts Limbik’s Josh Levin.
The power of prediction
“The pandemic accelerated the need to accurately quantify risk because we saw immediately how quickly millions of Americans became distrustful of their government, of respected institutions,” notes El Tabib. While people have become more comfortable with AI’s role in processing and storing information, “we're not necessarily safeguarded yet from the intent of bad actors.”
Nonetheless, generative AI is helping to evolve “not just the means to weaponize this information, but also to counteract it,” suggests Rañola.
El Tabib cites a very popular movie to illustrate the point further.
As depicted in the film Moneyball, then-Oakland A’s general manager Billy Beane, now a Limbik advisor, used available data on baseball players to predict their future performance and, in turn, evaluate their value in a way nobody else did at the time. In that same vein, data can be extrapolated through social-media-monitoring and other tools to create “a probabilistic forecasting engine” to understand how content will resonate, explains El Tabib.
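The webcast does not describe how such a forecasting engine is built. As a hedged, minimal sketch of the general idea, a classifier could be trained on historical content pulled from social media monitoring and then output the probability that a new piece of content will spread widely; the features, sample data and library choice below are illustrative assumptions, not Limbik's approach.

    # Illustrative sketch of a probabilistic forecaster: a simple classifier trained
    # on past content, producing a probability that new content will spread widely.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row describes one past piece of content gathered from monitoring tools:
    # [believability score, emotional charge, log10 of the poster's follower count]
    X_train = np.array([
        [0.9, 0.8, 5.0],
        [0.2, 0.3, 3.0],
        [0.7, 0.9, 4.5],
        [0.3, 0.2, 2.5],
    ])
    # 1 = the content ended up spreading widely, 0 = it fizzled out
    y_train = np.array([1, 0, 1, 0])

    model = LogisticRegression().fit(X_train, y_train)

    # Score a new, unseen piece of content
    new_post = np.array([[0.8, 0.7, 4.0]])
    prob_spread = model.predict_proba(new_post)[0, 1]
    print(f"Estimated probability this content resonates widely: {prob_spread:.0%}")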
Deepfakes have been circulating for a while. What’s different now is that “generative AI makes the power to create those so much more potent,” she continues. “It's not always true. It's not always accurate. It doesn't always understand context, but it sounds really believable in the way that people talk about it. They just accept it.”
As with any tool, “good people are going to do good things with [AI], and not-so-good people are going to do not-so-good things with it,” Levin points out.
Going on the offensive
With generative AI, misinformation can take on a life of its own, which exacerbates the threat.
El Tabib offers as an example the Polaris Project, which runs the National Human Trafficking Hotline. It was inundated with calls when furniture companies were falsely accused of trafficking children in cabinets. Limbik worked with Polaris to prepare for the next time this type of narrative appears and to predict how large the response will be. It’s an offensive rather than defensive approach.
“Keeping in mind different scenarios and being able to leverage a tool similar to the PFI Limbik developed will help folks be better prepared to avoid these situations,” offers Rañola.
Companies can also better prioritize which issues to address by “looking at things through the lens of what's believable, what's going to be impactful to my organization, rather than looking at it simply through the lens of is this true or is this not,” advises Levin.
Intentional misinformation is “aimed to destabilize, to undermine, to validate or reinforce insight and call to action,” El Tabib notes. Solving the issue of weaponized misinformation is not the responsibility of a single company.
“It will require all of society's response,” she stresses, “including government and the private sector, to help educate and create more tools where we can identify what has and what doesn't have validity.”
The good news: “There's a real lack of creativity when it comes to the bad-actor playbook,” says Levin. By building their own organizational resiliency, companies can “get out ahead of [potential threats] before they can impact the organization or brand.”
The best way to proactively protect a brand is by “being accurately predictive,” El Tabib concludes. “If you can predict what could potentially be impactful, you're putting yourself in an offensive position to know that you're ready if that information starts spreading.”