Whatever happened to the Metaverse? In October of 2021, Facebook officially changed its name to Meta, reflecting a strategic decision to develop applications for the Metaverse. Hailed as the future incarnation of the internet, the Metaverse was described as a digital plane that would exist alongside the physical plane. Generated by advanced augmented and virtual reality technologies, the Metaverse would allow individuals to have coffee with friends in the digital plane while commuting to work in the physical plane. Remote work, remote tourism and even remote culture would all radically change given the creation of fully immersive environments that stimulate human senses. Through brain-computer interfaces, visiting Paris in the Metaverse would, over time, become almost indistinguishable from visiting Paris physically.
Yet in recent months discussions and news stories about the Metaverse have subsided dramatically, while there has been an upsurge in discussions about Generative AI. Like the Metaverse, Generative AI has been labeled a groundbreaking technological advancement that will reshape society. Some have gone as far as to suggest that the impact of Generative AI will be similar to that of the personal computer. Generative AI, able to automate creativity and generate insight from vast troves of data, would soon render entire professions outdated, including programmers, advertisers, analysts, legal aides, doctors and teachers. Some news pundits claimed that Generative AI such as ChatGPT would replace Google as the main portal to the internet, with AI-based chatbots populating our social networks in place of physical contacts.
The excitement over the Metaverse, and the current enthusiasm about Generative AI, have been described as responses to the volatility of contemporary life. In a world where fact and fiction can no longer be separated, where lies and truth have equal impact, and where wars and coups last hours, digital technologies have once again been imbued with utopian powers. After a decade of digital skepticism, brought about by the rise of hate speech, disinformation, online extremism and political contestation, cyber optimism is back in vogue. The question is thus not whether we are about to undergo a digital revolution but, rather, how glorious this revolution will be.
Despite the re-emergence of cyber optimism, ChatGPT has sparked many debates among diplomats and digital diplomacy scholars, with some foreseeing the death of diplomacy as we know it. Generative AI, able to formulate press statements, author tweets and posts, answer consular questions and map strategies for successful negotiations, could render many diplomats superfluous. Other predictions reduce diplomats to editors tasked with “tweaking” and “tailoring” ChatGPT speeches, or verifying the language used in ChatGPT-generated UN resolutions. Still others have suggested that ChatGPT will merely disrupt diplomacy, creating new challenges and opportunities. Press attachés, for instance, would have more time to charm journalists if AI systems authored initial drafts of press releases or generated the media summaries sent daily to headquarters.
Although this debate has been illuminating, it has failed to offer compelling case studies. That is, under which specific conditions could Generative AI prove to be an especially disruptive technology? How could Generative AI be leveraged by states to achieve strategic goals? One answer may be the deliberate dissemination of AI-generated images in times of acute political crisis.
For example, Russia used social media ads in the 2016 US elections to achieve a strategic goal: the election of Donald Trump. Russian social media ads targeted two different audiences: African Americans who were leaning towards not voting, and conservatives who were leaning towards voting for Trump. The ads sought to motivate individuals to follow through on their intended behaviors. In so doing, Russia hoped to suppress votes for Hillary Clinton while boosting votes for Donald Trump. Facebook ads were disseminated in the months leading up to the elections and often reflected media discourses, such as talk of America’s unguarded borders or the Obama administration’s inability to deal with police brutality against Black people.
In a similar way, state actors could use Generative AI to create content that would motivate people to follow through on certain behaviors during political crises, leading to domestic strife and paralysis. One case study could be the current political and social crisis in Israel. Over the past 32 weeks, hundreds of thousands of Israelis have taken part in pro-democracy rallies, protests and marches opposing the Netanyahu government’s attempt to curtail judicial oversight of the government. Netanyahu’s decision to attack the democratic foundations of the country has also sparked spontaneous, and at times violent, protests. For instance, Netanyahu’s decision to fire the Defense Minister led to night-long protests in which pro-democracy activists blocked one of the nation’s main highways, lighting fires and creating roadblocks. Other nights saw smaller demonstrations in which police used fire hoses, mounted officers and brute violence to disperse protestors.
Such spontaneous protests have been marked by high tensions. Police officers, on the one hand, have used violence against protestors, yet have thus far refrained from all-out police riots such as the ones seen in the US during the Civil Rights era. Protestors, on the other hand, have attacked police officers verbally and refused to vacate public spaces, yet they have not used physical violence against police officers. But what if, midway through a spontaneous protest, just as tensions were reaching fever pitch, AI-generated fake videos of extreme police brutality suddenly circulated among WhatsApp groups, the main platform used by protestors? Would such fake images, including highly graphic and violent content, spread like wildfire among protestors and push the crowd past the tipping point towards violence, destruction, and vandalism? Would the outrage sparked by AI-generated images of police brutality turn a tense situation into a volatile one in which violence washed over the streets of Israel?
Alternatively, could AI-generated images of protestors harming and striking helpless police officers spark a police riot? Would individual police officers let go of their restraint and give in to a whirlwind of emotions, swinging their batons and ramming their horses into protestors? Would local police commanders lose control over their forces as they stormed protestors and beat them on behalf of their injured comrades? Would such a night of violence and ferocious animosity not further fuel internal divisions, with the government supporting police officers and the opposition calling for restraint? Would a night of violence not lead the government to enact a curfew, which would then be met with protests and greater violence, until the nation is consumed with animosity and paralysis takes over?
This is a highly plausible scenario. The internal situation in Israel is now so volatile that one image, one match, may light the bonfire of barbarity. And Israel is not the only nation currently facing internal political conflict. As these conflicts are heavily mediated, and as digital media shapes public discourse and individual action, states may use Generative AI to escalate internal tensions and divisions while edging nations towards paralysis. This would be a highly questionable course of action, yet one not so different from how older media were used by states to facilitate revolutions and coups in foreign countries.