Recently, an American diplomat asked me about the benefits of using AI in public diplomacy activities. The diplomat explained that, as AI is a new kind of technology, foreign ministries around the world are eager to learn about its benefits and shortcomings. The same question was put to me again this week when I participated in a panel on AI and public diplomacy organized by the US Advisory Commission on Public Diplomacy.
First, it’s important to remember that AI is not a new technology. AI has long been integrated into our daily lives, from the algorithms that shape our social media feeds to the large data sets used to manage national health services and smart home devices such as Alexa. What is unique about generative AI tools such as ChatGPT is that they enable everyday users to harness the awesome power of AI. Gone are the days when AI systems could be leveraged only by computer programmers and computer scientists.
For foreign ministries, this will bring three unique opportunities. The first is the ability to analyze how their country is depicted by foreign media. For instance, the American press secretary in London could use AIs such as ChatGPT to analyze news stories dealing with the US over long periods of time. This diplomat may discover that the British media deals mostly with America’s security policies and its leading role in NATO, while America’s cultural activities in the UK, its investment in academic exchange programs and its scientific collaborations with UK companies are barely mentioned. Using this insight, the American press secretary could fine-tune their activities and work with journalists to change America’s depiction in the local press.
Alternatively, American diplomats in Pakistan could use AIs to analyze which American policies attract negative media attention. This insight could help diplomats identify policies that are seen as contentious by the local press. Here again, the knowledge gained from analyzing large data sets could be used to tailor American diplomatic activities and better narrate the policies that attract the most criticism.
However, the greatest benefit of AI models such as ChatGPT may actually be internal. Imagine if foreign ministries collaborated with AI companies to develop their own AI tools. These tools could be used to analyze internal diplomatic documents, ranging from cables sent by embassies to media summaries, intelligence briefings and diplomats’ analyses of local and global events. So instead of “ChatGPT”, imagine a “StateGPT” able to analyze decades of internal documents generated by the State Department. Diplomats could use such internal AIs to track changes in other nations’ policy priorities or identify shifts in foreign public opinion. They could even identify recurring patterns, such as language shifts ahead of crises or military action. Consider, for instance, the language used by Armenian newspapers before tensions break out in Nagorno-Karabakh.
AI will also pose important challenges to public diplomacy. The first and most important will arise when people ask ChatGPT questions about the world around them. Notably, in recent months we have witnessed the mystification of AI. The media has depicted ChatGPT as incredibly smart and sophisticated: so sophisticated that it can pass the bar exam, medical licensing exams and even the entrance exams to Ivy League universities. This may lead publics to “trust” or put “faith” in the information generated by ChatGPT.
Yet the answers generated by AIs may be wrong or misleading.
For example, when I asked why Russia invaded Ukraine in 2014, ChatGPT offered a brief answer stating that the invasion was prompted by the establishment of a pro-Western government in Kyiv which threatened Russia’s interests. It did not mention that hundreds of thousands of Ukrainians took to the streets demanding closer ties with the West; it failed to mention that riot police shot and killed protesters; and it did not mention Russia’s use of digital disinformation and armed forces to worsen an internal Ukrainian crisis.
When confronted with any of these facts, such as Russia’s interference in Ukraine’s internal affairs, ChatGPT users may discount them as lies, fake news or conspiracy theories. For although ChatGPT suffers from the same ailments as all AI systems, including generating incorrect information, its perceived sophistication and reliability increase its credibility. Incorrect information generated by ChatGPT will still shape the opinions, beliefs and actions of its users.
In this way, ChatGPT may create myriad alternate realities such as a reality in which Russian propaganda played no part in the Brexit referendum or a reality in which Russia did not attempt to sway the 2016 US elections. This may lead users to assume that diplomats’ attacks on Russia are lies and a deliberate attempt to harm Russia’s reputation. ChatGPT users may also begin to regard the UN as a biased forum which unjustly penalizes Russia. Gaps between diplomats’ statements and ChatGPT answers may thus decrease public confidence in diplomats and diplomatic institutions.
Decreased public confidence would limit diplomats’ ability to resolve crises and address shared challenges, and it would harm diplomats’ credibility, which is essential to all public diplomacy activities.
Finally, it is important to remember that, like all AIs, ChatGPT and its peers suffer from biases. For instance, ChatGPT exhibits a clear ‘Western bias’. When I asked ChatGPT to list 10 bad things about France, it mentioned hot summers, long lines at museums and bad traffic. When I asked it to list 10 bad things about Nigeria, it listed crime, corruption, human rights violations and the oppression of women. When I asked ChatGPT whether the UK violates human rights, it said that this was a complex issue with many different aspects. When I asked whether Ethiopia violates human rights, ChatGPT answered with a resounding yes, listing various examples.
Equally important, ChatGPT and its like suffer from a commercial bias, meaning that these AIs deliberately skirt sensitive issues that could generate negative press for AI companies. For instance, ChatGPT refused to define Palestine as a state, instead defining it as a geographic region. ChatGPT was also careful not to discuss potential human rights violations in anti-terrorism activities, and it offered limited answers to the question of why the World War 2 Allies failed to bomb Nazi concentration camps.
Why is this important? Because ChatGPT can misrepresent the past, present and future, and in doing so shape users’ perceptions of them. Those users can include diplomats. Leveraging AI in public diplomacy thus rests on identifying the benefits and limitations of AI and ensuring that diplomats are aware of these limitations and biases.