Power is often defined as the ability to direct or influence the behavior of others or the course of events. An interesting question is whether AI has power and how this power is exercised. One way the power of AI may be conceptualized is the ability of Generative AIs such as ChatGPT to shape users’ perceptions and worldviews. Recent studies suggest that Generative AIs are increasingly used to learn about world events and to gain insight into issues dominating news cycles. Equally important, studies suggest that among some users trust in AI is on the rise, with Generative AI increasingly viewed as an authoritative source of information. Together, the use of Generative AI to learn about world events and rising trust in AI give AI the power to shape users’ beliefs about the world.
The question of AI power becomes even more relevant when one considers that different AIs are developed in different countries and as such there may be differences in how AIs respond to queries about world events. Put differently, Generative AIs developed in the US may offer different answers than AIs developed in Europe or China. If this is indeed the case, then we may regard AIs as ideological devices that enable states to exert power, as American AIs promote American worldviews while Chinese AIs promote Chinese worldviews and values. In this way, AI both has power in its own right and is a powerful tool in the hands of nation states.
To test the power of AI, I posed the following question to four different Generative AIs: Why is America supporting Israel’s War on Gaza? This question deals with an important world event and with an issue dominating news cycles around the world. Although there were many similarities in the AIs’ answers, there were also important differences. The first two AIs I queried, ChatGPT and Copilot, were both developed in America. Both answered that Israel is a strategic ally of the US and that the two countries share security interests. Both also emphasized that these shared interests include countering Iran’s regional power and terrorism. Notably, Copilot used the term “militants” to describe the Hamas group that carried out the October 7 attack, while ChatGPT stated that US officials “frame” Hamas as the likes of ISIS and Al-Qaeda. In fact, ChatGPT used the term “frame” several times, noting, for example, how Israel “framed” its War on Gaza as a response to the October 7 attacks, and how US media discourse “frame[s] events through an Israeli security lens,” which “helps justify unconditional (US) support, even amid mounting civilian casualties in Gaza.”
Already here one can find subtle but important differences between the two AIs, with ChatGPT suggesting that both Israel and the US seek to legitimize the War in Gaza through framing, or the selective use of information to justify policies. Both “frame” or equate Hamas with ISIS, both use Israeli “security” frames, and both justify the War through such frames. Herein lies a criticism of Israel and the US, as framing always involves depicting reality in a strategic way so as to mold public opinion.
Copilot was also selective, using the term “militant” to describe Hamas while ChatGPT used the word “terrorism.” This is important as the terms differ in severity and nature. Armed militants may pursue legitimate ends through the legitimate use of force, while terrorists are usually denounced due to the violent means they use to obtain their goals.
Notably, both AIs stated that Israel’s lobby in the US, and especially the activities of AIPAC (the American Israel Public Affairs Committee) in Washington, influence the US’s decision to support Israel. ChatGPT stated that AIPAC was highly influential in Congress, adding that “Many U.S. politicians, especially in election years, fear that criticizing Israel could alienate donors and voters.” Copilot stated that AIPAC’s influence shapes “policy” and helps “defend” continued military aid to Israel. Here again one finds subtle but important differences. ChatGPT implies that Israel’s lobby is so powerful that it silences criticism of Israel, while Copilot suggests that this lobby is so powerful it actually shapes US policies and helps to defend contentious policies such as offering Israel military aid.
ChatGPT also commented on history, stating that “Since the Holocaust and Israel’s founding in 1948, U.S. leaders have often described Israel as a democratic ally in a hostile region that deserves support for its right to self-defense,” adding that US officials “framed” Israel’s war on Gaza as “legitimate retaliation to one of the deadliest attacks on Jews since the Holocaust.” As such, ChatGPT created a historical connection between the Holocaust and the October 7 Hamas attacks but also implied that this connection was used rhetorically to justify Israel’s War on Gaza. Copilot, for its part, listed the “Genocide Debate,” stating that “A rising number of Americans now label Israel’s actions as ‘genocide,’ fueling protests, resignations, and calls to halt arms shipments.” Copilot thus did not accuse Israel of committing genocide in Gaza but did imply that many Americans now view Israel’s War as a genocidal one.
The differences described so far were subtle, emerging mostly from the terms each AI used and from implied messages rather than explicit ones. Mistral, a European AI, addressed other issues altogether. Mistral did not mention AIPAC’s lobbying activities, the Holocaust, or the War against terrorism. But it did mention that the US provides Israel with another kind of aid: “diplomatic cover, including vetoing UN resolutions critical of Israel.” According to Mistral, “diplomatic cover” complements other forms of aid such as “advanced weaponry” and “intelligence,” and together these are all intended to “deter other regional actors from escalating the conflict.” Mistral thus both raised new issues and used different language; the term “cover” may imply that there is something to cover up. Moreover, Mistral added a paragraph noting that “Critics argue that U.S. support risks complicity in alleged war crimes and collective punishment of Palestinian civilians, and that the unconditional backing undermines America’s moral standing and diplomatic leverage in the region.”
Unlike the American AIs, the European Mistral was far less focused on domestic issues in the US or on American public opinion. But the European AI did suggest that by supporting Israel, America was complicit in possible war crimes and was endangering its moral leadership and diplomatic standing in the region. Thus, while the American AIs were more critical of Israel, the European AI was critical of the US and its policies. Herein lies an important difference that is emblematic of different worldviews, beliefs, and opinions that may be tied to an AI’s country of origin.
The Chinese DeepSeek addressed many of the issues listed above but, again, also raised new ones. For instance, it stated that Israel is seen by the US as a “stable, democratic (though this is debated in the context of its treatment of Palestinians), and technologically advanced ally in a volatile region,” adding that Israel was “a bulwark against actors that the U.S. considers hostile, such as Iran” and that “A strong Israel is seen as helping to check Iranian influence.” Moreover, DeepSeek stated that America’s support of Israel was eroding US standing in the Global South as it “contradicts its stated values of human rights and international law.” Lastly, DeepSeek noted that Israel gains support from the US thanks to “evangelical Christians who see Israel’s existence as biblically prophesied.” These three additions are interesting given that China is a geopolitical rival of the US, that China and the US are vying for influence in the Global South, and that China views religion in a negative light. Herein lies an implicit criticism of the US as a country whose geopolitics is shaped by religion.
Like Mistral, DeepSeek also used the term “diplomatic cover,” listing America’s use of its veto power to “block resolutions calling for a ceasefire, arguing they undermine ongoing negotiations.” Hence, the US is depicted by DeepSeek as undermining attempts to end the War through a diplomatic settlement. DeepSeek grew even more critical of the US, stating that its “unwavering” support of Israel “allows Israel to conduct operations with a perceived impunity, leading to a very high number of Palestinian civilian deaths… widespread destruction of infrastructure, and a deepening humanitarian catastrophe.” Moreover, it stated that the “U.S. is failing in its legal and moral duty to prevent these (human rights) violations by its ally.”
The Chinese AI thus differed from American and European AIs, linking US support of Israel to wider geopolitics while denouncing the US by depicting America as a violator of international law driven by religious calculations and double standards.
What was most noticeable in this analysis was that none of the four AIs mitigated its answers in any meaningful way or suggested that the answers were subjective. The AIs thus spoke with authority on the issue of Israel’s War, and it is this authoritative tone, together with the view of AIs as authoritative sources of information, that makes them powerful, or able to shape users’ worldviews. But as this analysis suggests, AIs are also instruments of power, as the views they express can be linked to the policies, norms, and geopolitics of their country of origin. Thus, as is the case with consumer products, one finds a Country-of-Origin Effect when studying AI.