Bending the Baobab: AI’s Erosion of Security in the Sahel
Foreign, domestic, and non-state actors have manipulated the information landscape to secure their interests across Sub-Saharan Africa. Malign actors use information warfare to incite military coups as well as religious and political insurgencies. Artificial intelligence (AI) will serve as an effective tool for disinformation south of the Sahara, and its consequences will exacerbate existing conflicts in the region. Western and international institutions must confront revisionist disinformation campaigns in the Sahel, particularly those from Russia and the People’s Republic of China (PRC), which aim to destabilize the region and alienate Sahelian societies from the West. These institutions must emphasize the threat artificial intelligence poses to political stability in the Sahel and focus on alleviating the effects of AI-driven disinformation campaigns and of broader Sahelian insecurity.
The Global AI Landscape
AI has revolutionized the information battlefield: generative text, voice modeling, “deepfake” images, and generative video create people and scenes that alter populations’ perceptions of reality. AI-generated content is a distinct innovation in disinformation because it uses human elements to deceive audiences about where information is coming from. This increases the efficacy of disinformation campaigns in several ways. When the individual being altered or generated is someone of influence, such as a government official, targeted audiences believe they are seeing actions or hearing opinions directly from the source, making their misperceptions far more dangerous. When disinformation emulates an ordinary individual of a certain nationality, listeners are more inclined to trust that figure’s portrayal of the people they supposedly represent than, for example, a local commentator. Victims of AI-generated disinformation are made to attach a human face or voice to the information they receive, which makes this type of disinformation far more potent than text on a page or a doctored image.
Developing nations struggle in particular to keep up with digital infrastructure, namely internet access. The digital divide means that dramatic improvements in AI technology and the rollout of internet services to disconnected Sahelian communities will co-occur, creating a potential information catastrophe as people who have been structurally disenfranchised from media access and media literacy are suddenly exposed to global disinformation campaigns. In the last seven years, 300 million people in Africa have joined social media. Africans—particularly Kenyans and Nigerians—have been shown to rely on social media for news at markedly higher rates than the rest of the world. Educational programs aimed at media literacy are a proven strategy for combatting disinformation, but studies show that AI-generated content such as deepfakes is much harder to detect at face value, even among highly educated populations. Despite multilateral agreements aimed at improving education in the region, Sahelian countries remain susceptible to AI-generated disinformation due to inadequate funding and local insecurity that inhibits the academic programs that could combat it.
Incidents in the Sahel
AI-generated content has already shaken security and political stability across the Sahel. After the May 2021 coup, multiple AI-generated news stories were used in Mali to demonize France and delegitimize political reconciliation between the French government and Malian political leaders. Disinformation campaigns have been central to the rise of Russian influence and the use of Russian mercenaries—namely the Wagner Group—in Malian internal affairs. Russian disinformation capitalized on anti-Western and anti-UN sentiment, leading to the Malian government’s June 2023 withdrawal of consent from the UN peacekeeping operation established to resolve the conflict between the Malian government and a coalition of Tuareg rebels and Islamist militants. This is a common course of events in the Sahel: Wagner has used disinformation to discredit seemingly ineffective Western and UN forces, presenting itself as a suitable replacement for providing stability in the Sahel. While it is difficult to establish causality between AI-generated disinformation and immediate security crises, disinformation that spreads online is likely to exploit existing societal grievances and fuel civil unrest.
This case in Mali is not an isolated event. In early 2023, AI-generated deepfakes were used in Burkina Faso to legitimize the new military junta just days after French forces were ordered to withdraw; the deepfaked figures claimed to be “American Pan-Africans” who supported the new regime. In 2023, a now-banned TikTok account used voice modeling of Sudan’s former leader, Omar al-Bashir, to criticize multiple Sudanese military leaders. While the motives behind these videos remain unclear, they muddied the information landscape as the country suffered through a civil war and likely served to delegitimize the regime. Even the African Union has fallen victim to AI technology, which was used to impersonate its head, Moussa Faki, in scheduling and attending meetings with several European leaders. While the reputations of African leaders are certainly at risk, incidents of this type could also give bad actors access to sensitive information when their targets are not adequately trained to recognize deepfakes or voice modeling.
Malign actors have employed AI-generated content in disinformation operations, often with the primary objective of delegitimizing Western, particularly French and U.S., involvement in Sub-Saharan African affairs. While it can be difficult to pinpoint the origin of these disinformation campaigns, many have been tied to Russian and PRC-linked actors. These efforts tend to serve two operational objectives: first, to spawn anti-democratic sentiment, often resulting in civil unrest and anti-Western movements; and second, to create a confusing information environment that paralyzes citizens uncertain of what to believe. These campaigns are often carried out in collaboration with home-grown assets, such as local media outlets and social media networks groomed by the Kremlin and the Chinese Communist Party. Russia prefers a decentralized approach, employing African influencers and propping up networks of African grassroots organizations to spread disinformation. The PRC licenses content directly to African media outlets and asserts control over the information and communications infrastructure it owns in the region. The influenced sources’ revolutionary and anti-Western tendencies align them with geopolitically revisionist powers like Russia and the PRC, threatening to fundamentally shift international institutions, international norms, and the existing regional and global balance of power.
Strategic Consequences of Inaction
The presence of AI-generated disinformation in the Sahel carries many dangerous consequences. Disinformation campaigns in the region have evolved to become more convincing to the public, with AI-generated content proving more potent than traditional disinformation techniques. A loss of trust in domestic institutions and in their relationships with international partners has harmed democratization efforts in the region. While persistently corrupt behavior by Sahelian governments and recurring conflict over power contribute to the lack of public trust, AI-generated disinformation threatens to deal a death blow to democratic institutions that have struggled to survive frequent military coups and insurgencies. Many disinformation campaigns have focused on alienating Sahelian governments from the international community, risking a further reduction in regional stability as foreign aid, peacekeeping and military assistance, and development projects lose the consent of host countries.
Escalating insurgencies in the Sahel threaten to create a hotbed of anti-Western sentiment across the continent. While Russia and the PRC have emphasized investment in the Sahel, the lack of global prioritization of the region’s insecurity will reduce the ability of international institutions to invest in sustainable security programs. Disinformation efforts to delegitimize the West have partly succeeded, as unstable coupist regimes in multiple countries have traded French and U.S. military assistance for brutal Russian mercenaries and PRC foreign investment. The continued withdrawal of Western forces will only intensify insurgencies as military assistance and the sustainable economic development provided through conditional international aid programs recede.
The long-term effects of these AI-generated disinformation campaigns in Sub-Saharan Africa are unclear, but AI’s role in Sahelian disinformation will continue to grow as the technology improves and becomes more accessible. Many companies offer generative AI technology for free or for a small charge, and there is currently no framework for regulating the content created with these products. Social media platforms are accessible by design, meaning the means of distributing this content are also widely available. The decentralized nature of AI-generated content means it will serve as a delegitimizing force for states as their control of the truth weakens. The very existence of AI-generated content has diminished people’s ability to discern falsity from reality. According to technologist and human rights advocate Sam Gregory, “We’re now approaching a world where it is broadly easier to make fake reality, but also to dismiss reality as possibly faked.” Experts share concerns that this destabilized information landscape will produce security crises even in the absence of AI-generated content, such as the 2019 attempted coup in Gabon. Although no deepfake technology was involved, the public perceived an address by President Ali Bongo as artificially generated because of the symptoms of his recent stroke, triggering an unsuccessful coup before experts could adequately debunk the conspiracy.
Sahelian insecurity also carries clear consequences for great power competition. The current trend of Western delegitimization, exacerbated by AI-generated content, will inhibit Western powers’ ability to maintain the status quo in international institutions. Humanitarian intervention and the “Responsibility to Protect” will continue to decline in international favor, and the protection of human rights will fall out of the purview of multilateral organizations. As international norms shift, so will global support for Western international initiatives. Western assistance to security operations will confront increasingly hostile regional governments, intensifying insurgencies and religious extremism in the region as security initiatives become less cooperative. Without actionable international policy, AI-generated disinformation threatens to push much of the developing world, including the Sahel, into a spiral of insecurity.
International Inaction
International efforts to stabilize countries in the Sahel require stronger policies to combat AI-generated disinformation. The UN stresses that AI-generated content poses a serious threat to international security, democracy, the rule of law, and information integrity generally. On this basis, the UN has argued for the global regulation of AI but has not yet sponsored a major international treaty, admitting that “a global, multi-stakeholder approach is required, alongside binding commitments” to combat AI-generated disinformation. As a small step in the right direction, the Council of Europe sponsored the very first global treaty on the regulation of AI. However, the treaty was signed only by the European Union and nine other individual countries, including the United States and the United Kingdom. Neither of the countries most notorious for disinformation campaigns in the Sahel, Russia and the PRC, was included in the negotiations or among the signatories. The UN Educational, Scientific, and Cultural Organization (UNESCO) adopted the first global agreement on AI in 2021, though that text is notably not binding on member states.
Non-UN intergovernmental organizations, such as the Global Partnership on Artificial Intelligence hosted by the Organisation for Economic Co-operation and Development, may serve as promising avenues through which binding policy can be implemented for member countries. The problem with smaller intergovernmental organizations is that refusing to join carries less of an opportunity cost, meaning that revisionist powers that use AI for disinformation are unlikely to become members. While policy may be enacted requiring members to impose sanctions on offending countries outside the organization, smaller or region-focused intergovernmental organizations are unlikely to gain jurisdiction over countries that use generative AI for disinformation, simply because those countries are unlikely to participate.
Regulating AI-generated disinformation is more effective when accompanied by regulation of other aspects of information dissemination, such as social media. However, political sensitivity regarding freedom of opinion and expression, together with international institutions’ failure to serve as a global source of truth, makes it difficult to design the guardrails within which such dialogue can take place. As truth in the information landscape becomes less and less discernible, the international community must develop a framework that treats intentional misrepresentations of reality for the purposes of political intrigue and destabilization as violations of human integrity. Combatting disinformation is difficult for democratic nations because expectations of transparency, commitment to the truth, and the promise of freedom of expression leave them at a disadvantage against nations more willing to weaponize information. Barriers to countering disinformation are highest in societies that lack technological expertise, educated populations, reasonable levels of public trust, or stable institutions. It is virtually impossible for nations with democratic values to “win hearts and minds” in this environment without a dramatic shift in foreign policy strategy. To properly address this worrying trend, the political consequences of state-sponsored disinformation campaigns must become more severe. Without recognizing the severity and global importance of the security situation in the Sahel, the international community will fail to remedy the rising threat of disinformation.
International institutions cannot afford a passive policy toward AI technologies in information warfare. The falsehoods these technologies produce are convincing enough for global audiences, let alone for societies struggling to provide the education and technological expertise necessary to avoid being victimized by a manipulated information landscape. Over-reliance on the strength of objective truth and on the democratization of the internet could prove catastrophic for Western foreign policy objectives, and the lack of a severe response to AI-generated content in disinformation campaigns will result in unacceptable global outcomes.
Views expressed are the author’s own and do not represent the views of GSSR, Georgetown University, or any other entity. Image Credit: OpenAI DALL-E