The Rise of Artificial Intelligence in Modern Warfare: A New Era of Combat by Syed Tahir Abbas



Image source: www.aspistrategist.org


Syed Tahir Abbas

Researcher at the College of History, Culture, and Nationalities, Southwest University, Chongqing, China.


Throughout history, war has driven humanity's most significant technological leaps. From the tank's debut in World War I to the atomic bomb in World War II, military necessity has consistently fueled innovation. Today, artificial intelligence (AI) is emerging as the latest tool to reshape warfare, proving pivotal in the conflicts in Gaza and Ukraine. AI is no longer just a support system; it is becoming a primary actor, altering both strategy and execution in real time. This article examines how AI is changing the face of warfare, the ethical questions it raises, and its potential long-term consequences.


AI on the Battlefield: A Game-Changer or Ethical Dilemma?

The concept of AI in warfare may once have seemed like science fiction, but it has now transitioned from theory to practice. Militaries around the world are not just exploring AI; they are deploying it on the battlefield, making decisions that impact life and death. From Israel’s controversial use of AI to generate "kill lists" to Ukraine’s reliance on AI for strategic planning, this technology is defining modern combat. But at what cost?


Israel’s "Lavender": The AI Killing Machine

One of the most significant, and most controversial, advances in military AI comes from Israel, where a system known as "Lavender" has been deployed by the Israel Defense Forces (IDF). Lavender is an AI-powered system designed to identify individuals it deems potential threats, specifically targeting those suspected of involvement in terrorism. The system processes data from a wide range of sources, including surveillance, communication intercepts, and social media, to produce a list of suspected terrorists. Reports suggest that Lavender has compiled a list of over 37,000 Palestinians, each flagged as a potential Hamas militant.

Once the system identifies someone, soldiers reportedly act on that information without further scrutiny, often launching strikes based on Lavender's output alone. The conventional military procedure of human intelligence gathering, verification from multiple sources, and proof of involvement is frequently bypassed in favor of the AI's rapid identification. This reliance raises significant ethical concerns, particularly because Lavender reportedly has an error rate of around 10%: on a list of 100 targets, roughly 10 could be innocent civilians mistakenly branded as terrorists.

The IDF officially denies using AI in such a direct capacity, maintaining that human oversight remains a critical part of its targeting process. Investigative reports challenge this, however, pointing to the widespread and growing use of Lavender in targeting decisions. Critics warn that this level of automation erodes the accountability and moral responsibility traditionally associated with human decision-making in war.
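To put that reported error rate in perspective, the short sketch below runs the back-of-the-envelope arithmetic implied above, using only the figures cited in this article (an error rate of roughly 10% and a list of about 37,000 flagged individuals). It is an illustrative calculation, not a description of how Lavender itself works, and both numbers are reported estimates rather than official figures.

```python
# Illustrative arithmetic only: the 10% error rate and the 37,000-person
# list are figures reported in this article, not an official specification.
ERROR_RATE = 0.10        # reported share of flagged people who are misidentified
FLAGGED_PEOPLE = 37_000  # reported size of the Lavender target list

expected_misidentified = ERROR_RATE * FLAGGED_PEOPLE
print(f"Expected misidentifications at a {ERROR_RATE:.0%} error rate: "
      f"{expected_misidentified:,.0f} of {FLAGGED_PEOPLE:,} flagged people")
# -> Expected misidentifications at a 10% error rate: 3,700 of 37,000 flagged people
```

At that scale, even a seemingly small error rate translates into thousands of potential misidentifications, which is why the absence of further human verification draws so much criticism.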


Ukraine’s AI-Enabled Drones: The Next Frontier

On another front, Ukraine has embraced AI in its fight against Russian forces. Unlike Israel's Lavender, Ukraine's AI applications are focused more on strategic planning and battlefield efficiency. AI-enabled drones are a critical component of Ukraine's defense strategy: they are equipped with technology that allows them to keep navigating under electronic jamming and to find their targets with high precision. While these drones do not autonomously decide whom to strike, their ability to operate in heavily contested airspace gives Ukraine a significant advantage in mobility and accuracy.

Beyond drones, AI is used in war planning. Ukraine relies on AI software to analyze satellite images, drone footage, and reports from the battlefield, enabling commanders to track enemy movements and plan operations with greater precision. This dramatically speeds up decision-making, allowing Ukraine's military to adapt quickly to the evolving nature of the war.

Ukraine is not fully satisfied with its current AI capabilities, however. Officials have called for more advanced systems, closer in kind to Israel's Lavender, that could autonomously identify and neutralize enemy targets. As the conflict drags on, Kyiv is pushing for technological innovations that could turn the tide in its favor, including more sophisticated AI for identifying and targeting Russian forces under electronic warfare conditions.

The United States and China: AI Superpowers

While AI is already a key player in the conflicts in Gaza and Ukraine, the true AI arms race is being led by the world’s two largest military powers: the United States and China. Both nations are rapidly developing AI technologies to integrate into their defense systems, marking the beginning of a new kind of arms race—one defined not by nuclear weapons but by algorithms.


The United States: 800 AI Projects and Counting

The U.S. military is leading the charge, with more than 800 active AI projects spanning many aspects of warfare. These projects range from systems that process vast amounts of battlefield data in real time to those that optimize weapon resupply routes. The U.S. has already demonstrated AI's capabilities in recent military actions, particularly in the Middle East: in strikes conducted in Yemen, Iraq, and Syria, it used AI to identify targets such as rocket launchers and ships and to pinpoint their locations before initiating attacks.

One of AI's most significant advantages for the U.S. military is its ability to process and analyze vast amounts of data quickly. AI systems can scan satellite images, drone footage, and radar inputs, identifying potential threats far faster than human analysts could. This speed allows commanders to make better-informed decisions and respond to threats more swiftly.


China: A Growing AI Power

Not far behind the U.S., China is also heavily investing in AI for military applications. The People’s Liberation Army (PLA) is developing a comprehensive network of unmanned weapons, AI-powered sensors, and surveillance systems that are being deployed in the waters surrounding China. These systems are designed not only to gather intelligence but also to conduct electronic warfare. AI is expected to play a pivotal role in processing the data gathered from these sensors, improving the PLA’s ability to disrupt enemy communications, jam radar systems, and launch cyber-attacks. In many ways, China’s strategy mirrors that of the U.S., focusing on automation and data processing to give its military an edge. However, China’s emphasis on unmanned systems suggests that it sees AI as a way to limit human involvement in warfare as much as possible, automating tasks traditionally performed by soldiers.


The Global AI Arms Race

As more nations adopt AI for military use, a new arms race is emerging—one defined not by nuclear weapons or fighter jets, but by algorithms and autonomous systems. The stakes are high, as AI can provide militaries with unprecedented capabilities in surveillance, intelligence gathering, and combat. However, this rapid adoption of AI also raises serious ethical and legal questions. How much decision-making power should be delegated to machines? What happens when an AI makes a mistake and innocent lives are lost? International law has yet to catch up with these advancements, leaving a gray area in terms of accountability and responsibility. The possibility of AI systems making life-and-death decisions without human oversight is becoming increasingly real, and the implications are profound.


The Ethical Dilemma: Automated Warfare and Accountability

One of the biggest concerns surrounding the use of AI in warfare is the potential erosion of accountability. In traditional combat, human soldiers make decisions based on intelligence, experience, and moral judgment. When AI systems like Israel's Lavender are given the power to identify targets autonomously, the line between human decision-making and machine judgment becomes blurred. What happens when an AI system makes a mistake? Lavender, for instance, reportedly has an error rate of around 10%, meaning that innocent people could be labeled terrorists and killed. In such cases, who is responsible? Is it the programmer who designed the algorithm? The military official who approved its use? Or the machine itself? As AI continues to evolve and becomes more deeply integrated into warfare, these ethical questions will need to be addressed by policymakers, military leaders, and international legal bodies.


Conclusion: A Precarious Future

The rise of AI in warfare represents a fundamental shift in how conflicts are fought. Whether it's Israel’s Lavender, Ukraine’s AI drones, or the U.S. and China’s race to dominate AI military technology, the battlefield is changing rapidly. While AI offers unprecedented advantages in terms of speed, precision, and data processing, it also raises serious ethical, legal, and moral concerns. As AI systems continue to take on more responsibility in warfare, the world will need to grapple with the consequences of machines making life-or-death decisions.

In the years to come, AI’s role in global conflicts is likely to expand, leading to even greater reliance on automated systems. The question is not whether AI will change the face of warfare—it already has—but how humanity will adapt to this new era of combat.



Syed Tahir Abbas

Student at the College of History, Culture, and Nationalities, Southwest University, Chongqing, China, specializing in the History of International Relations.

Email: syedtahirabbasshah46@gmail.com


 
