Can and should deep learning predict war?



For centuries, war has been seen as a failure of human foresight. Diplomats respond, militaries prepare, and academics analyze, but conflict is rarely predicted accurately in advance. Now, that may be changing. Advances in deep neural networks are moving conflict prediction from guesswork toward more accurate, data-driven forecasting. The question is no longer whether machines can predict conflict, but how this predictive ability should be harnessed and regulated.

The roots of this transformation can be traced back to the earlier use of neural networks in conflict modeling. Even basic architectures such as multilayer perceptrons (MLPs) and radial basis function (RBF) networks have shown that nonlinear, data-driven methods can outperform traditional statistical models. These systems capture complex interactions between variables such as economic conditions, alliances, and geography that linear regression cannot. Empirical evidence suggests that MLP models can reach predictive accuracy above 75%, outperforming other techniques precisely because they model the interrelated effects among variables. Yet that same interconnectedness is what makes deep networks extremely difficult to interpret.
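To make the MLP-versus-linear-model point concrete, here is a minimal sketch using scikit-learn. The data is entirely synthetic and the feature names are hypothetical; the label is driven by a nonlinear interaction (low growth coinciding with border tension) of the kind a linear model struggles with:

```python
# Illustrative sketch only: an MLP on synthetic "conflict" data.
# Features and labels are invented for the example; no real dataset is used.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Hypothetical country-year features: GDP growth, alliance count, border tension
X = rng.normal(size=(n, 3))
# Risk rises when low growth coincides with high border tension (an interaction)
logits = -1.0 + 2.0 * (-X[:, 0]) * X[:, 2] + 0.5 * X[:, 1]
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
acc = clf.score(scaler.transform(X_test), y_test)
print(f"held-out accuracy: {acc:.2f}")
```

The interaction term `(-X[:, 0]) * X[:, 2]` is exactly the kind of structure a linear model misses and a small MLP can recover.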

The main change is scale. Today’s deep learning systems can process large and diverse datasets, such as satellite images showing troop movements, financial flows that reveal economic stress, climate indicators that predict resource scarcity, and social media signals that capture public sentiment. These models do more than just analyze variables. They develop internal representations of conflict dynamics that are hidden, constantly changing, and highly nonlinear. This shift moves us from traditional theory-based modeling to large-scale pattern recognition.

This change has significant implications.

First, deep learning improves predictive accuracy by capturing complexity rather than simplifying the world. Because conflicts typically result from the interaction of multiple political, economic, environmental, and social forces, neural networks are well equipped to model these interactions. Their ability to approximate complex functions allows them to identify relationships that are difficult to analyze manually. Essentially, they act as “experts” trained on historical data, able to detect early warning signs that human analysts might miss.

Second, deep learning, if properly formulated, can provide a probabilistic view of conflict. Instead of a binary prediction of war or peace, the output is expressed as a probability, an assessment of risk under uncertainty. Techniques such as Bayesian neural networks and evidential frameworks allow these probabilities to account for uncertainty in both the data and the model parameters. This approach is essential in policy settings, where decision-making relies on risk assessment rather than certainty.
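A full Bayesian neural network is beyond a short example, but a bootstrap ensemble is a common, simpler stand-in that conveys the same idea: report a risk estimate together with a spread that reflects model uncertainty. Everything here, including the "risk indicators," is hypothetical:

```python
# Sketch: probabilistic risk with uncertainty via a bootstrap ensemble,
# a lightweight proxy for Bayesian neural network posteriors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 4))  # hypothetical risk indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train many models on bootstrap resamples of the data
ensemble = []
for _ in range(30):
    idx = rng.integers(0, n, size=n)
    ensemble.append(LogisticRegression().fit(X[idx], y[idx]))

# For a new case, report the mean risk and the disagreement across models
x_new = np.array([[1.0, -0.5, 0.2, 0.0]])  # a hypothetical new case
probs = np.array([m.predict_proba(x_new)[0, 1] for m in ensemble])
print(f"conflict risk: {probs.mean():.2f} ± {probs.std():.2f}")
```

The spread across ensemble members is what a decision-maker needs: a wide interval says "the model itself is unsure," which is very different from a confident 0.80.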

However, increasing predictive accuracy introduces new tensions. As predictions become more accurate, their impact grows. An 80%-accurate conflict prediction model can guide preventive diplomacy; a 95%-accurate model can influence military strategy, financial markets, and geopolitical alliances. In this context, prediction equals power.

This raises three fundamental challenges.

The first issue is causation versus correlation. Deep learning models are excellent at detecting patterns, but they do not inherently explain why those patterns occur. A model might predict conflict based on rising commodity prices, declining gross domestic product, or increasing online polarization; yet without additional structure, it cannot distinguish causation from coincidence. Policymakers may act on correlations that shift over time. Incorporating causal inference, counterfactual analysis, and experimental design into the machine learning process is therefore not optional but critical.
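The confounding problem is easy to demonstrate numerically. In this toy simulation, a hidden factor (say, regional instability) drives both "commodity prices" and "conflict intensity," producing a strong raw correlation that vanishes once the confounder is adjusted for. All variables are synthetic:

```python
# Toy demonstration of correlation without causation: a confounder Z
# drives both X ("commodity prices") and Y ("conflict intensity").
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
z = rng.normal(size=n)                        # hidden confounder
x = 0.9 * z + rng.normal(scale=0.3, size=n)   # looks predictive of conflict
y = 0.9 * z + rng.normal(scale=0.3, size=n)   # conflict intensity

raw_corr = np.corrcoef(x, y)[0, 1]

# Partial correlation: regress Z out of both variables, then correlate residuals
rx = x - np.polyfit(z, x, 1)[0] * z
ry = y - np.polyfit(z, y, 1)[0] * z
partial_corr = np.corrcoef(rx, ry)[0, 1]

print(f"raw: {raw_corr:.2f}, after adjusting for Z: {partial_corr:.2f}")
```

A model trained only on X would "predict" Y quite well right up until the confounder's behavior changes, which is exactly the failure mode the paragraph above warns about.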

The second important aspect is interpretability and reliability. Neural networks are often classified as “black boxes,” and this lack of transparency can be dangerous when it comes to conflict prediction. Decisions about war and peace should not be made by a system that cannot articulate its logic. Incorporating advances in explainable AI, such as feature attribution, model decomposition, and surrogate models, into competitive prediction frameworks is essential to keeping humans in control.
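One of the feature-attribution tools mentioned above, permutation importance, can be sketched in a few lines: shuffle each input in turn and measure how much the model's accuracy degrades. The data here is synthetic, with only the first feature actually informative:

```python
# Sketch of permutation feature importance on a small neural network.
# Synthetic data: only feature 0 determines the label.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 1000
X = rng.normal(size=(n, 3))   # hypothetical indicators
y = (X[:, 0] > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Shuffling an important feature hurts accuracy; shuffling noise does not
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

An analyst can read this output directly: the model's prediction hinges on feature 0, which is precisely the kind of articulated logic the paragraph argues a conflict-prediction system must provide.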

The third aspect is governance. Who owns the models? Who controls the data? Who decides how the predictions are applied? Without proper governance, predictive systems risk being weaponized and used not to prevent conflict, but to predict and exploit it. If some adversaries have superior predictive abilities, this could create intelligence asymmetries and destabilize global security. Managing AI in conflict prediction is therefore not just a technical issue. It’s a geopolitical issue.

There are also fundamental ethical issues. If we can predict a conflict, do we have a responsibility to intervene? What if acting on our predictions changes the actual outcome? This illustrates the classic problem of reflexivity. That is, predictions can have an impact on the very system they are trying to predict. Warnings of potential conflict may prompt diplomatic efforts and, if misunderstood, can escalate the situation. Conflict prediction is therefore an inherently active process. It’s an intervention.

Despite these obstacles, significant benefits can be achieved. Deep learning-driven early warning systems could facilitate proactive diplomacy at an unprecedented level. Resources can be used more effectively, humanitarian emergencies can be predicted earlier, and conflicts can be quelled before they escalate. In a world dominated by complex, interconnected threats, such capabilities are not optional, but essential.

However, technology alone is not enough. The future of conflict prediction depends on combining three elements: technical capacity, institutional governance, and ethical responsibility. Deep learning can predict where conflicts may occur, but it cannot determine the appropriate action to take.

It is still fundamentally a human decision.
