
On 7 October 2023, Hamas launched a large, coordinated assault into Israel. Israel enjoyed massive military and intelligence superiority. International partners had warned it of a threat - “something big” - from Hamas on the near-term horizon. Its own services had collected intelligence indicating Hamas attack preparations across the Gaza border, including daily aggressive drone activity, training assaults against replicas of Israeli military installations and vehicles, and operatives lining the border with explosive devices. They were also aware of Hamas’ intention to raid Israel and abduct its citizens. Despite these advantages, Israel’s intelligence could not turn information into insight, nor insight into action. As military intelligence chief Major General Aharon Haliva admitted just days after the attack: “we failed to warn of the terror attack carried out by Hamas”.

The assault caught Israel’s intelligence services and political and military decision-makers alike by surprise, leaving them unprepared to thwart, and underprepared to defend against, an assault by a weaker opponent. The ramifications of this warning failure were as massive as they were horrific, and they continue to unfold to this day, most recently demonstrated by Israel’s airstrikes on Hamas leaders in Qatar.

Clear signals of an impending attack were present, yet they were misinterpreted. By one account, Israel’s intelligence apparatus operated with the faulty assumption that Hamas would refrain from aggression that could spark counterattacks, leading the higher echelons of Israeli intelligence to disregard the reports they were receiving on Hamas’ attack intentions and preparations. Major General Haliva has since attributed the warning failure to “systemic and broad” causes.

Strategic surprise despite available indicators is a well-known challenge in intelligence warning. Information indicating an impending threat can be missed, misjudged, or dismissed due to cognitive biases, warning fatigue, or a prevailing belief in the adversary’s strategic restraint. Why intelligence warning fails, and under what conditions it succeeds, are questions that sit at the heart of our recent book Contemporary Intelligence Warning Cases: Learning from Successes and Failures (Edinburgh University Press, 2024). In it, we explore contemporary cases of intelligence warning, not only failures like 7 October 2023, but also lesser-known successes across domains such as terrorism, war, pandemics, and technology. Drawing on cases from near and far, we aim to expand the scholarly and practical understanding of how warning works – and why it so often doesn’t.

The highest purpose

The 7 October 2023 attack is but one recent example of the importance of, and severe challenges associated with, intelligence warning. It draws its particular importance from the potentially dire consequences of failure. In extremis, these amount to mass casualties, national traumas, loss of territory or sovereignty, and war. As Israel’s military intelligence chief underscored in conceding failure to warn about the attack, “we failed in our most important mission”. 

Due to the paramount importance of the warning function, some notable intelligence bodies were born to warn. The US established both the Central Intelligence Agency (CIA) and the Office of the Director of National Intelligence (ODNI) to bolster the national warning function in the wake of major failures, respectively the Japanese attack on Pearl Harbor in 1941 and al-Qaeda’s attack on the US east coast on 11 September 2001. Many other intelligence services have warning written into their regulatory documents. For instance, the Norwegian Intelligence Service Act establishes that “the Norwegian Intelligence Service shall warn the Norwegian authorities of threats” and the Norwegian Armed Forces intelligence doctrine treats warning as the main purpose of intelligence.

The greatest challenge

While warning is a core intelligence task, it is also one of the most challenging. Warning requires intelligence services to produce a picture of an uncertain future that is credible enough to convince decision-makers to invest their hard-pressed resources to redraw it. It is often a matter of addressing mysteries rather than stealing secrets and solving puzzles. And unlike most intelligence, it demands some form of decision-maker response to succeed, although, admittedly, this response need not be the mobilisation of countermeasures. A conscious and informed decision not to act also suffices.

Intelligence risks not only missing threats that are real but also warning of threats that are not. It might be tempting to argue that such false alarms are no big deal; that, after all, it is better to err on the side of caution by warning too much rather than too little. But the risks are very real. In extreme cases, warning decision-makers of threats that do not exist can produce consequences rivalling those of failing to warn them of threats that do.

Authoritative voices treat US intelligence warnings of Iraqi weapons of mass destruction (WMD) that did not exist as just such a high-impact, false-positive warning failure. President George W. Bush invaded Iraq, triggering a war that killed hundreds of thousands and, in dismantling state institutions and destabilising the region, produced fertile breeding grounds for extremism. To the extent that this warning failure left President Bush surprised, it was not by the threat of Iraqi WMD, but rather by the lack of it.

Warning that is so successful that it prevents a threat from materialising can create the illusion that the threat never existed in the first place; that intelligence cried wolf and that decision-makers wastefully adopted costly countermeasures. The reputational damage that follows false alarms, or successful warning that comes across as a false alarm, can make intelligence services more conservative in their future warning efforts, and it can degrade decision-makers’ trust in and responsiveness to future warning, both at the risk of further failures.

No shortage of failures

Unsurprisingly then, there is no shortage of failures. We know this because, unlike most intelligence, warning failures tend to force their way into the public limelight. As Hamas’ surprise attack painfully demonstrates, the consequences of warning failure are often devastatingly tangible, and readily observed and felt by the broader society. 

This brings attention not only to the performance of those producing warning, but also to those responsible for acting on it. Warning failure is not always the result of failures within the intelligence community itself. It can be, and often is, the result of more systemic issues, frequently reflecting shortcomings in the interaction between intelligence and decision-making, where signals are misunderstood, ignored, or set aside as politically inconvenient. And it often triggers intense public, journalistic, political and sometimes judicial pursuit of answers to the questions of what went wrong and who is at fault. Many a warning failure has ended up the object of public hearings and investigations in democratic societies, forcing intelligence chiefs and decision-makers to detail their efforts to detect and thwart a threat against national interests that ended up materialising. And many a head has rolled at the end of such processes, including Israel’s military intelligence chief following the 7 October attack, France’s military intelligence chief following Russia’s February 2022 full-scale invasion of Ukraine, and the head of the US Secret Service following the July 2024 assassination attempt on then former President and presumptive Republican presidential nominee Donald J. Trump.

Expanding the literature

For the above-mentioned reasons, warning failures typically generate voluminous and detailed information available for scholarly attention. This, in turn, has given rise to a sizeable literature on intelligence warning. Historical examinations of US and, to some extent, UK intelligence failing to detect threats such as wars, invasions and terror attacks dominate this literature. In our recent book, we examine contemporary rather than historical cases, broaden the geographic focus of our examination to include experiences from such places as Norway, France, Pakistan, Syria, Ukraine, and beyond, examine warning successes in addition to failures, and consider also presumably lower-impact threats that are not traditionally associated with intelligence services, such as pandemics, threats against cultural heritage, environmental threats, and financial crises.

Some established truths persist. Prior to Russia’s annexation of Crimea, Ukrainian institutions suffered various forms of bias. Prior to the 6 January 2021 insurrection on Capitol Hill, the Trump administration politicised intelligence in suppressing US assessments on the domestic threat. At the onset of the COVID-19 pandemic, US leaders’ mistrust of their intelligence services led them to ignore intelligence on the rapid spread of a deadly respiratory virus within and across China’s borders. The complexity and volume of financial data produced an information overload that overwhelmed US analysts and regulators prior to the 2008 financial collapse on Wall Street and the ensuing global financial crisis. And, in the lead-up to the 2015 terrorist attacks across Paris, poor interagency cooperation prevented French services from aggregating the “[staggering] amounts of available information suggesting that a major attack was about to happen”. These well-known challenges – bias, politicisation, mistrust between intelligence and policy, information overload, and cooperation issues between intelligence services – combined with others to produce notable and consequential warning failures in all these cases.

While these failures offer sobering lessons, they also raise the question: under what conditions does warning succeed? Studying such cases can help uncover the organisational, political, and cognitive factors that enable timely and effective response. Success is sometimes unequivocal, such as when UK authorities outright thwarted the 2006 al-Qaeda plot to down transatlantic commercial airliners with suicide bombers armed with improvised liquid explosives. Warnings in the wake of earlier terrorist attacks, among them 9/11 and the 2005 terror attack on London’s public transport system, had compelled UK authorities to make counterterrorism their top priority. UK services significantly increased counterterrorism surveillance efforts which, coupled with close domestic and international cooperation, enabled them to identify and disrupt a plot whose consequences had the potential to eclipse those of the 9/11 attacks.

At other times, success is just as real but less obvious at face value, such as when US intelligence over the course of decades warned American decision-makers of North Korea’s nascent nuclear programme, and these decision-makers responded with enticement and coercion aimed at halting Pyongyang’s development of nuclear weapons. Similarly, US and UK intelligence repeatedly produced accurate and timely warnings of Russian intentions of and preparations for a major military offensive in Ukraine, and their decision-makers responded with countermeasures aimed at compelling Moscow to stand down and at helping Kyiv prepare and strengthen its defences. Despite North Korea’s emergence as a nuclear power with its first test at the Punggye-ri nuclear test site in 2006 and Russia’s full-scale invasion of Ukraine in February 2022, neither case represents failed intelligence warning, as intelligence issued warnings to which decision-makers responded with the countermeasures they found available and appropriate. These cases also remind us that warning need not fully prevent or thwart a threat to be successful. Enabling decision-makers to engage a threat informed and prepared, and reducing its negative consequences, also represent at least partial warning successes.

Strategic warning, contrary to an authoritative voice, is indeed actionable, but it requires different, often broader-stroke countermeasures. Simply put, strategic warning requires strategic response, such as when, in response to strategic warning of Russian interest in acquiring sensitive Norwegian technology via business acquisitions, Norwegian lawmakers broadly empowered the government to intervene in activities putting national security interests at risk. Less than three years later, in March 2021, the government invoked its new legal powers to thwart Russia’s attempt to acquire Bergen Engines AS, a Norwegian company whose technology and engines Norwegian services assessed would be of great military-strategic value to Moscow.

As vital and difficult as ever

Intelligence warning remains as important and challenging as ever. It is not simply about predicting the future; it is about shaping it in your favour. Intelligence warning is thus a tool of both foresight and influence, enabling political actors to mobilise and reallocate resources, prepare populations, and deter, prevent, or pre-empt adversaries. Its success depends on the integration of knowledge, trust, and timely reaction - and its failure often reflects where that integration breaks down.

Disclaimer: All contents of this article represent the views of the authors alone.

Dr Bjørn Elias Mikalsen Grønning is Deputy Research Director at the Norwegian Intelligence School. He holds a PhD in Political Science from the Norwegian University of Science and Technology. Bjørn has served in senior academic, advisory and operational positions across the Norwegian defence sector, including the Norwegian Defence Staff, the Norwegian National Security Authority, the Norwegian Defence University College, the Norwegian Defence Intelligence College and the Royal Norwegian Air Force Training Inspectorate.

Professor Stig Stenslie is Research Director and Head of the Centre for Intelligence Studies at the Norwegian Intelligence School in Oslo, and Professor at Oslo New University College. He holds a PhD in Political Science from the University of Oslo and has held visiting research positions at institutions including the King Abdulaziz Foundation for Archives and Research (Darah) in Riyadh, Columbia University’s Weatherhead East Asian Institute in New York, the East Asia Institute at the National University of Singapore, and King’s College London.