The tactical level of war is where military strategy culminates: operational art translates strategic goals into a coherent series of tactical actions. At the forefront of strategy, tactical manoeuvre plans are nested within the bigger picture. The Australian Army is accustomed to navigating these levels of conflict and derives a competitive advantage from two fundamental functions: combined arms effects and tactical command posts. Combined arms doctrine is a proven battle-winning formula: combat and support elements unify to leverage collective strengths while offsetting individual vulnerabilities. However, balancing the many variables of close combat takes careful planning and accurate intelligence to produce optimal tactical results.
To achieve favourable mission effects, the tactical command post is enshrined in foundation warfighting doctrine as an essential enabler and the source of mission command execution.
Tactical command posts are currently equipped with digital battle-management systems that provide planning tools and decision-support aids, which are manipulated and analysed by human operators. This manual approach to tactical design is guided by formal military appreciation processes and by regular contributions from the commander. At the brigade level this can mean days of planning, targeting analysis and multiple staff briefings to shape the blueprints for manoeuvre. In a deliberate planning cycle, the protracted process and the inherent ambiguities of the battlefield generally guard against rushing to failure.
But how will the function and form of tactical command posts evolve in an age of artificial intelligence, where complexity and vast quantities of data can be rapidly distilled into clarity by a computer program that learns and iteratively improves its performance?
Artificial intelligence with the capacity to identify patterns in mass data and assist decision-making is already a reality, but has yet to be applied in a tactical command post setting. Project Maven is using artificial intelligence software to analyse US military drone imagery to track terrorists, and there are plans for expansion. Other nations are also investing in artificial intelligence programs for military applications, so it may only be a matter of time before smart battle-management systems emerge. While this represents a step-change in the conduct of war, automated mission support systems may be exponentially more capable if accelerated by quantum super-computing technology.
Quantum super-computers are expected to revolutionise the information technology industry through their unprecedented processing power, and in the context of artificial intelligence mission systems the trajectory is clear: tactical decision-making at machine speed appears to be on the technology horizon, and in future conflicts it may erode Army’s longstanding and deeply entrenched competitive advantage. So while instantaneous quantum processing speeds may be years away from being operationalised, preparing for this alternative future should ideally start soon, noting that artificial intelligence is likely to be a continual source of strategic surprise.
An adversary commander augmented by digital super-intelligence mission support systems in a quantum command post might routinely gain the initiative in close combat situations.
If augmented command threats materialise, only the quality of Army’s equipment, tactics and leadership at the point of battle could be relied on in response to tactical surprise. As Napoleon remarked, ‘terrain is to courage like three against one’; so regardless of how good Army’s soldiers are, if they have to fight at a disadvantage they may fail. This dystopian circumstance would have demoralising effects if soldiers knew their moves were being anticipated by an enemy with predictive quantum technologies. The moral dislocation may be amplified if the adversary possessed a digital dossier on the Australian commander(s), gained through cyber surveillance.
Relying on legacy battle-management systems and planning processes while engaging a quantum-command-post-enabled opponent may be an exercise in futility.
While this hazard is yet to unfold, indicators are emerging of artificial intelligence surpassing human cognitive abilities. Chess Grand Masters can now be quickly beaten by artificial intelligence software, and extrapolating this software engineering achievement suggests that Army’s Tactics Grand Masters might also be outclassed by artificial intelligence in the future, particularly if intelligent mission systems were boosted with quantum brain power. This would mean that even the most experienced military officers in command appointments may be at risk of tactical defeat by taxonomies of shrewd war algorithms.
While two-dimensional games of chess are not comparable to the incongruities of multi-domain battle, or to managing chaos in close combat, there are analogous themes for war algorithm design and the engineering of intelligent decision-support source code.
A game of chess involves opposing forces employing the combined arms effects of pieces with diverse capabilities. The Queen is comparable to attack aviation, able to roam across the entire battlefield, while Rooks are akin to tanks that destroy targets with direct fires. Knights are cavalry that manoeuvre through infantry (Pawn) battle positions and shape the battlefield. Artillery is provided by Bishops, and the King is a command post to be defended. Furthermore, Chess Grand Masters and brigade commanders both sequence their moves in time and plan several steps ahead, influenced by threat dispositions and bounded by the restrictions of the chess board (aka terrain).
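The analogy can be made concrete. Chess engines look several moves ahead with game-tree search, assuming the opponent also plays its best reply; a decision-support algorithm could war-game courses of action the same way. The sketch below is illustrative only: the minimax routine is the standard technique, but the ‘flank’/‘frontal’ options, the opponent’s replies and the payoff values are invented for the example.

```python
# Illustrative sketch only: minimal minimax search of the kind used by chess
# engines, applied to a toy "course of action" tree. Force options, replies
# and payoff values below are invented for illustration.

def minimax(state, depth, maximising, successors, evaluate):
    """Search `depth` moves ahead, assuming the opponent also plays optimally."""
    moves = successors(state)
    if depth == 0 or not moves:
        return evaluate(state), state
    best_value, best_state = None, None
    for next_state in moves:
        value, _ = minimax(next_state, depth - 1, not maximising,
                           successors, evaluate)
        better = (best_value is None or
                  (maximising and value > best_value) or
                  (not maximising and value < best_value))
        if better:
            best_value, best_state = value, next_state
    return best_value, best_state

# Toy decision tree: a "state" is the tuple of moves taken so far; the payoff
# table scores terminal outcomes for Blue (higher is better for Blue).
PAYOFFS = {
    ("flank", "reinforce"): 2, ("flank", "withdraw"): 5,
    ("frontal", "reinforce"): -3, ("frontal", "withdraw"): 1,
}

def successors(state):
    if len(state) == 0:
        return [("flank",), ("frontal",)]                       # Blue's options
    if len(state) == 1:
        return [state + ("reinforce",), state + ("withdraw",)]  # Red's replies
    return []                                                   # terminal

def evaluate(state):
    return PAYOFFS.get(state, 0)

value, move = minimax((), 2, True, successors, evaluate)
# Blue selects "flank": its worst case (Red reinforces, payoff 2) beats the
# frontal option's worst case (Red reinforces, payoff -3).
```

The point of the sketch is the planning logic, not the numbers: like a brigade staff war-gaming enemy reactions, the algorithm judges each option by its outcome against the opponent’s best counter-move.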
Quantum-speed systems will target and neutralise Army’s tactics, operations and strategy, rather than its capabilities. Soldiers may be incidental to the violence such systems unleash. At worst, our forces would be like innocent bystanders suddenly caught in a drive-by shooting.
While an apocalyptic portrayal of quantum artificial intelligence (QAI) futures is open to conjecture, there is mitigation. QAI has a critical vulnerability: its reliance on accurate data. This is no different from extant tactical command posts, where decisions are influenced by precise intelligence reports. So there is cause for optimism if threat QAI mission systems can be foiled by tactical deception that confuses their data flow. The human factor should also not be underestimated, given the instinctive capacity of a commander’s tactical foresight to prevail during a contest of wills.
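The data-dependence vulnerability can be illustrated with a deliberately naive sketch. A fusion algorithm that assesses the enemy’s main effort from sensor detections can be misled by decoys injected into its feed; the sensor reports, axis names and vehicle counts below are invented, and real fusion systems are far more sophisticated, but they share the same dependence on the integrity of input data.

```python
# Illustrative sketch only: how deceptive inputs can flip an automated
# assessment. Sensor reports and counts are invented for the example.

from collections import Counter

def assess_main_effort(sensor_reports):
    """Naive fusion: the axis with the most detected vehicles is judged
    to be the enemy's main effort."""
    counts = Counter()
    for axis, vehicles in sensor_reports:
        counts[axis] += vehicles
    return counts.most_common(1)[0][0]

genuine = [("north", 30), ("south", 12)]       # real weighting is north
assert assess_main_effort(genuine) == "north"

# Deception: inflatable decoys and emitters simulate 25 vehicles in the south.
deceived = genuine + [("south", 25)]
assert assess_main_effort(deceived) == "south"  # the model is misled
```

However smart the downstream algorithm, its assessment is only as sound as the detections feeding it, which is precisely the seam that dedicated deception forces would exploit.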
The Napoleonic axiom ‘never interrupt your enemy when they are making a mistake’ illustrates a commander’s prescience; knowing when to exercise tactical patience versus tactical impatience is a critical skill that might be problematic to replicate in a machine.
Consequently, nascent QAI may have limitations, but future generations (a ‘QAI v3.5’, say) promise to be potent. Imagine insurgent groups with QAI technology, enabling them to become increasingly lethal and difficult to contain. Moreover, in high-intensity conflict QAI might be the decisive factor between combatants with 5th-generation capabilities. Consider a stalemate between high-tech nation states in a future war: perhaps the side with the superior QAI technology might break the deadlock in the same manner as tanks did in the First World War.
A QAI ‘Rommel’ or ‘Guderian’ orchestrating ‘Blitzkrieg v2.0’, or a quantum speed mission system that synthesises and applies tactics of the best military commanders in history, could be a development pathway that artificial intelligence software engineers explore.
So how can Army adapt to this technology upheaval? Initially, the phenomenon could be included in a line of modernisation, as QAI may become a key feature of Robotics and Autonomous Systems strategy. Commanders and staffs will benefit from the cognitive cover that QAI may generate. Force structure alterations might also be necessary, given tactical deception will likely be a pivotal means to thwart smart mission systems. It follows that Sun Tzu’s maxim of ‘all war is based on deception’ may find new relevance in efforts to counter digital super-intelligence threats.
Dedicated tactical deception force structures could include deception platoons at unit level, a deception sub-unit in each combat brigade and a deception unit at divisional level.
Dedicated deception forces such as the Ghost Army were successfully used by Allied forces in WWII, so this proven concept could be trialled again. Deception planning should also be evaluated during career courses and field training. Additionally, inventive ‘dummy’ equipment could be acquired via a bespoke project: fake tanks and diversion drones would obfuscate threat sensors and QAI information fusion. Standing deception forces and specialist equipment might seem a luxury, but may prove indispensable. So tactical sleight-of-hand will be useful; yet augmented command via QAI mission support may be the prime remedy for the Achilles heel of human decision-making.
About the author: Lieutenant Colonel Greg Rowlands is an infantry officer with 27 years of Army service. He is a graduate of the Army Command & Staff College and the Capability & Technology Management College. He has served as a staff officer in unit, formation and joint task force headquarters, including as a battle staff and combat tactics instructor at the US Army Infantry School.
 Quantum physics has multiple applications, which are currently being researched in addition to Quantum Super-Computing, such as Quantum Radar and Quantum Communications. A Quantum Command Post might also feature these game-changing technologies, making it an even more formidable threat in future conflicts. https://www.news.com.au/technology/innovation/military/quantum-radar-canada-joins-race-to-develop-new-allseeing-eye-to-defeat-stealth-interference/news-story/acc1d00bf038a6da8fa9d11f8787fa29
 Moral dislocation is the undermining of a fighting force’s legitimacy by targeting the source of moral strength, such as breaking the bonds between leadership, people and military forces or manipulation of attitudes of stakeholders, populations and military forces. A force is morally dislocated when these attitudes are unsynchronised and fail to complement each other. Australian soldiers fighting at a tactical disadvantage due to a lack of QAI technology may generate a destabilising effect on morale and efficacy that would likely resonate well beyond the military.
 Artificial intelligence software and quantum processing enablers could be deployed at all levels of conflict. But this would create technical complexity due to the increased number of variables (i.e. considerably more chess pieces on several 3D chess boards) in joint warfighting scenarios. Therefore taxonomies of algorithms designed for each level of command post from tactical to strategic may be necessary to provide ‘total force’ risk mitigation from QAI threats.
 It will be vital to maintain a human-in-the-loop with QAI mission support systems to ensure accountability remains vested in the commander and not the algorithm. Checks and balances will be necessary for combat mission approval.
 Inclusive of bespoke cyber and electronic warfare capabilities designed to disrupt and circumvent QAI technologies.
 Employment of QAI technology will trigger ethical dilemmas, cyber security concerns and questions under the Geneva Conventions regarding the use of force. This includes the key question of automated targeting, which will no doubt arise during QAI system development. Resolving these critical issues will be paramount if Army elects to progress a comprehensive autonomous modernisation program: http://blogs.icrc.org/law-and-policy/2018/04/11/human-judgement-lethal-decision-making-war/