
CGDIP 2020 IOP Publishing

Journal of Physics: Conference Series 1627 (2020) 012002 doi:10.1088/1742-6596/1627/1/012002

A Command and Control System for Air Defense Forces with Augmented Reality and Multimodal Interaction

Long Chen*, Wei Wang, Jue Qu, Songgui Lei and Taojin Li
Air Force Engineering University, Xi'an, Shaanxi Province, China
*E-mail: [email protected]

Abstract. This work proposes a framework for a command and control system for air defense forces based on augmented reality technology. The system uses Microsoft HoloLens hardware as a carrier to realize the holographic display of battlefield situation information. It helps the commander perceive and control the battlefield situation efficiently, reducing the commander's decision-making time. Commanders can interact with the system efficiently through gestures and voice, deploy simulated military units through gestures, and send commands to weapon systems through the system. The system also supports real-time multi-person collaborative operation, enabling joint decision-making and decision sharing among commanders. The system is of great significance for improving operational efficiency and reliability, and at the same time provides an open platform for operational process planning, deduction, simulation and verification evaluation.

1. Introduction
In military science, the term "command and control" (C2) originated in The Art of War (1838), in which Jomini used "command" and "control of operations" when expounding army command and operational control [1]. A command and control system is the information system through which commanders and their command organs exercise command and control over combat personnel and main battle weapons and equipment. It is composed of hardware platforms for information processing, display, transmission and monitoring, together with software implementing the system's command and combat functions. The command and control system provides intelligence reception and processing, situation generation, decision support, combat simulation and evaluation, information display and distribution, tactical calculation, command issuance, security and confidentiality, force management, training simulation and other functions. Its basic task is to help commanders grasp the battlefield situation in time, formulate operational plans scientifically, and issue operational commands to the troops quickly and accurately.
On the modern battlefield, the situation changes rapidly, and the explosive growth of information poses new challenges to the commander's perception of the battlefield situation. Battlefield information must be transmitted quickly to the commander through the information display interface; after the commander makes a decision, control information must be fed back quickly to the command and control system through appropriate means. Traditional systems display information mainly on two-dimensional computer screens. Such displays are neither intuitive nor efficient, which increases the commander's cognitive load and lowers the efficiency of human-computer interaction. Augmented reality, as a tool that assists people in cognition

and interaction in the real environment, offers new solutions for the situational awareness of commanders. Augmented reality applied to military command and control systems can greatly improve the warfighter's situational perception [2]. This article uses AR and MR technology to develop an air defense and anti-missile holographic battlefield command and control system. The system provides a method for constructing holographic battlefield situation information and implementing command interaction, which can enhance the realism and immersion of situation awareness, raise the system's level of intelligent information service, and intelligently and efficiently provide commanders and staff with an intuitive battlefield situation picture and a visual information interaction interface, realizing an intelligent information service method for future advanced command posts that supports different combat personnel and different combat forms.

2. Background

2.1. Status Quo of AR Application in the Army


AR technology is also being applied in the military. To improve the effectiveness of human-computer interaction in command and control cabins, a number of US military research institutions have carried out research on next-generation command-cabin human-computer interaction. From 2008 to 2014 the US Defense Advanced Research Projects Agency continuously supported research on ARC4 (Augmented Reality Command, Control, Communicate, Coordinate), an augmented reality combat command human-computer interaction system aimed at tactical units in outdoor day-and-night mobile combat [3]. In its Space Situational Awareness (SSA) User Defined Operating Picture (UDOP) project, the United States space agency explored 2D and 3D information display in augmented reality environments; analyzed multimodal human-computer interaction with AR objects in physical space based on eye movements, gestures, speech and controllers, along with scaling, positioning and orientation operations on AR objects; and studied visualization methods for 2D views, 2D textures and 3D visual objects in the AR virtual environment, such as an unconstrained AR virtual space in which the user's virtual environment can be configured [4].

2.2. Introduction to HoloLens


HoloLens is a typical hardware device of mixed reality technology. When users wear the device, their vision is not blocked and they can still walk freely in real space [5]-[7], as shown in figure 1. HoloLens tracks the user's movement and changes in line of sight, projects the virtual holographic image into the user's eyes in the form of light projection, and at the same time supports various forms of real-time interaction with virtual objects, such as gestures, voice and gaze. HoloLens has the following advantages over VR glasses.

Figure 1. HoloLens holographic processing unit


(1) Small size and standalone operation: no additional host is needed to provide computing power. Traditional VR glasses must render the entire virtual environment frame by frame, while AR only needs to render a small number of objects superimposed on the real space, which greatly reduces the computation.
(2) Scene positioning: HoloLens has built-in environment mapping and positioning algorithms and can adapt to various environments without additional auxiliary markers. A hologram in HoloLens does not change its position in real space when the user is temporarily away, because the environmental information is already stored in the device (a minimal anchoring sketch follows this list).
(3) Rich interaction: in the virtual scene, users can operate on virtual objects by voice, gesture and gaze. Gesture interaction with virtual objects has replaced the role of the mouse in traditional computing.
(4) Universal applications: the operating system of the HoloLens platform is Windows Holographic, customized from Windows 10, so Windows 10 UWP universal applications run smoothly on HoloLens. This not only reduces research, development and migration costs, but also greatly improves development efficiency.
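As a rough illustration of the scene positioning described in advantage (2), the following minimal sketch pins a hologram to a fixed real-world position using Unity's WorldAnchor component and WorldAnchorStore, the standard HoloLens-1-era APIs for this purpose. It is a sketch under stated assumptions rather than the authors' code: the AnchorKeeper class name and the anchor key are hypothetical.

using UnityEngine;
using UnityEngine.XR.WSA;             // WorldAnchor (HoloLens 1 era Unity API)
using UnityEngine.XR.WSA.Persistence; // WorldAnchorStore

// Hypothetical helper: pins a hologram at its current real-world pose and
// persists the anchor so it reappears in the same place across sessions.
public class AnchorKeeper : MonoBehaviour
{
    private const string AnchorId = "battle-map-anchor"; // hypothetical key

    void Start()
    {
        // The anchor store is loaded asynchronously by the platform.
        WorldAnchorStore.GetAsync(store =>
        {
            // Re-attach a previously saved anchor if one exists; otherwise
            // anchor the object at its current pose and save it.
            if (store.Load(AnchorId, gameObject) == null)
            {
                var anchor = gameObject.AddComponent<WorldAnchor>();
                store.Save(AnchorId, anchor);
            }
        });
    }
}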

3. Technical Approach
The system architecture is shown in Figure 2. The system supports multiple users online simultaneously, who interact with it through multiple channels such as gestures and voice. The system consists of a database, a Server/Client server and a Host domain. The database stores the data required for system operation and for constructing the system's virtual simulation scenarios, including combat scenario data, 3D model libraries of combat weapons, 3D terrain libraries and various model control scripts. The Server/Client server realizes the real-time interaction of multiple HoloLens devices and provides the conditions for communication between them. The Host domain contains the HoloLens interaction devices and the users. The system is constructed using the following technologies.

Figure 2. System architecture diagram

3.1. Interface design technology


The interface interaction in mixed reality differs from traditional two-dimensional graphical interaction in the form of windows, icons, menus and pointers (WIMP). Its purpose is to overcome the two-dimensional limitations of traditional interaction and construct a more natural and intuitive three-dimensional interactive environment. Interactive devices currently used for desktop graphical interfaces, such as the mouse, trackball and touch screen, have only two degrees of freedom (translation along the x and y axes of a plane), while objects in three-dimensional space move with six degrees of freedom: translation along the X, Y and Z axes and rotation about the X, Y and Z axes. Because of the increased degrees of freedom, windows, menus, icons and the conventional two-dimensional cursor would destroy the sense of space in a three-dimensional interactive environment and make the interaction process unnatural, so new interaction interfaces need to be designed. The three-dimensional user interface does not replace the traditional two-dimensional graphical user interface paradigm; rather, it addresses the fields where the traditional interaction mode performs poorly. Compared with the traditional two-dimensional graphical user interface, the three-dimensional user interface has the following advantages:
(1) It enhances the user's comprehensive ability to process information; these abilities include cognition, perception, learning and memory.
(2) It provides a new space for organizing, carrying and presenting more complex information, which can show the relationships and differences between different types of information more clearly and intuitively.
(3) It makes the information display more direct and easier to understand, and can introduce many natural and rich behaviors from human reality into traditional human-computer interaction.
The user interface in the mixed reality system includes two parts: 3D Widgets and 3D models. The 3D Widget is a concept derived from the two-dimensional graphical interface: an entity that encapsulates a three-dimensional geometric shape and its behavior. The three-dimensional geometric shape is the appearance of the 3D Widget itself, generally a rectangular three-dimensional patch that can be scaled, rotated and stretched in three-dimensional space. The behavior includes the interactive control of the 3D Widget with other objects in the 3D scene and the display of object information. 3D Widget interaction also comprises two types: first, the 3D Widget itself acts as a 3D object in the scene whose position, size, orientation and other attributes can be changed through interaction; second, the content displayed by the 3D Widget can be operated interactively through gestures, gaze and so on. The specific design is:
(1) Position of the 3D Widget under the viewpoint: as an object in the virtual scene, the 3D Widget can be moved, rotated and scaled by the user, and this operation is controlled by gesture signals. In the widget-moving mode, the incremental motion matrix of the hand is used to calculate the movement of the 3D Widget; the specific formula is:

M_V = M_H · (M_H')^-1 · M_V'    (1)

where M_V is the matrix of the 3D Widget under the viewpoint in the current frame, M_H is the virtual hand pose obtained from gesture recognition in the current frame, (M_H')^-1 is the inverse of the virtual hand pose obtained from gesture recognition in the previous frame, and M_V' is the matrix of the 3D Widget under the viewpoint in the previous frame.
(2) Movement of the 3D Widget with the head: after the user's head moves, the 3D Widget remains stationary relative to objects in the real world. Therefore, after obtaining the pose of the head movement, the offset matrix of the 3D Widget caused by the head motion is calculated (both formulas are condensed in the sketch after this list):

M_W = M_H · (M_H0)^-1 · M_W0    (2)

where M_W is the position matrix of the 3D Widget relative to the head in the current frame, M_H is the head pose matrix of the current frame, (M_H0)^-1 is the inverse of the initial head pose, and M_W0 is the position matrix of the 3D Widget in the initial state.
(3) Determination of 3D Widget activation status: when facing an interface of 3D Widgets and 3D models, only one object at a time can be the active object of interactive operation. The system forms a ray with the center point of the screen as the origin and the viewpoint direction as the direction, tests the ray for interference against the 3D Widgets, and takes the first object that the ray intersects as the active 3D Widget.

3.2. Implementation of multi-channel interaction


The realization of interactive gestures is another highlight of the system. After selecting a target, interactive operations are performed through gestures, completely eliminating dependence on input devices such as the mouse and keyboard. In research on augmented reality gesture interaction, gestures are generally classified into static and dynamic gestures for detection and recognition according to the application purpose and gesture state. This system uses dynamic gesture detection: the hand is placed within the 120° × 120° viewing angle in front of the user, the depth data of the hand's skeleton nodes are collected, and the trajectories of the skeleton nodes are obtained as the main features for gesture recognition. By comparing the starting point, end point and moving distance of the skeleton nodes and running the classification process, the final gesture recognition result is obtained (a rough classification sketch is given after Figure 3). Two important gestures are Bloom and Air Tap, as shown in Figure 3:

(a) Bloom gesture (b) Air Tap gesture


Figure 3. Gesture diagram
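The following is a rough sketch of the start-point/end-point/distance comparison described above, not the authors' actual classifier; the thresholds are hypothetical placeholders chosen only to illustrate how a short tap-like trajectory can be separated from a longer bloom-like one.

using System.Collections.Generic;
using UnityEngine;

public enum GestureLabel { None, AirTap, Bloom }

// Hypothetical classifier: compares the start point, end point and
// travelled distance of one hand skeleton node, as the text describes.
public static class GestureClassifier
{
    public static GestureLabel Classify(IReadOnlyList<Vector3> trajectory)
    {
        if (trajectory == null || trajectory.Count < 2) return GestureLabel.None;

        Vector3 start = trajectory[0];
        Vector3 end = trajectory[trajectory.Count - 1];

        float pathLength = 0f; // total distance travelled along the trajectory
        for (int i = 1; i < trajectory.Count; i++)
            pathLength += Vector3.Distance(trajectory[i - 1], trajectory[i]);

        float net = Vector3.Distance(start, end); // straight-line displacement

        // Thresholds (in meters) are illustrative placeholders, not measured values.
        if (net < 0.02f && pathLength > 0.04f && pathLength < 0.10f)
            return GestureLabel.AirTap; // short down-up tap, finger returns near start
        if (net < 0.03f && pathLength >= 0.10f)
            return GestureLabel.Bloom;  // longer opening motion, hand reopens in place
        return GestureLabel.None;
    }
}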
The MRTK (Mixed Reality Toolkit) SDK provided by Microsoft allows responses to different gestures to be designed. The GestureRecognizer function can create different gesture operation types, and the OnManipulationStarted, OnManipulationUpdated, OnManipulationCompleted and OnManipulationCanceled functions of the IManipulationHandler and IInputClickHandler classes in the Input Module namespace can be called to define the events triggered by different gestures.
In terms of voice interaction, the dictation recognizer can be enabled by editing MicrophoneManager and writing a global listener function to start global dictation; the KeywordRecognize function in the ISpeechHandler class then specifies the vocabulary to be recognized, and the GrammarRecognize function defines the relevant grammar. After the specified vocabulary is recognized, the OnPhraseRecognized function triggers the corresponding event.
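A condensed sketch of the two channels just described, assuming the HoloToolkit/MRTK and Unity speech APIs of the HoloLens 1 generation; the keyword list and handler bodies are illustrative placeholders rather than the paper's implementation.

using UnityEngine;
using UnityEngine.Windows.Speech;    // KeywordRecognizer, PhraseRecognizedEventArgs
using HoloToolkit.Unity.InputModule; // IManipulationHandler, ManipulationEventData

// Hypothetical component: moves the selected object with the hold gesture
// and reacts to a small fixed vocabulary of voice commands.
public class CommandTarget : MonoBehaviour, IManipulationHandler
{
    private KeywordRecognizer keywordRecognizer;
    private Vector3 manipulationStart;

    void Start()
    {
        // Voice channel: trigger an event when a specified phrase is heard.
        keywordRecognizer = new KeywordRecognizer(new[] { "deploy unit", "fire" });
        keywordRecognizer.OnPhraseRecognized += OnPhraseRecognized;
        keywordRecognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log("Voice command recognized: " + args.text);
    }

    // Gesture channel: manipulation events are raised while the user
    // drags the selected object with a held Air Tap.
    public void OnManipulationStarted(ManipulationEventData eventData)
    {
        manipulationStart = transform.position;
    }

    public void OnManipulationUpdated(ManipulationEventData eventData)
    {
        // CumulativeDelta is the hand's total displacement since the gesture began.
        transform.position = manipulationStart + eventData.CumulativeDelta;
    }

    public void OnManipulationCompleted(ManipulationEventData eventData) { }
    public void OnManipulationCanceled(ManipulationEventData eventData) { }
}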

3.3. Multi-person collaboration technology


The display of the battlefield situation and command decision-making require the cooperation of multiple people, so the system should allow multiple users to operate on one situation scene at the same time, with the results of the operation displayed simultaneously on all devices. The two main issues that need to be addressed in multi-person collaboration are:
(1) decision conflicts in multi-person collaboration;
(2) scene consistency in multi-person collaboration.
When performing multi-person collaborative operations on virtual battlefield decisions, multiple operators may operate and control the same virtual object at the same time. If the system cannot handle multiple users accessing the same object, system abnormalities or confused decision results may follow, so it is particularly important to design a sound conflict-resolution strategy for multi-person collaborative access. The server should use the data of the users participating in the collaborative operation to perform simulation calculations, judge conflicts and resolve them effectively.


In the command and control decision-making process, each user's operations carry the same weight in the server's simulation calculations. Each calculation must be performed according to the latest frame of each user's complete operation data, and the result is visualized in real time as soon as the calculation completes, so as not to affect any user's operating experience. When multiple users work together, they will consciously avoid conflicts and operate in order, and the server's simulation calculation completes within a set period. Generally, no conflict arises when different instructions are issued for different objects, but unavoidable access conflicts do occur. For example, two users may differ in a decision on the deployment of a certain mobile force and deploy it to different locations at the same time; data conflicts then occur in the calculation. Therefore, a strategy is needed to resolve conflicting operations, as shown in Figure 4. When the server calculates according to the latest frame of each user's complete operation data, if an access-data conflict arises during the process, the server selects one client's operation as the preferred operation according to a first-come-first-served rule, then re-checks the access data for conflicts until the conflict disappears, and finally outputs and visualizes the calculation result.

Figure 4. Conflict Resolution Strategy
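A minimal server-side sketch of the first-come-first-served rule shown in Figure 4; the paper does not publish its server code, so the types and names here are hypothetical. Each simulation cycle, operations are grouped by the object they target, the earliest operation on each object is kept, and later conflicting operations are discarded until the conflict disappears.

using System.Collections.Generic;
using System.Linq;

// Hypothetical record of one client operation received by the server.
public sealed class ClientOperation
{
    public int ClientId;
    public int TargetObjectId; // the virtual object being manipulated
    public long Timestamp;     // arrival time at the server
}

public static class ConflictResolver
{
    // First-come-first-served: for every object touched in this cycle,
    // keep only the earliest operation; later conflicting operations
    // are rejected, so the remaining set is conflict-free.
    public static List<ClientOperation> Resolve(IEnumerable<ClientOperation> ops)
    {
        return ops
            .GroupBy(op => op.TargetObjectId)
            .Select(group => group.OrderBy(op => op.Timestamp).First())
            .ToList();
    }
}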


The multi-person collaboration solution of this system is realized through the cooperation of multiple HoloLens glasses. During the collaborative operation, each participating user interacts with the virtual model through gestures, voice and gaze, and only when the virtual model a user sees in his own HoloLens glasses is the same as the virtual model seen by the other users is the collaboration genuine. Therefore, the system must ensure the real-time consistency of the position and state of the virtual model in the real scene. For example, when user 1 performs an operation, user 2 can also see the entire operation process; when the state of the virtual model that user 2 sees is the same as the state user 1 sees, user 2 can perform his own operation, and the other users participating in the collaboration can see user 2's operation result in real time.
Multi-person collaboration data transfer is achieved through the Socket protocol: the server is built and client data transferred through the Server/Client server interface. This system introduces a message consistency check into the one-to-one server-client communication mode within the host domain, and adds a real-time message synchronization mechanism between devices on top of the traditional socket network connection, as shown in Figure 5. When any shared message data in the host domain marked with the [SyncVar] label is modified on the server or a client, the data is shared to all devices in the host domain through the wireless network. Before responding to a received data operation, each device publishes the received data within the host domain and checks whether the data information received by all devices is consistent. If it is, the devices render according to the message data; otherwise, all devices abandon their response to this message until the shared message data becomes consistent.

Figure 5. Message synchronization mechanism diagram
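The [SyncVar] label mentioned above comes from Unity's legacy UNET networking layer (UnityEngine.Networking). The sketch below shows the standard usage pattern under that assumption; the unit-position field and its hook are illustrative, not the paper's code.

using UnityEngine;
using UnityEngine.Networking; // legacy UNET: NetworkBehaviour, [SyncVar], [Server]

// Shared state of one battlefield unit. When the server modifies a
// [SyncVar] field, UNET propagates the new value to every client in
// the host domain; the hook runs on each client when the value arrives.
public class SharedUnitState : NetworkBehaviour
{
    [SyncVar(hook = nameof(OnPositionChanged))]
    public Vector3 unitPosition;

    [Server] // only the server may mutate the authoritative value
    public void MoveUnit(Vector3 newPosition)
    {
        unitPosition = newPosition;
    }

    private void OnPositionChanged(Vector3 newPosition)
    {
        // Apply the synchronized value, then render, so that every
        // device shows the same scene state.
        unitPosition = newPosition;
        transform.position = newPosition;
    }
}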

4. System functions
We build a multiplayer sync air defenses, command and control system used, the system can achieve
situational awareness, command is sent, the deployment of troops and shared decision-making of
feature , shown in Figure 6 below.

Figure 6. System functions


(1) Situational awareness: thanks to AR's combination of the virtual and the real and its real-time interaction, this system can help commanders understand battlefield information efficiently. For example, the deployment of enemy forces, the performance of our weapons and the terrain of the battlefield are displayed to the commander in three dimensions, reducing the commander's cognitive load and giving better control of the battlefield situation, as shown in Figure 7.

(a) Radar range display (b) Air defence missile fire range display

(c) Display of enemy strike intentions


Figure 7. Battlefield situation information display
(2) Command sending: based on the battlefield situation information, the commander can click the 3D model of one of our weapons to generate an operation command and direct its system to counterattack.
(3) Force deployment: the commander can retrieve the desired force unit model, such as various types of anti-aircraft missile vehicles, from the 3D model library, then use the hold gesture to interact with the 3D model and move its position on the terrain to complete the deployment and adjustment of the force unit.
(4) Decision sharing: thanks to the realization of multi-person collaboration, commanders can see the decision-making of other commanders in real time and can likewise show their own decisions to other commanders. Of course, this happens only after the judgment and arbitration of the conflict-resolution mechanism.

5. Conclusion and Expectation


Based on augmented reality technology, this system successfully realized the dynamic holographic display of the battlefield situation and the combat process, and supported multiple interaction modes such as gestures and voice for the command and control of air defense operations. The system is of great significance for improving combat efficiency, and at the same time provides a new platform for the planning, deduction, simulation and verification evaluation of the combat process. The system still has deficiencies; the following areas can be further researched, developed and tested:


(1) A more natural display effect. Although the system realizes battlefield situation virtualization and command control, it is limited by the hardware: the display field of view is limited, and the display delay when multiple users collaborate on the same local area network needs to be improved.
(2) Real-time access to combat information. A truly intelligent holographic situation display and command and control system should be connected to battlefield information in real time to provide real-time display and guidance for the commander. Unfortunately, because the data interfaces of the various weapon systems are not uniform, this function is difficult to achieve.
(3) A more flexible holographic interaction method. The holographic application developed for the system interacts with 3D menus and virtual objects through gaze, gestures, voice and so on to control the movement of models. Introducing more intelligent interaction methods such as eye-movement control and brain-computer interfaces would certainly bring a qualitative change to the operating experience.

References
[1] Jomini A H 1838 The Art of War (New York, NY: Greenhill Press)
[2] Julier S, Baillot Y, Lanzagorta M, Brown D G and Rosenblum L 2000 BARS: battlefield augmented reality system NATO Symposium on Information Processing Techniques for Military Systems pp 9-11
[3] Gans E, Roberts D, Bennett M, Towles H, Menozzi A and Cook J 2015 Augmented reality technology for day/night situational awareness for the dismounted Soldier Display Technologies and Applications for Defense, Security, and Avionics IX; and Head- and Helmet-Mounted Displays XX 947004 (21 May 2015)
[4] Jenkins M, Wollocko A, Negri A and Ficthl T 2018 Virtual, Augmented and Mixed Reality: Applications in Health, Cultural Heritage, and Industry pp 272-288
[5] Taylor A G 2016 Develop Microsoft HoloLens Apps Now (Apress)
[6] Chen H, Lee A S and Swift M 2015 3D collaboration method over HoloLens™ and Skype™ end points Int. Workshop on Immersive Media Experiences (ACM)
[7] Hockett P and Ingleby T 2016 Augmented Reality with HoloLens: Experiential Architectures Embedded in the Real World

