VTD Ebook
Authors:
Marius Dupuis
Dr. Luca Castignani
Dr. Keith Hanna
Smart Autonomous Mobility
Enable. Accelerate. Deploy.
Enabling customers of all types and sizes to accelerate and deploy a bold autonomous mobility vision.
mscsoftware.com/autonomous
Foreword
Every year, 1.24 million people die in traffic accidents and 50 million are injured worldwide (WHO data, 2013), and over 90% of these collisions are due to human error. The deployment of Level 5 autonomous vehicles can potentially save hundreds of thousands of lives every year. Simulation has a big role to play in accelerating the development of this sector. Industry leaders across the globe, including General Motors, BMW, Audi, and Volkswagen, are leveraging virtual testing to validate and verify Advanced Driver Assistance Systems (ADAS) and autonomous driving systems. This is where MSC Software wants to make a significant contribution through solutions like VTD, with which we can experiment with every relevant driving condition, including system faults and errors. Waymo, for example, runs a fleet of 25,000 virtual cars 24/7, simulating 13 million kilometers per day. Simulation is critical for achieving the billions of miles of testing needed for automated driving development.

With our e-book on autonomous driving, we hope readers will gain valuable insights into recent research and development in the self-driving space. The book also endeavors to shed some light on why autonomous driving is important and what is realistically achievable in the next 5 to 10 years.
Simulation Framework

Cooperative driving has gained considerable attention in recent years within automobiles. In order to increase the quality of signals, the availability, and the perception range, as well as to decrease the latency and the probability of total failure, advanced perception systems consisting of camera, radar, and lidar systems with vehicle-to-vehicle (V2V) communication are required. Moreover, V2V communication enables advancements from individual to cooperative decision making. Advanced driver assistance systems (ADAS), which determine their "cooperative behavior", are capable of increasing the total utility of a group of cooperative vehicles. However, several technical issues have to be resolved on the way to deploying cooperative driver assistance systems (CDAS) on our public streets; for example, misuse of the communication channel has to be handled.

Since the development of CDAS requires at least two interacting vehicles, the implementation and testing require a simulation framework for connected vehicles. Figure 2 gives an overview of the proposed architecture used within this project (reference 1). A detailed description of the interfaces and functionality follows hereinafter.

A. Application

The application consists of a cooperative maneuver planner implemented in ADTF (Automotive Data and Time triggered Framework) and a controller for each involved vehicle. There are three relevant interfaces. The first provides the environmental model and the current vehicle state supplied by the simulation gateway. The second interface, to the network, enables the communication between vehicles. The last interface, to the simulation gateway, realizes the controllability of the vehicle.

B. Simulation Gateway

For each vehicle (here an example is shown for three vehicles), the simulation gateway fulfills, among others, two tasks: the modeling of the perception (environment and vehicle state), and the reaction to controller outputs. The interface for the vehicle state includes, but is not limited to, the velocity, the longitudinal and lateral acceleration, and the steering wheel angle.

C. Virtual Environment

The software Virtual Test Drive (VTD) developed by VIRES provides the virtual environment we used. The central component is the task control, which coordinates additional modules with the help of the module manager. Additional modules are the scenario with roads and vehicle information, the controlled vehicles, and the Image Generator (IG). The virtual environment transmits its information via Ethernet on the Real Time Data Bus (RDB) interface. Furthermore, the Simulation Control Protocol (SCP) interface provides a mechanism for operating the simulation.

D. Network

The network simulation can emulate the communication of the application via, for example, ETSI ITS G5. In order to simulate the signal damping, the analog model uses information about the line of sight and the distances between the communicating vehicles. The RDB interface and the map of VTD (*.xodr format) contain the required information.
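The analog damping model described in Section D can be sketched as a free-space path-loss calculation with an extra penalty when the line of sight is blocked. The 5.9 GHz carrier matches the ETSI ITS G5 band, but the penalty value and receiver sensitivity below are illustrative assumptions, not the framework's actual channel model:

```python
import math

def received_power_dbm(tx_power_dbm, distance_m, line_of_sight,
                       freq_hz=5.9e9, nlos_penalty_db=15.0):
    """Illustrative analog channel model: free-space path loss (FSPL)
    plus a fixed penalty when no line of sight exists between vehicles."""
    # FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55   (d in m, f in Hz)
    fspl_db = 20.0 * math.log10(distance_m) + 20.0 * math.log10(freq_hz) - 147.55
    rx = tx_power_dbm - fspl_db
    return rx - nlos_penalty_db if not line_of_sight else rx

def message_received(tx_power_dbm, distance_m, line_of_sight,
                     sensitivity_dbm=-85.0):
    """A V2V message is delivered if the received power clears the
    (assumed) receiver sensitivity threshold."""
    return received_power_dbm(tx_power_dbm, distance_m, line_of_sight) >= sensitivity_dbm
```

A vehicle behind an obstruction at the same distance thus sees a weaker signal and may drop out of the cooperative group earlier than one with a clear line of sight.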
Simulation Results

As an initial scenario, a driver starts an overtaking maneuver on a rural road. The driver misjudges the situation and the danger of a collision; the system detects the danger and triggers the cooperative maneuver planning. The detection criterion could also be the time to collision (TTC), which is calculated as the quotient of distance and relative velocity. The calculated cooperative maneuver plan targets the completion of the overtaking maneuver of the red vehicle and a deceleration of the truck and the blue vehicle.

The plans are rated according to the preferences of each vehicle; the blue vehicle, for instance, prioritizes comfort. Each vehicle comes to a different evaluation or rating of the offers because of the varying preferences. The varying preferences can be caused by different brands, different vehicle models (sedan, van, or SUV), or by an online driver monitoring system. Table II shows the results of the evaluation of each plan by each vehicle and the result of the two proposed selection criteria. The selected solution (bold) represents the compromise of the solution options. Plan (c) is selected by the sum criterion and plan (a) is selected by the squared sum criterion.
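The TTC criterion above is a one-line computation; here is a small sketch with a guard for the non-closing case (the guard behavior is an assumption, not taken from the article):

```python
def time_to_collision(distance_m, closing_speed_mps):
    """TTC as the quotient of distance and relative (closing) velocity."""
    if closing_speed_mps <= 0.0:
        return float("inf")  # vehicles are not closing in: no collision predicted
    return distance_m / closing_speed_mps

# e.g. a 50 m gap closing at 10 m/s leaves a TTC of 5 s
```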
B. Closed Loop Simulation

The closed loop or hardware-in-the-loop simulation enables a study to evaluate the control error considering communication and calculation latencies. The longitudinal controller has a linearly increasing control error, caused by a constant velocity error; a possible reason lies in the longitudinal controller's handling of the deceleration maneuver, in which case the vehicle decelerates more strongly than planned. The lateral controller shows an overshoot. The vehicle stays within a 50 cm maximum controller error in a safe condition (it stays on the road, with no collisions with obstacles). This error is caused by latencies and by systematic errors in the feed-forward controller. However, the systematic errors, i.e. differences between the vehicle dynamics model and the inverted model in the feed-forward controller, are made on purpose: a perfect vehicle dynamics model in the feed-forward controller is impossible in reality because of, for example, changing loads, changing wheel characteristics, and changing surfaces. Further controller adaption will be done with the help of real test data.

The multi-directional dependencies of such cooperative applications make a development, and a later validation, impossible without considering them. The presented framework combines the required components and considers the modeling of perception, communication, and control of several vehicles in a virtual environment.

Reference

1. "A Cooperative Driver Assistance System: Decentralization Process and Test Framework" by Kai
[Figure: Adams Real-Time Model event library — Handling: drawbar, sand climb, traverse, lane-change, cornering; Ride: RMS roads, halfrounds, obstacles; Uncertainty Quantification; Virtual Test Drive: mapping, route selection (with Luciad)]
The mobility of a ground vehicle can be the difference between mission success and mission failure on the battlefield. In today's defense environment, there is a need to create rapidly deployable, highly mobile vehicle platforms that operate reliably across various terrain and road types. Vehicle simulation capabilities for assessing performance under different environmental conditions and operational scenarios have increased significantly in recent years.
Virtual Test Drive | vires.com | Volume IX - Summer 2019 | mscsoftware.com
Figure 3: Adams Model Validation against Test Data

Vehicle Capabilities

Figure 5: Mapping workflow, showing speed made good and route prediction
Q&A
Autonomous
Vehicle Testing
With Christopher Kinser, General Motors,
Milford, Michigan, USA
Engineering Reality Magazine recently interviewed Chris Kinser from General Motors, the Director of their Global Autonomous Driving Center in Michigan and an industry expert in the rapidly emerging sector of autonomous vehicles (AVs).
Figure 1: Cruise Autonomous Cars

General Motors operates a total vehicle performance center at the Milford Proving Ground in Michigan (Figure 2). The Global Autonomous Driving Center is a subset of this work focused on developing active safety features like advanced park assist, lane keep assist, full-speed range adaptive cruise, and Super Cruise. This work is guided by GM's vision of a future with zero crashes, zero emissions, and zero congestion. The mission of our team is to provide smooth, capable driver assist systems that delight our customers.
The industry standard scale for levels of autonomy (SAE) is helpful from an academic
perspective when discussing vehicles and their capabilities. However, when we begin
development of a new vehicle or system, we don’t start with a level in mind, but rather
with the use case and a set of features that we believe we can safely implement. It is this
focus on safety that guides us through the process.
General Motors is the only company that has everything from design, engineering
validation, and testing all under one roof. This is more than just designing and building
the vehicle. It also includes everything from in-house security and connectivity systems
to software development and high-resolution mapping. Having everything under one roof
puts us in a unique position to safely develop and deploy autonomous vehicle technology.
Super Cruise
Super Cruise is an advanced driver assistance feature that enables hands-free driving on
supported roads. It combines adaptive cruise control and lane-centering control with a driver
attention system (Figure 3) to allow you to drive with your hands off the wheel and eyes
on the road. Super Cruise is aimed at providing comfort and convenience in long-distance
travel and daily commutes. Customers receive updated maps on a regular basis (Figure 4).
Figure 2: General Motors operates a total
vehicle performance center at the Milford
Proving Ground in Michigan
Virtual Test Drive | vires.com | Volume X - Winter 2019 | mscsoftware.com
Figure 3: General Motors' Super Cruise

Cruise Autonomous Vehicle (AV) Program

In May 2016, GM completed the acquisition of Cruise Automation, a Silicon Valley startup with considerable self-driving software development expertise. Combined with our expertise in engineering and developing vehicles, our teams began testing self-driving vehicles in San Francisco, CA, Scottsdale, AZ, and Warren, MI. By September 2017, we revealed our first self-driving test vehicle built from the start to operate on its own with no driver (1). Safety is engineered into every step in Cruise's self-driving vehicles, including design, development, manufacturing, testing, and validation. On a typical day, Cruise autonomous test vehicles safely execute 1,400 left turns, and our teams analyze all that data and apply the learnings. Based on Cruise's experience of testing self-driving vehicles, every minute of testing in San Francisco is about as valuable as an hour of testing in the suburbs because of the complex decisions being made.
Reference

1. "How we built the first real self-driving car (really)", Kyle Vogt, Cruise, September 11, 2017. Blog post: https://medium.com/cruise/how-we-built-the-first-real-self-driving-car-really-bd17b0dbda55
Multi-Resolution
Traffic Simulation
for Connected Car Applications
using VIRES VTD
Since those systems often exhibit safety-critical features, rigorous testing and validation must be completed before their mass adoption. Although real road tests using physical prototype vehicles offer the highest degree of realism, the large amount of resources needed to perform large-scale and extensive testing of vehicular networks renders their use impossible. Simulations are essential to validate the performance of such solutions in large-scale virtual environments. Furthermore, simulation-based evaluation techniques are invaluable for testing those complex systems in a wide variety of dangerous and critical scenarios without putting humans at risk.

In the automotive industry, the use of simulation (Figure 1) is well established in the development process of traditional driver assistance and active safety systems, which primarily focus on the simulation of individual vehicles with a very high level of detail. When investigating and evaluating the performance of ADAS based on vehicular communication, this isolated view of a single vehicle alone or a small number of vehicles in the simulation is not sufficient anymore. Potentially, every vehicle equipped with wireless communication technology could be coupled in a feedback loop with the other road users participating in the vehicular network, and therefore, the number of influencers that need to be considered is drastically increased.

Microscopic Traffic Simulator: SUMO

We chose to use Simulation of Urban MObility (SUMO) as the traffic simulator responsible for the simulation of the low-resolution area (LRA). SUMO is a microscopic, space-continuous, and time-discrete simulator. While it is employed in a wide range of research domains, its most notable use is shown in a high number of research papers regarding VANET simulations. SUMO is well known for its high execution speed, as well as for its extensibility. SUMO is ideally suited to simulate a high number of vehicles residing in the LRA due to its efficiency, which is partly achieved through its simplified driver model (which determines the path a vehicle will take).

Nanoscopic Traffic and Vehicle Simulator: VIRES Virtual Test Drive

We employ the nanoscopic traffic and vehicle simulator VIRES Virtual Test Drive (VTD) for the simulation of the high-resolution vehicles. VTD was developed
for the automotive industry as a virtual test environment used for the development of ADAS and Autonomous Vehicles. Its focus lies on the interactive high-realism simulation of driver behaviour, vehicle dynamics, and sensors. VTD is highly modular, so any standard component may be exchanged for a custom and potentially more detailed implementation. Its standard driver model is based on the intelligent driver model; however, an external driver model may be applied if necessary. The same concept applies to the vehicle dynamics simulation, where the standard single-track model can be substituted by an arbitrarily complex vehicle dynamics model adapted for specific vehicles. Each simulated vehicle can be equipped with arbitrary simulated sensors, for example a RADAR sensor, which is shown in Figure 2.

Figure 2: 3D visualization of a simulated RADAR sensor in VIRES VTD

Offline Pre-processing to Enable Coupling

The two simulators rely on different data formats representing the modelled road network. In order to be able to run a co-simulation of both simulators, the underlying data basis must match. VTD uses the OpenDRIVE format to specify the road network. This specification models the road geometry as realistically as possible by using analytical definitions. SUMO, on the other hand, approximates the road network geometry by line segments. There are also differences in the modelling of intersections and lane geometries. To achieve a matching database, we convert the road network in an offline pre-processing step from OpenDRIVE to the file format SUMO supports.

Online Coupling and Synchronization

The coupling of the simulators at simulation runtime is based on the master-slave principle. Figure 4 shows the sequence of operations during a single simulation step, in which VTD and SUMO can operate with different temporal resolutions without losing synchronization. S_VTD is the length of a time step for the high-resolution area (HRA), whereas S_SUMO is the length of a time step for the low-resolution area (LRA). Typically, the nanoscopic simulation is run at a higher frequency than the microscopic one. T_VTD and T_SUMO respectively denote the local simulation time in each simulator. At the beginning of each simulation step, a new timestep is simulated in VTD. If the next timestep has been reached for SUMO, and the condition T_VTD ≥ T_SUMO + S_SUMO is therefore fulfilled, the state of the high-resolution vehicles is sent to SUMO through a gateway. This triggers the simulation of the next timestep in the low-resolution model, and as a result, the positions of the low-resolution vehicles are passed back. These vehicles are now classified, and, if applicable, the change of resolution is performed for individual vehicles. When an exchange of a vehicle between the simulators happens, the previously mentioned inherent difference in the underlying road networks may cause problems if a vehicle cannot be mapped based on its position in a specific lane due to differences in accuracy. This is especially true for complex intersections, which are modelled quite differently. After all the resolution changes have been successfully completed, the simulation is unblocked again and the next timestep can be simulated. This synchronization is very important to ensure reproducible simulation results across multiple simulation runs.

Figure 4: Comparison of simulation resolution switching

Dynamic Spatial Partitioning of the Simulated Area

Our approach aims to couple traffic simulation models of different resolutions at dynamic regions of interest. Contrary to conventional traffic simulation, we are not interested in investigating a large number of vehicles from a bird's eye perspective; the focus is rather on a single vehicle (or a limited number of vehicles) which is used to conduct a test drive in the virtual environment. This vehicle of interest has the ADAS system under investigation onboard and is referred to as the EGO car. The simulated measurements and sensor values are fed into the ADAS, and depending on its type and its use case, the respective ADAS directly or indirectly influences the vehicle's state and behaviour.

Based on a distance criterion, an area of interest is defined that centres around the EGO car, and in which the defined simulative high-fidelity requirements must be fulfilled. Since the EGO car is driving continuously through the virtual environment, this area of interest is likewise moved along. We therefore partition the global area of the simulation dynamically into a high-resolution area (HRA) and a low-resolution area (LRA). Figure 3 shows this partitioning.

Figure 3: Dynamic partitioning of the simulation area
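The synchronization condition T_VTD ≥ T_SUMO + S_SUMO can be sketched as a master loop. This is a simplified stand-in for the actual gateway logic, with the state exchange reduced to comments:

```python
def run_cosim(duration_s, s_vtd=0.01, s_sumo=0.1):
    """Master-slave stepping: VTD (high resolution) advances every step;
    SUMO (low resolution) is stepped only once T_VTD >= T_SUMO + S_SUMO."""
    eps = 1e-9                       # tolerance against float drift
    n_vtd = round(duration_s / s_vtd)
    t_sumo, sumo_steps = 0.0, 0
    for k in range(1, n_vtd + 1):
        t_vtd = k * s_vtd            # master clock after this VTD timestep
        if t_vtd + eps >= t_sumo + s_sumo:
            # block VTD, push high-resolution vehicle states through the
            # gateway, step SUMO once, read back low-resolution positions,
            # then reclassify vehicles and switch resolutions if needed
            t_sumo += s_sumo
            sumo_steps += 1
    return n_vtd, sumo_steps
```

With a 10 ms VTD step and a 100 ms SUMO step, SUMO advances once for every ten VTD steps, which matches the "nanoscopic simulation runs at a higher frequency" description above.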
Figure 6: Simulation performance, nanoscopic simulation only
Figure 7: Simulation performance, multi-resolution simulation
Self-driving is becoming more and more realistic. Every day, thousands of autonomous vehicles (Figure 1) are being tested on the roads by companies like Waymo, Cruise, Uber, and Tesla, and some of those companies have accumulated millions of miles of road testing data, enhancing and validating their autonomous "brain", with the hope that in the near future, full automation can be achieved.

When the Pumpkins Take a Stroll

Today, everyone understands the importance of road testing for self-driving vehicles, and the industry is spending a fortune on it. On average, a fully equipped autonomous vehicle can cost more than half a million dollars, so a small fleet of 20 vehicles would mean a 10 to 12 million dollar investment in the hardware itself to perform the road testing for autonomous driving. However, is road testing really enough to help us reach level 5 autonomy in the foreseeable future? How much road testing have we done so far? Waymo, the world's leading autonomous driving company in road testing, has accumulated an impressive 9 million miles in the past 9 years. However, even if we increased that effort tenfold, it would still take about 100 years to complete the validation of one self-driving system if we relied solely on road testing.

As long as you only have to check a few use cases (in the range of tens), you can easily test them on real roads. However, in order to assure safety for autonomous vehicles, the number of conditions to cover is vastly larger.
surfaces to simply capturing the basic sensor characteristics (in order to achieve the maximum speed). To broaden the variety of the sensor models, team VTD is also working with world-leading sensor manufacturers like Leica and NovAtel (all part of Hexagon).

D. 3D Driving Environment

A 3D virtual environment can be generated either from inside VTD or by scanning the actual roads. Creating the environment inside VTD gives you maximum control over all the details, while generating the 3D environment from measurements (LiDAR/camera) is more realistic and much faster. With Hexagon's new Leica Pegasus:2 mapping platform and its connection with VTD (Figure 5), engineers are expected to speed up "Road Digitization" by a factor of 20 in the near future.

E. Scenarios and Data Management

With millions of scenarios to be evaluated at each step of autonomous vehicle development, there is simply no way to manage everything manually. Indeed, Intel calculates that 1 petabyte of data will be generated each day by an autonomous vehicle. That is where SimManager, the simulation management platform of MSC, comes into play to store the generated data and appropriately label it for easy access at any stage.

The AI Driver is the core of every autonomous system, and users can easily connect VTD to their own AI Driver to carefully validate it under all conditions, including sensor failure or misbehavior, such as mud spatters covering a portion of a LiDAR. MSC Software is also working with its sister company, AutonomouStuff (both part of Hexagon), to connect AutonomouStuff's AI Driver to VTD so partners of AutonomouStuff can run their physical road tests and virtual tests with exactly the same AI brain.

In summary, today Hexagon owns many of the simulation and testing assets necessary for autonomous car projects: sensors and technology to manage smart intermittent sampling, HD maps from Hexagon Geosystems, and a turnkey platform for autonomous vehicle development from AutonomouStuff. Add in MSC Adams vehicle modeling, VTD to recreate the external environment, and SimManager, and there is a very compelling turnkey autonomous vehicle solution toolset for both simulation and testing, awaiting both OEMs and start-ups around the world.
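The "millions of scenarios" arise combinatorially from scenario parameters. A minimal sketch of this enumeration, with purely illustrative parameter axes (not taken from any VTD scenario library):

```python
from itertools import product

# Illustrative parameter axes for one basic scenario
pedestrians   = [0, 2, 5]
ego_speed_kmh = [30, 50, 70]
visibility    = ["clear", "rain", "fog"]
road_friction = [1.0, 0.7, 0.4]

# Every combination of the axes yields one concrete test case
variants = [
    {"pedestrians": p, "speed_kmh": v, "visibility": w, "friction": f}
    for p, v, w, f in product(pedestrians, ego_speed_kmh, visibility, road_friction)
]
print(len(variants))  # 3 * 3 * 3 * 3 = 81 variants from a single scenario
```

With a handful of axes per scenario and thousands of base scenarios, the product quickly reaches the scale that makes manual management impossible.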
Due to advancements in sensor technology and data processing algorithms over recent years, great progress has been made to enable automated driving systems to improve safety and comfort for the vehicle driver and occupants. Yet, due to the complexity of self-driving, one of the main challenges remains ensuring and validating the safe conduct of automated driving systems for public use.

Virtual worlds provide a suitable, safe, and controlled environment to handle an important part of the required testing and validation efforts. A proper choice of scenarios as well as the generation of virtual sensor data that closely matches reality are among the central requirements for the success of the virtual development approach. Virtual sensor data is generated by means of sensor models that form a central component of the virtual environmental perception (Figure 1). This perception data constitutes one of the main input streams for the decision-making algorithms of an automated driving system. Hence, the fidelity of the sensor model is a deciding factor for the viability and validity of virtual development and testing.

Generally speaking, there are two types of sensor models. Sensor error models aim to reproduce the statistical characteristics of errors, i.e. deviations between the perceived and the real environment. Sensor measurement models, on the other hand, are based on a physical description of the measurement process, and they generate low-level measurement data based on the virtual scene. Models of this type are commonly used for a variety of sensors in robotics research, while measurement models for automotive sensors are only emerging.

In this article, we introduce a sensor measurement model for an automotive LiDAR sensor. The model is based on a ray tracing approach for the simulation of the measurement process. This enables the real-time generation of a LiDAR Point Cloud within the framework of an automotive driving simulator. By directly comparing data from a real-world test drive to the virtual data generated by the sensor model in a virtual environment, we are able to quantify the accuracy and validity of the sensor model using appropriate metrics.

SENSOR MEASUREMENT MODEL

A. Real-time Ray Tracing in a Driving Simulator

We consider the scanning type of LiDAR sensor, which is typically used in the automotive industry. This type of sensor determines distance by measuring the travel time of a laser pulse reflected by a target surface. Its angular resolution is achieved by means of scanning, i.e. by moving the transmitted laser beam, as well as the selective field of view of the optical detector array, successively over the sensor's complete field of view. Most commercially available systems at this time employ a mechanically rotating mirror for the scanning task. The operating principle of this type of sensor lends itself to a modeling approach using ray tracing techniques. The virtual environment for the proposed sensor model is provided by the Vires VTD driving simulation software (Figure 2), which offers a ray tracing framework based on the Nvidia OptiX ray tracing engine.
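The scanning principle maps naturally onto a spherical sampling grid for the ray tracer (cf. Figure 5b). A minimal sketch that enumerates unit ray directions over a field of view; the step and FoV values used in the test are illustrative, not the modeled sensor's specification:

```python
import math

def spherical_ray_grid(h_fov_deg, v_fov_deg, h_step_deg, v_step_deg):
    """Unit ray directions on an azimuth/elevation grid, mimicking the
    angular scan pattern of a rotating-mirror LiDAR."""
    rays = []
    n_az = int(round(h_fov_deg / h_step_deg)) + 1
    n_el = int(round(v_fov_deg / v_step_deg)) + 1
    for i in range(n_az):
        az = math.radians(-h_fov_deg / 2.0 + i * h_step_deg)
        for j in range(n_el):
            el = math.radians(-v_fov_deg / 2.0 + j * v_step_deg)
            # spherical-to-Cartesian conversion; x points forward
            rays.append((math.cos(el) * math.cos(az),
                         math.cos(el) * math.sin(az),
                         math.sin(el)))
    return rays
```

Each direction would be handed to the ray tracing engine, and the hit distance becomes one range sample of the synthetic Point Cloud.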
Figure 2: LiDAR Model Simulation in VIRES VTD
Figure 3: Tool chain for sensor model validation (Matlab .mat files, ROS)
Figure 5: Sampling grids for the ray tracer: (a) Cartesian, (b) spherical
Figure 6: Visualization of Point Clouds: (a) real Point Cloud, (b) synthetic Point Cloud from SC1, (c) synthetic Point Cloud from SC2
Figure 7: Visualization of occupancy grids: (a) real occupancy grid, (b) synthetic occupancy grid from SC1, (c) synthetic occupancy grid from SC2
For evaluation of the environment model output, the real-world scenario is re-simulated, and scan grids as well as occupancy grids are computed using the generic Point Clouds from the two sensor configurations. The scan grid results are shown in Figure 7. Visually comparing the scan grid representations of the two sensor configurations with the real data, we can see a higher alignment between the real scan grids and the scan grids from SC2. To quantify this observation, three metrics are applied.

Model validation results:

State | Level  | Overall Error  | Baron's correlation | Pearson correlation
SC1   | EDM PC | 8729.2         | 0.733               | 0.824
SC1   | SG     | 1.0816 × 10^6  | 0.637               | 0.679
SC1   | OG     | 2.3668 × 10^6  | 0.602               | 0.677
SC2   | EDM PC | 8566.4         | 0.743               | 0.832
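One of the reported metrics, the Pearson correlation between a real and a synthetic grid, can be sketched with the generic textbook formulation (the paper's exact metric definitions may differ):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equally long value
    sequences, e.g. two occupancy grids flattened cell by cell."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near 1.0, as in the SC2 rows of the table, indicates that the synthetic grid varies cell by cell almost exactly like the real one.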
SUMMARY AND FUTURE WORK

In this article, we propose a physically motivated sensor measurement model based on a ray tracing approach for an automotive LiDAR sensor. The model was employed to faithfully recreate the full sensor processing chain in a virtual environment with the help of VIRES VTD: a full processing chain starting from low-level sensor data and ending with the first fusion stage of the whole automated driving system was reproduced. With the presented setup, it is possible to evaluate real driving situations and reconstruct them in the simulation from high-fidelity data for static and dynamic scenarios. As a sample use case, we showed a static situation in a parking lot. We could quantify how closely the internal environment representation, i.e. the input to the automated driving function, matches between the real-world scenario and the simulation, using a raw-data LiDAR sensor model and appropriate validation metrics. The results presented in this paper show a higher correlation between real and synthetic data when using the sensor model with a spherical ray tracer sampling grid.

See the Latest Solutions in Autonomous Driving: www.mscsoftware.com/autonomous

Reference

1. "Generation and Validation of Virtual Point Cloud Data for Automated Driving Systems", T. Hanke, A. Schaermann, M. Geiger, K. Weiler, N. Hirsenkorn, A. Rauch, S.-A. Schneider, and E. Biebl, IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), October 2017, Yokohama, Japan
Shaping Smarter Simulation
with Artificial Intelligence
By Dr. Horen Kuecuekyan, Director of Product
Development & Artificial Intelligence, MSC Software
Simulation provides key insights into system behavior and performance, especially with design optimization and validation. However, there are many instances where simulation or design exploration is not applicable because of limited computational resources.

Artificial Intelligence (AI) is a promising approach to help reduce the less important simulation scenarios by studying the existing simulation data. Many different machine learning algorithms are applied to train an AI-model, such as decision trees, random forests, fuzzy logic, Markov decision-based artificial neural networks such as DQNs (Deep Q-Networks), and various other refinements beyond DQNs. In many instances, an AI-model is not required to have the same fidelity as an actual simulation model, since most engineers simply expect the trained AI-model to be better and more consistent than engineering judgement or simplified (reduced-order) models.

When there is not a sufficient amount of physical data available, simulation generates the data needed to train a reliable AI-model.
Autonomous Systems (STEAS). The AI Sampling will set and create a feasible and relevant set of test cases,
generate the test plan to perform the relevant and which covers all the different situations that can occur.
important scenario simulations (Figure 1) to validate
either ADAS (advances driver assistant system) or Creating the Predictive Models: We refer to the
fully autonomous systems. predictive models as “AI Twins” (Figure 3). AI Twins
can predict the outcome of simulation studies and
One of the main questions people ask around can be used in the product development lifecycle
autonomous systems is, “how can we generate and when performing the traditional simulation is too
test the millions of scenarios that are needed to costly or takes too much time.
virtually validate an AI Driver?”
Our AI team at MSC Software is working with the goal
With AI-sampling, we are developing a solution to to train AI to learn from simulations, to extend the
handle this huge “event space” (Figure 2). The basis for knowledge over time, and dramatically increase the
AI-sampling is a parametric and modular scenario performance and efficiency in the modeling process.
library, which allows the creation of a broad set of
individual test cases. The simulation results are analyzed Applying AI and machine learning tools in the
and classified on their relevance and diversity, and then technological applications can enhance simulation
used in the AI Sampler as inputs to create a more efficiency, improve product quality and reduce
refined test case set with each iteration. This test case production costs. AI Sampling will automate the
set represents all the different behavior patterns to be simulation generation, sample the constantly growing
tested, verified, and applied to analyze the simulation design space, and help autonomous driving
databases on their relevance. By applying this iterative developers to capture the millions of scenarios that
process, AI Sampling can learn how to optimize the test are needed to achieve level 5 autonomy.
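The iterative refinement loop described above can be sketched in a few lines. This is an illustrative toy, not the VTD or AI Sampling API: the parameter names, the criticality score, and the keep-the-top-quarter rule are all assumptions made for the example.

```python
import random

def run_simulation(params):
    """Stand-in for a scenario simulation; returns a criticality score.
    Here: closer pedestrian gap plus higher speed means a more critical case."""
    return params["ego_speed_kmh"] / (1.0 + params["pedestrian_gap_m"])

def sample_library(n, speed_range, gap_range):
    """Draw n test cases from a parametric scenario library."""
    return [{"ego_speed_kmh": random.uniform(*speed_range),
             "pedestrian_gap_m": random.uniform(*gap_range)}
            for _ in range(n)]

def ai_sampling(iterations=5, batch=200):
    """Iteratively sample, score, and refine toward the relevant test cases."""
    speed, gap = (30.0, 130.0), (2.0, 50.0)
    kept = []
    for _ in range(iterations):
        cases = sample_library(batch, speed, gap)
        scored = sorted(cases, key=run_simulation, reverse=True)
        relevant = scored[: batch // 4]          # keep the most critical quarter
        kept.extend(relevant)
        # narrow the sampling ranges around the relevant cases for the next pass
        speed = (min(c["ego_speed_kmh"] for c in relevant),
                 max(c["ego_speed_kmh"] for c in relevant))
        gap = (min(c["pedestrian_gap_m"] for c in relevant),
               max(c["pedestrian_gap_m"] for c in relevant))
    return kept

test_set = ai_sampling()
print(len(test_set), "relevant test cases collected")
```

In a real pipeline the scoring would come back from the simulator and the classifier would weigh diversity as well as relevance, so that the refined set does not collapse onto a single scenario family.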
Virtual Test Drive | Volume VIII - Winter 2018 | vires.com
Road Testing or Simulation?
– The Billion-Mile Question
for Autonomous Driving
Development
By Dr. Luca Castignani, Chief Autonomous Strategist, MSC Software
Now let’s take a look at simulation, or virtual testing. In my opinion, there are a few key reasons why simulation is more applicable than road testing or proving grounds for autonomous system development, especially in the initial phases of a project.

First, virtual testing is more scalable when it comes to cost. A fully equipped autonomous vehicle can cost up to half a million dollars, so a fleet of 200 vehicles would mean a 100 million dollar investment in the hardware itself (vehicles, sensors, data …).

Thirdly, with simulation, engineers are able to test the functions of the controller software in the early design stages. One can test the different functions of the software separately with model-in-the-loop simulations, without having to wait for the entire control system to be completed. Since you can replay the virtual scenarios as many times as you want, it is much easier and cheaper to analyze, debug, or iterate the core algorithms without having to consider the nuances of the actual production software.
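A model-in-the-loop test of a single controller function, as described above, can be sketched as follows. The speed controller and the one-line plant model are hypothetical placeholders: the point is that the function under test runs against a simple model long before the full control stack exists.

```python
def speed_controller(target_kmh, current_kmh, kp=0.5):
    """Controller function under test: proportional throttle/brake command."""
    return kp * (target_kmh - current_kmh)

def plant_step(current_kmh, command, dt=0.1):
    """Crude stand-in vehicle model: speed responds directly to the command."""
    return current_kmh + command * dt

def run_mil_test(target_kmh=50.0, start_kmh=0.0, steps=600):
    """Close the loop between controller and plant for a fixed number of steps."""
    speed = start_kmh
    for _ in range(steps):
        speed = plant_step(speed, speed_controller(target_kmh, speed))
    return speed

final = run_mil_test()
print(f"settled at {final:.1f} km/h")   # the loop settles near the 50 km/h target
```

Because the scenario is fully deterministic, the same run can be replayed after every change to the controller, which is exactly the debug-and-iterate workflow the paragraph above describes.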
Virtual Test Drive | Volume IX - Summer 2019 | vires.com | mscsoftware.com
Finally, and importantly, it is much more convenient to create permutations of a situation with virtual testing. Engineers can easily repeat the same test with a different set of parameters, such as more pedestrians, higher speed, less sensor visibility, lower road friction, and many more. Permutations of a few basic scenarios with multiple parameters create thousands of scenarios. And that is the key to ensuring the robustness and reliability of driving algorithms.

Autonomous Vehicle (AV) simulation is different from traditional vehicle simulation in the sense that, apart from the vehicle itself, the “environment” in which the vehicle operates is fundamental to assessing how it copes with all driving situations. The “environment” of an AV is quite rich (and sometimes crowded), as it includes all other vehicles, pedestrians, animals and, of course, the road, the sidewalks, buildings, and even weather conditions. So, let’s take a closer look at all these components.

To start with, the engineers need a vehicle model which exhibits the same dynamic characteristics as the actual vehicle. When you train the AI controller to drive the actual vehicle, the vehicle model needs to incorporate not only the correct mass and engine power, but also other correct behaviors, like braking efficiency or the load transfer during cornering events. All of these behaviors are heavily influenced by the fundamental suspension design (dampers, anti-roll bars, …) and the tire-road interactions.

Besides the vehicle model, the 3D environment also needs to be carefully constructed. 3D environments include the road network, which defines the space that the vehicle can occupy, and when and how the vehicle can occupy each lane. Besides the road itself, the immediate surroundings of the road are equally important. Trees and bushes can obstruct the view of traffic signs, pedestrians from the sidewalks may suddenly decide to cross the street, and buildings on the side of the street may cast shadows onto the road or reduce GPS accuracy. All these elements have to be realistically modeled to properly set the scene where the action takes place.

Of course, the autonomous vehicle shares the road with other vehicles, which can be bicycles, motorbikes, cars, buses, trucks with trailers, Segways, a police officer on a horse, or anything else. Anything that is allowed to be driven on the road should be included. And any of those participants might have their own way of interacting with the rest of the traffic: for example, a motorcycle splitting lanes during a traffic jam, a large truck getting stuck in the traffic because of its slow acceleration, or a cyclist deciding to move from the sidewalk to the middle of the road to make a left turn. It is important that all these traffic participants be captured in their unique ways of maneuvering.

Pedestrians and their behaviors also need to be modeled, especially the way they interact with the oncoming traffic. The engineers need to reproduce the gestures of the pedestrians, for instance, whether or not they are watching the traffic, or when they are distracted by texting on the phone while crossing the street. Animals’ behaviors can be even more unpredictable, like jumping in front of the vehicle erratically, getting stuck in the middle of the road, or staring at the car as it approaches.

The last important factor one needs to consider for the environment simulation is weather and lighting, which is critical since it impacts the way sensors perceive the scene. When it is raining, the vehicle needs to slow down because the driver’s vision and the road friction have changed. With the low-lying sun at sunset or sunrise, a human driver needs to wear sunglasses, because otherwise he or she cannot really see the road clearly. Weather and lighting similarly affect sensors like cameras, RADARs, and LiDARs: fog reduces the visibility of a camera (and absorbs energy from a RADAR), and LiDARs are sensitive to raindrops since they scatter the laser beams.

In fact, the perceived sensor data is the most valuable piece of information that the AV simulation provides. With this data accurately available, engineers can focus on the following phases of autonomous driving development.
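The parameter-permutation idea discussed above can be made concrete with a short sketch. The parameter names and values are illustrative, not a VTD scenario schema: the point is simply that a handful of values per parameter already multiplies into over a thousand concrete scenarios from a single base case.

```python
from itertools import product

# Illustrative scenario parameters for one base situation
parameters = {
    "pedestrian_count": [0, 1, 3, 8],
    "ego_speed_kmh":    [30, 50, 70, 100, 130],
    "visibility_m":     [50, 150, 500, 2000],    # dense fog ... clear air
    "road_friction":    [0.3, 0.5, 0.8, 1.0],    # ice ... dry asphalt
    "time_of_day":      ["dawn", "noon", "dusk", "night"],
}

def permute_scenarios(params):
    """Expand one base scenario into all parameter combinations."""
    names = list(params)
    for values in product(*(params[n] for n in names)):
        yield dict(zip(names, values))

scenarios = list(permute_scenarios(parameters))
print(len(scenarios))   # 4 * 5 * 4 * 4 * 4 = 1280 scenarios from one base case
```

With a few dozen base scenarios, the same expansion reaches the “thousands of scenarios” scale the text mentions; in practice a sampling step (like the AI Sampling described earlier) is needed to keep only the relevant combinations.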
Figure 4. Simulating a pedestrian crossing the road while on a cell phone. Simulation done in VIRES VTD.
Figure 5. Simulating a vehicle driving during the evening. Simulation done in VIRES VTD.
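The fog effect on a LiDAR described above can be approximated to first order with two-way Beer-Lambert attenuation, using the Koschmieder relation (extinction coefficient sigma = 3.912 / visibility, for a 2% contrast threshold) to convert a meteorological visibility into an extinction coefficient. This is a textbook approximation for illustration, not the sensor model implemented in VIRES VTD.

```python
import math

def lidar_return_fraction(range_m, visibility_m):
    """Fraction of emitted power surviving the two-way path to a target in fog."""
    sigma = 3.912 / visibility_m          # extinction coefficient [1/m]
    return math.exp(-2.0 * sigma * range_m)

# In clear air (20 km visibility) a 100 m return is barely attenuated;
# in dense fog (50 m visibility) it is almost completely extinguished.
clear = lidar_return_fraction(100.0, 20000.0)
foggy = lidar_return_fraction(100.0, 50.0)
```

Even this crude model reproduces the qualitative behavior the text describes: dense fog wipes out long-range returns while leaving short-range ones usable, which is exactly the kind of degradation a virtual sensor needs to expose to the perception stack.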
Marius Dupuis
Managing Director
VIRES Simulationstechnologie GmbH, an MSC Software Company
Marius Dupuis is the Managing Director, VIRES Simulationstechnologie GmbH, an MSC Software Company. Dupuis holds
a degree as "Diplom" engineer in aerospace from Stuttgart University.
He began his professional career in 1995 at Eurocopter, Germany, in the field of flight simulation for engineering purposes. Dupuis co-founded VIRES in late 1996 and began working full-time as General Manager of VIRES in 1998. He has been with VIRES ever since and is fully dedicated to the development of VIRES software.
Dr. Luca Castignani is the Head of Autonomous Mobility Strategy at MSC Software, where he has been designing customer-oriented solutions in NVH and MBD. In his previous role, Dr. Castignani served at Ferrari, leading Vehicle Dynamics and NVH, and later as the head of CAE. He was in charge of designing the new generation of Ferrari suspensions that fits the whole range of cars. Dr. Castignani holds a PhD in Vehicle Dynamics from Politecnico di Milano (Italy), earned in cooperation with Pirelli Tires.
Dr. Keith Hanna is the Vice President of Marketing at MSC Software. Dr. Hanna brings over 25 years of experience in the
CFD, CAE, EDA and PLM industries, spanning a wide range of global technical and marketing roles inside Siemens PLM,
Mentor Graphics Corp., ANSYS Inc. and Fluent Inc.
His career prior to engineering simulation included practical experience in the metallurgical and mining industries at British Steel and De Beers. He holds both BSc and PhD engineering degrees from the University of Birmingham in England and is
a respected commentator on the CFD/CAE industry, a pioneer of CFD in sport, and a former member of the Executive
Committee of the International Sports Engineering Association.
Copyright © 2020 Hexagon AB and/or its subsidiaries. All rights reserved. Hexagon, the Hexagon logo, and other logos, product and service names of Hexagon
and its subsidiaries are trademarks or registered trademarks of Hexagon AB and/or its subsidiaries in the United States and/or other countries. All other trademarks
belong to their respective owners. Information contained herein is subject to change without notice. Hexagon shall not be liable for errors contained herein.
www.mscsoftware.com