Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure
Edited by
David J. White
Ahmad Alhasan
Pavana Vennapusa
Sponsors
Caterpillar, Inc.
Iowa Department of Transportation
Center for Industrial Research and Service, Iowa State University
Department of Civil, Construction, and Environmental Engineering, Iowa State University
Midwest Transportation Center, Iowa State University
Proceedings of the 2015 Conference on Autonomous
and Robotic Construction of Infrastructure
Iowa State University
Copyright and reprint permission: Abstracting is permitted with credit to the author and source. Copying
individual papers in whole or in part for nonprofit classroom educational purposes is permitted with credit
to the author and source. For all other purposes, all rights, title, and interest, including copyright, in indi-
vidual papers herein are owned by the author(s) of the individual papers.
Library of Congress ISSN
ISBN
The accuracy and validity of the text and data presented in the papers are the responsibility of the various
authors and not of the Center for Earthworks Engineering Research, the Institute for Transportation, Iowa
State University, or other sponsors of the 2015 Conference on Autonomous and Robotic Construction of
Infrastructure.
Marcia Brink, Production Editor
Institute for Transportation
Iowa State University
2711 S. Loop Drive, Suite 4700
Ames, IA 50010
ACKNOWLEDGMENTS
The Center for Earthworks Engineering Research would like to thank the following organizations for sponsoring the 2015 Conference on Autonomous and Robotic Construction of Infrastructure and the related
proceedings of papers:
Caterpillar, Inc.
U.S. Department of Transportation, Office of the Assistant Secretary for Research and Technology
Contents
vii Foreword
David J. White
Exploratory Study on Factors Influencing UAS Performance on Highway
Construction Projects: The Case of Safety Monitoring Systems
Sungjin Kim and Javier Irizarry
159 Bridge Structural Condition Assessment using 3D Imaging
Simon Laflamme, Yelda Turkan, and Liangyu Tan
197 Discrete Element Modelling (DEM) For Earthmoving Equipment Design and
Analysis: Opportunities and Challenges
Mehari Tekeste
Robotic Hybrid Data Collection System Development for Efficient and Safe Heavy
Equipment Operation
Chao Wang and Yong K. Cho
Designing Digital Topography: Opportunities for Greater Efficiency with a
Primitives and Operators Approach
Caroline Westort
Foreword
By David J. White, Richard L. Handy Professor of Civil, Construction, and Environmental Engineering and Director,
Center for Earthworks Engineering Research, Iowa State University of Science and Technology
Infusion of autonomous and robotic operations into infrastructure construction holds promise for
significantly improving the quality, efficiency, and safety of building America's infrastructure. Due
to the complex developments required in the areas of theory, data analytics, machine systems,
operator training, specifications development, and sensors, it is envisioned that partnerships between academia, industry, and government agencies are needed to make strategic and efficient advancements in the near term to realize maximum benefit.
Although there are efforts focused on the topic of robotics for manufacturing, seemingly little
attention has been given to the full spectrum of technologies, sensors, systems, specifications,
training, visualization, and data analytics needed for automating the in situ manufacturing of
our infrastructure. Robotics (including co-robotic operations) and autonomous equipment op-
erations hold great potential for advancement. Even a small improvement in productivity could
generate a significant return given the multibillion-dollar annual investment in infrastructure
construction. Unfortunately, infrastructure construction has only limited theory established for
process control.
The 2015 Conference on Autonomous and Robotic Construction of Infrastructure at Iowa State
University was organized to discuss application and theory for advancing autonomous and
robotic-guided equipment to improve productivity, quality, reliability, and safety for civil infrastruc-
ture construction and maintenance. Unlike some conferences that focus on practice or theory
and are dominated by one particular discipline, this conference provided information on how
autonomous robotics systems are being applied in practice, discussed emerging technologies,
and described theories that will no doubt shape the future of this field.
Focusing attention on this topic can be used to leverage needed investment in people and
facilities to better anchor this area as a “core” national research focus. Given that this area of
research and development covers a spectrum of needs, participants were invited to bridge the
gaps within and between academia and industry.
Papers presented at the conference are collected in this proceedings. They cover a range of
topics including mobile robotic operations, visual analysis, terrain modeling, simulations inspired
by natural processes, multidimensional modeling, 3D printing, sensors, data processing, and
new applications for construction technologies like photogrammetry. An underlying theme for
many of the presentations was data analytics—trying to make sense of and bring value to the
deluge of data now being generated during construction operations (e.g., machine telematics,
digital photos, weather data, sensors, etc.). These papers show that there is tremendous inter-
est in this topic and the future is bright.
When we look back, years from now, I believe we will see that the participants
in this conference continued their tremendous and exciting work to advance autonomous and
robotics operations, that new collaborations were formed, and that new developments have indeed made significant contributions to improving human lives by betterment of the infrastructure
construction processes.
Integrated Mobile Sensor-Based Activity Recognition
of Construction Equipment and Human Crews
Reza Akhavian
Lianne Brito
Amir Behzadan
Department of Civil, Environmental, and Construction Engineering
University of Central Florida
12800 Pegasus Dr.
Orlando, FL 32816
[email protected]
[email protected]
[email protected]
ABSTRACT
Automated activity recognition of heavy construction equipment as well as human crews can
contribute to correct and accurate measurement of a variety of construction and infrastructure
project performance indicators. Productivity assessment through work sampling, safety and
health monitoring using worker ergonomic analysis, and sustainability measurement through
equipment activity cycle monitoring to eliminate ineffective and idle times thus reducing
greenhouse gas emission (GHG), are some potential areas that can benefit from the integration
of automated activity recognition and analysis techniques. Despite their proven performance
and applications in other domains, few construction engineering and management (CEM)
studies have so far employed non-vision sensing technologies for construction equipment and
workers’ activity recognition. The existence of a variety of sensors in ubiquitous smartphones
with evolving computing, networking, and storage capabilities has created great opportunities
for a variety of pervasive computing applications. In light of this, this paper describes the latest
findings of an ongoing project that aims to design and validate a ubiquitous smartphone-based
automated activity recognition framework using built-in accelerometer and gyroscope sensors.
Collected data are segmented into overlapping windows to extract time- and frequency-domain
features. Since each sensor collects data in three axes (x, y, z), several features from all three
axes are extracted to ensure independence from device placement and orientation. Finally, features are
used as the input of supervised machine learning classifiers. The results of the experiments
indicate that the trained models are able to classify construction workers and equipment
activities with over 90% overall accuracy.
INTRODUCTION
With ever-growing infrastructure projects, demand for information and automation
technologies in the architecture, engineering, construction, and facility management (AEC/FM)
industry is rapidly increasing. 3D printing in design and construction of affordable houses
(Krassenstein 2015), drones for various applications in construction jobsites (Irizarry et al.
2012), and Micro-Electro-Mechanical Systems (MEMS) sensors for tracking and monitoring of
construction resources (Akhavian and Behzadan 2015) are some examples of how the latest
cutting-edge technology serves the AEC/FM industry. The latter, in particular, has ample
untapped potential for use in day-to-day operations due to the existence of various sensors in
a major technology platform carried by almost everyone these days: the smartphone.

Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015 by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the facts and accuracy of the information presented herein.
The ubiquity, affordability, small size, and computing power of mobile phones, equipped with a host
of sensors, have made them ideal choices for tracking and monitoring of construction resources.
In particular, these built-in sensors can provide invaluable information regarding the
performance, safety, and behavior of construction workers and equipment in the field. For
example, activity analysis using inertial measurement unit (IMU) sensors including
accelerometer and gyroscope can help evaluate the time spent on interconnected construction
tasks. Such information results in better understanding and potential improvement of the
processes involved. Moreover, effective and timely analysis and tracking of the construction
resources can help in productivity measurement, progress evaluation, labor training programs,
safety and health management, and greenhouse gas emission (GHG) and fuel consumption
analysis in construction projects.
This paper presents the results of a smartphone sensor-based machine learning platform for
accurate recognition and classification of activities performed by construction equipment and
workers. Specifically, process data are collected from construction equipment and human crews
using the smartphone's built-in accelerometer and gyroscope. Certain data pre-processing is then
performed to prepare the data as the training input of supervised machine learning classifiers.
The outputs of the classifiers are the labels of various activities carried out by the construction
resources and detected using this framework.
LITERATURE REVIEW
Human Activity Recognition using Sensors
The initial efforts on human activity recognition date back to the late ’90s (Foerster et al. 1999).
During the last 15-16 years, the use of MEMS sensors has increased tremendously for
acquiring knowledge on humans’ activities. With the growing demand in activity recognition for
different fields of research and practice, the cost of system implementation and prototyping has
also decreased, which makes the technology more affordable and accessible to businesses and
the industry. Most of the activity recognition research during the past decade has been
conducted using wired or wireless accelerometers attached to different parts of a subject’s
body. It has been proven that the accuracy of the recognition algorithm improves as sensors are
attached to more than one part of the body. A disadvantage of this approach, however, is that the
presence of multiple asynchronous sensors, each communicating through a separate interface,
makes the data collection process computationally inefficient and tedious, as well as ergonomically
obtrusive and uncomfortable for the subject. In a study by Frank et al. (2010), MEMS-based
IMUs attached to the performer’s waist were used to present four different algorithms and
recognize daily activities such as standing, running, walking, and falling. Even though most of
the activities were recognized with an accuracy of 93-100%, the rate of accuracy of detecting
falling was 80%.
Activity recognition has been widely used in the medical field primarily for elderly patient
monitoring. In a study by Gupta and Dallas (2014), a waist-mounted triaxial accelerometer was
used to classify gait events into six daily living activities, including running, jumping, walking, and
sitting, and transitional events, including sit-to-stand and stand-to-kneel. While most of the previous studies
measured the activities with an accelerometer, some studies added a gyroscope sensor to
improve the accuracy of classification. For instance, in a study aiming at accurate and fast fall
Design and development of a small-size, unobtrusive, and low-cost data collection scheme has
been one of the most important challenges in improving the accuracy of activity recognition in a
more affordable and robust manner (Lara and Labrador 2013). During the last decade and with
the advancement of smartphones, a major paradigm shift occurred in the data collection
scheme for human activity recognition. As mentioned before, nowadays smartphones include a
wide variety of computationally powerful sensors. Many researchers seized this opportunity to
build more pervasive human activity recognition systems. Kwapisz et al. (2011) conducted a
study on 29 Android smartphone users that utilized the accelerometer sensor to monitor daily
activities such as walking, jogging, ascending stairs, descending stairs, sitting, and standing for
a period of time. The most accurate classification results were achieved in two relatively
straightforward activities: sitting and jogging (compared to walking, standing, and climbing
up/down the stairs). Again, even though the target activities were recognized with 90% accuracy
in most cases (except for climbing up/down the stairs), recognizing such daily and
straightforward activities is relatively easier than those carried out in a construction site, for
instance. Dernbach et al. (2012) published the results of their study aimed at measuring simple
and more complex daily activities using built-in accelerometer and gyroscope sensors in
smartphones. Simple activities included biking, climbing stairs, driving, lying, running, sitting,
standing, and walking while complex activities included cleaning (i.e. wiping down the kitchen
counter top and sink), cooking (i.e. heating a bowl of water in the microwave and pouring a
glass of water from a pitcher in the fridge), medication (i.e. retrieving pills from the cupboard and
sorting out a week's worth of doses), sweeping (i.e. sweeping the kitchen area), washing hands,
and watering plants. Their results showed that while an accuracy of around 90% was achieved
for simple activities, more complex activities were best classified with less than 50% accuracy.
Moreover, in all activity types, adding gyroscope data to accelerometer data improved the
results by 10-12%. Another study using smartphone accelerometer and gyroscope sensors was
conducted by Wu et al. (2012), in which an iPod Touch was used to collect data on
13 different daily activities (e.g. stair climbing, jogging, sitting) from 16
human subjects. Using only the accelerometer data, the accuracy in recognizing these activities
ranged from 50% to 100%, while adding the gyroscope data helped improve the results by
another 3.1% to 13.4%.
Construction workers' activity recognition using vision-based systems has also been the subject
of some other studies. For example, 3D range image cameras were used for tracking and
surveillance of construction workers for safety and health monitoring (Gonsalves and Teizer
2009; Peddi et al. 2009). Gonsalves and Teizer (2009) indicated that if their proposed system is
used in conjunction with artificial neural network (ANN), results would be more robust for
prevention of fatal accidents and related health issues. In their study on construction workers’
unsafe actions, Han and Lee (2013) developed a framework for 3D human skeleton extraction
from video to detect unsafe predefined motion templates. All of these frameworks, although
presented successful results in their target domain, require installation of multiple cameras (up
to 8 in some cases), have short recognition distance (maximum of 4 meters for Kinect) and
Non-vision-based (a.k.a. sensor-based) worker activity analysis has recently gained popularity
among CEM researchers. Cheng et al. (2013) used ultra-wide band (UWB) and Physiological
Status Monitors (PSMs) for productivity assessment. However, the level of detail (LoD) in recognizing the
activities was limited to identification of traveling, working, and idling states of workers and could
not provide further insight into identified activities. In another set of research studies aiming at
construction equipment activity analysis to support process visualization, remote monitoring and
planning, queueing analysis, and knowledge-based simulation input modeling, the authors
developed a framework by fusing data from ultra-wide band (UWB), payload, and orientation
(angle) sensors to build a spatio-temporal taxonomy-based reasoning scheme for activity
classification in heavy construction (Akhavian and Behzadan 2012, 2013, 2014).
As one of the first accelerometer-based activity recognition studies, Joshua and Varghese
(2011) developed a work sampling framework for bricklayers. The scope of that study, however,
was limited to only a single bricklayer in a controlled environment. Moreover, their proposed
framework used accelerometer as the sole source of motion data. Also, the necessity of
installing wired sensors on the worker’s body may introduce a constraint on the worker’s
freedom of movement. In another study, Ahn et al. (2013) used accelerometers to classify an
excavator's operations into three modes: engine-off, idling, and working. Further decomposition
of these activities, however, was not explored in their study.
METHODOLOGY
In this study, data are collected using mobile phone accelerometer and gyroscope sensors.
Collected raw sensory data are segmented into windows containing a certain number of data
points. Next, key statistical features are calculated within each window. Furthermore, each
segment is labeled based on the corresponding activity class performed at the time identified by
the timestamp of the collected data. In order to train a predictive model, five classifiers of
different types are used to recognize activities performed in the data collection experiments.
Figure 1 depicts the steps from data collection to activity recognition.
Data Preparation
A major step before transforming raw data into the input features for machine learning
algorithms is removing noise and accounting for missing data. When collecting data for a long
period of time, it is possible that the sensors temporarily freeze or fail to properly collect and
store data for fractions of a second to a few seconds and, in return, compensate for the missing
data points by collecting data at a rate higher than the assigned frequency. In such cases, a
preprocessing technique to fill in missing data points and remove redundant ones can help
ensure a continuous and orderly dataset. Also, since the raw data are often collected with a
high sampling rate, segmentation of the data helps in data compression and prepares data for
feature extraction (Khan et al. 2011). If segmentation is performed considering an overlap
between adjacent windows, it reduces the error caused by the transition state noise (Su et al.
2014). The length of the window size depends on the sampling frequency and the nature of
activities targeted for classification from which data is collected (Su et al. 2014).
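The gap-filling and overlapping-window steps described above can be sketched as follows. This is a minimal illustration; the function names, sampling rate, and window parameters are assumptions, not the authors' implementation:

```python
import numpy as np

def resample_uniform(t, v, fs):
    """Linearly interpolate irregularly sampled values v at times t onto a
    uniform grid at fs Hz, filling short gaps and collapsing redundant points."""
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    return t_uniform, np.interp(t_uniform, t, v)

def segment(signal, window_size, overlap=0.5):
    """Split a 1-D signal into fixed-length windows with fractional overlap.

    Returns an array of shape (n_windows, window_size); trailing samples
    that do not fill a complete window are dropped.
    """
    step = max(1, int(window_size * (1.0 - overlap)))
    starts = range(0, len(signal) - window_size + 1, step)
    return np.array([signal[s:s + window_size] for s in starts])

# Irregular timestamps resampled to a 2 Hz grid, then 64-sample windows
# at 50% overlap (windows start at samples 0, 32, 64, ...).
t = np.array([0.0, 0.4, 1.1, 2.0])
_, v = resample_uniform(t, np.array([0.0, 0.8, 2.2, 4.0]), fs=2)
windows = segment(np.arange(128), window_size=64, overlap=0.5)
```

The 50% overlap mirrors the adjacent-window overlap the text credits with reducing transition-state noise.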
Feature Extraction
A feature is an attribute calculated from the raw data (Khan et al. 2011). In data
analytics applications, statistical time- and frequency-domain features generated in each
window are used as the input of the training process (Ravi et al. 2005). The ability to extract
appropriate features depends on the application domain and can steer the process of retaining
the relevant information. Most previous studies on activity recognition used almost the same set
of features for training the models and classification of activities (Shoaib et al. 2015).
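As an illustration of this step, one window of triaxial data can be reduced to the kinds of statistical features that appear later in Table 1 (per-axis mean, variance, RMS, interquartile range, and FFT peak, plus pairwise axis correlations). The exact feature set and helper names here are assumptions:

```python
import numpy as np

def window_features(ax, ay, az):
    """Time- and frequency-domain features for one window of triaxial data."""
    feats = {}
    for name, axis in (("x", ax), ("y", ay), ("z", az)):
        feats[f"mean_{name}"] = np.mean(axis)
        feats[f"variance_{name}"] = np.var(axis)
        feats[f"rms_{name}"] = np.sqrt(np.mean(axis ** 2))
        q75, q25 = np.percentile(axis, [75, 25])
        feats[f"iqr_{name}"] = q75 - q25
        # Frequency domain: magnitude of the strongest non-DC FFT component.
        spectrum = np.abs(np.fft.rfft(axis - np.mean(axis)))
        feats[f"peak_{name}"] = spectrum.max()
    # Pairwise correlations capture coordination between axes.
    feats["corr_xy"] = np.corrcoef(ax, ay)[0, 1]
    feats["corr_yz"] = np.corrcoef(ay, az)[0, 1]
    feats["corr_xz"] = np.corrcoef(ax, az)[0, 1]
    return feats

rng = np.random.default_rng(0)
f = window_features(rng.normal(size=64), rng.normal(size=64), rng.normal(size=64))
```

Extracting the same statistics from all three axes is what makes the resulting feature vector insensitive to how the device happens to be oriented.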
Data Annotation
Following data segmentation and feature extraction, the corresponding activity class labels
should be assigned to each window. This serves as the ground truth for the learning algorithm
and can be retrieved from the video recorded at the time of the experiment.
Supervised Learning
In supervised learning classification, class labels discussed in Data Annotation (above) are
provided to the learning algorithms to generate a model or function that matches the input (i.e.
features) to the output (i.e. activity classes) (Ravi et al. 2005). The goal is to infer a function
using examples for which the class labels are known (i.e. training data). The performance of this
function is evaluated by measuring the accuracy in predicting the class labels of unseen
examples. Researchers have used different types of supervised classification methods for
activity recognition (Kim et al. 2013; Reddy et al. 2010; Sun et al. 2010). Details of the
supervised learning classification algorithms are discussed in the next Section.
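As a sketch of this training step, the five classifier families evaluated later in the paper can be trained and cross-validated with off-the-shelf scikit-learn stand-ins on a synthetic substitute for the extracted features. The hyperparameters and data here are illustrative assumptions, not the authors' settings:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in feature matrix: 200 windows x 7 features, 3 activity classes.
X, y = make_classification(n_samples=200, n_features=7, n_informative=5,
                           n_classes=3, random_state=0)

classifiers = {
    "logistic": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "tree": DecisionTreeClassifier(random_state=0),
    "ann": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "svm": SVC(kernel="rbf"),
}
# Mean 10-fold cross-validation accuracy per classifier.
scores = {name: cross_val_score(clf, X, y, cv=10).mean()
          for name, clf in classifiers.items()}
```

Comparing several model families this way reflects the paper's point that no single classifier is best a priori for a given dataset.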
Activity Recognition
Once the model is trained and its parameters are finalized, it can be used for recognizing
activities for which it has been trained. While data is being collected to determine the activities
according to a trained classifier, such data can be stored in a dataset repository and be added
to the existing training data, so that the model is further trained with a richer training dataset.
$h_\theta(x) = \dfrac{1}{1 + e^{-\theta^{T} x}}$    (1)

in which $h_\theta(x)$ is the activation function (i.e. hypothesis, here the sigmoid function), $\theta$ is a matrix of model weights (i.e.
parameters), and $x$ is the features matrix. In this study, in order to minimize the cost function,
the most commonly used ANN training method, namely feed-forward backpropagation is used.
Considering a set of randomly chosen initial weights, the backpropagation algorithm calculates
the error of the activation function in detecting the true classes and tries to minimize this error by
taking subsequent partial derivatives of the cost function with respect to the model weights
(Hassoun 1995).
$G = 1 - \sum_{i=1}^{C} p_i^2$    (2)

in which $G$ is the Gini index, $p_i$ is the fraction of items labeled with value $i$, and $C$ is the number of
classes. The process of splitting is repeated iteratively for all nodes until they are pure. A node
is considered pure if it contains only observations of one class, implying a Gini index of zero, or
if there are fewer than 10 observations left to split.
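A minimal illustration of Equation 2 (the helper name is hypothetical, not from the paper):

```python
def gini_index(labels):
    """Gini impurity G = 1 - sum(p_i^2) over class label fractions p_i."""
    n = len(labels)
    fractions = [labels.count(c) / n for c in set(labels)]
    return 1.0 - sum(p * p for p in fractions)

# A pure node has zero impurity; an even two-class split gives 0.5.
pure = gini_index(["idle", "idle", "idle"])
mixed = gini_index(["idle", "busy"])
```

A node split is then chosen to maximize the reduction in this impurity.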
$d(p, q) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \cdots + (q_n - p_n)^2}$    (3)

in which $d$ is the Euclidean distance, $p$ is an existing example data point which has the least
distance to the new example, $q$ is the new example to be classified, and $n$ is the dimension
of the feature space.
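Equation 3 and its role in nearest-neighbor classification can be sketched as follows (the helper names and the toy two-example training set are illustrative assumptions):

```python
import math

def euclidean(p, q):
    """Euclidean distance between two feature vectors (Equation 3)."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

def nearest_label(q, examples):
    """1-NN rule: label of the training example closest to the new example q.

    `examples` is a list of (feature_vector, label) pairs.
    """
    return min(examples, key=lambda e: euclidean(e[0], q))[1]

# Toy training set: two labeled points in a 2-D feature space.
train = [((0.0, 0.0), "idle"), ((1.0, 1.0), "busy")]
```

For K > 1, the K smallest distances would be collected and the majority label returned.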
Logistic Regression
Logistic regression is a type of regression in which the output is discretized for
classification (Hastie et al. 2009). Logistic regression seeks to form a hypothesis function that
maps the input (i.e. training data) to the output (i.e. class labels) by estimating the conditional
probability that an example belongs to class k given its feature values. This is accomplished
by minimizing a cost function using a hypothesis function and
correct classes to find the parameters of the mapping model (Hastie et al. 2009). The
hypothesis function used in this research is the same as the activation function introduced in
Equation 1 (the Sigmoid function) and thus the cost function to minimize is as shown in
Equation 4,
$J(\theta) = -\dfrac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]$    (4)

in which $J(\theta)$ is the cost function, $m$ is the number of training examples, $x^{(i)}$ is the $i$th training
example, and $y^{(i)}$ is the corresponding correct label. Once the cost function is minimized using
any mathematical method such as gradient descent (Hastie et al. 2009) and the parameters are
found, the hypothesis will be formed. In multi-class classification, the one-versus-all method is used.
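The cost of Equation 4 can be evaluated directly for a toy example. This is only a sketch of the binary, unregularized case; the function names and data are illustrative, not the authors' implementation:

```python
import math

def sigmoid(z):
    """Logistic (sigmoid) activation of Equation 1."""
    return 1.0 / (1.0 + math.exp(-z))

def cost(theta, X, y):
    """Cross-entropy cost of Equation 4 for binary labels y in {0, 1}.

    theta: parameter vector; X: list of feature vectors; y: list of labels.
    """
    m = len(y)
    total = 0.0
    for x_i, y_i in zip(X, y):
        h = sigmoid(sum(t * x for t, x in zip(theta, x_i)))
        total += y_i * math.log(h) + (1 - y_i) * math.log(1 - h)
    return -total / m
```

With theta = 0 the hypothesis outputs 0.5 for any input, so the cost for a single positive example is log 2, the familiar starting point before gradient descent pushes the parameters toward lower cost.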
VALIDATION EXPERIMENTS
In this Section, the description and details of two separate experiments conducted using
construction equipment and workers in order to validate the designed activity recognition
methods are provided.
Among all extracted features, there are some that may not add to the accuracy of the
classification. This might be due to the correlation that exists among the collected data and
consequently extracted features, since many actions result in a similar pattern in different
directions and/or different sensor types (i.e. accelerometer vs. gyroscope). Therefore, in order to
reduce the computational cost and time of the classification process, and increase its accuracy,
a subset of the discriminative features is selected by filtering out (removing) irrelevant or
redundant features (Pirttikangas et al. 2006). In this study, two filtering approaches are used:
ReliefF and Correlation-based Feature Selection (CFS). ReliefF is a weighting algorithm that
assigns a weight to each feature and ranks them according to how well their values distinguish
between the instances of the same and different classes that are near each other (Yu and Liu
2003). CFS is a subset search algorithm that applies a correlation measure to assess the
goodness of feature subsets based on the selected features that are highly correlated to the
class, yet uncorrelated to each other (Hall 1999).
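A greatly simplified, CFS-like redundancy filter can be sketched as follows. This is only an illustration of correlation-based filtering, not the actual CFS or ReliefF algorithms used in the study; the threshold and names are assumptions:

```python
import numpy as np

def filter_redundant(X, names, threshold=0.9):
    """Keep a feature only if its absolute correlation with every
    previously kept feature stays at or below `threshold`."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in kept):
            kept.append(j)
    return [names[j] for j in kept]

# f2 is almost a scaled copy of f1, so it is filtered out as redundant.
rng = np.random.default_rng(1)
a = rng.normal(size=100)
X = np.column_stack([a,
                     a * 2.0 + 0.01 * rng.normal(size=100),
                     rng.normal(size=100)])
selected = filter_redundant(X, ["f1", "f2", "f3"])
```

Full CFS additionally scores each candidate subset by its correlation with the class labels, not just by inter-feature redundancy.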
Using CFS, irrelevant and redundant features were removed, which yielded 12 features (out of
42). These features were then ranked by ReliefF using their weight factors. The first 12 features
selected by ReliefF were compared to those selected by CFS and the 7 common features in
both methods were ultimately chosen as the final feature space. Table 1 shows the selected
features by each filter as well as their intersection.
Table 1. Selected features by CFS and ReliefF and their intersection (A: Accelerometer,
G: Gyroscope)

Filter   Selected Features
CFS      A_mean_x, A_mean_y, A_mean_z, A_peak_x, A_iqr_y, A_iqr_z, A_correlation_z, A_rms_z, G_mean_x, G_mean_y, G_mean_z, G_variance_x
ReliefF  G_mean_z, A_mean_x, G_mean_x, A_peak_z, A_mean_y, A_correlation_y, A_correlation_x, A_mean_z, A_iqr_z, A_peak_x, A_peak_y, G_rms_z
Common   A_mean_x, A_mean_y, A_mean_z, A_peak_x, A_iqr_z, G_mean_x, G_mean_z
However, the choice of the learning algorithm is highly dependent on the characteristics and
volume of data. As a result, a “single” best classifier does not generally exist and each case
requires unique evaluation of the learning algorithm through cross validation (Goldberg 1989).
Therefore, a number of learning algorithms are tested in this research to compare their
performance in classifying actions using sensory data.
In this experiment, classification was performed by labeling the classes in different LoDs. The
first set of training and classification algorithms is applied to three classes, namely Engine Off,
Idle, and Busy. Next, the Busy class is broken down into two subclasses of Moving and
Scooping, and Moving and Dumping, and so on. As stated earlier, for action classification, five
supervised learning methods were used: 1) Logistic Regression, 2) K-NN, 3) Decision Tree, 4)
ANN (feed-forward backpropagation), and 5) SVM. Using different classifiers reduces the
uncertainty of the results that might be related to the classification algorithm that each classifier
uses.
As stated earlier, construction equipment activity recognition has been previously explored
through vision-based technologies. Gong et al. (2011) reported an overall accuracy of 86.33%
for classification of three action classes of a backhoe. In a more recent study, Golparvar-Fard et
al. (2013) achieved 86.33% and 76.0% average accuracy for three and four action classes of an
excavator, respectively, and 98.33% average accuracy for three action classes of a dump truck.
Although the target construction equipment is different in each case and the action categories
vary across these studies, the developed framework in this study, which uses IMUs for the first time
for construction equipment action recognition shows promising results when compared to
existing vision-based systems that have been the subject of many research studies in the past
few years.
The classification accuracies are reported for 3 activity categories listed in Table 3. The
following activity codes are used in reporting the results: in the first category, activity sawing
(SW) and being idle (ID) are classified. In the second category, activities hammering (HM),
turning a wrench (TW), and being idle (ID) are classified. Finally, in the third category
classification is performed on the activities loading sections into wheelbarrow (LW), pushing a
loaded wheelbarrow (PW), dumping sections from wheelbarrow (DS), returning an empty
wheelbarrow (RW), and being idle (ID). Table 3 shows the training and 10-fold cross
validation classification accuracy results for the subject performing activities of category 1.
According to Table 3, over 99% training accuracy was achieved in category 1 using ANN
classifier. This confirms the hypothesis that IMU data pertaining to a single activity performed by
different workers contain highly distinguishable patterns. However, training accuracy is not an
appropriate measure to assess the ability of using such data for new instances of the same
activity. Nevertheless, the stratified 10-fold cross validation results confirm that regardless of the
nature of classification algorithm, a single activity can be recognized with over 96% accuracy
using all five classifiers.
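The stratified 10-fold procedure described here (train on 90% of the data, test on the held-out 10%, repeated across folds) can be sketched with scikit-learn on synthetic stand-in data; the classifier choice and dataset are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the extracted feature windows and their activity labels.
X, y = make_classification(n_samples=100, n_features=7, n_informative=5,
                           random_state=0)

accuracies = []
for train_idx, test_idx in StratifiedKFold(n_splits=10).split(X, y):
    # Each fold trains on 90% of the windows and scores the remaining 10%.
    clf = KNeighborsClassifier(n_neighbors=5).fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))
mean_accuracy = sum(accuracies) / len(accuracies)
```

Stratification keeps the class proportions of each fold close to those of the full dataset, which matters when some activity classes have few windows.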
Since it is very likely that a construction worker performs more than one highly distinguishable
activity at a time, activities performed in category 2 are designed such that they produce almost
the same physical arm movement. Table 4 shows the training and 10-fold cross validation
classification accuracy results of both subjects performing activities of category 2.
Similar to category 1, the training accuracies are high, particularly for the ANN and decision
tree classifiers. CART decision trees are not very stable, and a small change in the training
data can change the result drastically, as is apparent in the outcome of the 10-fold cross validation;
the ANN, by contrast, presents an average accuracy of around 90% for both subjects. All other
classification methods performed almost the same, with a slight superiority of KNN relative to the
other algorithms. This result is particularly important considering the fact that the two activities in
category 2 (i.e. hammering and turning a wrench) produce very similar physical movements in
a worker's arm.
According to Table 5, the decision tree again achieved high accuracy in training while, as expected,
its performance is not the same in the 10-fold cross validation evaluation. However, except for the
decision tree and SVM, all other classifiers, namely ANN, KNN, and logistic regression, resulted
in around 90% average accuracy for both subjects. Similar to the other two categories, the
feedforward back-propagation implementation of the ANN resulted in the highest accuracy
among all.
In order to study construction equipment activity recognition, a case study of a front-end loader
was used to describe the methodology for action recognition and evaluate the performance of
the developed system. In doing so, several important technical details, such as selection of
discriminating features to extract different LoDs for classification and choice of classifier to be
used, were investigated.
In the case of workers' activity recognition, built-in sensors of ubiquitous smartphones have been
employed to assess the potential of wearable systems for activity recognition. Smartphones
were affixed to workers’ upper arms using armbands, and accelerometer and gyroscope data
were collected from multiple construction workers involved in different types of activities. The
high training accuracies achieved by testing several classification algorithms, including
ANN, decision tree, KNN, logistic regression, and SVM confirmed the hypothesis that different
classification algorithms can detect patterns that exist within signals produced by IMUs while
different construction tasks are performed. Through 10-fold stratified cross validation, algorithms
were trained with 90% of the available data and the trained models were tested on the
remaining 10%. In different categories of activities, around and over 90% accuracy was
achieved. This promising result indicates that built-in smartphone sensors have high potential to
be used as integrated data collection and activity recognition platforms in construction
environments.
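The 90/10 stratified splitting protocol described above can be sketched in plain Python. The fold-assignment logic is the point of the example; feature extraction and the classifiers themselves (ANN, KNN, etc.) are omitted, and all names and data are illustrative.

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=10, seed=0):
    """Assign sample indices to k folds so that each fold preserves
    the overall class proportions (stratified k-fold)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    for lab, indices in by_class.items():
        rng.shuffle(indices)
        # Deal this class's samples round-robin across the folds.
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

def cross_validation_splits(labels, k=10):
    """Yield (train, test) index pairs: each fold serves once as the
    held-out 10%, with the remaining 90% used for training."""
    folds = stratified_kfold(labels, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train_idx, test_idx
```

With ten folds, each test split holds roughly 10% of the samples with the same class balance as the full dataset, matching the evaluation protocol reported above.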
FUTURE WORK
A potential direction for future work in this research will be to explore whether the results
achieved so far can be used for automatically extracting process knowledge such as activity
durations and precedence logic for the purpose of ubiquitously updating and maintaining
simulation models corresponding to field operations. In addition, another branch of future work
rooted in the current research is the automated identification of unsafe worker postures in
physically demanding construction activities. Work-related musculoskeletal disorders (WMSDs),
including back, knee, and shoulder injuries, are among the most common injuries that can be
prevented or reduced by complying with Occupational Safety and Health Administration (OSHA)
or National Institute for Occupational Safety and Health (NIOSH) standards and rules (NIOSH
2015; OSHA 1990).
ACKNOWLEDGMENTS
The authors would like to acknowledge the help and support of Hubbard Construction for
providing access to active construction facilities for equipment data collection experiments. Any
opinions, findings, conclusions, and recommendations expressed in this paper are those of the
authors and do not necessarily reflect the views of Hubbard Construction. Also, the authors are
grateful to the construction engineering research laboratory students who assisted in the data
collection process.
REFERENCES
Ahn, C. R., S. Lee, and F. Peña-Mora. 2013. "The Application of Low-Cost Accelerometers for
Measuring the Operational Efficiency of a Construction Equipment Fleet." Journal of
Computing in Civil Engineering.
Akhavian, R., and A. H. Behzadan. 2012. "An Integrated Data Collection and Analysis
Framework for Remote Monitoring and Planning of Construction Operations." Advanced
Engineering Informatics 26 (4):749–761.
Ahmad Alhasan
Department of Civil, Construction and Environmental Engineering
Iowa State University
394 Town Engineering Building
Ames, IA 50011-3232
Email: [email protected]
ABSTRACT
Road roughness is a key parameter for controlling pavement construction processes and for
assessing ride quality of both paved and unpaved roads. This paper describes algorithms used
in processing three-dimensional (3D) stationary terrestrial laser scanning (STLS) point clouds to
obtain surface maps of pointwise indices that characterize pavement roughness. The backbone
of the analysis is a quarter-car model simulation over a spatial 3D mesh grid representing the
pavement surface. Two case studies are presented, and results show high spatial variability in
the roughness indices both longitudinally and transversely (i.e., different wheel path positions). It
is proposed that road roughness characterization using a spatial framework provides more
details on the severity and location of roughness features compared to the one-dimensional
methods. This paper describes approaches that provide an algorithmic framework for others
collecting similar STLS 3D spatial data to be used in advanced road roughness characterization.
INTRODUCTION
Road surface roughness increases vehicle operation and travel delay costs (Gao and Zhang
2013; Ouyang and Madanat 2004); reduces vehicle durability (Bogsjö and Rychlik 2009; Oijer
and Edlund 2004); and reduces ride quality and structural performance (Al-Omari and Darter
1994). Structural performance diminishes faster on rough roads because rough features
increase dynamic stresses that accelerate structural deterioration (Lin et al. 2003). Accurate
evaluation of pavement roughness levels and modes is a key factor in optimizing maintenance
decisions (Chootinan et al. 2006; Kilpeläinen et al. 2011; Lamptey et al. 2008) and a leading
indicator in construction quality assurance/quality control (QC/QA).
In 1986, the International Roughness Index (IRI) was introduced as a time stable pavement
roughness measurement (Sayers et al. 1986; Sayers et al. 1986). Since then IRI has been
widely used because it empirically correlates with ride quality and vehicle operating costs (Gao
and Zhang 2013; Ouyang and Madanat 2004; Tsunokawa and Schofer 1994). IRI is calculated
by mathematically simulating the quarter-car dynamics and accumulating the quarter-car
suspension response induced by variations in a vertical profile. The current test method for
measuring longitudinal profiles using an inertial profiler (ASTM E950 / E950M-09) is best suited
to collecting data along one, two, or at most a few profiles; however, it is difficult to synchronize
these profiles to examine the transverse variability in elevation at a specific point. Karamihas et
al. (1999) investigated the variability in IRI values for different profiles across the road and
reported that two profiles across the lane are not representative of the entire lane and that
drivers typically wander laterally within a range of 50 cm (20 in.).
Another limitation of reporting summary indices over a fixed analysis interval is the inability to
detect localized features; Swan and Karamihas (2003) proposed reporting IRI continuously by
applying a moving average to the suspension response. Studies have shown that local features
affect rider comfort and pavement stresses and cause most vehicle fatigue damage (Bogsjö and
Rychlik 2009; Kuo et al. 2011; Oijer and Edlund 2004; Steinwolf et al. 2002).
Unlike paved roads, unpaved roads lack a common set of evaluation criteria; many local
agencies use visual inspection to estimate an IRI value (Archondo-Callao 1999; Namur
and de Solminihac 2009; Walker et al. 2002). Some agencies combine visual inspection with
direct measurement of defects (e.g., pothole depth, corrugation spacing) (Soria and Fontenele
2003; Woll et al. 2008). Several studies have pointed out the importance of precise assessment
of unpaved road conditions using indirect data acquisition methods such as unmanned aerial
vehicles (UAV), ground penetrating radar (GPR), and accelerometers to help transportation
agencies decide whether to maintain or upgrade these roads (Berthelot et al. 2008; Brown et al.
2003; Zhang 2009; Zhang and Elaksher 2012).
Recent developments in laser scanning techniques and light detection and ranging (LIDAR)
sensing have motivated researchers and practitioners to adopt these technologies because of
the accuracy and richness of the resulting measurements. Recent studies (Fu et al. 2013; Tsai et al. 2010; Zalama
et al. 2011) have demonstrated the effectiveness of laser scanning and LIDAR in identifying
geometrical features of interest (e.g., cracks, bumps). Also, recent studies have investigated the
applicability of using stationary three dimensional (3D) laser scanning techniques in obtaining
IRI by selectively extracting track profiles (Chang and Chang 2006) or by analyzing the surface
to develop spatial roughness maps (Alhasan et al. 2015).
This study introduces an overview of the stationary laser scanning approach and the practical
needs for its application in road roughness assessment. It also discusses a procedure and
associated algorithms that can be used to process 3D point clouds obtained for paved and
unpaved sections in order to obtain spatial surface maps of rectified slopes (RS) and IRI values
across road sections. A brief review of frequency-based analysis (i.e., the fast Fourier
transform, FFT, and the continuous wavelet transform) is introduced as well; frequency-based
approaches can reveal the sources of road roughness. The backbone of the analysis
approaches described herein is a quarter-car model.
After acquiring the data, the scans should be registered in specialized software; such packages
support many operations, including registration and the fitting of geometries. Registration
of clouds is done by identifying common targets (spheres or flat targets as shown in Figure 2)
appearing in the full scans of each consecutive station that share a common spatial domain with
other stations. These targets are used as benchmarks to geospatially reference scans by
matching the common target locations in each scan, and thus stitching the scans to produce a
full 3D cloud. Figure 3 shows examples of the registered point clouds. The variation in color
indicates the material reflectivity.
After registration, the point clouds are cleaned of unnecessary data points, where the sections
of interest should be separated from areas beyond the edges of the sections and noise from any
passing vehicles that might appear in the scans. The final data points left after cleaning can
then be used in the subsequent data processing.
DATA PROCESSING
Visioning Algorithms
To use the road surface in simulations, the developed algorithms require a uniformly spaced
grid in the longitudinal direction. To achieve this, a mesh grid is formed with grid elements of
predefined x and y edge dimensions. The elevation at each grid center is calculated as the
average of all cloud points falling in that grid region. All points are rotated and translated to a
local coordinate system corresponding to the longitudinal and transverse axes (Zalama et al. 2011).
Transformation for processing along horizontal curves can be achieved by constructing a
curvilinear local coordinate system; however, for simple geometries without curves the point
cloud is rotated globally, where the x axis corresponds to the longitudinal direction and the y
axis to the transverse direction. Vertical slopes are corrected by subtracting the z elevation
along a quadratic fit from the z coordinate of the corresponding transformed points (Alhasan et
al. 2015). Figure 4 shows a pavement section point cloud data after processing in the visioning
algorithm.
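The gridding rule above can be sketched in a few lines, assuming the cloud has already been rotated and translated into local (x, y, z) coordinates. The cell sizes, function name, and data below are illustrative assumptions.

```python
from collections import defaultdict

def grid_elevations(points, dx=0.25, dy=0.25):
    """Average the z elevation of all cloud points falling inside each
    (dx x dy) grid cell; returns {(col, row): mean elevation}."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, z in points:
        key = (int(x // dx), int(y // dy))   # cell index in the local frame
        acc = sums[key]
        acc[0] += z
        acc[1] += 1
    return {key: s / n for key, (s, n) in sums.items()}
```

Each cell's value is exactly the average elevation of the points it contains, which is the grid-center elevation used by the simulation.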
The dynamics of the system can be described by four first-order differential equations presented
in matrix form (Sayers and Karamihas 1996) (Equations 1–4):

$\dot{X} = AX + B\,h_{ps}$  (1)

where

$X = [\,z_s \;\; \dot{z}_s \;\; z_u \;\; \dot{z}_u\,]^T$  (2)

$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -k_2 & -c & k_2 & c \\ 0 & 0 & 0 & 1 \\ k_2/\mu & c/\mu & -(k_1+k_2)/\mu & -c/\mu \end{bmatrix}$  (3)

$B = [\,0 \;\; 0 \;\; 0 \;\; k_1/\mu\,]^T$  (4)

where $h_{ps}$ is the elevation of the profile after applying the moving average smoother; $z_s$ and
$z_u$ are the elevations of the sprung and unsprung masses; $\dot{z}_s$ and $\dot{z}_u$ are their elevation time
derivatives; $k_1$ is the tire spring coefficient divided by the sprung mass; $k_2$ is the suspension
spring coefficient divided by the sprung mass; $c$ is the suspension damping coefficient divided
by the sprung mass; and $\mu$ is the ratio of the unsprung mass to the sprung mass.
$X_i = S\,X_{i-1} + P\,h_{ps,i}$  (5)

where

$S = e^{A\,\Delta x / v}, \qquad P = A^{-1}(S - I)B$  (6)

$X$, $A$, $h_{ps}$, and $B$ are predefined in Equations 1–4; $\Delta x$ is the spacing between profile points; $v$ is
the assumed speed of the model, and $I$ is the identity matrix. The finite difference algorithm is
implemented in many commercial packages and has proven efficient for analysing long profiles.
However, the algorithm suffers from long-memory effects: the rectified slope at a point is
affected by the elevations of previous points up to several meters back, reaching 10 meters in
some cases, so the starting point affects the results when analysing short profiles (Swan and
Karamihas 2003). The critical length of the profile depends on several factors, including the
smoothness of the profile to be analysed and the sampling frequency of the data acquisition
system.
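Equations 5 and 6 lend themselves to a compact sketch. In the snippet below, the Golden Car coefficients (k1 = 653, k2 = 63.3, c = 6.0, mu = 0.15, per Sayers) are assumed, the matrix exponential is approximated by scaling-and-squaring of a truncated Taylor series, and the 0.25 m spacing, 80 km/h speed, and simple initial state are illustrative choices; the 250 mm moving-average smoothing of the profile is omitted.

```python
import numpy as np

# Golden Car parameters (per Sayers), normalized by the sprung mass.
K1, K2, C, MU = 653.0, 63.3, 6.0, 0.15

def expm(M, terms=20):
    """Matrix exponential via scaling-and-squaring of a Taylor series."""
    norm = np.max(np.abs(M))
    k = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    A2 = M / (2.0 ** k)
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ A2 / n
        E = E + term
    for _ in range(k):          # undo the scaling by repeated squaring
        E = E @ E
    return E

def iri(profile, dx=0.25, v=80 / 3.6):
    """Quarter-car simulation (Equations 5-6) over an elevation profile
    (m) sampled every dx metres; returns (IRI, rectified slope list)."""
    A = np.array([[0.0, 1.0, 0.0, 0.0],
                  [-K2, -C, K2, C],
                  [0.0, 0.0, 0.0, 1.0],
                  [K2 / MU, C / MU, -(K1 + K2) / MU, -C / MU]])
    B = np.array([0.0, 0.0, 0.0, K1 / MU])
    S = expm(A * dx / v)                        # Equation 6
    P = np.linalg.inv(A) @ (S - np.eye(4)) @ B
    X = np.array([profile[0], 0.0, profile[0], 0.0])  # simplistic start state
    rs = []
    for h in profile[1:]:
        X = S @ X + P * h                       # Equation 5
        rs.append(abs(X[1] - X[3]) / v)         # rectified slope at this point
    return float(np.mean(rs)), rs
```

As a sanity check, a perfectly flat profile yields a rectified slope of zero at every point and hence an IRI of zero.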
The wavelet transform projects the profile onto a wavelet function at different scales (layers of
detail), yielding wavelet coefficients that describe the correlation between the signal and the
wavelet function. This decomposition allows examining the location of features within certain
frequency bands with known effects on vehicle response, and thus provides a valuable tool for
locating localized features (Alhasan et al. 2015).
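As a toy illustration of this idea, the sketch below correlates a profile with a real-valued Morlet-like wavelet at a single scale using plain numpy convolution. The wavelet shape, scales, and synthetic profile are illustrative assumptions, not the procedure of Alhasan et al. (2015).

```python
import numpy as np

def cwt_row(signal, scale, omega0=5.0):
    """Correlate the signal with one scaled, real-valued Morlet-like
    wavelet, giving one row (one 'layer of detail') of the transform."""
    half = int(4 * scale)                        # truncate the Gaussian envelope
    t = np.arange(-half, half + 1) / scale
    wavelet = np.exp(-t ** 2 / 2) * np.cos(omega0 * t) / np.sqrt(scale)
    return np.convolve(signal, wavelet, mode="same")

# Profile sampled every 0.05 m containing a 2 m wavelength corrugation:
x = np.arange(0, 50, 0.05)
profile = 0.005 * np.sin(2 * np.pi * x / 2.0)

# A wavelet whose oscillation period matches the corrugation (about
# 40 samples, i.e. scale ~ omega0 * 40 / (2*pi) ~ 32) responds strongly,
# while a much smaller scale barely responds at all.
matched = np.abs(cwt_row(profile, 32.0)).mean()
mismatched = np.abs(cwt_row(profile, 5.0)).mean()
```

Scanning many scales and keeping the coefficient locations is what turns this into a map of where each frequency band contributes to roughness.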
CASE STUDIES
Road roughness was evaluated for two road sections: a 55 m long rural unpaved road and a
58.8 m long HMA overlay over a jointed plain concrete pavement (JPCP). Figures 6a and 6b
show the rectified slope maps for the unpaved road and the paved road, respectively.
For the unpaved road, the left lane extends between stations 0 and 3 in the transverse distance,
and the right lane extends between stations 0 and -3. This map reveals considerable detail: for
instance, the central region is unsystematically rough compared to the rest of the scan area,
and a localized rough region appears in the left lane between stations 50 and 55, corresponding
to a loose pile of aggregate. The paved road surface map includes an approximately 3 m wide
lane that extends between stations 0 and 4. The map shows the severity and location of the
reflective cracks in the transverse direction.
The average of the rectified slope values along each profile is defined as its IRI, since the map
was generated by simulating a quarter-car model moving at a speed of 80 km/h. Figure 7 shows
the IRI values versus width for both roads; the IRI values are highly variable. Because of this
high variability, it is proposed to base conclusions regarding surface roughness on the
IRI-versus-width plots, and if a single summary index is needed, the median is a more
appropriate way to report overall IRI than the average, owing to its robustness to outliers.
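The robustness argument can be seen with a toy standard-library example; the IRI values below are hypothetical.

```python
from statistics import mean, median

# Hypothetical IRI values (m/km) across nine wheel-path positions;
# one profile crosses a loose aggregate pile and is an outlier.
iri_by_position = [2.1, 2.3, 2.2, 2.4, 2.2, 2.3, 2.1, 2.2, 9.8]

print(round(mean(iri_by_position), 2))    # → 3.07 (pulled up by the outlier)
print(round(median(iri_by_position), 2))  # → 2.2 (robust summary)
```

A single rough wheel path shifts the mean by nearly 40% while leaving the median at the typical value, which is why the median is the better single summary here.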
To investigate the reason for the low IRI values of the unpaved section, two profiles were
transformed to the frequency domain: one corresponding to the maximum IRI value and one to
the minimum. Each profile was decomposed into its constituent spectrum using the FFT
(Figure 8). Presenting the results as height amplitude versus spatial frequency clearly separates
the different amplitude and frequency components. The threshold for "smooth" was set at an
amplitude of less than 0.4 mm. "Unsystematically rough" is defined as amplitude greater than
0.4 mm but variable over a range of spatial frequencies. "Corrugation" is defined as amplitude
with a central peak greater than 0.7 mm at a spatial frequency of 1.5 to 2.5 (1/m). Examining
the results this way distinguishes the sources of roughness: the highest amplitudes are due to
corrugations, but the quarter-car filter attenuates these frequencies, which results in
substantially lower IRI values.
Figure 8. FFT for road profiles in (a) smooth surface with corrugations,
(b) unsystematically rough surface with corrugations.
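The amplitude-versus-spatial-frequency decomposition behind Figure 8 can be sketched with numpy's FFT. The synthetic profile below (a 2 mm corrugation at 2 cycles/m plus mild random roughness) and the 0.05 m sample spacing are illustrative assumptions; only the corrugation band and the 0.7 mm threshold come from the text.

```python
import numpy as np

dx = 0.05                                  # assumed sample spacing (m)
x = np.arange(0, 55, dx)
rng = np.random.default_rng(0)
# Synthetic profile: 2 mm corrugation at 2 cycles/m plus mild noise.
profile = (0.002 * np.sin(2 * np.pi * 2.0 * x)
           + 0.0002 * rng.standard_normal(x.size))

amp = np.abs(np.fft.rfft(profile)) * 2 / x.size   # height amplitude (m)
freq = np.fft.rfftfreq(x.size, d=dx)              # spatial frequency (1/m)

peak_freq = freq[np.argmax(amp[1:]) + 1]          # skip the DC bin
# The peak falls in the 1.5-2.5 (1/m) corrugation band with an
# amplitude above the 0.7 mm corrugation threshold from the text.
```

Plotting `amp` against `freq` reproduces the kind of spectrum shown in Figure 8, with the corrugation appearing as a narrow high-amplitude peak.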
REFERENCES
ProVAL: Profile Viewing and Analysis Software, version 3.4 (2013). The Transtec Group, Inc.,
available from www.RoadProfile.com.
Al-Omari, B., and Darter, M. I. (1994). "Relationships between international roughness index
and present serviceability rating." Transportation Research Record (1435).
Alhasan, A., White, D. J., and De Brabanter, K. (2015). "Continuous wavelet analysis of
pavement profiles." Automation in Construction, Under review.
Alhasan, A., White, D. J., and De Brabanter, K. (2015). "Quantifying Unpaved Road Roughness
from Terrestrial Laser Scanning." TRB 94th Annual Meeting, Washington, D.C.
Alhasan, A., White, D. J., and De Brabanter, K. (2015). "Spatial pavement roughness from
stationary laser scanning." International Journal of Pavement Engineering, Under review.
Archondo-Callao, R. (1999). "Unpaved roads roughness estimation by subjective evaluation."
The World Bank.
ASTM E950 / E950M-09. "Standard Test Method for Measuring the Longitudinal Profile of
Traveled Surfaces with an Accelerometer Established Inertial Profiling Reference." ASTM
International, West Conshohocken, PA, 2008, www.astm.org.
ASTM E1926-08. "Standard Practice for Computing International Roughness Index of Roads
from Longitudinal Profile Measurements." ASTM International, West Conshohocken, PA,
2008, www.astm.org.
Berthelot, C. F., Podborochynski, D., Stuber, E., Prang, C., and Marjerison, B. (2008).
"Saskatchewan Case Studies of Network and Project Level Applications of a Structural
Asset Management System." Proc., 7th International Conference on Managing
Pavement Assets. TRB Committee AFD10 on Pavement Management Systems,
Transportation Research Board, Washington, DC Retrieved October, 2011.
ABSTRACT
Autonomous mobile robots equipped with arms have the potential to be used for automated
construction of structures in various sizes and shapes, such as houses or other infrastructures.
Existing construction processes, like many other additive manufacturing processes, are mostly
based on precise positioning, which is achieved by machines that have a fixed mechanical link
with the construction and therefore rely on absolute positioning. Mobile robots, by nature, do
not have a fixed reference point, and their positioning systems are not as accurate as fixed-base
systems. Therefore, mobile robots have to employ new technologies and/or methods to
implement precise construction processes.
In contrast to the majority of prior work on autonomous construction that has relied only on
external tracking systems (e.g., GPS) or exclusively on short-range relative localization (e.g.,
stigmergy), this paper explores localization methods based on a combination of long-range self-
positioning and short-range relative localization for robots to construct precise, separated
artifacts in particular situations, such as in outer space or in indoor environments, where
external support is not an option.
Achieving both precision and autonomy in construction tasks requires understanding the
environment and physically interacting with it. Consequently, we must evaluate the robot’s key
capabilities of navigation and manipulation for performing the construction in order to analyze
the impact of these capabilities on a predefined construction. In this paper, we focus on the
precision of autonomous construction of separated artifacts. This domain motivates us to
combine two methods used for the construction: 1) a self-positioning system and 2) a short-
distance relative localization. We evaluate our approach on a miniature mobile robot that
autonomously maps an environment using a simultaneous localization and mapping (SLAM)
algorithm; the robot’s objective is then to manipulate blocks to build desired artifacts based on a
plan given by a human. Our results illuminate practical issues for future applications that also
need to integrate complex tasks under mobile robot constraints.
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
I. INTRODUCTION
Construction automation is a field focused on applying automated processes to reduce the cost
of construction and/or to increase operational efficiency. Developments in robotics have
recently led to the use of various robotic platforms to achieve construction automation
objectives, although fully automated construction is still a dream of civil engineers. Robotic
developments have shown that robots could potentially perform construction tasks where
human presence is impossible, undesirable, or extremely expensive: for instance, construction
in hazardous areas after natural or man-made disasters such as earthquakes and nuclear
accidents; construction under difficult physical conditions, such as undersea or outer space
locations; and construction in areas that are not readily accessible to humans or that require an
initial structure to prepare the environment for human arrival. Robots can build these structures
autonomously, without explicit human intervention, or with some level of planning interaction
conducted with a human supervisor.
Generally, a robot performing autonomous construction has to adapt itself to the sensed
environment, make decisions regarding the execution of its task, and replan when its task is not
executable. Mobile robots represent one type of robotic system that could be used for
construction automation. Applying mobile robots to construction opens new approaches in this
field. For instance, building large structures without being confined by dimensions is a challenge
for current technologies; for example, we might need huge and expensive fixed-based
fabricating systems (e.g., 3D printers) to build giant structures. Capabilities of mobile robots,
however, allow them to create objects without fixed-base system constraints (e.g., size of the
printer's frame constraint). Similar to social insects such as ants, a group of mobile robots can
work cooperatively, as a collective system, to efficiently build large-scale structures.
In contrast to these advantages, mobile robots, by nature, do not have a fixed reference point,
and their positioning systems are not as accurate as fixed-base systems. Existing construction
processes, like many other additive manufacturing processes, are mostly based on precise
positioning, which is achieved by machines that have a fixed mechanical link with the
construction and rely on absolute positioning. Therefore, mobile robots have to compensate for
this weakness with new technologies and/or methods to supply precision for construction
processes. Although equipping robots with external tracking systems (e.g., GPS, camera)
provides an accurate positioning system, we aim to implement and study localization methods
based on a self-positioning system to autonomously handle construction tasks, especially
where external tracking systems are difficult to access or expensive (e.g., undersea). On the
other hand, a self-positioning system alone is not sufficiently accurate to handle construction
processes; therefore, we combine it with short-range relative localization to provide the required
precision for the construction of structures spatially separated from one another, which we refer
to here as separated artifacts.
In this paper, our goal is to develop a construction system by which robots are able to build
separated artifacts. We evaluate our approach with a miniature autonomous mobile robot and
simple blocks in unknown environments. The robot’s objective is to build the artifacts using both
the simultaneous localization and mapping algorithms (SLAM) and stigmergy based on a
human-prescribed blueprint. In fact, it employs SLAM using the LIDAR scanner to autonomously
map an environment and determine the current robot position. The robot's end effector is
equipped with multiple IR sensors that allow it to sense previously placed blocks in order to place
subsequent blocks; this approach is commonly referred to as stigmergy [1].
Another general external system is GPS, used by a robotic excavator with centimeter resolution
to determine position accurately and then to control the motion of the robots [5]. In [6], the
ROCCO robot was developed to assemble heavy blocks in industrial buildings with
standardized layouts. It was equipped with digital angular encoders and an external global
position sensor (telemeter) to correct errors. In [6], a method was demonstrated in simulation by
which robots are able to build 2D structures of desired shapes from blocks. A robot acts as a
stationary beacon to help other robots find their positions. In [7], a robot placed blocks of
alternating color along a straight line starting with a pre-placed seed block located underneath a
beacon. Although using external system improves the positioning system capability, many
additional localization devices are required, which might be impossible or very expensive to
provide, for example, in outer space or undersea construction. In contrast to these works, the
robot is completely autonomous in our work and does not rely on any motion-capture systems
or external localization systems.
Other robots have applied short-range relative localization for construction. Werfel et al. [8]
present 3D collective construction in which large numbers of autonomous robots build
large-scale structures. They employed a ground mobile robot inspired by the activity of termites.
Robots climb on the structure to drop passive solid blocks on top of it, using just six active
infrared (IR) sensors to recognize white stripes on the blocks and then determine their path and
final destination. Novikov et al. [9] built 3D structures by depositing amorphous material with
mobile printing heads. This method allows an object to be printed independently of its size.
Stroupe et al. [10] present construction by two platform robots, SRR and SRR2K, in an outdoor
environment. Each rover is holonomic and is equipped with a forward-facing stereo pair of
cameras and a four degree-of-freedom arm. A triple-axis force-torque sensor on the gripper helps the
rover maintain coordination for transporting and placing rods. This model provides high-
precision manipulator placement by comparing the observed position of beam markers on the
end effector with the obtained kinematics position of the end effector.
Magnenat et al. [14] used a miniature autonomous robot with a magnetic manipulator to grasp
ferromagnetic self-aligning blocks. This robot also has a LIDAR and camera on top. It used
odometry and laser data to perform SLAM and employed the front camera and proximity
sensors to provide the data required for dropping blocks. The goal of this research was to use
ten blocks to build a simple tower. We advance this research by studying the precision of using
both short-range relative localization and a self-positioning system to build separated artifacts.
The arena is a 200 cm × 100 cm rectangle with a flat surface that contains a few obstacles.
Note that the environment is unknown to the robot, and SLAM is used to inform the robot of its
position and to map the environment for path planning. The artifacts, as illustrated later,
are composed of simple polystyrene blocks with an attached stripe of ferromagnetic metal on
the lower part of the body (Figure 1). Each block is 6 cm in length, 6 cm in width, and 18 cm in
height, and it weighs approximately 20 g. The size and weight of the block are chosen to satisfy
the requirements of the robot’s gripper and the LIDAR.
After grasping the block, the robot moves toward the destination point. The path-planning
algorithm determines the global path to the destination. It also sets the local path planning
during its movements to avoid collision with dynamic obstacles and to correct the path based on
the robot's improved position. The first block of the artifact will be dropped after the robot
fine-tunes its position using the accurate movement behavior.1 The robot returns and takes a new
block from the repository. Now, the robot is ready to drop the second block of the artifact. The
robot drops the second block beside the first block using stigmergy. In this section, we
explained the construction scenario and assumptions. In the next sections, we first describe the
robot hardware and then provide the details of the control system, including low-level behaviors
and control architecture.
1
See section 3-3-2
B. Robot platform
For the experiment, we used a miniature and modular robot called marXbot [15] (Figure 2). The
robot is 17 cm in diameter and 18 cm in height. It consists of four modules as follows:
Base: The non-holonomic base has 2 degrees of freedom (DOF), a 38Wh lithium polymer
battery, and 24 proximity sensors.
Gripper: The 3 DOF magnetic manipulator consists of a magnetic switchable device to grasp
ferromagnetic objects. This module has 20 proximity sensors for the alignment usage.
Computer board: This module includes the main computer based on a 533MHz Freescale
i.MX31 with 128MB of RAM and running LINUX.
LIDAR: The 360° laser distance scanner (Neato LIDAR < $100) perceives walls and obstacles.
C. Control system
I) Control architecture
The control architecture, shown in Figure 3, consists of several layered modules. At the top, the
builder planner serves to execute an overall construction of separated artifacts by generating a
sequence of high-level sub-goals. These sub-goals either take the form of target poses for robot
movement, which are delegated to the navigation block, or block manipulation sub-goals (such
as pickup and place), which are delegated to the middle planner.
The middle planner, implemented using state-of-the-art AI planning technologies [17], renders
the robot fully autonomous in its manipulation of blocks in the environment. For each
manipulation goal (pickup, drop, place-adjacent-to), the robot accesses a (pre-computed)
conditional plan that iteratively selects among low-level task executions (e.g., approach-to-
within-manipulation-distance, align-gripper-angle, grasp-block). Each of these low-level tasks is
implemented as a finite state controller that, at 5Hz, senses using the gripper infrared and
actuates the treel 4 and gripper motors. Due to sensory inaccuracies and environmental
imperfections, the low-level tasks are not deterministic in their effects. For instance, a brief
mismeasurement of infrared distances caused by fluctuations in ambient lighting could cause the
align-gripper-angle controller to over-rotate the gripper such that the robot loses sight of the
block that it is aligning with. When such unintended effects occur, the robot's conditional plan
gracefully recovers by selecting the next appropriate task. In essence, the conditional plan
composes a complex and dynamic sequence of tasks that is theoretically guaranteed [18] to
eventually bring the robot to its manipulation goal.
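The recovery behaviour of such a conditional plan can be sketched as a small stochastic state machine: each low-level task may fail, and on failure the plan falls back to the task that re-establishes the lost condition instead of aborting. The task names, transition structure, and success probabilities below are invented for illustration and are not the planner of [17].

```python
import random

def run_conditional_plan(seed=1, max_steps=100):
    """Toy pickup plan: tasks can fail (e.g., an over-rotated gripper
    loses sight of the block); on failure the plan selects the task
    that recovers the lost condition rather than giving up."""
    rng = random.Random(seed)
    # task -> (success probability, next task on success, task on failure)
    plan = {
        "approach":      (0.9, "align_gripper", "approach"),
        "align_gripper": (0.7, "grasp_block",   "approach"),  # lost sight: re-approach
        "grasp_block":   (0.8, "done",          "align_gripper"),
    }
    task, trace = "approach", []
    for _ in range(max_steps):
        if task == "done":
            return trace          # list of (task, succeeded) pairs
        p, on_ok, on_fail = plan[task]
        ok = rng.random() < p
        trace.append((task, ok))
        task = on_ok if ok else on_fail
    raise RuntimeError("step budget exhausted")
```

Because every task has a positive success probability and every failure edge leads back toward the goal, the controller reaches "done" with probability approaching one as the step budget grows, which is the intuition behind the theoretical guarantee cited above.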
Figure 3. Control architecture in which green and blue boxes represent hardware and
software layers, respectively.
2
Move_base is a 2D navigation stack that receives information from sensors and a goal pose and then directs the
mobile base by determining safe speed and reliable paths.
3
Hector_mapping is a SLAM algorithm using LIDAR systems such as the Hokuyo UTM-30LX. The system has
been used on unmanned ground robots and unmanned surface vehicles.
4
Treel refers to a combination of a wheel and a track.
Traveling: This behavior consists of raising the gripper so that the attached block and
gripper do not interfere with the robot's movements and SLAM.
Accurate movement: This behavior operates in two modes to move the robot precisely. When
the robot approaches a destination point, this behavior moves the robot into the precise position.
Drop/Place-Adjacent-To: This behavior consists of dropping a block either in the first place of
the artifact or directly adjacent to existent blocks. To build an artifact, the robot has to drop its
first block. Thus, it simply lowers the gripper after finding the precise position and then
disengages its magnetic switchable device. It then lifts its manipulator slightly and moves back
for a short distance. For the remaining blocks of the artifact, the robot goes toward the artifact
(1) and scans for it using the magnetic gripper’s IR sensors. The robot computes the distance to
the artifact and moves accordingly to tune its position (2). Then it rotates 90 degrees while it
rotates the gripper in the reverse direction with the same angular speed. In this position (3), the
robot aligns itself laterally and then finds the left edge of the placed block to drop the new one
exactly beside it (4). The robot performs a 90-degree leftward rotation while the gripper rotates
in the opposite direction (5). It then moves forward, lowers, and rotates the gripper by a few
degrees at a time to avoid collision with the other blocks (6). It then lowers its gripper slightly
and moves back a short distance. Finally, the robot tilts the gripper and moves forward to push
and line up the block (7).
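The seven numbered sub-steps above amount to a fixed behavior sequence. A minimal sketch, with an illustrative LoggingRobot stand-in rather than the authors' actual control API:

```python
# Hypothetical sketch of the seven-step Place-Adjacent-To sequence described
# above, modeled as an ordered behavior list. LoggingRobot simply records
# each action; the step names are illustrative, not the authors' API.

PLACE_ADJACENT_STEPS = [
    "approach_artifact",      # (1) go toward the artifact
    "ir_scan_and_tune",       # (2) measure distance with gripper IR, adjust
    "counter_rotate_90",      # (3) body +90 deg, gripper -90 deg
    "find_left_edge",         # (4) lateral alignment against the placed block
    "counter_rotate_back",    # (5) rotate back, gripper in opposite direction
    "creep_and_lower",        # (6) step forward/rotate gripper to avoid hits
    "tilt_and_push",          # (7) tilt gripper, push to line up the block
]

class LoggingRobot:
    """Records each behavior it is asked to execute."""
    def __init__(self):
        self.log = []

    def execute(self, step):
        self.log.append(step)

def place_adjacent_to(robot):
    for step in PLACE_ADJACENT_STEPS:
        robot.execute(step)

robot = LoggingRobot()
place_adjacent_to(robot)
print(robot.log[0], "->", robot.log[-1])
```

The fixed ordering reflects the text: alignment (steps 3–5) always happens between the IR scan and the final push.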
For stigmergy, the robot uses infrared sensors to align itself with respect to the blocks that are
already part of the artifact. Because the artifacts are not fixed to the ground, the dropping
operation may cause a displacement of the existing blocks. As illustrated in Figure 10, the
average errors of stigmergy for blocks B4 and B8 (first block of the second and third artifacts)
are different from the errors measured after using only SLAM (see Figure 9). This shows that
the stigmergy action moved these blocks. This means, for instance, that when the robot pushed
the block to align it, it also pushed previously-placed blocks because of the errors in stigmergy
positioning. Indeed, if the robot could apply a perfect stigmergy, we would expect to see the
same precision for other blocks of the artifact as we saw for the first block.
Figure 9. The left graph shows the translational error and the right graph shows the
rotational error when the robot drops the first, fourth, and eighth blocks using just SLAM.
Figure 10. The left graph shows the translational error, and the right graph shows the
rotational error when the robot is supposed to build the artifact with SLAM and
stigmergy.
Finally, we measured the surface of each ideal artifact that is occupied by blocks placed by the
robot. Ideally, the overlap percentage between the blocks and the given blueprint would be
100%. Figure 11 shows the overlap percentage for two construction types: single separated
blocks and separated multiple-block artifacts. The average overlap percentage is 57.88% for
single-block construction but 73.53% for artifact construction. This increase in performance
does not mean that multiple-block artifacts are placed more precisely, because adjacent blocks
can compensate for positioning error in the global coverage of the artifact surface.
Figure 11. The left graph depicts the overlap percentage for dropping blocks using SLAM,
and the right graph depicts the overlap percentage for building the artifact using SLAM and
stigmergy.
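As a rough illustration of this overlap metric, assuming axis-aligned rectangular footprints (the real blocks can also be rotated, which this sketch ignores):

```python
# Hypothetical illustration of the overlap metric: the fraction of a
# blueprint cell covered by a placed block, for axis-aligned rectangles
# given as (x_min, y_min, x_max, y_max) in millimeters.

def rect_area(r):
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def intersection(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]),
            min(a[2], b[2]), min(a[3], b[3]))

def overlap_percent(blueprint, block):
    """Percentage of the blueprint cell covered by the block."""
    return 100.0 * rect_area(intersection(blueprint, block)) / rect_area(blueprint)

# A 60 x 60 mm blueprint cell with a block dropped 15 mm off in x:
cell = (0, 0, 60, 60)
block = (15, 0, 75, 60)
print(round(overlap_percent(cell, block), 2))  # 75.0
```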
This section provided an analysis of the precision achieved by a mobile robot in building
separated artifacts using SLAM and stigmergy. The SLAM algorithm was used as a localization
and mapping method by a miniature mobile platform in an unknown environment. In our
experiment, we measured a translational error of about 21 mm (roughly one-hundredth of the
environment's diagonal) in a static environment using a miniature robot with a low-cost LIDAR.
It is difficult to evaluate how this error will scale in a real environment because SLAM accuracy
depends on many factors; environment dimensions will clearly impact precision, as larger
distances will generate larger measurement errors. The quality and quantity of landmarks also
impact the estimation of the position by the SLAM algorithm. A dynamic environment can also
cause a loss of precision, as dynamic obstacles can hide interesting landmarks. Finally, the
quality of the distance-measurement sensor directly impacts the whole system. Obviously,
there is a need to further develop the sensory system and SLAM algorithms for complex
artifacts in real and large environments. Despite a lot of progress in the past decades in the
SLAM field, high-precision applications are still challenging. For the time being, SLAM could
help to find approximate construction sites and then the robot could use other methods to follow
the construction, such as stigmergy as used in this research.
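A back-of-the-envelope check of the reported figure: a 21 mm error equal to one-hundredth of the diagonal implies a test-arena diagonal of about 2.1 m. Extrapolating proportionally to a hypothetical 50 m site diagonal is a crude assumption (real SLAM error also depends on landmarks, dynamics, and sensor quality, as noted above), but it bounds the scale of the problem:

```python
# Rough scaling estimate: the 21 mm error is stated to be one-hundredth
# of the environment diagonal, so the arena diagonal is ~2.1 m. The 50 m
# site diagonal and the linear-scaling assumption are illustrative only.

error_mm = 21.0
diag_mm = error_mm * 100.0        # test-arena diagonal, ~2.1 m
site_diag_mm = 50.0 * 1000.0      # hypothetical 50 m construction site
scaled_error_mm = error_mm * site_diag_mm / diag_mm
print(diag_mm / 1000.0, "m arena diagonal;", scaled_error_mm, "mm scaled error")
```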
In this research, we applied stigmergy based on a pure IR-sensing system, although mechanical
stigmergy may be more suitable for placing the blocks. Today, companies are designing and
manufacturing prefabricated components to increase construction speed and efficiency. New
prefabricated components could be designed and made for robotic use in automated
construction. For example, components with male–female connectors allow for automatic
assembly in a more robust way [11]. Developing construction methods based on mechanical
stigmergy or using force-sensing systems could provide a new way to place components in a
more reliable and precise way.
Autonomous construction is also a complex application in which many failures can occur. These
failures can propagate from one step to another, for instance, if the robot incorrectly grasps the block.
V. Conclusion
This paper presents an autonomous construction system for building separated artifacts with
simple blocks. We used a miniature mobile robot that autonomously mapped an environment
using a SLAM algorithm and then manipulated blocks to build desired and separated artifacts.
Our approach was based on the combination of two methods: a self-positioning system (SLAM)
to find the construction place in an unknown environment and stigmergy to build coherent
artifacts. The control system allowed the robot to perceive and pick up the block, move toward
the construction place in an unknown environment, and drop the block based on a human-
prescribed blueprint. We observed that, even in an ideal environment, positioning using SLAM is
not sufficiently precise. This task still requires improvement in sensing technology. We also
observed that stigmergy allows the creation of coherent constructions. The process analyzed in
this paper, based on mobile blocks and sensing stigmergy, could be improved by having blocks
fixed when dropped and employing mechanical stigmergy.
In future work, we will focus on developing robot hardware and improving the
SLAM algorithm. We also plan to develop stigmergy and use force-sensing systems for
mechanical stigmergy. Thanks to stigmergy and hardware development, robots could be used
to build complex artifacts such as a multi-layer wall with prefabricated components.
REFERENCES
[1] G. Theraulaz and E. Bonabeau, “Coordination in distributed building,” Science, vol. 269,
no. 5224, pp. 686–688, Aug. 1995.
[2] J. Willmann et al. “Aerial robotic construction towards a new field of architectural
research,” Int. J. Archit. Comput., vol. 10, no. 03, pp. 439–460, 2012.
[8] J. Werfel, K. Petersen, and R. Nagpal, “Distributed multi-robot algorithms for the
TERMES 3D collective construction system,” IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), 2011.
[9] P. Novikov, S. Maggs, D. Sadan, S. Jin, and C. Nan, “Robotic positioning device for
three-dimensional printing,” CoRR, vol. abs/1406.3, pp. 1–14, 2014.
[10] A. Stroupe et al. “Sustainable cooperative robotic technologies for human and robotic
outpost infrastructure construction and maintenance,” Auton. Robot., vol. 20, no. 2, pp. 113–
123, Apr. 2006.
[12] Y. Terada and S. Murata, “Automatic modular assembly system and its distributed
control,” Int. J. Robot. Res., vol. 27, no. 3–4, pp. 445–462, Mar. 2008.
[13] F. Nigl, S. Li, J. Blum, and H. Lipson, “Structure-reconfiguring robots: autonomous truss
reconfiguration and manipulation,” IEEE Robot. Autom. Mag., vol. 20, no. 3, pp. 60–71, 2013.
[15] M. Bonani et al. “The marXbot, a miniature mobile robot opening new perspectives for
the collective-robotic research,” in 2010 IEEE/RSJ International Conference on Intelligent
Robots and Systems, 2010, pp. 4187–4193.
[16] S. Magnenat and P. Rétornaz, “ASEBA: a modular architecture for event-based control
of complex robots,” IEEE/ASME Trans. Mechatronics, vol. 16, no. 2, pp. 321–329, 2011.
[17] S. Witwicki and F. Mondada, “Circumventing robots’ failures by embracing their faults: a
practical approach to planning for autonomous construction,” in The 29th AAAI Conference on
Artificial Intelligence, 2015, pp. 4298–4299.
[18] C. Muise, V. Belle, and S. A. McIlraith, “Computing contingent plans via fully observable
non-deterministic planning,” in The 28th AAAI Conference on Artificial Intelligence, 2014, pp.
2322–2329.
Jedadiah F. Burroughs
Geotechnical and Structures Laboratory
US Army Engineer Research and Development Center
3909 Halls Ferry Road
Vicksburg, MS 39180
[email protected]
Todd S. Rushing
Geotechnical and Structures Laboratory
US Army Engineer Research and Development Center
3909 Halls Ferry Road
Vicksburg, MS 39180
[email protected]
C. Phillip Rusche
Geotechnical and Structures Laboratory
US Army Engineer Research and Development Center
3909 Halls Ferry Road
Vicksburg, MS 39180
[email protected]
ABSTRACT
This study compared the use of two compressed earth block press machines and the properties
of compressed earth blocks made with natural and stabilized soils. The Vermeer BP714 Block
Press uses a hydraulically driven, two-stage compression process to produce compressed earth
blocks with consistent density and dimensions, similar in shape to concrete masonry units. The
AECT Impact 2001A uses a hydraulically driven, fully automated single stage compression
process to produce modular compressed earth blocks with simple prismatic geometries at a rate
of 240 blocks per hour. This study focused on the advantages and disadvantages of using each
press, as well as the compressive strength development of compressed earth blocks made with
selected soils and soil stabilizers. Recommendations for the suitability of each machine for
different soil conditions are given.
PROBLEM STATEMENT
It is estimated that one third of the world’s population lives in earthen structures. For many
centuries, adobe blocks have been used as the main earthen construction element. Making
adobe blocks is a time-consuming and inefficient process that can last in excess of three
weeks (Dominguez 2011). In an effort to more efficiently study this traditional construction
material, the US Army Engineer Research and Development Center has used compressed
earth blocks (CEBs) as surrogates for adobe blocks.

Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
Compressed earth blocks are earthen construction elements produced using a mechanized
hydraulic ram to compact soil into a mold. The soil is molded in a relatively dry state (generally
less than 10% moisture content) resulting in blocks that are strong and weather resistant. Since
CEBs have very consistent geometry, they can be readily stacked to form walls of structures.
Doors, windows, fixtures, and utilities can all be incorporated in similar ways to masonry
construction. Earthen structures made using CEBs are inexpensive, versatile, and energy
efficient. For these reasons, CEBs have become a popular construction technique in
disadvantaged areas. CEB machines are commercially available to produce blocks in a range of
sizes and shapes.
One example is the AECT Impact 2001A. This automated machine produces CEBs at a rate of
four blocks per minute that are 12 inches long, 6 inches wide, and between 2 and 4.5 inches
tall. Each block weighs approximately 15 to 20 lbs depending on soil type, block
thickness, moisture content, etc. Soil is loaded into a hopper situated above a single stage
compression ram. All soil particles must be finer than ½ inch to pass the screen covering the
hopper. Soil is manually loaded into the hopper by means of buckets, shovels, etc. A full hopper
will produce between eight and nine blocks. Once soil is loaded, the diesel engine is started and
block production is initiated by pressing the start button. Soil is dropped from the hopper into a
rectangular cavity that sets the length and width dimensions at 12 inches and 6 inches,
respectively. The depth of the cavity is determined by the position of a hydraulic ram and is set
by the user. Once soil has been loaded, the cavity is covered by an automatically sliding tray
and pressed by a vertical ram. The end of the compression stage is selected by the operator,
and can be either a set dimension or a maximum forming pressure. For the AECT Impact
2001A, the maximum hydraulic pressure that is applied is 3000 psi. If the maximum pressure is
reached prior to reaching the required thickness, compression is stopped. The block is then
pressed out of the mold vertically and pushed out of the machine. Because the compressive
force is limited, blocks of varying thickness can be produced from a single run of material.
Variability in moisture throughout a soil can cause changes in the amount of soil that fills the
cavity and thus influence final block dimensions. Blocks can be produced with initial soil
moisture contents from approximately 5% to 20%. In general, wetter soil produces thinner
blocks. Block production continues as long as there is adequate soil filling the hopper. The
AECT Impact 2001A and example blocks are shown in Figure 1. This trailer mounted system
can either be towed with a standard two inch ball hitch or moved easily on site by hand.
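The stop condition described above (a set dimension or the 3000 psi maximum forming pressure, whichever comes first) can be sketched as a simple control loop. The sensor and ram callbacks here are illustrative stand-ins, not the machine's actual interface:

```python
# Sketch of the single-stage stop condition: the ram advances until either
# the target block thickness is reached or hydraulic pressure hits the
# 3000 psi limit, whichever comes first.

MAX_PRESSURE_PSI = 3000.0

def press_block(target_thickness_in, read_thickness, read_pressure, step_ram):
    """Advance the ram; return (final_thickness, stop_reason)."""
    while True:
        if read_pressure() >= MAX_PRESSURE_PSI:
            return read_thickness(), "pressure_limit"
        if read_thickness() <= target_thickness_in:
            return read_thickness(), "target_thickness"
        step_ram()

# Simulated run: an easily compacted soil reaches the 3.0 in target
# before the pressure limit.
state = {"t": 4.5, "p": 500.0}
def fake_step():
    state["t"] -= 0.5    # each stroke compacts the soil further...
    state["p"] += 100.0  # ...while hydraulic pressure climbs

thickness, reason = press_block(3.0, lambda: state["t"], lambda: state["p"], fake_step)
print(thickness, reason)
```

A stiff or dry soil would instead trip the pressure limit first, yielding a thicker block, which matches the text's note that blocks of varying thickness can come from a single run of material.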
Another commercially available CEB machine is the Vermeer BP714. This semi-automated
machine produces blocks of uniform size, approximately 14 inches long, 7 inches wide, and 4
inches tall, with interlocking geometric features and through-holes at a rate of three blocks per
minute. With the through-holes, the Vermeer BP714 produces CEBs that are reminiscent of
concrete masonry units (CMUs). Each block weighs approximately 20 lbs, which varies
depending on soil type, moisture content, and additives. The through-holes align when stacked
and can be fitted with reinforcing steel or conduit and/or filled with mortar or grout for further
strengthening. Similar to the AECT Impact 2001A, soil is loaded into a screened hopper. The
hopper on the Vermeer BP714 holds enough soil to produce approximately 14 blocks. Rather
than being fully automated, the Vermeer BP714 is controlled by three control levers and
functions in a semi-automated fashion. One lever controls a vertical ram used for filling the soil
cavity and compressing to a fixed thickness. One lever controls a horizontal ram used to
maneuver the dirt tray to move soil from the hopper to the compression cavity. The third lever
Since compressed soil is susceptible to erosion by water, CEBs are usually produced using soil
that has been mixed with a stabilizer to increase durability. Soil stabilization techniques are well
known for a variety of engineering purposes, and known techniques can be applied to CEBs.
Portland cement added at about 6% to 8% by volume is a common soil stabilizer that undergoes
a hydraulic reaction that binds soil particles. Many other additives are used for soil stabilization
including polymers, petroleum byproducts, microorganisms and their secretions, enzymes,
fibers, minerals, pozzolans, and industrial waste products such as fly ash.
RESEARCH METHODOLOGY
For the purposes of this study, a clayey sand (SC) from central Louisiana was chosen for use.
Soil classification data is given in Table 1. This soil was chosen due to its sand and clay
contents. Previous studies had shown that soils with greater than 50% sand content and at least
10% clay performed better than soils with lower sand or clay contents (Dominguez 2011). Also,
the plasticity index is given as 20, indicating low-plasticity clay particles. High-plasticity clays tend to be more
highly expansive when exposed to moisture, which can negatively affect the performance of
compressed earth blocks. Also given in Table 1 is soil characterization data on three other soil
types for reference. SM Blend is very similar to what is found in most arid climates. NP
designates that the soil is non-plastic. Non-plastic soils have little to no natural cohesion, so
binders are required to produce even low quality CEBs. Buckshot clay is a high plasticity, very
expansive clay. While soils such as this tend to produce CEBs with satisfactory green
properties, the CEBs tend to lose structure and stability due to changes in humidity. Agricultural
loam is a representative soil for most agricultural fields. While materials such as this do exhibit
some natural cohesion, the high silt content leads to low strength CEBs. For stabilized soils,
Type I portland cement was chosen as the stabilizing agent. The addition of portland cement at
6-8% by volume is a common stabilizer of soils because it undergoes a hydraulic reaction to
bind soil particles together. Various forms of portland cement are also available in most areas
throughout the world, so studying its use in CEBs can have wide-reaching implications.
Soil was excavated approximately one week prior to production of CEBs. The soil was stored in
a climate controlled area to minimize moisture content changes. The soil was used at its in-situ
moisture content. As it was field excavated, the soil contained many large clumps that were
much too large to pass the screens on either machine’s hopper. In an effort to eliminate the
majority of the clumps and to better homogenize the material, the soil was tilled and placed in a
paddle mixer prior to use. The soil after being removed from the paddle mixer is shown in Figure
3. Even after tilling and mixing with the paddle mixer, the soil still had some large clumps. Many
of these large clumps were broken down by hand prior to adding the material to the CEB
machines. The paddle mixer used in this study is shown in Figure 4. For stabilized blocks,
portland cement was added at a rate of 7% by volume while the material was mixing in the
paddle mixer. The addition of dry powdered portland cement helped to remove moisture from
the soil and allow the mixing action to eliminate more clumps. Soil before and after the addition
of portland cement is shown in Figure 5. The soil after the addition of portland cement is much
more uniform than the soil without binders.
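For a sense of the quantities involved, the 7%-by-volume dose can be converted to a cement mass per mixer batch. The bulk density below is an assumed round number for the sketch, not a measured value from this study:

```python
# Illustrative batch calculation for a 7%-by-volume portland cement dose.
# The cement loose bulk density (~1500 kg/m^3) is an assumed round number.

CEMENT_BULK_DENSITY = 1500.0  # kg/m^3, assumed

def cement_for_batch(soil_volume_m3, dose_by_volume=0.07):
    """Mass of cement (kg) to add to a given loose-soil volume."""
    return soil_volume_m3 * dose_by_volume * CEMENT_BULK_DENSITY

# A 0.5 m^3 paddle-mixer load:
print(round(cement_for_batch(0.5), 1), "kg")
```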
The first part of the study focused on the production of blocks with natural soil. Approximately 35
blocks were made using each machine and stored in a climate controlled area exposed to the
ambient atmosphere. The second part of the study focused on the production of blocks with soil
stabilized with portland cement. Approximately 35 blocks were produced with stabilized soil and
wrapped in plastic and stored in a climate controlled area. These blocks were wrapped to
minimize the loss of moisture needed to react with the added portland cement.
After blocks had been allowed to dry/cure for a specified time, the full CEBs were tested using a
400-kip hydraulic press. Full blocks were tested per the recommendations of the machine
manufacturers. Due to the irregular surface profile of CEBs produced with the Vermeer BP714,
custom compression plates were used to evenly transfer the load across the cross-section. At
least 10 blocks were tested for each soil or soil-cement combination at seven, 14, and 28 days
to help increase statistical accuracy and reliability. These ages are standard testing ages for
cementitious materials and can be used to show strength development over time.
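Testing at least 10 blocks per combination supports simple batch statistics, such as the mean and sample standard deviation of compressive strength per test age. The psi values below are made-up placeholders, not the study's data:

```python
# Sketch of the batch statistics implied by testing >= 10 blocks per age.
# The strength values are illustrative placeholders only.

from statistics import mean, stdev

def summarize(strengths_psi):
    """Return (mean, sample standard deviation) for one batch of block tests."""
    return mean(strengths_psi), stdev(strengths_psi)

batch_7day = [410, 395, 430, 405, 420, 415, 400, 425, 410, 418]  # placeholder psi
avg, sd = summarize(batch_7day)
print(round(avg, 1), "psi mean,", round(sd, 1), "psi std dev")
```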
KEY FINDINGS
The production of blocks varied dramatically between machines and soils. The AECT Impact
2001A press is much more user-friendly for first-time users. After cranking the diesel engine, all
that was necessary to produce blocks was to add soil to the hopper and press the
start button. First-time users of the Vermeer BP714 have to learn the correct operation of the
hydraulic levers to produce blocks. Incorrect sequencing of the control levers can lead to poor
block consolidation or repetitive compression. As a result, initial block production is a much
slower process. Periodically, as soil continues to be compacted with the Vermeer BP714, the
compression pins need to be cleaned to function properly. Once a consistent operating
procedure is learned, the larger hopper on the Vermeer BP714 allows for more blocks to be
made before additional soil is needed.
For natural soil, the production of blocks was much easier with the AECT Impact 2001A. The
soil was very sticky in its natural condition and did not readily fall from the hopper when using
the Vermeer BP714. This caused problems in block production because too little material was
filling the press cavity. To alleviate some of these issues, soil was manually added to the cavity
and rodded to aid in consolidation. No such problems arose using the AECT Impact 2001A.
Once portland cement was added, both machines functioned at much higher efficiency. With
fewer clumps present and the soil drier overall, the material much more readily filled
the compression cavities of both machines. The manual addition and rodding of soil with the
Vermeer BP714 was not needed with stabilized soil.
At seven days, the AECT Impact 2001A CEBs with natural soil are over twice as strong as the
same blocks made in the Vermeer BP714. This is partly because of the production difficulties
when using the Vermeer machine with natural soil; the CEBs were most likely not compacted
as well as would be ideal. The stabilized soil tested at nearly identical strength regardless of machine.
This is indicative of the consistency of the soil after the addition of portland cement.
After 14 days, once again the AECT Impact 2001A CEBs with natural soil exhibited much
greater compressive strength than the blocks made with the Vermeer BP714. Blocks made with
stabilized soil in the AECT Impact 2001A showed a nearly 300 psi increase over the seven days
since the previous test age. The blocks made with stabilized soil in the Vermeer BP714 showed
no strength increase between seven and 14 days. This was an unexpected result attributed to
the moisture retained in the different block types. The Vermeer blocks were much wetter in
appearance and feel at 14 days than were blocks made with the AECT machine.
Minimal strength increase was seen between 14 and 28 days with natural soil CEBs. The
maximum average compressive strength of the Vermeer natural soil blocks was lower than the
CONCLUSIONS
The data recorded from the natural soil blocks indicated that the AECT Impact 2001A can be
used to produce blocks with significant quality advantages over blocks made with the Vermeer
BP714 when the soil is in less than desirable condition. The data recorded from the
stabilized soil blocks showed that, for more uniform and free flowing soil, the Vermeer BP714
produced higher quality blocks than the AECT Impact 2001A. The Vermeer BP714 has a two-
stage compression process that allows for more uniformly consolidated blocks, rather than the
one-stage compressive action of the AECT Impact 2001A. However, the AECT Impact 2001A is
Figure 7. Blocks exiting the AECT Impact 2001A (left) and Vermeer BP714 (right).
ACKNOWLEDGEMENTS
The authors would like to acknowledge the Transatlantic Division of the US Army Corps of
Engineers for funding this research study. Additionally, we could not have completed this study
without the help of our many technicians who worked tirelessly to assure that blocks were
produced and cured in the manner prescribed. Special thanks are given to Mr. Lawrence Jetter
and Mr. Jimmy Allen for their help with the AECT Impact 2001A press and to Mr. Adam de Jong
for his help with the Vermeer BP714 press. Permission to publish is granted by the Director,
Geotechnical and Structures Laboratory.
REFERENCES
Dominguez, T. (2011). Guide G-521: ABCs of Making Adobe Bricks. New Mexico State
University, College of Agricultural, Consumer and Environmental Sciences. Las Cruces,
NM: New Mexico State University.
Yu Du
Industrial and Manufacturing Systems Engineering Department
Iowa State University
0068 Black Engineering Building
Ames, Iowa 50011
[email protected]
Michael C. Dorneich
Industrial and Manufacturing Systems Engineering Department
Iowa State University
3028 Black Engineering Building
Ames, Iowa 50011
[email protected]
Brian L. Steward
Agricultural and Biosystems Engineering Department
Iowa State University
2325 Elings Hall
Ames, Iowa 50011
[email protected]
Eric R. Anderson
C&F Dynamics Systems Modeling
Deere & Company
18600 S John Deere Rd
Dubuque, Iowa 52001
[email protected]
Lawrence F. Kane
A&T Dynamic System Modeling
Deere & Company
1100 13th Ave
East Moline, Illinois 61244
[email protected]
Brian J. Gilmore
Advanced Systems Engineering
Deere & Company
One John Deere Place
Moline, Illinois 61265
[email protected]
ABSTRACT
Greater understanding of how highly skilled operators achieve high machine performance and
productivity can inform the development of automation technology for construction machinery.
Current human operator models, however, have limited fidelity and may not be useful for
machinery automation. In addition, while physical modeling and simulation is widely employed
in product development, current operator simulation models may be a limiting factor in
assessing the performance envelope of virtual prototypes. A virtual operator modelling approach
for construction machinery was developed. Challenges to the development of human operator
models include determining what cues and triggers human operators use, how human operators
make decisions, and how to account for the diversity of human operator responses. Operator
interviews were conducted to understand and build a framework of tasks, strategies, cues, and
triggers that operators commonly use while controlling a machine through a repeating work
cycle. In particular, a set of operation data were collected during an excavator trenching
operation and were analyzed to classify tasks and strategies. A rule base was derived from
interview and data analyses. A common nomenclature was defined and is explained. Standard
tasks were derived from operator interviews, which led to the development of task classification
rules and an algorithm. Task transitions were detected with fuzzy transition-detection classifiers.
INTRODUCTION
Introducing new product features can impact machine performance goals such as higher
productivity or fuel economy. Virtual design, the process by which new features are modeled
and tested in a simulation environment, is typically conducted early in the design process where
it is less expensive to make changes. While machines have been modeled with a fidelity that
enables robust testing, approaches to operator modelling technology are limited, which in turn
limits the ability of engineers to make solid comparisons in the virtual prototyping stage between
different design alternatives. Given the tightly coupled, non-linear nature of the sub-system
dynamics in off-road vehicles, combined with a strong human-in-the-loop involvement of
operators, dynamic simulation of the complete vehicle system must include the operator,
environment, and working tasks (Filla et al., 2005).
Expert human operators display several characteristics: humans can adapt quickly to context
using prior experience and training; humans have the ability to integrate contextual cues and
strategies; and expert operators can often outperform automated functions. As human operators
gain experience, their operations progress from a primarily knowledge-based behavior, to rule-
based behavior, and finally to skill-based behavior (Rasmussen, 1983). Knowledge-based
behavior depends on explicitly formulated goals and plans. With more practice, operators
become rule-based, where sequences of action become rules to follow. Eventually, the expert
exhibits skill-based behavior, where much of the action takes place without conscious control
(Rasmussen, 1983). These human characteristics are quite different from those of automated
machine systems.
A virtual operator model is designed explicitly to be independent of the vehicle model. Without
this independence, operator models that are highly tuned for particular vehicle models must be
retuned when vehicle designs are changed. To avoid the cumbersome nature of this tight
dependency, an operator model should adapt to changes in vehicle capabilities such as
available power or mechanical linkage constraints.
The objective of this work was to develop an approach to virtual operator modeling (VOM). The
VOM is an encapsulated model independent from the machine model with a well-defined
interface. The outputs of the VOM are the control inputs to the construction machine model.
The VOM should simulate human operators’ behavior and decision making to generate
appropriate control inputs for vehicle model simulation. The initial phase of this work is
described in this paper.
The methodology used to inform the VOM design included expert operator interviews, machine
data analysis, and task-based state modelling. The ability to have an adaptive VOM will enable
enhanced performance analysis including fuel efficiency, productivity, and component loading
and strategies for robust automation technology development.
RELATED WORK
Preliminary studies of operator modeling found in autonomous vehicle research help to
develop an understanding of how a virtual operator model can be used to improve autonomous
control systems. Data collection, task analysis and human behavior study techniques can be
combined to gather, organize, and represent how human operators perform certain operations,
supported by their strategies, situation awareness, knowledge, and decision making process.
Autonomous Control
Autonomy in semi-controlled environments like those associated with construction or agricultural
applications requires specification and generation of human-like behavior. Han et al. (2015)
presented a multi-layered design framework for behavior-based autonomous or intelligent
systems. Intelligent systems have the ability to perceive cues from the environment and
machine and plan processes for adapting to different situations. Blackmore et al. (2007)
proposed a behavior-based approach to agricultural field robotics with the capability to perform
operations in unknown field conditions, integrating human-like adaptation ability with
intelligent perception and decision making in robotics.
MODELING METHODS
For this project, the excavator trenching operation was selected as the target construction
machine operation for virtual operator development. Excavator trenching is a very common
construction operation, which contains multiple tasks and deals with multiple situations. During
the operation, an operator needs to make a trench at a predetermined location and orientation
with defined dimensions, and dump the material either in a defined area or into a truck.
Operators tend to work at their maximum ability to finish trenching as soon as possible. To
automate the trenching operation, an autonomous system must mimic human operator behavior
to adapt to different situations or disturbances during the operation.
To develop a virtual operator model that replicates human operator behavior, operator
interviews were first conducted to understand the approach of operators and to collect
information about their behavior. The modeling structure described in Figure 1 describes the
elements of the VOM.
The vehicle model provides machine signals like cylinder extension length and velocity. Vehicle
data can be translated into the absolute position and orientation of machine implement elements
such as buckets and booms through a kinematic model of the machine. These dynamic
variables are closely related to the visual cues that the human operator uses for decision
making during the work cycle. The signal flow in this operator model and vehicle model structure
forms a closed loop for simulation.
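The cylinder-to-perception translation can be sketched with planar two-link forward kinematics. The link lengths and the law-of-cosines cylinder mapping below are illustrative assumptions, not taken from any specific excavator:

```python
import math

# Planar sketch of the perception module's kinematic translation: a cylinder
# extension maps to a joint angle via the law of cosines, and joint angles
# map to the bucket-pin position the operator actually perceives (e.g.,
# bucket height). Link lengths are illustrative assumptions.

BOOM_LEN = 5.7  # m, assumed boom length
ARM_LEN = 2.9   # m, assumed arm (stick) length

def joint_angle_from_cylinder(cyl_len, a, b):
    """Included joint angle (rad) for a cylinder spanning link offsets a and b."""
    return math.acos((a * a + b * b - cyl_len * cyl_len) / (2 * a * b))

def bucket_pin_xy(boom_angle, arm_angle):
    """Forward kinematics from the boom pivot to the bucket pin (m)."""
    x = BOOM_LEN * math.cos(boom_angle) + ARM_LEN * math.cos(boom_angle + arm_angle)
    y = BOOM_LEN * math.sin(boom_angle) + ARM_LEN * math.sin(boom_angle + arm_angle)
    return x, y

x, y = bucket_pin_xy(math.radians(30), math.radians(-90))
print(round(x, 2), round(y, 2))  # horizontal reach and bucket height
```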
The virtual operator model consists of three modules: the human perception module, the task
model module, and the control model module. The human perception module acquires the
vehicle model measurements and transforms them into the human-perception-level information
that operators use for decision making. The vehicle model produces measurements such as
cylinder displacements, but humans perceive the vehicle in terms of position and location, such
as bucket height; a kinematic model translates the raw vehicle data into these human-level
perceptual cues. The human perception module interprets the visual cues and sounds to trigger
transitions between tasks. The task model module uses the results from the human perception
module to determine the sequence and status of tasks. The control model module uses task
goals, modified by external conditions such as soil and environmental conditions, to generate
control inputs for the vehicle model.
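The kinematic translation from cylinder extensions to human-level cues such as bucket height can be sketched as follows. This is a minimal planar sketch: the two-link geometry, the link lengths, and the linear cylinder-to-joint-angle mapping are illustrative assumptions, not the actual machine model used in this work.

```python
import math

# Hypothetical link lengths for a planar boom/arm chain (illustrative only).
BOOM_LEN = 5.7  # m
ARM_LEN = 2.9   # m

def joint_angle(cyl_ext, ext_min, ext_max, ang_min, ang_max):
    """Linearly map a cylinder extension to a joint angle in radians.
    A real machine needs the true linkage geometry; this is a stand-in."""
    frac = (cyl_ext - ext_min) / (ext_max - ext_min)
    return ang_min + frac * (ang_max - ang_min)

def bucket_position(boom_ext, arm_ext):
    """Planar forward kinematics with the boom pivot at the origin;
    returns the (reach, height) of the arm tip in metres."""
    boom_ang = joint_angle(boom_ext, 0.0, 1.2, math.radians(-20), math.radians(60))
    arm_ang = joint_angle(arm_ext, 0.0, 1.5, math.radians(-150), math.radians(-30))
    x = BOOM_LEN * math.cos(boom_ang) + ARM_LEN * math.cos(boom_ang + arm_ang)
    z = BOOM_LEN * math.sin(boom_ang) + ARM_LEN * math.sin(boom_ang + arm_ang)
    return x, z
```

With retracted cylinders the tip sits below grade, and with extended cylinders above it, so the height cue needed by the perception module falls directly out of the chain.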
Three steps were followed to develop a modeling approach for virtual operator modeling. The
first step was to interview operators, observe the operation, and acquire machine data.
Second, the operation was analyzed to define the tasks and relate machine data to those
tasks. Third, a transition detection classifier was developed to identify transitions between
tasks, leading to a state sequence model.
Data Collection
Operator interviews were conducted to gain a deeper understanding of the information needed
and operator behaviors during the trenching operation. An interview protocol was developed to
determine an operator's operating experience, behavior, strategies, and possible problems.
Machine data recorded during predefined excavator trenching operations were used to analyze
operators' operational behavior. An excavator was equipped with cameras inside and outside
the cab to capture video and audio, an eye-tracking sensor to record the operators' real-time
field of view, and sensors to log data produced by the machine itself. Machine operation data
were collected during the operation, with signal channels for operator inputs, cylinder positions,
and relative speed and direction.
Machine data were analyzed and related to the task analysis, which described the tasks
identified for trenching and specified the control input information related to each task. Data
analysis was performed on the machine data to identify the different tasks defined in the task
analysis, yielding the actual sequence, timing, and control inputs of the tasks.
Task and data analysis resulted in a qualitative task analysis and a quantitative data-based task
classification, which provided an accurate description of the trenching operation. A task model
was established to represent the trenching operation with detailed information about human
operator behavior, strategies, and control input for each task.
MODELING RESULTS
Task and Data Analysis Results
The tasks and sub-tasks of the trenching operation were identified from the operator interviews
and are summarized in a nominal task timeline (see Figure 2). Five main tasks were identified
within the trenching operation: bucket filling, bucket lifting, swing to dump, dumping, and swing
back to trench. The timing of the start and end of each task was estimated through review and
analysis of video of a trenching operation for one of the participants. It was noted in both
interviews and video analysis that the tasks overlap. Task overlap was a consistent theme
among all participants – one participant said that the more expert the operator, the more he or
she can overlap tasks to increase efficiency and reduce cycle time. While the video analysis of
timing provided a qualitative estimation of task overlap, vehicle data analysis (described later in
this section) was used to obtain more precise estimates of task timing.
Figure 2. Task timeline based on human operator interviews. Task start and end times
were calculated based on analysis of videos of trenching operations.
The durations of specific tasks were overlaid on traces of the machine cylinder extension
lengths, swing speed, and operator inputs (see Figure 3). The topmost graph of Figure 3 shows
the cylinder extension positions for the Boom, Arm, and Bucket. The remaining graphs show
operator control inputs for Swing, Boom, Arm, and Bucket. All these signals were used to
identify the five tasks: Bucket Filling, Bucket Lifting, Swing to Dump, Dumping, and Swing to
Trench within two work cycles. Rectangular bars in different colors show the start and end time
points of the tasks, labeled with task names. Task timing information can be read directly from
the diagram.
[Figure 3 panels: Swing, Boom, Arm, and Bucket control inputs plotted over 20–45 s, with the task intervals (Swing to Dump, Swing to Trench, Bucket Lifting, Bucket Filling, and Dumping) overlaid on each trace.]
Figure 3. Data Characterization with Task Timing Overlaid on Traces of the Operator
Inputs and Machine Configurations.
Transition Detection
Task transition identification aims to predict when the operator shifts attention from one task
to another based on the vehicle state. Measured vehicle signals were classified into human-
perceivable descriptive information, which enabled human-like reasoning rules. For example, by
comparing the bucket height to the ground surface, three states were defined: BelowSurface,
NearSurface, and AboveSurface.
Table 1 contains an example of the rules used in the fuzzy transition detection classifiers to
detect the transition between Swing to Trench and Bucket Filling.
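A hypothetical sketch of such a fuzzy rule is given below; the membership shapes, thresholds, and units are illustrative assumptions, not the classifier actually reported in Table 1.

```python
# Illustrative fuzzy rule for detecting the transition from Swing to
# Trench to Bucket Filling. All shapes and thresholds are assumptions.

def tri(x, lo, mid, hi):
    """Triangular membership function on [lo, hi] peaking at mid."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= mid:
        return (x - lo) / (mid - lo)
    return (hi - x) / (hi - mid)

def near_surface(bucket_height):
    # Bucket height in metres relative to grade; full membership at grade.
    return tri(bucket_height, -0.6, 0.0, 0.6)

def swing_stopped(swing_speed):
    # Swing speed in rad/s; full membership when the swing has stopped.
    return tri(abs(swing_speed), -1.0, 0.0, 0.2)

def transition_to_bucket_filling(bucket_height, swing_speed):
    """Fire strength of the rule: IF the bucket is NearSurface AND the
    swing has stopped THEN transition to Bucket Filling."""
    return min(near_surface(bucket_height), swing_stopped(swing_speed))
```

Taking the minimum of the antecedent memberships is the standard fuzzy AND; thresholding the fire strength would yield the crisp transition signal plotted in Figure 4.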
With this information, the operator model transitions through the task model and provides
correct reference inputs to the controller and task model. Figure 4 shows the transition
identification result between Swing back to Trench and Bucket Filling. The green line represents
the fuzzy classification result for the transition detection. The blue line represents the ground
truth for Bucket Filling, which describes when Bucket Filling starts and ends. Comparing the
traces, a correct detection occurs when the green line starts to rise slightly ahead of the blue
line, since the goal of the classifier is to predict a transition between tasks. If the green line rises
later than the blue line, the transition is detected late.
Figure 4. Transition Detection Results between Swing back to Trench and Bucket Filling
ACKNOWLEDGEMENTS
This work was funded by Deere & Company. The opinions expressed herein are those of the
authors and do not necessarily reflect the views of Deere & Company.
Chen Feng
Kurt M. Lundeen
Suyang Dong
Vineet R. Kamat
Department of Civil and Environmental Engineering
University of Michigan
2350 Hayward Street, 2340 G.G. Brown Building
Ann Arbor, MI 48109-2125
[email protected]
ABSTRACT
The ability to automatically estimate the pose of an articulated machine's base (e.g., tracks or
wheels) and its articulated components (e.g., stick and bucket) is crucial for technical innovations
aimed at improving both safety and productivity in several construction tasks. Vision-based pose
estimation, in which optical cameras monitor fiducial markers to determine their three dimensional
relative pose (i.e., position and orientation), offers a promising low cost alternative to currently
available sensor packages that are non-ubiquitous and cost prohibitive. This paper presents an
overview of the design and implementation of this technique. A computer vision based solution
using a network of cameras and markers is proposed to enable a three-dimensional pose tracking
capability for articulated machines. First, markers are installed on both the machine’s components
and the jobsite. The site markers are installed and surveyed so their poses are known in a project
coordinate frame. Cameras then observe the markers to measure their relative transformations,
and calculate the machine components’ poses in the project frame. Several prototypes have been
developed to evaluate the system’s performance on an articulated excavator. Through extensive
sets of uncertainty analyses and field experiments, the proposed method has been shown to
achieve centimeter level depth tracking accuracy at a 15m range with only two ordinary cameras
and multiple markers, providing a flexible and cost-efficient alternative to other commercial
products that use infrastructure dependent sensors like GPS. A working prototype has been
tested on several active construction sites with positive feedback from excavator operators
confirming the solution’s effectiveness.
INTRODUCTION
The construction industry has long been affected by high rates of workplace injuries and fatalities.
According to the Census of Fatal Occupational Injuries (CFOI) report (Bureau of Labor Statistics,
2013), the construction industry had the largest number of fatal occupational injuries, and in terms
of rate ranked the fourth highest among all industries.
In addition to safety concerns, there are increasing concerns about relatively stagnant
productivity rates and a skilled labor shortage in the construction industry. For example, the
construction sector in the United Kingdom has recently been reported to be in urgent need of 20% more skilled
workers, and thus 50% more training provision by 2017, to deliver the projects in planning
(LCCI/KPMG, 2014).

Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
Excavation is a typical construction activity affected by the safety and productivity concerns
mentioned above. Excavator operators face two major challenges during excavation operations,
described as follows:
First is how to maintain precise grade control. Currently, grade control is provided by employing
grade-checkers to accompany excavators during appropriate operations. Grade-checkers
specialize in surveying and frequently monitor the evolving grade profile. The evolving grade
profile is compared to the target grade profile and this information is communicated by the grade-
checker to the excavator operator. The operator reconciles this information and adjusts the
digging strokes accordingly. This process is repeated until the target profiles are achieved.
Employing grade-checkers is not only dangerous but also results in a significant loss in excavation
productivity due to frequent interruptions required for surveying the evolving profile.
Second is how to avoid collisions with human workers, buried utilities, or other facilities,
especially when excavator operators cannot perceive the digging machine's position relative to
hidden obstructions (i.e., workers or utilities) that it must avoid. According to the aforementioned
CFOI report, among all causes of the 796 fatal injuries in the U.S. construction industry in
2013, strikes by objects or equipment comprised 10 percent. This percentage is even
higher in other industries such as agriculture (19%), forestry (63%), and mining (23%). Besides
directly causing fatal injuries on jobsites, construction machines can also inadvertently strike
buried utilities, thus disrupting life and commerce, and pose physical danger to workers,
bystanders, and building occupants. According to the nation's leading organization focused on
excavation safety, such underground strikes happen in the U.S. with an average frequency of
about once every six minutes (Common Ground Alliance, 2015). More specifically, excavation
damage is the third biggest cause of breakdowns in U.S. pipeline systems, accounting for about
17% of all incidents and leading to over 25 million annual utility interruptions (US DOT PHMSA,
2015).
Automation and robotics in construction (ARC) has been extensively promoted in the literature
as a means of improving construction safety and productivity and mitigating the skilled labor
shortage, since it has the potential to relieve human workers from repetitive or dangerous tasks
and to enable safer collaboration and cooperation between construction machines and the
surrounding human workers. In order to apply ARC and increase the intelligence of construction
machines to improve
either safety or productivity for excavation and many other activities on construction jobsites, one
of the fundamental requirements is the ability to automatically and accurately estimate the pose
of an articulated machine (e.g., excavator or backhoe). The pose here includes the position and
orientation of not only the machine base (e.g., tracks or wheels), but also each of its major
articulated components (e.g., stick and bucket).
When a construction machine can continuously track its end-effector's pose on the jobsite, such
information can be combined with the digital design of a task, either to assist human operators
in completing the task faster and more efficiently, or to eventually finish the task autonomously.
For example, an intelligent excavator that can track the pose of its bucket can guide its operator
to dig trenches or backfill according to designed profiles more easily and accurately with
automatic grade checking. This can eventually lead to fully autonomous construction machines.
As construction machines become more intelligent, they can be expected to save time in
training operators, and thus to mitigate the skilled labor shortage and also improve productivity.
Thus, from a safety, productivity, and economic perspective, it is critical for such construction
machines to be able to automatically and accurately estimate poses of any of their articulated
components of interest. In this paper, a computer vision based solution using planar markers is
proposed to enable such capability for a broad set of articulated machines that currently exist, but
cannot track their own pose. A working prototype (Figure 1) is implemented and shown to enable
centimeter level excavator bucket depth tracking.
The remainder of the paper is organized as follows: Related work is reviewed first. The authors’
technical approach is then discussed in detail. The experimental results are presented afterwards.
Finally, the conclusions are drawn and the authors’ future work is summarized.
PREVIOUS WORK
The majority of the construction machines on the market do not have the ability to track their
poses relative to project coordinate frames of interest. To track and estimate the pose of an
articulated machine, there are four main groups of methods.
First are the 2D video analysis methods, stimulated by improvements in computer vision for
object recognition and tracking. Static surveillance cameras were used to track the motion of a
tower crane in (Yang, Vela, Teizer, & Shi, 2011) for activity understanding. Similarly, in
(Rezazadeh Azar & McCabe, 2012) a part-based model was used to recognize excavators for
productivity analysis. This type of method generally requires no retrofitting on the machine, but
suffers from false or missed detections due to the complex visual appearance of jobsites, and
from relatively slow processing speed. Although real-time methods exist, as in
(Memarzadeh, Heydarian, Golparvar-Fard, & Niebles, 2012; Brookshire, 2014), they either cannot
Second are stereo vision based methods. A detailed 3D model of the articulated object was
required in (Hel-Or & Werman, 1994) in addition to stereo vision. A stereo camera was installed
on the boom of a mining shovel to estimate pose of haul trucks in (Borthwick, Lawrence, & Hall,
2009), yet the shovel's own pose was unknown. In (Lin, Lawrence, & Hall, 2013) the shovel's
swing rotation was recovered using stereo vision SLAM, yet the pose of its bucket was not
estimated. This type of method can be infrastructure independent when combined with SLAM,
yet some problems (sensitivity to lighting changes or texture-less regions) remain to be resolved
for more robust applications.
Third are laser based methods, e.g., (Duff, 2006; Kashani, Owen, Himmelman, Lawrence, & Hall,
2010; Cho & Gai, 2014), which are rooted in the extensive use of laser point clouds in robotics.
This type of method can yield good pose estimation accuracy if highly accurate, dense 3D point
clouds of the machine are observed using expensive and heavy laser scanners. Otherwise, with
low quality 2D scanners, only decimeter level accuracy has been achieved (Kashani, Owen,
Himmelman, Lawrence, & Hall, 2010).
Finally are angular sensor based methods, such as (Ghassemi, Tafazoli, Lawrence, & Hashtrudi-
Zaad, 2002; Cheng & Oelmann, 2010; Lee, Kang, Shin, & Han, 2012). They are usually
infrastructure independent and light-weight, but the resulting pose estimation is either not
accurate enough or sensitive to changes in the magnetic environment, which are not uncommon
on construction sites and can lead to large variations in the final estimate of the object poses.
Moreover, this type of method only estimates the articulated machine's pose relative to the
machine base itself, unless helped by infrastructure-dependent sensors, such as GPS, that
consume power and need careful maintenance. However, the use of GPS brings several technical
challenges. For example, construction sites in many cases do not have good GPS signals to
provide accurate position estimation, such as when the sites are located in urban regions or
occluded by other civil infrastructure, e.g., under bridges. Sometimes GPS signals can even be
blocked by surrounding buildings on jobsites and thus fail to provide any position estimation
(Cui & Ge, 2003). In addition, since GPS only provides 3D position estimation, obtaining a 3D
orientation estimate requires at least two GPS receivers at different locations on a rigid object.
When the object is small, such as a mini-excavator's bucket, the estimated 3D orientation's
uncertainty will be high.
TECHNICAL APPROACH
In this section, different versions of the proposed articulated machine pose estimation system
design are explained first. Then, the process to calibrate this system is described. Finally,
uncertainty analysis is explored for the system with some important observations of the
relationship between the system configuration and its stability, i.e., uncertainty of the estimated
pose.
System Design
As mentioned previously, this computer vision based articulated machine pose estimation
solution relies on a method called marker based pose estimation. Generally, marker based pose
estimation first finds a set of 2D geometry features (e.g., points or lines) in an image captured
by a calibrated camera, then establishes correspondences with another set of 2D or 3D
geometry features on a marker whose pose is known with respect to a certain coordinate frame,
and finally solves for the camera-marker relative pose from these correspondences.
There are two ways of applying marker based pose estimation for poses of general objects of
interest. As shown in Figure 2, one way is to install the calibrated camera 1 rigidly on the object
of interest (in this case, the cabin of the excavator), and pre-survey the marker 1's pose in the
project coordinate frame. The other way is to install the marker 2 rigidly on the object (in this case,
the stick of the excavator), and pre-calibrate the camera 2's pose in the project coordinate frame.
As long as the camera 2 (or the marker 1) stays static in the project coordinate frame, the pose
of the excavator's stick (or the cabin) can be estimated in real-time.
However, these basic configurations do not always satisfy application requirements. For example,
if only the camera 1 and the marker 1 are used, the excavator's stick pose cannot be estimated.
On the other hand when only the camera 2 and the marker 2 are used, once the stick leaves the
field of view (FOV) of the camera 2, the stick's pose becomes unavailable as well. Thus it is
necessary to take a camera's FOV into consideration when designing an articulated machine
pose estimation system. This understanding leads to the camera marker network design
proposed as follows.
[Figure 3. Camera marker graph: camera and marker nodes connected by pose observation and pose constraint edges, with the world frame w as a node.]
Thus, if at least one path exists between any two nodes in such a graph, the relative pose between
them can be estimated. In addition, any loop in the graph means a constraint of poses that can
be used to improve the pose estimation. For example, in Figure 3, marker 2's pose in the world
frame can be found via a path through camera 3 whose own pose in the world frame is pre-
calibrated. The marker 2's pose can also be better estimated when observed by the rigidly
connected camera 1 and 2 whose relative pose is pre-calibrated, since a loop is created.
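The path-and-loop reasoning above can be sketched as a small pose graph utility. Node names and transforms are illustrative, and for simplicity only path composition is shown, not the loop-based refinement.

```python
import numpy as np
from collections import deque

# Nodes are cameras, markers, and the world frame; edges carry measured
# or pre-calibrated 4x4 rigid transforms. If a path connects two nodes,
# composing the edge transforms along it yields their relative pose.

def make_pose(yaw, tx, ty):
    """An illustrative planar rigid transform as a 4x4 homogeneous matrix."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3], T[1, 3] = tx, ty
    return T

def add_edge(graph, a, b, T_ab):
    """Store each edge in both directions (the transform and its inverse)."""
    graph.setdefault(a, []).append((b, T_ab))
    graph.setdefault(b, []).append((a, np.linalg.inv(T_ab)))

def relative_pose(graph, src, dst):
    """BFS over the pose graph; returns T such that p_src = T @ p_dst,
    or None when no path exists (the pose is not estimable)."""
    queue = deque([(src, np.eye(4))])
    seen = {src}
    while queue:
        node, T = queue.popleft()
        if node == dst:
            return T
        for nbr, T_edge in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, T @ T_edge))
    return None
```

For instance, chaining a pre-calibrated world-to-camera edge with a camera-to-marker observation recovers the marker's world pose, exactly the path through camera 3 described above; a second, redundant path to the same marker would form the loop used to refine the estimate.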
Applying this concept to articulated machine pose estimation results in numerous possible
designs. One possible camera marker network is shown in Figure 4: camera 1 observes the
benchmark while camera 2 observes the stick marker, and the rigid transformation between
the two cameras is pre-calibrated. Thus, as long as the two markers stay inside the two cameras'
FOVs respectively, the stick's pose in the world frame can be estimated. It is worth noting that
this only illustrates a simple configuration. With more cameras and markers in the network, there
are more chances of creating loops and thus improving the pose estimation, especially
considering that surveillance cameras, whose poses can be pre-calibrated so that they act as
camera 3 in Figure 3, are becoming popular on construction jobsites.
[Figure 4. Camera marker network: cameras 1 and 2, stick marker S, benchmark B, and world frame w.]
Prototypes
Multiple prototypes have been implemented to realize the above described camera marker
network designs. Figure 5 demonstrates one of the early prototypes implementing a single-
camera multiple marker configuration. A mechanical device driven by a synchronous belt was
The prototype functions by means of two markers. The first marker, termed stick marker, is rigidly
attached to the stick with a known relationship to the bucket’s axis of rotation. The second marker,
termed rotary marker, is attached at a location removed from the vicinity of the bucket. The rotary
marker is constrained with one degree of rotational freedom and a known angular relationship to
the bucket. If the bucket’s geometry is also known, or measured onsite, then all necessary
information is available to deduce tooth pose, as shown in Figure 6.
Stick Marker
Rotary Marker
Due to the potential interference of the rotary marker with obstructions during excavation, the
above synchronous belt prototype was slightly modified and evolved into the current prototype
shown in Figure 7. The newer working prototype implements the multiple-camera multiple-marker
configuration similar to Figure 5. Two cameras are rigidly mounted forming a camera cluster. A
cable potentiometer is installed on the bucket’s hydraulic cylinder to track the relative motion of
the excavator bucket and the stick even if the bucket is deep inside the earth. In addition to
possessing a cable potentiometer for measuring linear displacement, the device contains a
System Calibration
Three types of calibration are necessary for an articulated machine pose estimation system
implementing the above camera marker network design.
The first type is intrinsic calibration, which determines the internal parameters (e.g., focal length)
of all cameras in the system. This is done using the same methods as in (Feng, Xiao, Willette,
McGee, & Kamat, 2014).
The second type is extrinsic calibration, which determines the relative poses (e.g., the dotted
edges in the graph) designed to be calibrated before system operation. There are two kinds of
such poses: 1) poses of static markers in the world frame, and 2) poses between rigidly
connected cameras. The first kind can be calibrated by traditional surveying methods using a
total station. The second kind, however, cannot be directly surveyed physically, since a camera
frame's origin and principal directions usually cannot be found or marked tangibly on the
camera. Thus, to calibrate a set of m rigidly connected cameras, a camera marker graph is
constructed as shown in Figure 8. A set of n markers' poses is surveyed in the world frame.
Then, when the m cameras observe these n calibration markers, the graph is formed to estimate
each camera's pose in the world frame, and thus the relative poses between the cameras (i.e.,
the edges with question marks) are calibrated. It is suggested to ensure that multiple loops exist
in this graph to improve the accuracy of the calibrated poses. Such a loop exists as long as at
least two markers are observed by the same camera simultaneously. It is also worth noting that
with sufficiently many calibration markers, each camera's intrinsic parameters can be further
optimized together with the extrinsic parameters.
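The core of this extrinsic calibration chain can be sketched in a few lines of numpy, with illustrative poses: each camera's world pose follows from a surveyed marker pose and that camera's observation of the marker, and the rigidly connected cameras' relative pose falls out by composition. Noise and the loop-based refinement are omitted for simplicity.

```python
import numpy as np

def pose(yaw, t):
    """Illustrative 4x4 rigid transform: rotation about z plus translation t."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Surveyed pose of a calibration marker in the world frame (total station).
T_wm = pose(0.4, [10.0, 5.0, 0.0])
# The same marker's pose as observed by each camera (marker-based
# pose estimation); values are illustrative.
T_c1m = pose(-0.1, [0.0, 0.0, 6.0])
T_c2m = pose(0.2, [1.0, 0.0, 7.0])

# Each camera's pose in the world frame...
T_wc1 = T_wm @ np.linalg.inv(T_c1m)
T_wc2 = T_wm @ np.linalg.inv(T_c2m)
# ...and the rigid camera-to-camera transform being calibrated
# (a "?" edge in Figure 8).
T_c1c2 = np.linalg.inv(T_wc1) @ T_wc2
```

In practice each camera would observe several surveyed markers, and the redundant loops would be used to refine these estimates jointly.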
Figure 8. A camera marker graph for extrinsic calibration
Uncertainty Analysis
It is not sufficient to only estimate the pose of an articulated machine. The uncertainty of the
estimated pose is critical for the following reasons. First, the uncertainty provides a measure of
the confidence level of the estimated pose, which is necessary for many downstream applications
(e.g., deciding the buffer size for collision avoidance). Second, it serves as a tool for evaluating
the stability of the pose estimation system under different configurations, thus helping to avoid
critical configurations that lead to unstable pose estimation.
To perform uncertainty analysis on the proposed camera marker network pose estimation system,
the system is first abstracted as the following state space model:

Z = F(X; Y, C)    (1)
where X is the state vector of the network (usually encodes the poses of nodes in the graph), Z
is the predicted measurement vector containing image coordinates of all the points projected from
markers, Y is the known parameters (usually contains marker points' local coordinates), C is the
calibrated parameters (usually encodes all cameras' intrinsic parameters and all calibrated
poses), and F is the system's observation function parameterized by Y and C, i.e., the camera
perspective projection function.
For example, for a network of a single camera and a single marker, X is a 6 × 1 vector that
encodes the marker's orientation and position in the camera frame; Y is a 3n × 1 vector
containing n marker points' coordinates from surveying; C is a vector of the camera intrinsic
parameters. If another marker is added to this network, Y should be extended with the points
on the new marker.
Uncertainty Propagation
No matter how complex such a network is and what method is used to get an initial estimate of X
(either PnP or homography decomposition), the optimized state X̂ can be calculated by the
following least-squares optimization, i.e., bundle adjustment:

X̂ = argmin_X ‖Ẑ − F(X; Y, C)‖²_{P_Ẑ}    (2)

where P_Ẑ is the a priori covariance matrix of the actual measurements Ẑ, typically assumed to
be σ_u²I when image coordinates are measured with a standard deviation of σ_u. The
uncertainty of the optimized state then follows from first-order propagation:

P_X̂ ≈ (Jᵀ P_Ẑ⁻¹ J)⁻¹    (3)

where J = ∂F/∂X, evaluated at X̂, is the Jacobian matrix of F.
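The propagation step can be illustrated numerically with a toy observation function (a stand-in, not the system's actual F); with P_Ẑ = σ_u²I, as assumed above, the state covariance reduces to P_X = σ_u²(JᵀJ)⁻¹.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of f at x."""
    z0 = f(x)
    J = np.zeros((z0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - z0) / eps
    return J

def toy_obs(x):
    """Hypothetical pinhole-like observation of two points sharing the
    depth x[1], with lateral offset x[0]; for illustration only."""
    f_len = 800.0
    return np.array([f_len * x[0] / x[1], f_len * (x[0] + 0.2) / x[1]])

x_hat = np.array([0.1, 5.0])   # illustrative optimized state
sigma_u = 0.5                  # assumed pixel noise (px)
J = numerical_jacobian(toy_obs, x_hat)
P_X = sigma_u**2 * np.linalg.inv(J.T @ J)
# The diagonal of P_X gives each state component's variance; here the
# depth component's variance dwarfs the lateral one, consistent with the
# line-of-sight observation reported in this section.
```

Swapping in the real F of Eq. (1) and the full measurement vector gives the covariance used to judge system stability under different camera-marker configurations.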
Using this method, some important empirical conclusions about the relationship between system
stability and configuration were found for the basic single-camera single-marker system, based
on numerical experiments. These conclusions, useful for more complex system designs, are
listed as follows; similar analysis will be performed for multiple-camera or multiple-marker
systems in future work.
1. The marker's origin/position in the camera frame, ᶜt_m, has the largest uncertainty along a
direction nearly parallel to the camera's line of sight to the marker, i.e., ᶜt_m itself. Figure 9
exemplifies this observation at two randomly generated poses between the camera and the
marker.
2. The largest uncertainty of the marker's position in the camera frame increases approximately
quadratically with the marker's distance to the camera; by comparison, the increases of the two
smallest uncertainties are almost negligible. Figure 10 shows a typical example.
3. The largest uncertainty of the marker's position in the camera frame increases approximately
linearly with the camera focal length; by comparison, the increases of the two smallest
uncertainties are almost negligible. Figure 11 shows a typical example.
First, the outdoor detectability of the markers was tested. A marker's detectability is a function of
many factors, including the marker size, the distance between the marker and the camera, the
included angle between the camera viewing direction and the marker plane's normal direction,
and the image resolution. Since the distance between the marker and the camera is the most
critical factor affecting the method's feasibility in real applications, this experiment was performed
by fixing the other factors, gradually increasing the distance of the marker in front of the camera
until the algorithm failed to detect the marker, and recording that distance. Varying the other
factors and repeating this process results in Table 1. One can consult this table to decide how
large the marker should be to fit the application's needs.
Finally, for uncertainty propagation, one needs an a priori estimate of the image measurement
noise's standard deviation σ_u. This is achieved by collecting multiple images under a static
camera marker pose. Repeating this process for different poses and collecting
Mockup Experiments
As previously mentioned, a multiple-camera multiple-marker articulated machine pose estimation
prototype has been implemented with the application of estimating an excavator's bucket depth
in a project frame, which could be used for automatic excavation guidance and grade control.
The top row of Figure 14(a) shows the camera cluster of the prototype in Figure 1, and different
experiment configurations to test the depth estimate's accuracy. The experiments were set up by
observing the two markers in the bottom row of Figure 14(a) using the two cameras in the cluster
respectively. Then the depth difference between the two markers was estimated using the
proposed method, while the ground truth depth difference between the two marker centers was
measured by a total station with 1 mm accuracy. Figure 14(b) illustrates the configurations of
different sets of such experiments, for comprehensive tests of the method's accuracy under
several system and design variations. The first set varies one of the marker's pitch angle (top row
of the figure). The second set varies its height (bottom-left). The third set varies its distance to the
camera (bottom-middle). And the fourth set varies the number of tags used in that marker (bottom-
right).
[Figure 15. Depth estimation results across the experiment sets (marker pitch, marker height, marker distance, illumination): absolute depth error below 1 in.]
Prototype Experiments
Another experiment was conducted to characterize the performance of the cable potentiometer
prototype. The prototype was installed and tested on a Caterpillar 430E IT Backhoe Loader, as
shown in Figure 16. The experiment involved calibrating the sensor system, placing the backhoe
in random poses, using the sensors to estimate bucket tooth height, and comparing estimates
with ground truth measurements obtained using a total station. The experiment’s pass/fail criterion
was set at 2.5 centimeters (1 inch) of absolute error.
A total of eight trials were conducted. For each trial, three components of tooth position (x, y, and
z, where y corresponds to the zenith direction) were measured and compared with ground truth
measurements, as shown in Figure 17. Of the twenty-four data points collected, only three points
exceeded the pass/fail criterion of 2.5 centimeters (1 inch) of absolute error. Though the system’s
ability to estimate x-position only marginally met the pass criterion, the system’s performance as
a whole was deemed satisfactory, especially considering that accuracy along the zenith direction
is more important in many excavation applications.
[Figure: absolute error (ABS Error, cm) of x-, y-, and z-position estimates; axis range 0.0 to 3.5 cm]
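The pass/fail evaluation described above can be sketched as follows. The error values here are hypothetical placeholders, not the measured data from Figure 17; only the trial/axis layout and the 2.5 cm criterion come from the text.

```python
import numpy as np

# Hypothetical absolute-error values (cm) for 8 trials x 3 axes (x, y, z).
# Illustrative placeholders only, NOT the measured data from Figure 17.
rng = np.random.default_rng(42)
abs_errors = np.abs(rng.normal(loc=1.0, scale=0.8, size=(8, 3)))

CRITERION_CM = 2.5  # pass/fail threshold: 2.5 cm (1 in.) of absolute error

failures = abs_errors > CRITERION_CM     # boolean mask of points exceeding the criterion
n_failed = int(failures.sum())           # count of failed data points (out of 24)
per_axis_worst = abs_errors.max(axis=0)  # worst-case error for each of x, y, z

print(f"{n_failed} of {abs_errors.size} points exceeded {CRITERION_CM} cm")
```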
The cable potentiometer prototype was also tested on an active construction site, as shown in
Figure 1 and Figure 18. The system was used to assist the operator in trenching operations. A
computer screen was mounted in the operator’s cabin, providing a display of bucket height relative
to a desired trench grade specified by the job plans.
Shown in Figure 19 is a short section of trench in which the operator conducted a side-by-side
comparison of traditional grading versus grading guided by the sensor system. The resulting
difference in trench depth between the manually graded section and the prototype-guided section
was less than 1 inch, which fulfills the needs of many construction applications.
The authors’ current and planned work in this research direction focuses on continuously
improving the estimation accuracy, for example by taking the uncertainty of the calibrated
parameters C into account.
ACKNOWLEDGMENTS
This research was funded by the United States National Science Foundation (NSF) via Grants
CMMI-1160937, CMMI-1265733, and IIP-1343124. The authors gratefully acknowledge NSF’s
support. The authors also thank Walbridge Construction Company, Eagle Excavation Company,
and the University of Michigan Architecture, Engineering and Construction (AEC) division for their
support in providing access to construction equipment and job sites for experimentation and
validation. The authors would also like to thank Dr. Manu Akula, Dr. Ehsan Rezazadeh Azar, and
Mr. Nicholas Fredricks for their invaluable help during experiments. Any opinions, findings,
conclusions, and recommendations expressed in this paper are those of the authors and do not
necessarily reflect the views of the NSF, Walbridge, Eagle Excavation, or the University of
Michigan. Suyang Dong and Vineet R. Kamat have a significant financial and leadership interest
in a start-up company named Perception Analytics & Robotics LLC (PeARL), and are the
inventors of technology (proposed to be licensed by PeARL) involved in or enhanced by its use
in this research project.
REFERENCES
Borthwick, J., Lawrence, P., & Hall, R. (2009). Mining haul truck localization using stereo vision.
Proceedings of the Robotics and Applications Conference, 664, p. 9.
Brookshire, J. (2014). Articulated pose estimation via over-parametrization and noise projection.
Cambridge: MIT.
Bureau of Labor Statistics. (2013). Census of Fatal Occupational Injuries (CFOI) - Current and
Revised Data. Retrieved February 3, 2015, from http://www.bls.gov/iif/oshcfoi1.htm
Cheng, P., & Oelmann, B. (2010). Joint-angle measurement using accelerometers and
gyroscopes—A survey. IEEE Transactions on Instrumentation and Measurement, 59(2),
404-414.
Cho, Y., & Gai, M. (2014). Projection-Recognition-Projection Method for Automatic Object
Recognition and Registration for Dynamic Heavy Equipment Operations. Journal of
Computing in Civil Engineering, 28(5), A4014002.
Common Ground Alliance. (2015, March 03). Survey finds that nearly half of homeowners who
plan to dig this year will put themselves and others at risk by not calling 811 before
starting. Retrieved May 18, 2015, from http://commongroundalliance.com/media-
reports/press-releases/survey-finds-nearly-half-homeowners-who-plan-dig-year-will-put
Cui, Y., & Ge, S. (2003). Autonomous vehicle positioning with GPS in urban canyon
environments. IEEE Transactions on Robotics and Automation, 19(1), 15-25.
Duff, E. (2006). Tracking a vehicle from a rotating platform with a scanning range laser.
Proceedings of the Australian Conference on Robotics and Automation, 12.
Feng, C., & Kamat, V. R. (2012). Plane Registration Leveraged by Global Constraints for
Context-Aware AEC Applications. Computer-Aided Civil and Infrastructure Engineering,
28(5), 325-343.
Feng, C., Xiao, Y., Willette, A., McGee, W., & Kamat, V. R. (2014). Towards Autonomous
Robotic In-Situ Assembly on Unstructured Construction Sites Using Monocular Vision.
Proceedings of the 31st International Symposium on Automation and Robotics in
Construction, (pp. 163-170). Sydney, Australia.
Ghassemi, F., Tafazoli, S., Lawrence, P., & Hashtrudi-Zaad, K. (2002). An accelerometer-based
joint angle sensor for heavy-duty manipulators. Proceedings of IEEE International
Conference on Robotics and Automation, 2, pp. 1771-1776.
Masoud Gheisari
Building Construction Science Program
Mississippi State University
823 Collegeview St. 132C Howell Building
Mississippi State MS 39762;
[email protected]
Javier Irizarry
School of Building Construction
Georgia Institute of Technology
245 4th Street, NW, Rm 138
Atlanta, GA 30332
[email protected]
ABSTRACT
This user-centered study explored the usability of Unmanned Aerial Systems (UASs) in Georgia
Department of Transportation (GDOT) operations. The research team conducted a series of
semi-structured interviews with subject matter experts in GDOT divisions. Interviews focused on
(1) the basic goals of the operators, (2) their major decisions for accomplishing those goals, and
(3) the information requirements for each decision. Following an interview validation process, a
set of UAS design characteristics that fulfill the user requirements of each previously identified
division was developed. Ultimately, five reference systems were proposed using a house-of-
quality viewgraph to illustrate the relationships between GDOT tasks and potential UASs aiding
those operations. Each of these five reference systems (Flying Camera, Flying Total Station,
Perching Camera, Medium Altitude & Long Endurance (MALE), and Complex Manipulation) was
then discussed in terms of its Airframe, Payload, Control Station, and System capabilities.
The technical requirements determined in this manner would aid in the more rapid development
of test UASs for GDOT use and advance GDOT’s implementation of UASs to help accomplish
the Department’s goals.
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
INTRODUCTION
Innovative applications of Unmanned Aerial Systems (UASs) for improved mapping operations
take advantage of several inherent characteristics of these platforms. For instance, aerial video,
collected by visible or infrared video cameras deployed on UAV platforms, is rapidly emerging
as a low cost, widely used source of imagery for response to time-critical disaster applications
(Wu and Zhou 2006) or for the purpose of fire surveillance (Wu et al. 2007).
During the past decade, UASs have been applied in a wide range of traffic management,
transportation and construction disciplines related to Departments of Transportation (DOTs),
including traffic surveillance (1,2), traffic simulation (3,4), monitoring of structures (5,6),
avalanche control (7), aerial assessment of road surface condition (8), bridge inspection (9), and
safety inspection on jobsite (10). States such as Virginia (11), Florida (12), Ohio and
Washington (1), Utah (13) were leading UAS application and implementation within their DOTs.
This chapter introduces a wide variety of UAS applications in traffic management, transportation
and construction disciplines related to DOTs. One of the very recent UAS-based projects
supported by a DOT was in the State of Georgia in which a group of researchers from the
Georgia Institute of Technology explored the feasibility of using this leading edge technology in
several divisions and offices within GDOT (14, 15, 16). The goal of this study was aligned with
the Federal Aviation Administration (FAA) goals of efficient integration of UASs into the nation’s
airspace for civilian and public applications. Integrating UASs into GDOT practices might seem
very beneficial at first glance, but the most important issue investigated in the first step of that
GDOT-supported project was how usable this technology could be within GDOT divisions and
offices.
RESEARCH METHODOLOGY
A UAS might seem useful for most GDOT practices, but the first issue to resolve is whether this
technology would be usable for the different applications within GDOT divisions and offices. A
usable UAS should be designed by first investigating the user requirements across all divisions
and offices of GDOT and then identifying and developing a set of design characteristics for the
UAS based on those previously identified user requirements. Even when a real UAS is designed
based on user requirements, it should be tested with real users of the system to evaluate its
applicability and usability. The work plan of this research is illustrated in Figure 1.
The Engineering Division develops environmental studies, right-of-way plans, construction plans
and bid documents through a cooperative effort that results in project design and
implementation. Moreover, the division is responsible for supporting and maintaining all
engineering software, engineering document management, and state wide mapping. Four
interviews were conducted with persons in charge of activities conducted by the engineering
division.
The main responsibility of the Intermodal Division is to support and facilitate the development
and implementation of intermodal policies, planning, and projects in the highway program and
organize all major statewide non-highway programs for the development of a comprehensive
transportation system. The intermodal division consists of four main programs: (1) aviation
programs is tasked to guide airport development and to assure a safe and well-maintained
system of public-use airports; (2) transit programs provide transit capital and operating
assistance to the urban and all metropolitan planning organizations in Georgia; (3) rail programs
include track inspection and safety investigation for the Georgia rail system in cooperation with
the Federal Railroad Administration; (4) waterways programs maintain the navigability of the
Atlantic Intracoastal waterway and Georgia's deep water ports in Savannah and Brunswick.
Consequently, five interviews were conducted with members from the intermodal division,
including at least one interviewee from each of the above programs.
The Permits and Operations Division ensures a safe and efficient transportation system by
collecting traffic data, addressing maintenance needs (e.g. related to traffic lights) and
regulating the proper use of the state highway system. In order to improve traffic flow and
coordinate traffic engineering, traffic safety, and incident management statewide, the division
collects traffic data (e.g. flow, speed, counts) using a wide range of devices (e.g. video cameras,
microwave sensors, and computer applications pertaining to traffic services). This division
consists of four offices: Transportation Data, Utilities, Traffic Operations, and Maintenance.
Interviews were conducted with seven engineers at management, district and area office levels.
Figure 5 shows the created UAV requirement matrix for the identified divisions within GDOT; the
left part is pertaining to the notion of UAV classes, while the right part shows the sensor suites.
Based upon the task classification of being either local or distributed, the related primary
descriptor (a length or an area, respectively), and the task attributes duration and frequency, the
identified tasks can be binned.
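The task binning described above might be represented as follows; the schema, field names, and example tasks are illustrative assumptions, not taken from the GDOT interviews.

```python
from dataclasses import dataclass

@dataclass
class Task:
    # Illustrative schema; field names and values are assumptions, not GDOT terminology.
    name: str
    scope: str          # "local" or "distributed"
    descriptor: float   # primary descriptor: a length (m) if local, an area (m^2) if distributed
    duration_h: float   # typical duration of the task, in hours
    frequency: str      # e.g. "daily", "weekly", "on-demand"

def bin_key(task):
    """Group tasks that share scope and frequency for requirements analysis."""
    return (task.scope, task.frequency)

tasks = [
    Task("bridge deck inspection", "local", 60.0, 2.0, "on-demand"),
    Task("corridor traffic survey", "distributed", 5.0e5, 4.0, "daily"),
]
bins = {}
for t in tasks:
    bins.setdefault(bin_key(t), []).append(t)
```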
The Control Station section groups the hardware and software requirements for the control
station used by the UAS operator; the software considered here relates to the graphical user
interface (GUI) and does not include specific guidance, navigation, and control (GNC) aspects.
The details of these requirements are discussed in Table 2. GNC requirements are grouped into
the System section, which mainly contains capability features of the reference systems (Table 3).
System B – Flying Total Station
System B expands upon System A by providing higher quality measurements of the
environment. The main focus of the system is to act as a flying total station, i.e. perform tasks
similar to the ones done by a survey crew, but faster, especially in otherwise unprepared
environments. As the systems’ expected operational radius is limited, the UAV could be
tethered to a power outlet at the control station site (see Figure 7).
Usage Scenario: The flying total station could be used anytime survey data or location data of
survey quality is needed. The system could be brought on scene, maneuvered to the area to be
surveyed (either through FPV video or conventional third person flying), and the survey could be
started. Post processing would presumably be similar to post-processing regular total station
data.
Airframe: System B would need an airframe that is capable of prolonged hover operations,
precise positioning, and a certain level of failure tolerance to protect against the loss of
potentially expensive sensors. These requirements would point toward a multicopter designed
for redundancy; this could be a hexa- or octocopter sized so that not all rotors are needed to
stay airborne. As people, both cooperative and non-cooperative, would presumably impact the
survey process negatively by blocking the line of sight, it can be assumed that the system would
be operated in the presence of relatively few cooperative people, thus reducing the
requirements for safety features such as shrouds.
Payload: The system would presumably carry LiDAR equipment to replicate a total station.
Additionally, an altimeter, for example sonar- or laser-based, could be used to establish a correct
above-ground altitude and, in reverse, determine the elevation of the terrain. Additional onboard
computational power might be required to process the LiDAR data.
Control Station: The control station for System B would most likely consist of a powerful
ruggedized laptop as well as a GNSS reference station to establish a differential correction for
the navigation solution.
Required Infrastructure: System B would not need any special communications or landing-site
infrastructure to be operated. However, due to the size of the UAV as well as the additional
antennas for the control station, the system would most likely require some larger crates (to be
transportable by pickup or SUV). An alternative would be a dedicated trailer or van.
(For more details on dedicated vehicles, see Section 5.2.6, the infrastructure requirements for
System E.)
Capabilities: The increased computation power, both onboard and offboard, would allow the
system not only to utilize vision to detect and identify objects, but also to process video data in
combination with the LiDAR data to perform SLAM-based high-precision navigation.
Furthermore, the system should be able to allow working with digital plans and drawings, for
example to check the correct location of construction features, roadway markings, or property
boundaries.
System C – Perching Camera
System C also expands upon System A, but mainly focuses on prolonged capture of the
environment. Operational endurance is therefore a main application goal of System C. This
could either be achieved through highly efficient flight, which might be hard to achieve given that
a considerable amount of the flight time could be hover or hover-like operations, or it could be
achieved through perching (see Figure 8).
Usage Scenario: Due to the standby capability that perching provides, System C could mainly
be used in two modes: as an ad-hoc deployed UAS for local, on-site inspection or measurement
tasks, or as a deployed-on-demand system. The former usage is comparable to that of System
A. The latter usage would imply that System C UAVs have strategically located fixed “base
stations,” which would serve as recharging stations and potentially as communication towers. A
user requiring services provided by System C would use a control station, request a UAV, and
be given control over the closest available air unit. Once the operator no longer needs the
services, the UAV would be released and would return to a base station.
Airframe: As a result from the perching requirement, the airframe is required to be relatively
small to be able to get to potential perching locations as well as being robust to inadvertent
collisions close to the selected perching location. Furthermore, the airframe needs to be
equipped with a landing gear of sorts to facilitate perching on poles, or traffic signal installations.
These requirements could be realized with a smaller scale hexa-copter or potentially a small
electric conventional helicopter.
Required Infrastructure: The perching capability recommends System C for an extended dual
use: on the one hand as a mobile UAS with the described capabilities, and on the other hand as
a static, continuously operating camera. A potential scenario could be the deployment of several
dedicated perching locations, which could double as charging or refueling stations. If the
System C units then provide a MANET capability, they could, for example, replace the
conventional Navigator cameras installed throughout the Atlanta metro area. This would require
a set of permanent perching locations. These base stations would provide recharging
capabilities, allow easy access for maintenance, and could also double as R/F communication
outlets spanning the MANET utilized by System C.
Payload: Given that perching could limit the achievable attitudes while landed, actuated video
and non-video sensor packages are presumably necessary to compensate for that. Given the
use cases for System C, it seems likely that the system would also provide telepresence
equipment as well as advanced networking capabilities.
Control Station: The control stations for System C could be of several types. Given the
potential use cases, operators should be able to use control stations tailored to them and their
particular needs: HERO personnel, for example, could utilize a rugged tablet computer based
system comparable to a unit used to control System A; GDOT employees working in the Traffic
Control Center could use software that operates on their desktop computers; traffic
engineering and traffic management personnel could use a laptop. The control station would
provide a FPV interface and a graphical tool for waypoint navigation as well as indicating
measurement areas or regions of interest.
Capabilities: System C would mainly provide vision-based capabilities, potentially making use
of dedicated external computation centers to supplement the limited computation power
available in tablet-based control stations. The system would expand upon the capabilities of System A,
especially toward traffic related tasks.
CONCLUSION
After conducting interviews with 24 individuals in the four selected GDOT divisions, the research
team identified tasks that could benefit from the use of UAS technology. The majority of the
tasks in GDOT divisions with the highest potential for benefitting from UAS technology are
centered around collecting data, providing information, and decision making based on the data.
Each task is also characterized by particular attributes (e.g. location where the tasks are
performed and the time required to complete a given task) that yield a better understanding of
the environmental conditions. Thus, UAS technical requirements that embed the operational
and technical requirements for development of a potential UAS have been investigated. The
result of this investigation was the identification of five potential systems. Given the issues with
cost related data collection in this study, it is recognized that additional research is needed to
obtain a clearer idea of the economic and intangible benefits of using UASs for GDOT
operations. A possible departure point would be the selection of construction-related tasks. It
would be possible to perform a detailed task analysis for a construction jobsite inspection task
to set the base for UAS operator system interface needs. The analysis would include a detailed
assessment of the current practice and shadowing of personnel performing the task. In that way
an estimate of the time and cost of performance could be developed. Based on this analysis, a
potential UAS flight path through a jobsite could be established. Using a staff-mounted sensor
suite as a UAS mock-up or an off-the-shelf UAS, sensor data including video would be
collected along the established flight paths. Then, a software replica of the site would be
developed, using the collected data. The system developed would be used in a staged field test
in an access-controlled construction site to validate the simulation results. This activity (and
preparations for it) would include direct coordination with the FAA. The technical requirements
determined would also aid in the more rapid development of test UASs for GDOT use as well as
advance GDOT’s implementation of UASs to help accomplish the Department’s goals.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the support provided for this research by GDOT under grant
12-38. The views and opinions presented in this paper are those of the authors and do not
necessarily reflect the views, opinions, or policy of GDOT.
Bertrand Lemasson
Environmental Laboratory
U.S. Army Engineer Research and Development Center
3909 Halls Ferry Road
Vicksburg, MS 39180
ABSTRACT
While the software controlling autonomous systems has grown more complex, controlling
systems of robots with multiple autonomous agents remains a challenging research area.
Assigning tasks or roles to each individual agent in the group is time consuming and may
restrict collaborative behaviors. One possible solution to this problem is to enable individual
autonomous agents with relatively simple interactive logic that allows more complex group
behaviors to emerge. We have tested the validity of this method by applying simple social
models derived from collective animal behavior to autonomous unmanned ground vehicles. We
have evaluated the results of this approach using high-fidelity simulations in the Virtual
Autonomous Navigation Environment (VANE). Our initial findings demonstrate the potential of
biologically inspired control algorithms for enabling navigation in groups of autonomous ground
vehicles.
INTRODUCTION
The potential for autonomous navigation by unmanned ground vehicles (UGV) holds great
promise for military applications. However, unlike autonomous navigation of civilian vehicles,
which can be operated on previously mapped road networks, military vehicles may require off-
road navigation with little previously known information about the state of the terrain. In addition,
without the benefit of well-marked lanes, road signs, and rules of the road, autonomous
navigation in off-road environments must work in a dynamic, unconstrained environment.
In this work we will demonstrate the feasibility of using biologically inspired control algorithms for
UGV operating in teams with manned ground vehicles. These algorithms work by having each
robot compare its own state to the state of the vehicles (both manned and unmanned) around it.
We use high-fidelity simulations from the Virtual Autonomous Navigation Environment (VANE)
[1] to show that these biologically-inspired algorithms give rise to group behaviors in teams of
manned and unmanned UGV that depend on the ratio of manned to unmanned vehicles.
BACKGROUND
Biologically inspired control
We implemented biologically inspired algorithms for controlling the robotic vehicles. In this work,
the inverse distance weights how individuals perceive one another; closer vehicles are given
more importance than those farther away. While there are additional mechanisms used by
animals to process information, such as selective attention to salient features [2,3], our goal was
to begin by integrating cognitive effects at the simplest level (sensory perception) to explore
their impact on the collective motion of realistically simulated ground vehicles.
In the algorithm used in this work, each robot's desired velocity was determined by comparing
its own velocity with those of all the other vehicles and weighting the averaged differences by
the inverse of the distance. So for vehicle $i$ with velocity $\vec{v}_i$, the desired velocity is given by

$$\vec{v}_i' = \frac{1}{w} \sum_{j=1,\, j \neq i}^{n} w_{ij}\,(\vec{v}_j - \vec{v}_i) \qquad (1)$$

where $n$ is the total number of vehicles and the inverse-distance weights are given by

$$w_{ij} = \lVert \vec{r}_i - \vec{r}_j \rVert^{-1} \qquad (2)$$

with $\vec{r}_i$ the position of vehicle $i$, and the total weight factor $w$ is given by

$$w = \sum_{j} w_{ij}. \qquad (3)$$
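A minimal sketch of this inverse-distance velocity-matching rule, assuming 2D positions and velocities stored as NumPy arrays (the function name is illustrative):

```python
import numpy as np

def desired_velocities(positions, velocities):
    """Inverse-distance-weighted velocity matching (Eqs. 1-3):
    each vehicle averages its velocity differences to all other vehicles,
    weighting nearer vehicles more heavily."""
    n = len(positions)
    desired = np.zeros_like(velocities)
    for i in range(n):
        accum = np.zeros(velocities.shape[1])
        w_total = 0.0
        for j in range(n):
            if j == i:
                continue
            w_ij = 1.0 / np.linalg.norm(positions[i] - positions[j])  # Eq. (2)
            accum += w_ij * (velocities[j] - velocities[i])           # sum in Eq. (1)
            w_total += w_ij                                           # Eq. (3)
        desired[i] = accum / w_total
    return desired
```

For two vehicles one unit apart, each vehicle's desired velocity is simply the relative velocity of the other, since the single weight cancels with the total weight factor.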
Sensor models in the VANE use detailed ray tracing to simulate the reflection of visible and
infrared wavelengths in the environment. This includes environmental light for camera sensors
and active sources such as the lasers in LADAR systems. The VANE also features a detailed
GPS model that accounts for multipath reflections, satellite occlusion, dilution-of-precision, and
atmospheric effects [6].
We performed experiments with groups of 4, 7, and 10 vehicles. For each group of vehicles, we
varied the number of vehicles that were autonomous. In the experiments, the autonomous
vehicles were not equipped with any sensing capability, but all the vehicles communicated their
state (position and velocity) to a common server to which all other vehicles subscribed. This was
the only information that the vehicles shared.
The initial configuration of the vehicles was a staggered column, as depicted in Figure 1, with an
average longitudinal spacing of 10 meters and an average lateral spacing of five meters. The
initial position of each vehicle was randomly varied by up to 2.5 meters in any direction, and the
initial orientation was randomly varied by 25 degrees. These ranges were chosen to give the
maximum possible variation to the simulations while ensuring the vehicles did not collide with
each other.
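The initial configuration described above could be generated as follows; this is a sketch under the stated spacings and perturbation ranges, with the orientation variation interpreted as up to 25 degrees on either side of the nominal heading.

```python
import numpy as np

def initial_configuration(n_vehicles, rng=None):
    """Staggered column: ~10 m longitudinal and ~5 m lateral spacing,
    positions jittered by up to 2.5 m in any direction and headings
    varied by up to +/-25 degrees (interpretation assumed)."""
    rng = np.random.default_rng(rng)
    rows = np.arange(n_vehicles)
    x = 10.0 * rows               # longitudinal spacing along the column
    y = 5.0 * (rows % 2)          # alternate vehicles offset laterally
    positions = np.stack([x, y], axis=1)
    # Random position perturbation: up to 2.5 m in a uniformly random direction.
    angle = rng.uniform(0.0, 2.0 * np.pi, n_vehicles)
    radius = rng.uniform(0.0, 2.5, n_vehicles)
    positions += np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)
    headings = rng.uniform(-25.0, 25.0, n_vehicles)  # degrees about nominal heading
    return positions, headings
```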
In the experiment, the driven vehicles abruptly changed course after five seconds to simulate an
avoidance maneuver. In the maneuver, the manned vehicles quickly turned to the left while
maintaining formation. Ten trials were run for each combination of manned and unmanned
vehicles shown in Table 1, and the average alignment at 15 seconds was calculated for each
configuration.
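The text does not define its "average alignment" measure; a common choice in collective-motion studies is the polarization, the magnitude of the mean unit-heading vector, sketched here as an assumption:

```python
import numpy as np

def alignment(velocities, eps=1e-12):
    """Polarization: norm of the mean unit-heading vector.
    Returns 1.0 when all vehicles head the same way, near 0 when disordered.
    (Assumed metric; the paper's alignment measure is not specified.)"""
    speeds = np.linalg.norm(velocities, axis=1, keepdims=True)
    headings = velocities / np.maximum(speeds, eps)  # unit heading per vehicle
    return float(np.linalg.norm(headings.mean(axis=0)))
```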
In Figure 3, we show the paths of vehicles from one of the simulations with seven vehicles. In
the left figure, only one vehicle was manned, while in the right figure, four vehicles were
manned. The manned vehicles are denoted with a thick red line, while the unmanned vehicles
are denoted with a thin gray line. The results depicted in Figure 3 show the typical result that
after the maneuver initiated at five seconds, the unmanned systems tended to move as a group,
and that group was influenced by the number and proximity of manned vehicles. Systems with
greater than 50% manned vehicles tended to recover alignment after the maneuver, while
systems with less than 50% recovered with proportionally varying degrees of success.
Figure 3. Comparison of the paths taken by a group of seven vehicles with six unmanned
systems (left) and three unmanned systems (right)
The current dynamics agree with theoretical and empirical work on the influence of majority-
minority interactions in the collective movement decisions of groups. When individuals are
unaware of who is in charge, and pool all social cues indiscriminately, followers will either adopt
a new course that is a compromise (Fig. 3, left), or else follow the majority (Fig. 3, right) [7,8]. In
future work we will explore how selective attention to particular features can shift these
dynamics across groups of increasing size [3]. In addition, we will parameterize those
algorithms to allow optimization of the control algorithms for a particular task such as navigation.
REFERENCES
[1] Goodin, C., George, T. R., Cummins, C. L., Durst, P. J., Gates, B. Q., & McKinley, G. B. (2012). The
Virtual Autonomous Navigation Environment: High Fidelity Simulations of Sensor, Environment, and
Terramechanics for Robotics. In Earth and Space 2012 (pp. 1441-1447).
[2] Ballerini, M., Cabibbo, N., Candelier, R., Cavagna, A., Cisbani, E., Giardina, I., Lecomte, V.,
Orlandi, A., Parisi, G., Procaccini, A., Viale, M., & Zdravkovic, V. (2008) Interaction ruling animal
collective behavior depends on topological rather than metric distance: evidence from a field
study. Proceedings of the National Academy of Sciences, U.S.A.,105(4), 1232-1237.
[3] Lemasson, B. H., Anderson, J. J., & Goodwin, R. A. (2013). Motion-guided attention promotes
adaptive communications during social navigation. Proceedings of the Royal Society B: Biological
Sciences, 280(1754), 20122003.
[4] Tasora, A., & Anitescu, M. (2009). A fast NCP solver for large rigid-body problems with contacts,
friction, and joints. In Multibody Dynamics (pp. 45-55). Springer Netherlands.
[5] Creighton, D. C., McKinley, G. B., Jones, R. A., & Ahlvin, R. B. (2009). Terrain Mechanics and
Modeling Research Program: Enhanced Vehicle Dynamics Module (No. ERDC/GSL-TR-09-8). Engineer
Research and Development Center, Geotechnical and Structures Laboratory.
[6] Goodin, C., Durst, P. J., Gates, B., Cummins, C., & Priddy, J. (2010). High fidelity sensor simulations
for the virtual autonomous navigation environment. In Simulation, Modeling, and Programming for
Autonomous Robots (pp. 75-86). Springer Berlin Heidelberg.
[7] Couzin, I. D., Krause, J., Franks, N. R., & Levin, S. A. (2005). Effective leadership and decision-
making in animal groups on the move. Nature,433(7025), 513-516.
[8] Biro, D., Sumpter, D. J., Meade, J., & Guilford, T. (2006). From compromise to leadership in pigeon
homing. Current Biology, 16(21), 2123-2128.
Fangyu Guo
Department of Civil, Construction, and Environmental Engineering
Iowa State University
136 Town Engineering Building
Ames, IA, 50011
Email: [email protected]
Yelda Turkan
Department of Civil, Construction, and Environmental Engineering
Iowa State University
428 Town Engineering Building
Ames, IA, 50011
Email: [email protected]
Charles T. Jahren
Department of Civil, Construction, and Environmental Engineering
Iowa State University
456 Town Engineering Building
Ames, IA, 50011
Email: [email protected]
ABSTRACT
3D modeling and automatic machine guidance (AMG) are becoming more popular in the
transportation industry. With a 3D model uploaded to an on-board computer within a piece of
heavy construction equipment, operators can easily monitor machine operations with respect to
grade and location or engage the machine to produce the proper grade automatically. Thus,
these technologies provide great convenience and improved productivity for field workers.
AMG and 3D modeling have been identified as enabling technologies for a Civil Integrated
Management (CIM) system. When CIM is implemented, an entire transportation agency and
stakeholder partners share in the use and development of a common data pool that is
accessible to authorized users involved with all phases of a transportation facility life cycle (such
as planning, design, construction, maintenance, and rehabilitation) and all departments in the
agency (administration, finance, operations, and others). The concept of CIM was developed
and promoted by the United States Federal Highway Administration in 2013 to make better use
of the accurate data and information that result from the utilization of advanced technologies
and tools, and thus to facilitate more effective decision making for transportation projects.
Using the CIM concept and framework, technologies such as 3D modeling and AMG could be
more efficiently adopted within the full life cycle of a transportation facility. More importantly,
data could be collected and managed systematically in the early phases of a project life cycle so
they could be useful for later phases of the facility life cycle. The purpose of this study is to
investigate how a CIM system could support autonomous construction and vice versa. During a
domestic scan effort, seven state agencies and their contractors collaborated to present their
extensive experiences with certain CIM-related practices and tools. In particular, the
experiences of the investigated agencies regarding 3D modeling and AMG are addressed in
this paper. In addition, the benefits and challenges of using 3D modeling and AMG are
discussed.
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
PROBLEM STATEMENT
Productivity, quality, and safety are important aspects of any transportation project and are used
as measures of project success. Various technologies and tools have been used to achieve
better productivity, quality, and safety. The adoption of 3D engineered models in the
transportation industry lags behind the adoption of Building Information Modeling (BIM) in the
building industry. However, this is now changing; the transportation industry is adopting 3D
engineered models at an increasing rate. In 2010, the adoption rate of 3D engineered models
for transportation projects was 27%, which increased to 46% in 2012 (McGraw-Hill 2012). The
successful implementation of BIM in the building industry sets a useful precedent: it is critical for
the transportation industry to investigate the lessons learned from the building industry and start
implementing 3D engineered models and supporting technologies such as Automatic Machine
Guidance (AMG) in its projects. The implementation of 3D engineered models is expected to
bring benefits such as less rework and fewer change orders (Parve 2013), more effective
communication (Olde and Hartmann 2012), reduced time and costs, and improved quality and
productivity (Myllymaa and Sireeni 2013; Cylwik and Dwyer 2012). Similarly, the major benefits
of using AMG include increased quality and productivity and reduced time and costs on job
sites (Peyret 2000).
For an agency planning to implement a new technology, it is important to investigate the best
practices and lessons learned from other agencies that already have extensive experiences with
that particular technology. Since Iowa DOT was identified as one of the leading state DOTs for
3D modeling implementation (EDC 2013), this paper presents a case study on Iowa DOT’s best
practices and lessons learned from the use of 3D modeling and AMG.
RESEARCH OBJECTIVES
The objectives of this study are as follows:
• Learn Iowa DOT's progress on the use of 3D engineered models and AMG;
• Study how the agency transitioned from its traditional processes to new processes
implementing 3D engineered models and AMG;
• Identify the benefits and challenges of using 3D engineered models and AMG within the
agency;
• Summarize the lessons learned from the agency's adoption of 3D engineered models
and AMG.
RESEARCH METHODOLOGY
The results presented in this paper were obtained from National Cooperative Highway
Research Program (NCHRP) Domestic Scan project 13-02, Advances in CIM. The scan project
investigated current practices related to Civil Integrated Management (CIM) adopted by
various state DOTs. During the project, 3D engineered models and AMG were identified as two
of the enabling technologies for CIM implementation. The scan panel was composed of
personnel from various state DOTs and the Federal Highway Administration (FHWA).
KEY FINDINGS
Progress on 3D Engineered Models and 3D Renderings
Transitioning from 2D to 3D design is one of Iowa DOT's main goals. Iowa DOT uses 3D
renderings extensively to communicate with the public, which allows the public to gain a better
understanding of a project and its impact on the surrounding environment. A great amount of
detail can be added to 3D renderings, although it can be challenging to add paint lines on
roads. Furthermore, these renderings can be used as a starting point for 3D design. Currently,
the agency models all of its highway projects in 3D using the Bentley Corridor Modeler software
package. Newly hired personnel are given proper training in 3D modeling knowledge and skills.
When Iowa DOT first started 3D modeling, 2D cross sections would be developed and used to
build a 3D model. This has been improved over the years. Today, 3D models are developed first
and cross sections are extracted from the 3D models. The model development process has
been standardized within the agency to ensure that designers use the same approach to build a
model, so that the same type of information can be obtained from any 3D model. Iowa DOT has
also adjusted the proportion of in-house model development effort over time. The agency used
to develop all 3D models in-house, which consumed a great amount of time because it wanted
to ensure that the resulting products fit contractors' needs. Currently, 70% of the 3D models are
developed in-house, and the remaining 30% are developed by consultants. The standardized
tools and templates developed by Iowa DOT are shared with its consultants. With this
adjustment, the time and effort for in-house model development have been reduced while
maintaining the same model quality. The completed models are typically provided to
contractors before the bidding process, since design-bid-build is the dominant project delivery
method for Iowa DOT projects. Overall, it can be concluded that 3D models and 3D renderings
help improve communication between Iowa DOT and its contractors and with the public,
respectively.
Some of the challenges Iowa DOT experienced in implementing 3D modeling are as follows.
First, copyright issues can be quite challenging; it is hard to determine who can change or
modify which information. Iowa DOT has a copyright policy and the state of Iowa has a
copyright law in place; however, these are not yet included in consultant contracts. Second, the
entire model development process has to be changed; attribute data should be properly
assigned to each 3D object in the models. In the future, Iowa DOT plans to develop 4D (3D plus
time, i.e., schedule) animations for its projects. Having 4D models is expected to give the public
a better idea of how a project will look once it is completed.
Progress on AMG
In the past, contractors who wanted to implement AMG would take 2D project plans to their
consultants and have them build 3D models based on the 2D plans, so that they could use the
Based on Iowa DOT's experiences, stringless paving technology greatly increased the quality of
paving. Stakes only need to be placed every 1,000 feet when stringless paving is used, which
greatly improves safety in the field. Competition among contractors became more intense, with
little change in bid prices. Although setting up machine control in the field increased upfront
project costs, the overall project cost did not change much compared to the traditional process
without AMG.
From the contractors' perspective, the following benefits were observed when AMG was used:
• Productivity and accuracy are greatly improved with the adoption of AMG.
• Owners tend to be more satisfied with the product delivered with AMG.
• Design changes can be integrated more easily and quickly.
• Problems can be detected early in the process via the 3D models that are intended to be
used for AMG.
• Field safety is greatly improved, since fewer people are needed on the job site for AMG-
implemented projects.
Lessons Learned from the Model Centric Design
Iowa DOT learned the following lessons from implementing 3D modeling and AMG for highway
projects.
• Before implementing a new technology or a new approach, it is important to set goals
first and then determine how to achieve them.
• It is beneficial to consult with industry practitioners to better understand their needs.
Iowa DOT started to use a standard data exchange format (LandXML), a naming
convention, color plans, and other practices based on the feedback it received from its
contractors and consultants.
• It is beneficial to set up rules to standardize the process and keep those rules
consistent. For example, color plans were adopted about five or six years ago within the
agency. The same color coding system was used for all projects to maintain
consistency, so that there was no confusion about the color codes.
• It is critical to organize data and documents (naming conventions, file formats, etc.) well.
Better organized documents help people search and sort information more effectively.
• It is beneficial to maintain continuous communication with industry associations. For
example, discussions with the Associated General Contractors of America (AGC)
helped Iowa DOT determine a future direction that matches the direction of AGC.
CONCLUSIONS
This paper focused on the implementation of 3D engineered models and AMG by Iowa DOT,
which was identified as one of the leading state DOTs in this area. A brief introduction was
provided on how the agency transitioned from traditional design and construction processes to
new processes where 3D models and AMG are used. Benefits, challenges, and lessons learned
from using these technologies were also discussed. The findings of this study should be
beneficial to agencies that are planning to implement these technologies within their
organizations. Before implementing a new technology, it is important for an agency to consult
other agencies and/or companies with extensive experience using that technology, and to
discuss it with software or hardware vendors, in order to set a clear goal and select the best way
to start. Lessons learned from pilot projects can provide great guidance for future projects.
Benefits and challenges should also be clearly identified; this may help motivate employees to
use the technologies and tools. Finally, as agencies become more experienced with new
technologies and tools, proper standards should be put in place to improve overall work
efficiency.
ACKNOWLEDGMENTS
The material presented in this paper is based on the presentations and documents obtained
during the Scan 13-02 project, Advances in Civil Integrated Management (CIM), which was
supported by the National Cooperative Highway Research Program (NCHRP) 20-68A US
Domestic Scan Program. Scan team members included John Adam of Iowa DOT, Katherine
Petros and Brian Cawley of the US Federal Highway Administration, Rebecca Burns of
Pennsylvania, Duane Brautigam of Florida DOT, Julie Kliewer of Arizona DOT, John Lobbestael
of Michigan DOT, Stan Burns and Randall R. Park of Utah DOT, David Jeong of Iowa State
University. The scan team was supported by Harry Capers, Jake Almborg, and Melissa Jiang of
Aurora and Associates, P.C. Their assistance in analyzing the data collected during the site
visits is gratefully acknowledged. We would also like to convey our sincere thanks to Iowa DOT
for their continuous support during this project.
REFERENCES
Cylwik, E. and Dwyer, K. (2012). “Virtual Design and Construction in Horizontal Infrastructure
Projects.” Engineering News-Record,
<http://enr.construction.com/engineering/pdf/News/Virtual%20Design%20and%20Constructi
on%20in%20Horizontal%20Construction-05-03-12.pdf> (May 02, 2015)
Every Day Counts (EDC). (2013). "3D Engineered Models for Construction Implementation
Plan.” Federal Highway Administration. 1-20.
Maxwell, J.A. (2013). Qualitative Research Design (3rd ed.). Thousands Oaks, CA: SAGE
Publications, Inc.
McGraw Hill Construction. (2012). “The Business Value of BIM.” SmartMarket Report, New
York.
Myllymaa, J., and Sireeni, J. (2013). “Cost Savings by Using Advanced VDC Tools and
Processes, Case Studies from Europe,” Presentation at the 2013 Florida Department of
Transportation Design Training Expo, June 12-14, 2013.
http://www.dot.state.fl.us/structures/designexpo2013/2013ExpoPresentations.shtm (May
02, 2015)
Kevin K. Han
Department of Civil and Environmental Engineering
University of Illinois at Urbana-Champaign
205 N. Matthews Ave
Urbana, IL 61874
[email protected]
Jacob J. Lin
Department of Civil and Environmental Engineering
University of Illinois at Urbana-Champaign
205 N. Matthews Ave
Urbana, IL 61874
[email protected]
Mani Golparvar-Fard
Department of Civil and Environmental Engineering & Computer Science
University of Illinois at Urbana-Champaign
205 N. Matthews Ave
Urbana, IL 61874
[email protected]
ABSTRACT
Actual and potential progress deviations during construction are costly and preventable.
However, today’s monitoring programs cannot easily and quickly detect and manage
performance deviations. This is because (1) the current methods are based on manual
observations made at specific locations and times; and (2) the captured progress information is
not integrated with “as-planned” 4D Building Information Models (BIM). To facilitate this process,
construction companies have focused on collecting as-built visual data through hand-held
cameras and video recorders. They have also assigned field engineers to filter, annotate,
organize, and present the collected data in comparison to 4D BIM. However, the cost and
complexity associated with these collecting, analyzing, and reporting operations still result in
sparse and infrequent monitoring, and therefore a portion of the gains in efficiency is consumed
by monitoring costs. To address these limitations, this paper outlines a formal process for
automating construction progress monitoring using visual data captured via camera-equipped
Unmanned Aerial Vehicles (UAVs) and 4D BIM. More specifically, for data collection, formal
methods are proposed to identify monitoring goals for the UAVs using 4D BIM and to
autonomously acquire and update the necessary visual data. For analytics, several methods
are proposed to generate 3D and 4D as-built point cloud models using the collected visual data,
to integrate them with BIM, and to automatically conduct appearance-based and geometrical
reasoning about progress deviations. For reporting, a method is proposed to characterize the
analyzed and identified progress deviations using performance metrics such as the Earned
Value Analysis (EVA) or Last Planner System (LPS) concepts. These metrics are then
visualized via color-coding of the BIM elements in integrated project models and presented to
project personnel via a scalable and interactive web-based environment. The validation of this
formalism is discussed based on several real-world case studies.
INTRODUCTION
Actual and potential progress deviations during the construction of buildings and infrastructure
systems are costly but preventable. To capture construction performance deviations accurately
and to make effective and prompt decisions based on them, frequent and accurate methods are
needed for tracking construction progress. If implemented on a daily basis, such methods can
effectively bridge the gap in information sharing between daily work execution and weekly work
planning and coordination, ultimately leading to improved project efficiency (Bosché et al. 2014;
Yang et al. 2015).
Today's construction progress tracking methods, however, do not provide the accuracy and
frequency necessary for effective project controls. This is in large part due to the labor-intensive
processes involved in as-built data collection and analysis (Bae et al. 2014). It is common for
field engineers and superintendents to walk around the site, take photos, and document the
progress of ongoing operations; nevertheless, this process is time consuming, as it needs to be
conducted for each daily task and for all associated construction elements. The ability to
perform monitoring and photographic documentation is also restricted by hard-to-reach areas.
This constraint negatively impacts the completeness of the data collection process. Once the
data is collected (whether complete or not), the field engineers filter, annotate, organize, and
present it in comparison to 4D BIM. However, the cost and complexity associated with the
analysis and reporting operations result in sparse and infrequent monitoring (Bae et al. 2013).
Therefore, through these project control activities, a portion of the gains in efficiency is
consumed by monitoring costs.
Over the past few years, research has focused on addressing these limitations through (1)
automatically generating 3D point cloud models from unstructured images and video sequences
(Brilakis et al. 2011; Golparvar-Fard et al. 2009), (2) aligning the resulting 3D point cloud models
with 4D BIM (Golparvar-Fard et al. 2011), and (3) methods that can automatically infer the
status of work tasks and their relevant elements using geometry information (Golparvar-Fard et
al. 2012), appearance information (Han and Golparvar-Fard 2015), and formalized construction
sequencing knowledge and reasoning mechanisms (Han and Golparvar-Fard 2014a). While
significant progress has been achieved, many research areas remain open and require further
investigation.
This paper presents a formal process for automating construction progress monitoring via
images taken with camera-equipped Unmanned Aerial Vehicles (UAVs) and 4D BIM. To
address the current limitations in accuracy and completeness of the data collection, we propose
to build on the emerging practice of using camera-equipped UAVs for collecting close-range site
imagery. Different from these practices, we propose to use 4D BIM to identify monitoring goals
for visual data collection, path planning, and autonomous navigation of the UAV through both
exterior and interior construction scenes. We also present a method that leverages BIM (1) to
improve accuracy and completeness of the 3D image-based point cloud modeling via UAV
captured images, and (2) for a better alignment of the images and the resulting point cloud
models with the 4D BIM. For analytics, methods are presented to leverage geometry,
appearance, and inter-dependency information among elements together with formalized
construction sequencing knowledge to document progress for work tasks that have
corresponding physical elements. A new system architecture is also proposed for visualizing the
collected images, produced 3D point cloud models, and 4D BIM in a scalable web-based
environment. This environment allows user interactions, which are particularly important for
collecting inputs on task constraints and on tasks that do not necessarily have physical
element correspondences. As shown in Figure 1, the resulting integrated project models can be
used to minimize the gap in information communication between work task coordination and
daily task execution. It can also support the identification and removal of work constraints and
root-cause analysis in construction coordination meetings.
Figure 1. The planning, execution and monitoring of weekly work plans and how
integrated project models generated via images taken from camera-equipped UAVs and
BIM can improve current work flows.
In the following, we provide an overview on the state of the practice and research on using
camera-equipped UAVs and BIM for progress monitoring. Next, we propose a formalized
procedure for leveraging 4D BIM for enhanced data collection, analytics, and communication of
construction progress deviations. A discussion is also provided on how the proposed procedure
and its associated tools can improve current information sharing and enhance construction
coordination processes.
To operate autonomously, UAV operators manually place waypoints on 2D maps and leverage
GPS coordinates for path planning and navigation. The reliance on GPS for navigation limits
autonomous data collection to outdoor environments and causes major difficulties in dense
urban areas. The presence of steel components on sites also affects the accuracy of the
magnetometers used on these platforms for navigation, potentially causing safety issues on
construction sites. Placing waypoints manually can also create safety hazards, because the 2D
maps used for navigation do not reflect the most up-to-date status of the resources on site,
leaving the UAV operators to approximate the 2D location and height of new construction and
resources such as mobile cranes.
Because the locations where construction progress is expected on a project site are not known
in advance, current best practices require the UAVs to cover the entire site. Since current
batteries are limited to 15-35 minutes of operation, covering an entire project site requires
multiple flights and manual supervision by the UAV operators. As the number of images
increases, the computation time necessary for processing the images, generating 3D point
cloud models, and analyzing work in progress also grows rapidly.
While research (Lin et al. 2015a; Siebert and Teizer 2014; Zollmann et al. 2014) has focused on
leveraging UAV-based images for progress monitoring, the aforementioned challenges remain
unexplored. Figure 2(a) shows a camera-equipped UAV flying onsite, (b) shows an operator
using the controller to navigate the UAV for data collection, and (c) shows a commercially
available application (DJI Ground Station 2015) for setting waypoints for UAV navigation.
Figure 2. (a) Camera-equipped UAV, (b) operator executing a data collection task on the
construction site, and (c) waypoint setting in a commercially available application (DJI Ground
Station 2015).
(1) Generating large panoramic images of the site and superimposing these large-scale, high-
resolution images over existing maps: while these images provide excellent visuals of ongoing
operations, they lack the 3D information needed for the area-based and volumetric
measurements necessary for progress monitoring. Moreover, none of the current commercially
available platforms provides a mechanism to communicate who is working on which tasks at
what location.
(2) Producing 3D point cloud models: over the past decade, the state of the art in image-based
3D modeling methods from the computer vision domain has significantly advanced. These
developments have led to several commercially available platforms that can automatically
produce 3D point cloud models from collections of overlapping images. Several AEC/FM firms
have started to leverage these platforms to produce as-built 3D point cloud models of their
project sites via images taken from camera-equipped UAVs. Nevertheless, today's practices are
mainly limited to measuring excavation work and stockpiles, because highly overlapping
images taken with a top-down view can produce high-quality 3D models of these operations.
However, creating complete 3D point cloud models for building and infrastructure systems also
requires the UAVs to fly around the structure to capture work in progress. Because there is no
automated mechanism for identifying the most informative viewpoints, the produced 3D point
cloud models are often incomplete. Also, the state-of-the-art Structure from Motion (SfM)
techniques for image-based 3D reconstruction, as used in (Golparvar-Fard et al. 2012;
Golparvar-Fard et al. 2011), may distort angles and distances in the generated point cloud
models. Figure 3 shows an example of a point cloud model that was generated via highly
overlapping images taken around the perimeter of a construction site. The reconstructed 3D
point cloud is known only up to scale and exhibits problems in completeness and accuracy.
(1) Analyzing the physical occupancy of the as-built models: research on occupancy-based
assessment methods uses 3D point cloud models and BIM to monitor whether or not structural
elements, Mechanical/Electrical/Plumbing (MEP) components, or temporary resources such as
formwork and shoring are physically present in the as-built point cloud model. These methods
are still challenged by the lack of sufficient detail in (1) the BIM and (2) the work breakdown
structure of the schedule. The 4D BIM may not have a physical representation for all elements,
particularly for temporary resources; hence, model-driven monitoring is not possible for all
elements. The lack of formalized construction sequencing knowledge in 4D BIM, such as the
steps in the placement of concrete elements (i.e., forming, reinforcing, placing, and stripping),
also adds to the complexity of the assessments, since without knowing exactly when each
element is expected to be placed, identifying the most up-to-date progress status is not
possible.
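A minimal sketch of an occupancy test follows, assuming an axis-aligned bounding box per BIM element and an expected point count estimated offline from sensor resolution and element surface area. The function names, the density normalization, and the simple threshold are illustrative assumptions, not the specific formulation used in the cited occupancy-based methods.

```python
import numpy as np

def occupancy_ratio(points, bbox_min, bbox_max, expected_count):
    """Fraction of the expected point count observed inside an element's
    axis-aligned bounding box; values near 1 suggest the element is
    physically present in the as-built point cloud.
    expected_count: points expected in the box when the element is fully
    built (an offline estimate, illustrative here)."""
    points = np.asarray(points, float)
    inside = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    return min(inside.sum() / expected_count, 1.0)

def is_present(points, bbox_min, bbox_max, expected_count, tau=0.5):
    # Simple threshold decision; tau is a tunable confidence cutoff.
    return occupancy_ratio(points, bbox_min, bbox_max, expected_count) >= tau
```

A practical system would also account for occlusion and sensor noise before thresholding, which is exactly where the sequencing-based reasoning discussed later helps.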
(2) Analyzing the appearance of the changes in the as-built models: the latest research on
appearance-based assessment superimposes the BIM on the 3D point cloud models and then
back-projects the BIM elements onto the images used to generate the point cloud model. From
these back-projections, several square image patches are sampled and used to analyze the
observed construction material. The most frequently observed material across these image
patches is then used to infer the most up-to-date progress status for each element in the 4D
BIM (Han and Golparvar-Fard 2015).
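The final inference step can be sketched as a majority vote over per-patch material labels. The function and label names below are hypothetical; the actual method in (Han and Golparvar-Fard 2015) uses trained appearance classifiers on the sampled patches rather than this toy vote.

```python
from collections import Counter

def infer_progress(patch_materials, expected_sequence):
    """Infer an element's progress state from per-patch material labels
    (e.g., output of a texture classifier on patches back-projected from
    BIM into site images).
    patch_materials: labels such as "formwork" or "concrete";
    expected_sequence: construction states in order of completion."""
    votes = Counter(m for m in patch_materials if m in expected_sequence)
    if not votes:
        return None  # element not observed in any sampled patch
    material, _ = votes.most_common(1)[0]
    return material

state = infer_progress(
    ["concrete", "concrete", "formwork", "concrete"],
    ["formwork", "rebar", "concrete"])
# the majority of observed patches are classified as concrete
```

Restricting votes to materials in the expected sequence filters out classifier labels (soil, equipment, etc.) that carry no progress information for the element.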
While significant advances have been made in research, applying these methods to full-scale
projects still requires (1) accounting for the lack of detail in 4D BIM, (2) addressing as-built
visibility issues, (3) creating large-scale libraries of construction materials that can be used for
appearance-based monitoring purposes, and (4) methods that can jointly leverage geometry,
appearance, and interdependency information in BIM for monitoring purposes.
The 3D interfaces also have a number of challenges that currently prevent their widespread
application. These interfaces show either the BIM or the 3D point cloud models; hence, it is
difficult to differentiate and highlight the changes between isolated as-planned and as-built
models. Also, the measurement tools in these interfaces are only capable of basic interactive
operations, such as 3D volumetric measurements from point cloud models. To facilitate
information sharing and communication, the integrated project models, produced by
superimposing BIM and the 3D point cloud models, should remain accessible to all onsite and
offsite practitioners. Without access to scalable and interactive web-based systems that can
support such functionalities, it will be difficult to use the integrated project models to support a
smooth flow of information among project participants.
Overall, there is a lack of an end-to-end formalized procedure that accounts for visual as-built
data collection, progress monitoring analytics, and visualization. In the following, a formal
procedure is presented that can address these limitations. The opportunities for further
research in each step of the procedure are discussed as well.
FORMAL METHODS FOR AUTONOMOUS MONITORING OF CONSTRUCTION PROGRESS
This section proposes a procedure for autonomous vision-based monitoring of construction
progress, which consists of 1) data collection using camera-equipped UAVs, 2) vision-based
analytics and comparison with 4D BIM for reasoning about progress deviations, and 3)
performance analysis and visualization. Figure 4 illustrates these steps in detail.
Data collection
Figure 5 shows two different procedures for collecting visual data using UAVs for construction
progress monitoring. In the first procedure, the images are directly used to produce 3D and 4D
point cloud models. In the procedure shown in Figure 5b, the 4D BIM is used to assist with the
data collection process. Since the 4D BIM contains information about where changes are
expected to happen on a construction site, it can serve as a great basis for identifying scanning
goals, path planning, and navigation. This strategy can also potentially address the current
limitations associated with path planning and navigation of UAVs in interior spaces and dense
urban areas. This is particularly important because current methods rely primarily on GPS for
UAV control, while in indoor scenes and dense urban areas reliable GPS is not accessible.
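As a sketch of how a 4D BIM schedule could drive scanning-goal selection, the following hypothetical example filters model elements to those whose scheduled activity window contains the flight date, since those are the locations where progress changes are expected. The `Element` record and its fields are illustrative assumptions, not an actual BIM API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Element:                 # hypothetical 4D BIM element record
    element_id: str
    start: date                # scheduled start of the associated task
    finish: date               # scheduled finish
    centroid: tuple            # (x, y, z) location used as a scan goal

def monitoring_goals(elements, flight_day):
    """Return scan targets for a flight: elements whose scheduled
    activity window contains the flight date."""
    return [e for e in elements if e.start <= flight_day <= e.finish]

goals = monitoring_goals(
    [Element("wall-01", date(2015, 5, 1), date(2015, 5, 10), (0, 0, 3)),
     Element("slab-02", date(2015, 6, 1), date(2015, 6, 15), (5, 5, 0))],
    date(2015, 5, 4))
# only "wall-01" is active on the flight date
```

The element centroids (plus a standoff distance) would then seed the waypoint and path planning step, rather than blanketing the whole site.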
To support complete and accurate image-based reconstruction, whether or not BIM is used for
data collection, images should be taken with an overlap of 60-70% or more. This rule of thumb
guarantees the detection of a sufficient number of visual features in each image, which is
typically required for standard Structure from Motion procedures. Taking images with such
overlaps requires adjusting and controlling the flight path and speed of the UAV. A BIM-guided
data collection process, as discussed in (Lin et al. 2015b), will certainly require fewer images
and can more intelligently assist with choosing the informative views necessary for progress
monitoring purposes.
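The overlap rule of thumb can be converted into a capture spacing for a nadir-facing camera. This minimal sketch assumes a flat surface below the UAV and a known along-track field of view; the function name and the 70% default are illustrative.

```python
import math

def waypoint_spacing(altitude_m, fov_deg, overlap=0.7):
    """Forward spacing between image captures so that consecutive photos
    share the given fractional overlap (nadir-facing camera over flat
    ground).
    altitude_m: flight height above the surface;
    fov_deg: camera field of view along the flight direction."""
    footprint = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# e.g., at 30 m altitude with a 60 degree along-track FOV and 70% overlap:
# waypoint_spacing(30, 60, 0.7) gives about 10.4 m between shots
```

Dividing this spacing by the desired capture interval of the camera yields the maximum flight speed for the segment, which is how the overlap requirement constrains path planning.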
Data analytics
Alignment of the images and BIM is the first step in vision-based analytics for construction
progress monitoring. In early research (Golparvar-Fard et al. 2012), a pipeline of image-based
reconstruction procedures consisting of Structure from Motion and Multi-View Stereo algorithms
was used to generate point clouds and then transform them into the BIM coordinate system.
However, the 3D point cloud models generated through these standard procedures are known
only up to scale. To recover the scale, a user-driven process is required to select at least three
correspondences between the BIM and the point cloud. Using these correspondences between
the 3D point cloud model and the BIM, a least-squares registration problem can be solved for
the 7 degrees of freedom (3 rotation, 3 translation, 1 uniform scale) to transform and scale the
point cloud into the BIM coordinate system. This manual process can be improved by
leveraging the BIM as a prior and adopting a constraint-based procedure for image-based 3D
reconstruction, which is shown in Figure 6 (Karsch et al. 2014). The results from preliminary
experiments in (Karsch et al. 2014) show that the accuracy and completeness of the image-
based 3D point clouds can be significantly improved by using the BIM as a prior (Figure 7).
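The 7-DOF least-squares registration described above can be sketched with Umeyama's closed-form solution. This is a generic implementation of the stated formulation, not the authors' code, and it assumes clean point correspondences have already been selected.

```python
import numpy as np

def similarity_registration(src, dst):
    """Least-squares 7-DOF (3 rotation, 3 translation, 1 uniform scale)
    alignment of src points onto dst points via Umeyama's method.
    src, dst: (N, 3) arrays of at least 3 corresponding points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance between the centered point sets
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0          # guard against a reflection solution
    R = U @ S @ Vt
    var_s = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_s   # uniform scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t

# Transform the whole point cloud into the BIM coordinate system:
# aligned = s * (R @ points.T).T + t
```

With noisy picks, more than three correspondences should be used, since the solution minimizes the sum of squared residuals over all of them.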
Geometry- and appearance-based progress monitoring analysis is the next step of data
analytics. As discussed in the previous section, current practices suffer from the challenges of
geometry- and appearance-based progress monitoring methods. To formalize the utilization of
integrated project models for progress monitoring purposes and to facilitate information flows,
the following practical challenges should be addressed first:
(1) Lack of detail in as-planned models: for accurate progress monitoring analytics, the as-
planned model should contain a BIM with a level of development of 400/450. It should also
reflect daily operational details within the work breakdown structure (WBS) of the schedule,
which is to be integrated with the BIM. Nevertheless, in today's practice of project controls,
daily operation-level tasks are often not reflected in the WBS. Their corresponding elements,
such as scaffolding and shoring, are also not typically represented in BIM. Hence, with these
models it is not easy to identify "who does which work at what location," especially for work
related to temporary structures and for detecting both geometry- and appearance-based
changes. Formalizing knowledge of construction sequencing and then enhancing the BIM with
a relevant reasoning mechanism can enhance current progress monitoring methods. Figure 8
illustrates an example of this issue. As can be seen in the figure, the rebar and formwork should
have been present in the as-planned models for accurate assessment of work in progress on
the placement of a concrete foundation wall.
Figure 8. The necessary LoD in BIM that can enable comparison between as-built and as-
planned for progress monitoring (Han et al. 2015)
(2) Limited visibility. Although taking images from different viewpoints, assisted by UAVs,
may reduce the challenges associated with lack of visibility of construction elements, any
progress monitoring method should still be able to reason about progress based on partial
element visibility. Formalizing knowledge of construction sequencing and integrating
that into BIM through a reasoning mechanism can address this issue to some degree. The
preliminary study conducted in (Han and Golparvar-Fard 2014b) shows that such formalized
knowledge can enhance the performance of vision-based progress monitoring methods by five
to seven percent. Figure 9 illustrates a case with a visibility issue that can be resolved by this
reasoning mechanism.
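The sequencing-based reasoning can be sketched in a few lines. This is an illustrative simplification with an assumed data model, not the authors' implementation: precedence knowledge says an element's predecessors must be complete before it can be built, so observing an element in the point cloud lets the system infer progress on occluded predecessors.

```python
def infer_progress(observed, predecessors):
    """observed: set of element ids detected as built in the point cloud.
    predecessors: dict mapping an element id to the ids that must be
    complete before that element can be built.
    Returns the set of elements inferred to be complete."""
    inferred = set(observed)
    stack = list(observed)
    while stack:
        e = stack.pop()
        for p in predecessors.get(e, []):
            if p not in inferred:
                inferred.add(p)   # occluded, but a successor was observed
                stack.append(p)
    return inferred
```

For example, if a slab is visible, the columns and footing it rests on can be marked complete even when they are no longer visible in the imagery.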
As a first step towards addressing these challenges, Lin et al. (2015a) introduce a new web
platform that visualizes as-built vs. as-planned models. This platform allows users to access
integrated project models on smartphones and tablets. To present large-scale point cloud
models in a convenient and scalable manner, given the limited memory available on commodity
smartphones and tablets, the point cloud models are structured in the form of nested octrees,
similar to Scheiblauer et al. (2015). Figure 10 shows the developed web platform for visualizing
as-built point clouds generated by a camera-equipped UAV together with the as-planned BIM.
Figure 10b in particular shows an example of the nested octree.
This data structure subsamples the point cloud and only shows relevant points depending on
the user viewpoint. The number of points projected to the same pixel on the screen is also
reduced according to the level of detail chosen by the user. Loading a point cloud with a density
of 10 million points takes only approximately 2 seconds on a standard commodity smartphone.
The manipulation method can also be switched between first-person and third-person view
controls. The platform is built on BIMServer (Beetz et al. 2010) to integrate 4D BIM (Figure 10
g-h) for visualization and information retrieval purposes. The semantic information necessary for
progress monitoring (e.g., expected construction materials and element inter-dependency
information) can be queried from the BIM and presented through the integrated platform. To
assist with documenting work in progress for tasks that do not have any geometrical
representation, several new tools were also created that allow the point cloud model to be
directly color coded and semantics such as “who does which work at what location” to be
recorded for various locations. This location-based method also allows the issues associated
with level of detail in BIM to be addressed, since users can now push and pull information from
any user-annotated location (with or without a BIM representation). Integrating additional
workflows, such as quality control as shown in Figure 1, is part of ongoing research.
DISCUSSION
The integrated project model presented in the previous section can bridge the gap in information
sharing between the downstream feedback (onsite activities) and the short-term planning
(coordination meeting). By providing a near real-time visualization of ongoing work, particularly
who does which work at what location, it allows the most updated status of work in progress to
be communicated among all parties on and offsite. The platform shows potential in supporting
easy and quick identification of potential performance problems. It can also support root-cause
analysis discussions in coordination meetings, ultimately leading to a smoother flow of
production in construction. Instead of measuring and communicating retrospective progress
metrics such as the EVA or Percent Plan Complete (PPC) metrics, intuitive and data-driven
communication of work in progress can allow for measuring progress based on more
informative metrics such as task maturity or Task Anticipated (TA) and Tasks-Made-Ready
(TMR) (Hamzeh et al. 2015).
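The contrast between the retrospective and the more proactive metrics can be made concrete with a small sketch. These are simplified set-based stand-ins assumed from the metric names (Hamzeh et al. 2015 give the precise definitions): PPC looks backward at completed work, while TA and TMR look forward at how well lookahead tasks are anticipated and made ready.

```python
def ppc(planned, completed):
    """Percent Plan Complete: share of weekly planned tasks completed."""
    return len(planned & completed) / len(planned)

def tasks_anticipated(lookahead, weekly_plan):
    """TA (simplified): share of weekly-plan tasks that were anticipated
    in the lookahead plan some weeks earlier."""
    return len(weekly_plan & lookahead) / len(weekly_plan)

def tasks_made_ready(lookahead, ready):
    """TMR (simplified): share of lookahead tasks made ready
    (constraint-free) by the time the weekly plan is built."""
    return len(lookahead & ready) / len(lookahead)
```

Fed from the integrated project model, the same observed work-in-progress data could drive all three, shifting the conversation from what was completed to what is being made ready.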
Figure 10. Web platform for visualizing as-built point clouds generated by camera-
equipped UAV and as-planned BIM. (a) as-built point cloud; (b) the nested octree for pre-
processing and visualizing point cloud in a scalable manner; (c) the location and
orientation of the UAV-mounted camera when images were taken derived from the
Structure from Motion algorithm; (d) viewing the point cloud models through one of the
camera viewpoints and texture mapping the frontal face of the camera frustum; (e) area
measurement tool which operates directly on the as-built point cloud; (f) color coding
part of the as-built point cloud for communicating who does which work at what location;
(g) and (h) integrated visualization of the as-built point cloud and the as-planned BIM for
two different construction projects.
CONCLUSION AND FUTURE WORK
This paper presents a formalized procedure utilizing images taken by camera-equipped
UAVs together with BIM for generating integrated project models for construction progress
monitoring. The various aspects of data collection, analysis, and communication as they pertain
to these integrated project models were discussed. These integrated project models have
potential for enhancing information flow and improving situational awareness on construction
projects. They can also support identifying and removing work constraints in coordination
meetings, and measuring proactive metrics such as task maturity, Tasks Anticipated (TA), or
Tasks Made Ready (TMR). The current limitations in data collection and analysis were
discussed as well. Ongoing research is focused on addressing these limitations and on
conducting pilot projects that use the proposed procedures to create integrated project models
and validate their potential for smoothing information flow and improving situational awareness
on construction projects.
ACKNOWLEDGMENTS
This research is financially supported by the National Science Foundation (NSF) Grant CPS
#1446765. Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of the National
Science Foundation. The technical support of the industry partners in providing access to their
sites for data collection and assisting with progress monitoring analytics is also appreciated.
REFERENCES
Bae, H., Golparvar-Fard, M., and White, J. (2013). “High-precision vision-based mobile
augmented reality system for context-aware architectural, engineering, construction and
facility management (AEC/FM) applications.” Visualization in Eng, Springer, 1(1), 1-13.
Bae, H., Golparvar-Fard, M., and White, J. (2014). “Image-Based Localization and Content
Authoring in Structure-from-Motion Point Cloud Models for Real-Time Field Reporting
Applications.” Journal of Computing in Civil Engineering, DOI:
10.1061/(ASCE)CP.1943-5487.0000392, B4014008, 637–644.
Beetz, J., van Berlo, L., de Laat, R., and van den Helm, P. (2010). “bimserver.org: An Open
Source IFC Model Server.” Proceedings of the CIB W78 2010: 27th International
Conference, Cairo, Egypt, 16–18.
Bosché, F., Ahmed, M., Turkan, Y., Haas, C. T., and Haas, R. (2014). “The value of integrating
Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser scanning
and BIM: The case of cylindrical MEP components.” Automation in Construction, 49, 201-
213, Elsevier.
Brilakis, I., Fathi, H., and Rashidi, A. (2011). “Progressive 3D reconstruction of infrastructure
with videogrammetry.” Automation in Construction, Elsevier, 20(7), 884–895.
Golparvar-Fard, M., Pena-Mora, F., and Savarese, S. (2009). “D4AR- A 4-Dimensional
Augmented Reality model for automating construction progress data collection, processing
and communication.” Journal of ITCON, 14(1), 129–153.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2011). “Integrated Sequential As-Built
and As-Planned Representation with Tools in Support of Decision-Making Tasks in the
AEC/FM Industry.” Journal of Construction Engineering and Management. 137(12), 1-21.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2012). “Automated Progress Monitoring
Using Unordered Daily Construction Photographs and IFC-Based Building Information
Models.” Journal of Computing in Civil Engineering,
10.1061/(ASCE)CP.1943-5487.0000205.
Hamzeh, F. R., Saab, I., Tommelein, I. D., and Ballard, G. (2015). “Understanding the role of
‘tasks anticipated’ in lookahead planning through simulation.” Automation in Construction,
49, Part A(0), 18–26.
Han, K., and Golparvar-Fard, M. (2014a). “Multi-Sample Image-Based Material Recognition and
Formalized Sequencing Knowledge for Operation-Level Construction Progress Monitoring.”
Computing in Civil and Building Engineering, I. Raymond and I. Flood, eds., American
Society of Civil Engineers, 364–372.
Han, K., and Golparvar-Fard, M. (2014b). “Multi-Sample Image-Based Material Recognition and
Formalized Sequencing Knowledge for Operation-Level Construction Progress Monitoring.”
International Conference on Computing in Civil and Building Engineering, 2014, 364–372.
Han, K., Lin, J., and Golparvar-Fard, M. (2015). “Model-driven Collection of Visual Data using
UAVs for Automated Construction Progress Monitoring.” Int’l Conference for Computing in
Civil and Building Engineering 2015, Austin, TX June 21-23.
Karsch, K., Golparvar-Fard, M., and Forsyth, D. (2014). “ConstructAide: analyzing and
visualizing construction sites through photographs and building models.” ACM
Transactions on Graphics (TOG), ACM, 33(6), 176.
Lin, J., Han, K., Fukuchi, Y., Eda, M., and Golparvar-Fard, M. (2015b). “Model-Based Monitoring
of Work-in-Progress via Images Taken by Camera-Equipped UAV and BIM.” 2nd
International Conference on Civil and Building Engineering Informatics, Tokyo, Japan.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2011). “Monitoring changes of 3D
building elements from unordered photo collections.” Computer Vision Workshops (ICCV
Workshops), 2011 IEEE International Conference on, 249–256.
Scheiblauer, C., Zimmermann, N., and Wimmer, M. (2015). “Workflow for Creating and
Rendering Huge Point Models.” Fundamentals of Virtual Archaeology: Theory and
Practice, A K Peters/CRC Press.
Siebert, S., and Teizer, J. (2014). “Mobile 3D mapping for surveying earthwork projects using an
Unmanned Aerial Vehicle (UAV) system.” Automation in Construction, Elsevier, 41, 1–14.
Yang, J., Park, M.-W., Vela, P. A., and Golparvar-Fard, M. (2015). “Construction performance
monitoring via still images, time-lapse photos, and video streams: Now, tomorrow, and the
future.” Advanced Engineering Informatics, Elsevier. 29 (2), 211–224.
Zollmann, S., Hoppe, C., Kluckner, S., Poglitsch, C., Bischof, H., and Reitmayr, G. (2014).
“Augmented Reality for Construction Site Monitoring and Documentation.” Proceedings of
the IEEE, IEEE, 102(2), 137–154.
Exploratory Study on Factors Influencing UAS
Performance on Highway Construction Projects: as
the Case of Safety Monitoring Systems
Sungjin Kim and Javier Irizarry
School of Building Construction, College of Architecture
Georgia Institute of Technology
280 Ferst Drive, 1st Floor
Atlanta, GA, 30318
Email: [email protected], [email protected]
ABSTRACT
Highways are one of the most important infrastructure systems that support our society.
Infrastructure construction projects, particularly highway projects, have become increasingly
large and complex. It is critical that project managers protect workers from accidents by
providing a safe work environment. Therefore, safety managers should frequently monitor the
worksite conditions and prevent accidents on construction sites. Unmanned Aerial Systems
(UASs) have great potential for flying over and monitoring highway construction sites, since the
horizontal layout of these jobsites presents few obstacles that disturb UAS operation. A
comprehensive literature review was conducted to refine the factors influencing the performance
of UASs as safety monitoring systems on the jobsite. A questionnaire survey was then
developed to analyze UAS implications on highway projects and critical factors, based on 29
derived potential factors and 17 benefits that can serve as performance measure attributes. As
a first step toward understanding UASs in construction environments, this exploratory research
contributes to defining the critical factors influencing UAS performance in highway construction
environments.
INTRODUCTION
Infrastructure construction projects require a significant amount of time, capital and human
resources during the project life cycle. These projects also require more advanced construction
management systems to assist project managers with the significant challenges faced on
continuously evolving worksites. One of the most significant challenges on infrastructure sites
is construction workers’ safety. Even with the progress made in site safety, laborers are still
exposed to hazardous conditions. The Occupational Safety and Health Administration (OSHA)
in the United States requires project owners and contractors to prevent fatal worker accidents
on worksites. According to the U.S. Bureau of Labor Statistics, 796 construction-related fatal
injuries occurred in 2013 (BLS 2015). This statistic indicates that the fatality rate in the
construction industry remains a major concern in the United States.
One of the safety managers’ main responsibilities is to observe workers’ behavior, work
sequencing and the jobsite conditions to prevent workers from having construction-related
accidents. For the last few years, many researchers have developed advanced Information
Technology (IT)-based safety monitoring systems to reduce accidents on construction sites. Lin
et al. (2013) developed a workers’ behavior monitoring system for construction sites, and
Naticchia et al. (2013) developed a real-time unauthorized-interference control system based on
RFID.
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
As an important part of our infrastructure, highway construction projects
have unique characteristics. These horizontal construction projects can extend over long
distances. Hence, they need monitoring systems that can cover safety issues across these
extensive construction worksites and their surrounding environments.
A proposed method to address this issue is the use of Unmanned Aerial Systems (UASs),
which are remotely operated without an onboard operator. UASs can maximize the efficiency of
safety monitoring in highway construction projects because no vertical obstacles affecting UAS
operations would be present. UASs can easily monitor all site areas by circling the site under a
safety manager’s control and delivering real-time images and videos. This technology has been
used for monitoring and inspection in various industries, for example, monitoring soil erosion
(D’Oleire-Oltmanns et al. 2012), monitoring forest fires (Hinkley and Zajkowski 2011), and
bridge inspection (Eschmann et al. 2012; Morgenthal and Hallermann 2014). In particular,
Irizarry et al. (2012) developed the concept of a UAS-based safety inspection system for
construction.
However, the use of UASs on construction sites is in its very early stages; their impact on
construction management tasks is not known, and the implications of UAS use in construction
are not fully understood. Although UASs are being researched in various industries around the
world, very few studies on the impact of UAS applications in construction have been carried out.
In order to understand the potential of this technology in construction environments, measuring
and evaluating UAS performance for tasks such as safety monitoring is necessary to support
the implementation of UAS technology in all types of construction projects. This research
defines critical factors influencing UAS performance through an extensive literature review,
surveys, and interviews with construction and IT professionals. The study aims to define
performance measurement factors for UASs in construction safety monitoring tasks on highway
construction or other infrastructure projects. Furthermore, the main findings will contribute to our
understanding of the potential impact of adopting UAS technology in the Architecture,
Engineering, Construction and Operations (AECO) industry.
PROBLEM STATEMENT
Zhou et al. (2013) comprehensively reviewed 119 studies on advanced safety management
technology published between 1986 and 2012. Even though the number of studies focusing on
IT-based safety control systems has dramatically increased over two decades (Figure 1A),
construction accident rates have remained unstable and frequent in the AECO industry (Figure
1B). In addition, the construction injury rate is a major concern, with this industry having the
highest fatality share, 21 percent across all industries in the United States (Figure 1C) (BLS
2015). Zhou et al. (2013) concluded that future studies should pay more attention to proactive
safety management systems, such as safety monitoring or information systems, rather than
reactive systems, to achieve the goal of zero accidents on worksites.
As one of the proactive safety control systems, Irizarry et al. (2012) suggested the concept of a
UAS safety inspection and monitoring system for the AECO industry. They proposed that a UAS
could provide aerial images as well as real-time video from a range of locations around the
jobsite to safety managers with fast access, since it was equipped with a high-resolution camera
and a wireless system. Even though UAS research has been of increasing interest to
practitioners in the AECO industry, the concept’s application has not been thoroughly
investigated. Moreover, researchers have not yet measured UASs’ effects in various
construction environments. For this reason, measurement and evaluation of the performance of
UAS-based safety monitoring systems is needed. In addition, many important factors associated
with UAS performance have not been identified. Therefore, there are gaps in most practitioners’
understanding of which factors can affect the performance of a UAS system. This paper
answers the research questions stemming from three main research problems and fills the
aforementioned gaps (Table 1). The goal of this study is to define critical factors that may
contribute to or influence the performance of a UAS-based monitoring system through industry-
wide surveys.
Table 1. Research problems, questions, and gaps

Research problem 1: UASs’ potential applications (e.g., safety monitoring tasks) are not
understood.
Research question 1: How can a UAS be applied for safety monitoring or inspection in
construction?

Research problem 2: Factors that may contribute to the performance of UASs, and the benefits
of UASs, have not been investigated.
Research question 2: What are the critical factors that may influence the performance of UASs
in safety monitoring?
Research question 3: What are the potential benefits of UASs for safety monitoring, or the
potential attributes to evaluate the performance of UASs?

Research gaps: (1) what factors can affect the performance of UASs in safety monitoring tasks,
(2) how they affect it, and (3) what their impacts are on the highway construction worksite.
Lee et al. (2009) developed a mobile safety monitoring system based on a hybrid sensor in
order to reduce the rate of fatal accidents on construction sites. Prabhu (2005) developed a
Tablet PC application to inspect jobsites for safety issues and prevent fatalities. Leung et al.
(2008) developed a monitoring system based on network cameras that can be used in a web
environment to observe the quality of construction work. Lin et al. (2013) developed a model for
tracking workers’ behavior on a large-dam construction site. Naticchia et al. (2013) reviewed
previous literature and developed a monitoring system based on ZigBee technology to detect
unauthorized worker behavior in real time. Table 2 summarizes previous research on safety
monitoring systems.
UAS Applications
Unmanned Aerial System (UAS)
Unmanned systems have brought great benefits to military missions and, most recently, to
civilian life. These systems are unmanned hardware platforms equipped to perform data
collection and sometimes processing, and to operate without direct human intervention.
Examples of UAS applications across industries include:

Meteorology: observing the atmospheric boundary layer (Reuder et al. 2009)
Archeology: monitoring the daily work of an excavation (Rinaudo, Chiabrando, and Lingua
2012); creating 3D models of archaeological heritage (Saleri et al. 2013)
Infrastructure: bridge inspection (Eschmann et al. 2012; Morgenthal and Hallermann 2014);
building and facility inspection (Eschmann et al. 2012)
RESEARCH METHODOLOGY
This exploratory study, aimed at defining and analyzing factors that may influence the
performance of a UAS-based safety monitoring system, was conducted in three main phases,
as shown in Figure 3: (1) development of a conceptual factor model, (2) development and
administration of a survey instrument, and (3) analysis and definition of factors. The following
sections describe the steps of the study in more detail.
The first phase began with a comprehensive literature review and interviews to develop an initial
factor model. The UAS-based safety inspection concept of Irizarry et al. (2012) was used in this
study, the factor analysis for implementing a safety management system of Ismail et al. (2012)
was performed, and the method of Nitihamyong and Skibniewski (2006) for defining success or
failure factors and measures of a web-based project management system was adopted.
RESULTS
Developing Conceptual Factor model
A total of 46 potential factors were derived based on the three previous studies and input from
interviews with IT and construction professionals. These comprise 29 potential factors and 17
potential benefit attributes. The three main factor groups are (1) UAS system features, (2)
project team features, and (3) project features that may impact the performance of a UAS
safety monitoring system in highway construction projects. The particular UAS system may
influence performance, since specifications such as the controller, battery, and camera vary;
these need to be considered in order to develop an optimal safety monitoring system. In
addition, implementing such a system on construction projects requires various conditions, such
as interest, training, knowledge, or experience, to be considered; this is one of the most
important considerations in defining factors for successful UAS performance. All control systems
for construction management involve different functions, environments, and benefits on
construction projects. Projects have varied conditions and features, so these should be
considered when implementing a UAS and measuring its performance. In this paper, the project
type is defined as a highway construction project and the owner type as a government agency.
In addition, there are several other measures, such as benefits of performance, that should be
considered when evaluating the performance of UAS-based safety monitoring systems in
highway construction environments. These are the anticipated impacts when the UAS safety
monitoring system is implemented. Finally, a conceptual factor model was developed, as
described in Figure 4. Table 4 summarizes the potential factors associated with the three factor
groups, and Table 5 describes the potential attributes related to the benefits or performance of
the UAS safety monitoring system.
Almost all respondents (80%) had positive expectations for the use of a UAS safety monitoring
system in highway construction environments. They generally agreed that a UAS would be a
good fit for safety monitoring on highway construction sites, since these are more horizontal,
open sites that can extend for miles. In addition, they suggested that UASs can access areas
humans sometimes cannot directly reach, can monitor work flow, jobsite logistics, and material
stocking on the jobsite, and have the capability to reduce the costs of safety monitoring. On the
other hand, only two respondents (20%) were concerned that the FAA would not legally allow
the use of UASs on construction sites and that the battery life of a UAS would limit its
effectiveness for safety monitoring tasks.
Since 40% of respondents had no experience with this tool, this could affect the reliability of the
results of this analysis. Therefore, this paper presents only an exploratory analysis and
preliminary results. Continued research will include field flight tests with a small UAS platform, in
which project personnel will participate and then be asked to provide their opinions about the
factors and performance of UAS applications on highway construction projects.
However, more than half of respondents did not know or disagreed that two factors, (1) the UAS
system’s easy upgradability without the manufacturer’s services and (2) the subcontractor
trades involved, contribute to the performance of the UAS safety monitoring system on highway
construction projects. In addition, one-third of the professionals did not know whether project
duration, complexity of construction tasks, and user type (safety manager or hired pilot) were
relevant considerations for UAS performance in safety monitoring tasks on jobsites. Since the
number of responses to this survey is small, these factors must be reconsidered after collecting
more data from construction and UAS practitioners. Table 6 provides details on the magnitude
of the impact of the factors based on the responses collected.
Regarding the benefit and performance attributes, 14 out of 15 respondents agreed that
effective monitoring of whole work areas could be achieved with a UAS safety monitoring
system on highway construction projects. The most apparent benefits or performance
measures, each with 87% of respondents in agreement, are (1) effective monitoring of traffic
and vehicles on sites, (2) effective monitoring of heavy or hoisting equipment on sites, and (3)
enhanced identification of actions related to potential hazards on sites. One explanation is that
use of a small UAS platform on highway construction sites can be a potential safety control
system and contribute to effective monitoring of large worksites. However, reducing the time it
takes to perform safety monitoring and simplifying the documentation process (or reducing time
wasted on documentation) were not considered benefits of utilizing a small UAS. In addition,
respondents were doubtful that a UAS monitoring system can improve overall project
performance or reduce the number of construction accidents on highway construction
worksites. Table 7 provides more details about UAS benefits and performance measures.
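The agreement statistics reported above (e.g., 14 of 15, 87%) amount to simple tallies over Likert-scale responses. A hypothetical sketch of such a tally, with factor names and the five-point agreement coding assumed for illustration only:

```python
def agreement_rate(responses, agree_levels=(4, 5)):
    """Share of Likert responses (1 = strongly disagree ... 5 = strongly
    agree) that indicate agreement."""
    return sum(1 for r in responses if r in agree_levels) / len(responses)

def rank_factors(survey):
    """survey: dict mapping a factor or benefit name to its list of
    Likert responses. Returns (name, agreement rate) pairs sorted by
    agreement rate, highest first."""
    rates = {f: agreement_rate(rs) for f, rs in survey.items()}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
```

With 15 respondents, a factor on which 14 agree scores 14/15 (about 93%), which is how the ranking of apparent benefits above would be produced.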
A total of 29 potential factors that may influence the performance of a UAS monitoring system,
particularly for safety condition monitoring in highway construction environments, were derived.
In addition, 17 expected benefits, which can be used for evaluating the performance of a UAS
safety monitoring system on jobsites, were also derived in this paper. Even though a limitation
was revealed during data collection, this paper presents two main preliminary results. First, UAS
system characteristics and project team characteristics can certainly influence UAS
performance for safety monitoring tasks on highway construction projects. This is likely because
there have been few real implementations of UASs on actual construction sites, so the
professionals mostly considered technical factors affecting UAS performance.
Second, the potential benefits of UAS applications on highway construction projects, or the
performance measure attributes, were not well understood; it was difficult for many survey
participants to give definite answers. For this reason, field tests should be conducted with a
UAS platform on highway construction projects, because the construction project personnel did
not have experience with UASs for safety monitoring on the jobsite. Through such tests,
additional survey participants who have experience with UASs on highway construction projects
could be included, improving the reliability of the collected data for future studies.
CONCLUSIONS
The application of UASs in the construction industry is relatively new, yet the technology can
bring many benefits to industry practitioners. A UAS can be utilized as a safety monitoring
system on highway construction projects when equipped with a high-resolution camera and an
array of sensors, including a Global Positioning System (GPS) and a gyroscope. Highway
construction environments are well suited to flying small UASs around the jobsite, since such
projects can extend over long distances without vertical obstructions. Nevertheless, there is
doubt about the effectiveness of UAS safety monitoring systems on highway projects because
the factors influencing their performance are not fully understood. Since most practitioners are
still uncertain about the benefits and performance of UASs, measuring the performance and
improvement of UAS implementations is very difficult at present.
This exploratory study has discussed the results of a questionnaire on professionals’
perceptions of a UAS-based system for monitoring safety conditions in highway construction
environments. The perceptions of 15 industry professionals working in the United States were
collected to analyze the critical factors and performance attributes of the UAS system. As a
result, 29 factors that may influence the performance of UAS safety monitoring and 17 attributes
for evaluating that performance were analyzed as the initial step of this UAS research in the
AECO industry. The analysis resulted in two main research findings: first, UAS system and
project team characteristics can influence UAS performance for safety monitoring; second, the
potential benefits and performance measures are not yet well understood by practitioners.
This exploratory study contributes to establishing the initial group of factors that may influence
the performance of a UAS-based safety monitoring system in highway construction
environments. In addition, as a beginning toward understanding UAS applications in the
construction industry, the potential benefits of the system and its performance measures were
identified. This is an important step toward increasing UAS implementation as well as a guide
for developing UAS-based system strategies in the AECO industry.
REFERENCES
BLS. (2015). “Revisions to the 2013 Census of Fatal Occupational Injuries (CFOI) counts.” BLS,
(August).
D’Oleire-Oltmanns, S., Marzolff, I., Peter, K. D., and Ries, J. B. (2012). “Unmanned aerial
vehicle (UAV) for monitoring soil erosion in Morocco.” Remote Sensing, 4(11), 3390–
3416.
Eschmann, C., Kuo, C., and Boller, C. (2012). “Unmanned Aircraft Systems for Remote Building
Inspection and Monitoring.” Proceedings of the 6th European Workshop on Structural
Health Monitoring, July 3-6, 2012, Dresden, Germany, 1–8.
FAA. (2015a, May 10). Public Operations - Certificate of Waiver or Authorization (COA).
FAA. (2015c, May 10). Authorizations Granted Via Section 333 Exemptions.
Hinkley, E. A., and Zajkowski, T. (2011). “USDA forest service–NASA: unmanned aerial
systems demonstrations–pushing the leading edge in fire mapping.” Geocarto
International, 26(2), 103–111.
Irizarry, J., Gheisari, M., and Walker, B. N. (2012). “Usability assessment of drone technology
as safety inspection tools.” Electronic Journal of Information Technology in Construction,
17(September), 194–212.
Ismail, Z., Doostdar, S., and Harun, Z. (2012). “Factors influencing the implementation of a
safety management system for construction sites.” Safety Science, Elsevier Ltd, 50(3),
418–423.
Jenkins, D., and Vasigh, B. (2013). “The economic impact of unmanned aircraft systems
integration in the United States.” (March), 1–40.
Lee, U.-K., Kim, J.-H., Cho, H., and Kang, K.-I. (2009). “Development of a mobile safety
monitoring system for construction sites.” Automation in Construction, Elsevier B.V.,
18(3), 258–264.
Leung, S. W., Mak, S., and Lee, B. L. P. (2008). “Using a real-time integrated communication
system to monitor the progress and quality of construction works.” Automation in
Construction, 17(6), 749–757.
Lin, P., Li, Q., Fan, Q., and Gao, X. (2013). “Real-time monitoring system for workers’ behaviour
analysis on a large-dam construction site.” International Journal of Distributed Sensor
Networks, 2013.
Morgenthal, G., and Hallermann, N. (2014). “Quality assessment of Unmanned Aerial Vehicle
(UAV) based visual inspection of structures.” Advances in Structural Engineering, 17(3),
289–302.
Newcome, L. R. (2004). Unmanned aviation: a brief history of unmanned aerial vehicles. AIAA.
Nisser, T., and Westin, C. (2006). “Human factors challenges in unmanned aerial vehicles
(UAVs): A literature review." School of Aviation of the Lund University, Ljungbyhed.
Reuder, J., Brisset, P., Jonassen, M., Müller, M., and Mayer, S. (2009). “The Small Unmanned
Meteorological Observer SUMO: A new tool for atmospheric boundary layer research.”
Meteorologische Zeitschrift, 18(2), 141–147.
Saleri, R., Cappellini, V., Nony, N., Pierrot-Deseilligny, M., Bardiere, E., Campi, M., and De
Luca, L. (2013). “UAV photogrammetry for archaeological survey: The Theaters area of
Pompeii.” Digital Heritage International Congress (DigitalHeritage), 2013, 497–502.
Sarhan, S., and Fox, A. (2013). “Performance measurement in the UK construction industry and
its role in supporting the application of lean construction concepts.” Australasian Journal
of Construction Economics and Building, 13, 23–35.
Sunkara, P. (2005). “A Tablet PC Application for Construction Site Safety Inspection and
Fatality Prevention.” Louisiana State University.
Toole, T. M. (2002). “Construction Site Safety Roles.” Journal of Construction Engineering and
Management, 128(3), 203–210.
Zhou, Z., Irizarry, J., and Li, Q. (2013). “Applying advanced technology to improve safety
management in the construction industry: a literature review.” Construction Management
& Economics, 31(6), 606–622.
Megan A. Kreiger
Construction Engineering Research Laboratory
U.S. Army Engineer R&D Center
2902 Newmark Dr
Champaign, IL 61822
[email protected]
Bruce A. MacAllister
Construction Engineering Research Laboratory
U.S. Army Engineer R&D Center
2902 Newmark Dr
Champaign, IL 61822
[email protected]
Juliana M. Wilhoit
Construction Engineering Research Laboratory
U.S. Army Engineer R&D Center
2902 Newmark Dr
Champaign, IL 61822
[email protected]
Michael P. Case
Construction Engineering Research Laboratory
U.S. Army Engineer R&D Center
2902 Newmark Dr
Champaign, IL 61822
[email protected]
ABSTRACT
Over the past few years, 3D printing has become a household term, but in the near future this
term may be used to describe the method of construction of the house you’re sitting in. There
are several groups working on developing printers capable of 3D printing building structures.
Researchers, corporations, and makers are working and experimenting with extruding foam,
mortar, concrete, and other materials to print things such as huts, castles, apartments, and
Army structures. Part of this effort is being done by the Engineer Research and Development
Center – Construction Engineering Research Laboratory (ERDC-CERL) in collaboration with
NASA, to create structures for U.S. Army contingency bases using contour crafting. This paper
provides an overview of additive manufacturing techniques applied to the construction of
buildings and building components using extrudable construction materials and discusses the
motivation for research in this field by ERDC-CERL. The use of 3D printing in construction of
structures has the potential to change construction methods and the buildings that surround us
in terms of labor requirements, logistics, energy performance, and speed.
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
Work done by the U.S. Government
INTRODUCTION
Advances in building construction are a result of experience and education of accountable
parties in the industry, which is preceded by a large body of research on the subject.
Uncertainties that arise from the acceptance of alternative practices, as well as the
interdependence of professional knowledge, code regulations, and interdisciplinary
relationships, lead to complexities that often result in slow adoption of new technologies
(Dubois and Gadde 2001; Khoshnevis 2004; Balaguer and Abderrahim 2008; Blayse and
Manley 2015). Much of the focus in construction advancement in past decades has been placed
on energy efficiency, fostered by related programs such as Energy Star (Energy Star 2013),
LEED (U.S. Green Building Council 2015), and the development of international regulations
(International Code Council 2014; International Code Council 2015). Projects have
conventionally maintained the application of industry accepted construction methods associated
with the use of common building materials (e.g., concrete masonry units, cast-in-place
concrete, wood, and steel). However, with greater access to technology, research in automated
construction processes has become increasingly popular.
Masonry construction dominates military building projects domestically and internationally. The
costs associated with traditional construction techniques are ever increasing, in large part due to
the high price of transporting materials to job sites and increases in labor costs. Worker
productivity drives both the cost and time to completion of structures built using concrete
masonry units (CMU). For example, consider a small structure measuring 16’ wide, 32’ long and
8’ tall with two doorways built using CMU. Assuming standard grouting requirements and that a
mason with the assistance of one laborer lays 200 CMU blocks per work day using 8”x8”x16”
blocks, the outside walls of the structure could be completed in approximately two days (Scott
2009). The outer walls of a wood framed structure of this size would take at least 1 day to
complete, with a team consisting of three carpenters and one laborer in order to lift heavy
sections without additional equipment (Freund 2004; “Cost to Frame Wall - Zip Code 47474”
2015). A concrete structure generally takes at least 5 days to cure before any construction or
service loads can be applied, in addition to forming, rebar placement, pouring, form removal,
and other requirements (Simmons 2011). These estimates include only the exterior wall
structure construction time and do not include any time for framing the doorway, provisions for
the roof, reinforcement, internal walls of the structure, exterior cladding or finishing. While this
may be acceptable in many cases, there are situations where this process is too slow and
costly.
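As a rough back-of-envelope check on the CMU example above (a sketch under stated assumptions: doorway openings are ignored, each nominal 8”x8”x16” block covers an 8”x16” wall face, and one mason-and-laborer crew lays 200 blocks per day):

```python
# Rough CMU count for the 16' x 32' x 8' example structure.
# Assumptions (not from the paper): doorways ignored; each nominal
# 8" x 8" x 16" block covers an 8" x 16" face of wall.
width_ft, length_ft, height_ft = 16, 32, 8

perimeter_ft = 2 * (width_ft + length_ft)          # 96 ft of wall
wall_area_sqft = perimeter_ft * height_ft          # 768 sq ft
block_face_sqft = (8 / 12) * (16 / 12)             # ~0.89 sq ft per block

blocks = round(wall_area_sqft / block_face_sqft)   # 864 blocks
crew_days = blocks / 200                           # ~4.3 crew-days for one crew

print(blocks, round(crew_days, 1))
```

Note that a single crew at this rate needs closer to four days than two; the cited estimate also depends on crew count, doorway deductions, and grouting assumptions, so treat these figures as illustrative only.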
The U.S. Army has unique requirements when constructing contingency bases in remote and
potentially hostile environments. These requirements include minimizing the number of Soldiers
or civilian contractors required to be on-site, logistics for shipping of construction materials
(typically lumber) and energy-related fuel, and time to occupancy. As the U.S. Army reduces the
size of the force, requiring fewer personnel for construction frees Soldiers from the
construction itself and from the associated security duties. Today’s semi-permanent contingency bases typically use
lumber-based construction, with materials often purchased and shipped over large distances.
Concrete is locally available in much of the world and a capability to use cementitious or other
indigenous materials without a need for form work would substantially reduce logistics for
construction materials. In addition to construction materials, tents and wooden structures in
contingency bases do not offer the high levels of energy efficiency found in today’s domestic
and commercial markets. Energy to heat, cool, and ventilate these structures must be
transported in the form of fuel (usually JP-8), resulting in high costs, increased convoy
requirements and higher exposure of personnel to attack. Well-built structures would mitigate
this risk by decreasing energy requirements.
3D PRINTING
3D printing provides a potential solution to U.S. Army contingency base requirements. Although
many people may associate 3D printing with plastic trinkets, biological applications, or even
printing automobiles, it is capable of much more. As is common in construction, from
skyscrapers to sports stadiums, two words often apply: “think larger.” The use of large-scale
3D printing is often overlooked, as it is a relatively new application of this technology. In
the next few years this term may be associated with the buildings around us. Researchers,
corporations, and hobbyists are working and experimenting with extruding foam, mortar,
concrete, and other materials to print building structures such as huts, a castle, apartments,
and Army bases or shelters.
While 3D printing or additive manufacturing (AM) technology was developed in the 1980s, the
field has expanded dramatically over the past 5 years. It has had an impact on many fields and
applications in ways that were previously viewed as too costly or impossible using traditional
manufacturing methods.
3D printing is generally used to reduce material requirements, create custom objects, reduce
labor, and enable rapid prototyping. 3D printing of building structures shares these aims,
with potential to reduce the amount of materials needed to be shipped to the build site, create
unique buildings, reduce the number of laborers required, and provide a capability to print the
buildings in a relatively short amount of time.
The various groups working on 3D printing buildings have different goals. Some are focused on
a more experimental setup, others on creating a new form of technology by making an efficient
system, and others are focused on looking for practical solutions to building construction.
Different materials are used in each project. Some methods focus on the creation of molds for a
building by creating a cavity to fill with an alternate material, whereas others focus on directly
printing the structure with the print materials. There are also a number of projects using small-
scale printers and objects (e.g. building blocks) to build large structures, but relatively few which
attempt to build large sections or the entire building at one time (Williams 2015). There are
multiple research projects being done by groups all over the world in an effort to effectively 3D
print large-scale structures. This paper reviews and summarizes this work in the next section.
The Engineer Research and Development Center (ERDC) is investigating the use of this
method for the construction of contingency base shelters. In cooperation with the University of
Southern California and NASA, researchers have constructed a number of prototypes that use
3D printing technology to test robotic construction methods using locally sourced concrete
materials. ERDC is currently working towards an on-site construction concrete 3D printer that
will be able to print a 16’x32’x8’ structure within 24 hours. A conceptual representation of the
equipment and process is shown in Figure 1.
CURRENT RESEARCH
Fused-Filament Fabrication
Fused filament fabrication (FFF), also known by other more proprietary terms, is the most
readily available type of 3D printer in households and businesses due to its maker (a community
dedicated to fabricating objects) and open-source following. It uses a filament-type material
which is heated through an extruder and deposited as a string of material on the build platform.
After the first layer is printed, the printer then repeats the process, building the next layer on the
previous one. An example of this type of deposition process is shown in Figure 2.
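The layer-by-layer deposition loop described above can be sketched as a toy G-code generator (a hypothetical helper for illustration; real slicers also compute extrusion amounts, infill, and supports):

```python
def perimeter_gcode(width, depth, layer_h, n_layers, feed=1200):
    """Toy FFF toolpath: trace a rectangular perimeter at each layer height.

    Illustrative only -- extrusion (E) values, temperatures, and travel
    moves that a real printer needs are omitted.
    """
    lines = []
    for i in range(n_layers):
        z = layer_h * (i + 1)
        lines.append(f"G1 Z{z:.2f} F{feed}")  # lift to the next layer
        # Trace the four walls and close the loop back at the origin.
        for x, y in [(0, 0), (width, 0), (width, depth), (0, depth), (0, 0)]:
            lines.append(f"G1 X{x:.1f} Y{y:.1f} F{feed}")
    return lines

path = perimeter_gcode(width=20.0, depth=10.0, layer_h=0.2, n_layers=3)
print(len(path))  # 3 layers x (1 Z move + 5 XY moves) = 18 commands
```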
3D Print Canal House
In this project, DUS Architects are using FFF to print building components, some as large as
2m x 3.5m, out of polymer beads instead of the traditional spools. The components are then
trucked to the site or assembled on-site. They are experimenting with multiple materials, but are
focused mainly on polymer-based structures. They are also experimenting with filling their
structure with lightweight foaming eco-concrete to provide structural and insulation properties.
The project is expected to be completed in 2017 (DUS Architects 2015). The method is limited
mostly to polymers and other materials that solidify quickly upon cooling; this material
limitation is its major disadvantage.
Figure 3. Basic representation of powder-bed and inkjet printing. Gray depicts unbound
powder and blue is the binder combined with the powder. The sand spreading apparatus
is not shown.
D-Shape
D-shape is a factory gantry-based powder-bed 3D printer built in the United Kingdom by
Monolite UK Ltd. This system can print up to 6m x 6m x 6m of architectural structures and
components at a time, which are then trucked to the site. It deposits granular material using a
powder-bed and inkjet method, combining sand with a binder to create structures. It
works by putting down a full layer of sand, then passing the print heads over the sand,
extruding binder where the structure is desired. Layers are between 5 and 10 mm thick.
structure itself takes about 24 hours to solidify, after which the excess sand must be removed to
show the sandstone-like or marble-like structure underneath. Originally using epoxy or
polyurethane, it now uses an organic binder to make an environmentally friendly structure. The
goals of this project are to increase quality and safety, while decreasing time and costs.
D-shape won top prize in the Change the Course - New York City Waterfront Construction
Competition for innovative construction ideas for waterfront infrastructure, run by the New York
City Economic Development Corporation (NYCEDC) in 2013 (D-Shape 2015; New York City
Economic Development Corporation 2013). One example of D-shape’s current projects is the
Landscape House, which is to be printed in sections, filled with fiber-reinforced concrete, and
put into place (Crook 2013; EeStairs 2015).
While the method does provide greater freedom for architectural design, a drawback to this
technique is that it requires a large amount of material to work with, as the sand or other powder
has to be built up to the top of the print across the entire build area. In turn, the powder that
supports the structure while it cures must be removed in a labor-intensive step before the
print can be used.
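The scale of that powder overhead is easy to estimate (a sketch using a hypothetical part volume; the text gives only the bed size and layer range):

```python
# Powder overhead for a full-height D-shape print (illustrative numbers).
bed_side_m = 6.0       # 6m x 6m x 6m build volume (from the text)
layer_mm = 7.5         # mid-range of the 5-10 mm layer thickness cited
part_volume_m3 = 12.0  # hypothetical printed component volume

layers = round(bed_side_m * 1000 / layer_mm)     # 800 powder layers
powder_volume_m3 = bed_side_m ** 3               # 216 m^3 of powder spread in total
unbound_m3 = powder_volume_m3 - part_volume_m3   # 204 m^3 to remove and recycle

print(layers, unbound_m3)
```

Even with the leftover powder recyclable, most of the material handled per print is support rather than structure, which is the drawback noted above.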
Extrusion Printing
The extrusion print method is very similar to FFF, but uses a variety of slurry-like or clay-like
materials to create a structure. This type of 3D printer extrudes a fluid-like material out of a
nozzle and builds layer upon layer. The difference is that because the materials used are not
based on a cooling process for solidification, the cure time is longer than in FFF. This means
that there may be a wait time between layers or after a certain number of layers before
proceeding with the print. The fluid-like behavior of the freshly printed material also limits the
geometry of the printed structure, to a degree that depends on the material used.
IAAC Minibuilders
A research team at the Institute for Advanced Architecture of Catalonia (IAAC) has created a
unique 3D printing system of 3 types of small robots (foundation, grip, vacuum). Each robot has
a different function, contributing to the creation of a large structure made out of concrete (“Small
Robots Printing Big Structures” 2015). The first robot is a foundation robot, which follows a line
to print the first 20 layers. The grip robot clamps onto the existing material and continues to print
the structure. These robots are capable of printing ceilings and lintels by printing horizontally.
After the structure is in place, vacuum robots are placed on the structure and print extra shells.
This system can produce very flexible shapes and is not limited by the size of the machine. A
disadvantage, however, is that the building envelope it produces is more limited in complexity
and thickness.
Freeform Project
At the Loughborough University in Leicestershire, U.K., a team in the Civil and Building
Engineering department has been developing a factory gantry-based concrete printer with
multiple industrial partners (Buswell and Austin 2015). This printer creates components by
extruding concrete through a single nozzle, which is used in conjunction with a softer material to
support overhangs and is later removed. The use of a foam-based support material in concrete
printers provides the freedom of design that is typically associated with powder-bed print
methods, while limiting the support to needed areas to reduce waste.
Winsun Co.
In China, Winsun has created a factory gantry-based concrete 3D printer for building
components (Winsun 2015). The printer measures 150m long x 10m wide x 6.6m tall, uses
concrete extrusion to print buildings in sections; the pieces are then shipped by truck and
assembled on-site with a crane. The pieces are generally printed such that the roof is one solid
piece with the walls and needs to be tilted up 90 degrees during the build for proper orientation.
They have built at least 10 houses and are creating many types of smaller structures as well.
Winsun has achieved a couple of ‘firsts’ in the 3D printing construction world: the first full-sized
printed structure with a roof and the world’s tallest 3D-printed building at 5 stories high, both
from components. This follows a more traditional approach to building construction, where the
pieces are shipped to the site and assembled. This method still potentially reduces waste and
time, but puts the burden on the shipment of materials to the site.
Backyard Castle
Andrey Rudenko and his collaborators have designed and built a gantry-based 3D printer in his
backyard. This printer is mounted on rails at the build site and is capable of printing concrete
structures up to 12’ tall, with each layer of concrete measuring 10 mm in height and 30 mm in width.
He printed a castle out of concrete with a building footprint of 3m x 5m (Krassenstein 2014;
Rudenko 2015). This method utilizes on-site construction, but still requires the use of printed
components. The use of on-site construction would be beneficial to construction areas that are
difficult to access.
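For a sense of scale, the layer geometry above implies a long extrusion path even for a modest footprint (a sketch treating the 3m x 5m footprint as a single-bead rectangular perimeter printed to the machine’s full 12’ height; the actual castle geometry is more complex):

```python
# Extrusion-path estimate for a single-wall rectangular print (illustrative).
layer_h_mm = 10.0
max_height_mm = 12 * 304.8                   # printer's 12 ft capacity
layers = round(max_height_mm / layer_h_mm)   # ~366 layers

perimeter_m = 2 * (3 + 5)                    # 3m x 5m footprint -> 16 m per layer
path_km = layers * perimeter_m / 1000        # ~5.9 km of 30 mm wide concrete bead

print(layers, round(path_km, 2))
```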
USC
One of the earliest groups to work on 3D printing of structures was the University of Southern
California (USC), led by Behrokh Khoshnevis. Funded in part by the ERDC-CERL, they have
been working on automated construction by contour crafting (CC) for over 10 years (Khoshnevis
2004; Kamrani and Nasr 2006). Their system is an on-site gantry which is mounted on rails at
the build site. It currently has arguably the most advanced shaping and material handling
system. The extruder is a multi-nozzle print head with a trowel to smooth the layers while
printing. The nozzle itself is 3D printed and has an interruptible material delivery system so it
can stop when needed for short amounts of time. USC was funded by NASA to investigate
using the contour crafting technology to print extraterrestrial structures using lunar material
(Khoshnevis 2012). The contour crafting technology has a large amount of research behind it
and is potentially the most feasible for on-site construction. The multiple print heads speed up
print time while still allowing complicated wall cavity geometry. This technique does come with
some limitations as the design is limited to the capabilities of the multi-nozzle print head.
SHORTCOMINGS
There is a movement toward using 3D printing technology for the construction of building
structures, but there are limitations that must be overcome before widespread use is possible.
The current 3D printed building structures are experimental, as further characterization of print
materials, clarification of construction practices and printing processes, and integration into
current building code regulations is required.
Extrusion printing methods show the most promise for acceptance into the construction industry,
as the materials (i.e. clay and concrete) and method of material placement (i.e. concrete
pumping) are similar to those used currently in the field. With over 10 years of experience in
concrete 3D printing, the work at USC provides a first stop for the current body of extrusion
printing. Other technologies, while not as developed, show much promise. The Digital
Construction Platform at MIT is still relatively early in its development, but shows that a simple
modification to current construction technology, such as a boom crane, will allow for large print
areas. It is unclear at this time whether the technology will support concrete printing or is limited
to lightweight materials such as foam. The IAAC Minibuilders provide for flexibility in design and
construction, but may require a change in skills and knowledge (e.g. robotic controls) that is not
as readily available on a typical construction site.
The methods employed by Winsun, the Freeform project, and similar efforts may be more readily
accepted due to their similarity to the current precast concrete industry, requiring very little
change for the construction industry, as the house is built in sections using cranes. This method does
require transportation of large concrete sections and the use of large machinery to lift them into
place.
Other methods, such as fused filament fabrication and powder-bed printing, are more familiar to
those in the AM community, but may be slower to gain acceptance in the construction industry.
The 3D Print Canal House project, which uses FFF, is set to be completed in 2017; however,
the technology relies on quick cooling after an initial heating, which limits the materials to
polymers and other rapidly cooling materials. The powder-bed printer by D-shape provides
architectural freedom, but requires large amounts of powder material to support the print during
setting, leading to material waste after curing despite the ability to recycle leftover powder.
To the general public, the 3D printing industry remains a novelty, and its applications
may still be too new to be generally accepted. A great deal of work needs to be done to get this
technology ready for use in construction, but when it is ready, it has potential to change the
structures around us in ways never previously thought possible. 3D printing may lead to a world
where construction and architectural design are not limited by conventional methods and where
a unique design is merely printed under the same conditions as a basic structure would be.
FUTURE DIRECTION
Still in its infancy, the 3D printing of buildings is a growing field that requires much research
before the technology will become a viable option for the everyday job site. Further knowledge
is required in the areas of materials testing and characterization, structural design requirements,
the incorporation into building codes and standards, and the standardization of construction
practices. Concrete materials should be characterized based on typical mix designs, required
particle size based on print head diameter, rheology, and the use of indigenous materials. In an
industry where uncertainties exist there is also a need for material testing standards. On the
structural side, the use of reinforcement will be a major concern, whether it is discontinuous
fibers, fiber meshes, or the more familiar welded wire and reinforcing bars with grouting.
Another structural challenge is the printing of roofs or over voids (e.g. windows, doorways,
mechanical openings), and how the voids are reinforced. On the construction side, research
should be focused on general practices for applying the technology on-site (e.g. site grading
requirements, construction requirements, environmental considerations, scheduling and cost
considerations). Furthermore, the incorporation of the technology in building codes and
standards should be the final step in acceptance.
CONCLUSION
There is a growing effort to be able to construct structures using 3D printing technology. Many
projects are able to create small structures or components of structures currently, but more
needs to be done regarding material behavior and applicability for building structures. Building
codes will need to take into account this new technology before it can be readily used. As
technology advances and research is done in the area, this field will continue to evolve and 3D
printed buildings may be a possibility over the next few years.
ACKNOWLEDGMENTS
Special thanks to Eric Kreiger at Bacon Farmer Workman Engineering and Testing, Inc. for
contributions to this paper. Thanks also to Jason Galtieri, and W. Jacob Wagner at ERDC-
CERL for conversations and input on this paper. The authors would also like to thank Dr.
Joshua Pearce at Michigan Technological University for conversations on 3D printing topics.
REFERENCES
Balaguer, Carlos, and Mohamed Abderrahim. 2008. Trends in Robotics and Automation in
Construction. INTECH Open Access Publisher. http://cdn.intechopen.com/pdfs/5555.pdf.
Blayse, Aletha M., and Karen Manley. 2015. “Key Influences on Construction Innovation.”
Construction Innovation 4 (3): 143–54. Accessed May 18.
Buswell, Richard, and Simon Austin. 2015. “Freeform Construction: Partners.” Loughborough
University. http://www.freeformconstruction.com/partners.php.
“Cost to Frame Wall - Zip Code 47474.” 2015. Homewyse.
http://www.homewyse.com/services/cost_to_frame_wall.html.
International Code Council. 2014. 2015 International Energy Conservation Code. ICC.
Crook, Jordan. 2013. “The World’s First 3D-Printed Building Will Arrive In 2014 (And It Looks
Awesome).” Blog. TechCrunch. January 20. http://techcrunch.com/2013/01/20/the-
worlds-first-3d-printed-building-will-arrive-in-2014-and-it-looks-awesome/.
D-Shape. 2015. “The Technology.” Monolite UK. http://www.d-shape.com/tecnologia.htm.
Dubois, Anna, and Lars-Erik Gadde. 2001. “The Construction Industry as a Loosely Coupled
System - Implications for Productivity and Innovativity.” Oslo, Norway.
http://www.impgroup.org/uploads/papers/169.pdf.
DUS Architects. 2015. “3D Print Canal House.” Accessed May 18.
http://3dprintcanalhouse.com/.
EeStairs. 2015. “EeStairs Founding Father of the Landscape House.”
http://www.eestairs.com/en/743_eestairs_founding_father_of_the_landscape_house.htm.
Energy Star. 2013. “ENERGY STAR Certified Homes, Version 3 (Rev. 07) National Program
Requirements.”
http://www.energystar.gov/ia/partners/bldrs_lenders_raters/downloads/National_Progra
m_Requirements.pdf.
Freund, James F. 2004. “How to Estimate the Cost of Rough Carpentry Framing.” American
Society of Professional Estimators.
http://www.aspenational.org/userfiles/file/Technical%20Papers/2007/TechPaper_May20
07.pdf.
International Code Council. 2015. “Overview of the IgCC.” Accessed May 18.
http://www.iccsafe.org/codes-tech-support/codes/2015-i-codes/igcc/.
Kamrani, Ali, and Emad Nasr. 2006. Rapid Prototyping: Theory and Practice. Springer.
Khoshnevis, Behrokh. 2004. “Automated Construction by Contour Crafting—related Robotics
and Information Technologies.” Automation in Construction, The best of ISARC 2002, 13
(1): 5–19. doi:10.1016/j.autcon.2003.08.012.
Khoshnevis, Behrokh, A. Carlson, N. Leach, and M. Thangavelu. 2012. "Contour Crafting Simulation
Plan for Lunar Settlement Infrastructure Buildup." Earth and Space 2012: 1458-1467.
doi:10.1061/9780784412190.155.
Krassenstein, Eddie. 2014. “Architect Plans to 3D Print a 2-Story Home in Minnesota Using a
Homemade Cement Printer.” 3DPrint.com. April 22. http://3dprint.com/2471/3d-printed-
home-in-minnesota/.
New York City Economic Development Corporation. 2013. “NYCEDC Announces Three
Winners of Change the Course - The NYC Waterfront Construction Competition.”
NYCEDC. April 10. http://www.nycedc.com/press-release/nycedc-announces-three-
winners-change-course-nyc-waterfront-construction-competition.
Rudenko, Andrey. 2015. “3D Concrete House Printer.” Total Kustom.
http://totalkustom.com/home.html.
Scott, Rick. 2009. “Estimate the Cost of A Concrete Masonry Unit Wall.” Estimating Today, July.
Simmons, H. Leslie. 2011. Olin’s Construction: Principles, Materials, and Methods. John Wiley
& Sons.
“Small Robots Printing Big Structures.” 2015. Minibuilders. http://iaac.net/printingrobots/.
U.S. Green Building Council. 2015. “Guide to LEED Certification.” USGBC.
http://www.usgbc.org/cert-guide.
Williams, Adam. 2015. “Berkeley Researchers Pioneer New Powder-Based Concrete 3D
Printing Technique.” Gizmag. March 12. http://www.gizmag.com/berkeley-researchers-
pioneer-powder-based-concrete-3d-printing/36515/.
Winsun. 2015. Accessed May 18. http://www.yhbm.com/.
World’s Advanced Saving Project. 2015. “About Us - WASProject.” WASP.
http://www.wasproject.it/w/en/wasp/.
Yelda Turkan
Department of Civil, Construction and Environmental Engineering
Iowa State University
428 Town Engineering
Ames, IA 50011-3232
[email protected]
Liangyu Tan
Department of Civil, Construction and Environmental Engineering
Iowa State University
427 Town Engineering
Ames, IA 50011-3232
[email protected]
ABSTRACT
Objective, accurate, and fast assessment of bridge structural condition is critical for timely
assessment of safety risks. Current practices for bridge condition assessment rely on visual
observations and manual interpretation of reports and sketches prepared by inspectors in the
field. Visual observation, manual reporting, and interpretation have several drawbacks: they are
labor intensive, subject to personal judgment and experience, and prone to error.
Terrestrial laser scanners (TLS) are promising sensors to automatically identify structural
condition indicators, such as cracks, displacements and deflected shapes, as they are able to
provide high coverage and accuracy at long ranges. However, limited research has been
conducted on employing TLS to detect cracks for bridge condition assessment, and it has mainly
focused on manual detection and measurement of cracks, displacements, or shape deflections
from the laser scan point clouds. TLS is an advanced 3D imaging technology that is used to
rapidly measure the 3D coordinates of densely scanned points within a scene. The data
gathered by a TLS is provided in the form of 3D point clouds with color and intensity data often
associated with each point within the cloud. This paper proposes a novel adaptive wavelet
neural network (WNN) based approach to automatically detect concrete cracks from TLS point
clouds for bridge structural condition assessment. The adaptive WNN is designed to self-
organize, self-adapt, and sequentially learn a compact reconstruction of the 3D point cloud. The
architecture of the network is based on a single-layer neural network consisting of Mexican hat
wavelet functions. The approach was tested on a cracked concrete specimen. The preliminary
experimental results show that the proposed approach is promising as it enables detecting
concrete cracks accurately from TLS point clouds. Using the proposed method for crack
detection would enable automatic and remote assessment of bridge condition. This would, in
turn, reduce the costs associated with infrastructure management and improve the overall
quality of our infrastructure by enhancing maintenance operations.
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
INTRODUCTION
The majority of bridge condition assessments in the U.S. are conducted by visual inspection,
during which a printed checklist is filled out by trained inspectors. An inspector must correctly
identify the type and location of each element being inspected, document its distress, manually
record this information in the field, and then transcribe it to the bridge evaluation database after
returning to the office. This is a complex and time-consuming set of responsibilities that is
prone to error.
Terrestrial laser scanners (TLS) are promising sensors for documenting the as-built condition of
infrastructure (Hajian and Brandow, 2012), and they have already been utilized by a number of
state DOTs for this purpose at the project planning phase. Furthermore, TLS technology has
been shown to be effective in identifying structural condition indicators, such as cracks,
displacements, and deflected shapes (Park et al. 2007; Olsen et al. 2009; Werner and Morris,
2010; Meral 2011; Wood et al. 2012), as it is able to provide high coverage and accuracy at
long ranges. However, limited research has been conducted on employing TLS to detect cracks
for bridge condition assessment, and it has mainly focused on manual detection and
measurement of cracks, displacements, or shape deflections from the laser scan point clouds
(Chen 2012; Chen et al. 2014; Olsen et al. 2013).
The research presented in this paper attempts to automatically detect cracks from TLS point
clouds (Olsen et al. 2009; Anil et al. 2013; Adhikari et al. 2013; Mosalam et al. 2013) for bridge
structural condition assessment. TLS is an advanced imaging technology that is used to rapidly
measure the 3D coordinates of densely scanned points within a scene (Fig. 1(a)). The data
gathered by a TLS are provided in the form of 3D point clouds, with color and intensity data often
associated with each point within the cloud. Point cloud data can be analyzed using computer
vision algorithms (Fig. 1(b)) to detect structural conditions (Fig. 1(c)).
In its raw format, TLS point cloud data contains a significant number of data points that are
unstructured, dense, and non-uniformly distributed (Meng et al. 2013). Therefore, in the
machine learning community, substantial effort has been put into reconstructing 3D shapes
from point clouds. The overarching goal of this research is to detect 3D shapes from point
clouds in real time while scanning on-site. However, there exist critical challenges in designing
a shape reconstruction algorithm for real-time adaptive scanning.
Neural networks have been proposed as candidates for providing robust and compact
representations. In particular, Radial Basis Function (RBF) neural networks have been applied
to the problem of shape reconstruction (Bellocchio et al. 2013). Compared with traditional
types of neural networks, they provide better approximation, convergence speed, and
optimality of solution, as well as excellent localization (Suresh et al. 2008). Furthermore, they
can be trained faster when modeling nonlinear representations in the function space (Howlett
2001). Recent work has been published on utilizing sequential RBF networks for reconstructing
surfaces from point clouds (Meng et al. 2013). A self-organizing map (SOM) (Kohonen 2001)
architecture was used to optimize node placement, and the algorithm provided good accuracy
with a minimal number of nodes.
The authors have developed a sequential adaptive RBF neural network for real-time learning of
nonlinear dynamics (Laflamme & Connor 2009) and reached similar conclusions: the network
showed better performance than traditional neural networks. They also designed wavelet
neural networks (WNN) for similar applications (Laflamme et al. 2011, Laflamme et al. 2012).
WNN are also capable of universal approximation, as shown in (Zhang & Benveniste 1992).
This particular neural network has also been demonstrated to be capable of learning dynamics
on the spot, without prior knowledge of the underlying dynamics and architecture of the input
space.
The study presented in this paper proposes a novel adaptive wavelet neural network (WNN)
based approach to automatically detect concrete cracks from TLS point clouds for bridge
structural condition assessment. The adaptive WNN is designed to self-organize, self-adapt,
and sequentially learn a compact reconstruction of the 3D point cloud. The approach was tested
on a cracked concrete specimen, and it successfully reconstructed 3D laser scan data points as
wavelet functions in a more compact format, where the concrete crack was easily identified.
This is a significant improvement over previous TLS-based crack detection methods, as it does
not require a priori knowledge about the crack or the 3D shape of the object being scanned. It
also enables faster processing of 3D point cloud data and automatic crack detection.
Furthermore, since it is designed to self-organize, self-adapt, and sequentially learn a compact
reconstruction of the 3D point cloud, it can easily be adapted for real-time scanning in the field,
which will be investigated in future work using the adaptive WNN approach presented in this
paper.
Laser scanners can output extremely high-resolution models, but at the cost of much larger file
sizes and processing times (Boehler and Marbs, 2003). Despite the remarkable accuracy and
benefits, the current adoption rate of laser scanners in the AEC-FM industry is still low, mainly
because of data acquisition and processing time and data storage issues. Full laser scanning
requires a significant amount of time; depending on the size of the site, it can take days for
large-scale high-resolution shots. Accordingly, the resulting data files are typically very large
(e.g., a single high-resolution scan file can be a couple of gigabytes or much larger). Therefore,
data storage and processing are the two biggest factors behind the low adoption rate of laser
scanners in the AEC-FM industry.
Thus, there is a need for advanced algorithms that enable automated 3D shape detection from
low resolution point clouds during data collection. This would improve project productivity as
well as safety by reducing the amount of time spent on-site. Importantly, practical applications of
the developed algorithms to field laser scanners will be straightforward since commercially
available laser scanners on the market are generally programmable (Trimble Inc. 2015).
The network representation consists of a single layer of h Mexican hat wavelet functions:

φ_j(p) = (1 − ‖p − μ_j‖²/σ_j²) exp(−‖p − μ_j‖²/(2σ_j²)) for j = 1, 2, …, h (1)

where μ_j and σ_j denote the center and bandwidth of the jth wavelet. The wavelet network
maps the coordinates of a point p = [x, y] to an estimate ẑ of its associated value z using the
following function:

ẑ = Σ_{j=1}^{h} γ_j φ_j(p) (2)

where γ_j is the weight of the jth wavelet. The wavelet network algorithm is as follows. First, a
new point p is queried from the scanner, along with its associated z. The shortest Euclidean
distance is computed between the location of p and the existing wavelet centers; if it exceeds a
threshold, a new node is added at p, and otherwise the network parameters are adapted to
reduce the estimation error e = z − ẑ:

Δγ_j = α e φ_j(p), Δμ_j = β e ∇_{μ_j}[γ_j φ_j(p)] (3)

where p = [x, y], and α and β are positive constants representing the learning rates of the
network.
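The sequential learning procedure described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the class name, node-addition threshold, initial bandwidth, and learning rate are invented for the example, and the paper's bandwidth adaptation details are simplified to a weight-only gradient step.

```python
import numpy as np

def mexican_hat(p, mu, sigma):
    """Mexican hat wavelet centered at mu with bandwidth sigma."""
    r2 = np.sum((p - mu) ** 2) / sigma ** 2
    return (1.0 - r2) * np.exp(-r2 / 2.0)

class AdaptiveWNN:
    """Single-layer wavelet network learned sequentially, one point at a time."""

    def __init__(self, add_threshold=2.0, sigma0=3.0, alpha=0.1):
        self.add_threshold = add_threshold  # distance above which a new node is added
        self.sigma0 = sigma0                # initial bandwidth of a new wavelet
        self.alpha = alpha                  # learning rate for the weights
        self.mu, self.sigma, self.g = [], [], []

    def predict(self, p):
        """Network output: weighted sum of wavelets evaluated at p."""
        return sum(g * mexican_hat(p, m, s)
                   for g, m, s in zip(self.g, self.mu, self.sigma))

    def update(self, p, z):
        """Process one scanned point (p, z): self-organize or self-adapt."""
        p = np.asarray(p, dtype=float)
        d = min((np.linalg.norm(p - m) for m in self.mu), default=np.inf)
        if d > self.add_threshold:
            # Self-organization: place a new wavelet at the queried point.
            self.mu.append(p)
            self.sigma.append(self.sigma0)
            self.g.append(float(z))
        else:
            # Self-adaptation: gradient step on the estimation error e = z - z_hat.
            e = z - self.predict(p)
            for j, (m, s) in enumerate(zip(self.mu, self.sigma)):
                self.g[j] += self.alpha * e * mexican_hat(p, m, s)

# Stream a synthetic "scan" of a gently curved surface, point by point.
rng = np.random.default_rng(0)
net = AdaptiveWNN()
pts = rng.uniform(-10.0, 10.0, size=(500, 2))
for p in pts:
    net.update(p, 0.05 * (p[0] ** 2 - p[1] ** 2))
print("compact representation uses", len(net.mu), "nodes for 500 points")
```

The key property of interest, compactness, appears in the node count: far fewer wavelets than scanned points are retained.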
Fig. 4 shows a typical fitting result obtained using 59 nodes. The compact representation
provides a good fit of the 3D point cloud and includes the damage feature. A study was
conducted on the accuracy of the representation as a function of the number of nodes in the
network, by varying one network parameter while keeping all other parameters constant. The
accuracy was measured in terms of the root mean square (RMS) error. Fig. 5 plots the
RMS error as a function of the number of nodes, along with the relative computing time
versus the network size. In this case, there is a region in which the algorithm provides an
optimal representation in terms of RMS error. The decrease in performance for a higher number
of nodes can be attributed to network parameters becoming mistuned. In particular, when
more nodes are allowed in the network and the initial bandwidth is large, one would expect a
relatively longer training period to obtain an acceptable level of accuracy. The relative
computing time changes linearly with the number of nodes in the network.
Figure 4. (a) Point cloud; (b) Compact representation; and (c) Overlap of point cloud and
representation.
While the wavelet network provides an accurate representation of the 3D point cloud, it should
also be capable of extracting key features, such as damage. With this particular example, an
attempt was made to automatically localize the damage and determine its severity. The strategy
consists of identifying regions of wavelets (or nodes) with lower bandwidths, which indicate
a region of higher resolution, and thus the location of a more complex feature (a crack, in this case).
Fig. 6(a) is a wavelet resolution map, obtained by computing the average wavelet
bandwidth within a region of the representation. Dark blue areas indicate high-resolution
regions, while dark red areas represent low-resolution regions. The damage is approximately
localized using this strategy. Next, the crack length and width were estimated by evaluating the
maximum distances along the x- and y-axes within a group of wavelets of low bandwidth. Fig.
6(b) is a plot of the computed crack length and width as a function of the number of nodes. The
approximate crack length is determined more accurately for networks created with a large
number of nodes, but smaller networks still yield an acceptable approximation. The estimated
crack width increases with an increasing number of nodes. This is explained by the presence of
a high-resolution region around the coordinate [-20, 20], shown in Fig. 6(a), that is perceived as
a crack: a representation created with a large number of functions may over-fit the 3D point cloud.
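The bandwidth-based localization strategy above can be illustrated with a short sketch. The function name, the quantile cutoff used to separate low-bandwidth nodes, and the toy node layout are all assumptions for illustration; the paper does not specify how the low-bandwidth group is thresholded.

```python
import numpy as np

def crack_extent(centers, bandwidths, quantile=0.2):
    """Estimate crack length and width from the group of low-bandwidth wavelets.

    centers: (n, 2) wavelet center coordinates; bandwidths: (n,) bandwidths.
    Nodes below the given bandwidth quantile are treated as the high-resolution
    (damage) region, and the crack extent is the maximum distance they span
    along the x- and y-axes.
    """
    centers = np.asarray(centers, dtype=float)
    bandwidths = np.asarray(bandwidths, dtype=float)
    cutoff = np.quantile(bandwidths, quantile)
    fine = centers[bandwidths <= cutoff]          # low-bandwidth (fine) nodes
    if len(fine) == 0:
        return 0.0, 0.0
    length = float(fine[:, 0].max() - fine[:, 0].min())  # extent along x
    width = float(fine[:, 1].max() - fine[:, 1].min())   # extent along y
    return length, width

# Toy network: coarse wavelets everywhere, fine wavelets along a crack line.
coarse = np.random.default_rng(1).uniform(-25.0, 25.0, size=(40, 2))
crack = np.column_stack([np.linspace(-15.0, 15.0, 12), np.full(12, 2.0)])
centers = np.vstack([coarse, crack])
bandwidths = np.concatenate([np.full(40, 5.0), np.full(12, 0.5)])
print(crack_extent(centers, bandwidths))  # prints (30.0, 0.0)
```

As in the paper's Fig. 6(b), any additional high-resolution region away from the crack would inflate the estimated extent, which is why over-fitted representations over-estimate crack width.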
CONCLUSIONS
A strategy to sequentially construct a compact representation of a 3D point cloud has been
presented. The representation is a wavelet network capable of self-organization, self-adaptation,
and sequential learning. It can be utilized to transform thousands of 3D point cloud data points
obtained from a TLS or LiDAR into a small set of functions. The proposed wavelet network has
been demonstrated on a cracked cylindrical specimen. It was shown that the algorithm was
capable of reducing a set of 8170 3D coordinates to a set of 59 functions while preserving the
key features of the scan data, which included a crack. By looking at local regions of high-
resolution wavelets, it is possible to localize these features and estimate their geometry. While
the promise of automatic damage detection has been demonstrated, the development of more
complex algorithms in future work could lead to more accurate numerical localization and
estimation of damage.
ACKNOWLEDGMENTS
This research is funded by Midwest Transportation Center (Award# 011296-00014). The
authors would like to thank Ahmad Abu-Hawash, Justin Spencer, and Michael Todsen from
Iowa DOT for their continuous support by providing us with data, and for sharing their expertise
and experience during this project. Any opinions, findings, conclusions, or recommendations
expressed in this paper are those of the authors and do not necessarily reflect the views of
Midwest Transportation Center or Iowa DOT.
REFERENCES
Alba M.I., Barazzetti L., Scaioni M. Rosina E., and Previtali M. Mapping infrared data on
terrestrial laser scanning 3D models of buildings, Remote Sensing, 3(9): 1847–1870, 2011.
Anil E.B., Akinci B., Garrett J.H., Kurc O. (2013). Characterization of laser scanners for
detecting cracks for post-earthquake damage inspection. Proceedings of the International
Symposium on Automation and Robotics in Construction and Mining, IAARC, pp 313-320.
Barhak, J., & Fischer, A. (2001). Parameterization and reconstruction from 3D scattered points
based on neural network and PDE techniques. Visualization and Computer Graphics, IEEE
Transactions on, 7(1), 1-16.
Bellocchio, F., Borghese, N. A., Ferrari, S., & Piuri, V. (2013). Hierarchical Radial Basis
Functions Networks. In 3D Surface Reconstruction (pp. 77-110). Springer, New York.
Boehler W., Marbs A. (2003). Investigating laser scanner accuracy. Institute for spatial
information and surveying technology. University of Applied Sciences, Mainz, Germany.
Chen S. (2012). Laser Scanning Technology for Bridge Monitoring, Laser Scanner Technology,
Dr. J. Apolinar Munoz Rodriguez (Ed.), ISBN: 978-953-51-0280-9, InTech
Chen, S. E., Liu, W., Bian, H., & Smith, B. (2014). 3D LiDAR Scans for Bridge Damage
Evaluations. Bridges, 10, 9780784412640-052.
Cheok G.S., Leigh S., Rukhin A. (2002). Calibration experiments of a Laser Scanner. Building
and Fire Research Laboratory, National Institute of Standards and Technology, Gaithersburg,
MD, USA.
Gálvez, A., & Iglesias, A. (2012). Particle swarm optimization for non-uniform rational B-spline
surface reconstruction from clouds of 3D data points. Information Sciences, 192, 174-192.
Greaves T., Jenkins B. (2007). 3D laser scanning market red hot: 2006 industry revenues $253
million, 43% growth, SPAR Point Research, LLC 5 (7).
Howlett, R. J., & Jain, L. C. (Eds.). (2001). Radial basis function networks 1: recent
developments in theory and applications (Vol. 66). Springer.
Kohonen, T. (2001). Self-organizing maps (Vol. 30). Springer Science & Business Media.
Laflamme, S., & Connor, J. J. (2009, March). Application of self-tuning Gaussian networks for
control of civil structures equipped with magnetorheological dampers. In SPIE Smart Structures
and Materials+ Nondestructive Evaluation and Health Monitoring (pp. 72880M-72880M).
International Society for Optics and Photonics.
Laflamme, S., Slotine, J. J. E., & Connor, J. J. (2011). Wavelet network for semi-active control.
Journal of Engineering Mechanics, 137(7), 462-474.
Meng, Q., Li, B., Holstein, H., & Liu, Y. (2013). Parameterization of point-cloud freeform
surfaces using adaptive sequential learning RBF networks. Pattern Recognition, 46(8), 2361-
2375.
Meral C. (2011). Evaluation of Laser Scanning Technology for Bridge Inspection, MS Thesis,
Civil Engineering Department, Drexel University
Mosalam, K. M., Takhirov, S. M., & Park, S. (2014). Applications of laser scanning to structures
in laboratory tests and field surveys. Structural Control and Health Monitoring, 21(1), 115-134.
Olsen, M. J., Kuester, F., Chang, B. J., & Hutchinson, T. C. (2009). Terrestrial laser scanning-
based structural damage assessment. Journal of Computing in Civil Engineering, 24(3), 264-
272.
Olsen, M. J., Chen, Z., Hutchinson, T., & Kuester, F. (2013). Optical techniques for multiscale
damage assessment. Geomatics, Natural Hazards and Risk, 4(1), 49-70.
Park, H. S., Lee, H. M., Adeli, H., & Lee, I. (2007). A New Approach For Health Monitoring Of
Structures: Terrestrial Laser Scanning. Computer aided civil and infrastructure engineering, 22,
19–30.
Suresh, S., Narasimhan, S., & Sundararajan, N. (2008). Adaptive control of nonlinear smart
base-isolated buildings using Gaussian kernel functions. Structural Control and Health
Monitoring, 15(4), 585-603.
Wang, C., Shi, Z., Li, L., & Niu, X. (2012). Adaptive Parameterization and Reconstruction of 3D
Face Images using Partial Differential Equations. IJACT: International Journal of Advancements
in Computing Technology, 4(5), 214-221.
Werner T., Morris D. (2010). 3D Laser Scanning for Masonry Arch Bridges. Proceedings of FIG
Congress, Facing the Challenges – Building the Capacity.
Wood, R. L., Hutchinson, T. C., Wittich, C. E., & Kuester, F. (2012). Characterizing Cracks in
the Frescoes of Sala degli Elementi within Florence’s Palazzo Vecchio. In Progress in Cultural
Heritage Preservation (pp. 776-783). Springer Berlin Heidelberg.
Vosselman G. and Maas H.-G. Airborne and Terrestrial Laser Scanning, first ed. Whittles
Publishing, Dunbeath, UK, 2010.
Zhang, Q., & Benveniste, A. (1992). Wavelet networks. Neural Networks, IEEE Transactions on,
3(6), 889-898.
JeeWoong Park
School of Civil and Environmental Engineering,
Georgia Institute of Technology,
790 Atlantic Dr. N.W.,
Atlanta, GA 30332-0355, United States,
[email protected]
Willy Suryanto
School of Civil and Environmental Engineering,
Georgia Institute of Technology,
790 Atlantic Dr. N.W.,
Atlanta, GA 30332-0355, United States,
[email protected]
ABSTRACT
Safety is considered one of the most important components that must be successfully
addressed during construction. However, the dynamic nature and limited work space of roadway
work zones create highly occupied working environments. This may further result in hazardous
proximity situations between ground workers and construction equipment. In fact, historical
incident statistics show that current safety practice has not been effective and that there is a
need for improvement in providing more protective working environments. This study aims at
developing a technically and economically feasible mobile proximity sensing and alert
technology and assessing it with various simulation tests. Experimental trials tested the accuracy
and reliability of the technology's sensing and alert capability by simulating interactions
between equipment and a ground worker. Experimental results showed that the developed
mobile technology offers not only adequate alerts to the tested person in hazardous proximity
situations but also other advantages over commercial products that may play an important role
in overcoming the obstacles to rapid deployment of new technology in construction.
INTRODUCTION
With the development of wireless technology in the last decade, mobile devices have become
an essential component of our daily life, used for multiple purposes. Advances in wireless
technology have resulted in most recently produced cars being equipped with Bluetooth
technology. The driver is then able to communicate with his or her mobile device via a
Bluetooth-enabled car, triggering phone calls and listening to music and radio without having to
make physical contact with the device. This has turned the daily activity of driving a car into a
much safer experience, allowing the driver to better focus on the road. According to Statista
(2015), the number of smartphone users has been increasing rapidly, and the expected number
in the U.S. is 183 million, more than half the U.S. population (see Figure 1).
Safety is one of the most important components that must be successfully addressed
during construction. However, the dynamic nature and limited work space of roadway work
zones create highly occupied working environments. This may further result in hazardous
proximity situations between ground workers and construction equipment. A total of 962 worker
deaths were recorded at road construction sites from 2003 to 2010. In addition, about 30% of
construction-related deaths in 2012 resulted from being struck by a vehicle. These historical
incident data show that current safety practice has not been effective and that there is a need for
improvement in providing more protective working environments. Recent industry efforts
(ENR, 2015) include deploying cameras and motion sensors near the blind spots of a
piece of equipment. Various proximity sensing devices have been discussed and evaluated by
Ruff (2007), Begley (2006), Marks and Teizer (2012), and Larsson (2003). The systems tested
and evaluated in past research require external hardware, such as cameras, laser
scanners, tripods, power supply lines, heavy antennas, or tags. These are the major components
each system uses to achieve communication between a hazardous source and an object that
is potentially in a dangerous zone. While they are major components, these requirements are
barriers that limit the systems' feasibility and practicality in dynamic construction applications. In
addition to the infrastructure requirement, there are other parameters that are crucial for
assessing a system's feasibility and practicality, including detection area, cost, maintenance,
accuracy, precision, consistency, alert method, adaptability, required power sources, ease of
use, and ease of deployment.
Although smart devices are already pervasive and embedded in our society, their potential
uses in the construction industry have not been widely discussed, and minimal research and
experimentation have been conducted on utilizing smart devices, despite the efforts made
with other technologies in the last decade. This study proposes a wireless proximity sensing
technology that utilizes Bluetooth transmitters and mobile devices to create a proximity sensing
and warning system. The authors consider several characteristics of smart devices, such as
pervasiveness, availability, and familiarity to end users, to be the key factors in realizing a
feasible and practical technology. In the following sections, an extensive overview of the
proposed system is given, followed by experimental validation and conclusions.
OBJECTIVE
In adopting a proximity sensing and alerting system, several factors play an important role for a
system to be feasible and pragmatic. They include detection area, cost, maintenance, accuracy,
precision, consistency, alert method, size of infrastructure, adaptability, required power sources,
ease of use, ease of deployment, and others. The main objective of this study is to develop and
validate a proximity sensing system that is economically and technically feasible and practical.
The system should provide minimal infrastructure, adaptability through a calibration capability,
intensifying alerts that reflect the degree of danger, and real-time alerts to pedestrian
workers and equipment operators during hazardous proximity situations. Widely available smart
devices and low-cost Bluetooth transmitters are utilized to create a proximity sensing and alert
system. Through field experiments, its performance has been tested and assessed in
various aspects.
[Figure: system flowchart. The worker's PPU acts as a beacon to the equipment PPU (EPU),
triggering audible and vibration alerts when a hazard is detected; otherwise the system keeps
monitoring.]
METHODS
The proposed system is a Bluetooth-based system in which communication among devices is
accomplished by radio signal transmission. The transmitted radio signal is recorded, and the
recorded Received Signal Strength (RSS) is used to estimate the approximate horizontal
distance between the beacons and the receiver. The user can perform a calibration to set the
desired distance range at which the system initiates alerts. This calibrated range is then trisected
to provide intensifying alerts that indicate the degree of danger of a hazardous
situation. Upon the creation of a hazardous situation, the worker's PPU immediately turns into a
beacon and sends out signals to the nearby operator's PPU. With this signal, alerts and
visualization can be realized on the equipment operator's PPU.
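The RSS-to-distance estimation and trisected alert logic can be illustrated as follows. The log-distance path-loss model, the reference RSS at 1 m, and the path-loss exponent are assumptions for the sketch, since the paper does not specify its RSS-to-distance mapping; in the real system these values would come from the user's calibration step.

```python
def estimate_distance(rss, rss_1m=-59.0, n=2.0):
    """Approximate horizontal distance (m) from received signal strength (dBm),
    using an assumed log-distance path-loss model: rss = rss_1m - 10*n*log10(d)."""
    return 10.0 ** ((rss_1m - rss) / (10.0 * n))

def alert_level(distance, calibrated_range=12.0):
    """Trisect the calibrated alert range into intensifying alert levels:
    0 = no alert (outside range), 1 to 3 = increasingly dangerous."""
    if distance > calibrated_range:
        return 0
    third = calibrated_range / 3.0
    if distance > 2.0 * third:
        return 1
    if distance > third:
        return 2
    return 3

# Stronger signal -> shorter estimated distance -> more intense alert.
for rss in (-85.0, -75.0, -65.0):
    d = estimate_distance(rss)
    print(f"RSS {rss:.0f} dBm -> {d:.1f} m, alert level {alert_level(d)}")
```

Trisecting the calibrated range (rather than hard-coding zone sizes) keeps the escalation behavior consistent whatever alert range the user selects.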
Calibration is desired for two major reasons. First, the RSS is dependent on environmental
conditions. For different equipment and environmental conditions, there is no guarantee that the
RSS-distance relationship remains the same.
FIELD VALIDATION
To validate the proposed proximity sensing and alert system, a set of experimental trials was
designed and conducted. To assess the system's reliability and effectiveness, (1) trials were
performed at eight different angles centered on a piece of equipment, and (2) two different
pieces of equipment, a wheel loader and a dump truck, were tested. The experimental
simulations were designed to emulate real construction roadway work zone operations. The
material presented in this paper covers the mobile worker and static equipment scenario.
Figure 4 shows the test bed and approach angles for the worker and equipment interaction
simulations during testing. A worker equipped with a worker's PPU approached a piece of
construction equipment, and the alert distance (at which a breach into the hazard zone is
detected) was recorded for each trial. For each of the eight angles, 20 trials were conducted for
statistical data collection.
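The per-angle summary statistics and failure-rate computation used in the analysis below can be sketched as follows. The data, the function name, and the failure definition (alert firing closer than the "distance at failure") are hypothetical; the paper's exact recall definition may differ.

```python
import statistics

def summarize_trials(alert_distances_by_angle, failure_threshold=5.0):
    """Per-angle mean and standard deviation of alert distances, plus the
    overall failure rate. A trial counts as a failure when the alert fires
    closer than failure_threshold (the "distance at failure")."""
    summary = {}
    failures = 0
    total = 0
    for angle, dists in alert_distances_by_angle.items():
        summary[angle] = (statistics.mean(dists), statistics.stdev(dists))
        failures += sum(1 for d in dists if d < failure_threshold)
        total += len(dists)
    return summary, failures / total

# Hypothetical alert distances (m): 20 trials at two of the eight angles,
# with a 12 m set distance and short-distance failures at 135 degrees.
trials = {
    0:   [11.5, 12.2, 11.8, 12.6, 11.9] * 4,
    135: [10.9, 11.4, 4.2, 11.1, 10.7] * 4,
}
summary, failure_rate = summarize_trials(trials)
print(summary[0], failure_rate)  # mean at 0 degrees is 12.0; failure rate 0.1
```

Varying `failure_threshold` reproduces the idea behind Table 2: the same trial data yields different failure (or recall) figures depending on how "distance at failure" is defined for a given application.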
For these two sets of trials, calibration was performed only for the wheel loader trials. The
purpose was to observe the difference in RSS behavior, and thus the difference in alert
distance accuracy, under different environmental conditions and to confirm the need for
calibration. In the wheel loader trials, the alert distance was set to 12 meters, and the same
setup was used for the truck trials. Statistically analyzed data for the two complete sets of trials
are tabulated in Table 1, and Figure 5 shows plots for the sets of trials. In the simulation with
the wheel loader, the overall average alert distances did not deviate significantly from the set
distance of 12 meters except at 135°. This drop needs to be investigated considering various
factors, such as battery level, interference with the surroundings, or potential malfunctions of
the transmitter. As seen in both Table 1 and Figure 5, overall average alert distances for the dump
Figure 5. Field trials with a wheel loader (left) and a dump truck (right)
Table 2 displays another statistical result showing recall values. Different values of distance at
failure were used to compute recall values, as the definition of distance at failure may differ
depending on the application. In reading Table 2, one should keep in mind that the average
alert distances of the two simulations (wheel loader and dump truck) were different, and
therefore direct comparisons should not be made. Over the entire set of trials, the system
performed with less than 3% recall rates for distances at failure of less than five meters. A
five-meter distance boundary seems reasonable, as alerts are simultaneously provided to both the
pedestrian worker and the equipment operator.
CONCLUSION
This study aimed at developing a technically and economically feasible mobile proximity sensing
and alert technology and assessing it with various simulation tests. In order to overcome the
barrier of deployment costs, a cost-effective system was developed. The system is based on
Bluetooth technology, which is already widely available in most recent smart devices.
The system also offers minimal infrastructure, ease of deployment, calibration functionality, and
adaptability compared with other similar proximity sensing and alert systems. Experimental
trials were designed and performed to evaluate the proposed system's capability to offer
real-time situational awareness via alerts to pedestrian workers and equipment operators
working in hazardous proximity situations. Results of the simulated tests showed that the
system was acceptable in providing pedestrian workers and equipment operators with multiple
forms of alerts. Upon the detection of a potential hazardous situation, immediate alerts were
provided to both the pedestrian worker and the equipment operator. This can help minimize
proximity-related accidents by providing additional time and space for workers to escape from
hazardous scenes.
REFERENCES
Castleford, D., Nirmalathas, A., Novak, D., and Tucker, R. S. (2001). "Optical crosstalk in fiber-
radio WDM networks". IEEE Transactions on Microwave Theory and Techniques, 49(10),
2030–2035. doi:10.1109/22.954826
ENR. (2015). "More Blind-Spot Sensors Make Jobsites Safer". Retrieved April 9, 2015, from
http://enr.construction.com/products/equipment/2015/0216-65279more-blind-spot-sensors-
make-jobsites-safer.asp
Larsson, T. (2003). "Industrial forklift trucks: Dynamic stability and the design of safe logistics".
Safety Science Monitor, 7(1), 1–14. Retrieved from http://www.diva-
portal.org/smash/record.jsf?pid=diva2:430110
Marks, E., and Teizer, J. (2013). "Method for testing proximity detection and alert technology for
safe construction equipment operation". Construction Management and Economics, 31, 1–
11. doi:10.1080/01446193.2013.783705
Marks, E., and Teizer, J. (2012). "Proximity Sensing and Warning Technology for Heavy
Construction Equipment Operation" (pp. 981–990). Construction Research Congress.
doi:10.1061/9780784412329.146
Ruff, T. (2007). "Recommendations for evaluating and implementing proximity warning systems
on surface mining equipment". Retrieved from http://stacks.cdc.gov/view/cdc/8494/Print
Statista. (2015). "Smartphone users in the U.S. 2010-2018 | Forecast". Retrieved from
http://www.statista.com/statistics/201182/forecast-of-smartphone-users-in-the-us/
Lie Tang
Department of Agricultural and Biosystems Engineering
Iowa State University
2325 Elings Hall
Ames, Iowa 50011
[email protected]
Shufeng Han
John Deere Intelligent Solutions Group
4140 114th Street
Urbandale, IA 50322
[email protected]
ABSTRACT
Design frameworks can be helpful in the development of the complex systems needed to
automate machines. Designing autonomous off-road machinery requires the means for
managing the complexity of multiple interacting systems. A design framework consisting of four
technical layers is presented: (1) machine architecture, (2) machine awareness, (3) machine
control, and (4) machine behavior. Examples of technologies advanced in development efforts
on autonomous robotic platforms for agricultural applications are provided, and linkages are
made to applications in the construction machinery sector. Similarities between agricultural
and construction automation exist in each of the technical layers.
Precision agriculture is a management strategy to reduce the management scale from the field
scale to sub-field scales, on a meter-by-meter basis or in management zones with similar soils
or topography. It is an approach to intensively managing the spatial and temporal variability of
agricultural fields. Precision agriculture is enabled by automation technologies, but there are
many examples around the world where it is practiced under low-technology conditions.
In the large-scale agricultural practices of North America, Europe, and Australia, among others,
automated operation of agricultural machines has been relied upon to achieve the goals of
precision agriculture. As an example, variable-rate application of chemical inputs, one of the
major precision agriculture practices, requires the application rate to be changed on-the-go,
sometimes within every square meter of a field. Manual operation and control of the machine is
infeasible under large field conditions; thus, automatic rate control has been implemented on
these machines.
However, while many agricultural machinery operations have some automation technology
embedded in them, there are few examples of autonomous or robotic machines in agriculture.
Nevertheless, based on the industry's experience with automation technology, a vision of
autonomous and robotic systems has developed. The use of small field robots is desirable for
many precision agriculture practices, such as soil sampling, crop scouting, site-specific weed
control, and selective harvesting. Robotic applications are not only desirable, but are also more
economically feasible than conventional systems for some agriculture applications (Pedersen et
al., 2006).
There are several underlying motivations to move to more autonomous and robotic agricultural
field operations. First, in many cases, automated processes can achieve greater precision in
meeting performance specifications than can humans. An automatically guided tractor, for
example, can be driven through a field with smaller deviations from a straight line, resulting in
less overlap in applied inputs and fewer skipped, untreated areas.
Automation thus leads to greater input efficiency. Second, the availability and cost of labor can
be prohibitive in agriculture, particularly because of the timeliness requirements of
processes such as planting and harvesting. Agricultural automation can extend the productivity
of human labor by working in collaboration with humans in a co-robotic fashion. Third, recent
increases in agricultural productivity have been achieved through increasing equipment size.
Many agricultural machines are now facing size limits, and increased size also leads to soil
compaction, which has a negative impact on crop yield. Autonomous field robots have the
potential to address productivity barriers and reduce soil compaction at the same time through
small vehicle platforms operating in fleets to accomplish the needed work rates. Such small
vehicles also have the potential to reduce energy inputs (Toledo et al., 2014).
While there are many differences between the agricultural machinery and construction
machinery sectors, there are also several similarities. Similarities between the two sectors
include machinery interaction with media such as soil or biomaterials with uncertain physical
parameters that are spatially and temporally varying. Both sectors need to lower costs and
improve input efficiency, as well as improve performance and productivity. Additionally, in both
cases, human operators interact with the machines, and other people work in close proximity to
the machines, so safety is of primary importance. Construction and agriculture both seek to
minimize environmental impact while developing infrastructure that meets human needs.
The goal of this paper is to present a design framework developed for autonomous and robotic
machines in agriculture, in the hope that it might inspire similar thinking about robotic
construction machines, particularly those used for road construction or earthmoving.
Machine Architecture
In autonomous systems engineering, an architecture is a means for managing complexity.
Autonomous and robotic machines are of necessity complex systems composed of various
components and sub-systems, many of which are complex systems themselves. Since
individual humans and teams are limited in their time and resources, as well as their ability to
keep track of details, they need a way to manage system complexity during development. The
principle of abstracting complexity through encapsulation of components and using clearly
defined interfaces to the components is generally what is meant by the phrase “robotic system
or software architecture.”
Autonomous robots require, to varying degrees, architecture for both hardware and software.
Just thinking about the components required for a particular robotic application and how those
components are connected to one another and are interacting with one another is a simple
example of system architecture. There is excellent potential for leveraging work across research
teams through system architectures that can be shared. These architectures can be proprietary
so that development teams can internally manage complexity. Architectures can also be open
and public to facilitate more rapid development across development teams, as well as to
facilitate the interconnectivity of components and sub-systems available on the market.
Kramer and Scheutz (2007) surveyed nine open source robotic development environments, or
system architectures, for mobile robots, and evaluated their usability and impact on robotics
development. Jenson et al. (2014) surveyed available robotic system architectures including
CARMEN, CLARAty, Microsoft Robotics Developer Studio, Orca, Orocos, Player, and ROS.
They also found examples of lesser known architectures which may be more relevant to
agricultural robots, including Agriture, Agroamara, AMOR, Mobotware, SAFAR, and Stanley. Of
these, four architectures, CARMEN, Agroamara, Mobotware, and SAFAR, had field trials for
agricultural applications. However, open source availability was limited and only Mobotware had
been recently updated.
In early efforts to promote architectural thinking about agricultural robots, Blackmore et al.
(2002) proposed a conceptual system architecture for autonomous tractors that consisted of a
set of objects or agents which have well defined narrow interfaces between them. The two types
of agents are processes and databases. A process carries out tasks to achieve a goal. Nine
processes were defined and described: Coordinator, Supervisor, Mode Changer, Route Plan
Generator, Detailed Route Plan Generator, Multiple Object Tracking, Object Classifier, Self-
Awareness, and Hardware Abstraction Layer (Figure 2). Three databases were defined
(Tractor, Implement, and GIS) and are used to store and retrieve data about the machine and its
operational context. This type of architectural thinking could be applied to construction robots as
a means for determining the structure required for construction robots.
The Joint Architecture for Unmanned Ground Systems (JAUGS) has seen some
implementations in agriculture including an autonomous orchard tractor and an autonomous
utility vehicle (Torrie et al., 2002). JAUGS was primarily a standard messaging architecture to
enable components to communicate with one another in a standard manner. Later, JAUGS was
changed to JAUS (Joint Architecture for Unmanned Systems) to be more generally applied to all
types of unmanned vehicles and became a Society of Automotive Engineers standard. The
standard has two parts: the Domain Model, which describes the goals for JAUS, and the
Reference Architecture, which specifies an architecture framework, a message format, and a
standard message set (Rowe and Wagner, 2008).
The Robot Operating System (ROS; Open Source Robotics Foundation) is a general open-
source robotics framework not specific to any application domain. It provides an interface
for passing messages between processes running on different host computing platforms that
make up the computer hardware architecture of a robot. ROS also provides a broad set of
libraries and tools useful for robotics development. Libraries include (1) standard robot message
definitions, (2) the transform library for managing coordinate transform data, (3) a robot
description language for describing and modelling a robot, (4) means for collecting diagnostics
about the state of the robot, and (5) packages for common robotics problems such as pose
estimation, localization, and mobile navigation (ROS.org, 2015; Quigley et al., 2009). ROS has
applicability to autonomous construction machines. It has been used as a part of larger system
architectures for agricultural machines.
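As an illustration of the message-passing pattern such architectures provide, the following is a minimal publish/subscribe sketch in plain Python; the topic names and message fields are illustrative assumptions, and this is not the ROS API itself.

```python
# Minimal sketch of publish/subscribe message passing of the kind a robotic
# middleware such as ROS provides; names here are illustrative, not ROS APIs.
from collections import defaultdict

class MessageBus:
    """Routes messages from publishers to all subscribers of a topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

# Example: a perception node publishes an obstacle; a planner node reacts.
bus = MessageBus()
received = []
bus.subscribe("/perception/obstacles", lambda msg: received.append(msg))
bus.publish("/perception/obstacles", {"type": "tree", "distance_m": 4.2})
```

The decoupling benefit is the point: the planner never needs a direct reference to the perception component, which is what allows teams to develop and swap sub-systems independently.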
Robotic software system architectures provide the means for handling complexity through well-
defined processes and messaging, as well as higher level features, all of which are needed for
construction robots. These architectures also promote reusability, which enables research and
development teams to build on one another’s work and move toward more autonomy in
construction machines.
Machine Awareness
The next design framework layer, machine awareness, is built on the machine architecture layer,
which contains the transducers that convert machine and environment signals into electrical
signals. Machine awareness, conversion of sensor signals into knowledge about machine state
and work environment, is fundamental to producing autonomous machine behavior.
Autonomous machines must have awareness of their state and location, the work environment
including objects to be avoided and the shape of the terrain, and properties of the material that
the machine is processing. In addition, they must have machine health awareness. This
machine awareness layer mainly consists of localization and perception technologies.
site. In these cases, machine localization using relative position sensors has advantages.
Included in this localization sub-layer are sensor fusion methods enabling more robust
localization through complementary sensors. Sensor fusion can extend localization when one of
the sensor signals is lost and can improve localization accuracy when various error sources
exist from any single sensor in the system.
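A minimal sketch of such complementary fusion, under simplified one-dimensional assumptions; the gain, rates, and signal names are hypothetical, not any vendor's algorithm:

```python
# Hedged sketch: 1-D complementary fusion of wheel-odometry increments
# (high rate, drifts) with absolute GPS fixes (low rate, noisy).
def fuse_position(est, odom_delta, gps_fix=None, gps_gain=0.2):
    """Propagate the estimate with odometry; correct toward GPS when available."""
    est = est + odom_delta          # dead-reckoning prediction
    if gps_fix is not None:         # blend in the absolute measurement
        est = est + gps_gain * (gps_fix - est)
    return est

# Odometry arrives every step; a GPS fix arrives every fifth step.
est = 0.0
for step in range(10):
    gps = float(step + 1) if step % 5 == 4 else None
    est = fuse_position(est, odom_delta=1.0, gps_fix=gps)
```

When the GPS signal is lost, the estimate continues from odometry alone, which is the robustness property the text describes; a full implementation would typically use a Kalman filter with modeled noise covariances.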
Before a machine can be classified as autonomous, it must perceive its environment to carry out
its tasks effectively and safely. A primary goal of machine perception is machine safeguarding to
ensure safe operation of the machine. Obstacle detection, recognition, and avoidance are
typical steps in machine safeguarding. Perception algorithms and strategies are built on top of
the perception sensors in the hardware architecture to achieve safeguarding functions.
Both agricultural and construction machines need to perceive features of the environment with
which they are interacting. Agricultural machines need to interact with the crop, soil and field
topography to accomplish field operations. Construction machines also need to interact with the
soil and job site terrain. For autonomous operation, machine perception systems are needed
with the following capabilities: localization to determine where the machine is relative to the
world coordinate systems, object recognition of obstacles around the machine, navigation and
collision avoidance so that the machine can safely interact with its environment, and learning
and inference so that the perception system can solve new problems. Han et al. (2015)
With any machine, failures or breakdowns will occur. Thus, the condition of the machine must
be monitored, and machine health awareness is needed to achieve machine autonomy. In a
human-operated machine, the operator is not only controlling the operation of the machine, but
is also monitoring the machine through visual, audio, or vibration cues to ensure that the
machine is functioning correctly. As machinery is automated, machine condition monitoring,
along with fault detection and diagnosis, also must be automated, although it can still have
some reliance on human intervention when a human operator is present. For driverless,
autonomous machines, machine intelligence to monitor machine health must be in place to
produce machine health awareness with no human assistance, a demanding requirement for the
development of these machines. Machine health awareness requires a high degree of
intelligence, perhaps higher than any other requirement for an autonomous machine.
Central to health awareness are technologies often referred to as condition monitoring systems
or fault detection and diagnostic systems. Condition monitoring is typically part of an overall
maintenance strategy for a process, machine or machine system, which will involve a human
manager. It uses signals from a machine acquired with sensors to provide some indication of
the condition of machine components. Based on these signals and their changes over time, with
some signal processing and pattern recognition analysis, managers can make decisions about
what maintenance interventions should be taken and when they should be scheduled.
Implementing a machine health awareness system for an autonomous machine requires several
layers of technology, which can be structured in a format similar to the design framework
presented above (Figure 4). For machine health awareness, there first must be a hardware
layer consisting of sensors that are measuring physical signals known to be related to machine
component condition. Several sensing modes have been used for condition monitoring and will
be described below. Secondly, the signals from the sensors must be preprocessed to remove
abnormal signals and then processed to extract the features that are correlated to machine
component conditions. Next, fault detection applies automatic pattern recognition processes to
determine if a fault has occurred in the system. Generally, this step involves finding deviations
from patterns associated with normal operation. Once a fault has been detected, it must be
diagnosed to identify what the fault is and what might have caused it.
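The deviation-from-normal idea can be sketched as a simple statistical check; the baseline readings, signal choice, and threshold below are illustrative assumptions, not a production diagnostic system:

```python
# Illustrative fault detection as deviation from normal-operation statistics:
# flag a reading whose z-score exceeds a threshold.
import statistics

def detect_fault(baseline, reading, threshold=3.0):
    """Return True if `reading` deviates beyond `threshold` sigmas of baseline."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(reading - mean) > threshold * sigma

normal_temps = [82.0, 83.5, 81.8, 82.9, 83.1, 82.4]  # e.g., oil temp, deg C
print(detect_fault(normal_temps, 83.0))   # within the normal band -> False
print(detect_fault(normal_temps, 97.0))   # far outside the band -> True
```

Real condition monitoring systems extract richer features (spectra, trends) before this pattern-recognition step, but the structure is the same: a model of normal operation plus a deviation test.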
The last layer might be the most important for an autonomous machine, i.e., to decide what
action should be taken next and then execute it. Several possible actions can be taken when a
fault occurs, including: (1) initiate a graceful shutdown and remain at current position, (2) stop
operations and move to a designated location for maintenance, (3) stop operations, alert remote
human supervisor for further instructions, or (4) continue operations, and send a warning
message to human supervisor. Blackmore et al. (2002) identified six safety modes similar to
those listed above.
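A sketch of dispatching one of the listed responses from a diagnosed fault severity; the severity levels and the mapping itself are illustrative assumptions, not the cited safety modes:

```python
# Map a diagnosed fault's severity to one of the response actions listed above.
ACTIONS = {
    "critical": "graceful shutdown, remain at current position",
    "high": "stop operations, move to designated maintenance location",
    "medium": "stop operations, alert remote supervisor for instructions",
    "low": "continue operations, send warning to supervisor",
}

def respond_to_fault(severity):
    """Choose a response; unknown severities fall back to stop-and-ask."""
    return ACTIONS.get(severity, ACTIONS["medium"])
```

Defaulting the unknown case to the conservative stop-and-ask action reflects the safety-first posture an unattended machine needs.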
Machine Control
Once an autonomous machine has awareness of its location, environment, and health, the
machine control layer must next be in place. For agricultural applications, machine control is
necessary to navigate the vehicle through the field and to control the implements accomplishing
field operations. In construction applications, the vehicle must be navigated along job site
paths while soil-engaging tools, such as buckets or blades, or construction processes, such as
soil compaction or paving, are controlled. Agricultural examples of machine control are
presented below to provide insight into how this domain has developed machine control leading
toward machine autonomy.
Navigation control of agricultural machines is highly developed and has progressed through
several generations of automatic guidance technologies as applied to conventional agricultural
vehicles and implements. However, for smaller, next-generation field robots, research questions
exist since robotic vehicle platforms may provide additional degrees of mobility freedom,
through independent four wheel steering (4WS) and four wheel drive (4WD), that can be utilized
for novel navigation control strategies.
The main goal of navigation control is to automatically guide or steer a vehicle along a path
and to minimize the error between the actual trajectory that the vehicle takes and the desired
path. Automatic guidance of mobile agricultural field equipment improves the productivity of
many field operations by improving field efficiency and reducing operator fatigue. The idea of
automatically guiding vehicles is by no means new, and relevant literature can be found from
several decades back (Parish and Goering, 1970; Grovum and Zoerb, 1970; Smith et al., 1985).
The launch of the Global Positioning System (GPS) in the early 1990s led to research
investigating the use of GPS as a positioning system for automatic guidance (Larsen et al.,
1994; Elkaim et al., 1997; Griepentrog et al., 2006; Burks et al., 2013). Commercialization of
GPS-based automatic guidance occurred in the first decade of this century, and it quickly
became one of the most widely adopted precision agriculture automation technologies.
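The core guidance idea, minimizing the error between the actual trajectory and the desired path, can be sketched with a simple proportional law; the gains, speed, and kinematics here are simplified assumptions, not a commercial guidance algorithm:

```python
# Hedged sketch of line-following guidance: steer proportionally to the
# cross-track and heading errors from a desired straight path.
import math

def steer(cross_track_error, heading_error, k_e=0.8, k_h=1.5):
    """Simple proportional guidance law combining both error terms."""
    return -(k_e * cross_track_error + k_h * heading_error)

# Simulate a vehicle starting 1 m off the line, driving at 2 m/s.
y, heading, dt, speed = 1.0, 0.0, 0.1, 2.0
for _ in range(100):
    heading += steer(y, heading) * dt   # steering rate command
    y += speed * math.sin(heading) * dt # simple kinematic update
# The cross-track error y decays toward zero along the run.
```

Commercial systems use richer vehicle models and implement-aware control, but the objective is the same: drive the cross-track error to zero with smooth steering.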
Navigation control for the guidance of construction equipment has different requirements. Often
3D control is required. For grading, the blade height is controlled along with the path of the
vehicle. However, there may be good cross-collaboration between agriculture and construction
motivated by the example of the agricultural robotics research community. Here, the vision of
autonomous systems opened up investigations into new machine forms which are feasible if a
human operator is no longer required. With the new machine forms come new navigation
control strategies. While this process may play out differently in the construction domain, a
similar broadening of feasible machine designs may occur.
Implement control has also been implemented commercially for various agricultural machine
operations. For example, in the case of liquid chemical application, application rate control
was first developed and commercialized in the late 1970s, and variable-rate application
systems were built on it in the 1990s. Since that time, more and more aspects of
machine operations have been automatically controlled.
Implement control is also available so that the burden on the operator to control implement
settings can be moved to automatic control. This reduces stress and fatigue on the operator and
gives the operator freedom to take on more of a supervisory role of the overall machinery
system. In addition, implement control often leads to the reduction of errors in the field operation
such as turning seeding on or off at the wrong location, overlapping adjacent swaths, or
skipping areas in chemical application. There are many implement control examples for field crop
machines.
Figure 5. Robust navigation control of a small four-wheel drive/steer agricultural robot (a)
tracking over a U-shaped path (b) (Source: Tu, 2013).
Blackmore et al. (2007) promoted a structure for defining the behaviors field robots need to
perform agricultural operations autonomously. At the highest level, a field operation is the action
that a robot will carry out to meet the needs of a crop’s cultural practices. Within an operation,
certain tasks must be carried out – either deterministic or reactive. Deterministic tasks can be
planned before the operation starts, are goal-oriented to achieve the objective of the operation,
and can be optimized to best draw on the resources available. Reactive tasks are foreseen
responses to uncertain situations that may occur during the operation. They are captured in
terms of behaviors that the robot should do in response to new situations. For example, when
an unknown obstacle is perceived in the current path of the robot, the robot should behave
according to the type of obstacle. If a tree is perceived in the path, the robot could alter its path
to go around it. If an animal is detected in the path, the robot might wait until it moves away, or
produce stimuli to scare the animal away, or stop and seek guidance from a human supervisor.
An example deterministic task is field coverage where the robot covers a field by navigating
through a predetermined coverage path. Several examples of autonomous machine behavior
research in the agricultural domain are presented below.
Figure 6. Example results from an optimized coverage path planning algorithm for a planar
field surface. The inner polygons indicate non-traversable obstacles (Jin and Tang, 2006,
2010).
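A much-simplified sketch of coverage path generation for an obstacle-free rectangular field; real planners such as those cited also optimize the driving direction and handle obstacles and headland turns:

```python
# Illustrative boustrophedon (back-and-forth) coverage path for a simple
# rectangular field; dimensions and swath width are example values.
def coverage_path(width, length, swath):
    """Return waypoints covering a width x length field in parallel swaths."""
    path, x, going_up = [], swath / 2.0, True
    while x < width:
        y0, y1 = (0.0, length) if going_up else (length, 0.0)
        path += [(x, y0), (x, y1)]     # drive one full swath
        x += swath                     # shift over to the next swath
        going_up = not going_up        # alternate direction (headland turn)
    return path

wp = coverage_path(width=20.0, length=100.0, swath=4.0)
```

For a 20 m wide field with a 4 m swath this yields five swaths (ten waypoints); the optimization problem in the cited work is choosing the swath direction and turn sequence that minimize total cost over irregular, obstructed fields.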
Optimized Coverage Path Planning on 3D Terrain
More factors must be considered when optimizing the coverage path over terrain with three-
dimensional (3D) topographic features. The main factors are headland turning, soil erosion, and
skipped area. Jin and Tang (2011) approached the problem by first developing an analytical 3D
terrain model with B-Splines surface fits to facilitate the computation of various path costs. Then
they analyzed different coverage costs on 3D terrains and developed methods to quantify soil
erosion and curving path costs of particular coverage path solutions. Similar to the planar field
approaches, they developed a terrain decomposition and classification algorithm to divide a field
into sub-regions with similar field attributes and comparatively smooth boundaries. The most
appropriate path direction of each region minimized coverage cost. A “Seed Curve” search
algorithm was successfully developed and applied to several practical farm fields with various
topographic features (Figure 7).
The optimized vehicle routing algorithm of Bochtis et al. (2009) produced a different field
pattern for each particular operation, each optimal in non-working travel distance (Figure 8).
Figure 8. Differences between traditional turning pattern (a) and optimized turning
pattern (b) from an optimized vehicle routing algorithm for a mowing operation (Bochtis
et al., 2009).
CONCLUSIONS
A design framework for autonomous machines was presented in this paper. While the
framework has emerged from the agricultural domain, its generality allows a broader application
to other domains. The layers of machine architecture, awareness, control, and behavior will
need to be developed for autonomous machines in any domain.
ACKNOWLEDGMENTS
This research was supported by Hatch Act and State of Iowa funds, and this paper is a
publication of the Iowa Agriculture and Home Economics Experiment Station, Ames, Iowa.
ABSTRACT
Simulation of granular materials (soil, rocks) interaction with earthmoving machines provides
opportunities to accelerate new equipment design and improve efficiency of earthmoving
machine performances. Discrete Element Modelling (DEM) has a strong potential to model soil
and rocks bulk behavior in response to forces applied through interaction with machinery.
A numerical representation of granular materials and a methodology to validate and
verify constitutive micro-mechanical models in DEM will be presented. In addition, how DEM
codes can be integrated with CAE tools such as multibody dynamics will also be discussed. A
case study of tillage bar-soil interaction was modeled in EDEM to predict the tillage draft force
and the soil failure zone in front of a tool moving at 2.68 m/s at a depth of 102 mm. The draft
force and soil failure zone were predicted within 10% and 20% error, respectively, of
laboratory-measured data.
Simulation-based tools consist of CAD geometry surface mesh generation, pre-processing, a
material model, a solver, post-processing, and data analytics for engineering decision support.
Modelling geomaterial-tool interaction requires versatile material models, quick and easy
testing methods to generate data for calibrating model parameters, and a numerical tool to
solve geomaterial-tool interaction responses. Generally, there are two broad categories of
geomaterial-tool interaction problems: (1) load-loosening processes, for instance in tillage
tools, soil flow from bulldozer blades, and soil fill in loader buckets; and (2) load-bearing
processes in soil-to-tractive-device interactions, where soil supports vehicle loads and helps generate
traction. The desired engineering objectives are generally to reduce energy expenditure during
cutting and tillage processes, maximize soil fill in buckets, promote easy soil flow from crawler
blades, maximize traction, and achieve optimal soil density for growing crops.
Discrete Element Modelling (DEM) has the potential to simulate soil-tool interaction and could
also be integrated in co-simulation with other systems modelling tools such as Finite Element
Analysis (FEA), Multi-Body Dynamics (MBD) and Computational Fluid Dynamics (CFD). DEM
formulation comprises numerical representation of particle shape and size, assembly of
particles, constitutive micro-mechanical contact laws that define force-displacement
relationships, and, at every time step, contact detection and explicit numerical integration
governed by Newton's laws of motion (Cundall and Strack, 1979). Details of contact laws and
their formulation are available in the literature (Walton and Braun, 1986; Luding, 2008; Cleary,
2010; EDEM, 2011). DEM contact laws originated from Hertzian contact theory, and there are
now advanced contact models that define the relationship between forces and displacements
using material normal and tangential stiffness, Coulomb friction coefficient, damping
coefficient, rolling resistance coefficient, cohesion/adhesion, and bond parameters.
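The DEM core loop described above can be sketched in one dimension with a linear spring-dashpot normal contact law (a simplification of the Hertzian-family models named in the text) and explicit integration; all parameter values are illustrative, not calibrated material properties:

```python
# One-dimensional DEM sketch: linear spring-dashpot normal contact between
# two spheres, integrated explicitly under Newton's second law.
def contact_force(overlap, rel_vel, k=1.0e5, c=5.0):
    """Repulsive normal force: linear spring on overlap plus viscous damping."""
    return k * overlap + c * rel_vel if overlap > 0.0 else 0.0

# Two 0.01 kg particles of radius 5 mm approaching head-on at 1 m/s each.
m, r, dt = 0.01, 0.005, 1.0e-6
x1, x2, v1, v2 = 0.0, 0.011, 1.0, -1.0
for _ in range(2000):
    overlap = 2.0 * r - (x2 - x1)          # positive once surfaces touch
    f = contact_force(overlap, v1 - v2)    # equal and opposite on the pair
    v1 += (-f / m) * dt                    # explicit integration of F = m*a
    v2 += (f / m) * dt
    x1 += v1 * dt
    x2 += v2 * dt
# After the collision the particles separate with reduced speed (damping
# dissipates energy), while total momentum is conserved.
```

Production DEM codes do the same thing at scale: contact detection across millions of pairs per step, with nonlinear (Hertz-Mindlin) laws, tangential friction, and rolling resistance added to this normal-force skeleton.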
Researchers have shown the predictive capability of DEM to model the behaviour of granular
materials since the late 1970s, following Cundall and Strack (1979). Some of the works related to
earthmoving include DEM modelling of wide cutting blade-soil interaction in a scaled
experimental box to predict forces on crawler blades and soil flow in front of the cutting blade
(Shmulevich et al., 2007); simulation of hydraulic excavator digging process using confining
stress dependent DEM cohesive soil model (Obermayr et al., 2014); and DEM modelling of soft
ground cutterhead (4.2-m in diameter) predicting torque performance for different Tunnel Boring
Machine (TBM) cutterhead designs (Mongillo and Alsaleh, 2011).
This paper illustrates potential opportunities and challenges with DEM for virtual prototyping of
earthmoving construction equipment and automation processes. The discussion in this paper
may apply to the interactions with geomaterials, ranging from clay-sized (less than 0.002 mm)
to gravel (75 mm), of crawler blades (over 3 m × 3 m, blade width × blade height, for instance
John Deere 1050K and CAT D9), buckets (with loader capacity over 6 m3, for instance John
Deere 844 and CAT 980), and ripper tines.
The challenges in utilizing DEM arise from the difficulty of particle-based approximation of
geomaterials and their dynamic behaviour. Geomaterials have spatial-temporal variations in
condition (wet to dry), a wide range of particle sizes (clay to gravel), and stochastic bulk
responses to loading. Approximating these realistic geomaterial types, sizes, and conditions
using DEM spheres, and solving bulk geomaterial interactions at large equipment scales,
remain challenging.
In DEM codes, particle shape is approximated using spheres, glued (clumped) spheres, and
non-spherical particles (for instance, polyhedra and ellipsoids). Spherical particle
representation is the simplest and computationally cheapest, though it tends to under-represent
particle interlocking.
Similar to shape representation, the approximation of the real geomaterial size distribution
also influences DEM bulk material response behaviour and computational effort. The smallest
DEM particle size determines the explicit time step for the DEM calculation. In geomaterial
DEM modelling, it is computationally prohibitive to match the equivalent clay, silt, and sand
size fractions.
DEM particle size scaling is thus necessary and can be done using a linear scaling factor
applied to the DEM mean particle size or particle size distribution, or using the ratio of
particle count to geometry (wall) dimension. Lee et al. (2012) applied DEM particle size
scaling by a factor of 10 relative to the experimentally measured sub-angular, uniformly
graded fine sand distribution, normalized by D50. In a DEM simulation of triaxial compression
with polyhedral particle shapes, Lee et al. (2012) successfully reproduced the initial packing
density and stress-strain response of an undrained triaxial compression test on sand.
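The link between the smallest particle size and the explicit time step is often estimated with the Rayleigh critical time step; below is a sketch using the baseline properties from Table 2, where the coefficient form of the formula is the widely used approximation t_R = pi*R*sqrt(rho/G)/(0.1631*nu + 0.8766):

```python
# Rayleigh critical time step estimate for the smallest particle in a DEM
# assembly; material values follow Table 2 of this paper.
import math

def rayleigh_timestep(radius, density, shear_modulus, poisson):
    """Approximate Rayleigh time step (s) for a particle of given radius (m)."""
    return (math.pi * radius * math.sqrt(density / shear_modulus)
            / (0.1631 * poisson + 0.8766))

# Baseline properties from Table 2: G = 1e6 Pa, rho = 2650 kg/m3, nu = 0.3.
dt_1mm = rayleigh_timestep(0.001, 2650.0, 1.0e6, 0.3)
dt_10mm = rayleigh_timestep(0.010, 2650.0, 1.0e6, 0.3)
# Scaling the particle size up by 10x relaxes the critical step by the same
# factor, one reason size scaling makes large simulations tractable.
```

In practice a fraction (often 20 to 40 percent) of the Rayleigh step is used as the simulation time step.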
Limited studies are available that investigate various DEM shape approximations and particle
scaling methodologies, and how they trade off fidelity of bulk material dynamic response
against computational effort. Similarly, studies are needed on methodology for upscaling from
the small simulations used to calibrate DEM model parameters, shape, and size to simulations
of large earthmoving applications.
Soil mechanical properties (stress-strain, angle of internal friction, cohesion) obtained from
geotechnical ASTM standard tests (triaxial, shear, and others) can be good calibration targets.
Other experimental tests that reflect in-situ geomaterial behaviour and allow measurement of
multiple dependent variables, such as angle of repose, deformation, torque, and forces, will
provide enhanced accuracy over a wider range of granular dynamic behaviour.
The DEM calibration process involves first reproducing the initial packing density (void ratio)
of the particle assembly with estimated initial model parameters. DEM virtual experiments are
then conducted, taking the model parameters as independent variables and response
properties similar to those of the experimental test as dependent variables. In addition to the
material model parameters (stiffness and coefficients), shape and size parameters can be
added as independent variables during the calibration process. Reducing the number of model
parameters for calibration is always helpful. For instance, in quasi-static engineering systems,
determination of the coefficient of restitution may be less important than shear stiffness and
friction coefficients. In dynamic system applications, for instance in grind milling and transfer
chutes, where system performance depends on characterizing collision energy losses, the
coefficient of restitution is important. A sensitivity and optimization scheme is then deployed to
generate the calibrated DEM particle model and properties to be used for application simulation.
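The calibration loop described above can be sketched as a parameter sweep against a measured response; here `virtual_angle_of_repose` is a toy stand-in for an actual DEM virtual experiment, and all numbers are assumptions for illustration:

```python
# Sketch of DEM calibration as a sweep over candidate friction coefficients
# (independent variables), keeping the set minimizing error against a
# measured response (dependent variable).
def virtual_angle_of_repose(static_friction, rolling_friction):
    # Toy surrogate for a DEM repose simulation: angle grows with both frictions.
    return 15.0 + 25.0 * static_friction + 10.0 * rolling_friction

measured_angle = 31.0  # degrees, assumed physical angle-of-repose measurement
candidates = [(s, r) for s in (0.3, 0.5, 0.7) for r in (0.1, 0.2, 0.4)]
best = min(candidates,
           key=lambda p: abs(virtual_angle_of_repose(*p) - measured_angle))
```

A real workflow replaces the surrogate with full DEM runs and the grid with a sensitivity-driven optimization scheme, since each virtual experiment is expensive.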
Simulation of Application
Upscaling and computational accuracy
The main engineering value from DEM modelling is obtained by simulating the industrial
application using the calibrated DEM particle model. Depending on the simulation domain of
the industrial application, the particle size or distribution used in the calibration process may
need scaling before being used for the application simulation. Upscaling a DEM particle model
to a large simulation requires know-how on particle-to-geometry system similarity and the
scale-invariance of contact models (Feng et al., 2009). Interpreting DEM results from virtual
equipment design changes does not necessarily eliminate the uncertain, stochastic nature of
geomaterial behaviour and the associated equipment performance. DEM results with
acceptable variance that show value-added trends of improved performance from virtual
equipment changes can be considered a successful outcome.
Some engineering design and analysis may require coupling with CFD, FEM, and MBD. For
earthmoving applications, coupling DEM with MBD and FEM is most applicable for transferring
transient loads from geomaterials into rigid or flexible multibody mechanical or hydraulically
driven systems for equipment (excavator buckets, blades) kinematic motion control and
structural stress analysis. DEM coupling with other tools involves surface element position
mapping, interpolation, and synchronization of sampling times (Favier, 2011), which affect
stability and accuracy. For automation of earth-moving operations with soil-tool interaction in the
Table 2. Baseline DEM properties for the Hertz-Mindlin (No Slip) contact model (EDEM, 2011)
Poisson’s ratio 0.3
Shear modulus (Pa) 1e+06
Density (kg/m3) 2650
Soil:Soil Interaction
Coefficient of restitution 0.01
Coefficient of static friction 0.36
Coefficient of rolling friction 0.4
Soil:Steel Interaction
Coefficient of restitution 0.01
Coefficient of static friction 0.33
Coefficient of rolling friction 0.2
Figure 2. DEM-predicted horizontal (draft) force for the rigid flat bar for 5 mm and 10 mm.
The lab-measured draft force was 416 N (standard deviation = 36.9 N).
Table 3. DEM parameters used for the sensitivity study (the baseline was taken as the lowest
value; HH = High High, MM = Medium Medium, and ML = Medium Low indicate rankings over
the soil:soil and soil:steel interaction ranges).
Sensitivity classes for EDEM runs
DEM Parameters Base HH MM ML
Soil:soil Coefficient of static friction 0.36 0.90 0.60 0.60
Soil:soil Coefficient of rolling friction 0.40 0.90 0.60 0.60
Soil:steel Coefficient of static friction 0.33 0.90 0.60 0.30
Soil:steel Coefficient of rolling friction 0.20 0.90 0.60 0.30
The results for predicting horizontal (draft) force and soil failure from tool bar interaction with soil
are shown in Figure 3. The Medium Low (ML) DEM interaction parameters gave a good
prediction of the horizontal force (10% error) compared to the lab-measured value. The soil
failure zone in front of the tool was better predicted with the baseline properties, at 20% error.
The DEM parameters that provide the better estimates for force prediction and for soil flow lie in
different ranges, especially for the soil:soil particle interaction parameters. Further study is
needed to optimize the DEM parameters for these two conflicting objective functions. The shear
modulus may also affect the force prediction and should be considered in the next optimization
steps.
This exercise demonstrates the value of an adaptive system approach and of engineering
know-how of the problem when calibrating DEM parameters, instead of depending on
measurement of individual DEM parameters.
• Simulation of tillage bar-soil interaction was run in EDEM and showed the sensitivity of
  Hertz-Mindlin contact model parameters for predicting horizontal tillage tool force and
  soil failure zones. This demonstrates the value of an adaptive system approach in utilizing
  a DEM model for tool-soil interaction problems.
• Future research is needed on "realistic" geomaterial shape approximation vs. accuracy
  of dynamic granular behaviour; development of material tests fit for DEM calibration
  purposes; robust calibration and optimization methodology for shape, particle size, and
  material models; and methods to evaluate application simulation output uncertainty and
  variance for earth-moving virtual product development.
REFERENCES
Barker, M.E. (2008). Predicting loads on ground engaging tillage tools using computational fluid
dynamics. PhD. Dissertation. Iowa State University.
Cannon, H. and S. Singh (2000). Models for Automated Earthmoving, In P. Corke and J.
Trevelyan, Editors, Experimental Robotics VI, Lecture Notes in Control and Information
Sciences, Springer Verlag, Sydney. 183-192.
Cleary, P.W. (2010). DEM prediction of industrial and geophysical particle flows. Particuology
10:106-118.
Cundall, P.A. and O.D.L. Strack (1979). A discrete numerical model for granular assemblies.
Geotechnique 29:47-65.
EDEM (2011). EDEM theory reference guide. Edinburgh, UK: DEM Solutions.
Feng, Y.T., K. Han, D.R.J. Owen and J. Loughran (2009). On upscaling of discrete element
models: similarity principles. Engineering Computations 26(6): 599-609.
Favier, J. (2011). Using DEM for engineering design and analysis: opportunities and
challenges. 6th International Conference on Discrete Element Modelling (DEM 6) Proceedings,
Golden, Colorado, USA. Pp 36-4.
Höhner, D., S. Wirtz and V. Scherer (2015). A study on the influence of particle shape on the
mechanical interactions of granular media in a hopper using the Discrete Element Method.
Powder Technology 278: 286-305.
Itasca Consulting Group, Inc. (2015). PFC (Particle Flow Code in 2 and 3 Dimensions), Version
5.0, Documentation Set. Minneapolis: ICG.
Lee, S. J., Y.M.A. Hashash and E.G. Nezami (2012). Simulation of triaxial compression tests
with polyhedral discrete elements. Computers and Geotechnics 43: 92-100.
Luding S (2008). Cohesive, frictional powders: contact models for tension. Granular Matter 10:
235-246.
Obermayr M, C. Vrettos, P. Eberhard and T. Dauwel (2014). A discrete element model and its
experimental validation for the prediction of draft forces in cohesive soil. Journal of
Terramechanics 53: 93-104.
Shmulevich I., Z. Asaf and D. Rubinstein (2007). Interaction between soil and a wide cutting
blade using the discrete element method. Soil & Tillage Research 97: 37-50.
Walton, O.R. and R.L. Braun (1986). Stress Calculations for assemblies of inelastic spheres in
uniform shear. Acta Mechanica (63):73-86.
ABSTRACT
Use of automated machine guidance (AMG), which links sophisticated design software with
construction equipment to direct the operations of construction machinery with a high level of
precision, has the potential to improve the overall quality, safety, and efficiency of transportation
construction. Many highway agencies are currently moving towards standardizing the various
aspects involved in AMG, from developing the design files to implementing them during
construction. In this paper, two aspects of AMG and their impacts on earthwork operations are
discussed. The first aspect deals with the estimation of earthwork quantities and its impact on
productivity and costs. The second deals with the factors contributing to the overall accuracy of
AMG. These two aspects are discussed using survey responses from various AMG users
(contractors, agencies, software developers, and equipment manufacturers) and some
experimental test results. Both aspects are critical to understand during implementation of
AMG, as they have productivity and cost implications for the users.
INTRODUCTION
Currently, highway agencies are improving electronic design processes that support
construction with automated machine guidance (AMG) and deliver higher quality products to the
public. Equipment providers are rapidly advancing software tools and machine systems to
increase automation in the design and construction process. Motivation to more widely adopt
AMG therefore exists. However, the framework for integrating AMG into the complex process
from design to construction has not been fully developed. Technical, equipment, software, data
exchange, liability/legal, training, and other barriers limit progress with AMG implementation on
construction projects. To address these issues, a national-level study was initiated by the
Transportation Research Board as the National Cooperative Highway Research Program
(NCHRP) 10-77 study.
In this paper, two specific aspects of AMG that directly influence earthwork operations are
discussed. The first deals with earthwork quantity estimation using AMG. The second deals with
the accuracy of AMG and the various factors that contribute to errors in the AMG process. Both
aspects are critical to understand from a practical perspective, as they have productivity and
cost implications for contractors and agencies. In the following, each aspect is discussed
separately by presenting results of a national survey with over 500 participants from agencies,
contractors, equipment vendors, and software vendors, along with some experimental tests
conducted by the authors. Survey results for selected questions are presented herein for
brevity; all results are available in White et al. (2015).
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
EARTHWORK QUANTITY ESTIMATION
Earthwork pay items are historically objects of great dispute between agencies and contractors.
Proper use of digital information for AMG will likely result in less confusion and more accuracy
than traditional methods of earthwork pay item quantification and payment. According to the
survey responses (see White et al. 2015), a majority of the survey responding contractors
currently use DTMs for estimating quantities, means and methods, constructability, quantity of
the progress of work, and payment. Earthwork pay quantification from AMG must include
mechanisms that all parties in the contract (both the agency-owner and the contractor) can trust.
The efficient use of digital information in AMG applications typically involves creation of a digital
terrain model (DTM) during initial planning, which is then passed to the design phase for
addition of design data in a 3D model. This facilitates efficient computation and measurement of
earthwork quantities for use during the procurement phase (bidding). Finally, the construction
phase involves verification of project as-built quantities.
Figure 1 presents responses from contractors and vendors on the impact of AMG on
productivity gain and project cost savings. A majority of the equipment vendors indicated
potential productivity gain of about 40% and potential cost savings of about 25 to 40% using
AMG. On the other hand, a majority of the contractors indicated potential productivity gain of
about 10 to 25% and potential cost savings of about 10 to 25% using AMG. Productivity gain
and cost savings reported in the literature on earthwork construction projects using AMG is also
presented in Figure 1 (Jonasson et al., 2002; Aðalsteinsson, 2008; Forrestel, 2007; Higgins,
2009; Caterpillar, 2006).
Jonasson et al. (2002) reported productivity gain and cost savings information for a fine grading
project using a motorgrader with different position measurement technologies (i.e., ultrasonics,
2D and 3D lasers, and GPS). The productivity gain ranged from about 20 to 100% and cost
savings ranged from about 15 to 40%, depending on the position measurement technology
used. The cost savings were due to a reduction in surveying support and grade checking, an
increase in operational efficiency, and a decrease in number of passes. Their study indicated
that the 3D laser systems required a direct line of sight to the equipment while the GPS systems
did not, which resulted in a small increase in fleet productivity and a decrease in unit cost using
GPS guidance systems over 3D laser systems.
Aðalsteinsson (2008) reported results from a field demonstration project conducted using an
excavator to excavate a trench with 1650 cubic yards of sandy gravel material. In his study, the
AMG approach showed a productivity gain of about 25% over a no AMG approach. Caterpillar
(2006) reported results from a field demonstration project conducted in Spain by constructing
two 80 m identical roads: one road with AMG on construction equipment and the other with
similar equipment but using conventional methods and no AMG. AMG was used for bulk earth
moving and fine grading work. An overall productivity increase of about 101%, fuel cost savings
of about 43%, and increased consistencies in grade tolerances were reported for this project.
The results from these field case studies and survey responses indicate that the productivity
gain and cost savings using AMG on earthwork projects can vary significantly (with productivity
gains in the range of 5% to 270% and cost savings in the range of 10% to 70%). This variation
is most likely because of various contributing factors, such as project conditions, materials,
application, equipment used, position measurement technologies used, and operator
experience.
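The wide spread in reported numbers partly reflects that productivity gain and cost saving are related but not interchangeable. A simplified back-of-the-envelope relation is sketched below (a hypothetical model for illustration only; real project economics also involve mobilization, surveying support, and other factors):

```python
def unit_cost_saving(productivity_gain, hourly_cost_ratio=1.0):
    """Fractional saving in cost per unit of work when the output rate rises
    by `productivity_gain` (e.g., 0.25 for 25%) and the hourly owning and
    operating cost scales by `hourly_cost_ratio` (AMG hardware may raise it)."""
    return 1.0 - hourly_cost_ratio / (1.0 + productivity_gain)

# A 25% productivity gain with a 5% higher hourly cost:
saving = unit_cost_saving(0.25, hourly_cost_ratio=1.05)  # about 0.16, i.e., 16%
```

Under this simple model, a contractor's reported 10 to 25% productivity gain maps to a cost saving of a similar order, consistent with the overlapping survey ranges above.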
Figure 1. Survey Responses by Contractors and Vendors on Productivity Gain and
Potential Cost Savings using AMG, and Data Obtained from Field Case Studies
Various interpolation methods are available in the literature for generating contour grid data for
DTMs, which include: (a) inverse distance to power; (b) Kriging; (c) local polynomial; (d)
minimum curvature; (e) nearest neighbor; and (f) triangulated irregular network (TIN). To study
the influence of the number of data points, three different data sets, with 78, 38, and 11 data
points, were captured over a 540 m2 area. The area consisted of a sloping terrain with an
elevation difference of about 3.5 m over 60 m length. DTMs were generated using the six
different interpolation methods described above. DTMs generated from 78 data points are
presented in Figure 2.
The accuracy of each DTM that used 78 data points was evaluated using a cross-validation
technique. This technique involved taking out a known data point from the data set, estimating
the point using the model, and comparing the estimated value with the actual one. This process
was repeated for all 78 data points. An absolute mean error (calculated as the average of the
absolute differences between the actual and estimated values) was then calculated for each
interpolation method, as summarized in Table 1.
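The leave-one-out cross-validation described above can be sketched for, say, the inverse-distance-to-a-power interpolator; the data below are synthetic (a plane, not the paper's 78-point survey) and the function names are illustrative.

```python
import math

def idw_estimate(points, x, y, power=2.0):
    """Inverse-distance-to-a-power estimate of elevation z at (x, y)."""
    num = den = 0.0
    for px, py, pz in points:
        d = math.hypot(x - px, y - py)
        if d == 0.0:
            return pz                      # exact hit on a known point
        w = 1.0 / d ** power
        num += w * pz
        den += w
    return num / den

def loo_mean_abs_error(points, power=2.0):
    """Remove each known point, estimate it from the rest, and average
    the absolute errors (the cross-validation scheme described above)."""
    errors = []
    for i, (x, y, z) in enumerate(points):
        rest = points[:i] + points[i + 1:]
        errors.append(abs(idw_estimate(rest, x, y, power) - z))
    return sum(errors) / len(errors)

# Synthetic sloping terrain: z = 0.05x + 0.02y sampled on a sparse grid
pts = [(x, y, 0.05 * x + 0.02 * y) for x in range(0, 60, 12) for y in range(0, 9, 4)]
mae = loo_mean_abs_error(pts)
```

The same loop applies unchanged to any of the six interpolation methods; only `idw_estimate` would be swapped out.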
For this data set, results indicated that the Kriging method is the most accurate method with
0.02 m absolute mean error. The TIN method showed a slightly higher absolute mean value of
0.03 m. The grid generated using the Kriging method with 78 data points was then considered
as a “true” representative surface, and it was used as a comparison to the grid data generated
using the other interpolation methods, as summarized in Table 2. The Kriging method produced
absolute mean error of 0.02 m using 38 data points and 0.05 m using 11 data points. The TIN
method produced slightly higher absolute mean error values. Minimum curvature, local
polynomial, and inverse distance to power methods produced greater absolute mean error
values, compared to the TIN method. The nearest neighbor method could not replicate the
surface terrain, as it doesn’t interpolate the data, which is clearly a limitation of the method.
It is important that existing surfaces are portrayed as accurately as possible, so the model can
be passed ahead to the design, estimation, bidding, and construction phases of the project with
high fidelity. A proper understanding of the factors that influence the accuracy of the DTM is
therefore important, and these factors must be addressed during the model development phase.
Using DTMs, the surface-surface method can be used to compute quantities by overlapping the
existing terrain and the design DTM surfaces. The U.S. Army Corps of Engineers (2004)
provides a detailed explanation of the surface-surface quantity estimation method using TIN
surfaces. Many software applications (including Microstation and Autodesk) now have the
capability to easily compute quantities using the surface-surface method. The accuracy of the
generated DTM, as described above, plays a significant role in the estimated earthwork
quantities. Soil shrink-swell factors, which are dependent on the soil type, also affect the overall
quantity estimation, so they must be selected appropriately (Burch, 2007).
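On a regular grid, the surface-surface method reduces to differencing the existing and design elevations cell by cell; the sketch below is grid-based for brevity (the Corps of Engineers reference describes the TIN-based form), and shrink-swell factors would be applied afterwards.

```python
def surface_surface_volume(existing, design, cell_area):
    """Cut and fill volumes from two DTM grids of identical shape.
    Each cell contributes its elevation difference times `cell_area`."""
    cut = fill = 0.0
    for row_e, row_d in zip(existing, design):
        for ze, zd in zip(row_e, row_d):
            dz = zd - ze
            if dz > 0:
                fill += dz * cell_area
            else:
                cut += -dz * cell_area
    return cut, fill

# Tiny illustrative grids (elevations in m, 2 m x 2 m cells)
existing = [[10.0, 10.5], [10.2, 10.8]]
design = [[10.0, 10.0], [10.0, 10.0]]
cut, fill = surface_surface_volume(existing, design, cell_area=4.0)
# cut = (0.5 + 0.2 + 0.8) * 4.0 = 6.0 m3; fill = 0.0 m3
```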
Vonderohe et al. (2010) reported that differences between average-end-area and surface-
surface quantities increase as the cross-section intervals increase, although the relationship is
not linear. As the cross-section intervals decrease, the two computations become theoretically
the same. The differences can be as great as 5% when 100 ft cross-section intervals are used
with the average-end-area method. Such differences can contribute to significant cost
discrepancies for large projects. The advantage of using DTMs is that earthwork quantities can be computed
“on the fly,” as the model is being developed, and also during construction. Various layers and
volumes that represent various bid items and various costs can be collected and categorized
during the design process. Designed surfaces are accurately portrayed and can be passed
ahead in the AMG process with high fidelity.
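The interval sensitivity of the average-end-area method can be seen in a small worked example (synthetic cross-section areas, not project data): refining the stations changes the computed volume of the same earth prism.

```python
def average_end_area_volume(areas, interval):
    """Average-end-area method: V = sum over stations of (A1 + A2)/2 * L."""
    return sum((a1 + a2) / 2.0 * interval for a1, a2 in zip(areas, areas[1:]))

# The same feature sampled at 100 ft and at 50 ft station intervals
coarse = [100.0, 400.0, 100.0]                     # areas (ft^2) every 100 ft
fine = [100.0, 225.0, 400.0, 225.0, 100.0]         # areas (ft^2) every 50 ft
v_coarse = average_end_area_volume(coarse, 100.0)  # 50,000 ft^3
v_fine = average_end_area_volume(fine, 50.0)       # 47,500 ft^3, 5% lower
```

The 5% discrepancy in this toy case matches the order of difference Vonderohe et al. (2010) observed at 100 ft intervals.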
Limitations
The limitations in all of the above, however, include potentially higher up-front costs for
software, hardware, and highly-trained personnel, and the possible inability to make gut-level
checks for some types of design errors. Downstream personnel may be critical of design
personnel for alternative designs that were not used and documented in unused parts of the
model. Designers may consider inspection of the details of the design process by downstream
personnel to be too invasive of their professional autonomy.
Survey responses from surveyors and planners indicated total station surveying (robotic and
conventional) is considered more accurate than GPS and photogrammetric surveying.
Manufacturers and researchers have published the precision and accuracy values of various
position measurement technologies in the technical literature (Peyret et al., 2000; Retsher,
2002; Barnes et al., 2003; Mautz, 2008; and Trimble, 2008).
It does not appear that the effect of construction process and human errors has ever been
thoroughly studied or quantified. Most contractors, vendors, and agency personnel who
responded to the survey questions reported that these variables play a major role in the overall
accuracy of the AMG process.
GPS-based technologies can overcome the limitations stated above for laser and ultrasonic
technologies, but they do not offer high vertical accuracy. Peyret et al. (2000) noted that RTK-
GPS systems normally have a vertical accuracy (±2 cm) of about twice the horizontal accuracy
(±1 cm). A vertical accuracy of ±2 cm is not sufficient for applications such as paving or fine
grading. Another common problem reported with GPS-based technologies is limited availability
of satellites (and, consequently, signal attenuation) when operating close to structures, trees, or
in underground environments. Currently, the U.S. Air Force is committed to maintaining
availability of 24 operational GPS satellites 95% of the time (U.S. Air Force 2014) and is
projecting an increased number of satellites in the future. The relative gain in accuracy from an
increased number of satellites may be marginal (Hein et al. 2007); however, with the additional
satellites, AMG users can expect better odds of having the minimum number of satellites
required to achieve a given level of accuracy.
Recent advancements in the use of HA-NDGPS, with initiatives from FHWA, and globally
positioned GDGPS and IGS technologies are providing opportunities to achieve cm-level
accuracy without significant on-site investment. The U.S. Air Force is currently in the process of
developing and launching a next-generation GPS satellite (GPS III), which will be available for
all military and civilian applications with improved accuracies (U.S. Air Force 2014).
GPS with laser or ultrasonic augmentation offers improved vertical accuracies (2 to 6 mm)
(Trimble, 2008). From recent field studies on concrete paving projects in Iowa, Cable et al.
(2009) found that laser-augmented GPS measurements are somewhat capable of guiding the
paver and controlling elevation to achieve a reasonable profile for low-volume roads, but
recommended that improvements (or fine tuning) in software are required to better control the
elevation and produce smoother surface profiles.
The level of impact of each of these factors differs with the application type. Speed of operation
affects AMG accuracy and overall project costs. Increasing speed decreases the ability of
machines to react to error signals and, consequently, reduces the accuracy of the
measurement. However, productivity declines as speed declines, impacting project costs. The
effect of speed of operation is thus closely linked with the feedback response time of the
position measurement technology. The terrain on a job site can also have an impact: although
not critical for paving and fine-grading applications, terrain can be critical for general earthwork
and excavation applications.
The type of material and support conditions under the equipment (whether stable or unstable,
uniform or non-uniform) impacts the overall accuracy. Unstable or non-uniform support
REFERENCES
ARINC Inc. (2014). NDGPS Assessment Report, Final Report, Prepared by ARINC Inc. for
Operations Research and Development, Federal Highway Administration, McLean, VA.
Aðalsteinsson, D.H. (2008). GPS Machine Guidance in Construction Equipment, BSc. Final
Project Report, School of Science and Engineering, Háskólinn Í Reykjavík University,
Iceland.
Barnes J., Rizos, C., Wang, J., Small, D., Voigt, G., Gambale, N. (2003). “LocataNet: A New
Pseudolite-Based Positioning Technology for High Precision Indoor and Outdoor
Positioning.” Proceedings of the 16th International Technology meeting of the Satellite
Division of the U.S. Institute of Navigation, Portland, OR.
Burch, D. (2007). Estimating Excavation, Craftsman Book Company, Carlsbad, CA.
Cable, J.K., Jaselskis, E.J., Walters, R.C., Li, L., and C.R. Bauer. (2009), “Stringless Portland
Cement Concrete Paving.” Journal of Constr. Engrg. and Mgmt., 135(11), p.1253-1260.
Caterpillar. (2006). Road Construction Production Study, Malaga Demonstration and Learning
Center, Spain <http://www.trimble-productivity.com/media/pdf/ProductivityReport
CATRoadConstruction2006.pdf> (accessed June 2010).
Daoud, H. (1999). "Laser Technology Applied to Earthworks." 16th IAARC/IFAC/IEEE
International Symposium on Automation and Robotics in Construction, Universidad Carlos
III de Madrid, Madrid, Spain, Proceedings, C. Balaguer, ed., p. 33-40.
ABSTRACT
This paper introduces a framework for automatic object recognition and rapid surface modeling
to aid heavy equipment operators in rapidly perceiving the 3D working environment at dynamic
construction sites. A custom-designed data acquisition system was employed in this study to
rapidly recognize selected target objects in 3D space by dynamically separating a target
object's point cloud data from the background scene for quick processing. A smart
scanning method was also applied to only update the target object’s point cloud data while
keeping the previously scanned static work environments. Then the target’s point cloud data
was rapidly converted into a 3D surface model using the concave hull surface modeling
algorithm after a process of data filtering and downsizing to increase the model accuracy and
data processing speed. The performance of the proposed framework was tested at a steel
frame building construction site. The generated surface model and the point cloud of static
surroundings were wirelessly presented to a remote operator. The field test results show that
the proposed rapid target surface modeling method would significantly improve productivity and
safety in heavy construction equipment operations by distinguishing a dynamic target object
from a surrounding static environment in 3D views in near real time.
INTRODUCTION
Visibility-related accidents can be easily caused by the interactions between workers,
equipment, and materials. This problem can lead to serious collisions without pro-active
warnings. There have been a number of advances in vision-aid techniques because lacking full
visibility is a major contributing factor in accidents at construction sites. 3D spatial modeling can
help to optimize equipment control [1,24], significantly improve safety [2-3], monitor construction
progress [4], and enhance a remote operator’s spatial perception of the workspace [5-8].
However, the rapid processing of tens of thousands of range data points in real time is still an
unsolved problem requiring further investigation [9]. Unstructured work areas like construction
sites are difficult to graphically visualize because they involve highly unpredictable activities and
change rapidly. Construction site operations require real-time or near real-time information
about the surrounding work environment, which further complicates graphical modeling and
updating.
One commonly used method to obtain the 3D position of an object is based on 3D laser
scanning technology [7,10-11]; this method, however, has some limitations, such as low data
collection speed and low object recognition rates [12]. It has always been a challenge to
recognize specific objects from a 3D point cloud in unstructured construction environments
because it is difficult to rapidly extract the target area from background scattered noises in a
large and complex 3D point cloud.
While rapid workspace modeling is essential to effectively control construction equipment [13],
few approaches have been accepted by the construction industry due to the difficulty of
addressing all the challenges of current construction material handling tasks with the current
sensor technologies. Thus, an innovation in rapid 3D spatial information is necessary to meet
the challenges. The main objective of this paper was to validate a 3D visualization framework to
collect and process dynamic spatial information rapidly at a construction job site for safe and
effective construction equipment operations. A multi-video-camera, vision-based object
recognition and tracking method was developed, based on which a smart laser scanning
method was proposed to reduce data size and scanning time.
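One common way to reduce point cloud size before surface modeling, in the spirit of the data filtering and downsizing step mentioned in the abstract, is voxel-grid downsampling; the sketch below is a generic technique, not the authors' specific algorithm.

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Bin points into cubic voxels and keep one centroid per occupied voxel."""
    bins = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        bins[key].append((x, y, z))
    # One representative point (the centroid) per voxel
    return [tuple(sum(c) / len(grp) for c in zip(*grp)) for grp in bins.values()]

# Two nearby points merge into one; the distant point survives on its own
cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.0), (1.50, 0.00, 0.0)]
reduced = voxel_downsample(cloud, voxel_size=0.1)  # 3 points -> 2
```

Downsampling trades surface detail for speed, which is the balance the framework's filtering step must strike for near real-time updates.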
LITERATURE REVIEW
For the operator to monitor blind spots of the workspace from the cab, a vision-based system
using a single or multiple cameras is an inexpensive option [14]. Brilakis et al. [15] introduced
2D vision-based methods that recognize new overlapping feature points and track them in the
subsequent video stream. To acquire a precise 3D position of objects with additional depth
information, generally two or more cameras generate a stereo view after calibration with known
intrinsic parameters. Park et al. [16] achieved more accurate 3D locations of tracking objects by
projecting the centroids of the tracked entities from two cameras to 3D coordinates.
Notwithstanding the recent advances, there are some known drawbacks of vision-based
techniques in tracking moving equipment at the sites: 1) additional infrastructure is needed to
install and maintain cameras; 2) fixed camera locations have limited view angles and
resolutions, and 3) the results are sensitive to lighting conditions [17].
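The centroid-projection idea from Park et al. [16] rests on standard stereo geometry: for a rectified camera pair, depth follows from disparity as Z = fB/d. A minimal sketch with hypothetical calibration values (not from the cited study):

```python
def stereo_to_3d(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Back-project a matched pixel pair from a rectified stereo rig
    to camera-frame 3D coordinates using Z = f * B / disparity."""
    d = u_left - u_right               # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    Z = focal_px * baseline_m / d
    X = (u_left - cx) * Z / focal_px
    Y = (v - cy) * Z / focal_px
    return X, Y, Z

# Hypothetical rig: 800 px focal length, 0.5 m baseline, principal point (320, 240)
X, Y, Z = stereo_to_3d(400, 360, 240, focal_px=800, baseline_m=0.5, cx=320, cy=240)
# Z = 800 * 0.5 / 40 = 10 m
```

Depth error grows with range, since a fixed pixel-matching error corresponds to a larger depth change at small disparities, which is one reason fixed-camera tracking degrades across a large site.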
Laser scanners have been extensively utilized to automatically obtain the “as-is” condition of the
existing buildings [18]; they also can be used to classify and capture a complex heavy
equipment operation as it happens or to provide automated feedback to those who are
conducting the operations [7,17,19]. Teizer et al. presented a methodology for real-time 3D
modeling using Flash LADAR which has a limited measurement range and low accuracy for
outdoor use [3]. Lee et al. proposed an automated lifting-path tracking system on a tower crane
to receive and record data from a laser device [13]. Bosche and Haas registered 3D static CAD
objects to laser-scanned point cloud data [20], which can be utilized to efficiently assess
construction processes. However, most of the algorithms were developed mainly to recognize
and register static objects’ models to point clouds. Few applications have demonstrated the
technical feasibility of registering dynamic models to point clouds in real or near real time.
In the authors’ previous studies [17], a model-based automatic object recognition and
registration method, Projection-Recognition-Projection (PRP), was introduced to register the CAD
models with the corresponding point cloud of the recognized objects through comparing the
recognized point cloud of the objects with existing CAD models in a database (shown in Figure
1). While the PRP approach provides very detailed, accurate solid models in a point cloud, the
limitation of this method is that it only works for the objects which have corresponding models in
the database. In this study, a non-model based, surface modeling method is introduced to
automatically recognize and visualize dynamic objects on construction sites.
ACKNOWLEDGMENTS
This material is based upon work supported by the National Science Foundation (Award #:
CMMI-1055788). Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of the NSF.
REFERENCES
[1] H. Son, C. Kim, K. Choi, Rapid 3D object detection and modeling using range data from 3D
range imaging camera for heavy equipment operation, Automation in Construction 19(7) (2010)
898-906.
[2] J. Teizer, B. S. Allread, C. E. Fullerton, J. Hinze, Autonomous pro-active real-time
construction worker and equipment operator proximity safety alert system, Automation in
Construction 19(5) (2010) 630-640.
[3] J. Teizer, C. H. Caldas, C. T. Haas, Real-Time Three-Dimensional Occupancy Grid Modeling
for the Detection and Tracking of Construction Resources, ASCE Journal of Construction
Engineering and Management 133(11) (2007) 880-888.
[4] H. Son, C. Kim, 3D structural component recognition and modeling method using color and
3D data for construction progress monitoring, Automation in Construction 19(7) (2010) 844-854.
[5] Y. Cho, C. Haas, K. Liapi, S. Sreenivasan, A framework for rapid local area modeling for
construction automation, Automation in Construction 11(6) (2002) 629-641.
[6] Y. Cho, C. Haas, S. Sreenivasan, K. Liapi, Error Analysis and Correction for Large Scale
Manipulators in Construction, ASCE Journal of Construction Engineering and Management 130
(1) (2004) 50-58.
[7] Y. Cho, C. Wang, P. Tang, C. Haas, Target-focused local workspace modeling for
construction automation applications, ASCE Journal of Computing in Civil Engineering 26(5)
(2012) 661-670.
[8] Y. Cho, C. Haas, Rapid Geometric Modeling for Unstructured Construction Workspaces,
Journal of Computer-Aided Civil and Infrastructure Engineering 18 (2003) 242-253.
[9] J. Gong, C. H. Caldas, Data processing for real-time construction site spatial modeling,
Automation in Construction 17(5) (2008) 526-535.
[10] P. Tang, D. Huber, B. Akinci, R. Lipman, Automatic reconstruction of as-built building
information models from laser-scanned point clouds: A review of related techniques, Automation
in Construction 19 (2010) 829-843.
[11] D. Huber, B. Akinci, P. Tang, A. Adan, Using laser scanner for modeling and analysis in
architecture, engineering and construction, Proceedings of the Conference on Information
Sciences and Systems (CISS) (2010) Princeton, NJ.
ABSTRACT
This paper focuses on characterizing proposed human-built topographic forms and describing
them parametrically. Two basic approaches exist for characterizing shape algorithmically:
parametric descriptions, which describe discrete geometries, and non-parametric methods,
which for the most part work on fields. This paper offers a brief overview of the range of
parametric modeling options for topography, a set of criteria that need to be fulfilled for any
successful landform design system, and a primitives and operators approach that offers some
specific advantages in the AMG context.
INTRODUCTION
Reshaping land to meet societal needs is a complex, disruptive, time consuming, and costly
effort. Industry increasingly relies on Digital Terrain Models (DTMs) as the principal medium for
landform design and the basis for Automated Machine Guidance (AMG) of construction machinery. However,
controlling DTM geometry remains a non-trivial algorithmic problem. At the project concept
stage and later during construction, existing 3D manipulation methods are cumbersome and
unwieldy, adding to downtime, guesswork and inefficiencies in the field.
Landform Design
Landform as a creative, expressive medium
Landforms may be used as design elements unto themselves, and also to organize and
establish the base upon which other elements may be composed on a site. Landscape
architects and engineers think of topography in both functional quantitative and spatial
qualitative terms [1-3]. Through use of slope, elevation change, convex and concave re-grading,
both subtle and dramatic meanings can be achieved [4].
Landscape designers use a variety of abstractions which help to organize or synthesize their
design of landform, including 'signatures' of contour lines in plan [2, 4, 5]; compositions of
regular geometric solids [6] and flat planes with break lines and transitions [5]; and processes of
land formation: erosion, deposition, scraping and piling, etc. [7, 8]. The topographic condition of
a site is therefore regarded as an essentially plastic one.
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
Through 3D modeling, rendering and visualization, designers try out an assortment of design
alternatives on a topographic surface. Through manipulation of 3D models or images, design
alternatives are viewed, changed, and analyzed [9]. For environmental designers, visualization
works in close concert with manipulation and quantitative and qualitative analysis. The editing
and analysis tasks can be more closely integrated in the digital medium, as shown in Figure 2.
Figure 2: Iterative design loop for topography that shows how digital methods can
integrate editing and analysis tasks more tightly in the iterative life cycle of a design
project.
Control criteria for DTM geometry over the lifecycle of a grading project
Controlling DTM geometry remains a non-trivial algorithmic problem [31, 32]. Methods for
changing DTM geometry are cumbersome and unwieldy [34], thereby limiting the ability to
creatively explore, modify, and optimize topographic form. These shortcomings yield serious
inefficiencies throughout the lifecycle of a project and result in a need for improved tools for
landform design that fulfill the following criteria:
Westort 225
• 3D
• Local geometric control
• Ease of handling
• Quick response time
• Quantitative accuracy
What is generally missing is the ability to iteratively revise, update, edit, modify, and
manipulate – i.e., “sculpt” – DTMs, both before and during the construction phase as a way to
ensure design quality.
Geographic information systems (GIS) have evolved to handle the large datasets characteristic
of landscape, but have mostly focused on the display and analysis of elevation data rather than
on active tools for manipulating landform. Figure 3 shows the manipulation scopes of action
available in many GISs, which allow only either global or ‘local’ changes to a model: a local
change is one that affects a single vertex, while a global change affects the entire topographic
dataset, e.g., scale changes that exaggerate the Z value [35].
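The two scopes can be illustrated with a toy heightfield (a minimal numpy sketch; the 5×5 grid and its values are arbitrary, not from the paper):

```python
import numpy as np

# A toy 5x5 DTM: a gentle ramp of elevations (meters).
dtm = np.linspace(0.0, 4.0, 25).reshape(5, 5)

# 'Local' change: edit a single vertex only.
local = dtm.copy()
local[2, 2] += 1.5  # raise one grid point by 1.5 m

# 'Global' change: vertical exaggeration scales every Z value.
exaggeration = 2.0
global_ = dtm * exaggeration

# The local edit touches exactly one cell; the global edit touches
# every cell except the zero-elevation corner.
print(np.count_nonzero(local != dtm))    # 1
print(np.count_nonzero(global_ != dtm))  # 24
```

The ‘regional’ scope argued for in this paper sits between these two extremes.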
While some operations exist to limit the scope of activity to a mask of pixels or a specific
polygon, these techniques are not especially useful for landform design. One interesting
approach is a DTM editor developed to resolve artifacts resulting from elevation data
interpolation [36], as are algorithms for compression [37-42], line of sight, shortest path,
drainage [43], multiple observers [40], local maxima and minima, drainage patterns [44], and
siting from first principles. These approaches, while promising, have focused on data extraction
and algorithmic techniques acting on geospatial data for the existing landform geometry of a
digital terrain surface, rather than on proposed geometry – the target concern here.
A key advantage of digital/virtual methods is the ability to directly manipulate a 3D
representation [45-49]. The following list, adapted from [50] and [51] summarizes some of the
3D topographic modeling tools available in current industry-standard CAD software packages:
Data Structures
Contours—Landform design using 2D CAD systems relies heavily upon the
representation of 3D form as contour lines, an abstraction well suited to
representation but notoriously cumbersome for design. It is also a data structure
with key disadvantages: oversampling along, and under-sampling between, the lines.
Manipulating contour lines effectively with CAD systems remains a daunting
challenge, requiring spline curves and geometric constraints that have nowhere
yet been satisfactorily packaged for—much less mastered by—designers from any
discipline.
B-Rep—Boundary Representations—The surface of an object is stored as a list of
vertices, edges joining the vertices, and a list of faces. Related curve and
surface forms include Bézier and B-spline curves and Non-Uniform Rational
B-Spline (NURBS) surfaces.
Primitives—A set of simple, generic 3D models (cube, sphere, cylinder, cone,
torus, wedge, plane, and others). These primitives can be scaled, translated, and
rotated within the application, often both interactively (such as with a mouse)
and by numerical input.
CSG—Constructive Solid Geometry—An object is represented as a combination
of simple primitives such as cubes, spheres, and cylinders. These basic solids are
used as building blocks for more complex objects by means of a system that
uses Boolean combinations (union, intersection, and difference) to describe the
logical operations of adding two objects, subtracting one from another, or
taking the overlap between two objects.
Voxels—Volume/Solid Modeling—Spatial occupancy enumeration divides
3-dimensional space into cubic units called voxels, or 3D pixels.
Operators:
Swept Forms—a 2-dimensional (XY) section ‘swept’ along a third (Z) dimension.
Extrusion—a template is swept in a direction orthogonal to the plane in which it
lies.
Surface of Revolution—a 2D template, closed or open, rotated about an axis.
Skin—the ability to construct a ‘skeleton’ of a form and then wrap a surface skin
around it to create an object.
Patches—the same as skin, except using boundaries as the skeleton.
Curved Patches—while popular, these remain too computationally intensive to be
justified or affordable for most landform design.
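Two of these operators can be sketched in a few lines of numpy (an illustrative sketch; the V-shaped swale template is an assumed example profile, not from the paper):

```python
import numpy as np

# A 2D 'template': a V-shaped swale cross-section, 1 m deep, 4 m wide.
def swale_profile(x):
    """Depth (negative z) as a function of offset x from the centerline."""
    return np.where(np.abs(x) < 2.0, -(1.0 - np.abs(x) / 2.0), 0.0)

x = np.linspace(-3, 3, 7)

# Extrusion: sweep the template along y; every row is the same section.
extruded = np.tile(swale_profile(x), (5, 1))    # shape (5, 7)

# Surface of revolution: rotate the template about the z axis, so
# elevation depends only on radial distance from the origin.
xx, yy = np.meshgrid(x, x)
revolved = swale_profile(np.hypot(xx, yy))      # a circular basin

print(extruded.shape, revolved.shape)
```

The same template yields a straight channel under extrusion and a circular basin under revolution, which is the sense in which these operators multiply the usefulness of a small set of profiles.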
Primitives
What is called for is a specification of landform primitives (mound, swale, and plane, for
example) that carry their own parametric definitions and constraints, and so enable ‘regional’
changes, between local and global in their scope, in which slopes, radii, and other dimensions
may be user-specified and propagated (Figure 3). Geometric modeling with pure Euclidean shape
primitives such as cones and cylinders is promising [8], but by itself is too limited for most
landform design, which more often than not involves the design of a continuous surface rather
than of a solid. While some efforts for landscape are promising [51, 53] no unifying ontology, or
organizing Landscape Information Model (LIM) currently exists.
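As a sketch of what such a primitive might look like, the `Mound` class below, its parameters, and the grid are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Mound:
    """A hypothetical landform primitive: a conical mound with
    user-specified footprint radius and side slope (rise over run)."""
    cx: float
    cy: float
    radius: float
    slope: float  # e.g. 0.5 means 1:2 side slopes

    def apply(self, xx, yy, z):
        """Raise the surface inside the mound footprint. The change is
        'regional': zero outside the radius, untouched elsewhere."""
        r = np.hypot(xx - self.cx, yy - self.cy)
        lift = np.maximum(self.radius - r, 0.0) * self.slope
        return z + lift

x = np.arange(0.0, 10.0)
xx, yy = np.meshgrid(x, x)
base = np.zeros_like(xx)

mound = Mound(cx=5, cy=5, radius=3, slope=0.5)
z = mound.apply(xx, yy, base)

print(z.max())  # peak height = radius * slope = 1.5
```

Because slope and radius are explicit parameters, a user-specified change (say, flattening the side slopes) can be propagated by re-applying the primitive rather than moving vertices one at a time.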
A wide variety of disciplines and scales use a range of terms and parameters to
describe discrete topographic shape, and these vary qualitatively and quantitatively. Moreover,
landform shapes are frequently described in terms of an underlying DTM data structure; i.e.,
there are contour-line-specific forms, or signatures, and TIN-specific data structures. A
sampling of this diversity includes:
• Domain-specific forms, e.g., contour line signatures [3-5], Americans with
Disabilities Act (ADA) design specifications [59].
• Project-type-specific forms (e.g., roadways, levees, sand dunes, water
management, golf courses, battlefields, gaming environments, ADA-compliant
features).
• Individual-project-specific forms: a particular project may standardize its own
set of topographic forms for re-use throughout the project [61].
Figure 6: Example tool-based specific forms [62].
Operators
Defining a universal set of geometric parameters for landform, coupled with a way to combine the
primitives together, would contribute to dramatically improved geometric control over a DTM
surface. The approach specifies a set of operations, such as cutting or filling tools, with
parametrically defined shape characteristics (such as angles of slope, depth of fill, etc.) to be
performed along a path. This set of operations is then swept along the path, either over an
existing base terrain or on a blank surface, and the result is a terrain geometry that has the
desired shape.
Figure 7 shows this initial set, with their blade and path shapes abstracted and combined using
a sweep (extrude) algorithm.
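The blade-and-path sweep can be sketched as a grid operation. In this hypothetical minimal version, the trapezoidal blade, the 10 m datum, and the path columns are all illustrative values, not the paper's:

```python
import numpy as np

def sweep_cut(terrain, path_cols, blade):
    """Sweep a cutting 'blade' (elevation as a function of lateral offset,
    in cells) along a path that runs row-by-row through the grid.
    Cutting keeps the lower of existing ground and blade surface."""
    out = terrain.copy()
    rows, cols = terrain.shape
    for r, pc in enumerate(path_cols):   # one path column per row
        for c in range(cols):
            surface = blade(c - pc)      # blade elevation at this offset
            out[r, c] = min(out[r, c], surface)
    return out

# Trapezoidal channel blade: flat bottom one cell wide, 1:1 side
# slopes, 2 m deep relative to a 10 m datum.
def blade(offset):
    return 10.0 - max(2.0 - max(abs(offset) - 0.5, 0.0), 0.0)

terrain = np.full((5, 8), 10.0)          # flat base terrain at 10 m
path = [3, 3, 4, 4, 5]                   # a gently bending path
cut = sweep_cut(terrain, path, blade)

print(cut.min())                         # channel invert at 8.0 m
```

Swapping `min` for `max` turns the cutting tool into a filling one, which is why the blade shape and the combination rule can be kept as separate parameters.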
Figure 7: Generic subset of topographic primitives, abstracted as blade and path
shapes
Operators to generate these sorts of geometries as part of a continuous surface were then
programmed as plug-in software for AutoCAD. Figures 8 and 9 show these initial results.
A stand-alone prototype software tool was then developed as a generic sculpting tool definition
(Figure 9). This implementation kept geometric change parameters separate from any underlying
DTM data structure description (Figure 10).
The following are some outstanding challenges that would need to be resolved for this to
happen:
Challenge 1: Primitive shapes can have a wide range of relationships with one another. In
the case of a primitive blade shape that is extruded along a primitive path shape, what is the
desired relationship (symmetrical, static, or dynamic) with other primitive models? How can
interactive handles be provided to the user for setting and varying these shape parameters?
Figure 11: Symmetrical versus asymmetrical blade and path relationships [65]
Challenge 2: Should the relationship between primitives simulate real-world on-the-
ground tool behavior or physical phenomena (e.g., shovels, bulldozers, graders, rakes)?
What soil type and moisture content are assumed? Gravity? Since final DTM geometry is the
priority, and simulation of the manipulation process itself is of secondary importance, those
geometry-determining “real-world” parameters which affect final landform shape will be
prioritized.
Challenge 3: Embedding into the underlying terrain – how does the blade-path complex
embed in the underlying DTM surface? Is it an absolute or relative relationship? How are
these relationships parameterized to optimize interactive user control?
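The absolute-versus-relative distinction can be illustrated numerically; both variants below are assumed interpretations with toy elevations, for illustration only:

```python
import numpy as np

base = np.array([10.0, 11.0, 12.0, 13.0])  # sloping ground along a path
blade_z = 9.0                               # blade elevation (datum-fixed)

# Absolute: the blade-path complex is fixed to a datum, so the finished
# grade is the same elevation regardless of the ground beneath.
absolute = np.minimum(base, blade_z)

# Relative: the blade floats with the existing surface, cutting a
# constant depth below local grade.
depth = 1.5
relative = base - depth

print(absolute)  # [9.  9.  9.  9. ]
print(relative)  # [ 8.5  9.5 10.5 11.5]
```

An absolute embedding yields a level result (e.g., a canal invert), while a relative embedding follows the terrain (e.g., a roadside ditch), so the choice itself becomes a user-facing parameter.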
Figure 15: Absolute versus relative relationship of the blade-path complex with the
underlying terrain data.
REFERENCES CITED
1. J. Beardsley and N. Grubb, Earthworks and beyond: contemporary art in the landscape:
Abbeville Press, 1989.
2. S. Strom, K. Nathan, and J. Woland, Site engineering for landscape architects: John
Wiley & Sons, 2013.
3. H. C. Landphair and F. Klatt, "Landscape architecture construction," 1979.
4. R. K. Untermann, "Grade easy: an introductory course in the principles and practices of
grading and drainage," 1973.
5. P. Petschek, Grading for landscape architects and architects: Walter de Gruyter, 2008.
6. M. Ferraro and R. Mallary, "ECOSITE: A Program for Computer-Aided Landform
Design," Proceedings ACM/SIGGRAPH (NY 1977).
7. R. J. Chorley, R. P. Beckinsale, and A. J. Dunn, History of the Study of Landforms: Or
The Development of Geomorphology: The Life and Work of William Morris Davis vol. 2:
Psychology Press, 1973.
8. B. Etzelmüller and J. R. Sulebak, "Developments in the use of digital elevation models in
periglacial geomorphology and glaciology," Physische Geographie, vol. 41, pp. 35-58,
2000.
9. J. Corner, "Representation and landscape: drawing and making in the landscape
medium," Word & Image, vol. 8, pp. 243-275, 1992.
10. S. M. Ervin, "Digital Terrain Models," Landscape Architecture, Magazine, January, 1994
1994.
11. W. R. Franklin, "Towards a mathematics of terrain."
12. W. R. Franklin, M. Inanc, and Z. Xie, "Two novel surface representation techniques,"
AutoCarto Vancouver, Washington, 2006.
13. W. R. Franklin, Z. Xie, E. Lau, and Y. Li, "Algorithms for terrain and bathymetric sensor
data."
14. R. Weibel and M. Heller, Digital terrain modelling: Oxford University Press, 1993.
15. B. Alsobrooks, "Introduction of 3D Technology & Machine Control Systems," ed.
16. A. Vonderohe, "Status and Plans for Implementing 3D Technologies for Design and
Construction in WisDOT," Construction and Materials Support Center, University of
Wisconsin-Madison, Department of Civil and Environmental Engineering, WisDOT
Project ID: 0657-45-11, May 2009.
17. C. Zhang, A. Hammad, and H. Bahnassi, "Collaborative Multi-Agent Systems for
Construction Equipment Based on Real-Time Field Data Capturing," Journal of
Information Technology in Construction, vol. 14, pp. 204-228, June 2009.
18. A. Zogheib, "Autonomous Navigation Tool for Real & Virtual Field Robots," in 1st
International Conference on Machine Control & Guidance 2008, 2008, pp. 1-11.
19. Connecticut Department of Transportation, "Digital Design Environment Guide,"
Newington, CT, October 2007.
20. Florida Department of Transportation, Engineering/CADD Systems Office (ECSO),
"Multi-Line Earthwork for Designers," 2007, p. 72.
21. T. Hampton, "3D Grade Control Puts Designers Right in the Operator's Seat,"
Engineering News Record, 2005. Available:
http://www.dot.state.mn.us/caes/files/pdf/enr_3d_grade_control_10_05.pdf
22. J. J. Hannon, "NCHRP Synthesis 372 Emerging Technologies for Construction
Delivery," Transportation Research Board 978-0-309-09791-8, 2007 2007.
23. J. J. Hannon, NCHRP Synthesis 385 Information Technology for Efficient Project
Delivery: Transportation Research Board, 2008.
24. J. J. Hannon and D. Townes, "GPS Utilization Challenges in Transportation Construction
Project Delivery," in The construction and building research conference of the Royal
Institution of Chartered Surveyors, Georgia Tech, Atlanta USA, 2007, p. 15.
25. T. Hoeft, "Improving Construction Efficiencies Through 3D Electronic Design," C. Jahren,
Ed., 2009, p. 1.
26. D. Kratt, "Design Memorandum NO. 18-05-Electronic Files Submittal with the Final
Contract Plans," ed, 2005.
27. M. Leja and R. Buckley, "Cross-Section Preparation and Delivery Memorandum,"
California Department of Transportation, 2004.
28. A. Z. Sampaio, A. R. Gomes, and J. Prata, "Virtual Environment in Civil Engineering:
Construction and Maintenance of Buildings," in ADVCOMP 2011, The Fifth International
Conference on Advanced Engineering Computing and Applications in Sciences, 2011,
pp. 13-20.
29. D. Sheldon and C. Mason, "A Proposal for Statewide CAD Standards in Iowa," Howard
R. Green Company, April 2009.
30. P. Söderström and T. Olofsson, "Virtual Road Construction – a Conceptual Model," in
W78 Conference, Maribor, Slovenia, 2007, pp. 255-261.
31. X. Wang, M. J. Kim, P. E. D. Love, and S.-C. Kang, "Augmented Reality in built
environment: Classification and implications for future research," Automation in
Construction, vol. 32, pp. 1-13, July 2013.
32. S. Andrews and S. Geiger, "Unlocking Design Data," presented at the 19th Annual
AGC/DOT Technical Conference, New York, 2005.
36. H. Bär, "Interaktive Bearbeitung von Geländeoberflächen-Konzepte, Methoden,
Versuche," 1996.
37. N. Amoroso, Ed., Representing landscapes: a visual collection of landscape architectural
drawings. Abingdon, Oxon; New York: Routledge, 2012.
38. B. Cutler, W. R. Franklin, and T. Zimmie, "Fundamental Terrain Representations and
Operations: Validation of Erosion Models for Levee Overtopping," in NSF Engineering
Research and Innovation Conference, Honolulu, 2009.
39. R. Fabio, "From point cloud to surface: the modeling and visualization problem,"
International Archives of Photogrammetry, Remote Sensing and Spatial Information
Sciences, vol. 34, p. W10, 2003.
40. W. R. Franklin, M. Inanc, Z. Xie, D. M. Tracy, B. Cutler, and M. V. Andrade, "Smugglers
and border guards: the GeoStar project at RPI," in Proceedings of the 15th annual ACM
international symposium on Advances in geographic information systems, 2007, p. 30.
41. M. Inanc, Compressing terrain elevation datasets: ProQuest, 2008.
42. M. Metz, H. Mitasova, and R. Harmon, "Efficient extraction of drainage networks from
massive, radar-based elevation models with least cost path search," Hydrology and
Earth System Sciences, vol. 15, pp. 667-678, 2011.
43. M. V. Andrade, S. V. Magalhaes, M. A. Magalhães, W. R. Franklin, and B. M. Cutler,
"Efficient viewshed computation on terrain in external memory," GeoInformatica, vol. 15,
pp. 381-397, 2011.
44. T.-Y. Lau and W. R. Franklin, "Completing river networks with only partial river
observations via hydrology-aware ODETLAP."
45. W. J. Mitchell, "A computational view of design creativity," Modeling Creativity and
Knowledge-Base Creative Design, pp. 25-42, 1993.
46. W. J. Mitchell, The logic of architecture: Design, computation, and cognition: MIT press,
1990.
47. W. J. Mitchell, Computer-aided architectural design: John Wiley & Sons, Inc., 1977.
48. W. J. Mitchell, Digital design media: John Wiley & Sons, 1995.
49. M. McCullough, W. J. Mitchell, and P. Purcell, The electronic design studio: architectural
knowledge and media in the computer era: MIT Press, 1990.
50. S. M. Ervin and H. H. Hasbrouck, Landscape modeling: digital techniques for landscape
visualization. New York: McGraw-Hill, 2001.
51. S. Mealing, Mac 3D: Three-dimensional Modelling and Rendering on the Macintosh:
Intellect Books, 1994.
52. M. Flaxman, "Fundamentals of Geodesign," Proceedings Digital Landscape
Architecture, Buhmann/Pietsch/Kretzel (Eds.): Peer Reviewed Proceedings Digital
Landscape Architecture, Anhalt University of Applied Science, Germany, 2010.
53. S. Ervin, "A system for GeoDesign," Proceedings Digital Landscape Architecture, Anhalt
University of Applied Science, Germany, 2011.
54. T. R. Allen, "Digital terrain visualization and virtual globes for teaching geomorphology,"
Journal of Geography, vol. 106, pp. 253-266, 2008.
55. D. G. Brown, D. P. Lusch, and K. A. Duda, "Supervised classification of types of
glaciated landscapes using digital elevation data," Geomorphology, vol. 21, pp. 233-250,
1998.
56. R. Dikau, "The application of a digital relief model to landform analysis in
geomorphology," in J. Raper, Ed., Three Dimensional Applications in Geographical
Information Systems. London: Taylor and Francis, 1989, pp. 51-77.
57. R. Dikau, "The application of a digital relief model to landform analysis in
geomorphology," Three dimensional applications in geographical information systems,
pp. 51-77, 1989.
58. J. P. Wilson, "Digital terrain modeling," Geomorphology, vol. 137, pp. 107-121, 2012.
59. R. L. Mace, "Universal design in housing," Assistive Technology, vol. 10, pp. 21-28,
1998.
60. C. W. Harris and N. T. Dines, Time-saver standards for landscape architecture. New
York: McGraw-Hill, 1988.
61. P. Gryboś, M. Kaletowska, U. Litwin, J. M. Pijanowski, A. Szeptalin, and M. Zygmunt,
"Data preparation for the purposes of 3D visualization," Geomatics, Landmanagement
and Landscape, pp. 19-29, 2013.
62. H. L. Nichols, Jr. and D. A. Day, Moving the earth: the workbook of excavation. New
York: McGraw-Hill, 2010.
63. C. Y. Westort, "Methods for Sculpting Digital Topographic Surfaces," University of
Zurich, 1998.
64. C. Y. Westort, "An Explosion Tool for DTM Sculpting," Trends in Landscape
Modeling: Proceedings at Anhalt University of Applied Sciences, p. 35, 2003.
65. C. Y. Westort, "Corner, End, and Overlap “Extrusion Junctures”: Parameters for
Geometric Control," in Digital Earth Moving, ed New York: Springer, 2001, pp. 78-86.
66. C. Y. Westort, Digital Earth Moving: First International Symposium, DEM 2001, Manno,
Switzerland, September 5-7, 2001. Proceedings vol. 1. New York: Springer, 2001.
67. M. D. Johnson, E. C. Holley, F. P. Morgeson, D. LaBonar, and A. Stetzer, "Outcomes of
Absence Control Initiatives A Quasi-Experimental Investigation Into the Effects of Policy
and Perceptions," Journal of Management, vol. 40, pp. 1075-1097, 2014.
68. D. Gergle and D. S. Tan, "Experimental Research in HCI," in Ways of Knowing in HCI,
ed: Springer, 2014, pp. 191-227.
Applicability and Limitations of 3D Printing for Civil
Structures
Mostafa Yossef
Department of Civil, Construction and Environmental Engineering
Iowa State University
Ames, Iowa, 50011
[email protected]
An Chen
Department of Civil, Construction and Environmental Engineering
Iowa State University
Ames, Iowa, 50011
[email protected]
ABSTRACT
Three Dimensional Printing (3DP) is a manufacturing process that builds layers to create a
three-dimensional solid object from a digital model. It allows for mass customization and
complex shapes that cannot be produced in other ways, eliminates the need for tool production
and its associated labor, and reduces the waste stream. Because of these advantages, 3DP has
been increasingly used in different areas, including medical, automotive, aerospace, etc. This
automated and accelerated process is also promising for civil structures, including buildings and
bridges, which require extensive labor. If successful, it is expected that 3D structural printing
can significantly reduce construction time and cost. However, unlike applications in other
areas, civil structures are typically large in scale, with length or height spanning hundreds of
feet. They are subjected to complex loadings, including gravity, live, wind, seismic, etc.
Therefore, it is challenging to develop suitable printing tools and materials. As a result, although
there are limited examples of 3D printed buildings, 3DP of civil structures is still at a primitive
stage. This paper aims to explore the applicability of 3DP for civil structures. The first part is
devoted to a review of 3DP in different areas, including 3D printed buildings. Based on the
state of the art, the weaknesses and opportunities of 3DP are identified. Finally, future
directions for 3DP in civil
structures are discussed.
INTRODUCTION
Three Dimensional Printing (3DP) evolved from automated production, which started in the
early twentieth century. It was first applied in manufacturing and automotive industries.
Recently, its applications were expanded to other industries, including medical, aerospace,
construction, etc. This automated and accelerated process is also promising for civil structures,
including buildings and bridges, which require extensive labor. However, many factors have
limited its further development. As a result, although there are limited applications of 3DP in
civil construction, 3DP of civil structures is still at a primitive stage. This paper first reviews the
latest
development of 3DP in construction and other areas. It then identifies the limiting factors and
challenges of 3DP. Finally, future directions of 3DP in civil engineering are discussed.
BRIEF HISTORY OF 3D PRINTING
According to 3DPI (2014), 3DP started in the late 1980s, when it was known as Rapid
Prototyping (RP) technology, developed by Kodama in Japan. Six years later, Charles Hull
invented the Stereo Lithography Apparatus (SLA), and in 1987 SLA-1 was introduced as the
first commercial RP system. In 1989, a patent for Selective Laser Sintering (SLS) was issued to
Carl Deckard at the University of Texas. Through the 1990s and into the early 2000s, SLS was
developed with a focus on industrial applications such as casting, and new terminology, Casting
and Rapid Manufacturing (RM), was introduced for such applications. In 2005, the terminology
evolved to include all
processes under Additive Manufacturing (AM). The term Additive Manufacturing (AM) is defined
by ASTM as “a process of joining materials to make objects from 3D model data, usually layer
upon layer” (ASTM Standard 2012). This is unlike subtractive manufacturing, which machines
away material from a block to form the required object; casting or shaping the material in a
mold is often called a formative process. Table 1 shows a summary of different techniques used
in AM (Buswell et al. 2007).
Process — Description
Stereolithography (SLA) — Liquid photopolymer resin is held in a tank. A flat bed is immersed
to a depth equivalent to one layer. Lasers are used to activate the resin and cause it to solidify.
The bed is lowered and the next layer is built.
Fused Deposition Modelling (FDM) — Extrudes a narrow bead of hot plastic, which is
selectively deposited where it fuses to the existing structure and hardens as it cools.
Selective Laser Sintering (SLS) — Utilises a laser to partially melt successive layers of powder.
One layer of powder is deposited over the bed area and the laser targets the areas that are
required to be solid in the final component.
3D Printing (3DP) — Based on inkjet printer technology. The inkjet selectively deposits a liquid
binder onto a bed of powder. The binder effectively ‘glues’ the powder together.
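The layer-upon-layer principle shared by these processes can be sketched by slicing a solid into per-layer occupancy grids. This is an illustrative Python sketch; the cone and the grid resolution are arbitrary choices, not tied to any particular AM system:

```python
import numpy as np

def slice_solid(inside, zs, xs, ys):
    """Slice a solid, given as an inside(x, y, z) predicate, into the
    per-layer occupancy grids an additive process deposits in turn."""
    layers = []
    for z in zs:
        layer = np.array([[inside(x, y, z) for x in xs] for y in ys])
        layers.append(layer)
    return layers

# Example solid: a cone with base radius 2 and height 4.
def cone(x, y, z):
    return np.hypot(x, y) <= 2.0 * (1.0 - z / 4.0)

xs = ys = np.linspace(-2.0, 2.0, 9)
layers = slice_solid(cone, zs=[0.0, 2.0, 3.9], xs=xs, ys=ys)

# Cross-sections shrink with height; each layer is deposited on,
# and must be supported by, the layer below it.
print([int(layer.sum()) for layer in layers])
```

The differences between the four processes above amount to how each solidified layer is produced (laser-cured resin, extruded bead, sintered or glued powder), not to this slicing step itself.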
APPLICATIONS OF 3D PRINTING
3DP has been increasingly used in different areas. Architectural modelling is one of the major
areas that use 3DP for developing prototypes that facilitate communication between the
architect and the customer. Architects can now print complex structures, and color them as
well, for better representation (Gibson et al. 2002). In the medical area, 3DP is used to create
high-quality bone
Automation in the construction industry started with robotics [Gambao et al. (2000); Kuntze et
al. (1995); Lorenc et al. (2000); Williams et al. (2004)]. Buswell et al. (2007, 2008) conducted a
review of RM technologies for construction, based on which they developed a Freeform
Construction method. The term Freeform Construction was defined for methods that use AM to
deliver large-scale construction components without the need for formwork. They concluded
that Freeform Construction could reduce construction cost and provide freedom to select the
desired geometry, with better performance than traditional methods. Lim et al. (2009) stated
that Freeform Construction methods are currently limited to CC (US), Concrete Printing (UK),
and D-shape (Italy).
Khoshnevis (1998) introduced Contour Crafting (CC), which later became an effective method
of printing 3D houses. Khoshnevis (2004) defined CC as “an additive fabrication technology
that uses computer control to exploit the superior surface-forming capability of troweling to
create smooth and accurate planar and free-form surfaces”. The idea of CC is to use two
trowels to form a solid planar surface for the external edges. Filler material such as concrete
can then be poured to fill the extruded area. They demonstrated that CC can be used in
building structures as shown in Figure 1, where a nozzle is supported by a gantry system which
moves on two parallel lanes. The nozzle is capable of full 6-axis positioning and can extrude
both sides and filler material. The CC nozzle can also be used for forming a paint-ready
surface, placing reinforcement before pouring concrete, plastering and tiling, plumbing, and
installing electrical modules and communication line wiring.
Zhang and Khoshnevis (2013) developed an optimized method for CC machine to efficiently
construct complicated large-scale structures. Extensive research was done to avoid collision
between multiple nozzles. Three approaches were compared, namely: path cycling, buffer zone
path cycling and auxiliary buffer zone. The results indicated that the path cycling and buffer
zone cycling provided the maximum optimization. They concluded that the CC method is
significantly faster than traditional methods and that implementation for multi-story buildings is
possible by climbing, as shown in Figure 3.
According to Roodman and Lenssen (1995), the construction industry consumes more than
40% of all raw materials globally. CC can reduce the material waste for a single-family home
from 7 tons to almost none, and construction speed can be increased to one day per house.
Although the ability to use this method for luxury or complex structures is still limited,
implementation of CC can help with fast construction of low-income housing and emergency
shelters.
3D printed houses can provide cheap and efficient homes for low-income families. The printed
houses consist of different printed parts assembled together to form the house, and it can take
less than 24 hours to build one house. However, no details are provided about 3DP of wiring,
plumbing, HVAC, etc.
The latest development in 3DP came from WinSun, a Chinese company, which printed a five-
story apartment block using 3DP, as shown in Figure 4 (Charron 2015). They stated that the houses
were in full compliance with relevant national standards, which overcomes one of the main
issues that face 3D printed houses. WinSun also printed a decorated house as shown in Figure
5.
3DP can also be used for non-conventional structures. DUS, a Dutch architecture company,
used 3DP to design facades integrated with solar panels, where the angle of the solar panel
could be optimized automatically for any location. This can eliminate the need to manufacture
a mold for every different location (Jordan et al. 2015).
Other automation efforts have come from the industry sector. For example, Shimizu Corporation in
Japan developed an automated system that included erection and welding of steel-frames,
laying concrete floor planks, installation of exterior and interior wall panels, and installation of
various other units (Yamazaki and Maeda 1998).
Lim et al. (2012) compared CC, D-shape, and Concrete Printing. They concluded that Concrete
Printing could optimize strength prior to manufacturing, which resulted in less material use. It
could also create complex concrete shapes without the need for labor-intensive molding, as
shown in Figure 2.
Figure 1 – Schematic view of construction of conventional buildings using CC (Zhang and
Khoshnevis 2013)
Figure 2 – Complex Concrete Printing product: (a) 3D model (Lim et al. 2012); (b) during
printing (Le et al. 2012)
Construction components of significant size are heavy, typically weighing up to 5 tons, and
suitable equipment needs to be developed to lift and move them. Until such equipment exists
for large-scale structures, an in-situ deposition approach, i.e., printing lighter parts on site
followed by assembly, would be an alternative option.
3DP can be especially useful for structures with complex shapes. For example, rubber can be
used to print shock absorbers at large scale, which can help reduce seismic effects on
buildings. A prototype is shown in Figure 6.
3DP can also open up a frontier for new materials. These materials need to satisfy
requirements specific to 3DP. For example, they need a suitable curing time, since each lower
layer must support the layers above it, and the bonding between layers must be strong. These
materials also require extensive testing to determine their mechanical properties, both within
and between layers.
Jordan et al. (2015) stated that automated industry will take over the construction process. This
requires revising building codes to ensure that additive machines operate within limits and
meet performance criteria. For example, a 3D printed structure should be able to carry
complex loads, including gravity, live, wind, and seismic loads, and satisfy performance
requirements such as fire, smoke, and toxicity. In addition, current safety factors are high partly
to account for human error; such factors could be lowered when automated machines replace
the human workforce.
Development of a complete process, from parametric design through printing the building, is
needed to control the whole workflow and eliminate wasted time during printing. Khoshnevis
(2004) proposed a planning system that shows each component of a future automated system.
Figure 7 gives a brief explanation of the proposed plan.
Further research is also needed on connections for 3D printed structures, where few studies are
available. These include, but are not limited to, beam-column, column-footing, and wall connections.
REFERENCES
3DPI. (2014). “3D Printing History: The Free Beginner’s Guide.” 3D Printing Industry,
<http://3dprintingindustry.com/wp-content/uploads/2014/07/3D-Printing-Guide.pdf> (May
18, 2015).
ASTM. (2012). “F2792. 2012 Standard terminology for additive manufacturing technologies.”
West Conshohocken, PA: ASTM International. www.astm.org.
Buswell, R. A., Thorpe, A., Soar, R. C., and Gibb, A. G. F. (2008). “Design, data and process
issues for mega-scale rapid manufacturing machines used for construction.” Automation in
Construction, 17(8), 923–929.
Charron, K. (2015). “WinSun China builds world’s first 3D printed villa and tallest 3D printed
apartment building.” 3ders.org, <http://www.3ders.org/articles/20150118-winsun-builds-
world-first-3d-printed-villa-and-tallest-3d-printed-building-in-china.html> (May 18, 2015).
Gambao, E., Balaguer, C., and Gebhart, F. (2000). “Robot assembly system for computer-
integrated construction.” Automation in Construction, 9(5-6), 479–487.
Gibson, I., Kvan, T., and Wai Ming, L. (2002). “Rapid prototyping for architectural models.”
Rapid Prototyping Journal, MCB UP Ltd, 8(2), 91–95.
Guthrie, P., Coventry, S., Woolveridge, C., Hillier, S., and Collins, R. (1999). “The reclaimed and
recycled construction materials handbook.” CIRIA, London, UK.
James, W. J., Slabbekoorn, M. A., Edgin, W. A., and Hardin, C. K. (1998). “Correction of
congenital malar hypoplasia using stereolithography for presurgical planning.” Journal of
Oral and Maxillofacial Surgery, 56(4), 512–517.
Jordan, B., Dini, E., Heinsman, H., Reichental, A., and Tibbits, S. (2015). “The Promise of 3D
Printing.” Thornton Tomasetti,
<http://www.thorntontomasetti.com/the_promise_of_3d_printing/> (May 18, 2015).
Khoshnevis, B. (1998). “Innovative rapid prototyping process makes large sized, smooth
surfaced complex shapes in a wide variety of materials.” Materials Technology, 13(2), 52–
63.
Kuntze, H.-B., Hirsch, U., Jacubasch, A., Eberle, F., and Göller, B. (1995). “On the dynamic
control of a hydraulic large range robot for construction applications.” Automation in
Construction, 4(1), 61–73.
Le, T. T., Austin, S. A., Lim, S., Buswell, R. A., Gibb, A. G. F., and Thorpe, T. (2012). “Mix
design and fresh properties for high-performance printing concrete.” Materials and
Structures, 45(8), 1221–1232.
Lim, S., Buswell, R. A., Le, T. T., Austin, S. A., Gibb, A. G. F., and Thorpe, T. (2012).
“Developments in construction-scale additive manufacturing processes.” Automation in
Construction, 21, 262–268.
Lorenc, S. J., Handlon, B. E., and Bernold, L. E. (2000). “Development of a robotic bridge
maintenance system.” Automation in Construction, 9(3), 251–258.
Murray, D. J., Edwards, G., Mainprize, J. G., and Antonyshyn, O. (2008). “Optimizing
craniofacial osteotomies: applications of haptic and rapid prototyping technology.” Journal
of Oral and Maxillofacial Surgery: official journal of the American Association of Oral and
Maxillofacial Surgeons, 66(8), 1766–72.
Roodman, D. M., and Lenssen, N. (1995). A Building Revolution: How Ecology and Health
Concerns Are Transforming Construction. Worldwatch Institute.
Song, Y., Yan, Y., Zhang, R., Xu, D., and Wang, F. (2002). “Manufacture of the die of an
automobile deck part based on rapid prototyping and rapid tooling technology.” Journal of
Materials Processing Technology, 120(1-3), 237–242.
Thomas, C. L., Gaffney, T. M., Kaza, S., and Lee, C. H. (1996). “Rapid prototyping of large
scale aerospace structures.” 1996 IEEE Aerospace Applications Conference. Proceedings,
IEEE, 219–230.
Warszawski, A., and Navon, R. (1998). “Implementation of Robotics in Building: Current Status
and Future Prospects.” Journal of Construction Engineering and Management, American
Society of Civil Engineers, 124(1), 31–41.
Yamazaki, Y., and Maeda, J. (1998). “The SMART system: an integrated application of
automation and information technology in production process.” Computers in Industry,
35(1), 87–99.
Zhang, J., and Khoshnevis, B. (2013). “Optimal machine operation planning for construction by
Contour Crafting.” Automation in Construction, 29, 50–67.
Sourabh Bhattacharya
Department of Mechanical Engineering
Iowa State University
2025 Black Engineering
Ames, IA 50011
[email protected]
ABSTRACT
In this paper, we address a motion planning problem for an autonomous agricultural vehicle
modeled as a tractor-trailer system. We first present a numerical approach and a primitive-based
approach for computing the time-optimal path between given static initial and goal
configurations. In the former approach, we define a value function over the entire state space.
The value function is consistent with the time to reach the goal configuration, and it is then used
to compute the optimal trajectory. In the latter approach, based on regular and singular
primitives, we present an algorithm to construct such primitives and derive the final path.
Subsequently, we extend the results and present a dynamic motion planning strategy to
accommodate the case of a moving target configuration. Finally, simulation results are provided
to validate the feasibility and effectiveness of these techniques.
INTRODUCTION
The logistics problem can be described as the management of resources in order to meet the
specific requirements of customers. The resources to be dealt with include physical items, such
as materials and tools, as well as abstract items, such as information and energy. Logistics
problems arise in many areas, such as business, economics, and agriculture. In this paper,
we address the problem of logistics in path planning for agricultural vehicles.
Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, Iowa. © 2015
by Iowa State University. The contents of this paper reflect the views of the author(s), who are responsible for the
facts and accuracy of the information presented herein.
There have been some efforts in the past to address the problem of harvesting in large-scale
farming scenarios. In (Fokkens and Puylaert 1981), a linear programming model for harvesting
operations is presented; the model gives management results for harvesting operations at a
large-scale grain farm. In (Foulds and Wilson 2005) and (Basnet and Foulds 2006),
researchers analyze the scheduling of farm-to-farm harvesting operations for hay and
rapeseed. These works mainly focus on scheduling harvesting operations from farm to farm. In
contrast, our research focuses on motion planning for an unloading vehicle in a
single field.
Recently, path planning of agricultural machines has received some attention in the research
community. In (Makino, Yokoi, and Kakazu 1999), the authors develop a motion planning
system that integrates global and local motion planning components. In (Ferentinos, Arvanitis,
and Sigrimis 2002), the authors propose two heuristic optimization techniques for motion
planning of autonomous agricultural vehicles. In (Ali and Van Oudheusden 2009), the authors
address the motion planning of one combine using an integer linear programming formulation.
In (Oksanen and Visala 2009), the coverage path planning problem is considered; the presented
algorithms not only aim to find an efficient route but also ensure coverage of the whole field. In
(Hameed, Bochtis, and Sørensen 2013), a coverage planning approach is proposed that
accounts for the presence of obstacles. However, unlike our research, the aforementioned
works neither consider path planning for unloading vehicles nor model the vehicle as a
tractor-trailer system.
There has been some previous research on planning optimal trajectories for tractor-trailer
models, which are prevalent in farming applications. In (Divelbiss and Wen 1994), the authors
present an algorithm to find a feasible path that satisfies given non-holonomic constraints. In
(Divelbiss and Wen 1997), the authors propose a trajectory tracking strategy that controls a
tractor-trailer system moving along a path generated off-line. In (Hao, Laxton, Benson, and
Agrawal 2004), researchers present a differential flatness-based formation following for a
tractor-cart moving along with a combine harvester. In (Astolfi, Bolzern, and Locatelli 2004), a
Lyapunov technique is applied to design control laws for a tractor-trailer model to follow a
prescribed path. Researchers introduce the notion of equivalent size and propose a genetic-
algorithm-based approach for path planning in (Liu, Lu, Yang, and Chen 2008). In (Yu and
Hung 2012), the tractor-trailer model is regarded as a Dubins vehicle, which can only move at
constant speed and turn with upper-bounded curvature; the proposed algorithm is used to find
the shortest path in the Dubins Traveling Salesman Problem with Neighborhoods (Isaacs, Klein,
and Hespanha 2011). In (Chyba and Sekhavat 1999) and (Chitsaz 2013), the authors introduce
the notions of regular and singular primitives, which are locally time-optimal.
The contribution of this paper can be summarized as follows. First, based on the mathematical
model of the grain cart, we present a numerical approach, as well as a primitive-based
approach, to find the time-optimal solution to the path planning problem in different situations.
To tackle the case of a moving target configuration, we further propose a two-stage motion
planning strategy that takes advantage of the previous results. Finally, the feasibility and
effectiveness of the presented techniques are demonstrated by simulations.
The rest of the paper is organized as follows. In Section II, we present the mathematical models
for both the combine and the grain cart. In Section III, we present our previous work on the
scheduling of agricultural operations. Based on the scheduling scheme, we formulate the path
planning problem for the grain cart in Section IV. In Section V, we present a numerical approach
to obtain the time-optimal trajectory.
MATHEMATICAL MODELING
In this section, we describe the mathematical models for the combine harvester and the grain
cart.
Combine Harvester
A combine harvester is a machine for harvesting crops such as wheat, oats, rye, barley,
corn, soybeans, and flax. Figure 1 shows a combine at work. In this active mode, the header
cuts the crop and feeds it into the threshing cylinder. Grain and chaff are separated from the
straw as the crop passes through the concave grates. The grain, after being sieved, is stored
temporarily in the on-board tank, and the waste straw is ejected. We use C to denote the
maximum capacity of the on-board tank.
Threshing grain loss is an important issue for combine harvesters. For any combine, the
quantity of threshing grain loss depends greatly on the forward speed of the harvester. In (Flufy
and Stone 1983), the authors show that automatic control performs better than manual control
with respect to threshing grain loss: the forward speed is controlled to give a level of crop feed
corresponding to the required threshing grain loss. In this paper, we simplify the model and
assume that all combines possess the identical constant speed, denoted as v_ch.
Since the tank does not have a large capacity, a modern combine usually has an unloading
auger for transferring grain from the tank to other vehicles. For most combines, the auger is
mounted on the left side, as shown in Figure 1, so a vehicle has to be on the left side of the
combine to empty the tank. We denote the unloading rate of the tank through the auger as r_u
and the rate at which harvested grain fills the tank as r_f. When a combine performs the
harvesting and unloading operations simultaneously, the net unloading rate is r_u − r_f
(with r_u > r_f).
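The net-rate arithmetic above can be sketched in a few lines; all numeric values below are illustrative assumptions, not data from the paper:

```python
# Net unloading arithmetic from the text: while the combine harvests and
# unloads simultaneously, grain leaves the tank at rate r_u but arrives at
# rate r_f, so the tank drains at the net rate r_u - r_f (valid only when
# r_u > r_f). All numeric values below are illustrative assumptions.

def time_to_empty(capacity, r_u, r_f=0.0):
    """Time to empty a tank of `capacity` grain units while harvesting
    continues at rate r_f and the auger unloads at rate r_u."""
    net_rate = r_u - r_f
    if net_rate <= 0:
        raise ValueError("auger rate must exceed the harvesting fill rate")
    return capacity / net_rate

# Example: tank capacity C = 10, auger rate 2.5, fill rate 0.5.
print(time_to_empty(10.0, 2.5, 0.5))  # 5.0
```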
Figure 2(b) shows a grain cart. We model the grain cart as a trailer attached to a car-like robot,
with the trailer hitched at the robot's center. The equations of motion for the grain cart are as
follows.
q̇ = (ẋ, ẏ, θ̇, β̇)^T = (v cos θ, v sin θ, ω, −v sin β − ω)^T
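As a quick check of the model, the equations of motion can be integrated numerically; this minimal sketch uses simple Euler steps, and the step size, horizon, and constant control are illustrative assumptions:

```python
import math

# Euler integration of the grain-cart kinematics stated above:
#   x' = v cos(theta), y' = v sin(theta), theta' = w, beta' = -v sin(beta) - w.
# Step size, horizon, and the constant control are illustrative assumptions.

def step(q, v, w, dt):
    """One Euler step of the tractor-trailer state q = (x, y, theta, beta)."""
    x, y, theta, beta = q
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt,
            beta + (-v * math.sin(beta) - w) * dt)

def simulate(q0, v, w, dt, steps):
    q = q0
    for _ in range(steps):
        q = step(q, v, w, dt)
    return q

# Sanity check: with w = 0 and beta = 0 the trailer stays aligned and the
# cart simply advances v * t along the x-axis.
print(simulate((0.0, 0.0, 0.0, 0.0), v=1.0, w=0.0, dt=0.01, steps=100))
```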
PREVIOUS WORK
In previous work, we addressed the logistics scheduling problem of the grain cart during
harvesting operations. A scheduling scheme was proposed for an arbitrary number of combines
with a single grain cart. Based on the mathematical models and the proposed scheme, the grain
cart could serve multiple combines without their stopping the harvesting operation. In the
scheme, the grain cart was scheduled to serve the combines sequentially, so it needed to move
along a trajectory from one combine to the next. Figure 3 shows an example of the path
planning. Based on the scheduling scheme for the N-combine case, we have obtained a
constraint on ΔT, where ΔT denotes the travel time for the grain cart to switch combines. Since
the travel time is constrained, in this paper we focus on finding the time-optimal solution to the
path planning problem. In the next section, we provide a detailed problem description.
PROBLEM DESCRIPTION
In this section, we formulate the problem. Consider a grain cart moving from one combine to
the next. We would like to find the time-optimal path for the grain cart: given an initial
configuration, denoted as q_i, and a goal configuration, denoted as q_g, we intend to compute
the path from q_i to q_g that minimizes the total travel time. In the following sections, we first
consider the case where the initial and final configurations are static, after which we extend the
result to the case of a dynamic target configuration.
NUMERICAL APPROACH
In this section, we present a numerical approach to solve the navigation problem between two
given static configurations. Before computing the trajectory, we first define a value function and
establish a Hamilton-Jacobi equation based on the tractor-trailer model. We then present an
update scheme for computing the value function. Finally, the trajectory is computed using the
obtained results.
Hamilton-Jacobi Equation
Denote the set of admissible paths from the configuration q_i as A(x_i, y_i, θ_i, β_i). Given a
goal configuration q_g, we define the corresponding value function u : Q → ℝ⁺ ∪ {0} as follows
(Takei, Tsai, Shen, and Landa 2010).
The value function can be regarded as the optimal cost-to-go for the tractor-trailer model under
the given constraints, initial configuration, and final configuration. By applying the dynamic
programming principle to (2), we have
Dividing the terms by Δt and taking Δt → 0, we derive
With the equations of motion of the grain cart, the Hamilton-Jacobi-Bellman equation is
obtained as follows.
1 + cos(θ) ∂u/∂x + sin(θ) ∂u/∂y − sin(β) ∂u/∂β + inf_{|ω|≤1} { ω (∂u/∂θ − ∂u/∂β) } = 0   (5)
The last term in (5) can be eliminated by applying the bang-bang principle, ω = ±1. Since q_g is
the goal configuration and has no cost-to-go, we have u(q_g) = 0. For points located inside an
obstacle or outside the space, we define the cost-to-go to be infinity. In the next section, we
present an update scheme for the defined value function.
Update Scheme
In order to find the time-optimal path satisfying (5), we apply the fast sweeping method and
propose an update scheme for the value function u(q) over the entire space. The basic idea is
to exploit the fact that the value function has zero cost-to-go at the goal configuration, and to
compute the value function from the nodes close to the goal configuration outward to nodes at
farther positions.
With this in mind, we first set up a four-dimensional uniform Cartesian grid with refinement
(h_x, h_y, h_θ, h_β). Let u_{a,b,c,d} = u(q_{a,b,c,d}) = u(a h_x, b h_y, c h_θ, d h_β) be the
approximation of the solution on the grid nodes. Moreover, we discretize ω in the range [−1, 1]
and further define u*_{a,b,c,d} as follows.
where q̇ = (cos(c h_θ), sin(c h_θ), ω_i, −sin(d h_β) − ω_i)^T, ω_i is the i-th element of the
discretization, and Δt is the length of the time step. For the value of u(q_{a,b,c,d} + q̇ Δt), we
take the value directly if the point lies on the grid; otherwise it is approximated by averaging the
values of the adjacent nodes.
where the superscripts denote the iteration. We set up the termination condition of the
computation as follows.
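The sweeping idea can be illustrated on a low-dimensional analogue. The toy grid walker below is an assumption made for brevity (the paper's scheme runs on the full 4D (x, y, θ, β) grid), but it relaxes the same update: zero cost-to-go at the goal, infinity at obstacles, and repeated sweeps until no value improves:

```python
import math

# Low-dimensional analogue of the sweeping update: u is 0 at the goal,
# infinity elsewhere, and each sweep relaxes
#     u(node) = min over moves of (step_cost + u(next node)).
# Here the "vehicle" is a toy grid walker with four unit moves, which keeps
# the sketch short; the real scheme sweeps a 4D (x, y, theta, beta) grid.

def sweep_value_function(nx, ny, goal, obstacles=frozenset(), tol=1e-9):
    INF = math.inf
    u = [[INF] * ny for _ in range(nx)]
    gx, gy = goal
    u[gx][gy] = 0.0  # zero cost-to-go at the goal configuration
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while True:
        change = 0.0
        for x in range(nx):
            for y in range(ny):
                if (x, y) == goal or (x, y) in obstacles:
                    continue  # goal and obstacle nodes are never updated
                best = min((1.0 + u[x + dx][y + dy]
                            for dx, dy in moves
                            if 0 <= x + dx < nx and 0 <= y + dy < ny),
                           default=INF)
                if best < u[x][y]:
                    change = max(change, u[x][y] - best)  # inf on first improvement
                    u[x][y] = best
        if change <= tol:  # no node improved in this sweep: converged
            break
    return u
```

A usage example: `sweep_value_function(5, 5, (0, 0))` returns a field whose value at `(4, 4)` is 8.0, the minimum number of unit moves to the goal; obstacle nodes keep infinite cost-to-go and the sweep routes values around them.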
Computing Trajectory
By using the obtained value function, we are able to derive the time-optimal path from any initial
configuration qi to the goal configuration qg . According to (5), the control law can be
summarized as follows.
ẋ = cos θ
ẏ = sin θ
θ̇ = −sgn(∂u/∂θ − ∂u/∂β)   (9)
β̇ = −sin β − θ̇
Note that the partial derivatives in (9) are obtained by applying a central difference
approximation. In the computation of the trajectory, values of u that are not on the grid are
computed using nearest-neighbor interpolation.
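The trajectory-extraction step can be sketched as greedy descent on a stored value function; the Manhattan-distance field used here is a stand-in assumption for a value function computed offline, not the paper's 4D field:

```python
# Greedy descent on a precomputed value function, a low-dimensional
# analogue of integrating the control law above: from any start node,
# repeatedly move to the neighbor with the smallest cost-to-go until the
# goal (u = 0) is reached. The Manhattan-distance field below is a
# stand-in assumption for a value function computed offline.

def descend(u, start):
    path = [start]
    x, y = start
    while u[x][y] > 0:
        # pick the admissible neighbor minimizing the cost-to-go
        x, y = min(((x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < len(u) and 0 <= y + dy < len(u[0])),
                   key=lambda n: u[n[0]][n[1]])
        path.append((x, y))
    return path

# Manhattan distance to the goal (0, 0) plays the role of u(q).
u = [[abs(i) + abs(j) for j in range(4)] for i in range(4)]
print(descend(u, (2, 1)))  # [(2, 1), (1, 1), (0, 1), (0, 0)]
```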
The numerical approach computes the time-optimal trajectory efficiently once the corresponding
value function is available. The main time consumption is in computing the value function for the
final configuration, but in a real implementation one can compute the value function beforehand,
so this cost does not affect real-time path planning. In the next section, we consider another
approach for the situation in which no such value function is available.
PRIMITIVE-BASED APPROACH
Related Work
Based on the model presented in Section II, the time-optimal trajectory satisfies the Pontryagin
Maximum Principle. In (Chitsaz 2013), the authors define adjoint variables λ = (λ_x, λ_y, λ_θ, λ_β)
and the Hamiltonian as follows.
Depending on φ_v and φ_ω, the optimal trajectory, which is called an extremal, consists of two
categories, namely, regular and singular. In a regular primitive, φ_v ≠ 0 and φ_ω ≠ 0. Based on
the state equations, a regular primitive satisfies
θ(t) = ωt + θ(t₀)
x(t) = x(t₀) + (v/ω)(sin θ(t) − sin θ(t₀))
y(t) = y(t₀) − (v/ω)(cos θ(t) − cos θ(t₀))   (12)
β(t) = 2 arctan( 2v/(ωt + K₁) − v/ω ),  K₁ = 2vω / (v + ω tan(β(t₀)/2))
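The (x, y, θ) portion of a regular primitive is a circular arc; a minimal sketch evaluating that closed form (the trailer angle β is omitted here for brevity):

```python
import math

# The (x, y, theta) part of a regular primitive is a circular arc with
# constant turn rate w; this sketch evaluates the closed form above.
# The trailer angle beta(t) is omitted.

def arc_pose(q0, v, w, t):
    """Pose at time t on a constant-w arc from q0 = (x0, y0, theta0)."""
    x0, y0, th0 = q0
    th = w * t + th0
    x = x0 + (v / w) * (math.sin(th) - math.sin(th0))
    y = y0 - (v / w) * (math.cos(th) - math.cos(th0))
    return x, y, th

# Quarter turn at unit speed and unit turn rate: the cart ends at (1, 1)
# heading along +y.
x, y, th = arc_pose((0.0, 0.0, 0.0), v=1.0, w=1.0, t=math.pi / 2)
print(round(x, 6), round(y, 6), round(th, 6))  # 1.0 1.0 1.570796
```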
In a singular primitive, we have either φ_v ≡ 0 or φ_ω ≡ 0. Since the grain cart has a constant
forward speed, we only consider the case φ_ω ≡ 0. It has been proved in (Chitsaz 2013) that if
a φ_ω-singular primitive contains a straight line segment, then either the entire primitive is a
straight line segment, or
α(t) = ±2β(t)
d(t) ω(t) = ±2 sin(β(t))   (13)
π/6 ≤ β(t) ≤ 5π/6  or  7π/6 ≤ β(t) ≤ 11π/6
in which α denotes the angle between the robot orientation and the line, and d denotes the
distance between the robot's center and the line. The path in the latter case is called a merging
curve.
In order to minimize the travel time for the grain cart, we use straight lines instead of merging
curves for the φ_ω-singular primitives. Furthermore, since q_i and q_g have the same θ, the
arcs R1 and R2 should have the same length and the same central angle γ. Therefore, for a
given γ, the slope of S1 can be computed, and since q_i and q_g are known, the entire path can
be derived. To minimize the travel time, we vary the central angle of the regular primitives and
take the minimum-time trajectory among all feasible trajectories as the final path. The complete
procedure is given in the following algorithm.
For γ = 0 → π
    Compute the path P_γ starting from q_i
    If P_γ reaches q_g
        If travel time of P_γ < T_f
            T_f ← travel time of P_γ, P_f ← P_γ
        End if
    End if
End for
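The sweep over γ can be sketched as follows; `evaluate_path` is a hypothetical stand-in for constructing the primitive sequence for a given γ and checking that it reaches q_g (returning None when it does not):

```python
import math

# Sketch of the gamma sweep: try candidate central angles gamma in [0, pi]
# and keep the feasible path with the smallest travel time. `evaluate_path`
# is a hypothetical stand-in for the primitive construction and feasibility
# check described in the text; it returns (travel_time, path) or None.

def best_gamma(evaluate_path, n_samples=180):
    best_time, best_path = math.inf, None
    for k in range(n_samples + 1):
        gamma = math.pi * k / n_samples      # gamma swept over [0, pi]
        result = evaluate_path(gamma)        # None if q_g is not reached
        if result is not None:
            travel_time, path = result
            if travel_time < best_time:
                best_time, best_path = travel_time, path
    return best_time, best_path

# Toy evaluator (assumption): travel time minimized at gamma = pi / 3.
t, path = best_gamma(lambda g: (1.0 + (g - math.pi / 3) ** 2, g))
```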
static goal configuration. In the figure, t₀ denotes the starting time, so q_g(t₀) can be
considered the goal configuration at the beginning. We then attempt to find a ΔT satisfying the
ideal case in which the grain cart exactly reaches the desired goal configuration q_g(t₀ + ΔT).
With this in mind, we approximate ΔT by using a lower bound on the length of the path, namely
vΔT, as shown in Figure 5. Based on the geometry, the approximate ΔT can be solved from the
following equation.
We let q_i and q_g(t₀ + ΔT) be the initial and goal configurations in performing static path
planning. Because vΔT is a lower bound on the path length for a non-holonomic vehicle, the
grain cart will lag behind the combine when it reaches the goal configuration. Therefore, in the
second stage, we apply PID feedback control for the grain cart to catch up with the combine.
Figure 6 shows the block diagram of the control system: the position of the combine is the
reference input, and PID control is applied to the grain cart's speed.
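The second-stage catch-up controller can be sketched as follows; the gains, step size, and horizon below are illustrative assumptions, not values from the paper:

```python
# Sketch of the second-stage catch-up controller: the combine position is
# the reference and a PID loop commands the grain cart speed, saturated at
# the cart's maximum speed. Gains, step size, and horizon are illustrative
# assumptions, not values from the paper.

def catch_up(x_cart, v_combine, v_max, kp=2.0, ki=0.1, kd=0.5,
             dt=0.05, steps=400):
    x_comb, integral, prev_err = 0.0, 0.0, None
    for _ in range(steps):
        err = x_comb - x_cart                      # position error
        integral += err * dt
        deriv = 0.0 if prev_err is None else (err - prev_err) / dt
        prev_err = err
        v = kp * err + ki * integral + kd * deriv  # PID speed command
        v = max(0.0, min(v_max, v))                # saturate at v_max
        x_cart += v * dt
        x_comb += v_combine * dt
    return x_comb - x_cart                         # final lag

# Cart starts 2 units behind a combine moving at 0.4; cart top speed 1.
lag = catch_up(x_cart=-2.0, v_combine=0.4, v_max=1.0)
```

Because the cart's maximum speed exceeds the combine's speed, the saturated command closes the gap first and the PID loop then regulates the cart near the combine's speed, driving the lag toward zero.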
SIMULATION
In this section, we present simulation results for the aforementioned techniques.
Figure 7 illustrates the paths obtained using the numerical approach and the primitive-based
approach. Because the tractor-trailer model has four state variables, it is hard to visualize the
variations of all of them. For this reason, the simulation shows only the path of (x, y), which
represents the physical location of the grain cart in the environment; a colored arrow indicates
the final orientation. In this simulation, the initial and goal configurations are set to
q_i = (1, 1, 0, 0)^T and q_g = (4, 4, 0, 0)^T, respectively. For the numerical approach, Table 1
lists the refinement parameters used in the computation of the value function. In both
approaches, the path computation terminates when the state of the grain cart reaches a small
neighborhood of the goal configuration.
Simulation results show that both paths reach the goal configuration, which validates the
proposed approaches. In the numerical approach, the path can be affected by the error
introduced by inappropriate refinement parameters. With finer refinement, the path becomes
more accurate, but at a higher computational cost for the value function.
In the second simulation, we compare the performance of the two proposed approaches. We
keep y_g − y_i = 4 and plot the travel time with respect to the distance ratio
(x_g − x_i)/(y_g − y_i), as shown in Figure 8. The results show that, for a given ratio, the travel
times of the two approaches are very close. Since the numerical approach provides the
time-optimal solution, this indicates that the primitive-based approach also performs well. Note
that in some cases the primitive-based approach performs better because of the numerical
error in the numerical approach.
Figure 9 illustrates the last simulation, an implementation of the dynamic motion planning
strategy. In this simulation, the initial configuration of the grain cart and the initial position of the
combine are set to (1, 5, 0, 0)^T and (5, 9), respectively. The speed of the combine is
v_ch = 0.4, whereas the speed of the grain cart v has a maximum of 1. The simulation shows
the paths of the grain cart in both stages, which demonstrates the feasibility of the proposed
dynamic motion planning strategy.
CONCLUSION
In this paper, we addressed the problem of finding the time-optimal path for a grain cart
navigating from one combine to an adjacent combine. First, a numerical approach for
computing the path between given static initial and final configurations was presented.
REFERENCES
O. Ali and D. Van Oudheusden, “Logistics planning for agricultural vehicles,” in Industrial
Engineering and Engineering Management, 2009. IEEM 2009. IEEE International
Conference on, pp. 311--314, Dec 2009.
H. Chitsaz, “On time-optimal trajectories for a car-like robot with one trailer,” CoRR, 2013.
M. Chyba and S. Sekhavat, “Time optimal paths for a mobile robot with one trailer,” in Intelligent
Robots and Systems, 1999. IROS '99. Proceedings. 1999 IEEE/RSJ International
Conference on, vol. 3, pp. 1669--1674 vol.3, 1999.
A. Divelbiss and J. Wen, “Nonholonomic path planning with inequality,” in Robotics and
Automation, 1994. Proceedings., 1994 IEEE International Conference on, pp. 52--57
vol.1, May 1994.
A. Divelbiss and J. Wen, “Trajectory tracking control of a car-trailer system,” Control Systems
Technology, IEEE Transactions on, vol. 5, pp. 269--278, May 1997.
K. Ferentinos, K. Arvanitis, and N. Sigrimis, “Heuristic optimization methods for motion planning
of autonomous agricultural vehicles,” Journal of Optimization, vol. 23, no. 2, pp. 155--
170, 2002.
M. L. Flufy and G. Stone, “Speed control of a combine harvester to maintain a specific level of
measured threshing grain loss,” Journal of Agricultural Engineering Research, vol. 28,
no. 6, pp. 537 -- 543, 1983.
L. Foulds and J. Wilson, “Scheduling operations for the harvesting of renewable resources,”
Journal of Food Engineering, vol. 70, no. 3, pp. 281 -- 292, 2005. Operational Research
and Food Logistics.
I. Hameed, D. Bochtis, and C. Sørensen, “An optimized field coverage planning approach for
navigation of agricultural robots in fields involving obstacle areas,” International Journal
of Advanced Robotic Systems, vol. 10, no. 231, pp. 1--9, 2013.
J. Isaacs, D. Klein, and J. Hespanha, “Algorithms for the traveling salesman problem with
neighborhoods involving a dubins vehicle,” in American Control Conference (ACC),
2011, pp. 1704--1709, June 2011.
Z. Liu, Q. Lu, P. Yang, and L. Chen, “Path planning for tractor-trailer mobile robot system based
on equivalent size,” in Intelligent Control and Automation, 2008. WCICA 2008. 7th World
Congress on, pp. 5744--5749, June 2008.
T. Oksanen and A. Visala, “Coverage path planning algorithms for agricultural field machines,”
Journal of Field Robotics, vol. 26, no. 8, pp. 651--668, 2009.
R. Takei, R. Tsai, H. Shen, and Y. Landa, “A practical path-planning algorithm for a simple car:
a hamilton-jacobi approach,” in American Control Conference (ACC), 2010, pp. 6175--
6180, June 2010.
X. Yu and J. Hung, “Optimal path planning for an autonomous robot-trailer system,” in IECON
2012 - 38th Annual Conference on IEEE Industrial Electronics Society, pp. 2762--2767,
Oct 2012.