2022 - Visual Structural Inspection Datasets
Eric Bianchi – Virginia Polytechnic Institute and State University ([email protected])
Matthew Hebdon, Ph.D., P.E. – University of Texas at Austin ([email protected])
ABSTRACT
As research has turned to artificial intelligence to augment the inspection process, the need for image data has become particularly pressing. However, across the structural inspection and structural health monitoring literature, it is commonly noted that image data are scarce. We have therefore procured the most extensive collection of datasets in the field, compiling eighty-six papers with image data and datasets pertaining to structural inspection for machine learning algorithms. This data lake provides an exceptionally rich starting point for researchers beginning their next machine learning application in visual inspection. Additionally, to continue the growth of this data lake, the catalog is available as a collaborative table which may be edited and extended over time. Furthermore, through our review, we identified trends in the experimental data and emerging and promising methods, and we provide suggestions for future data-driven research.
1 Introduction
Over the past several years, the structural health monitoring research community has seen growth in deep learning algorithms to augment the structural inspection process. Most structural inspections, e.g., of bridges, are conducted visually. As such, a large subset of the research has focused on visual inspection tasks and the associated collected data. These tasks typically include the detection of damage instances or the quantification of damage. As an auxiliary task to the finding and quantification of damage, there has also been a focus on means to make the data collection for these tasks easier; the most popular robotic solution is the use of unmanned aerial systems (UAS). Given that visual inspection is the most common form of structural monitoring, we have narrowed our discussion to papers which use imagery data.
A consensus from several recent literature reviews from 2021 [1]–[3] indicates that there is a general lack of structural inspection data (image data or other data forms). A recent article that trained an image classifier for damage detection in the structural domain [4] claimed that, to the best of the authors' knowledge, there were only six publicly available datasets for visual bridge inspection tasks,
although many more datasets were publicly available even before the time of their publication. Even so, this highlights that many datasets can be difficult to find, or that they are not publicly accessible. We have identified three main causes behind this data scarcity. First, data source owners may have agreements with authors that inhibit them from widely sharing the data; such agreements are common for infrastructure-related data, as sharing it could pose a risk to the public. Second, authors may choose to make their data inaccessible because of the time and resources required to make it accessible. Third, there is an enormous variety of data available from infrastructure inspections, and covering all of the combinations of defects, bridge types, and bridge elements adequately is extremely difficult. A lack of data is problematic for training supervised deep learning algorithms, as they require large quantities of diverse data to generalize effectively. This research was inspired by the difficulty of finding, accessing, and using the currently available domain-specific image-based datasets for structural inspections. Our goal was to quantify the extent of the available image data within the domain, group trends found in the literature, and identify promising techniques and methods.
We achieved this goal by cataloging all relevant papers which used image data for machine learning applications, with an emphasis on the civil infrastructure structural inspection domain. We targeted papers related to detecting any type of structural damage (detection, localization, bounding, or extents), quantifying damage (converting pixel space to real-world metrics), and predicting damage (damage evolution). We also considered datasets and algorithms which detected structural materials or performed other auxiliary tasks which may aid in the detection of damage. This included some post-manufacturing material inspection which could be applied to, or supplement, the standard civil infrastructure inspection process. Given these conditions, hundreds of articles from 2010 through 2021 were reviewed. Ultimately, eighty-six papers were found to have used some form of image-based data to train a machine learning algorithm. We believe we have presented a representative snapshot of the current state of structural inspection data research. In summary, this paper provides:
• A cataloged list of available data from eighty-six papers for image-based structural inspection assessment for machine learning applications.
• A living, collaborative catalog of available image data and image datasets in the structural inspection domain [https://github.com/beric7/structural_inspection_main].
• A discussion of trends found in the discovered datasets, of promising and emerging methodologies, and of suggestions to enhance future data availability within the structural inspection and structural health monitoring domain.
2 Literature review
There were two driving factors motivating this literature review. The first was to discover as many curated datasets within the structural domain as possible. The second was to establish a foundational knowledge of the current methods and techniques for detecting, quantifying, and forecasting damage evolution. We define forecasting damage evolution as the best estimation of the future condition of the element, area, or global structure based on its current state. Table 2 catalogs all papers with datasets and the methods observed in our dataset review, including unique and emerging techniques in the field.
The findings from the review were compiled into a dataset catalog, Table 2. This table combines eighty-six discovered papers with data and repositories of data from 2017-2021, with a few exceptions from 2012 and 2016. Twenty-six of the discovered papers had datasets which were publicly accessible (green), ten were not yet accessible or gave the option of contacting the author for more information (yellow), and the other forty-five datasets were not listed or not made readily available (red). This means that well over half of the sources did not leave researchers readily accessible data with which to verify the results of their methodologies or to use in their own methods. Figure 1 depicts the data availability we found over the years 2017 to 2021. On a positive note, data availability seems to be trending upward, but over this period the majority of datasets were inaccessible.
Figure 1 - Trend of Data Availability for Discovered Papers Over Time (2017-2021)
Because of the general state of data availability, it appears that all too often researchers generated their own datasets, spending valuable resources on annotation and image processing when they could have focused more on the method itself. When those authors in turn do not make their newly generated dataset accessible to the public, the cycle continues. Making data available is important so that researchers can rerun experiments, validate results, extend datasets, and utilize the data for other offshoot tasks. All these reasons are especially important for data in machine learning, as more data can drive the performance of models higher.
Two terms which have been introduced are base material and levels of detection. Base material refers to the common structural material over which the system performs detection. For example, the detection of concrete cracks, spalling, exposed rebar, etc. would have a base material of concrete. We have defined the levels of detection as instance, quantification, change, and forecasting (Table 1).
Table 1 – Levels of detection
[I] Instance (image classification, object detection, semantic segmentation)
[II] Quantification (transform pixel or point space to real-world metric space)
[III] Change (detected difference between an instance or quantifiable measure)
[IV] Forecasting (estimation of a future state based on a current state)
The base level is instance detection [I]. Instance detection can be the detection of a target's presence in an image (image classification), the bounding of an object in an image (object detection), or the semantic extents of a region in an image (semantic segmentation). Quantification [II] of a region or target cannot happen without instance detection. We define quantification as the transform from pixel or point space to real-world measurable space. Change [III] is the third level of detection. Change detection does not necessarily need to be real-world quantifiable to determine whether there has been some progression. Therefore, change detection can include instance detection [I], quantification [II], or both. When searching for image-based inspection algorithms, our literature review did not discover image-based change detection between 2017 and 2021. We do recognize that there was one paper on image-based change detection for cracks found in tunnels [5], but it did not use machine learning in its process. This paper was also mentioned by V. Hoskere when describing change detection in the advances in structural health monitoring [6]. It should be noted that there were point-cloud-based alignment techniques in the literature which measured change [6]; these were not highlighted since we focused on image-based change detection. The last level of detection is forecasting [IV]. Forecasting, as defined earlier, is a best estimation of the future condition based on a current state. In terms of structural inspection, forecasting predicts changes in the number of damage instances or in the extents and quantity of damage. In this sense, it affects all three other levels of detection, which is why forecasting is the final level of detection.
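To make the quantification level [II] more tangible, a common way to convert a pixel measurement into a physical length is through the camera's ground sample distance. The sketch below is a minimal illustration of that idea under a pinhole-camera, head-on-view assumption; it is not taken from any of the cataloged papers, and the function names and parameter values are our own.

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           distance_m, image_width_px):
    """Approximate real-world size (m) covered by one pixel for a
    pinhole camera viewing a roughly planar surface head-on."""
    return (sensor_width_mm / focal_length_mm) * distance_m / image_width_px


def pixel_length_to_metric(length_px, sensor_width_mm, focal_length_mm,
                           distance_m, image_width_px):
    """Convert a measured pixel length (e.g., a crack width taken from a
    segmentation mask) into meters."""
    gsd = ground_sample_distance(sensor_width_mm, focal_length_mm,
                                 distance_m, image_width_px)
    return length_px * gsd


# Example: a 35 px crack width imaged from 2 m with a 24 mm lens,
# a 13.2 mm wide sensor, and a 4000 px wide image.
width_m = pixel_length_to_metric(35, 13.2, 24.0, 2.0, 4000)
print(f"approx. crack width: {width_m * 1000:.1f} mm")
```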
Overall, the table was organized to highlight trends from these sources as well as to provide a solid dataset basis for researchers' intended applications. While having this table as a starting point is good, it does not account for future datasets and papers with data submitted in our field. Therefore, we established an open-source drop-table in Google Sheets [7]. It contains the data found in Table 2, with the intention of enabling controlled collaboration and sharing of data sources within our community. The details of this drop table are described in the discussion section.
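To illustrate the kind of record the catalog and drop-table hold, the sketch below models one Table 2 row as a small Python data structure. The field names mirror the table's columns, but the classes themselves are our own construction for illustration, not part of the published drop-table.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class DetectionLevel(Enum):
    INSTANCE = "I"         # classification, object detection, segmentation
    QUANTIFICATION = "II"  # pixel/point space to real-world metrics
    CHANGE = "III"         # detected difference between states
    FORECASTING = "IV"     # estimation of a future state


@dataclass
class CatalogEntry:
    """One row of the dataset catalog (Table 2)."""
    year: int
    title: str
    defect_classes: List[str]
    base_materials: List[str]           # e.g., ["Concrete", "Steel"]
    detection_levels: List[DetectionLevel]
    data_summary: str                   # image counts, resolutions, splits
    availability: Optional[str] = None  # URL/DOI, or None if not listed
    notes: str = ""


entry = CatalogEntry(
    year=2021,
    title="Corrosion Condition State Classification Dataset",
    defect_classes=["Steel corrosion condition state"],
    base_materials=["Steel"],
    detection_levels=[DetectionLevel.INSTANCE],
    data_summary="440 base images resized to 512x512; 396 train / 44 test",
    availability="https://doi.org/10.7294/16624663.v1",
)
```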
2018 Automated Vision-Based Concrete crack, • Data: 2073 Crack images, 1,400 Joint/edge, 1,511 Not listed in
Detection of Cracks on Joint/edge, Plant, Plant, 2,211 Intact surfaces at (227x227). paper
Concrete Surfaces Using a Intact surfaces • Base Materials: Concrete
Deep Learning Technique • Type: [I], instance detection
[14]
2018 Concrete Cracks Detection Concrete crack • Data: 851 base images (756x756). 80% to training, Not listed in
Based on Deep Learning 20% to testing. paper
Image Classification [15] • Sub-Images: 3,500 augmented images at (256x256).
2,336 Concrete cracks, 1,164 Non-cracks.
• Base Materials: Concrete
• Type: [I], instance detection
2018 Deep Transfer Learning for Task 1: Scene level, • Dataset Name: ‘Structural ImageNet’ or ‘Peer Hub https://apps.peer
Image-Based Structural Task 2: Damage ImageNet’ .berkeley.edu/ph
Damage Recognition [16] state detection, Task i-net/
3: Spalling • Data: As of September 2021, this dataset contains
condition, Task 4: 36,413 images with multiple attributes for the
Material type, Task following baseline recognition tasks: Scene level
5: Collapse mode, classification, Structural component type
Task 6: Component identification, Crack existence check, and Damage-
type, Task 7: level detection.
Damage level, Task • Base Material: Concrete, Steel
8: Damage type • Type: [I], instance detection
• Notes: Image classification and semantic
segmentation – VGGNet and class activation maps
2018 Evaluation of Deep Steel corrosion • Data: 926 base images Not listed in
Learning Approaches based • Sub-Images: 33,039 corroded, 34,148 non-corroded paper
on Convolutional Neural at (128x128)
Networks for Corrosion • Base Material: Steel
Detection [17] • Type: [I], instance detection
• Notes: Exhaustive sliding window
2019 Image-Based Concrete Concrete crack • Dataset Name: ‘ICCD’ Available from
Crack Detection Using • Data: 60,000 cropped images (256x256). 30,000 the
Convolutional Neural crack, 30,000 non-crack. 1455 images with corresponding
Network and Exhaustive (4160 × 3120) were used to evaluate. author upon
Search Technique [18] • Base Material: Concrete request.
• Type: [I], instance detection
• Notes: Sliding window technique
2019 Recognition Of Surface Rolled-in Scale, • Data: 300 base images for each class at (200x200). Not listed in
Defects On Steel Sheet Patches, Crazing, Up-sampled to (224x224) for training/testing. 50% paper
Using Transfer Learning Pitted Surface, training, 50% testing.
[19] Inclusion, Scratches • Base Material: Steel
• Type: [I], instance detection
2019 Patch-Based Crack Crack, Road • Data: 664 base images at (2560x1440). Training Not listed in
Detection in Black Box marking, Intact-road (352), Validation (192), Testing (120) resized to paper
Images Using Convolutional (1,920x1,080).
Neural Networks [20] • Sub-Images: 30,000 augmented, 6,000 augmented
for validation at (40x40)
• Base Material: Asphalt
• Type: [I], instance detection
2019 Automated Region-of- Region • Data: 5,321 base images (5472x3648 and Not listed in
interest Localization and identification 4000x2664). 4,298 at (256x256) resized region of paper
Classification for Vision- interest images (ROIs). 945 ROIs are annotated as
Based Visual Assessment of negative, and 3,353 are positive.
Civil Infrastructure [21] • Base Material: Steel
• Type: [I], instance detection
2019 New Automated BIM Object Object Classification • Data: 1,891 base image extracted objects from Not listed in
Classification Method to Industry Foundation Classes IFC models. 795 Beams, paper
Support BIM 412 Columns, 348 Footings, 74 Slabs, 262 Wall
Interoperability [22] object instances.
• Base Material: Other
• Type: [I], instance detection
• Notes: computer generated elements (BIM)
2019 Vision and Entropy-Based Transverse cracks, • Data: 21,600 frames total – 8,423 contained Not listed in
Detection of Distressed Longitudinal cracks, pavement defects. 442 Transverse Cracks, 574 paper
Areas for Integrated Block cracks Longitudinal Cracks, 170 Edge Cracks, 147 Block
Pavement Condition Alligator cracking, Cracks, 659 Alligator Cracking, 197 Potholes, 602
Assessment [23] Potholes, Patches, Patches, 155 Shoving, 263 Rutting, 255 Distortion,
Shoving, Rutting, 536 Raveling, 171 Bleeding, 4,252 Combination.
Distortion, Raveling, • Base Material: Asphalt
Bleeding. • Type: [I], instance detection
• Notes: Defects broken down into 20x20 patches.
2019 Multi-Classifier for Concrete cracks, • Dataset Name: ‘MCDS’ https://doi.org/1
Reinforced Concrete Bridge Efflorescence, • Data: 3,607 base images. 789 Cracks, 311 0.5281/zenodo.2
Defects [24] General defect, Efflorescence, 264 General Defect, 452 No defect, 601506
Scaling, Spalling, 168 Scaling, 427 Spalling, 223 Exposed
Exposed reinforcement, 203 No exposed reinforcement, 355
reinforcement, Rust Rust staining, 415 No Rust staining. Training (2,545),
staining Testing (1,062).
• Base Material: Concrete, Steel
• Type: [I], instance detection
• Notes: Image resolutions not defined
2019 Autonomous Bridge Crack Concrete cracks • Dataset Name: ‘BCD’ https://github.co
Detection using Deep • Data: 4,058 crack and 2,011 background images at m/tjdxxhy/crack
Convolutional Neural (224x224). Training (4,856), Testing (1,213). -detection
Networks [25] • Base Material: Concrete
• Type: [I], instance detection
2019 Automatic Detection of Spalling • Data: 1,240 base images at (100x100), 620 spalling https://github.co
Concrete Spalling Using images, 90% to training, 10% to testing m/NhatDucHoa
Piecewise Linear Stochastic • Base Material: Concrete ng/PL_LR_WS
Gradient Descent Logistic • Type: [I], instance detection D
Regression and Image • Notes: Piecewise linear stochastic gradient descent
Texture Analysis [26] logistic regression (PL-SGDLR) used for pattern
recognition
2019 Deep Learning for Detecting Mold, Stain, Paint • Data: Number and resolution of baseline images not Not listed in
Building Defects Using deterioration specified. paper
Convolutional Neural • Sub-Images: 2,622 sub-images at (224x244).
Networks [27] Training (20%), Validation (20%), Test (20%). 717
Mold, 632 Stain, 594 Paint deterioration, 679 No
damage.
• Base Material: Other
• Type: [I], instance detection
• Notes: localized detection with class activation
2019 Bridge Sub Structure Defect Concrete crack, • Data: 180 base images Not listed in
Inspection Assistance by Spalling, Erosion, • Sub-Images: 3926 sub-images paper
using Deep Learning [28] Stain • Base Material: Concrete
• Type: [I], instance detection
• Notes: Exhaustive sliding window with contouring
for sub-structures
2020 An Intelligent Classification Pavement crack, • Data: 600,000 images taken. 40,610 images at Not listed in
Model for Surface Defects Plate fracturing, (2048x2000) scaled to (224x224). 3,000 normal, paper
on Cement Concrete Corner rupturing, 8,090 cracking, 9,790 plate fracturing, 3,320 corner
Bridges [29] Exfoliation, rupturing, 4,180 edge/corner exfoliation, 6900
Skeleton exposure, skeleton exposure, 5,240 repairs. 90% training, 10%
Repairs testing.
• Base Material: Concrete
• Type: [I], instance detection
2020 A Compact Convolutional Other (textures and • Data: Cited several data sources, although did not Cited all sources
Neural Network for Surface surface materials) include links to their pre-processed data. but did not
Defect Inspection [30] • Base Material: Other include
• Type: [I], instance detection processed data
• Notes: Binary segmentation and image classification. or reference to
LW model with exhaustive sliding window for data splits.
surface defects
2021 Development of Open- Steel bearing • Dataset Name: ‘Bearing Condition State Dataset’ https://doi.org/1
source Collaborative condition states • Data: 947 base images of bearings. 137 Condition 0.7294/1662464
Structural Inspection State (CS1), 238 (CS2), 500 (CS3), 99 (CS4). Resized 2.v1
Datasets [31] to (300x300). 90% training, 10% testing
• Base Material: Steel
• Type: [I], instance detection
• Notes: (Annotation Guidelines Included).
EfficientNet B3 for training.
OBJECT DETECTION
2016 Computer Vision-based Collapsed, Non- • Data: 1,850 collapsed base images, 3,420 non- Not listed in
Structural Assessment collapsed collapsed base images resized to (256x256). Training paper
Exploiting Large Volumes of (2,636) and Validation (1,317).
Images [32] • Base Material: Concrete, Steel
• Type: [I], instance detection
2018 Visual Data Classification Spalling • Data: 1,086 spalling base images, 3,158 spalling datacenterhub.or
in Post-event Building instances. Resized to (256x256) g for raw image
Reconnaissance [33] • Base Material: Concrete post-disaster
• Type: [I], instance detection data
2018 Automated Road Crack Asphalt crack • Data: 9,053 images. Training (7,240), Testing https://github.co
Detection Using Deep (1,813). Eight different class categories for specific m/TITAN-
Convolutional Neural crack types. lab/Road-crack-
Networks [34] • Base Material: Asphalt detection
• Type: [I], instance detection
• Notes: YOLOv2
2018 Unified Vision-Based Concrete crack, • Data: 3,497 baseline images containing concrete Not listed in
Methodology for Exposed rebar, defects. Training augmented images (7,407), Testing paper
Simultaneous Concrete Cavity (300) images.
Defect Detection and • Base Material: Concrete
Geolocalization [35] • Type: [I], instance detection
• Notes: Big data and coarse image-based localization
of urban city environment concrete images and their
defects.
2018 Bridge Damage Detection Concrete crack, Pop- • Data: 2,206 base images of inspection images Not listed in
using a Single-Stage out, Spalling, (1280×960) and (4000x3000). The distribution of paper
Detector and Field Exposed rebar object instances in the dataset are [35% for crack,
Inspection Images [36][37] 15% for pop-out, 13% for spalling and 37% for
exposed rebar]
• Base Material: Concrete
• Type: [I], instance detection
• Notes: YOLOv3
2018 Autonomous Structural Steel corrosion, • Data: 297 base images (6000x4000) Not listed in
Visual Inspection Using Delamination, • Sub-Images: 500 images of concrete cracks (574 paper
Region-Based Deep Concrete crack object instances), 410 images of delamination (1,068
Learning for Detecting object instances), 1,456 images of steel corrosion; 874
Multiple Damage Types [38] (medium corrosion), 963 (high corrosion), 1,193 bolt
object instances. Resolution at (500x375).
• Base Material: Concrete, Steel
• Type: [I], instance detection
• Notes: exhaustive sliding window
2019 Crack and Non-crack Concrete crack • Data: 3,186 Crack Candidate Regions (CCR) were Not listed in
Classification from generated, which consist of 527 actual cracks and paper
Concrete Surface Images 2,659 non-cracks.
Using Machine Learning • Base Material: Concrete
[39] • Type: [I], instance detection
• Notes: SURF and CNN for Crack Candidate Regions
(CCR)
2019 Meta-learning Concrete crack, • Dataset Name: ‘CODEBRIM’ https://doi.org/1
Convolutional Neural Spalling, Steel • Data: 1,590 base images (6000x4000) 0.5281/zenodo.2
Architectures for Multi- corrosion, Exposed • Sub-Images: 5,354 bounding box defect annotations 620293
target Concrete Defect rebar, Efflorescence and 2,506 non-overlapping background bounding
Classification with the boxes. 2507 concrete crack, 1,898 spalling, 833
Concrete Defect Bridge corrosion, 1,507 exposed rebar, 1,559 efflorescence
Image Dataset Concrete bounding box instances at (224x224)
Defect Bridge Image • Base Material: Concrete, Steel
Dataset [40] • Type: [I], instance detection
• Notes: Meta-Learning
2020 Deep Metallic Surface Metallic surface • Dataset Name: ‘GC10-DET’ https://github.co
Defect Detection: The New defects • Data: 3570 grey-scale images. Punching, weld line, m/lvxiaoming20
Benchmark and Detection crescent gap, water spot, oil spot, silk spot, inclusion, 19/GC10-DET-
Network [41] rolled pit, crease, waist folding Metallic-
• Base Material: Steel Surface-Defect-
• Type: [I], instance detection Datasets
• Notes: EDDN, using ‘GC10-DET’
2020 Imaging-based Crack Concrete cracks, • Data: Baseline images at (3264x2448), rescaled to Not listed in
Detection on Concrete Handwriting (448x448). Training (2408) images, Testing (602). paper
Surfaces using You Only • Base Material: Concrete
Look Once Network [42] • Type: [I], instance detection
• Notes: YOLOv2
2021 RDD2020: An Annotated Longitudinal • Dataset Name: ‘RDD2020’ http://dx.doi.org
Image Dataset for Cracks(D00), • Data: 26,336 baseline images at (600x600) and /10.17632/5ty2
Automatic Road Damage Transverse (720x720). 31,000+ instances of road damage. wb6gvg.1
Detection using Deep Cracks(D10), • Base Material: Asphalt
Learning [43] Alligator • Type: [I], instance detection
Cracks(D20) and
Potholes(D40)
2021 Autonomous Detection of Corrosion, Cracked • Data: 2270 baseline images at (4992x2496). Not listed in
Damage to Multiple Steel coating Processed 16,000 images at an unspecified resolution. paper
Surfaces from 360° 4,380 object instances of corrosion, 3,160 Cracked
Panoramas using Deep coating. Training (12,800), Testing (3,200).
Neural Networks [44] • Base Material: Steel
• Type: [I], instance detection
• Notes: Novel proposed Panoramic surface damage
detection network (PADENet).
2021 COCO-Bridge: Structural Bearings, Cover • Dataset Name: COCO-Bridge-2021 https://doi.org/1
Detail Data Set for Bridge plate termination, • Data: 774 base images. 502 Bearing, 211 Cover plate 0.7294/m8pg-
Inspections [45] Gusset plate termination, 652 Gusset plate connection, 1,218 Out 4a02
connections, Out of of plane stiffener object instances. Resized to
plane stiffeners (300x300). Training (719), Testing (55).
• Base Material: Steel
• Type: [I], instance detection
• Notes: Single Shot Detection (SSD)
2021 COCO-Bridge-2021+ Bearings, Cover • Dataset Name: COCO-Bridge-2021+ https://doi.org/1
Dataset [46] plate termination, • Data: 1,470 base images. 1,969 Bearings, 335 Cover 0.7294/1662449
Gusset plate plate connections, 1,083 Gusset plate connection, 5.v1
connections, Out of 3,896 Out of plane stiffeners object instances. Resized
plane stiffeners to (300x300). Training (1321), Testing (136).
• Base Material: Steel
• Type: [I], instance detection
• Notes: Annotation Guidelines Included. SSD and
YOLOv4.
SEMANTIC SEGMENTATION
2012 Automatic Crack Detection Concrete crack • Dataset Name: ‘CrackTree200’ Not listed in
from Pavement Images [47] • Data: 206 images at (800x600) paper
• Base Material: Concrete
• Type: [I], instance detection
• Notes: Binary semantic segmentation
2016 Automatic Road Crack Concrete crack • Dataset Name: ‘CFD’ https://github.co
Detection Using Random • Data: 118 images at (480x320) m/cuilimeng/Cra
Structured Forests [48] • Base Material: Concrete ckForest-dataset
• Type: [I], instance detection
• Notes: Binary semantic segmentation – random
structured forests
2016 Visual Change Detection On Concrete crack • Base Material: Concrete Not listed in
Tunnel Linings [5] • Type: [III], change detection paper
• Notes: Semantic segmentation (no deep learning
used)
2017 Deep Active Learning for Generic cracks, • Data: 603 base images (4096x4800) Not listed in
Civil Infrastructure Defect Deposits, Water • Sub-Images: 289,400 sub-images (520x520). 22.6% paper
Detection and Classification leakage positive labels (with defects)
[49] • Base Material: Concrete
• Type: [I], instance detection
• Notes: ResNet, Active Learning (AL) network, use of
Support Vector Machines (SVM)
2018 Research on Bridge Crack Concrete crack • Data: Undefined Not listed in
Detection with Neural • Base Material: Concrete paper
Network Based Image • Type: [I], instance detection
Processing Methods [50] • Notes: self-organizing maps and back propagation
neural networks for crack detection.
2018 Automated Bridge Non-Bridge, • Data: Video simulation data (240×320). Training Not listed in
Component Recognition Columns, Beams (37,08), Testing (2,000). paper
using Video Data [51] and Slabs, Others • Base Material: Concrete, Steel
(Non-structural) • Type: [I], instance detection
• Notes: Multi-task semantic segmentation – video
data, bridge components
2018 Vision-based Automated Building, Greenery, • Data: For Scene Classification – 3,403 General, Not listed in
Bridge Component Person, Pavement, 6,842 Urban, 1,652 Bridge. For Bridge Components – paper
Recognition Integrated with Sign and Poles, Training (1135 Bridge, 194 Non-bridges), Testing
High-level Scene Vehicles, Bridges, (234 images).
Understanding [52] Water, Sky, Others, • Base Material: Concrete, Steel
Non-Bridge, • Type: [I], instance detection
Columns, Beams • Notes: Scene classification and bridge component
and Slabs, Others segmentation.
(structural), Others
(Non-structural)
2018 Towards automated post- Building, • Data: 1,000 (scene-building: building, window/door, Scene-Building:
earthquake inspections with Window/Door, debris, sky, greenery) base images (288x288). https://datacente
deep learning-based Debris, Sky • Base Material: Other rhub.org/resourc
condition-aware models Greenery • Type: [I], instance detection es/14160
[53] • Notes: Parallel three-network segmentation model
(scene-building, damage-presence, damage-type).
2018 Towards automated post- Cracks, Spalling, • Data: 665 (damage presence and damage type: Not listed in
earthquake inspections with Exposed rebar cracks, spalling, exposed rebar) base images paper
deep learning-based (288x288).
condition-aware models • Base Material: Concrete
[53] • Type: [I], instance detection
• Notes: Parallel three-network segmentation model
(scene-building, damage-presence, damage-type).
2018 Vision-based Structural Concrete crack, • Data: 324 Spalling, 216 Fatigue crack, 341 Concrete Not listed in
Inspection Using Multiscale Steel crack, Asphalt crack, 435 Asphalt crack, 379 Corrosion at (600x600) paper
Deep Convolutional Neural crack, Spalling, • Base Material: Concrete, Steel, Asphalt
Networks [54] Steel corrosion • Type: [I], instance detection
2018 Deep Learning–Based Fully Concrete and • Data: 3,000 flattened 3D images (1024x512). Testing Not listed in
Automated Pavement Crack pavement crack (200), Validation (300), Training (2,500). paper
Detection on 3D Asphalt • Base Material: Asphalt, Concrete
Surfaces with an Improved • Type: [I], [II], instance and quantification detection
CrackNet [55] • Notes: ‘CrackNet II’, Classification, 3D surfaces
2018 Computer Vision-Based Moisture marks • Data: 165 images (2592 × 3888) Not listed in
Model for Moisture Marks • Base Material: Concrete paper
Detection and Recognition • Type: [I], instance detection
in Subway Networks [56] • Notes: Multi-layer perceptron (MLP)
2018 Evaluation of Bridge Decks Concrete crack, • Base Material: Concrete Not listed in
using Non-destructive Spalling, Bitpatch, • Type: [I], [II], instance and quantification detection paper
Evaluation (NDE) Valuation Concrete patching • Notes: Dataset information not listed. Bridge Deck
(NDE) at Near Highway Condition State algorithm (BDCS). Semantic
Speeds for Effective Asset segmentation, contours, condition state classification.
Management
Implementation for Routine
Inspection (Phase III) [57]
2019 Image-based Post-disaster Major/non-major • Data: 1,154 images for all three models. 492 images Not listed in
Inspection of Reinforced failure, Columns, for major failure, 201 images for no major failure. paper
Concrete Bridge Systems Damage 236 images with 344 column instances. 436 images of
using Deep Learning with pixel-level damage.
Bayesian Optimization [58] • Base Material: Concrete
• Type: [I], instance detection
• Notes: Image classification, object detection,
semantic segmentation
2019 Surface Fatigue Crack Steel crack • Data: 350 base images Not listed in
Identification in Steel Box • Sub-Images: 67,200 (64x64). Balanced equally paper
Girder of Bridges by a deep between handwriting, cracks, and background.
Fusion Convolutional • Base Material: Steel
Neural Network Based on • Type: [I], instance detection
Consumer-grade Camera • Notes: Exhaustive sliding window technique with
Images [59] semantic segmentation
2019 3D InspectionNet: A Deep Concrete crack, • Data: 3D synthetic dataset with CAD models. 1000 Not listed in
3D Convolutional Neural Spalling models, augmented to generate to 12,000 models. paper
Networks Based Approach • Base Material: Concrete
for 3D Defect Detection on • Type: [I], [II], instance and quantification detection
Concrete Columns [60] • Notes: 3D convolutional neural networks.
2019 Robust Pixel-level Crack Concrete crack • Data: 50 base images (6000x4000) Not listed in
Detection using Deep Fully • Sub-Images: 209,801 (256×256). 1.52% crack pixels, paper
Convolutional Neural 98.48% background
Networks [61] • Base Material: Concrete
• Type: [I], instance detection
• Notes: Sliding window technique with semantic
segmentation.
2019 Pixel-level Crack Detection Concrete crack • Data: 2,068 base images (1024x1024). From Research
in Images using segNet [62] • Sub-Images: 5,180 segmented sub-images (256x256) on Bridge Crack
• Base Material: Concrete detection with
• Type: [I], instance detection Neural Network
• Notes: SegNet for cracks Based Image
Processing
Methods
May be available upon request.
2019 DeepCrack: A Deep Concrete crack • Dataset Name: ‘DeepCrack’ https://github.co
Hierarchical Feature • Data: 537 base images m/yhlleo/DeepC
Learning Architecture for • Base Material: Concrete rack
Crack Segmentation [63] • Type: [I], instance detection
2019 Automatic Pixel-level Concrete crack, • Data: 1,375 baseline images at (4032x3016). Resized Not listed in
Multiple Damage Detection Efflorescence, Hole, to (507x376). Augmented with horizontal flip on y- paper
of Concrete Structure using Spalling axis. 636 images centered on Concrete cracks, 770
Fully Convolutional Spalling, 680 Efflorescence, 634 Holes. Training
Network [64] (80%), Testing (20%).
• Base Material: Concrete
• Type: [I], instance detection
• Notes: Fully connected network design. Images
centered on segmentation target.
2020 Defect Detection on Rolling Steel manufacturing • Data: 20 baseline images at (1395x3335) Not listed in
Element Surface Scans defects • Sub-Images: 4,000 real images at (128x128). 4,000 paper
using Neural Image images at (128x128) with synthetically applied
Segmentation [65] defects. Augmented to 32,000 images. Training
(80%), Testing (20%).
• Base Material: Steel
• Type: [I], instance detection
• Notes: For steel manufacturing inspection. Custom
tiny U-Net implementation.
2020 Crack Detection and Generic crack • Data: 1,250 images (344x296 and 1024x796) Some or all
Segmentation Using Deep • Base Material: Concrete data, models, or
Learning with 3D Reality • Type: [I], [II] instance and quantification detection code generated
Mesh Model for • Notes: Mask RCNN, with 3D reality mesh model for or used during
Quantitative Assessment and quantitative assessment the study are
Integrated Visualization proprietary or
[66] confidential in
nature and may
only be
provided with
restrictions.
2020 MaDnet: Multi-task Concrete, Steel, • Data: 1,695 images at (600x600). 435 Asphalt, 595 https://sites.goo
Semantic Segmentation of Asphalt, Spalling, Steel, 665 Concrete. gle.com/view/ill
Multiple Types of Structural Exposed rebar, • Base Material: Concrete, Steel, Asphalt inois-
Materials and Damage in Concrete crack, • Type: [I], instance detection madnet/home
Images of Civil Steel corrosion, • Notes: proposed MaDnet model. Multi-task network ‘coming soon’
Infrastructure [67] Steel cracks, Asphalt architecture. Identify material then identify defects.
cracks
2020 Feature Pyramid and Asphalt cracks • Dataset Name: ‘Crack500’ https://github.co
Hierarchical Boosting • Data: Training (250), Validation (50), Test (200) at m/fyangneil/pav
Network for Pavement (2000x1500) ement-crack-
Crack Detection [68] • Sub-Images: Cropped into Training (1,896), detection
Validation (248), and Test (1,124)
• Base Material: Asphalt
• Type: [I], instance detection
• Notes: Feature Pyramid and Hierarchical Boosting
2020 Deep Learning Models for Impact Echo • Data: 2,016 impact echo signals from eight identical https://data.men
Bridge Deck Evaluation laboratory-made concrete specimens. deley.com/datas
using Impact Echo [69] • Base Material: Concrete ets/44rb96872r/
• Type: [I], [II], instance and quantification detection 1
• Notes: 1D CNN with bidirectional LSTM. This
dataset is annotated in two classes: sound concrete
(Class S) and defected concrete (Class D).
2020 Automatic Crack Concrete crack, • Data: 7200 base images (4288x2848) Not listed in
Recognition for Concrete Handwriting, Peel • Base Material: Concrete paper
Bridges using A Fully off, Water stain, • Type: [I], [II] instance and quantification detection
Convolutional Neural Repair trace • Notes: exhaustive sliding window using naïve Bayes
Network and I Bayes Data data fusion model for crack detection
Fusion Based on A Visual
Detection System [70]
2020 Image-based Concrete crack Concrete cracks • Data: 409 base images (4032x3016). Links to
Detection in Tunnels using • Sub-Images: 919 at (512x512). download data
Deep Fully Convolutional • Base Material: Concrete and trained
Networks [71] • Type: [I], instance detection model in article
• Notes: CrackSegNet proposed novel network.
2020 Concrete Defects Inspection Concrete cracks, • Data: Used the ‘CSSC’ [12] dataset and their custom Not listed in
and 3D Mapping using Spalling RGB-D generated data. 10,000 RGB images with paper
CityFlyer Quadrotor Robot estimated depth and normal images generated.
[72] • Base Material: Concrete
• Type: [I], instance detection
• Notes: Depth in-painting network used, InpaintNet.
2020 Automated Defect Concrete • Data: 600 RGB and 496 infrared images. Upon request of
Quantification in Concrete delamination, • Base Material: Concrete the
Bridges Using Robotics and Spalling • Type: [I], [II], instance and quantification detection corresponding
Deep Learning [73] • Notes: Semantically labeled and quantifiable with 3D author.
point cloud data. SLAM. Trained two models
(delamination and spalling).
2021 Bridge Inspection with Spalling, Exposed • Data: Training (653 images - 14,302 defect Upon request of
Aerial Robots: Automating rebar, Corrosion instances), Testing (89 images - 4,804 defect the
the Entire Pipeline of Visual stains, instances). corresponding
Data Capture, 3D Mapping, Efflorescence, • Base Material: Concrete author.
Defect Detection, Analysis, Concrete cracks • Type: [I], instance detection
and Reporting [74] • Notes: Faster-RCNN finetuned with MS-COCO.
2021 A Novel Intelligent Concrete crack • Data: 400 RGB and depth image pairs. Not listed in the
Inspection Robot with Deep • Base Material: Concrete paper
Stereo Vision for Three- • Type: [I], [II], instance and quantification detection
Dimensional Concrete • Notes: The authors set up two types of databases for
Damage Detection and storing images for object detection, damage
Quantification [75] segmentation, damage quantification, and 3D
reconstruction. Fusion network with Mask-RCNN.
2021 A Deep Learning-Based Steel crack • Dataset Name: Fine Crack Segmentation (FCS) Upon request
Fine Crack Segmentation Dataset from
Network on Full-Scale Steel • Data: Training (928 crack images), Testing (274 ipcshm@yahoo.
Bridge Images with crack images) at 512x512. com
Complicated Backgrounds • Base Material: Steel
[76] • Type: [I], instance detection
• Notes: Exhaustive sliding window. Proposed new
crack detection model framework FCS-Net –
ResNet50+ASPP+BN.
2021 Attention-guided Analysis of Concrete crack, • Data: Cleaned data from existing sources (SegNet, Sources listed,
Infrastructure Damage with spalling CSSC, RDD). 34,102 labels (27,186 with bounding but processed
Semi-supervised Deep box annotation, 6,919 with segmentation annotation). data not listed in
Learning [77] Training (70%), Validation (15%), Testing (15%). paper
• Base Material: Concrete
• Type: [I], instance detection
• Notes: Attention guided analysis, semantic
segmentation with bounding box detection. Cleaned
dataset used by authors was not listed in paper.
2021 Attribute-based Structural Concrete crack, • Data: 1000 images, 100 images in each class Not listed in
Damage Identification by Steel corrosion, category. paper
Few-shot Meta Learning Spalling, Steel • Base Material: Concrete, Steel
with Inter-class Knowledge fatigue crack, • Type: [I], instance detection
Transfer [78] Concrete • Notes: Meta-learning for few-shot image
honeycomb, classification.
Concrete pockmark,
Salt petering, Rebar
exposure, Coating
failure, Water
leakage
2021 Crack Detection using Concrete crack • Data: [1] 2,600 images (1,300 crack, 1,300 no crack) Sources listed,
Fusion Features‐based at (128x128). [2] 40,000 images (20,000 crack, but processed
Broad Learning System and 20,000 non-crack) at (227x227). data not listed in
Image Processing [79] • Base Material: Concrete paper
• Type: [I], instance detection
• Notes: (FF-BLS) network. Pre-processed data not
listed in paper.
2021 Damage Detection using In- Concrete defects • Data: Sourced from six publicly accessible datasets Listed in paper
domain and Cross-domain (CDS [10], SDNET 2018 [13], BCD [25], ICCD [18], (was in review
Transfer Learning [4] MCDS [24], CODEBRIM [40]). at time of
• Base Material: Concrete writing this
• Type: [I], instance detection article)
• Notes: In-domain and cross-domain transfer learning
2021 Crack Segmentation Concrete cracks • Data: 25,026 baseline images of raw intensity, raw Not listed in
Through Deep range, filtered range, and fused raw images at paper
Convolutional Neural (256x256). Training (15,016), Validation (5,005),
Networks and Testing (5,005).
Heterogeneous Image • Base Material: Concrete, Asphalt
Fusion [80] • Type: [I], instance detection
• Notes: Data fusion with Deep CNNs.
2021 Synthetic Environments for Non-bridge, • Dataset Name: ‘Tokaido’ Not listed in
Vision-based Structural Columns, Beams, • Data: 18,936 baseline images at (1920x1080). 8,648 paper
Condition Assessment of Slabs, Rails, regular, 7,288 close-up, 3,000 pure texture.
Japanese High-speed Sleepers, Other non- • Base Material: Concrete, Steel
Railway Viaducts [81] structural • Type: [I], instance detection
components. Non- • Notes: FCN for Synthetically generated dataset
damage, Concrete
damage, Exposed
rebar.
2021 Detecting Cracks and Concrete cracks, • Dataset Name: ‘CrSpEE’ https://github.co
Spalling Automatically in spalling • Data: 2,229 baseline images range from (147x228) to m/OSUPCVLab
Extreme Events By End-To- (4600x3700). Resized to (768x768). /CrSpEE
End Deep Learning • Base Material: Concrete
Frameworks [82] • Type: [I], instance detection
• Notes: Mask R-CNN high definition (HD)
2021 Structural Crack Detection Concrete cracks, • Dataset Name: ‘BCL’ https://doi.org/1
from Benchmark Data Sets steel cracks, • Data: 11,000 baseline images at (256x256). 5,769 0.7910/DVN/R
Using Pruned Fully masonry cracks non-steel cracks, 2,036 steel crack, 3,195 noise URXSH
Convolutional Networks images.
[83] • Base Material: Concrete, Steel
• Type: [I], instance detection
• Notes: BCL proposed as a benchmark dataset for
crack detection
2021 Uncertainty-assisted Deep Concrete • Data: ‘CFD’ dataset, 436 localized structural damage Not listed in
Vision Structural Health images from [58] resized to (215x200), 236 bridge paper
Monitoring [84] components images from [58], semantically edited by
the authors.
• Base Material: Concrete
• Type: [I], instance detection
• Notes: Applied Bayesian inference to deep learning
architecture for quantifiable uncertainty with Monte
Carlo drop-out. Surrogate model for uncertainty.
2021 Structural Material Concrete, Steel, • Dataset Name: Structural Material Semantic https://doi.org/1
Semantic Segmentation Metal Decking Segmentation Dataset 0.7294/1662464
Dataset [85] • Data: 3817 base images. Pixel-wise share in training 8.v1
dataset [background: 17.6%, steel: 50%, concrete:
29.8%, metal decking: 2.0%]. Resized to (512x512).
Training (3436), Test (381).
• Base Material: Concrete, Steel
• Type: [I], instance detection
• Notes: Annotation guidelines included. DeeplabV3+
2021 Corrosion Condition State Steel corrosion • Dataset Name: Corrosion Condition State https://doi.org/1
Classification Dataset [86] condition state Classification Dataset 0.7294/1662466
• Data: 440 base images. Pixel-wise share in training 3.v1
dataset [background: 75.4%, fair: 12.6%, poor: 7.9%,
severe: 4.0%]. Resized to (512x512). Training (396),
Test (44).
• Base Material: Steel
• Type: [I], instance detection
• Notes: Annotation guidelines included.
DeeplabV3+
2021 Labeled Cracks in the Wild Concrete crack • Dataset Name: Labeled Cracks in the Wild (LCW) https://doi.org/1
Dataset [87] • Data: 3817 base images. Pixel-wise share in training 0.7294/1662467
dataset [background: 99.7%, cracks: 0.3%]. Resized 2.v2
to (512x512). Training (3436), Test (381).
• Base Material: Concrete
• Type: [I], instance detection
• Notes: Annotation Guidelines Included. DeeplabV3+
GENERATIVE
2020 Anomaly Detection Neural Glass, Texture, • Data: The ‘MVTec’ dataset [89] was used. Not all Not listed in
Network with Dual Auto- Object, Glass, Wood used data was cited. 383 images of glass at paper
Encoders GAN and Its (128x128). 3,815 images of wood at (256x256).
Industrial Inspection MVTec dataset of 5,354 images of varying
Applications [88] resolutions for fifteen common industrial inspection
categories.
• Base Material: Other
• Type: [I], instance detection
• Notes: Anomaly detection with GANs
2020 Generative Damage Concrete • Data: 43 baseline images at (6000x3000). Not listed in
Learning for Concrete • Sub-Images: 10,879 sub-images at (256x256). 4,549 paper
Aging Detection using Auto- of damages, and 6,325 non-damage
flight Images [90] • Base Material: Concrete
• Type: [I], [IV], instance detection, forecasting
• Notes: CycleGAN for anomaly detection training.
2021 Balanced Semisupervised Concrete crack, • Data: ‘Peer Hub ImageNet’/‘Structural ImageNet’ Properly listed
Generative Adversarial Concrete non- [16] and SDNET2018 [13]. in paper
Network for Damage damaged, Spalling • Base Material: Concrete
Assessment from Low-data • Type: [I], instance detection
Imbalanced-class Regime • Notes: Balanced batch sampling GAN (BBS-GAN).
[91] Balance dataset with GANs. Image classification.
2021 Forecasting Infrastructure Concrete, Steel, • Dataset Name: ‘Material Segmentation Dataset’ https://doi.org/1
Deterioration with Inverse Metal Decking • Data: 3817 base images, resized to (512x512). 0.7294/1662464
GANs [92] • Base Material: Concrete, Steel 8.v1
• Type: [IV], forecasting
• Notes: StyleGAN2 with InterfaceGAN for identifying
controls. Steel corrosion forecasting and image
manipulation and generation
[Figure 2 – Discovered papers per year (2017–2021); counts and percentages]
The base materials used in the inspection tasks are summarized in Figure 3. We share the trends of this characteristic to highlight the imbalance between concrete and steel base materials. Researchers should consider focusing their efforts on steel base-material damage, as there is less content in this area.
[Figure 3 – Base materials of discovered papers (2017–2021)]
3.1 Computer Vision Tasks
Some of the most impressive results for segmentation, classification, or bounding box detection have not been made available to the public (Hoskere, Narazaki, Hoang, & Spencer, 2018b, 2018a, 2020; Yasutaka Narazaki, Hoskere, Hoang, & Spencer, 2018). However, there were notable publicly accessible sources, including datasets for image classification [9], [12], [13], [18], [24], [26], [40], object detection [32], [34], [45], segmentation [16], [48], [63], [66], [68], [94], and generative networks [91], [92]. We have highlighted several papers with available and accessible data in each of these computer vision tasks.
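As an example of how such publicly accessible image-classification data is commonly consumed, the sketch below fine-tunes a pretrained classifier on a folder of crack/non-crack images with PyTorch. The directory layout, class names, and hyperparameters are assumptions for illustration, not a reproduction of any cataloged paper's training setup.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: data/train/<class_name>/*.jpg (e.g., crack/, no_crack/)
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: reuse ImageNet features, replace the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```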
is unique because it identifies regions on a bridge which are prone to fatigue failure or must be inspected, which can improve other inspection tasks like damage detection or add contextual information for unmanned aerial systems.
this catalog concept could be extended to an open-source location [7] for supervised collaboration. We have built a platform that allows authors to submit their downloadable dataset (following our current Table 2 organization) and to request changes or edits to existing information for accuracy and updates. We want to build a community for sharing, collaborating on, and growing the structural health monitoring data presence. We believe that a current and accurate data lake, such as this one, will speed our field's research progression in image-based machine learning.
instances and scenarios. This helps the annotators label consistently throughout the labeling process. In this way, the guideline becomes a dataset's roadmap for its extension and expansion. There were five datasets [31], [46], [85]–[87] that included annotation guidelines. Additionally, annotation guidelines are useful when establishing the expectations of a trained model's performance. The model operator can look at the guidelines and verify whether the trained model did what the dataset creators intended it to do. The guideline then becomes a model's expected-outcome checklist. We recommend that future datasets for machine learning applications, especially supervised learning, provide annotation guidelines and be made publicly available for further collaboration and growth of machine learning in structural engineering applications.
Making data findable and accessible is not enough; it must also be interoperable. Interoperable means that the data fits into workflows, continues to function in the future, and has a well-defined design. One way to ensure that data is interoperable is to include a README file which outlines the contents of the data, correspondence, funding information, keywords, license, etc. in plain text. An example of a README file and what to include can be found here (https://doi.org/10.7294/16624648). Another way to improve the data's interoperability is to include annotation guidelines that define how the data was annotated, conveying the expectations of the model as well as a means of extending the dataset. The third recommendation is to ensure that the submitted data includes the original data (base images) as well as any of the resized data used for training. This allows researchers to rescale the data to fit their network of choice. While images may have been resized to fit the capacity of current GPUs, future GPUs will have the capacity to train and run inference on larger images.
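As a small illustration of the third recommendation, the sketch below leaves the original base images untouched and writes resized copies alongside them. The paths and target resolution are placeholder assumptions, and Pillow is assumed to be available.

```python
from pathlib import Path
from PIL import Image

BASE_DIR = Path("dataset/base_images")      # original, full-resolution images
RESIZED_DIR = Path("dataset/resized_512")   # derived copies used for training
TARGET_SIZE = (512, 512)

RESIZED_DIR.mkdir(parents=True, exist_ok=True)

for image_path in sorted(BASE_DIR.glob("*.jpg")):
    with Image.open(image_path) as img:
        resized = img.resize(TARGET_SIZE, Image.BILINEAR)
        # The base image is never overwritten, so future users can
        # re-derive training data at whatever resolution their GPU allows.
        resized.save(RESIZED_DIR / image_path.name)
```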
If the three previous principles are followed, then the data is in a good position to be reusable. However, some additional steps are recommended to guarantee that the research data is reusable. It is imperative to clearly define how to cite the data, the extent to which the data can be used (license), and the data source. During our review of structural inspection datasets, we occasionally came across papers that combined data from multiple sources but did not properly cite them. By following these four principles, we hope to raise the bar for included data. While we have discussed the importance of providing access to data, it is also extremely valuable to include access to the trained models and source code for training and inference. Providing the data, code, and trained models makes the research reusable and extends the reach and benefits of the finalized research contribution.
4.2 Multi-Task Learning
Multi-task ensemble learning was introduced into the structural domain by Hoskere as MaDnet, a framework for damage detection [67]. MaDnet is a multi-objective optimization method which trains multiple semantic segmentation networks simultaneously so that they may learn as an ensemble. Instead of training separate networks for separate tasks, the networks are trained together to leverage the interdependence of damage on material, material on damage, and coarse damage on finer damage. The networks are therefore arranged from coarse to fine semantic tasks. One network path focuses on identifying material, the next identifies damage, and the third identifies fine details of damage. This framework captures the contextual information of material and damage, allowing each to guide the other and de-noise false-positive predictions.
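The sketch below shows the general shape of such a multi-task segmentation model: a shared encoder with separate decoding heads for material, damage, and fine damage, trained with a summed loss. It is a simplified illustration of the idea rather than the published MaDnet architecture, and all layer sizes and class counts are assumptions.

```python
import torch
import torch.nn as nn


class MultiTaskSegmenter(nn.Module):
    """Shared encoder with one segmentation head per task."""

    def __init__(self, n_material=4, n_damage=3, n_fine=5):
        super().__init__()
        self.encoder = nn.Sequential(                     # shared features
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )

        def head(n_classes):                              # per-task decoder
            return nn.Sequential(
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, n_classes, 1),
            )

        self.material_head = head(n_material)
        self.damage_head = head(n_damage)
        self.fine_head = head(n_fine)

    def forward(self, x):
        feats = self.encoder(x)
        return (self.material_head(feats),
                self.damage_head(feats),
                self.fine_head(feats))


model = MultiTaskSegmenter()
criterion = nn.CrossEntropyLoss()
x = torch.randn(2, 3, 128, 128)              # dummy image batch
mat_gt = torch.randint(0, 4, (2, 128, 128))  # per-pixel labels for each task
dmg_gt = torch.randint(0, 3, (2, 128, 128))
fine_gt = torch.randint(0, 5, (2, 128, 128))

mat_out, dmg_out, fine_out = model(x)
# Tasks are trained jointly so that material context regularizes damage
# predictions (and vice versa), as described for MaDnet above.
loss = (criterion(mat_out, mat_gt) + criterion(dmg_out, dmg_gt)
        + criterion(fine_out, fine_gt))
loss.backward()
```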
sensors such as the ZED2 camera, and transmitters which could send data to a graphical user interface application [75].
estimate the certainty of predictions made by the model. The outputs in the final layers of their architectures produced uncertainty inference maps that illustrated the uncertainties in the semantic segmentation predictions. Incorporating uncertainty feedback into deep learning architectures for structural health monitoring, as also emphasized by [84], is a ripe area to build upon given the public's reliance on these systems being safe.
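One common way to obtain such uncertainty maps, used in [84] in the form of Monte Carlo drop-out, is to keep dropout active at inference time and aggregate repeated stochastic predictions. The sketch below shows that pattern for an arbitrary segmentation model; it is an illustration of the technique, not the authors' implementation.

```python
import torch


def mc_dropout_uncertainty(model, image, n_samples=20):
    """Run repeated stochastic forward passes with dropout enabled and
    return the mean class probabilities plus a per-pixel uncertainty map
    (predictive entropy)."""
    # train() keeps dropout active; in practice you may want to switch
    # only the dropout modules to train mode so BatchNorm stays frozen.
    model.train()
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            logits = model(image)                  # (1, C, H, W)
            probs.append(torch.softmax(logits, dim=1))
    mean_probs = torch.stack(probs).mean(dim=0)    # (1, C, H, W)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum(dim=1)
    return mean_probs, entropy                     # entropy: (1, H, W) map
```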
The first paper we discuss is the balanced batch sampling BBS-GAN [91]. In one of their GAN use-cases, the authors used their GAN to generate synthetic images to bolster unbalanced image classes for image classification. Both [90] and [88] used their GANs to perform anomaly detection. [90] proposed a CycleGAN network to detect concrete anomalies and defects. A CycleGAN [106] effectively trains a generative model in a lightly supervised manner; it is typically used to transform unpaired image content from one modality or state into another. For example, the authors built an unpaired dataset of non-damaged concrete images and images of concrete with damage. Networks of this type take any image and transform it from its current state into either modality. The authors were able to detect anomalies and the level of damage in an image by evaluating the overall change after transforming the image to a more restored (non-damaged) state. In the same way, they could also apply damage to images and synthetically generate concrete damage.
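A minimal version of this anomaly-scoring step, translating the image toward its restored counterpart and measuring how much changed, might look like the sketch below. Here restore_generator stands in for a trained damaged-to-restored generator and is a hypothetical callable, not code from [90].

```python
import torch


def anomaly_score(image, restore_generator):
    """Score how damaged an image is by translating it toward the
    non-damaged modality and measuring the per-pixel change.

    image: tensor of shape (1, 3, H, W), values in [0, 1]
    restore_generator: trained generator mapping damaged -> restored
    """
    with torch.no_grad():
        restored = restore_generator(image)
    diff = (image - restored).abs()           # large where damage was removed
    score = diff.mean().item()                # scalar anomaly level
    heatmap = diff.mean(dim=1, keepdim=True)  # (1, 1, H, W) localization map
    return score, heatmap
```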
The other GAN network we highlight is Forecasting Infrastructure Deterioration with Inverse GANs [92], a GAN-inversion network. A GAN-inversion network differs from a typical GAN because it takes real images as inputs and inverts them into the learned latent space as n-dimensional vectors. Because a real image is then embedded in the latent space of the generator, it can be manipulated toward discovered semantic boundaries. For example, in [92] the authors could incrementally output years of projected artificial deterioration onto real or synthetic images. The forecasting of deterioration could be useful for bridge inspectors, aiding their decision of action or inaction for a particular structural element.
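The core latent edit behind such forecasting can be written in a few lines: a real image is inverted to a latent code, shifted along the normal of a learned semantic boundary (here, deterioration severity), and re-synthesized. The sketch below illustrates that step; invert, generator, and deterioration_direction are placeholders for the trained components, not APIs from [92].

```python
import torch


def forecast_deterioration(image, invert, generator,
                           deterioration_direction, alpha=2.0):
    """Project an image into a GAN's latent space, move it along the
    learned deterioration boundary normal, and synthesize the result.

    invert: function mapping an image to a latent code w of shape (1, D)
    generator: function mapping a latent code back to an image
    deterioration_direction: unit vector (1, D) normal to the semantic
        boundary separating less/more deteriorated latent codes
    alpha: step size; larger values project further into the future
    """
    with torch.no_grad():
        w = invert(image)                              # embed the real image
        w_future = w + alpha * deterioration_direction
        return generator(w_future)                     # forecasted appearance
```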
5 Conclusion
In this paper, we have cataloged the most extensive list of datasets in this field of research for image-based structural inspection. These datasets provide the basis for the publicly accessible Google Sheets data link table for continued collaboration in our domain. We have identified steel base material as a potentially underdeveloped portion of the data when compared to the concrete base-material datasets in the literature. Emerging methods such as image fusion, multi-task learning, change detection, uncertainty feedback in models, condition-based assessment, and forecasting damage evolution were identified as potential future research avenues. Finally, given our experience researching and developing datasets, we suggest that researchers include annotation guidelines for datasets used for supervised learning and follow the FAIR principles for data and models. We believe our success and innovation, especially in machine learning, will be accelerated by collaboration and open-source data, models, code, and information.
429 6 References
430 [1] C.-Z. Dong and F. N. Catbas, “A review of computer vision–based structural health monitoring at
431 local and global levels,” Structural Health Monitoring, vol. 20, no. 2, pp. 692–743, Mar. 2021,
432 doi: 10.1177/1475921720935585.
433 [2] S. Sony, K. Dunphy, A. Sadhu, and M. Capretz, “A systematic review of convolutional neural
434 network-based structural condition assessment techniques,” Engineering Structures, vol. 226, p.
435 111347, Jan. 2021, doi: 10.1016/j.engstruct.2020.111347.
436 [3] M. Flah, I. Nunez, W. B. Chaabene, and M. L. Nehdi, “Machine Learning Algorithms in Civil
437 Structural Health Monitoring: A Systematic Review,” Archives of Computational Methods in
438 Engineering, vol. 28, no. 4, pp. 2621–2643, 2021, doi: 10.1007/s11831-020-09471-9.
439 [4] Z. A. Bukhsh, N. Jansen, and A. Saeed, “Damage detection using in-domain and cross-domain
440 transfer learning,” Neural Computing and Applications, 2021, doi: 10.1007/s00521-021-06279-x.
441 [5] S. Stent, R. Gherardi, B. Stenger, K. Soga, and R. Cipolla, “Visual change detection on tunnel
442 linings,” Machine Vision and Applications, vol. 27, no. 3, pp. 319–330, 2016, doi:
443 10.1007/s00138-014-0648-8.
444 [6] B. F. Spencer, V. Hoskere, and Y. Narazaki, “Advances in Computer Vision-Based Civil
445 Infrastructure Inspection and Monitoring,” Engineering, vol. 5, no. 2, pp. 199–222, 2019, doi:
446 10.1016/j.eng.2018.11.030.
447 [7] E. Bianchi and M. Hebdon, “Table of Dataset Links for Visual Structural Inspection Image Data,”
448 2021. https://github.com/beric7/structural_inspection_main/ (accessed Dec. 08, 2021).
449 [8] L. Petricca, T. Moss, G. Figueroa, and S. Broen, Corrosion Detection Using A.I.: A Comparison of
450 Standard Computer Vision Techniques and Deep Learning Model, 2016, pp. 91–99. doi:
451 10.5121/csit.2016.60608.
456 [10] P. Huethwohl, “Cambridge Bridge Inspection Dataset [Dataset].” 2017. doi:
457 10.17863/CAM.13813.
458 [11] Y.-J. Cha, W. Choi, and O. Büyüköztürk, “Deep Learning-Based Crack Damage Detection Using
459 Convolutional Neural Networks,” Computer-Aided Civil and Infrastructure Engineering, vol. 32,
460 no. 5, pp. 361–378, 2017, doi: 10.1111/mice.12263.
461 [12] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image
462 Recognition," International Conference on Learning Representations (ICLR), 2015,
463 Accessed: May 26, 2020. [Online]. Available: http://arxiv.org/abs/1409.1556
464 [13] S. Dorafshan, R. J. Thomas, and M. Maguire, “SDNET2018: An annotated image dataset for non-
465 contact concrete crack detection using deep convolutional neural networks,” Data in Brief, vol. 21,
466 pp. 1664–1668, Dec. 2018, doi: 10.1016/j.dib.2018.11.015.
467 [14] B. Kim and S. Cho, “Automated vision-based detection of cracks on concrete surfaces using a
468 deep learning technique,” Sensors (Switzerland), vol. 18, no. 10, 2018, doi: 10.3390/s18103452.
469 [15] W. R. L. da Silva and D. S. De Lucena, “Concrete Cracks Detection Based on Deep Learning
470 Image Classification,” Proceedings, vol. 2, no. 8, p. 489, 2018, doi: 10.3390/icem18-05387.
471 [16] Y. Gao and K. M. Mosalam, “Deep Transfer Learning for Image-Based Structural Damage
472 Recognition,” Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 9, pp. 748–768,
473 2018, doi: 10.1111/mice.12363.
474 [17] D. J. Atha and M. R. Jahanshahi, “Evaluation of deep learning approaches based on convolutional
475 neural networks for corrosion detection,” Structural Health Monitoring, vol. 17, no. 5, pp. 1110–
476 1128, 2018, doi: 10.1177/1475921717737051.
477 [18] S. Li and X. Zhao, “Image-Based Concrete Crack Detection Using Convolutional Neural Network
478 and Exhaustive Search Technique,” Advances in Civil Engineering, pp. 1–12, Apr. 2019, doi:
479 10.1155/2019/6520620.
480 [19] J. Fu, X. Zhu, and Y. Li, “Recognition Of Surface Defects On Steel Sheet Using Transfer
481 Learning,” CoRR, vol. abs/1909.0, 2019, [Online]. Available: http://arxiv.org/abs/1909.03258
482 [20] S. Park, S. Bang, H. Kim, and H. Kim, “Patch-Based Crack Detection in Black Box Images Using
483 Convolutional Neural Networks,” Journal of Computing in Civil Engineering, vol. 33, no. 3, p.
484 04019017, May 2019, doi: 10.1061/(ASCE)CP.1943-5487.0000831.
485 [21] C. M. Yeum, J. Choi, and S. J. Dyke, “Automated region-of-interest localization and classification
486 for vision-based visual assessment of civil infrastructure,” Structural Health Monitoring, vol. 18,
487 no. 3, pp. 675–689, 2019, doi: 10.1177/1475921718765419.
488 [22] J. Wu and J. Zhang, “New Automated BIM Object Classification Method to Support BIM
489 Interoperability,” Journal of Computing in Civil Engineering, vol. 33, no. 5, p. 04019033, 2019,
490 doi: 10.1061/(asce)cp.1943-5487.0000858.
494 [24] P. Hüthwohl, R. Lu, and I. Brilakis, “Multi-classifier for reinforced concrete bridge defects,”
495 Automation in Construction, vol. 105, p. 102824, 2019, doi:
496 10.1016/j.autcon.2019.04.019.
497 [25] H. Xu, X. Su, H. Xu, and H. Li, Autonomous Bridge Crack Detection Using Deep Convolutional
498 Neural Networks, in Proceedings of the 3rd International Conference on Computer Engineering,
499 Information Science & Application Technology (ICCIA 2019), 2019. doi: 10.2991/iccia-
500 19.2019.42.
501 [26] N. D. Hoang, Q. L. Nguyen, X. L. Tran, and M. Andrea, “Automatic Detection of Concrete
502 Spalling Using Piecewise Linear Stochastic Gradient Descent Logistic Regression and Image
503 Texture Analysis,” Complexity, vol. 2019, 2019, doi: 10.1155/2019/5910625.
504 [27] H. Perez, J. H. M. Tah, and A. Mosavi, “Deep learning for detecting building defects using
505 convolutional neural networks,” Sensors (Switzerland), vol. 19, no. 16, 2019, doi:
506 10.3390/s19163556.
507 [28] P. Kruachottikul, N. Cooharojananone, G. Phanomchoeng, T. Chavarnakul, K. Kovitanggoon, D.
508 Trakulwaranont, and K. Atchariyachanvanich, Bridge Sub Structure Defect Inspection Assistance
509 by using Deep Learning, in Proceedings of 2019 IEEE 10th International Conference on
510 Awareness Science and Technology, iCAST 2019 - Proceedings, Oct. 2019. doi:
511 10.1109/ICAwST.2019.8923507.
512 [29] J. Zhu and J. Song, “An intelligent classification model for surface defects on cement concrete
513 bridges," Applied Sciences (Switzerland), vol. 10, no. 3, p. 972, 2020, doi:
514 10.3390/app10030972.
515 [30] Y. Huang, C. Qiu, X. Wang, S. Wang, and K. Yuan, “A compact convolutional neural network for
516 surface defect inspection,” Sensors (Switzerland), vol. 20, no. 7, pp. 1–19, Apr. 2020, doi:
517 10.3390/s20071974.
518 [31] E. Bianchi and M. Hebdon, “Bearing Condition State Classification Dataset.” University Libraries,
519 Virginia Tech, 2021. doi: 10.7294/16624642.v1.
520 [32] C. M. Yeum, “Computer Vision-Based Structural Assessment Exploiting Large Volumes of
521 Images,” 2016. Accessed: Jun. 10, 2018. [Online]. Available:
522 https://engineering.purdue.edu/IISL/Publications/DSc_Dissertations/Chul_Min_Yeum.pdf
523 [33] C. M. Yeum, S. J. Dyke, and J. Ramirez, “Visual data classification in post-event building
524 reconnaissance," Engineering Structures, vol. 155, pp. 16–24, 2018, doi:
525 10.1016/j.engstruct.2017.10.057.
526 [34] V. Mandal, L. Uong, and Y. Adu-Gyamfi, “Automated Road Crack Detection Using Deep
527 Convolutional Neural Networks,” IEEE International Conference on Big Data (Big Data), pp.
528 5212–5215, 2018, doi: 10.1109/BigData.2018.8622327.
529 [35] R. Li, Y. Yuan, W. Zhang, and Y. Yuan, “Unified Vision-Based Methodology for Simultaneous
530 Concrete Defect Detection and Geolocalization,” Computer-Aided Civil and Infrastructure
531 Engineering, vol. 33, no. 7, pp. 527–544, Feb. 2018, doi: 10.1111/mice.12351.
532 [36] C. Zhang, C. C. Chang, and M. Jamshidi, “Bridge Damage Detection using a Single-Stage
533 Detector and Field Inspection Images,” Dec. 2018, Accessed: Apr. 16, 2019. [Online]. Available:
534 https://arxiv.org/ftp/arxiv/papers/1812/1812.10590.pdf
535 [37] C. Zhang, C. Chang, and M. Jamshidi, “Concrete bridge surface damage detection using a single-
536 stage detector,” Computer-Aided Civil and Infrastructure Engineering, vol. 35, no. 4, pp. 389–
537 409, 2020, doi: https://doi.org/10.1111/mice.12500.
538 [38] Y. J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, and O. Büyüköztürk, “Autonomous Structural
539 Visual Inspection Using Region-Based Deep Learning for Detecting Multiple Damage Types,”
540 Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 9, pp. 731–747, 2018, doi:
541 10.1111/mice.12334.
542 [39] H. Kim, E. Ahn, M. Shin, and S.-H. Sim, “Crack and Noncrack Classification from Concrete
543 Surface Images Using Machine Learning,” Structural Health Monitoring, vol. 18, no. 3, pp. 725–
544 738, 2019, doi: 10.1177/1475921718768747.
545 [40] M. Mundt, S. Majumder, S. Murali, P. Panetsos, and V. Ramesh, Meta-learning convolutional
546 neural architectures for multi-target concrete defect classification with the concrete defect bridge
547 image dataset, in Proceedings of the IEEE Computer Society Conference on Computer Vision and
548 Pattern Recognition, 2019, vol. 2019-June, pp. 11188–11197. doi: 10.1109/CVPR.2019.01145.
549 [41] X. Lv, F. Duan, J. J. Jiang, X. Fu, and L. Gan, “Deep metallic surface defect detection: The new
550 benchmark and detection network,” Sensors (Switzerland), vol. 20, no. 6, Mar. 2020, doi:
551 10.3390/s20061562.
552 [42] J. Deng, Y. Lu, and V. C. S. Lee, “Imaging-based crack detection on concrete surfaces using You
553 Only Look Once network,” Structural Health Monitoring, p. 147592172093848, Jul. 2020, doi:
554 10.1177/1475921720938486.
555 [43] D. Arya, H. Maeda, S. K. Ghosh, D. Toshniwal, and Y. Sekimoto, “RDD2020: An annotated
556 image dataset for automatic road damage detection using deep learning,” Data in Brief, vol. 36, p.
557 107133, 2021, doi: 10.1016/j.dib.2021.107133.
558 [44] C. Luo, L. Yu, J. Yan, Z. Li, P. Ren, X. Bai, E. Yang, and Y. Liu, “Autonomous detection of
559 damage to multiple steel surfaces from 360° panoramas using deep neural networks,” Computer-
560 Aided Civil and Infrastructure Engineering, pp. 1–15, 2021, doi: 10.1111/mice.12686.
561 [45] E. Bianchi, A. L. Abbott, P. Tokekar, and M. Hebdon, “COCO-Bridge: Structural Detail Data Set
562 for Bridge Inspections,” Journal of Computing in Civil Engineering, vol. 35, no. 3, p. 04021003,
563 2021, doi: 10.1061/(asce)cp.1943-5487.0000949.
564 [46] E. Bianchi and M. Hebdon, “COCO-Bridge 2021+ Dataset.” University Libraries, Virginia Tech,
565 2021. doi: 10.7294/16624495.v1.
566 [47] Q. Zou, Y. Cao, Q. Li, Q. Mao, and S. Wang, “CrackTree: Automatic crack detection from
567 pavement images,” Pattern Recognition Letters, vol. 33, no. 3, pp. 227–238, 2012, doi:
568 10.1016/j.patrec.2011.11.004.
569 [48] Y. Shi, L. Cui, Z. Qi, F. Meng, and Z. Chen, “Automatic road crack detection using random
570 structured forests,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 12, pp.
571 3434–3445, 2016, doi: 10.1109/TITS.2016.2552248.
572 [49] C. Feng, M. Y. Liu, C. C. Kao, and T. Y. Lee, Deep active learning for civil infrastructure defect
573 detection and classification, in Congress on Computing in Civil Engineering, Proceedings, 2017,
574 pp. 298–306. doi: 10.1061/9780784480823.036.
575 [50] J. Peng, S. Zhang, D. Peng, and K. Liang, Research on Bridge Crack Detection with Neural
576 Network Based Image Processing Methods, in Proceedings - 12th International Conference on
577 Reliability, Maintainability, and Safety, ICRMS 2018, Jul. 2018, pp. 419–428. doi:
578 10.1109/ICRMS.2018.00085.
579 [51] Y. Narazaki, V. Hoskere, T. A. Hoang, and B. F. Spencer, Automated Bridge Component
580 Recognition using Video Data, in Proceedings of the 7th World Conference on Structural Control
581 and Monitoring, Jun. 2018. [Online]. Available: http://arxiv.org/abs/1806.06820
582 [52] Y. Narazaki, V. Hoskere, T. A. Hoang, and B. F. Spencer, Vision-based automated bridge
583 component recognition integrated with high-level scene understanding, in Proceedings of the 13th
584 International Workshop on Advanced Smart Materials and Smart Structures Technology, 2018.
585 [Online]. Available: http://arxiv.org/abs/1805.06041
586 [53] V. Hoskere, Y. Narazaki, T. A. Hoang, and B. F. Spencer, Towards automated post-earthquake
587 inspections with deep learning-based condition-aware models, in Proceedings of the 7th World
588 Conference on Structural Control and Monitoring, 2018. [Online]. Available:
589 http://arxiv.org/abs/1809.09195
590 [54] V. Hoskere, Y. Narazaki, T. Hoang, and B. Spencer, Vision-based Structural Inspection using
591 Multiscale Deep Convolutional Neural Networks, in Proceedings of the 3rd Huixian International
592 Forum on Earthquake Engineering for Young Researchers, 2018. Accessed: Jun. 10, 2018.
593 [Online]. Available: https://arxiv.org/ftp/arxiv/papers/1805/1805.01055.pdf
594 [55] A. Zhang, K. C. P. Wang, Y. Fei, Y. Liu, S. Tao, C. Chen, J. Q. Li, and B. Li, “Deep Learning–
595 Based Fully Automated Pavement Crack Detection on 3D Asphalt Surfaces with an Improved
596 CrackNet,” Journal of Computing in Civil Engineering, vol. 32, no. 5, p. 04018041, 2018, doi:
597 10.1061/(asce)cp.1943-5487.0000775.
598 [56] T. Dawood, Z. Zhu, and T. Zayed, “Computer Vision-Based Model for Moisture Marks Detection
599 and Recognition in Subway Networks,” Journal of Computing in Civil Engineering, vol. 32, no. 2,
600 p. 04017079, Mar. 2018, doi: 10.1061/(ASCE)CP.1943-5487.0000728.
601 [57] R. Dobson, T. Ahlborn, and D. Banach, “Evaluation of bridge decks using non-destructive
602 evaluation (NDE) at near highway speeds for effective asset management-implementation for
603 routine inspection (Phase III),” 2018. [Online]. Available: https://rosap.ntl.bts.gov/view/dot/42754
604 [58] X. Liang, “Image-based post-disaster inspection of reinforced concrete bridge systems using deep
605 learning with Bayesian optimization,” Computer-Aided Civil and Infrastructure Engineering, vol.
606 34, no. 5, pp. 415–430, 2019, doi: 10.1111/mice.12425.
607 [59] Y. Xu, Y. Bao, J. Chen, W. Zuo, and H. Li, “Surface fatigue crack identification in steel box
608 girder of bridges by a deep fusion convolutional neural network based on consumer-grade camera
609 images,” Structural Health Monitoring, vol. 18, no. 3, pp. 653–674, 2019, doi:
610 10.1177/1475921718764873.
611 [60] M. Shafiei Dizaji and D. Harris, “3D InspectionNet: a deep 3D convolutional neural networks
612 based approach for 3D defect detection on concrete columns,” no. April 2019, p. 13, 2019, doi:
613 10.1117/12.2514387.
614 [61] M. Alipour, D. K. Harris, and G. R. Miller, “Robust Pixel-Level Crack Detection Using Deep
615 Fully Convolutional Neural Networks,” Journal of Computing in Civil Engineering, vol. 33, no. 6,
616 2019, doi: 10.1061/(ASCE)CP.1943-5487.0000854.
617 [62] C. Song, L. Wu, Z. Chen, H. Zhou, P. Lin, S. Cheng, and Z. Wu, Pixel-Level Crack Detection in
618 Images Using SegNet, in Lecture Notes in Computer Science (including subseries Lecture Notes in
619 Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, vol. 11909 LNAI, pp. 247–254.
620 doi: 10.1007/978-3-030-33709-4_22.
621 [63] Y. Liu, J. Yao, X. Lu, R. Xie, and L. Li, “DeepCrack: A deep hierarchical feature learning
622 architecture for crack segmentation,” Neurocomputing, vol. 338, pp. 139–153, 2019, doi:
623 10.1016/j.neucom.2019.01.036.
624 [64] S. Li, X. Zhao, and G. Zhou, “Automatic pixel-level multiple damage detection of concrete
625 structure using fully convolutional network,” Computer-Aided Civil and Infrastructure
626 Engineering, vol. 34, no. 7, pp. 616–634, 2019, doi: 10.1111/mice.12433.
627 [65] N. Prappacher, M. Bullmann, G. Bohn, F. Deinzer, and A. Linke, “Defect detection on rolling
628 element surface scans using neural image segmentation,” Applied Sciences (Switzerland), vol. 10,
629 no. 9, p. 3290, May 2020, doi: 10.3390/app10093290.
630 [66] R. Kalfarisi, Z. Y. Wu, and K. Soh, “Crack Detection and Segmentation Using Deep Learning
631 with 3D Reality Mesh Model for Quantitative Assessment and Integrated Visualization,” Journal
632 of Computing in Civil Engineering, vol. 34, no. 3, p. 04020010, 2020, doi: 10.1061/(asce)cp.1943-
633 5487.0000890.
634 [67] V. Hoskere, Y. Narazaki, T. A. Hoang, and B. F. Spencer, “MaDnet: multi-task semantic
635 segmentation of multiple types of structural materials and damage in images of civil
636 infrastructure,” Journal of Civil Structural Health Monitoring, vol. 10, no. 5, pp. 757–773, Jun.
637 2020, doi: 10.1007/s13349-020-00409-0.
638 [68] F. Yang, L. Zhang, S. Yu, D. Prokhorov, X. Mei, and H. Ling, “Feature Pyramid and Hierarchical
639 Boosting Network for Pavement Crack Detection,” IEEE Transactions on Intelligent
640 Transportation Systems, vol. 21, no. 4, pp. 1525–1535, 2020, doi: 10.1109/TITS.2019.2910595.
641 [69] S. Dorafshan and H. Azari, “Deep learning models for bridge deck evaluation using impact echo,”
642 Construction and Building Materials, vol. 263, p. 120109, Dec. 2020, doi:
643 10.1016/j.conbuildmat.2020.120109.
644 [70] C. A. Perez-Ramirez, G. Li, Q. Liu, S. Zhao, W. Qiao, and X. Ren, “Automatic crack recognition
645 for concrete bridges using a fully convolutional neural network and naive Bayes data fusion based
646 on a visual detection system,” Measurement Science and Technology, vol. 31, p. 17, 2020, doi:
647 10.1088/1361-6501/ab79c8.
648 [71] Y. Ren, J. Huang, Z. Hong, W. Lu, J. Yin, L. Zou, and X. Shen, “Image-based concrete crack
649 detection in tunnels using deep fully convolutional networks,” Construction and Building
650 Materials, vol. 234, p. 117367, 2020, doi: 10.1016/j.conbuildmat.2019.117367.
651 [72] L. Yang, B. Li, W. Li, H. Brand, B. Jiang, and J. Xiao, “Concrete defects inspection and 3D
652 mapping using CityFlyer quadrotor robot,” IEEE/CAA Journal of Automatica Sinica, vol. 7, no. 4,
653 pp. 991–1002, 2020, doi: 10.1109/JAS.2020.1003234.
654 [73] E. Mclaughlin, N. Charron, and S. Narasimhan, “Automated Defect Quantification in Concrete
655 Bridges Using Robotics and Deep Learning,” Journal of Computing in Civil Engineering, 2020,
656 doi: 10.1061/(ASCE)CP.1943-5487.0000915.
657 [74] J. J. Lin, A. Ibrahim, S. Sarwade, and M. Golparvar-Fard, “Bridge Inspection with Aerial Robots:
658 Automating the Entire Pipeline of Visual Data Capture, 3D Mapping, Defect Detection, Analysis,
659 and Reporting,” Journal of Computing in Civil Engineering, vol. 35, no. 2, p. 04020064, 2021,
660 doi: 10.1061/(asce)cp.1943-5487.0000954.
661 [75] C. Yuan, B. Xiong, X. Li, X. Sang, and Q. Kong, “A novel intelligent inspection robot with deep
662 stereo vision for three-dimensional concrete damage detection and quantification,” Structural
663 Health Monitoring, 2021, doi: 10.1177/14759217211010238.
664 [76] Z. Li, H. Zhu, and M. Huang, “A Deep Learning-Based Fine Crack Segmentation Network on
665 Full-Scale Steel Bridge Images with Complicated Backgrounds,” IEEE Access, vol. 9, pp.
666 114989–114997, 2021, doi: 10.1109/ACCESS.2021.3105279.
667 [77] E. Karaaslan, U. Bagci, and F. N. Catbas, “Attention-guided analysis of infrastructure damage
668 with semi-supervised deep learning," Automation in Construction, vol. 125, p.
669 103634, 2021, doi: 10.1016/j.autcon.2021.103634.
670 [78] Y. Xu, Y. Bao, Y. Zhang, and H. Li, “Attribute-based structural damage identification by few-shot
671 meta learning with inter-class knowledge transfer,” Structural Health Monitoring, vol. 20, no. 4,
672 pp. 1494–1517, 2021, doi: 10.1177/1475921720921135.
673 [79] Y. Zhang and K. Yuen, “Crack detection using fusion features‐based broad learning system and
674 image processing,” Computer-Aided Civil and Infrastructure Engineering, pp. 1–17, 2021, doi:
675 10.1111/mice.12753.
676 [80] S. Zhou and W. Song, “Crack segmentation through deep convolutional neural networks and
677 heterogeneous image fusion," Automation in Construction, vol. 125, p. 103605,
678 2021, doi: 10.1016/j.autcon.2021.103605.
679 [81] Y. Narazaki, V. Hoskere, K. Yoshida, B. F. Spencer, and Y. Fujino, “Synthetic environments for
680 vision-based structural condition assessment of Japanese high-speed railway viaducts,”
681 Mechanical Systems and Signal Processing, vol. 160, p. 107850, 2021, doi:
682 10.1016/j.ymssp.2021.107850.
683 [82] Y. Bai, H. Sezen, and A. Yilmaz, “Detecting Cracks and Spalling Automatically in Extreme
684 Events By End-To-End Deep Learning Frameworks,” ISPRS Annals of the Photogrammetry,
685 Remote Sensing and Spatial Information Sciences, vol. V-2–2021, pp. 161–168, 2021, doi:
686 10.5194/isprs-annals-v-2-2021-161-2021.
687 [83] X. W. Ye, T. Jin, Z. X. Li, S. Y. Ma, Y. Ding, and Y. H. Ou, “Structural Crack Detection from
688 Benchmark Data Sets Using Pruned Fully Convolutional Networks,” Journal of Structural
689 Engineering, vol. 147, no. 11, p. 04721008, 2021, doi: 10.1061/(asce)st.1943-541x.0003140.
690 [84] S. O. Sajedi and X. Liang, “Uncertainty-assisted deep vision structural health monitoring,”
691 Computer-Aided Civil and Infrastructure Engineering, vol. 36, no. 2, pp. 126–142, 2021, doi:
692 10.1111/mice.12580.
693 [85] E. Bianchi and M. Hebdon, “Structural Material Semantic Segmentation Dataset.” University
694 Libraries, Virginia Tech, 2021. doi: 10.7294/16624648.v1.
695 [86] E. Bianchi and M. Hebdon, “Corrosion Condition State Semantic Segmentation Dataset.”
696 University Libraries, Virginia Tech, 2021. doi: 10.7294/16624663.v1.
697 [87] E. Bianchi and M. Hebdon, “Labeled Cracks in the Wild (LCW) Dataset.” University Libraries,
698 Virginia Tech, 2021. doi: 10.7294/16624672.v2.
699 [88] T.-W. Tang, W.-H. Kuo, J.-H. Lan, C.-F. Ding, H. Hsu, and H.-T. Young, “Anomaly Detection
700 Neural Network with Dual Auto-Encoders GAN and Its Industrial Inspection Applications,”
701 Sensors, vol. 20, no. 12, p. 3336, Jun. 2020, doi: 10.3390/s20123336.
702 [89] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger, "MVTec AD - A Comprehensive Real-World
703 Dataset for Unsupervised Anomaly Detection," Proceedings of the IEEE Computer Society
704 Conference on Computer Vision and Pattern Recognition, vol. 2019-June, pp. 9584–9592, 2019,
705 doi: 10.1109/CVPR.2019.00982.
706 [90] T. Yasuno, A. Ishii, J. Fujii, M. Amakata, and Y. Takahashi, “Generative damage learning for
707 concrete aging detection using auto-flight images," in Proceedings of the International Symposium
708 on Automation and Robotics in Construction (ISARC), 2020. doi: 10.22260/isarc2020/0166.
709 [91] Y. Gao, P. Zhai, and K. M. Mosalam, “Balanced semisupervised generative adversarial network
710 for damage assessment from low-data imbalanced-class regime,” Computer-Aided Civil and
711 Infrastructure Engineering, vol. 36, no. 9, pp. 1094–1113, 2021, doi: 10.1111/mice.12741.
712 [92] E. Bianchi and M. Hebdon, Forecasting infrastructure deterioration with inverse GANs, in
713 Applications of Machine Learning, 2021. doi: 10.1117/12.2595111.
714 [93] V. Hoskere, Y. Narazaki, T. A. Hoang, and B. F. Spencer, “Vision-based Structural Inspection
715 using Multiscale Deep Convolutional Neural Networks,” 2018, Accessed: Jun. 10, 2018. [Online].
716 Available: https://arxiv.org/ftp/arxiv/papers/1805/1805.01055.pdf
717 [94] M. I. April and C. Spring, “Structures Congress 2014,” no. 1, pp. 1437–1447, 2014. ISBN:
718 9780784413357
719 [95] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A Large-Scale
720 Hierarchical Image Database,” 2009 IEEE Conference on Computer Vision and Pattern
721 Recognition, 2009, doi: 10.1109/CVPR.2009.5206848.
722 [96] S. Dorafshan, R. J. Thomas, and M. Maguire, “SDNET2018: An annotated image dataset for non-
723 contact concrete crack detection using deep convolutional neural networks,” Data in Brief, vol. 21,
724 pp. 1664–1668, 2018, doi: 10.1016/j.dib.2018.11.015.
725 [97] V. Hoskere, F. Amer, D. Friedel, W. Yang, Y. Tang, Y. Narazaki, M. D. Smith, M. Golparvar-
726 Fard, and B. F. Spencer, “Instadam: Open-source platform for rapid semantic segmentation of
727 structural damage,” Applied Sciences (Switzerland), vol. 11, no. 2, pp. 1–16, 2021, doi:
728 10.3390/app11020520.
729 [98] M. D. Wilkinson et al., “Comment: The FAIR Guiding Principles for scientific data management
730 and stewardship,” Scientific Data, vol. 3, pp. 1–9, 2016, doi: 10.1038/sdata.2016.18.
731 [99] AASHTO, Manual for Bridge Element Inspection (1st Edition), with 2015 and 2018 Interim
732 Revisions. American Association of State Highway and Transportation Officials (AASHTO),
733 2018. ISBN: 978-1-56051-591-3
734 [100] T. W. Ryan, J. E. Mann, and Z. M. Chill, FHWA Bridge Inspector’s Reference Manual (BIRM),
735 Vol. 1. 2012. [Online]. Available: https://www.fhwa.dot.gov/bridge/nbis/pubs/nhi12049.pdf
736 [101] Y.-J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, and O. Büyüköztürk, “Autonomous Structural
737 Visual Inspection Using Region-Based Deep Learning for Detecting Multiple Damage Types,”
738 Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 9, pp. 731–747, 2018, doi:
739 10.1111/mice.12334.
743 [103] P. E. Sarlin, D. Detone, T. Malisiewicz, and A. Rabinovich, SuperGlue: Learning Feature
744 Matching with Graph Neural Networks, in Proceedings of the IEEE Computer Society Conference
745 on Computer Vision and Pattern Recognition, 2020, pp. 4937–4946. doi:
746 10.1109/CVPR42600.2020.00499.
747 [104] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-Decoder with Atrous
748 Separable Convolution for Semantic Image Segmentation,” in Lecture Notes in Computer Science
749 (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),
750 vol. 11211 LNCS, 2018, pp. 833–851. doi: 10.1007/978-3-030-01234-2_49.
751 [105] A. Kendall and Y. Gal, “What Uncertainties Do We Need in Bayesian Deep Learning for
752 Computer Vision?," Advances in Neural Information Processing Systems, 2017, pp. 5575–5585.
753 [Online]. Available: http://arxiv.org/abs/1703.04977
754 [106] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, Unpaired Image-to-Image Translation Using Cycle-
755 Consistent Adversarial Networks, in Proceedings of the IEEE International Conference on
756 Computer Vision, 2017, pp. 2242–2251. doi: 10.1109/ICCV.2017.244.