Structural Form-Finding enhanced by Graph Neural Networks

Lazlo Bleker 1, Rafael Pastrana 2, Patrick Ole Ohlbrock 3, Pierluigi D'Acunto 1

1 Technical University of Munich, School of Engineering and Design, Professorship of Structural Design, Arcisstr. 21, 80333 Munich, Germany
[email protected]
2 Princeton University, Form Finding Lab, Princeton, USA
3 ETH Zurich, Institute of Technology in Architecture, Chair of Structural Design, Zurich, Switzerland
Abstract. Computational form-finding methods hold great potential for enabling resource-efficient structural design. In this context, the Combinatorial Equilibrium Modelling (CEM) allows the design of cross-typological tension-compression structures starting from an input topology diagram in the form of a graph. This paper presents an AI-assisted design workflow in which the graph modelling process required by the CEM is simplified through the application of a Graph Neural Network (GNN). To this end, a GNN model is used for the automatic labelling of edges of unlabelled topology diagrams. A synthetic topology diagram data generator is developed to produce training data for the GNN model. The trained GNN is tested on a dataset of typical bridge topologies based on real structures. The experiments show that the trained GNN generalises well to unseen synthetic data and data from real structures similar to the synthetic data. Hence, further developments of the GNN model have the potential to make the proposed design workflow a valuable tool for the conceptual design of structures.

Keywords: Structural Form-Finding, Combinatorial Equilibrium Modelling, Equilibrium-based design, Machine Learning, Graph Neural Networks.

1 Introduction

A substantial proportion of the embodied carbon of buildings and infrastructures is attributable to their supporting structures. Therefore, it is of great interest to minimise the material use of such load-bearing systems (Kaethner and Burridge 2012). The potential impact of designers on the outcome of the design process, including the embodied carbon of buildings, is highest in the conceptual design phase and steadily decreases as the design progresses (Paulson 1976). The field of structural form-finding deals with methods that allow the transformation of form and forces to generate and explore efficient structural geometries, making it particularly suited for the conceptual design stage. Thus, applying structural form-finding holds great potential for decreasing the embodied carbon of new buildings and infrastructures. However, manually exploring the entire design space of solutions offered by form-finding methods is often infeasible. Employing the ability of machine learning to extract information from large amounts of data could increase the potential impact of structural form-finding on the design process even further. Machine learning has already been applied to parametric structural exploration (Danhaive and Mueller 2021) as well as structural design through form-finding (Ochoa et al. 2021). This paper introduces a novel conceptual structural design workflow that relies on Graph Neural Networks (GNNs) to enhance the structural form-finding process.

1.1 Combinatorial Equilibrium Modelling (CEM)

Form-finding approaches such as the force density method (FDM) are tailored for structures in pure compression or tension (Vassart and Motro 1999). In contrast, the Combinatorial Equilibrium Modelling (CEM) (Ohlbrock and D'Acunto 2020), a form-finding method based on vector-based graphic statics (D'Acunto et al. 2019) and graph theory, is best suited for mixed tension-compression structures. The primary input for the CEM is the topology diagram, a graph representing the structure's topology (Figure 1). A graph is a mathematical structure consisting of a set of nodes interconnected by edges. In a CEM topology diagram, nodes contain embedded information about the presence of supports or applied external forces. Edges are labelled either as trail edges (Figure 1, purple lines) or deviation edges (Figure 1, orange lines): the former embed metric data related to their length, the latter to their signed internal force magnitude.

Fig. 1. Example of a CEM topology diagram and its constituent elements.

Creating a valid topology diagram that can be processed by the CEM requires the user to be knowledgeable about the correct labelling of the edges and nodes (Ohlbrock and D'Acunto 2020). As shown in Figure 1, trail edges are connected end-to-end, forming chains of multiple edges (trails). For a topology diagram to be valid, every node must be part of exactly one trail, and each trail must contain exactly one supported node at one of its ends. A deviation edge connects a node of one trail to any node of another trail. Among deviation edges, one can distinguish direct deviation edges from indirect deviation edges. Direct deviation edges connect nodes with an equal topological distance (the number of trail edges separating two nodes, measured along a trail) to the supported nodes of their respective trails. Indirect deviation edges connect nodes with an unequal topological distance to their corresponding supported nodes. Manipulating the topology diagram and its embedded metric information allows the generation of a large cross-typological design space of efficient structural geometries.
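To make these validity rules concrete, the following minimal sketch checks a labelled topology diagram against the conditions stated above. It assumes, purely for illustration, that the diagram is stored as a networkx graph whose edges carry a "label" attribute ("trail" or "deviation") and whose nodes carry a boolean "support" attribute; this is not the CEM data structure itself.

```python
# Minimal validity check for a labelled CEM topology diagram.
# Illustrative conventions (not the CEM API): a networkx graph G whose edges
# have a "label" attribute ("trail" or "deviation") and whose nodes have a
# boolean "support" attribute.
import networkx as nx


def is_valid_topology(G: nx.Graph) -> bool:
    # Keep all nodes but only the trail edges: the connected components of
    # this subgraph are the candidate trails.
    trails = nx.Graph()
    trails.add_nodes_from(G.nodes(data=True))
    trails.add_edges_from(
        (u, v) for u, v, d in G.edges(data=True) if d["label"] == "trail"
    )
    components = list(nx.connected_components(trails))
    comp_of = {n: i for i, comp in enumerate(components) for n in comp}

    for comp in components:
        sub = trails.subgraph(comp)
        # Every node must lie on a trail, i.e. no isolated nodes.
        if sub.number_of_edges() == 0:
            return False
        # A trail is an open chain of trail edges: connected, acyclic,
        # and without branching.
        if sub.number_of_edges() != sub.number_of_nodes() - 1:
            return False
        if any(degree > 2 for _, degree in sub.degree()):
            return False
        # Exactly one supported node per trail, located at one of its ends.
        supports = [n for n in sub if G.nodes[n]["support"]]
        if len(supports) != 1 or sub.degree(supports[0]) != 1:
            return False

    # Deviation edges must connect nodes belonging to two different trails.
    return all(
        comp_of[u] != comp_of[v]
        for u, v, d in G.edges(data=True) if d["label"] == "deviation"
    )
```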

1.2 Graph Neural Networks (GNNs)

Graph Neural Networks (GNNs) are a type of neural network that can operate on graphs (Zhou et al. 2020), such as the topology diagram of the CEM. Nodes and edges within a graph may contain embedded information, referred to as node and edge features, respectively. GNNs employ convolutional layers similar to those found in other deep learning architectures. During a single forward pass of the network, each convolutional layer updates every node embedding as a function of its neighbourhood, i.e. all directly connected nodes. GNNs have been applied to a wide range of application domains in which data can be represented as graphs, such as molecules (Kearnes et al. 2016), social networks (Wu et al. 2019), and scene representations (Qi et al. 2018). In structural design, GNNs have been used as a surrogate model for truss FEM analysis (Whalen and Mueller 2021), for the design of cross-section sizes (Chang and Cheng 2020), and for the form-finding of funicular shell structures (Tam et al. 2021).
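As a toy illustration of this neighbourhood aggregation, the sketch below performs one round of message passing with a simple mean aggregation; it is a generic, parameter-free example and not the specific convolution operator used later in this paper.

```python
# One round of message passing on a graph stored as an adjacency list: each
# node embedding is updated from its own state and the mean of its
# neighbours' states. A generic sketch; real GNN layers add learnable
# weights and non-linearities.
import numpy as np


def message_passing_step(adjacency, embeddings):
    updated = {}
    for node, state in embeddings.items():
        neighbours = adjacency.get(node, [])
        if neighbours:
            message = np.mean([embeddings[n] for n in neighbours], axis=0)
        else:
            message = np.zeros_like(state)
        updated[node] = 0.5 * (state + message)   # simple, fixed update rule
    return updated


# Example: a path graph 0-1-2 with 2-dimensional node embeddings.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
embeddings = {i: np.array([1.0, 0.0]) if i == 0 else np.array([0.0, 1.0])
              for i in adjacency}
print(message_passing_step(adjacency, embeddings))
```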

1.3 Objective: Enhancing CEM with GNN

The efficacy of the CEM is often held back in practice by the somewhat unintuitive nature of operating at the level of the topology diagram. One of the most challenging tasks when working with the CEM is setting up a valid topology diagram based on an initial design concept and correctly labelling the edges as trail and deviation edges to allow direct control of the CEM model during the form-finding process. A brute-force approach to finding a valid edge label mapping rapidly becomes infeasible, as it would require evaluating 2^e possible mappings, where e is the number of edges in the topology diagram. GNNs present a potential way to overcome this challenge of finding a valid edge mapping, thereby supplementing the limited human intuition for the graph setting with machine intelligence. This paper introduces a design workflow that uses a supervised GNN model to determine a mapping of edge labels for a CEM topology diagram that is appropriate for the form-finding task at hand. In particular, the presented experiment focuses on the design of bridge structures.
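The exponential growth of this brute-force search is easy to verify; the toy snippet below enumerates all 2^e labellings for a diagram with only ten edges.

```python
# Brute-force enumeration of edge label mappings grows as 2**e.
from itertools import product

edges = [f"e{i}" for i in range(10)]                  # toy diagram, 10 edges
labellings = product(("trail", "deviation"), repeat=len(edges))
print(sum(1 for _ in labellings))                     # 1024 = 2**10
# A diagram with 50 edges already has 2**50 (about 1.1e15) candidate mappings.
```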

2 Methodology

2.1 Machine learning-based design workflow

The proposed design workflow allows rough design ideas to be translated into actual structures that are guaranteed to be in equilibrium. Figure 2 shows a possible manifestation of this human-machine collaborative process enabled by the AI model presented in this paper. The process consists of four separate steps.
Wireframe sketch (a). In the first step, the human designer sketches a wireframe
model of the desired structure, including the supports. Creating this wireframe sketch
requires minimal knowledge of statics as it does not need to represent a structure in
equilibrium, although it could very well be inspired by existing structural typologies.

Unlabelled topology diagram (b). In the second step, the wireframe sketch is automatically converted into a graph, i.e. an unlabelled CEM topology diagram. Every line segment in the wireframe sketch is translated into an edge of the graph. Points of intersection of line segments in the wireframe model are converted into nodes of the graph. Nodes are labelled as either supported or unsupported. Figure 2b displays an example of a topology diagram visualised as a radial graph. The choice of how the graph is visualised on the 2D plane does not play a role in the outcome of the design workflow.
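A minimal sketch of this conversion step is given below. It assumes, for illustration only, that the wireframe arrives as a list of line segments (pairs of coordinate tuples) together with the set of points marked as supports; splitting segments at true crossing points is omitted for brevity.

```python
# Sketch of step (b): convert a wireframe sketch into an unlabelled topology
# diagram. Illustrative input format: line segments as pairs of coordinate
# tuples plus the set of points marked as supports. Splitting segments at
# crossing points is omitted; only shared endpoints become common nodes.
import networkx as nx


def wireframe_to_graph(segments, support_points):
    G = nx.Graph()
    supports = {tuple(round(c, 6) for c in p) for p in support_points}
    node_ids = {}                                   # coordinate key -> node id

    def node_for(point):
        key = tuple(round(c, 6) for c in point)     # merge coincident endpoints
        if key not in node_ids:
            node_ids[key] = len(node_ids)
            G.add_node(node_ids[key], support=(key in supports))
        return node_ids[key]

    for start, end in segments:
        G.add_edge(node_for(start), node_for(end))  # one edge per line segment
    return G
```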
Labelled topology diagram (c). In the third step, a GNN model trained on a synthetic dataset of topology diagrams generated from a generic ground topology is applied. The GNN processes the unlabelled graph representing the topology diagram and outputs a binary label for each edge of the graph. Each predicted label corresponds to one of two classes: trail edge or deviation edge. Combining these edge labels with the previously unlabelled topology diagram results in a labelled topology diagram.
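In code, the labelling step reduces to a single forward pass of a trained model followed by an argmax per edge. The sketch below illustrates this under the same illustrative graph conventions as before (a networkx graph with a boolean "support" node attribute); the model interface is the one assumed in the architecture sketch of Section 2.2.

```python
# Sketch of step (c): a trained GNN assigns a binary label to every edge of an
# unlabelled topology diagram and the labels are written back onto the graph.
import torch


def label_topology(G, model):
    nodes = list(G.nodes())
    index = {n: i for i, n in enumerate(nodes)}
    # One-hot node features: [1, 0] = unsupported, [0, 1] = supported.
    x = torch.tensor([[0.0, 1.0] if G.nodes[n]["support"] else [1.0, 0.0]
                      for n in nodes])
    edges = list(G.edges())
    pairs = [[index[u], index[v]] for u, v in edges]
    edge_index = torch.tensor(pairs + [[j, i] for i, j in pairs],
                              dtype=torch.long).t().contiguous()

    model.eval()
    with torch.no_grad():
        # One prediction per directed edge; keep the forward direction only.
        pred = model(x, edge_index).argmax(dim=1)[: len(edges)]
    for (u, v), label in zip(edges, pred.tolist()):
        G.edges[u, v]["label"] = "trail" if label == 1 else "deviation"
    return G
```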

Structural model (d). Provided that the labelling produced by the GNN yields a valid CEM topology diagram (i.e. all edges are labelled correctly), the now complete topology diagram can be processed by the CEM to produce a structural model in equilibrium containing both members in compression (blue) and in tension (red). In this final step, the human designer is free to choose and modify the metric parameters of the model: the lengths of trail edges and the signed internal force magnitudes of deviation edges, as well as the points of application, magnitudes, and directions of the external loads.

2.2 Graph neural network for CEM label prediction

A synthetic dataset of 50,000 valid labelled topology diagrams is created for training the GNN. Topology diagrams are based on a parametrically generated ground topology, the size of which is defined according to two variables:

• the number of trails N_T (randomly sampled even integer between 2 and 14);
• the number of trail edges N_E per trail (randomly sampled integer between 1 and 15).

Four different sets of edge types are then activated in the ground topology (Figure 3). Firstly, all possible trail edges (purple continuous lines) are activated to form a set of trails with an equal number of trail edges. Secondly, a random subset of all possible pairs of trails is selected; based on this selection, a fraction p_d (between 0 and 1) of all possible direct deviation edges is activated. Thirdly, a random fraction p_c of all possible direct centre deviation edges is activated (orange dotted lines); direct centre deviation edges are defined as the direct deviation edges at the centre of the diagram connecting the unsupported extremes of opposite trails. Finally, a similarly drawn random fraction p_i of indirect deviation edges is activated (orange dashed lines).
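A simplified reconstruction of this generator is sketched below. The exact rules of the ground topology (which trail pairs are candidates, which indirect deviation edges exist, and how "opposite" trails are paired) are not fully specified in the text, so the corresponding choices in the sketch are assumptions.

```python
# Simplified reconstruction of the synthetic topology generator. Node (t, k)
# is the k-th node of trail t, with k = 0 the supported end. The pairing of
# "opposite" trails and the set of candidate indirect deviation edges are
# assumptions of this sketch.
import random
import networkx as nx


def generate_topology(seed=None):
    rng = random.Random(seed)
    n_trails = rng.choice(range(2, 15, 2))      # N_T: random even integer, 2..14
    n_edges = rng.randint(1, 15)                # N_E: trail edges per trail
    p_d, p_c, p_i = rng.random(), rng.random(), rng.random()

    G = nx.Graph()
    for t in range(n_trails):
        for k in range(n_edges + 1):
            G.add_node((t, k), support=(k == 0))
        for k in range(n_edges):                # activate all trail edges
            G.add_edge((t, k), (t, k + 1), label="trail")

    trail_pairs = [(a, b) for a in range(n_trails) for b in range(a + 1, n_trails)]

    # Direct deviation edges: equal topological distance to the supports.
    for a, b in trail_pairs:
        for k in range(1, n_edges + 1):
            if rng.random() < p_d:
                G.add_edge((a, k), (b, k), label="deviation")

    # Centre deviation edges: unsupported extremes of opposite trails
    # (opposite trails assumed here to be consecutive pairs of indices).
    for t in range(0, n_trails, 2):
        if rng.random() < p_c:
            G.add_edge((t, n_edges), (t + 1, n_edges), label="deviation")

    # Indirect deviation edges: unequal topological distance (offset of one).
    for a, b in trail_pairs:
        for k in range(1, n_edges):
            if rng.random() < p_i:
                G.add_edge((a, k), (b, k + 1), label="deviation")
    return G
```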
Fig. 2. Proposed design workflow: (a) Wireframe model sketched by the human designer. (b) Unlabelled topology diagram automatically generated from the sketch. (c) Valid labelled topology diagram produced by the GNN. (d) Structural model in equilibrium created by the CEM according to the designer's input of metric parameters.

Node embeddings are one-hot encodings indicating whether a node is a supported or an unsupported node. Every edge includes a target label for training: trail edge or deviation edge. Figure 4 shows a set of topology diagrams randomly produced by the synthetic data generator.
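For training, each diagram can be packaged as a PyTorch Geometric Data object holding the one-hot node features and the per-edge targets. The sketch below follows the illustrative graph conventions used earlier; it is not the authors' data pipeline.

```python
# Sketch: convert a labelled topology diagram (networkx conventions as above)
# into a PyTorch Geometric Data object with one-hot node features and a binary
# target per edge.
import torch
from torch_geometric.data import Data


def to_pyg_data(G):
    nodes = list(G.nodes())
    index = {n: i for i, n in enumerate(nodes)}

    # One-hot node features: [1, 0] = unsupported, [0, 1] = supported.
    x = torch.tensor([[0.0, 1.0] if G.nodes[n]["support"] else [1.0, 0.0]
                      for n in nodes])

    # Store every undirected edge in both directions, as PyTorch Geometric
    # expects; the target label is duplicated accordingly (1 = trail).
    edge_index, y = [], []
    for u, v, d in G.edges(data=True):
        label = 1 if d["label"] == "trail" else 0
        edge_index += [[index[u], index[v]], [index[v], index[u]]]
        y += [label, label]

    return Data(x=x,
                edge_index=torch.tensor(edge_index, dtype=torch.long).t().contiguous(),
                y=torch.tensor(y, dtype=torch.long))
```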
Figure 5 shows the structure of the graph neural network used for the prediction of the edge labels. The model has been implemented in Python with PyTorch Geometric (Fey and Lenssen 2019). Because any edge can belong to one of two classes (trail edge or deviation edge), binary cross-entropy is used as the loss function. During training, models are evaluated using the Matthews Correlation Coefficient (MCC), which is suitable for the imbalanced numbers of trail and deviation edges in the dataset and is independent of which of the two classes is defined as the positive class (Chicco 2017). The MCC ranges from -1 to 1, where 1 represents a perfect classifier and -1 a perfect misclassifier. A hyperparameter study showed that, for accurate edge label prediction, the number of convolutional layers should be equal to or larger than the maximum number of trail edges per trail N_E in the training dataset (Figure 6). This can be explained by the fact that if the number of convolutional layers is less than N_E, information about the existence of the supported node does not reach every node of the graph. The generated dataset contains graphs with up to N_E = 15 trail edges per trail; therefore, the main part of the GNN consists of 15 blocks, each composed of a TransformerConv (Shi et al. 2020) convolutional layer followed by a Rectified Linear Unit (ReLU) activation function. The input size of the first convolutional layer is necessarily equal to the number of node features, which are one-hot encodings of the two possible node classes, and is therefore equal to two. The input and output sizes of the consecutive convolutional layers increase linearly up to 300 at the interface between layers 8 and 9, after which they are kept constant for the following layers.

Fig. 3. Representative topology diagram with different groups of activated edges of the synthetic
generator indicated. Non-activated edges of the ground topology are shown in grey.

As shown by the hyperparameter study (Figure 7), the MCC increases steadily for maximum output sizes up to around 300 and no longer rises significantly for larger output sizes. Because creating an edge label mapping for the full graph is an edge-level task, the convolutional layers are followed by an edge pooling layer based on the dot product of adjacent node embeddings. Finally, these edge embeddings are passed through a dropout layer (p = 0.2) and a linear classifier with an output vector of length two. The argmax of this vector predicts the class to which each respective edge belongs.
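The architecture described above can be sketched in PyTorch Geometric as follows. The exact channel schedule is not reported, so the linear ramp below is an assumption, and the "dot product" edge pooling is interpreted here as an element-wise product of the two incident node embeddings so that the edge embedding keeps the layer width before the linear head.

```python
# Sketch of the edge classifier: 15 TransformerConv layers with ReLU, edge
# pooling over the two incident node embeddings, dropout (p = 0.2) and a
# two-class linear head. The channel schedule and the element-wise pooling
# are assumptions of this sketch.
import torch
from torch import nn
from torch_geometric.nn import TransformerConv


class CEMEdgeClassifier(nn.Module):
    def __init__(self, num_node_features=2, max_width=300,
                 num_layers=15, ramp_until=8):
        super().__init__()
        # Layer widths grow roughly linearly up to max_width at layer
        # `ramp_until`, then stay constant for the remaining layers.
        widths = [num_node_features] + [
            min(max_width, round(num_node_features
                                 + (max_width - num_node_features) * k / ramp_until))
            for k in range(1, num_layers + 1)
        ]
        self.convs = nn.ModuleList(
            TransformerConv(widths[k], widths[k + 1]) for k in range(num_layers)
        )
        self.dropout = nn.Dropout(p=0.2)
        self.classifier = nn.Linear(max_width, 2)    # trail vs. deviation logits

    def forward(self, x, edge_index):
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))
        # Edge pooling: combine the embeddings of the two incident nodes.
        src, dst = edge_index
        edge_embedding = x[src] * x[dst]
        return self.classifier(self.dropout(edge_embedding))
```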
Fig. 4. 16 random graphs generated by the synthetic data generator. N_T = number of trails, N_E = number of trail edges per trail, p_c = fraction of activated centre deviation edges, p_d = fraction of activated direct deviation edges, p_i = fraction of activated indirect deviation edges.

Fig. 5. The CEM edge classifier consists of a 15-layer graph neural network (GNN) followed by an edge pooling layer and a linear layer.
Fig. 6. Influence of the number of convolutional layers on the Matthews Correlation Coefficient
(MCC) related to the edge labelling.

Fig. 7. Influence of the maximum convolutional layer output size on the Matthews Correlation Coefficient (MCC) related to the edge labelling.

The GNN model is trained on 70% of the synthetic dataset (35,000 graphs; 14.6 million edges) and continuously tested on 15% (7,500 graphs; 3.1 million edges) for 150 epochs. Figure 8 shows the edge-labelling MCC history for both the training and testing parts of the dataset. At the end of training, the train data MCC is 0.91 and the test data MCC is 0.89, at which point it no longer improves significantly. The remaining 15% of the dataset is used as a validation set and also reaches an MCC of 0.89. It is worth noting that the number of potential graphs that the synthetic generator can produce is much larger than the number of graphs included in the training dataset. Despite the relatively modest training dataset size, the MCC on the test dataset is comparable with the MCC on the train dataset. This indicates that the trained GNN model generalises very well to unseen synthetic data. Figure 9 shows the confusion matrices of the trained GNN model for the train and test datasets. From the test data matrix, it can be inferred that 91.5% of unseen trail edges and 97.3% of unseen deviation edges are labelled correctly by the trained GNN model.
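A sketch of the corresponding training setup is given below: Adam with a learning rate of 0.001, 150 epochs, a cross-entropy loss over the two edge classes (the two-logit equivalent of the binary cross-entropy named above), and the edge-level MCC computed on the test split after every epoch. The batch size and other details are assumptions; the model and Data objects follow the earlier sketches.

```python
# Sketch of the training setup: Adam (lr = 0.001), 150 epochs, cross-entropy
# over the two edge classes, and the edge-level Matthews Correlation
# Coefficient (MCC) on the test split after each epoch.
import torch
from torch_geometric.loader import DataLoader
from sklearn.metrics import matthews_corrcoef


def train(model, train_set, test_set, epochs=150, lr=1e-3, batch_size=64):
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=batch_size)

    for epoch in range(epochs):
        model.train()
        for batch in train_loader:
            optimiser.zero_grad()
            loss = loss_fn(model(batch.x, batch.edge_index), batch.y)
            loss.backward()
            optimiser.step()

        model.eval()
        predictions, targets = [], []
        with torch.no_grad():
            for batch in test_loader:
                predictions.append(model(batch.x, batch.edge_index).argmax(dim=1))
                targets.append(batch.y)
        mcc = matthews_corrcoef(torch.cat(targets), torch.cat(predictions))
        print(f"epoch {epoch + 1}: test MCC = {mcc:.3f}")
```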
Fig. 8. Training and testing Matthews Correlation Coefficient (MCC) history on a synthetic dataset of 50,000 topology diagrams using the Adam optimizer with a learning rate of 0.001.

Fig. 9. Confusion matrices related to edge labelling for the train data (left) and test data (right) at
the end of training.

3 Implementation and results

To test the applicability of the proposed framework to real design scenarios, the presented model is tested on structural typologies from an available dataset of roughly 1,600 CEM topology diagrams representing different typologies of 2D and 3D bridge structures (Pastrana et al. 2021). Only those topology diagrams of the dataset that the synthetic data generator could theoretically generate are used for the test. This translates into filtering out any topologies containing indirect deviation edges that do not belong to the class of indirect deviation edges present in the ground topology of the generator, as well as any topologies containing trails of unequal lengths. After applying this filter, 173 topology diagrams of various sizes are left, spread across eight different bridge typologies, including 2D and 3D structures for each typology. The per-typology MCC obtained by applying the GNN model trained on the synthetic dataset to these realistic cases is displayed in Figure 10. An MCC similar to that of the synthetic test is obtained for six typologies, and a perfect edge-level classification is obtained for four of these six typologies. It is worth noting that for the GNN model to be useful, all the edges of a graph must be labelled correctly. In this respect, Figure 10 also shows the graph-level accuracy per typology. As expected, for the typologies with low MCC values the graph-level accuracy is zero, since even a single mistake in the edge labelling renders an entire graph invalid. Conversely, the four perfectly labelled typologies achieve 100% accuracy at the graph level.
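Graph-level accuracy can be computed directly from the per-edge predictions: a diagram counts as correct only if every one of its edges is labelled correctly. A short sketch, using the conventions of the earlier snippets:

```python
# Graph-level accuracy: a topology diagram counts as correctly labelled only
# if every one of its edges is predicted correctly.
import torch


def graph_level_accuracy(model, dataset):
    model.eval()
    correct = 0
    with torch.no_grad():
        for data in dataset:                       # one Data object per diagram
            pred = model(data.x, data.edge_index).argmax(dim=1)
            correct += int(torch.equal(pred, data.y))
    return correct / len(dataset)
```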
Figure 11 shows a possible outcome of the design workflow presented in this paper. Given three valid CEM topology diagrams from the bridge dataset labelled by the GNN, different design variations are generated via the CEM according to various metric parameters defined by the human designer.

Fig. 10. Matthews Correlation Coefficient (MCC) and graph-level accuracy per typology of data
from a dataset of bridge structures (Pastrana et al. 2021).

4 Conclusion

This paper presented an AI-assisted design workflow for structural form-finding that simplifies the set-up of a valid Combinatorial Equilibrium Modelling (CEM) topology diagram through a Graph Neural Network (GNN). This workflow was tested in an experiment focused on the design of bridge structures. The trained GNN model developed in this experiment generalises well not only to topology diagrams from the testing dataset but also to topology diagrams of real bridge structures similar to those produced by the synthetic generator. Future work should focus on expanding the GNN training dataset to include a larger set of bridge typologies. Moreover, considering that a valid CEM topology diagram requires all edges to be labelled correctly, translating high MCC values for the edge labelling into high graph-level accuracy remains challenging. This could be tackled, for example, by applying a post-processing filter that produces valid topology diagrams from the nearly correct edge label predictions of the GNN model presented in this paper. In conclusion, the proposed design workflow represents a step towards a new human-machine collaborative way of designing structures through form-finding.

Fig. 11. Three different design variations for three typologies from the bridge dataset generated
based on valid CEM topology diagrams labelled by the GNN. The metric parameters for the
form-finding are defined by the human designer.

References
1. Chang, K.; Cheng, C., 2020: Learning to simulate and design for structural engineering. Proceedings of the 37th International Conference on Machine Learning, pp. 1426-1436.
2. Chicco, D., 2017: Ten quick tips for machine learning in computational biology. BioData Mining 10.
3. D'Acunto, P.; Jasienski, J.-P.; Ohlbrock, P.O.; Fivet, C.; Schwartz, J.; Zastavni, D., 2019: Vector-based 3D graphic statics: A framework for the design of spatial structures based on the relation between form and forces. International Journal of Solids and Structures 167, pp. 58-70.
4. Danhaive, R.; Mueller, C., 2021: Design subspace learning: Structural design space exploration using performance-conditioned generative modeling. Automation in Construction 127.
5. Kaethner, S.; Burridge, J., 2012: Embodied CO2 of structural frames. Struct Eng 90(5), pp. 33-44.
6. Kearnes, S.; McCloskey, K.; Berndl, M.; Pande, V.; Riley, P., 2016: Molecular graph convolutions: moving beyond fingerprints. J Comput Aided Mol Des 30(8), pp. 595-608.
7. Fey, M.; Lenssen, J.E., 2019: Fast Graph Representation Learning with PyTorch Geometric. ICLR Workshop on Representation Learning on Graphs and Manifolds.
8. Ochoa, K.S.; Ohlbrock, P.O.; D'Acunto, P.; Moosavi, V., 2021: Beyond typologies, beyond optimization: Exploring novel structural forms at the interface of human and machine intelligence. International Journal of Architectural Computing 19(3), pp. 466-490.
9. Ohlbrock, P.O.; D'Acunto, P., 2020: A Computer-Aided Approach to Equilibrium Design Based on Graphic Statics and Combinatorial Variations. Computer-Aided Design 121.
10. Pastrana, R.; Skepasts, M.; Parascho, S., 2021: The CEM Framework Bridges Dataset: A dataset of bridge topologies, forms and forces. https://github.com/arpastrana/cem_dataset.
11. Qi, S.; Wang, W.; Jia, B.; Shen, J.; Zhu, S., 2018: Learning Human-Object Interactions by Graph Parsing Neural Networks. Proceedings of the European Conference on Computer Vision 2018, pp. 401-417.
12. Shi, Y.; Huang, Z.; Feng, S.; Zhong, H.; Wang, W.; Sun, Y., 2020: Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, pp. 1548-1554.
13. Tam, K.M.; Moosavi, V.; Van Mele, T.; Block, P., 2021: Towards Trans-topological Design Exploration of Reticulated Equilibrium Shell Structures with Graph Convolution Networks. Proceedings of the IASS Annual Symposium 2020/21, pp. 784-796.
14. Vassart, N.; Motro, R., 1999: Multiparametered Formfinding Method: Application to Tensegrity Systems. International Journal of Space Structures 14(2), pp. 147-154.
15. Whalen, E.; Mueller, C., 2021: Toward Reusable Surrogate Models: Graph-Based Transfer Learning on Trusses. ASME J. Mech. Des. 144(2).
16. Wu, Q.; Zhang, H.; Gao, X.; He, P.; Weng, P.; Gao, H.; Chen, G., 2019: Dual Graph Attention Networks for Deep Latent Representation of Multifaceted Social Effects in Recommender Systems. The World Wide Web Conference 2019, pp. 2091-2102.
17. Zhou, J. et al., 2020: Graph neural networks: A review of methods and applications. AI Open 1, pp. 57-81.
