Design-Built Rationalization Strategies and Applications
Stylianos Dritsas
Abstract
Rationalisation of architectural design is paramount to its manufacturing
and construction. This paper presents a methodology for the
rationalisation of building envelope geometry. We discuss methods for
understanding and addressing design complexity; review two theoretical
models of rationalisation, the pre-rational and post-rational design
principles; illustrate their benefits and limitations; and demonstrate their
meeting point by proposing an integrated performance-oriented model for
the analysis and design of building envelopes using digital design techniques.
1. INTRODUCTION
Rationalisation is a widely used term across disciplines such as the social
sciences, mathematics, engineering and architecture [1]. We may categorize
the broad range of available definitions as either (a) the retrospective
explanation of unconscious or ad-hoc behaviours via rational thinking or (b)
the prospective application of rational thought in establishing causal
relationships. Rationalisation as an effort to systematize thinking and making
thus encompasses a notion of analysis as well as of synthesis. Within
the domain of architecture one may rationalise, or more precisely interpret,
historical precedents to attribute sociocultural, economic and technological
instigators for their inception and realisation. Rationalisation in the design-built
process amounts to the effort of creating a structured methodology for
translating a design from its conceptual phase to its end production while
addressing the arising technical implications.
In this paper we study the rationalisation of a design process in two
domains: (a) strategies, where we discuss methods for structuring a design
process and offer a sketch of a unified pre- plus post-rational framework, and
(b) applications, where we present an approach to a particular family of
rationalisation problems aiming to control dimensional component variance
in contemporary digital design. The methodology used in both domains is
based on architectural design-computation, and the primary interface to
design information is geometry. Both strategies and applications aim at
simplifying, or compressing, a design’s information density, optimising its
geometry towards manufacturing and construction ends.
2. CONTEXT
Design rationalisation is not a new phenomenon but perhaps innate to the
architectural design-built condition: classical design media such as sketches
and drawings express the desire to formalize design and construction by
figurative documentation in addition to information conveyed by natural
language [1]. What is new today is the medium, namely design-computation.
The digital design paradigm that emerged over the past few
decades expanded the scope of design opportunity while increasing its
complexity both formally and configurationally [2, 3]. Parametric models
allow us to organize the design process using dynamic geometric and
numerical relationships, thus enhancing intuition by offering an opportunity
to observe design considerations interacting with one another. Our digital
models are not figurative or experiential representations of architecture;
they compute design. The thought process used to compose design models
is central to contemporary design practice, as these models are the prime
medium for explicitly expressing design intent. Within this context we focus
our study on a computational version of design rationalisation, as part of the
process of addressing complexity in contemporary design and providing the
means of gaining insight, overview and control.
The framework is akin to the classical models of Asimow [11] and Archer [12];
however, the reordering of events, with design description (synthesis)
preceding analysis, allows for less explicit and exploratory design
processes which may be formalized subsequently. Most importantly for
contemporary rationalisation within digital media, description, analysis and
optimisation, which are traditionally human operations, are assumed to be
embodied by computation or a composition thereof. Finally, we
allow these components to nest within one another in order to achieve
complex design graphs.
Indicatively, we may employ the framework in a wide range of
architectural and engineering applications, such as tall building design [7, 13],
where cladding unit configuration is computed using procedurally generated
geometry while performance metrics such as wall cavity and floor areas are
optimized via divide-and-conquer strategies. For the same building, [14]
presents a structural engineering study where description is based on finite
element modelling, structural analysis is used to evaluate the building's
performance, and stochastic optimisation is employed to improve its static
behaviour and reduce material use.
5. APPLICATION
We proceed by examining an application of the model via a case
study of the computer-aided design rationalisation of a complex building
envelope. The particular design process derives from an on-going project
that cannot be disclosed at the time of publication at the client's request.
However, the methods used are applicable to a wide range of free-form
building envelope designs.
5.1. Description
The design intent is to create a wall curved in plan, and potentially in
section, that may be simply generated and manufactured. We employ a pre-
rational strategy of piece-wise tangent surfaces of revolution (Figure 2), also
known as translational surfaces [5, 15], to define the cladding setting-out
geometry. The resulting envelope belongs to a family of surfaces exhibiting
certain desirable characteristics: (a) components are circular arcs, which are
simple to manufacture and to offset geometrically without deterioration; (b)
surface patches along the primary surface directions are planar quads, thus
suitable for curtain wall applications; and (c) the visual constraint of tangent
continuity, or apparent smoothness, is ensured by definition.
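As a minimal numerical sketch of property (b), with assumed illustrative dimensions rather than project data: sampling a translational surface p(u, v) = a(u) + b(v) from a plan curve and a section curve yields mesh quads whose opposite edges are equal translation vectors, hence parallelograms, hence planar.

```python
import numpy as np

def arc(radius, start, end, n):
    """Sample n + 1 points along a circular arc in a local 2D plane."""
    t = np.linspace(start, end, n + 1)
    return np.stack([radius * np.cos(t), radius * np.sin(t)], axis=1)

# Assumed illustrative dimensions: plan arc in XY, section arc in XZ.
a = arc(20.0, 0.0, np.pi / 3, 12)        # plan curve a(u)
b = arc(8.0, -np.pi / 8, np.pi / 8, 6)   # section curve b(v)

# Translational surface p(u, v) = a(u) + b(v), curves in orthogonal planes.
pts = np.zeros((len(a), len(b), 3))
pts[:, :, 0] = a[:, 0][:, None] + b[:, 0][None, :]
pts[:, :, 1] = a[:, 1][:, None]
pts[:, :, 2] = b[:, 1][None, :]

# Planarity check: the scalar triple product of each quad's edge vectors
# vanishes, because p(u+1, v) - p(u, v) is independent of v.
p00, p10, p11, p01 = pts[:-1, :-1], pts[1:, :-1], pts[1:, 1:], pts[:-1, 1:]
vol = np.einsum('ijk,ijk->ij', np.cross(p10 - p00, p01 - p00), p11 - p00)
assert np.allclose(vol, 0.0)  # all quads planar by construction
```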
5.2. Analysis
Setting-out geometry typically spans large regions of a building's envelope
and requires subdivision into smaller components which can be
manufactured using standard industrial methods. Each arc is thus divided by
a constant chord length into equal segments to pre-constrain the number
of dimensionally unique parts (Figure 3). Control of the number of types is a
logistical consideration, as large volumes of varying parts increase manufacturing
and assembly time and cost and encumber the process of building information
management on and off site.
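A minimal sketch of the constant-chord subdivision described above; the radii, sweep angles and the 1.5 m chord are assumed values for illustration, not project data.

```python
import math

def segment_arc(radius, sweep, chord):
    """Divide a circular arc (radius in m, sweep in radians) into equal
    chords of a fixed target length; returns (segment count, leftover)."""
    step = 2.0 * math.asin(chord / (2.0 * radius))   # central angle per chord
    count = int(sweep // step)                       # whole chords that fit
    leftover = 2.0 * radius * math.sin((sweep - count * step) / 2.0)
    return count, leftover

# Reusing one chord length across arcs of different radii keeps each arc's
# segments dimensionally identical, pre-constraining the number of types.
for radius, sweep in [(20.0, math.pi / 3), (12.0, math.pi / 4)]:
    n, rest = segment_arc(radius, sweep, 1.5)
    print(f"R = {radius:.1f} m: {n} chords of 1.5 m, leftover {rest:.3f} m")
```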
5.3. Optimisation
The pre-rational system presented is geometrically predictable in coarsely
indicating the number of unique elements required. We deduce that the
number of unique segments (n) in plan times the number of unique
segments in section (m) yields an upper bound of (n · m) total types. A
composite cylindrical envelope, independent of the number of patches,
requires one unit type; simple conic surfaces result in a number of types
equal to the number of sectional segments; and composites thereof require
at most the product of the number of patches and the number of sectional
segments (Figure 6).
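Written out as a direct restatement of the bounds above, with T the count of unique types:

```latex
T_{\mathrm{cylinder}} = 1, \qquad
T_{\mathrm{cone}} \le m, \qquad
T_{\mathrm{composite}} \le n \cdot m
```

For instance, a hypothetical composite with n = 4 unique plan segments and m = 6 sectional segments admits at most n · m = 24 unique types.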
The aim of design optimisation in this case study is to answer the
following design inquiry: can we further reduce the total number of unique
types and simplify the design and construction? This turns out to be a
common problem, or a family of problems, where formal complexity results
in dimensionally varying parts. Relevant previous work includes [9],
which attempts to rationalize the number of unique linear elements of a spatial
frame; [21], which studies a similar problem from an engineering perspective,
modifying the overall form and force distribution of a bridge design so as to
achieve a single cross-sectional element throughout; and [22, 23], which identify
typologies of curvature in fitting free-form envelopes with simple
developable surfaces.
Our study was performed at a late phase of design development where
there was no opportunity to modify the overall building form. Instead, the
rationale is based on the insight that the design contains variably sized parts
which are variably spaced due to segmentation. Units and the gaps between
them lie at different building scales, which hints at an opportunity to
leverage part variance by concentrating it into detailing voids instead of physical
members. In particular, we focus on the transom carriers and attempt to
reduce their length variance while allowing the gaps between them to fluctuate.
Description
Variation control implies a notion of similarity classification, whereby parts of
a certain closeness are averaged and refitted into the original design as long as
they do not alter it dramatically. Our case study is a one-dimensional
optimisation of linear member sizes, thus we do not need to examine
geometric affinity, which is far more challenging. Identifying a fixed number
of unique types which best approximate the variation among a set of samples,
bounded by an error metric, is a data clustering problem. In particular it is an
NP-hard problem addressed typically via numerical computation [24, 25].
Analysis
In order to understand the idea of variance concretely and measure the
performance of our optimisation we need a benchmark strategy. We thus
turn to statistics and use frequency histograms to record part size
distributions and later compare clustering strategies. Cladding element sizes
span the domain of real numbers, so for complex envelopes every
instance may potentially be unique. This is however a purely theoretical limit,
as manufacturing accuracies and construction tolerances allow us to define
a discrete lower bound. For simplicity we define a pseudo-theoretical
distribution increment of one millimetre, assuming this is the very least
to which one may round unit sizes, and effectively count the number of non-
empty histogram bins (Figure 7).
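A spreadsheet-equivalent sketch of this benchmark; the lengths are randomly generated stand-ins for the undisclosed project data.

```python
import numpy as np

# Hypothetical transom-carrier lengths in millimetres (stand-in data).
rng = np.random.default_rng(0)
lengths = rng.uniform(1200.0, 1800.0, size=400)

# Pseudo-theoretical limit: 1 mm bins; the number of non-empty bins is the
# count of dimensionally unique parts after rounding to the millimetre.
edges = np.arange(np.floor(lengths.min()), np.ceil(lengths.max()) + 1.0)
hist, _ = np.histogram(lengths, bins=edges)
print("unique 1 mm types:", np.count_nonzero(hist))
```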
Generally we wish to perform better, so we define a parametric
increment (bin size) of a few millimetres, round elements downwards to their
nearest cluster, measure the average and maximum errors and count the
number of non-empty bins, or unique unit types. The maximum rounding error is
bounded by the cluster's size, while the average error is approximately half of
that. For 5 mm bins we are certain that the worst rounding error will be
strictly below 5 mm, while on average the error will be approximately
2.5 mm. The process is computationally trivial, as it may be computed using a
standard spreadsheet. Yet, expectedly, the results are suboptimal, as by
definition the histogram approach disregards all prior density characteristics
of the distribution. By implication this means that we have no control over
the end-effect of rationalisation, as for instance gaps among units may
change abruptly across the envelope.
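The parametric version of the same benchmark, again over stand-in data; note that the measured error bounds match the 5 mm example above.

```python
import numpy as np

def round_down_benchmark(lengths, bin_size):
    """Round every length down to its bin's lower bound; report unique
    type count plus average and maximum rounding errors."""
    rounded = np.floor(lengths / bin_size) * bin_size
    errors = lengths - rounded            # 0 <= error < bin_size
    return len(np.unique(rounded)), errors.mean(), errors.max()

rng = np.random.default_rng(0)
lengths = rng.uniform(1200.0, 1800.0, size=400)   # stand-in data
types, avg_e, max_e = round_down_benchmark(lengths, 5.0)  # 5 mm bins
print(f"{types} types, avg {avg_e:.2f} mm, max {max_e:.2f} mm")
```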
Optimisation
To select better cluster pivots we developed a constrained k-means
algorithm. K-means clustering is a statistical method for minimising
within-cluster variance [26, 27]. The algorithm selects a number of cluster
centres at random, associates each element with its closest centre and
repositions the centres at the mean of each cluster's elements
(Figure 8). The algorithm converges after a few iterations but is sensitive
to the initial seed-cluster locations and requires repeated attempts.
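A minimal one-dimensional Lloyd-style sketch of the method as just described; the paper does not give implementation details, so this is an assumed reconstruction over scalar part sizes.

```python
import numpy as np

def kmeans_1d(values, k, iterations=50, seed=0):
    """Plain k-means on scalar part sizes (numpy array): random initial
    centres, assign each value to its nearest centre, recentre at means."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(values, size=k, replace=False).astype(float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iterations):
        labels = np.abs(values[:, None] - centres[None, :]).argmin(axis=1)
        for j in range(k):
            members = values[labels == j]
            if members.size:                 # guard against empty clusters
                centres[j] = members.mean()
    return centres, labels

# Sensitivity to the initial seeds suggests repeating with several seeds
# and keeping the run with the lowest within-cluster error.
```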
Our application (Figure 9) uses a modified k-means method, as we
cannot allow units to increase in dimension upwards to their nearest cluster:
elongated units may result in geometric clashes. Instead, we pin a cluster to
the lowest bound of the range and skew the final cluster centres to their
respective lowest bounds. This subtle modification challenges the distance
metric of the original algorithm. Intuitively it doubles the error, as the
rounding pivot moves from a cluster's mid-point to its lower bound. We describe
our method as a two-times k-means strategy. Even so, the algorithm still
performs on average 11% and 35% better than the benchmark in maximum
and average errors, respectively. Moreover, k-means' within-cluster error minimization
ensures that gaps will appear smooth within typology groups and only
jump suddenly between clusters.
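One plausible reading of the lower-bound modification, layered on the sketch above: after convergence each pivot is snapped to its cluster's minimum, so no unit is ever lengthened.

```python
def skew_to_lower_bound(values, centres, labels):
    """Snap each converged k-means centre to its cluster's minimum so
    every part rounds down; expects numpy arrays, e.g. the outputs of
    kmeans_1d above. Rounding errors stay non-negative by construction."""
    pivots = centres.copy()
    for j in range(len(centres)):
        members = values[labels == j]
        if members.size:
            pivots[j] = members.min()
    errors = values - pivots[labels]   # worst case ~ doubles vs. mid-point
    return pivots, errors
```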
However, while k-means improves the within-cluster error, it cannot address
the problem of minimising the maximum error, which is often visually more
pronounced. A qualitative interpretation suggests that minimising the
maximum error expresses the desire to control the largest visible gap
change across an envelope, while minimising the average error implies that
small transitions need to be smooth. To minimise the maximum within-cluster
error we employ the k-tMM strategy [28, 29]. The algorithm splits an initial
cluster containing every envelope unit into sub-clusters while maintaining
the min/max criterion (Figure 10). Eventually, each unit is rounded to the
lowest bound of its cluster, and the algorithm deterministically converges
to within two times the optimal [28]. Our implementation achieves on average
16% and 50% better results than the benchmark strategy in terms of maximum
and average errors, respectively.
Figure 10: Min/max clustering algorithm process diagram.
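The exact k-tMM procedure is only sketched in the text; the cited guarantee of two times the optimal [28, 29] is consistent with farthest-point (k-centre) splitting, so the following is a hedged one-dimensional reconstruction, not the author's code.

```python
import numpy as np

def minmax_round_1d(values, k):
    """Farthest-point splitting: promote the value farthest from all
    current pivots until k pivots exist; the maximum cluster radius is
    then within two times the optimal. Units round down to their
    cluster's lowest bound, as in the paper. Expects a numpy array."""
    pivots = [float(values[0])]
    while len(pivots) < k:
        d = np.abs(values[:, None] - np.array(pivots)[None, :]).min(axis=1)
        pivots.append(float(values[d.argmax()]))
    pivots = np.array(pivots)
    labels = np.abs(values[:, None] - pivots[None, :]).argmin(axis=1)
    lows = np.array([values[labels == j].min() for j in range(k)])
    return lows[labels]                # each unit rounded down to its type
```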
6. CONCLUSION
We presented a design-built rationalisation framework alongside a case
study demonstrating how we may integrate pre- and post-rational principles
to address the problem of high levels of part variation in formally complex
designs. We conclude by offering observations on the thinking process behind
rationalisation systems within digital media.
The question of when or how one may select a pre- or
post-rational approach to rationalisation cannot be seen as an either/or
proposition. Instead we may use a set of indicators to assist and guide the
decision-making process. Pre-rational principles offer a visual paradigm for
expressing design information, primarily via associative geometric modelling
techniques. Component/configuration organizational principles allow pre-
rational modelling to achieve complex design compositions. We term
this notion systemic compactness, whereby simple operations yield
expressively complex results. Interestingly, this behaviour is achieved while
ensuring certain upper bounds in terms of expected complexity. In our
case study we can coarsely predict the expected part variance already from the
input parameters generating the design surface. Sensitivity-wise, pre-rational
models also behave fairly linearly towards small parametric variations. We
term this notion systemic predictability: the degree to which we can
foresee how generative actions affect results.
The rules captured by pre-rational systems are usually simple and may
be replicated with limited or no computation using traditional media.
Revolute building envelopes may, however inefficiently, be drawn by hand,
because intrinsically they involve merely affine geometric transformations. Pre-
rational methods in this respect are somewhat calculation-averse. Simplicity
at the descriptive level, though, maps naturally to conventional design practice.
ACKNOWLEDGEMENTS
The author would like to thank the International Design Centre at the
Singapore University of Technology and Design for supporting this research.
REFERENCES
1. Fischer, T., Geometry Rationalisation for Non-Standard Architecture, Architecture Science, 2012, 5, 25-47.
2. Kolarevic, B., Architecture in the Digital Age: Design and Manufacturing, Taylor and Francis Group, New York and London, 2003.
3. Kolarevic, B. and Klinger, K., Manufacturing Material Effects: Rethinking Design and Making in Architecture, Routledge, New York, 2008.
4. Shelden, D. R., Digital surface representation and the constructability of Gehry's architecture, PhD Thesis, Massachusetts Institute of Technology, 2002.
Stylianos Dritsas
Singapore University of Technology and Design
Architecture and Sustainable Design
L2R10, 20 Dover Drive, Singapore 138682
S Dritsas, [email protected]