COMPUTATIONAL MODELS IN ARCHITECTURE

APPLIED VIRTUALITY BOOK SERIES VOL. 12

COMPUTATIONAL MODELS IN ARCHITECTURE — TOWARDS COMMUNICATION IN CAAD.
SPECTRAL CHARACTERISATION AND MODELLING WITH CONJUGATE SYMBOLIC DOMAINS

NIKOLA MARINČIĆ

BIRKHÄUSER
Basel

Nikola Marinčić
Chair for Computer Aided Architectural Design (CAAD),
Institute for Technology in Architecture (ITA), Swiss Federal Institute of Technology (ETH),
Zurich, Switzerland

SERIES EDITORS
Prof. Dr. Ludger Hovestadt
Chair for Computer Aided Architectural Design (CAAD),
Institute for Technology in Architecture (ITA), Swiss Federal Institute of Technology (ETH),
Zurich, Switzerland

Prof. Dr. Vera Bühlmann


Chair for Architecture Theory and Philosophy of Technics, Institute for Architectural Sciences,
Technical University (TU) Vienna, Austria

Acquisitions Editor: David Marold, Birkhäuser Verlag, A-Vienna


Content and Production Editor: Angelika Heller, Birkhäuser Verlag, A-Vienna
Proof reading / Copy editing: Prof. Dr. Michael Doyle, École d’architecture de l’Université
Laval, Canada
Layout and Cover Design: onlab, CH-Geneva, www.onlab.ch
Typeface: Korpus, binnenland (www.binnenland.ch)
Printing: Christian Theiss GmbH, A-St. Stefan/Lavanttal

Library of Congress Control Number: 2019933808

Bibliographic information published by the German National Library


The German National Library lists this publication in the Deutsche Nationalbibliografie; detailed
bibliographic data are available on the Internet at http://dnb.dnb.de.

This work is subject to copyright. All rights are reserved, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation,
broadcasting, reproduction on microfilms or in other ways, and storage in databases.
For any kind of use, permission of the copyright owner must be obtained.

ISSN 2196-3118
ISBN 978-3-0356-1848-8
e-ISBN (PDF) 978-3-0356-1862-4

© 2019 Birkhäuser Verlag GmbH, Basel


P.O. Box 44, 4009 Basel, Switzerland
Part of Walter de Gruyter GmbH, Berlin/Boston

987654321

www.birkhauser.com



TABLE OF CONTENTS
Abstract 7
Preface 9
Acknowledgements 11
I An overview: Architecture and computation 13
   I background: a quest for coherence 15 — II computers and architecture 33 — III towards a new vision of architectonics 105
II Architectonics of communication: how different natures communicate 125
   I natural communication model 126 — II glossematics 146
III An instrument for communication: Self-organizing model 219
   I self-organizing map 220 — II self-organizing model 225
IV An experiment: Communication and natures of architectural representation 237
   I nature(s) of architectural representation 238 — II computational precedents 243 — III spectral characterisation of an abstract object 247 — IV modelling with conjugate symbolic domains 271
V EPILOGUE 283
REFERENCES 289
IMAGE AND ILLUSTRATION CREDITS 295

ABSTRACT
This work deals with computational models in architecture, with the
ambition of accomplishing three objectives:
1 To position the established computational models in architecture
within the broader context of mathematical and computational
modelling.
2 To challenge computational models in architecture with contem-
porary modelling approaches, in which computation is regarded
from the perspective of communication between different
domains of a problem.
3 To show how within the paradigm of communication, it is possi-
ble to computationally address architectural questions that can-
not be adequately addressed within the current computational
paradigm.
The first part of the work begins in the 19th century, delves into the body
of thinking from which computation emerged and traces two general at-
titudes towards mathematical modelling, which will each eventually lead
to different interpretations of computation. The first one, described as
the logicist tradition, saw the potential of formal, mechanised reasoning
in the possibility of constructing the absolute foundation of mathemat-
ics, its means of explanation and proof. The second one, the algebraist
tradition, regarded formalisation within a larger scope of model-theoret-
ic procedures, characterised by creatively applying abstraction towards
a certain goal. The second attitude proved to be a fertile ground for the
redefinition of both mathematics and science, thus paving a way for
contemporary physics and information technology. On the basis of the
two traditions, this dissertation identified a discrepancy between the
computational models in architecture, following the first tradition, and
those commonly used in information technology, following the second.
The Internet revolution, initiated by the development of search
engines and social media, is recognised as indicative of the changing
role of computers, from “computing machinery” towards the generic
infrastructure for communication. In this respect, three contemporary
models of communication, exponents of the algebraist tradition, are
presented in detail in the second part of the work. As a result, the self-
organizing model is introduced as the concrete implementation of the
ideas appropriated from communication models.
In the last part of the work, the self-organizing model is applied
to the problem of similarity between spaces, on the basis of their archi-
tectural representation. By applying partition and generalisation proce-
dures of the self-organizing model to a large number of floor plan images,

a finite collection of elementary geometric expressions was extracted,
and a symbol attached to each instance. This collection of symbols is re-
garded as the alphabet, by means of which any plan created by the same
conventions can be described as the writing of that alphabet. Finally,
each floor plan is represented as a chain of probabilities based upon its
individual alphabetic expression, as in a written language, and these values
are used to compute similarities between plans.
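To give a rough flavour of this last step, here is a minimal sketch under strong simplifying assumptions: it presumes each floor plan has already been transcribed into a string over the extracted alphabet, uses plain symbol frequencies in place of the dissertation's chain of probabilities, and compares plans with an ordinary cosine measure. The alphabet and plan strings are hypothetical.

    from collections import Counter
    from math import sqrt

    def symbol_probabilities(plan: str, alphabet: str) -> list:
        # Relative frequency of each alphabet symbol in the transcribed plan.
        counts, total = Counter(plan), len(plan)
        return [counts[s] / total for s in alphabet]

    def cosine_similarity(p, q) -> float:
        dot = sum(a * b for a, b in zip(p, q))
        return dot / (sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q)))

    alphabet = "ABCD"            # hypothetical alphabet of elementary geometric expressions
    plan_1 = "AABBBCD"           # hypothetical transcriptions of two floor plans
    plan_2 = "ABBBCCD"

    p1 = symbol_probabilities(plan_1, alphabet)
    p2 = symbol_probabilities(plan_2, alphabet)
    print(round(cosine_similarity(p1, p2), 3))   # a crude similarity between the two plans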



PREFACE
Some might find this doctoral thesis unconventionally written. Instead
of circumscribing its scope and concentrating its efforts on accomplish-
ing a single objective within that scope, it engages with an unusually
extensive body of knowledge with the aim of providing additional angles
to its principal research domain: computational models in architecture.
This body of knowledge involves early analytic philosophy, computabil-
ity and probability theory, formal logic, quantum physics, abstract alge-
bra, computer-aided design, computer graphics, glossematics, machine
learning and architecture. However, the reason for such a comprehen-
sive approach and perhaps radical gesture is not to claim any expertise
nor mastery over the aforementioned fields of knowledge. To the con-
trary, it is a matter of methodology, aiming to operate in a more archi-
tectural manner, without losing the necessary rigour and consistency
required of an academic work. An architect’s effort towards creating a
masterful work, whether it is a building or a theory, always involves the
integration of a wide variety of aspects lying outside of his/her own
area of expertise. I see this apparent difficulty as a potential to enrich
my work, and as a source of inspiration towards finding new, unexplored
research perspectives. One more reason in favour of such an approach can
be justified by the very theories cited within this work, especially the
concept of communication. To communicate with someone or some-
thing involves a responsive spectrum of frequencies on both sides, and
tuning oneself to become sensitive to the potential resonances. The
wider and richer this spectrum is, the more meaningful communication
becomes. In this sense, the aim of this work is to make the spectrum of
the research as resonant as possible, hoping to establish a more satisfy-
ing communication with the field of computational models in architec-
ture, as well as with the reader.

Nikola Marinčić, Spring 2019

ACKNOWLEDGEMENTS
I would like to express my deepest gratitude to Professor Ludger
Hovestadt for introducing me to a fantastic, new world of ideas that
have been an abundant source of inspiration in the last six years. This
work would not have been possible without the continuous support,
remarkable patience and absolute freedom he has given me to pursue
my ideas. I am especially indebted to Professor Vera Bühlmann who pro-
vided me with the means to become a literate person, and so much more.
I see this work as a showcase for the intellectual abilities I have acquired
or strengthened by taking part in her theory colloquium for five years.
I would also like to offer my special thanks to Professor Elias Zafiris
for an invaluable introduction to abstract mathematics and quantum
physics, and for giving me a deep insight into his natural communication
model. His ideas provided a missing link to my thinking and allowed me
to complete my thesis with a feeling of great satisfaction.
I would like to thank my colleagues from the chair of CAAD for
creating a truly unique and stimulating environment. I will always be
proud of sharing it with you. My special thanks are extended to Jorge
Orozco, Miro Roman, Mihye An, Diana Alvarez-Marin and Vahid
Moosavi, who provided me with great feedback, and with whom I had
amazing discussions throughout my whole engagement as a researcher.
I would also like to thank Dennis Lagemann, Poltak Pandjaitan and
David Schildberger for their help with translating the abstract of the
thesis into German, and to Mario Guala for his great help with all the
administrative aspects that made my stay at ETH much easier. I also
wish to express my gratitude to Michael Doyle for doing a great job with
copyediting this dissertation for publication.
I would also like to thank my lovely wife Vanessa, for her relent-
less support and love through the challenging time of writing my thesis,
and for helping with the proof-reading. Finally, I wish to thank my father
Miodrag and my sister Ivana for always having faith in me, and for doing
their best to help me in my pursuit of happiness.

Nikola Marinčić, Spring 2019

I  AN OVERVIEW: ARCHITECTURE AND COMPUTATION

Architecture and information technology… two species similar
in kind, neither of them being in the least disciplinal: both affect
everything, both are arts of gathering things. The one, 2,500 years
old and dignified, and the other, just fifty years of age and impatient.

L. Hovestadt, “Cultivating the Generic” (2014: 9)

What today seems to be a passionate love affair between architecture and


information technology is in fact quite a delicate relationship, full of mis-
understandings. The first “universal machines,” built in the 1940s, emerged
as a side effect of the resolution of the 19th century attempt to ground all
formalized, mathematical knowledge in logic. 1 Soon, computation was seen
essentially as the mechanised treatment of logic. 2 Nevertheless, computers
quickly got the attention of almost every field of human endeavour, including
architecture. 3 With a certain amount of scepticism, acquired in the long tra-
dition built upon mastership, architecture did not embrace its new potential
“partner” very easily. Early researchers saw a lot of promise in computation,
but for the large majority of practitioners, it seemed to be in poor taste 4 to
simply embrace “logic” as a means to mechanise their articulations with the
promise of greater efficiency and formal clarity. However, with the expansion
of personal computers and intuitive computer-aided design software, the re-
sistance became futile. An architecture was born out of generic drafting and
modelling solutions, which employed computation to mimic the established
modes of design. 5 While information technology started exploring new ideas,
architectural research remained on the path of the “logicist” tradition.
Today we live in a different world. Computers are omnipresent in
our existence, and are no longer about logic. As the old identities slowly
dissolve, new ideas are emerging on what computers are all about. These
new ideas come from a higher level of abstraction and offer new unexpect-
ed vistas. 6 In this chapter, I will give an account of both old and new, with
a hope that architecture might just find a very good partner in information
technology, and hopefully reinvent itself in the digital.

1 Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem.”


Turing’s seminal paper on computability comes as an answer to Hilbert’s decision prob-
lem (Entscheidungsproblem) but it starts with providing a mechanically constructed
arithmetic of “computable” numbers.
2 “Kurt Gödel has reduced mathematical logic to computation theory by showing that the
fundamental notions of logic … are essentially recursive. Recursive functions are those
functions which can be computed on Turing machines, and so mathematical logic may
be treated from the point of view of automata.” Burks, editor’s introduction to Theory
of Self-Reproducing Automata, 25.
3 Mitchell, foreword to Architecture’s New Media, xi.
4 “Computational methods to support the synthesis of design solutions have fascinated
architectural researchers and horrified the practitioners.” Kalay, Architecture’s New
Media, 237.
5 Kalay, 181.
6 See: Hovestadt, “Elements of Digital Architecture,” 28–116.



I background: a quest for coherence

Necessity or contingency?

The story of symbolic computation emerged out of a peculiar state of


affairs that started in the 19th century and got its epilogue in the first
half of the 20th century. Before that time, mathematics appeared to be
intimately linked to our physical reality, a phenomenon that Erich Reck
described with the metaphor of an umbilical cord. 7 Geometry was safely
grounded in Euclid’s axiomatic method, dating from around 300 BC. This
method consisted of three parts: Axioms, or postulates, are statements
which are accepted without proof, and act as foundations of a theory;
theorems are statements that are derived from the axioms and act as a su-
perstructure (of knowledge) built upon the foundations; logic is a formal
apparatus used to deduce theorems from axioms. Logic was established
in antiquity and can be traced back to Plato and Aristotle. Aristotelian
logic introduced three laws of reasoning in the natural language: laws of
identity, contradiction and the excluded middle. 8 The interesting thing
about logic was that it preserved the truthfulness of the statements it
derived from axioms. It was believed that if the axioms were true, every-
thing that was logically deducible from the axioms necessarily needed to
be true as well. In fact, geometry and logic were so stable that their link
to physical reality was not questioned for thousands of years. 9
It was not recognised for a long time that the truthfulness of axi-
oms of logic and geometry was in fact accepted on the basis of intuition,
which could only confirm that such statements are in fact self-evident. 10
Examples of such evident truths were Euclid's first four postulates of pla-
nar geometry, which seemed to conceptualise our experience of space:
1 Let it have been postulated to draw a straight-line from any point
to any point.
2 And to produce a finite straight-line continuously in a straight-line.
3 And to draw a circle with any centre and radius.
4 And that all right-angles are equal to one another. 11
Euclid’s fifth postulate about parallel lines in two-dimensional geom-
etry seems to be of a different kind than the previous four:

7 “With this first conception, geometry is firmly attached to physical reality—the


umbilical cord between them is still in place.” Reck, “Frege, Natural Numbers, and
Arithmetic’s Umbilical Cord,” 431.
8 Encyclopædia Britannica, s.v. “Laws of thought,” accessed September 1, 2017, https://www.britannica.com/topic/laws-of-thought.
9 “As late as 1787, the German philosopher Immanuel Kant was able to say that since
Aristotle formal logic ‘has not been able to advance a single step, and is to all appear-
ances a closed and completed body of doctrine.’” Nagel and Newman, Gödel’s Proof, 30.
10 Burge, “Frege on Knowing the Third Realm,” 1.
11 Euclid, Elements of Geometry, 7.

5 And that if a straight-line falling across two (other) straight-
lines makes internal angles on the same side (of itself whose
sum is) less than two right-angles, then the two (other) straight-
lines, being produced to infinity, meet on that side (of the origi-
nal straight-line) that the (sum of the internal angles) is less
than two right-angles (and do not meet on the other side). 12
This postulate is logically equivalent to the assumption that only one
parallel can be drawn through a point outside a given line. 13 The fifth
postulate introduced a great deal of problems to mathematicians, as it
is neither self-evident, nor can it be proved within Euclid's axiomatic sys-
tem. 14 Nevertheless, it somehow appears to be a correct statement. Such
incoherencies were something that science and mathematics of the 19th
century were determined to eradicate.
Unlike other branches of mathematics, geometry was considered
to be the most stable due to its axiomatic method. It seemed natural to
ask whether such a secure axiomatic system could also be established
elsewhere. Soon, many branches of mathematics were supplied with
“what appeared to be adequate sets of axioms.” 15 It was of the utmost
importance to establish an adequate axiomatic system of arithmetic, as
it would securely ground other branches of mathematics on top of it. 16
In an attempt to use algebra to ground infinitesimal calculus, Cantor,
Cauchy, Weierstrass, Dedekind, and others, showed how different no-
tions in analysis could be defined in arithmetical terms. 17 The promise
of axiomatisation was great: For each area of inquiry, having such a set
of axioms would yield endless amounts of true propositions.
In the mid 19th century, the work of Lobachevsky, Bolyai,
Gauss and Riemann 18 began to challenge Euclid’s axiomatic system. In
1829, Lobachevsky developed a “geometry” by appropriating the first
four axioms of Euclid, asserting that in his geometry the famous fifth

12 Euclid, 7.
13 Nagel and Newman, Gödel’s Proof, 6.
14 “The chief reason for this alleged lack of self-evidence seems to have been the fact that
the parallel axiom makes an assertion about infinitely remote regions of space. Euclid
defines parallel lines as straight lines in a plane that, “being produced indefinitely in
both directions,” do not meet. Accordingly, to say that two lines are parallel is to make
the claim that the two lines will not meet even ‘at infinity’.” Nagel and Newman, 6.
15 Nagel and Newman, 3.
16 Nagel and Newman, 3.
17 For example: “instead of accepting the imaginary number ‘√−1’ as a somewhat mysterious
“entity,” it came to be defined as an ordered pair of integers (0, 1) upon which certain
operations of “addition” and “multiplication” are performed. Similarly, the irrational
number √2 was defined as a certain class of rational numbers—namely, the class of
rationals whose square is less than 2.” Nagel and Newman, Gödel’s Proof, 32. See also:
Gauthier, Towards an Arithmetical Logic, 1.
18 “However, the geometric starting point of Riemann was not the non-Euclidean ge-
ometry, of which Riemann apparently had not even taken note, but rather the theory
of surfaces developed by Carl Friedrich Gauss.” Jost, historical introduction to On the
Hypotheses Which Lie at the Bases of Geometry, 26.



postulate was not a true statement. This would become known as Bolyai–
Lobachevskian geometry. Compared to Euclidean geometry, which
was considered to mirror physical reality 19, the hyperbolic geometry
of Lobachevsky and Bolyai was radically different. This geometry was
able to describe a world that could not be observed empirically, while at
the same time remaining a perfectly valid mathematical construction.
Finally, in 1868, Beltrami demonstrated the independence of the fifth
postulate from the other axioms. Implications of these events shook the
very idea of mathematical foundations. If the validity of mathematical
statements could not be guaranteed by the truthfulness of the axioms, as
they need not be self-evident or mirror reality anymore, what remains
of the ideas of grounding and validation? Moreover, if mathematics is
not about the truths of our world, then what is it about? Gradually, it
became clear that the position of necessity within mathematics was to
be shifted from the truthfulness of its axioms to the validity of the in-
ferences it employed. 20 Mathematics became abstract and stripped of
meaning, as illustrated by the famous quote from Russell:
… mathematics may be defined as the subject in which we never know
what we are talking about, nor whether what we are saying is true. 21
What replaced the method of validating a system of premises on the
basis of its truthfulness was a new idea of internal coherence, known
as consistency. If an axiomatic system was to be consistent, it needed
to guarantee that no mutually contradictory theorems can be deduced
from the postulates. 22 With that requirement, an important question
begged to be asked: Are even the axioms of Euclid’s system consistent?
There was no single approach to the idea of creating a consistent system,
and the interest in this question by two equally rigorous but ideologi-
cally quite distinct schools of thought warrants attention. The approach
of the first group of mathematicians, including George Boole, Richard
Dedekind and David Hilbert, among others, can be characterised as an
algebraic approach to the idea of consistency. On the other side, Gottlob
Frege, Bertrand Russell and their school of thought established an ap-
proach based on formal logic.

19 “Against Leibniz and Wolff, Kant thus emphasises and elaborates the axiomatic nature
of geometry, i.e., that geometry has real axioms and that the propositions of geometry
cannot be obtained analytically from definitions.” Jost, 28.
20 “We repeat that the sole question confronting the pure mathematician (as distinct from
the scientist who employs mathematics in investigating a special subject matter) is not
whether the postulates he assumes or the conclusions he deduces from them are true,
but whether the alleged conclusions are in fact the necessary logical consequences of
the initial assumptions.” Nagel and Newman, Gödel’s Proof, 8.
21 Russell, Mysticism and Logic, 58.
22 Nagel and Newman, Gödel’s Proof, 10.

Algebraist tradition The ‘algebraist’ approach heavily relied
on abstraction as the operative means to create coherent but contingent
frameworks that did not offer a unifying consensual definition of the
basis. The objectivity which they sought to establish within algebra was
not something they considered as already given, but rather something
that needed to be produced.
George Boole was the first to revolutionise the study of logic
after Aristotle. In his 1847 book The Mathematical Analysis of Logic,
he established the study of logic on a purely algebraic basis. His alge-
bra of logic provided a precise notation “for handling more general and
more varied types of deduction than were covered by traditional logical
principles.” 23 In 1854, he published his second monograph on algebraic
logic, known as An Investigation of the Laws of Thought. The most im-
portant invention in his work was the equational treatment of logical
statements, which allowed him to assess the validity of logical problems,
and to extend their scope. In his book, he demonstrated how to trans-
form any logical problem into an operative algebraic equation. By solv-
ing the algebraic equation, the logical problem was able to be resolved. 24
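As a hedged illustration of this equational treatment (a sketch in modern terms, not Boole's own notation): if propositions take only the values 0 and 1, with multiplication read as "and" and 1 - x as "not", then a law of thought becomes an equation that must hold under every assignment, and a deduction can be checked by verifying such an equation.

    from itertools import product

    # Boole-style algebra of logic over the values {0, 1}:
    # multiplication acts as AND, 1 - x as NOT, x + y - x*y as inclusive OR.
    def NOT(x):    return 1 - x
    def AND(x, y): return x * y
    def OR(x, y):  return x + y - x * y
    def IMPLIES(x, y): return OR(NOT(x), y)

    # Boole's principle of contradiction, "nothing can both be and not be",
    # becomes the equation x * (1 - x) = 0, valid for every 0/1 value of x.
    assert all(x * (1 - x) == 0 for x in (0, 1))

    # An ordinary deduction is settled the same way: the syllogism
    # "all M are P, S is M, therefore S is P" holds iff the expression below
    # evaluates to 1 under every assignment of 0/1 values.
    valid = all(
        IMPLIES(AND(IMPLIES(m, p), IMPLIES(s, m)), IMPLIES(s, p)) == 1
        for m, p, s in product((0, 1), repeat=3)
    )
    print(valid)  # True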
One of the most misunderstood algebraists of the time was the
mathematician Richard Dedekind. 25 His approach to the problem of
mathematical foundations was to arithmetise mathematics, but without
appealing to numbers and the operations on them as naturally given. For
Dedekind, natural numbers were a free creation of the human mind and
abstraction was a tool to think with. In his essay “On Continuity and
Irrational Numbers” (1872), he attempted to rigorously define the notion
of a continuous magnitude, which at the time rested upon geometrical in-
tuitions. 26 His method, known today as the Dedekind cut, constructed ir-
rational and real numbers by freeing them from any content. 27 Dedekind
considered the application of ordinal numbers as central, which allowed
him to identify numbers structurally. He defined the cut as a separa-
tion which possesses one property, namely that it separates the domain

23 Nagel and Newman, 31.


24 Boole, An Investigation of the Laws of Thought, 24–38.
25 “… great philosophers, such as Cantor and Dedekind, are treated as philosophical naïfs,
however creative, whose work provides, at best, fodder for philosophical chewing. Not
only have we inherited from Frege a poor regard for his contemporaries, but, taking
the critical parts of his Grundlagen as a model, we in the Anglo-American tradition of
analytic philosophy have inherited a poor vision of what philosophy is.” Tait, “Frege
Versus Cantor and Dedekind: on the Concept of Number,” 215.
26 “The statement is so frequently made that the differential calculus deals with continu-
ous magnitude, and yet an explanation of this continuity is nowhere given; even the
most rigorous expositions of the differential calculus do not base their proofs upon
continuity but, with more or less consciousness of the fact, they either appeal to geo-
metric notions or those suggested by geometry, or depend upon theorems which are
never established in a purely arithmetic manner.” Dedekind, Essays on the Theory of
Numbers, 2.
27 Tait, “Frege Versus Cantor and Dedekind: on the Concept of Number,” 222.



of rational numbers into two classes A1 and A2, where every number a1
belonging to A1, is smaller than every number a2 from A2. 28

fig. 1  Dedekind cut (Hyacinth, 2015)
[Figure: the cut at √2, approached from both sides by rational approximations such as 1, 7/5, 41/29, 239/169, 577/408, 1393/985, …]
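A minimal sketch of the idea in modern terms (the function name and the bisection loop below are illustrative additions, not Dedekind's own construction): the cut "at" √2 is nothing but a rule assigning every rational number to A1 or A2, and the new irrational number is identified with that separation itself.

    from fractions import Fraction

    # The cut for the square root of 2 is given purely by a rule separating the
    # rationals into two classes A1 and A2, with every member of A1 smaller
    # than every member of A2.
    def in_lower_class(q: Fraction) -> bool:
        """True if q belongs to A1, i.e. q is negative or q*q < 2."""
        return q < 0 or q * q < 2

    # The cut never names the square root of 2 itself; it is fixed by what it separates.
    for q in [Fraction(1), Fraction(7, 5), Fraction(41, 29), Fraction(3, 2), Fraction(17, 12)]:
        print(q, "->", "A1" if in_lower_class(q) else "A2")

    # Members of A1 and A2 narrow an interval around the cut from both sides.
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(20):
        mid = (lo + hi) / 2
        if in_lower_class(mid):
            lo = mid
        else:
            hi = mid
    print(float(lo), float(hi))  # both approach 1.41421356...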
His construction of natural numbers goes “beyond logic because it
appeals to entities which, although created by the intellect, are nev-
ertheless objectively available to it.” 29 In the chapter “Architectonics
of Communication: How Different Natures Communicate,” we will in-
vestigate the mathematical framework of category theory in the light
of Dedekind’s legacy of free creation of numbers, and his attention to
structural properties.
David Hilbert’s early work was greatly inspired by the ad-
vances in the axiomatic treatment of geometry. In The Foundations of
Geometry (1899) 30, Hilbert devised a set of twenty axioms as a founda-
tion of Euclidean geometry. 31 Unlike Euclid’s system, Hilbert’s axioms
are not about the physical space, but “rather, they are taken to form the
definition or characterisation of a certain abstract structure.” 32 In oth-
er words, Hilbert’s axioms are not self-evident truths, but contingent
truths, which employ algebra to construct their consistency. 33 Since
the algebraic characterisation cannot be “accommodated within any
one ideal and elemental order,” 34 Hilbert provided the contingent basis
as the source of consistency. 35 Some of the approaches to establish con-
sistency required an infinite number of elements; others simply shifted

28 Dedekind, Essays on the Theory of Numbers, 12–13.


29 Potter, Reason’s Nearest Kin, 282.
30 The original title is Grundlagen der Geometrie.
31 Hilbert, The Foundations of Geometry, 2–16.
32 Reck, “Frege, Natural Numbers, and Arithmetic’s Umbilical Cord,” 431.
33 “The geometric statement that two distinct points uniquely determine a straight line is
then transformed into the algebraic truth that two distinct pairs of numbers uniquely
determine a linear relation; the geometric theorem that a straight line intersects a circle
in at most two points, into the algebraic theorem that a pair of simultaneous equations
in two unknowns (one of which is linear and the other quadratic of a certain type)
determine at most two pairs of real numbers.” Nagel and Newman, Gödel’s Proof, 15.
34 Bühlmann, “Continuing the Dedekind Legacy Today,” 6.
35 Riemann’s “…idea is that if the metric properties of the space do not necessarily follow
from its structure, then the space can carry several possible metrics, and the math-
ematician then can specify any such hypothetical relations and examine the resulting
structures and distinguish them with regard to their characteristics. Hilbert will then
raise this as the axiomatic method to a systematic program.” Jost, presentation of the
text On the Hypotheses Which Lie at the Bases of Geometry 46.

the problem of consistency of one system, by placing the responsibility
on another system used as its base. Hilbert found these approaches un-
satisfactory. In the next twenty years, he became obsessed with the idea
of proving the absolute consistency of an axiomatic system, which led to
the development known as Hilbert’s program. 36

Logicist tradition The ‘logicist’ approach to consistency emerged


into a dominant paradigm whose followers appropriated computation as a
child of their own tradition. Its proponents wished to encapsulate an ulti-
mate objectivity within a system of foundations, upon which the whole of
mathematics could rest. The implementation of this idea required ground-
ing all of mathematics in logic. The objective was to construct an ideal, fool-
proof reasoning apparatus on a logical basis, which could, ideally, (in)
validate any logical statement.
The most prominent member of the logicist party was Gottlob
Frege. If Boole’s idea was to ground logic within mathematics by means of
algebra, Frege’s idea was quite the opposite. He wished to ground the whole
of mathematics in arithmetic by means of the powerful deductive logic.
Frege claimed that all the axioms of arithmetic could be “deduced from a
small number of basic propositions certifiable as purely logical truths.” 37
In Begriffsschrift (1879), Frege invented quantification theory, which was
a first step towards a precise notion of purely logical deduction. 38 The
“conceptual notation” he defined allowed him to represent mathematical
statements involving, for example, an infinite number of prime numbers. 39
In 1884, in The Foundations of Arithmetic 40, Frege introduced his own
number theory, made to emulate formal logic. 41 He wished to show that
arithmetic could be reduced to logical fundamentals, without any basis
in intuition. Moreover, he regarded arithmetic as a completely objective
“realm.” His central claim in The Foundations was that:
In arithmetic, we are not concerned with objects which we come
to know as something alien from without through the medium of
the senses, but with objects given directly to our reason and, as
its nearest kin, utterly transparent to it. 42
Today, we can more easily recognise the alarming implications of such
a statement. By regarding mathematics as a transparent, objective

36 Nagel and Newman, Gödel’s Proof, 25.


37 Nagel and Newman, 32.
38 Tait, “Frege Versus Cantor and Dedekind: on the Concept of Number,” 213.
39 Tait, 217. However, he did so by utilising cardinal numbers, which was the misunder-
standing of the notion of infinity introduced by Cantor.
40 Originally published as Die Grundlagen der Arithmetik.
41 Gauthier reformulates Frege’s question into: “How far can we go into arithmetic with
deductive logic alone?” Gauthier, Towards an Arithmetical Logic, 1.
42 Frege, The Foundations of Arithmetic, §105:115.



reality, 43 which was simply to be accessed by reason, Frege diminished
the role of human creativity and invention. His statement completely
rejects the possibility of the creative abstraction that Dedekind was
fighting for. 44 Despite all this, Frege is considered to be the founder and
the “hero” of abstraction to this day. 45
Very soon, a complete new set of problems had emerged out of
Frege’s program. In 1901, Bertrand Russell showed that Frege’s logical
axioms were inconsistent. 46 He discovered that Frege’s approach could
lead to the construction of paradoxical sets, which was named Russell’s
paradox. 47 The source of Frege’s inconsistencies lied in its self-referen-
tiality. The logicists sought to remain within the same paradigm, while
avoiding self-reference at all costs. In the period from 1910 to 1913,
Russell and Whitehead wrote Principia Mathematica, a cornerstone of
the logicist paradigm. It was a three-volume work of mathematical foun-
dations that attempted to establish a set of axioms and rules powerful
enough to prove all mathematical truths. It was meticulously designed
to keep the inconsistencies out “in a most staunch and watertight
manner.” 48 Principia Mathematica also appeared to be the final solution
for the problem of consistency, as it reduced the problem of consistency
of arithmetic to the problem of the consistency of formal logic itself. 49
This was the moment where Russell and Whitehead’s work be-
came closely intertwined with Hilbert’s search for absolute consistency,
which consisted in the complete formalisation of a deductive system
by “draining” it from any meaning, as described by Nagel and Newman:
The postulates and theorems of a completely formalised system
are “strings” (or finitely long sequences) of meaningless marks,
constructed according to rules for combining the elementary
signs of the system into larger wholes. Moreover, when a system
has been completely formalised, the derivation of theorems from
postulates is nothing more than the transformation (pursuant
to rule) of one set of such “strings” into another set of strings. 50

43 Burge, “Frege on Knowing the Third Realm,” 2. He called it “The third realm.”
44 Bühlmann, “Continuing the Dedekind Legacy Today,” 8.
45 “However, more important to me in this paper than the question of Frege’s own im-
portance in philosophy is the tendency in the literature on philosophy to contrast the
superior clarity of thought and powers of conceptual analysis that Frege brought to bear
on the foundations of arithmetic, especially in the Grundlagen, with the conceptual
confusion of his predecessors and contemporaries on this topic.” Tait, “Frege Versus
Cantor and Dedekind: on the Concept of Number,” 215.
46 Irvine and Deutsch, “Russell’s Paradox.”
47 Irvine and Deutsch, “Russell’s Paradox.” The famous “set of all sets that are not members of
themselves.”
48 Hofstadter, preface to Gödel, Escher, Bach, 4.
49 Nagel and Newman, Gödel’s Proof, 33.
50 Nagel and Newman, 20.

The defining trait of formal systems lies in their simplicity. They include
a limited number of signs, a grammar which defines how to create well-
formed strings, a set of strings taken as axioms, and a set of transfor-
mation rules. 51 This introduces two levels from which a formal system
can be considered: The first, “lower” level consists of the “meaningless
marks” that are produced mechanically; the second accommodates high-
level reasoning about the processes of the lower level. Hilbert defined
the higher level as a meta-language. His goal was to find a method that
could prove the absolute consistency of a system. He believed that the
solution lay on the “lower” level and was interested in demonstrating
the “impossibility of deriving certain contradictory formulas” 52 within
it. In other words, Hilbert’s hope was that a purely formal language
could be used to prove its own consistency.
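To make this "lower level" concrete, here is a toy string-rewriting system in the spirit of the one Hofstadter discusses (footnote 51); the signs and rules below are a paraphrase for illustration, not a quotation.

    # A toy formal system over the signs M, I, U: one axiom and four purely
    # mechanical transformation rules that turn "meaningless marks" into new
    # theorems, with no appeal to meaning at any step.
    AXIOM = "MI"

    def successors(s):
        out = set()
        if s.endswith("I"):              # rule 1: xI  -> xIU
            out.add(s + "U")
        if s.startswith("M"):            # rule 2: Mx  -> Mxx
            out.add("M" + s[1:] * 2)
        for i in range(len(s) - 2):      # rule 3: III -> U
            if s[i:i + 3] == "III":
                out.add(s[:i] + "U" + s[i + 3:])
        for i in range(len(s) - 1):      # rule 4: UU  -> (deleted)
            if s[i:i + 2] == "UU":
                out.add(s[:i] + s[i + 2:])
        return out

    # Derivation is nothing more than repeated transformation of strings.
    theorems, frontier = {AXIOM}, {AXIOM}
    for _ in range(4):
        frontier = {t for s in frontier for t in successors(s)} - theorems
        theorems |= frontier
    print(sorted(theorems, key=len)[:10])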
The main achievement of Principia Mathematica was that it pro-
vided “a remarkably comprehensive system of notation, with the help of
which all statements of pure mathematics (and of arithmetic in particu-
lar) can be codified in a standard manner.” 53 The book’s notation and
deductive system presented themselves to Hilbert as a perfect medium
for establishing an absolute proof of consistency. His work seemed to be
on the right track until 1931, when Gödel’s theorems proved that neither
Principia, nor any other system of that kind, could ever achieve this goal.

linguistic turn

The philosophy of the early 20th century experienced a crisis similar to the
one of mathematics. The accounts that held philosophy as the fundamen-
tal discipline responsible for the questions of foundations and knowledge,
started to lose appeal in the light of the clarity and the precision dem-
onstrated by modern logic. An idea began to emerge that philosophical
facts do not exist per se, but that they are above all language articulations.
Accordingly, philosophy should instead deal with the clarification of
thoughts on a logical basis, by analysing the logical form of propositions. 54
As a consequence, the attention of philosophy turned to language as an
operative medium for thought, and to grammar as an apparatus for coher-
ent thinking. Within the relation between philosophy and language, an-
other current emerged that was interested in the relation between gram-
mar and logic. This interest introduced two schools of thought: The first
one was established by the Austrian philosopher Ludwig Wittgenstein;
the second by Rudolf Carnap and the Vienna Circle. 55

51 Hofstadter, Gödel, Escher, Bach, 35.


52 Nagel and Newman, Gödel’s Proof, 27.
53 Nagel and Newman, 33.
54 Wittgenstein, Tractatus Logico-Philosophicus, 45 (4.12).
55 Potter, Reason’s Nearest Kin, 18.



Wittgenstein, an associate of Russell and a young admirer of Frege’s
work, is considered to be one of the progenitors of the linguistic turn.
In 1921, he wrote Tractatus Logico-Philosophicus, which conveyed the
idea that philosophical problems arise from an inconsistent nature of
the language that is used to construct philosophical statements. 56 In the
preface of the Tractatus, he summed up his argument as the following:
What can be said at all can be said clearly; and whereof one can-
not speak thereof one must be silent. 57
Tractatus was the first philosophical work putting the language at the
centre of its inquiry, boldly stating that “the limits of my language mean
the limits of my world.” 58 This non-intuitive position was in strong
contrast to most of the Western philosophical tradition. To free phi-
losophy from incoherence, Wittgenstein required an ideal language for
philosophical analysis, as “ordinary” language was full of ambiguities.
Wittgenstein’s conception of a language was not simply an instrument
of logic. If this were the case, the argumentation would need to be set up
so that it leads to an argument or a proof. It was a philosophical gram-
mar, designed to draw a line separating valid philosophical language
from nonsense. 59 By creating a philosophical system as an application
of his rigorous grammar consisting of atomic facts, propositions and
operators, Wittgenstein believed himself to have eliminated all philosophical
problems. However, in all of his self-proclaimed success, he also realised
“how little has been done when these problems have been solved.” 60
Decisive for the linguistic turn in the humanities were the works of
yet another tradition, namely the structuralism of Ferdinand de Saussure
and the ensuing movement of poststructuralism. 61 Saussure’s general com-
plaint was directed at the lack of systematicity in the study of language. 62
In his university lectures, collected and published only later by his students
in Course in General Linguistics (1916), 63 Saussure referred to a number of
approaches for studying language, finding them all inadequate. 64 For him,
grammar was detached from language and too dependent upon (and limited
by) logic, having the sole purpose of distinguishing between correct

56 Wittgenstein, preface to Tractatus Logico-Philosophicus, 23.


57 Wittgenstein, 23.
58 Wittgenstein, 74.
59 Wittgenstein, 23.
60 Wittgenstein, 24.
61 Wikipedia, s.v. “Linguistic turn,” last modified March 24, 2017, 15:55, https://en.wikipedia.org/wiki/Linguistic_turn.
62 Saussure, Course in General Linguistics, 3–4.
63 Originally published as Saussure, Ferdinand de. Cours de linguistique générale. Publ.
par Charles Bailly et Albert Sechehaye avec la collaboration de Albert Riedlinger.
Lausanne: Pavot, 1916.
64 “At the same time scholars realised how erroneous and insufficient were the notions of
philology and comparative philology. Still, in spite of the services that they rendered,
the neogrammarians did not illuminate the whole question, and the fundamental prob-
lems of general linguistics still await solution.” Saussure, 5.

and incorrect forms. He found that philology was not about language at
all, but about the interpretation of texts “as a means to literary and his-
torical insight.” 65 He recognised some potential in comparative philology,
and in its task of finding similarities and differences between languages. 66
Saussure’s vision of linguistics was that it should be able:
• to describe and trace the history of all observable languages, which
amounts to tracing the history of families of languages and recon-
structing as far as possible the mother language of each family;
• to determine the forces that are permanently and universally at
work in all languages and to deduce the general laws to which all
specific historical phenomena can be reduced; and
• to delimit and define itself. 67
For Saussure, language was a “system of signs that express ideas” 68 and
was part of a larger whole, of a “science that studies the life of signs
within society,” 69 which he termed semiology. 70 His concept of a sign
challenged the traditional view, which considered words as mere labels
attached to concepts. He defined the sign as an entity that united a con-
cept of a thing, the signified, and its sound image, the signifier. Since
there cannot be a concept without it being named, the signified and the
signifier necessarily exist as a pair. For Saussure, language was about
symbolic manipulation, thus the “real things” did not play any role in the
constitution of a sign. Another crucial view that he held was that signs
possess differential, not natural, identity. In other words, a sign is being
a sign only by the virtue of not being any other sign:
… the concepts are purely differential and defined not by their
positive content but negatively by their relations with the other
terms of the system. Their most precise characteristic is in being
what the others are not. 71

mechanisation of articulation

The advent of digital computers was rapid, overwhelming, and its de-
velopment is still underway. There is no room here to mention every
important contributor. For the purposes of this dissertation, the focus
will be on the figures who have established the main computational
paradigms and on those whose work has had the biggest influence on
computer-aided architectural design.

65 Saussure, 1.
66 Saussure, 4–5.
67 Saussure, 6.
68 Saussure, 16.
69 Saussure, 16.
70 Saussure, 16.
71 Saussure, 117.



Recursion and computability In the 1920s, Hilbert's program
crystallised the main expectations of formal systems for the purpose of
axiomatization, most notably ideas of:
• Consistency: No mutually contradictory theorems should be de-
ducible from the axioms.
• Completeness: Axioms of a deductive system are “complete” if
every true statement that can be expressed in the system is for-
mally deducible from the axiom. 72
At an international conference in 1928, Hilbert introduced a famous
challenge that illustrated his hope of the potential of formal systems,
what he referred to as the Entscheidungsproblem 73 (decision problem).
He asked whether an algorithm could be made that takes two inputs: (i)
a description of a formal language (for example Principia Mathematica
(PM)) and (ii) a statement expressed in that language (for example, a
theorem of PM), and outputs either true or false, depending on whether
the statement (ii) is provable within the formal language (i). All that re-
mained to settle the question of foundations once and for all was to
solve the problem.
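Phrased as a hypothetical procedure, the hoped-for algorithm would have roughly the signature sketched below (the names are illustrative only); as the following paragraphs recount, Church and Turing showed that no such always-terminating procedure can exist for sufficiently rich formal languages.

    # The Entscheidungsproblem, stated as a function signature (illustrative).
    def entscheidung(formal_system: str, statement: str) -> bool:
        """Would return True iff `statement` is provable in `formal_system`."""
        raise NotImplementedError(
            "no general, always-terminating procedure exists (Church 1936, Turing 1936-37)"
        )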
Unfortunately for Hilbert, the complete opposite happened.
In his 1931 paper “On formally undecidable propositions of Principia
Mathematica and related systems,” Austrian logician Kurt Gödel
proved that Hilbert’s requirements of consistency and completeness
could not both be achieved in a formal system. Moreover, he exposed
the fundamental limitations of all axiomatic systems, including those of
arithmetic and logic. 74 The pinnacle of Gödel's paper is the pair of theo-
rems known as Gödel’s incompleteness theorems, which he proved in
an ingenious way. The first theorem states that any system of a certain
complexity—in which, for example, arithmetic can be developed—is
essentially incomplete. In other words, true statements that cannot be
derived from the axioms could be expressed in such a system. The sec-
ond theorem shattered Hilbert’s hope of achieving absolute consistency
by showing that a formal system alone cannot be used to prove its own
consistency. In his proof, Gödel applied the idea of recursive enumer-
ability, and demonstrated how arithmetic, defined by recursive func-
tions, could be made to emulate logic.

72 Nagel and Newman, Gödel’s Proof, 73.


73 German for “decision problem.”
74 “What is more, he (Gödel) proved that it is impossible to establish the internal logi-
cal consistency of a very large class of deductive systems— elementary arithmetic,
for example—unless one adopts principles of reasoning so complex that their internal
consistency is as open to doubt as that of the systems themselves.” Nagel and Newman,
Gödel’s Proof, 3.

Fig. 2  A recursive definition from Gödel's famous paper. (Gödel, 1931)
[Figure: facsimile of a recursively defined function, reproduced in German from the 1931 paper.]
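The recursion schema fixes a function by giving its value at 0 and a rule for passing from n to n + 1. A minimal, illustrative instance of such a definition (not the particular function shown in Gödel's figure) is the usual build-up of addition and multiplication over the natural numbers.

    # Primitive recursion: each function is determined by its value at 0 and by
    # a rule relating the value at n + 1 to the value at n.
    def add(n, m):
        # add(0, m) = m;  add(n + 1, m) = add(n, m) + 1
        return m if n == 0 else add(n - 1, m) + 1

    def mul(n, m):
        # mul(0, m) = 0;  mul(n + 1, m) = add(mul(n, m), m)
        return 0 if n == 0 else add(mul(n - 1, m), m)

    print(add(3, 4), mul(3, 4))  # 7 12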

In 1936 and 1937, only five years after Gödel’s paper, Alan Turing
and Alonzo Church delivered another blow to Hilbert’s hope that the
Entscheidungsproblem could be solved. Both mathematicians proved
that Hilbert’s question cannot be positively answered. 75 In order to make
Hilbert’s notion of the algorithm operative, and its operations explicit,
Turing conducted an experiment involving a hypothetical machine which
operated with “empty” symbols mechanically. By mechanical means,
the machine could automate the operations of finitistic formal systems.
Like Gödel, Turing attempted to solve the Entscheidungsproblem within
arithmetic, and in doing so introduced a novel constitution of arithme-
tic, utterly different from one based on deductive logic.
Turing introduced his paper with the notion of a computable number:
According to my definition, a number is computable if its deci-
mal can be written down by a machine. 76
The ‘computable’ numbers may be described briefly as the real num-
bers whose expressions as a decimal are calculable by finite means. 77
The computing machine consisted of an infinite tape divided into a
number of discrete elements called “squares.” Each square could be
empty but was also capable of bearing a symbol, for example 0 or 1.
The machine could carry out only four actions: read the symbol from
the square, write the symbol to the square, erase the symbol from the
square, or move the tape one step left or right. Like Hilbert’s formal
system, such a machine was completely described by a finite number
of conditions that he defined as “m-configuration.” 78 Depending upon
the symbol that was read from the square, the configuration assigned
actions to be taken. For example, one such m-configuration c1 would
instruct the machine to write a symbol to the current square, move one
step to the left and change its current configuration to c2. Such simple
procedures were to be repeated indefinitely. At any moment the ma-
chine was “directly aware” only of one symbol: the “scanned symbol”
from the “scanned square.” But the tape is what allowed the machine to

75 In other words, it is impossible to decide algorithmically whether statements in a finit-


istic formal system are true or false according to the description of the formal system.
76 Turing, 230.
77 Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,”
230.
78 Turing, 231.



“effectively remember some of the symbols which it had seen (scanned)
previously,” thus serving as its memory. 79 It was shown later that such
a simple mechanical machine could in fact emulate any possible formal
system, but was prone to the same limitations discovered by Gödel. 80
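A minimal sketch of such a machine, assuming only what is described above: an unbounded tape of squares, a single scanned square, and a table of m-configurations giving the symbol to write, the move to make and the next configuration. The table below is a simplified illustration that prints the endless sequence 0 1 0 1 …, in the spirit of the first example in Turing's paper, and is not a transcription of his original configurations.

    from collections import defaultdict

    # (configuration, scanned symbol) -> (symbol to write, move, next configuration)
    table = {
        ("b", None): ("0", +1, "c"),
        ("c", None): ("1", +1, "b"),
    }

    tape = defaultdict(lambda: None)   # every square starts out empty
    position, configuration = 0, "b"

    for _ in range(8):                 # run a few steps of the endless computation
        scanned = tape[position]
        write, move, configuration = table[(configuration, scanned)]
        tape[position] = write
        position += move

    print("".join(tape[i] for i in sorted(tape)))  # 01010101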
By explicating the machine’s capabilities, Turing had effectively
reduced the class of real numbers to the class of computable numbers,
a class whose sole property was that it could accommodate a finite
mechanical calculation of all of its members. He showed that large class-
es of numbers were in fact computable, but not all numbers. 81 Those
that were computable, were also necessarily enumerable 82, which was
proved by Gödel. The same holds true for computable sequences. By
instrumentalising them, however, Turing turned what appeared to be
an inherent limitation into a new perspective:
It is possible to invent a single machine which can be used to
compute any computable sequence. 83
With this statement, Turing turned the attention from the necessity
implied in the limits of mechanically computing numbers, towards the
contingency implied in the infinity of possible sequences. He implic-
itly pointed out that limitations are inherent to any formal system, but
not to the creativity of an individual having such a system at his or her
disposal. His statement transcended the computable machine into the
universal or any-machine.

Digital computers and algorithms According to Turing, the


first mechanical computers, namely Babbage's difference and analytical
engines, were invented as early as the beginning of the 19th century but failed
to surpass the prototypical stage. 84 Inspired by Turing’s early work, the first
digital computers 85 came into existence in the 1940s. Hungarian-American
polymath John von Neumann was one of the pioneers who streamlined the

79 Turing, 231. Or what we would call today—stored program.


80 “In short, it has become quite evident, both to the nominalists like Hilbert and to the
intuitionists like Weyl, that the development of a mathematico-logical theory is subject
to the same sort of restrictions as those that limit the performance of a computing
machine.” Wiener, Cybernetics, 13.
81 “Computable numbers include all numbers which could naturally be regarded
as computable.” (Large classes of numbers are computable, including π, e, etc.)
“The computable numbers do not, however, include all definable numbers, and an ex-
ample is given of a definable number which is not computable.” Turing, “On Computable
Numbers, with an Application to the Entscheidungsproblem,” 230.
82 “…able to be counted by one-to-one correspondence with the set of all positive inte-
gers.” Turing, 230.
83 Turing, 241.
84 Turing, “Computing Machinery and Intelligence,” 439.
85 Burks, editor’s introduction to Theory of Self-Reproducing Automata, 6–10. For ex-
ample, ENIAC (1943–45) and EDVAC (1945).

design of digital computers 86 and wrote the first successful algorithms. 87
For him, computation was part of a larger umbrella of automata theory
that seeks “general principles of organisation, structure, language, informa-
tion and control.” 88 Such theory was meant to explain the processes inher-
ent to natural systems by means of both analogue (natural automata) and
digital computers 89 (artificial automata). However, Turing’s construction
in which arithmetic (thus logic) could be reduced to computation inspired
von Neumann to think of them as one and the same thing. He introduced
logic at the heart of the theory of automata, often referring to it as a “logical
theory of automata.” 90 Arthur Burks illustrated this point in his introduc-
tion to von Neumann’s book Theory of Self-Reproducing Automata:
To conclude, von Neumann thought that the mathematics of autom-
ata theory should start with mathematical logic and move toward
analysis, probability theory, and thermodynamics. When it is devel-
oped, the theory of automata will enable us to understand automata
of great complexity, in particular, the human nervous system. 91
The early work of Claude Shannon firmly established the technical
foundation of digital computers in logic. In his famous 1937 master's
thesis, “A Symbolic Analysis of Relay and Switching Circuits,”
he investigated the correspondence between Boolean algebra and elec-
trical relays, which were the building blocks of electrical components
of the time. He advanced the design of electrical switches by propos-
ing that they be implemented as binary switches. 92 The logical basis of
electrical switches became the cornerstone for the design of electronic
digital computers 93, but its further development to transistors and com-
puter chips was made possible only with the development of quantum phys-
ics. In 1948, Shannon published his paper “A Mathematical Theory of
Communication,” which is considered to be the founding work of infor-
mation theory. In this paper, Shannon defined entropy as the quantita-
tive measure of information within the set-theoretical paradigm (which
will be discussed and challenged in part II).

86 …by separating data from instructions, analogous to the Turing’s tape, so that “by chang-
ing the program, the same device can perform different tasks.” Kalay, Architecture’s
New Media, 28.
87 “He devised algorithms and wrote programs for computations ranging from the cal-
culation of elementary functions to the integration of non-linear partial differential
equations and the solutions of games.” Burks, editor’s introduction to Theory of Self-
Reproducing Automata, 5.
88 Burks, 21.
89 Burks, 21.
90 Burks, 25.
91 Burks, 28.
92 Shannon, “A Symbolic Analysis of Relay and Switching Circuits,” 3–4.
93 “In other words, the structure of the machine is that of a bank of relays, capable each
of two conditions, say “on” and “off”; while at each stage the relays assume each a posi-
tion dictated by the positions of some or all the relays of the bank at a previous stage of
operation.” Wiener, Cybernetics, 119.



American mathematician Norbert Wiener pushed the idea of automation fur-
ther within an emerging thermodynamic understanding of the world, which,
as he wrote, operated on the principles of conservation (and degradation)
of energy 94, information and control. 95 Within such a paradigm, the distinct
processes of both natural (animal) and artificial (machine) entities could be
addressed from the perspective of energy and information exchange. Since
the input and the output of each component of the system were necessarily
interconnected, each event affected the state of the whole environment.
Wiener used the example of patients suffering from ataxia, whose
muscles were completely healthy, but their brain was not able to establish
control over their actions. 96 His hypothesis was that the brain was not
simply an organ that gives orders to other organs, but also a monitoring
device, that continuously, and in real time, adjusts its “outputs” according
to the “inputs” it receives from the senses. Such a continuously adapting
control process he called the chain of feedback 97, and named the entire
field of “control and communication theory, whether in the machine or
in the animal,” cybernetics. 98 Accordingly, every system conceived upon
the principle of feedback chains, is a cybernetic system.
For Wiener, the basis of a feedback chain lies in the anatomy of
the brain. He considers neurons to be the elements of the human com-
putation system “which are ideally suited to act as relays.” 99 If the brain
was using computation to control its own feedback chain, then digital
computers had potential for controlling any system:
It has long been clear to me that the modern ultra-rapid computing
machine was in principle an ideal central nervous system to an appa-
ratus for automatic control; and that its input and output need not be
in the form of numbers or diagrams but might very well be, respec-
tively, the readings of artificial sense organs, such as photoelectric
cells or thermometers, and the performance of motors or solenoids. 100
Wiener did not stop at defining the program for cybernetics, but rather
developed it into a mathematical model that could be computationally
implemented. He saw an enormous potential for the optimal governance
of systems, going as far as proposing its use by psychopathologists for
the control of physiological diseases. 101

94 “The living organism is above all a heat engine, burning glucose or glycogen or starch,
fats, and proteins into carbon dioxide, water, and urea. It is the metabolic balance which
is the center of attention.” Wiener, 41.
95 “the present time is the age of communication and control.” Wiener, 39.
“…the present age is as truly the age of servomechanisms as the nineteenth century was
the age of the steam engine or the eighteenth century the age of the clock.” Wiener, 43.
96 Wiener, 8.
97 Wiener, 96.
98 Wiener, 11.
99 Wiener, 120.
100 Wiener, 26.
101 In the chapter: “Cybernetics and Psychopathology” Wiener, 144–154.

Wiener’s work has greatly influenced computer science, with his neu-
ron model as a precursor to the neural network perspective on machine
learning. 102 However, it is important to remember that cybernetics is
primarily a control paradigm that models the world as a closed thermo-
dynamic system. Its constitution, once it is set up, is, in principle, fixed.
The social parallel to such a paradigm appears today as a tyrannical form
of governance, and care should be taken in proposing its application to
systems more complex than a thermostat.
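As a minimal illustration of a feedback chain in exactly this thermostat sense (my own sketch, not Wiener's formulation), the loop below corrects its output at every step by a fraction of the error between the sensed value and a set point; the numerical values are illustrative assumptions.

# Minimal negative-feedback chain: the output is continuously adjusted
# according to the input the controller receives from its "sense organ".
def simulate(set_point=21.0, temperature=15.0, gain=0.4, steps=25):
    for _ in range(steps):
        error = set_point - temperature   # reading of the artificial sense organ
        temperature += gain * error       # output corrected in proportion to the error
    return temperature

print(round(simulate(), 3))   # converges towards the set point of 21.0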

Computation and language Around forty years after the


Principia Mathematica and the peak of Hilbert’s programme, research-
ers began to consider natural language from the perspective of formal
systems. The most prominent figure in this respect was Noam Chomsky,
whose most important work on the topic is Syntactic Structures (1957).
Chomsky’s interest in language was purely formal and pragmatic, focus-
ing on two elementary notions:
• Syntax: “the study of the principles and processes by which sen-
tences are constructed in particular languages.” 103
• Grammar: “device of some sort” for producing sentences of such
a language. 104
For Chomsky, grammar plays the role of a mechanical “judge,” making
efficient binary decisions whether a given sentence is correct or not,
independent of any semantics. He gave a famous example:
Sentences (1) and (2) are equally nonsensical, but any speaker
of English will recognize that only the former is grammatical.
(1) Colorless green ideas sleep furiously.
(2) Furiously sleep ideas green colorless. 105
Before presenting his generative model, Chomsky described two models
that he considered incapable of dealing with the complexity of grammar:
the Markov model and the phrase structure model. The fact that Chomsky
dismissed the Markov model, which later became the de facto standard
for machine translation (as well as search engine technology) well illus-
trates his misunderstanding of and disregard for mathematics. 106
Chomsky presented his own generative transformational model
as the most adequate and “natural” model for addressing linguistic struc-
tures. The model consists of three stages where simple transformations
of strings are applied in succession. In the first stage, Chomsky applied

102 Wikipedia, s.v. “Cybernetics,” last modified August 30, 2017, 17:55, https://en.wikipedia.
org/wiki/Cybernetics.
103 Chomsky, Syntactic Structures, 11.
104 Chomsky, 11.
105 Chomsky, 15.
106 “I think that we are forced to conclude that grammar is autonomous and independent of
meaning, and that probabilistic models give no particular insight into some of the basic
problems of syntactic structure.” Chomsky, 33.



the phrase structure model, which he had characterised as inadequate
if applied in isolation. In the second stage, he introduced “simple and
generalised transformations of the structure.” In the third stage, he ap-
plied “morphophonemic rules.” 107 Without being acquainted with 19th
century mathematics, one might fail to notice that these three stages
of transformations are in fact an appropriation of Hilbert’s formal sys-
tems, decontextualised and put to pragmatic use. Chomsky reduced the
complexities of a language to pure syntax and defined language as a set
of sentences constructed out of a finite number of strings and rules. 108
Although his attempt to define natural language as a recursively
defined structure of string replacements failed to capture the complex-
ity of human languages, it has greatly influenced artificial languages
used to program computers. These languages were seen as “a conceptual
leap above BASIC, FORTRAN and all the engineering-based languages,
which were ill-structured, inconsistent and clumsy.” 109
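As an informal sketch of what such a “device of some sort” for producing sentences can look like computationally (my own toy example, not Chomsky's formalism), the rewriting rules below expand a start symbol into strings of a small language; the vocabulary is an illustrative assumption.

import random

# A toy phrase structure grammar: nonterminals are rewritten until only
# terminal words remain, producing grammatical strings regardless of meaning.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["ideas"], ["machines"]],
    "VP":  [["V", "Adv"]],
    "V":   [["sleep"], ["compute"]],
    "Adv": [["furiously"], ["quietly"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:                 # terminal word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))   # e.g. "ideas sleep furiously"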
Soon, transformation grammars and state machines entered
other fields of science. Computational approaches in biology had a
special relevance for computer-aided architectural design, as they uti-
lised generative grammars for the development of form. In 1968, the
Hungarian biologist Aristid Lindenmayer developed a special type of
formal system, for the purpose of modelling the cellular interactions in
the development of plant filaments. 110 Today, we call such grammars
L-systems. 111 In the beginning, Lindenmayer’s language was mathemati-
cal and showed traces of inspiration from cybernetics and the theory of
computation, where the main elements—inputs, outputs and states—
described and encapsulated the model.
We assume a “black box,” or in more recent terms a “sequential
machine,” which has buttons or levers through which inputs can
be given to the machine, dials or slots from which outputs can be
read out, while the machine is thought to pass from state to state
in consecutive discrete moments of time. Under a sequence of
inputs the machine, therefore, undergoes a sequence of changes
in state, and produces a sequence of outputs, which is why it is
called a “sequential” machine. 112

107 Chomsky, 32–33.


108 Chomsky, 13.
109 Coates, programming.architecture, 27.
110 “…filaments are composed of cells which undergo changes of state under inputs they
receive from their neighbours, and the cells produce outputs as determined by their
state and the input they receive. Cell division is accounted for by inserting two new
cells in the filament to replace a cell of a specified state and input.” Lindenmayer,
“Mathematical Models for Cellular Interactions in Development I.,” 280.
111 Prusinkiewicz and Lindenmayer, The Algorithmic Beauty of Plants, 1.
112 Lindenmayer, 281.

The design of his sequential machine promoted the idea of cell division,
which allowed it to easily model growth. Such growth, he observed, bears
similarity to the growth of plants 113, which he illustrated with a diagram.

fig. 3 Lindenmayer's diagram of cellular growth: rows 0 to 18 of cell states (0s and 1s) produced by successive applications of the rewriting rules. (Lindenmayer, 1968)

In his later work, as his interest shifted from numerical patterns to


forms, Lindenmayer’s method became less mathematical, and more
grammatical in the sense of Chomsky. Ultimately, it became a simple
formal system aimed to explain the complexity of living organisms:
The main question we wish to address is whether a ‘program’
or ‘grammar’ can be defined and used profitably in the study of
multicellular development “from egg to adult organism.” 114
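A minimal sketch of such a grammar (assuming Lindenmayer's classic “algae” rules rather than anything specific from the cited papers) shows how parallel string rewriting models cell division and growth:

# Lindenmayer's classic "algae" L-system: every symbol of the string is
# rewritten in parallel at each generation, modelling cell division.
RULES = {"A": "AB",   # a mature cell divides into a mature and a young cell
         "B": "A"}    # a young cell matures

def generate(axiom="A", generations=6):
    state = axiom
    for _ in range(generations):
        state = "".join(RULES.get(symbol, symbol) for symbol in state)
        print(state)

generate()
# A -> AB -> ABA -> ABAAB -> ABAABABA -> ... (string lengths follow the Fibonacci numbers)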

Beyond the limits of formal logic Turing's discovery an-


swered the question posed by Hilbert’s programme, and, in a certain
way, does speak the language of the 19th century. Nonetheless, I argue
that the viewpoints inspired by the interpretations of Gödel’s theorems,
insisting that computation is nothing more than formalised logic, are mis-
leading. Computers, whose inherent capabilities are logically equivalent to
the capability of a Turing machine, 115 will never be able to prove their own

113 “…this is in spite of the fact that new cells are continually inserted, and old ones are
being pushed to the right or disappear by division. Thus, a stable pattern is generated,
moving from the left to the right, while the cells participating in this pattern are con-
tinually replaced or displaced. This is rather reminiscent of the growing apical regions
of plant organs, like those of shoots and roots.” Lindenmayer, 287.
114 Lindenmayer and Jürgensen, “Grammars of Development,” 4.
115 “Kurt Gödel had reduced mathematical logic to computation theory by showing that
the fundamental notions of logic (such as well-formed formula, axiom, rule of inference,
proof) are essentially recursive (effective).” Burks, editor’s introduction to Theory of
Self-Reproducing Automata, 25.



consistency by means of syntax or to compute certain numbers. But we
have to ask, is this really what computation is all about?
If we embrace computation from this perspective, our curiosity
will be confronted by inevitable formal limits, and our creativity bound-
ed by necessities. If, on the contrary, we embrace computation from
the ‘algebraist’ perspective, in which computers are any-machines, our
interest will be in the model-theoretic procedures that computation allows
us to articulate. 116 These procedures will be neither objective nor absolute,
neither necessary nor contingent. The essence of computation lies be-
yond necessities of formal logic and calls upon our capacity to think. By
following the legacy established by Dedekind, we will use computers to
“engender cases out of abstraction.” 117 This chapter concludes with a
hypothesis: The kind of mathematics (thus abstractions) we decide to
set in motion will set the limits of what can be done with computers.

II computers and architecture

Leon Battista Alberti, the Renaissance man credited with establishing


the architectural profession in the 15th century, codified architectural
design into parallel projection scale drawings. 118 As for the physical
models of buildings, Smith refers to the texts of Vitruvius to acknowl-
edge the importance of small-scale models for builders already in Roman
times. 119 The two-dimensional—graphical—and three-dimensional—
volumetric—representations of architectural models remain until today
the standard encoding of architectural design, produced on a daily basis
by the large majority of practicing architects around the world.
In their first encounters, architecture and computers began to me-
dialise the technique of drawing. A prominent example is the work of Ivan
Sutherland, often acknowledged as the grandfather of interactive com-
puter graphics, graphical user interfaces and (indirectly) computer-aided-
architectural design. 120 In 1963, he created Sketchpad, the first interactive
computer-aided drawing program operating in three dimensions. It used
a light pen as the input device and a modified oscilloscope as the output
device. 121 Such experiments gradually introduced a generic model of space

116 Bühlmann, “Continuing the Dedekind Legacy Today,” 17.


117 “The key assumption he has to make, thereby, is to consider abstraction as an act of
engendering.” Bühlmann, 11.
118 Carpo, The Alphabet and the Algorithm, x, 31.
119 “First, the Romans seemed to be well aware of the persuasive application of the small-
scale model. Second, the small-scale model built by Callius permitted a population un-
trained in architecture to easily view the possibilities of a full-scale mechanism. Third,
the Roman small-scale model presented a mechanism granting the architect and the
population an opportunity to perceive a possible future.” Smith, Architectural Model
as Machine, 15.
120 Salomon, The Computer Graphics Manual, 10.
121 Salomon, 10.

to computer screens and seemed to settle an unspoken agreement—that a
computer-generated space should mimic our “natural” intuition of three-
dimensional space. Introducing the mathematical model of the camera to
the space clearly suggested placing a human observer within the computer.
The major technical challenge at the time was how to model and display real-
world or imaginary objects. One way of achieving this was to approximate
real-world objects with the numerical geometric models 122 created from
idealised geometrical components such as lines, curves, surfaces, solids,
and to project them onto the two-dimensional surface of the screen. This
approach evolved into a discipline of mathematics called geometric mod-
elling 123, and to its pragmatic counterpart, computer-aided-geometric de-
sign. 124 Two more prominent terms of the time merit clarification: The first
is computer-aided design (CAD) 125, which indicates a practice of furnishing
the infrastructure of geometric modelling with user-friendly graphical in-
terfaces, used to model digital geometric objects; 126 the second is computer
graphics 127, which appropriated CAD modelling to create virtual objects, but
with a primary focus of creating a realistic simulation of the real world. 128
The first two parts of this chapter investigate how the ideas
behind computer graphics and computer-aided design influenced the
way we think about modelling architecture with computers and give an
overview of their mathematical and technical infrastructure. The third
part outlines the prominent computational models in architecture, situ-
ating them within the previously covered mathematical traditions and
the more general models of computation. The last part introduces the
current state of the art and identifies its inherent limits.

mathematical bases of geometric modelling

Many scholars consider analytic (Cartesian) geometry to be the


mathematical foundation of computer graphics and computer-aided

122 Golovanov, Geometric Modeling, 11.


123 Golovanov, 11.
124 “CAGD is based on the creation of curves and surfaces and is accurately described as curve
and surface modelling.” Rockwood and Chambers, Interactive Curves and Surfaces, 2.
125 “Computer-aided design was one of, maybe the first application to make use of computer-
generated curves and surfaces.” Peddie, The History of Visual Magic in Computers, 47.
126 “Geoscience used CAGD methods to represent seismic horizons; computer graphics de-
signers modelled their objects with surfaces, as did molecule designers for pharmaceu-
ticals. Architects discovered CAGD, word processing and drafting programs based their
interface protocols on free-form curves (PostScript), and even moviemakers discovered
the power of animating with such surfaces, beginning with TRON, continuing through
Jurassic Park, and beyond.” Rockwood and Chambers, Interactive Curves and Surfaces, 7.
127 “The first use of the phrase Computer Graphics is generally attributed to William Fetter
… who used the phrase in the early 1960s in the development of first computer model of
a human body.” Peddie, The History of Visual Magic in Computers, 101.
128 Hoffmann, Geometric and Solid Modeling, 3–4.



design. 129 It is productive to scrutinise this statement within the con-
text that architect and computer scientist Ludger Hovestadt introduces in
“Elements of Digital Architecture” (2015), where he describes geometry
as the “rationalisation of thought patterns amid known elements.” 130 He
distinguishes between three principal geometries, each of them “un-
locking a new world”: Euclidean geometry, operating in space; analytic
geometry, operating in time; and digital code, operating in values. 131 In
his 1637 work La géométrie, French mathematician and philosopher
René Descartes introduced analytic geometry. Using algebra 132, he
parametrised the Euclidean paradigm and captured time 133, which in
turn fixed a new reference system for all physical processes. 134 Unlike
Euclidean geometry, described by text, analytic geometry operates with
numbers, making its character formulaic and principally non-visual. To
represent its objects outside of equations, they must be drawn and thereby
need to be parametrised and evaluated. Only in this sense can one say that
Cartesian geometry is the foundation of CAD and computer graphics.
However, this leaves us with a dilemma: What kind of modelling would
correspond to Hovestadt’s third geometry of digital code? An attempt
at an answer will be presented later in this work.

Polynomial functions Euclid’s geometry rests upon construc-


tions with straight lines and circles. 135 Descartes showed that for
analytic geometry to capture circularity (thus curvature), its algebraic
elements must be raised to a power, which can be accomplished using
polynomials. Mathematically, the simplest form of a polynomial is de-
scribed as a function of a single variable t on its domain:

$a_n t^n + a_{n-1} t^{n-1} + \dots + a_1 t + a_0$

where a_n, a_{n−1}, …, a_0 are constants called coefficients. 136 A product of a vari-
able and a coefficient constitutes a monomial. 137 Thus, every polyno-
mial can be expressed as a finite sum of monomials, whose variables

129 For example: “Computer-aided geometric design has mathematical roots that stretch
back to Euclid and Descartes.” Rockwood and Chambers, Interactive Curves and
Surfaces, 6, or: “The Cartesian system and geometry is the basis of all computer graph-
ics.” Peddie, The History of Visual Magic in Computers, 33.
130 Hovestadt, “Elements of Digital Architecture,” 35.
131 Hovestadt, 35.
132 “Often it is not necessary thus to draw the lines on paper, but it is sufficient to designate
each by a single letter.” Descartes, The Geometry of Rene Descartes, 5.
133 Hovestadt, “Elements of Digital Architecture,” 35.
134 Jost, historical introduction to On the Hypotheses Which Lie at the Bases of Geometry, 14.
135 Euclid, Elements of Geometry, 7. It is reflected in his elementary postulates.
136 Barbeau, Polynomials, 1.
137 Rockwood and Chambers, Interactive Curves and Surfaces, 28.

are raised to a nonnegative integer power. 138 The monomial whose vari-
able is raised to the highest power within the polynomial determines
its degree. According to their degree, polynomials of the first degree are
considered linear polynomials, of the second degree as quadratic poly-
nomials, of the third degree as cubic polynomials, of the fourth degree
as quartic polynomials, and so on. 139
From the perspective of the elements that could be substituted
by the variable, a polynomial is often mapped to a polynomial func-
tion 140, such as in our case of a single variable t:

$p(t) = a_0 + a_1 t + a_2 t^2 + \dots + a_n t^n$

Polynomial functions are important because they are simple, easy to


compute, highly applicable and have a finite evaluation schema. 141 The
evaluation of a polynomial can be accomplished by simply replacing its
variable with a number and carrying out the computation. 142 To obtain
inequalities or to graph a wide variety of functions, it is important to
know the value of t for which the polynomial takes the value 0—a value
referred to as the zero of a polynomial. 143
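As a small illustration of this evaluation schema (my own sketch, not taken from the cited sources), the Horner scheme evaluates a polynomial given as a list of coefficients using only additions and multiplications:

# Evaluate p(t) = a0 + a1*t + ... + an*t^n by Horner's scheme,
# which avoids computing explicit powers of t.
def evaluate(coefficients, t):
    result = 0.0
    for a in reversed(coefficients):   # start from the highest-degree coefficient
        result = result * t + a
    return result

# p(t) = 2 - 6t + 6t^2, the quadratic that reappears later in this chapter
print(evaluate([2.0, -6.0, 6.0], 0.5))   # -> 0.5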
On a more abstract level, operating with polynomials and vectors
is quite similar: Both can be added, multiplied with constants or with
each other, etc. 144 Furthermore, by means of the elementary algebraic
structure known as a field, it is possible to establish an isomorphic map-
ping from the space of polynomials to the space of vectors. 145 In this case,
to a polynomial of degree n corresponds a vector of a dimension n+1.

$p_n \cong V_{n+1}$

For example, we can use φ to symbolise a mapping of the quadratic poly-


nomial 146 to its corresponding space of vectors:

$\varphi: p_2 \to V_3$

In this case, the coefficients of the quadratic can be represented as the


three-dimensional vector:

138 Barbeau, Polynomials, 1.


139 Barbeau, 2.
140 Broida and Williamson, A Comprehensive Introduction to Linear Algebra, 255–56.
141 Farouki, “The Bernstein Polynomial Basis: A Centennial Retrospective,” 382.
142 Barbeau, Polynomials, 2.
143 Barbeau, 2.
144 Barbeau, 2–3.
145 See chapter 4.2. “The Algebra of Polynomials” in Hoffman and Kunze, Linear Algebra,
119–23.
146 Polynomial of the second degree.



$\varphi(a_0 + a_1 t + a_2 t^2) = \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix}$

In the light of this mapping, the collection of variables:

$B = \{1, t, t^2\}$

gains an interesting geometrical interpretation: It can be seen as a set of


“atomic,” linearly independent vectors called basis vectors, whose linear
combination spans the space of the same dimensionality. 147 Thus, in our ex-
ample we can represent every quadratic polynomial as the linear combina-
tion of the basis set B. From elementary linear algebra, we know that there
can be (infinitely) many bases spanning a vector space, and by means of lin-
ear transformations we could rewrite them in another form. 148 The choice of
the appropriate basis depends on the task at hand. Through the aforemen-
tioned mapping, any newly obtained basis will also yield its polynomial form.
One important application of polynomials was interpolation. In
mathematics, to interpolate means to “estimate values between given
known values,” 149 and in polynomial terms this corresponds to finding a
simplest 150 polynomial passing through a set of points. For example, to in-
terpolate between two points, it is sufficient to use a first-degree polyno-
mial (maps to a line in linear algebra); three points would require a qua-
dratic polynomial (maps to a parabola); four points a cubic polynomial. 151
At the end of the 18th century 152, French mathematician Lagrange
introduced a special form of polynomial that could approximate all the
given points. 153 Polynomials in what later became known as the Lagrange
form 154 are those that are mapped to standard basis vectors 155 in their cor-
responding vector space. For example, to approximate four points on a

147 If monomials are considered as a collection P containing all the variables raised to the de-
grees ranging from 1 to k, “and if by means of this base we can represent any polynomial of
degree k as a sum, then the question may be asked whether any polynomial of degree k can
be written as a summation of terms, each a product of a coefficient and a basis function from
P. If yes, then P is said to span the set of polynomials of degree k. Further, if P is the smallest
set to span, then it is a basis for the polynomials of degree k. The set P is a basis called a
power basis. If any monomial is eliminated from P, then not all polynomials of degree k can
be written in terms of P.” Rockwood and Chambers, Interactive Curves and Surfaces, 29.
148 Rockwood and Chambers, 29.
149 Rockwood and Chambers, 60.
150 Barbeau, Polynomials, 206. Simplest meaning—of the lowest possible degree.
151 Finding a polynomial interpolating a set of points is a linear algebra problem. It is an
equation in which polynomial coefficients are the variables to be solved for.
152 “Lagrange formula appears in the fifth lecture of his “Leçons élémentaires sur les mathé-
matiques” delivered in 1795 at the École Normale.” Gautschi, “Interpolation Before and
After Lagrange,” 347.
153 Rockwood and Chambers, Interactive Curves and Surfaces, 62.
154 Rockwood and Chambers, 60.
155 For each basis vector, one of its coordinates will be equal to one, while all the others to zero.

plane requires computing a sum of four polynomials, each constructed on
the unique set of their x coordinates. The first polynomial p1 should evalu-
ate to 1 in the point 1, and evaluate to 0 in all other points (2, 3, 4). The
second polynomial p2 should evaluate to 1 in the point 2, and evaluate to
0 in all other points (1, 3, 4), and the pattern continues for other points. 156
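A short sketch (my own, following the construction just described) builds each Lagrange basis polynomial from the zeros we specify and sums their scaled values; the four data points are illustrative assumptions.

# Lagrange interpolation: l_i(x) is 1 at x_i and 0 at every other x_j,
# so the weighted sum L(x) passes exactly through all given points.
def lagrange(points, x):
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        li = 1.0
        for j, (xj, _) in enumerate(points):
            if i != j:
                li *= (x - xj) / (xi - xj)   # zero at every other xj, one at xi
        total += yi * li
    return total

pts = [(1, 2), (2, 5), (3, 4), (4, 8)]       # four illustrative points
print(lagrange(pts, 2))                       # -> 5.0, the polynomial reproduces the data
print(round(lagrange(pts, 2.5), 3))           # an interpolated value in between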

Fig. 4 Lagrange interpolation polynomial L(x) as a sum of scaled basis polynomials l1(x), l2(x), l3(x), l4(x). (adopted from Glosser.ca, 2016)

Since we know the zeroes of the four polynomials—and we know them


because we have specified them by the definition—it is easy to derive
the actual polynomials. These polynomials form a Lagrange basis. The
Lagrange polynomial would be a linear combination of this basis and the
set of their y coordinates. This polynomial simply solves the Lagrange
interpolation problem.
Using interpolation to approximate a set of points can lead to
unpredictable results because the polynomial must pass through all the
points. Even a slight change of a single point can dramatically alter the
interpolating polynomial. 157 If the set contains many scattered points,
as is often the case with any series of measurements, approximation
becomes highly inaccurate. An alternative approach to this problem,
known as regression, would be “to give up the requirement of exact
agreement at some points in favour of gaining some flexibility for mak-
ing the approximation close everywhere.” 158 In this case, approxima-
tion cannot be done globally, but needs to be restricted to an interval 159,
where it can be approximated using methods such as least squares. 160

156 Rockwood and Chambers, 61.


157 Barbeau, Polynomials, 213.
158 Barbeau, 213.
159 Barbeau, 213.
160 Barbeau, 214.



Fig. 5 An example of a non-linear regression.

Another important application of polynomials is the approximation of


continuous functions. The Weierstrass approximation theorem (1885)
states that polynomials can uniformly approximate any function that
is merely continuous over a closed interval. 161 In 1912, Russian math-
ematician Sergei Bernstein proved the Weierstrass theorem using only
basic algebraic operations. He introduced a parameter t that replaced
variable x, within the interval [a, b] with the equality t=(x-a)/(b-a). He
was thereby able to map any continuous function from the interval [a, b]
to the parametric domain t ∈ [0, 1] by means of a Bernstein polynomial of
degree n without any loss of generality. 162 Unlike the basis of Lagrange
polynomials, which needs to be computed for the specific points it in-
terpolates, Bernstein basis B is a collection of functions that are com-
puted for a desired accuracy of approximation n and is independent of
coordinates: 163
$B_k^n(t) := \binom{n}{k}\,(1-t)^{n-k}\,t^k, \quad k = 0, 1, \dots, n$

where t is a parameter. 164 Therefore, a Bernstein polynomial p(t) is the


approximation of a given continuous function f constructed as a linear
combination of parametric Bernstein basis functions: 165
$p_n(t) := \sum_{k=0}^{n} f(k/n)\, B_k^n(t)$

This approximation schema is very simple and elegant, but it converges


very slowly, requiring a high polynomial degree n to approximate the

161 Farouki, “The Bernstein Polynomial Basis,” 383.


162 Farouki, 383.
163 Rockwood and Chambers, Interactive Curves and Surfaces, 36 and Farouki, “The
Bernstein Polynomial Basis,” 383.
164 Bellucci, “On the explicit representation of orthonormal Bernstein Polynomials,” 2.
165 Farouki, “The Bernstein Polynomial Basis,” 383, 385.

given function precisely. 166 For that reason, Bernstein polynomial ap-
proximants have remained theory rather than practice. 167
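A short sketch (my own illustration of the schema above, with an arbitrary test function) approximates a continuous function by a Bernstein polynomial and shows the slow convergence as the degree n grows:

from math import comb, sin, pi

# Bernstein approximation of a continuous function f on [0, 1]:
# p_n(t) = sum_k f(k/n) * C(n, k) * (1 - t)^(n - k) * t^k
def bernstein_approx(f, n, t):
    return sum(f(k / n) * comb(n, k) * (1 - t) ** (n - k) * t ** k
               for k in range(n + 1))

f = lambda t: sin(pi * t)
for n in (5, 20, 100):
    error = max(abs(f(t / 200) - bernstein_approx(f, n, t / 200)) for t in range(201))
    print(n, round(error, 4))   # the maximum error shrinks only slowly as n grows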

Fig. 6
Bernstein
polynomial
approximation.
(Farouki, 2012)

However, approximation was not the end for Bernstein polynomials.


The generality of these basis functions allowed for the free creation of poly-
nomials with arbitrary coefficients in the Bernstein basis. Such construc-
tions are called polynomials in Bernstein form and, in the 1960s, became
a cornerstone of computer-aided geometrical design. 168

Algebraic functions and their graphs Given an arbitrary poly-


nomial and its geometric interpretation, it is sometimes possible to extract
valuable information about it, such as its zeros or degree, by simply graphing
it. 169 Depending on the particular choice of equation used to represent it, dif-
ferent properties can be investigated, for example the generalised parabola
x^p − y^q = 0 or the folium of Descartes x^3 + y^3 + axy = 0. 170
When learning algebra at school, students are often first present-
ed with the notion of a function that comes in a form y=f (x). As x varies,
the value of y is computed by the function f and, when graphed, a pair of
coordinates (x, y) gradually forms a curve. 171 Variable y is presented as
dependent on x and the form of their dependence is captured in an explicit
form. In analytic geometry, however, an equation of the unit circle is pre-
sented as x^2 + y^2 = 1. Here, the focus is on the interdependence of x and y,
which is equated with the constant representation of a radius of a circle,
capturing the relationship between x and y in an implicit form.

166 Sometimes the required degrees exceeded hundreds (or even thousand), which was
impossible to compute at the beginning of the 20th century without computers.
167 Farouki, “The Bernstein Polynomial Basis,” 385.
168 Farouki, 380.
169 Barbeau, Polynomials, 71.
170 Brieskorn and Knörrer, Plane Algebraic Curves, 88.
171 Rockwood and Chambers, Interactive Curves and Surfaces, 10–11.



Let us look at this from the perspective of graphs. Both implicit and explicit
formulas can be plotted, and their graphs can represent either a curve or a
surface. However, the explicit form will be quite deficient in this respect,
due to the nature of the relation it embodies. Since f (x) is a functional map-
ping, only a single value of y can be assigned to each value of x by definition.
Therefore, a vertical line cannot be represented, as a function cannot have
infinite slope; 172 the derivative f ' (x) is not defined parallel to the y axis; a cir-
cle cannot be defined due to the intersection of two points for each x, etc. 173
Another form of equations is not subjected to such limitations.
Parametric forms of equations offer advantages to the implicit form 174, in-
cluding a method of parameterisation, which defines motion on a curve 175 and
allows for easier differentiation. 176 Parametric forms remove the direct depen-
dency of the variables y and x, as in the function y=f (x) by introducing a third
variable t called a parameter, upon which both x and y are made dependent.
x = f(t)
y = g(t)
To represent a vertical line in this form is trivial, as well as the unit circle
x = cos(t )
y = sin (t )
where the value of the parameter t traces the circle while varying in the do-
main 0 ≤ t ≤ 2π. To represent a polynomial—as we have seen with the example
of the Bernstein form—the formula does not change and can be written as:

$p(t) = a_0 + a_1 t + a_2 t^2 + \dots + a_n t^n$

Here p(t) takes the form of a vector-valued function of a scalar parameter t


whose vectors are not restricted to a specific dimensionality. 177 If the vec-
tors are three-dimensional, the function p(t) contains three coordinate
functions x(t), y(t), and z(t). 178 The function p(t) is called a curve 179, the
slope of which is given by the tangent line at any point and computed
by finding the derivative vector [x’(t), y’(t)] at any value of t. 180As the

172 Rockwood and Chambers, 11–12.


173 Rockwood and Chambers, 11–12.
174 Rockwood and Chambers, 12.
175 “Motion on the curve refers to the way that the point (x, y) traces out the curve.”
Rockwood and Chambers, 12. In this respect parametric representation allows us to
control position and speed of an object moving along a curve, or tells us when the object
will be at a particular location. The parameter gives us direction and speed.
176 “Vertical tangent vectors can be defined by differentiation, for instance, which is not
possible in explicit Cartesian form.” Rockwood and Chambers, 6.
177 Rockwood and Chambers, 17.
178 Rockwood and Chambers, 17.
179 Golovanov, Geometric Modeling, 13.
180 Rockwood and Chambers, Interactive Curves and Surfaces, 15.

parameter t changes, the derivative vector computed at each point cor-
responds to the speed at which the point traces out the curve. 181
Lissajous curves were investigated in the 19th century in con-
nection with vibration problems and with the motion of a double pen-
dulum. 182 In the early days of computer graphics, parametric represen-
tation allowed displaying these curves on an oscilloscope for the first
time. 183 The parametric equation of one such a curve can be written as:
x = cos(3t )
y = sin (2t )

Variation of the value of the parameter t in the domain 0 ≤ t ≤ 2π draws


the Lissajous curve. Sampling the values of the parameter t at certain
steps reveals the direction in which the curve unfolds, corresponding to
the motion of the point that traces it.
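A brief sketch (my own) samples the parametric Lissajous equations given above at discrete steps of t, which is essentially what Figure 7 visualises:

from math import cos, sin, pi

# Sample the Lissajous curve x = cos(3t), y = sin(2t) over 0 <= t <= 2*pi.
def lissajous(samples=20):
    for i in range(samples + 1):
        t = 2 * pi * i / samples
        print(f"t = {i / samples:.2f}  ->  ({cos(3 * t):+.3f}, {sin(2 * t):+.3f})")

lissajous()   # prints the swept coordinates, revealing the direction of motion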
Fig. 7 A parametric Lissajous curve evaluated in 20 steps, from t = 0.05 to t = 1.00.

181 Rockwood and Chambers, 15.


182 Brieskorn and Knörrer, Plane Algebraic Curves, 65.
183 “Lissajous figure on an oscilloscope, displaying a 3:1 relationship between the frequen-
cies of the vertical and horizontal sinusoidal inputs, respectively.” Peddie, The History
of Visual Magic in Computers, 295.



Another advantage of parametrically defined curves is that they generalise
more easily to higher dimensions. For example, by adding another equa-
tion for the coordinate z we obtain a Lissajous curve in three dimensions:

x = cos(3t )
y = sin (2t )
z= t

Fig. 8 A parametric Lissajous curve in three dimensions.

Again we can see how the “analytic base allows interesting curves to be
defined through mere generalisation of the form of equations.” 184

Splines: Pragmatics of mathematical curves The discus-


sion of curves thus far has included only analytic curves 185, which have
a limited practical value. The period after the Second World War saw the
pragmatic revival of mathematical curves following the development of

184 Brieskorn and Knörrer, Plane Algebraic Curves, 87.


185 “We will designate curves as analytic curves if their coordinates in some local coordi-
nate system can be described by analytic functions without using points, vectors, or
other curves.” Golovanov, Geometric Modeling, 17.

computers. This pragmatism was embodied in the idea that curves could
be used as a geometric modelling tool to not only represent but also
design arbitrary shapes. 186 The breakthrough happened in the 1950s
with the work of French engineers Pierre Bézier at Renault and Paul de
Casteljau at Citroën—who independently developed a smooth type of
curve, created on a Bernstein polynomial basis 187 that could be con-
structed on a freely specified set of points. 188 The curve became known
as the Bézier curve 189 and the family of parametric curves as splines. 190
De Casteljau and Bézier turned the approximation inefficiency of
Bernstein polynomials to their advantage, converting it into a math-
ematical tool that would allow them “to create complex shapes for the
automobile industry, to replace clay models.” 191
Geometric modelling is characterised by a progression of geo-
metric elements from simple primitives, such as points and lines to-
wards more complex shapes. 192 To illustrate this, we will demonstrate
how to derive Bézier curves by using only elementary algebra. The dem-
onstration starts with the parametric equations of a line in two dimen-
sions, passing through two points (x_0, y_0) and (x_1, y_1):

$x = x_0 + (x_1 - x_0)\,t$
$y = y_0 + (y_1 - y_0)\,t$

This equation can be rewritten as:


$x = (1 - t)\,x_0 + t \cdot x_1$
$y = (1 - t)\,y_0 + t \cdot y_1$

These equations are in fact the same. Rewriting them exposes their poly-
nomial basis {1-t, t} that multiplies coordinate points 193 and shows that the
coefficients are precisely those discrete values that will produce the desired

186 “The initial use of CAGD was to represent the data as a smooth surface for numeri-
cal control. It soon became apparent that the surfaces could be used for the design.”
Rockwood and Chambers, Interactive Curves and Surfaces, 7.
187 “Ultimately, the work of de Casteljau and Bézier lead to adoption of the Bernstein form,
typified by what is now called a Bézier curve.” Farouki, “The Bernstein Polynomial
Basis,” 386.
188 Farin, Curves and Surfaces for CAGD, xvi.
189 “By taking a linear combination of Bernstein polynomials we can define a generalized
parametric curve over the interval [a, b], which is known as the Bézier curve.” Bellucci,
“On the explicit representation of orthonormal Bernstein Polynomials,” 3.
190 Computationally speaking, splines are parametric curves, whose computation “may be
broken down into seemingly trivial steps—sequences of linear interpolations.” Farin,
Curves and Surfaces for CAGD, 25.
191 Farouki, “The Bernstein Polynomial Basis,” 386.
192 Rockwood and Chambers, Interactive Curves and Surfaces, 3.
193 Rockwood and Chambers, 17.



interpolation in the form of a continuous line equation. 194 In that respect,
the coefficients (x_0, y_0) and (x_1, y_1) are called control points of the line. 195 In
a more general form, the parametric equation of the line can be defined as:

$l(t) = (1 - t)\,p_0 + t\,p_1$ 196

Here the parameter t takes value in the range 0 ≤ t ≤ 1, defining the line
segment; p_0 and p_1 can be vectors of arbitrary dimensionality:

$p_i = \begin{pmatrix} x_i \\ y_i \end{pmatrix}$

but in our specific case they are:

$p_0 = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} \quad \text{and} \quad p_1 = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}$

The expressions 1-t and t that make up the polynomial basis are also
called blending functions, as each of them influences its respective co-
ordinate, acting as a weight.
blend [2p]    x     y
1 − t         x_0   y_0
t             x_1   y_1

Constructing a line between points is the simplest example of linear


interpolation, which is perhaps the most fundamental concept in geo-
metric modelling. 197 However, even for this simplest case, other forms
of linear interpolation are possible, also resulting in the straight-line
interpolation of two points. Choosing a different basis/blending func-
tion can produce the same result with a different parametrisation—that
is, the motion of a particle at t tracing a line will be different. 198
Let us give a practical example of a curve interpolating the two
points A={1, 2}; B={8, 5}. Replacing the variables with numbers in the
previously given equations and tables, gives:
x = (1 – t ) 1 + 8t
y = (1 – t ) 2 + 5t

194 “Although a parametric curve or surface (a vector-valued function of one or two vari-
ables) is an infinitude of points, its computer representation must employ just a fi-
nite data set. The mapping from the finite set of input values to a continuous locus is
achieved by interpreting those values as coefficients for certain basis functions in the
parametric variables.” Farouki, “The Bernstein Polynomial Basis,” 386.
195 Rockwood and Chambers, Interactive Curves and Surfaces, 34.
196 Golovanov, Geometric Modeling, 27.
197 “All subsequent curves and surfaces are defined by repeated linear interpolation in
some form.” Rockwood and Chambers, Interactive Curves and Surfaces, 28.
198 Rockwood and Chambers, 28.

blend [2p]    x    y
1 − t         1    2
t             8    5

Rewriting the previous equations gives the equation of the line interpolat-
ing between points A and B defined parametrically in the range 0 ≤ t ≤ 1:

x = 1 + 7t
y = 2 + 3t

By replacing the variable t with numbers ranging from 0 to 1, we can


derive the corresponding x and y coordinates sweeping the line.
t    x    y
0    1    2
1    8    5

Fig. 9 Graph of a parametrically defined line segment from {1, 2} at t = 0 to {8, 5} at t = 1.
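A compact sketch (my own) evaluates this parametric line at several values of t, reproducing the sweep shown in Figure 9:

# Linear interpolation between two control points A and B:
# point(t) = (1 - t) * A + t * B, for 0 <= t <= 1.
def lerp(a, b, t):
    return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))

A, B = (1, 2), (8, 5)
for i in range(5):
    t = i / 4
    print(t, lerp(A, B, t))   # sweeps from (1.0, 2.0) at t = 0 to (8.0, 5.0) at t = 1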

The successive composition of multiple first-degree curves defines a


polyline. It is the simplest compound curve constructed on a set of points,
where line segments sequentially connect the given points. 199
The next level would be to interpolate between three points on a
plane: (x_0, y_0), (x_1, y_1), and (x_2, y_2). The interpolation consists of three steps:
First we interpolate between points (x_0, y_0) and (x_1, y_1):

$B_{01} = (1 - t)\,p_0 + t \cdot p_1$

Then we interpolate between points (x_1, y_1) and (x_2, y_2):

$B_{12} = (1 - t)\,p_1 + t \cdot p_2$

199 Golovanov, Geometric Modeling, 19.



Lastly, we interpolate between the two newly obtained interpolating
functions, just as we would between points:
$B_{012} = (1 - t) \cdot B_{01} + t \cdot B_{12}$
$\qquad\; = (1 - t)\,\bigl((1 - t)\,p_0 + t \cdot p_1\bigr) + t\,\bigl((1 - t)\,p_1 + t\,p_2\bigr)$

When we distribute the terms in the equation and rewrite it to expose


the polynomial basis we get the interpolation formula for three points.

$B_{012} = (1 - t)^2 \cdot p_0 + 2t(1 - t) \cdot p_1 + t^2 \cdot p_2$

Points p_i define the control polyline. 200 This is expressed in the following table:

blend [3p]     x     y
(1 − t)^2      x_0   y_0
2t(1 − t)      x_1   y_1
t^2            x_2   y_2

It is interesting to notice that the three basis (blending) polynomials that


we have just derived are in fact the same expressions that would be the
result of evaluating the previously given Bernstein basis function for n=2:
$B_k^n(t) := \binom{n}{k}\,(1-t)^{n-k}\,t^k, \quad k = 0, 1, 2$

In fact, Bézier curves are simply a linear combination of the Bernstein


basis with arbitrary vectors, where n reflects the number of both basis
polynomials and control points. 201
Let us give a practical example of a Bézier curve interpolating
the three points: A={-2, 2}; B={0, -1}; C={2, 2}. When we replace the
numbers in the previously given equations and tables, we obtain:
$x_{012} = (1 - t)^2 \cdot x_0 + 2t(1 - t) \cdot x_1 + t^2 \cdot x_2$
$y_{012} = (1 - t)^2 \cdot y_0 + 2t(1 - t) \cdot y_1 + t^2 \cdot y_2$

blend [3p]     x     y
(1 − t)^2     −2     2
2t(1 − t)      0    −1
t^2            2     2
When we expand these expressions, we obtain a parametric equation of
the curve interpolating points A, B and C defined in the range 0 ≤ t ≤ 1:

200 Golovanov, 28.


201 Bellucci, “On the explicit representation of orthonormal Bernstein Polynomials,” 3.

x = −2 + 4t
y = 2 − 6t + 6t^2

By replacing the variable t with numbers ranging from 0 to 1, we derive


the corresponding x and y coordinates sweeping the line.
t x y
0 –2 2
0.2 –1.2 1.04
0.4 –0.4 0.56
0.6 0.4 0.56
0.8 1.2 1.04
1 2 2
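The table above can be reproduced with a few lines of code (my own sketch of the quadratic Bernstein/Bézier evaluation used in this example):

# Quadratic Bézier curve via the Bernstein basis {(1-t)^2, 2t(1-t), t^2}
# applied to the control points A = (-2, 2), B = (0, -1), C = (2, 2).
def bezier2(p0, p1, p2, t):
    b0, b1, b2 = (1 - t) ** 2, 2 * t * (1 - t), t ** 2
    return (b0 * p0[0] + b1 * p1[0] + b2 * p2[0],
            b0 * p0[1] + b1 * p1[1] + b2 * p2[1])

A, B, C = (-2, 2), (0, -1), (2, 2)
for i in range(6):
    t = i / 5
    x, y = bezier2(A, B, C, t)
    print(round(t, 1), round(x, 2), round(y, 2))   # matches the table above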

fig. 10 Graph of a parametrically defined parabola through {−2, 2}, {0, −1} and {2, 2}, sampled in 6 intervals.

If we wish to interpolate a curve in a higher resolution, we sample the


parameter t in smaller increments yielding additional (x, y) points:

fig. 11 Graph of a parametrically defined parabola sampled in 21 intervals.



I will conclude with the interpolation between four points on a plane:
(x_0, y_0), (x_1, y_1), (x_2, y_2), and (x_3, y_3), which is the standard form of the
Bézier curve. By repeating the procedure from the example of three
points, or by simply evaluating the Bernstein basis function for n = 3, we obtain the poly-
nomial basis represented in a table:

blend [4p]     x     y
(1 − t)^3      x_0   y_0
3t(1 − t)^2    x_1   y_1
3t^2(1 − t)    x_2   y_2
t^3            x_3   y_3

Finally, let us show a practical example of a Bézier curve interpolating


four points: A={1, 2}; B={3, 7}; C={8, -5}; and D={10, 8}. Depending on
the sampling increment of the parameter t, the Bézier curve can be vi-
sualised in a required resolution.

Fig. 12 Graph of a parametrically defined cubic through {1, 2}, {3, 7}, {8, −5} and {10, 8}, sampled in 11 intervals.
Fig. 13 Graph of a parametrically defined cubic sampled in 51 intervals.

The first and the last basis functions in the table are the only ones that reach
the value 1 (at t = 0 and t = 1 respectively), and thus have the most influence on their respective points.
Because of that, the first and the last are the only points through which
the curve will pass and through which the curve will be cotangent to the
control polygon. 202 The curve can approach the other two points with
lesser influence, but it can never pass through them. 203
To illustrate the mathematical concepts behind it, we have evaluated
the Bézier curve by direct substitution, that is by replacing the variable t with
numbers ranging from 0 to 1 and deriving the coordinates. Technically, this
is probably the least effective method of evaluating a point on the curve. 204
Raising small floating-point numbers to high powers by using computers is
a very error-prone operation. 205 One fast and numerically stable method to
evaluate the Bézier curve is the de Casteljau algorithm. This algorithm allows us
“to calculate any point of the Bezier curve using the control points without
knowing anything about the Bernstein functions.” 206 Furthermore, it is ap-
plicable for computing derivatives 207 and subdividing the curve. 208
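A compact sketch of the de Casteljau scheme (my own implementation of the repeated linear interpolation described above, not code from the cited sources) evaluates a Bézier curve of any degree from its control points alone:

# De Casteljau evaluation of a Bézier curve: repeatedly interpolate adjacent
# control points linearly until a single point remains. No Bernstein functions needed.
def de_casteljau(points, t):
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# The cubic example from the text: A = {1, 2}, B = {3, 7}, C = {8, -5}, D = {10, 8}.
control = [(1, 2), (3, 7), (8, -5), (10, 8)]
for i in range(6):
    t = i / 5
    print(round(t, 1), tuple(round(c, 2) for c in de_casteljau(control, t)))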
Most classes of parametric curves in geometric modelling are de-
fined as linear combinations of certain basis functions. 209 An extension of Bézier
curves, rational Bézier curves, have additional weight attributes attached
to their control points, which allows them to accurately represent conic

202 Rockwood and Chambers, Interactive Curves and Surfaces, 34.


203 Rockwood and Chambers, 4.
204 Rockwood and Chambers, 42.
205 Rockwood and Chambers, 42.
206 Golovanov, Geometric Modeling, 30.
207 Derivative can be defined as: “Vector directed along the tangent to the curve in the
direction in which the parameter increases.” Golovanov, 13.
208 Rockwood and Chambers, Interactive Curves and Surfaces, 42–43.
209 Hoffmann, Geometric and Solid Modeling, 168–69.

50 TOWARDS COMMUNICATION IN CAAD


Random documents with unrelated
content Scribd suggests to you:
containing a part of this work or any other work associated with
Project Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute


this electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the
Project Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if
you provide access to or distribute copies of a Project
Gutenberg™ work in a format other than “Plain Vanilla ASCII” or
other format used in the official version posted on the official
Project Gutenberg™ website (www.gutenberg.org), you must,
at no additional cost, fee or expense to the user, provide a copy,
a means of exporting a copy, or a means of obtaining a copy
upon request, of the work in its original “Plain Vanilla ASCII” or
other form. Any alternate format must include the full Project
Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,


performing, copying or distributing any Project Gutenberg™
works unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or


providing access to or distributing Project Gutenberg™
electronic works provided that:

• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who


notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of


any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project


Gutenberg™ electronic work or group of works on different
terms than are set forth in this agreement, you must obtain
permission in writing from the Project Gutenberg Literary
Archive Foundation, the manager of the Project Gutenberg™
trademark. Contact the Foundation as set forth in Section 3
below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend


considerable effort to identify, do copyright research on,
transcribe and proofread works not protected by U.S. copyright
law in creating the Project Gutenberg™ collection. Despite these
efforts, Project Gutenberg™ electronic works, and the medium
on which they may be stored, may contain “Defects,” such as,
but not limited to, incomplete, inaccurate or corrupt data,
transcription errors, a copyright or other intellectual property
infringement, a defective or damaged disk or other medium, a
computer virus, or computer codes that damage or cannot be
read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except


for the “Right of Replacement or Refund” described in
paragraph 1.F.3, the Project Gutenberg Literary Archive
Foundation, the owner of the Project Gutenberg™ trademark,
and any other party distributing a Project Gutenberg™ electronic
work under this agreement, disclaim all liability to you for
damages, costs and expenses, including legal fees. YOU AGREE
THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT
LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT
EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE
THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you


discover a defect in this electronic work within 90 days of
receiving it, you can receive a refund of the money (if any) you
paid for it by sending a written explanation to the person you
received the work from. If you received the work on a physical
medium, you must return the medium with your written
explanation. The person or entity that provided you with the
defective work may elect to provide a replacement copy in lieu
of a refund. If you received the work electronically, the person
or entity providing it to you may choose to give you a second
opportunity to receive the work electronically in lieu of a refund.
If the second copy is also defective, you may demand a refund
in writing without further opportunities to fix the problem.

1.F.4. Except for the limited right of replacement or refund set


forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’,
WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied


warranties or the exclusion or limitation of certain types of
damages. If any disclaimer or limitation set forth in this
agreement violates the law of the state applicable to this
agreement, the agreement shall be interpreted to make the
maximum disclaimer or limitation permitted by the applicable
state law. The invalidity or unenforceability of any provision of
this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the
Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and
distribution of Project Gutenberg™ electronic works, harmless
from all liability, costs and expenses, including legal fees, that
arise directly or indirectly from any of the following which you
do or cause to occur: (a) distribution of this or any Project
Gutenberg™ work, (b) alteration, modification, or additions or
deletions to any Project Gutenberg™ work, and (c) any Defect
you cause.

Section 2. Information about the Mission of Project Gutenberg™

Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new
computers. It exists because of the efforts of hundreds of
volunteers and donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project
Gutenberg™’s goals and ensuring that the Project Gutenberg™
collection will remain freely available for generations to come. In
2001, the Project Gutenberg Literary Archive Foundation was
created to provide a secure and permanent future for Project
Gutenberg™ and future generations. To learn more about the
Project Gutenberg Literary Archive Foundation and how your
efforts and donations can help, see Sections 3 and 4 and the
Foundation information page at www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-
profit 501(c)(3) educational corporation organized under the
laws of the state of Mississippi and granted tax exempt status
by the Internal Revenue Service. The Foundation’s EIN or
federal tax identification number is 64-6221541. Contributions
to the Project Gutenberg Literary Archive Foundation are tax
deductible to the full extent permitted by U.S. federal laws and
your state’s laws.

The Foundation’s business office is located at 809 North 1500


West, Salt Lake City, UT 84116, (801) 596-1887. Email contact
links and up to date contact information can be found at the
Foundation’s website and official page at
www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission
of increasing the number of public domain and licensed works
that can be freely distributed in machine-readable form
accessible by the widest array of equipment including outdated
equipment. Many small donations ($1 to $5,000) are particularly
important to maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws
regulating charities and charitable donations in all 50 states of
the United States. Compliance requirements are not uniform
and it takes a considerable effort, much paperwork and many
fees to meet and keep up with these requirements. We do not
solicit donations in locations where we have not received written
confirmation of compliance. To SEND DONATIONS or determine
the status of compliance for any particular state visit
www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states
where we have not met the solicitation requirements, we know
of no prohibition against accepting unsolicited donations from
donors in such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot
make any statements concerning tax treatment of donations
received from outside the United States. U.S. laws alone swamp
our small staff.

Please check the Project Gutenberg web pages for current
donation methods and addresses. Donations are accepted in a
number of other ways including checks, online payments and
credit card donations. To donate, please visit:
www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could
be freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose
network of volunteer support.

Project Gutenberg™ eBooks are often created from several
printed editions, all of which are confirmed as not protected by
copyright in the U.S. unless a copyright notice is included. Thus,
we do not necessarily keep eBooks in compliance with any
particular paper edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg
Literary Archive Foundation, how to help produce our new
eBooks, and how to subscribe to our email newsletter to hear
about new eBooks.