Presentation
Architecture styles can be classified into two main types: monolithic (single deployment unit of all code) and distributed (multiple deployment units connected through remote
access protocols). While no classification scheme is perfect, distributed architectures all share a common set of challenges and issues not found in the monolithic architecture
styles, making this classification scheme a good separation between the various architecture styles. In this book we will describe in detail the following architecture styles:
Monolithic
Layered architecture (Chapter 10)
Distributed
Service-based architecture (Chapter 13)
The network is not reliable: Service B may be perfectly healthy, but Service A cannot reach it due to a network problem; or, even worse, Service A makes a request to
Service B to process some data and never receives a response because of a network issue. This is why safeguards such as timeouts and circuit breakers exist between
services. The more a system relies on the network (as a microservices architecture does), the less reliable it potentially becomes.
Latency is not zero: do you know what the average round-trip latency is for a RESTful call in your production environment? Is it 60 milliseconds? Is it 500
milliseconds? Assuming an average of 100 milliseconds of latency per request, chaining together 10 service calls to perform a particular business function adds
1,000 milliseconds to the request! Knowing the average latency is important, but knowing the 95th to 99th percentile is even more important. While the
average latency might be only 60 milliseconds (which is good), the 95th percentile might be 400 milliseconds! It is usually this “long tail” latency that kills
performance in a distributed architecture. In most cases, architects can get latency values from a network administrator.
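The average-versus-long-tail distinction is easy to demonstrate with arithmetic. The latency samples below are hypothetical, chosen to mirror the numbers in the text: most calls are fast, but a small fraction are slow, so the mean looks healthy while the 95th percentile does not.

```python
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile: the value at or below which roughly
    `pct` percent of the sorted samples fall."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

# Hypothetical round-trip latencies in ms: 90 fast calls, 10 slow ones.
latencies = [55] * 90 + [400] * 10

print(statistics.mean(latencies))   # 89.5 — the average looks fine
print(percentile(latencies, 95))    # 400  — the long tail shows up here
```

A monitoring system would of course compute these over real production samples; the point is that reporting only the mean hides exactly the tail that dominates user-visible performance when calls are chained.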
Bandwidth is not infinite: in a distributed architecture such as microservices, communication to and between services consumes significant bandwidth, causing networks
to slow down, thereby impacting latency (fallacy #2) and reliability (fallacy #1).
Stamp coupling in distributed architectures consumes significant amounts of bandwidth. If the customer profile service were to pass back only the data needed by
the wish list service (in this case 200 bytes), the total bandwidth used to transmit the data would be only 400 kb. Stamp coupling can be resolved in several ways, such as
creating private RESTful API endpoints or using field selectors in the contract, so that only the data each consumer needs is passed.
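The 400 kb figure works out as simple arithmetic. The request volume and the size of the full profile payload below are assumptions on our part (2,000 requests is the volume that yields the text's 400 kb figure; 500 KB is a hypothetical full-profile size), but the comparison shows why stamp coupling is so expensive:

```python
requests = 2_000          # assumed request volume behind the 400 kb figure
slim_payload = 200        # bytes actually needed by the wish list service
full_payload = 500_000    # hypothetical full customer-profile payload (500 KB)

# Passing only the needed fields: 2,000 * 200 bytes = 400,000 bytes (~400 kb)
print(requests * slim_payload)

# Stamp coupling (full profile every time): 2,000 * 500 KB = ~1 GB
print(requests * full_payload)
```

The ratio, not the absolute numbers, is the point: shipping the whole profile when 200 bytes would do multiplies bandwidth use by a factor of 2,500 under these assumptions.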
Distributed architectures rely on what is called eventual consistency to ensure the data processed by separate deployment units is at some unspecified point in time all
synchronized into a consistent state. This is one of the trade-offs of distributed architecture: high scalability, performance, and availability at the sacrifice of data consistency
and data integrity.
Transactional sagas are one way to manage distributed transactions. Sagas utilize either event sourcing for compensation or finite state machines to manage the state of the
transaction. In addition to sagas, BASE transactions are used. BASE stands for (B)asic availability, (S)oft state, and (E)ventual consistency. BASE transactions are not a piece of
software, but rather a technique. Soft state in BASE refers to the transit of data from a source to a target, as well as the inconsistency between data sources. Based on the
basic availability of the systems or services involved, the systems will eventually become consistent through the use of architecture patterns and messaging.
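The compensation idea behind sagas can be sketched in a few lines. This is a toy illustration of the pattern, not a production saga orchestrator; the step names (order, payment, inventory) are hypothetical.

```python
def run_saga(steps):
    """Execute (action, compensation) pairs in order. If any action fails,
    run the compensations for the already-completed steps in reverse,
    undoing the partial work instead of leaving the system inconsistent."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):
                undo()
            return False
    return True

# Hypothetical three-step order saga; the last step fails on purpose.
log = []

def reserve_inventory():
    raise RuntimeError("inventory service unavailable")

ok = run_saga([
    (lambda: log.append("place order"),  lambda: log.append("cancel order")),
    (lambda: log.append("take payment"), lambda: log.append("refund payment")),
    (reserve_inventory,                  lambda: log.append("release inventory")),
])
print(ok, log)
```

Note that compensation is not a rollback in the ACID sense: the intermediate states were visible to other services while the saga ran, which is exactly the soft state and eventual consistency that BASE describes.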
Contract versioning and version
Layered architecture
Every layered architecture will have at least some scenarios that fall into the architecture sinkhole anti-pattern. The key to determining
whether the architecture sinkhole anti-pattern is at play is to analyze the percentage of requests that fall into this category. The 80-20 rule is
usually a good practice to follow. For example, it is acceptable if only 20 percent of the requests are sinkholes. However, if 80 percent of the
requests are sinkholes, it is a good indicator that the layered architecture is not the correct architecture style for the problem domain. Another
approach to solving the architecture sinkhole anti-pattern is to make all the layers in the architecture open, realizing, of course, that the
trade-off is increased difficulty in managing change within the architecture.
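A sinkhole request is easy to recognize in code: a request passes through a layer that adds no value. In the sketch below (class and method names are ours, purely illustrative), the business layer does nothing but delegate, so the call is pure pass-through ceremony; opening the layers would let the presentation layer call persistence directly for such requests, at the cost of looser change control.

```python
class PersistenceLayer:
    def find_customer(self, customer_id):
        # Would normally query the database; stubbed for illustration.
        return {"id": customer_id, "name": "stub"}

class BusinessLayer:
    """Pure pass-through: no rules applied, nothing calculated or
    aggregated — the hallmark of a sinkhole request."""
    def __init__(self, persistence):
        self.persistence = persistence

    def find_customer(self, customer_id):
        return self.persistence.find_customer(customer_id)

class PresentationLayer:
    def __init__(self, business):
        self.business = business

    def show_customer(self, customer_id):
        return self.business.find_customer(customer_id)

ui = PresentationLayer(BusinessLayer(PersistenceLayer()))
print(ui.show_customer(42))
```

Counting how many request paths look like `BusinessLayer.find_customer` here, versus paths where the middle layers genuinely do work, is exactly the 80-20 analysis described above.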
Why layered architecture?
The layered architecture is a good choice for small, simple applications or websites. It is also a good choice, particularly as a starting point, for situations with very tight budget and time constraints.
Because of the simplicity and familiarity among developers and architects, the layered architecture is perhaps one of the lowest-cost architecture styles, promoting ease of
development for smaller applications. The layered architecture style is also a good choice when an architect is still analyzing business needs and requirements and is unsure which
architecture style would be best.
Both deployability and testability rate very low for this architecture style. Deployability rates low due to the ceremony of deployment (effort to deploy), high risk, and lack of frequent
deployments. A simple three-line change to a class file in the layered architecture style requires the entire deployment unit to be redeployed, taking in potential database changes,
configuration changes, or other coding changes sneaking in alongside the original change. Furthermore, this simple three-line change is usually bundled with dozens of other changes,
thereby increasing deployment risk even further (as well as decreasing the frequency of deployment). The low testability rating also reflects this scenario; with a simple three-line
change, most developers are not going to spend hours executing the entire regression test suite (even if such a thing were to exist in the first place), particularly along with dozens of
other changes being made to the monolithic application at the same time. We gave testability a two-star rating (rather than one star) due to the ability to mock or stub components
(or even an entire layer), which eases the overall testing effort.
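The layer-stubbing technique that earns the extra testability star can be shown with Python's standard `unittest.mock`. The business-layer class and its method names are hypothetical; the point is that the entire persistence layer is replaced by a stub, so the business rule can be tested without a database.

```python
from unittest.mock import Mock

class BusinessLayer:
    def __init__(self, persistence):
        self.persistence = persistence

    def customer_discount(self, customer_id):
        # Business rule under test: loyal customers get 10% off.
        orders = self.persistence.orders_for(customer_id)
        return 0.1 if len(orders) >= 5 else 0.0

# Stub out the whole persistence layer — no database, no setup scripts.
persistence = Mock()
persistence.orders_for.return_value = ["o1", "o2", "o3", "o4", "o5"]

layer = BusinessLayer(persistence)
print(layer.customer_discount(42))  # 0.1
```

Because each layer depends only on the interface of the layer below it, swapping in a mock like this is cheap, which is precisely what eases the overall testing effort in this style.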
Overall reliability rates medium (three stars) in this architecture style, mostly due to the lack of network traffic, bandwidth, and latency found in most distributed architectures. We
only gave the layered architecture three stars for reliability because of the nature of the monolithic deployment, combined with the low ratings for testability (completeness of testing)
and deployment risk.
Elasticity and scalability rate very low (one star) for the layered architecture, primarily due to monolithic deployments and the lack of architectural modularity. Although it is possible
to make certain functions within a monolith scale more than others, this effort usually requires very complex design techniques such as multithreading, internal messaging, and other
parallel processing practices, techniques this architecture isn’t well suited for. However, because the layered architecture is always a single system quantum due to the monolithic
user interface, backend processing, and monolithic database, applications can only scale to a certain point based on the single quantum.
Performance is always an interesting characteristic to rate for the layered architecture. We gave it only two stars because the architecture style simply does not lend itself to high-
performance systems due to the lack of parallel processing, closed layering, and the sinkhole architecture anti-pattern. Like scalability, performance can be addressed through
caching, multithreading, and the like, but it is not a natural characteristic of this architecture style; architects and developers have to work hard to make all this happen.
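Of the workarounds mentioned, caching is the cheapest to add. A minimal sketch using Python's standard `functools.lru_cache` (the lookup table here is a hypothetical stand-in for an expensive database or remote call):

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def exchange_rate(currency):
    # Imagine an expensive lookup here (database query or remote call);
    # the hardcoded table is a stand-in for illustration.
    return {"EUR": 0.92, "GBP": 0.79}.get(currency, 1.0)

exchange_rate("EUR")   # first call: computed and cached
exchange_rate("EUR")   # second call: served from the cache
print(exchange_rate.cache_info())
```

This helps with repeated reads, but it illustrates the broader point: the performance work lives in application code bolted onto the architecture, rather than falling out of the architecture style itself.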
Layered architectures don’t support fault tolerance due to monolithic deployments and the lack of architectural modularity. If one small part of a layered architecture causes an out-of-
memory condition to occur, the entire application unit is impacted and crashes. Furthermore, overall availability is impacted due to the high mean-time-to-recovery (MTTR) usually
experienced by most monolithic applications, with startup times ranging anywhere from 2 minutes for smaller applications, up to 15 minutes or more for most large applications.
Pipeline architecture
Like the layered architecture, overall reliability rates medium (three stars) in this architecture style, mostly due to the lack of network
traffic, bandwidth, and latency found in most distributed architectures. We only gave it three stars for reliability because of the nature of
the monolithic deployment of this architecture style in conjunction with testability and deployability issues (such as having to test the
entire monolith and deploy the entire monolith for any given change).
Elasticity and scalability rate very low (one star) for the pipeline architecture, primarily due to monolithic deployments. Although it is
possible to make certain functions within a monolith scale more than others, this effort usually requires very complex design techniques
such as multithreading, internal messaging, and other parallel processing practices, techniques this architecture isn’t well suited for.
However, because the pipeline architecture is always a single system quantum due to the monolithic user interface, backend processing,
and monolithic database, applications can only scale to a certain point based on the single architecture quantum.
Pipeline architectures don’t support fault tolerance due to monolithic deployments and the lack of architectural modularity. If one small
part of a pipeline architecture causes an out-of-memory condition to occur, the entire application unit is impacted and crashes.
Furthermore, overall availability is impacted due to the high mean time to recovery (MTTR) usually experienced by most monolithic
applications, with startup times ranging anywhere from 2 minutes for smaller applications, up to 15 minutes or more for most large
applications.
Microkernel