The document discusses various architecture styles, categorizing them into monolithic and distributed types, each with unique challenges. It highlights the fallacies of distributed architectures, such as network reliability and latency issues, and contrasts them with the layered architecture's simplicity and cost-effectiveness. Additionally, it covers the pipeline architecture, emphasizing its modularity and separation of concerns through filters and pipes.


Architecture styles

Architecture styles can be classified into two main types: monolithic (single deployment unit of all code) and distributed (multiple deployment units connected through remote
access protocols). While no classification scheme is perfect, distributed architectures all share a common set of challenges and issues not found in the monolithic architecture
styles, making this classification scheme a good separation between the various architecture styles. In this book we will describe in detail the following architecture styles:

Monolithic
Layered architecture (Chapter 10)

Pipeline architecture (Chapter 11)

Microkernel architecture (Chapter 12)

Distributed
Service-based architecture (Chapter 13)

Event-driven architecture (Chapter 14)

Space-based architecture (Chapter 15)

Service-oriented architecture (Chapter 16)

Microservices architecture (Chapter 17)


Fallacies of distributed architectures

The network is not reliable: Service B may be totally healthy, but Service A cannot reach it due to a network problem; or even worse, Service A makes a request to Service B to process some data and never receives a response because of a network issue. This is why things like timeouts and circuit breakers exist between services. The more a system relies on the network (as a microservices architecture does), the less reliable it potentially becomes.
Latency is not zero: do you know what the average round-trip latency is for a RESTful call in your production environment? Is it 60 milliseconds? Is it 500 milliseconds? Assuming an average of 100 milliseconds of latency per request, chaining together 10 service calls to perform a particular business function adds 1,000 milliseconds to the request! Knowing the average latency is important, but even more important is knowing the 95th to 99th percentile. While the average latency might be only 60 milliseconds (which is good), the 95th percentile might be 400 milliseconds! It is usually this "long tail" latency that kills performance in a distributed architecture. In most cases, architects can get latency values from a network administrator.
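The arithmetic above is easy to sketch. The latency distribution below is synthetic (an assumption for illustration, not production data), but it shows how a small fraction of slow calls pulls the tail percentile far above the average, and how chaining calls multiplies whichever number you actually experience:

```python
import random
import statistics

random.seed(42)

# Hypothetical latency samples (ms) for a single RESTful call:
# 95% of calls cluster around 60 ms, but 5% form a slow "long tail".
samples = [random.gauss(60, 10) for _ in range(950)] + \
          [random.gauss(400, 50) for _ in range(50)]

avg = statistics.mean(samples)
p95 = statistics.quantiles(samples, n=100)[94]  # 95th percentile

print(f"average: {avg:.0f} ms, p95: {p95:.0f} ms")

# Chaining 10 service calls multiplies whatever latency you actually see.
print(f"10 chained calls at the average: {10 * avg:.0f} ms")
print(f"10 chained calls at the p95:     {10 * p95:.0f} ms")
```

Even though the average looks healthy, a business function that chains calls hits the tail latency far more often than the percentile alone suggests.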
Bandwidth is not infinite: in a distributed architecture such as microservices, communication to and between services significantly utilizes bandwidth, causing networks to slow down, thus impacting latency (fallacy #2) and reliability (fallacy #1).
Stamp coupling in distributed architectures consumes significant amounts of bandwidth. If the customer profile service were to pass back only the data needed by the wish list service (in this case 200 bytes), the total bandwidth used to transmit the data would be only 400 KB. Stamp coupling can be resolved in the following ways:

Create private RESTful API endpoints

Use field selectors in the contract

Use GraphQL to decouple contracts

Use value-driven contracts with consumer-driven contracts (CDCs)

Use internal messaging endpoints
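The field-selector idea from the list above can be sketched in a few lines. The profile contents and field names here are assumptions for illustration; the point is only that a contract which lets the consumer name the fields it needs avoids shipping the whole structure:

```python
import json

# Hypothetical customer profile; "order_history" stands in for the bulk
# of the payload that the wish list service never needs.
full_profile = {
    "id": 12345,
    "name": "Ada",
    "preferences": {"newsletter": True},
    "order_history": "x" * 400,
}

def fetch_profile(profile, fields=None):
    """Field-selector style contract: return only the requested fields."""
    if fields is None:
        return profile                      # stamp coupling: whole structure
    return {k: profile[k] for k in fields}  # only what the consumer asked for

stamped  = json.dumps(fetch_profile(full_profile)).encode()
selected = json.dumps(fetch_profile(full_profile, fields=["id", "name"])).encode()

print(f"full payload: {len(stamped)} bytes, selected: {len(selected)} bytes")
```

GraphQL and consumer-driven contracts generalize the same idea: the consumer, not the provider, decides how much of the structure crosses the wire.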


The network is not secure: each and every endpoint of each distributed deployment unit must be secured so that unknown or bad requests do not make it to that service. Having to secure every endpoint, even for interservice communication, is another reason performance tends to be slower in synchronous, highly distributed architectures such as microservices or service-based architecture.
The topology always changes: stay in touch with network administrators about network upgrades and changes, because a topology change can invalidate assumptions about latency and reliability.
There is more than one administrator: a typical large company has many network administrators, so coordinate with the right ones.
Transport cost is not zero: transport cost here does not refer to latency, but rather to the actual cost, in money, associated with making a "simple RESTful call." Whenever embarking on a distributed architecture, we encourage architects to analyze the current server and network topology with regard to capacity, bandwidth, latency, and security zones so as not to get caught by surprise by this fallacy.
Other issues: performing root-cause analysis to determine why a particular order was dropped is very difficult and time-consuming in a distributed architecture due to the distribution of application and system logs.
Distributed transactions
Architects and developers take transactions for granted in a monolithic architecture world because they are so straightforward and easy to manage. Standard commits and
rollbacks executed from persistence frameworks leverage ACID (atomicity, consistency, isolation, durability) transactions to guarantee that the data is updated in a correct
way to ensure high data consistency and integrity. Such is not the case with distributed architectures.

Distributed architectures rely on what is called eventual consistency to ensure the data processed by separate deployment units is at some unspecified point in time all
synchronized into a consistent state. This is one of the trade-offs of distributed architecture: high scalability, performance, and availability at the sacrifice of data consistency
and data integrity.

Transactional sagas are one way to manage distributed transactions. Sagas use either event sourcing for compensation or finite state machines to manage the state of the transaction. In addition to sagas, BASE transactions are used. BASE stands for (B)asic availability, (S)oft state, and (E)ventual consistency. BASE transactions are not a piece of software, but rather a technique. Soft state in BASE refers to data in transit from a source to a target, as well as the inconsistency between data sources. Based on the basic availability of the systems or services involved, the systems will eventually become consistent through the use of architecture patterns and messaging.
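A saga's core mechanism, compensating actions in place of an ACID rollback, can be shown in a minimal sketch. The step names below are assumptions for illustration, not from the text; the point is that when a later step fails, earlier steps are undone by explicit compensating transactions, in reverse order:

```python
log = []

def debit_account():   log.append("debit account")
def credit_account():  log.append("credit account")   # compensates debit
def reserve_stock():   log.append("reserve stock")
def release_stock():   log.append("release stock")    # compensates reserve
def ship_order():      raise RuntimeError("shipping service unavailable")

def run_saga(steps):
    """Run (action, compensate) pairs; on failure, undo completed steps."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):  # undo in reverse order
            compensate()
        return False
    return True

ok = run_saga([
    (debit_account, credit_account),
    (reserve_stock, release_stock),
    (ship_order, lambda: None),
])
print(ok, log)
```

Between the failure and the last compensation, the system is in the "soft state" BASE describes: temporarily inconsistent, eventually reconciled.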
Contract maintenance and versioning is another hard part of distributed architectures.
Layered architecture

Also called the n-tiered architecture; known for its simplicity and low cost.


Topology : Components within the layered architecture style are organized into logical horizontal layers, with each layer performing a specific role within the application (such as
presentation logic or business logic). Although there are no specific restrictions in terms of the number and types of layers that must exist, most layered architectures consist of
four standard layers: presentation, business, persistence, and database.
Variations of the deployment unit exist; many on-premises ("on-prem") products are built and delivered to customers using the variant in which the entire application, including the database, forms a single deployment unit.
The presentation layer would be responsible for handling all user interface and browser communication logic, whereas the business layer would be responsible for executing specific
business rules associated with the request. Each layer in the architecture forms an abstraction around the work that needs to be done to satisfy a particular business request. For
example, the presentation layer doesn’t need to know or worry about how to get customer data; it only needs to display that information on a screen in a particular format.
Similarly, the business layer doesn’t need to be concerned about how to format customer data for display on a screen or even where the customer data is coming from; it only
needs to get the data from the persistence layer, perform business logic against the data (such as calculating values or aggregating data), and pass that information up to the
presentation layer.
The trade-off of this benefit, however, is a lack of overall agility (the ability to respond quickly to change). Each business domain is spread throughout the layers: the domain of "customer," for example, is contained in the presentation layer, business layer, rules layer, services layer, and database layer, making it difficult to apply changes to that domain. As a result, a domain-driven design approach does not work as well with the layered architecture style.
Layers of isolation: each layer in the layered architecture style can be either closed or open. For example, to allow the presentation layer to access the persistence layer directly for simple read requests, bypassing any unnecessary layers (what used to be known in the early 2000s as the fast-lane reader pattern), the business and persistence layers would have to be open, allowing requests to bypass other layers. Which is better, open layers or closed layers? The answer lies in a key concept known as layers of isolation.
The layers of isolation concept means that changes made in one layer of the architecture generally don't impact or affect components in other layers, provided the contracts
between those layers remain unchanged. Each layer is independent of the other layers, thereby having little or no knowledge of the inner workings of other layers in the
architecture. However, to support layers of isolation, layers involved with the major flow of the request necessarily have to be closed. If the presentation layer can directly access
the persistence layer, then changes made to the persistence layer would impact both the business layer and the presentation layer, producing a very tightly coupled application
with layer interdependencies between components. This type of architecture then becomes very brittle, as well as difficult and expensive to change.
The layers of isolation concept also allows any layer in the architecture to be replaced without impacting any other layer (again, assuming well-defined contracts and the use of
the business delegate pattern). For example, you can leverage the layers of isolation concept within the layered architecture style to replace your older JavaServer Faces (JSF)
presentation layer with React.js without impacting any other layer in the application.
While closed layers facilitate layers of isolation and therefore help isolate change within the architecture, there are times when it makes
sense for certain layers to be open.
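The closed-layer discipline above can be sketched in a few classes. The layer contents here are assumptions for illustration; the structural point is that the presentation layer holds a reference only to the business layer, so the persistence layer can be replaced without touching presentation code:

```python
class PersistenceLayer:
    def load_customer(self, customer_id):
        # In a real application this would query the database.
        return {"id": customer_id, "name": "Ada", "balance_cents": 125000}

class BusinessLayer:
    """Closed layer: presentation must go through here, never below."""
    def __init__(self, persistence):
        self._persistence = persistence

    def customer_summary(self, customer_id):
        data = self._persistence.load_customer(customer_id)
        # Business logic: derive a display value from raw persisted data.
        data["balance"] = data.pop("balance_cents") / 100
        return data

class PresentationLayer:
    """Knows only the business layer's contract, nothing about persistence."""
    def __init__(self, business):
        self._business = business

    def render(self, customer_id):
        c = self._business.customer_summary(customer_id)
        return f"{c['name']}: ${c['balance']:.2f}"

view = PresentationLayer(BusinessLayer(PersistenceLayer()))
print(view.render(42))
```

If `PresentationLayer` were instead handed the `PersistenceLayer` directly, any change to the persisted data shape would ripple into the UI code, which is exactly the tight coupling the closed-layer rule prevents.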
Constraints: suppose the presentation layer is restricted from using a set of shared business objects. This scenario is difficult to govern and control, because architecturally the presentation layer has access to the business layer, and hence to the shared objects within that layer. Adding a new services layer architecturally restricts the presentation layer from accessing the shared business objects because the business layer is closed (see Figure 10-5). However, the new services layer must be marked as open; otherwise, the business layer would be forced to go through the services layer to access the persistence layer. Marking the services layer as open allows the business layer to either access that layer (as indicated by the solid arrow) or bypass it and go to the next layer down (as indicated by the dotted arrow in Figure 10-5).
Failure to document or properly communicate which layers in the architecture are open and closed (and why) usually results in tightly
coupled and brittle architectures that are very difficult to test, maintain, and deploy.
The architecture sinkhole anti-pattern occurs when requests move from layer to layer as simple pass-through processing with no business logic performed within each layer. This results in unnecessary object instantiation and processing, impacting both memory consumption and performance.

Every layered architecture will have at least some scenarios that fall into the architecture sinkhole anti-pattern. The key to determining
whether the architecture sinkhole anti-pattern is at play is to analyze the percentage of requests that fall into this category. The 80-20 rule is
usually a good practice to follow. For example, it is acceptable if only 20 percent of the requests are sinkholes. However, if 80 percent of the
requests are sinkholes, it is a good indicator that the layered architecture is not the correct architecture style for the problem domain. Another
approach to solving the architecture sinkhole anti-pattern is to make all the layers in the architecture open, realizing, of course, that the
trade-off is increased difficulty in managing change within the architecture.
Why use the layered architecture?

The layered architecture is a good choice for small, simple applications or websites. It is also a good architecture choice, particularly as a starting point, for situations with very tight budget and time constraints.
Because of the simplicity and familiarity among developers and architects, the layered architecture is perhaps one of the lowest-cost architecture styles, promoting ease of
development for smaller applications. The layered architecture style is also a good choice when an architect is still analyzing business needs and requirements and is unsure which
architecture style would be best.
Both deployability and testability rate very low for this architecture style. Deployability rates low due to the ceremony of deployment (effort to deploy), high risk, and lack of frequent
deployments. A simple three-line change to a class file in the layered architecture style requires the entire deployment unit to be redeployed, taking in potential database changes,
configuration changes, or other coding changes sneaking in alongside the original change. Furthermore, this simple three-line change is usually bundled with dozens of other changes, thereby increasing deployment risk even further (as well as decreasing the frequency of deployment). The low testability rating also reflects this scenario; with a simple three-line
change, most developers are not going to spend hours executing the entire regression test suite (even if such a thing were to exist in the first place), particularly along with dozens of
other changes being made to the monolithic application at the same time. We gave testability a two-star rating (rather than one star) due to the ability to mock or stub components
(or even an entire layer), which eases the overall testing effort.

Overall reliability rates medium (three stars) in this architecture style, mostly due to the lack of network traffic, bandwidth, and latency found in most distributed architectures. We
only gave the layered architecture three stars for reliability because of the nature of the monolithic deployment, combined with the low ratings for testability (completeness of testing)
and deployment risk.

Elasticity and scalability rate very low (one star) for the layered architecture, primarily due to monolithic deployments and the lack of architectural modularity. Although it is possible
to make certain functions within a monolith scale more than others, this effort usually requires very complex design techniques such as multithreading, internal messaging, and other
parallel processing practices, techniques this architecture isn’t well suited for. However, because the layered architecture is always a single system quantum due to the monolithic
user interface, backend processing, and monolithic database, applications can only scale to a certain point based on the single quantum.

Performance is always an interesting characteristic to rate for the layered architecture. We gave it only two stars because the architecture style simply does not lend itself to high-
performance systems due to the lack of parallel processing, closed layering, and the sinkhole architecture anti-pattern. Like scalability, performance can be addressed through
caching, multithreading, and the like, but it is not a natural characteristic of this architecture style; architects and developers have to work hard to make all this happen.

Layered architectures don’t support fault tolerance due to monolithic deployments and the lack of architectural modularity. If one small part of a layered architecture causes an out-of-
memory condition to occur, the entire application unit is impacted and crashes. Furthermore, overall availability is impacted due to the high mean-time-to-recovery (MTTR) usually
experienced by most monolithic applications, with startup times ranging anywhere from 2 minutes for smaller applications, up to 15 minutes or more for most large applications.
Pipeline architecture

Also known as the pipes and filters architecture; classic examples include Unix shells such as Bash and Zsh, and the MapReduce programming model.


The pipes and filters coordinate in a specific fashion, with pipes forming one-way
communication between filters, usually in a point-to-point fashion.
Pipes: pipes in this architecture form the communication channel between filters.
Each pipe is typically unidirectional and point-to-point (rather than broadcast) for
performance reasons, accepting input from one source and always directing output
to another. The payload carried on the pipes may be any data format, but
architects favor smaller amounts of data to enable high performance.
Filters Filters are self-contained, independent from other filters, and generally
stateless. Filters should perform one task only. Composite tasks should be handled
by a sequence of filters rather than a single one.
Four types of filters: producer, transformer, tester, and consumer.
Producer The starting point of a process, outbound only, sometimes called the source.
Transformer Accepts input, optionally performs a transformation on some or all of the data, then forwards it to the outbound pipe.
Functional advocates will recognize this feature as map.
Tester Accepts input, tests one or more criteria, then optionally produces output, based on the test. Functional programmers will recognize
this as similar to reduce.
Consumer The termination point for the pipeline flow. Consumers sometimes persist the final result of the pipeline process to a database,
or they may display the final results on a user interface screen.
Example : Electronic Data Interchange (EDI) tools use this pattern, building transformations from one document type to another using
pipes and filters.
Example: consider a service telemetry system in which various service telemetry information is sent from services via streaming to Apache Kafka, and the pipeline architecture style is used to process the different kinds of data streamed to Kafka. The Service Info Capture filter
(producer filter) subscribes to the Kafka topic and receives service information. It then sends this captured data to a tester filter called
Duration Filter to determine whether the data captured from Kafka is related to the duration (in milliseconds) of the service request. Notice
the separation of concerns between the filters; the Service Info Capture filter is only concerned about how to connect to a Kafka topic
and receive streaming data, whereas the Duration Filter is only concerned about qualifying the data and optionally routing it to the next
pipe. If the data is related to the duration (in milliseconds) of the service request, then the Duration Filter passes the data on to the
Duration Calculator transformer filter. Otherwise, it passes it on to the Uptime Filter tester filter to check if the data is related to uptime
metrics. If it is not, then the pipeline ends—the data is of no interest to this particular processing flow. Otherwise, if it is uptime metrics, it
then passes the data along to the Uptime Calculator to calculate the uptime metrics for the service. These transformers then pass the
modified data to the Database Output consumer, which then persists the data in a MongoDB database.
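The flow just described can be sketched without any Kafka machinery. The event shapes and wiring below are assumptions for illustration; the filter names follow the text, with each tester routing data onward and each transformer deriving new values before the consumer persists the result:

```python
def duration_filter(event):          # tester: is this a duration metric?
    return event.get("kind") == "duration"

def uptime_filter(event):            # tester: is this an uptime metric?
    return event.get("kind") == "uptime"

def duration_calculator(event):      # transformer: derive milliseconds
    return {**event, "ms": event["end"] - event["start"]}

database = []                        # stand-in for the MongoDB consumer

def pipeline(events):
    """Producer: iterate the captured stream and route through the filters."""
    for event in events:
        if duration_filter(event):
            database.append(duration_calculator(event))
        elif uptime_filter(event):
            database.append(event)
        # Anything else is of no interest to this flow; the pipeline ends.

pipeline([
    {"kind": "duration", "start": 100, "end": 160},
    {"kind": "cpu", "load": 0.7},            # dropped by both testers
    {"kind": "uptime", "up_seconds": 86400},
])
print(database)
```

Because each filter is stateless and single-purpose, any one of them (say, the duration calculation) can be swapped out without the testers or the consumer noticing.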
Architectural modularity is achieved through the separation of concerns between the various filter types and transformers. Any of these
filters can be modified or replaced without impacting the other filters
Deployability and testability, while only around average, rate slightly higher than the layered architecture due to the level of modularity
achieved through filters. That said, this architecture style is still a monolith, and as such, ceremony, risk, frequency of deployment, and
completion of testing still impact the pipeline architecture.

Like the layered architecture, overall reliability rates medium (three stars) in this architecture style, mostly due to the lack of network
traffic, bandwidth, and latency found in most distributed architectures. We only gave it three stars for reliability because of the nature of
the monolithic deployment of this architecture style in conjunction with testability and deployability issues (such as having to test the
entire monolith and deploy the entire monolith for any given change).

Elasticity and scalability rate very low (one star) for the pipeline architecture, primarily due to monolithic deployments. Although it is
possible to make certain functions within a monolith scale more than others, this effort usually requires very complex design techniques
such as multithreading, internal messaging, and other parallel processing practices, techniques this architecture isn’t well suited for.
However, because the pipeline architecture is always a single system quantum due to the monolithic user interface, backend processing,
and monolithic database, applications can only scale to a certain point based on the single architecture quantum.

Pipeline architectures don’t support fault tolerance due to monolithic deployments and the lack of architectural modularity. If one small
part of a pipeline architecture causes an out-of-memory condition to occur, the entire application unit is impacted and crashes.
Furthermore, overall availability is impacted due to the high mean time to recovery (MTTR) usually experienced by most monolithic
applications, with startup times ranging anywhere from 2 minutes for smaller applications, up to 15 minutes or more for most large
applications.
Microkernel

Also referred to as the plug-in architecture.


The microkernel architecture is a natural fit for product-based applications (packaged and made available for download and installation as a single, monolithic deployment, typically installed on the customer's site as a third-party product). It consists of two architecture components: a core system and plug-in components. Application logic is divided between independent plug-in components and the basic core system, providing extensibility, adaptability, and isolation of application features and custom processing logic.
Depending on the size and complexity, the core system can be implemented as a layered architecture or a modular monolith (as illustrated
in Figure 12-2). In some cases, the core system can be split into separately deployed domain services, with each domain service containing
specific plug-in components specific to that domain.
Removing the cyclomatic complexity of the core system and placing it into separate plug-in components allows for better extensibility and
maintainability, as well as increased testability.
Plug-ins: plug-in components are independent, add features, and extend the core system. Plug-in components should be independent of each other and have no dependencies between them.
The communication between the plug-in components and the core system is generally point-to-point, meaning the “pipe” that connects the
plug-in to the core system is usually a method invocation or function call to the entry-point class of the plug-in component. In addition, the
plug-in component can be either compile-based or runtime-based. Runtime plug-in components can be added or removed at runtime without
having to redeploy the core system or other plug-ins, and they are usually managed through frameworks such as Open Service Gateway
Initiative (OSGi) for Java, Penrose (Java), Jigsaw (Java), or Prism (.NET). Compile-based plug-in components are much simpler to manage but
require the entire monolithic application to be redeployed when modified, added, or removed.
Point-to-point plug-in components can be implemented as shared libraries (such as a JAR, DLL, or Gem), package names in Java, or
namespaces in C#.
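A compile-based, point-to-point plug-in arrangement can be sketched as follows. The domain (device assessment) and class names are assumptions for illustration; the structural point is that the core system calls each plug-in only through a shared entry-point contract, via direct method invocation:

```python
class Plugin:
    """Entry-point contract every plug-in component implements."""
    def supports(self, device_type): ...
    def assess(self, device): ...

class IPhonePlugin(Plugin):
    """Independent plug-in: knows nothing about other plug-ins."""
    def supports(self, device_type):
        return device_type == "iphone"

    def assess(self, device):
        return "screen cracked" if device.get("screen_cracked") else "ok"

class CoreSystem:
    def __init__(self):
        self._plugins = []

    def register(self, plugin):
        self._plugins.append(plugin)   # point-to-point: direct method calls

    def assess(self, device_type, device):
        for plugin in self._plugins:
            if plugin.supports(device_type):
                return plugin.assess(device)
        raise LookupError(f"no plug-in for {device_type}")

core = CoreSystem()
core.register(IPhonePlugin())
print(core.assess("iphone", {"screen_cracked": True}))
```

Supporting a new device type means adding one plug-in class and registering it; the core system's routing logic, and every other plug-in, stays untouched. A runtime variant would load these classes dynamically (as OSGi does for Java) instead of registering them at build time.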
