Lesson 4

Design of Objects

Objects are instances of object types; the design of objects is thus about the design of object types. To create object types, we use classes (abstract or concrete) or interfaces. Therefore, this chapter is about the design of classes and interfaces or, more appropriately, about the design of data abstractions. A data abstraction is a separation between the specifications of operations and their implementations. Thus, object design is primarily about designing the operations an object is responsible for and the collaborating objects that support those operations. In other words, to design an object is to design its behavior and its structural relations with other objects.

Overview: The Context and Process

An analysis of software requirements leads to the creation of various potential software entities or elements. As design advances, we refine these entities and elements, their responsibilities, their collaborators, and their relations with external entities and elements. This chapter covers the process of identifying software objects and the design of objects in the context of software construction.
Objects are computing or processing software entities, and their interactions manifest computing and problem-solving. Objects may play different roles: some are structural entities, some collectively model software domain behavior, and some are essentially data structures or data handlers. Domain elements are typically modeled with objects, whereas computational processes and services can be either static or dynamic entities depending on the situation. Discovery of objects may start with behavioral and functional decompositions of a system. These decompositions, together with conceptualization, are the basis for designing procedural and data abstractions. Conceptualization of entities may benefit the design of behavioral polymorphism the most, but it can also affect the design of data representation, collaborative task completion, and the associated tradeoffs.

Essentials of Object Design

One design principle that applies universally is the single responsibility principle: in the current context, an object type should model one thing and do it well. This principle is consistent with the interface segregation principle in promoting conceptual separation and independence. Often, we also have various design concerns; thus, the principle of separation of concerns applies to the design of objects as well. To create an abstraction, we are likely to identify a task first (as a result of functional decompositions) and then the name of a potential entity responsible for the task. We may name a single-task entity relatively easily, such as UserRequestDisplay or BookOrderDelivery. If we find that the responsibilities are a bit diverse, we might raise the level of abstraction so that a name is still appropriate, such as DataHandler or InfoProcessor, to allow a wider range of interpretations. However, a name alone cannot determine whether the responsibilities of an object are appropriate.

Object to Model One Thing

Functional decomposition is used to decompose a system into subsystems or a module into smaller modules. It is still a useful tool for designing objects. Decompositions lead to software tasks, simple or complex. Each task may be further decomposed as necessary before tasks are assigned to appropriate objects. To identify non-domain objects, we may use decompositions of information, a concept, or even a concern.

A "thing" can be of any nature. It can be (conventionally) a domain entity, a data structure, or a data connector. If something can potentially vary, we might design something more abstract to allow variations. If there is a design concern, say, about structural feasibility in the solution domain, we might design "structurers" to address the issue. A naive strategy for identifying an initial set of objects is to look for nouns in software requirements and user stories. While names in software requirements or user stories may indicate potential objects, we perform appropriate analyses, based on decompositions, to discern which nouns reflect genuine software needs. In fact, the entities we design are mostly fabrications, based purely on analysis and modeling needs, and they often lack counterparts in the real world. Even for domain entities, we may design properties and responsibilities that we do not observe in their real-world counterparts.
Diverse Object Design Possibilities
A primary concern of object design is to design the ways objects collaborate. An object uses or collaborates with other objects for operational support because a collaborator may:

Hold the information needed
Act as a messenger
Provide required services
Deliver structural support
Maintain data connections
Provide software tooling
Be responsible for delegated actions
Function as a data container
Prototyping Object Interaction

How an object behaves is perhaps best understood through the ways it interacts with other objects. Once we have identified objects and their responsibilities, we want to observe how they interact. This process is casually called "object modeling," and certain diagrams are typically used for it. But often, we can model interaction directly in code to prototype the interaction. Prototyping object interactions in code allows instant feedback and "just-in-time" design refactoring.
As we will see in the next chapter, we typically use UML sequence diagrams to show object interactions. But prototyping object interaction with code has the following advantages:

It describes interactions more accurately
It allows more effective design refactoring
It promotes the practice of early testing
It facilitates incremental delivery efforts
It makes the transition from design to code construction smoother
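As a sketch of this practice, the following minimal prototype pairs two candidate objects so their interaction can be observed and refactored immediately. The names BookOrderDelivery and Courier echo the naming examples used earlier but are otherwise invented for this illustration:

```python
class Courier:
    """Collaborator responsible for physically moving a package."""

    def ship(self, order_id: str, address: str) -> str:
        # A real system would call a shipping service here.
        return f"shipped {order_id} to {address}"


class BookOrderDelivery:
    """Candidate object from functional decomposition: delivers book orders."""

    def __init__(self, courier: Courier):
        self.courier = courier  # collaborator providing a required service

    def deliver(self, order_id: str, address: str) -> str:
        # The object coordinates; the courier executes.
        return self.courier.ship(order_id, address)
```

Running `BookOrderDelivery(Courier()).deliver("B-42", "12 Main St")` gives instant feedback on whether the split of responsibilities between the two candidates feels right.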
Designing Objects Around a Structural Style

Design of a problem-domain object can be motivated by explicit entities we observe in software requirements. However, an object we design often has very different responsibilities even in cases of a "perfect match." For example, in a card game, a player draws a card from another player's hand, but that is equivalent to the software operations in which the other player returns a random card and the controller places it in the first player's hand. The following are a few more heuristics that we may use in the design of objects:

A software action may require multiple collaborating objects; often that is because none of them seems to be appropriate as a host of all the collaborating actions needed

A complex process may need data encapsulated in other objects; as a result, its process can be fragmented, with distributed intelligence across multiple objects

Data might be the only stable aspect of a system; everything else can change; thus, design stability is always a concern
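The card-draw example above might be prototyped roughly as follows. Player, GameController, and the method names are hypothetical stand-ins; the point is that no player reaches into another player's hand directly:

```python
import random


class Player:
    def __init__(self, name, hand):
        self.name = name
        self.hand = list(hand)

    def give_random_card(self):
        # The drawn-from player returns a random card from its own hand.
        return self.hand.pop(random.randrange(len(self.hand)))

    def receive_card(self, card):
        self.hand.append(card)


class GameController:
    """Coordinates the draw between two players."""

    def draw_from(self, taker: Player, giver: Player):
        card = giver.give_random_card()
        taker.receive_card(card)
        return card
```

The "draw" the user perceives is thus realized by three collaborating objects, each with a responsibility that differs from its real-world counterpart.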
A widely applicable structural style known as Model-View-Controller (or MVC) structures software entities and elements around three themes: model, view, and control. This structural style was first introduced in the late 1970s and was even integrated into the object-oriented programming language Smalltalk. The model is the "engine" of a software application, where data is processed. The view manages the communication between the software and users or between data sources and information display. The control drives the software's internal operation and coordinates between the model and possibly multiple views.



As an example, consider a simple address book application. The view would be a user interface for user input and information display. In the execution of a search operation, for example, the model would be a data structure managing an address book, with editing operations and the capability of using externally stored address data. A control would coordinate interactions among a user, a view, and the model. Objects of other kinds may still be needed, such as objects handling data between the in-memory data store and an external data source (these are called boundary objects, as they are part of a solution and communicate with problem-domain objects).
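A minimal sketch of the address book example under MVC. All class and method names here are invented for illustration; a real application would add editing operations and external data access to the model:

```python
class AddressBookModel:
    """Model: the data 'engine' managing the address entries."""

    def __init__(self):
        self._entries = {}

    def add(self, name, address):
        self._entries[name] = address

    def search(self, name):
        return self._entries.get(name)


class AddressBookView:
    """View: renders results for display; knows nothing about storage."""

    def render(self, result):
        return f"Found: {result}" if result else "No match"


class AddressBookController:
    """Control: coordinates a user request between model and view."""

    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_search(self, name):
        return self.view.render(self.model.search(name))
```

The controller is the only object that touches both model and view, which keeps the two sides independently replaceable.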

Figure: Internet applications with a Model-View-Controller architecture
Figure: Alignment of computer, software model, and user's mental model
To take full advantage of object orientation without creating unnecessary object layers, an object needs to be rich in behavior or substantial in its intelligence so that it contributes to the overall computing process. Objects with simple or trivial behavior often only create unnecessary complexity and runtime overhead with little computational benefit. YAGNI (You Aren't Gonna Need It) is a mantra used to help developers avoid pitfalls in designing objects. The statement was originally used to discourage introducing features we presume the software will need in the future without much reliable information to support the decision. It is intended to promote simplicity in software design. If we doubt whether a potential object might help, YAGNI probably applies.

More About Object Discovery

Removing bad objects through code refactoring can be costly. Meanwhile, objects that have significant computing or processing power are not necessarily complex. Software complexity, as opposed to algorithmic complexity, typically comes from entity or data dependencies and from obscurity in contrived or convoluted code due to improper object communications. Good object design with powerful abstractions can help avoid complexity in high operational layers and push complexity down to an appropriate level: ideally the bottom layer, where workhorse objects, data structures, and application service modules reside.



In addition to understanding our computational needs, there are other things we can do to explore opportunities for designing object types. Working with multiple lists of decompositions can be another effective strategy for identifying powerful object types. A functional decomposition may provide an initial list of operations, and subsequent requirements iterations may update this list as necessary. We keep a list of unassigned operations and assign them to existing or new object types when opportunities arise. A concept decomposition may produce a list of concepts to be associated with potential objects. System variability decompositions may result in another list of things to be abstracted with object types. There may also be emergent needs that warrant new object types (though such needs must be adequately assessed). For example, we might see a need for a lightweight object acting as a "messenger" to collect and then deliver information among objects that do not communicate directly.
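A lightweight "messenger" of this kind might look like the following sketch. The names Messenger and Report are hypothetical; the essential idea is that the two end objects never reference each other:

```python
class Messenger:
    """Collects values from source objects, then delivers them to a consumer."""

    def __init__(self):
        self._payload = {}

    def collect(self, key, value):
        self._payload[key] = value
        return self  # allow chained collection

    def deliver_to(self, consumer):
        # Hand over a copy so the messenger's state cannot be mutated later.
        consumer.receive(dict(self._payload))


class Report:
    """A consumer that receives the delivered information."""

    def __init__(self):
        self.data = {}

    def receive(self, payload):
        self.data.update(payload)
```

Because the sources and the consumer only know the messenger, coupling between them stays minimal.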



When facing multiple lists of "things" to "mix and match," divergent thinking probably works best. We run simulations of possible object interactions in our minds but expect them to fail (quickly) before more plausible ideas emerge. When modeling an object's behavior, we consider not only interactions with its collaborators but also its potential to be used as a collaborator by others. Mental modeling and evaluation can quickly identify inappropriate alignments between concepts, responsibilities, and priorities, avoiding the pervasive and often costly consequences of missed or misplaced responsibilities.



Prototyping with code fragments, as shown earlier, can be interwoven with this mental process as necessary to confirm or abandon ideas. For example, an object modeling a student can be "shallow," without much behavior beyond simple state updates. But it can also be responsible for loading the student's academic and financial aid data, leading to more substantial behavior as appropriate. Or, if a student can work for the school in a certain capacity, we might design a student object that assumes certain responsibilities as an employee, with polymorphic behavior. These are diverse design possibilities for which mental simulations can be particularly effective.
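The student-as-employee possibility could be prototyped with polymorphism roughly as follows. All names and the pay calculation are illustrative assumptions, not part of the text's example:

```python
class Employee:
    """Abstract employee responsibility."""

    def monthly_pay(self):
        raise NotImplementedError


class Student:
    """A 'shallow' student: little behavior beyond its state."""

    def __init__(self, name):
        self.name = name


class StudentWorker(Student, Employee):
    """A student employed by the school: assumes Employee responsibilities."""

    def __init__(self, name, hourly_rate, hours):
        super().__init__(name)
        self.hourly_rate, self.hours = hourly_rate, hours

    def monthly_pay(self):
        # Polymorphic behavior: usable wherever an Employee is expected.
        return self.hourly_rate * self.hours
```

A mental simulation here would ask whether the employee responsibilities really belong to the student object or to a separate employment object collaborating with it.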


Design to Ensure Objects' Behavioral Correctness

Pre- and postconditions are to ensure that we correctly understand the operations of an object. The representation invariant (or class invariant) of an object is to ensure its behavioral correctness. Pre- and postconditions specify an object's behavior and thus should be part of the design of its responsibilities. We explicitly include pre- and postconditions in the design of instance methods for good reasons:

Allow accurate evaluation of the appropriateness of the responsibilities associated with the concept being modeled

Allow appropriate identification of collaborators

Allow accurate representation of an object's properties and class invariant

Support early prototyping

Support validation of the software requirements
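One way to make pre- and postconditions and a class invariant executable is with assertions, as in this toy sketch (the BankAccount example is invented; note that Python assertions are skipped under the -O flag, so a production design would typically raise explicit exceptions instead):

```python
class BankAccount:
    """Representation invariant: balance >= 0 at all times."""

    def __init__(self, balance=0):
        assert balance >= 0  # invariant established at construction
        self._balance = balance

    def withdraw(self, amount):
        # Precondition: a positive amount the balance can cover.
        assert 0 < amount <= self._balance, "precondition violated"
        before = self._balance
        self._balance -= amount
        # Postcondition: balance decreased by exactly `amount`.
        assert self._balance == before - amount
        assert self._balance >= 0  # invariant preserved
        return self._balance
```

Writing the conditions down like this during design often exposes missing collaborators (for example, who supplies `amount` and who observes the returned balance).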


A control of a software operation delivers the outcome of the operation by coordinating task completion in a defined process and by delegating tasks to other entities as necessary. A control strategy determines the responsibilities of subordinate objects and how data flows between software processes. Monolithic applications in the past worked well with relatively simply structured, centralized controls using procedures and functions sharing common or global data. Such a control is often a "call-return" hierarchy of modules. Each procedure or function is either a "worker" that delivers a result or a "broker" that further delegates a task and assembles the results returned from task executors. In some extreme cases, a control may become completely centralized, delegating the work directly to "workers."

Design of Control Objects


Object orientation promotes distributed controls. Nonetheless, an object may still be a centralized control, as illustrated in the figure. A centralized control object may allocate tasks or sub-controls to other objects as appropriate but remains the overall decision-maker. The figure also shows the opposite of centralized control: decentralized control. Delegated control is a particular kind of decentralized control and is common in the design of control objects. A delegated control differs from a centralized control in that the primary control object is more of a coordinator, delegating decision making (or at least a part of it) to subordinate objects. The term "dispersed control" describes a "highly" decentralized control.



Figure: Illustration of centralized and decentralized controls
An analysis often identifies a control center at a high level, which may become a focus of design. A control center can be highly centralized if the task delegation hierarchy is shallow but wide, with many task executors. To design a control that is less centralized, we can use a control style with multiple task coordination layers. For example, a computer game may have a control center that implements a game loop using some abstract processes, but the control center is primarily a manager or coordinator, delegating appropriate decision-making to objects modeling players, environmental obstacles, and various other game elements, which render their responsibilities based on the game rules.
Highly Centralized vs. Coordinated Controls
For example, the control center does not play for the players (moving them around or making them jump); the players play for themselves. However, a player may further delegate play to a "device" it controls to move around or make jumps. Thus, the collaborating objects may be controllers too in their respective areas of concern, with their own coordination intelligence. In this fashion, a coordination-oriented control forms a tree of task coordinators whose leaves are pure task executors. Each task coordinator is responsible for a significant portion of the overall task, achieving distributed control with an appropriate balance between the number of control layers and the flexibility of the coordinated controls.
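The coordinator style described here can be sketched as a game loop that owns only the sequencing while each element makes its own per-frame decisions. All names below are illustrative:

```python
class Avatar:
    """Game element that makes its own movement decisions."""

    def __init__(self):
        self.position = 0

    def update(self):
        self.position += 1  # the avatar decides how it moves


class Obstacle:
    """Environmental element with its own per-frame behavior."""

    def __init__(self):
        self.ticks = 0

    def update(self):
        self.ticks += 1


class GameLoop:
    """Control center as coordinator: owns sequencing, delegates decisions."""

    def __init__(self, elements):
        self.elements = elements

    def run(self, frames):
        for _ in range(frames):
            for element in self.elements:
                element.update()  # decision-making is delegated
```

Each element could itself be a coordinator for sub-elements it controls, forming the tree of task coordinators described above.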
A framework is a set of structurally related, domain-customizable modules that implement an aspect of a system, a subsystem, or even an entire system. A framework depends on its customizable modules, once implemented, to become a working program. The design of a framework is based on a simple strategy: implement what is stable and abstract out what is changeable.

Process Control with a Framework


The abstracted processes, captured by abstract operations, are "placeholders" for concrete implementations of the operations when the framework is used for a concrete application. The context in which a framework is used can also vary. For example, a framework for a self-check-in process can be used at an airport or a railway station. Contextual variations can generally be addressed by interfaces.
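A template-method-style sketch of such a framework, using an assumed self-check-in process with two placeholder steps (the class and step names are invented for this example):

```python
from abc import ABC, abstractmethod


class SelfCheckInFramework(ABC):
    """The stable process is implemented once; variable steps are abstract."""

    def check_in(self, booking_ref):
        # The framework fixes the order of the steps.
        return [
            self.verify_identity(booking_ref),  # placeholder step
            self.issue_pass(booking_ref),       # placeholder step
        ]

    @abstractmethod
    def verify_identity(self, booking_ref): ...

    @abstractmethod
    def issue_pass(self, booking_ref): ...


class AirportCheckIn(SelfCheckInFramework):
    """One context: airport. A railway-station subclass would differ only here."""

    def verify_identity(self, ref):
        return f"passport checked for {ref}"

    def issue_pass(self, ref):
        return f"boarding pass for {ref}"
```

A railway-station variant would supply different step implementations while reusing the same check-in process.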



Developing a framework is perhaps the most effective approach to software reuse, maximizing applicability to possibly an entire system. For a particular application instance, we can simply focus on implementing the required interfaces and abstract methods. Frameworks are all about process controls. For similar processes, even with very different application scenarios, developing frameworks remains a viable option to streamline software development and shorten development cycles with consistent development quality and software maintainability.
Event-driven applications achieve distributed controls through an event-driven programming model that is consistent across programming languages (such as Java's AWT model). In such a model, a control object (known as an event listener) is registered with an event-enabling component. The system responds to an event in a callback operation embedded in a listener class, which implements an interface appropriate for the designated event. Thus, each event-enabling widget works with a dedicated event-handling object: a control object often created with an anonymous class or an inner class. For a program that involves many user interaction widgets, too many scattered small event-handling classes may negatively affect code comprehension, refactoring, and extension. There are a few alternatives for reorganizing event-response controls:

Controls for Event-Driven Systems


Use a combined control object by implementing all necessary interfaces. This option may effectively reduce the clutter of small control classes, at the cost of a less cohesive overall control class

Extend a widget class with an implementation of an event interface to create component objects with an event handler "built in." This approach can be ideal if a widget's internal data is adequate for event handling; otherwise, object coupling would be a tradeoff

For events of the same kind, such as button pushes, we can use an "integrated" control by "consolidating" responses polymorphically, if the details of the responses to different event sources can be abstracted
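The last alternative, consolidating responses in one control object that dispatches on the event source, might look like this sketch. The widget and handler names are invented; a real GUI toolkit would supply the widget side:

```python
class Button:
    """Minimal event-enabling widget: notifies its registered listener."""

    def __init__(self, name, listener):
        self.name, self.listener = name, listener

    def push(self):
        self.listener.on_push(self.name)  # callback into the control object


class ConsolidatedHandler:
    """One control object handles all button pushes for the program."""

    def __init__(self):
        self.log = []

    def on_push(self, source):
        # Responses to different sources are abstracted into one dispatch.
        self.log.append({"save": "saving", "quit": "quitting"}.get(source, "ignored"))
```

Compared with one anonymous listener per widget, a single handler keeps the event-response logic in one place, at some cost in cohesion if the responses diverge.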
For some applications, a centralized control is simply how things work. A control of elevator operations is a good example. An elevator control directs elevator movement and must be able to determine the direction the elevator travels and the floor at which to stop. Thus, the control must know the current states of the floor and elevator buttons. When appropriate, an elevator control might even manage elevator door operations. Thus, the role an elevator control plays is global and central. Much more often, however, we design objects that play control roles in a "local" capacity to control dynamic processes. Here are a few scenarios of "local" control:

Different Control Roles


Control information transfer from one object to another in either a synchronous or asynchronous fashion. Such an object can be useful for reducing object coupling

Control an assembling process, much like a (virtual) pizza-making process that "picks" the ingredients and passes the "composed pizza" to a "baking" process

Control operational contributions from collaborators, much like a general contractor who does home remodeling by contracting the tasks to subcontractors. Such an object may be called a "structurer." Using the analogy, this general contractor would "structure" subcontractors so that the right tasks are done in the right order at the right time. The difference between a structurer and a coordinator can be subtle, but a structurer is about presenting a coherent whole by coordinating disparate pieces
Control an adaptation process. For example, consider a method a.op(x, y, z) where parameter z is no longer practical or meaningful to use. To resolve the issue, we create an object b, collaborating with a, that offers users the method b.op(x, y) instead. Object b may be called an "interfacer" or "adaptor" that bridges a "gap" between entities. A "gap" can also be a mismatch between two processes, and the work to "bridge" the "gap" can be substantial and complex

Control a decision-making process, such as channeling incoming messages or user requests to the right handlers. An appropriate analogy would be a service store or a government service office where a person directs customers to the right service windows
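The a.op(x, y, z) adaptation scenario might be sketched as follows. The default value standing in for the retired parameter z is an assumption of this example, not something the text specifies:

```python
class Legacy:
    """The existing object 'a' whose op still requires three parameters."""

    def op(self, x, y, z):
        return x + y + z


class Interfacer:
    """Adaptor 'b': offers op(x, y) and bridges the gap to a.op(x, y, z)."""

    DEFAULT_Z = 0  # hypothetical stand-in for the retired parameter

    def __init__(self, a):
        self._a = a  # collaborator being adapted

    def op(self, x, y):
        return self._a.op(x, y, self.DEFAULT_Z)
```

Callers now depend only on the interfacer, so the legacy signature can evolve behind it.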
Design of distributed controls is a core characteristic of object-oriented design. Arguably, if an object is not a domain "workhorse," it provides control in one of two ways. One is to provide a centralized control over its adequately encapsulated data; the other is to manage a process that provides certain control in its computing vicinity. The first kind is explicit and relatively easy to identify, such as a control of elevator operations. The second kind probably comprises most of the objects we design; they participate in a control process collaboratively. Thus, appropriately identifying an object's role in such a collaborative process is key to object design.

Objects Are Designed to Control


A complex application typically has multiple control centers due to complexity in multiple application aspects such as event handling, complex sub-processes, data visualization, or access to external systems. The appropriateness of a distributed control style for a given software process (or of the way objects participate in an overall process control) is a design decision often based on the chosen style of the system architecture.

Design patterns (to be covered in Chap. 8) provide certain control structures for objects to participate, locally or globally, in the control processes of software operations. There are three groups of patterns, depending on their roles in process control:

Control object creation

Control process transfer (such as bridging to external processes, functioning as proxies, adapting to, or interfacing with existing processes)

Control over an object's behavior (with behavioral strategies, by reacting to events, or through coordinated communications)
Objects collaborate through defined dependency or association relations. Making an object do an appropriate amount of work with an appropriate level of help from collaborators is a matter of the object's cohesion and coupling. "Good" cohesion with the "right" amount of coupling makes an object easier to design, maintain, and extend.

Object Cohesion and Coupling


Law of Demeter

A design principle known as the Law of Demeter (or the Principle of Least Knowledge) was intended to address inappropriate levels of "intimacy" among software units. Proposed in 1987, the Law describes what would be considered appropriate knowledge for a software unit to possess:

Each unit should have knowledge only about units that are "closely" related to the current unit

Each unit should only talk to its immediate friends, not strangers
A more colloquial version of the Law is "Tell, Don't Ask": "tell" collaborators what is needed rather than "ask" them for more than that. It is otherwise known as the "Shy Principle" (stay shy of what you might be able to access). In other words, a collaborating object should offer all the needed operations that use its encapsulated instance data, as opposed to exporting that data for other objects to consume. An object should be as intelligent as it can be with its own resources, maximizing what it can do.
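A small "Tell, Don't Ask" sketch (the Cart name and its data are invented): the object offers the operation its encapsulated data supports instead of exporting that data:

```python
class Cart:
    """Keeps its prices encapsulated and offers the needed operation itself."""

    def __init__(self, prices):
        self._prices = list(prices)  # encapsulated; not exported to callers

    def total(self):
        # "Tell" style: the cart computes with its own data.
        # The "ask" alternative would expose _prices and let callers sum it,
        # spreading knowledge of the representation across the program.
        return sum(self._prices)
```

Callers say `cart.total()` rather than reaching through the cart into its price list, which keeps knowledge of the representation in one place.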

To minimize overlapping behaviors, object types should be behaviorally "orthogonal," or as behaviorally different as they can be. For example, if both Order and Shipping have access to a customer address list, the two types may contain competing operations. Objects offering similar or competing operations, if used as collaborators, may introduce inconsistent behavior in a hosting object, because the overlapping operations of the collaborators may not be created or updated consistently.

Objects with No Overlapping Behavior


The DRY Principle (Don't Repeat Yourself) is meant to avoid code redundancy, encourage code reuse, and discourage cut-and-paste coding practices. But this principle may apply more broadly to the design of objects: avoiding overlapping properties and behaviors minimizes the possibility of inconsistent object behaviors. Code refactoring can make objects less orthogonal over time as types are split or subclassed. The following heuristics may help avoid creating evolutionarily less "orthogonal" object types:


Keep only a core set of essential methods in an object type

Create static methods (such as extension methods in C#) in collaborating objects, when appropriate, to avoid subclassing and to provide safe extension of objects' capabilities with minimal side effects

Use polymorphism to control functional variations


Iterative Design of Objects

Object design is also an iterative process. Software requirements provide essential ingredients for the design of object abstractions. The initial design of objects will likely undergo subsequent updates and changes as the process of analysis and modeling proceeds deeper into developing a domain model and an architectural structure.
A domain model (i.e., a problem-domain object model) is a model of the entities within a business context. It provides an "overview" of the interrelationships among the business entities and their functional responsibilities. As appropriate, a domain model also provides a high-level description of the data encapsulated in domain entities (separate from a data model). Some domain entities are high level, such as a business control that might be consistent across businesses of the same kind. Some are low level, such as entities that involve business policies and rules, which may be company specific.

Initial Design of Domain Abstractions


Software requirements provide the initial input for conceptualizing candidate domain objects. Requirements are typically stated in three different forms: user requirements, system (functional) requirements, and non-functional requirements. User requirements are based on the ways a system can be used, which contextualize the functional requirements. User requirements, which may include elaborated interactions with the system, often serve as initial input to a domain design. These requirements are stated from a client's viewpoint and are further elaborated from the viewpoints of an analyst or modeler. Design considerations based on user requirements elaboration may include:
Design goals

Design heuristics

Possible ways for the system to respond to a user interaction

Similar features developed in the past

Ambiguities and things that might appear ill-defined

Potential technical challenges

Implications of non-functional requirements
As the process of requirements engineering progresses, candidate object types are reviewed, validated, refactored, or removed, and additional object types can be discovered and added. To facilitate validation of objects' roles and responsibilities, we can group object candidates based on their stereotypes, application themes, layers of abstraction, the user requirements they satisfy, and their dependencies. To update existing entities, we look for symptoms that may suggest design flaws, such as instance variables of a similar nature, overlapping operations, or trivial operations with little behavioral value added.

Subsequent Design Validation and Refactoring


This process might be termed "static validation" of the candidate objects. A "dynamic validation" simulates how objects would work together to complete certain tasks, implement certain usage scenarios, or provide certain services or structural supports. We use modeling languages (to be covered in the next chapter), our minds, or (when helpful) code prototypes to perform dynamic validation, allowing quick decision-making.



As software requirements evolve, so does our understanding of the software structure and potential new risks. Based on our improved understanding, we proceed with object design iterations that may result in minor design tweaks, moderate design refactoring, or even a complete redesign. This iterative process may proceed concurrently with code construction and software testing. We ask very different questions in design iterations, such as:

Are the roles objects play unique in the design?

Is there a fundamental shift in our design goal (if the software structure has been significantly tweaked)?

Are all the software requirements mapped to the design of the software entities?

How stable are the requirements going forward?

Are there other tradeoffs to be considered?


THANK
YOU!
