SE 22-23 Answers
The cyclomatic complexity of a code section is a quantitative measure of the number of
linearly independent paths through it. It is a software metric used to indicate the complexity of a
program, and it is computed from the program's control flow graph. Each node in the graph
represents the smallest group of commands in the program (a basic block), and a directed edge
connects two nodes if the second command can immediately follow the first.
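The definition above can be made concrete with McCabe's formula V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the control flow graph. Below is a small sketch (the example CFG is hypothetical) that applies the formula:

```python
# Sketch: computing cyclomatic complexity V(G) = E - N + 2P from a
# control-flow graph given as node and edge lists. The graph below is a
# hypothetical CFG for a function containing a single if/else branch.

def cyclomatic_complexity(nodes, edges, components=1):
    """V(G) = E - N + 2P, where P is the number of connected components."""
    return len(edges) - len(nodes) + 2 * components

# CFG: entry -> condition -> then/else -> exit
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]

print(cyclomatic_complexity(nodes, edges))  # 5 - 5 + 2 = 2 independent paths
```

A result of 2 matches intuition: one path through the `then` branch and one through the `else` branch.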
Boundary Value Analysis is based on testing the boundary values of valid and invalid partitions.
The behavior at the edge of the equivalence partition is more likely to be incorrect than the
behavior within the partition, so boundaries are an area where testing is likely to yield defects.
Functional testing verifies that each function of the software application works in conformance
with the requirements and specifications. Boundary Value Analysis (BVA) is one such functional
testing technique.
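As a small illustration of BVA, consider a hypothetical validator that accepts integers in the valid partition [1, 100]. BVA selects test values at and just around each boundary, rather than arbitrary values from the middle of a partition:

```python
# Sketch of Boundary Value Analysis for a hypothetical validator whose
# valid partition is the integer range [1, 100].

def is_valid_age(n):
    return 1 <= n <= 100

# BVA test values cluster around the two boundaries:
boundary_cases = {
    0: False,    # just below the lower boundary (invalid partition)
    1: True,     # lower boundary (valid)
    2: True,     # just above the lower boundary
    99: True,    # just below the upper boundary
    100: True,   # upper boundary (valid)
    101: False,  # just above the upper boundary (invalid partition)
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected
print("all boundary cases pass")
```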
Regression testing is like a software quality checkup after any changes are made. It involves
running tests to make sure that everything still works as it should, even after updates or
tweaks to the code. This ensures that the software remains reliable and functions properly,
maintaining its integrity throughout its development lifecycle.
1. Correctness:
User review is used to ensure the correctness of requirements stated in the SRS. SRS is said to be
correct if it covers all the requirements that are actually expected from the system.
2. Completeness:
Completeness of SRS indicates every sense of completion, including the numbering of all the pages,
resolving the "to be determined" (TBD) parts to as much extent as possible, as well as covering all the
functional and non-functional requirements properly.
3. Consistency:
Requirements in SRS are said to be consistent if there are no conflicts between any set of requirements.
Examples of conflict include differences in terminologies used at separate places, logical conflicts like
time period of report generation, etc.
4. Unambiguousness:
An SRS is said to be unambiguous if every requirement stated in it has only one interpretation. Ways to
prevent ambiguity include the use of modelling techniques like ER diagrams, proper reviews and buddy
checks, etc.
5. Ranking for Importance and Stability:
There should be a criterion to classify the requirements as less or more important, or more specifically as
desirable or essential.
6. Modifiability:
SRS should be made as modifiable as possible and should be capable of easily accepting changes to the
system to some extent. Modifications should be properly indexed and cross-referenced.
7. Verifiability:
An SRS is verifiable if there exists a specific technique to quantifiably measure the extent to which every
requirement is met by the system. For example, a requirement stating that the system must be user-
friendly is not verifiable, and listing such requirements should be avoided.
8. Traceability:
One should be able to trace each requirement to a design component and then to a code segment in the
program. Similarly, one should be able to trace each requirement to the corresponding test cases.
9. Design Independence:
There should be an option to choose from multiple design alternatives for the final system. More
specifically, the SRS should not include any implementation details.
Part-2
2) a) List and describe the characteristics of good software.
Software engineering is the process of designing, developing, and maintaining software
systems. Good software meets the needs of its users, performs its intended functions reliably,
and is easy to maintain.
1. There are several characteristics of good software that are commonly recognized by
software engineers, which are important to consider when developing a software
system.
2. These characteristics include functionality, usability, reliability, performance, security,
maintainability, reusability, scalability, and testability.
Characteristics of Good Software
1. Functionality: The software meets the requirements and specifications that it was
designed for, and it behaves as expected when it is used in its intended environment.
2. Usability: The software is easy to use and understand, and it provides a positive user
experience.
3. Reliability: The software is free of defects and it performs consistently and accurately
under different conditions and scenarios.
4. Performance: The software runs efficiently and quickly, and it can handle large amounts
of data or traffic.
5. Security: The software is protected against unauthorized access and it keeps the data
and functions safe from malicious attacks.
6. Maintainability: The software is easy to change and update, and it is well-documented,
so that it can be understood and modified by other developers.
7. Reusability: The software can be reused in other projects or applications, and it is
designed in a way that promotes code reuse.
e) Write elaborately on Unit testing and Regression testing. How do you develop test suites?
Unit testing is the process of testing the smallest parts of your code, like individual functions or methods, to
make sure they work correctly. It’s a key part of software development that improves code quality by testing
each unit in isolation.
You write unit tests for these code units and run them automatically every time you make changes. If a test
fails, it helps you quickly find and fix the issue. Unit testing promotes modular code, ensures better test
coverage, and saves time by allowing developers to focus more on coding than manual testing.
To create effective unit tests, follow these basic techniques to ensure all scenarios are covered:
• Logic checks: Verify that the system performs correct calculations and follows the expected path with valid
inputs. Check that all possible paths through the code are tested.
• Boundary checks: Test how the system handles typical, edge case, and invalid inputs. For example, if an
integer between 3 and 7 is expected, check how the system reacts to a 5 (normal), a 3 (edge case), and
a 9 (invalid input).
• Error handling: Check that the system properly handles errors. Does it prompt for a new input, or does it
crash when something goes wrong?
• Object-oriented checks: If the code modifies objects, confirm that the object’s state is correctly
updated after running the code.
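The four techniques above can be collected into a small test suite. The sketch below uses Python's `unittest` and a hypothetical `validate_input` function (reusing the 3-to-7 range example) to show a logic check, boundary checks, and an error-handling check in one suite:

```python
# A minimal unittest suite for a hypothetical validate_input function,
# exercising the logic, boundary, and error-handling checks described above.
import unittest

def validate_input(n):
    """Accept an integer between 3 and 7 inclusive; reject anything else."""
    if not isinstance(n, int):
        raise TypeError("integer required")
    return 3 <= n <= 7

class TestValidateInput(unittest.TestCase):
    def test_normal_value(self):          # logic check
        self.assertTrue(validate_input(5))

    def test_boundaries(self):            # boundary checks
        self.assertTrue(validate_input(3))   # edge case
        self.assertTrue(validate_input(7))   # edge case
        self.assertFalse(validate_input(9))  # invalid input

    def test_error_handling(self):        # error handling
        with self.assertRaises(TypeError):
            validate_input("five")

# Run the suite programmatically so it can be embedded anywhere:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestValidateInput)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

In practice such suites are wired into the build so they run automatically on every change, which is what makes regression testing cheap.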
Unit Testing Techniques
There are 3 types of Unit Testing techniques. They are as follows:
1. Black Box Testing: This testing technique is used in covering the unit tests for input, user interface, and
output parts.
2. White Box Testing: This technique tests the functional behavior of the system by supplying inputs and
checking the outputs, while taking the internal design structure and code of the modules into
account.
3. Gray Box Testing: This technique is used in executing the relevant test cases, test methods, and test
functions, and analyzing the code performance for the modules.
Regression testing is a crucial aspect of software engineering that ensures the stability and reliability of a
software product. It involves retesting the previously tested functionalities to verify that recent code changes
haven’t adversely affected the existing features.
By identifying and fixing any regression or unintended bugs, regression testing helps maintain the overall
quality of the software. This process is essential for software development teams to deliver consistent and
high-quality products to their users.
Unified Modeling Language (UML) is a standardized modeling language used primarily in software engineering
to visualize, specify, construct, and document the structure and behavior of complex systems. UML provides a
variety of diagrams that help represent different aspects of a system, making it easier for developers and
stakeholders to understand and communicate requirements, processes, and workflows.
Here’s a breakdown of each requested component:
A) Use Case Diagram
• Purpose: Represents the functional requirements of a system from the end-user perspective.
• Components:
o Actors: Entities (like users or other systems) that interact with the system.
o Use Cases: Specific functions or services provided by the system to fulfill a goal for the actor.
o Relationships: These can include associations, dependencies, and generalizations between use
cases and actors.
• Use: Use case diagrams are useful for capturing what the system should do without detailing how it
achieves it, making them great for requirements gathering and stakeholder discussions.
B) Sequence Diagram
• Purpose: Illustrates how objects interact in a particular sequence to carry out a specific task or process.
• Components:
o Actors/Objects: Represent entities involved in the interaction.
o Lifelines: Show the lifespan of objects during interactions.
o Messages: Arrows that indicate communication between objects, often labeled to show the
type of message or operation.
o Activation Bars: Show the period when an object is active or performing a task.
• Use: Sequence diagrams are often used to model the flow of logic within a particular scenario,
showcasing interactions over time, which helps in understanding the sequence of operations.
C) State Diagram
• Purpose: Depicts the different states of an object and transitions based on events.
• Components:
o States: Represent different stages of an object’s life cycle.
o Transitions: Arrows that show how an object moves from one state to another based on events
or conditions.
o Events: Conditions that trigger transitions between states.
• Use: State diagrams are useful for modeling the dynamic behavior of individual objects, especially in
complex systems where objects can undergo numerous state changes, like lifecycle management.
D) Classes and Objects (in UML Context)
• Class:
o Represents a blueprint for objects.
o Contains attributes (properties) and methods (functions) to define what an object of that class
will hold and perform.
• Object:
o An instance of a class that holds actual data and performs defined behaviors.
• Class Diagram:
o Illustrates the relationships and hierarchy between classes and objects.
o Can show inheritance, associations, and dependencies among classes.
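The class/object distinction above maps directly onto code: the class is the blueprint, and each object is an instance with its own attribute values. A small illustrative sketch (the `BankAccount` example is hypothetical):

```python
# Class vs. object: the class defines the attributes and methods; each
# object instantiated from it holds its own actual data.

class BankAccount:
    """Blueprint: attributes (properties) + methods (behaviors)."""
    def __init__(self, owner, balance=0):
        self.owner = owner      # attribute
        self.balance = balance  # attribute

    def deposit(self, amount):  # method
        self.balance += amount
        return self.balance

# Two distinct objects created from the same class:
a = BankAccount("Alice")
b = BankAccount("Bob", 100)
a.deposit(50)
print(a.balance, b.balance)  # 50 100
```

A class diagram would show `BankAccount` once, with its attributes and methods; the two objects `a` and `b` exist only at run time.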
g) Explain why it is important to model the context of a system that is being developed. Give
two examples of possible errors that could arise if software engineers do not understand the
system context.
Modeling the context of a system being developed is crucial because it helps software engineers understand
how the system interacts with external entities, users, and other systems. This understanding guides the
design, functionality, and limitations of the software, ensuring it meets user requirements and integrates
seamlessly with other components. By defining the system boundaries, engineers can also identify
dependencies, constraints, and assumptions that might impact the software's success.
If engineers do not model the system context, several errors can arise:
1. Misaligned Functional Requirements: Without a clear understanding of the system's context,
engineers may design features that don’t align with the actual user needs or business objectives. For
example, in a hospital management system, if engineers don’t understand that the system must
interact with patient monitoring devices, they may fail to include necessary interfaces for device
integration, resulting in a system that cannot perform critical functions.
2. Integration Failures: A system often needs to interact with other systems, such as databases, external
APIs, or legacy systems. If the context isn’t understood, engineers may overlook required compatibility
or data-sharing protocols, leading to integration failures. For example, in an e-commerce platform,
neglecting the context of external payment gateway integration could lead to unsuccessful
transactions, impacting the business's operations and customer satisfaction.
Understanding the system context prevents these errors by ensuring the design and functionality of
the software are aligned with its intended environment and interactions.
SDLC is a process followed for software building within a software organization. SDLC consists
of a precise plan that describes how to develop, maintain, replace, and enhance specific
software. The life cycle defines a method for improving the quality of software and the all-
around development process.
The Management Information Systems (MIS) oriented Software Development Life Cycle
(SDLC) model is an approach to developing systems that focuses on creating information
systems specifically designed to help management in making well-informed, strategic
decisions. The model adapts the traditional SDLC phases to emphasize the collection,
processing, and dissemination of management information, aligning the system's development
process with organizational goals and decision-making needs.
• Management-Centric: Designed with management and decision-makers in mind,
focusing on information flow and usability.
• Data Integrity and Relevance: Emphasizes ensuring the accuracy, reliability, and
relevance of information that managers use.
• Continuous Alignment with Business Goals: Regular feedback loops to make sure the
MIS evolves with organizational changes and supports strategic planning.
i) Consider a large-scale project for which the manpower requirement is K= 600PY and the
development time is 3 years 6 months. What is the manpower cost after 1 year and 2
months? Calculate the peak time.
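One common way to answer this question (an assumption, since the original answer is missing) is the Putnam-Norden-Rayleigh model, where K is the total effort in person-years and td is the development time, at which staffing peaks. Under that model, staffing is m(t) = (K/td²)·t·e^(−t²/(2td²)) and cumulative effort is E(t) = K·(1 − e^(−t²/(2td²))). A worked sketch:

```python
# Hedged sketch using the Putnam-Norden-Rayleigh model (assumed, since the
# question does not name a model). K = total effort (person-years),
# td = development time (years); peak staffing occurs at t = td.
import math

K = 600.0          # total manpower requirement, person-years
td = 3.5           # development time: 3 years 6 months
t = 1 + 2 / 12     # elapsed time: 1 year 2 months

m = (K / td**2) * t * math.exp(-t**2 / (2 * td**2))   # staffing at time t
E = K * (1 - math.exp(-t**2 / (2 * td**2)))           # effort spent by time t

print(f"staffing at t={t:.2f} years : {m:.1f} persons")
print(f"effort expended by then    : {E:.1f} person-years")
print(f"peak time                  : {td} years")
```

Under these assumptions, the staffing at 1 year 2 months is about 54 persons, the effort (manpower cost) expended is about 32.4 person-years, and the peak time equals td = 3.5 years.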
NDFA
NDFA stands for Non-Deterministic Finite Automaton. For a given input, it can transition to any
number of states. An NDFA permits NULL (ε) moves, meaning it can change state without reading a
symbol. An NDFA is defined by the same five components as a DFA, but it has a different transition
function.
The transition function of an NDFA can be defined as:
δ: Q × Σ → 2^Q
Example
See an example of non-deterministic finite automata:
1. Q = {q0, q1, q2}
2. Σ = {0, 1}
3. Initial state = q0
4. F = {q2}
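The nondeterminism can be simulated by tracking the *set* of states the automaton could be in. The sketch below does this for an NFA over {0, 1} (the transition table is illustrative, since the original answer gives none; this particular table accepts strings ending in "01"):

```python
# Sketch: simulating an NFA (without epsilon moves) by tracking the set
# of current states. delta maps (state, symbol) -> set of next states;
# missing entries mean no transition. Table is a hypothetical example.

delta = {
    ("q0", "0"): {"q0", "q1"},
    ("q0", "1"): {"q0"},
    ("q1", "1"): {"q2"},
}

def nfa_accepts(string, start="q0", final=frozenset({"q2"})):
    current = {start}
    for symbol in string:
        current = set().union(*(delta.get((s, symbol), set()) for s in current))
    return bool(current & final)   # accept if any final state is reachable

print(nfa_accepts("01"))   # True  (ends in 01)
print(nfa_accepts("10"))   # False
```

Note how `δ` returns a set of states (an element of 2^Q), which is exactly what distinguishes the NFA transition function from a DFA's.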
l) What are the risk management activities? Is it possible to prioritize the risks? Explain with
suitable example.
Risk management activities are a structured approach to identifying, assessing, and mitigating
potential risks that could impact a project. These activities help teams to proactively address
uncertainties and reduce the likelihood or impact of negative outcomes. Here’s an outline of
the primary risk management activities:
1. Risk identification: identifying possible project, product, and business risks.
2. Risk analysis: assessing the likelihood and consequences of each identified risk.
3. Risk planning: drawing up plans to avoid each significant risk or to minimize its impact.
4. Risk monitoring: regularly re-assessing the risks and the mitigation plans as the project progresses.
Yes, risks can and should be prioritized, typically by ranking each risk on its probability and its impact.
For example, the risk of experienced staff leaving mid-project (high probability, serious impact) would be
prioritized above the risk of a minor development tool upgrade being delayed (low probability, tolerable
impact), so mitigation effort is directed to the former first.
Level 1 DFD
The Level 1 DFD breaks down the main process into specific functions.
1. Input Validation: Checks user input for errors (e.g., correct format, no division by zero).
2. Basic Operations: Handles addition, subtraction, multiplication, and division.
3. Scientific Functions: Handles trigonometric, logarithmic, and exponential functions.
4. Advanced Operations: Calculates square roots, powers, and factorials.
5. Memory Management: Stores and retrieves values in memory for reuse.
Here’s the structure (the Level 1 DFD diagram itself is not reproduced here):
In the Level 1 DFD:
• The Input Validation process first ensures that inputs are valid.
• Basic Operations handles standard arithmetic functions.
• Scientific Functions and Advanced Operations cover more complex calculations.
• Memory Management manages temporary data storage and retrieval.
Q4 Define cohesion and coupling. Explain various types of each of them. What are CASE tools? With a
suitable diagram, explain the categories of CASE tools.
Coupling refers to the degree of interdependence between software modules. High coupling means
that modules are closely connected and changes in one module may affect other modules. Low
coupling means that modules are independent, and changes in one module have little impact on other
modules.
Cohesion refers to the degree to which elements within a module work together to fulfill a single,
well-defined purpose. High cohesion means that elements are closely related and focused on a single
purpose, while low cohesion means that elements are loosely related and serve multiple purposes.
Both coupling and cohesion are important factors in determining the maintainability,
scalability, and reliability of a software system. High coupling and low cohesion can make a
system difficult to change and test, while low coupling and high cohesion make a system easier
to maintain and improve.
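The contrast can be shown in code. In the hypothetical sketch below, the tightly coupled printer reaches into another class's fields and duplicates its logic, while the loosely coupled, cohesive version communicates through a narrow interface:

```python
# Illustration (hypothetical classes): high coupling vs. low coupling
# with high cohesion.

# --- High coupling: the printer depends on TaxCalculator's internals ---
class TaxCalculator:
    def __init__(self, income):
        self.income = income
        self.rate = 0.2

class CoupledReportPrinter:
    def print_tax(self, calc):
        # Reaches into calc's fields and duplicates its calculation logic;
        # any change to TaxCalculator ripples into this class.
        return f"tax: {calc.income * calc.rate}"

# --- Low coupling, high cohesion: each class has one purpose ---
class CohesiveTaxCalculator:
    def __init__(self, income, rate=0.2):
        self._income = income
        self._rate = rate

    def tax(self):                     # single, well-defined purpose
        return self._income * self._rate

class ReportPrinter:
    def print_tax(self, tax_amount):   # depends only on a value, not a class
        return f"tax: {tax_amount}"

print(ReportPrinter().print_tax(CohesiveTaxCalculator(1000).tax()))  # tax: 200.0
```

In the second version, changing how tax is computed touches one class only, which is exactly the maintainability benefit described above.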
Basically, design is a two-part iterative process. The first part is Conceptual Design which tells
the customer what the system will do. Second is Technical Design which allows the system
builders to understand the actual hardware and software needed to solve a customer’s
problem.
The essential idea of CASE tools is that in-built programs can help to analyze developing
systems in order to enhance quality and provide better outcomes. Throughout the 1990, CASE
tool became part of the software lexicon, and big companies like IBM were using these kinds of
tools to help create software.
Various tools are incorporated in CASE and are called CASE tools, which are used to support
different stages and milestones in a software development life cycle.
4. Central Repository: It provides a single point of storage for data diagrams, reports, and
documents related to project management.
5. Documentation Generators: It helps in generating user and technical documentation as
per standards. It creates documents for technical users and end users.
For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.
6. Code Generators: It aids in the auto-generation of code, including definitions, with the
help of designs, documents, and diagrams.
7. Tools for Requirement Management: It makes gathering, evaluating, and managing
software needs easier.
8. Tools for Analysis and Design: It offers instruments for modelling system architecture
and behavior, which helps throughout the analysis and design stages of software
development.
9. Tools for Database Management: It facilitates database construction, design, and
administration.
10. Tools for Documentation: It makes the process of creating, organizing, and maintaining
project documentation easier.
Q5 Explain Software Reverse Engineering and Software Reengineering. Briefly describe
Service Oriented Architecture (SOA) in software engineering.
Reverse engineering of data occurs at different levels of abstraction. It is often the first
reengineering task.
1. At the program level, internal program data structures must often be reverse
engineered as part of an overall reengineering effort.
2. At the system level, global data structures (e.g., files, databases) are often reengineered
to accommodate new database management paradigms (e.g., the move from flat file to
relational or object-oriented database systems).
Internal Data Structures
Reverse engineering techniques for internal program data focus on the definition of classes of
objects.
1. This is accomplished by examining the program code with the intent of grouping related
program variables.
2. In many cases, the data organization within the code identifies abstract data types.
3. For example, record structures, files, lists, and other data structures often provide an
initial indicator of classes.
Database Structures
A database allows the definition of data objects and supports some method for establishing
relationships among the objects. Therefore, reengineering one database schema into another
requires an understanding of existing objects and their relationships.
The following steps define the existing data model as a precursor to reengineering a new
database model:
1. Build an initial object model.
2. Determine candidate keys (the attributes are examined to determine whether they are
used to point to another record or table; those that serve as pointers become candidate
keys).
3. Refine the tentative classes.
4. Define generalizations.
Software Re-Engineering
Software Re-Engineering is the examination and alteration of a system to reconstitute it in a
new form. The principle of Re-Engineering when applied to the software development process
is called software re-engineering. It positively affects software cost, quality, customer service,
and delivery speed. In Software Re-engineering, we are improving the software to make it
more efficient and effective.
It is a process where the software’s design is changed and the source code is created from
scratch. Sometimes software engineers notice that certain software product components need
more upkeep than other components, necessitating their re-engineering.
The re-engineering procedure requires the following steps:
1. Decide which components of the software to re-engineer. Is it the complete
software or just some components of the software?
2. Perform reverse engineering to learn about the existing software's functionalities.
3. Perform restructuring of the source code if needed, for example converting
function-oriented programs into object-oriented programs.
4. Perform restructuring of data if required.
5. Use forward engineering ideas to generate the re-engineered software.
The need for software re-engineering: Software re-engineering is an economical process for
software development and for enhancing product quality. It enables us to identify wasteful
consumption of deployed resources and the constraints that are restricting the development
process, so that development can be made easier, more cost-effective (in time, finances, and
both direct benefits such as optimized code and indirect benefits), and more maintainable.
Services might aggregate information and data retrieved from other services or create
workflows of services to satisfy the request of a given service consumer. This practice is known
as service orchestration. Another important interaction pattern is service choreography, which
is the coordinated interaction of services without a single point of control.
Components of SOA: The two primary roles are the service provider, which creates, describes,
and publishes services (typically to a service registry), and the service consumer, which
discovers services in the registry and invokes them.
Q6 What are the different architectural styles applied for software development? Explain
with diagrams. What is acceptance testing? Explain briefly alpha testing and beta testing
with suitable examples.
1. Monolithic Architecture:
One of the earliest and most basic architectural forms is monolithic architecture. The system is
intended to function as a single, self-contained unit in a monolithic application. Each
component, including the data access layer, business logic, and user interface, is closely
integrated into a single codebase.
Characteristics:
o Tight Coupling: When parts are closely connected, it is challenging to scale or modify
individual elements without influencing the system as a whole.
o Simplicity: Small to medium-sized applications might benefit from monolithic
architectures since they are easy to build and implement.
o Performance: Monolithic programs can be quite performant because there are no inter-
process communication overheads.
Use Cases:
Monolithic designs work well in smaller applications where performance and ease of use are
more important than scalability and flexibility. Some e-commerce websites, blogging platforms,
and content management systems (CMS) are a few examples.
2. Layered Architecture:
Layered architecture, sometimes called n-tier architecture, divides the software system into
several levels, each in charge of a certain task. Better system organization and maintainability
are made possible by this division.
Characteristics:
o Separation of Concerns: Distinct concerns, like data access, business logic, and display,
are handled by different levels.
o Scalability: The ability to scale individual layers allows for improved resource and
performance usage.
o Reusability: Reusing components from one layer in other applications or even in other
sections of the system is frequently possible.
Use Cases:
Web applications, enterprise software, and numerous client-server systems are just a few
applications that use layered structures. They offer an excellent mix of maintainability,
scalability, and modifiability.
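The layered style can be sketched in a few lines: each layer talks only to the layer directly below it. The class and module names below are illustrative, not from any particular framework:

```python
# Toy sketch of a three-layer architecture: presentation -> business -> data.
# Each layer depends only on the layer directly beneath it.

class DataLayer:                       # data access layer
    def __init__(self):
        self._users = {1: "Alice"}     # stands in for a real database
    def fetch_user(self, user_id):
        return self._users.get(user_id)

class BusinessLayer:                   # business logic layer
    def __init__(self, data):
        self._data = data
    def greeting(self, user_id):
        name = self._data.fetch_user(user_id)
        return f"Hello, {name}!" if name else "Unknown user"

class PresentationLayer:               # display layer
    def __init__(self, logic):
        self._logic = logic
    def render(self, user_id):
        return f"<p>{self._logic.greeting(user_id)}</p>"

app = PresentationLayer(BusinessLayer(DataLayer()))
print(app.render(1))  # <p>Hello, Alice!</p>
```

Because the presentation layer never touches `DataLayer` directly, either end can be swapped (a new UI, a different database) without disturbing the other, which is the reusability and maintainability benefit claimed above.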
3. Client-Server Architecture:
The system is divided into two primary parts by client-server architecture: the client, which is
responsible for the user interface, and the server, which is in charge of data management and
business logic. A network is used to facilitate communication between the client and server.
Characteristics:
o Scalability: This design works well for large-scale applications since servers may be
scaled independently to accommodate growing loads.
o Centralized Data Management: Since data is kept on the server, security and
management can be done centrally.
o Thin Clients: Since most work occurs on the server, clients can be quite light.
Use Cases:
Web applications, email services, and online gaming platforms are just a few of the networked
applications that rely on client-server architectures.
4. Microservices Architecture:
A more modern architectural style, microservices architecture encourages the creation of
small, autonomous services that communicate with one another via APIs. Every microservice
concentrates on a specific business function.
Characteristics:
o Decomposition: The system is broken down into smaller, more manageable services to
improve flexibility and adaptability.
o Independent Deployment: Continuous delivery is made possible by microservices'
ability to be deployed and upgraded separately.
o Scalability: Individual services can be scaled to maximize resource utilization.
Use Cases:
Large and complicated apps like social media networks, cloud-native apps, and e-commerce
platforms are frequently built using microservices. They work effectively when fault tolerance,
scalability, and quick development are crucial.
5. Event-Driven Architecture:
The foundation of event-driven architecture is the asynchronous event-driven communication
between components. An event sets off particular responses or actions inside the system.
Characteristics:
o Asynchronous Communication: Independently published, subscribed to, and processed
events allow for component-to-component communication.
o Loose coupling: Because of their loose coupling, event-driven systems have more
flexibility regarding component interactions.
o Scalability: Event-driven systems scale effectively and can withstand heavy loads.
Use Cases:
Financial systems, Internet of Things platforms, and online multiplayer games are a few
examples of applications where event-driven architectures are appropriate since they require
real-time processing, flexibility, and scalability.
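The loose coupling described above comes from the fact that publishers and subscribers know only the event bus, never each other. A minimal, illustrative in-process sketch (real systems would use a message broker):

```python
# Minimal sketch of event-driven communication via a publish/subscribe bus.
# Publishers and subscribers are decoupled: both see only the EventBus.

class EventBus:
    def __init__(self):
        self._subscribers = {}
    def subscribe(self, event, handler):
        self._subscribers.setdefault(event, []).append(handler)
    def publish(self, event, payload):
        for handler in self._subscribers.get(event, []):
            handler(payload)

bus = EventBus()
log = []
# Billing and shipping react to the same event without knowing each other:
bus.subscribe("order_placed", lambda o: log.append(f"bill {o}"))
bus.subscribe("order_placed", lambda o: log.append(f"ship {o}"))
bus.publish("order_placed", "order#42")
print(log)  # ['bill order#42', 'ship order#42']
```

Adding a third reaction (say, analytics) is one more `subscribe` call; no existing component changes, which is what makes the style flexible and scalable.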
6. Service-Oriented Architecture:
A type of architecture known as service-oriented architecture, or SOA, emphasizes providing
services as the fundamental units of larger systems. Services may be coordinated to build large
systems since they are meant to be autonomous, reusable, and flexible.
Characteristics:
o Reusability: To minimize effort duplication, services are made to be used again in many
situations.
o Interoperability: SOA strongly emphasizes using open standards to ensure that services
from various suppliers can cooperate.
o Flexibility: Adaptability is made possible by orchestrating services to develop various
applications.
Use Cases:
Enterprise-level applications that necessitate integrating several systems and services
frequently employ SOA. It also frequently occurs in systems where various teams or
organizations have developed separate application components.
7. Peer-to-Peer Architecture:
Peer-to-peer (P2P) architecture enables communication and resource sharing between
networked devices or nodes without depending on a centralized server. Every network node
can serve as both a client and a server.
Characteristics:
o Decentralization: The lack of a single point of failure in P2P systems results from their
decentralization.
o Resource Sharing: Nodes can share resources such as files, processing power, and
network bandwidth.
o Autonomy: Every node inside the network possesses a certain level of autonomy,
enabling it to make decisions on its own.
Use Cases:
Peer-to-peer (P2P) architectures are widely employed in distributed systems, video
conferencing software, and file-sharing programs. In these applications, nodes cooperate and
exchange resources without a central authority.
8. N-Tier Architecture:
N-tier architecture is an expansion of layered architecture that divides the system into several
tiers or layers, each with a distinct function. Presentation, application, business logic, and data
storage layers are examples of these tiers.
Characteristics:
o Modularity: N-Tier designs divide intricate systems into more manageable, smaller
parts.
o Scalability: Performance can be optimized by scaling each layer independently.
o Security: Data security can be improved by physically or logically separating data storage
levels.
Use Cases:
N-tier architectures are frequently employed in web applications where a distinct division of
responsibilities is necessary. They work great in scenarios where maintainability, scalability, and
modifiability are crucial.
9. Cloud-Based Architecture:
Software systems are developed and delivered using cloud-based architecture, which uses
cloud computing services. Outsourcing infrastructure to cloud service providers makes
scalability, adaptability, and cost-effectiveness possible.
Characteristics:
o Scalability: Cloud services are easily expandable or contracted to accommodate
fluctuating needs.
o Cost-effectiveness: Cloud-based architecture lowers initial hardware purchase
requirements and ongoing maintenance expenses.
o Worldwide Accessibility: Cloud architecture enables applications to be accessed from
anywhere in the world.
Use Cases:
Enterprise systems, mobile apps, and online applications are among the use cases for cloud-
based architectures. They are helpful when programs must manage changing workloads or
when regional distribution is crucial.
x-X-x