S - E Unit 2


Q.1- What are the crucial steps of requirement engineering? Discuss with the help of a diagram.
Ans: Requirement Engineering is a systematic and disciplined approach to defining,
documenting, and maintaining the requirements of a software system. It involves several
crucial steps:
1. Feasibility Study: This step assesses whether it is worthwhile and practical to develop the
software, i.e., whether a product can be built that is acceptable to users, flexible to change,
and conformable to established standards.
2. Requirements Elicitation: This is the process of gathering information about the needs
and expectations of stakeholders for the software system.
3. Requirements Analysis: This step involves analyzing the information gathered during
elicitation to identify the high-level goals and objectives of the software system.
4. Requirements Specification: This step involves documenting the requirements identified
in the analysis step in a clear, consistent, and unambiguous manner.
5. Requirements Validation: This step involves checking that the documented requirements
are complete, consistent, and accurate.
6. Requirements Management: This step involves managing the requirements throughout
the software development life cycle, including tracking and controlling changes and
ensuring that the requirements remain valid and relevant.
Here is a simple diagram to illustrate the process:
Feasibility Study
|
V
Requirements Elicitation
|
V
Requirements Analysis
|
V
Requirements Specification
|
V
Requirements Validation
|
V
Requirements Management
Each of these steps is crucial in ensuring that the software system being developed meets the
needs and expectations of stakeholders, and that it is developed on time, within budget, and to
the required quality.
Q.2- What are the linkages between DFD and ER-diagram?
Data Flow Diagrams (DFDs) and Entity Relationship Diagrams (ERDs) serve different
purposes but are inherently related.
• DFDs visually represent data processes and flows within a system. They focus on how data
moves through a system, showing the flow of data between processes, data stores, and
external entities.
• ERDs, on the other hand, showcase the relationships between entities in a database. They
emphasize the structure and organization of data, representing the data objects or entities
and the relationships between them.
The linkages between DFDs and ERDs can be understood as follows:
• The data flows represented in a DFD correspond to the entities and relationships depicted
in an ERD.
• The entities in an ERD often represent data stores or external entities in a DFD.
• The relationships in an ERD can correspond to processes in a DFD that transform or move
data.
Ensuring consistency between DFDs and ERDs is essential for a holistic representation of
the system. They complement each other in providing a comprehensive view of both data
movement and data structure within a system.
Q.3- What are the problems in the formulation of requirements?
Formulating requirements for a software system can be challenging due to several factors:
1. Understanding User Needs: Requirements are often poorly defined and may change over
time, making it difficult for engineers to understand the user’s true needs.
2. Managing Stakeholders: There may be multiple stakeholders with different goals and
priorities, making it difficult to satisfy everyone’s requirements.
3. Identifying and Mitigating Risks: Engineers must identify and mitigate potential risks
associated with the requirements, such as security vulnerabilities or scalability issues.
4. Handling Ambiguity: Requirements may be ambiguous, inconsistent, or incomplete,
making it difficult for engineers to understand what the system should do.
5. Keeping Up with Changing Technology: Requirements must be aligned with the latest
technology trends and innovations, which can be difficult to predict and keep up with.
6. Maintaining a Balance Between Feasibility, Cost, and Time: Engineers need to balance
the feasibility of implementing a requirement, the cost of implementation, and the time
required to implement it.
7. Maintaining Traceability: Engineers need to maintain traceability of requirements
throughout the development process to ensure that all requirements are met and any
changes are tracked.
8. Understanding Large and Complex System Requirements: The word ‘large’ covers two
aspects: large constraints (for example, on security) due to a large number of users, and a
large number of functions to be implemented.
9. Undefined System Boundaries: There might be no defined set of implementation
requirements. The customer may go on to include several unrelated and unnecessary
functions besides the important ones, resulting in an implementation cost that exceeds the
decided budget.
10. Customers/Stakeholders Are Not Clear About Their Needs: Sometimes the customers
themselves may be unsure about the exhaustive list of functionalities they wish to see in
the software.
11. Conflicting Requirements: Two different stakeholders of the project may express
demands that contradict each other.
12. Changing Requirements: Over successive interviews or reviews, the customer may
express changes to the initially specified requirements.
Q.4- List out the phases of requirement elicitation.
Ans: Requirement elicitation is a critical part of the software development life cycle and is
typically performed at the beginning of the project. It involves the identification, collection,
analysis, and refinement of the requirements for a software system. The requirement
elicitation phase includes the following activities:
1. Knowledge of the overall area where the system is applied: Understanding the domain
and the context in which the system will be used.
2. Understanding the precise customer problem: The details of the specific customer
problem to which the system is going to be applied must be understood.
3. Prepare for elicitation: The goal is to consider the nature of the elicitation activity, choose
the correct techniques, and arrange sufficient resources.
4. Conduct elicitation: The goal is to discover and record the relevant requirements
information from the stakeholders.
5. Confirm elicitation findings: The information collected in the elicitation sessions is
checked for accuracy in this phase.
These activities ensure that the software development process is based on a clear and
comprehensive understanding of the customer’s needs and requirements.
Q.5- Discuss verification and validation in brief and also explain static and dynamic
verification.
Verification and Validation are two critical processes in software engineering that ensure a
software system meets its specifications and fulfills its intended purpose.
• Verification is the process of checking whether the product is being built correctly, i.e.,
whether the work products of each development phase conform to the requirements and
specifications laid down for them ("are we building the product right?"). Verification is also
known as static testing.
• Validation is the process of checking whether the product being built is the right product,
i.e., whether the finished software actually meets the customer’s needs and high-level
requirements ("are we building the right product?"). Validation is dynamic testing.
Static Verification (also known as static testing) is a software testing method performed to
check for defects without actually executing the code of the software application. Static
testing is performed in the early stages of development, where sources of failure are easier to
find and fix. Errors that cannot be found using dynamic testing can often be found by static
testing. Static testing involves manual or automated reviews of documents, designs, and
code. These reviews are done in the initial phases of testing to catch defects early in the
Software Testing Life Cycle (STLC).
Dynamic Verification (also known as dynamic testing) is a type of software testing performed
to analyze the dynamic behavior of the code. The software is executed with input values and
the resulting output values are analyzed. Dynamic testing is performed at a later stage of
software development, after the code has been built, and it covers both functional and
non-functional testing.
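To make the contrast concrete, here is a minimal Python sketch (the compute_discount function and its tests are invented for illustration, not taken from any standard): the unit tests are dynamic verification because they execute the code with concrete inputs, while a static check such as a review, a linter, or a type checker would examine the same source without running it.

# Hypothetical example: dynamic testing executes the code; static
# verification (reviews, linting, type checking) only inspects it.
import unittest

def compute_discount(price: float, is_member: bool) -> float:
    """Return the price after a 10% member discount."""
    # A reviewer or type checker can flag problems in this function
    # (e.g. a wrong annotation) without ever executing it.
    if is_member:
        return price * 0.9
    return price

class TestComputeDiscount(unittest.TestCase):
    # Dynamic verification: run the code with inputs and check outputs.
    def test_member_gets_discount(self):
        self.assertAlmostEqual(compute_discount(100.0, True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertAlmostEqual(compute_discount(100.0, False), 100.0)

if __name__ == "__main__":
    unittest.main()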
Q.6- What are software quality factors? Also explain SQA.
Ans: Software Quality Factors are the characteristics or properties that contribute to the
overall quality of a software product. They can be broadly divided into two categories. The
first category includes factors that can be measured directly, such as the number of logical
errors. The second category includes factors that can be measured only indirectly, such as
maintainability. Here are some key software quality factors:
1. Correctness: These requirements deal with the correctness of the output of the software
system.
2. Reliability: Reliability requirements deal with service failures.
3. Efficiency: This factor deals with the hardware resources needed to perform the different
functions of the software system.
4. Integrity: This factor deals with the security of the software system.
5. Usability: A software product has good usability if different categories of users (i.e.,
expert and novice users) can easily invoke the functions of the product.
6. Maintainability: A software product is maintainable if errors can be easily corrected as
and when they show up, new functions can be easily added, and existing functionalities can
be easily modified.
7. Flexibility: A software product is flexible if it can adapt to changes in its environment or
requirements.
8. Testability: A software product is testable if its functions can be tested easily.
9. Portability: A software product is portable if it can easily be made to work in different
operating environments, on different machines, and with other software products.
10. Reusability: A software product is reusable if different modules of the product can easily
be reused to develop new products.
11. Interoperability: A software product is interoperable if it can operate with other products.
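As a concrete illustration of the first category, the sketch below computes defect density (defects per thousand lines of code), a commonly used directly measurable quality metric; the function name and the figures are illustrative only.

# Illustrative sketch: defect density as an example of a directly
# measurable quality factor (defects per KLOC).
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Return the number of defects per thousand lines of code."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000)

# Made-up figures: 18 defects found in a 12,000-line module.
print(defect_density(18, 12_000))  # prints 1.5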
Software Quality Assurance (SQA) is a process that assures that all software engineering
processes, methods, activities, and work items are monitored and comply with the defined
standards. These defined standards could be one or a combination of standards such as ISO
9000, the CMMI model, ISO/IEC 15504, etc. SQA covers all software development processes,
from defining requirements through coding until release. Its prime goal is to ensure quality.
The SQA process includes activities such as creating an SQA management plan, setting
checkpoints, supporting and participating in the software engineering team’s requirements
gathering, conducting formal technical reviews, and formulating a multi-testing strategy.
Q.7- Explain ISO standards and why these standards are important.
Ans: ISO Standards are a set of internationally agreed best practices designed to provide a
framework for companies to ensure security, quality, and efficiency in their operations,
services, and products. They are developed by the International Organization for
Standardization (ISO), an independent, non-governmental, international standards
development organization composed of representatives from the national standards bodies of
member countries. ISO standards cover a wide range of activities, including making a
product, managing a process, delivering a service, and supplying materials.
ISO standards are important for several reasons:
1. Facilitates Global Trade: ISO standards are recognized and respected worldwide, making
it easier for companies to do business with each other across different countries and regions.
2. Improves Quality and Safety: ISO standards set guidelines and requirements for quality
management and safety measures, ensuring that products, services, and processes are safe
and reliable.
3. Enhances Efficiency: Implementing ISO standards can help businesses reduce waste,
improve their efficiency, and minimize risk, resulting in increased sustainability and
profitability.
4. Boosts Reputation: Certification to ISO standards can enhance an organization’s integrity
and reputation, leading to better business prospects and partnerships.
5. Promotes Consistency: ISO standards ensure that everyone follows the same set of
guidelines no matter where they are based, resulting in a safer, more consistent end result.
6. Supports Regulatory Compliance: Regulators and governments count on ISO standards to
help develop better regulation, knowing they have a sound basis thanks to the involvement of
globally established experts.
Q.8- Explain the CMM model and also discuss each level in detail.
The Capability Maturity Model (CMM) is a framework developed by the Software
Engineering Institute (SEI) at Carnegie Mellon University in 1987. It is used to analyze the
approach and techniques followed by an organization to develop software products, and it
provides guidelines to enhance the maturity of the processes used to develop those products.
The CMM describes a strategy for software process improvement that is followed by moving
through five different maturity levels. Each maturity level indicates a level of process
capability. All levels except Level 1 are further described by Key Process Areas (KPAs).
Here are the five levels of the CMM:
1. Initial (Level 1): At this level, work is performed informally and is characterized by ad
hoc activities. The process is unpredictable and reactive, which increases risk.
2. Repeatable (Level 2): Work is planned and tracked at this level. An organization at this
level has a basic and consistent project management process to track cost, schedule, and
functionality. The process is in place to repeat earlier successes on projects with similar
applications.
3. Defined (Level 3): At this level, the software processes for both management and
engineering activities are defined and documented. The process is more proactive.
4. Managed (Level 4): Work is quantitatively controlled at this level. Management can
effectively control the software development effort using precise measurements, and the
organization sets quantitative quality goals for both the software process and software
maintenance.
5. Optimizing (Level 5): Work is based upon continuous improvement at this level. The key
characteristic of this level is a focus on continuously improving process performance.
Each of these levels represents a stage of growth in the maturity of organizational processes.
The CMM is not a software process model, but a framework that helps evaluate, develop, and
improve the software development process.
Q.9- What is the degree of a relationship? Give an example of each relationship degree.
In Database Management Systems (DBMS), the degree of a relationship is the number of
entity types that participate in the relationship. Based on the number of entities involved, the
degree of a relationship can be categorized into several types:
1. Unary (Degree 1): In this type of relationship, a single entity type participates, related to
itself. For example, in a class there are many students, and some students also act as
monitors; the ‘monitor of’ relationship associates students with students, so only the
Student entity type participates.
2. Binary (Degree 2): In a binary relationship, two entity types are associated. For example,
with the entity types ‘Student’ and ‘ID’, each Student has an ID; since two entity types are
associated, this is a binary relationship.
3. Ternary (Degree 3): In a ternary relationship, three entity types are associated. For
example, with the entity types ‘Teacher’, ‘Course’, and ‘Class’, the relationship is that a
teacher teaches a particular course to a particular class.
4. N-ary (Degree N): In an N-ary relationship, N entity types are associated. This term is
used when more than three entities participate in a relationship.
Each of these degrees of relationship represents a different complexity of interaction between
entities in a database.
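As a rough illustration (an assumption of this note, not part of the definition), relationship instances of different degrees can be modelled in code as small records linking the participating entities; the sketch below reuses the Student, Teacher, Course, and Class examples from the answer, and the instance values are invented.

# Illustrative Python sketch: relationship instances of degree 1, 2 and 3
# modelled as small records linking the participating entities.
from dataclasses import dataclass

@dataclass(frozen=True)
class MonitorOf:        # Unary (degree 1): Student related to Student
    monitor: str
    student: str

@dataclass(frozen=True)
class HasId:            # Binary (degree 2): Student related to ID
    student: str
    id_number: str

@dataclass(frozen=True)
class Teaches:          # Ternary (degree 3): Teacher, Course and Class
    teacher: str
    course: str
    class_name: str

# Invented example instances.
print(MonitorOf(monitor="Asha", student="Ravi"))
print(HasId(student="Ravi", id_number="S-101"))
print(Teaches(teacher="Dr. Rao", course="DBMS", class_name="CS-3A"))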
Q.11- Define the decision table. Discuss the difference between a decision table and a
decision tree with the help of a suitable example.
A Decision Table is a concise visual representation for specifying which actions to perform
depending on given conditions. Decision tables are essentially algorithms whose output is a
set of actions. The information expressed in decision tables could also be represented as
decision trees, or in a programming language as a series of if-then-else and switch-case
statements. Each condition corresponds to a variable, relation, or predicate whose possible
values are listed among the condition alternatives. Each action is a procedure or operation to
perform, and the entries specify whether (or in what order) the action is to be performed for
the set of condition alternatives that the entry corresponds to.
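To illustrate the point about if-then-else representations, here is a minimal Python sketch; the access-control conditions and actions are invented for the example and are not taken from the text.

# Hypothetical decision table for granting access, written first as a
# table (condition combination -> action) and then as equivalent
# if-then-else logic.
DECISION_TABLE = {
    # (valid_password, account_locked): action
    (True,  False): "grant access",
    (True,  True):  "show 'account locked' message",
    (False, False): "show 'wrong password' message",
    (False, True):  "show 'account locked' message",
}

def decide_with_table(valid_password: bool, account_locked: bool) -> str:
    return DECISION_TABLE[(valid_password, account_locked)]

def decide_with_if_else(valid_password: bool, account_locked: bool) -> str:
    # The same rules expressed as nested if-then-else statements.
    if account_locked:
        return "show 'account locked' message"
    if valid_password:
        return "grant access"
    return "show 'wrong password' message"

# Both representations yield the same action for every condition combination.
for pwd in (True, False):
    for locked in (True, False):
        assert decide_with_table(pwd, locked) == decide_with_if_else(pwd, locked)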
A Decision Tree is a graph that uses a branching method to illustrate every possible outcome
of a decision. Decision trees are graphical and give a clearer representation of decision
outcomes. A decision tree consists of three kinds of nodes: decision nodes, chance nodes, and
terminal nodes.
The key differences between decision tables and decision trees are:
1. Representation: Decision tables are a tabular representation of conditions and actions,
whereas decision trees are a graphical representation of every possible outcome of a decision.
2. Derivation: We can derive a decision table from a decision tree, but we cannot derive a
decision tree from a decision table.
3. Criteria Clarification: Decision tables help to clarify the criteria, whereas decision trees
help to take into account the possible relevant outcomes of a decision.
4. Complexity: Decision tables are used whenever the processing logic is very complicated
and involves multiple conditions, whereas decision trees are more useful for simple to
moderate scenarios.
5. Interdependence: Decision tables can account for overlapping or interdependent effects,
but decision trees are not as well suited for interdependent factors.
6. Decision-Making: Decision tables deal with all possible conditions and actions in one
place, whereas decision trees require following through branches to determine outcomes.
7. Implementation: Decision tables can be implemented manually or through software,
whereas decision trees are typically implemented through software.
8. Analysis: Decision tables are suitable for rule-based analysis, whereas decision trees are
suitable for probability-based analysis.
Q.12 Discuss the significance and use of requirement engineering. What are the problems in
the formulation of requirements?
Requirement Engineering (RE) is a systematic and disciplined approach to defining,
documenting, and maintaining requirements in the engineering design process. It provides
the appropriate mechanism for understanding what the customer desires, analyzing the need,
assessing feasibility, negotiating a reasonable solution, specifying the solution clearly,
validating the specification, and managing the requirements as they are transformed into a
working system. Thus, requirement engineering is the disciplined application of proven
principles, methods, tools, and notations to describe a proposed system’s intended behavior
and its associated constraints.
The significance and uses of requirement engineering are:
• It provides a vision of the final software, i.e., what the software will do. This creates a
sense of mutual understanding between the customer and the software developer.
• Requirement engineering also helps in defining the scope of the software, i.e., what the
functionalities of the final software will be.
• It also helps in estimating the cost of the final software.
• The requirements engineering process is a critical step in the software development life
cycle, as it helps to ensure that the software system being developed meets the needs and
expectations of stakeholders, and that it is developed on time, within budget, and to the
required quality.
Formulating requirements for a software system can be challenging due to several factors:
• Understanding User Needs: Requirements are often poorly defined and may change over
time, making it difficult for engineers to understand the user’s true needs.
• Managing Stakeholders: There may be multiple stakeholders with different goals and
priorities, making it difficult to satisfy everyone’s requirements.
• Identifying and Mitigating Risks: Engineers must identify and mitigate potential risks
associated with the requirements, such as security vulnerabilities or scalability issues.
• Handling Ambiguity: Requirements may be ambiguous, inconsistent, or incomplete,
making it difficult for engineers to understand what the system should do.
• Keeping Up with Changing Technology: Requirements must be aligned with the latest
technology trends and innovations, which can be difficult to predict and keep up with.
• Maintaining a Balance Between Feasibility, Cost, and Time: Engineers need to balance the
feasibility of implementing a requirement, the cost of implementation, and the time
required to implement it.
• Maintaining Traceability: Engineers need to maintain traceability of requirements
throughout the development process to ensure that all requirements are met and any
changes are tracked.
• Understanding Large and Complex System Requirements: The word ‘large’ covers two
aspects: large constraints (for example, on security) due to a large number of users, and a
large number of functions to be implemented.
• Undefined System Boundaries: There might be no defined set of implementation
requirements. The customer may go on to include several unrelated and unnecessary
functions besides the important ones, resulting in an implementation cost that exceeds the
decided budget.
• Customers/Stakeholders Are Not Clear About Their Needs: Sometimes the customers
themselves may be unsure about the exhaustive list of functionalities they wish to see in
the software.
• Conflicting Requirements: Two different stakeholders of the project may express demands
that contradict each other.
• Changing Requirements: Over successive interviews or reviews, the customer may express
changes to the initially specified requirements.
Q.13 What is a data flow diagram? Explain the rules for drawing a good data flow diagram
with the help of a suitable example.
A Data Flow Diagram (DFD) is a graphical representation of the flow of data through a
process or a system. It uses defined symbols like rectangles, circles, and arrows, plus short
text labels, to show data inputs, outputs, storage points, and the routes between each
destination. DFDs are built using standardized symbols and notation to describe various
entities and their relationships. They can range from simple, even hand-drawn process
overviews to in-depth, multi-level DFDs that dig progressively deeper into how the data is
handled.
Here are some rules to keep in mind while drawing a DFD:
1. Data cannot flow between two external entities: A data flow must run from an entity to a
process or from a process to an entity.
2. Data cannot flow between two data stores: A data flow must run from a data store to a
process or from a process to a data store.
3. Data cannot flow directly from an entity to a data store: Data from an entity must pass
through a process before going to a data store, and vice versa.
4. A process must have at least one input data flow and one output data flow: Every process
needs an input data flow to process and an output data flow for the processed data.
5. A data store must have at least one input data flow and one output data flow: Every data
store needs an input data flow to store data and an output data flow for the retrieved data.
6. Two data flows cannot cross each other.
7. All processes in the system must be linked to at least one data store or another process.
8. The DFD should maintain consistency across all DFD levels.
9. Each process should be named with a short sentence, a single word, or a phrase that
expresses its essence.
For example, consider a simple library management system. The entities could be ‘Librarian’,
‘Member’, and ‘Books’. The processes could be ‘Issue Book’, ‘Return Book’, and ‘Add New
Book’. The data stores could be ‘Book Details’ and ‘Member Details’. The data flows could
be ‘Book Request’, ‘Issued Book’, ‘Returned Book’, ‘New Book Details’, etc. The DFD
would show how these entities, processes, and data stores interact with each other through the
data flows.
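A rough Python sketch of this example follows; the element names are drawn from the library example above (using only a subset of them), the individual flow labels such as 'Issue Record' are invented, and the check covers only the first three rules (every data flow must have a process at one end).

# Illustrative sketch: the library DFD as data, with a simple check that
# no flow connects two entities, two data stores, or an entity and a
# data store directly (rules 1-3 above).
ENTITIES = {"Librarian", "Member"}
PROCESSES = {"Issue Book", "Return Book", "Add New Book"}
DATA_STORES = {"Book Details", "Member Details"}

# Each flow is (source, data flow label, destination).
FLOWS = [
    ("Member", "Book Request", "Issue Book"),
    ("Issue Book", "Issued Book", "Member"),
    ("Issue Book", "Issue Record", "Book Details"),
    ("Librarian", "New Book Details", "Add New Book"),
    ("Add New Book", "New Book Record", "Book Details"),
]

def kind(element: str) -> str:
    if element in ENTITIES:
        return "entity"
    if element in PROCESSES:
        return "process"
    if element in DATA_STORES:
        return "data store"
    raise ValueError(f"unknown DFD element: {element}")

def violates_basic_rules(source: str, destination: str) -> bool:
    """A data flow is invalid unless at least one end is a process."""
    return kind(source) != "process" and kind(destination) != "process"

for src, label, dst in FLOWS:
    status = "INVALID" if violates_basic_rules(src, dst) else "ok"
    print(f"{src} --[{label}]--> {dst}: {status}")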
Q.14 What are the differences between analysis and design?
Analysis and Design are two crucial phases in the software development life cycle, each with
its own purpose and focus.
Analysis is the process of understanding the problem, identifying the requirements, and
defining the problem that needs to be solved. It involves breaking down a system into its
individual components and understanding how each component interacts with the others to
accomplish the system’s overall goal. In this phase, the analyst collects the requirements of
the system and documents them. The main focus of the analysis phase is on “what” the
system should do.
Design, on the other hand, is the process of creating a solution to the problem identified
during the analysis phase. It involves designing the system architecture, components,
modules, interfaces, and data. The design phase focuses on “how” the system will do what it
needs to do. It involves identifying the modules and components of the system, creating the
user interface, and designing the database.
In summary, while analysis focuses on understanding the problem and defining the
requirements, design focuses on how to implement the solution.
Q.15 What is a data flow diagram? Explain rules for drawing good data flow diagrams
with the help of a suitable example.
Ans: A Data Flow Diagram (DFD) is a graphical representation of the flow of data through a
process or a system. It uses defined symbols like rectangles, circles, and arrows, plus short
text labels, to show data inputs, outputs, storage points, and the routes between each
destination. DFDs are built using standardized symbols and notation to describe various
entities and their relationships. They can range from simple, even hand-drawn process
overviews to in-depth, multi-level DFDs that dig progressively deeper into how the data is
handled.
Here are some rules to keep in mind while drawing a DFD:
1. Data cannot flow between two external entities: A data flow must run from an entity to a
process or from a process to an entity.
2. Data cannot flow between two data stores: A data flow must run from a data store to a
process or from a process to a data store.
3. Data cannot flow directly from an entity to a data store: Data from an entity must pass
through a process before going to a data store, and vice versa.
4. A process must have at least one input data flow and one output data flow: Every process
needs an input data flow to process and an output data flow for the processed data.
5. A data store must have at least one input data flow and one output data flow: Every data
store needs an input data flow to store data and an output data flow for the retrieved data.
6. Two data flows cannot cross each other.
7. All processes in the system must be linked to at least one data store or another process.
8. The DFD should maintain consistency across all DFD levels.
9. Each process should be named with a short sentence, a single word, or a phrase that
expresses its essence.
For example, consider a simple library management system. The entities could be ‘Librarian’,
‘Member’, and ‘Books’. The processes could be ‘Issue Book’, ‘Return Book’, and ‘Add New
Book’. The data stores could be ‘Book Details’ and ‘Member Details’. The data flows could
be ‘Book Request’, ‘Issued Book’, ‘Returned Book’, ‘New Book Details’, etc. The DFD
would show how these entities, processes, and data stores interact with each other through the
data flows.

Q.16 Compare the ISO 9000 and SEI-CMM models.


ISO 9000 and SEI-CMM are two well-established models for a software quality system.
Here are some key differences between them:
1. Definition: ISO 9000 is an international standard for quality management and quality
assurance. It certifies that a company documents the quality system elements needed to
run an efficient, quality-oriented operation. SEI-CMM (Software Engineering Institute –
Capability Maturity Model), on the other hand, is specifically for software organizations
and certifies the level at which they follow and maintain quality standards.
2. Focus: The focus of ISO 9000 is on the customer-supplier relationship and on reducing
the customer’s risk. The focus of SEI-CMM is on improving the processes used to deliver
a quality software product to the customer.
3. Target Industry: ISO 9000 is used by manufacturing industries, whereas SEI-CMM is
used by the software industry.
4. Recognition: ISO 9000 is universally accepted across many countries, while SEI-CMM is
mostly used in the USA.
5. Guidelines: ISO 9000 gives guidance on the concepts, principles, and safeguards that
should be in place in a workplace. SEI-CMM specifies what is to be followed at each
level of maturity.
6. Levels: ISO 9000 has a single acceptance level, whereas SEI-CMM has five maturity
levels: Initial, Repeatable, Defined, Managed, and Optimized.
7. Validity: An ISO 9000 certificate is valid for three years, and a SEI-CMM assessment is
likewise typically valid for three years.
8. Approach: ISO 9000 focuses on following a set of standards so that the firm’s deliveries
are successful every time, whereas SEI-CMM focuses on continuously improving the
processes.
Both models can be used to complement each other when establishing a quality system in a
software engineering organization.
