
Conventional Software Management

Introduction to Models:

“Project Management is the discipline of organizing and managing resources

(e.g. people) in such a way that the project is completed within defined scope,

quality, time and cost constraints. A project is a temporary and one-time

endeavor undertaken to create a unique product or service, which brings about

beneficial change or added value.”

The goal of software project management is to understand, plan, measure and

control the project such that it is delivered on time and on budget. This

involves gathering requirements, managing risk, monitoring and controlling

progress, and following a software development process.

Software project management requires trained and experienced Software

Engineers in order to increase the likelihood of project success because software

development for large projects is extremely complex and following strict

engineering principles will help reduce the risks associated with the project.

Software project management is extremely important for the following

reasons:

 Software development is highly unpredictable: [as of 2007] only about

10% of projects are delivered within initial budget and on schedule.

 Management has a greater effect on the success or failure of a project

than technology advances.

 Too often there is too much scrap and rework; the overall process is immature, with too little reuse.

According to the 10th edition of the annual CHAOS report from The Standish Group, only 34% of projects are completed successfully. While this represents a substantial improvement over earlier surveys, there is clearly still room to do better. Why have things started to improve?

Project failure in itself is not the only reason why software management is so

important. When a project fails, not only is a product not delivered, but all the

money invested in the product is also lost. Without proper software

management, even completed projects will be delivered late and over budget.

Take a look at some of these examples:

The 2004 CHAOS report, entitled CHAOS Chronicles, found total U.S. project

waste to be $55 billion, made up of $38 billion in lost dollar value and $17

billion in cost overruns. Total project spending was found to be $255 billion in

the 2004 report.

In 1994, The Standish Group estimated that U.S. IT projects wasted $140 billion ($80 billion of that from failed projects) out of a total of $250 billion in project spending.

If the risk of failure and loss of money is not enough to convince you of the

importance of proper software management, consider that some software will

also put the lives of people at risk. Go read Software Horror Stories or History’s

Worst Software Bugs to see some examples…

Failures are universally unprejudiced: they happen in every country; to large

companies and small; in commercial, nonprofit, and governmental

organizations; and without regard to status or reputation.

So why does software fail anyways? Here is the list from the IEEE Spectrum:

 Unrealistic or unarticulated project goals

 Inaccurate estimates of needed resources

 Badly defined system requirements

 Poor reporting of the project’s status


 Unmanaged risks

 Poor communication among customers, developers, and users

 Use of immature technology

 Inability to handle the project’s complexity

 Sloppy development practices

 Poor project management

 Stakeholder politics

 Commercial pressures

Software project failures have a lot in common with airplane crashes. Just as

pilots never intend to crash, software developers don’t aim to fail. When a

commercial plane crashes, investigators look at many factors, such as the

weather, maintenance records, the pilot’s disposition and training, and

cultural factors within the airline. Similarly, we need to look at the business

environment, technical management, project management, and

organizational culture to get to the roots of software failures.

The pilot's actions just before a plane crashes are always of great

interest to investigators. That’s because the pilot is the ultimate

decision-maker, responsible for the safe operation of the craft. Similarly,

project managers play a crucial role in software projects and can be a major

source of errors that lead to failure.

Bad decisions by project managers are probably the single greatest cause of

software failures today. Poor technical management, by contrast, can lead to

technical errors, but those can generally be isolated and fixed. However, a bad

project management decision such as hiring too few programmers or picking

the wrong type of contract can wreak havoc.

Project management decisions are often tricky precisely because they involve

tradeoffs based on fuzzy or incomplete knowledge. Estimating how much an IT

project will cost and how long it will take is as much art as science. The larger
or more novel the project, the less accurate the estimates. It’s a running joke in

the industry that IT project estimates are at best within 25 percent of their

true value 75 percent of the time.

Poor project management takes many other forms, including bad

communication, which creates an inhospitable atmosphere that increases

turnover; not investing in staff training; and not reviewing the project’s

progress at regular intervals. Any of these can help derail a software project.

Another problem which distinguishes software engineering from other

engineering fields is the fact that software is not concrete. There is a common

misconception that software can be easily changed to do anything no matter

which stage the project is currently at. If construction on a building or bridge is

nearly complete, people understand that it is too late to make significant

changes to the architecture or design. However with software, clients tend to

have the impression that making changes is always easy, even though the end

result could be the equivalent to tearing down a nearly completed building!

A common misconception is that developing software means writing code,

which is definitely not the case. Writing code itself accounts for only about 40%

of software development. There are many other important steps such as

requirements, configuration, deployment and maintenance.

The main goal of software project management is to reduce the risks involved with a project so that the project can finish on budget and on time with all of the features desired by the clients.

Project management helps us achieve the following:

 Estimate the budget needed to complete the project before it starts and

to monitor the progress so that at any given time we know how much a

project has cost and how much more it will cost.


 Estimate the time needed to complete a project before it starts and to

monitor the progress so that at any given time we know how much time is

left before completion.

 Estimate which features can be developed in the given time and cost

frame.

 Monitor the project's progress so that we know which features have been completed and which ones will be completed before the end of the project.

 Software delivered must provide all the features specified in the

requirements (feature complete). Project management therefore helps

project managers re-negotiate features and requirements.

 Software users are among the worst-treated customers in engineering. It is taken for granted, without much complaint, that software has bugs, crashes from time to time, occasionally doesn't work, and is too complicated to install and use. Quality must be a given part of the scope; the completed features must be of high quality.

 Since project management is so important, we need to be able to rank

organizations in terms of their software capability and maturity. We use

the Capability and Maturity Model (CMM) to achieve this.

 CMM ranks the software development process of a firm by using five levels of maturity.
Waterfall model:

The waterfall model is a popular version of the systems development life cycle

model for software engineering. Often considered the classic approach to the

systems development life cycle, the waterfall model describes a development

method that is linear and sequential. Waterfall development has distinct goals

for each phase of development. Imagine a waterfall on the cliff of a steep

mountain. Once the water has flowed over the edge of the cliff and has begun

its journey down the side of the mountain, it cannot turn back. It is the same

with waterfall development. Once a phase of development is completed, the

development proceeds to the next phase and there is no turning back.
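The "no turning back" property can be sketched as a minimal state machine. The phase names are the usual waterfall stages; the class itself is a hypothetical illustration, not an API from any library:

```python
# Minimal sketch of waterfall's strict sequencing: phases advance in order,
# and any attempt to revisit an earlier phase is rejected.
PHASES = ["requirements", "design", "implementation", "testing", "maintenance"]

class WaterfallProject:
    def __init__(self):
        self.phase_index = 0

    @property
    def phase(self) -> str:
        return PHASES[self.phase_index]

    def advance(self) -> str:
        if self.phase_index + 1 >= len(PHASES):
            raise ValueError("already in the final phase")
        self.phase_index += 1
        return self.phase

    def go_back(self):
        # There is no turning back: the model provides no path upstream.
        raise RuntimeError("waterfall does not permit returning to a completed phase")

p = WaterfallProject()
p.advance()       # requirements -> design
print(p.phase)    # design
```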

Advantages of waterfall model:

 This model is simple and easy to understand and use.

 It is easy to manage due to the rigidity of the model – each phase has

specific deliverables and a review process.

 In this model phases are processed and completed one at a time. Phases

do not overlap.
 Waterfall model works well for smaller projects where requirements are

very well understood.

Disadvantages of waterfall model:

 Once an application is in the testing stage, it is very difficult to go back

and change something that was not well-thought out in the concept stage.

 No working software is produced until late during the life cycle.

 High amounts of risk and uncertainty.

 Not a good model for complex and object-oriented projects.

 Poor model for long and ongoing projects.

 Not suitable for projects where requirements are at a moderate to

high risk of changing.

When to use the waterfall model:

 This model is used only when the requirements are very well known,

clear and fixed.

 Product definition is stable.

 Technology is understood.

 There are no ambiguous requirements

 Ample resources with required expertise are available freely

 The project is short.

Very little customer interaction is involved during the development of the product; only once the product is ready can it be demoed to the end users. If any failure occurs after the product is developed, the cost of fixing such issues is very high, because everything from the documentation down to the logic must be updated.

Program design comes first:

Insert a preliminary program design phase between the software

requirements generation phase and the analysis phase. By this technique, the
program designer assures that the software will not fail because of storage,

timing, and data flux (continuous change). As analysis proceeds in the

succeeding phase, the program designer must impose on the analyst the

storage, timing, and operational constraints in such a way that he senses the

consequences. If the total resources to be applied are insufficient or if the

embryonic(in an early stage of development) operational design is wrong, it

will be recognized at this early stage and the iteration with requirements and

preliminary design can be redone before final design, coding, and test

commences. How is this program design procedure implemented? The

following steps are required:

Begin the design process with program designers, not analysts or

programmers.

Design, define, and allocate the data processing modes even at the risk of being

wrong. Allocate processing functions, design the database, allocate execution

time, define interfaces and processing modes with the operating system,

describe input and output processing, and define preliminary operating

procedures.

Write an overview document that is understandable, informative, and current

so that every worker on the project can gain an elemental understanding of

the system.

Document the design

The amount of documentation required on most software programs is quite a

lot, certainly much more than most programmers, analysts, or program

designers are willing to do if left to their own devices. Why do we need so much

documentation? (1) Each designer must communicate with interfacing

designers, managers, and possibly customers. (2) During early phases, the

documentation is the design. (3) The real monetary value of documentation is

to support later modifications by a separate test team, a separate


maintenance team, and operations personnel who are not software literate.

Do it twice

If a computer program is being developed for the first time, arrange matters

so that the version finally delivered to the customer for operational

deployment is actually the second version insofar as critical design/operations

are concerned. Note that this is simply the entire process done in miniature, to

a time scale that is relatively small with respect to the overall effort. In the

first version, the team must have a special broad competence where they can

quickly sense trouble spots in the design, model them, model alternatives,

forget the straightforward aspects of the design that aren't worth studying at

this early point, and, finally, arrive at an error-free program.

Plan, control, and monitor testing

Without question, the biggest user of project resources (manpower, computer time, and/or management judgment) is the test phase. This is the phase of

greatest risk in terms of cost and schedule. It occurs at the latest point in the

schedule, when backup alternatives are least available, if at all. The previous

three recommendations were all aimed at uncovering and solving problems

before entering the test phase. However, even after doing these things, there is

still a test phase and there are still important things to be done, including:

1. Employ a team of test specialists who were not responsible for the

original design;

2. Employ visual inspections to spot the obvious errors like dropped minus

signs, missing factors of two, jumps to wrong addresses (do not use the

computer to detect this kind of thing, it is too expensive);

3. Test every logic path;

4. Employ the final checkout on the target computer.
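Recommendation 3, "test every logic path," can be illustrated on a hypothetical toy function with two independent branches, which gives 2 × 2 = 4 paths to exercise:

```python
from itertools import product

# A toy function with two independent branches, giving 2 x 2 = 4 logic paths.
def classify(x: int, flag: bool) -> str:
    sign = "non-negative" if x >= 0 else "negative"   # branch 1
    mode = "strict" if flag else "lenient"            # branch 2
    return f"{sign}/{mode}"

# "Test every logic path": drive each combination of branch outcomes.
cases = list(product([5, -5], [True, False]))   # covers all 4 paths
results = {args: classify(*args) for args in cases}
assert results[(5, True)] == "non-negative/strict"
assert results[(-5, False)] == "negative/lenient"
print(f"exercised {len(results)} of 4 logic paths")
```

Real programs have far more paths (they multiply with each independent branch), which is exactly why the text flags testing as the phase of greatest cost and risk.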

Involve the customer

For some reason, what a software design is going to do is subject to wide

interpretation, even after previous agreement. It is important to involve the


customer in a formal way so that he has committed himself at earlier points

before final delivery. There are three points following requirements definition

where the insight, judgment, and commitment of the customer can bolster the

development effort. These include a "preliminary software review" following

the preliminary program design step, a sequence of "critical software design

reviews" during program design, and a "final software acceptance review".

To overcome the drawbacks of the waterfall model, other models were introduced into the software engineering and project management process.

Conventional software management performance:

Conventional software management practices are sound in theory, but practice is still tied to archaic technology and techniques. Conventional software economics provides a benchmark of performance for conventional software management principles. The best thing about software is its flexibility: it can be programmed to do almost anything. The worst thing about software is also its flexibility: the "almost anything" characteristic has made it difficult to plan, monitor, and control software development.

Three important analyses of the state of the software engineering industry are:
1. Software development is still highly unpredictable. Only about 10% of

software projects are delivered successfully within initial budget and

schedule estimates.

2. Management discipline is more of a discriminator in success or failure

than are technology advances.

3. The level of software scrap and rework is indicative of an immature

process.

All three analyses reached the same general conclusion: The success rate for

software projects is very low. The three analyses provide a good introduction

to the magnitude of the software problem and the current norms for

conventional software management performance.

Barry Boehm's "Industrial Software Metrics Top 10 List" is a good, objective

characterization of the state of software development.

1. Finding and fixing a software problem after delivery costs 100 times

more than finding and fixing the problem in early design phases

2. You can compress software development schedules 25% of nominal, but

no more.

3. For every $1 you spend on development, you will spend $2 on

maintenance

4. Software development and maintenance costs are primarily a function

of the number of source lines of code

5. Variations among people account for the biggest differences in software

productivity

6. The overall ratio of software to hardware costs is still growing. In 1955

it was 15:85; in 1985, 85:15.

7. Only about 15% of software development effort is devoted to

programming.

8. Software systems and products typically cost 3 times as much per

SLOC as individual software programs. Software-system products (i.e.,

system of systems) cost 9 times as much.


9. Walkthroughs catch 60% of the errors.

10. 80% of the contribution comes from 20% of the contributors.
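Rules 1 and 3 in the list above lend themselves to quick arithmetic. A small sketch with illustrative dollar figures (the ratios come from the list; the inputs are made up):

```python
# Quick arithmetic with two of Boehm's rules of thumb (illustrative only).
DEV_TO_MAINT_RATIO = 2.0      # rule 3: $2 maintenance per $1 development
LATE_FIX_MULTIPLIER = 100.0   # rule 1: post-delivery fix costs ~100x an early fix

def lifetime_cost(dev_cost: float) -> float:
    """Total development + maintenance cost under rule 3."""
    return dev_cost * (1 + DEV_TO_MAINT_RATIO)

def fix_cost(early_fix_cost: float, after_delivery: bool) -> float:
    """Cost of fixing a defect, under rule 1."""
    return early_fix_cost * (LATE_FIX_MULTIPLIER if after_delivery else 1.0)

print(lifetime_cost(1_000_000))             # $1M of development implies $3M lifetime
print(fix_cost(500, after_delivery=True))   # a $500 early fix becomes $50,000 late
```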

The Principles of Conventional Software Engineering

There are many descriptions of engineering software "the old way." After years

of software development experience, the software industry has learned many

lessons and formulated many principles. This section describes one view of

today's software engineering principles as a benchmark for introducing the

primary themes discussed throughout the remainder of the book. The

benchmark I have chosen is a brief article titled "Fifteen Principles of Software

Engineering" [Davis, 1994], The article was subsequently expanded into a book

[Davis, 1995] that enumerates 201 principles. Despite its title, the article

describes the top 30 principles, and it is as good a summary as any of the

conventional wisdom within the software industry. While I endorse much of

this wisdom, I believe some of it is obsolete. Davis's top 30 principles are quoted

next, in italics. For each principle, I comment on whether the perspective

provided later in this book would endorse or change it. I make several

assertions here that are left unsubstantiated until later chapters.

1. Make quality #1. Quality must be quantified and mechanisms put into place

to motivate its achievement.

Comment: Defining quality commensurate with the project at hand is important but is

not easily done at the outset of a project. Consequently, a modern process

framework strives to understand the trade-offs among features, quality, cost,

and schedule as early in the life cycle as possible. Until this understanding is

achieved, it is not possible to specify or manage the achievement of quality.

2. High-quality software is possible. Techniques that have been demonstrated

to increase quality include involving the customer, prototyping, simplifying

design, conducting inspections, and hiring the best people.

Comment: This principle is mostly redundant with the others.

3. Give products to customers early. No matter how hard you try to learn

users' needs during the requirements phase, the most effective way to
determine real needs is to give users a product and let them play with it.

Comment: This is a key tenet of a modern process framework, and there must be

several mechanisms to involve the customer throughout the life cycle.

Depending on the domain, these mechanisms may include demonstrable

prototypes, demonstration-based milestones, and alpha/beta releases.

4. Determine the problem before writing the requirements. When faced with

what they believe is a problem, most engineers rush to offer a solution. Before

you try to solve a problem, be sure to explore all the alternatives and don't be

blinded by the obvious solution.

Comment: This principle is a clear indication of the issues involved with the conventional

requirements specification process. The parameters of the problem become

more tangible as a solution evolves. A modern process framework evolves the

problem and the solution together until the problem is well enough understood

to commit to full production.

5. Evaluate design alternatives. After the requirements are agreed upon, you

must examine a variety of architectures and algorithms. You certainly do not

want to use an "architecture" simply because it was used in the requirements

specification.

Evolution of Software Economics

Software Economics:


Most software cost models can be abstracted into a function of five basic

parameters: size, process, personnel, environment, and required quality.

1. The size of the end product (in human-generated components), which

is typically quantified in terms of the number of source instructions or the

number of function points required to develop the required functionality

2. The process used to produce the end product, in particular the ability of

the process to avoid non-value-adding activities (rework, bureaucratic

delays, communications overhead)

3. The capabilities of software engineering personnel, and particularly

their experience with the computer science issues and the applications

domain issues of the project

4. The environment, which is made up of the tools and techniques

available to support efficient software development and to automate the

process

5. The required quality of the product, including its features, performance,

reliability, and adaptability

The relationships among these parameters and the estimated cost can be

written as follows:

Effort = (Personnel) × (Environment) × (Quality) × (Size)^Process

One important aspect of software economics (as represented within today's

software cost models) is that the relationship between effort and size exhibits

a diseconomy of scale. The diseconomy of scale of software development is a

result of the process exponent being greater than 1.0. Contrary to most

manufacturing processes, the more software you build, the more expensive it is

per unit item.
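The diseconomy of scale can be made concrete with a short sketch of the cost relation above. The coefficient values and the 1.2 exponent are illustrative assumptions, not calibrated model parameters:

```python
# Sketch of Effort = (Personnel)(Environment)(Quality)(Size^Process) with
# made-up coefficients; only the shape matters. A process exponent > 1.0
# yields the diseconomy of scale: cost per unit grows as the product grows.
def effort(size, personnel=1.0, environment=1.0, quality=1.0, process=1.2):
    return personnel * environment * quality * size ** process

small, large = 10_000, 100_000            # sizes, e.g. in SLOC
unit_cost_small = effort(small) / small   # effort per unit at the small size
unit_cost_large = effort(large) / large   # effort per unit at the large size
assert unit_cost_large > unit_cost_small  # the diseconomy of scale
print(f"{unit_cost_large / unit_cost_small:.2f}x higher unit cost at 10x size")
```

Contrast with manufacturing, where unit cost usually falls with volume; here the exponent on size pushes it the other way.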

Figure 2-1 shows three generations of basic technology advancement in tools,

components, and processes. The required levels of quality and personnel are

assumed to be constant. The ordinate of the graph refers to software unit costs

(pick your favorite: per SLOC, per function point, per component) realized by

an organization.
The three generations of software development are defined as follows:

1. Conventional: 1960s and 1970s, craftsmanship. Organizations used

custom tools, custom processes, and virtually all custom components built

in primitive languages. Project performance was highly predictable in that

cost, schedule, and quality objectives were almost always underachieved.

2. Transition: 1980s and 1990s, software engineering. Organizations

used more-repeatable processes and off-the-shelf tools, and mostly (>70%)

custom components built in higher level languages. Some of the

components (<30%) were available as commercial products, including the

operating system, database management system, networking, and

graphical user interface.

3. Modern practices: 2000 and later, software production. This book's

philosophy is rooted in the use of managed and measured processes,

integrated automation environments, and mostly (70%) off-the-shelf

components. Perhaps as few as 30% of the components need to be custom

built

Technologies for environment automation, size reduction, and process

improvement are not independent of one another. In each new era, the key is

complementary growth in all technologies. For example, the process advances

could not be used successfully without new component technologies and

increased tool automation.


Organizations are achieving better economies of scale in successive technology

eras, with very large projects (systems of systems), long-lived products, and

lines of business comprising multiple similar projects. Figure 2-2 provides an

overview of how a return on investment (ROI) profile can be achieved in

subsequent efforts across life cycles of various domains.


Pragmatic Software Cost Estimation

One critical problem in software cost estimation is a lack of well-documented

case studies of projects that used an iterative development approach. Because the software industry has inconsistently defined metrics or atomic units of measure, the data from actual projects are highly suspect in terms of consistency and comparability. It is hard enough to collect a homogeneous set of project data within one organization; it is extremely difficult to homogenize data across

different organizations with different processes, languages, domains, and so on.

There have been many debates among developers and vendors of software cost

estimation models and tools.

Three topics of these debates are of particular interest here:

1. Which cost estimation model to use

2. Whether to measure software size in source lines of code or function

points
3. What constitutes a good estimate

There are several popular cost estimation models (such as COCOMO,

CHECKPOINT, ESTIMACS, KnowledgePlan, Price-S, ProQMS, SEER, SLIM,

SOFTCOST, and SPQR/20). COCOMO is also one of the most open and

well-documented cost estimation models. The general accuracy of conventional

cost models (such as COCOMO) has been described as "within 20% of actuals,

70% of the time."
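The "within 20% of actuals, 70% of the time" criterion can be checked mechanically against a history of (estimate, actual) pairs. The project data below is entirely hypothetical:

```python
# Check a model's track record against the quoted benchmark:
# "within 20% of actuals, 70% of the time." The history is hypothetical.
def within_tolerance(estimate: float, actual: float, tol: float = 0.20) -> bool:
    return abs(estimate - actual) <= tol * actual

def hit_rate(history):
    hits = sum(within_tolerance(est, act) for est, act in history)
    return hits / len(history)

history = [(100, 110), (80, 120), (250, 240), (60, 55), (400, 520)]
rate = hit_rate(history)
print(f"within 20% of actuals {rate:.0%} of the time")
```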

Most real-world use of cost models is bottom-up (substantiating a target cost)

rather than top-down (estimating the "should" cost). Figure 2-3 illustrates the

predominant practice: the software project manager defines the target cost

of the software, and then manipulates the parameters and sizing until the

target cost can be justified. The rationale for the target cost may be to win a

proposal, to solicit customer funding, to attain internal corporate funding, or

to achieve some other goal.

The process described in Figure 2-3 is not all bad. In fact, it is necessary to

analyze the cost risks and understand the sensitivities and trade-offs

objectively. It forces the software project manager to examine the risks

associated with achieving the target costs and to discuss this information with

other stakeholders.
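The target-cost practice of Figure 2-3 can be sketched by inverting the cost relation given earlier to back-solve for the size that "justifies" a predetermined cost. The coefficients and the 1.2 exponent are illustrative assumptions:

```python
# Sketch of the target-cost practice from Figure 2-3: fix a target effort,
# then back-solve the cost relation for the size that "justifies" it.
# Effort = P * E * Q * Size^process  =>  Size = (Effort / (P*E*Q)) ** (1/process)
def justified_size(target_effort, personnel=1.0, environment=1.0,
                   quality=1.0, process=1.2):
    return (target_effort / (personnel * environment * quality)) ** (1 / process)

target = 50_000                  # target effort in some unit (illustrative)
size = justified_size(target)
# Round-trip check: this size reproduces the target effort.
assert abs(size ** 1.2 - target) < 1e-3
print(f"target effort {target} is 'justified' by a size of about {size:,.0f}")
```

This is exactly the bottom-up manipulation the text describes: the output (size) is tuned to make a predetermined cost look defensible.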

A good software cost estimate has the following attributes:

 It is conceived and supported by the project manager, architecture

team, development team, and test team accountable for performing the

work.

 It is accepted by all stakeholders as ambitious but realizable.

 It is based on a well-defined software cost model with a credible basis.

 It is based on a database of relevant project experience that includes

similar processes, similar technologies, similar environments, similar quality

requirements, and similar people.


 It is defined in enough detail so that its key risk areas are understood

and the probability of success is objectively assessed.

Extrapolating from a good estimate, an ideal estimate would be derived from

a mature cost model with an experience base that reflects multiple similar

projects done by the same team with the same mature processes and tools.
