BX4002
Department of MCA
BX4002 - Problem Solving and Programming in C
UNIT I INTRODUCTION TO COMPUTER PROBLEM SOLVING
Introduction
The following six steps must be followed to solve a problem using a computer.
1. Problem Analysis
First, we need to know what problem is actually being solved. Making a clear
statement of the problem depends upon the size and complexity of the problem.
Smaller problems that do not involve multiple subsystems can easily be stated,
and we can then move on to the next step of "Program Design". However, a
problem that interacts with various subsystems and a series of programs requires
complex analysis, in-depth research and careful coordination of people,
procedures and programs.
2. Specifying Output Requirements :
Before identifying the inputs required for the system, we need to identify what
comes out of the system. The best way to specify output is to prepare some
output forms and the required format for displaying results. The best person to
judge an output form is the end user of the system, i.e., the one who uses the
software for his benefit. The programmer can design various forms, which must
then be examined to see whether they are useful or not.
3. Specifying Input Requirements :
After having specified the outputs, the inputs and data required for the system
need to be specified as well. One needs to identify the list of inputs required
and the source of the data. For example, in a simple program to keep students'
records, the inputs could be the students' names, addresses, roll numbers, etc.
The sources could be the students themselves or the person supervising them.
4. Specifying Processing Requirements :
Once the outputs and inputs are specified, we need to specify the process that
converts the specified inputs into the desired output. If the proposed program
is to replace or supplement an existing one, a careful evaluation of the present
processing procedures needs to be made, noting any improvements that could be
made. If the proposed system is not designed to replace an existing system, then
it is well advised to carefully evaluate another system that addresses a similar
problem.
5. Evaluating Feasibility :
After the successful completion of all the above four steps, one needs to see
whether the things accomplished so far in the process of problem solving are
practical and feasible. To replace an existing system, one needs to determine
how the potential improvements outperform the existing system or other similar
systems.
Before concluding the program analysis stage, it is best to record whatever has
been done so far in the first phase of program development. The record should
contain the statement of program objectives, output and input specifications,
processing requirements and feasibility.
2. Program Design
The second stage in software development, or in the problem-solving-using-computer
cycle, is program design. This stage consists of preparing algorithms, flowcharts
and pseudocodes. Generally, this stage intends to make the program more user
friendly, feasible and optimized. The programmer requires only pen and paper in
this step, in which the tasks are first converted into a structured layout
without the involvement of a computer. In structured programming, a given task is
divided into a number of sub-tasks, which are termed modules. Each module is
further divided until no further divisions are required. This process of dividing
a program into modules and then into sub-modules is known as the "top-down"
design approach. Dividing a program into modules (functions) breaks a given
programming task down into small, independent and manageable tasks.
The tools used in program design are:
1. Algorithms
2. Flowcharts
3. Pseudocodes
3. Coding
In this stage, the process of writing the actual program takes place. A coded
program is most popularly referred to as source code. The coding can be done in
any language (high level or low level). The actual use of the computer takes
place in this stage, in which the programmer writes a sequence of instructions
ready for execution. Coding is also known as programming.
A good program should meet the following criteria :
1. Comment clauses in the program help to make the program readable and
understandable by people other than the original programmer.
2. It should be efficient.
3. It must be reliable enough to work under all reasonable conditions to provide a
correct output.
4. It must be able to detect unreasonable error conditions and report them to the
end user or programmer without crashing the system.
5. It should be easy to maintain and support after installation.
Generally, coding is done in a high level language or a low level language
(assembly language). For the computer to understand these languages, they must
be translated into machine level language. The translation is carried out by a
compiler or interpreter (for a high level language) or an assembler (for an
assembly language program). The machine language code thus created can be saved
and run immediately or later on.
Compilation Process
A source code must go through several steps before it becomes an executable
program. In the first step, the source code is checked for syntax errors. After
the syntax errors are traced out, the source file is passed through a compiler,
which translates the high level language into object code (machine code not yet
ready to be executed). A linker then links the object code with pre-compiled
library functions, thus creating an executable program. This executable program
is then loaded into memory for execution. The general compilation process is
shown in the figure below:
Figure : Compilation Process
To understand debugging and testing more intuitively, let us first consider the
different types of errors that occur while programming.
Error
An error means failure of compilation or execution of the computer program, or
not getting the expected results after execution. Debugging and testing are
systematic processes during the program development cycle to avoid errors in
the program. The different types of errors that we encounter while programming
are listed below :
Types of Error:
4. Debugging
Debugging is the process of finding errors and removing them from a computer
program; otherwise they will lead to failure of the program. Even after taking
full care during program design and coding, some errors may remain in the
program, and these errors appear during compilation, linking or execution.
Debugging is generally done by the program developer.
5. Testing
Testing is performed to verify whether the completed software package functions
according to the expectations defined by the requirements. Testing is generally
performed by a testing team, which repeatedly executes the program with the
intent of finding errors. After testing, the list of errors and related
information is sent to the program developer or development team.
Debugging vs Testing
Testing is done during the testing phase, which comes after the development
phase. Testing is generally carried out by a separate testing team rather than
the program developer.
6. Program Documentation
1. Programmer's Documentation
2. User's Documentation
Programmer's Documentation
User's Documentation
User documentation is required for the end user who installs and uses the
program. It consists of instructions for installing the program and a user
manual.
Problem solving is the act of defining a problem; determining the cause of the
problem; identifying, prioritizing, and selecting alternatives for a solution; and
implementing a solution.
Problem Solving Chart

Step: 1. Define the problem
Characteristics:
• Differentiate fact from opinion
• Specify underlying causes
• Consult each faction involved for information
• State the problem specifically
• Identify what standard or expectation is violated
• Determine in which process the problem lies
• Avoid trying to solve the problem without data

Step: 4. Implement and follow up on the solution
Characteristics:
• Plan and implement a pilot test of the chosen alternative
• Gather feedback from all affected parties
• Seek acceptance or consensus by all those affected
• Establish ongoing measures and monitoring
• Evaluate long-term results based on final solution
1. Define the problem
Diagnose the situation so that your focus is on the problem, not just its symptoms.
Helpful problem-solving techniques include using flowcharts to identify the expected
steps of a process and cause-and-effect diagrams to define and analyze root causes.
The sections below help explain key problem-solving steps. These steps support the
involvement of interested parties, the use of factual information, comparison of
expectations to reality, and a focus on root causes of a problem. You should begin by:
• Reviewing and documenting how processes currently work (i.e., who does
what, with what information, using what tools, communicating with what
organizations and individuals, in what time frame, using what format).
• Evaluating the possible impact of new tools and revised policies in the
development of your "what should be" model.
2. Generate alternative solutions
Postpone the selection of one solution until several problem-solving alternatives
have been proposed. Considering multiple alternatives can significantly enhance
the value of your ideal solution. Once you have decided on the "what should be"
model, this target standard becomes the basis for developing a road map for
investigating alternatives. Brainstorming and team problem-solving techniques are
both useful tools in this stage of problem solving.
Many alternative solutions to the problem should be generated before final evaluation.
A common mistake in problem solving is that alternatives are evaluated as they are
proposed, so the first acceptable solution is chosen, even if it’s not the best fit. If we
focus on trying to get the results we want, we miss the potential for learning
something new that will allow for real improvement in the problem-solving process.
3. Evaluate and select an alternative
Skilled problem solvers use a series of considerations when selecting the best
alternative. They consider the extent to which:
4. Implement and follow up on the solution
Leaders may be called upon to direct others to implement the solution, "sell"
the solution, or facilitate the implementation with the help of others.
Involving others in the implementation is an effective way to gain buy-in and
support and to minimize resistance to subsequent changes.
Regardless of how the solution is rolled out, feedback channels should be built into
the implementation. This allows for continuous monitoring and testing of actual
events against expectations. Problem solving, and the techniques used to gain clarity,
are most effective if the solution remains in place and is updated to respond to future
changes.
Top-down design is a method of breaking a problem down into smaller, less complex
pieces from the initial overall problem. Most “good” problems are too complex to
solve in just one step, so we divide the problem up into smaller manageable pieces,
solve each one of them and then bring everything back together again. The process of
making the steps more and more specific in top-down design is called stepwise
refinement.
We will be using top-down design (and top-down design diagrams, like the one above)
to help us understand a problem and all its components.
Implementation of algorithm
Characteristics of Algorithm
Step 1: Start
Step 2: Take a pan and pour some water into it.
Step 3: Place the pan on a gas burner.
Step 4: Turn on the gas burner.
Step 5: Wait for some time until the water is boiled.
Step 6: Add some tea leaves to the water according to the requirement.
Step 7: Then again wait for some time until the water becomes as colorful as tea.
Step 8: Add sugar and milk according to the requirement.
Step 9: Again wait for some time until the sugar has melted.
Step 10: Turn off the gas burner and serve the tea in cups with biscuits.
Step 11: End
This is an algorithm for making a cup of tea. The same approach is used for
computer science problems.
Step 1: Start
Step 8: End
Example 2. Find the area of a rectangle.
Step 1: Start
Step 2: Read the value of Height
Step 3: Read the value of Width
Step 4: Multiply Height by Width
Step 5: Store the result of the multiplication in "Area" (that is,
Area = Height x Width)
Step 6: Print Area
Step 7: End
Example 3. Find the greatest between 3 numbers.
Step 1: Start
Step 2: Read the three numbers A, B and C
Step 3: If A > B and A > C
Step 4: Then
Step 5: Print A
Step 6: Else
Step 7: If B > C
Step 8: Then
Step 9: Print B
Step 10: Else Print C
Step 11: End
Advantages of Algorithm
Disadvantages of Algorithms
Program Verification
************See Program Verification PDF***************
The efficiency of algorithms
Computer resources are limited and should be utilized efficiently. The
efficiency of an algorithm is defined in terms of the computational resources
used by the algorithm. An algorithm must be analyzed to determine its resource
usage, and its efficiency can be measured based on the usage of different
resources.
The efficiency of an algorithm depends on how efficiently it uses time and
memory space.
The time efficiency of an algorithm can be measured in different ways. For
example, write a program for a defined algorithm, execute it using any
programming language, and measure the total time it takes to run. The execution
time that you measure in this case would depend on a number of factors, such as
the speed of the machine, the compiler or interpreter used, and the load on the
system.
However, to determine how efficiently an algorithm solves a given problem, you would
like to determine how the execution time is affected by the nature of the algorithm.
Therefore, we need to develop fundamental laws that determine the efficiency of a
program in terms of the nature of the underlying algorithm.
Space-Time tradeoff
To solve a given programming problem, many different algorithms may be used. Some
of these algorithms may be extremely time-efficient and others extremely
space-efficient.
Time/space trade off refers to a situation where you can reduce the use of memory at the
cost of slower program execution, or reduce the running time at the cost of increased
memory usage.
Asymptotic Notations
Asymptotic notations are mathematical notations used to make meaningful
statements about time and space complexity. The following three asymptotic
notations are mostly used to represent the time complexity of algorithms:
(i) Big O
Big O is used to describe the upper bound (worst-case) of an asymptotic
function: f(n) = O(g(n)) means that, for large enough n, f(n) grows no faster
than a constant multiple of g(n).
(ii) Big Ω
Big Omega is the reverse of Big O: if Big O describes the upper bound
(worst-case) of an asymptotic function, Big Omega describes the lower bound
(best-case).
(iii) Big Θ
When an algorithm has a complexity whose lower bound equals its upper bound,
say an algorithm has complexity O(n log n) and Ω(n log n), it actually has the
complexity Θ(n log n), which means the running time of that algorithm always
falls within n log n in both the best case and the worst case.
Best, Worst, and Average Case Efficiency
Consider, for example, a sequential (linear) search for a key element in a list
of n elements. The best case would be if the first element in the list matches
the key element to be searched; the efficiency in that case would be expressed
as O(1), because only one comparison is needed.
Similarly, the worst case in this scenario would be if the complete list is
searched and the element is found only at the end of the list, or is not found
in the list at all. The efficiency of the algorithm in that case would be
expressed as O(n), because n comparisons are required to complete the search.
The average case efficiency of an algorithm can be obtained by finding the
average number of comparisons, as given below:
Average number of comparisons = (1 + 2 + ... + n) / n = (n + 1) / 2
In the analysis of an algorithm, the focus is generally on CPU (time) usage,
memory usage, disk usage, and network usage. All are important, but the biggest
concern is CPU time. Be careful to differentiate between:
• Performance: how much time, memory, disk, etc. is actually used when a
program is run. This depends on the machine and the compiler as well as on the
code itself.
• Complexity: how the resource requirements of a program or algorithm scale,
i.e., what happens as the size of the problem being solved gets larger.
Algorithm Analysis:
Algorithm analysis is an important part of computational complexity theory, which
provides theoretical estimation for the required resources of an algorithm to solve a
specific computational problem. Analysis of algorithms is the determination of the
amount of time and space resources required to execute it.
The branch of theoretical computer science where the goal is to classify algorithms
according to their efficiency and computational problems according to their inherent
difficulty is known as computational complexity. Paradoxically, such classifications
are typically not useful for predicting performance or for comparing algorithms in
practical applications because they focus on order-of-growth worst-case performance.
In this book, we focus on analyses that can be used to predict performance and
compare algorithms.
A complete analysis of the running time of an algorithm involves the following
steps:
• Distributional. Let Π_N be the number of possible inputs of size N and Π_{Nk}
be the number of inputs of size N that cause the algorithm to have cost k, so
that Π_N = Σ_k Π_{Nk}. Then the probability that the cost is k is Π_{Nk}/Π_N,
and the expected cost is (1/Π_N) Σ_k k Π_{Nk}. The analysis depends on
"counting": how many inputs are there of size N, and how many inputs of size N
cause the algorithm to have cost k? These are the steps to compute the
probability that the cost is k, so this approach is perhaps the most direct
from elementary probability theory.
• Cumulative. Let Σ_N be the total (or cumulated) cost of the algorithm on all
inputs of size N. (That is, Σ_N = Σ_k k Π_{Nk}, but the point is that it is not
necessary to compute Σ_N in that way.) Then the average cost is simply Σ_N/Π_N.
The analysis depends on a less specific counting problem: what is the total
cost of the algorithm on all inputs? We will be using general tools that make
this approach very attractive.
The distributional approach gives complete information, which can be used directly to
compute the standard deviation and other moments. Indirect (often simpler) methods
are also available for computing moments when using the other approach, as we will
see. In this book, we consider both approaches, though our tendency will be towards
the cumulative method, which ultimately allows us to consider the analysis of
algorithms in terms of combinatorial properties of basic data structures.
To analyze this algorithm, we start by defining a cost model (running time) and
an input model (randomly ordered distinct elements). To separate the analysis
from the implementation, we define C_N to be the expected number of compares
used to sort N randomly ordered distinct elements. Partitioning an array of
size N uses N+1 compares, and if the partitioning element is the (k+1)-st
smallest, the sizes of the two subarrays to be sorted in that case are k and
N−k−1. This leads to the recurrence

C_N = N + 1 + (1/N) Σ_{0 ≤ k < N} (C_k + C_{N−k−1}),

whose solution grows as 2N ln N. The table below compares exact values of C_N
with the approximation:

N           C_N (exact)      approximation
10000       175771.70        164206.81
100000      2218053.41       2102585.09

The discrepancy in the table is explained by our dropping the 2N term (and our
not using a more accurate approximation to the integral).
1.7 Distributions.
It is possible to use similar methods to find the standard deviation and other moments.
The standard deviation of the number of compares used by quicksort is
√(7 − 2π²/3)·N ≈ 0.6482776·N, which implies that the number of compares is not
likely to be far from the mean for large N. Does the number of compares obey a
normal distribution? No. Characterizing this distribution is a difficult
research challenge.
Is our assumption that the input array is randomly ordered a valid input model? Yes,
because we can randomly order the array before the sort. Doing so turns quicksort
into a randomized algorithm whose good performance is guaranteed by the laws of
probability.
It is always a good idea to validate our models and analysis by running experiments.
Detailed experiments by many people on many computers have done so for quicksort
over the past several decades.
In this case, a flaw in the model for some applications is that the array items need not
be distinct. Faster implementations are possible for this case, using three-way
partitioning.
Fundamental Algorithms
************** See Fundamental Algorithms PDF***************