
Course: Algorithms and Data Structures I

ENSM
2022/2023 – 2024/2025

Chapter I
I. General Introduction to Computer Science
In all the subjects studied at school, high school, or university, the main goal is to
acquire knowledge in different fields in order to:
1. Learn to reason about issues of varying complexity;
2. Solve problems.

The main purpose of this approach is to understand "scientific reasoning" and the process
of "scientific knowledge production".

In mathematics, algebra and analysis are two fields distinguished by their content
and by the types of problems they address. In high school, we already studied arithmetic
(which teaches calculation on numbers), algebra (which teaches the solving of equations,
i.e. calculation on variables), and geometry (where we learn to reason about geometric
figures).

In computer science, we learn problem solving through an algorithmic approach, using
algorithms. This approach is fundamentally different from mathematical proof: whereas in
mathematics we may be satisfied with showing the existence of an object by proving a
theorem, in computer science we must also construct/find the object (the result) using
an algorithm.

Since algorithms are translated into programming languages in order to be processed
(executed) by a computer, we must also learn the architecture of the computer and how a
computer works in order to make this resolution possible.

But before going further, it is worth giving some definitions:

Computer Science:
- Computer science is the science of automatic information processing.
- The term “informatique” (in French) is a neologism proposed in 1962 by Philippe
Dreyfus to characterize the automatic processing of information: it is built on the
contraction of the expression “information automatique”. This term was accepted by the
French Academy in April 1966, and “informatique” then officially became the science of
automatic information processing. The word “informatics” does not really have an
equivalent in the United States, where one speaks of “computing science” or “computer
science”, whereas “informatics” is accepted in Britain.

- Computing deals with two complementary aspects: the immaterial programs (software),
which describe a process to be carried out, and the physical machines (hardware), which
execute this process.

Hardware:
Hardware is therefore all the physical elements (microprocessor, memory, screen, keyboard,
hard disks, etc.) used to process data.

Software:
Software is a structured set of instructions describing information processing to be
carried out by computer hardware.

Computer:
The computer is a generic term designating equipment that processes information
according to sequences of instructions (the programs, i.e. the software). The term
“ordinateur” (in French) was proposed by the philologist Jacques Perret in April 1955
in response to a request from IBM France, which considered the word “calculator”
far too restrictive with regard to the capabilities of these machines.

The hardware architecture of a computer is based on the von Neumann model¹. The
computer is structured around four main parts (see Figure 1):
1. The arithmetic/logic unit performs the basic operations.
2. The control unit sequences the operations.
3. The memory contains both the data to be processed and the program, which tells the
control unit what operations to perform on the data.
4. The input/output devices communicate with the outside world (screen, keyboard,
mouse, printer, scanner, etc.).
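The four parts above can be illustrated with a minimal Python sketch of the fetch-decode-execute cycle. The instruction set here is invented for illustration, not a real machine; the key point is that program and data share one memory, the control unit sequences the operations, and the arithmetic/logic unit performs them.

```python
def run(memory):
    """Execute the instructions stored in memory until HALT; return the memory."""
    acc = 0  # accumulator register of the arithmetic/logic unit
    pc = 0   # program counter of the control unit
    while True:
        op, arg = memory[pc]   # the control unit fetches the next instruction
        pc += 1                # and advances to the following one
        if op == "LOAD":       # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program (cells 0-3) and data (cells 4-6) live in the same memory,
# exactly as in the von Neumann model.
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
       4: 2, 5: 3, 6: 0}
run(mem)
print(mem[6])  # the sum 2 + 3 stored in cell 6
```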

Figure 1. Von Neumann architecture.

¹ John von Neumann (1903-1957): American mathematician of Hungarian origin. He gave his
name to the von Neumann architecture used in almost all modern computers.

II. Algorithms: A brief history

a. History of the word “Algorithm”


- Algorithmics is a branch of mathematics.
- It has existed for many centuries (about 1,000 years), even before the
existence of the computer.
- It was born with the birth of algebra as a new mathematical discipline.
- The word “algorithm” is the Latin transformation of the name of the 9th-century
Muslim mathematician Muḥammad ibn Mūsā al-Khwārizmī.

b. Why did his Latin successors give the name of this mathematician to this
discipline?
- To understand this, one must study what existed before al-Khwārizmī and
what his contribution brought to scientific knowledge.

c. Mathematics before al-Khwārizmī


Among the Greeks and their predecessors (Babylonians, Indians, etc.), there were
two main mathematical disciplines:

i. Arithmetic (صناعة الحساب): calculation on numbers (+, −, ×, etc.).

ii. Euclidean geometry (صناعة الهندسة): calculation on plane
geometric figures.

In a very famous mathematical treatise succinctly entitled “al-Jabr
wa al-Muqābala” (الجبر والمقابلة), al-Khwārizmī introduces a new
mathematical discipline which makes it possible, within a single formalism, to
express and solve problems of both arithmetic and geometry. This new discipline
consists in calculating “on variables” as one calculates on numbers. Algebra
problems are expressed using expressions in which the unknown to be sought is a
variable that can be either of arithmetic type or of geometric type.

This new formalism makes it possible to interpret an expression of equality
between two members (المقابلة / equation). The fundamental operations on the
members of an equation are transposition and reduction (الجبر).
⇒ Refer to al-Khwārizmī's solution of the quadratic equation.
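As a sketch of that solution, the completing-the-square rule for equations of the form x² + bx = c (with b and c positive) can be written in Python; the function name is ours, and we use al-Khwārizmī's famous worked example x² + 10x = 39, whose positive root is 3.

```python
import math

def complete_the_square(b, c):
    """Positive root of x^2 + b*x = c, by completing the square:
    x = sqrt((b/2)^2 + c) - b/2."""
    half = b / 2
    return math.sqrt(half * half + c) - half

print(complete_the_square(10, 39))  # x^2 + 10x = 39  ->  3.0
```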

III. The algorithm

- We can informally define an algorithm as follows:

An algorithm is a valid computational method intended to solve a previously
defined class of problems.

- An algorithm is an ordered sequence of instructions that indicates the procedure to
be followed to solve a series of equivalent problems.

So when we define an algorithm, it must contain only instructions that are
understandable by the person or machine that will execute it.

- The processing that the computer performs on the data is specified by what is called
an algorithm. The algorithm describes a finite, organized, and unambiguous
sequence of elementary operations in order to obtain a solution to a given problem.
In general, this solution is not unique.

- In order to be executed on a computer, an algorithm has to be transformed into a
program, that is to say, expressed in a programming language such as C++, Python,
etc.
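For instance, the informal algorithm "go through the numbers, remembering the largest value seen so far" can be translated into a Python program (the function name and the sample list are ours, for illustration):

```python
def largest(numbers):
    """Scan the sequence once, keeping the largest value seen so far."""
    result = numbers[0]          # start with the first number
    for x in numbers[1:]:        # examine each remaining number in order
        if x > result:           # a larger value replaces the current best
            result = x
    return result

print(largest([7, 2, 9, 4]))  # -> 9
```

The same algorithm could equally be written in C++ or any other language; only the notation changes, not the method.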

- Example 1: An excerpt from the user manual of a fax machine explaining how
to send a document.
1. Insert the document into the automatic feeder.
2. Dial the recipient's fax number using the dial pad.
3. Press the send key to start the transmission.

This user's guide explains how to send a fax. It is composed of an ordered sequence
of instructions (insert…, dial…, press…) that manipulate data (document,
automatic feeder, fax number, numeric keypad, send key) to perform the desired
task (send a document).
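The three steps above can themselves be sketched as an ordered sequence of instructions in Python; the `FaxMachine` object and its methods are invented here purely for illustration.

```python
class FaxMachine:
    """A stub fax machine that records the operations performed on it."""
    def __init__(self):
        self.log = []
    def insert(self, document):
        self.log.append(("insert", document))
    def dial(self, number):
        self.log.append(("dial", number))
    def press_send(self):
        self.log.append(("send",))

def send_document(fax, document, number):
    fax.insert(document)   # 1. insert the document into the automatic feeder
    fax.dial(number)       # 2. dial the recipient's fax number
    fax.press_send()       # 3. press the send key to start the transmission

machine = FaxMachine()
send_document(machine, "report.pdf", "0123456789")
print(machine.log)  # the ordered sequence of operations that was carried out
```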

- Example 2: Cooking recipe.

IV. Programming

Recall that an algorithm is an ordered sequence of instructions that indicates the approach to be
followed to solve a series of equivalent problems.

Thus, it expresses the logical structure, which then has to be translated into a computer
language in order to produce a program. The latter is intended to be executed by a computer.

Therefore, the algorithm is independent of the programming language used to produce a
computer program. On the other hand, the translation of the algorithm into a particular
language depends on the chosen language.

Programming a computer consists in “explaining” to it in detail what it has to do, knowing
that it does not “understand” human language. It makes it possible to produce a program,
which is nothing more than a series of instructions encoded in strict compliance with a
set of conventions fixed in advance by a computer language.

The execution of a program on a computer then consists in decoding the instructions contained
in the program by associating with each “word” of the computer language a precise action.

The program that we write in a computer language using an editor is called the “source
program” (or “source code”).

The steps to go from a given problem to a source code are described by the diagram below:

But how does the computer make use of the source code?

The source code undergoes a transformation into a form usable/executable by the computer,
which makes it possible to obtain an “executable” program. For this, it is necessary to
use “automatic translation systems” called “compilers”.

A compiler is a piece of software that translates the entire source code at once. The compiler
reads all the lines of the source program, checks that they respect the programming language
used and produces a new sequence of instructions called an “executable” program (or object
code). This can be executed independently of the compiler and can be kept as is in a file
("executable file").

If the source code contains errors, the compiler displays error messages. It is necessary to:
• Read and understand the messages;
• Correct errors;
• Recompile the corrected source code file.
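This edit/compile/correct cycle can be sketched in Python. Python is normally interpreted rather than compiled to a standalone executable, so this is only an analogy: the built-in `compile()` function checks a source text and reports syntax errors without executing it, mimicking the compiler's error messages.

```python
source_with_error = "print('hello'"   # missing closing parenthesis

# Step 1: compile, read and understand the error message.
try:
    compile(source_with_error, "<source>", "exec")
except SyntaxError as e:
    print("compiler message:", e.msg)

# Step 2: correct the error in the source code.
corrected_source = "print('hello')"

# Step 3: recompile; the result can now be executed.
code = compile(corrected_source, "<source>", "exec")
exec(code)
```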

The figure below illustrates the steps in processing a source code.
