Unit 4 Notes

STORAGE ALLOCATION STRATEGIES

Storage allocation strategies are important in compiler design because they enable efficient program execution. These techniques determine how memory is assigned to variables and data structures, and the compiler's choices here directly affect resource utilization at runtime.

Static Storage Allocation is a fundamental strategy: memory is allocated at compile time, ensuring simplicity and predictability. However, this strategy lacks flexibility, since it cannot respond to changing program requirements. In contrast, Dynamic Storage Allocation is a more adaptive technique that assigns memory at runtime. While dynamic allocation accommodates changing memory requirements, it presents the difficulty of effective deallocation to prevent memory leaks.
Heap and stack are important components in storage allocation. The stack, a Last-In-First-Out (LIFO) structure, is primarily used to store local variables and manage function calls. Its automatic, organized nature simplifies management but limits adaptability. The heap, by contrast, is a dynamic storage pool that can handle variables with unpredictable lifetimes. Although adaptable, it requires manual memory management, so developers must be careful to avoid memory-related errors.

Let us look at the various storage allocation strategies in compiler design in detail.

Static Allocation

Static allocation assigns memory to program variables before runtime and keeps those assignments constant during execution. The compiler determines the storage requirements during compilation, resulting in a predictable and simple technique.

Example:

#include <stdio.h>

int main() {
    static int myStaticVar = 10;
    int myLocalVar = 20;
    // Rest of the code...
    return 0;
}
Here, myStaticVar is statically allocated, as its space is determined at compile-time.


Let us now look at the advantages of Static Allocation.

 Predictable Performance:
With memory allocated beforehand, there's no runtime overhead for memory management.
This enhances the predictability of program behavior.

 Efficient Access:
Direct addressing simplifies variable access, leading to faster execution times compared to
dynamic allocation.

Let us now look at the disadvantages of Static Allocation.

 Limited Flexibility:
The fixed allocation poses challenges for programs with dynamic memory requirements. It
may lead to inefficient use of memory if space is reserved but not fully utilized.

 No Adaptability:
The inability to adjust memory during runtime hinders the execution of programs with
evolving memory needs.

Heap Allocation

Heap Allocation, often known as dynamic memory allocation, is a method that enables a program to request memory during runtime. Unlike static allocation, which is fixed at compile time, heap allocation is flexible, allowing the program to adapt to changing memory requirements while running.

Example:

#include <stdlib.h>

int main() {
    int *dynamicArray = (int*)malloc(5 * sizeof(int));
    // Use dynamicArray as needed
    free(dynamicArray); // Release allocated memory
    return 0;
}
In this snippet, malloc reserves the memory on the heap for an array of integers. The allocated
memory is then freed using free when it's no longer needed.

Let us now look at its advantages.

Heap Allocation provides programs with dynamic memory management, resulting in benefits such as
flexibility and efficient memory utilization. It supports varying data volumes and promotes more
durable, scalable applications.

Let us now look at its disadvantages.

However, with great power comes great responsibility. Mismanagement of heap memory can result in memory leaks or fragmentation, reducing speed and reliability. Additionally, dynamic allocation incurs a higher runtime overhead than static allocation.

Stack Allocation

Consider a real-life example: at a bakery, trays are stacked high, and the first tray you take is the one placed last, at the top. Similarly, with stack allocation, the most recently called function receives the memory at the top of the stack. It follows a Last In, First Out (LIFO) method. The program keeps track of the current function's state, neatly organizing variables and regulating the execution flow.

Example:

void exampleFunction() {
    int localVar = 42;
    // other code
}

When exampleFunction is invoked, the variable localVar is allocated stack space, which is automatically reclaimed when the function returns.

Let us look at the advantages of Stack Allocation:

 Speedy Access:
As stack memory is quite well-structured, accessing variables is fast. No need to search; it's
right on top.

 Automatic Cleanup:
Manual memory management is not required. The stack takes care of it. Variables leave
when their function is done.

 Simple Implementation:
Implementing a stack is straightforward, making it a practical choice for many compilers.

Let us look at the disadvantages of Stack Allocation.

 Limited Size:
The stack size is finite, and if exceeded, it leads to a stack overflow. Recursive functions or
deep nesting can cause issues.
 Static Structure:
Unlike the heap, the stack's structure is fixed, making dynamic memory management a no-
go.

Choosing the Best According to Our Use

Static Allocation:

 When: Memory requirements are known at compile time.

 Pros: Simple, efficient.

 Cons: Limited flexibility, can't handle dynamic memory needs.

Stack Allocation:

 When: Dealing with local variables, and function calls.

 Pros: Automatic management, fast access.

 Cons: Limited size, and scope; potential for stack overflow.

Heap Allocation:

 When: Dynamic memory needs, unknown memory requirements.

 Pros: Dynamic, flexible.

 Cons: Slower, manual management, the potential for leaks.

Register Allocation:

 When: Performance-critical sections, loops, computations.

 Pros: Fastest access, optimization.

 Cons: Limited registers, the potential for spills.

Guidelines:

 Compile-Time Knowledge:
Use static allocation when memory needs are known in advance.

 Dynamic Requirements:
Opt for heap allocation for dynamic memory needs.

 Function Scope:
Stick to stack allocation for local variables and function calls.

 Performance Optimization:
Employ register allocation for critical performance sections.

Conclusion

 Storage allocation algorithms are critical to the proper operation of compilers.

 Static allocation simplifies memory management by determining storage requirements before program execution. This strategy assures stability while potentially limiting flexibility when dealing with dynamic data structures.

 Stack and heap allocations serve different purposes. Stack allocation excels at managing local variables by providing a simple and quick allocation-deallocation procedure.

 Heap allocation allows for dynamic memory allocation, which is essential for dynamic data structures, but also necessitates meticulous manual memory management.

PARAMETER PASSING

All our languages so far this quarter have had the same semantics for function calls:

1. Evaluate the function value and arguments.

2. Bind the argument names to the argument values. (For OO languages, this includes
the self receiver argument, and usually its instance variables.)

3. Evaluate the function body in the environment produced in step 2.

This is called call by value parameter passing, because the parameter names are bound to
the values of the arguments.

This is not the only way to design the semantics of function calls, although it is probably the "best".
There are several other ways to design parameter passing; some are used by popular languages
today, but others are mostly of historical interest:

 Call by reference:

1. Evaluate the procedure value, and evaluate the arguments to locations.

2. Bind the argument names to references to the argument locations.

3. Evaluate the procedure body in the environment produced in step 2.

 Call by name:

1. Evaluate the procedure value. Do not evaluate the arguments.

2. Bind the argument names to expressions that will be re-evaluated every time the
argument is referenced.

3. Evaluate the procedure body in the environment produced in step 2. Wherever you
refer to a parameter name, re-evaluate the expression.

 Call by result:

1. Evaluate the procedure value. Evaluate only the locations to store the arguments,
not the arguments themselves. (For example, if you pass the array subscript
expression a[0], simply compute the location that a[0] refers to, without
evaluating its contents.)
2. Bind the argument name(s) to fresh empty location(s).

3. Evaluate the procedure body in the environment produced in step 2.

4. Copy the final results from the locations produced in step 2, into the locations
produced in step 1.
The intuition behind this mode is that the procedure is producing output in its parameters --- so
instead of evaluating the arguments fully, simply figure out their locations, and copy the results out
after the function runs.

 Call by value result:

1. Evaluate the procedure value and arguments, remembering the locations of the
argument values.

2. Bind the argument names to the argument values.

3. Evaluate the procedure body in the environment produced in step 2.

4. Copy the final results into the locations remembered in step 1.


The intuition behind this mode is that the procedure is copying argument values into the
callee and copying result values back to the caller.

As previously noted, most languages use call-by-value. However, at least two widely used languages
use call-by-reference and call-by-name.

C++ reference parameters

In C++, you can declare that a parameter should use call by reference by using the & symbol.
Consider the following:

#include <iostream>
using namespace std;

// Takes an int by value
void valueF(int x) { x = x + 1; }

// Takes an int by reference.
void refF(int& x) { x = x + 1; }

int main() {
    int a = 0;
    valueF(a);
    cout << a << endl; // Prints "0\n"
    refF(a);
    cout << a << endl; // Prints "1\n"
    return 0;
}
In valueF, x is an ordinary local variable. Mutating it doesn't affect the caller's variable. In refF, x is
a reference to the caller's variable. Therefore, updating it updates the caller.

C preprocessor macros

The C language defines a preprocessor, which is a mini-language that runs over the C source code
and transforms it in some fairly simple ways before the C compiler itself runs. The preprocessor is
used for a variety of things, including:

 Adding the contents of certain "included" source files to the current source file. (This is
similar to what ML's use function does.)

 Preventing the compilation of certain blocks of code, based on programmer-specified conditions.

 Defining macros.

Macros are special functions that are defined in terms of rewriting the source code of the arguments.
Here's an example of a macro:

#define MAX(x,y) (x > y ? x : y)

This macro resembles the ML function

fun max(x, y) = if x > y then x else y

However, when C code invokes a preprocessor macro, then the preprocessor rewrites the source
code, replacing the left side with the right side. Hence, in the following:

int a = 0;

int b = 5;

int c = 7;

int d = MAX(a+b, b+c); // XXX

line XXX will get rewritten to:

int d = (a+b > b+c ? a+b : b+c);

Notice that the argument expressions get duplicated --- and hence re-evaluated --- wherever the
body refers to the argument name. Therefore, C preprocessor macros effectively pass parameters
using call by name.

This leads to some odd results. Consider the following invocation of MAX:

MAX(a++, b++)

This will get rewritten to:

(a++ > b++ ? a++ : b++)

The result is that either a or b will be incremented twice --- probably not what the programmer
wanted!

Call by name and lazy evaluation


All the languages we've studied so far (except for the C preprocessor macros above) use strict evaluation, in which expressions are evaluated before being passed to functions.

There are languages (of which Haskell is the most popular) that use lazy evaluation: in lazily
evaluated languages, expressions are not evaluated until they are needed for the result of the
program.

In lazy languages, defining infinite data structures (e.g., the sequence of all natural numbers) and
calling infinitely recursive functions (e.g., a function that generates all the Fibonacci numbers) do not
necessarily lead to nonterminating programs. As long as a program only "needs" a finite subset of those infinite results, the program will terminate.
Lazily evaluated languages are rather awkward to use unless, like Haskell, they are purely
functional --- i.e., there are no side effects (no updatable data structures, etc.). Side effects require
that the programmer reason about the order that things happen, and in lazy languages it's hard to
tell when any computation may occur ("when it's needed" can be arbitrarily far in the source code
from where the expression is written).

It turns out that in lazily evaluated, purely functional languages, call-by-name makes perfect sense,
because it allows you to "postpone" evaluation further than call-by-value. The problem we
demonstrated above, with C preprocessor macros, does not arise, because purely functional
languages cannot mutate data.
