Architecture of Computers
An instruction in computer architecture consists of different fields that guide the CPU on how to
process the instruction. These fields typically include the following:
- **Opcode Field**: Specifies the operation to be performed (e.g., ADD, SUB, MUL).
- **Operand Field(s)**: Specifies the data or the registers to be used in the operation.
- **Address Field(s)**: Indicates where in memory the operands or the result is stored.
Consider an example instruction in a simple assembly language: `ADD R1, R2, R3`. This instruction
tells the CPU to add the contents of registers `R2` and `R3`, then store the result in register `R1`.
Here, the opcode field is `ADD`, and the operand fields are `R1, R2, R3`.
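To make the fields concrete, here is a small Python sketch (an illustration added here, not part of the original notes) that packs and unpacks a hypothetical 16-bit instruction word with a 4-bit opcode field and three 4-bit register operand fields:

```python
# Toy encoder/decoder for a hypothetical 16-bit instruction format:
# [ opcode:4 | rd:4 | rs1:4 | rs2:4 ] -- field widths are illustrative only.

OPCODES = {"ADD": 0x1, "SUB": 0x2, "MUL": 0x3}
MNEMONICS = {code: name for name, code in OPCODES.items()}

def encode(op, rd, rs1, rs2):
    """Pack the opcode field and three register operand fields into one word."""
    return (OPCODES[op] << 12) | (rd << 8) | (rs1 << 4) | rs2

def decode(word):
    """Unpack a word back into its opcode and operand fields."""
    op = MNEMONICS[(word >> 12) & 0xF]
    return op, (word >> 8) & 0xF, (word >> 4) & 0xF, word & 0xF

word = encode("ADD", 1, 2, 3)     # ADD R1, R2, R3
print(hex(word), decode(word))    # 0x1123 ('ADD', 1, 2, 3)
```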
---
Interrupts are signals to the processor indicating an event that requires immediate attention. The
two major types of interrupts are:
- **Hardware Interrupts**: These are generated by hardware devices, like keyboards or network
cards, to signal the CPU to handle specific tasks (e.g., processing a keypress or a network packet).
They typically occur asynchronously to the CPU's program flow.
- **Software Interrupts**: These are initiated by software to request system services (e.g., making a
system call). They typically occur synchronously with the CPU's program flow, since the software
triggers them as part of its operation.
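As a toy illustration (the vector numbers and handler names below are invented), both kinds of interrupt ultimately index into a vector table that maps an interrupt number to its handler:

```python
# Toy interrupt-dispatch model. Vector numbers and handler names are
# invented for illustration; real systems define them in hardware and the OS.

def keyboard_handler():
    print("handling keypress")        # raised asynchronously by a device

def syscall_handler():
    print("handling system call")     # raised synchronously by the program

VECTOR_TABLE = {1: keyboard_handler, 0x80: syscall_handler}

def raise_interrupt(vector):
    """Look up the handler registered for this vector and run it."""
    VECTOR_TABLE[vector]()

raise_interrupt(1)       # hardware interrupt: the device signals the CPU
raise_interrupt(0x80)    # software interrupt: the program requests a service
```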
---
```
A B C | Y
------+--
0 0 0 | 0
0 0 1 | 1
0 1 0 | 0
0 1 1 | 1
1 0 0 | 0
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1
```
The canonical Sum-of-Products (SOP) form includes one minterm for each row where `Y = 1`. In this truth table, these are the rows `001`, `011`, `101`, `110`, and `111`, giving:
```
Y = A'B'C + A'BC + AB'C + ABC' + ABC
```
To simplify this expression, draw a Karnaugh map with three variables (rows indexed by A, columns by BC in Gray-code order) and fill in the truth table's outputs. The map looks like this:
```
      BC
A   | 00 | 01 | 11 | 10 |
----+----+----+----+----+
 0  |  0 |  1 |  1 |  0 |
 1  |  0 |  1 |  1 |  1 |
```
Grouping adjacent ones: the four 1s in the two `C = 1` columns (`BC = 01` and `11`) combine into the single term `C`, and the pair at `A = 1`, `BC = 11/10` combines into `AB`. So `Y` simplifies to **C + AB**.
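As a sanity check, here is a short Python sketch (added for illustration, not part of the original notes) that exhaustively compares the simplified expression `C + AB` against the truth table above:

```python
from itertools import product

# Output column of the truth table above, indexed by (A, B, C).
TRUTH = {(0,0,0): 0, (0,0,1): 1, (0,1,0): 0, (0,1,1): 1,
         (1,0,0): 0, (1,0,1): 1, (1,1,0): 1, (1,1,1): 1}

def y_simplified(a, b, c):
    """Simplified K-map result: Y = C + AB."""
    return c | (a & b)

for a, b, c in product((0, 1), repeat=3):
    assert y_simplified(a, b, c) == TRUTH[(a, b, c)]
print("Y = C + AB matches all 8 rows of the truth table")
```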
---
A half-adder adds two single-bit numbers and outputs a sum bit and a carry bit: the sum is the XOR of the inputs, and the carry is their AND.
```
A ----+---------->| XOR |----> SUM
      |      +--->|     |
B ----|------+
      |      |
      +------|--->| AND |----> CARRY
             +--->|     |
```
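A direct Python rendering of the same logic (added for illustration):

```python
def half_adder(a, b):
    """Add two single bits: sum = A XOR B, carry = A AND B."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"A={a} B={b} -> SUM={s} CARRY={c}")
```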
---
The principle of locality refers to the tendency of computer programs to access the same memory locations or instructions repeatedly over a short period of time. There are two types of locality:
- **Temporal Locality**: If a data item or instruction is accessed, it's likely to be accessed again
soon. This property underlies caching mechanisms, allowing recently accessed data to be stored for
quicker retrieval.
- **Spatial Locality**: If a data item or instruction is accessed, other nearby items or instructions are
likely to be accessed soon. This justifies reading larger memory blocks into cache at once.
Locality improves performance by reducing the need to retrieve data from slower memory, making
cache operations more efficient.
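To make spatial locality concrete, here is a toy direct-mapped cache simulator (an illustration added here; block size and set count are invented). Sequential accesses mostly hit because each fetched block serves many nearby accesses, while accesses strided one block apart miss every time:

```python
# Toy direct-mapped cache that counts hits and misses. Block size and set
# count are invented for illustration.

BLOCK_SIZE, NUM_SETS = 16, 64

def simulate(addresses):
    cache = {}                               # set index -> tag currently held
    hits = misses = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        index, tag = block % NUM_SETS, block // NUM_SETS
        if cache.get(index) == tag:
            hits += 1
        else:
            misses += 1
            cache[index] = tag               # fetch the whole block on a miss
    return hits, misses

sequential = range(4096)                     # stride-1: strong spatial locality
strided = range(0, 4096 * BLOCK_SIZE, BLOCK_SIZE)   # one access per block
print("sequential (hits, misses):", simulate(sequential))  # (3840, 256)
print("strided    (hits, misses):", simulate(strided))     # (0, 4096)
```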
---
- **Programmed I/O**: The CPU actively manages I/O operations, checking or polling devices to see
if they're ready for reading or writing. This can be inefficient, as it requires the CPU to wait for the
device.
- **Interrupt-Driven I/O**: The CPU initiates an I/O operation and then continues executing other
tasks. When the device is ready, it sends an interrupt to signal the CPU. This approach is more
efficient, as it allows the CPU to perform other tasks while waiting.
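A minimal Python sketch of the contrast between the two approaches (the "device" is simulated by a thread delivering data through a queue; names and timings are invented for illustration):

```python
import queue, threading, time

def device(out):
    time.sleep(0.1)                  # simulated device latency
    out.put("packet")                # data is now ready

# Programmed I/O: the CPU busy-waits, polling the device status.
polled = queue.Queue()
threading.Thread(target=device, args=(polled,), daemon=True).start()
while polled.empty():
    pass                             # every iteration is a wasted status check
print("polling got:", polled.get())

# Interrupt-driven analogue: the CPU does other work and is woken only
# when the device signals completion.
notified = queue.Queue()
threading.Thread(target=device, args=(notified,), daemon=True).start()
print("doing other work while the device is busy...")
print("interrupt-style got:", notified.get())   # blocks until the "interrupt"
```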
---
- **High-Level Language**:
  - Largely platform-independent: the same source code can be compiled or interpreted for different machines.
  - Abstracts away hardware details, making programs easier to write, read, and maintain.
- **Assembly Language**:
  - Typically platform-dependent.
  - Offers finer control and may lead to more efficient programs, but requires greater knowledge of the hardware.
---
Parallel instruction execution involves multiple instructions being executed simultaneously, leading
to increased throughput and reduced execution time. There are various types of parallelism:
- **Data-Level Parallelism (DLP)**: Processing multiple data items simultaneously. SIMD (Single
Instruction, Multiple Data) and vector processing fall into this category.
- **Task-Level Parallelism**: Different tasks or processes are executed concurrently. This is typical in
multi-core CPUs, where each core can execute a different task.
Parallel instruction execution improves performance but requires careful management of
dependencies and resources.
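A small task-level parallelism sketch in Python, assuming a workload that splits into independent chunks (the workload and chunk boundaries are invented for illustration); each chunk can run on a separate core:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """One independent task: sum of squares over a half-open range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with ProcessPoolExecutor() as pool:      # roughly one task per core
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(i * i for i in range(1_000_000)))   # True
```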
---
Virtual memory allows a computer to use disk storage to simulate additional memory when physical
memory is limited. This is needed because:
- **Address Space Expansion**: Each program can use an address space larger than the available physical memory, and more programs can run concurrently than physical RAM alone would accommodate.
- **Isolation and Security**: Virtual memory provides isolation between programs, reducing the risk
of one program affecting others or the system.
- **Simplified Memory Management**: Virtual memory can enable processes to have consistent
memory layouts, easing memory management.
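A toy model of the address-translation side of virtual memory (an illustration added here; the page size and page-table contents are invented): a virtual address splits into a page number and an offset, the page table maps the page to a physical frame, and an unmapped page triggers a page fault:

```python
# Toy virtual-to-physical address translation. The KeyError stands in for a
# page fault that would bring the page in from disk.

PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 11}             # virtual page -> physical frame

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise KeyError(f"page fault: virtual page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))                # page 1, offset 0x234 -> 0x3234
try:
    translate(5 * PAGE_SIZE)                 # unmapped page
except KeyError as fault:
    print(fault)
```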
---
1. **Insufficient Memory (RAM)**: If a computer lacks enough memory, it has to rely heavily on
slower virtual memory, reducing performance.
2. **Fragmented or Full Disk**: A heavily fragmented or nearly full disk can slow down data retrieval
and storage.
3. **Excessive Background Processes**: Too many background processes can monopolize system
resources, slowing down primary tasks.
4. **Outdated Hardware**: Old hardware may not keep up with modern software demands.
5. **Malware or Viruses**: Malicious software can consume resources and hinder performance.
6. **Inadequate Cooling**: Overheating can cause the CPU to throttle its performance, leading to
slowdowns.