
PDV LESDEX EDUC

CSC AL WEEKEND HARMONISED TEST 8 MARKING GUIDE

Q1. (a.) Access Time Comparison:


• Direct Access: Offers a faster access time. Any data location can be retrieved directly, without traversing the positions that come before it. Examples include accessing data on a hard disk using its sector and head information.
• Sequential Access: Typically has a slower access time. Data is accessed in a linear fashion, one unit after another. Magnetic tapes and reading a file line by line are examples.
Direct access is faster because it does not require traversing intermediate locations. It is analogous to jumping directly to a specific page in a book, while sequential access is like reading the book page by page.
More on the topic: Sequential access is slower largely because of mechanical limitations. On a magnetic tape, for example, the medium must physically wind past every intermediate record before the wanted one is reached, which takes time. Direct access, on the other hand, works like jumping to a specific page of a book using its page number: the storage device can compute the data's address and go to it, bypassing the need to step through everything stored before it.
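For illustration only (not part of the marking guide), the two access patterns can be sketched in Python; the file name records.txt and the byte offset are assumed values.

# Sequential access: the file is read from the beginning, line by line;
# reaching a given record means reading everything stored before it.
with open("records.txt") as f:
    for line in f:
        print(line.rstrip())

# Direct access: jump straight to a known byte offset with seek(),
# without touching the bytes that come before it.
with open("records.txt", "rb") as f:
    f.seek(1024)              # go directly to byte offset 1024
    chunk = f.read(64)        # read only the wanted 64 bytes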
(b.) Binary to Hexadecimal Conversion (8-bit 2's Complement):
Since the most significant bit (MSB) is 0, the number is positive, so the bit pattern can be read at face value. Grouping the bits into nibbles gives 0111 1010 = 7 A, so the hexadecimal expression for the denary number whose 8-bit 2's complement representation is 01111010 is 7A (denary 122).
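As a quick check (illustrative only), a short Python sketch that interprets an 8-bit 2's complement pattern and prints its denary and hexadecimal forms:

def twos_complement_value(bits):
    """Interpret a bit string as a 2's complement signed integer."""
    value = int(bits, 2)
    if bits[0] == "1":                  # MSB set means the number is negative
        value -= 1 << len(bits)
    return value

pattern = "01111010"
value = twos_complement_value(pattern)  # MSB is 0, so the value is positive
print(value)                            # 122
print(format(value & 0xFF, "02X"))      # 7A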

(c.) SISD Architecture


• SISD (Single Instruction, Single Data): Refers to a computer architecture where a single processor executes a single instruction on a single data item at a time. These are typically found in non-parallel processing environments like basic calculators or simple microcontrollers.
• Block Diagram of SISD Architecture:
(d.) Computer Networks:

• Computer Network: A collection of interconnected computers and devices that can communicate and share resources electronically. Examples include local area networks (LANs) within an office or wide area networks (WANs) that span large geographical distances.
Advantages of Computer Networks:
1. Resource Sharing: Enables sharing of hardware, software, data, and other resources among
connected devices, eliminating redundancy and reducing costs.
2. Improved Communication: Facilitates faster and more efficient communication between
users, departments, or organizations through email, messaging, and collaboration tools.
3. Increased Reliability: If one device in the network fails, others can potentially remain
operational, enhancing overall system resilience.

Disadvantages of Computer Networks:


1. Security Threats: Networks are vulnerable to various attacks, including malware, hacking,
phishing, and data breaches, which can lead to data loss, system disruptions, and financial
losses.
2. Complexity and Cost: Establishing and maintaining a network, especially a large one, can be
complex and require expertise in hardware, software, and security. This can involve hardware
and software costs, as well as ongoing maintenance and administration expenses.

(e.) Program Control Constructs (Flow Diagrams):


• Program Control Constructs: Statements or structures in programming languages that control the flow of execution, allowing for conditional branching, repetition, and other logic patterns.
Pre-test Loop: The condition is evaluated before each iteration. The loop continues as long as the
condition is true.
Post-test Loop: The condition is evaluated after each iteration. The loop executes at least once, even if
the condition is initially false.
Pre-test Loop vs. Post-test Loop (Flow Diagrams):
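The flow diagrams are not reproduced here; as a rough equivalent in code (a sketch only), Python's while loop gives a pre-test loop, and a loop-with-break can emulate a post-test loop since Python has no do-while:

count = 0

# Pre-test loop: the condition is checked before each pass,
# so the body may execute zero times.
while count < 3:
    print("pre-test iteration", count)
    count += 1

# Post-test loop (emulated): the body always runs at least once,
# and the condition is only checked at the end of each pass.
count = 0
while True:
    print("post-test iteration", count)
    count += 1
    if not count < 3:
        break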

Q2. (a.) Testing Techniques:


• White-Box Testing (Glass-Box Testing):
  • Focuses on the internal structure and logic of the code.
  • Requires knowledge of the code by the tester.
  • Tests specific code paths, branches, and functions.
  • Examples: Unit testing, code coverage analysis.
• Black-Box Testing (Functional Testing):
  • Treats the software as a black box, focusing on functionality without internal details.
  • Requires knowledge of the requirements and specifications.
  • Tests the system's behavior from a user's perspective.
  • Examples: Equivalence partitioning, boundary value analysis.
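A minimal sketch of the two viewpoints in Python, using a hypothetical grade() function with an assumed pass mark of 50 (the function and values are examples, not part of the syllabus answer):

def grade(mark):
    """Hypothetical function under test: returns "pass" or "fail"."""
    if mark < 0 or mark > 100:
        raise ValueError("mark out of range")
    return "pass" if mark >= 50 else "fail"

# Black-box boundary value analysis: choose inputs at and around the
# boundaries (0, 49/50, 100) using only the specification.
assert grade(0) == "fail"
assert grade(49) == "fail"
assert grade(50) == "pass"
assert grade(100) == "pass"

# White-box style check: deliberately exercise the error-handling branch
# seen in the code itself.
try:
    grade(101)
except ValueError:
    pass  # the out-of-range path was executed as intended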
(b.) Programming Concepts:
• Inheritance: Allows creating new classes (subclasses) based on existing classes (superclasses), inheriting their properties and methods. Promotes code reusability and maintainability.
• Programming Paradigm: An approach to programming that defines the fundamental concepts, methods, and styles for problem-solving. Examples: Object-Oriented Programming (OOP), Procedural Programming, Functional Programming.
• Encapsulation: Bundling data (attributes) and the methods (functions) that operate on that data into a single unit (class or object). Encapsulation promotes data protection and modularity.
• Dynamic Data Structure: A data structure whose size or structure can change at runtime (during program execution). Examples: Linked lists, trees, hash tables (see the sketch after this list).
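A minimal sketch of a dynamic data structure, here a singly linked list in Python that grows node by node at runtime (illustrative only):

class Node:
    """One element of a singly linked list."""
    def __init__(self, value):
        self.value = value
        self.next = None                  # link to the next node, or None at the end

class LinkedList:
    """Grows and shrinks at runtime, one node at a time."""
    def __init__(self):
        self.head = None

    def append(self, value):
        node = Node(value)
        if self.head is None:             # empty list: new node becomes the head
            self.head = node
            return
        current = self.head
        while current.next is not None:   # walk to the last node
            current = current.next
        current.next = node

numbers = LinkedList()
for n in (10, 20, 30):                    # the structure grows as values arrive
    numbers.append(n)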
(c.) Structured Programming and Modularity:
• Structured Programming: A programming approach that uses control flow structures (if-else, loops) to organize code into well-defined, readable blocks. Improves maintainability, reduces errors, and promotes modularity.
• Program Modularity: Breaking down a program into smaller, independent, reusable units of code (modules, functions) with well-defined interfaces. Enhances code reusability, maintainability, and testability.
(d.) Logic Implementation with NAND/NOR Gates:
(e.) Object-Oriented Programming (OOP):
• A programming paradigm that emphasizes objects as the building blocks of software.
• Elements:
  • Classes: Blueprints for creating objects, defining their attributes (data) and methods (functions).
  • Objects: Instances of classes, encapsulating data and related operations.
  • Inheritance: Ability to create new classes based on existing ones, inheriting properties and methods.
  • Polymorphism: Ability of objects of different classes to respond differently to the same method call (method overriding).
  • Encapsulation: Bundling data and methods within a class to protect data integrity and control access.
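A short Python sketch pulling these elements together (the Shape/Circle/Square classes are illustrative examples, not prescribed by the question):

class Shape:
    """Class: a blueprint defining attributes (data) and methods (behavior)."""
    def __init__(self, name):
        self._name = name                  # encapsulated attribute (non-public by convention)

    def area(self):
        raise NotImplementedError

    def describe(self):
        return f"{self._name}: area {self.area()}"

class Circle(Shape):                       # inheritance: Circle builds on Shape
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):                        # polymorphism: overrides Shape.area
        return 3.14159 * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):
        return self._side ** 2

# Objects: instances of the classes; the same describe() call behaves
# differently depending on the object's class (method overriding).
for shape in (Circle(1.0), Square(2.0)):
    print(shape.describe())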

Q3. (a.) Algorithmic Complexity:
• A measure of the resources (time and space) required by an algorithm to execute as the input size increases.
• Often expressed using Big O notation, which describes the upper bound of the algorithm's resource usage.
(b.) Binary Search:
• A search algorithm that repeatedly halves the search space based on the comparison of the key being searched for with the middle element of the sorted array.
Iterative Implementation:
1. Set low and high indexes to the beginning and end of the array.
2. While low is less than or equal to high:
   • Calculate the middle index.
   • If the key equals the middle element, the element is found (return its position).
   • If the key is less than the middle element, search the left half by setting high to middle - 1; otherwise, search the right half by setting low to middle + 1.
3. If the loop ends without a match, the key is not in the array.
• Time Complexity: The time complexity of binary search is O(log n), where n is the number of elements in the sorted array. This is a significant improvement over linear search, which has a time complexity of O(n). The logarithmic complexity makes binary search very efficient for searching large datasets.
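A possible Python version of the iterative algorithm described above (a sketch; variable names and the sample list are illustrative):

def binary_search(items, key):
    """Iterative binary search over a sorted list.

    Returns the index of key, or -1 if key is not present.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        middle = (low + high) // 2
        if items[middle] == key:
            return middle                 # found: report its position
        elif key < items[middle]:
            high = middle - 1             # search the left half
        else:
            low = middle + 1              # search the right half
    return -1                             # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))   # 4
print(binary_search([2, 5, 8, 12, 16, 23, 38], 7))    # -1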
(c.) Interrupts:
• An interrupt is a signal that temporarily halts the current CPU execution and transfers control to a specific service routine (interrupt service routine, ISR) to handle a higher-priority event.
• Interrupts in Round Robin Scheduling:
1. The OS maintains a scheduling queue for processes.
2. Each process gets a time slice (quantum) for CPU execution.
3. A hardware timer generates an interrupt after the time slice expires.
4. The interrupt handler saves the state of the currently running process (registers, program
counter) and moves it to the back of the queue.
5. The interrupt handler selects the next process from the queue and loads its state for
execution.
6. The interrupted process can resume execution when it gets its turn again.
By using interrupts, Round Robin scheduling ensures a fair allocation of CPU time to processes and
allows the OS to respond to events like I/O completion or timer expirations without significantly
delaying ongoing processes.
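A toy Python simulation of the idea (illustrative only; a real operating system is driven by hardware timer interrupts, which are modelled here simply by the quantum expiring):

from collections import deque

def round_robin(burst_times, quantum):
    """Toy round-robin simulation: burst_times maps process name -> CPU time needed."""
    queue = deque(burst_times.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        slice_used = min(quantum, remaining)
        order.append((name, slice_used))          # this process runs for one time slice
        remaining -= slice_used
        if remaining > 0:
            queue.append((name, remaining))       # "interrupted": moved to the back of the queue
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))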
(d.) ROM, PROM, EPROM, EEPROM:
• ROM (Read-Only Memory): Non-volatile memory that stores data permanently. Data can be read but not easily modified. Used for programs that need to be preserved, like BIOS or firmware.
• PROM (Programmable Read-Only Memory): Can be programmed once using a special device (PROM programmer). Often used for customization in embedded systems where the program doesn't need frequent updates.
• EPROM (Erasable Programmable Read-Only Memory): Can be erased by exposing it to ultraviolet light and then reprogrammed. Offers more flexibility than PROM but requires specialized equipment for erasing.
• EEPROM (Electrically Erasable Programmable Read-Only Memory): Can be erased and reprogrammed electrically within the device itself. Provides greater flexibility and convenience compared to PROM and EPROM; often used for configuration data that might need occasional updates.
(e.) Cache Memory:
• A small, high-speed memory that stores frequently accessed data and instructions closer to the CPU.
• Differences from Other Memories:
  • Size: Cache memory is much smaller than main memory (RAM) but faster.
  • Speed: Cache memory accesses are significantly faster than main memory accesses due to its proximity to the CPU.
  • Volatility: Cache memory is typically volatile, meaning data is lost when power is off (unlike ROM).
• Buffer: Cache memory can be considered a buffer in some ways. It acts as a temporary storage area that holds recently accessed data, anticipating future needs of the CPU and reducing the need for frequent main memory accesses. However, unlike some buffers, cache memory is not specifically intended for data transfer between devices; its primary purpose is to speed up CPU execution.
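Hardware cache is managed by the CPU itself rather than by program code, but the underlying idea of keeping a small, fast store of recently used items can be sketched in software; the TinyCache class below is a purely illustrative least-recently-used cache:

from collections import OrderedDict

class TinyCache:
    """Software analogy of caching: keep a small store of recently used
    items and evict the least recently used one when full."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)     # mark as recently used ("cache hit")
            return self.store[key]
        return None                         # "cache miss": fetch from slower storage instead

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

cache = TinyCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")          # hit: "a" becomes the most recently used entry
cache.put("c", 3)       # capacity exceeded: "b" is evicted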
