BCS-41 Notes
Circuit switching is a data-transfer technique in which a dedicated physical connection is established between the sender and the receiver before any data is exchanged. The circuit stays reserved for the duration of the communication, and all data travels over this dedicated path. Packet switching can be used as an alternative to circuit switching: in packet-switched networks, data is sent in discrete units (packets) of variable length, and no dedicated path is reserved in advance.
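As a rough sketch of the packet-switching idea (purely illustrative; the packet size and sequence-number scheme below are assumptions, not any real protocol), the snippet splits a message into numbered units and reassembles them regardless of arrival order:

```python
# Illustrative sketch: carrying a message as discrete, numbered packets,
# the way a packet-switched network does. Sizes and fields are made up.

def packetize(message: bytes, max_payload: int = 8):
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[start:start + max_payload])
            for seq, start in enumerate(range(0, len(message), max_payload))]

def reassemble(packets):
    """Restore the original message even if packets arrive out of order."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"packet switching sends data in discrete units"
pkts = packetize(msg)
assert reassemble(reversed(pkts)) == msg   # order can be restored at the receiver
```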
Address Resolution Protocol (ARP) is a protocol that maps IP addresses (which may change over time) to the physical machine addresses of hosts in a local area network (LAN). The physical machine address is also known as the media access control (MAC) address.
ARP translates 32-bit addresses to 48-bit addresses and vice versa, which is necessary because IP addresses in IP
version 4 (IPv4) are 32 bits but MAC addresses are 48 bits.
ARP works between Layer 2 and Layer 3 of the Open Systems Interconnection model (OSI model). The MAC address
exists on Layer 2 of the OSI model, the data link layer. The IP address exists on Layer 3, the network layer.
ARP can also be used for IP over other LAN technologies, such as token ring, Fiber Distributed Data Interface and IP
over Asynchronous Transfer Mode.
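As a rough sketch of the 32-bit to 48-bit mapping that an ARP request asks for, the snippet below packs an ARP request payload for IPv4 over Ethernet using Python's struct module; the MAC and IP addresses are placeholders, and actually sending the frame on a network is out of scope here:

```python
# Sketch: packing an ARP request payload for IPv4 over Ethernet.
# Hardware type 1 = Ethernet, protocol type 0x0800 = IPv4, opcode 1 = request.
# The MAC and IP addresses below are placeholders, not real hosts.
import socket
import struct

def build_arp_request(sender_mac: bytes, sender_ip: str, target_ip: str) -> bytes:
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                      # hardware type: Ethernet
        0x0800,                 # protocol type: IPv4
        6,                      # hardware address length (48-bit MAC)
        4,                      # protocol address length (32-bit IPv4)
        1,                      # opcode: ARP request
        sender_mac,
        socket.inet_aton(sender_ip),
        b"\x00" * 6,            # target MAC unknown; that is what we are asking for
        socket.inet_aton(target_ip),
    )

payload = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.1.10", "192.168.1.1")
print(len(payload), "byte ARP payload")   # 28 bytes
```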
IPv4 is the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based
internetworking methods in the Internet and other packet-switched networks.
IPv6 is the most recent version of the Internet Protocol (IP), the communications protocol that provides an
identification and location system for computers on networks and routes traffic across the Internet.
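The difference in address size between the two versions is easy to see with Python's standard ipaddress module (the addresses below are documentation-range examples):

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
import ipaddress

v4 = ipaddress.ip_address("203.0.113.7")    # documentation-range IPv4 example
v6 = ipaddress.ip_address("2001:db8::1")    # documentation-range IPv6 example

print(v4.version, v4.max_prefixlen)   # 4 32
print(v6.version, v6.max_prefixlen)   # 6 128
```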
The sliding window protocol is a data link layer protocol that supports reliable, in-order delivery of data frames. The same sliding-window mechanism is also used by TCP: it lets a sender transmit multiple frames (or segments) at a time before an acknowledgement is received from the receiver.
If the sender transmits faster than the receiver can accept data, the receiver's buffer overflows and data is lost. TCP controls this through flow control, using the window concept: the receiver advertises how much data it can currently accept, and the sender keeps no more than that amount outstanding, as in the sketch below.
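A minimal sketch of the sliding-window idea, assuming a fixed window size and simple per-frame acknowledgements (a toy model, not a real data link or TCP implementation):

```python
# Toy sliding-window sender: at most `window` unacknowledged frames in flight.
# This illustrates the concept only; real protocols add timers and retransmission.

def sliding_window_send(frames, window, send, recv_ack):
    base = 0                      # oldest unacknowledged frame
    next_seq = 0                  # next frame to transmit
    while base < len(frames):
        # Transmit while the window still has room.
        while next_seq < len(frames) and next_seq < base + window:
            send(next_seq, frames[next_seq])
            next_seq += 1
        # Wait for an acknowledgement, then slide the window forward.
        acked = recv_ack()
        if acked >= base:
            base = acked + 1

# Example usage with trivial in-memory callbacks:
sent = []
sliding_window_send(
    frames=["f0", "f1", "f2", "f3", "f4"],
    window=2,
    send=lambda seq, frame: sent.append(seq),
    recv_ack=lambda: sent[-1],    # pretend every transmitted frame is acknowledged in order
)
```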
A poor TCP implementation can give rise to the silly window syndrome, which degrades performance and makes data transmission inefficient: the advertised window shrinks to a very small size, so the amount of data carried in each segment can become even smaller than the TCP header itself.
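One common sender-side mitigation is Nagle-style coalescing: hold back tiny segments until earlier data has been acknowledged or a full segment has accumulated. The sketch below is a simplified illustration under assumed values (an MSS of 1460 bytes); it is not a real TCP implementation:

```python
# Sketch of sender-side coalescing to avoid silly-window-style tiny segments.
# The MSS value and buffering policy here are illustrative assumptions.
MSS = 1460   # assumed maximum segment size in bytes

class CoalescingSender:
    def __init__(self, transmit):
        self.buffer = bytearray()
        self.unacked = False          # is unacknowledged data still in flight?
        self.transmit = transmit

    def write(self, data: bytes):
        self.buffer.extend(data)
        # Send only a full segment, or anything at all once the pipe is idle.
        if len(self.buffer) >= MSS or not self.unacked:
            self.transmit(bytes(self.buffer[:MSS]))
            del self.buffer[:MSS]
            self.unacked = True

    def on_ack(self):
        self.unacked = False
        if self.buffer:               # flush whatever accumulated while waiting
            self.write(b"")
```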
In pure ALOHA, a station transmits whenever it has data to send, without first checking whether the channel is idle. Because stations do not listen before transmitting, frames can collide and be lost.
After sending a frame, the station waits for an acknowledgement from the receiver. If the acknowledgement arrives within the expected time, the transmission is considered successful; otherwise, the station assumes the frame was destroyed, waits a random back-off time, and retransmits. This continues until all the data has been delivered successfully.
Because the probability of frame collision is high in pure ALOHA, slotted ALOHA was designed to reduce it. Unlike pure ALOHA, slotted ALOHA does not allow a station to transmit whenever it wishes.
In slotted ALOHA, the shared channel is divided into fixed time intervals called slots. A frame may only be transmitted at the beginning of a slot, and only one frame can occupy each slot. If a station misses the start of a slot, it must wait until the next slot.
In summary, pure ALOHA lets a station transmit whenever data is available, whereas slotted ALOHA restricts transmissions to slot boundaries in order to reduce the high collision probability of pure ALOHA. The simulation sketch below compares their throughput.
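This is a rough Monte Carlo sketch, not a standards-accurate simulator; frame times are normalized to 1 and arrivals are assumed to be Poisson. It should reproduce the classic results that pure ALOHA peaks at a throughput of about 0.18 and slotted ALOHA at about 0.37:

```python
# Rough Monte Carlo sketch comparing pure and slotted ALOHA throughput.
# Frame times are normalized to 1; G is the offered load in frames per frame time.
import math
import random

def simulate(G, slotted, duration=100_000):
    # Poisson frame arrivals at rate G; slotted ALOHA aligns starts to slot boundaries.
    starts, t = [], 0.0
    while t < duration:
        t += random.expovariate(G)
        starts.append(math.floor(t) if slotted else t)
    successes = 0
    for i, s in enumerate(starts):
        left = starts[i - 1] if i > 0 else -10.0
        right = starts[i + 1] if i + 1 < len(starts) else duration + 10.0
        if slotted:
            if left != s and right != s:               # alone in its slot
                successes += 1
        elif s - left >= 1.0 and right - s >= 1.0:     # 2-frame-time vulnerable period clear
            successes += 1
    return successes / duration                        # throughput in frames per frame time

for G in (0.5, 1.0):
    print(G, round(simulate(G, slotted=False), 3), round(simulate(G, slotted=True), 3))
# Roughly: pure ALOHA peaks near 0.18 at G = 0.5, slotted ALOHA near 0.37 at G = 1.0.
```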
OSI Model
OSI stands for Open Systems Interconnection. It is a reference model that describes how information from a software application in one computer moves through a physical medium to a software application in another computer.
OSI consists of seven layers, and each layer performs a particular network function.
The OSI model was developed by the International Organization for Standardization (ISO) in 1984, and it is now considered an architectural model for inter-computer communication.
The OSI model divides the whole task into seven smaller, manageable tasks, and each layer is assigned one of these tasks.
Each layer is self-contained, so the task assigned to it can be performed independently.
The layers of the OSI model are often grouped into two sets: the upper layers and the lower layers.
The upper layers of the OSI model mainly deal with application-related issues and are implemented only in software. The application layer is closest to the end user; both the end user and the application layer interact with software applications. "Upper layer" simply refers to the layer just above another layer.
The lower layers of the OSI model deal with data-transport issues. The data link layer and the physical layer are implemented in hardware and software. The physical layer is the lowest layer of the OSI model and is closest to the physical medium; it is mainly responsible for placing the information on the physical medium.
The seven layers of the OSI model are:
1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
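As a purely conceptual sketch of how data passes down through these layers, the snippet below wraps a payload with a placeholder header for each layer; in reality not every layer adds a literal header, and the header names are illustrative only:

```python
# Conceptual sketch of per-layer encapsulation: each layer wraps the data
# handed down from the layer above with its own header. Headers are placeholders.
LAYERS = ["application", "presentation", "session", "transport",
          "network", "data-link", "physical"]

def encapsulate(payload: str) -> str:
    message = payload
    for layer in LAYERS:                       # innermost header first
        message = f"[{layer}-header]{message}"
    return message

def decapsulate(message: str) -> str:
    for layer in reversed(LAYERS):             # strip the outermost header first
        message = message.removeprefix(f"[{layer}-header]")
    return message

wire = encapsulate("hello")
assert decapsulate(wire) == "hello"
```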
TCP and IP
The TCP/IP model refers to the Transmission Control Protocol/Internet Protocol model, designed to support efficient and error-free transmission of data across networks.
The model uses a four-layer architecture in which each layer applies the required network protocols to the data being transmitted, reshaping the data into a form suitable for efficient transmission over the network. The four layers are:
Application layer
Transport layer
Internet layer
Network Access layer
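A minimal sketch of how an application hands data to the transport layer through the socket API, with the internet and network access layers handled by the operating system; the host and port below are placeholders:

```python
# Sketch: application data handed to the transport layer (TCP) via a socket.
# The lower layers (internet, network access) are handled by the OS.
import socket

def send_over_tcp(host: str, port: int, payload: bytes) -> bytes:
    with socket.create_connection((host, port), timeout=5) as sock:  # transport-layer connection
        sock.sendall(payload)                                        # application data
        return sock.recv(4096)                                       # response, if any

# Example (assumes something is listening on localhost:9000):
# reply = send_over_tcp("127.0.0.1", 9000, b"hello")
```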
Serial and Parallel Transmission
Serial and parallel transmission are both used to connect to and communicate with peripheral devices. Parallel transmission is timing-sensitive, because all of its lines must stay synchronized, while serial transmission is not. Some further differences are summarized below.
Meaning and Definition: Serial transmission uses only a single communication link to transfer data from one end to the other. Parallel transmission uses multiple parallel links to transmit all the data bits simultaneously.
Cost Efficiency: Because serial transmission uses just a single link, it incurs a comparatively lower implementation cost and is more cost-efficient than parallel transmission. Parallel transmission needs multiple links, so it incurs much more cost and is less cost-efficient than serial transmission.
Complexity: The single link keeps serial transmission simple to handle, even over long distances. The multiple links make parallel transmission comparatively complex to handle, which restricts it to short-distance data transfer.
Classless Inter-Domain Routing (CIDR) is an addressing scheme that helps allocate IP addresses more efficiently. When a user requires a particular number of IP addresses, this method assigns a block of IP addresses according to certain rules; the block is called a CIDR block and contains the required number of addresses.
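A small illustration using Python's standard ipaddress module; the 198.51.100.0/26 block below is just a documentation-range example:

```python
# Sketch: a CIDR block groups a power-of-two number of addresses under one prefix.
import ipaddress

block = ipaddress.ip_network("198.51.100.0/26")
print(block.num_addresses)                   # 64 addresses in a /26
print(block[0], block[-1])                   # first and last address in the block
print(list(block.subnets(new_prefix=28)))    # the same block split into four /28s
```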
Classful Addressing
Classful addressing was an early method of IP address allocation in computer networking, categorizing the
IPv4 address space into five classes: A, B, C, D, and E. The primary focus was on the division between
network IDs and host IDs, with classes A, B, and C being the most widely used for identifying networks and
hosts. Each class has a different default subnet mask, determining the number of hosts allowed within that
network.
However, classful addressing has limitations, particularly in terms of address space efficiency, leading to the
adoption of classless addressing methods, such as CIDR (Classless Inter-Domain Routing). By reading the
first octet, we can determine the class of address to which it belongs.
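A small sketch of that first-octet rule; the ranges follow the classic classful scheme (first octets 0 and 127 have special uses and are glossed over here):

```python
# Sketch: determining the classful address class from the first octet.
def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"     # default mask 255.0.0.0
    if first < 192:
        return "B"     # default mask 255.255.0.0
    if first < 224:
        return "C"     # default mask 255.255.255.0
    if first < 240:
        return "D"     # multicast
    return "E"         # reserved / experimental

print(address_class("10.0.0.1"), address_class("172.16.5.4"), address_class("192.168.1.1"))
# A B C
```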
Classful addressing is an IP address allocation method that allocates IP addresses according to five major classes. Classless addressing is an allocation method designed to replace classful addressing in order to avoid exhausting the IP address space; this is the main difference between the two. Another difference is that in classful addressing the boundary between network ID and host ID depends on the class, whereas in classless addressing there is no fixed boundary between network ID and host ID. As a result, classless addressing allocates IP addresses more efficiently than classful addressing and avoids the address exhaustion that classful addressing can cause.
Difference between Virtual Circuits and Datagram Networks
Datagram switching is a connection-less service, where data packets are sent independently without
establishing a prior connection, allowing for potentially lower latency. In contrast, virtual circuit switching
is a connection-oriented service that establishes a predefined path between the sender and receiver before
data transmission, which can result in more reliable packet delivery at the cost of higher latency.