PHYSICAL LEVEL AND DATALINK

PHYSICAL LEVEL


In telecommunications, in the context of computer networks, the physical layer is layer 1 of the ISO/OSI model. In transmission, this layer receives from the datalink layer the framed bit sequence to be transmitted on the channel and converts it into signals suitable for the transmission medium, such as coaxial cable (BNC connector), STP or UTP twisted pair, optical fiber or radio waves. In particular, a physical layer standard defines:

  • the physical characteristics of the transmission medium such as shape, size, number of pins of a connector and mechanical specifications;
  • functional characteristics such as the meaning of the pins of a component;
  • the electrical characteristics, such as the voltage values for the logic levels and the encoding and duration of each bit;
  • the encoding of the digital signal onto a transmission medium that is inherently analog (digital modulation; a line-coding sketch follows this list).
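
As a concrete example of how a bit stream can be encoded into signal levels, here is a minimal illustrative Python sketch (the function name is invented for this example) of Manchester encoding, the line code used by classic 10 Mbit/s Ethernet: every bit is represented by a level transition in the middle of its interval, so the receiver can recover the transmitter's clock from the signal itself. Note that the opposite level convention (G. E. Thomas) also exists.

```python
def manchester_encode(bits):
    """Manchester line coding (IEEE 802.3 convention): a 0 is sent as a
    high-to-low transition, a 1 as a low-to-high transition, so every bit
    interval contains a level change the receiver can synchronize on."""
    levels = []
    for bit in bits:
        levels += [0, 1] if bit == 1 else [1, 0]   # two half-bit signal levels per bit
    return levels

# Example: the bits 1 0 1 1 become eight half-bit signal levels on the line
print(manchester_encode([1, 0, 1, 1]))
```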

There are different standards relating to the management of the transmission medium, be it analog or digital. The transmission media used for the realization of a channel in a network are usually divided into three categories, depending on the physical phenomenon used to transmit the bits:

  • electrical media: for transmission they use the property of metals to conduct electrical energy (such as twisted pairs and coaxial cables);
  • optical media: for transmission they use light (such as multimode or singlemode optical fibers, or laser transmission in air);
  • wireless media: they use electromagnetic waves for transmission (such as microwave radio transmissions and satellite radio transmissions). In this case, the transmission medium can be considered the “empty” space between sender and recipient.

 

DATALINK

The data link layer is the second layer of the network architecture based on the ISO/OSI model for the interconnection of open systems. In transmission, this layer receives packets from the network layer and forms the frames that are passed to the underlying physical layer, with the aim of allowing the reliable transfer of data over the underlying channel. In the IP stack, in some cases, the datalink layer consists of a network built with another protocol and used to transport IP packets; this happens, for example, with X.25, Frame Relay and Asynchronous Transfer Mode (ATM). Some examples of datalink layer protocols are:

  • Ethernet (for LANs)
  • PPP, HDLC and ADCCP for point-to-point connections, i.e. between two stations connected directly, without intermediate nodes.

It may or may not be reliable: many data link protocols do not use acknowledgments, and some do not even check for transmission errors. In these cases the higher layers must carry out flow control and error control and manage acknowledgments (and the related retransmissions). In some networks, such as IEEE 802 LANs, this layer is divided into the MAC and LLC sub-layers. The latter is common to all IEEE 802 MAC layers, such as token ring and IEEE 802.11, but also to MAC layers that are not part of the 802 standard, such as FDDI.

FUNCTIONALITY

The datalink layer must therefore perform several specific functions:

  • In the transmission phase, it groups the bits coming from the upper layer and destined for the physical layer into packets called frames (framing);
  • In the reception phase, it checks and manages transmission errors (error control);
  • It regulates the transmission rate between source and receiver (flow control);
  • In the transmission phase, it implements some form of multiple access/multiplexing that regulates shared access by multiple users to the physical channel, avoiding collisions between packets and interference on the channel or in reception.

All this makes it possible for the physical medium to appear to the higher layer, on the receiving side, as an error-free transmission line.

SUB-LEVEL LLC

The upper sub-layer is Logical Link Control (LLC), and it can provide flow control, acknowledgment and error detection (or correction) services. The PPP and HDLC protocols belong to this sub-layer. The LLC sub-layer protocols that provide an acknowledgment or guaranteed-delivery service must include acknowledgment (ACK) messages. The sender can wait for the acknowledgment of each message before transmitting the next, or it can keep transmitting until a maximum number of messages not yet confirmed by the receiver is reached, in the so-called windowed protocols. In windowed protocols each transmitted packet is identified by a progressive number inside the window, called the sequence number; the acknowledgment messages must report the sequence number of the packet they refer to. Acknowledgments can be cumulative (“all packets up to N received”), and retransmission requests can likewise be cumulative (for example, “retransmit everything from packet N onward”) or selective, covering only the packets that were not received correctly. In some cases the acknowledgment uses a dedicated message; in other cases it is inserted in specific fields of the messages travelling in the opposite direction (piggybacking), reducing retransmission latencies.
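
To make the window mechanism concrete, here is a minimal, illustrative Python sketch (the class and callback names are invented, it is not tied to any specific LLC protocol, and the sequence numbers simply grow instead of wrapping modulo the window as real protocols do): the sender stops transmitting new frames once the number of unacknowledged frames reaches the window size, and slides the window forward when a cumulative ACK arrives.

```python
from collections import deque

class WindowedSender:
    """Minimal sliding-window sender: new frames are transmitted until the
    number of unacknowledged frames reaches the window size."""

    def __init__(self, window_size, send):
        self.window_size = window_size   # max frames awaiting confirmation
        self.send = send                 # callback that puts (seq, payload) on the channel
        self.next_seq = 0                # sequence number of the next new frame
        self.outstanding = deque()       # (seq, payload) sent but not yet acknowledged

    def try_send(self, payload):
        """Transmit a new frame only if the window is not full."""
        if len(self.outstanding) >= self.window_size:
            return False                 # window full: wait for ACKs before sending more
        self.send(self.next_seq, payload)
        self.outstanding.append((self.next_seq, payload))
        self.next_seq += 1
        return True

    def ack_received(self, acked_seq):
        """Cumulative ACK: everything up to and including acked_seq is confirmed,
        so the window slides forward."""
        while self.outstanding and self.outstanding[0][0] <= acked_seq:
            self.outstanding.popleft()

# Usage: with a window of 2, the third frame must wait for an acknowledgment.
sender = WindowedSender(2, send=lambda seq, data: print("sent", seq, data))
print(sender.try_send(b"frame A"), sender.try_send(b"frame B"), sender.try_send(b"frame C"))
sender.ack_received(0)
print(sender.try_send(b"frame C"))
```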

SUB-LEVEL MAC

The lower sub-layer is Media Access Control (or Medium Access Control). Its purpose is to regulate the access of multiple nodes to a shared communication channel, avoiding or managing collisions. A collision occurs when two or more nodes transmit data on the shared channel at the same time; this leads to the loss of the transmitted data and a consequent waste of bandwidth. There are several standard algorithms and protocols for multiple access control. For example, the IEEE 802.3 MAC adopts the CSMA/CD algorithm, while the IEEE 802.11 MAC is based on the CSMA/CA algorithm; the former is commonly adopted in wired LANs, the latter in WLANs. There are two main families of multiple access algorithms: random and ordered. In random multiple access, collisions may occur, but appropriate mechanisms are implemented to reduce the probability of their occurrence and to retransmit the collided frames. In ordered access, on the other hand, a collision cannot occur, since the nodes follow a precise order of access to the channel (established in the initialization phase of the network) which makes each of them, in turn, the exclusive user of the transmission medium (barring failures or malfunctions). Furthermore, the MAC level defines the frame format, which typically contains the start/end delimiter fields, the sender and recipient MAC address fields, the encapsulated LLC-level packet, an error detection code (the frame check sequence, FCS), and optionally padding bytes to ensure that the size of the frame does not drop below a minimum threshold.
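
The sketch below gives a rough idea of such a frame layout in Python (the field sizes follow the Ethernet format, but the preamble, the exact CRC-32 variant used for the real FCS and its bit ordering are deliberately ignored; zlib.crc32 is only a stand-in for the error detection code, and the function name is invented).

```python
import struct
import zlib

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Build a simplified Ethernet-like MAC frame:
    destination (6 B) | source (6 B) | type/length (2 B) | payload | FCS (4 B)."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    # Pad the payload to 46 bytes so the whole frame never falls below the
    # 64-byte minimum (14 B header + 46 B payload + 4 B FCS).
    if len(payload) < 46:
        payload += b"\x00" * (46 - len(payload))
    body = header + payload
    fcs = struct.pack("!I", zlib.crc32(body) & 0xFFFFFFFF)   # stand-in error detection code
    return body + fcs

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"hello")
print(len(frame))   # 64: the minimum Ethernet frame size
```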

LLC SUB-LEVEL FUNCTIONS

Synchronous and asynchronous transmission

Serial transmission can take place synchronously or asynchronously. In asynchronous transmission each character transmitted is preceded and followed by signals that indicate the beginning and the end of the character; these signals are called start and stop signals, which is why asynchronous transmission is also called start-stop transmission. With this method each character can be considered independent of the others, and the time interval between the sending of two characters is unspecified. In synchronous transmission, the characters to be sent are grouped into messages (frames). Each frame is preceded by synchronization characters that allow the receiving station to synchronize with the transmission speed of the station sending the message. Synchronous transmission is faster because transmission dead times are reduced, but an error in even a single bit can damage the entire message sent. Synchronous transmission protocols are divided into BCP (Byte Control Protocol), or byte-oriented, in which the subdivision of the message into characters is maintained, and BOP (Bit Oriented Protocol), or bit-oriented, in which the messages are seen as a succession of bits (so the message is not tied to an 8-bit character encoding). An important operation in synchronous transmission is framing, i.e. the subdivision into frames of the information to be transmitted.
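
As a minimal illustration of the start-stop scheme (assuming the common configuration of one start bit, eight data bits sent least-significant bit first, one stop bit and no parity; real serial links also choose parity and stop-bit length), the following sketch shows how a single character is framed for asynchronous transmission. The function name is invented for the example.

```python
def frame_byte_async(byte: int) -> list[int]:
    """Asynchronous (start-stop) framing of a single character:
    one start bit (0), eight data bits LSB first, one stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first, as on a serial line
    return [0] + data_bits + [1]

# Example: the character 'A' (0x41) becomes 0 1 0 0 0 0 0 1 0 1
print(frame_byte_async(ord("A")))
```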

Framing

The term framing refers to the following operations:

  • Encapsulation of the data with a header and a possible trailer;
  • Interpretation of the bits present in the headers (and possibly in the trailers).

In order to provide services to the network layer, the data link layer must take advantage of the services provided to it by the physical layer. The usual approach of the data link layer is to divide the stream of bits into discrete frames (suited to transmission on a packet network) and to calculate a checksum for each of them. Various methods are used for splitting the bit stream into packets or frames:

  • Character count.
  • Start and end characters.
  • Start and end flags.

The character counting method (which specifies the number of characters of the frame in a header field) is rarely usable since, if the field containing the character count is damaged (altered) during transmission, it is no longer possible to locate where the next frame begins; the other techniques are therefore used. In byte-oriented transmission (the frame retains its subdivision into bytes) the frame is preceded by the ASCII character sequence DLE STX (Data Link Escape, Start of TeXt) and ends with the sequence DLE ETX (Data Link Escape, End of TeXt). If a frame is corrupted and the destination loses synchronization, it simply has to look for the next DLE STX or DLE ETX. However, the DLE character can appear by chance inside the frame when binary data such as object programs or floating point numbers are transmitted; so that these characters do not interfere, an additional DLE is added (and removed at the destination before the frame is passed to the network level), so that only single DLEs are interpreted as delimiters; this technique is called character stuffing. In bit-oriented transmission (the frame can contain any number of bits) each frame begins and ends with the sequence 01111110, called the flag: since this sequence can appear by chance in the data, in transmission a 0 is always inserted in the bit stream after five consecutive 1s, regardless of whether the next bit is 1 or 0, while in reception the inserted bits are eliminated by always removing the 0 that follows five 1s; this technique is called bit stuffing.
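
The following is an illustrative Python sketch of bit stuffing (function names are invented; the input is a string of '0'/'1' characters, and the flags are assumed to have already been stripped on the receiving side): a 0 is inserted after every run of five consecutive 1s on transmission and removed again on reception.

```python
FLAG = "01111110"   # frame delimiter used by bit-oriented protocols such as HDLC

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s, so the payload
    can never reproduce the 01111110 flag on the line."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit, to be removed by the receiver
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s
    (assumes well-formed stuffed data, with flags already removed)."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this is a stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip, run = True, 0
    return "".join(out)

# Example: runs of five 1s in the data force stuffed 0s on the line
payload = "0111110111111"
assert bit_unstuff(bit_stuff(payload)) == payload
print(bit_stuff(payload))
```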

Error check

As mentioned, the sending station's data-link layer receives the data from the upper layer and subdivides them into frames before entrusting them to the physical layer for transmission on the channel, adding to each frame a code used in reception to check for transmission errors (a checksum). When a frame arrives at its destination, the checksum is recalculated by the data-link layer of the receiving system. If the result differs from the one contained in the frame, the data-link layer recognizes that an error has occurred and takes appropriate action (such as discarding the frame and sending an error message back to the sender).

In general there are two types of control codes: detector codes, which only allow the receiver to understand that the frame is incorrect and possibly request the retransmission of the packet (ARQ, Automatic Repeat-reQuest), and corrector codes, which allow the receiver not only to understand that an error has occurred, but also to identify its location and correct it (FEC, Forward Error Correction). The latter codes require many more bits than detector codes and therefore waste bandwidth, so detection-only codes are usually used. In the event of an error, if the service is unreliable, the frame can simply be discarded; if the link is to be reliable, all the frames must arrive correctly, and if a detector code is used, the receiver must request the retransmission of the wrong frames. The choice between detector and corrector codes may also depend on the speed of the lines (on low-speed lines, waiting for a retransmission may take too long), on their reliability (if the error rate on the line is very low it is not worth wasting a lot of bandwidth on a corrector code) or on the type of service requested (real-time or not).

The usual way to ensure reliable delivery is to provide the sender with feedback on what is happening at the other end of the line. Typically the protocol requires the receiver to send back special control packets with a positive or negative value depending on the packets received. If the sender receives a positive acknowledgment for a transmitted packet, it knows that the packet has arrived correctly; if instead it receives a negative one, it means that something went wrong and that the packet needs to be retransmitted. An additional complication comes from the possibility that hardware problems cause the complete disappearance of the packet. If a packet does not arrive at its destination, the sender does not wait indefinitely: a timer is started when the data is transmitted and, if it exceeds the (programmed) threshold without an ACK (acknowledgment) being received, the packet is sent again. However, if the packet or the acknowledgment message is lost, the timer expires (time-out) and the sending station, not receiving confirmation, is forced to resend the data; at this point the receiver could receive the same packet two or more times. To solve this problem, the packets sent are numbered, so that the receiving system, if it receives a packet with the same number as the previous one, i.e. a copy of the packet, discards it. This technique is known as stop-and-wait; the techniques most commonly used for error control also include the Hamming code and the CRC (Cyclic Redundancy Check).

In practice, however, error control functions on individual packets are carried out not only at the datalink level, but in every other layer of the protocol stack, to guarantee the correctness of the service data (headers) of the protocols intended for the respective layers.
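
As an example of a pure detector code, the sketch below appends a checksum to a frame at the sender and verifies it at the receiver, discarding the frame on a mismatch and leaving recovery to the retransmission mechanism; the function names are invented and Python's standard zlib.crc32 merely stands in for whatever CRC a real protocol specifies.

```python
import struct
import zlib

def add_checksum(frame: bytes) -> bytes:
    """Sender side: append a 4-byte CRC-32 to the frame (detection only)."""
    return frame + struct.pack("!I", zlib.crc32(frame) & 0xFFFFFFFF)

def verify_checksum(received: bytes):
    """Receiver side: recompute the CRC and return the payload,
    or None if the frame arrived corrupted."""
    payload, crc = received[:-4], struct.unpack("!I", received[-4:])[0]
    if zlib.crc32(payload) & 0xFFFFFFFF != crc:
        return None          # error detected: discard and rely on retransmission
    return payload

sent = add_checksum(b"hello datalink")
corrupted = bytes([sent[0] ^ 0x01]) + sent[1:]   # flip one bit "in transit"
assert verify_checksum(sent) == b"hello datalink"
assert verify_checksum(corrupted) is None
```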

Flow control

Another important design problem found in the data link layer is that of managing a shared line when multiple nodes want to send messages at the same time, and of deciding what to do with a sender that systematically tends to transmit packets faster than the recipient can accept them. This situation is easily encountered when the sender runs on a fast machine and the receiver on a slow one: the sender keeps sending packets at high speed until the receiver is completely overwhelmed. Even if the transmission is error-free, at some point the receiver will no longer be able to handle the incoming packets and will start losing them (buffer overflow). The typical solution is to introduce flow control, forcing the sender to respect the speed at which the receiver can accept packets. This usually requires some type of feedback mechanism, so that the sender can be notified whether or not the receiver is able to receive.

In the case where several nodes want to send messages at the same time, one approach is to introduce centralized control, creating a single control node responsible for determining who gets priority within the network; each node then checks when the network is no longer busy, so that it can send its message as soon as the channel becomes free. However, it may happen that several nodes monitor the network and, as soon as it is free, send their messages immediately, causing collisions; to overcome this problem, the nodes monitoring the network are regulated by a multiple access protocol, for example by waiting a random time before sending their messages, since it is unlikely that the nodes will choose the same instant to send the data.
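
The “wait a random time before retrying” idea mentioned above is the core of random multiple-access schemes. As an illustration, the sketch below computes the retransmission delay used by classic Ethernet's binary exponential backoff (the function name is invented; the 51.2 µs slot time is the traditional 10 Mbit/s value).

```python
import random

def backoff_delay(collision_count: int, slot_time_us: float = 51.2) -> float:
    """Binary exponential backoff, as in classic Ethernet CSMA/CD: after the
    n-th consecutive collision, wait a random number of slot times chosen
    uniformly in [0, 2**min(n, 10) - 1] before listening to the channel again."""
    max_slots = 2 ** min(collision_count, 10)
    return random.randrange(max_slots) * slot_time_us

# Two stations that just collided rarely pick the same delay again,
# so one of them usually finds the channel free on the next attempt.
print(backoff_delay(1), backoff_delay(1))
```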

MAC SUB-LEVEL FUNCTIONALITY

In telecommunications, in the context of computer networks, MAC (an acronym for Medium Access Control or Media Access Control) is a sub-layer of the ISO/OSI standardized architectural model, defined in the IEEE 802 standard, which contains the functions for controlling access to the physical medium on broadcast channels, as well as framing and error-checking functionality. It is part of the datalink level, of which it represents the lower sub-layer, surmounted by the LLC sub-layer and bounded below by the physical layer. The various characteristics of this layer are described from the third part of the standard onwards. This is the layer at which the MAC address, or physical address, of the computer is defined. The layer has mainly two functions: data encapsulation and access to the medium. The first deals with the encapsulation of frames before their transmission and their decapsulation upon reception; it also deals with the detection of transmission errors and with delimiting the frame to facilitate synchronization between transmitter and receiver. The second controls access to the medium, communicating directly with the physical layer.

LEVEL ONE DEVICE: THE HUB

In information technology and telecommunications, in the field of computer networks, a hub (literally, in English, fulcrum or central element) is a concentrator, that is, a network device that acts as a data distribution node in a data communication network organized with a logical bus topology and a physical star topology. Hub technology is now considered obsolete, largely supplanted by the use of network switches. Since 2011, the use of hubs or repeaters to connect computer networks has been deprecated by the IEEE 802.3 standard.


DESCRIPTION

The hub is a device that sends every packet to all the connected devices; the drawback is that each device receives packets not addressed to it and must discard them, generating unnecessary traffic. In the widespread case of Ethernet networks, it forwards the data arriving on any of its ports to all the others, that is, in a broadcast manner; for this reason it can also be described as a multiport repeater. Precisely for this reason, using this device implements a network with a logical bus topology. This allows two devices to communicate through the hub as if it were not there, apart from a small additional transmission delay beyond the standard propagation delay. A consequence of the hub's behaviour is that the total available output bandwidth is shared among the connected devices, because the data to be sent is replicated on every port.

There are three categories of Hubs:

  • Active hubs: (by now the vast majority of devices on the market are of this type), they require electrical power, as they amplify the signal to minimize the attenuation at the destination.
  • Passive hubs: they do not have the function of “signal amplification”, therefore they do not require power supply. They are limited only to physically connecting the cables.
  • Hybrid hubs: they are special and advanced hubs that allow the connection between several types of cable.

In addition to these three categories, a hub can also be classified as a root hub when it is placed in a particular “star-centre” configuration in which only other hubs or switches are connected to it. The peculiarity of the root hub compared to normal hubs is that it has no direct links to the terminals, and it is therefore characterized by a greater distance from the end hosts.

Care is needed in some special cases. For example, the 10Base-T standard requires the length of the UTP cable not to exceed 100 metres. Using an active hub with the 10Base-T standard, we can use a pair of cables, each 100 metres long, interconnected by the hub itself, thus exceeding the nominal limit of 100 metres according to the scheme “PC - active hub - PC”. This is possible because the active hub amplifies the signal, bringing it to its destination with good overall intensity. This type of interconnection is not feasible with a passive hub, since the two pieces of cable would be interconnected without amplification, and the final result would be no different from connecting the two PCs with a single cable twice the maximum length allowed by the standard: the signal might arrive too weak, or not at all. The delay introduced by a hub is generally of a few microseconds, and therefore almost negligible.

The simplicity of a hub's behaviour makes it one of the cheapest components with which to build a network. A switch, which behaves similarly to a hub but with greater “intelligence” so as not to waste much of the bandwidth, is somewhat more complicated and expensive. A hub does not need to recognize the boundaries of the data passing through it, so it is considered a layer 1 (physical) device in the ISO/OSI model: it simply retransmits electrical signals and does not look into the data. In the jargon of Ethernet networks, a hub creates a single collision domain joining all the computers or networks connected to its ports, i.e. if two computers connected to different ports transmit at the same time, a collision occurs and the transmission must be repeated. In fact, the hub does not distinguish LAN segments and retransmits all the signals it receives. This also places limits on the number of nodes that can be connected to the LAN as a whole. Furthermore, because of this simple behaviour, it is not possible to connect Ethernet segments of different types and speeds, as the hub is not even equipped with a buffer. In practice, therefore, the LAN as a whole must be seen as a single network.

LEVEL TWO DEVICES

THE BRIDGE

The bridge is a network device that sits at the datalink level of the ISO/OSI model and that translates from one physical medium to another within the same local network. It is therefore able to recognize, in the electrical signals it receives from the transmission medium, data organized in packet structures called frames, to identify within them the address of the sender node and that of the recipient node and, on the basis of these, to forward the frames between the several network segments interconnected to it.


OPERATION

Typically a bridge is equipped with several ports with which it is connected to different segments of the local network, forwarding packets between them. When it receives a frame on a port, it tries to figure out from the recipient's address whether the recipient is in the same segment as the sender or not. In the first case, it avoids forwarding the frame, since presumably the recipient has already received it over the shared communication bus. In the second case, the bridge forwards the frame towards the segment in which the recipient is actually located. If it does not know which segment the recipient is on, the bridge forwards the frame on all ports except the one from which it received it. These operations are called filtering and forwarding. In crossing the bridge, the packet therefore undergoes an additional delay, compared to the usual propagation delay, due to the processing time the bridge needs to decide on which segment, and therefore on which output port, to forward it.

Address table

To forward frames to the right domains, the bridge maintains a table (called a forwarding table) of MAC addresses for each port, and based on its content it is able to determine which port, and therefore which domain, to forward the frame to. The table can be created manually by the network administrator through dedicated resident software, or it can be built automatically through a self-learning mechanism: as packets pass through the bridge, the incoming port is associated with the address of the sender. This learning can be made even more efficient by having the bridge delete a MAC address after a certain period of time in which it is not used (aging time), thus avoiding manual updating and scalability problems as the number of hosts on the network grows. When the bridge is switched on, the forwarding tables are empty, so when a frame is received it is forwarded on all the lines of the bridge (with the exception of the one it arrived on), performing what is called flooding. In practice, bridges are increasingly plug-and-play devices, which is why we speak of transparent bridges.
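
A minimal sketch of the self-learning behaviour described above (class and method names are invented, a plain dictionary stands in for the hardware table, and the aging timer is omitted for brevity): the bridge learns source addresses as frames arrive, filters frames whose recipient is on the incoming segment, forwards known destinations on a single port and floods unknown ones.

```python
class LearningBridge:
    """Self-learning forwarding table: MAC address -> port."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                      # starts empty: everything is flooded

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Return the set of ports on which the frame must be forwarded."""
        self.table[src_mac] = in_port        # learn on which port the sender lives
        if dst_mac in self.table:
            out_port = self.table[dst_mac]
            # filtering: sender and recipient on the same segment, do not forward
            return set() if out_port == in_port else {out_port}
        # unknown destination: flood on every port except the incoming one
        return self.ports - {in_port}

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame("AA", "BB", in_port=1))   # BB unknown -> flood on {2, 3}
print(bridge.handle_frame("BB", "AA", in_port=2))   # AA learned -> forward on {1}
```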

Collision domains

Each network segment connected to a port of a bridge constitutes a separate collision domain. This greatly improves transmissions on the local network by reducing the number of collisions. Thanks to this feature, the bridge makes it possible to build a LAN of virtually unlimited size. Furthermore, if a bridge detects that a collision exists on another network segment on which it must transmit, it applies the CSMA/CD algorithm like any other host on the network, i.e. it buffers the data and sends them when the LAN is free. A bridge can therefore be used to connect two collision domains at the datalink level without increasing the risk of collisions or, vice versa, to divide a collision domain into two smaller and therefore better-performing domains.

THE SWITCH

A switch, in the technology of computer networks, is a network device that performs switching at the datalink level (data link), level 2 of the ISO/OSI model, introduced to reduce the so-called collision domain in Ethernet LANs (now IEEE 802.3).


DESCRIPTION

A switch is a device in a computer network that connects other devices together. Multiple network cables are connected to a switch to enable communication between different devices. Switches manage the flow of data across a network by transmitting a received network packet only to the device or devices for which the packet is intended. Each device connected to a switch can be identified by its MAC address, allowing the switch to direct the traffic flow, maximizing network security and efficiency. A switch is “smarter” than an Ethernet hub, which simply retransmits packets out of every port except the one on which the packet was received: the hub is unable to distinguish the different recipients and therefore achieves lower overall network efficiency.

An Ethernet switch operates at the data link layer (layer 2) of the OSI model to create a separate collision domain for each switch port. Any device connected to a port on the switch can transfer data to any of the other ports at any time, and the transmissions will not interfere. In half-duplex mode, each port on the switch cannot simultaneously receive from and transmit to the device it is connected to; this is instead possible in full-duplex mode, provided the connected device supports it. Since broadcasts continue to be forwarded to all devices connected through the switch, the new network segment continues to be a single broadcast domain. Segmentation means using a switch to divide a larger collision domain into smaller ones, in order to reduce the likelihood of collisions and improve overall network throughput. In the extreme case (micro-segmentation), each device sits on a dedicated switch port. Unlike an Ethernet hub, there is a separate collision domain on each switch port. This allows computers to have dedicated bandwidth on point-to-point connections to the network and to run in full-duplex mode; full-duplex mode has only one transmitter and one receiver per collision domain, making collisions impossible.

Compared to a bridge, the switch has:

  • greater expandability in terms of the number of ports;
  • better performance.

Additionally, a high-end switch typically provides the following features:

  • management capabilities;
  • support for multiple instances of the Spanning Tree protocol;
  • Shortest Path Bridging;
  • support for virtual LANs (VLANs) according to the IEEE 802.1Q standard;
  • port mirroring;
  • QoS (Quality of Service) support.

IN-DEPTH AI

The Data Link Layer (or Data Connection Layer) is the second layer of the Open Systems Interconnection (OSI) Model, and is primarily responsible for providing a reliable link between two adjacent nodes on a network. This layer deals with the transmission of data within the same local area network (LAN) or between directly connected nodes on a network.

Main functions of the Data Link Layer:

1. Framing (Encapsulation):

-Data received from the network layer is divided into smaller units called frames. Each frame contains data, control information (such as destination and source addresses), and an error check to detect any problems during transmission.

2. Physical Addressing:

-The data link layer uses MAC (Media Access Control) addresses to uniquely identify devices connected to a network. This physical address allows the data link layer to determine the correct destination node on the network.

3. Error Control:

-One of the most important purposes of the data link layer is to make sure that data is transmitted without errors. This is done using techniques such as Cyclic Redundancy Check (CRC), which checks to see if data has been corrupted during transmission.

4. Flow Control:

-The data link layer regulates the data transmission rate between the two nodes to prevent the receiver from being overloaded. Flow control systems such as the sliding window protocol are used to manage the flow of data.

5. Channel Access:

-In networks where multiple devices share the same communication medium (such as a LAN), the data link layer manages access to the shared medium to avoid data collisions. One protocol that manages channel access in this way is CSMA/CD (Carrier Sense Multiple Access with Collision Detection), used in classic Ethernet networks; wireless LANs use CSMA/CA instead.

Sub-levels of the Data Link Layer:

The Data Link layer is divided into two main sub-levels:

1. LLC (Logical Link Control) sub-level:

-This sub-level deals with logical link control and data flow. It manages multiplexing of network protocols above the data link layer and coordinates data transfer between devices.

2. MAC (Media Access Control) sub-layer:

-The MAC sub-layer is responsible for controlling access to the physical transmission medium. It manages how and when devices can transmit data over the network. It uses mechanisms to avoid collisions and regulates access to the shared medium.

Data Link Layer Protocols:

Some of the main protocols operating at this layer are:

-Ethernet: the most common protocol for local area networks (LANs); it uses MAC addressing and a medium access control mechanism (CSMA/CD) to manage collisions.

-PPP (Point-to-Point Protocol): used for point-to-point links such as dial-up or VPN connections.

-HDLC (High-Level Data Link Control): protocol used on serial links and in WAN networks to provide a reliable connection.

Summary:

The Data Link Layer is crucial for ensuring that data can travel efficiently and error-free between two directly connected nodes, through framing, error control, flow control and medium access control at the local network level.

LINK TO PREVIOUS POST

COMPUTER NETWORKS

IN-DEPTH

THE MAC ADDRESS