Monday, December 7, 2020

TCP In Depth

 

TCP Flow Control


TCP is the protocol that guarantees we can have a reliable communication channel over an unreliable network. When we send data from one node to another, packets can be lost, they can arrive out of order, the network can be congested, or the receiver node can be overloaded. When we are writing an application, though, we usually don’t need to deal with this complexity: we just write some data to a socket and TCP makes sure the packets are delivered correctly to the receiver node. Another important service that TCP provides is what is called Flow Control. Let’s talk about what that means and how TCP does its magic.

What is Flow Control (and what it’s not)

Flow Control basically means that TCP will ensure that a sender is not overwhelming a receiver by sending packets faster than it can consume them. It’s pretty similar to what’s normally called Backpressure in the Distributed Systems literature. The idea is that a node receiving data will send some kind of feedback to the node sending the data to let it know about its current condition.

It’s important to understand that this is not the same as Congestion Control. Although there’s some overlap between the mechanisms TCP uses to provide both services, they are distinct features. Congestion control is about preventing a node from overwhelming the network (i.e. the links between two nodes), while Flow Control is about the end-node.

How it works

When we need to send data over a network, this is normally what happens.




The sender application writes data to a socket, the transport layer (in our case, TCP) will wrap this data in a segment and hand it to the network layer (e.g. IP), which will somehow route this packet to the receiving node.

On the other side of this communication, the network layer will deliver this piece of data to TCP, which will make it available to the receiver application as an exact copy of the data sent, meaning it will not deliver packets out of order, and will wait for a retransmission in case it notices a gap in the byte stream.

If we zoom in, we will see something like this.




TCP stores the data it needs to send in the send buffer, and the data it receives in the receive buffer. When the application is ready, it will then read data from the receive buffer.

Flow Control is all about making sure we don’t send more packets when the receive buffer is already full, as the receiver wouldn’t be able to handle them and would need to drop these packets.

To control the amount of data that TCP can send, the receiver will advertise its Receive Window (rwnd), that is, the spare room in the receive buffer.




Every time TCP receives a packet, it needs to send an ack message to the sender, acknowledging it received that packet correctly. With this ack message it sends the value of the current receive window, so the sender knows if it can keep sending data.

The sliding window

TCP uses a sliding window protocol to control the number of bytes it can have in flight: that is, the number of bytes that were sent but not yet acked.




Let’s say we want to send a 150,000-byte file from node A to node B. TCP could break this file down into 100 packets, 1,500 bytes each. Now let’s say that when the connection between node A and node B is established, node B advertises a receive window of 45,000 bytes, because it really wants to help us with our math here.

Seeing that, TCP knows it can send the first 30 packets (1,500 * 30 = 45,000) before it receives an acknowledgment. If it gets an ack message for the first 10 packets (meaning we now have only 20 packets in flight), and the receive window present in these ack messages is still 45,000, it can send the next 10 packets, bringing the number of packets in flight back to 30, which is the limit defined by the receive window. In other words, at any given point in time it can have at most 30 packets in flight: sent but not yet acked.

Example of a sliding window. As soon as packet 3 is acked, we can slide the window to the right and send packet 8.

Now, if for some reason the application reading these packets in node B slows down, TCP will still ack the packets that were correctly received. But as these packets need to be stored in the receive buffer until the application decides to read them, the receive window will get smaller. So even if TCP receives the acknowledgment for the next 10 packets (meaning there are currently 20 packets, or 30,000 bytes, in flight), if the receive window value received in this ack is now 30,000 (instead of 45,000), it will not send more packets, as the number of bytes in flight is already equal to the latest receive window advertised.

The sender will always keep this invariant:

LastByteSent - LastByteAcked <= ReceiveWindowAdvertised
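The arithmetic above can be sketched as a tiny simulation. This is a hypothetical Sender class, not a real TCP stack; the file size, packet size, and window values are the ones from the worked example:

```python
# Minimal sliding-window simulation (illustrative only, not a real TCP stack).

PACKET_SIZE = 1500

class Sender:
    def __init__(self, total_bytes, rwnd):
        self.last_byte_sent = 0
        self.last_byte_acked = 0
        self.total_bytes = total_bytes
        self.rwnd = rwnd  # latest receive window advertised by the peer

    def bytes_in_flight(self):
        return self.last_byte_sent - self.last_byte_acked

    def send_allowed(self):
        # The invariant: LastByteSent - LastByteAcked <= ReceiveWindowAdvertised
        return (self.bytes_in_flight() + PACKET_SIZE <= self.rwnd
                and self.last_byte_sent < self.total_bytes)

    def send_burst(self):
        sent = 0
        while self.send_allowed():
            self.last_byte_sent += PACKET_SIZE
            sent += 1
        return sent  # number of packets put in flight by this burst

    def receive_ack(self, acked_bytes, rwnd):
        self.last_byte_acked += acked_bytes
        self.rwnd = rwnd

s = Sender(total_bytes=150_000, rwnd=45_000)
print(s.send_burst())           # 30 packets fill the 45,000-byte window
s.receive_ack(15_000, 45_000)   # ack for the first 10 packets, same window
print(s.send_burst())           # 10 more packets, back to 30 in flight
s.receive_ack(15_000, 30_000)   # window shrank: 20 packets already in flight
print(s.send_burst())           # 0 - the window is already full
```

Note how the third burst sends nothing: 30,000 bytes are in flight and the advertised window is now 30,000, so the invariant forbids sending more.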

Visualizing the Receive Window

Just to see this behavior in action, let’s write a very simple application that reads data from a socket and watch how the receive window behaves when we make this application slower. We will use Wireshark to see the packets, netcat to send data to this application, and a Go program to read data from the socket.
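The experiment uses a Go reader; as a sketch, here is an equivalent deliberately slow reader in Python over loopback TCP. The 64 KB payload, 1 KB chunk size, and 10 ms sleep are arbitrary choices for illustration:

```python
import socket
import threading
import time

# A deliberately slow TCP reader on loopback. While it sleeps between reads,
# its receive buffer fills up, so the window it advertises (visible in
# Wireshark) shrinks toward zero.

def slow_reader(server_sock, chunks):
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(1024)   # read in small chunks...
            if not data:
                break
            chunks.append(data)
            time.sleep(0.01)         # ...and slowly

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # ephemeral port
server.listen(1)

received = []
t = threading.Thread(target=slow_reader, args=(server, received))
t.start()

payload = b"x" * 64_000
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(payload)              # blocks whenever the peer's window is full
client.close()
t.join()
server.close()

print("received", len(b"".join(received)), "bytes")
```

Point Wireshark at the loopback interface while this runs and watch the "Window size value" in each ack shrink as the reader falls behind.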

And we can see, using Wireshark, that the connection was established and a window size advertised:

The persist timer

There’s still one problem, though. After the receiver advertises a zero window, if it doesn’t send any other ack message to the sender (or if the ack is lost), it will never know when it can start sending data again. We will have a deadlock situation, where the receiver is waiting for more data, and the sender is waiting for a message saying it can start sending data again.

To solve this problem, when TCP receives a zero-window message it starts the persist timer, which will periodically send a small packet to the receiver (usually called a Window Probe), so the receiver has a chance to advertise a nonzero window size.

When there’s some spare space in the receiver’s buffer again it can advertise a non-zero window size and the transmission can continue.

Recap

·       TCP’s flow control is a mechanism to ensure the sender is not overwhelming the receiver with more data than it can handle;

·       With every ack message the receiver advertises its current receive window;

·       The receive window is the spare space in the receive buffer, that is, rwnd = ReceiveBuffer - (LastByteReceived - LastByteReadByApplication);

·       TCP will use a sliding window protocol to make sure it never has more bytes in flight than the window advertised by the receiver;

·       When the window size is 0, TCP will stop transmitting data and will start the persist timer;

·       It will then periodically send a small WindowProbe message to the receiver to check if it can start receiving data again;

·       When it receives a non-zero window size, it resumes the transmission.
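The rwnd formula from the recap, written out as a function. The names mirror the formula above; this is not a real kernel API, and the buffer sizes are made up:

```python
def rwnd(receive_buffer, last_byte_received, last_byte_read_by_application):
    # rwnd = ReceiveBuffer - (LastByteReceived - LastByteReadByApplication)
    return receive_buffer - (last_byte_received - last_byte_read_by_application)

# A 64 KB buffer holding 16,000 bytes the application hasn't read yet:
print(rwnd(65_536, 16_000, 0))  # 49536
```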

 

TCP Header

 

TCP Header | L4 Header

TCP is the layer 4 (Transport layer) protocol. When data is received at the Transport layer as a PDU from the Session layer, a header is added to the data according to the service, i.e. TCP or UDP. Once the header is added, the PDU is called a segment. Below is the header added at this layer and the task of each field.
The size of the TCP header is a minimum of 20 bytes and a maximum of 60 bytes (with options).



Source & Destination Port (16/16 bits)

A port is an endpoint of a logical connection and the way a client program specifies a specific server program on a computer in a network. Well-known destination ports are reserved for specific protocols, while the source port is usually chosen at random; for example, the destination port of HTTP is 80 and of FTP is 21. For some protocols both ports are reserved: DHCP uses source port 68 and destination port 67.

 Sequence Number (32 bits)

The sequence number is a 32-bit field that identifies the first byte of data in the data area of the TCP segment. Every byte in the data stream can be identified by a sequence number.

 Acknowledgment Number (32 bits)

The acknowledgment number is also a 32-bit field. It identifies the next byte of data that the connection expects to receive from the data stream.

 Header Length (4 bits)

The header length (also called the data offset) is a 4-bit field that specifies the length of the TCP header in 32-bit words. The receiving TCP module can calculate the start of the data area by examining this field.

 Reserved (6 bits)

Reserved for future purpose.

 Flag (6 bits)

There are 6 flags in the TCP header.
URGENT – The URG flag tells the receiving TCP module that the segment contains urgent data

ACKNOWLEDGMENT – The ACK flag tells the receiving TCP module that the acknowledgment number field contains a valid acknowledgment number

PUSH – The PSH flag tells the receiving TCP module to immediately send the data to the destination application

RESET – The RST flag asks the receiving TCP module to reset the TCP connection

SYNCHRONIZATION – The SYN flag tells the receiving TCP module to synchronize sequence numbers

FINISH – The FIN flag tells the receiving TCP module that the sender has finished sending data

 Window Size(16 bits)

The window size field is 16 bits wide and tells the receiving TCP module the number of bytes that the sending end is willing to accept. The value in this field specifies the width of the sliding window.

 Checksum (16 bits)

The TCP checksum is a 16-bit field whose calculation covers the TCP header and data (plus a pseudo-header containing the source and destination IP addresses). It helps the receiving TCP module detect data corruption: the sending TCP module calculates and includes the checksum, and the receiving TCP module verifies it when data arrives.
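The checksum algorithm itself is the standard Internet checksum (RFC 1071): a ones' complement sum of 16-bit words, folded and complemented. A minimal sketch, omitting the pseudo-header (source/destination IP, protocol, TCP length) that a real TCP checksum also covers:

```python
def internet_checksum(data: bytes) -> int:
    """Ones' complement sum of 16-bit big-endian words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF              # ones' complement of the sum

# The worked example from RFC 1071, section 3:
print(hex(internet_checksum(bytes.fromhex("0001f203f4f5f6f7"))))  # 0x220d
```

The receiver runs the same sum over the received segment including the checksum field; an intact segment sums to 0xFFFF before complementing.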

 Urgent Pointer (16 bits)

The urgent pointer is a 16-bit field that specifies a byte location in the TCP data area. It points to the last byte of urgent data in the TCP data area and is only meaningful when the URG flag is set.
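The fixed 20-byte layout described above can be packed and parsed with Python's struct module. The port, sequence, and window values below are made up for illustration:

```python
import struct

# The fixed 20-byte TCP header: ports, seq/ack numbers, data offset + flags,
# window, checksum, urgent pointer ("!" = network byte order).
TCP_HEADER = struct.Struct("!HHIIHHHH")

def parse_tcp_header(raw: bytes):
    src, dst, seq, ack, off_flags, window, checksum, urgent = TCP_HEADER.unpack(raw[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # data offset is in 32-bit words
        "flags": off_flags & 0x3F,            # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window,
        "checksum": checksum,
        "urgent": urgent,
    }

# Build a SYN segment header (data offset 5 words, flag bit 0x02), parse it back:
raw = TCP_HEADER.pack(443, 50_000, 1000, 0, (5 << 12) | 0x02, 65_535, 0, 0)
hdr = parse_tcp_header(raw)
print(hdr["header_len"], hex(hdr["flags"]), hdr["window"])  # 20 0x2 65535
```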

 

TCP

 A THREE-WAY HANDSHAKE, or TCP 3-way handshake, is a process used in a TCP/IP network to make a connection between the server and client. It is a three-step process that requires both the client and server to exchange synchronization and acknowledgment packets before the real data communication process starts.


TCP message types

Message     Description
SYN         Used to initiate and establish a connection. It also helps you to synchronize sequence numbers between devices.
ACK         Helps to confirm to the other side that it has received the SYN.
SYN-ACK     A SYN message from the local device and an ACK of the earlier packet.
FIN         Used to terminate a connection.



TCP Three-Way Handshake Process

TCP traffic begins with a three-way handshake. In this TCP handshake process, a client needs to initiate the conversation by requesting a communication session with the Server:


  • Step 1: In the first step, the client establishes a connection with a server. It sends a segment with the SYN flag set, informing the server that it wants to start communication and with what sequence number it will begin.
  • Step 2: In this step, the server responds to the client request with the SYN-ACK flags set. The ACK acknowledges the segment that was received, and the SYN signifies with what sequence number the server will start its own segments.
  • Step 3: In this final step, the client acknowledges the response of the server. They both now have a stable connection, and the actual data transfer can begin.
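The three steps can be modeled as a toy exchange of (flags, seq, ack) tuples. The ISNs here are fixed for readability; real TCP randomizes them, and the SYN "consumes" one sequence number, which is why each side acks ISN + 1:

```python
# Toy three-way handshake (illustrative only, not a real TCP implementation).

def three_way_handshake(client_isn, server_isn):
    syn     = ("SYN",     client_isn,     None)            # step 1: client -> server
    syn_ack = ("SYN-ACK", server_isn,     client_isn + 1)  # step 2: server -> client
    ack     = ("ACK",     client_isn + 1, server_isn + 1)  # step 3: client -> server
    return [syn, syn_ack, ack]

for flags, seq, ack in three_way_handshake(client_isn=1000, server_isn=5000):
    print(f"{flags:8} seq={seq} ack={ack}")
```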

Summary

  • The TCP 3-way handshake, or three-way handshake, is a process which is used in a TCP/IP network to make a connection between server and client.
  • SYN is used to initiate and establish a connection.
  • ACK helps to confirm to the other side that it has received the SYN.
  • SYN-ACK is a SYN message from the local device and an ACK of the earlier packet.
  • FIN is used for terminating a connection.
  • In the TCP handshake process, a client needs to initiate the conversation by requesting a communication session with the server.
  • In the first step, the client establishes a connection with a server.
  • In the second step, the server responds to the client request with the SYN-ACK signal set.
  • In the final step, the client acknowledges the response of the server.
  • TCP terminates the connection with a similar exchange of FIN and ACK messages between the two endpoints.

 


Thursday, December 3, 2020

TCP/IP Model

                                                                 

TCP/IP Model helps you to determine how a specific computer should be connected to the internet and how data should be transmitted between them.

 

TCP/IP Protocol Stack is specifically designed as a model to offer highly reliable and end-to-end byte stream over an unreliable internetwork

 

It has 4 layers


Application Layer

Transport Layer

Internet Layer

Network Interface


•     TCP Characteristics

•     Four Layers of TCP/IP model

•     Application Layer

•     Transport Layer

•     Internet Layer

•     The Network Interface Layer

•     Differences between OSI and TCP/IP models

•     Most Common TCP/IP Protocols

•     Advantages of the TCP/IP model

•     Disadvantages of the TCP/IP model

 TCP Characteristics

Here are the essential characteristics of the TCP/IP protocol:

     Support for a flexible TCP/IP architecture

     Adding more systems to a network is easy.

     In TCP/IP, the network remains intact as long as the source and destination machines are functioning properly.

     TCP is a connection-oriented protocol.

     TCP offers reliability and ensures that data which arrives out of sequence is put back into order.

     TCP allows you to implement flow control, so a sender never overpowers a receiver with data.

 




 Application Layer

     The application layer interacts with an application program, and is the highest level of the model. The application layer is the layer closest to the end-user. It allows users to interact with other software applications.

 

Transport Layer

 

     The transport layer builds on the network layer in order to provide data transport from a process on a source machine to a process on a destination machine. It is hosted using single or multiple networks, and also maintains the quality of service functions.

     It determines how much data should be sent where and at what rate. This layer builds on the messages received from the application layer. It helps ensure that data units are delivered error-free and in sequence.

 

Internet Layer

 

     The Internet layer is the second layer of the TCP/IP model. It is also known as the network layer. The main work of this layer is to send packets from any network or computer until they reach the destination, irrespective of the route they take.

     The Internet layer offers the functional and procedural method for transferring variable length data sequences from one node to another with the help of various networks.

     Message delivery at the network layer is not guaranteed to be reliable.

     Layer-management protocols that belong to the network layer are:

     Routing protocols

     Multicast group management

     Network-layer address assignment.

 

 

Summary

The full form of TCP/IP is Transmission Control Protocol/Internet Protocol.

TCP supports flexible architecture

Four layers of TCP/IP model are 1) Application Layer 2) Transport Layer 3) Internet Layer 4) Network Interface

Application layer interacts with an application program, which is the highest level of OSI model.

The Internet layer is the second layer of the TCP/IP model. It is also known as the network layer.

Transport layer builds on the network layer in order to provide data transport from a process on a source system machine to a process on a destination system.

The Network Interface Layer is the fourth layer of the four-layer TCP/IP model. This layer is also called the network access layer.

OSI model is developed by ISO (International Standard Organization) whereas TCP/IP model is developed by ARPANET (Advanced Research Project Agency Network).

An Internet Protocol address, also known as an IP address, is a numerical label assigned to each device on a network.

 

 

 

Wednesday, December 2, 2020

OSI Model

 



OSI REFERENCE MODEL

When people talk about connecting computers together, they are referring to networking. The OSI model is the model behind this communication that allows systems to share resources with each other.

 

The OSI model is a logical model for how systems in the network should communicate with each other. All the model does is break down all the components for communication and arrange them into layers.

 

The OSI model consists of seven logical layers that allow you to troubleshoot and reason about the network.

The layers are ordered from top to bottom by what is closer to the end user, so what the user is interacting with will be at the top.

 

7. APPLICATION LAYER:

  • This is the layer that the user is actually interacting with, like Chrome, Safari, Firefox, or even Internet Explorer.
  • Not just browsers: any application or piece of software, such as Outlook.
  • This is the actual stuff the user sees or uses to interact with the network.

6. PRESENTATION LAYER:

  • This is the layer where the operating system comes into play.
  • The Application layer sends the information to the presentation layer.
  • Data translation, compression, and encryption happen at this layer.
  • It is concerned with the format of the data exchanged between the two systems.

5. SESSION LAYER:

  • This is the layer which creates and manages a session between two computers.
  • When you visit a website, your computer, at the session layer, creates a session with the web server you are trying to get data from.
  • The session is maintained between your computer and the host server so that the connection stays alive until you close the browser tab or the page.
  • If an application creates different transport streams, the session layer binds all the streams belonging to the same application together.

4. TRANSPORT LAYER:

  • It decides how much information should be sent at one time (windowing: sending information back and forth).
  • It receives data from the session layer and divides it into smaller units called messages, which are passed on to the network layer.
  • On the receiving end, it makes sure that the messages are accepted and arranged in the correct order. The messages are then merged and passed on to the upper layers.

3. NETWORK LAYER:

  • Routers operate on the Network layer.
  • Your IP address is at the network layer.
  • It breaks a message into packets and transmits them across the network.
  • It makes sure that the packet reaches the correct destination.
  • This is implemented on every node in the network.
  • A node here can be a computer, a router etc.

2. DATA LINK LAYER:

  • This is the layer where all the switching occurs, i.e. where the switches operate.
  • MAC Addresses exists here.
  • It is responsible for transmission of error-free data.
  • It divides packets into frames and passes on to the physical layer for transmission.
  • On the receiving end, it takes the raw bytes from the physical layer and aggregates them into frames.
  • Data encoding, framing, and error detection and correction are applied here.

1. PHYSICAL LAYER:

  • The physical layer is literally all the physical stuff that a network requires, like wires, Network Interface Cards, and all other devices.
  • It is responsible for transmission of raw bits over the communication link.

 

UNDERSTANDING THE FLOW OF INFORMATION:

  • Let us assume that there are two systems, System A and System B, which want to communicate with each other over a network.
  • All the seven layers will be implemented by both the systems.

  • All the intermediate nodes will implement only the bottom 3 layers (Network Layer, Data Link Layer and Physical Layer) as their job is just to pass on the data to the next node.

  • The top 4 layers are implemented only by the end systems, but the bottom 3 layers are implemented by every node in the path.

  • Every layer has some protocols through which it communicates with the corresponding layer in the other system.

  • Every layer also communicates with the layer above and below it, or every layer provides some service to the layer above it.

Note: Protocols work between same layers of different machines, whereas services work among different layers within the same machine.




  1. An application creates data that will be sent, such as an email message. The Application layer adds a header that contains information about the source and destination ports, and passes the data to the Presentation layer.
    –| Source and Destination Port | Data |–
  2. The Presentation layer places layer 6 header information. For example, the text in the message might be converted to ASCII. The Presentation layer will then pass the new data to the Session layer.
    –| Encryption Info | Source and Destination Port | Data |–
  3. The Session layer follows the same process by adding layer 5 header information. Session layer will manage the data flow, and passes this data to the Transport layer.
    –| Session Header | Encryption Info | Source and Destination Port | Data |–
  4. The Transport layer adds information, such as a sequence number, to the message, and passes it to the Network layer. It allows multiple applications to use the same network at the same time.
    –| Session Header | Encryption Info | Source and Destination Port | Message Sequence | Data |–
  5. The Network layer places information, such as the source and destination address, so the Network layer can determine the best delivery path for the packets, and passes this data to the Data Link layer.
    –| Source and Destination IP | Session Header | Encryption Info | Source and Destination Port | Message Sequence | Data |–
  6. The Data Link layer places header and trailer information, such as a Frame Check Sequence (FCS) to ensure that the information is not corrupt, plus the Source and Destination MAC addresses, and passes this new data to the Physical layer for transmission across the media.
    –| Source and Destination MAC | Source and Destination IP | Session Header | Encryption Info | Source and Destination Port | Message Sequence | Data | FCS |–
  7. The bit stream is then transmitted as ones and zeros on the Physical layer. It is at this point that the Physical layer ensures bit synchronization. Bit synchronization will ensure the end user data is assembled in the correct order it was sent.
  8. Steps 1 to 7 occur in reverse order on the destination device. Device B collects the raw bits from the physical wire and passes them up the Data Link layer. The Data Link layer removes the headers and trailers and passes the remaining information to the Network layer and so forth until data is received by the Application layer.
  9. Once the request has reached its destination, the response is sent in the same sequence as steps 1 to 7, while reversing the source and destination information received in the request (i.e. the source becomes the destination and the destination becomes the source).
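The steps above can be sketched as a toy encapsulation and decapsulation, where each layer simply wraps the PDU it receives from the layer above. The header names mirror the diagrams above; nothing here is real packet formatting:

```python
# Toy OSI encapsulation: each layer wraps the PDU from the layer above with
# its own header; the Data Link layer also adds an FCS trailer. The receiver
# unwraps everything in the reverse order.

HEADERS = [
    "Source and Destination Port",  # step 1: Application
    "Encryption Info",              # step 2: Presentation
    "Session Header",               # step 3: Session
    "Message Sequence",             # step 4: Transport
    "Source and Destination IP",    # step 5: Network
    "Source and Destination MAC",   # step 6: Data Link (plus FCS trailer)
]

def encapsulate(data):
    pdu = data
    for header in HEADERS:          # Application-layer header first, MAC last
        pdu = (header, pdu)
    return (pdu, "FCS")             # the Data Link trailer closes the frame

def decapsulate(frame):
    pdu, trailer = frame
    assert trailer == "FCS"         # step 8: check and strip the trailer...
    for header in reversed(HEADERS):
        h, pdu = pdu                # ...then strip each header on the way up
        assert h == header
    return pdu                      # the original Application-layer data

frame = encapsulate("email message")
print(decapsulate(frame))           # email message
```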