Internet and packets

Internet protocol suite

The Internet protocol suite is the computer networking model and set of communications protocols used on the Internet and similar computer networks. It is commonly known as TCP/IP, because its most important protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), were the first networking protocols defined in this standard. Often also called the Internet model, it was originally known as the DoD model, because the development of the networking model was funded by DARPA, an agency of the United States Department of Defense. TCP/IP provides end-to-end connectivity, specifying how data should be packetized, addressed, transmitted, routed, and received at the destination.

Link layer

Despite the different semantics of layering in TCP/IP and OSI, the link layer is sometimes described as a combination of the data link layer (layer 2) and the physical layer (layer 1) of the OSI model. However, the layers of TCP/IP are descriptions of operating scopes (application, host-to-host, network, link), not detailed prescriptions of operating procedures, data semantics, or networking technologies.

Bob Metcalfe on the First Ethernet LAN

IP address

The designers of the Internet Protocol defined an IP address as a 32-bit number,[1] and this system, known as Internet Protocol Version 4 (IPv4), is still in use today. However, because of the growth of the Internet and the predicted depletion of available addresses, a new version of IP (IPv6), using 128 bits for the address, was developed in 1995.[3] IPv6 was standardized as RFC 2460 in 1998,[4] and its deployment has been ongoing since the mid-2000s. IP addresses are usually written and displayed in human-readable notations, such as 172.16.254.1 (IPv4) and 2001:db8:0:1234:0:567:8:1 (IPv6). The Internet Assigned Numbers Authority (IANA) manages the IP address space allocations globally and delegates five regional Internet registries (RIRs) to allocate IP address blocks to local Internet registries (Internet service providers) and other entities.
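The 32-bit versus 128-bit distinction is easy to see with Python's standard ipaddress module; this short sketch parses the two example notations above and inspects their raw sizes:

    import ipaddress

    # Parse the two human-readable notations shown above.
    v4 = ipaddress.ip_address("172.16.254.1")
    v6 = ipaddress.ip_address("2001:db8:0:1234:0:567:8:1")

    print(int(v4))             # 2886794753 -- the address as a 32-bit number
    print(v4.packed.hex())     # ac10fe01   -- the same four bytes in hex
    print(len(v4.packed) * 8)  # 32 bits for IPv4
    print(len(v6.packed) * 8)  # 128 bits for IPv6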

Dynamic Host Configuration Protocol

The Dynamic Host Configuration Protocol (DHCP) is a standardized network protocol used on Internet Protocol (IP) networks for dynamically distributing network configuration parameters, such as IP addresses for interfaces and services. With DHCP, computers request IP addresses and networking parameters automatically from a DHCP server, reducing the need for a network administrator or a user to configure these settings manually.

Computers use the Dynamic Host Configuration Protocol to request Internet Protocol parameters, such as an IP address, from a network server. The protocol operates on the client-server model. DHCP is very common in all modern networks,[1] ranging in size from home networks to large campus networks and regional Internet service provider networks. On large networks that consist of multiple links, a single DHCP server may service the entire network when aided by DHCP relay agents located on the interconnecting routers.
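A DHCP server's core bookkeeping is a pool of addresses leased to clients, keyed by hardware (MAC) address. The hypothetical LeasePool class below is a minimal sketch of that bookkeeping only; a real server implements the full DISCOVER/OFFER/REQUEST/ACK message exchange, lease timers, and relay-agent handling.

    import ipaddress

    class LeasePool:
        """Toy model of the address pool a DHCP server manages (illustrative only)."""

        def __init__(self, network: str):
            self.free = list(ipaddress.ip_network(network).hosts())
            self.leases = {}  # client MAC address -> leased IP address

        def request(self, mac: str) -> ipaddress.IPv4Address:
            # Returning clients get their existing lease; new clients get
            # the next free address from the pool.
            if mac not in self.leases:
                self.leases[mac] = self.free.pop(0)
            return self.leases[mac]

    pool = LeasePool("192.168.1.0/24")
    print(pool.request("aa:bb:cc:dd:ee:ff"))  # 192.168.1.1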

Traceroute

Traceroute outputs the list of traversed routers in simple text format, together with timing information. By default, traceroute sends a sequence of User Datagram Protocol (UDP) packets addressed to a destination host; ICMP Echo Request or TCP SYN packets can also be used.[1] The time-to-live (TTL) value, also known as the hop limit, is used to determine the intermediate routers being traversed towards the destination. Routers decrement a packet's TTL value by 1 when routing it and discard any packet whose TTL has reached zero, returning the ICMP error message Time Exceeded.[2] Common default TTL values are 128 (Windows) and 64 (Unix-based systems). Traceroute works by sending packets with gradually increasing TTL values, starting with a TTL of 1.
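The TTL mechanism described above fits in a short script. Below is a minimal, illustrative UDP-based traceroute in Python (the raw ICMP socket requires administrator privileges, and a real traceroute also measures round-trip times and probes each hop several times):

    import socket

    def traceroute(dest: str, max_hops: int = 30, port: int = 33434):
        dest_ip = socket.gethostbyname(dest)
        for ttl in range(1, max_hops + 1):
            # Raw socket to catch the ICMP Time Exceeded replies from routers.
            recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            recv.settimeout(2.0)
            send.sendto(b"", (dest_ip, port))  # empty UDP probe
            try:
                _, addr = recv.recvfrom(512)   # whoever answered is hop `ttl`
                print(f"{ttl:2d}  {addr[0]}")
                if addr[0] == dest_ip:         # destination reached: stop probing
                    return
            except socket.timeout:
                print(f"{ttl:2d}  *")          # hop did not answer in time
            finally:
                send.close()
                recv.close()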

Packet switching

Vint Cerf on the History of Packets

Packet switching is a digital networking communications method that groups all transmitted data, regardless of content, type, or structure, into suitably sized blocks called packets. Packet switching features delivery of variable-bitrate data streams (sequences of packets) over a computer network that allocates transmission resources as needed, using statistical multiplexing or dynamic bandwidth allocation techniques. When traversing network adapters, switches, routers, and other network nodes, packets are buffered and queued, resulting in variable delay and throughput depending on the network's capacity and the traffic load on the network.
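As a toy illustration of packetization, the sketch below splits a byte stream into fixed-size blocks, tags each with a sequence number, and reassembles them even when they arrive out of order (real packets also carry addresses, checksums, and protocol headers):

    MTU = 8  # deliberately tiny payload size, to keep the example readable

    def packetize(data: bytes) -> list[tuple[int, bytes]]:
        # Each packet is (sequence number, payload block).
        return [(seq, data[i:i + MTU])
                for seq, i in enumerate(range(0, len(data), MTU))]

    def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
        # Sorting by sequence number restores the original order.
        return b"".join(payload for _, payload in sorted(packets))

    packets = packetize(b"regardless of content, type, or structure")
    packets.reverse()  # simulate out-of-order delivery across the network
    print(reassemble(packets))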

Leonard Kleinrock

Leonard Kleinrock (born June 13, 1934) is an American engineer and computer scientist. A computer science professor at UCLA's Henry Samueli School of Engineering and Applied Science, he made several important contributions to the field of computer networking, in particular to its theoretical side. He also played an important role in the development of the ARPANET, the precursor to the Internet, at UCLA.[3] His best-known and most significant work is his early work on queueing theory, which has applications in many fields, among them as a key mathematical background to packet switching, one of the basic technologies behind the Internet. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964; he later published several of the standard works on the subject.

Len Kleinrock on the Theory of Packets

Queueing theory

Queueing theory is the mathematical study of waiting lines, or queues.[1] In queueing theory a model is constructed so that queue lengths and waiting times can be predicted.[1] Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.

Single queueing nodes are usually described using Kendall's notation in the form A/S/C, where A describes the time between arrivals to the queue, S the size of jobs, and C the number of servers at the node.[5][6] Many theorems in queueing theory can be proved by reducing queues to mathematical systems known as Markov chains, first described by Andrey Markov in his 1906 paper.[7] The M/M/1 queue is a simple model in which a single server serves jobs that arrive according to a Poisson process and have exponentially distributed service requirements.
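For the M/M/1 queue, classic closed-form results follow from the utilization ρ = λ/μ: the mean number of jobs in the system is L = ρ/(1 - ρ), and by Little's law the mean time in the system is W = 1/(μ - λ). The short simulation below (a sketch with arbitrarily chosen rates λ = 3 and μ = 5) checks the formula for W:

    import random

    lam, mu = 3.0, 5.0   # arrival rate and service rate; lam < mu for stability
    random.seed(1)

    n = 200_000
    arrival = 0.0        # running arrival time
    server_free = 0.0    # time at which the server next becomes idle
    total = 0.0          # accumulated time-in-system over all jobs

    for _ in range(n):
        arrival += random.expovariate(lam)            # Poisson arrival process
        start = max(arrival, server_free)             # queue if the server is busy
        server_free = start + random.expovariate(mu)  # exponential service time
        total += server_free - arrival                # waiting time + service time

    print("simulated W:", total / n)       # approaches 0.5
    print("analytic  W:", 1 / (mu - lam))  # exactly 0.5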

Service disciplines include processor sharing and priority scheduling; see also Time-sharing.

Transmission Control Protocol

Web browsers use TCP when they connect to servers on the World Wide Web, and it is used to deliver email and transfer files from one location to another. HTTP, HTTPS, SMTP, POP3, IMAP, SSH, FTP, Telnet, and a variety of other protocols are typically encapsulated in TCP.

Domain Name System

The Domain Name System (DNS) is a hierarchical, distributed naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates domain names, which can be easily memorized by humans, into the numerical IP addresses needed for locating computer services and devices worldwide.
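In code, the name-to-address translation is a single resolver call; this sketch uses Python's socket module (the addresses printed will vary):

    import socket

    # Ask the operating system's resolver for the addresses behind a name.
    # example.com is a domain reserved for documentation and examples.
    for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 80):
        print(family.name, sockaddr[0])  # e.g. AF_INET followed by an IPv4 address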

TCP congestion-avoidance algorithm

Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, together with other schemes such as slow start, to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet.[1][2][3][4][5] Two well-known variations are those offered by TCP Tahoe and TCP Reno. The two algorithms were retrospectively named after the 4.3BSD operating system releases in which each first appeared (and which were themselves named after Lake Tahoe and the nearby city of Reno, Nevada).

Van Jacobson: The Slow-Start Algorithm
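As a rough illustration of the AIMD dynamics described above, the toy model below tracks a congestion window (cwnd, in segments) through slow start's exponential growth, additive increase once past the ssthresh threshold, and a multiplicative decrease at a simulated loss; it sketches the idea rather than any specific TCP implementation:

    cwnd, ssthresh = 1.0, 16.0  # congestion window (segments) and slow-start threshold

    for rtt in range(1, 21):
        if rtt == 12:               # pretend a loss is detected on round 12
            ssthresh = cwnd / 2     # multiplicative decrease
            cwnd = max(ssthresh, 1.0)
        elif cwnd < ssthresh:
            cwnd *= 2               # slow start: double every round trip
        else:
            cwnd += 1               # congestion avoidance: one segment per RTT
        print(f"RTT {rtt:2d}: cwnd = {cwnd:g}")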

See also: Port (computer networking) and the List of TCP and UDP port numbers.

Transport Layer Security

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communications security over a computer network.[1] They use X.509 certificates, and hence asymmetric cryptography, to authenticate the counterparty with whom they are communicating[2] and to exchange a symmetric key. This session key is then used to encrypt data flowing between the parties, providing data/message confidentiality, while message authentication codes provide message integrity and, as a by-product, message authentication. Several versions of the protocols are in widespread use in applications such as web browsing, electronic mail, Internet faxing, instant messaging, and voice-over-IP (VoIP). An important property in this context is forward secrecy, which ensures that the short-term session key cannot be derived from the long-term asymmetric secret key.[3]
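Python's standard ssl module shows the moving parts in a few lines: certificate verification, key exchange, and the negotiated session. A minimal client sketch (example.com is a placeholder host):

    import socket
    import ssl

    ctx = ssl.create_default_context()  # verifies the server's X.509 certificate
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())    # negotiated protocol, e.g. TLSv1.3
            print(tls.cipher())     # negotiated cipher suite
            print(tls.getpeercert()["subject"])  # identity fields from the certificate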

Application layer

Although both models (TCP/IP and OSI) use the same term for their respective highest-level layer, the detailed definitions and purposes are different.

Hypertext Transfer Protocol

HTTP functions as a request–response protocol in the client–server computing model. Request–response, or request–reply, is one of the basic methods computers use to communicate with each other.
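At the socket level, a request–response exchange is just bytes out, bytes back. The sketch below sends a minimal HTTP/1.1 request over plain TCP and reads the reply (real clients use an HTTP library, typically over TLS):

    import socket

    with socket.create_connection(("example.com", 80)) as s:
        # The request: method, path, and headers, terminated by a blank line.
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):  # read until the server closes the connection
            reply += chunk

    print(reply.split(b"\r\n")[0].decode())  # the status line, e.g. HTTP/1.1 200 OK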

Telnet

Telnet is a network protocol used on the Internet or local area networks to provide a bidirectional interactive text-oriented communication facility using a virtual terminal connection.

Nmap

Nmap provides a number of features for probing computer networks, including host discovery and service and operating system detection. These features are extensible by scripts that provide more advanced service detection,[2] vulnerability detection,[2] and other features. Nmap can also adapt to network conditions, including latency and congestion, during a scan. Nmap is under continual development and refinement by its user community.

Van Jacobson on Content-Centric Networking