
Internet and packets


Internet protocol suite. The Internet protocol suite is the computer networking model and set of communications protocols used on the Internet and similar computer networks. It is commonly known as TCP/IP, because its most important protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), were the first networking protocols defined in this standard. Often also called the Internet model, it was originally also known as the DoD model, because the development of the networking model was funded by DARPA, an agency of the United States Department of Defense. TCP/IP provides end-to-end connectivity specifying how data should be packetized, addressed, transmitted, routed and received at the destination.

The TCP/IP model and related protocol models are maintained by the Internet Engineering Task Force (IETF).

OSI model. The Open Systems Interconnection model (OSI model) is a conceptual model that characterizes and standardizes the communication functions of a telecommunication or computing system without regard to its underlying internal structure and technology. Its goal is the interoperability of diverse communication systems with standard protocols. The model partitions a communication system into abstraction layers. The original version of the model defined seven layers. A layer serves the layer above it and is served by the layer below it. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO).
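
As a toy illustration of the layering idea, in which each layer serves the layer above it, the sketch below wraps a payload in one invented header per OSI layer on the way down and strips them in reverse on the way up. The bracketed "headers" are purely illustrative, not real protocol formats.

```python
# Toy sketch of layered encapsulation (the headers are invented for
# illustration; real protocol headers are binary structures, not tags).
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def encapsulate(payload: str) -> str:
    """Descend the stack: each layer prepends its own header."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    """Ascend the stack: strip headers in reverse order."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame
```

A peer at each layer only needs to understand its own header; everything inside is opaque payload handed up to the next layer.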

In the late 1970s, the International Organization for Standardization (ISO) conducted a program to develop general standards and methods of networking.

Link layer. Despite the different semantics of layering in TCP/IP and OSI, the link layer is sometimes described as a combination of the data link layer (layer 2) and the physical layer (layer 1) in the OSI model. However, the layers of TCP/IP are descriptions of operating scopes (application, host-to-host, network, link) and not detailed prescriptions of operating procedures, data semantics, or networking technologies.

RFC 1122 exemplifies that local area network protocols such as Ethernet and IEEE 802, and framing protocols such as the Point-to-Point Protocol (PPP), belong to the link layer. Local area networking standards such as Ethernet and the IEEE 802 specifications use terminology from the seven-layer OSI model rather than the TCP/IP model. The TCP/IP model in general does not consider physical specifications; rather, it assumes a working network infrastructure that can deliver media-level frames on the link.

Bob Metcalfe on the First Ethernet LAN.

Internet layer. Internet-layer protocols use IP-based packets. The internet layer does not include the protocols that define communication between local (on-link) network nodes, which fulfill the purpose of maintaining link states between the local nodes, such as the local network topology, and which usually use protocols based on the framing of packets specific to the link types.

Such protocols belong to the link layer. A common design aspect in the internet layer is the robustness principle: "Be liberal in what you accept, and conservative in what you send",[1] as a misbehaving host can deny Internet service to many other users. The internet layer has three basic functions: selecting the next-hop host and transmitting outgoing packets to it; capturing incoming packets and passing their payloads up to the appropriate transport-layer protocol; and providing error detection and diagnostic capability. In Version 4 of the Internet Protocol (IPv4), during both transmit and receive operations, IP is capable of automatic or intentional fragmentation or defragmentation of packets, based, for example, on the maximum transmission unit (MTU) of link elements.

IP address. Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number.[2] However, because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP (IPv6), using 128 bits for the IP address, was standardized in 1998.[3][4][5] IPv6 deployment has been ongoing since the mid-2000s.

IP addresses are written and displayed in human-readable notations, such as 172.16.254.1 in IPv4, and 2001:db8:0:1234:0:567:8:1 in IPv6. The size of the routing prefix of the address is designated in CIDR notation by suffixing the address with the number of significant bits, e.g., 192.168.1.15/24, which is equivalent to the historically used subnet mask 255.255.255.0. Network administrators assign an IP address to each device connected to a network. Such assignments may be on a static (fixed or permanent) or dynamic basis, depending on network practices and software features.
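
The CIDR arithmetic described above can be checked with Python's standard ipaddress module; the addresses used are the examples from the text.

```python
import ipaddress

# /24 means 24 significant prefix bits, which corresponds to the
# historically used subnet mask 255.255.255.0 and a 256-address block.
iface = ipaddress.ip_interface("192.168.1.15/24")
print(iface.netmask)                 # 255.255.255.0
print(iface.network)                 # 192.168.1.0/24
print(iface.network.num_addresses)   # 256

# IPv6 addresses are 128-bit; the module parses them the same way.
addr6 = ipaddress.ip_address("2001:db8:0:1234:0:567:8:1")
print(addr6.version)                 # 6
```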

Dynamic Host Configuration Protocol. The Dynamic Host Configuration Protocol (DHCP) is a standardized network protocol used on Internet Protocol (IP) networks for dynamically distributing network configuration parameters, such as IP addresses for interfaces and services. With DHCP, computers request IP addresses and networking parameters automatically from a DHCP server, reducing the need for a network administrator or a user to configure these settings manually. Computers use the Dynamic Host Configuration Protocol for requesting Internet Protocol parameters, such as an IP address, from a network server. The protocol operates based on the client-server model.
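
As a toy sketch of the server side of this client-server allocation (not the actual DHCP wire protocol, which uses a DISCOVER/OFFER/REQUEST/ACK exchange and lease timers), a minimal address pool might look like this; the addresses and client identifiers below are invented.

```python
class DhcpPool:
    """Toy model of dynamic address allocation: lease addresses from a
    pool, remember which client holds which lease, and reclaim
    addresses when leases are released."""

    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}  # client id (e.g. a MAC address) -> IP address

    def request(self, client_id):
        if client_id in self.leases:
            return self.leases[client_id]   # renewing client keeps its address
        if not self.free:
            raise RuntimeError("address pool exhausted")
        addr = self.free.pop(0)
        self.leases[client_id] = addr
        return addr

    def release(self, client_id):
        addr = self.leases.pop(client_id, None)
        if addr is not None:
            self.free.append(addr)          # address becomes reusable
```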

DHCP is very common in all modern networks,[1] ranging in size from home networks to large campus networks and regional Internet service provider networks. Most residential network routers receive a globally unique IP address within the provider network. Depending on implementation, the DHCP server may have three methods of allocating IP addresses: dynamic allocation, in which a client leases an address from a pool for a limited period; automatic allocation, in which an address from the pool is permanently assigned to a client; and manual allocation, in which the administrator maps an address to a client's hardware (MAC) address.

Traceroute. Traceroute outputs the list of traversed routers in simple text format, together with timing information. Traceroute, by default, sends a sequence of User Datagram Protocol (UDP) packets addressed to a destination host; ICMP Echo Request or TCP SYN packets can also be used.[1] The time-to-live (TTL) value, also known as hop limit, is used in determining the intermediate routers being traversed towards the destination.
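
A toy simulation of how increasing TTL values reveal the path, with invented router names, might look like the following; real traceroute sends actual probe packets and parses the ICMP replies.

```python
def send_probe(path, destination, ttl):
    """Return which node answers a probe sent with the given TTL.

    Each router on the path decrements the TTL; the router that
    decrements it to zero answers with a Time Exceeded message.
    """
    for router in path:
        ttl -= 1
        if ttl == 0:
            return ("time-exceeded", router)
    return ("reply", destination)

def traceroute(path, destination, max_hops=30):
    """Probe with TTL 1, 2, 3, ... to learn the path hop by hop."""
    hops = []
    for ttl in range(1, max_hops + 1):
        kind, node = send_probe(path, destination, ttl)
        hops.append(node)
        if kind == "reply":    # the destination itself answered
            break
    return hops
```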

Routers decrement a packet's TTL value by 1 when routing it and discard packets whose TTL value has reached zero, returning the ICMP error message Time Exceeded.[2] Common default values for TTL are 128 (Windows) and 64 (Unix-based operating systems). Traceroute works by sending packets with gradually increasing TTL values, starting with a TTL value of 1. The first router receives the packet, decrements the TTL value, and drops the packet because it then has TTL value zero; in doing so it returns a Time Exceeded message, revealing its address to the sender. The sender expects a reply within a specified number of seconds.

Vint Cerf. Vinton Gray "Vint" Cerf[1] (/ˈsɜrf/; born June 23, 1943) is an American computer scientist, who is recognized as one of[5] "the fathers of the Internet",[6] sharing this title with American computer scientist Bob Kahn.[7][8] His contributions have been acknowledged and lauded, repeatedly, with honorary degrees and awards that include the National Medal of Technology,[1] the Turing Award,[9] the Presidential Medal of Freedom,[10] and membership in the National Academy of Engineering.

In the early days, Cerf was a program manager for the United States Department of Defense Advanced Research Projects Agency (DARPA), funding various groups to develop TCP/IP technology. When the Internet began to transition to a commercial opportunity during the late 1980s, Cerf moved to MCI, where he was instrumental in the development of the first commercial email system (MCI Mail) connected to the Internet. Cerf was also instrumental in the funding and formation of ICANN from the start.

Vint Cerf on the History of Packets.

Packet switching. Packet switching is a digital networking communications method that groups all transmitted data – regardless of content, type, or structure – into suitably sized blocks, called packets.

Packet switching features delivery of variable-bitrate data streams (sequences of packets) over a computer network that allocates transmission resources as needed, using statistical multiplexing or dynamic bandwidth allocation techniques.

When traversing network adapters, switches, routers, and other network nodes, packets are buffered and queued, resulting in variable delay and throughput depending on the network's capacity and the traffic load on the network. First proposed for military uses in the early 1960s and implemented on small networks in 1968, packet switching became one of the fundamental networking technologies behind the Internet and most local area networks.

Leonard Kleinrock. Leonard Kleinrock (born June 13, 1934) is an American engineer and computer scientist. A computer science professor at UCLA's Henry Samueli School of Engineering and Applied Science, he made several important contributions to the field of computer networking, in particular to its theoretical side. He also played an important role in the development of the ARPANET, the precursor to the Internet, at UCLA.[3] His best-known and most significant work is his early work on queueing theory, which has applications in many fields, among them as a key mathematical background to packet switching, one of the basic technologies behind the Internet.

His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964; he later published several of the standard works on the subject.

Len Kleinrock on the Theory of Packets.

Queueing theory. Queueing theory is the mathematical study of waiting lines, or queues.[1] In queueing theory a model is constructed so that queue lengths and waiting times can be predicted.[1] Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.

Single queueing nodes are usually described using Kendall's notation in the form A/S/C, where A describes the time between arrivals to the queue, S the size of jobs, and C the number of servers at the node.[5][6] Many theorems in queueing theory can be proved by reducing queues to mathematical systems known as Markov chains, first described by Andrey Markov in his 1906 paper.[7] The M/M/1 queue is a simple model in which a single server serves jobs that arrive according to a Poisson process and have exponentially distributed service requirements.
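
For the M/M/1 queue, classic closed-form results give the mean number in the system, L = ρ/(1 − ρ) with utilization ρ = λ/μ, and, via Little's law (L = λW), the mean time in the system, W = 1/(μ − λ), where λ is the arrival rate and μ the service rate. A small sketch computing them (the formulas are standard; the function name is ours):

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Closed-form performance measures of an M/M/1 queue."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu                  # server utilization
    return {
        "rho": rho,
        "L":  rho / (1 - rho),      # mean number in system
        "W":  1 / (mu - lam),       # mean time in system
        "Lq": rho**2 / (1 - rho),   # mean number waiting in queue
        "Wq": rho / (mu - lam),     # mean waiting time in queue
    }
```

With λ = 1 job/s and μ = 2 jobs/s, utilization is 0.5, the mean number in the system is 1, and Little's law L = λW can be checked directly on the returned values.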

Time-sharing. In computing, time-sharing is the sharing of a computing resource among many tasks or users. It enables multi-tasking by a single user or enables multiple user sessions. Developed during the 1960s, its emergence as the prominent model of computing in the 1970s represented a major technological shift in the history of computing. By allowing many users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one,[1] and promoted the interactive use of computers and the development of new interactive applications. Before time-sharing, in batch processing, comparatively inexpensive card punches or paper tape writers were used by programmers to write their programs "offline".

The alternative of allowing the user to operate the computer directly was generally far too expensive to consider.

Transmission Control Protocol. The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. Web browsers use TCP when they connect to servers on the World Wide Web, and it is used to deliver email and transfer files from one location to another. HTTP, HTTPS, SMTP, POP3, IMAP, SSH, FTP, Telnet, and a variety of other protocols are typically encapsulated in TCP. In May 1974 the Institute of Electrical and Electronics Engineers (IEEE) published a paper titled "A Protocol for Packet Network Intercommunication". The protocol corresponds to the transport layer of the TCP/IP suite. TCP is utilized extensively by many of the Internet's most popular applications, including the World Wide Web (WWW), email, the File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and some streaming media applications.
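
One reason TCP favors accuracy over timeliness is in-order delivery: a receiver must buffer segments that arrive out of order until the gap is filled before handing anything to the application. A toy sketch of that receiver-side logic (sequence numbers here count whole segments, not bytes as in real TCP):

```python
class ReorderBuffer:
    """Toy model of a TCP receiver's in-order delivery: hold
    out-of-order segments and release only a contiguous prefix."""

    def __init__(self):
        self.next_seq = 0
        self.held = {}  # sequence number -> data, awaiting earlier segments

    def receive(self, seq, data):
        """Accept a segment; return whatever is now deliverable in order."""
        self.held[seq] = data
        delivered = []
        while self.next_seq in self.held:
            delivered.append(self.held.pop(self.next_seq))
            self.next_seq += 1
        return delivered
```

If segment 1 arrives before segment 0, nothing is delivered; when segment 0 arrives, both come out together, which is exactly the waiting the text describes.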

TCP is optimized for accurate delivery rather than timely delivery, and therefore TCP sometimes incurs relatively long delays (on the order of seconds) while waiting for out-of-order messages or retransmissions of lost messages.

Domain Name System. The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network.

It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates domain names, which can be easily memorized by humans, to the numerical IP addresses needed for locating and identifying computer services and devices worldwide. The Domain Name System is an essential component of the functionality of most Internet services because it is the Internet's primary directory service. The Domain Name System distributes the responsibility of assigning domain names and mapping those names to IP addresses by designating authoritative name servers for each domain.
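
The delegation hierarchy can be sketched as a toy resolver walking invented zone data from the root down; all names and the address below are illustrative (203.0.113.0/24 is a documentation address range), and real resolution involves queries to actual name servers, caching, and record types.

```python
# Each zone either delegates a label to a more specific zone or
# answers authoritatively with an address.
ZONES = {
    "": {"org": "delegate"},                  # root zone
    "org": {"example": "delegate"},           # .org zone
    "example.org": {"www": "203.0.113.10"},   # authoritative zone
}

def resolve(name: str) -> str:
    """Walk the hierarchy right-to-left: root, TLD, then the domain."""
    labels = name.split(".")        # e.g. ["www", "example", "org"]
    zone = ""
    while labels:
        label = labels.pop()        # consume from the right (TLD first)
        entry = ZONES[zone].get(label)
        if entry == "delegate":
            zone = label if not zone else f"{label}.{zone}"
        elif entry is not None:
            return entry            # authoritative answer
        else:
            raise KeyError(name)
    raise KeyError(name)
```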

Authoritative name servers are assigned to be responsible for their supported domains, and may delegate authority over sub-domains to other name servers.

TCP congestion-avoidance algorithm. The Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes such as slow start, to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet.[1][2][3][4][5] Two such variations are those offered by TCP Tahoe and Reno. The two algorithms were retrospectively named after the 4.3BSD operating system releases in which each first appeared (which were themselves named after Lake Tahoe and the nearby city of Reno, Nevada).
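
A toy trace of the AIMD behavior, with slow start below the threshold and Reno-style halving on loss, can be sketched as follows; the round counts, loss timing, and function name are invented for illustration, and real implementations react per-ACK with many additional details.

```python
def aimd_trace(rounds, losses, ssthresh=16.0):
    """Return the congestion window (in segments) at each round trip.

    losses is the set of round-trip indices at which a loss is detected.
    """
    cwnd = 1.0
    trace = []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in losses:
            ssthresh = max(cwnd / 2, 1)   # multiplicative decrease
            cwnd = ssthresh               # Reno-style halving (no timeout)
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: additive increase
    return trace
```

Starting from one segment, the window doubles each round trip up to the threshold, then grows by one segment per round trip, and halves at the loss event, giving the characteristic sawtooth.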

The "Tahoe" algorithm first appeared in 4.3BSD-Tahoe (which was made to support the CCI Power 6/32 "Tahoe" minicomputer), and was made available to non-AT&T licensees as part of the "4.3BSD Networking Release 1"; this ensured its wide distribution and implementation.

Van Jacobson: The Slow-Start Algorithm.

Port (computer networking). List of TCP and UDP port numbers. Transport Layer Security. Application layer. Hypertext Transfer Protocol. Request–response. Telnet. Nmap. Van Jacobson on Content-Centric Networking.