
List of HTTP header fields

HTTP header fields are components of the message header of requests and responses in the Hypertext Transfer Protocol (HTTP). They define the operating parameters of an HTTP transaction. Field names: A core set of fields is standardized by the Internet Engineering Task Force (IETF) in RFC 2616 and other updates and extension documents (e.g., RFC 4229), and must be implemented by all HTTP-compliant protocol implementations. The permanent registry of headers and the repository of provisional registrations are maintained by the IANA. Non-standard header fields were conventionally marked by prefixing the field name with X-.[2] However, this convention was deprecated in June 2012 because of the inconveniences it caused when non-standard headers became standard.[3] A prior restriction on the use of Downgraded- has also since been lifted.[4] Field values: A few fields can contain comments (e.g., the User-Agent, Server, and Via fields), which can be ignored by software.[5]
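
As a brief illustrative sketch (not part of the excerpt above), the following Python standard-library snippet sends a request carrying a standard header and a non-standard, X-prefixed one, then prints the response header fields; the URL and the X-Example-Trace name are placeholders chosen for the example.

import urllib.request

# Build a request and attach header fields (field-name / field-value pairs).
req = urllib.request.Request("http://example.com/")
req.add_header("User-Agent", "example-client/1.0")   # standard request header
req.add_header("X-Example-Trace", "demo")            # non-standard, X-prefixed header

with urllib.request.urlopen(req) as resp:
    # Response headers are also plain field-name / field-value pairs.
    for name, value in resp.getheaders():
        print(f"{name}: {value}")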

List of HTTP status codes: response codes of the Hypertext Transfer Protocol. The Internet Assigned Numbers Authority (IANA) maintains the official registry of HTTP status codes.[1] All HTTP response status codes are separated into five classes or categories. The first digit of the status code defines the class of response, while the last two digits do not have any classifying or categorization role. There are five classes defined by the standard:
1xx informational response – the request was received, continuing process
2xx successful – the request was successfully received, understood, and accepted
3xx redirection – further action needs to be taken in order to complete the request
4xx client error – the request contains bad syntax or cannot be fulfilled
5xx server error – the server failed to fulfil an apparently valid request
1xx informational response: An informational response indicates that the request was received and understood. Examples: 100 Continue, 101 Switching Protocols, 102 Processing (WebDAV; RFC 2518).
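
As a small illustrative sketch (not from the excerpt), the classification by first digit described above can be expressed directly in Python:

def status_class(code: int) -> str:
    # The first digit selects the class; the last two digits carry no class meaning.
    classes = {
        1: "informational response",
        2: "successful",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return classes.get(code // 100, "unknown")

print(status_class(100))  # informational response
print(status_class(410))  # client error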

Web addresses (in French, adresses web), also called URLs (Uniform Resource Locator), are addresses used to identify and locate resources on the Internet, such as web pages, images, videos, and files. They are generally formed by combining a communication protocol (such as HTTP or HTTPS), the domain name (or IP address) of the server hosting the resource, and a path to the specific resource. Web addresses are used to access content on the Internet through a web browser or another network client. A fundamental invention: the three inventions underlying the World Wide Web are web addresses (URLs), the HTTP protocol, and the HTML format. Although a protocol (HTTP) and a data format (HTML) were developed specifically for the Web, the Web is designed to impose a minimum of technical constraints.[1] The resource is accessible as the local file page.html in the directory /home/tim/.

IPv6 (Internet Protocol version 6) is a connectionless network protocol at layer 3 of the OSI (Open Systems Interconnection) model. IPv6 is the outcome of work carried out within the IETF during the 1990s to succeed IPv4, and its specifications were finalized in RFC 2460[1] in December 1998. IPv6 was standardized in RFC 8200[2] in July 2017. Thanks to 128-bit addresses instead of 32-bit ones, IPv6 has a far larger address space than IPv4 (more than 3.4 × 10^38 addresses, nearly 7.9 × 10^28 times more than its predecessor). IPv6 also provides mechanisms for automatic address assignment and makes renumbering easier. In 2011, only a few companies had undertaken to deploy IPv6 on their internal networks, notably Google.[5] In 2022, the adoption rate in France was reportedly over 75%, and coverage among French operators was reportedly very high (with the exception of SFR). Example address: 2001:db8:0:85a3:0:0:ac1f:8001
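
As a brief illustrative sketch (not in the excerpt), Python's standard ipaddress module can expand and compress the example address above and makes the size of the 128-bit address space concrete:

import ipaddress

addr = ipaddress.IPv6Address("2001:db8:0:85a3:0:0:ac1f:8001")
print(addr.exploded)    # 2001:0db8:0000:85a3:0000:0000:ac1f:8001
print(addr.compressed)  # 2001:db8:0:85a3::ac1f:8001

# Number of distinct 128-bit addresses:
print(2 ** 128)         # 340282366920938463463374607431768211456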

Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations. History: The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before sophisticated computing equipment was created. Blaise Pascal designed and constructed the first working mechanical calculator, Pascal's calculator, in 1642.[3] In 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner.[4] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system.

Plain text: (Figure: a text file of The Human Side of Animals by Royal Dixon, displayed by the command cat in an xterm window.) The encoding has traditionally been either ASCII or, occasionally, EBCDIC. Unicode-based encodings such as UTF-8 and UTF-16 are gradually replacing the older ASCII derivatives limited to 7- or 8-bit codes. Plain text and rich text: Files that contain markup or other metadata are generally considered plain text, as long as the entirety remains in directly human-readable form, as in HTML, XML, and so on (as Coombs, Renear, and DeRose argue,[1] punctuation is itself markup). The use of plain text rather than bit streams to express markup enables files to survive much better "in the wild", in part by making them largely immune to computer architecture incompatibilities. According to The Unicode Standard, "Plain text is a pure sequence of character codes; plain Unicode-encoded text is therefore a sequence of Unicode character codes."
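
A brief illustrative sketch (not part of the excerpt) showing how the same plain-text string is represented under different encodings in Python:

text = "Hello, café"

# ASCII cannot represent 'é'; here it is replaced rather than raising an error.
print(text.encode("ascii", errors="replace"))  # b'Hello, caf?'
print(text.encode("utf-8"))                    # b'Hello, caf\xc3\xa9'
print(len(text.encode("utf-8")))               # 12 bytes
print(len(text.encode("utf-16-le")))           # 22 bytes (two bytes per code unit)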

HTML, or HyperText Markup Language, is the standard markup language used to create web pages. HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets (like <html>). HTML tags most commonly come in pairs like <h1> and </h1>, although some tags represent empty elements and so are unpaired, for example <img>. The first tag in a pair is the start tag, and the second tag is the end tag (they are also called opening tags and closing tags). The purpose of a web browser is to read HTML documents and compose them into visible or audible web pages. Web browsers can also refer to Cascading Style Sheets (CSS) to define the look and layout of text and other material. History: (Figure: the historic logo made by the W3C.) Development: In 1980, physicist Tim Berners-Lee, who was a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. Further development under the auspices of the IETF was stalled by competing interests.
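
An illustrative sketch (not in the excerpt) using Python's built-in html.parser to show paired start/end tags and an unpaired (void) element such as <img>:

from html.parser import HTMLParser

class TagLogger(HTMLParser):
    # Print each start and end tag as it is encountered.
    def handle_starttag(self, tag, attrs):
        print("start:", tag, dict(attrs))
    def handle_endtag(self, tag):
        print("end:  ", tag)

# <h1> is paired with </h1>; <img> is an empty element with no end tag.
TagLogger().feed("<html><h1>Hello</h1><img src='logo.png'></html>")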

A fully qualified domain name (FQDN), sometimes also referred to as an absolute domain name,[1] is a domain name that specifies its exact location in the tree hierarchy of the Domain Name System (DNS). It specifies all domain levels, including the top-level domain and the root zone.[2] A fully qualified domain name is distinguished by its lack of ambiguity: it can be interpreted only in one way. The DNS root domain is unnamed, which is expressed by the empty label, resulting in a fully qualified domain name ending with the full stop (period) character. In contrast to a domain name that is fully specified, a domain name that does not include the full path of labels up to the DNS root is often called a partially qualified domain name. Syntax: A fully qualified domain name consists of a list of domain labels representing the hierarchy from the lowest relevant level in the DNS to the top-level domain (TLD). The DNS root is unnamed, expressed as the empty label terminated by the dot.
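
A tiny illustrative sketch (not from the excerpt): the trailing dot is the empty root label, so splitting on dots exposes the full hierarchy of labels; the name used here is only an example.

fqdn = "www.example.com."            # trailing dot = empty root label
labels = fqdn.split(".")
print(labels)                         # ['www', 'example', 'com', '']
print("fully qualified:", fqdn.endswith("."))
print(" -> ".join(labels[:-1]))       # www -> example -> com (lowest level to TLD)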

Category: Image processing. Image processing is the application of signal processing techniques to the domain of images, that is, two-dimensional signals such as photographs or video. Image processing typically involves filtering an image using various types of filters. Related categories: computer vision and imaging.

Uniform Resource Locator: web address to a particular file or page. A uniform resource locator (URL), colloquially known as an address on the Web, is a reference to a resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI),[2] although many people use the two terms interchangeably.[a] URLs occur most commonly to reference web pages (HTTP/HTTPS) but are also used for file transfer (FTP), email (mailto), database access (JDBC), and many other applications. Most web browsers display the URL of a web page above the page in an address bar. A typical URL could have the form http://www.example.com/index.html, which indicates a protocol (http), a hostname (www.example.com), and a file name (index.html). History: Early WorldWideWeb collaborators including Berners-Lee originally proposed the use of UDIs: Universal Document Identifiers. Syntax: Every HTTP URL conforms to the syntax of a generic URI.
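
A brief illustrative sketch (not part of the excerpt) using Python's standard urllib.parse to split the typical URL above into the components just described:

from urllib.parse import urlsplit

parts = urlsplit("http://www.example.com/index.html")
print(parts.scheme)   # http             (protocol)
print(parts.netloc)   # www.example.com  (hostname)
print(parts.path)     # /index.html      (path to the file)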

Data (computing): In an alternate usage, binary files (which are not human-readable) are sometimes called "data" as distinguished from human-readable "text".[4] The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (= 281 exabytes).[5][6] At its heart, a single datum is a value stored at a specific location. To store data bytes in a file, they have to be serialized in a "file format". Typically, programs are stored in special file types, different from those used for other data. Keys in data provide the context for values. Computer main memory, or RAM, is arranged as an array of "sets of electronic on/off switches", or locations, beginning at 0. Data has some inherent features when it is sorted on a key. Retrieving a small subset of data from a much larger set implies searching through the data sequentially unless an index or key-based structure is available. The advent of databases introduced a further layer of abstraction for persistent data storage.
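
A small illustrative sketch (not from the excerpt): key/value data serialized into a file format (JSON here) and retrieved by key rather than by scanning:

import json

# Keys give the values their context.
record = {"id": 42, "name": "example", "size_bytes": 1024}

# Serialize to a file format so the bytes can be stored in a file.
serialized = json.dumps(record)
print(serialized)           # {"id": 42, "name": "example", "size_bytes": 1024}

# Deserialize and retrieve a value directly by its key.
restored = json.loads(serialized)
print(restored["name"])     # example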

Website: set of related web pages served from a single domain. A website (also written as a web site) is a collection of web pages and related content that is identified by a common domain name and published on at least one web server. Websites are typically dedicated to a particular topic or purpose, such as news, education, commerce, entertainment or social networking. Hyperlinking between web pages guides the navigation of the site, which often starts with a home page. As of December 2022, the top five most visited websites were Google Search, YouTube, Facebook, Twitter, and Instagram. History: While "web site" was the original spelling (sometimes capitalized "Web site", since "Web" is a proper noun when referring to the World Wide Web), this variant has become rarely used, and "website" has become the standard spelling. Static website: A static website is one that has web pages stored on the server in the format that is sent to a client web browser.

The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity.[1] Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.[2] The first use of the term "singularity" in this context was by mathematician John von Neumann. Proponents of the singularity typically postulate an "intelligence explosion",[5][6] in which superintelligences design successive generations of increasingly powerful minds; this process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.
