
Computing


List of HTTP header fields. HTTP header fields are components of the message header of requests and responses in the Hypertext Transfer Protocol (HTTP). They define the operating parameters of an HTTP transaction. Field names: A core set of fields is standardized by the Internet Engineering Task Force (IETF) in RFC 2616 and other updates and extension documents (e.g., RFC 4229), and must be implemented by all HTTP-compliant protocol implementations.

Additional field names and permissible values may be defined by each application. The permanent registry of headers and the repository of provisional registrations are maintained by the IANA. Non-standard header fields were conventionally marked by prefixing the field name with X-.[2] However, this convention was deprecated in June 2012 because of the inconveniences it caused when non-standard headers became standard.[3] A prior restriction on the use of Downgraded- has also since been lifted.[4]
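To make the distinction concrete, the sketch below builds a request carrying one standard and one application-defined header field with Python's standard library; the header name "X-Request-Source" and the URL are invented examples, not part of any registry.

```python
import urllib.request

# Build a request carrying one standard and one application-defined header.
# "X-Request-Source" is a made-up example of the legacy "X-" convention;
# since RFC 6648 (June 2012), new headers should simply use an unprefixed name.
req = urllib.request.Request("http://example.com/")
req.add_header("Accept", "text/html")          # standard field (IANA registry)
req.add_header("X-Request-Source", "demo")     # non-standard, legacy style

for name, value in req.header_items():
    print(f"{name}: {value}")
```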

List of HTTP status codes. Response codes of the Hypertext Transfer Protocol. The Internet Assigned Numbers Authority (IANA) maintains the official registry of HTTP status codes.[1] All HTTP response status codes are separated into five classes or categories. The first digit of the status code defines the class of response, while the last two digits do not have any classifying or categorization role. There are five classes defined by the standard:

1xx informational response – the request was received, continuing process
2xx successful – the request was successfully received, understood, and accepted
3xx redirection – further action needs to be taken in order to complete the request
4xx client error – the request contains bad syntax or cannot be fulfilled
5xx server error – the server failed to fulfil an apparently valid request

1xx informational response: An informational response indicates that the request was received and understood. Examples: 100 Continue, 101 Switching Protocols, 102 Processing (WebDAV; RFC 2518).
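Since only the first digit classifies a status code, the category can be recovered with integer division; a minimal sketch using the class names from the list above:

```python
# Map the first digit of an HTTP status code to its class,
# as defined by the standard (only the first digit classifies).
CLASSES = {
    1: "informational response",
    2: "successful",
    3: "redirection",
    4: "client error",
    5: "server error",
}

def status_class(code: int) -> str:
    return CLASSES[code // 100]

print(status_class(100))  # informational response
print(status_class(200))  # successful
print(status_class(404))  # client error
print(status_class(503))  # server error
```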

Uniform Resource Locator. Description: URLs are a subset of Uniform Resource Identifiers (URIs), unique identifiers for access to a resource. The syntax of a URL is described in RFC 3986.[3] Beyond web addresses, a URL can refer to other kinds of resources under other schemes. Web address: URLs are a creation of the World Wide Web and are used to identify web pages and websites.
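To make the RFC 3986 structure concrete, Python's standard urllib can split a URL into the generic components the RFC defines (scheme, authority, path, query, fragment); the URL below is an illustrative example only:

```python
from urllib.parse import urlsplit

# Split an example URL into the generic components defined by RFC 3986.
url = "https://example.org:8042/over/there?name=ferret#nose"
parts = urlsplit(url)

print(parts.scheme)    # https
print(parts.netloc)    # example.org:8042  (the authority)
print(parts.path)      # /over/there
print(parts.query)     # name=ferret
print(parts.fragment)  # nose
```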

They are also called, by metonymy, web addresses. The article on web addresses covers the identity of websites and the technical, economic, and legal aspects attached to them, as well as the different French translations of the acronym URL. This article describes URLs as a technical standard: all the forms they can take, notably for pointing to resources outside the Web, as well as their main technical uses.

Website. Set of related web pages served from a single domain. A website (also written as a web site) is a collection of web pages and related content that is identified by a common domain name and published on at least one web server.

Websites are typically dedicated to a particular topic or purpose, such as news, education, commerce, entertainment or social networking. Hyperlinking between web pages guides the navigation of the site, which often starts with a home page. As of December 2022, the five most visited websites were Google Search, YouTube, Facebook, Twitter, and Instagram. History: While "web site" was the original spelling (sometimes capitalized "Web site", since "Web" is a proper noun when referring to the World Wide Web), this variant has become rarely used, and "website" has become the standard spelling.

Static website: A static website is one that has web pages stored on the server in the format that is sent to a client web browser.

IPv6. IPv6 (Internet Protocol version 6) is a connectionless network protocol at layer 3 of the OSI (Open Systems Interconnection) model. IPv6 is the outcome of work carried out within the IETF during the 1990s to succeed IPv4; its specifications were finalized in RFC 2460[1] in December 1998, and IPv6 was standardized in RFC 8200[2] in July 2017. Thanks to 128-bit addresses instead of 32-bit ones, IPv6 has a far larger address space than IPv4 (more than 340 undecillion addresses, about 3.4 × 10^38, which is nearly 7.9 × 10^28 times as many as its predecessor).
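The address-space figures follow directly from the address widths; a quick check in Python, using the standard ipaddress module to show the eight-group hexadecimal notation (the address itself is the documentation prefix, used here only as an example):

```python
import ipaddress

ipv4_space = 2 ** 32    # 32-bit addresses
ipv6_space = 2 ** 128   # 128-bit addresses

print(ipv6_space)               # 340282366920938463463374607431768211456 (~3.4e38)
print(ipv6_space / ipv4_space)  # ~7.9e28 times more addresses than IPv4

# 128-bit addresses are written as eight groups of 16 bits in hexadecimal:
addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.exploded)            # 2001:0db8:0000:0000:0000:0000:0000:0001
```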

This considerable number of addresses allows greater flexibility in allocating addresses and better route aggregation in the Internet routing table. Network address translation, which was popularized by the shortage of IPv4 addresses, is no longer necessary. To get a sense of the scale, one could give an address to every grain of sand on Earth, several times over.[3]

Uniform Resource Locator. Web address to a particular file or page. A Uniform Resource Locator (URL), colloquially termed a web address, is a reference to a web resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI),[2] although many people use the two terms interchangeably.[a]

URLs occur most commonly to reference web pages (http) but are also used for file transfer (ftp), email (mailto), database access (JDBC), and many other applications. Most web browsers display the URL of a web page above the page in an address bar. History: Berners-Lee later expressed regret at the use of dots to separate the parts of the domain name within URIs, wishing he had used slashes throughout, and also said that, given the colon following the first component of a URI, the two slashes before the domain name were unnecessary. An early (1993) draft of the HTML Specification[11] referred to "Universal" Resource Locators.

Fully qualified domain name. A fully qualified domain name (FQDN), sometimes also referred to as an absolute domain name,[1] is a domain name that specifies its exact location in the tree hierarchy of the Domain Name System (DNS). It specifies all domain levels, including the top-level domain and the root zone.[2] A fully qualified domain name is distinguished by its lack of ambiguity: it can be interpreted only in one way.
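The difference between a fully qualified name and a partially qualified one can be sketched with plain string handling; the host name and the search-domain behaviour below are illustrative assumptions, not a real resolver implementation:

```python
# A fully qualified domain name spells out every label up to the root;
# the trailing dot stands for the unnamed root zone (explained below).
fqdn = "en.wikipedia.org."          # absolute: ends in the root label
partial = "en"                      # partially qualified: needs a suffix

def qualify(name: str, search_domain: str = "wikipedia.org.") -> str:
    """Return an absolute name, appending an (illustrative) search
    domain to partially qualified names, as a resolver might."""
    if name.endswith("."):
        return name                 # already fully qualified
    return f"{name}.{search_domain}"

print(qualify(fqdn))     # en.wikipedia.org.
print(qualify(partial))  # en.wikipedia.org.

# The label hierarchy, read from the root down:
labels = fqdn.rstrip(".").split(".")
print(list(reversed(labels)))  # ['org', 'wikipedia', 'en']
```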

The DNS root domain is unnamed, which is expressed by the empty label, resulting in a fully qualified domain name ending with the full stop (period) character. In contrast to a fully specified domain name, a domain name that does not include the full path of labels up to the DNS root is often called a partially qualified domain name. Syntax: A fully qualified domain name consists of a list of domain labels representing the hierarchy from the lowest relevant level in the DNS to the top-level domain (TLD). The DNS root is unnamed, expressed as the empty label terminated by the dot.

Plain text. (Image caption: a text file of The Human Side of Animals by Royal Dixon, displayed by the command cat in an xterm window.) The encoding has traditionally been either ASCII or, sometimes, EBCDIC. Unicode-based encodings such as UTF-8 and UTF-16 are gradually replacing the older ASCII derivatives limited to 7- or 8-bit codes. Plain text and rich text: Files that contain markup or other metadata are generally considered plain text, as long as the entirety remains in directly human-readable form, as in HTML, XML, and so on (as Coombs, Renear, and DeRose argue,[1] punctuation is itself markup).
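The shift from 7-bit ASCII to Unicode encodings is easy to see at the byte level; a minimal sketch with invented sample text:

```python
text = "naïve café"

# UTF-8 can represent the full Unicode repertoire; non-ASCII characters
# simply take more than one byte each.
utf8_bytes = text.encode("utf-8")
print(utf8_bytes)        # b'na\xc3\xafve caf\xc3\xa9'

# ASCII is limited to 7-bit codes, so the same text cannot be encoded.
try:
    text.encode("ascii")
except UnicodeEncodeError as exc:
    print("ASCII cannot encode:", exc.reason)

# Either way, plain text is just a sequence of character codes:
print([ord(ch) for ch in "cat"])  # [99, 97, 116]
```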

According to The Unicode Standard, "Plain text is a pure sequence of character codes; plain Unicode-encoded text is therefore a sequence of Unicode character codes." Rich text formats such as SGML, RTF, HTML, XML, and TeX rely on plain text. According to The Unicode Standard, plain text has two main properties in regard to rich text: "plain text is the underlying content stream to which formatting can be applied."

HTTP 404. The web site hosting server will typically generate a "404 Not Found" web page when a user attempts to follow a broken or dead link; hence the 404 error is one of the most recognizable errors encountered on the World Wide Web. Overview: When communicating via HTTP, a server is required to respond to a request, such as a web browser request for a web page, with a numeric response code and an optional, mandatory, or disallowed (based upon the status code) message. In the code 404, the first digit indicates a client error, such as a mistyped Uniform Resource Locator (URL), and the following two digits indicate the specific error encountered.
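The pairing of the numeric code with its reason phrase can be seen in a minimal server sketch using Python's standard http.server; the port and the always-404 behaviour are arbitrary choices for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # send_error() emits the status line code plus reason phrase
        # ("404 Not Found") and a simple human-readable error page.
        self.send_error(404, "Not Found")

if __name__ == "__main__":
    # Serve on an arbitrary local port; every GET will answer 404.
    HTTPServer(("127.0.0.1", 8080), DemoHandler).serve_forever()
```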

At the HTTP level, a 404 response code is followed by a human-readable "reason phrase". A 404 error is often returned when pages have been moved or deleted. 404 errors should not be confused with DNS errors, which appear when the given URL refers to a server name that does not exist.

Computer science. Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations. History: The earliest foundations of what would become computer science predate the invention of the modern digital computer.

Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before sophisticated computing equipment was created. The ancient Sanskrit treatise Shulba Sutras, or "Rules of the Chord", is a book of algorithms written in 800 BCE for constructing geometric objects like altars using a peg and chord, an early precursor of the modern field of computational geometry.

Time has seen significant improvements in the usability and effectiveness of computing technology.

Data mining. Process of extracting and discovering patterns in large data sets. Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.[1] Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use.[1][2][3][4] Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.[5] Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.[1] Background: The manual extraction of patterns from data has occurred for centuries.
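As a toy illustration of pattern extraction (not any specific KDD algorithm), the sketch below counts which pairs of items co-occur across transactions, a much-simplified version of frequent-itemset mining; the transaction data are invented:

```python
from collections import Counter
from itertools import combinations

# Toy transaction data; a real pipeline would add the surrounding steps
# listed above (pre-processing, interestingness metrics, post-processing).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequently co-occurring pairs are the "discovered patterns".
print(pair_counts.most_common(3))
# e.g. [(('bread', 'milk'), 2), (('bread', 'butter'), 2), (('butter', 'milk'), 2)]
```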

Machine learning. Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] Machine learning explores the construction and study of algorithms that can learn from and make predictions on data.[2] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[3]:2 rather than following strictly static program instructions. Machine learning is closely related to, and often overlaps with, computational statistics, a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which delivers methods, theory, and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR),[4] search engines and computer vision.
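As a minimal sketch of "building a model from example inputs", the code below fits a least-squares line to a few points and then predicts an unseen value; the data are invented and the method (ordinary least squares) is just one simple choice of model:

```python
# Fit y = a*x + b to example data by ordinary least squares,
# then use the learned model to predict an unseen input.
xs = [1.0, 2.0, 3.0, 4.0]          # example inputs
ys = [2.1, 3.9, 6.2, 8.0]          # observed outputs (invented data)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"learned model: y = {a:.2f}*x + {b:.2f}")
print(f"prediction for x=5: {a * 5 + b:.2f}")   # data-driven, not hand-coded
```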

Internet. (Image caption: U.S. Army soldiers "surfing the Internet" at Forward Operating Base Yusifiyah, Iraq.) The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (TCP/IP) to link several billion devices worldwide. It is a network of networks[1] that consists of millions of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies.

The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), the infrastructure to support email, and peer-to-peer networks for file sharing and telephony. Most traditional communications media, including telephony and television, are being reshaped or redefined by the Internet, giving rise to new services such as voice over Internet Protocol (VoIP) and Internet Protocol television (IPTV).

Metasearch engine. A metasearch engine is a search tool[1][2] that sends user requests to several other search engines and/or databases and aggregates the results into a single list or displays them according to their source. Metasearch engines enable users to enter search criteria once and access several search engines simultaneously. They operate on the premise that the Web is too large for any one search engine to index it all and that more comprehensive search results can be obtained by combining the results from several search engines. This may also save the user from having to use multiple search engines separately, and the process of fusion improves the search results.[3]
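This fan-out-and-fuse pattern can be sketched in a few lines; the "engine" functions and their results below are placeholders standing in for real search back ends, and the fusion rule (summed reciprocal rank) is one simple choice among many:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Placeholder back ends: each returns a ranked list of result URLs.
# A real metasearch engine would query live search services here.
def engine_a(query):
    return ["https://a.example/1", "https://shared.example", "https://a.example/2"]

def engine_b(query):
    return ["https://shared.example", "https://b.example/1"]

def metasearch(query, engines):
    # Fan the query out to every engine concurrently.
    with ThreadPoolExecutor() as pool:
        ranked_lists = list(pool.map(lambda e: e(query), engines))

    # Fuse: score each URL by summed reciprocal rank across engines,
    # so results confirmed by several sources rise to the top.
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] += 1.0 / rank

    return sorted(scores, key=scores.get, reverse=True)

print(metasearch("example query", [engine_a, engine_b]))
# ['https://shared.example', 'https://a.example/1', ...]
```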

The term "metasearch" is frequently used to classify a set of commercial search engines (see the list of metasearch engines), but it is also used to describe the paradigm of searching multiple data sources in real time. Operation: (Image caption: architecture of a metasearch engine.)

List of wikis. This page contains a list of notable websites that use a wiki model. These websites sometimes use different software in order to provide the best content management system for their users' needs, but they all share the same basic editing and viewing website model.
