
BitTorrent
Programmer Bram Cohen, a former University at Buffalo graduate student in computer science,[4] designed the protocol in April 2001, released the first available version on July 2, 2001,[5] and released the final version in 2008.[6] BitTorrent clients are available for a variety of computing platforms and operating systems, including an official client released by BitTorrent, Inc. As of 2009, BitTorrent reportedly had about as many active users online as viewers of YouTube and Facebook combined.[7][8] As of January 2012, BitTorrent was used by 150 million active users (according to BitTorrent, Inc.); based on this figure, the total number of monthly BitTorrent users can be estimated at more than a quarter of a billion.[9]
(Illustration: the middle computer acts as a seed, providing a file to the other computers, which act as peers.) The file being distributed is divided into segments called pieces. When a peer completely downloads a file, it becomes an additional seed.
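To make the piece mechanism concrete, here is a minimal sketch of how a client could split a file into fixed-size pieces and record a digest for each, so that downloaded pieces can be verified before the downloader itself starts seeding. The 256 KiB piece size and the use of SHA-1 are common BitTorrent conventions assumed here, not details taken from the text above.

```python
import hashlib

PIECE_SIZE = 256 * 1024  # assumed piece size; real torrents pick a power-of-two size


def piece_hashes(path, piece_size=PIECE_SIZE):
    """Split a file into fixed-size pieces and return one SHA-1 digest per piece."""
    digests = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(piece_size)
            if not piece:
                break
            digests.append(hashlib.sha1(piece).hexdigest())
    return digests

# A downloader can hash each piece it receives from a peer and compare it
# against this list, accepting the piece only if the digests match.
```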

eDonkey network
The eDonkey network (also known as the eDonkey2000 network or eD2k) is a decentralized, mostly server-based, peer-to-peer file sharing network best suited to sharing large files among users and to providing long-term availability of files. Like most sharing networks, it is decentralized: there is no central hub for the network, and files are not stored on a central server but are exchanged directly between users on the peer-to-peer principle. The eD2k network is currently not supported by any organization (in the past it was supported by its creator, the MetaMachine Corporation, which is now out of business); development and maintenance are provided entirely by its community and client developers. Many programs act as the client part of the network. The original eD2k protocol has been extended by subsequent releases of both the eserver and eMule programs, which generally work together to decide what new features the eD2k protocol should support.

Cloud computing (Chmura obliczeniowa)
(Diagram: the "cloud".) The idea behind cloud computing is to shift the entire burden of providing IT services (data, software, or computing power) onto a server and to give client computers constant access to it. As a result, the security of those services does not depend on what happens to the client computer, and the speed of processing follows from the server's computing power. Logging in from any computer with Internet access is enough to start using the benefits of cloud computing. The notion of the cloud is not unambiguous: in the broad sense, everything processed outside the firewall counts as cloud computing, including conventional outsourcing.[2] Several types of clouds and cloud computing models are distinguished, and today more and more new functionality is delivered in the cloud computing model. On colocation, see the separate article on IPaaS.

File Transfer Protocol
FTP is built on a client-server architecture and uses separate control and data connections between the client and the server.[1] FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that hides (encrypts) the username and password and encrypts the content, FTP is often secured with SSL/TLS ("FTPS"). The SSH File Transfer Protocol ("SFTP") is sometimes used instead, but it is technologically different. The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as RFC 114 on 16 April 1971. (Illustration: starting a passive connection using port 21.) The server responds over the control connection with three-digit status codes in ASCII, optionally followed by a text message. ASCII mode is used for text transfers.
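As a concrete illustration of the control connection, the three-digit status codes, and passive mode mentioned above, the short sketch below uses Python's standard ftplib module to log in anonymously and list a directory. The host ftp.example.org is a placeholder, and the exact reply strings depend on the server.

```python
from ftplib import FTP

# Placeholder host; replace with a real FTP server that allows anonymous access.
with FTP("ftp.example.org") as ftp:
    print(ftp.login())        # anonymous login; server replies e.g. "230 Login successful."
    print(ftp.getwelcome())   # the "220 ..." banner sent when the control connection opened
    ftp.set_pasv(True)        # passive mode: the client opens the data connection
    for name in ftp.nlst():   # NLST output travels over a separate data connection
        print(name)
```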

Cloud computing (Computación en la nube)
Cloud computing,[1] known in Spanish as computación en la nube, servicios en la nube, informática en la nube, nube de cómputo, or simply «la nube» ("the cloud"), is a paradigm that allows computing services to be offered over a network, usually the Internet. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term "cloud computing" is often associated with cost reduction, fewer vulnerabilities, and guaranteed availability. Cloud computing is a new model for delivering technology services that undoubtedly has an impact on a wide range of businesses. Since the 1960s, cloud computing has developed along a number of lines.

Gnutella2
In November 2002, Michael Stokes announced the Gnutella2 protocol to the Gnutella Developers Forum. Some considered the stated goals of Gnutella2 (primarily to make a clean break with the gnutella 0.6 protocol and start over, so that some of gnutella's less clean parts could be done more elegantly) impressive and desirable, but other developers, primarily those of LimeWire and BearShare, thought it a "cheap publicity stunt" and discounted its technical merits. Many still refuse to refer to the network as "Gnutella2" and instead call it "Mike's Protocol" ("MP").[2] With the developers entrenched in their positions, a flame war soon erupted, further cementing both sides' resolve.[3][4][5][6] The draft specifications were released on March 26, 2003, and more detailed specifications soon followed. In its design, Gnutella2 relies extensively on UDP, rather than TCP, for searches.

Cloud Computing Deployment Models (NIST taxonomy) » Realcloud Project
In the post of 20-Feb-2011 we looked at NIST's definition of the concept of Cloud Computing, and the following day (21-Feb-2011) we analyzed the three "Service Models" (or levels) defined by NIST, namely SaaS, PaaS, and IaaS, since both the definition and the classification are the most widely accepted today. NIST also distinguishes four "Deployment Models" (Private, Community, Public, and Hybrid), which introduces a slight difference (the community model) with respect to the rest of the current literature on the subject (and which, in my opinion, is important to consider in a country like Spain, where cooperativism has strong roots): Public cloud: the cloud infrastructure is made available to the general public (or to a subset, according to the provider's sales criteria). The infrastructure belongs to the organization that sells its Cloud Computing services.

Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems.[1] HTTP is the foundation of data communication for the World Wide Web. The standards development of HTTP was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs), most notably RFC 2616 (June 1999), which defined HTTP/1.1, the version of HTTP most commonly used today. In June 2014, RFC 2616 was retired and HTTP/1.1 was redefined by RFCs 7230, 7231, 7232, 7233, 7234, and 7235.[2] HTTP/2 is currently in draft form. (Illustration: a URL beginning with the HTTP scheme and the WWW domain name label.) A web browser is an example of a user agent (UA). HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. The first documented version of HTTP was HTTP V0.9 (1991).
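To show the request/response exchange in practice, the sketch below sends a plain HTTP/1.1 GET request with Python's standard http.client module and prints the status code and headers the server returns; example.com is only a placeholder host.

```python
import http.client

# Placeholder host; any server speaking HTTP/1.1 will do.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/", headers={"User-Agent": "demo-client"})

resp = conn.getresponse()
print(resp.status, resp.reason)      # three-digit status code plus reason, e.g. "200 OK"
for name, value in resp.getheaders():
    print(f"{name}: {value}")        # response headers
body = resp.read()                   # the requested resource itself, as bytes
conn.close()
```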

Gnutella
Gnutella (/nʌˈtɛlə/ with a silent g, but often /ɡnʌˈtɛlə/), possibly named by analogy with the GNU Project, is a large peer-to-peer network. It was the first decentralized peer-to-peer network of its kind, leading other, later networks to adopt the model.[1] It celebrated a decade of existence on March 14, 2010, and has a user base in the millions for peer-to-peer file sharing. In June 2005, gnutella's population was 1.81 million computers,[2] increasing to over three million nodes by January 2006.[3] In late 2007, it was the most popular file sharing network on the Internet, with an estimated market share of more than 40%.[4] The first client was developed by Justin Frankel, Gianluca Rubinacci and Tom Pepper of Nullsoft in early 2000, soon after the company's acquisition by AOL. The day after its release, AOL stopped the availability of the program over legal concerns and restrained Nullsoft from doing any further work on the project.

HTTP Secure
Hypertext Transfer Protocol Secure (HTTPS) is a communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of layering the Hypertext Transfer Protocol (HTTP) on top of the SSL/TLS protocol, thus adding the security capabilities of SSL/TLS to standard HTTP communications. The security of HTTPS is therefore that of the underlying TLS, which uses long-term public and secret keys to exchange a short-term session key that encrypts the data flow between client and server. An important property in this context is perfect forward secrecy (PFS), which ensures that the short-term session key cannot be derived from the long-term asymmetric secret key; however, PFS is not widely adopted.[1] To guarantee that one is talking to the partner one wants to talk to, X.509 certificates are used. HTTPS thus creates a secure channel over an insecure network.
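The layering described above can be demonstrated directly: the sketch below first negotiates a TLS session with Python's ssl module (verifying the server's X.509 certificate chain and host name against the system trust store) and then sends an ordinary HTTP request through the encrypted channel; example.com is a placeholder host.

```python
import socket
import ssl

host = "example.com"                      # placeholder host
context = ssl.create_default_context()    # verifies the X.509 certificate chain and hostname

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print(tls_sock.version())   # negotiated protocol version, e.g. "TLSv1.3"
        print(tls_sock.cipher())    # negotiated cipher suite
        # Plain HTTP, now carried inside the encrypted TLS channel:
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(4096).decode("latin-1").split("\r\n")[0])  # status line
```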

Magnet URI scheme
The Magnet URI scheme defines the format of magnet links, a de facto standard for identifying files by their content, via a cryptographic hash value, rather than by their location. Although magnet links can be used in a number of contexts, they are particularly useful in peer-to-peer file sharing networks because they allow resources to be referred to without the need for a continuously available host, and can be generated by anyone who already has the file, without the need for a central authority to issue them. This makes them popular for use as "guaranteed" search terms within the file sharing community, where anyone can distribute a magnet link to ensure that the resource retrieved by that link is the one intended, regardless of how it is retrieved. Magnet URIs consist of a series of one or more parameters, the order of which is not significant, formatted in the same way as the query strings that ordinarily terminate HTTP URLs; a magnet link therefore begins with magnet:? followed by its parameters.
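Because the parameters are formatted like an HTTP query string, standard URL tools can take a magnet link apart. The sketch below does so with Python's urllib.parse; the hash value in the example is a made-up placeholder, and the xt (exact topic), dn (display name), and tr (tracker) parameter names are the commonly used ones rather than anything defined in the text above.

```python
from urllib.parse import urlparse, parse_qs

# Placeholder link: the btih value below is not a real hash.
link = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
        "&dn=example-file.iso"
        "&tr=udp%3A%2F%2Ftracker.example.org%3A6969")

parsed = urlparse(link)
assert parsed.scheme == "magnet"

params = parse_qs(parsed.query)   # parameter order is not significant
print(params["xt"])               # content identifier (URN carrying the hash)
print(params["dn"])               # suggested display name
print(params["tr"])               # tracker URL(s); the parameter may appear more than once
```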

ed2k URI scheme
In computing, eD2k links (ed2k://) are hyperlinks used to denote files stored on computers connected to the eDonkey file-sharing P2P network. Many programs, such as eMule, MLDonkey and the original eDonkey2000 client by MetaMachine (which introduced the link type), as well as others using the eDonkey file sharing protocol, can be used to manage files stored in the file-sharing network. eD2k links allow a file to be identified from a link in a web browser and to be downloaded thereafter by a client like eMule, Shareaza or any other compatible software. This linking feature was one of the first URIs to be introduced in peer-to-peer file sharing, and it had a vast effect on the development of the eDonkey network, as it allowed external link sites to provide verified content within the network. Nowadays, so-called magnet links have replaced eD2k links in practice. Like other URI protocols, web browsers can be configured to handle ed2k URIs automatically.
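As a rough sketch of the file link format, the code below extracts the file name, size, and hash fields from an eD2k link; the hash shown is a made-up placeholder, and the |file|name|size|hash| layout reflects the commonly documented form of these links rather than anything specified above.

```python
# Placeholder link; the 32-character hash below is not a real eD2k hash.
link = "ed2k://|file|example-file.iso|1048576|0123456789ABCDEF0123456789ABCDEF|/"


def parse_ed2k(uri):
    """Split an ed2k://|file|name|size|hash|/ link into its fields."""
    if not uri.startswith("ed2k://"):
        raise ValueError("not an eD2k link")
    fields = uri[len("ed2k://"):].strip("/").split("|")
    # fields is ['', 'file', name, size, hash, ''] after stripping the trailing slash
    kind, name, size, file_hash = fields[1], fields[2], fields[3], fields[4]
    return kind, name, int(size), file_hash


print(parse_ed2k(link))   # ('file', 'example-file.iso', 1048576, '0123...DEF')
```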

Shareaza
Shareaza was developed by Michael Stokes until June 1, 2004,[5] and was later maintained by a group of volunteers. On June 1, 2004, Shareaza 2.0 was released, along with the source code, under the GNU General Public License (GPL), making it free software. Shareaza v2.7.4.0 was released on March 30, 2014. In 2010 Shareaza ranked 12th in the SourceForge all-time download statistics.[6] Day-by-day download statistics are available;[7] as of May 2013, Shareaza was downloaded about 12,000 times a week. The Shareaza client has some basic content filters, including a forced child-pornography filter and an optional adult-pornography filter, as well as other optional filters such as one for files encumbered with digital rights management (DRM). (Screenshot: Shareaza running in windowed mode with several activated skins.) The skinning feature is also used for localization. Shareaza contains three user modes.
