Metadata

A typical example is associating with a piece of data the date on which it was produced or recorded, or associating with a photo the GPS coordinates of the place where it was taken.

History

All institutions that manage information (libraries, archives, media libraries) already have long experience in codifying the description or the contents of the documents they handle. These descriptions were later computerized in the form of bibliographic records and standardized (see, for example, the MARC formats of 1964, which use the ISO 2709 standard, whose design began in 1960). Digital libraries adopted the same mechanisms to manage and locate electronic documents. The term metadata appeared in the context of describing resources on the Internet in the 1990s and subsequently became widespread.

Generalization
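The article's opening example (a capture date and GPS coordinates attached to a photo) can be sketched as a minimal record pairing the data itself with its metadata. The field names here are illustrative assumptions, not a formal metadata standard such as EXIF or Dublin Core.

```python
from datetime import datetime, timezone

# A photo's raw content paired with descriptive metadata, mirroring the
# article's example. All field names are invented for illustration.
photo = {
    "pixels": b"...",  # the data itself (elided)
    "metadata": {
        # date the data was produced/recorded
        "captured_at": datetime(2013, 7, 14, 9, 30, tzinfo=timezone.utc).isoformat(),
        # GPS coordinates of where the photo was taken
        "gps": {"lat": 48.8584, "lon": 2.2945},
    },
}

print(photo["metadata"]["captured_at"])
print(photo["metadata"]["gps"])
```

The point of the structure is that the metadata describes the data without being part of it: a consumer can query the capture date or location without ever decoding the pixels.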
MIT researchers measure your pulse, detect heart abnormalities with smartphone camera Last year, a group of researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) showed us just how easy it is to “see” a human heartbeat in ordinary video footage. With a little filtering, a little averaging, and a touch of turn-of-the-century (1900) mathematical analysis, the telltale color changes in the skin associated with the peak pressure pulse of the heart can be seen by anyone. The CSAIL researchers have now rejigged their algorithms to optimize instead for detection of the head motion artifact associated with each beat. The release from MIT on this work mentions that heart rate variability (HRV) — the moment-to-moment deviations from constancy — can be used to diagnose potential heart issues. Without getting too bogged down, we will just mention here that there are many ways to derive and characterize HRV. On their own, things like pulse, blood oxygenation, pupil dilation, or skin resistance are of limited use.
Raw data Raw data (also known as primary data) is data collected from a source that has not been subjected to processing or any other manipulation. Raw data is a relative term (see data). Raw data can be input to a computer program or used in manual procedures such as analyzing statistics from a survey. The term can also refer to the binary data on electronic storage devices such as hard disk drives (also referred to as low-level data). In computing, raw data may have the following attributes: it may contain errors; it is not validated; it may be in different (colloquial) formats; it may be uncoded or unformatted; and it may be suspect, requiring confirmation or citation. Raw data (sometimes called "sourcey" or "eggy" data) is the data input to processing. Although raw data has the potential to become "information", extraction, organization, and sometimes analysis and formatting for presentation are required for that to occur.
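The raw-data-to-information pipeline described above (extraction, validation, organization, analysis) can be sketched concretely. The example below assumes hypothetical raw survey responses, unvalidated strings that may contain errors, which are cleaned and then aggregated into a usable statistic.

```python
# Raw survey responses: unvalidated, in inconsistent formats, possibly erroneous.
raw_responses = ["42", " 37 ", "abc", "", "55"]

def clean(raw):
    """Validate and convert one raw response; return None for bad input."""
    s = raw.strip()
    return int(s) if s.isdigit() else None

# Extraction and validation: keep only responses that parse as integers.
valid = [v for v in (clean(r) for r in raw_responses) if v is not None]

# Analysis: the raw strings become information (an average) only after processing.
average = sum(valid) / len(valid)
print(valid, average)
```

The design point is that nothing in `raw_responses` is trusted until it has passed validation; the "information" (the average) exists only on the processed side of the pipeline.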
Ontology (computer science)

By analogy, the term has been adopted in computer science and information science, where an ontology is the structured set of terms and concepts representing the meaning of a field of information, whether through the metadata of a namespace or the elements of a domain of knowledge. An ontology is in itself a data model representing a set of concepts within a domain, together with the relationships between those concepts. It is used to reason about the objects of the domain in question. More simply, one can also say that "an ontology is to data what a grammar is to language". The primary objective of an ontology is to model a body of knowledge within a given domain, which may be real or imaginary. Computer ontologies are tools that make it possible to represent a corpus of knowledge in a form usable by a computer.

Notes
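The idea of an ontology as a structured set of concepts and relations that a computer can reason over can be sketched minimally. The domain (animals) and the single relation name (`is_a`) below are illustrative assumptions, not a standard vocabulary such as OWL or RDF Schema.

```python
# A toy ontology: concepts linked by an "is_a" (subsumption) relation.
is_a = {
    "Dog": "Mammal",
    "Cat": "Mammal",
    "Mammal": "Animal",
}

def ancestors(concept):
    """Reason over the ontology: walk the is_a hierarchy upward."""
    chain = []
    while concept in is_a:
        concept = is_a[concept]
        chain.append(concept)
    return chain

print(ancestors("Dog"))  # ['Mammal', 'Animal']
```

Even this tiny structure supports inference the raw terms alone do not: knowing only the pairwise `is_a` facts, the program can conclude that a Dog is an Animal, which is exactly the kind of reasoning about domain objects the article describes.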
Scientist-developed malware covertly jumps air gaps using inaudible sound Computer scientists have proposed a malware prototype that uses inaudible audio signals to communicate, a capability that allows the malware to covertly transmit keystrokes and other sensitive data even when infected machines have no network connection. The proof-of-concept software—or malicious trojans that adopt the same high-frequency communication methods—could prove especially adept in penetrating highly sensitive environments that routinely place an "air gap" between computers and the outside world. Using nothing more than the built-in microphones and speakers of standard computers, the researchers were able to transmit passwords and other small amounts of data from distances of almost 65 feet. The software can transfer data at much greater distances by employing an acoustical mesh network made up of attacker-controlled devices that repeat the audio signals. "This small bandwidth might actually be enough to transfer critical information (such as keystrokes)," Hanspach wrote. Update
Data type In computer science and computer programming, a data type or simply type is a classification identifying one of various types of data, such as real, integer or Boolean, that determines the possible values for that type; the operations that can be done on values of that type; the meaning of the data; and the way values of that type can be stored. Overview Data types are used within type systems, which offer various ways of defining, implementing and using them. Different type systems ensure varying degrees of type safety. Formally, a type can be defined as "any property of a program we can determine without executing the program". Almost all programming languages explicitly include the notion of data type, though different languages may use different terminology. Most data types in statistics have comparable types in computer programming, and vice versa, as shown in the following table: Definition of a "type" A "type" may be defined syntactically, by its representation, or by its representation and behaviour.
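The overview's claim, that a type fixes the possible values, the legal operations, and the storage of values, can be illustrated with Python's built-in types (chosen here purely as a convenient example language).

```python
import struct

x = 3      # int: whole numbers; supports integer division, modulo, ...
y = 2.5    # float: real numbers, with their own operations and semantics
b = True   # bool: a type with only two possible values

# The type determines the operations and their results:
assert isinstance(x // 2, int)   # integer division of ints yields an int
assert x + y == 5.5              # mixing int and float follows defined rules

# The type also determines how values are stored: a Python float packs
# into exactly 8 bytes as an IEEE 754 double.
packed = struct.pack("d", y)
print(len(packed))  # 8
```

The same division written as `x / 2` would yield a float in Python, which is precisely the sense in which the operation's meaning depends on the types involved.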
Beyond Social: Read/Write in The Era of Internet of Things This blog was founded in 2003 on the philosophy of a read/write Web - a Web in which people can create content as easily as they consume it. This trend eventually came to be known as Web 2.0 - although others preferred Social Web - and was popularized by activities like blogging and social networking. It would be easy to say that the 'social' element is still the primary part of today's Web, since the popular products of this era enable you to say what's on your mind (Facebook), what's happening (Twitter), or where you are (Foursquare). All of these are mostly social activities. But more significantly, these and other products output data that will increasingly be used to build personalized services for you. The more data there is, the better Web services will be at delivering personal value to you. How We Went Beyond Social So how did we arrive at a Web that is less about social and more about you? It's not how much content you consume that is important, it's about what you do with data.
Air gap (networking) An air gap or air wall is a network security measure that consists of ensuring that a secure computer network is physically isolated from unsecured networks, such as the public Internet or an unsecured local area network. It is often employed for computers and networks that must be extraordinarily secure. Frequently the air gap is not completely literal, such as via the use of dedicated cryptographic devices that can tunnel packets over untrusted networks while avoiding packet rate or size variation; even in this case, there is no ability for computers on opposite sides of the air gap to communicate. In environments where networks or devices are rated to handle different levels of classified information, the two (dis-)connected devices/networks are referred to as "low side" and "high side", low being unclassified and high referring to classified, or classified at a higher level. This is also occasionally referred to as red (classified) and black (unclassified).
Abstraction (computer science) Abstraction captures only those details about an object that are relevant to the current perspective; in both computing and mathematics, numbers are concepts. In programming languages, numbers can be represented in myriad ways in hardware and software, but, irrespective of how this is done, numerical operations will obey identical rules. Abstraction can apply to control or to data: control abstraction is the abstraction of actions, while data abstraction is that of data structures. Control abstraction involves the use of subprograms and related concepts of control flow. Data abstraction allows handling pieces of data in meaningful ways; for example, it is the basic motivation behind the datatype. Computing mostly operates independently of the concrete world: the hardware implements a model of computation that is interchangeable with others. A central form of abstraction in computing is language abstraction: new artificial languages are developed to express specific aspects of a system.
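Data abstraction, as described above, can be sketched as a type that exposes meaningful operations while hiding its concrete representation. The `Stack` class below is a standard textbook example, not taken from the article; the list it uses internally could be swapped for any other representation without changing what users of the type see.

```python
class Stack:
    """A stack abstract datatype: users see push/pop, not the representation."""

    def __init__(self):
        self._items = []   # representation detail, hidden behind the interface

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```

Because callers interact only with `push` and `pop`, the operations obey the same last-in, first-out rules irrespective of how the data is represented, mirroring the article's point about numbers behaving identically regardless of their hardware representation.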