Graph databases

Graph databases are based on graph theory and employ nodes, edges, and properties. Nodes represent entities such as people, businesses, accounts, or any other item to be tracked. In contrast to relational databases, graph databases directly store the relationships between records. The true value of the graph approach becomes evident when one performs searches that are more than one level deep. Properties add another layer of abstraction to this structure and also improve many common queries. Relational databases are very well suited to flat data layouts, where relationships between data are one or two levels deep; graph databases are a powerful tool for graph-like queries.

History

In the pre-history of graph databases, in the mid-1960s, navigational databases such as IBM's IMS supported tree-like structures in their hierarchical model, although the strict tree structure could be circumvented with virtual records.
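The multi-level searches described above can be sketched with a minimal property graph. This is an illustrative sketch only, not any particular graph database's API; the node names, edge properties, and the `reachable_within` helper are all hypothetical.

```python
from collections import defaultdict

# Minimal property-graph sketch: nodes and edges both carry property dicts.
nodes = {
    "alice": {"type": "person"},
    "bob":   {"type": "person"},
    "acme":  {"type": "business"},
}
edges = defaultdict(list)  # source node -> [(target node, edge properties)]

def add_edge(src, dst, **props):
    edges[src].append((dst, props))

add_edge("alice", "bob", rel="knows")
add_edge("bob", "acme", rel="works_at")

def reachable_within(start, depth):
    """Collect every node reachable in at most `depth` hops from `start`."""
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {dst for n in frontier for dst, _ in edges[n]} - seen
        seen |= frontier
    return seen - {start}

# A search more than one level deep: who/what is within two hops of alice?
print(sorted(reachable_within("alice", 2)))  # → ['acme', 'bob']
```

In a graph store this traversal follows direct record-to-record links; a relational database would typically need one join per level of depth.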
XML databases

An XML database is a data persistence software system that allows data to be stored in XML format. These data can then be queried, exported, and serialized into the desired format. XML databases are usually associated with document-oriented databases. Two major classes of XML database exist:

XML-enabled: these may either map XML to traditional database structures (such as a relational database), accepting XML as input and rendering XML as output, or, more recently, support native XML types within the traditional database.
Native XML: the internal model of such databases depends on XML, and XML documents are the fundamental unit of storage.

Rationale for XML in databases

O'Connell gives one reason for the use of XML in databases: the increasingly common use of XML for data transport, which has meant that "data is extracted from databases and put into XML documents and vice-versa". It may prove more efficient (in terms of conversion costs) and easier to store the data in XML format.

XML-enabled databases

RDBMS that support the ISO XML Type are:

Example of XML Type Query in IBM DB2 SQL
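The "XML-enabled" mapping approach can be sketched in miniature. This is not DB2 and not the ISO XML type; it is a hedged stand-in using SQLite and the Python standard library, where the relational engine stores the document as text and XML awareness lives in the application layer. The table name and document contents are hypothetical.

```python
import sqlite3
import xml.etree.ElementTree as ET

# "XML-enabled" storage sketch: the relational engine (SQLite here) keeps
# the XML document as plain text in an ordinary column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, info TEXT)")
doc = "<customer><name>O'Connell</name><city>Dublin</city></customer>"
conn.execute("INSERT INTO customers (info) VALUES (?)", (doc,))

# Querying: select the stored text, parse it, then navigate the XML tree.
(stored,) = conn.execute("SELECT info FROM customers WHERE id = 1").fetchone()
root = ET.fromstring(stored)
print(root.findtext("city"))  # → Dublin
```

A database with a true native XML type would instead push the path navigation into the SQL query itself, avoiding the round-trip through application-side parsing.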
NoSQL Frankfurt 2010 - The GraphDB Landscape and sones

Data structures

Different kinds of data structures are suited to different kinds of applications, and some are highly specialized for specific tasks. For example, B-trees are particularly well suited to the implementation of databases, while compiler implementations usually use hash tables to look up identifiers. Data structures provide a means to manage large amounts of data efficiently, for uses such as large databases and internet indexing services. Usually, efficient data structures are a key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.

Overview

Many other data structures are possible, but they tend to be further variations and compounds of the basic kinds.

Basic principles

The implementation of a data structure usually requires writing a set of procedures that create and manipulate instances of that structure.
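The compiler use case mentioned above, hash-table identifier lookup, can be sketched as a scoped symbol table. The class and its scope-stack design are illustrative assumptions, not taken from any particular compiler; Python's `dict` plays the role of the hash table.

```python
# A compiler-style symbol table backed by hash tables (Python dicts),
# giving average O(1) identifier lookup. Scopes form a stack so that an
# inner declaration shadows an outer one.
class SymbolTable:
    def __init__(self):
        self.scopes = [{}]  # the innermost scope is last

    def enter_scope(self):
        self.scopes.append({})

    def exit_scope(self):
        self.scopes.pop()

    def define(self, name, info):
        self.scopes[-1][name] = info

    def lookup(self, name):
        # Search from the innermost scope outward.
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        raise KeyError(name)

syms = SymbolTable()
syms.define("x", "global int")
syms.enter_scope()
syms.define("x", "local float")
print(syms.lookup("x"))  # → local float
syms.exit_scope()
print(syms.lookup("x"))  # → global int
```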
Social networks in the database: using a graph database

Recently Lorenzo Alberton gave a talk on Trees in the Database, where he showed the most common approaches to storing trees in a relational database. Now he has moved on to an even more interesting topic with his article "Graphs in the database: SQL meets social networks". Right from the beginning of his excellent article, Alberton puts this technical challenge in its proper context:

Graphs are ubiquitous. Social or P2P networks, thesauri, route planning systems, recommendation systems, collaborative filtering, even the World Wide Web itself is ultimately a graph! Given their importance, it's surely worth spending some time studying some algorithms and models to represent and work with them effectively.

After a brief explanation of what a graph data structure is, the article goes on to show how graphs can be represented in a table-based database. This post is going to show how the same things can be done when using a native graph database, namely Neo4j.

Representing a graph

Transitive closure
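Transitive closure, one of the classic graph-in-the-database operations, answers "who can reach whom through any number of hops". The following is a small sketch with hypothetical follower data, not code from Alberton's article or from Neo4j; it computes the closure by repeatedly joining the edge set with itself until a fixed point is reached, which mirrors what a recursive SQL query or a graph traversal would do.

```python
# Hypothetical follower edges: a follows b, b follows c, c follows d.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    """All pairs (x, w) such that w is reachable from x via one or more edges."""
    closure = set(edges)
    while True:
        # Join the current closure with itself: x->y and y->w imply x->w.
        new = {(x, w) for x, y in closure for z, w in closure if y == z}
        if new <= closure:       # fixed point: nothing new was derived
            return closure
        closure |= new

print(sorted(transitive_closure(edges)))
```

On this data the closure adds the derived pairs ("a", "c"), ("b", "d"), and ("a", "d"); in a native graph database the same answer falls out of a simple traversal from each node.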
Cloud computing

Cloud computing metaphor: for a user, the network elements representing the provider-rendered services are invisible, as if obscured by a cloud.

Cloud computing is a computing term and metaphor that evolved in the late 1990s, based on utility-style consumption of computing resources. Cloud computing involves application systems that are executed within the cloud and operated through internet-enabled devices.

Overview

Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) delivered over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services. Cloud computing, or in simpler shorthand just "the cloud", also focuses on maximizing the effectiveness of the shared resources. Cloud vendors are experiencing growth rates of 50% per annum.

History of cloud computing

Origin of the term

The origin of the term cloud computing is unclear.
Database normalization

Edgar F. Codd, the inventor of the relational model, introduced the concept of normalization and what we now know as the First Normal Form (1NF) in 1970. Codd went on to define the Second Normal Form (2NF) and Third Normal Form (3NF) in 1971, and Codd and Raymond F. Boyce defined the Boyce-Codd Normal Form (BCNF) in 1974. Informally, a relational database table is often described as "normalized" if it is in Third Normal Form; most 3NF tables are free of insertion, update, and deletion anomalies. A standard piece of database design guidance is that the designer should first create a fully normalized design; selective denormalization can then be performed for performance reasons.

Objectives

The objectives of normalization beyond 1NF (First Normal Form) were stated by Codd, and the sections below give details of each of these objectives.

Free the database of modification anomalies

A table that is not sufficiently normalized can suffer from an update anomaly, an insertion anomaly, or a deletion anomaly.
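An update anomaly can be made concrete with a small sketch. The employee and skill data below are hypothetical, and plain Python dictionaries stand in for database tables; the point is only the shape of the problem.

```python
# Denormalized "table": each skill row repeats the employee's address.
rows = [
    {"emp": "Jones", "skill": "Typing",    "address": "114 Main St"},
    {"emp": "Jones", "skill": "Shorthand", "address": "114 Main St"},
]

# Jones moves, but only one row is updated -- the update anomaly.
rows[0]["address"] = "73 Industrial Way"
addresses = {r["address"] for r in rows if r["emp"] == "Jones"}
print(len(addresses))  # → 2  (two conflicting "current" addresses)

# Normalized design: the address lives in exactly one place, so this
# inconsistency cannot arise by construction.
employees = {"Jones": {"address": "73 Industrial Way"}}
skills = [("Jones", "Typing"), ("Jones", "Shorthand")]
print(employees["Jones"]["address"])  # → 73 Industrial Way
```

Insertion and deletion anomalies have the same root cause: facts about one entity are entangled with rows that exist for a different reason.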
Access control

A sailor allows a driver to enter a military base. In the fields of physical security and information security, access control is the selective restriction of access to a place or other resource. The act of accessing may mean consuming, entering, or using. Permission to access a resource is called authorization.

Physical security

Physical security access control with a hand geometry scanner. Example of fob-based access control using an ACT reader.

Physical access control is a matter of who, where, and when. Electronic access control uses computers to overcome the limitations of mechanical locks and keys.

Access control system operation

When a credential is presented to a reader, the reader sends the credential's information, usually a number, to a control panel, a highly reliable processor. The above description illustrates a single-factor transaction. There are three types (factors) of authenticating information: something the user knows, something the user has, and something the user is.
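The reader-to-panel exchange described above can be sketched as follows. This is a hedged illustration of the decision logic only, not any real controller's protocol; the access list, card numbers, and door names are all made up.

```python
# Sketch of a single-factor access-control transaction: the reader forwards
# a credential number to the control panel, which checks an access list
# before deciding whether to release the door.
ACCESS_LIST = {
    4211: {"Gate 1"},            # card 4211 may open Gate 1
    4212: {"Gate 1", "Gate 2"},  # card 4212 may open either gate
}

def present_credential(card_number, door):
    """Control-panel decision for one reader event."""
    granted = door in ACCESS_LIST.get(card_number, set())
    # A real panel would also log the event and drive the lock relay here.
    return "unlock" if granted else "deny"

print(present_credential(4211, "Gate 1"))  # → unlock
print(present_credential(4211, "Gate 2"))  # → deny
```

A two-factor variant would require a second check, for example a PIN (something the user knows) alongside the card (something the user has), before granting access.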
Database

Database management systems (DBMSs) are specially designed software applications that interact with the user, other applications, and the database itself to capture and analyze data. A general-purpose DBMS is a software system designed to allow the definition, creation, querying, updating, and administration of databases. Well-known DBMSs include MySQL, MariaDB, PostgreSQL, SQLite, Microsoft SQL Server, Oracle, SAP HANA, dBASE, FoxPro, IBM DB2, LibreOffice Base and FileMaker Pro. A database is not generally portable across different DBMSs, but different DBMSs can interoperate by using standards such as SQL and ODBC or JDBC, allowing a single application to work with more than one database.

Terminology and overview

Formally, "database" refers to the data themselves and the supporting data structures. A "database management system" (DBMS) is a suite of computer software providing the interface between users and a database or databases.
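The interoperability point above, one application talking to different engines through a standard interface, can be sketched with Python's DB-API. SQLite is used here because its driver ships with the standard library; the table and data are hypothetical, and the claim is only that other DB-API drivers expose the same call shape.

```python
import sqlite3

# Because drivers share the DB-API interface and SQL is standardized, the
# same application code can target a different DBMS largely by swapping the
# driver module (sqlite3 here).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
conn.commit()

cur.execute("SELECT name FROM users WHERE id = ?", (1,))
print(cur.fetchone()[0])  # → Ada
```

ODBC and JDBC play the analogous role for C-family and Java applications: a standard call-level interface in front of vendor-specific drivers.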