
Going Digital - designing for the web


Graph Databases, NOSQL and Neo4j

Introduction. Of the many different data models, the relational model has been dominant since the 1980s, with implementations such as Oracle, MySQL and MSSQL, collectively known as relational database management systems (RDBMS). Lately, however, in an increasing number of cases the use of relational databases leads to problems, both because of deficits in the modeling of data and because of constraints on horizontal scalability across several servers and large amounts of data. Two trends are bringing these problems to the attention of the international software community: the exponential growth of the volume of data generated by users, systems and sensors, further accelerated by the concentration of a large part of that volume on big distributed systems such as Amazon, Google and other cloud services.

Relational databases have increasing difficulty coping with these trends. This article aims to give an overview of the position of graph databases in the NOSQL movement. The NOSQL Environment.

1. Databases: relational vs object vs graph vs document. Is the relational database dying? Why is a model affirmed and consolidated over the years now a subject of debate? Which alternatives exist, and how can they match your needs in practical situations?

Follow me in this post to understand how databases evolved and how to choose among the new technological alternatives that have already produced valuable results. Note: this post focuses on operational (OLTP) databases; I have already addressed the OLAP approach for strategic applications in several previous posts.

A little history. Let's go back to the 1970s. The hierarchical model organized data into records of different types, each having a predefined set of fields. The network model was a more flexible alternative that natively supported many-to-many relationships. As you probably know, the relational model organizes data into tables. Whatever the approach, any database system is built to guarantee data consistency under concurrent usage by means of ACID transaction support. Relational won the battle.

Object Databases. Issue & Project Tracking Software.
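The contrast between the relational and graph approaches can be sketched in a few lines of Python. Everything here is made up for illustration: the same "who knows whom" facts are stored once as tables joined by foreign keys, and once as adjacency lists, where multi-hop questions need no joins at all.

```python
# Relational style: two tables, linked by foreign keys (hypothetical data).
people = [
    {"id": 1, "name": "Alice"},
    {"id": 2, "name": "Bob"},
    {"id": 3, "name": "Carol"},
]
knows = [
    {"person_id": 1, "friend_id": 2},
    {"person_id": 2, "friend_id": 3},
]

def friends_relational(name):
    """Find direct friends via an explicit join across the two tables."""
    by_id = {p["id"]: p["name"] for p in people}
    ids = [p["id"] for p in people if p["name"] == name]
    return [by_id[k["friend_id"]] for k in knows if k["person_id"] in ids]

# Graph style: relationships are stored directly as adjacency, so a
# multi-hop traversal is a single walk over the structure.
graph = {"Alice": ["Bob"], "Bob": ["Carol"], "Carol": []}

def reachable(start):
    """Depth-first traversal: everyone reachable through 'knows' edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(friends_relational("Alice"))  # direct friends only, via a join
print(reachable("Alice"))           # transitive reach, one traversal
```

Each extra hop in the relational version would need another join; in the graph version the traversal loop already covers any depth, which is the scalability point graph databases make.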

Seven Domains of Predictability

Looking at the spectrum of different process technologies, we can identify seven distinct categories, and we can organize them according to how predictable the problem is that they address. Here is a detailed slidecast about each category. I have been kicking around this chart of seven types of process technology for a few months. In Portugal in April I presented an expanded explanation of the categories, which received a number of positive comments. I recorded the explanations and put them together with the slides, because sometimes it is easier to hear someone walk through material than to read it. This presentation is meant to clarify the spectrum of approaches available for supporting business-process-like activity. Others have attempted this, but my approach is based on comparing how predictable the particular business problem being solved is. Let's start with the overview and big picture.

On the left of this diagram is application development. An example is a doctor. Summary.

Cumulative Flow Diagrams with Google Spreadsheets

Jan 12 2010: UPDATE - a new version of the spreadsheet with bugfixes is here. Let's face it: most project stakeholders care more about time to market than about preposterous "agile" metrics such as velocity and burndown charts. Teams that use only these tools to report progress conceal information about how quickly the team can deliver new features, because velocity and burndown charts ignore the time-to-market aspect and provide no quantitative information about how to improve it.

A Cumulative Flow Diagram (CFD) is a visual tool that communicates a team's ability to deliver working software in a timely manner, showing a detailed picture of the entire process. Its primary purpose is to improve the current process, and not to predict the future (although it can be used for that too). If you know Scrum, think of a CFD as a burndown chart that goes beyond showing when work items (user stories, MMFs or even tasks) are moved to the last column on the story board.

Are you ready?

Creating and Interpreting Cumulative Flow Diagrams

Cumulative Flow Diagrams (CFDs) are valuable tools for tracking and forecasting agile projects. Today we will look at creating CFDs and using them to gain insights into project issues, cycle times, and likely completion dates. In Microsoft Excel a CFD can be created using the "Area Graph" option.

The attached file "Example CFD.XLS" contains the data used to create the CFDs in this article, including the one shown below. Figure 1 – Sample Cumulative Flow Diagram.

Interpreting CFDs. Figure 1 shows the features completed versus the features remaining for a fictional project that is still in progress. The red area represents all the planned features to be built; this number rose from 400 to 450 in June and then to 500 in August as additional features were added to the project.

Little's Law. In Donald Reinertsen's "Managing the Design Factory", published in 1997, Little's Law is introduced as a way to analyze queues from CFDs. Figure 2 – Examining Work In Progress.

40+ Web Design and Development Resources for Beginners

While I usually try to write stuff that's geared more to experienced developers, I don't want to neglect those who are just starting out.
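Little's Law ties together the three quantities a CFD makes visible: average work in progress (the vertical height of the band between the lines), throughput (the slope of the completed line), and average cycle time, with avg WIP = throughput x avg cycle time. A minimal sketch with made-up monthly data:

```python
# Hypothetical cumulative monthly CFD data: features "arrived" (planned)
# and "departed" (completed) since the start of the project.
arrived  = [100, 180, 250, 330, 400, 450]   # cumulative planned features
departed = [  0,  40, 100, 170, 250, 330]   # cumulative completed features

# WIP each month is the vertical gap between the two CFD lines.
wip = [a - d for a, d in zip(arrived, departed)]
avg_wip = sum(wip) / len(wip)

# Throughput is the slope of the departure line over the period.
months = len(arrived) - 1
throughput = (departed[-1] - departed[0]) / months   # features per month

# Little's Law: avg cycle time = avg WIP / throughput.
cycle_time = avg_wip / throughput                    # in months

print(f"avg WIP      = {avg_wip:.1f} features")
print(f"throughput   = {throughput:.1f} features/month")
print(f"cycle time   = {cycle_time:.1f} months")
```

The same arithmetic works directly on the columns of a spreadsheet like the one attached to the article; only the numbers here are invented.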

I've been collecting links to beginner resources for web development for some time now, so I thought I'd share that list here. Feel free to add your own in the comments.

A Beginner's Guide to HTML & CSS (website). A simple and comprehensive guide dedicated to helping beginners learn HTML and CSS. Outlining the fundamentals, this guide works through all common elements of front-end design and development.

HTML & CSS: Design and Build Websites (book). A beautifully designed book by Jon Duckett that has been the #1 selling web design book on Amazon for a number of months now.

30 Days to Learn HTML and CSS (screencast series). A video screencast series by Jeffrey Way of the Tuts+ Network that teaches viewers HTML and CSS from the ground up.

Don't Fear The Internet (screencast series). Video lessons created by Jessica Hische and Russ Maschmeyer.

Hans Rosling: Debunking third-world myths with the best stats you've ever seen. Tim Berners-Lee: The next Web of open, linked data. Tim Berners-Lee: The year open data went worldwide.

Quality Indicators for Linked Data Datasets

At a high level, the main drivers for me are: how easy it is to find the information I'm looking for; and conversely, how hard it is to find wrong information. From this point of view, and concentrating on the semantic web portion of the picture, we want to figure out measures and metrics for how well a particular (RDF/linked data) dataset, or set of datasets, satisfies these drivers. I can't see how a naive count of the number of links between datasets can offer much of a measure of how easy it is to find what you're looking for, nor of how difficult it is to find wrong information.

There's probably some deep scale-free characteristic (cf. the six-degrees-of-separation meme) suggesting a balance point between having no links at all, which feels like it should be bad, and having a complete graph with links everywhere, which feels like a needle-in-a-haystack problem and equally bad.

Semantic Web

The Semantic Web is a collaborative movement led by the international standards body the World Wide Web Consortium (W3C).[1] The standard promotes common data formats on the World Wide Web. By encouraging the inclusion of semantic content in web pages, the Semantic Web aims to convert the current web, dominated by unstructured and semi-structured documents, into a "web of data". The Semantic Web stack builds on the W3C's Resource Description Framework (RDF).[2] According to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries".[2] The term was coined by Tim Berners-Lee for a web of data that can be processed by machines.[3] While its critics have questioned its feasibility, proponents argue that applications in industry, biology and human-sciences research have already proven the validity of the original concept.

Relational Databases and the Semantic Web (in Design Issues)

$Id: RDB-RDF.html,v 1.25 2009/08/27 21:38:09 timbl Exp $

There are many other data models which RDF's Directed Labelled Graph (DLG) model compares closely with, and maps onto; see a summary in "What the Semantic Web can represent". One is the relational database (RDB) model.

The Semantic Web and Entity-Relationship models. Is the RDF model an entity-relationship model? For example, one person may define a vehicle as having a number of wheels, a weight and a length, but not foresee a color; in RDF, properties are first-class objects, so a color property can later be added by anyone. Apart from this simple but significant change, many concepts involved in ER modelling carry over directly onto the Semantic Web model.

The Semantic Web and Relational Databases. The semantic web data model is very directly connected with the model of relational databases: a record is an RDF node; the field (column) name is an RDF propertyType; and the record field (table cell) is a value.

Special aspects of the RDB model. The original RDB model was defined by E.F. Codd. Schemas and Schemas.

Advantages and Myths of RDF

A 10th Birthday Salute to RDF's Role in Powering Data Interoperability. There has been much welcomed visibility for the semantic Web and linked data of late. Many wonder why it has not happened earlier, and some observe that progress has still been too slow. But what is often overlooked is the foundational role of RDF, the Resource Description Framework. From my own perspective, focused on the issues of data interoperability and data federation, RDF is the single most important factor in today's advances.
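The record-to-node, column-to-property, cell-to-value mapping just described can be sketched in plain Python. The table, rows and namespace URI below are all hypothetical, chosen only to make the correspondence visible:

```python
# A hypothetical "person" table, as rows of column -> cell mappings.
rows = [
    {"id": "p1", "name": "Alice", "city": "Oslo"},
    {"id": "p2", "name": "Bob",   "city": "Lisbon"},
]

BASE = "http://example.org/person/"   # made-up namespace for illustration

def table_to_triples(rows, base):
    """Translate relational rows into (subject, predicate, object) triples."""
    out = []
    for row in rows:
        subject = base + row["id"]                # the record -> an RDF node
        for column, cell in row.items():
            if column == "id":
                continue                          # the key names the node itself
            predicate = base + column             # column name -> property
            out.append((subject, predicate, cell))  # table cell -> value
    return out

triples = table_to_triples(rows, BASE)
for t in triples:
    print(t)
```

Each row yields one triple per non-key column, which is exactly the sense in which a relational table "is already" a small RDF graph.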

Sure, there have been other models and other formulations, but I think we now see the Goldilocks "just right" combination of expressiveness and simplicity to power the foreseeable future of data interoperability. So, on this 10th anniversary of the birth of RDF [1], I'd like to revisit and update some much-dated discussions regarding the advantages of RDF, and more directly address some of the misperceptions and myths that have grown up around this most useful framework.

A Simple Intro to RDF. SPARQL - Triple Stores vs Relational Databases.
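To make the triple-store side of that comparison concrete, here is a toy pattern matcher in Python: the store is just a collection of (subject, predicate, object) triples, and a SPARQL-like query is a pattern in which a wildcard stands for a variable. All names and data are invented for illustration.

```python
# A tiny in-memory "triple store" with made-up facts.
triples = [
    ("alice", "knows", "bob"),
    ("bob",   "knows", "carol"),
    ("alice", "lives_in", "Oslo"),
]

def match(pattern, store):
    """Return all triples matching an (s, p, o) pattern; None = wildcard."""
    return [
        t for t in store
        if all(p is None or p == v for p, v in zip(pattern, t))
    ]

# Roughly "SELECT ?o WHERE { alice knows ?o }":
print(match(("alice", "knows", None), triples))

# Roughly "SELECT ?s ?o WHERE { ?s knows ?o }":
print(match((None, "knows", None), triples))
```

A real SPARQL engine adds joins across several patterns, indexing and inference, but the core operation is this kind of pattern match over a uniform triple table, rather than a query over many differently-shaped relational tables.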