High Speed Database
Database normalization is a technique for designing relational database schemas that ensures the data is well suited to ad-hoc querying and that modifications such as deletions or insertions do not lead to data inconsistency. Database denormalization is the process of optimizing a database for reads by creating redundant data. A consequence of denormalization is that insertions or deletions can cause data inconsistency if they are not applied uniformly to all redundant copies of the data within the database.
Building Scalable Databases: Denormalization, the NoSQL Movement and Digg
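To make the trade-off concrete, here is a minimal sketch in Python using the standard-library sqlite3 module; the schema and table names are hypothetical, not taken from the article. The denormalized copy of the user's name speeds up reads by avoiding a join, but a name change must now be applied to every copy or the database becomes inconsistent.

```python
import sqlite3

# Hypothetical schema: the normalized design stores each user's name once;
# the denormalized design also copies user_name into every order row.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         user_name TEXT,   -- redundant, denormalized copy
                         amount REAL);
""")
db.execute("INSERT INTO users VALUES (1, 'Alice')")
db.execute("INSERT INTO orders VALUES (1, 1, 'Alice', 9.99)")

# Read path: the redundant column answers the query without a join ...
fast_read = db.execute("SELECT user_name, amount FROM orders").fetchall()

# ... but a write must now touch all redundant copies; skipping the
# second UPDATE would leave users and orders disagreeing on the name.
db.execute("UPDATE users  SET name = 'Alicia'      WHERE id = 1")
db.execute("UPDATE orders SET user_name = 'Alicia' WHERE user_id = 1")
```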
A Hitchhiker's Guide to NOSQL v1.0
CTO of 10gen, MongoDB creators: We are sort of similar to MySQL or PostgreSQL in terms of how you could use us « myNoSQL
Some quotes and comments from a (quite long) interview with Eliot Horowitz, CTO of 10gen, the creators of MongoDB: “I think the first question you have to ask about any database these days is, ‘What’s the data model?’” The only thing I’d add is: “… and how does that fit my problem?”
In two weeks we’ll present a paper on the Dynamo technology at SOSP, the prestigious biennial operating systems conference.
Amazon's Dynamo
CodeFutures offers an effective sharding solution with its product, dbShards.
Database Sharding
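dbShards' internals aren't described here, so the following Python sketch shows only the generic idea behind sharding, under assumed names (SHARD_DSNS, shard_for): a stable hash of a shard key routes each row to one of several databases, spreading load while keeping all rows for one key on one shard.

```python
import hashlib

# Hypothetical shard map; a generic hash-routing scheme, not dbShards' actual design.
SHARD_DSNS = [
    "db-shard-0.example.com",
    "db-shard-1.example.com",
    "db-shard-2.example.com",
    "db-shard-3.example.com",
]

def shard_for(shard_key: str) -> str:
    """Map a shard key (e.g. a user id) to the database that owns it.

    md5 is used because Python's built-in hash() is salted per process,
    and shard routing must be deterministic across processes and restarts.
    """
    digest = hashlib.md5(shard_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(SHARD_DSNS)
    return SHARD_DSNS[index]

# Single-key queries stay local: this key always resolves to the same shard.
print(shard_for("user:42"))
```

One design consequence worth noting: queries constrained to one shard key stay on one database, while queries spanning keys must fan out to every shard, which is why the choice of shard key matters so much.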