Connections in Time

Some relationships change over time. Think about your friends from high school, college, work, the city you used to live in, the ones that liked your ex better, etc. When exploring a social network it is important that we understand not only the strength of a relationship now, but over time. We can use communication between people as a measure. I ran into a visualization that explored how multiple parties were connected by communications across multiple projects. Let's give our network a little something special.

The code to create a relationship is pretty simple; we'll use the Batch commands again and reference the nodes we create. Let's put it together to create our graph. Our visualization was built using D3.js, and it makes a web request expecting to see a JSON object that looks like:

We spent some time getting our data into our graph; now let's get it all back out. We'll write another query to get the incoming relationships for each node. Like this:
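The JSON payload itself didn't survive in this excerpt. As a sketch of what a D3.js force layout conventionally consumes, here is a nodes/links object — the field names follow the common D3 convention, and the names and values are illustrative, not the author's actual data:

```json
{
  "nodes": [
    { "name": "Alice" },
    { "name": "Bob" }
  ],
  "links": [
    { "source": 0, "target": 1, "value": 3 }
  ]
}
```

In this convention each link refers to nodes by array index, and a weight like `value` can encode communication volume, which fits the post's idea of using communication between people as the measure of a relationship.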
bio4j blog | news and updates on bio4j

Hello everyone, I'm happy to announce a new set of features for our tool Bio4jExplorer, plus some changes in its design. I hope this may help both potential and current users to get a better understanding of the Bio4j DB structure and contents.

Node & Relationship properties

You can now check with Bio4jExplorer the properties of either a node or relationship in the table situated on the lower part of the interface:

Name: property name
Type: property type (String, int, float, ...)
Indexed: whether the property is indexed or not (yes/no)
Index name: name of the index associated with this property, if there is any
Index type: type of the index associated with this property, if there is any

Node & Relationship data source

You can also now see from which source a node or relationship was imported; some examples would be Uniprot, Uniref, GO, RefSeq...

Relationships Name property

Get proteins (accession and names) associated with an InterPro motif (limited to 10 results)

I wish you all a great weekend!
Deploying the Aurelius Graph Cluster

The Aurelius Graph Cluster is a cluster of interoperable graph technologies that can be deployed on a multi-machine compute cluster. This post demonstrates how to set up the cluster on Amazon EC2 (a popular cloud service provider) with the following graph technologies:

Titan is an Apache2-licensed distributed graph database that leverages existing persistence technologies such as Apache HBase and Cassandra. Titan implements the Blueprints graph API and therefore supports the Gremlin graph traversal/query language. [OLTP]

Faunus is an Apache2-licensed batch-analytics graph computing framework based on Apache Hadoop. [OLAP]

Please note the date of this publication.

Cluster Configuration

The examples in this post assume the reader has access to an Amazon EC2 account.

~$ ssh email@example.com
ubuntu@ip-10-117-55-34:~$ tar -xzf whirr-0.8.0.tar.gz

Whirr is a cloud-service-agnostic tool that simplifies the creation and destruction of a compute cluster.
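Beyond downloading and unpacking Whirr, the cluster itself is described to Whirr through a properties file. A minimal sketch follows — the property names are Whirr 0.8 conventions, but the cluster name and instance layout here are illustrative, not the post's exact recipe:

```properties
# Illustrative Whirr recipe: one master node and two workers on EC2
whirr.cluster-name=graphcluster
whirr.instance-templates=1 zookeeper+hadoop-namenode+hadoop-jobtracker+hbase-master,2 hadoop-datanode+hadoop-tasktracker+hbase-regionserver
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
```

A recipe like this is launched with `bin/whirr launch-cluster --config <file>` and torn down with `bin/whirr destroy-cluster --config <file>`.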
command lines

Update (05-02-2017): My new company, Data Science Workshops, provides in-company training and coaching on this exciting topic.

Update (7-17-2014): You may be interested in my book Data Science at the Command Line, which contains over 70 command-line tools for doing data science.

Data science is OSEMN (pronounced as awesome). That is, it involves Obtaining, Scrubbing, Exploring, Modeling, and iNterpreting data. As a data scientist, I spend quite a bit of time on the command line, especially when there's data to be obtained, scrubbed, or explored. I would like to continue this discussion by sharing seven command-line tools that I have found useful in my day-to-day work.

1. jq - sed for JSON

JSON is becoming an increasingly common data format, especially as APIs are appearing everywhere. Imagine we're interested in the candidate totals of the 2008 presidential election.

curl -s '

where -s puts curl in silent mode.
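The URL in the excerpt is truncated, so as a stand-in here is a minimal jq sketch on made-up data — the field names `candidate` and `total` are illustrative, not the actual API's schema:

```shell
# Sample input is fabricated for illustration; jq emits one object per array
# element, keeping only the field we ask for.
echo '[
  {"candidate": "Obama", "total": 100},
  {"candidate": "McCain", "total": 200}
]' | jq '.[] | {name: .candidate}'
```

This prints one `{"name": ...}` object per candidate, which is the sed-like stream-of-edits style of working that makes jq feel at home on the command line.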
Corner Star

Sorry for being tardy to the party. I'm not usually the one to make a fashionably late entrance, but this block boggled my mind for longer than usual. The Corner Star block is made up of flying geese and square-in-a-square units. This tutorial lists measurements for a 16" block, since that's what we'll need for our project this week; measurements for 8" and 12" blocks are given in parentheses.

Fabric Requirements:
Fabric A (patterned or solid)
Fabric B (patterned or solid, but different from A)
Fabric C (for background)

Step 1 - Cut fabric as follows

Fabric A: Cut 16 2.5" squares (8" block -- 1.5") (12" block -- 2")
Fabric B: Cut 16 2.5" squares (8" block -- 1.5") (12" block -- 2")
Fabric C: Cut 9 4.5" squares (8" block -- 2.5") (12" block -- 3.5")
Cut 4 2.5" squares (8" block -- 1.5") (12" block -- 2")
Cut 12 2.5" x 4.5" rectangles (8" block -- 1.5" x 2.5") (12" block -- 2" x 3.5")

Step 2 - Mark a diagonal line across the back of all Fabric A and Fabric B 2.5" squares.

Step 3 - Trim seam allowance to 1/4" and press seams as desired.

Step 4 - Repeat step two.
HyperGraphDB - A Graph Database

HyperGraphDB is a general-purpose, extensible, portable, distributed, embeddable, open-source data storage mechanism. It is a graph database designed specifically for artificial intelligence and semantic web projects, but it can also be used as an embedded object-oriented database for projects of all sizes. The system is reliable and in production use in several projects, including a search engine and our own Seco scripting IDE, where most of the runtime environment is automatically saved as a hypergraph.

HyperGraphDB is primarily what its carefully chosen name implies: a database for storing hypergraphs. While it falls into the general family of graph databases, it is hard to categorize HyperGraphDB as yet another graph database because much of its design revolves around providing the means to manage structure-rich information with arbitrary layers of complexity.

Key Facts

Possible Usage Scenarios

Semantic web projects are an obvious domain of application for HyperGraphDB.
Previously, on Jepsen, we saw RabbitMQ throw away a staggering volume of data. In this post, we'll explore Elasticsearch's behavior under various types of network failure.

Elasticsearch is a distributed search engine built around Apache Lucene, a well-respected Java indexing library. Lucene handles the on-disk storage, indexing, and searching of documents, while Elasticsearch handles document updates, the API, and distribution. Documents are written to collections as free-form JSON; schemas can be overlaid onto collections to specify particular indexing strategies.

As with many distributed systems, Elasticsearch scales along two axes: sharding and replication. Because index construction is a somewhat expensive process, Elasticsearch provides a faster, more strongly consistent database backed by a write-ahead log. But this is Jepsen, where nothing works the way it's supposed to. What do the docs say? What is the Elasticsearch consistency model? I like "instantaneous" promotion.

Speed bumps
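The two scaling axes are configured per index. A sketch of the settings involved (the values here are illustrative, not a recommendation):

```json
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}
```

Each shard holds a slice of the index, and each replica is a full copy of a shard that can serve reads and take over when a node fails — which is exactly the promotion machinery whose behavior under partitions this post goes on to probe.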
Marmalade Squares (Two!) Quilt

Author's Note: Thanks to readers who found errors in the tutorial. They have been fixed on the website but not in the PDF download. I will post here when the download has been fixed.

A note from Oda May: Marmalade Squares is a popular name! Marmalade Squares is charm pack friendly and fun to sew; it would be a great baby gift or a charity quilt (I just wrapped up 100 Quilts for Kids 2013 on my blog; maybe you will join us next year?)

2 Charm Packs
1 ½ yards neutral Moda Bella Sand or other neutral solid (9900 201)
2 ½ yards backing fabric
1 ¼ yards batting (or 44'' x 50'' piece of batting)
3/8 yard Stripe in Raspberry for binding (55054 12)

Note: You may be able to use 1 charm pack + a little bit of coordinating fabric from the backing or your stash if you would like; the extra charms also work great as a stripe on the quilt back.

7. Cut some charm halves into 2.5'' x 4.5'' rectangles, and discard the remaining 0.5'' x 2.5'' rectangle.

Putting Together the Quilt Top
phpCallGraph - A Static Call Graph Generator for PHP