OpenTSDB - A Distributed, Scalable Monitoring System

http://opentsdb.net/

Related: DevOps, Hadoop, bigData et Visu

How Hadoop Works: an HDFS case study The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failures. The Hadoop library contains two major components, HDFS and MapReduce; in this post we will go inside each part of HDFS and discover how it works internally. HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients.
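To make the master/slave picture above concrete, here is a minimal, illustrative Java sketch of an HDFS client read. The NameNode address hdfs://namenode:8020 and the file path /logs/app.log are placeholders rather than anything from the article: the client asks the single NameNode for metadata and block locations, then streams the actual bytes from the DataNodes that hold the blocks.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the NameNode, the single master of the namespace.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataInputStream in = fs.open(new Path("/logs/app.log"));
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            // open() asks the NameNode for block locations; the bytes themselves
            // are then streamed directly from the DataNodes holding the blocks.
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}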

Setting Up Metabase This guide will help you set up Metabase once you've gotten it installed. If you haven't installed Metabase yet, you can get Metabase here. Start Metabase up for the first time and you'll see the welcome screen. Go ahead and click Let's get started.

Make your bike electric in 60 seconds by GeoOrbital Sometimes you just need all-wheel drive. Just like being in a car with all-wheel drive, you have a lot more stability and control in harsh environments. The GeoOrbital wheel performs great in all weather conditions, but if you pedal while using the wheel you will have all-wheel drive when you need it most. The GeoOrbital wheel comes with a flat-proof solid foam tire, so you never have to worry about getting a flat or even checking tire pressure. By using the latest high-density foam technology, the tires act and weigh the same as a traditional bike tire, but you will never get a flat. Never!

DocOps: Interview with Jim Turcotte The following is an interview with Jim Turcotte, a senior vice president for CA Technologies and business unit executive for the Information Services team. Jim recently posted several articles on LinkedIn Pulse about something he calls DocOps, so I asked him some follow-up questions. Can you explain DocOps in more detail? First, let me start by explaining the application economy. Customers today decide whether or not to do business with you based on your software.

The Hadoop ecosystem: the (welcome) elephant in the room (infographic) To say Hadoop has become really big business would be to understate the case. At a broad level, it's the focal point of an immense big data movement, but Hadoop itself is now a software and services market of its very own. In this graphic, we aim to map out the current ecosystem of Hadoop software and services — application and infrastructure software, as well as open source projects — and where those products fall in terms of use cases and delivery model. Click on a company name for more information about how they are using this technology. A couple of points about the methodology might be valuable: The first is that these are products and projects that are built with Hadoop in mind and that aim to either extend its utility in some way or expose its core functions in a new manner.

Removing Old Records for Logstash / Elasticsearch / Kibana (Raging Computer, Part 4 of 4) Now that you've got all your logs flying through Logstash into Elasticsearch, how do you remove old records that are no longer doing anything but consuming disk space and RAM for the index? These are all functions of Elasticsearch. Deleting is pretty easy, as is closing an index. The awesome people working on Elasticsearch already have the solution! (A sketch of the underlying API calls appears a little further below.)

Comparing Pattern Mining on a Billion Records with HP Vertica and Hadoop Pattern mining can help analysts discover hidden structures in data. Pattern mining has many applications—from retail and marketing to security management. For example, from a supermarket data set, you may be able to predict whether customers who buy Lay's potato chips are likely to buy a certain brand of beer.
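As a toy illustration of the supermarket example, the following Java sketch computes the support and confidence of a rule like "chips => beer" over a handful of invented baskets; the item names and data are made up, and the article's point is that Vertica or Hadoop let you run this kind of counting over a billion records rather than four.

import java.util.List;
import java.util.Set;

public class RuleStats {
    public static void main(String[] args) {
        // Four invented shopping baskets; real runs scan billions of rows.
        List<Set<String>> baskets = List.of(
                Set.of("chips", "beer", "salsa"),
                Set.of("chips", "beer"),
                Set.of("chips", "bread"),
                Set.of("milk", "bread"));

        long withChips = baskets.stream()
                .filter(b -> b.contains("chips")).count();
        long withChipsAndBeer = baskets.stream()
                .filter(b -> b.contains("chips") && b.contains("beer")).count();

        // support(chips => beer) = P(chips and beer); confidence = P(beer | chips).
        System.out.printf("support    = %.2f%n", (double) withChipsAndBeer / baskets.size());
        System.out.printf("confidence = %.2f%n", (double) withChipsAndBeer / withChips);
    }
}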
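Returning to the Logstash/Elasticsearch housekeeping above, this Java sketch shows the two underlying REST calls; the host localhost:9200 and the dated index names are placeholders. Closing an index keeps it on disk but stops it from using cluster memory, while deleting it reclaims the disk space as well; tools such as Elasticsearch Curator automate the same calls on a schedule.

import java.net.HttpURLConnection;
import java.net.URL;

public class CleanupOldIndices {
    // Issue a body-less HTTP request and return the status code.
    static int call(String method, String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod(method);
        int code = conn.getResponseCode();
        conn.disconnect();
        return code;
    }

    public static void main(String[] args) throws Exception {
        // Close an index: it stays on disk but stops consuming memory for its segments.
        System.out.println(call("POST", "http://localhost:9200/logstash-2015.02.01/_close"));
        // Delete an older index outright to reclaim the disk space too.
        System.out.println(call("DELETE", "http://localhost:9200/logstash-2015.01.01"));
    }
}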

Smart Data Access with HADOOP HIVE  "SAP HANA smart data access enables remote data to be accessed as if they are local tables in SAP HANA, without copying the data into SAP HANA. Not only does this capability provide operational and cost benefits, but most importantly it supports the development and deployment of the next generation of analytical applications which require the ability to access, synthesize and integrate data from multiple systems in real-time regardless of where the data is located or what systems are generating it." Reference: Section 2.4.2

Databases currently supported by SAP HANA smart data access include:
- Teradata Database: version 13.0
- SAP Sybase IQ: versions 15.4 ESD#3 and 16.0
- SAP Sybase Adaptive Server Enterprise: version 15.7 ESD#4
- Intel Distribution for Apache Hadoop: version 2.3 (this includes Apache Hadoop version 1.0.3 and Apache Hive 0.9.0)

Also Refer to:
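To sketch what "remote data accessed as if they are local tables" looks like in practice, here is an illustrative Java/JDBC example. The host hanahost:30015, the credentials, the remote source name HIVE_SRC and the Hive table default.weblogs are all placeholders, and the smart data access SQL can vary between HANA versions, so read the statements as a hedged sketch rather than a recipe.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SmartDataAccessSketch {
    public static void main(String[] args) throws Exception {
        // Requires the SAP HANA JDBC driver (ngdbc.jar) on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sap://hanahost:30015", "MYUSER", "MYPASSWORD");
             Statement stmt = conn.createStatement()) {

            // Expose a remote Hive table through an existing remote source as a
            // virtual table; no data is copied into HANA by this statement.
            stmt.execute("CREATE VIRTUAL TABLE \"MYSCHEMA\".\"V_WEBLOGS\" "
                       + "AT \"HIVE_SRC\".\"<NULL>\".\"default\".\"weblogs\"");

            // Query the virtual table like any local table; the scan runs remotely.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT COUNT(*) FROM \"MYSCHEMA\".\"V_WEBLOGS\"")) {
                while (rs.next()) {
                    System.out.println("rows in remote table: " + rs.getLong(1));
                }
            }
        }
    }
}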

Exclusive: a behind-the-scenes look at Facebook release engineering Facebook is headquartered in Menlo Park, California at a site that used to belong to Sun Microsystems. A large sign with Facebook's distinctive "like" symbol—a hand making the thumbs-up gesture—marks the entrance. When I arrived at the campus recently, a small knot of teenagers had congregated, snapping cell phone photos of one another in front of the sign. Thanks to the film The Social Network, millions of people know the crazy story of Facebook's rise from dorm room project to second largest website in the world. But few know the equally intriguing story about the engine humming beneath the social network's hood: the sophisticated technical infrastructure that delivers an interactive Web experience to hundreds of millions of users every day.

Related: Monitoring