Spark and Scala

Setup Your Zeppelin Notebook For Data Science in Apache Spark. Spark & R: data frame operations with SparkR. In this third tutorial (see the previous one) we will introduce more advanced concepts about SparkSQL with R that you can find in the SparkR documentation, applied to the 2013 American Community Survey housing data.

Spark & R: data frame operations with SparkR

These concepts are related to data frame manipulation, including data slicing, summary statistics, and aggregations. We will use them in combination with ggplot2 visualisations. We will explain what we do at every step but, if you want to go deeper into ggplot2 for exploratory data analysis, I did this Udacity on-line course in the past and I highly recommend it!
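The tutorial itself performs these operations through SparkR's R API; purely as an illustration for this Scala-oriented collection, here is a rough equivalent of that slicing/summary/aggregation workflow in Spark's Scala DataFrame API. The Parquet file name and the ST/VALP column names are assumptions, not taken from the tutorial.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object HousingExplore {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("housing-explore"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // Hypothetical Parquet copy of the ACS housing data; the tutorial loads it via SparkR instead.
        val housing = sqlContext.read.parquet("housing_2013.parquet")

        // Slicing: pick two columns and keep rows above a property-value threshold.
        val slice = housing.select("ST", "VALP").filter($"VALP" > 100000)

        // Summary statistics and a simple aggregation.
        slice.describe("VALP").show()
        slice.groupBy("ST").count().show()
      }
    }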

All the code for this series of Spark and R tutorials can be found in its own GitHub repository. Go there and make it yours. Creating a SparkSQL context and loading data: in order to explore our data, we first need to load it into a SparkSQL data frame. Loading, Updating and Deleting From HBase Tables using HiveQL and Python. Earlier in the week I blogged about a customer looking to offload part of the data warehouse platform to Hadoop, extracting data from a source system and then incrementally loading data into HBase and Hive before analysing it using OBIEE11g.

Loading, Updating and Deleting From HBase Tables using HiveQL and Python

One of the potential complications for this project was that the fact and dimension tables weren’t append-only; Hive and HDFS are generally considered write-once, read-many systems where data is inserted or appended into a file or table but then can’t be updated or overwritten without deleting the whole file and writing it again with the updated dataset. Taking a step back for a moment, HBase is a NoSQL, key/value-type database where each row has a key (for example, “SFO” for San Francisco airport) and then a number of columns, grouped into column families.
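The post drives these updates through HiveQL and Python; just to make the row-key and column-family model concrete, here is a minimal Scala sketch against the standard HBase client API. The "airports" table and "info" column family are invented for the example.

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
    import org.apache.hadoop.hbase.util.Bytes

    object HBaseUpsert {
      def main(args: Array[String]): Unit = {
        val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
        val table = connection.getTable(TableName.valueOf("airports"))

        // Row key "SFO", one column family "info" holding descriptive columns.
        val put = new Put(Bytes.toBytes("SFO"))
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("city"), Bytes.toBytes("San Francisco"))
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("country"), Bytes.toBytes("USA"))
        table.put(put)   // puts simply overwrite the cell, which is what makes updates cheap in HBase

        table.close()
        connection.close()
      }
    }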

(Note that at the start, these key values won’t be there – they’re more for illustrative purposes.) At the time of HBase table definition, you specify one or more “column families”. Laying a pipeline with spark.ml / Habrahabr. Today I would like to talk about a new package that appeared in version 1.2, called spark.ml.

Laying a pipeline with spark.ml / Habrahabr

It was created to provide a single, high-level API for machine learning algorithms, one that helps simplify building and tuning them, as well as combining several algorithms into a single pipeline or workflow. At the moment we are on version 1.4.1, and the developers say the package has left alpha, although many components are still marked Experimental or DeveloperApi. Spark Streaming with Kafka & HBase Example - henning's weblog. Even a simple example using Spark Streaming doesn’t quite feel complete without the use of Kafka as the message hub.
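To make the pipeline idea concrete, here is a minimal Scala sketch in the spirit of the example from the spark.ml documentation; the tiny in-line dataset and the application name are made up.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

    object PipelineSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("ml-pipeline-sketch"))
        val sqlContext = new SQLContext(sc)

        // A tiny made-up training set with "text" and "label" columns.
        val training = sqlContext.createDataFrame(Seq(
          (0L, "spark streaming kafka", 1.0),
          (1L, "cooking with garlic", 0.0)
        )).toDF("id", "text", "label")

        // Three stages chained into one pipeline: tokenize, hash to features, fit a classifier.
        val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
        val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
        val lr = new LogisticRegression().setMaxIter(10)

        val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
        val model = pipeline.fit(training)

        // The fitted PipelineModel applies the same preprocessing when scoring new rows.
        model.transform(training).select("text", "prediction").show()
      }
    }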

Spark Streaming with Kafka & HBase Example - henning's weblog

More and more use cases rely on Kafka for message transportation. By taking a simple streaming example (Spark Streaming – A Simple Example, source at GitHub) together with a fictitious word-count use case, this post describes the different ways to add Kafka to a Spark Streaming application. Additionally, this post describes how to write results out to HBase from Spark directly using the TableOutputFormat. Kafka as DStream: for data collection from incoming sources, Spark Streaming uses special Receiver tasks executed inside an Executor on one or more worker nodes. The receiver in Spark Streaming can be a source of unreliable data transportation and needs careful consideration. Onurakpolat/awesome-bigdata. Matt Malone's Old-Fashioned Software Development Blog. In my last post I reviewed the implementation of scala.List’s foldLeft and foldRight methods.
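A sketch of the receiver-based variant described above (Kafka as a DStream feeding the fictitious word count) might look like this; the ZooKeeper address, consumer group, and topic name are placeholders rather than values from the post.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object KafkaWordCount {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(new SparkConf().setAppName("kafka-wordcount"), Seconds(10))

        // Receiver-based stream: a Receiver task inside an executor pulls messages from Kafka.
        val lines = KafkaUtils.createStream(ssc, "zk-host:2181", "wordcount-group", Map("words" -> 1))
          .map(_._2)   // drop the Kafka message key, keep the value

        lines.flatMap(_.split(" "))
          .map(word => (word, 1L))
          .reduceByKey(_ + _)
          .print()

        ssc.start()
        ssc.awaitTermination()
      }
    }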

Matt Malone's Old-Fashioned Software Development Blog

That post included a couple of simple examples, but today I’d like to give you a whole lot more. The foldLeft method is extremely versatile. It can do thousands of jobs. Of course, it’s not the best tool for EVERY job, but when working on a list problem it’s a good idea to stop and think, “Should I be using foldLeft?” Below, I’ll present a list of problem descriptions and solutions. Sum: Write a function called ‘sum’ which takes a List[Int] and returns the sum of the Ints in the list. I’ll explain this first example in a bit more depth than the others, just to make sure we all know how foldLeft works.
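The two definitions the next paragraph refers to were dropped from this excerpt; a stand-in pair that matches the description (an initial value of 0 and a function literal over a running result 'r' and the current element 'c', plus its placeholder-syntax shorthand) could look like this:

    // Two equivalent definitions (keep either): foldLeft starts from the initial
    // value 0 and folds each list element c into the running result r.
    def sum(list: List[Int]): Int = list.foldLeft(0)((r, c) => r + c)

    // The same thing written with placeholder syntax.
    def sumShorthand(list: List[Int]): Int = list.foldLeft(0)(_ + _)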

These two definitions above are equivalent. The foldLeft method takes that initial value, 0, and the function literal, and it begins to apply the function to each member of the list (parameter ‘c’), updating the result value (parameter ‘r’) each time. Did you get it? The post then works through Product, Count, Average, and Last. Cheatsheets/spark.md at master · KjellSchubert/cheatsheets. Exactly-once Spark Streaming from Apache Kafka. Thanks to Cody Koeninger, Senior Software Engineer at Kixer, for the guest post below about Apache Kafka integration points in Apache Spark 1.3.

Exactly-once Spark Streaming from Apache Kafka

Spark 1.3 will ship in CDH 5.4. The new release of Apache Spark, 1.3, includes new experimental RDD and DStream implementations for reading data from Apache Kafka. SparkR (R on Spark) - Spark 1.4.1 Documentation. SparkR is an R package that provides a light-weight frontend to use Apache Spark from R.
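A minimal sketch of that new direct (receiver-less) Kafka DStream using the Spark 1.3 API; the broker list and topic name are placeholders.

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object DirectKafkaSketch {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(new SparkConf().setAppName("direct-kafka"), Seconds(5))

        // No receiver: each RDD partition maps 1:1 to a Kafka partition and
        // offsets are tracked by the stream itself rather than by ZooKeeper.
        val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, Set("events"))

        stream.map(_._2).count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }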

SparkR (R on Spark) - Spark 1.4.1 Documentation

In Spark 1.4.1, SparkR provides a distributed data frame implementation that supports operations like selection, filtering, aggregation etc. (similar to R data frames, dplyr) but on large datasets. A Community Site for Apache Spark. Announcing SparkR: R on Spark. I am excited to announce that the upcoming Apache Spark 1.4 release will include SparkR, an R package that allows data scientists to analyze large datasets and interactively run jobs on them from the R shell.

Announcing SparkR: R on Spark

R is a popular statistical programming language with a number of extensions that support data processing and machine learning tasks. However, interactive data analysis in R is usually limited as the runtime is single-threaded and can only process data sets that fit in a single machine’s memory. SparkR, an R package initially developed at the AMPLab, provides an R frontend to Apache Spark and, by using Spark’s distributed computation engine, allows us to run large-scale data analysis from the R shell.

Project History: The SparkR project was initially started in the AMPLab as an effort to explore different techniques to integrate the usability of R with the scalability of Spark. Putting Apache Kafka To Use: A Practical Guide to Building a Stream Data Platform (Part 1) These days you hear a lot about “stream processing”, “event data”, and “real-time”, often related to technologies like Kafka, Storm, Samza, or Spark’s Streaming module.

Putting Apache Kafka To Use: A Practical Guide to Building a Stream Data Platform (Part 1)

Though there is a lot of excitement, not everyone knows how to fit these technologies into their technology stack or how to put them to use in practical applications. This guide is going to discuss our experience with real-time data streams: how to build a home for real-time data within your company, and how to build applications that make use of that data. All of this is based on real experience: we spent the last five years building Apache Kafka, transitioning LinkedIn to a fully stream-based architecture, and helping a number of Silicon Valley tech companies do the same thing. Websudos/awesome-scala.

Apache spark + cassandra: Basic steps to install and configure cassandra and use it with apache spark with example. To build an application using Apache Spark and Cassandra, you can use the DataStax spark-cassandra-connector to let Spark communicate with Cassandra.

Apache spark + cassandra: Basic steps to install and configure cassandra and use it with apache spark with example

Before we communicate with Spark through the connector, we should know how to configure Cassandra. So the following are the prerequisites for running the example smoothly, along with the steps to install and configure Cassandra. Kafka-exactly-once/blogpost.md at master · koeninger/kafka-exactly-once. Spark-streaming-kafka-consumer/SparkFuConsumer.scala at master · bradkarels/spark-streaming-kafka-consumer. How to install Apache Spark and Cassandra Stack on Ubuntu 14.04. This article will cover installing Apache Spark and Apache Cassandra on Ubuntu 14.04.
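Once Cassandra is configured, using the connector from Spark looks roughly like the sketch below; the keyspace, table, and column names are invented.

    import com.datastax.spark.connector._
    import org.apache.spark.{SparkConf, SparkContext}

    object CassandraSketch {
      def main(args: Array[String]): Unit = {
        // The connector picks up the Cassandra host from the Spark configuration.
        val conf = new SparkConf()
          .setAppName("cassandra-example")
          .set("spark.cassandra.connection.host", "127.0.0.1")
        val sc = new SparkContext(conf)

        // Read a table as an RDD of CassandraRow and filter on a boolean column.
        val users = sc.cassandraTable("test_keyspace", "users")
        println(users.filter(_.getBoolean("active")).count())

        // Write a small collection back out; tuple fields map to the named columns.
        sc.parallelize(Seq(("alice", 1), ("bob", 2)))
          .saveToCassandra("test_keyspace", "logins", SomeColumns("user", "login_count"))
      }
    }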

Spark is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. Cassandra’s data model offers the convenience of column indexes with the performance of log-structured updates, strong support for denormalization and materialized views, and powerful built-in caching. Spark and Cassandra work together to offer a powerful solution for data processing. Databricks (@databricks) on GitBook. Implementing a real-time data pipeline with Spark Streaming. Real-time analytics has become a very popular topic in recent years. Whether it is in finance (high frequency trading), adtech (real-time bidding), social networks (real-time activity), Internet of things (sensors sending real-time data), or server/traffic monitoring, providing real-time reporting can bring tremendous value (e.g., detect potential attacks on the network immediately, quickly adjust ad campaigns, …).

Apache Storm is one of the most popular frameworks to aggregate data in real-time, but there are also many others such as Apache S4, Apache Samza, Akka Streams, SQLStream and, more recently, Spark Streaming. According to Kyle Moses, on his page on Spark Streaming, it can process about 400,000 records / node / second for simple aggregations on small records and significantly outperforms other popular streaming systems such as Apache Storm (40x) and Yahoo S4 (57x). Figure 1: Ad network architecture. To simplify, let’s consider that impression logs are in this format:

Bhomass/marseille. Spark Packages. Spark Streaming and GraphX at Netflix - Apache Spark Meetup, May 19, 2015. Advanced Apache Spark- Sameer Farooqui (Databricks) Tune Your Apache Spark Jobs (Part 2) In the conclusion to this series, learn how resource tuning, parallelism, and data representation affect Spark job performance. In this post, we’ll finish what we started in “How to Tune Your Apache Spark Jobs (Part 1)”. I’ll try to cover pretty much everything you could care to know about making a Spark program run fast. In particular, you’ll learn about resource tuning, or configuring Spark to take advantage of everything the cluster has to offer.

Then we’ll move to tuning parallelism, the most difficult as well as most important parameter in job performance. A real time streaming implementation of markov chain based fraud detection. Fraud is a fact of life for the financial industry. Paypal did not become one of the only dotcom survivors by remaining a pure supplier of transaction engines. While final fraud determination is still in the hands of human experts, there has been much interest in automated processes that can syphon out suspicious activities for further scrutiny. Given the level of global credit card transactions, such a problem falls squarely in the domain of big data. More than just handling the data volume, financial institutions also face a technical challenge in being able to catch fraudulent transactions as they happen. All this points to the need for a real time streaming analysis capability. Open source big data technologies have been advancing by leaps and bounds recently, with the most recent push towards in-memory and streaming computation.

Bhomass/marseille. Spark-client/src/main/scala/org/apache/spark/examples/graphx at spark-example · bhomass/spark-client. Spark-client/Analytics.scala at spark-example · bhomass/spark-client. Bhomass/marseille. Scala School - Collections. This lesson covers: Scala provides some nice collections. See Also Effective Scala has opinions about how to use collections. Lists scala> val numbers = List(1, 2, 3, 4) numbers: List[Int] = List(1, 2, 3, 4) Sets Sets have no duplicates scala> Set(1, 1, 2) res0: scala.collection.immutable.Set[Int] = Set(1, 2) Tuple A tuple groups together simple logical collections of items without using a class. scala> val hostPort = ("localhost", 80) hostPort: (String, Int) = (localhost, 80) Unlike case classes, they don’t have named accessors, instead they have accessors that are named by their position and is 1-based rather than 0-based.

Data analysis in Scala: computing the correlation of the 21st century / Retail Rocket company blog. It is very important to choose the right tool for data analysis. On the Kaggle.com forums, where international Data Science competitions are held, people often ask which tool is better. The top spots in popularity are held by R and Python. Deeplearning4j/deeplearning4j. Spirom/LearningSpark.

GraphX - Spark 1.3.1 Documentation. GraphX is a new component in Spark for graphs and graph-parallel computation. At a high level, GraphX extends the Spark RDD by introducing a new Graph abstraction: a directed multigraph with properties attached to each vertex and edge. To support graph computation, GraphX exposes a set of fundamental operators (e.g., subgraph, joinVertices, and aggregateMessages) as well as an optimized variant of the Pregel API. In addition, GraphX includes a growing collection of graph algorithms and builders to simplify graph analytics tasks. Facebook-hive-udfs/src/main/java/com/facebook/hive/udf at master · brndnmtthws/facebook-hive-udfs. Spark-dataframe-introduction/dataframe-introduction.md at master · yu-iskw/spark-dataframe-introduction. LearningSpark/Transform.scala at master · spirom/LearningSpark. Java to Scala converter. Resources for Getting Started with Functional Programming and Scala - blog blog blog.

A simple Scala program that works with Amazon S3. Despite the existence of a large number of distributed file systems (Riak CS, MongoDB GridFS, Cassandra File System, and others), many people still prefer the Amazon S3 service. That is not surprising, given its flexibility, reasonable cost, the fact that you do not have to administer anything yourself, and its powerful SDK. This note shows how to work with Amazon S3 using that SDK in the Scala programming language. Before getting to the programming itself, let's create a small test environment through the AWS console: find the S3 service there, open it, and click "Create Bucket".

You will be asked to choose the region where the bucket will live and to enter a bucket name. The region name is part of the endpoint and is underlined in red in the screenshot. To work with S3 from Scala we will need a user with the appropriate permissions. QuickTip: Integrating Amazon S3 in your Scala Product. This post is supposed to be a quick cheat sheet of integrating your Scala product with Amazon S3. The groupBy method from Scala's collection library. Scala’s collection library is a wonderfully crafted piece of software.

When learning a language I think it pays to look at the available collections and their functionality. In Scala there are many useful collections and methods which give you a lot of powerful tools. In this post, I want to look at the groupBy method defined in Traversable. Let’s look at an example before explaining how it works: This will print: How do I wait for asynchronous tasks to complete in scala? Scala - How do I convert csv file to rdd. Using in Shiny Applications. The dygraphs package provides the dygraphOutput and renderDygraph functions to enable use of dygraphs within Shiny applications and R Markdown interactive documents.
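The example and its output were lost from the groupBy excerpt above; a small stand-in (not necessarily the author's original) would be:

    // Group a list of words by their first character.
    val words = List("apple", "banana", "avocado", "cherry")
    val byInitial = words.groupBy(_.head)

    println(byInitial)
    // Something like: Map(a -> List(apple, avocado), b -> List(banana), c -> List(cherry))
    // (the key order of the resulting Map is not guaranteed)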

Note that using dygraphs with Shiny requires a recent version of the Shiny package (>= 0.10.2.1) so you should be sure to update Shiny before trying out dygraphs with it. Simple Example: These functions work exactly like the Shiny plotOutput and renderPlot functions. For example, here’s a simple Shiny application that uses these functions:

ui.R

library(dygraphs)
shinyUI(fluidPage(
  titlePanel("Predicted Deaths from Lung Disease (UK)"),
  sidebarLayout(
    sidebarPanel(
      numericInput("months", label = "Months to Predict", value = 72, min = 12, max = 144, step = 12),
      selectInput("interval", label = "Prediction Interval", choices = c("0.80", "0.90", "0.95", "0.99"), selected = "0.95"),
      checkboxInput("showgrid", label = "Show Grid", value = TRUE)
    ),
    mainPanel(
      dygraphOutput("dygraph")
    )
  )
))

Configure Spark with IntelliJ « Hackers and Duffers.

Getting Scala Spark working with IntelliJ IDE. UPDATE: Updated the instructions for build.sbt on 1/29/2015 for Spark 1.2 and Scala 2.11.4 (a minimal build.sbt sketch appears below, after the RFM excerpt). I have recently started using Apache Spark and it is awesome. Interactive Periodic Table of Machine Learning Libraries. RFM Segmentation in R, Pandas, Spark. The RFM Customer Segmentation model is an embarrassingly simple way of segmenting the customer base inside a marketing database. The resulting groups are easy to understand, analyze and act on without the need to go through complex mathematics. Unfortunately, not all CRM platforms contain a module to perform RFM Customer Segmentation. This article gives you a sketch of how to calculate it in R, Pandas and Apache Spark. The first step in RFM Customer Segmentation is to define the three attributes. The model allows for a certain flexibility with definitions and you can adjust them to the specifics of your business.

Recency, which represents the “freshness” of customer activity. Certain businesses and industries will require slightly modified versions of the attributes. In this article we choose the default definitions (in pseudocode) as follows: Snip2Code - Spark SQL with Cassandra. Flexible Data Architecture With Spark, Cassandra, and Impala.
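Returning to the IntelliJ/sbt excerpt above, a minimal build.sbt matching the versions it mentions (Spark 1.2, Scala 2.11.4) might look like the following sketch; the project name and the "provided" scope are assumptions.

    // build.sbt (sketch only, matching the versions the IntelliJ post mentions;
    // the project name is arbitrary and spark-core is marked provided as is typical)
    name := "spark-intellij-example"

    version := "0.1.0"

    scalaVersion := "2.11.4"

    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0" % "provided"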

Overview. Salmon Run. Apache Spark User List - Is there a step-by-step instruction on how to build Spark App with IntelliJ IDEA? Datastax/spark-cassandra-connector. RE: Adding Spark Cassandra dependency breaks Spark Streaming? Scala - Compilation errors with spark cassandra connector and SBT. Owen Rumney - Data Engineer. Spirom/LearningSpark. Getting Scala Spark working with IntelliJ IDE. Kindling: An Introduction to Spark with Cassandra (Part 1) 16.1. Connecting to MySQL with JDBC - Scala Cookbook. How to create a connection between scala and mysql using jdbc. Introduction to Spark & Cassandra. Video about how to use the tCassandra components - Talend v5.2 features.

Blog Archive - Eugene Zhulenev. Integrating Kafka and Spark Streaming: Code Examples and State of the Game. A Spark Learning Journey of a Data Scientist.