EuroPython 2015 PySpark - Data Processing in Python on top of Apache Spark
This is the recording of my talk "PySpark - Data Processing in Python on top of Apache Spark", which I gave at EuroPython 2015 in Bilbao:
Apache Spark is a computational engine for large-scale data processing. It is responsible for scheduling, distributing, and monitoring applications that consist of many computational tasks running across worker machines in a compute cluster.
This talk gives an overview of PySpark with a focus on Resilient Distributed Datasets (RDDs) and the DataFrame API. While Spark Core is written in Scala and runs on the JVM, PySpark exposes the Spark programming model to Python and defines an API for RDDs. RDDs are a distributed memory abstraction that lets you perform in-memory computations on large clusters in a fault-tolerant way; they are immutable, partitioned collections of objects. Transformations create a new RDD from an existing one, while actions compute results from an RDD and trigger execution. Multiple computation steps are expressed as a directed acyclic graph (DAG), a generalization of the Hadoop MapReduce computation model, and execution stays lazy until an action runs.
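
To make that concrete, here is a minimal sketch of transformations and actions in the PySpark RDD API; the file name access.log and the filtering logic are made-up examples, not part of the talk:

```python
from pyspark import SparkContext

sc = SparkContext(appName="rdd-sketch")

# Transformations are lazy: each one only describes a new RDD in the DAG.
lines = sc.textFile("access.log")               # hypothetical input file
errors = lines.filter(lambda l: "ERROR" in l)   # transformation: new RDD
words = errors.flatMap(lambda l: l.split())     # transformation: new RDD

# Actions trigger execution of the DAG built above.
print(errors.count())   # action: number of matching lines
print(words.take(5))    # action: first five tokens

sc.stop()
```

Nothing is read or computed until count() or take() runs, which is exactly the lazy, DAG-based execution model described above.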
The Spark DataFrame API was introduced in Spark 1.3. DataFrames evolve Spark's RDD model and are inspired by data frames in pandas and R. The API provides simple operators for filtering, aggregating, joining, and projecting large datasets, and it supports a range of data sources such as JSON and Parquet files, Hive tables, and JDBC databases.
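
As a rough illustration of that API, here is a small sketch that loads a JSON source and applies a few relational-style operators. The file people.json and the column names are assumptions for the example, and the sqlContext.read.json reader shown here arrived in Spark 1.4 (Spark 1.3 used sqlContext.jsonFile instead):

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="dataframe-sketch")
sqlContext = SQLContext(sc)

# Load a DataFrame from a JSON data source (one JSON object per line).
people = sqlContext.read.json("people.json")    # hypothetical file

# Project, filter, and aggregate with DataFrame operators.
adults = people.filter(people.age >= 18).select("name", "age")
adults.groupBy("age").count().show()

sc.stop()
```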
