Apache Spark Training Courses

Apache Spark Training

Training courses for Apache Spark, an engine for big data processing

Apache Spark Course Outlines

Each outline below lists the course ID, name, duration, and overview.
Spark for Developers (ID 566826, 21 hours)

OBJECTIVE: This course introduces Apache Spark. Students learn how Spark fits into the Big Data ecosystem and how to use Spark for data analysis. The course covers the Spark shell for interactive data analysis, Spark internals, the Spark APIs, Spark SQL, Spark Streaming, machine learning (MLlib), and GraphX.

AUDIENCE: Developers / Data Analysts

- Scala primer
  - A quick introduction to Scala
  - Labs: Getting to know Scala
- Spark Basics
  - Background and history
  - Spark and Hadoop
  - Spark concepts and architecture
  - Spark ecosystem (Core, Spark SQL, MLlib, Streaming)
  - Labs: Installing and running Spark
- First Look at Spark
  - Running Spark in local mode
  - Spark web UI
  - Spark shell
  - Analyzing a dataset, part 1
  - Inspecting RDDs
  - Labs: Spark shell exploration
- RDDs
  - RDD concepts
  - Partitions
  - RDD operations / transformations
  - RDD types
  - Key-value pair RDDs
  - MapReduce on RDDs
  - Caching and persistence
  - Labs: Creating and inspecting RDDs; caching RDDs
- Spark API programming
  - Introduction to the Spark API / RDD API
  - Submitting the first program to Spark
  - Debugging / logging
  - Configuration properties
  - Labs: Programming with the Spark API, submitting jobs
- Spark SQL
  - SQL support in Spark
  - DataFrames
  - Defining tables and importing datasets
  - Querying DataFrames using SQL
  - Storage formats: JSON / Parquet
  - Labs: Creating and querying DataFrames; evaluating data formats
- MLlib
  - MLlib intro
  - MLlib algorithms
  - Labs: Writing MLlib applications
- GraphX
  - GraphX library overview
  - GraphX APIs
  - Labs: Processing graph data using Spark
- Spark Streaming
  - Streaming overview
  - Evaluating streaming platforms
  - Streaming operations
  - Sliding window operations
  - Labs: Writing Spark Streaming applications
- Spark and Hadoop
  - Hadoop intro (HDFS / YARN)
  - Hadoop + Spark architecture
  - Running Spark on Hadoop YARN
  - Processing HDFS files using Spark
- Spark Performance and Tuning
  - Broadcast variables
  - Accumulators
  - Memory management and caching
- Spark Operations
  - Deploying Spark in production
  - Sample deployment templates
  - Configurations
  - Monitoring
  - Troubleshooting
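The key-value pair RDD and MapReduce topics above revolve around one pattern: map each record to (key, value) pairs, then reduce the values per key. A minimal sketch of that pattern in plain Python, no Spark required (the helper names are illustrative; in PySpark the equivalent shape would be a flatMap/map followed by reduceByKey):

```python
from collections import defaultdict
from functools import reduce

def map_phase(lines):
    """Map each line to (word, 1) pairs -- the flatMap/map step."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_by_key(pairs, fn):
    """Group values by key, then fold each group -- the reduceByKey step."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce(fn, values) for key, values in groups.items()}

lines = ["Spark is fast", "Spark is general"]
counts = reduce_by_key(map_phase(lines), lambda a, b: a + b)
print(counts)  # {'spark': 2, 'is': 2, 'fast': 1, 'general': 1}
```

The same word-count logic is what the RDD labs distribute across partitions: the map phase is embarrassingly parallel, and only the per-key reduction requires shuffling data between workers.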
Apache Spark (ID 359602, 14 hours)

- Why Spark?
  - Problems with traditional large-scale systems
  - Introducing Spark
- Spark Basics
  - What is Apache Spark?
  - Using the Spark shell
  - Resilient Distributed Datasets (RDDs)
  - Functional programming with Spark
- Working with RDDs
  - RDD operations
  - Key-value pair RDDs
  - MapReduce and pair RDD operations
- The Hadoop Distributed File System
  - Why HDFS?
  - HDFS architecture
  - Using HDFS
- Running Spark on a Cluster
  - Overview
  - A Spark standalone cluster
  - The Spark standalone web UI
- Parallel Programming with Spark
  - RDD partitions and HDFS data locality
  - Working with partitions
  - Executing parallel operations
- Caching and Persistence
  - RDD lineage
  - Caching overview
  - Distributed persistence
- Writing Spark Applications
  - Spark applications vs. the Spark shell
  - Creating the SparkContext
  - Configuring Spark properties
  - Building and running a Spark application
  - Logging
- Spark, Hadoop, and the Enterprise Data Center
  - Overview
  - Spark and the Hadoop ecosystem
  - Spark and MapReduce
- Spark Streaming
  - Spark Streaming overview
  - Example: streaming word count
  - Other streaming operations
  - Sliding window operations
  - Developing Spark Streaming applications
- Common Spark Algorithms
  - Iterative algorithms
  - Graph analysis
  - Machine learning
- Improving Spark Performance
  - Shared variables: broadcast variables
  - Shared variables: accumulators
  - Common performance issues
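The sliding-window operations listed in the streaming unit (for instance Spark Streaming's reduceByKeyAndWindow) boil down to re-aggregating the micro-batches that fall inside each window position. A plain-Python sketch of the idea, assuming hypothetical per-batch word counts rather than a live stream:

```python
from collections import Counter

def windowed_counts(batches, window, slide):
    """For each window position, merge the per-batch word counts that
    fall inside the window -- the idea behind reduceByKeyAndWindow."""
    results = []
    for start in range(0, len(batches) - window + 1, slide):
        merged = Counter()
        for batch in batches[start:start + window]:
            merged.update(batch)
        results.append(dict(merged))
    return results

# Hypothetical per-batch word counts (one Counter per micro-batch).
batches = [Counter(spark=1), Counter(spark=2, hadoop=1), Counter(hadoop=3)]
print(windowed_counts(batches, window=2, slide=1))
# [{'spark': 3, 'hadoop': 1}, {'spark': 2, 'hadoop': 4}]
```

In a real streaming job the window and slide are time intervals (multiples of the batch interval), and the framework incrementally adds the newest batch and subtracts the oldest instead of recomputing each window from scratch.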
Big Data Analytics (ID 567229, 21 hours)

AUDIENCE: If you are trying to make sense of the data you have access to, or want to analyse unstructured data available on the net (such as Twitter, LinkedIn, etc.), this course is for you. It is mostly aimed at people who need to choose which data is worth collecting and which is worth analysing. It is not aimed at people configuring the solution, although those people will benefit from the big picture.

DELIVERY MODE: During the course, delegates will be presented with working examples of mostly open-source technologies. Short lectures are followed by presentations and simple exercises by the participants.

CONTENT AND SOFTWARE USED: All software used is updated each time the course is run, so we check the newest versions possible. The course covers the process from obtaining, formatting, processing, and analysing data, and explains how to automate the decision-making process with machine learning.

Day 1: Big Data Analytics (8.5 hours)
- Quick Overview
  - Data sources
  - Mining data
  - Recommender systems
- Data Types
  - Structured vs. unstructured
  - Static vs. streamed
  - Data-driven vs. user-driven analytics
  - Data validity
- Models and Classification
  - Statistical models
  - Classification
  - Clustering: k-groups, k-means, nearest neighbours
  - Ant colonies, birds flocking
- Predictive Models
  - Decision trees
  - Support vector machines
  - Naive Bayes classification
  - Markov models
  - Regression
  - Ensemble methods
- Building Models
  - Data preparation (MapReduce)
  - Data cleansing
  - Developing and testing a model
  - Model evaluation, deployment, and integration
- Overview of Open Source and Commercial Software
  - Selection of R-project packages
  - Python libraries
  - Hadoop and Mahout
  - Selected Apache projects related to Big Data and analytics
  - Selected commercial solutions
  - Integration with existing software and data sources

Day 2: Mahout and Spark (8.5 hours)
- Implementing Recommendation Systems with Mahout
  - Introduction to recommender systems
  - Representing recommender data
  - Making recommendations
  - Optimizing recommendations
- Spark Basics
  - Spark and Hadoop
  - Spark concepts and architecture
  - Spark ecosystem (Core, Spark SQL, MLlib, Streaming)
  - Labs: Installing and running Spark
  - Running Spark in local mode
  - Spark web UI
  - Spark shell
  - Inspecting RDDs
  - Labs: Spark shell exploration
- Spark API Programming
  - Introduction to the Spark API / RDD API
  - Submitting the first program to Spark
  - Debugging / logging
  - Configuration properties
- Spark and Hadoop
  - Hadoop intro (HDFS / YARN)
  - Hadoop + Spark architecture
  - Running Spark on Hadoop YARN
  - Processing HDFS files using Spark
- Spark Operations
  - Deploying Spark in production
  - Sample deployment templates
  - Configurations
  - Monitoring
  - Troubleshooting

Day 3: Google Cloud Platform Big Data & Machine Learning Fundamentals (4 hours)
- Data Analytics on the Cloud
  - What is the Google Cloud Platform?
  - GCP Big Data products
- CloudSQL: your SQL database on the cloud
  - A no-ops database
  - Lab: Importing data into CloudSQL and running queries on rentals data
- Dataproc
  - Managed Hadoop + Pig + Spark on the cloud
  - Lab: Machine learning with SparkML
- Scaling Data Analysis
  - Fast random access
  - Datastore: key-entity
  - BigTable: wide-column
- Datalab
  - Why Datalab? (interactive, iterative)
  - Demo: Sample notebook in Datalab
- BigQuery
  - Interactive queries on petabytes
  - Lab: Build a machine learning dataset
- Machine Learning with TensorFlow
  - TensorFlow
  - Lab: Train and use a neural network
- Fully Built Models for Common Needs
  - Vision API
  - Translate API
  - Lab: Translate
- Genomics API (optional)
  - What is linkage disequilibrium?
  - Finding LD using Dataflow and BigQuery
- Data Processing Architectures
  - Asynchronous processing with TaskQueues
  - Message-oriented architectures with Pub/Sub
  - Creating pipelines with Dataflow
- Summary
  - Where to go from here
  - Resources
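Among the Day 1 clustering topics, k-means is the easiest to sketch. Below is a minimal plain-Python version of Lloyd's algorithm on 1-D data (the function name and toy data are illustrative; the labs would use a library implementation such as SparkML's or R's):

```python
def kmeans_1d(points, centroids, iterations=10):
    """Lloyd's algorithm on 1-D data: assign each point to its nearest
    centroid, then move each centroid to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
print(kmeans_1d(points, centroids=[0.0, 10.0]))  # approximately [1.0, 8.0]
```

The same assign-then-average loop is what a distributed implementation parallelizes: the assignment step runs independently per partition, and only the per-cluster sums are aggregated centrally between iterations.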

Course Discounts

Course: Excel Advanced with VBA
Venue: Basel
Date: Mon, 2016-10-24 09:30
Price [Remote/Classroom]: 1500 EUR / 2150 EUR

Upcoming Courses

Course: Apache Spark - Basel
Date: Wed, 2016-10-12 08:00
Price [Remote/Classroom]: 3398 EUR / 3748 EUR

Course: Apache Spark - Bern
Date: Tue, 2016-11-08 09:30
Price [Remote/Classroom]: 4530 EUR / 5030 EUR
