Apache Spark

In the world of large-scale data processing, Apache Spark is regarded as a unified analytics engine. Originally developed in 2009 at UC Berkeley's AMPLab, it was open-sourced in 2010 and later donated to the Apache Software Foundation, which now maintains it. Although Apache Spark is written in Scala, its API is available in Python, Scala, Java, and R. Spark also supports Hive, so we can run Hive-style SQL queries through Spark SQL.
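As a minimal sketch of what running a Hive-style SQL query through Spark SQL can look like (the `events` data, column names, and local master setting are illustrative assumptions, not taken from the article):

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlExample {
  def main(args: Array[String]): Unit = {
    // Start (or reuse) a local SparkSession -- the entry point since Spark 2.x.
    val spark = SparkSession.builder()
      .appName("spark-sql-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Register a small in-memory DataFrame as a temporary view so it can be queried with SQL.
    val events = Seq(("click", 3), ("view", 10), ("click", 7)).toDF("event_type", "n")
    events.createOrReplaceTempView("events")

    // Run a Hive-style SQL query against the view.
    spark.sql("SELECT event_type, SUM(n) AS total FROM events GROUP BY event_type").show()

    spark.stop()
  }
}
```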

Spark is an ideal supporting tool for machine learning. Because machine learning algorithms are iterative, they need fast data processing, which Spark handles easily. Spark is an open-source, general-purpose cluster computing framework that speeds up iterative data processing and, in turn, data analysis. It also helps data specialists run machine learning algorithms effectively, as well as streaming and SQL workloads. Spark facilitates in-memory data processing through its high-level APIs.

Apache Spark is a distributed data processing engine that supports both stream and batch data processing.

(Figure: Spark overview, image source: https://mapr.com/blog/spark-101-what-it-what-it-does-and-why-it-matters/assets/image7.png)

A Brief History

Looking back at the history of big data, the relationship between Spark and Hadoop inevitably comes into the picture. Hadoop's data processing engine, MapReduce, used to dominate the space that Spark now occupies. MapReduce is a robust distributed processing framework capable of indexing massive volumes of content on the web, but it has several drawbacks. Spark, now one of the most active projects of the Apache Software Foundation, overcame those drawbacks, which is why it started flourishing around 2013.


What Does Spark Do?

Spark is a robust data processing framework that can handle multiple petabytes of data at a time, distributed across a cluster of thousands of physical or virtual servers. Spark offers an extensive set of APIs and libraries that support languages such as Python, R, Scala, and Java, and this flexibility makes it a good match for a wide range of use cases. Spark is also commonly used with distributed data stores such as Hadoop's HDFS, MapR XD, and Amazon S3, with NoSQL databases such as Apache HBase, MapR Database, Apache Cassandra, and MongoDB, and with distributed messaging systems such as Apache Kafka and MapR Event Store.
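As an illustration of how the same DataFrame API reads from different distributed stores, here is a small sketch; the HDFS and S3 paths are made-up placeholders, and reading from S3 assumes the appropriate Hadoop S3 connector is on the classpath:

```scala
import org.apache.spark.sql.SparkSession

object StorageSourcesExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("storage-sources").getOrCreate()

    // Read a CSV file from HDFS (path is illustrative).
    val hdfsDf = spark.read.option("header", "true").csv("hdfs:///data/sales.csv")

    // Read Parquet data from Amazon S3 (requires the hadoop-aws connector on the classpath).
    val s3Df = spark.read.parquet("s3a://my-bucket/events/")

    // The same DataFrame API applies regardless of where the data lives.
    hdfsDf.printSchema()
    s3Df.groupBy("event_type").count().show()

    spark.stop()
  }
}
```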

Some Typical Use Cases of Spark

Real-time data processing: Streaming data is a growing need in data analytics. These streams may come from anywhere, from log files to sensor readings, arriving in a steady flow, often from multiple sources simultaneously. We could store this data on disk and analyze it later, but it is often far more valuable to act on the data as it arrives. Such streaming data processing is one of Spark's prime areas.
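A minimal Structured Streaming sketch of this idea, closely following the standard socket word-count pattern; the host and port, and the use of a local socket source (fed, say, by `nc -lk 9999`), are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

object StreamingExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streaming-word-count")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Treat lines arriving on a TCP socket as an unbounded table.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Count words across the stream; the result is updated as new data arrives.
    val wordCounts = lines.as[String]
      .flatMap(_.split("\\s+"))
      .groupBy("value")
      .count()

    // Print each updated result table to the console.
    val query = wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```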

Machine learning: Spark stores data in memory and runs queries rapidly, and this speed is a real need in machine learning, where algorithms must be trained on well-structured data sets over many iterations. Spark can run broadly similar queries repeatedly, at scale, which makes it well suited to this workload.
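A small sketch of this workflow using Spark's MLlib; the tiny training set and the logistic regression settings are made up purely for illustration:

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object MlExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ml-example").master("local[*]").getOrCreate()

    // A tiny, made-up training set: a label plus a feature vector per row.
    val training = spark.createDataFrame(Seq(
      (1.0, Vectors.dense(0.0, 1.1, 0.1)),
      (0.0, Vectors.dense(2.0, 1.0, -1.0)),
      (0.0, Vectors.dense(2.0, 1.3, 1.0)),
      (1.0, Vectors.dense(0.0, 1.2, -0.5))
    )).toDF("label", "features")

    // Iterative optimisation benefits from the training data being cached in memory.
    training.cache()

    // Fit a logistic regression model; maxIter controls the number of iterative passes.
    val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.01)
    val model = lr.fit(training)
    println(s"Coefficients: ${model.coefficients} Intercept: ${model.intercept}")

    spark.stop()
  }
}
```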

Interactive Analytics: Spark helps in interactive query processing as it can adapt and respond quickly. As a result, data scientists can explore data on the go instead of running pre-defined queries or using static dashboards.
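For instance, an exploratory session in spark-shell might look like the following sketch; the file path and column names are hypothetical:

```scala
// Typical spark-shell session: the shell already provides `spark` and `sc`.
// The file path and column names here are illustrative.
val trips = spark.read.option("header", "true").option("inferSchema", "true")
  .csv("hdfs:///data/taxi_trips.csv")

// Ad-hoc exploration: inspect the schema, then drill into whatever looks interesting.
trips.printSchema()
trips.groupBy("pickup_zone").avg("fare_amount").show(10)

// Switch to SQL mid-exploration if that is more convenient.
trips.createOrReplaceTempView("trips")
spark.sql("SELECT pickup_zone, COUNT(*) AS n FROM trips GROUP BY pickup_zone ORDER BY n DESC").show(5)
```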

Data integration: The different systems across a business rarely generate clean or consistent data, yet that data must be combined for analysis or reporting. For this purpose, ETL (extract, transform, load) processing is performed: the raw data is pulled from the various systems, cleansed and standardized, and finally loaded into a separate system for analysis. Spark is increasingly being used for this kind of work, reducing the time and cost of the ETL process.
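A sketch of such an ETL job; the source paths, column names, and output location are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EtlExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("etl-example").getOrCreate()

    // Extract: pull raw records from two different systems (paths are illustrative).
    val crm    = spark.read.json("hdfs:///raw/crm/customers/")
    val orders = spark.read.option("header", "true").csv("hdfs:///raw/shop/orders.csv")

    // Transform: cleanse and standardise before joining
    // (normalise emails, drop rows missing the join key).
    val cleanCrm = crm
      .withColumn("email", lower(trim(col("email"))))
      .na.drop(Seq("customer_id"))

    val joined = orders.join(cleanCrm, Seq("customer_id"), "left")

    // Load: write the combined result to a separate analytics store as Parquet.
    joined.write.mode("overwrite").parquet("hdfs:///analytics/orders_enriched/")

    spark.stop()
  }
}
```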

What Is the Relationship Between Spark and MapReduce?

How Spark and MapReduce relate to each other can be analyzed from the following angles:

Spark Replaces MapReduce

As mentioned at the beginning, MapReduce has some inherent speed limitations. Spark was developed to keep the advantages of MapReduce intact while making processing faster, more productive, and easier to implement.

The benefits Spark offers over MapReduce include:

  • Spark processes data faster than MapReduce because it caches data in memory and operates on it in parallel, whereas MapReduce relies heavily on reading from and writing to disk (see the caching sketch after this list).
  • Both run on the JVM, but Spark executes tasks as multiple threads inside long-running executor processes, whereas MapReduce launches a separate, heavier JVM process for each task.
  • Spark also starts up quickly, offers better parallelism, and makes better use of the CPU.
  • Spark provides a richer functional programming model.
  • Spark handles parallel processing of distributed data particularly well for iterative algorithms.
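To make the first point concrete, here is a small sketch of caching a dataset that an iterative computation scans repeatedly; the data and the loop are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

object CachingExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("caching-example").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // A dataset that an iterative algorithm will scan many times (numbers stand in for real records).
    val data = sc.parallelize(1 to 1000000).cache()

    // Each pass reuses the in-memory copy instead of re-reading from disk,
    // which is where Spark gains much of its advantage over MapReduce-style jobs.
    for (i <- 1 to 5) {
      val matched = data.filter(_ % (i + 1) == 0).count()
      println(s"Pass $i: $matched elements matched")
    }

    spark.stop()
  }
}
```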

Spark API Architecture

The Apache Spark API has three main abstractions:

1) RDD: This stands for Resilient Distributed Dataset, the basic data abstraction layer in Apache Spark. An RDD is a fault-tolerant collection of elements that can be operated on in parallel. It is immutable and partitioned across the nodes of the cluster, and it is manipulated through a low-level API of transformations and actions. An RDD can be created in two ways (see the sketch after this list):

  • By parallelizing an existing collection in the driver program
  • By reading data from external storage, such as a shared file system
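A minimal sketch of both creation paths; the HDFS log path is an illustrative assumption:

```scala
import org.apache.spark.sql.SparkSession

object RddCreationExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rdd-creation").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // 1) Parallelize an existing collection in the driver program.
    val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))
    println(numbers.map(_ * 2).reduce(_ + _))   // a transformation (map) followed by an action (reduce)

    // 2) Read data from external storage, e.g. a shared file system (path is illustrative).
    val lines = sc.textFile("hdfs:///data/logs/app.log")
    println(lines.filter(_.contains("ERROR")).count())

    spark.stop()
  }
}
```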

2) DataFrame: A DataFrame can be compared to a table in a relational database, with data organized into named columns. It provides a higher level of abstraction, imposing a structure on a distributed collection of data.
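A small sketch of a DataFrame as a table with named columns; the sample rows are made up:

```scala
import org.apache.spark.sql.SparkSession

object DataFrameExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dataframe-example").master("local[*]").getOrCreate()
    import spark.implicits._

    // A DataFrame behaves like a relational table with named columns.
    val people = Seq(("Alice", 34), ("Bob", 45), ("Carol", 29)).toDF("name", "age")

    people.printSchema()
    people.filter($"age" > 30).select("name").show()

    spark.stop()
  }
}
```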

3) Dataset: A Dataset is a collection of strongly typed, domain-specific objects that can be transformed in parallel using relational or functional operations. Each transformation builds a logical plan, which Spark's optimizer converts into a physical plan when an action is invoked on it. One of the main advantages of Datasets is type safety, which catches analysis and syntax errors early.
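A sketch of a Dataset over a domain-specific case class, illustrating the compile-time type checking; the `Person` type and sample data are assumptions made for the example:

```scala
import org.apache.spark.sql.SparkSession

// A domain-specific type; Datasets are collections of such strongly-typed objects.
case class Person(name: String, age: Int)

object DatasetExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dataset-example").master("local[*]").getOrCreate()
    import spark.implicits._

    val people = Seq(Person("Alice", 34), Person("Bob", 45)).toDS()

    // Functional operations are checked at compile time: `_.age` must exist on Person,
    // whereas a typo in a DataFrame column name would only fail at runtime.
    val adults = people.filter(_.age >= 18).map(p => p.name.toUpperCase)
    adults.show()

    spark.stop()
  }
}
```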

What Sets Spark Apart?

There are many reasons to choose Spark, but the following three are key:

  • Simplicity
  • Speed
  • Support

Apart from these three reasons, another powerful feature of Spark is its ability to build data pipelines, combining different processes and techniques into a single flow. Spark is a one-stop solution for chaining discrete tasks such as selecting data, transforming that data, and analyzing the transformed results, executing the steps in memory one after another. Through this pipelining, Spark combines a large number of inputs and consistently delivers the desired result.
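A small sketch of such a pipeline, chaining selection, transformation, and analysis into one flow; the source path and column names are illustrative:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object PipelineExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("pipeline-example").getOrCreate()

    // Selecting, transforming, and analysing data expressed as one chained pipeline;
    // Spark only executes it when the final action (show) is invoked.
    spark.read.option("header", "true").csv("hdfs:///data/orders.csv")   // select the source (illustrative path)
      .select("customer_id", "amount", "country")                        // keep only the needed columns
      .withColumn("amount", col("amount").cast("double"))                // transform: fix types
      .filter(col("country") === "US")                                   // transform: narrow the data
      .groupBy("customer_id").agg(sum("amount").alias("total_spent"))    // analyse: aggregate
      .orderBy(desc("total_spent"))
      .show(10)

    spark.stop()
  }
}
```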
