Apache Spark End-To-End Data Engineering Project | Apple Data Analysis

The Big Data Show
Dive into the world of big data processing with our PySpark Practice playlist. This series is designed for both beginners and seasoned data professionals looking to sharpen their Apache Spark skills through scenario-based questions and challenges.

Each video provides step-by-step solutions to real-world problems, helping you master PySpark techniques and improve your data-handling capabilities. Whether preparing for a job interview or just learning more about Spark, this playlist is your go-to resource for practical, hands-on learning. Join us to become a PySpark expert!

In this video, we use Databricks to build multiple ETL pipelines with PySpark, the Python API of Apache Spark.

We read from multiple sources, namely CSV, Parquet, and Delta tables, and use the Factory Pattern to build the reader class. The Factory Pattern is one of the most common low-level designs in data engineering pipelines that ingest from multiple sources.
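As a hedged illustration of this pattern (the class names, source set, and file path below are placeholders, not the exact code from the video), a reader factory in PySpark might look like this:

```python
from pyspark.sql import SparkSession, DataFrame

spark = SparkSession.builder.getOrCreate()

class DataSource:
    """Abstract reader: every concrete source returns a DataFrame."""
    def __init__(self, path: str):
        self.path = path

    def get_dataframe(self) -> DataFrame:
        raise NotImplementedError

class CSVDataSource(DataSource):
    def get_dataframe(self) -> DataFrame:
        return spark.read.format("csv").option("header", "true").load(self.path)

class ParquetDataSource(DataSource):
    def get_dataframe(self) -> DataFrame:
        return spark.read.format("parquet").load(self.path)

class DeltaDataSource(DataSource):
    def get_dataframe(self) -> DataFrame:
        # For Delta, `path` is treated as a table name (Databricks-style).
        return spark.read.table(self.path)

def get_data_source(data_type: str, path: str) -> DataSource:
    # The factory: callers ask for a reader by type string and never
    # depend on the concrete classes.
    sources = {"csv": CSVDataSource, "parquet": ParquetDataSource, "delta": DeltaDataSource}
    if data_type not in sources:
        raise ValueError(f"Reader not implemented for data_type: {data_type}")
    return sources[data_type](path)

# Hypothetical usage; the path is a placeholder.
customer_df = get_data_source("csv", "/FileStore/tables/customer.csv").get_dataframe()
```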

We then use the PySpark DataFrame API and Spark SQL to write the business transformation logic. In the loader, we persist the data in two ways: once into a Data Lake and once into a Data Lakehouse.
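A minimal sketch of the two load styles, with placeholder paths, table names, and schema (the Delta write assumes a Databricks-style runtime where Delta Lake is available):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Stand-in for the transformed output; schema and rows are illustrative.
transformed_df = spark.createDataFrame(
    [("AirPods", "accessory", 249.0), ("MacBook Pro", "laptop", 1999.0)],
    ["product", "category", "price"],
)

# Data Lake style: plain files in object storage, partitioned for read pruning.
(transformed_df.write
    .format("parquet")
    .mode("overwrite")
    .partitionBy("category")
    .save("/mnt/datalake/apple_analysis"))  # placeholder path

# Data Lakehouse style: a managed Delta table, which adds ACID transactions
# and time travel on top of the same open file storage.
(transformed_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("default.apple_analysis_output"))  # placeholder table name
```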

Along the way, we also work through some of the most frequently asked PySpark #interview problems, discussing and demonstrating concepts such as broadcast joins, partitioning and bucketing, the SparkSession, window functions like LAG and LEAD, Delta tables, and more.
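For example, the LAG and LEAD window functions can be sketched like this, on illustrative data rather than the video's actual tables:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lag, lead
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

# Illustrative purchase history.
purchases = spark.createDataFrame(
    [(1, "iPhone", "2024-01-01"), (1, "AirPods", "2024-01-05"),
     (2, "MacBook", "2024-01-02"), (2, "iPhone", "2024-01-09")],
    ["customer_id", "product", "purchase_date"],
)

w = Window.partitionBy("customer_id").orderBy("purchase_date")
with_neighbours = (purchases
    .withColumn("prev_product", lag("product").over(w))    # LAG: value from the previous row
    .withColumn("next_product", lead("product").over(w)))  # LEAD: value from the next row

# e.g. customers whose next purchase after an iPhone was AirPods:
with_neighbours.filter(
    (col("product") == "iPhone") & (col("next_product") == "AirPods")
).show()
```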

After watching, please let us know your thoughts.

Stay tuned to this playlist for all upcoming videos.

𝗝𝗼𝗶𝗻 𝗺𝗲 𝗼𝗻 𝗦𝗼𝗰𝗶𝗮𝗹 𝗠𝗲𝗱𝗶𝗮:
🔅 Topmate (For collaboration and Scheduling calls) - https://topmate.io/ankur_ranjan
🔅 LinkedIn - thebigdatashow
🔅 Instagram - ranjan_anku

Databricks notebook links: download the zip folder, extract it, and open the HTML files as notebooks in the Databricks Community Edition.

🔅 Recommended link for Databricks Community Edition login (after signing up):
https://community.cloud.databricks.com/

🔅 Ankur's Notebook source files
https://drive.google.com/file/d/15FBg...

🔅 Input table files
https://drive.google.com/drive/folder...

For practising different Data Engineering interview questions, go to the community section of our YouTube page.  

@thebigdatashow

Narrow vs Wide Transformation

Short Article link:
Post
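In short: a narrow transformation (e.g. filter, select, withColumn) computes each output partition from a single input partition, so no data moves between executors; a wide transformation (e.g. groupBy, join, distinct) must shuffle rows with the same key to the same partition and starts a new stage. A minimal sketch on illustrative data:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, sum as sum_

spark = SparkSession.builder.getOrCreate()

sales = spark.createDataFrame(
    [("iPhone", 999.0), ("AirPods", 249.0), ("iPhone", 999.0)],
    ["product", "price"],
)

# Narrow: each output partition depends on exactly one input partition (no shuffle).
premium = sales.filter(col("price") > 300).withColumn("with_tax", col("price") * 1.18)

# Wide: rows sharing a key must be co-located, so Spark shuffles and starts a new stage.
totals = sales.groupBy("product").agg(sum_("price").alias("total_revenue"))

totals.explain()  # the physical plan shows an Exchange (shuffle) for the wide step
```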

Question 1:
Post


Question 2:
Post

Question 3:
Post

Question 4:
Post

Question 5:
Post

Question 6:
Post

Question 7:
Post

Question 8:
Post

Question 9:
Post

Question 10:
Post

Broadcast Join in #apachespark

Small article link:

Post
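As a quick, hedged sketch (the tables are illustrative): wrapping the small DataFrame in broadcast() ships it to every executor, so the join runs map-side and the large table is never shuffled.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Illustrative data: a large fact table and a small dimension table.
transactions = spark.createDataFrame(
    [(1, "iPhone"), (2, "AirPods"), (1, "MacBook")],
    ["customer_id", "product"],
)
customers = spark.createDataFrame(
    [(1, "Ankur"), (2, "Riya")],
    ["customer_id", "name"],
)

# broadcast() hints Spark to replicate the small table to every executor.
joined = transactions.join(broadcast(customers), on="customer_id", how="inner")
joined.explain()  # plan shows BroadcastHashJoin instead of SortMergeJoin
```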


MCQs list

1. @thebigdatashow

2. @thebigdatashow

3. @thebigdatashow

4. @thebigdatashow

5. @thebigdatashow

Check the COMMUNITY section for a full list of questions.


Chapters
00:00 - Project Introduction
12:04 - How to use Databricks for any PySpark/Spark project?
25:09 - Low-Level Design code
40:39 - Jobs, Stages, and Actions in Spark
45:22 - Designing a code base for the Spark project
51:40 - Applying the first business logic in the transformer class
57:34 - Difference between the LAG & LEAD window functions
01:28:42 - Broadcast Join in Apache Spark/PySpark
01:47:50 - Difference between Partitioning and Bucketing in Apache Spark/PySpark
02:07:00 - Detailed summary of the first pipeline
02:14:00 - Second pipeline goal
02:24:57 - collect_set() and collect_list() in Spark/PySpark (see the sketch below this list)
02:48:53 - Detailed summary of the second pipeline
02:51:03 - Why Delta Lake when we already have a Data Lake?
02:54:51 - Summary
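For the collect_set() and collect_list() chapter, a minimal sketch on illustrative data: collect_list keeps duplicates within each group, while collect_set removes them.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import collect_list, collect_set

spark = SparkSession.builder.getOrCreate()

purchases = spark.createDataFrame(
    [(1, "iPhone"), (1, "AirPods"), (1, "iPhone")],
    ["customer_id", "product"],
)

purchases.groupBy("customer_id").agg(
    collect_list("product").alias("all_products"),      # keeps duplicates: [iPhone, AirPods, iPhone]
    collect_set("product").alias("distinct_products"),  # de-duplicated: [iPhone, AirPods]
).show(truncate=False)
```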

#databricks  #delta  #pyspark   #practice  #dataengineering  #apachespark  #problemsolving
#spark  #bigdata #interviewquestions #sql #datascience #dataanalytics