Introduction to Data Engineering with Microsoft Azure 2
Further your knowledge of Microsoft Azure services and improve your data engineering skills with this online course from Microsoft.
Duration: 6 weeks
Weekly study: 4 hours
100% online
This course has been created in partnership with Microsoft.
Building on your learning from Introduction to Data Engineering with Microsoft Azure 1, this course will develop your understanding of data engineering processes in Microsoft Azure, further preparing you to take the DP-203 exam and kickstart your career in data engineering.
Using Azure data services and tools, you’ll be able to implement, develop, and optimise data storage, processing and security operations within your organisation.
You’ll be introduced to tools including Azure Synapse, Databricks and Azure Data Lake Storage, learning how each can improve and streamline your processes.
As businesses continue to move to digital processes, they recognise the value of making faster, well-informed decisions and the impact this can have on gaining a competitive advantage.
You’ll be guided through hybrid transactional/analytical processing (HTAP) architecture and learn how to design HTAP solutions using Azure Synapse Analytics.
With this knowledge, you’ll be able to run analytics in near-real-time, giving you the ability to respond to opportunities at speed.
Azure Databricks, a cloud-based big data and machine learning platform, empowers developers by simplifying enterprise-grade data application production.
You’ll identify the advantages of Azure Databricks over other big data platforms, and learn how to spend more time building apps and less time managing infrastructure.
You’ll finish this course understanding how Microsoft Azure can be used to optimise data engineering operations. Having completed both courses, you’ll be equipped to take the DP-203 exam and develop a career as a data professional.
In this activity, you will learn about planning hybrid transactional and analytical processing using Azure Synapse Analytics.
During this week, you will learn how to configure Azure Synapse Link with Azure Cosmos DB.
In this activity, you will learn how to query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics.
In this activity, you will learn about querying Azure Cosmos DB with serverless SQL pools in Azure Synapse Analytics.
In this activity, you will be introduced to Azure Databricks.
In this activity, you will learn about Spark architecture fundamentals.
During this week, you will learn about reading and writing data in Azure Databricks.
During this week, you will learn how to work with DataFrames in Azure Databricks.
During this week, you will learn how to work with DataFrame columns in Azure Databricks.
During this week, you will learn about lazy evaluation and other performance features in Azure Databricks.
In this activity, you will learn how to work with advanced DataFrame methods in Azure Databricks.
During this week, you will learn about platform architecture, security, and data protection in Azure Databricks.
Learn how to use Delta Lake to create, append, and upsert data to Apache Spark tables, taking advantage of built-in reliability and optimizations.
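As a rough illustration of the upsert (merge) semantics mentioned above — not Delta Lake’s actual API, which runs on Apache Spark tables via `MERGE INTO` or `DeltaTable.merge` — the following plain-Python sketch shows what an upsert does: matched rows are updated, unmatched rows are inserted.

```python
# Plain-Python sketch of upsert ("merge") semantics. Illustrative only:
# real Delta Lake applies this to Spark tables with MERGE INTO, with
# ACID guarantees this toy version does not provide.

def upsert(table, updates, key="id"):
    """Update rows whose key already exists; insert rows whose key is new."""
    by_key = {row[key]: dict(row) for row in table}
    for row in updates:
        if row[key] in by_key:
            by_key[row[key]].update(row)   # matched -> update in place
        else:
            by_key[row[key]] = dict(row)   # not matched -> insert
    return sorted(by_key.values(), key=lambda r: r[key])

# Hypothetical sample data, not from the course.
customers = [{"id": 1, "city": "Oslo"}, {"id": 2, "city": "Leeds"}]
changes   = [{"id": 2, "city": "York"}, {"id": 3, "city": "Bath"}]
merged = upsert(customers, changes)
# id 2 is updated, id 3 is inserted, id 1 is untouched
```

The append case is simply the insert branch; Delta Lake adds transactional guarantees and file-level optimizations on top of these semantics.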
Learn how Structured Streaming helps you process streaming data in real time, and how you can aggregate data over windows of time.
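To make the idea of aggregating over windows of time concrete, here is a plain-Python sketch of a tumbling-window count — the kind of aggregation Structured Streaming expresses with `groupBy(window(...))` on a streaming DataFrame. This is an assumption-free toy over a static list, not a real stream.

```python
from collections import defaultdict

# Tumbling windows are fixed-size, non-overlapping intervals: an event
# with timestamp ts falls in the window starting at ts rounded down to
# the nearest window boundary.

def tumbling_window_counts(events, window_seconds):
    """Count events per tumbling window; keys are window start times."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

# Hypothetical (timestamp_in_seconds, payload) pairs.
events = [(0, "a"), (5, "b"), (12, "c"), (14, "d"), (25, "e")]
counts = tumbling_window_counts(events, window_seconds=10)
# windows: [0,10) has 2 events, [10,20) has 2, [20,30) has 1
```

In Structured Streaming the same grouping runs incrementally as data arrives, with watermarks deciding when a window’s result is final.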
Use Delta Lake as an optimization layer on top of blob storage to ensure reliability and low latency within unified streaming and batch data pipelines.
Azure Data Factory helps you create workflows that orchestrate data movement and transformation at scale. Integrate Azure Databricks into your production pipelines by calling notebooks and libraries.
CI/CD isn't just for developers. Learn how to put Azure Databricks notebooks under version control in an Azure DevOps repo and build deployment pipelines to manage your release process.
Azure Databricks is just one of many powerful data services in Azure. Learn how to integrate with Azure Synapse Analytics as part of your data architecture.
Learn best practices for workspace administration, security, tools, integration, Databricks Runtime, HA/DR, and clusters in Azure Databricks.
Learn how Azure Data Lake Storage provides a highly available, secure, durable, scalable, and redundant cloud storage service, and how it brings new efficiencies to processing big data analytics workloads.
Learn various ways to upload data to Data Lake Storage Gen2: through the Azure portal, Azure Storage Explorer, or .NET, or by copying data with Azure Data Factory.
Learn how Azure Storage provides multilayered security to protect your data. Find out how to use access keys, secure networks, and use Advanced Threat Protection to proactively monitor your system.
Explore how Azure Stream Analytics integrates with your applications or Internet of Things (IoT) devices to gain insights with real-time streaming data. Learn how to consume and analyze data streams and derive actionable results.
Connect sending and receiving applications with Event Hubs so you can handle extremely high loads without losing data.
Learn how to create Azure Stream Analytics jobs to process input data, transform it with a query, and return results.
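A Stream Analytics job's transformation step is written in a SQL-like query language. As a hypothetical sketch (the `[iothub-input]` and `[blob-output]` aliases and the field names are placeholders defined on the job, not real resources), a query averaging sensor temperature over 30-second tumbling windows might look like:

```sql
-- Hypothetical Stream Analytics query; input/output aliases and
-- field names are placeholders, not actual course resources.
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature,
    System.Timestamp() AS windowEnd
INTO
    [blob-output]
FROM
    [iothub-input] TIMESTAMP BY eventTime
GROUP BY
    deviceId,
    TumblingWindow(second, 30)
```

The `INTO` clause routes results to the job's configured output, while `TIMESTAMP BY` tells the engine which field carries event time for windowing.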