The future of sustainability and training: involvement and performance of companies and strategic suppliers
Updated Apr 8, 2024 - SQL
Ingested Tokyo Olympics data into Azure Data Lake using Azure Data Factory. Improved data quality with Apache Spark on Azure Databricks. Optimized SQL queries in Synapse Analytics, reducing execution time. Built engaging Power BI dashboards with DAX-based KPIs, boosting user engagement.
The data engineering project aims to migrate a company's on-premises database to Azure, leveraging Azure Data Factory for data ingestion, transformation, and storage. The project implements a three-stage storage strategy consisting of bronze, silver, and gold data layers (Medallion architecture). Project documentation is provided as a PDF file.
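The bronze/silver/gold layering described above can be sketched as follows. This is a minimal in-memory illustration using pandas for brevity (the project itself uses Azure Data Factory and Delta storage); the column names and sample values are hypothetical.

```python
import pandas as pd

# Bronze: raw ingest, stored as-is (here an in-memory stand-in for the
# raw landing zone; values arrive untyped, with duplicates and nulls).
bronze = pd.DataFrame({
    "region": ["EU", "EU", "US", None],
    "amount": ["10.5", "10.5", "3.0", "7.2"],
})

# Silver: cleaned and typed — deduplicated, numeric casts, nulls removed.
silver = (bronze.drop_duplicates()
                .assign(amount=lambda d: pd.to_numeric(d["amount"]))
                .dropna(subset=["region"]))

# Gold: business-level aggregate, ready for reporting.
gold = silver.groupby("region", as_index=False)["amount"].sum()
```

Each layer is normally persisted separately (e.g. as Delta tables), so downstream consumers can choose the refinement level they need.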
Tokyo Olympics Data Analysis: Creating an ETL pipeline using Azure Data Factory to ingest data, transforming it with Azure Databricks, and querying and building reports with tools like Synapse Analytics and Power BI.
Implemented an end-to-end Azure data engineering solution to process Tokyo Olympics 2021 data, encompassing extraction, transformation, analytics, and visualization.
This project builds an End-to-End Azure Data Engineering Pipeline, performing ETL and Analytics Reporting on the AdventureWorks2022LT Database.
This Azure Data Factory (ADF) pipeline is automatically triggered whenever a CSV file is dropped in a specific location within the Storage Account Container. The pipeline reads the contents of the CSV file, converts it into an HTML table, and then sends a notification containing the newly created table to the designated Microsoft Teams channel.
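The CSV-to-HTML-table step of that pipeline can be sketched in plain Python; the function name and sample CSV below are hypothetical, and the real pipeline performs this inside ADF before POSTing the result to the Teams incoming-webhook URL configured for the channel.

```python
import csv
import io

def csv_to_html_table(csv_text: str) -> str:
    # Parse the CSV and render it as a simple HTML table — the same
    # shape of payload the pipeline sends to Microsoft Teams.
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    cells = "".join(f"<th>{h}</th>" for h in header)
    html = f"<table><tr>{cells}</tr>"
    for row in body:
        html += "<tr>" + "".join(f"<td>{c}</td>" for c in row) + "</tr>"
    return html + "</table>"

html = csv_to_html_table("name,score\nada,42\ngrace,7\n")
# The notification step would then POST {"text": html} to the
# Teams webhook (URL omitted here — it is channel-specific).
```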
Copying data from an Amazon S3 bucket to an Azure Blob container using an Azure Data Factory pipeline. The data is mounted in Databricks, and further analysis is done using Spark SQL.
Repository created for programming and development work as an Azure Data Engineer.
The aim of this project is to build a cost-efficient data warehouse on Amazon's retail sales data and perform customer-lifetime-value analyses.
Examples of Flink on Azure
Local SQL Database ---> Azure ---> Power BI
Covid ETL Project using Azure Data Engineering Stack
Azure for End to End Data Science Project
This project presents a sophisticated data-driven web application that integrates React for frontend visualization, NodeJS for backend data retrieval, and Microsoft's Cosmos DB for data storage, leveraging Cosmos DB's fault tolerance, partitioning, replication, and global distribution.
Transform data from on-premises SQL Server to Azure Delta Lake Storage for Analytics and Visualization
Bugs In Cloud - List of Videos
This project uses Azure Data Factory to extract roughly 800k records from a CSV file, converts them to Parquet, and finally builds a Power BI report on top of the Parquet data.
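The CSV-to-Parquet conversion step can be sketched with pandas. The column names and in-memory CSV below are hypothetical stand-ins for the real file, and chunked reading is shown because it is the usual way to keep hundreds of thousands of rows memory-friendly; the Parquet write itself is left commented since it requires pyarrow or fastparquet.

```python
import io
import pandas as pd

# Hypothetical stand-in for the large CSV delivered by the data factory.
raw = "id,value\n" + "\n".join(f"{i},{i * 2}" for i in range(1000))

# Read in chunks so a large file (e.g. ~800k rows) never has to be
# parsed into memory all at once.
chunks = pd.read_csv(io.StringIO(raw), chunksize=250)
processed = pd.concat(chunk for chunk in chunks)

# In the pipeline, the frame is then written as Parquet — a columnar,
# compressed format that Power BI can report on efficiently:
# processed.to_parquet("output/records.parquet")  # needs pyarrow/fastparquet
```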