Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Azure Data Factory, Databricks, and SQL Server.
• Implement ETL/ELT processes to ingest, transform, and load data from various sources (on-prem, cloud, APIs).
• Optimize data storage and retrieval using Azure Data Lake, Delta Lake, and SQL-based solutions.
• Collaborate with tech leads to define data models and schemas and to follow established standards.
• Monitor and troubleshoot data workflows, ensuring reliability and performance.
• Work with DevOps to implement CI/CD pipelines for data engineering solutions using Azure DevOps.
• Ensure data security and compliance with enterprise and regulatory standards.

Required Skills & Qualifications:
• 7+ years of experience in data engineering, with at least 3 years in Azure cloud environments.
• Strong proficiency in Azure Data Factory, Azure Databricks, Azure SQL Server, and Azure Data Lake.
• Solid understanding of SQL, Python, and Spark for data processing and transformation.
• Experience with Delta Lake.
• Familiarity with CI/CD practices and tools (e.g., Azure DevOps).
• Knowledge of data governance, security, and compliance best practices.
• Excellent problem-solving and communication skills.