Key Responsibilities
Design and develop scalable data pipelines using Azure Data Factory (ADF)
Build and optimize data transformations using Databricks and advanced PySpark
Implement and manage Unity Catalog for governance and security
Develop complex SQL queries and optimize performance
Design and manage Delta Lake architecture
Work with Azure Data Services including ADLS (Azure Data Lake Storage), Synapse, and Azure SQL Database
Implement data warehousing models (Star & Snowflake schema)
Ensure adherence to data quality, security, and compliance standards
Optimize big data processing performance and cost efficiency
Collaborate with analytics, BI, and business stakeholders
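As an illustration of the warehouse-modeling work listed above, the star schema pattern pairs a central fact table with surrounding dimension tables joined on surrogate keys. The sketch below is hypothetical (table and column names are invented, and SQLite stands in for the actual warehouse) but shows the shape of a typical analytical query against such a model:

```python
import sqlite3

# Hypothetical star schema: one fact table (fact_sales) joined to
# two dimension tables (dim_product, dim_region) on surrogate keys.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE dim_region  (region_key  INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales  (
        product_key INTEGER REFERENCES dim_product(product_key),
        region_key  INTEGER REFERENCES dim_region(region_key),
        amount      REAL
    );
    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
    INSERT INTO dim_region  VALUES (10, 'East'), (20, 'West');
    INSERT INTO fact_sales  VALUES (1, 10, 100.0), (1, 20, 150.0), (2, 10, 75.0);
""")

# Typical analytical query: aggregate the fact table,
# grouped by attributes pulled from the dimensions.
rows = cur.execute("""
    SELECT p.category, r.name AS region, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    JOIN dim_region  r ON r.region_key  = f.region_key
    GROUP BY p.category, r.name
    ORDER BY region
""").fetchall()

for row in rows:
    print(row)
```

A snowflake schema differs only in that the dimensions themselves are normalized into further lookup tables (e.g. dim_product referencing a separate dim_category).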
Requirements & Qualifications
Education:
Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related field
Experience:
5–8 years of experience in data engineering
Proven experience in Azure-based data platforms
Strong hands-on production experience in Databricks
Mandatory Technical Skills:
Databricks
Unity Catalog (Governance & Security)
Advanced PySpark
Strong SQL (complex queries & performance tuning)
Azure Data Factory (ADF)
Delta Lake
Azure Data Services (ADLS, Synapse, Azure SQL Database)
Data Warehousing concepts (Star & Snowflake schema)
Preferred Skills:
Performance optimization and cost management in Azure
CI/CD for data pipelines
Data governance and compliance best practices
Soft Skills:
Strong analytical thinking
Problem-solving mindset
Ability to work independently in a remote setup
Effective communication with cross-functional teams
Salary, Benefits & Career Growth
Average Market Salary (Estimated):
$115,000 – $135,000 per year (U.S. remote market, 5–8 years’ experience)