Job Description
Job Summary
Responsible for the design and development of cloud data platform pipelines and data models, and for supporting their implementation and maintenance, both globally and across different markets/countries.
Responsibilities
- Design, develop, document and implement end-to-end data pipelines and data integration processes, both batch and real-time. This includes data analysis, data profiling, data cleansing, data lineage, data mapping, data transformation, development of ETL/ELT jobs and workflows, and deployment of data solutions.
- Monitor data quality and recommend, develop and implement improvements to its reliability, efficiency and cleanliness; optimize and fine-tune ETL/ELT processes.
- Recommend, execute and deliver best practices in data management and data lifecycle processes, including modular development of ETL/ELT processes, coding and configuration standards, error handling and notification standards, auditing standards, and data archival standards.
- Ensure development adheres to guidelines and governance policies for the data and business intelligence platforms.
- Collaborate with IT team members, SMEs, vendors and internal business stakeholders to understand data needs, gather requirements and implement data solutions that deliver business goals.
- Provide BAU support for data issues and change requests; document all investigations, findings, recommendations and resolutions.
- Support daily operational incidents and service requests.
Job Requirements
- Hands-on experience with Azure data solutions such as Azure Synapse Spark, Synapse SQL, Data Factory, Databricks, Azure Data Lake Storage and Power BI.
- Experience with various ETL/ELT frameworks, data warehousing concepts, data management frameworks and data lifecycle processes.
- Experience handling and processing different types of data (structured, semi-structured and unstructured).
- Experience using Azure DevOps to implement CI/CD workflows and deploy ETL jobs across multiple environments.
- Strong knowledge of various database technologies (RDBMS, NoSQL and columnar).
- Proficient in ETL programming languages like Python, PySpark and SQL.
- Experience with Microsoft Fabric is an added advantage.
- Ability to communicate and present technical information in a clear and unambiguous manner.
- Strong ability to work independently and cooperate with diverse teams in a multi-stakeholder environment.
- Strong sense of ownership, a high affinity for data and a desire for continuous improvement.
- Experience with SAP data sources preferred.
Education
Bachelor's degree in computing/information technology or equivalent.
Work Experience
Up to three years' experience in a similar role.
#LI-YL2