Position Purpose
The Cloud Data Operations Engineer is a hands-on technical role focused on building and enhancing cloud-native data quality, profiling, and observability frameworks. As part of a small, collaborative team, the engineer supports the delivery of trusted, secure, and well-governed data assets across the enterprise. The successful candidate will bring a strong engineering mindset, a passion for data quality, and the ability to design scalable solutions that serve both technical and business stakeholders. This is a greenfield opportunity to shape foundational data capabilities, with significant scope for strategic thinking, creativity, and innovation.
Key Responsibilities
- Design, build, and enhance serverless and container-based data quality and profiling frameworks on AWS.
- Develop automated pipelines for data validation, classification, and observability across Redshift, Databricks, and other platforms.
- Integrate data quality outputs into reporting and cataloging tools such as Power BI and Alation.
- Implement and support third-party SaaS data quality agents within cloud infrastructure and CI/CD pipelines.
- Ensure secure handling of sensitive and classified data, including access control auditing and compliance with data governance policies.
- Build and maintain robust CI/CD pipelines with integrated testing for infrastructure and application code using Azure DevOps and Terraform.
- Develop and deploy containerized applications using AWS EKS and ECS.
- Collaborate closely with team members and stakeholders to align solutions with business needs and technical standards.
- Produce clear, user-friendly documentation including requirements, designs, and support materials.
- Contribute to a culture of continuous improvement, knowledge sharing, and operational excellence.
Key Skills and Experience
- Proficiency in Python and YAML for data engineering and configuration-driven development.
- Strong experience with AWS services, particularly EKS, ECS, Lambda, S3, and Glue.
- Experience with container development and orchestration using Docker and Kubernetes.
- Familiarity with data quality tools such as Soda Core or similar.
- Experience with CI/CD pipelines and infrastructure-as-code using Azure DevOps and Terraform, including automated testing practices.
- Solid understanding of data engineering pipelines and data warehouse architectures.
- Experience with RBAC and access control auditing in cloud data environments.
- Exposure to advanced pattern recognition techniques (e.g., machine learning, fuzzy logic, generative AI) is highly desirable.
- Strong documentation and communication skills, with the ability to work effectively in a collaborative, fast-paced environment.
Desirable Attributes
- Experience working with sensitive or regulated data, including privacy and compliance considerations.
- Familiarity with data cataloging platforms (e.g., Alation) and business-friendly configuration tools.
- Ability to work independently while contributing to a highly collaborative team culture.
- A proactive mindset with a passion for automation, data quality, and scalable design.
If you're interested in this role, click 'Apply now' to forward an up-to-date copy of your CV, or call us now.
If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion on your career.
LHS 297508