About the Role
We are seeking a Data Integration Engineer to design, build, and operate scalable data integration pipelines across batch and streaming architectures. This role is critical to enabling secure, reliable, and observable data movement across platforms while supporting and uplifting Operational Data Store (ODS) capabilities to meet business outcomes.
Key Responsibilities
Data Pipeline Design & Delivery
- Design and deliver batch and streaming data pipelines using cloud‑native and event‑driven patterns
- Build ingestion and transformation workflows to harmonize diverse datasets into common data models
- Expose integrated data through APIs and messaging platforms
Secure File‑Based Integrations
- Implement and manage secure file transfer integrations using managed transfer services
- Apply authentication, authorization, and virus‑scanning controls, and manage secure cross‑account data movement
- Create and maintain operational runbooks and support documentation
Reliability, Observability & DevOps
- Engineer pipelines for high reliability, performance, and scalability
- Instrument solutions with logging, metrics, tracing, and alerting
- Automate deployment and operations aligned with DevOps and CI/CD best practices
ODS Engineering & Platform Uplift
- Engineer, manage, and support Operational Data Store (ODS) capabilities
- Optimize and uplift ODS platforms to meet evolving business outcomes and scale requirements
- Work with data caching technologies to improve performance and scalability
- Identify and remediate underlying platform and architectural issues
Required Skills & Experience
- Strong experience in data integration engineering or related roles
- Hands‑on experience with:
- File Transfer & Managed Transfer Services
- Event‑driven architectures and Kafka
- ODS technologies: PostgreSQL, graph databases, OpenSearch
- Change Data Capture (CDC) tools such as Debezium
- Experience designing cloud‑native data platforms
- Strong understanding of data security, governance, and reliability engineering
- Solid scripting and automation skills
- Excellent problem‑solving and stakeholder collaboration skills
Nice to Have
- Experience with large‑scale distributed systems
- Exposure to API management and data virtualization
- Familiarity with observability tools (e.g., Prometheus, Grafana, CloudWatch)
If you're interested in this role, click 'Apply Now' to forward an up‑to‑date copy of your CV, or call us now.
If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion on your career.
LHS 297508