Data Engineer designing, implementing, and optimizing data pipelines for DeepLight AI. Collaborating closely with a multidisciplinary team to analyze large-scale data.
Responsibilities
Design, build, and optimize scalable data solutions, primarily using a Lakehouse architecture to unify data warehouse and data lake capabilities.
Advise stakeholders on the strategic choice between Data Warehouse, Data Lake, and Lakehouse architectures based on specific business needs, cost, and latency requirements.
Design, develop, and maintain scalable, reliable data pipelines that ingest, transform, and load diverse datasets from a variety of sources, including structured and unstructured data as well as streaming and real-time feeds.
Implement standards and tooling to ensure ACID guarantees, controlled schema evolution, and high data quality within the Lakehouse environment (a minimal Delta Lake sketch appears after this list).
Implement robust data governance frameworks (security, privacy, integrity, compliance, auditing).
Continuously optimize data storage, compute resources, and query performance across the data platform to reduce costs and improve latency for both BI and ML workloads.
Develop and maintain CI/CD pipelines to automate the entire machine learning lifecycle, from data validation and model training to deployment and infrastructure provisioning.
Deploy, manage, and scale machine learning models into production environments, utilizing MLOps principles for reliable and repeatable operations.
Establish and manage monitoring systems that track model performance metrics and detect both data drift (shifts in input data distributions) and model decay (degradation in prediction accuracy); see the drift-detection sketch after this list.
Ensure rigorous version control and tracking for all components: code, datasets, and trained model artifacts, using tools such as MLflow (see the tracking sketch after this list).
Create comprehensive documentation, including technical specifications, data flow diagrams, and operational procedures, to facilitate understanding, collaboration, and knowledge sharing.
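To make the ACID and schema-evolution expectations concrete, here is a minimal sketch of an upsert into a Delta table, assuming PySpark with the delta-spark package installed; the table path, column names, and data are hypothetical.

```python
# Minimal sketch: ACID upsert with automatic schema evolution on a Delta table.
# Assumes pyspark and delta-spark are installed; path, columns, and data are hypothetical.
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("lakehouse-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Let new columns arriving in source data be added to the target schema
# instead of failing the write.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

updates = spark.createDataFrame(
    [(1, "alice", "2024-01-01")], ["id", "name", "updated_at"]
)

target = DeltaTable.forPath(spark, "/data/lake/customers")  # hypothetical existing table
(
    target.alias("t")
    .merge(updates.alias("u"), "t.id = u.id")  # MERGE commits as a single ACID transaction
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The MERGE either fully commits or fully rolls back, and the autoMerge setting is what allows the table schema to evolve as upstream sources add columns.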
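For the drift-monitoring responsibility, one lightweight approach is a two-sample statistical test comparing recent production feature values against the training baseline. The sketch below uses SciPy's Kolmogorov-Smirnov test; the significance threshold, synthetic data, and alert action are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: flag input drift by comparing a live feature sample
# against a training-time baseline with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

def detect_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from the baseline."""
    statistic, p_value = stats.ks_2samp(baseline, live)
    return p_value < alpha

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature values seen at training time
live = rng.normal(0.4, 1.0, 1_000)       # recent production values with a shifted mean

if detect_drift(baseline, live):
    print("Data drift detected: trigger retraining or an alert")  # placeholder action
```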
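And for version tracking, the sketch below shows one way MLflow can tie together a run's parameters, a dataset version tag, metrics, and the trained model artifact. The experiment name, parameter, and model are hypothetical placeholders.

```python
# Minimal sketch: recording code, data, and model versions together with MLflow.
# Experiment name, dataset tag, and model are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)

mlflow.set_experiment("churn-model")  # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("dataset_version", "v3")  # tie the run to a data snapshot
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")   # store the trained artifact with the run
```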
Requirements
Proven practical experience in designing, building, and optimizing solutions using Data Lakehouse architectures (e.g., Databricks, Delta Lake).
Strong hands-on experience managing data ingestion, schema enforcement, and ACID guarantees, and working with big data frameworks such as Spark and Kafka (a streaming-ingestion sketch follows this list).
Expertise in data modeling, ETL/ELT processes, and data warehousing concepts. Proficiency in SQL and scripting languages (e.g., Python, Scala).
Demonstrated experience implementing MLOps pipelines for production systems, including hands-on application of core MLOps principles: automation, governance, and monitoring of ML models across the entire lifecycle.
Experience with CI/CD tools, containerization and orchestration technologies (e.g., Docker, Kubernetes), model serving frameworks (e.g., TensorFlow Serving, SageMaker), and experiment tracking (e.g., MLflow).
Experience with production monitoring tools to detect data drift or model decay.
Strong hands-on experience with major cloud platforms (e.g., AWS, Azure, GCP) and familiarity with DevOps practices.
Excellent analytical, problem-solving, and communication skills, with the ability to translate complex technical concepts into clear and actionable insights.
Proven ability to work effectively in a fast-paced, collaborative environment, with a passion for innovation and continuous learning.
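As one concrete illustration of the Spark and Kafka experience described above, the following sketch reads a Kafka topic with Spark Structured Streaming and appends it to a Delta table. It assumes the spark-sql-kafka connector and delta-spark package are available; the broker address, topic name, and storage paths are hypothetical.

```python
# Minimal sketch: ingest a Kafka topic into a Delta table with Spark
# Structured Streaming. Broker, topic, and paths are hypothetical; assumes
# the spark-sql-kafka connector and delta-spark are on the classpath.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

builder = (
    SparkSession.builder.appName("kafka-ingest")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "clickstream")                # hypothetical topic
    .load()
    # Kafka delivers raw bytes; cast key/value to strings before downstream parsing.
    .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))
)

query = (
    events.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/data/checkpoints/clickstream")  # enables exactly-once delivery to the sink
    .start("/data/lake/clickstream")
)
query.awaitTermination()  # block until the stream is stopped
```

The checkpoint location is what lets the stream restart from its last committed offsets, which is the usual basis for reliable, repeatable ingestion.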
Benefits
Competitive salary and performance bonuses
Comprehensive health insurance
Professional development and certification support
Opportunity to work on cutting-edge AI projects
Flexible working arrangements
Career advancement opportunities in a rapidly growing AI company