Architect designing and implementing the MLOps strategy for the EVOKE Phase-2 programme at Quantiphi, owning enterprise-grade ML pipelines and collaborating across teams to deliver production-ready ML solutions.
Responsibilities
Architect and implement the MLOps strategy for the EVOKE Phase-2 programme, ensuring alignment with the project proposal and delivery roadmap.
Design and own enterprise-grade ML/LLM pipelines covering model training, validation, deployment, versioning, monitoring, and CI/CD automation.
Build container-oriented ML platforms (EKS-first) while evaluating alternative orchestration tools with similar capabilities (Kubeflow, SageMaker, MLflow, Airflow, etc.).
Implement hybrid MLOps + LLMOps workflows, including prompt/version governance, evaluation frameworks, and monitoring for LLM-based systems.
Serve as a technical authority across multiple internal and customer projects, not limited to EVOKE, contributing architectural patterns, best practices, and reusable frameworks.
Enable observability, monitoring, drift detection, lineage tracking, and auditability across ML/LLM systems.
Collaborate with cross-functional teams — data engineering, platform, DevOps, and client stakeholders — to deliver production-ready ML solutions.
Ensure all solutions adhere to security, governance, and compliance expectations, particularly around handling cloud services, Kubernetes workloads, and MLOps tools.
Conduct architecture reviews, troubleshoot complex ML system issues, and guide teams through implementation across cloud-native ML platforms.
Mentor engineers and provide guidance on modern MLOps tools, platform capabilities, and best practices.
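The pipeline ownership described above can be sketched, independent of any orchestrator (Kubeflow, SageMaker Pipelines, etc.), as a small DAG of named, versioned steps whose combined fingerprint identifies the pipeline build. This is an illustrative, framework-free sketch; all step names and versions here are hypothetical:

```python
import hashlib
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    version: str                 # bump when the step's logic changes
    fn: Callable[[dict], dict]   # consumes and extends a shared context
    deps: list = field(default_factory=list)

def pipeline_fingerprint(steps):
    """Deterministic hash over step names/versions: any version bump
    yields a new, traceable pipeline version."""
    payload = "|".join(f"{s.name}@{s.version}"
                       for s in sorted(steps, key=lambda s: s.name))
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

def run(steps):
    """Execute steps in dependency order (no cycle handling; sketch only)."""
    done, ctx = set(), {}
    while len(done) < len(steps):
        for s in steps:
            if s.name not in done and all(d in done for d in s.deps):
                ctx = s.fn(ctx)
                done.add(s.name)
    return ctx

steps = [
    Step("ingest", "1.0", lambda c: {**c, "rows": 100}),
    Step("train", "2.1", lambda c: {**c, "model": f"trained_on_{c['rows']}"},
         deps=["ingest"]),
]
ctx = run(steps)
print(pipeline_fingerprint(steps), ctx["model"])
```

Real orchestrators add scheduling, retries, and artifact storage on top of this idea; the fingerprint plays the role their pipeline/run IDs play.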
Requirements
7-14 years of experience in ML/AI engineering or MLOps roles with strong architecture exposure.
Strong expertise in the AWS cloud-native ML stack, including EKS (primary), ECS, Lambda, API Gateway, and CI/CD (CodeBuild/CodePipeline or equivalent).
Hands-on experience with at least one major MLOps toolset and awareness of alternatives: MLflow, Kubeflow, SageMaker Pipelines, Airflow, BentoML, KServe, Seldon
Deep understanding of the end-to-end ML lifecycle: data ingestion, feature engineering, training, evaluation, model packaging, registry, deployment, CI/CD, drift detection, monitoring, and governance.
Strong experience with AWS SageMaker (Training, Processing, Batch Transform, Pipelines, Feature Store, Model Registry, Model Monitor).
Experience implementing ML CI/CD pipelines including automated training, testing, validation, model promotion, and endpoint deployment.
Ability to build dynamic and versioned pipelines using SageMaker Pipelines, Step Functions, or Kubeflow.
Strong SQL and data transformation experience using Snowflake, Databricks, Spark, or EMR.
Experience with feature engineering pipelines and Feature Store management (SageMaker or Feast).
Understanding of lineage tracking: training data snapshots, feature versions, code versioning, metadata tracking, and reproducibility.
Hands-on experience with Bedrock, OpenAI, Anthropic, or Llama models.
Experience with CloudWatch, SageMaker Model Monitor, Prometheus/Grafana, or Datadog.
Strong foundation in Python and cloud-native development patterns.
Solid understanding of security best practices, IAM, secrets management, and artifact governance.
Good to have
Experience with vector databases, RAG pipelines, or multi-agent AI systems.
Exposure to DevOps and infrastructure-as-code (Terraform, Helm, CDK).
Hands-on understanding of model drift detection, A/B testing, canary rollouts, and blue-green deployments.
Familiarity with observability stacks (Prometheus, Grafana, CloudWatch, OpenTelemetry).
Knowledge of lakehouse architecture (Delta, Iceberg, or Hudi).
Ability to translate business goals into scalable AI/ML platform designs.
Strong communication and cross-team collaboration skills.
Ability to guide engineering teams through technical uncertainty and design choices.
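The model-promotion step named in the requirements above is, at its core, a quality gate: a candidate model replaces the production model only if it beats the baseline without regressions. A minimal, library-free sketch (metric names and thresholds are hypothetical, not prescribed by the role):

```python
def promote(candidate: dict, baseline: dict,
            min_gain: float = 0.0, max_latency_ms: float = 200.0) -> bool:
    """Toy CI/CD promotion gate: promote only if the candidate beats the
    production baseline on accuracy without exceeding a latency budget.
    Thresholds here are illustrative."""
    return (candidate["accuracy"] >= baseline["accuracy"] + min_gain
            and candidate["latency_ms"] <= max_latency_ms)

baseline = {"accuracy": 0.91, "latency_ms": 120}
candidate = {"accuracy": 0.93, "latency_ms": 110}
print(promote(candidate, baseline))  # True: better accuracy, acceptable latency
```

In a real pipeline this check would run as an automated stage (e.g. a CodePipeline action or a SageMaker Pipelines condition step) between model validation and endpoint deployment.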
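The lineage-tracking requirement above reduces, in the simplest case, to pinning and hashing every input of a training run so it can be reproduced exactly. A minimal sketch using only the standard library (field names and values are illustrative):

```python
import hashlib
import json

def sha256_bytes(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

def lineage_manifest(data_snapshot: bytes, feature_version: str,
                     code_commit: str, params: dict) -> dict:
    """Minimal lineage record: hash the exact training data snapshot and
    pin feature/code versions so a run can be reproduced byte-for-byte."""
    manifest = {
        "data_sha256": sha256_bytes(data_snapshot),
        "feature_version": feature_version,
        "code_commit": code_commit,
        "params": params,
    }
    # Hashing the manifest itself gives one reproducibility ID for the run.
    manifest["run_id"] = sha256_bytes(
        json.dumps(manifest, sort_keys=True).encode())[:12]
    return manifest

m = lineage_manifest(b"csv,bytes,here", "features_v3", "abc1234", {"lr": 0.01})
print(m["run_id"])
```

Tools such as MLflow or SageMaker Model Registry store equivalent metadata automatically; the point of the sketch is that any change to data, features, code, or parameters changes the run identity.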
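Drift detection, also named in the requirements above, is often approximated with the Population Stability Index (PSI) between the training distribution and live traffic. A pure-Python sketch with equal-width bins; the bin count and the commonly cited 0.2 alarm threshold are conventions, not mandated by the role:

```python
import math

def psi(expected: list, actual: list, bins: int = 4) -> float:
    """Population Stability Index between a reference (expected) and a
    live (actual) sample; PSI > 0.2 is a common drift alarm level.
    Equal-width bins derived from the reference sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def frac(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Floor at 1e-6 to avoid log(0) for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]      # stable reference sample
live_same = list(train)                    # no drift
live_shifted = [x + 5.0 for x in train]    # strong distribution shift
print(round(psi(train, live_same), 4), round(psi(train, live_shifted), 2))
```

Managed monitors (SageMaker Model Monitor, Evidently, etc.) compute statistics of this kind per feature on a schedule and raise alerts when thresholds are crossed.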
Software Engineer delivering MLOps solutions for Generative AI at DataGalaxy. Focusing on reliability and collaboration with product engineering teams in a hybrid environment.
Senior Machine Learning Engineer responsible for designing, building, and deploying ML solutions. Joining a global tech group tackling high-impact projects in Buenos Aires.
Principal Machine Learning Engineer at Qodea responsible for leading the ML model lifecycle and collaborating on AI solutions in the Buenos Aires delivery center.
Lead MLOps Engineer for a fast-growing AI startup focused on scalable infrastructure. Driving hands-on execution across the entire model lifecycle in a collaborative environment.
Lead Machine Learning Engineer creating personalized item recommendations for Target.com and the Target App. Designing and optimizing production ML solutions with a team of data scientists and engineers.
Senior Machine Learning Engineer at Doctrine focusing on developing NLP models for legal document processing. Join an ambitious team to innovate within the field of legal technology.
MLOps Engineer responsible for designing and maintaining ML pipelines at JobCloud. Collaborating with teams to productionize ML models and ensuring robust system performance.
Senior ML Engineer developing scalable production ML systems across various teams at JobCloud. Leading innovation in the AI-driven recruitment landscape, improving job ad visibility and performance.
Senior Machine Learning Engineer at greehill developing ML solutions for sustainable urban living. Leading projects in Computer Vision and Deep Learning to transform urban environments.
Machine Learning Engineer developing deep learning models for self-driving vehicle systems at BlueSpace.ai. Engaging in innovative ML applications while collaborating with a seasoned team in the autonomous vehicle ecosystem.