Senior Data Engineer designing and delivering scalable data pipelines and high-performance analytical solutions on Google Cloud, enhancing data quality via KPIs and migrating data platforms to cloud-native ecosystems.
Responsibilities
Build scalable data pipelines: Design and deliver batch and real-time ETL/ELT pipelines across cloud environments to support analytics and reporting
Develop SQL and BigQuery solutions: Write and optimize advanced SQL transformations and build performant, cost‑efficient BigQuery data models
Develop Python workflows: Implement scalable data processing solutions using Python and PySpark, ensuring maintainable and high‑quality code
Design data models and ensure quality: Build robust data models and apply validation practices to maintain accuracy and reliability
Build cloud‑native data solutions: Use GCP services such as BigQuery, Dataflow, Cloud Composer, Pub/Sub, and GCS to build and operate modern data platforms
Optimize performance and reliability: Troubleshoot complex pipeline issues and continuously improve compute, storage, and processing performance
Collaborate using strong engineering practices: Work with engineering, analytics, and business teams while contributing to CI/CD, code reviews, and testing standards
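The data-quality responsibility above can be illustrated with a minimal Python sketch: a validation gate that splits a batch into valid rows and rejects before loading. The `Order` schema and the specific rules are hypothetical examples, not part of this role's actual stack.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Order:
    # Hypothetical record schema for illustration only
    order_id: str
    amount: float
    currency: str

def validate(records: Iterable[Order]) -> tuple[list[Order], list[Order]]:
    """Split a batch into valid rows and rejects (a simple data-quality gate)."""
    valid: list[Order] = []
    rejected: list[Order] = []
    for r in records:
        # Illustrative rules: non-empty key, non-negative amount, ISO-style 3-letter currency
        if r.order_id and r.amount >= 0 and len(r.currency) == 3:
            valid.append(r)
        else:
            rejected.append(r)
    return valid, rejected

batch = [Order("A1", 10.0, "EUR"), Order("", -5.0, "EU")]
good, bad = validate(batch)
```

In a real pipeline, `good` would proceed to the warehouse load step (e.g. a BigQuery load job) and `bad` would be routed to a dead-letter location for inspection.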
Requirements
University degree in computer science or a comparable qualification
At least 5 years of experience as a Data Engineer, building scalable data pipelines and working with cloud-based data ecosystems
Strong expertise in SQL and hands‑on experience building performant datasets in BigQuery (or similar cloud data warehouses)
Proven experience with Python and PySpark for scalable data processing in distributed environments
Solid understanding of data modeling, ELT/ETL patterns, and data quality best practices
Experience with Google Cloud Platform, particularly BigQuery, Dataflow, Cloud Composer, GCS, or equivalent cloud data services
Hands‑on experience building scalable data pipelines (batch and near real‑time) in a cloud‑native environment
Proficiency with version control, CI/CD pipelines, and automated testing frameworks
Ability to troubleshoot and optimize performance across compute, storage, and processing layers
Nice to have:
Experience with Infrastructure as Code (Terraform, Ansible, Chef)
Knowledge of shell scripting
Experience in financial services or regulated environments
Benefits
Private Health Insurance – it’s custom-made for you
Supported certifications, training courses, and top e-learning platforms
Individual coaching sessions
Accredited Coaching School
Epic parties or themed events – lovingly designed for our people and their families