About the role

  • Develop scalable data pipelines and analytics solutions at Miami University. Collaborate with stakeholders to enhance data quality and maintainability.

Responsibilities

Developer II:

  • Develop, maintain, and enhance data pipelines using modern ELT tools (e.g., dbt, Fivetran, Airflow, or similar) in a cloud-based environment.
  • Write, optimize, and maintain SQL and/or Python-based transformations that support scalable analytics solutions.
  • Monitor, troubleshoot, and resolve issues in production data pipelines to ensure reliability, performance, and data integrity.
  • Implement data validation, testing, and quality checks to improve the consistency and trustworthiness of data assets.
  • Collaborate with stakeholders and technical partners to translate business needs into scalable data solutions.
  • Support and improve existing data integrations and workflows with a focus on maintainability and performance optimization.
  • Contribute to and follow best practices for version control, testing, and deployment (CI/CD).
  • Create and maintain documentation for data pipelines, models, and system processes.
  • Contribute to team practices that support consistent delivery and continuous improvement.

Developer III:

  • Design, develop, and optimize scalable data pipelines using modern ELT tools (e.g., dbt, Fivetran, Airflow, or similar) in a cloud-based environment.
  • Lead the development of advanced SQL and/or Python-based transformations supporting enterprise analytics and reporting.
  • Own production data pipelines, including monitoring, performance tuning, troubleshooting, and ensuring reliability and data integrity.
  • Design and implement robust data validation, testing, and quality frameworks to ensure trusted data at scale.
  • Design scalable data models and transformation patterns to support enterprise reporting and analytics needs.
  • Partner with stakeholders to translate complex business requirements into sustainable, high-impact data solutions.
  • Drive improvements to existing data pipelines and processes to enhance performance, scalability, and maintainability.
  • Lead adoption of best practices for version control, testing, deployment, and operational support (CI/CD).
  • Develop and maintain comprehensive documentation for data pipelines, models, and architecture.
  • Guide team practices that support consistent delivery, operational excellence, and continuous improvement.
  • Mentor team members and contribute to the growth of technical standards and capabilities across the team.

Requirements

  • Bachelor’s degree in computer science, information technology, or a relevant field, earned by date of hire, with two to four or more years of relevant experience; OR an Associate’s degree in computer science, information technology, or a relevant field, earned by date of hire, with four to six or more years of relevant experience.
  • Ability to analyze complex data and develop practical, scalable solutions.
  • Ability to troubleshoot and resolve data pipeline and data quality issues in a timely manner.
  • Ability to translate business needs into effective technical data solutions.
  • Ability to communicate technical concepts clearly to both technical and non-technical audiences.
  • Ability to work collaboratively across teams and build effective working relationships.
  • Ability to manage multiple priorities and adapt to changing requirements in a dynamic environment.
  • Ability to document solutions and processes to support maintainability and knowledge sharing.

Developer II:

  • Experience developing and maintaining data pipelines using modern ELT tools (e.g., dbt, Fivetran, Airflow, or similar).
  • Experience working with cloud data platforms such as Snowflake, Amazon Redshift, Google BigQuery, or Azure Synapse.
  • Experience using Python for data processing, automation, or integration.
  • Experience implementing data validation, testing, or monitoring solutions.
  • Experience using version control systems (e.g., Git) and contributing to CI/CD workflows.
  • Experience building or supporting data models and analytics solutions.
  • Experience working in Agile or iterative development environments.
  • Experience supporting enterprise data systems in a higher education or similarly complex environment.

Developer III:

  • Experience designing and optimizing scalable ELT pipelines using tools such as dbt, Fivetran, Airflow, or similar.
  • Experience working with cloud data platforms such as Snowflake, Amazon Redshift, Google BigQuery, or Azure Synapse at scale.
  • Strong experience using Python for data processing, automation, and system integration.
  • Experience designing dimensional data models and large-scale data transformations.
  • Experience implementing and managing data quality, testing, and monitoring frameworks.
  • Experience leading or significantly contributing to CI/CD practices for data pipelines.
  • Experience optimizing data pipelines for performance, scalability, and cost efficiency.
  • Experience working in complex organizational environments (e.g., higher education, healthcare, or enterprise settings).

Benefits

  • Benefits eligible

Job title

BI ETL Data Developer II/III

Experience level

Junior / Mid level

Salary

$65,000 - $87,000 per year

Degree requirement

Bachelor's Degree
