Data Engineer designing and implementing data pipelines and services for Ford Pro analytics. Working with diverse teams and technologies to drive data-driven solutions.
Responsibilities
Develop EL/ELT/ETL pipelines that make data from disparate batch and streaming sources available in the BigQuery analytical data store for the Business Intelligence and Analytics teams.
Work with on-prem data sources (Hadoop, SQL Server), understand the data model and the business rules behind the data, and build data pipelines (with GCP and Informatica) for one or more Ford Pro verticals. This data will be landed in GCP BigQuery.
Build cloud-native services and APIs to support and expose data-driven solutions.
Partner closely with our data scientists to ensure the right data is made available in a timely manner to deliver compelling and insightful solutions.
Design, build and launch shared data services to be leveraged by the internal and external partner developer community.
Build scalable data pipelines, choosing the right tool for the job. Manage, optimize, and monitor data pipelines.
Provide technical and strategic advice and guidance to key stakeholders on data transformation efforts. Understand how data is useful to the enterprise.
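As a minimal illustration of the kind of batch pipeline described in the responsibilities above: a transform step that applies business rules to raw source rows before they are staged for BigQuery. All names here (table, fields, rules) are hypothetical, not part of any actual Ford Pro system; the sketch assumes rows arrive from an on-prem source such as SQL Server as plain dicts.

```python
# Hypothetical ELT transform step: clean raw source rows into the schema of
# an illustrative BigQuery staging table
# (vehicle_id STRING, miles FLOAT, loaded_at TIMESTAMP).
from datetime import datetime, timezone


def normalize_rows(raw_rows):
    """Apply basic business rules and rename source columns."""
    cleaned = []
    for row in raw_rows:
        vehicle_id = str(row.get("VehicleID", "")).strip()
        if not vehicle_id:  # business rule: drop rows with no vehicle key
            continue
        cleaned.append({
            "vehicle_id": vehicle_id,
            "miles": float(row.get("Odometer") or 0.0),
            "loaded_at": datetime.now(timezone.utc).isoformat(),
        })
    return cleaned


# Loading the cleaned rows into BigQuery would then use the official client,
# e.g. (not run here, requires GCP credentials):
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   client.insert_rows_json("project.dataset.staging_vehicles", cleaned)
```

In a production pipeline this step would typically run inside Cloud Composer or Dataflow rather than as a standalone script.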
Requirements
Bachelor's degree
3+ years of experience with SQL and Python
2+ years of experience with GCP or AWS cloud services; strong candidates with 5+ years in a traditional data warehouse environment (ETL pipelines with Informatica) will also be considered
3+ years of experience building out data pipelines from scratch in a highly distributed and fault-tolerant manner.
Comfortable with a broad array of relational and non-relational databases.
Proven track record of building applications in a data-focused role (Cloud and Traditional Data Warehouse)
Experience with GCP cloud services including BigQuery, Cloud Composer, Dataflow, CloudSQL, GCS, Cloud Functions and Pub/Sub.
Inquisitive, proactive, and interested in learning new tools and techniques.
Familiarity with big data and machine learning tools and platforms. Comfortable with open source technologies including Apache Spark, Hadoop, Kafka.
1+ years of experience with Hive, Spark, Scala, and JavaScript.
Strong oral, written and interpersonal communication skills
Comfortable working in a dynamic environment where problems are not always well-defined.
M.S. in a science-based program and/or quantitative discipline with a technical emphasis.
Senior Data Engineer leading and mentoring a team in building scalable data pipelines for digital transformation projects in an international software company.
Senior Data Operations Engineer at iKnowHow designing and implementing scalable data-driven applications. Focus on data pipelines, APIs, and collaboration across teams for project success.
Data/AI Engineer supporting innovations in corporate travel with AWS technologies at HRS Group. Collaborating with data teams to develop AI solutions and maintain data pipelines.
Data Engineer developing data solutions for reporting and analytics at Assurant. Implementing and optimizing Data Warehouse solutions in cloud and on-premise environments with Agile methodology.
AI Engineer developing AI-driven analytics for trading at Deloitte. Focusing on scalable data pipelines and collaboration with traders for actionable insights.
Data Engineer involved in delivering scalable enterprise systems and digital transformation platforms for Qualysoft. Collaborate with global teams across various industries and technologies.
Program Manager leading development of AI-driven data platform to enhance revenue intelligence across global business functions. Collaborating across teams and regions in a hybrid work environment.
Project Manager overseeing payroll system implementation with global outsourcing partner. Leading cross-functional teams and stakeholder engagement for project success.
Data Engineer 3 optimizing Market Place processes for Walmart Global Tech's Chennai team. Developing data pipelines and ensuring efficient utilization of Market Place systems.