Senior Data Engineer optimizing ETL/ELT pipelines at Asahi Kasei. Evaluate programming concepts and support data science projects while ensuring solution stability in a hybrid work setup.
Responsibilities
Evaluate and improve T-SQL, MDX, DAX, and HiveQL programming concepts such as queries, stored procedures, functions, temporary tables, parameterization, complex joins, and groupings
Develop and optimize ETL/ELT pipelines to load data from on-premises and online systems
Ensure data solution stability and performance optimization
Conduct data warehouse model design, development and support
Prepare, cleanse, and validate datasets for data science purposes
Assist with data troubleshooting, feature engineering, and data discovery
Develop tools to automate development and monitoring processes
Develop Python algorithms for data processing
Support the data science environment and assist with data science projects
Manage time effectively to ensure that projects are delivered on schedule
Provide on-going maintenance and support of existing and new data solutions
Support solution automation and CI/CD
Requirements
Bachelor’s Degree in Computer Science, Information Technology, or a related field AND five (5) years as a Data Engineer, Big Data Engineer, Data Architect, SQL Developer, Database Developer, or related role
5 years’ experience with:
Developing business intelligence solutions, including data integration, data schema development, data pipelines, modeling, and reporting/analytics
Database design principles, data modeling, partitioning, and data warehousing
Python and shell scripting
SQL writing, query tuning, and query performance optimization
Data analysis, data modeling, data migration, computer programming, and problem-solving
4 years’ experience with data validation, cleansing, and feature engineering: Pandas, Spark DataFrames, and data quality (DQ) solutions
3 years’ experience with CI/CD and CDC (change data capture)
2 years’ experience with:
Big data pipeline development, monitoring, and support: ETL, SSIS, Hadoop, HDFS, Spark, Hive, RDDs, and UDFs
Cloud data ecosystems: Spark API, Spark SQL, PySpark, Scala, Python, and data streaming
Demonstrated experience with data science tools: Python ML libraries, Scala, and Databricks
Technical Lead for data engineering and reporting in healthcare technology at Dedalus. Shaping innovative software solutions and leading cross-functional technical teams in Australia.
Senior ML Data Engineer working on data pipeline curation for Mobileye's autonomous vehicle dataset. Collaborating across teams to enhance ML engineering and vision model applications.
Data Engineer managing customer datasets to enhance industrial research and development. Responsible for ETL pipelines and data ingestion for the Uncountable Web Platform.
Data Engineer designing and maintaining scalable data solutions on Databricks for clinical trials. Collaborating with teams to overcome data challenges and ensure the smooth logistics of clinical supplies.
Senior Manager leading a team of database engineers to manage CCC's data platform. Overseeing mission-critical applications and collaborating with cross-functional teams in a hybrid environment.
As a Principal Data Architect at Solstice, lead the design and implementation of data architecture solutions. Ensure data integrity, security, and accessibility to meet strategic organizational goals.
Data Platform Specialist overseeing data workflows and enhancing data quality for Stackgini's AI-driven IT solutions. Collaborating with teams to drive improvements and stakeholder support.
Data Engineer designing data pipelines in Python for a major railway industry client. Collaborate with Data Scientists and ensure code quality with agile methodologies.
Senior Data Engineer responsible for building and optimizing data pipelines for banking analytics initiatives. Collaborating with data teams to ensure data quality and readiness for enterprise use.
Senior Data Engineer developing scalable data solutions on Databricks for analytics and operational workloads. Collaborating with cross-functional teams to modernize the data ecosystem.