Senior Data Engineer in PwC's FCU Technology Team designing scalable data pipelines and collaborating on analytical solutions. Working with Databricks and supporting junior engineers in a hybrid work model.
Responsibilities
Design, build and maintain scalable, reliable data pipelines and data platforms supporting analytical and reporting solutions
Work on end-to-end data engineering solutions – from data ingestion, through transformation and storage, to serving curated datasets for analytics and reporting
Develop and optimize ETL / ELT pipelines using Databricks (Apache Spark, SQL, Python) and Delta Lake technologies
Own data modelling, data structures and performance optimization in analytical data stores (lakehouse / data warehouse)
Implement and maintain data quality, data validation and monitoring mechanisms ensuring accuracy, consistency and reliability of processed data
Collaborate closely with data analysts, BI developers and business stakeholders to translate business and regulatory requirements into robust technical solutions
Contribute to architecture decisions related to data platforms, data processing patterns and technology choices
Support and mentor junior data engineers, helping them grow their technical and consulting competencies
Actively participate in client-facing work – discussing requirements, presenting solutions and explaining technical concepts in an accessible way
Keep up with the latest data engineering, cloud and Anti Financial Crime (AFC) trends and contribute to internal initiatives and accelerators
Requirements
Master’s degree (preferably in Computer Science, Data Engineering, Mathematics, Statistics or similar)
Commercial experience in data engineering, database development or data platform roles
Strong understanding of data engineering fundamentals: ETL/ELT, data warehousing, lakehouse architectures
Hands-on experience with Databricks, including:
– building and maintaining Spark-based batch and/or streaming data pipelines
– working with Delta Lake (ACID tables, schema evolution, incremental processing, merges)
– optimizing performance (partitioning, file compaction, query optimization)
– developing pipelines using Databricks notebooks, jobs and workflows
Strong knowledge of SQL (designing, writing and optimizing complex queries)
Experience with Python for data processing and transformations (e.g. pandas, PySpark)
Solid understanding of data modelling, data quality and data governance concepts
Experience working with cloud-based data platforms (Azure preferred)
Ability to gather and translate business requirements into technical solutions
Excellent communication skills and ability to work with both technical and non-technical stakeholders
Ability to work effectively under pressure while maintaining a high level of accuracy
Fluent written and spoken English
Willingness to work in international project teams
Nice to have:
– Experience with streaming data processing (e.g. Spark Structured Streaming)
– Knowledge of data governance or metadata tools (e.g. Collibra)
– Experience in financial services, AML / AFC or regulatory-driven environments
– Additional languages: German, Dutch or French
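For illustration only, the incremental-processing pattern named in the requirements above (Delta Lake merges, Python/pandas transformations) can be sketched in plain pandas; the table contents, column names and the `upsert` helper here are hypothetical, not part of the role description:

```python
import pandas as pd

# Hypothetical curated table and incoming daily batch.
curated = pd.DataFrame({"id": [1, 2, 3], "amount": [100.0, 200.0, 300.0]})
batch = pd.DataFrame({"id": [2, 4], "amount": [250.0, 400.0]})

def upsert(target: pd.DataFrame, source: pd.DataFrame, key: str) -> pd.DataFrame:
    """Merge-style upsert: source rows replace matching target rows by key;
    unmatched source rows are appended (mirrors a Delta Lake MERGE in spirit)."""
    kept = target[~target[key].isin(source[key])]
    return pd.concat([kept, source]).sort_values(key).reset_index(drop=True)

result = upsert(curated, batch, "id")
print(result)
```

In a Databricks pipeline the same step would typically be a `MERGE INTO` on a Delta table rather than an in-memory pandas operation; the sketch only shows the upsert semantics.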
Benefits
Work flexibility - hybrid working model, flexible start of the day, sabbatical leave
Development and upskilling - full support during the onboarding process, mentoring from experienced colleagues, training sessions, workshops, certifications co-financed by PwC and conversations with a native speaker
Wide medical and well-being program - a medical care package (incl. physiotherapy, discounts on dental care), coaching, mindfulness sessions, psychological support, education through dedicated webinars and workshops, financial and legal advice
Possibility to create your individual benefits package (among others: lunch pass, insurance packages, concierge, veterinary package for a pet, massages) and access to a cafeteria - vouchers, discounts on IT equipment and car purchases, 3 paid hours for volunteering per month