Data Engineer optimizing data pipeline architecture for NISC, enhancing data flow and collaboration across teams. Focused on leveraging Databricks technologies for efficient data handling.
Responsibilities
Assemble large, complex data sets that meet functional / non-functional business requirements.
Apply an understanding of Data Warehouse and Data Lakehouse paradigms.
Design and build optimal data pipelines from a wide variety of data sources using AWS and Databricks technologies.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Create data tools for analytics and data scientist team members that assist them in building and optimizing a unified data stream.
Work with other data engineering experts to strive for greater functionality while making data more discoverable, addressable, trustworthy, and secure.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create and maintain a culture of engagement that is conducive to NISC's Statement of Shared Values.
Demonstrate commitment to NISC's Statement of Shared Values.
Requirements
Experience building and optimizing data pipelines, architectures, and data sets.
Hands-on experience developing and optimizing data pipelines and workflows using Databricks.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Experience building ETL processes supporting data transformation, data structures, metadata, dependency, and workload management.
Working knowledge of message queuing, stream processing, and highly scalable data stores.
Experience supporting and working with cross-functional teams in a dynamic environment.
Experience in a Data Engineer role, with a BS or MS degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, and experience using the following software/tools:
Experience with AWS services: Lambda, S3, SQS, SNS, CloudWatch, etc.
Experience with Databricks and Delta Lake.
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Oracle, Postgres, Cassandra, and DynamoDB.
Experience with data pipeline and workflow management tools: Hevo Data, Airflow, etc.
Experience with AWS cloud services (EC2, EMR) and with Databricks.
Experience with stream-processing systems: Apache Spark, Kafka Streams, Spring Cloud, etc.
Experience with object-oriented languages: Java, Scala.
Nice-to-have: Experience with scripting languages: Python, JavaScript, Bash, etc.
Strong verbal and written communication skills.
Ability to demonstrate composure and think analytically in high pressure situations.
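As a deliberately simplified illustration of the ETL work described in the requirements above, the sketch below shows a single transform step in plain Python (not Databricks/Spark, which the role would actually use). The record fields (`account_id`, `kwh`, `read_at`) are hypothetical, invented for this example only: it validates raw rows, normalizes types, and deduplicates to the latest reading per account.

```python
from datetime import datetime

def transform(raw_records):
    """Tiny ETL step: validate, normalize, and deduplicate raw records.

    Drops rows missing required fields, casts the reading to float,
    parses the timestamp, and keeps only the most recent record per
    account_id. (Field names are hypothetical, for illustration.)
    """
    latest = {}
    for rec in raw_records:
        if "account_id" not in rec or "kwh" not in rec:
            continue  # skip malformed rows rather than failing the batch
        row = {
            "account_id": rec["account_id"],
            "kwh": float(rec["kwh"]),
            "read_at": datetime.fromisoformat(rec["read_at"]),
        }
        prev = latest.get(row["account_id"])
        if prev is None or row["read_at"] > prev["read_at"]:
            latest[row["account_id"]] = row
    return sorted(latest.values(), key=lambda r: r["account_id"])

raw = [
    {"account_id": "A1", "kwh": "12.5", "read_at": "2024-01-01T00:00:00"},
    {"account_id": "A1", "kwh": "13.0", "read_at": "2024-01-02T00:00:00"},
    {"account_id": "A2", "kwh": "7.25", "read_at": "2024-01-01T00:00:00"},
    {"kwh": "9.9", "read_at": "2024-01-01T00:00:00"},  # malformed row
]
clean = transform(raw)
```

In a Databricks pipeline the same logic would typically be expressed as a Spark DataFrame job writing to a Delta table; the stdlib version here just keeps the example self-contained.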
Benefits
Medical, Dental and Vision Insurance.
Health Savings Account (HSA) with $100 monthly contributions from NISC.
Like to walk? Want to improve your overall wellness knowledge? Earn up to an additional $800 in your HSA each year through our Wellness Rewards program.
Dependent Care Flexible Spending Account (FSA) through Paylocity.
Fully covered life insurance up to 3x annual base salary.
Fully covered short- and long-term disability.
401(k), traditional or Roth, with employer match of employee contributions up to 6% plus a 4% base-salary employer contribution.
PTO accrual levels dependent on years of service, 120 Life Leave Event hours, 9 paid holidays and an annual holiday week.
$2,500 interest-free technology loan program.
$25,000 employee educational assistance program.
Volunteer, Wellness, Family Events and other employee fun supplied by our committees.
Employee Assistance Program; assisting employees and dependents with virtually any life event.
Benevolence Committee to support employees facing financial hardships such as unexpected medical bills, funerals, and other unforeseen events.