
Senior Data Products Engineer - PySpark, Databricks

Eliassen Group
life insurance, 401(k)
United States, Massachusetts, Reading
55 Walkers Brook Drive (Show on map)
Dec 26, 2024

Description:

We're recruiting a Data Products Engineer with software development experience to implement robust data pipelines, APIs, and scalable software solutions built on best-practice data architectures. You will leverage your expertise in data engineering, software development, and cloud technologies to design and optimize the data infrastructure for a highly complex, employee-facing data and analytics web application. We're looking for team players who value openness, honesty, reliability, and mature competency.

Due to client requirements, applicants must be willing and able to work on a W2 basis. For our W2 consultants, we offer a great benefits package that includes medical, dental, and vision benefits, a 401(k) with company matching, and life insurance.

Rate: $80 - $90/hr. (W2)



Experience Requirements:

  • 10+ years of professional experience in data engineering and software development roles
  • Extensive experience with the Medallion Architecture, Databricks development, and building robust data platforms for streaming / real-time applications
  • Hands-on experience with PySpark and Python, including their major libraries
  • Strong understanding of how data moves through an organization, including data validation, cleansing, etc.
  • Proven track record in Agile software product development
  • 7+ years of experience with programming languages such as Python, Java, or Scala
  • Extensive experience with Databricks and Azure is a must
  • Experience with other data technologies such as Hadoop, Spark, or Kafka is a plus
  • Strong understanding of database technologies (SQL and NoSQL) and data modeling
  • Expertise in designing and implementing scalable data pipelines and ETL processes
  • Experience designing and developing APIs for data access and integration
  • Strong understanding of software engineering principles, including design patterns and version control
  • Bachelor's degree in Computer Science, Engineering, or related field; advanced degree preferred
  • Previous experience on an inaugural dev team or at a start-up is a plus
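
For context, the Medallion Architecture referenced above organizes data into bronze (raw), silver (validated and cleansed), and gold (business-level aggregate) layers. The following is a minimal, pure-Python sketch of that flow for illustration only; a production Databricks implementation would use PySpark DataFrames and Delta tables, and the "orders" record fields here are hypothetical:

```python
# Illustrative bronze -> silver -> gold medallion flow over hypothetical
# "orders" records. A real pipeline would use PySpark / Delta Lake.

def to_silver(bronze_records):
    """Validate and cleanse raw bronze records into the silver layer."""
    silver = []
    for rec in bronze_records:
        # Basic validation: drop records missing required fields.
        if rec.get("order_id") is None or rec.get("amount") is None:
            continue
        silver.append({
            "order_id": rec["order_id"],
            # Cleansing: normalize region strings.
            "region": (rec.get("region") or "unknown").strip().lower(),
            # Type normalization: amounts arrive as strings.
            "amount": float(rec["amount"]),
        })
    return silver

def to_gold(silver_records):
    """Aggregate silver records into a gold, business-level summary."""
    totals = {}
    for rec in silver_records:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + rec["amount"]
    return totals

bronze = [
    {"order_id": 1, "region": " East ", "amount": "10.5"},
    {"order_id": 2, "region": "west", "amount": "4.0"},
    {"order_id": None, "region": "east", "amount": "99"},  # invalid: dropped
    {"order_id": 3, "region": "East", "amount": "2.5"},
]

gold = to_gold(to_silver(bronze))
print(gold)  # {'east': 13.0, 'west': 4.0}
```

In Databricks, each layer would typically be persisted as its own Delta table, with the silver step expressed as DataFrame transformations rather than Python loops.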


Critical technologies for this role:

- Azure Data Factory (ADF)
- Azure Databricks
- Azure SQL DB
- Azure Data Lake Storage (ADLS)
- Azure Cosmos DB
- Azure DevOps
- Python
- SQL
- Spark / PySpark
- Scala
- Advanced scripting with Bash/PowerShell
- ETL pipelines
- Data modeling
- Object-Oriented Programming (OOP)
- Functional Programming
- Test-Driven Development (TDD)
- Clean Code principles
- Version Control (Git)
