• Company: Endpoint Clinical
• Employment: Full-time
• Location: 🇺🇸 United States nationwide
• Posted: 1 month ago (updated 15 hours ago)

About Us:


Endpoint is an interactive response technology (IRT®) systems and solutions provider that supports the life sciences industry. Since 2009, we have worked with a single vision in mind: to help sponsors and pharmaceutical companies achieve clinical trial success. Our solutions, realized through the proprietary PULSE® platform, have proven to optimize the supply chain, minimize operational costs, and ensure timely and accurate patient dosing. Endpoint is headquartered in Raleigh-Durham, North Carolina, with offices across the United States, Europe, and Asia.


Position Overview:

The Data Engineer plays a critical role in designing, implementing, and maintaining the data infrastructure that drives our business intelligence, analytics, and data science initiatives. Working closely with cross-functional teams, the Data Engineer ensures data is accurate, accessible, and optimized for a range of business needs. The position requires expertise in Databricks, SQL, Python, Spark, and other big data tools, with a strong emphasis on ELT/ETL processes, and involves collaborating with stakeholders to ensure data quality and to build efficient, scalable data solutions.


Responsibilities:
  • Design, develop, and maintain scalable ETL/ELT pipelines using Databricks and other big data technologies.
  • Optimize data workflows to handle large volumes of data efficiently.
  • Build and manage data warehouses and data lakes to store structured and unstructured data.
  • Utilize SQL, Python, and Spark for data extraction, transformation, and loading (ETL) processes.
  • Work closely with data analysts and data scientists to understand their data needs and ensure the availability of clean, reliable data.
  • Integrate data from various sources, ensuring consistency and accuracy across the data ecosystem.
  • Implement data quality checks to ensure data accuracy, completeness, and consistency.
  • Develop and enforce data governance policies and procedures to maintain high data quality standards.
  • Develop and support BI tools and dashboards, providing business insights and data-driven decision-making support.
  • Work with stakeholders to understand reporting requirements and deliver actionable insights.
  • Automate repetitive data processing tasks to improve efficiency and reduce manual work.
  • Continuously monitor and improve data pipeline performance, addressing bottlenecks and optimizing resources.
  • Document data processes, workflows, and architecture for future reference and knowledge sharing.
  • Ensure compliance with data security and privacy regulations such as GDPR and HIPAA.


Education:
  • Bachelor's degree in Computer Science, Software Engineering, Mathematics, or a related technical field is preferred.


Experience:
  • 8+ years of technical experience with a strong focus on big data technologies in any of these areas: software engineering, integrations, data warehousing, data analysis, or business intelligence, preferably at a technology or biotech/pharma company.
  • Proficiency in Databricks for data engineering tasks.
  • Advanced knowledge of SQL for complex queries, data manipulation, and performance tuning.
  • Strong programming skills in Python for scripting and automation.
  • Experience with big data tools (e.g., Spark, Hadoop) and data processing frameworks.
  • Familiarity with BI tools (e.g., Tableau, Power BI) and experience in developing dashboards and reports.
  • Experience with cloud platforms & tools like Azure ADF or Databricks.
  • Familiarity with data modeling and data architecture design.
  • Understanding of machine learning concepts and their application in data engineering.


Skills:
  • Keen attention to detail and bias for action.
  • Excellent organizational skills and proven ability to multi-task.
  • Ability to influence without authority and lead successful teams.
  • Strong interpersonal skills with the ability to work effectively with a wide variety of professionals.
  • Ability to lead back-end data initiatives: identify data sources, write scripts, transfer and transform data, and automate processes.
  • Ability to understand source data, its strengths, weaknesses, semantics, and formats.
  • Excellent knowledge of logical and physical data modeling concepts (relational and dimensional).


$100,000 - $140,000 a year

Benefits:

All job offers will be based on a candidate’s location, skills, and prior relevant experience, applicable degrees/certifications, as well as internal equity and market data. Regular, full-time or part-time employees working 30 or more hours per week are eligible for comprehensive benefits including Medical, Dental, Vision, Life, STD/LTD, 401(k), Paid time off (PTO) or Flexible time off (FTO), and Company bonus where applicable.

Endpoint Clinical does not accept unsolicited resumes from search firms or any other third parties. Any unsolicited resume sent to Endpoint Clinical will be considered Endpoint Clinical property, and Endpoint Clinical will not pay a fee should it hire the subject of any unsolicited resume.


Endpoint Clinical is an equal opportunity employer AA/M/F/Veteran/Disability.

 

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

#LI-MT
