Senior Software Engineer II, Cloud Data Pipeline

  • Company: Nautilus Biotechnology
  • Employment: Full-time
  • Location: United States, Washington
  • Posted 4 days ago, updated 1 hour ago

At Nautilus, we have a big and important mission: improve the health of millions by unleashing the potential of the proteome to accelerate drug development and enable a new world of precision and personalized medicine. We are developing a single-molecule protein analysis platform of unprecedented sensitivity, scale, and ease of use that we believe will democratize access to the proteome -- one of the most dynamic and valuable sources of biological insight. To accomplish this, we are pursuing hard scientific problems with an entrepreneurial mindset and creating a world-class team of builders, innovators, and dreamers across a wide range of disciplines.

We are hiring a Senior Software Engineer II to join our Cloud Data Pipeline team. This team is responsible for the data infrastructure that transforms raw protein measurement data into the scientific outputs that our customers and internal researchers depend on. You'll own ETL pipelines, cloud infrastructure, and the APIs and databases that connect our data platform to the rest of the company. As Nautilus moves into commercial deployment, this role sits at the intersection of data engineering rigor and the practical demands of a production scientific platform. Your work will directly shape what our customers can learn from their experiments and how reliably they can trust the results.

This position will report to the Manager, Data Engineering and Cloud Pipelines and is located in Seattle, WA. The position is hybrid and requires a minimum of three days onsite.

Responsibilities

  • Design and implement data pipelines and ETLs that process protein measurement data at scale, turning instrument outputs into reliable, queryable scientific results.

  • Improve the architecture of existing cloud systems: identify structural weaknesses, propose better approaches, and drive implementation alongside the technical lead.

  • Maintain and evolve the APIs and database schemas that serve internal teams including bioinformatics, science, and product development, adapting as their needs grow.

  • Contribute to the team's DevOps practice: optimize AWS costs, manage cloud deployments, improve system security, and drive performance improvements through infrastructure changes.

  • Work cross-functionally with scientific and software teams to define data quality metrics, understand how downstream consumers use pipeline outputs, and ensure the platform meets their needs.

  • Surface and advocate for changes to project priorities and architecture across the cloud pipeline and adjacent projects.

Requirements

  • 7+ years of relevant experience in a software engineering organization, with a strong track record of delivering production-quality systems.

  • Bachelor's degree in Computer Science or a related field, or equivalent practical experience.

  • Fluency in a variety of programming languages. We are currently invested in Python for our data pipelines.

  • Solid experience with cloud infrastructure on AWS, including cost management and deployment practices.

  • Experience with CI/CD pipelines and infrastructure-as-code (e.g., Terraform, CDK).

  • Experience with relational and non-relational database design.

  • Demonstrated experience building and maintaining data pipelines or ETL systems at production scale.

  • Skilled in multiple technology domains with the ability to independently pick up new ones as needed.

  • Strong communication skills and comfort working across engineering, science, and product stakeholders.

  • Ability to recognize when a change in direction is necessary and adapt effectively when priorities shift.

  • Familiarity with AI-driven development tools and methodologies.

Nice to Haves

  • Experience with Docker and container orchestration tools (Kubernetes, ECS).

  • Experience with workflow orchestration tools (e.g., Nextflow, Step Functions, Airflow, Prefect).

  • Experience with data observability, pipeline monitoring, or data quality frameworks.

  • Background in biotech, life sciences, or scientific data processing.

  • Familiarity with NoSQL data stores and when to use them alongside relational databases.
