About Us
Relay is a cross-chain payments system providing instant, low-cost bridging, swaps, and transactions. We're the fastest and cheapest way to bridge and transact across chains, serving over 5 million users who have completed 59+ million transactions with $6+ billion in volume across 85+ chains.
Our mission is to make transacting across chains as fast, cheap, and reliable as online payments. We are building the core infrastructure that abstracts chains away from end-user payments, enabling the next billion users to experience the benefits of blockchain without the UX burdens. We are bringing the Relay Network to market through our consumer app (Relay.Link) and major ecosystem partners, including OpenSea, Alchemy, LiFi, MetaMask, Coinbase, and more.
Role Overview
We're seeking a talented Data Engineer to build the data infrastructure that powers analytics and decision-making across Relay. You'll be the critical bridge between our TypeScript production systems and our data warehouse, enabling our analytics team to derive insights from billions in transaction volume across 85+ chains.
This is a unique opportunity to build data pipelines for one of the fastest-growing cross-chain infrastructure platforms, working with high-volume blockchain data and establishing foundational data practices for the company.
Key Responsibilities
Data Pipeline Development
Design and build reliable ETL/ELT pipelines to extract data from our TypeScript production systems into our data warehouse (a sketch of one such pipeline follows this list)
Instrument event tracking and data emission from our cross-chain relayer and application backend
Process and transform blockchain transaction data, cross-chain events, and user activity across 85+ chains
Build data models that support analytics on transaction volume, capital efficiency, bridge performance, and user behavior
Ensure data quality, consistency, and reliability across all pipelines
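For a concrete flavor of the extraction work, here is a minimal Python sketch of watermark-based incremental loading. The endpoint URL, state file, and load step are all illustrative stand-ins, not Relay's actual systems or schemas.

    # Minimal sketch of watermark-based incremental extraction (Python 3.9+).
    # The endpoint, state file, and load step are hypothetical stand-ins.
    import json
    import urllib.request

    WATERMARK_FILE = "last_extracted_at.txt"  # hypothetical local state store

    def read_watermark() -> str:
        try:
            with open(WATERMARK_FILE) as f:
                return f.read().strip()
        except FileNotFoundError:
            return "1970-01-01T00:00:00Z"  # first run: backfill everything

    def fetch_events_since(watermark: str) -> list[dict]:
        # Hypothetical JSON endpoint exposed by the TypeScript backend.
        url = f"https://example.internal/events?since={watermark}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def load_rows(rows: list[dict]) -> None:
        # Stand-in for a warehouse bulk load (COPY, batch insert, etc.).
        print(f"loaded {len(rows)} rows")

    def run() -> None:
        watermark = read_watermark()
        rows = fetch_events_since(watermark)
        if not rows:
            return
        load_rows(rows)
        # Advance the watermark only after a successful load, so a failed
        # run is retried from the same point (at-least-once delivery).
        with open(WATERMARK_FILE, "w") as f:
            f.write(max(r["created_at"] for r in rows))

    if __name__ == "__main__":
        run()

Advancing the watermark only after a successful load keeps the pipeline safely re-runnable: a failed run simply retries from the last committed point.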
Infrastructure & Tooling
Establish data orchestration infrastructure using tools like Airflow, Dagster, or Prefect (see the orchestration sketch after this list)
Implement monitoring, alerting, and observability for data pipelines
Optimize query performance and data warehouse costs
Build self-service data tools and documentation for the analytics team
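As a sketch of what that orchestration layer might look like, here is a minimal Airflow (2.4+) DAG wiring an extract step into a transform step, with retries and a failure alert. The DAG name, task callables, and alert hook are illustrative placeholders rather than Relay's actual jobs.

    # Minimal Airflow (2.4+) sketch: daily extract -> transform with retries
    # and a failure alert. All names here are illustrative placeholders.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def alert_on_failure(context):
        # Stand-in for a real pager/Slack hook.
        print(f"pipeline failed: {context['task_instance'].task_id}")

    def extract_events():
        print("extracting events from production")

    def build_models():
        print("building warehouse models")

    default_args = {
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": alert_on_failure,
    }

    with DAG(
        dag_id="relay_events_daily",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
        transform = PythonOperator(task_id="build_models", python_callable=build_models)
        extract >> transform

The same shape translates directly to Dagster assets or Prefect flows; the key properties are idempotent tasks, automatic retries, and alerting on failure.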
Cross-Functional Collaboration
Partner with backend engineers to understand data schemas and implement proper data capture
Work closely with our analytics lead to understand reporting needs and design appropriate data models
Collaborate with product and business teams to define metrics and ensure data availability
Document data flows, schemas, and best practices
Required Qualifications
Technical Expertise
3-5+ years of experience building data pipelines and working with data warehouses
Strong proficiency in SQL and data modeling techniques
Experience with Python for data processing and pipeline orchestration
Familiarity with TypeScript/Node.js applications and how to extract data from them (or willingness to learn quickly)
Experience with modern data stack tools (dbt, Airflow/Dagster/Prefect, Fivetran/Airbyte, etc.)
Knowledge of data warehouse platforms (Snowflake, BigQuery, Redshift, or similar)
What We Offer
Competitive base salary ($200K+ depending on experience)
Equity package
Comprehensive health, dental, and vision insurance
Annual company offsite
Unlimited PTO policy with an encouraged minimum of two weeks annually
Remote-first culture with emphasis on asynchronous communication and flexibility
Opportunity to build foundational data infrastructure for the leading cross-chain payments platform