

Data Engineer Lead

  • Operations
  • Team Leader
  • Full-time


For companies hellbent on growth, Rapyd is the global fintech partner that simplifies and advances commerce in every market. 

Rapyd lets you build bold. Liberate global commerce with all the tools your business needs to create payment, payout and fintech experiences everywhere. From famous Fortune 500s to ambitious neighborhood upstarts, our payments network and powerful fintech platform make it easy to pay suppliers and get paid by customers—locally or internationally.

With offices in London, Tel Aviv, San Francisco, Denver, Dubai, Miami, Singapore, Iceland and Hong Kong, we know what it takes to make cross-border commerce as friction-free as being next door. We’re boldly building it—so you can grow your business.


We are looking for a Lead Data Engineer to join our Data Group. As a Lead Data Engineer, you'll be in charge of building and leading the first data engineering team at Rapyd. You'll team up with our BI team and work closely with our DBA, DevOps and architecture teams.


Responsibilities:

  • Create and structure end-to-end data pipelines and ETL/ELT processes, from the source all the way to the analysts' hands, giving them the ideal conditions to make smart, data-driven business decisions
  • Write multi-step, scalable processes over all our available data sources – Marketing, Operations, CS, Product, CRM and more – and tie them together into a valuable, useful source of insights for the analysts
  • Design, implement and maintain data pipelines from different sources
  • Collaborate with stakeholders across the company – BI developers, analysts, data scientists and others – to deliver team tasks
  • Synthesize and translate business requirements into technical specifications


Requirements:

  • 4+ years of experience as a Data Engineer in a data-rich environment
  • Hands-on experience developing end-to-end ETL/ELT processes – must
  • Experience with big data, ETL and orchestration tools such as Spark, Kafka and Apache Airflow – must
  • Excellent SQL skills, including complex SQL queries and procedures
  • In-depth knowledge of Python
  • Experience with dimensional data modeling and schema design in data warehouses
  • Strong orientation in Amazon Redshift and Google BigQuery environments – big advantage
  • Experience with Google Cloud data tools (BigQuery, Cloud Composer, AutoML, Dataflow) and Fivetran – a plus
  • Bachelor's degree in Engineering, Computer Science, Mathematics, or an equivalent numerate/analytical degree
  • Strong communication skills and excellent English – a must

