Contract · Possible Extension · 100% Remote
W2 Only · No 3rd Parties
3+ Years Experience Required
Join a Scala-based data processing platform team to design, develop, and maintain robust, scalable services that process and manage large volumes of data. You will work across distributed systems and cloud infrastructure, building resilient, observable services for data ingestion and processing that integrate with both AWS and GCP.
Must-Have Technologies
Scala · Dataflow · PostgreSQL · BigQuery · Kubernetes · Data Pipelines (E2E)
Responsibilities
Design, develop, and maintain scalable Scala-based services for large-volume data processing and ingestion.
Build and maintain end-to-end data pipelines integrating with AWS and GCP infrastructure.
Integrate with AWS SDKs (S3, STS) and GCP services including BigQuery, Dataflow, and Pub/Sub.
Implement asynchronous and concurrent programming patterns using FS2 and cats-effect (a brief illustrative sketch follows this list).
Write unit, component, and integration tests to validate code quality and reliability.
Troubleshoot and resolve issues in distributed, cloud-based environments.
Collaborate via Git-based workflows and contribute to clear, well-documented codebases.
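For context only, here is a minimal sketch of the kind of FS2 / cats-effect pattern referenced above. The object name IngestSketch and the fetchRecord / writeBatch stand-ins are hypothetical placeholders, not APIs from this role's codebase.

import cats.effect.{IO, IOApp}
import fs2.Stream
import scala.concurrent.duration._

// Illustrative only: hypothetical source/sink stand-ins for a real ingestion pipeline.
object IngestSketch extends IOApp.Simple {
  def fetchRecord(id: Int): IO[String]          = IO.pure(s"record-$id")
  def writeBatch(batch: List[String]): IO[Unit] = IO.println(s"wrote ${batch.size} records")

  val run: IO[Unit] =
    Stream
      .emits(1 to 100)              // simulated record ids to ingest
      .covary[IO]
      .parEvalMap(8)(fetchRecord)   // fetch up to 8 records concurrently
      .groupWithin(25, 1.second)    // batch by count or elapsed time, whichever comes first
      .evalMap(chunk => writeBatch(chunk.toList))
      .compile
      .drain
}

The parEvalMap / groupWithin combination is just one common way to express bounded concurrency and batching in FS2; the team's actual pipelines may be structured differently.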
Required & Preferred Skills
Required
Scala · sbt · Functional Programming · JSON / Circe · AWS (S3, STS, IAM) · GCP (BigQuery, Dataflow) · Google Pub/Sub · PostgreSQL / SQL · FS2 / cats-effect · Async & Concurrent Programming · Unit / Integration Testing · Git · Kubernetes
Preferred
Finagle / RPC Frameworks · Pipeline Orchestration · Workflow Management · Containerization
Soft Skills
Problem-Solving · Debugging · Team Collaboration · Clear Documentation
Education & Experience
Bachelor’s degree in Computer Science or a related field, or equivalent experience.
3+ years of professional software engineering experience, preferably in data engineering or backend systems.
Hands-on experience building end-to-end data pipelines on modern cloud infrastructure.