Senior Data Software Engineer

Company: PsiQuantum
Location: Palo Alto, CA, USA / Ontario, Canada
Salary: $150,000 – $205,000
Type: Full-Time
Degrees: Bachelor’s, Master’s
Experience Level: Senior, Expert or higher

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 8+ years in Data Engineering with hands-on cloud and SaaS experience.
  • Proven experience designing data pipelines and workflows, preferably for high-performance or large-scale scientific computations.
  • Strong knowledge of database design principles (relational and/or NoSQL) for complex or high-volume datasets.
  • Proficiency in one or more programming languages commonly used for data engineering (e.g., Python, C++, Rust).
  • Hands-on experience with orchestration tools such as Prefect, Apache Airflow, or equivalent frameworks.
  • Hands-on experience with cloud data services (e.g., Databricks, AWS Glue/Athena, Amazon Redshift, Snowflake, or similar).
  • Excellent teamwork and communication skills, especially in collaborative, R&D-focused settings.

Responsibilities

  • Develop and refine data processing pipelines to handle complex scientific or computational datasets.
  • Design and implement scalable database solutions to efficiently store, query, and manage large volumes of domain-specific data.
  • Refactor and optimize existing codebases to enhance performance, reliability, and maintainability across various data workflows.
  • Collaborate with cross-functional teams (e.g., research scientists, HPC engineers) to support end-to-end data solutions in a high-performance environment.
  • Integrate workflow automation tools, ensuring the smooth operation of data-intensive tasks at scale.
  • Contribute to best practices for versioning, reproducibility, and metadata management of data assets.
  • Implement observability: deploy monitoring and logging tools (e.g., CloudWatch, Prometheus, Grafana) to preempt issues, optimize performance, and ensure SLA compliance.

Preferred Qualifications

  • Knowledge of and experience with containerization and orchestration tools such as Docker and Kubernetes, as well as event-driven architectures.
  • Knowledge of HPC job schedulers (e.g., Slurm, LSF, or PBS) and distributed computing best practices.
  • Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CDK.
  • Experience deploying domain-specific containerization (e.g., Apptainer/Singularity) or managing GPU/ML clusters.