Software Engineer – AI Entities

Company: EvenUp
Location: Toronto, ON, Canada; San Francisco, CA, USA; Los Angeles, CA, USA
Salary: Not provided
Type: Full-Time
Experience Level: Senior, Expert or higher

Requirements

  • 8+ years of industry experience designing and building distributed data systems
  • Previous experience architecting and scaling event-driven architectures
  • Strong understanding and practical experience with data pipeline tooling and storage systems such as Postgres, Dagster, BigQuery, and Elasticsearch (see the sketch after this list)
  • The ability to communicate cross-functionally with various stakeholders to derive requirements and architect scalable solutions
  • Several years of industry experience building high-quality software and shipping production-ready code and infrastructure
  • You enjoy owning a project from start to finish and driving it across the finish line
  • Interest in making the world a fairer place (we don’t get paid unless we’re helping injured victims and/or their attorneys)
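
For context on the pipeline tooling named above, here is a minimal Dagster sketch of two dependent assets. The asset names, stub data, and dependency shape are hypothetical placeholders, not a description of EvenUp's actual pipeline.

```python
# A minimal Dagster sketch: one asset loads raw document records, a second
# derives entities from them. All names and data here are illustrative.
from dagster import Definitions, asset, materialize


@asset
def raw_case_documents() -> list[dict]:
    # In a real pipeline this might read from Postgres or object storage;
    # here we return a stub payload.
    return [{"case_id": "case-001", "path": "documents/demand_letter.pdf"}]


@asset
def extracted_entities(raw_case_documents: list[dict]) -> list[dict]:
    # Downstream asset: Dagster wires the dependency via the parameter name.
    return [{"case_id": d["case_id"], "entity": "claimant"} for d in raw_case_documents]


defs = Definitions(assets=[raw_case_documents, extracted_entities])

if __name__ == "__main__":
    # Materialize both assets in-process for a quick local check.
    result = materialize([raw_case_documents, extracted_entities])
    assert result.success
```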

Responsibilities

  • Build and expand on the foundations of our document extraction pipeline, leveraging LLMs and AI tooling to identify accurate data across files and cases (a sketch of this kind of extraction follows this list).
  • Design and develop modular services that expand the capabilities and scope of our data infrastructure and improve how we surface core data entities for downstream teams.
  • Collaborate with our DS team to integrate ML models into our production workflows and simplify ML deployment and observability.
  • Implement event-driven, low-latency systems that empower our stakeholders with accurate and reliable data.
  • Analyze and solve key performance bottlenecks, scaling challenges, and high availability issues.
  • Help grow our engineering team and define a ‘data first’ mentality across our organization.
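
As referenced in the first responsibility above, below is a rough sketch of LLM-backed entity extraction from unstructured document text. The prompt, output schema, and `call_llm` hook are hypothetical stand-ins, not the team's actual tooling.

```python
# A minimal sketch of structured entity extraction with an LLM. `call_llm`
# is any callable that takes a prompt string and returns the model's text.
import json
from typing import Callable

ENTITY_PROMPT = """Extract the claimant name and date of incident from the
document below. Respond with JSON: {{"claimant": str, "incident_date": str}}.

Document:
{document_text}
"""


def extract_entities(document_text: str, call_llm: Callable[[str], str]) -> dict:
    # Build the prompt, ask the model, then parse the JSON it returns.
    raw = call_llm(ENTITY_PROMPT.format(document_text=document_text))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Surface parse failures explicitly rather than silently dropping data.
        return {"claimant": None, "incident_date": None, "parse_error": raw}
```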

Preferred Qualifications

  • Fluency in Python, SQL, and GraphQL (a small GraphQL sketch follows this list)
  • Previous experience integrating ML models and LLMs into data services.
  • Domain expertise in legal technology or medical records, and experience working with unstructured data.
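
To illustrate the GraphQL piece of that stack, here is a small sketch of how a downstream consumer might fetch a core data entity. The endpoint, query shape, and field names are assumptions for illustration only, not EvenUp's actual API.

```python
# A minimal GraphQL-over-HTTP sketch: POST a query plus variables as JSON.
# The endpoint and schema below are hypothetical.
import requests

CASE_QUERY = """
query CaseEntities($caseId: ID!) {
  case(id: $caseId) {
    id
    claimant
    documents { id title }
  }
}
"""


def fetch_case_entities(endpoint: str, case_id: str) -> dict:
    response = requests.post(
        endpoint,
        json={"query": CASE_QUERY, "variables": {"caseId": case_id}},
        timeout=10,
    )
    response.raise_for_status()
    # Standard GraphQL responses put the payload under "data".
    return response.json()["data"]
```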