Software Engineer – Foundation Inference Infrastructure
Company | Tesla
---|---
Location | Palo Alto, CA, USA
Salary | Not Provided
Type | Full-Time
Degrees | Not Provided
Experience Level | Junior, Mid Level
Requirements
- Proficiency with Python
- Familiarity with managing hardware inference chips such as TPUs, and with optimizing machine learning inference workloads for low latency and scale
- Familiarity with operating systems concepts such as networking, processes, file systems, and virtualization
- Familiarity with concurrent programming
- Familiarity with C++ and/or Golang
- Experience with Linux, container orchestrators such as Kubernetes, and bare-metal provisioning tools such as Ansible
- Experience with data stores such as PostgreSQL and Redis
Responsibilities
- Design and implement backend services and tooling that handle iteration and batch processing of inference, simulation, and evaluation workloads
- Work closely with other Autonomy teams to build foundational components and to bridge missing pieces in ML compiler and runtime infrastructure, while designing for scalability, reliability, security, and high performance
Preferred Qualifications
No preferred qualifications provided.