Software Engineer – Database Engineering
Company | Snowflake
---|---
Location | Menlo Park, CA, USA; Bellevue, WA, USA
Salary | $157,000 – $230,000
Type | Full-Time
Degrees | Bachelor’s, Master’s, PhD
Experience Level | Junior, Mid Level
Requirements
- 2+ years of industry experience working on commercial or open-source software.
- Fluency in Java or C++.
- Familiarity with development in a Linux environment.
- Excellent problem-solving skills and strong CS fundamentals, including data structures, algorithms, and distributed systems.
- Systems programming skills, including multi-threading and concurrency (see the sketch after this list).
- Experience with implementation, testing, debugging, and documentation.
- Bachelor’s degree or foreign equivalent in Computer Science, Software Engineering, or a related field; Master’s or PhD preferred.
- Ability to work on-site in our San Mateo / Bellevue / Berlin offices.
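By way of illustration of the concurrency fundamentals above, here is a minimal, self-contained Java sketch (illustrative only, not Snowflake code; all names are hypothetical) that fans work out across a fixed thread pool and combines the partial results:

```java
// Illustrative only: chunk an array across a thread pool and sum the pieces.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int[] data = IntStream.range(0, 1_000_000).toArray();
        int chunks = Runtime.getRuntime().availableProcessors();
        int chunkSize = (data.length + chunks - 1) / chunks;

        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        try {
            // Submit one Callable per chunk; each returns a partial sum.
            List<Future<Long>> partials = IntStream.range(0, chunks)
                    .mapToObj(i -> pool.submit(() -> {
                        long sum = 0;
                        int end = Math.min(data.length, (i + 1) * chunkSize);
                        for (int j = i * chunkSize; j < end; j++) sum += data[j];
                        return sum;
                    }))
                    .toList();

            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // blocks per chunk
            System.out.println("total = " + total);
        } finally {
            pool.shutdown();
        }
    }
}
```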
Responsibilities
- Design, develop, and support a petabyte-scale cloud database that is highly parallel and fault-tolerant.
- Build high-quality and highly reliable software to meet the needs of some of the largest companies on the planet.
- Analyze performance and scalability bottlenecks in the system and resolve them.
- Pinpoint problems, instrument relevant components as needed, and ultimately implement solutions.
- Design and implement novel query optimization or distributed data processing algorithms that allow Snowflake to provide industry-leading data warehousing capabilities (a toy optimizer rewrite follows this list).
- Design and implement the new service architecture required to enable the Snowflake Data Cloud.
- Develop tools for improving our customers’ insights into their workloads.
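To give a flavor of what query optimization work looks like, here is a deliberately tiny, hypothetical sketch (Java 17+; none of these types come from Snowflake’s planner) of a rule-based rewrite that pushes a filter below a projection so that less data flows up the plan:

```java
// Hypothetical plan nodes; real planners track schemas, costs, and many rules.
import java.util.List;

public class PushdownSketch {
    sealed interface Plan permits Scan, Filter, Project {}
    record Scan(String table) implements Plan {}
    record Filter(String predicate, Plan child) implements Plan {}
    record Project(List<String> columns, Plan child) implements Plan {}

    // Rule: Filter(Project(x)) -> Project(Filter(x)), valid when the predicate
    // only references columns the projection keeps (assumed true here).
    static Plan pushFilterBelowProject(Plan plan) {
        if (plan instanceof Filter f && f.child() instanceof Project p) {
            return new Project(p.columns(), new Filter(f.predicate(), p.child()));
        }
        return plan;
    }

    public static void main(String[] args) {
        Plan before = new Filter("region = 'EU'",
                new Project(List.of("region", "amount"), new Scan("orders")));
        System.out.println(pushFilterBelowProject(before));
        // Project[columns=[region, amount], child=Filter[...child=Scan[table=orders]]]
    }
}
```

A production optimizer applies many such rules, often cost-based, and must verify the column-reference precondition that this sketch simply assumes.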
Preferred Qualifications
- SQL or other database technologies, including internal design and implementation.
- Query optimization, query execution, and compiler design and implementation.
- Experience with the internals of distributed key-value stores like FoundationDB and storage engines like RocksDB, InnoDB, and BerkeleyDB (see the memtable sketch after this list).
- Experience with MySQL or PostgreSQL internals.
- Data warehouse design, database systems, and large-scale data processing solutions like Hadoop and Spark.
- Large-scale distributed systems, transactions, and consistency models.
- Experience with database replication technology.
- Big data storage technologies and their applications, e.g., HDFS, Cassandra, and columnar databases.
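For candidates exploring storage engine internals, here is a minimal, hypothetical Java sketch of a memtable: the sorted, concurrent in-memory map at the core of LSM-tree engines such as LevelDB and RocksDB, which use skip lists for the same reason `ConcurrentSkipListMap` appears here. Real engines pair this with a write-ahead log, SSTable flushes, and background compaction:

```java
// Hypothetical memtable sketch; not drawn from any real engine's code.
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class MemtableSketch {
    // Sorted and safe for concurrent readers/writers, like the skip-list
    // memtables in LevelDB and RocksDB.
    private final ConcurrentSkipListMap<String, String> table = new ConcurrentSkipListMap<>();

    void put(String key, String value) { table.put(key, value); }
    String get(String key) { return table.get(key); }

    // Ordered range scans are the operation a sorted memtable makes cheap.
    Iterable<Map.Entry<String, String>> scan(String fromInclusive, String toExclusive) {
        return table.subMap(fromInclusive, true, toExclusive, false).entrySet();
    }

    public static void main(String[] args) {
        MemtableSketch mt = new MemtableSketch();
        mt.put("user:2", "grace");
        mt.put("user:1", "ada");
        mt.put("user:3", "edsger");
        mt.scan("user:1", "user:3").forEach(System.out::println); // user:1=ada, user:2=grace
    }
}
```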