Data Engineer
Company | Ibotta
---|---
Location | Denver, CO, USA
Salary | $110,000 – $126,000
Type | Full-Time
Degrees | Bachelor’s
Experience Level | Mid Level, Senior
Requirements
- 3+ years of experience in software development, preferably with Scala and Python.
- Bachelor’s degree in Computer Science, Engineering or a related field required.
- Experience as a key contributor on medium and large data projects, from ideation to implementation.
- Experience with database design principles, supported by strong SQL skills.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
Responsibilities
- Work with cross-functional engineering teams to enable approachable and self-service data movement and access patterns.
- Provide guidance and assistance to stakeholders in building complex datasets that meet business needs.
- Identify, design, and implement process improvements, including automating manual processes, optimizing data delivery, and redesigning infrastructure for greater reliability and performance.
- Work as a member of the Data Engineering squad to deliver product features and resolve data-related technical issues.
- Work with information security to keep our data secure.
- Support the engineering of distributed systems, frameworks, and design patterns enabling efficient usage of Ibotta’s Data Lake.
- Use Scala or Python with Spark to collect and manage data at scale.
- Help build and manage automation tools and data pipelines that meet Data Governance and Data Security standards.
- Evangelize Data Engineering and supporting capabilities with Platform and Analytics teams.
- Perform incident resolution and root cause analysis of critical outages. Implement solutions to systematic failures. Provide on-call support, including after-hours on a rotational basis.
- Assist with documentation of the environments and data tooling that support our products.
Preferred Qualifications
- Experience building and implementing data pipelines using Databricks.
- Experience with event-driven architecture design patterns and practices.
- Experience with any of the following is a strong plus:
  - AWS cloud services (EC2, S3)
  - Scala and Spark
  - Delta Lake, Apache Iceberg, or Apache Hudi
  - Message brokers such as Kafka or Kinesis
  - ETL tools and processes (Airflow or similar)
  - Infrastructure as code using Terraform, CloudFormation, etc.
  - Building APIs and libraries
- Agile (Kanban or Scrum) development experience.