Staff Data Engineer
Company | Later
---|---
Location | Chicago, IL, USA
Salary | $200,000 – $228,000
Type | Full-Time
Degrees |
Experience Level | Expert or higher
Requirements
- 10+ years of experience in data engineering, software engineering, or related fields.
- Proven experience leading the technical strategy and execution of large-scale data platforms.
- Expertise in cloud technologies (Google Cloud Platform, AWS, Azure) with a focus on scalable data solutions (BigQuery, Snowflake, Redshift, etc.).
- Strong proficiency in SQL, Python, and distributed data processing frameworks (Apache Spark, Flink, Beam, etc.).
- Extensive experience with streaming data architectures using Kafka, Flink, Pub/Sub, Kinesis, or similar technologies.
- Expertise in data modeling, schema design, indexing, partitioning, and performance tuning for analytical workloads, as well as data governance (security, access control, and compliance with GDPR, CCPA, and SOC 2).
- Strong experience designing and optimizing scalable, fault-tolerant data pipelines using workflow orchestration tools like Airflow, Dagster, or Dataflow.
- Ability to lead and influence engineering teams, drive cross-functional projects, and align stakeholders towards a common data vision.
- Experience mentoring senior and mid-level data engineers to enhance team performance and skill development.
Responsibilities
- Lead the design and evolution of a scalable data architecture that meets analytical, machine learning, and operational needs.
- Architect and optimize data pipelines for batch and real-time data processing, ensuring efficiency and reliability.
- Implement best practices for distributed data processing, ensuring scalability, performance, and cost-effectiveness of data workflows.
- Define and enforce data governance policies, implement automated validation checks, and establish monitoring frameworks to maintain data integrity.
- Ensure data security and compliance with industry regulations by designing appropriate access controls, encryption mechanisms, and auditing processes.
- Drive innovation in data engineering practices by researching and implementing new technologies, tools, and methodologies.
- Work closely with data scientists, engineers, analysts, and business stakeholders to understand data requirements and deliver impactful solutions.
- Develop reusable frameworks, libraries, and automation tools to improve efficiency, reliability, and maintainability of data infrastructure.
- Guide and mentor data engineers, fostering a high-performing engineering culture through best practices, peer reviews, and knowledge sharing.
- Establish and monitor SLAs for data pipelines, proactively identifying and mitigating risks to ensure high availability and reliability.
Preferred Qualifications
- Experience with machine learning infrastructure and integrating ML models into data pipelines.
- Experience with Kappa/Lambda architectures for real-time data processing.
- Background in data observability, lineage tracking, and anomaly detection tools (Monte Carlo, Databand, Great Expectations, etc.).
- Experience working with decentralized data architecture (e.g., Data Mesh principles).