Data Engineer
| Company | Amazon |
| --- | --- |
| Location | Vancouver, BC, Canada |
| Salary | $80,700 – $134,800 |
| Type | Full-Time |
| Degrees | Bachelor’s, Master’s |
| Experience Level | Entry Level/New Grad, Junior |
Requirements
- Experience with database, data warehouse, or data lake solutions
- Experience building data pipelines or automated ETL processes (a minimal sketch follows this list)
- Experience with SQL
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Are 18 years of age or older
- Are enrolled in or have completed an academic program that is physically located in Canada
- Experience with data mining and data transformation
- Currently working towards a Bachelor’s Degree in Computer Science, Computer Engineering, Information Management, Information Systems, or an equivalent technical discipline with an expected conferral date between September 2023 and March 2025, or have completed your degree no more than 24 months ago
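To make the pipeline, SQL, and scripting requirements concrete, below is a minimal sketch of the kind of automated ETL process described above, written in Python against a local SQLite database. The `events.csv` file, the `events` table, and all column names are illustrative assumptions, not part of any actual stack used by this team.

```python
# Minimal ETL sketch: extract rows from a (hypothetical) CSV export,
# transform them, and load them into a local SQLite staging table.
import csv
import sqlite3


def extract(path: str) -> list:
    """Read raw rows from a CSV export."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows: list) -> list:
    """Drop malformed rows and cast fields to their target types."""
    cleaned = []
    for row in rows:
        if not row.get("event_id") or not row.get("timestamp"):
            continue  # skip records missing required fields
        cleaned.append((row["event_id"], row["timestamp"], float(row.get("value") or 0)))
    return cleaned


def load(records: list, db_path: str = "warehouse.db") -> None:
    """Append transformed records to the staging table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS events (event_id TEXT, timestamp TEXT, value REAL)"
    )
    con.executemany("INSERT INTO events VALUES (?, ?, ?)", records)
    con.commit()
    con.close()


if __name__ == "__main__":
    load(transform(extract("events.csv")))
```

The same extract/transform/load structure carries over when the CSV source and SQLite target are swapped for whatever systems a given team actually uses.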
Responsibilities
- Design, implement, and automate deployment of our distributed system for collecting and processing log events from multiple sources.
- Design data schema and operate internal data warehouses and SQL/NoSQL database systems.
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards that engineers, analysts, and data scientists use to drive key business decisions.
- Monitor and troubleshoot operational or data issues in the data pipelines.
- Drive architectural plans and implementation for future data storage, reporting, and analytic solutions.
- Develop code-based automated data pipelines able to process millions of data points.
- Improve database and data warehouse performance by tuning inefficient queries (see the sketch after this list).
- Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems.
- Provide assistance with troubleshooting, researching the root cause, and thoroughly resolving defects in the event of a problem.
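As a rough illustration of the query-tuning responsibility, the sketch below uses SQLite's `EXPLAIN QUERY PLAN` to show how adding an index on a frequently filtered column replaces a full table scan with an index lookup. The table, column, and index names are hypothetical.

```python
# Query-tuning sketch: inspect the plan for a slow filter, add an index,
# and confirm the full table scan is gone. All names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (event_id TEXT, timestamp TEXT, value REAL)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(f"e{i}", f"2024-01-{i % 28 + 1:02d}", float(i)) for i in range(10_000)],
)

query = "SELECT COUNT(*) FROM events WHERE timestamp = '2024-01-15'"

# Before: the plan reports a full scan of the events table.
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# Add an index on the filtered column, then re-check the plan.
con.execute("CREATE INDEX idx_events_timestamp ON events (timestamp)")
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```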
Preferred Qualifications
- Experience with data visualization software (e.g., AWS QuickSight or Tableau) or open-source projects
- Experience with big data processing technology (e.g., Hadoop or Apache Spark), data warehouse technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments
- Knowledge of writing and optimizing SQL queries in a business environment with large-scale, complex datasets
- Knowledge of the basics of designing and implementing a data schema, such as normalization and the relational vs. dimensional model
- Previous technical internship(s), if applicable
- Prior experience with AWS
- Can articulate the basic differences between data types (e.g., JSON/NoSQL vs. relational); see the sketch after this list
- Enrolled in a Master’s Degree or advanced technical degree program with an expected conferral date between September 2023 and March 2025, or completed your degree no more than 24 months ago.
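For the schema-design and data-type items above, here is a small sketch contrasting a denormalized JSON document with a normalized relational layout for the same order, using Python's built-in `json` and `sqlite3` modules. All table, column, and field names are illustrative assumptions.

```python
# Contrast: one JSON document embedding customer data vs. a normalized
# relational schema where customers are stored once and referenced by key.
import json
import sqlite3

# Document style: customer details are embedded (and duplicated) per order.
order_doc = json.dumps({
    "order_id": 1,
    "customer": {"id": 42, "name": "Pat"},
    "items": [{"sku": "A-100", "qty": 2}],
})

# Relational style: separate, normalized tables linked by foreign keys.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id));
    CREATE TABLE order_items (order_id INTEGER REFERENCES orders(order_id),
                              sku TEXT, qty INTEGER);
""")
con.execute("INSERT INTO customers VALUES (42, 'Pat')")
con.execute("INSERT INTO orders VALUES (1, 42)")
con.execute("INSERT INTO order_items VALUES (1, 'A-100', 2)")

# The same question answered against either representation.
print(json.loads(order_doc)["customer"]["name"])
print(con.execute(
    "SELECT c.name FROM orders o JOIN customers c ON c.id = o.customer_id "
    "WHERE o.order_id = 1"
).fetchone()[0])
```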