Principal Data Engineer

Company: Verizon Communications
Location: Boston, MA, USA; E Fowler Ave, Tampa, FL, USA; Berkeley Heights, NJ, USA; Ashburn, VA, USA; Alpharetta, GA, USA; Irving, TX, USA
Salary: Not provided
Type: Full-Time
Degrees: Bachelor’s, Master’s
Experience Level: Senior, Expert or higher

Requirements

  • Six or more years of relevant experience, demonstrated through one or a combination of work experience, military experience, or specialized training.
  • Experience with data warehousing, data lakes, and big data platforms.
  • Experience with GCP tools such as Cloud Dataflow, Cloud Shell SDK, Cloud Composer, Google Cloud Storage (GCS), and BigQuery.

Responsibilities

  • Developing high-quality code that meets standards and delivers desired functionality using cutting-edge technology.
  • Programming components and developing features and frameworks.
  • Working independently and contributing to immediate and cross-functional teams.
  • Participating in design discussions and contributing to architectural decisions.
  • Analyzing problem statements, breaking down problems, and providing solutions.
  • Taking ownership of large tasks, delivering on time, and mentoring team members.
  • Exploring alternate technologies and approaches to solve problems, and following an agile approach to deliver data products.

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Information Science, Engineering, or a related field with 4+ years of relevant work experience.
  • 4 or more years of proven experience collaborating with data engineers, architects, data scientists, and enterprise platform teams in designing and deploying data products and ML models in production.
  • 4 or more years of experience with GCP tools such as Cloud Dataflow, Dataproc, Cloud Shell SDK, Cloud Composer, Google Cloud Storage (GCS), Cloud Functions, and BigQuery.
  • Experience in designing and deploying Hadoop clusters and various big data analytical tools, including HDFS, Pig, Hive, Sqoop, Spark, and Oozie.
  • Hands-on experience designing and building data pipelines for ETL jobs using Airflow or Apache Beam on GCP (Dataproc and BigQuery); a minimal Airflow sketch follows this list.
  • Hands-on experience implementing real-time solutions using the Apache Beam SDK with Dataflow, Spark, or Flink as runners; see the Beam sketch after this list.
  • Experience with real-time data streaming technologies such as Google Pub/Sub and Apache Kafka.
  • Experience with advanced transformations and windowing functions in real-time data processing.
  • Experience with Google Dataflow templates for creating reusable and scalable data processing pipelines.
  • Experience working with at least one NoSQL database (HBase, Cassandra, Couchbase) and one relational database (Oracle, MySQL, Teradata).
  • Strong problem-solving skills and the ability to empathize and maintain a positive attitude.
  • Participation in technology and industry forums on evolving data engineering practices.
  • Strategic thinker with the ability to apply a business mindset to data issues and initiatives, crafting cross-business strategies and plans for multiple stakeholders.
  • Strong leadership, communication, persuasion, and teamwork skills.
  • GCP/AWS Cloud certifications.
  • Familiarity with Large Language Models and a passion for Generative AI tools.
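
The Airflow bullet above describes a common GCS-to-BigQuery ETL pattern. Here is a minimal sketch of such a DAG; the DAG ID, bucket, dataset, and table names are illustrative placeholders, not details from this posting:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
        GCSToBigQueryOperator,
    )

    with DAG(
        dag_id="gcs_to_bigquery_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Load the day's newline-delimited JSON files from GCS into BigQuery.
        # Bucket, object path, and table are placeholders.
        load_events = GCSToBigQueryOperator(
            task_id="load_events",
            bucket="my-etl-bucket",
            source_objects=["events/{{ ds }}/*.json"],
            destination_project_dataset_table="my-project.analytics.events",
            source_format="NEWLINE_DELIMITED_JSON",
            write_disposition="WRITE_APPEND",
        )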
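
The Beam and windowing bullets describe the streaming side. The sketch below, again with placeholder project, topic, and table names, reads from Pub/Sub, counts events in fixed 60-second windows, and writes the windowed counts to BigQuery; with the Dataflow runner options set, it runs as a Dataflow streaming job:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.transforms.window import FixedWindows

    # Project, region, bucket, topic, and table names are placeholders.
    options = PipelineOptions(
        streaming=True,
        runner="DataflowRunner",  # use "DirectRunner" to test locally
        project="my-project",
        region="us-east1",
        temp_location="gs://my-etl-bucket/tmp",
    )

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events")
            | "Decode" >> beam.Map(lambda msg: msg.decode("utf-8"))
            # Fixed 60-second windows; without_defaults() below prevents a
            # default value from being emitted for empty non-global windows.
            | "Window" >> beam.WindowInto(FixedWindows(60))
            | "CountPerWindow" >> beam.CombineGlobally(
                beam.combiners.CountCombineFn()).without_defaults()
            | "ToRow" >> beam.Map(lambda n: {"event_count": n})
            | "WriteCounts" >> beam.io.WriteToBigQuery(
                "my-project:analytics.event_counts",
                schema="event_count:INTEGER")
        )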