Sr. Data Engineer @ Starbucks
Openings: 2
100% Remote
W2 (No C2C)
MUST HAVE data engineering experience with PySpark & Databricks within an Azure environment.
Required Skills and Experience
- 7+ years of experience as a Data Engineer with the following technologies
- Strong/expert Spark (PySpark) using Databricks
- Hands-on data pipeline development and ingest patterns in Azure
- Orchestration tools: Airflow
- SQL
- Denormalized data modeling for big data systems
- Strong analytical and design skills
- Bachelor's degree in computer science, management information systems, or a related discipline, or equivalent work experience
Nice to Have Skills and Experience
- Power BI/Tableau experience
- Alteryx experience
- ML/AI experience
Job Description
An employer is looking for a Sr. Data Engineer for a fully remote position. This person will join the Store and Supply Chain Team. Originally, all of the Store and Supply Chain data was stored in an old, specialized data warehouse; these two data engineers will be responsible for migrating that data into a new, well-governed, compliant Unity Catalog (UC). This position is responsible for the design, development, testing, and support of data pipelines that enable continuous data processing for data exploration, data preparation, and real-time business analytics.
- Demonstrate deep knowledge of the data engineering domain to build and support non-interactive (batch, distributed) and real-time, highly available data, data pipeline, and technology capabilities
- Build fault-tolerant, self-healing, adaptive, and highly accurate data computational pipelines
- Provide consultation and lead implementation of complex programs
- Develop and maintain documentation relating to all assigned systems and projects
- Tune queries running over billions of rows of data in a distributed query engine
- Perform root cause analysis to identify permanent resolutions to software or business process issues