Fulltime Data Engineers openings in Seattle, United States on September 05, 2022

Data Engineer at BOEING

Location: Seattle

Job Description

At Boeing, we innovate and collaborate to make the world a better place. From the seabed to outer space, you can contribute to work that matters with a company where diversity, equity and inclusion are shared values. We’re committed to fostering an environment for every teammate that’s welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.

Boeing Data Solutions Service provides multiple products and offerings that give application teams the flexibility and agility to meet their specific application development and security requirements by securing and enabling business data. The team is currently looking for a Data Engineer to join the Cloud Capabilities Data Warehouse services team.

Position Responsibilities

Duties will include (but are not limited to):
• Create data pipelines, big data platforms and data integrations in databases, data warehouses and data lakes, working with various cloud and on-premises technologies.
• Drive initiatives to operationalize cloud computing, starting with Google Cloud Platform, applying standards across all platform implementations for the enterprise to enable best-of-breed, fit-for-purpose analytical solutions and data strategy.
• Contribute to the evolving systems architecture to meet changing requirements for scaling, reliability, performance, manageability, security compliance, and cost.
• Work closely with product managers, database administrators and database architects to define technical product requirements, and collaborate within an agile team to drive user story/task creation along with design and development activities.
• Automate using DevSecOps and other available tools to expedite product development while maintaining first-time quality.
• Participate in group sessions within developer community and share knowledge.
• Mentor junior team members and contribute to creating and maintaining best coding practices, with oversight through stringent reviews.
• Effectively resolve problems and roadblocks as they occur, consistently following through on details while driving innovation and issue resolution.
• Monitor the implementation of architecture throughout the system development lifecycle and seek and provide clarification when needed.
• Work with cross-functional teams spread across multiple products and locations within Boeing and with external partners, across different cultures and time zones.
This position has been identified as a virtual opportunity and does not require applicants to live in the Seattle, WA area or any of the listed locations. The position must meet Export Control compliance requirements; therefore, a “US Person” as defined by 22 C.F.R. § 120.15 is required. “US Person” includes US citizens, lawful permanent residents, refugees, and asylees.

Basic Qualifications (Required Skills / Experience):
• 2+ years of experience with cloud development and technologies; a particular focus on Google Cloud technologies is a plus, though Azure and AWS knowledge is also helpful.
• 3+ years of software development experience with Agile methodology (ADO, JIRA, source code repositories), CI/CD tools, and testing and automation.
• 3+ years of experience or familiarity with database management technologies, including database design and development using SQL/NoSQL/columnar databases.
• 3+ years of experience with data warehousing and building ETL pipelines.
• 3+ years of experience with Windows and UNIX/Linux operating systems, with scripting expertise.
• 3+ years of coding experience with any programming language (such as Java or Python).
• 1+ years of experience with Big Data solutions, data integration, and data analytics (Spark, Kafka, Watson).
Preferred Qualifications (Desired Skills / Experience):
• 1+ years of experience with Google Cloud Platform (GCP) products, including BigQuery, Cloud Storage, Cloud Functions, Dataproc, and Data Studio.
• 1+ years of familiarity with containers and container orchestration platforms (Docker/Kubernetes/OpenShift).
• Experience with DevSecOps tools (such as Git, Jenkins, Azure DevOps)
• Configuration control with Ansible (infrastructure as code)
• Familiarity with distributed systems and computing at scale.
• Programming skills in Scala, Go, Java, JavaScript, Python, Groovy
• Experience working in a diverse organization and ability to work with partners from within Boeing and outside, across different cultures and time-zones.
• Experience working with multiple file structures (e.g. ORC, Flat files, JSON)
• Displays excellent oral and written communication skills with team members, peers, customers and management.
Typical Education & Experience:

Education/experience typically acquired through advanced technical education (e.g., Bachelor's degree) and typically 9 or more years of related work experience, or an equivalent combination of technical education and experience (e.g., PhD + 4 years of related work experience, Master's + 7 years of related work experience, 13 years of related work experience, etc.).

Relocation:

Relocation assistance is not a negotiable benefit for this position.

Drug Free Workplace:

Boeing is a Drug Free Workplace where post-offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria are met as outlined in our policies.

Equal Opportunity Employer:

Boeing is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
Apply Here
For Remote Data Engineer roles, visit Remote Data Engineer Roles


Senior Data Engineer (Seattle or Remote) at Starbucks

Location: Seattle

From the beginning, Starbucks set out to be a different kind of company: one that not only celebrated coffee and its rich tradition, but that also brought a feeling of connection.

As a Senior Data Engineer, you are responsible for the design, development, testing and support of data pipelines that enable continuous data processing for data exploration, data preparation and real-time business analytics.

As a Senior Data Engineer, You Will

Design, architect, and build data pipelines and data lakes, and develop data products and features for integration with the existing cloud platform.
• Design and build cutting-edge, multi-microservice solutions to support Starbucks’s growth worldwide. Collaborate with development teams and other Starbucks Technology team developer leads to initiate process improvements for new and existing systems through data analytics.
• Drive deployment approach, including planning and execution, data conversion approach, script development and execution, warranty period, and transition of the solution to the platform’s operational context.
• Extract data from various database and perform exploratory data analysis including cleansing, processing, and aggregating data. Maintain high-performance and scalable data ingestion pipelines for analytics solutions. Maintain big data technologies, including Spark, Hive, and Hadoop.
• Ensure security and authorization practices are developed and followed to protect sensitive personal data. Employ scaling and automation to data preparation techniques.
• Perform requirements gathering and backlog refinement, lead shaping and guiding systems approach, drive project initiation, contribute to functional design, and lead technical design and development.
• Implement secure practices in building and deploying applications in cloud.
• Deploy recommended systems/platforms using container and microservice technologies.

Provide analytic support, including code documentation, data transformation, and algorithms to Starbucks Technology teams to implement analytic insights and recommendations into business processes.
• Work with data engineering team to build and support non-interactive (batch, distributed) and real-time, highly available data, data pipeline, and technology capabilities, including cluster analysis, network analysis and anomaly detection.
• Analyze, design, develop, and implement solutions in Oracle Exadata, Oracle Big Data Appliance (BDA), and Azure using Procedural Language/Structured Query Language (PLSQL), Spark, and relevant technologies (Python, Scala, Hadoop, Parquet).
• Architect and build large scale data pipelines using orchestration tools, including Airflow and Continuous Integration (CI) tools, including Jenkins, Hudson, Sonar, and Maven. Implement relevant cloud-based (Azure or AWS) technologies.
• Design and develop solutions to integrate disparate data sources into a consistent data product. Support system and integration testing activities, initiate design reviews for new applications, ensure they adhere to software development standards.

Work with infrastructure provisioning and configuration tools to develop scripts to automate deployment of physical and virtual environments and develop tools to monitor usage of virtual resources. Provide support and resolution for escalated software application issues.
• Code, test, debug, document, and implement complex software applications; and create more complex prototypes and ensure deliverables are high quality and meet user expectations.
• Initiate and lead root cause analysis efforts to identify and implement solutions to operational issues.
• Develop and maintain documentation related to all assigned systems and projects to support training, system administration, deployment, and operational processes and procedures. Lead and train partners in diagnosing, troubleshooting and remediating incidents and problems to support end-user community.
• Attend daily scrum meetings and provide updates on tasks assigned in the current sprint.

We’d Love To Hear From People With

Bachelor’s or Master’s degree in computer science, management information systems, or related discipline, or equivalent work experience

We are looking for strong hands-on knowledge in the following:
• Strong/expert-level Spark in a cloud environment, Azure preferred
• Hands-on data pipeline development and ingest patterns
• Strong knowledge of NoSQL database technologies
• Core understanding of distributed database systems
• Distributed analytical processing
• Python and/or Scala

Architect and design large scale high performance distributed systems (7-10 years)

SQL Platform (7-10 years)

NoSQL Platform (3+ years)

Spark (3+ years)

Data platform implementation on Azure or AWS (3+ years)

CI/CD experience (2+ years)

Exposure to SOA architecture (2+ years)

Join us and be part of something bigger. Apply today!


All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. We are committed to creating a diverse and welcoming workplace that includes partners with diverse backgrounds and experiences. We believe this enables us to better meet our mission and values while serving customers throughout our global communities. People of color, women, LGBTQIA+, veterans and persons with disabilities are encouraged to apply. Qualified applicants with criminal histories will be considered for employment in a manner consistent with all federal, state and local ordinances. Starbucks Corporation is committed to offering reasonable accommodations to job applicants with disabilities. If you need assistance or an accommodation due to a disability, please contact us at applicantaccommodation@starbucks.com.
Apply Here


Data Engineer II, Seller Growth Opportunities at Amazon

Location: Seattle

Job summary

The Seller Growth and Development organization owns the charter to increase profitability of sellers in Amazon. Our products are strategically important to the long term growth of Amazon’s consumer businesses, and our organization’s initiatives are highly visible to Amazon’s executive leadership.

Our larger team owns the growth recommendation platform that surfaces strategic growth recommendations on different Amazon programs and end-customer insights that sellers can leverage to increase their profitability.

We are seeking passionate data engineers to join the growth recommendations onboarding platform team. This is a great opportunity to build a large-scale data architecture that involves batch big data, high-TPS streaming data, and traditional data pipelines, as well as heuristics and machine learning.

As a data engineer on the team, you will work backwards from our customers’ business needs, interact with our product managers to shape our product vision, and design data solutions that solve business problems. You will work with state-of-the-art Big Data processing technologies, including Elasticsearch, Spark, Redshift, EMR and Hive, and orchestration solutions like MWAA (Airflow), and you will continuously innovate and simplify.

A successful candidate will have a data engineering background, excellent design and problem solving skills, and strong project management and communication skills. Excited? We are looking forward to you joining us!

A day in the life

The main charter of our team is creating/onboarding growth based recommendations for sellers using Amazon scale data. As a recommendation platform, our team also owns relevance and quality of the recommendations.

So, day-to-day activities of data engineers include designing and implementing optimized big data ETL pipelines to ingest recommendations. They are also responsible for coming up with business rules, heuristics or ML based solutions to increase relevance and quality of the recommendations to sellers in Amazon.

About the team

Our team’s vision is to increase the profitability of sellers on Amazon. Currently, we do this by coming up with recommendations on what actions sellers can take to increase their sales or save money on fees.

Our team is responsible for generating recommendations, onboarding recommendation data from partner teams, and making sure individual recommendations are high quality and relevant to sellers before ingesting them into our product.
• Bachelor’s Degree in Computer Science or related field;
• 3+ years industry experience as a Data Engineer.
• Strong practical understanding of Computer Science fundamentals;
• Strong analytical skills;
• Strong passion for delivering high-quality software.

• MS or PhD in Computer Science or equivalent
• 5+ years industry experience as a Data Engineer
• Experience with Big Data technologies such as Elastic Search, Spark, Amazon EMR, Amazon Redshift and Amazon Athena
• Experience building distributed software systems;
• Experience with AWS or other cloud computing platforms;
• Knowledge of professional software engineering practices;
• Strong focus on clean code and robust design.

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. Individuals with disabilities may request an accommodation.
Apply Here


Data Engineer at Optum

Location: Seattle

Optum is a company that’s on the rise. We’re expanding in multiple directions, across borders and, most of all, in the way we think. Here, innovation isn’t about another gadget, it’s about transforming the health care industry. Ready to make a difference? Make yourself at home with us and start doing your life’s best work.(sm)
Do you want to work on an awesome team of data scientists and data engineers? Do you love big data and the challenges it entails? Do you want to create applications that people love?
As a Data Engineer, you will work cross-functionally with data scientists, data analysts, product managers, and stakeholders to understand business needs and to develop, maintain and optimize the data sets, data models and large-scale data pipelines, primarily in the Azure Databricks Spark cloud stack, used for data science models and visualizations. You will partner with the Optum Technology team to drive best practices and set standards for data engineering patterns and optimization. You are a key influencer in data engineering strategy.
This is a unique, high-visibility opportunity for someone who wants to have business impact, dive deep into large-scale data pipelines and work closely with cross-functional teams. This position is fully remote and reports directly to the Director of Data Science.
Purpose of Position:
• Design and develop ETL/ELT solutions on Azure Databricks, Delta Lake and Spark to support OptumRx Digital MBO’s
• Develop, implement, and deploy large scale data pipelines empowering machine learning algorithms, insights generation, business intelligence dashboards, reporting and new data products
• Partner with Optum Technology to create and maintain the technical architecture of the Enterprise Delta Lake to consolidate data from many systems into a single source for machine learning and reporting analytics

You’ll enjoy the flexibility to telecommute* from anywhere within the U.S. as you take on some tough challenges.
Primary Responsibilities:
• Design, build, optimize, and manage modern large-scale data pipelines and ETL/ELT processing to support data integration for analytics, machine learning features and predictive modelling
• Consume data from a variety of sources (RDBMS, APIs, FTPs and other cloud storage) & formats (Excel, CSV, XML, JSON, Parquet, Unstructured)
• Write advanced / complex SQL with performance tuning and optimization
• Identify ways to improve data reliability, data integrity, system efficiency and quality
• Participate in architectural evolution of data engineering patterns, frameworks, systems, and platforms including defining best practices and standards for managing data collections and integration
• Work with data scientists to deploy machine learning models to real-time analytics systems
• Design and build data service APIs
• Mentor other data engineers and provide significant technical direction by teaching other data engineers how to leverage cloud data platforms

You’ll be rewarded and recognized for your performance in an environment that will challenge you and give you clear direction on what it takes to succeed in your role as well as provide development for other roles you may be interested in.
Required Qualifications:
• Bachelor’s degree in Computer Science, Engineering, Mathematics, Statistics, Economics or related discipline
• 2+ years of experience in data engineering, data integration, data modeling, data architecture, and ETL/ELT processes to provide quality data and analytics solutions
• 2+ years of experience in SQL with designing complex data schemas and query performance optimization
• 1+ years of experience in Apache Spark (PySpark / Spark SQL)
• 1+ years of experience in Python
• Full COVID-19 vaccination is an essential requirement of this role. Candidates located in states that mandate COVID-19 booster doses must also comply with those state requirements. UnitedHealth Group will adhere to all federal, state and local regulations as well as all client requirements and will obtain necessary proof of vaccination, and boosters when applicable, prior to employment to ensure compliance

Preferred Qualifications:
• Experience working with large data sets using Big Data frameworks (e.g., Hadoop/EMR/Databricks/Spark/Hive)
• Experience using Regular Expression, Rest API, NoSQL, Kafka, CI/CD technology, Git
• Experience with at least one of the following cloud platforms: (Azure, AWS or GCP)
• Extensive knowledge of data architecture principles (e.g., Data Lake, Databricks Delta Lake, Data Warehousing, etc.)
• Extensive knowledge of data modelling techniques including slowly changing dimensions, aggregation, partitioning and indexing strategies
• Excellent collaborator with experience working effectively with cross-functional teams such as leadership, product management and engineering, with a willingness to inspire other data engineers, data scientists and analysts
• Ability to independently troubleshoot and performance tune large scale enterprise systems
• Solid communication skills with the ability to communicate technical concepts to both technical and non-technical audiences

To protect the health and safety of our workforce, patients and communities we serve, UnitedHealth Group and its affiliate companies require all employees to disclose COVID-19 vaccination status prior to beginning employment. In addition, some roles and locations require full COVID-19 vaccination, including boosters, as an essential job function. UnitedHealth Group adheres to all federal, state and local COVID-19 vaccination regulations as well as all client COVID-19 vaccination requirements and will obtain the necessary information from candidates prior to employment to ensure compliance. Candidates must be able to perform all essential job functions with or without reasonable accommodation. Failure to meet the vaccination requirement may result in rescission of an employment offer or termination of employment.
Careers with OptumRx. We’re one of the largest and most innovative pharmacy benefits managers in the US, serving more than 12 million people nationwide. Here you’ll fill far more than prescriptions. As a member of one of our pharmacy teams, you’re empowered to be your best and do whatever it takes to help each customer. You’ll find unrivaled support and training as well as a wealth of growth and development opportunities driven by your performance and limited only by your imagination. Join us. There’s no better place to help people live healthier lives while doing your life’s best work.(sm)
Colorado, Connecticut or Nevada Residents Only: The salary range for Colorado residents is $66,100 to $118,300. The salary range for Connecticut / Nevada residents is $72,800 to $129,900. Pay is based on several factors including but not limited to education, work experience, certifications, etc. In addition to your salary, UnitedHealth Group offers benefits such as, a comprehensive benefits package, incentive and recognition programs, equity stock purchase and 401k contribution (all benefits are subject to eligibility requirements). No matter where or when you begin a career with UnitedHealth Group, you’ll find a far-reaching choice of benefits and incentives.
• All Telecommuters will be required to adhere to UnitedHealth Group’s Telecommuter Policy
Diversity creates a healthier atmosphere: UnitedHealth Group is an Equal Employment Opportunity/Affirmative Action employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin, protected veteran status, disability status, sexual orientation, gender identity or expression, marital status, genetic information, or any other characteristic protected by law.
UnitedHealth Group is a drug-free workplace. Candidates are required to pass a drug test before beginning employment.
Apply Here


Senior data engineer at The Walt Disney Company

Location: Seattle

Job ID: 972759BR | Location: Seattle, Washington, United States | Business: The Walt Disney Company (Corporate) | Date posted: Jun. 28, 2022 | Flex Type: Hybrid

This role is considered hybrid, which means the employee will work a portion of their time on-site from a Company designated location and the remainder of their time remotely.

Job Summary :

The Enterprise Cloud Data & Analytics team supports internal business data and insights needs across the entire Walt Disney Company enterprise, such as monthly supplier invoice validation, monthly internal chargeback, forecasting, and cost optimization.

We apply data engineering and business intelligence analysis to various data sources to generate business insights in the form of curated data.

Responsibilities :
• Develop and maintain automated data solutions that ingest, transform and generate insights in the form of curated data.
• Support and maintain the shared consumer data platform with regard to data organization, access, security and role appropriate access.
• Analyze and model data to design data solutions that provide business insights.
• Recognize patterns and opportunities to consolidate implementations and/or create standards
• Have a growth mindset and a desire to learn
• Be proactive and show initiative
• Participate actively in the team
• Provide clear and concise communication to technical and non-technical audiences
• Engage in constructive debate to advocate for the best ideas

Basic Qualifications :
• Years of experience in Data Engineering or related role
• SQL and Python experience
• Data analysis and data modeling
• Data Architecture
• Familiarity with Cloud concepts

Preferred Qualifications :

Data engineering on Cloud cost and usage data

Operating in Azure, GCP or AWS clouds

Required Education

BA / BS in Software / Data Science related area or equivalent experience

Last updated : 2022-09-05
Apply Here


Staff Data Engineer at Thermo Fisher Scientific

Location: Seattle

Title: Staff Data Engineer

Requisition ID: 216239BR

At Thermo Fisher Scientific, our work has a purpose. Our work requires passion and builds meaningful outcomes – our work matters. We share our expertise and technological advancements with customers, helping them make the world a better place. Whether they’re discovering a cure for cancer, protecting the environment, or ensuring our food is safe.

Our people share a common set of values – Integrity, Intensity, Innovation, and Involvement. We work together to accelerate research, tackle complex analytical challenges, improve patient diagnostics, drive innovation, and increase laboratory productivity. Each one of us contributes to our mission every day to enable our customers to make the world healthier, cleaner, and safer.

Location/Division Specific Information

On-site (anywhere in the United States), Hybrid, or Remote.

Discover Impactful Work

As part of the Machine Learning Operations (MLOps) team within R&D AI Engineering, you will be responsible for owning data lakes, pipelines, data management on-premises and on-cloud (AWS environment) in a fast-paced environment. You will work on sophisticated problems that improve our end users’ experiences, drive growth in a multifaceted company, and use analytics and MLOps technologies. Your goal is to create solutions enabling the development and deployment of AI/ML-enabled products in the cloud and onto edge devices.

A day in the Life

You are a hands-on data scientist/engineer/architect who wants to make a difference!
• Work with the MLOps team to develop and maintain scalable MLOps frameworks and DataOps tools that can be integrated into ML platforms for R&D data science and AI Engineering teams
• Collaborate with biologists, chemists, experimentalists, and data scientists in other scientific divisions to support their R&D workflows and onboard MLOps frameworks
• Develop data pipelines and manage data ingestion, data transformation, data analysis, data querying, data visualization, modeling, and deployment
• Build systems that integrate existing data lakes and align with other corporate and R&D data lakes

Keys to Success

• Bachelor's degree in computer science, computer engineering, information systems, or a related field.

Minimum Qualifications You Must Have:
• 5+ years of experience in ETL/data engineering, including cataloging, enrichment, exploration, management, processing, validation, and visualization
• Comfortable with Linux, shell scripting, C/C++, and Python
• Experience with AWS purpose-built databases, including S3, RDS, DynamoDB, Redshift, and Database Migration Service
• Experience with structured/unstructured datasets and data store tools
• Excellent oral and written communication skills to present technical information to both business and technology teams with clarity and precision
• Resourcefulness, creativity, excellent interpersonal skills, attention to detail, and the ability to think critically and solve problems that are in line with business objectives and strategic vision

Preferred Qualifications to Make You Stand Out from the Crowd:
• Experience in a wide variety of data formats, including Parquet, Avro, and Protocol Buffers
• Knowledge of vendor-neutral data lakes, data pipelines, and data stores, and experience with Databricks and Snowflake for specific business use cases
• Experience with Apache Spark and Apache Hadoop
• Experience with AWS tools, including Athena, Step Function, CloudFormation, and Kinesis
• Experience with ML compute and ML model management platforms
• Knowledge of ML model development tools such as Keras, PyTorch, TensorFlow, and Jupyter
• Experience with Scala and Golang
• Excellent grasp of software practices in an agile development environment

At Thermo Fisher Scientific, each one of our 65,000 extraordinary minds has a unique story to tell. Join us and contribute to our singular mission: enabling our customers to make the world healthier, cleaner and safer.

Apply today!

Thermo Fisher Scientific is an EEO/Affirmative Action Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status.

Apply Here


Data Engineer – Data Platform, Scala / Java – Apple Media Products (AMP) at Apple

Location: Seattle

Summary

Posted: Mar 28, 2022
Role Number: 200360549

The Data Platform team is looking for enthusiastic engineers to help build the next generation of our data platforms, pipelines and services. This is a phenomenal opportunity to work within Apple Media Products, the team behind the App Store, Apple Music, Apple TV, and many other high-profile products. We are looking for backend engineers with Big Data experience who enjoy working on problems at scale. Essential experience includes building data platform infrastructure using technologies such as Spark, Hadoop, Hive and Kafka, as well as proficiency in Scala or Java. We offer a flexible, family-friendly working environment and a wide range of competitive benefits. You will be encouraged and supported to do your best work and to have fun doing so. If this sounds exciting to you, we’d love to hear from you!

Key Qualifications
• Experience in implementing and supporting highly scalable data systems and services written in Scala or Java.
• A strong knowledge of data structures and algorithms.
• Experience using data metastores, e.g. Apache Hive, and integrating them with Spark and other environments.
• Understanding of serialization formats and schema evolution.
• Experience with Big Data file and table formats e.g. Parquet and Iceberg.
• In-depth knowledge of one or more Big Data systems, including their internal implementation. For example: a NoSQL store, distributed message queue, or search/indexing product.
• Experience with key/value stores, relational databases and Solr/Lucene/Elasticsearch.
• Building and supporting APIs for engineers. Versioning and compatibility, wire formats, HTTP frameworks, authentication, tracing and monitoring.
• Communicating with users and driving adoption. Troubleshooting and diagnosing issues, advising on integrations and migrations.
• Some experience building user interfaces.
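The serialization and schema-evolution bullet above deserves a concrete picture. As a hedged illustration (the field names and defaults here are invented, not Apple's), the core idea is that readers apply defaults for fields added in newer schema versions, which is how Avro-style formats let old data remain readable:

```python
import json

# Sketch of schema evolution: schema v2 added a "region" field, so records
# written under v1 lack it. Readers fill in the declared default.
# All names here are illustrative, not from any real system.
SCHEMA_V2_DEFAULTS = {"region": "unknown"}

def read_record(raw: str) -> dict:
    """Parse a JSON-encoded record, applying v2 defaults for missing fields."""
    rec = json.loads(raw)
    return {**SCHEMA_V2_DEFAULTS, **rec}

old = read_record('{"user_id": 1}')                   # written under schema v1
new = read_record('{"user_id": 2, "region": "eu"}')   # written under schema v2
print(old["region"], new["region"])  # unknown eu
```

Real formats such as Avro encode these defaults in the schema itself rather than in reader code, but the resolution rule is the same.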

We provide platforms, services, tools and datasets for use by engineering teams within Apple. We cover batch, real-time and near-real-time requirements, handling petabytes of data and ultimately supporting hundreds of millions of customers. In this role you will build infrastructure to drive these highly visible global-scale systems. You will work with attention to usability, performance, scalability and high availability of your systems. You will support other engineering teams as they adopt these systems for their own products. If you are a developer with Big Data experience looking for your next challenge, we would love to hear from you.

Education & Experience
BS degree in Computer Science or a related field

Additional Requirements
Apply Here
For Remote Data Engineer – Data Platform, Scala / Java – Apple Media Products (AMP) roles, visit Remote Data Engineer – Data Platform, Scala / Java – Apple Media Products (AMP) Roles


Data Engineer, University Graduate (Data BP)- 2023 Start (BS/MS) at TikTok

Location: Seattle


TikTok is the leading destination for short-form mobile video. Our mission is to inspire creativity and bring joy. TikTok has global offices including Los Angeles, New York, London, Paris, Berlin, Dubai, Singapore, Jakarta, Seoul and Tokyo.

Why Join Us
At TikTok, our people are humble, intelligent, compassionate and creative. We create to inspire – for you, for us, and for more than 1 billion users on our platform. We lead with curiosity and aim for the highest, never shying away from taking calculated risks and embracing ambiguity as it comes. Here, the opportunities are limitless for those who dare to pursue bold ideas that exist just beyond the boundary of possibility. Join us and make impact happen with a career at TikTok.

Team Introduction
The Data Platform team works on challenges in data infrastructure and data products. Our team focuses on internal tools relating to query engines, logging and data ingestion infrastructure, A/B testing for all product and feature launches, workflow management, distributed cache, data visualization tools, and big data engineering, which supports TikTok’s business products.

We are looking for talented individuals to join our team in 2023. As a graduate, you will get unparalleled opportunities to kickstart your career, pursue bold ideas and explore limitless growth opportunities. Co-create a future driven by your inspiration with TikTok.

Successful candidates must be able to commit to one of the following start dates below:
1. January 16, 2023
2. February 6, 2023
3. March 6, 2023
4. May 22, 2023
5. June 12, 2023
6. July 17, 2023
7. August 14, 2023
We will prioritize candidates who are able to commit to these start dates. Please state your availability and graduation date clearly in your resume.

Application deadline: February 15th, 2023
Candidates can apply to a maximum of two positions and will be considered for jobs in the order you apply. The application limit is applicable to TikTok and its affiliates’ jobs globally. Applications will be reviewed on a rolling basis – we encourage you to apply early.

Technical Assessment
Candidates who pass resume evaluation will be invited to participate in TikTok’s technical online assessment on HackerRank.

Responsibilities:
• Design and build data transformations efficiently and reliably for different purposes (e.g. reporting, growth analysis, and multi-dimensional analysis).
• Design and implement reliable, scalable, robust and extensible big data systems that support core products and business.
• Establish solid design and best engineering practices for both engineers and non-technical stakeholders.
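As a rough illustration of the first responsibility, a reporting transformation boils down to grouping raw events on some dimensions and aggregating a metric. This is a minimal pure-Python sketch with invented event fields; in practice this kind of job would run on Hive or Spark:

```python
from collections import defaultdict

# Hypothetical raw events; in a real pipeline these would come from a
# Hive/Spark table, not an in-memory list.
events = [
    {"date": "2023-01-16", "country": "US", "watch_time_sec": 120},
    {"date": "2023-01-16", "country": "US", "watch_time_sec": 45},
    {"date": "2023-01-16", "country": "JP", "watch_time_sec": 300},
    {"date": "2023-01-17", "country": "US", "watch_time_sec": 60},
]

def daily_watch_time_by_country(records):
    """Aggregate raw events into a (date, country) -> total watch time report."""
    totals = defaultdict(int)
    for r in records:
        totals[(r["date"], r["country"])] += r["watch_time_sec"]
    return dict(totals)

report = daily_watch_time_by_country(events)
print(report[("2023-01-16", "US")])  # 165
```

The same group-by/sum shape maps directly onto a `GROUP BY` in SQL or a `groupBy().agg()` in Spark.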


Minimum Qualifications:
• Bachelor’s or Master’s degree in Computer Science or a related field, or equivalent practical experience.
• Final year or recent graduate with a background in Software Development, Computer Science, Computer Engineering, or a related technical discipline.
• Experience with coding in Java, Python, Scala, SQL.
• Experience with Big Data technologies (Hive, Spark, etc.).
• Experience with performing data analysis, data ingestion and data integration.
• Experience with ETL (Extraction, Transformation & Loading) and architecting data systems.
• Experience with schema design and data modeling.
• Experience in writing, analyzing and debugging SQL queries.
• Basic understanding of various Big Data technologies.
• Solid communication and collaboration skills.
• Passionate and self-motivated about technologies in the Big Data area.
• Must obtain work authorization in country of employment at the time of hire, and maintain ongoing work authorization during employment.

Preferred Qualifications:
• Demonstrated data science or quantitative analysis experience from previous internship, work experience, etc.
• Demonstrated experience handling terabyte size datasets, applying statistics and machine learning techniques and algorithms, using visualization tools to present data.
• Demonstrated success in leading data-driven projects from definition to execution, from defining metrics to communicating actionable insights.
• Understanding of deep learning, or distributed computing (Hive/Hadoop).
• Familiar with large data processing/storage tools such as Kafka, Flink, Spark, Elasticsearch, Redis, HBase, Cassandra, Druid, etc.

TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

TikTok is committed to providing reasonable accommodations during our recruitment process. If you need assistance or an accommodation, please reach out to us at Accommodations-AMS@tiktok.com.

By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://careers.tiktok.com/legal/privacy.
Apply Here
For Remote Data Engineer, University Graduate (Data BP)- 2023 Start (BS/MS) roles, visit Remote Data Engineer, University Graduate (Data BP)- 2023 Start (BS/MS) Roles


Lead Data Engineer at The Walt Disney Company

Location: Seattle

Disney Media & Entertainment Distribution (DMED) Technology creates products, platforms, and innovations for the DMED Segment and the Walt Disney Company by driving the strategic development and use of technology, building scalable systems and products to empower our businesses and engage consumers. The DMED Technology organization enables enterprise-wide consumer technology and data capability to deliver on the promise of more direct consumer relationships, personalized product experiences, and more active engagement with content and advertising. With global scale, local presence, and deep technological excellence, DMED Technology helps DMED and The Walt Disney Company optimize technology platforms and resources, bring creative ideas to life, and create industry-shaping approaches.
Consumer Experiences & Platforms brings together our engineering and product teams to develop, create, enhance, and grow a DMED Technology digital product portfolio that reaches more than 380 million consumers worldwide each month.
The CXP team leads efforts to deliver the most relevant experiences in real time based on the consumer, content, and context wherever they engage. The team also enables rapid A/B testing at scale to expand learnings and fuel data-driven decisions, driving visibility of experiments, results and insights while providing tools to manage requests for new ideas.
Our team is…
• A group of engineers and data scientists with diverse expertise delivering solutions together.
• Collaborative and dynamic.
• Embracing agile practices.
• Using continuous integration/automated testing.
• Led by startup veterans.
You will…
• Create and maintain optimal data pipeline architecture
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
• Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
• Design, develop, test, deploy, maintain and improve software
• Participate in the design and implementation of core Platforms and Content Distribution systems
• Collaborate with internal & external teams to define requirements and delivery schedules for projects
• Design and deliver high quality code for small to medium size projects and make critical contributions working with others on larger projects
• Work with the team to iteratively improve development practices and processes
• Build strong relationships with the team while collectively finding opportunities for improvements around quality and automation
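The pipeline bullets above (extraction, transformation, loading from a variety of sources) can be sketched end to end in miniature. This is a hedged toy example using an in-memory SQLite database; the table, columns, and event names are all illustrative, not from any Disney system:

```python
import sqlite3

# Extract: raw events as they might arrive from an upstream source.
raw_rows = [
    ("2022-09-01", "play", 3),
    ("2022-09-01", "play", 5),
    ("2022-09-01", "pause", 1),
]

# Transform: keep only 'play' events for this hypothetical report.
cleaned = [r for r in raw_rows if r[1] == "play"]

# Load: write the cleaned rows into the warehouse (SQLite standing in here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day TEXT, action TEXT, count INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", cleaned)
conn.commit()

# Downstream query, e.g. feeding a daily report.
total = conn.execute(
    "SELECT SUM(count) FROM events WHERE day = ?", ("2022-09-01",)
).fetchone()[0]
print(total)  # 8
```

In production the same three stages would typically be orchestrated by a scheduler such as Airflow, with Snowflake or PostgreSQL in place of SQLite, as the listing's technology list suggests.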
You have…
• 7+ years of relevant work experience
• Advanced SQL knowledge: experience with relational databases and query authoring, plus working familiarity with a variety of databases.
• Experience building and optimizing data pipelines, architectures and data sets.
• Robust programming skills and strong experience with Python
• Experience with the following technologies is a plus: Snowflake, AWS, Kafka, Airflow, PostgreSQL, Spark
• Exposure to full lifecycle of application development, including practices like continuous integration, unit testing, code reviews, documentation, etc.
• Interest in industry trends on new technologies, best practices and solutions. A passion for innovation and raising the bar in all development aspects.
• Proven ability to work on a diverse scope of software projects requiring strong attention to detail and creative problem solving.
• Passion for software quality and for advancing testing as an engineering discipline
• Required education: Bachelor’s degree in MIS, Computer Science or a related discipline, or equivalent work experience
Apply Here
For Remote Lead Data Engineer roles, visit Remote Lead Data Engineer Roles


Staff Data Engineer at Handshake

Location: Seattle

Everyone is welcome at Handshake. We know diverse teams build better products and we are committed to creating an inclusive culture built on a foundation of respect for all individuals. We strongly encourage candidates from non-traditional backgrounds, historically marginalized or underrepresented groups to apply.

If you are not sure that you’re 100% qualified, but up for the challenge – we want you to apply. We believe skills are transferable and passion for our mission goes a long way.

Your Impact:

Handshake is building a diverse team of dynamic engineers who value creating a high quality, high impact product. You will use your technical knowledge to drive the architecture, implementation and evolution of the data platform we are currently developing. You will also be working with product teams helping millions of students find meaningful careers regardless of where they go to school, who their parents know, or how much money they have.

We’re focused on building a data platform that will ensure all teams are capable of creating data-driven features, and all sides of the business have access to the right data to make the correct decisions.
Your Role:
• Working closely with product managers and product engineers to ship data-driven features
• Collaborating with engineering teams to best expose data in a consumer product
• Closely interact with analytics engineers, data scientists, analysts, and infrastructure teams to design pipelines and warehousing solutions
• Design, implement and build data solutions that deliver data with measurable quality under the SLA from ingestion to consumption.
Your Experience:
• You have a consistent track record of designing and implementing scalable, robust data pipelines, data services and data products.
• You have a solid understanding of microservices and experience exposing data in a consumer product.
• You have experience working with tracking systems, click streams and session analytics.
• You have experience providing technical leadership and mentoring other engineers for best practices on data engineering.
• You are proud of your craft, and enjoy and value clean code that scales to keep large teams productive.
• You have the ability to navigate between big-picture and implementation details.
Compensation Range

$232,772 – $258,636

For cash compensation, we set standard ranges for all roles based on function, level, and geographic location, benchmarked against similar stage growth companies. In order to be compliant with local legislation, as well as to provide greater transparency to candidates, we share salary ranges on all job postings regardless of desired hiring location. Final offer amounts are determined by multiple factors including geographic location as well as candidate experience and expertise, and may vary from the amounts listed above.
About Us:

Handshake is the #1 place to launch a career with no connections, experience, or luck required. The platform connects up-and-coming talent with 650,000+ employers – from Fortune 500 companies like Google, Nike, and Target to thousands of public school districts, healthcare systems, and nonprofits. Earlier this year, we announced our $200M Series F funding round. This Series F fundraise and new valuation of $3.5B will fuel Handshake’s next phase of growth and propel our mission to help more people start, restart, and jumpstart their careers.

When it comes to our workforce strategy, we’ve thought deeply about how work-life should look here at Handshake. With our Hub-Based Remote Working strategy, employees can enjoy the flexibility of remote work whilst ensuring that collaboration and team experiences in a shared space remain possible. Handshake is headquartered in San Francisco with offices in Denver, New York, and London and teammates working globally. So, whether you live on the coasts, in the Midwest, or overseas, chances are we have a hub near you offering the best of both worlds.

Check out our careers site to find a hub near you
What We Offer:

At Handshake, we’ll give you the tools to feel healthy, happy and secure.
• Equity and ownership in a fast-growing company.
• 16 Weeks of paid parental leave for birth giving parents & 10 weeks of paid parental leave for non-birth giving parents.
• Comprehensive medical, dental, and vision policies including LGBTQ+ Coverage. We also provide resources for Mental Health Assistance, Employee Assistance Programs and counseling support.
• Handshake offers $500/£360 home office stipend for you to spend during your first 3 months to create a productive and comfortable workspace at home.
• Generous learning & development opportunities and an annual $2,000/£1,500 stipend for you to grow your skills and career.

(US Handshakers)
• 401k Match: Handshake offers a dollar-for-dollar match on 1% of deferred salary, up to a maximum of $1,200 per year.
• All full-time US-based Handshakers are eligible for our flexible time off policy to get out and see the world. In addition, we offer 8 standardized holidays, and 2 additional days of flexible holiday time off. Lastly, we have a Summer and Winter #ShakeBreak, two one-week periods of Collective Time Off.

(UK Handshakers)
• Pension: Handshake matches 3% of your salary towards your pension scheme.
• Up to 25 days of vacation to encourage people to reset, recharge, and refresh, in addition to 8 bank holidays throughout the year.

Benefits above apply to employees in full-time positions.

Looking for more? Explore our mission, values and comprehensive US benefits at
Apply Here
For Remote Staff Data Engineer roles, visit Remote Staff Data Engineer Roles


The Tech Career Guru