Full-time Data Engineer openings in Chicago, United States on September 14, 2022

Data Engineer at Boeing

Location: Chicago

Job Description

At Boeing, we innovate and collaborate to make the world a better place. From the seabed to outer space, you can contribute to work that matters with a company where diversity, equity and inclusion are shared values. We’re committed to fostering an environment for every teammate that’s welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.

Boeing Data Solutions Service provides multiple products and offerings that give application teams the flexibility and agility to meet their specific application development and security requirements by securing and enabling business data. The team is currently looking for a Data Engineer to join the Cloud Capabilities Data Warehouse services team. Position Responsibilities: duties will include (but are not limited to):
• Create data pipelines, big data platforms, and data integrations in databases, data warehouses, and data lakes, working with various cloud and on-premises technologies
• Drive initiatives to operationalize cloud computing, starting with Google Cloud Platform, applying standards across all platform implementations for the enterprise to enable best-of-breed, fit-for-purpose analytical solutions and data strategy
• Contribute to the evolving systems architecture to meet changing requirements for scaling, reliability, performance, manageability, security compliance, and cost
• Work closely with product managers, database administrators, and database architects to define technical product requirements, and collaborate within an agile team to drive user story/task creation along with design and development activities
• Automate using DevSecOps and other available tools to expedite product development while maintaining first-time quality
• Participate in group sessions within the developer community and share knowledge
• Mentor junior team members and contribute to creating and maintaining best coding practices, with oversight through stringent reviews
• Effectively resolve problems and roadblocks as they occur, consistently following through on details while driving innovation as well as issue resolution
• Monitor the implementation of architecture throughout the system development lifecycle, seeking and providing clarification when needed
• Work with cross-functional teams spread across multiple products and locations within Boeing and with external partners, across different cultures and time zones
This position has been identified as a virtual opportunity and does not require applicants to live in the Seattle, WA area or any of the listed locations. The position must meet Export Control compliance requirements; therefore a “US Person” as defined by 22 C.F.R. 120.15 is required. “US Person” includes US citizens, lawful permanent residents, refugees, and asylees.
Basic Qualifications (Required Skills / Experience):
• 2+ years of experience with cloud development and technologies; a particular focus on Google Cloud technologies is a plus, though Azure and AWS knowledge is also helpful
• 3+ years of software development experience with Agile methodology (ADO, Jira, source code repositories), CI/CD tools, and testing and automation
• 3+ years of experience or familiarity with database management technologies, including database design and development using SQL/NoSQL/columnar databases
• 3+ years of experience with data warehousing and building ETL pipelines
• 3+ years of experience in Windows and UNIX/Linux operating systems, with scripting expertise
• 3+ years of coding experience in any programming language (such as Java or Python)
• 1+ years of knowledge of big data solutions, data integration, and data analytics (Spark, Kafka, Watson)
Preferred Qualifications (Desired Skills / Experience):
• 1+ years of experience with Google Cloud Platform (GCP) products including BigQuery, Cloud Storage, Cloud Functions, Dataproc, and Data Studio
• 1+ years of familiarity with containers and container orchestration platforms (Docker/Kubernetes/OpenShift)
• Experience with DevSecOps tools (such as Git, Jenkins, Azure DevOps, etc.)
• Configuration control with Ansible: infrastructure as code
• Familiarity with distributed systems and computing at scale
• Programming skill sets in Scala, Go, Java, JavaScript, Python, Groovy
• Experience working in a diverse organization and the ability to work with partners within Boeing and outside, across different cultures and time zones
• Experience working with multiple file structures (e.g., ORC, flat files, JSON)
• Excellent oral and written communication skills with team members, peers, customers, and management
Typical Education & Experience: education/experience typically acquired through advanced technical education (e.g., a bachelor's degree) and typically 9 or more years of related work experience, or an equivalent combination of technical education and experience (e.g., PhD + 4 years' related work experience, master's + 7 years' related work experience, 13 years' related work experience, etc.).
Relocation: Relocation assistance is not a negotiable benefit for this position.

Drug Free Workplace: Boeing is a Drug Free Workplace where post-offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria are met as outlined in our policies.

Equal Opportunity Employer:

Boeing is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
Apply Here
For Remote Data Engineer roles, visit Remote Data Engineer Roles

********

Data Engineer (Remote) at Epsilon

Location: Chicago

Company Description

Epsilon® is an all-encompassing global marketing innovator, supporting 15 of the top 20 global brands. We provide unrivaled data intelligence and customer insights, world-class technology including loyalty, email, and CRM platforms and data-driven creative, activation and execution. Epsilon’s digital media arm, Conversant®, is a leader in personalized digital advertising and insights through its proprietary technology and trove of consumer marketing data, delivering digital marketing with unprecedented scale, accuracy and reach through personalized media programs and through CJ Affiliate by Conversant®, one of the world’s largest affiliate marketing networks.

Together, we bring personalized marketing to consumers across offline and online channels, at moments of interest, that help drive business growth for brands. Recognized by Ad Age as the #1 World’s Largest CRM/Direct Marketing Agency Network, #1 Largest U.S. Agency from All Disciplines, #1 Largest U.S. CRM/Direct Marketing Agency Network and #1 Largest U.S. Mobile Marketing Agency, Epsilon employs over 8,000 associates in 70 offices worldwide. Epsilon is part of Alliance Data®, a Fortune 500 and Fortune 100 Best Places to Work For company.

For more information, visit epsilon.com. Follow us on Twitter at @EpsilonMktg.

Job Description

Love cutting-edge tech? We do too.

At Epsilon, we do more than collect and store data, and we might be the most important Internet company you’ve never heard of. Join our team for your chance to work in the digital marketing space and solve meaningful problems on a massive scale, and have fun doing it.

We are looking for a Data Engineer with extensive experience in the big data and data warehouse ecosystems. The Epsilon DMS Data Organization builds and maintains MPP, HDFS and Elastic platforms to capture all data generated from the Ad Stack and external sources to support all aspects of the business.

The candidate must be proficient in SQL and Python, with the ability to be a key contributor in delivering critical business features. Expert SQL coding skills are needed to ingest, transform, and analyze client data delivered to the Epsilon DMS Data Organization. The role also involves managing critical data anomaly detection, integrity checks, and overall data quality processing within Epsilon, so Python coding skills are required alongside complex SQL for the ingestion, transformation, and analysis of large datasets.
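To ground the kind of data-quality processing described above, here is a minimal Python sketch of automated integrity checks, assuming pandas; the file name, column names, and thresholds are illustrative assumptions, not Epsilon's actual tooling:

```python
# A minimal data-quality sketch; the input file, columns, and limits below
# are hypothetical stand-ins, not part of any real Epsilon pipeline.
import pandas as pd

df = pd.read_csv("client_feed.csv")  # hypothetical client delivery

checks = {
    "no_null_ids": df["customer_id"].notna().all(),
    "ids_unique": df["customer_id"].is_unique,
    "amounts_non_negative": (df["order_amount"] >= 0).all(),
    # Simple anomaly flag: row count should stay within a sane bound
    # (a stand-in for a real historical baseline).
    "row_count_sane": 0 < len(df) < 3_000_000,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"data quality checks failed: {failed}")
print("all data quality checks passed")
```

In practice, checks like these would run inside a scheduled pipeline and feed the anomaly-detection and alerting processes the role owns.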

What You Will Be Doing:
• Analyze client files via Linux commands using the command line interface (CLI)
• Build and maintain data quality services with Python
• Develop complex SQL queries, data aggregations, data modeling, and business intelligence
• Continuous improvement of our system, tests, and data quality indicators
• Influence our technical decisions
• Keep yourself informed and up-to-date with technologies
• Interface with client integration engineers, account managers, analysts, and engineers to enable data-oriented solutions
• Be a key contributor in delivering critical business features with a passion for big data technologies
• Build data expertise on subject matter and be able to speak to data warehouse constructs and data architecture
• Troubleshoot production issues and solve for performance bottlenecks
• Analyze data and identify business possibilities for better operational processes and business opportunities

About You:
• Able to do your best work in a team setting and autonomously
• Well-developed interpersonal skills.
• Owns a problem to the end
• Proud to share in team’s success
• Wants to grow a career with a great company.

What you’ll bring:
• Bachelor’s Degree in Computer Science or equivalent degree is required.
• 4-6 years’ experience on a development team manipulating data, with a bachelor’s degree or equivalent experience in technology development
• Fluent in SQL, with the ability to handle complex ingestion use cases
• Prior professional experience with Python or other similar languages
• Excellent communication skills and ability to work with the internal analyst community
• Ability to thrive in a collaborative team environment
• Ability to understand complex SQL and highly complex Python code is critical.
• Having knowledge of Linux commands (via command line interface) is also required.
• Ability to develop and test code and ingest use case specifications with little supervision.
• You enjoy working with numerous programming languages, relational databases, and distributed systems. Our platform is ever evolving, but currently is a combination of SQL, Kafka, Flume, Spark, Scala, Java, Python, NoSQL (HBase, Cassandra and ScyllaDB), MPP RDBMS, Postgres, Hadoop, AWS, Airflow, Docker, Kubernetes, and Elastic
• Internet/Digital Advertising ecosystem knowledge is a plus

Additional Information

When you’re one of us, you get to run with the best. For decades, we’ve been helping marketers from the world’s top brands personalize experiences for millions of people with our cutting-edge technology, solutions and services. Epsilon’s best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents of proprietary technology, including Real Time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world. Check out a few of these resources to learn more about what makes Epsilon so EPIC:
• Culture: https://www.epsilon.com/us/about-us/our-culture-epsilon
• DE&I: https://www.epsilon.com/us/about-us/diversity-equity-inclusion
• CSR: https://www.epsilon.com/us/about-us/corporate-social-responsibility
• Life at Epsilon: https://www.epsilon.com/us/about-us/epic-blog

Great People Deserve Great Benefits

We know that we have some of the brightest and most talented associates in the world, and we believe in rewarding them accordingly. If you work here, expect competitive pay, comprehensive health coverage, and endless opportunities to advance your career.

Epsilon is an Equal Opportunity Employer. Epsilon’s policy is not to discriminate against any applicant or employee based on actual or perceived race, age, sex or gender (including pregnancy), marital status, national origin, ancestry, citizenship status, mental or physical disability, religion, creed, color, sexual orientation, gender identity or expression (including transgender status), veteran status, genetic information, or any other characteristic protected by applicable federal, state or local law. Epsilon also prohibits harassment of applicants and employees based on any of these protected categories.

Epsilon will provide accommodations to applicants needing accommodations to complete the application process.

For San Francisco Bay and Los Angeles Areas: Epsilon will consider for employment qualified applicants with criminal histories in a manner consistent with the City of Los Angeles’ Fair Chance Initiative for Hiring Ordinance and San Francisco Police Code Sections 4901-4919, commonly referred to as the San Francisco Fair Chance Ordinance.

Applicants with criminal histories are welcome to apply.

REF170052L
Apply Here
For Remote Data Engineer (Remote) roles, visit Remote Data Engineer (Remote) Roles

********

Data Engineer at Peyton Resource Group

Location: Chicago

Contract to Hire

As a Data Engineer, you will be part of a team that designs and develops our enterprise Data Warehouse and Analytic systems.

Responsibilities and Duties:
• Work with business customers and data analysts to define detailed requirements from broader business challenges.
• Translate those requirements to logical and physical models that satisfy analytical needs.
• Perform data profiling and analysis to assess data quality patterns, recommend data cleansing rules, conforming data standard rules and matching algorithms.
• Own the system architecture and infrastructure of our Google Cloud data warehouse and other related GCP services.
• Assist ETL and BI developers with complex query tuning and schema refinement.
• Understand end to end data interactions and dependencies across complex data pipelines and data transformation and how they impact business decisions.
• Design best practices for big data processing, data modeling and warehouse development throughout the company.
• Familiarity with REST for accessing cloud-based services.
• Experience running Agile methodology and applying Agile to data engineering.
• Experience with Java, JDBC, and the AWS SDK. Nice to have: familiarity with the AWS ecosystem, including RDS, Glue, Athena, etc.

Minimum Qualifications:
• Bachelor’s degree in computer science, business administration or related field required.
• 3+ years’ experience in the development of data warehouse, decision support, or executive information systems
• Expert in writing, analyzing and tuning complex SQL queries
• Extensive experience with Python programming
• Hands-on experience in any Cloud environment preferably GCP
• Experience with big data architectures and data modeling to efficiently process large volumes of data.
• Background in ETL and data processing, know how to transform data to meet business goals.
• Good understanding of Object-Oriented programming
• Must have a good understanding of Dimensional Modeling and ER-Modeling.
• Experience with data pipeline and workflow management tools: Airflow, Dataflow, etc. (a minimal Airflow sketch follows this list)
• Deep understanding of modern computing technology, systems architecture, data security, and cloud-based services
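As a reference point for the workflow-management requirement above, here is a minimal Airflow sketch, assuming Airflow 2.x; the DAG name, schedule, and task body are hypothetical, not from the posting:

```python
# A minimal, hypothetical Airflow DAG: one daily task standing in for a
# real extract/transform/load job.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder body: a real task would pull source rows and load them
    # into the warehouse.
    print("extracting source data and loading the warehouse")


with DAG(
    dag_id="daily_warehouse_load",   # hypothetical DAG name
    start_date=datetime(2022, 9, 1),
    schedule_interval="@daily",      # run once per day
    catchup=False,                   # skip backfilling missed runs
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```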
Apply Here
For Remote Data Engineer roles, visit Remote Data Engineer Roles

********

Data Engineer at Compunnel Inc.

Location: Chicago

Description

The Senior Data Engineer will be responsible for writing SQL, Python and PySpark scripts to be used in API calls to pull data from multiple disparate systems and databases.

The source data may include analytics system data (Google Analytics, Adobe Analytics), 3rd-party systems, CSV files, RSS feeds, etc.

This individual would also assist with cleaning up the data so it’s in a readily accessible format for the BI systems.

The Senior Data Engineer will contribute expertise, embrace emerging trends and provide overall guidance on best practices across all of Customer business and technology groups.

The position will require the ability to multitask and work independently, as well as work collaboratively with teams, some of which may be geographically distributed.

Required Experience: building ETL data pipelines using SQL and Python for API calls to retrieve data.

Expertise in Google BigQuery, PySpark, SQL, and relational databases (PostgreSQL, MySQL).
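As an illustration of the API-to-BI workflow described in this posting, here is a brief Python sketch, assuming the requests and pandas libraries; the endpoint URL, response fields, and output file are hypothetical:

```python
# Hypothetical pull: call a REST API, normalize the JSON payload with
# pandas, and write a clean CSV for the BI layer.
import pandas as pd
import requests

API_URL = "https://api.example.com/v1/sessions"  # placeholder endpoint

response = requests.get(API_URL, params={"date": "2022-09-14"}, timeout=30)
response.raise_for_status()

# Flatten the JSON records into a tabular frame.
df = pd.json_normalize(response.json()["records"])

# Light cleanup so the BI systems get consistent, de-duplicated rows.
df["session_date"] = pd.to_datetime(df["session_date"])
df = df.drop_duplicates(subset="session_id")

df.to_csv("sessions_2022-09-14.csv", index=False)
```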

Education: Bachelor’s degree
Apply Here
For Remote Data Engineer roles, visit Remote Data Engineer Roles

********

Manager Data Engineering at Publicis Groupe

Location: Chicago

Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.

Job Description

Publicis Sapient is looking for a Manager to join our team of bright thinkers and doers. You will team up with top-notch technologists to enable real business outcomes for our enterprise clients by translating their needs into transformative solutions that provide valuable insight. Working with the latest data technologies in the industry, you will be instrumental in helping the world’s most established brands evolve for a more digital future.

Your Impact:
• Work closely with our clients providing evaluation and recommendations of design patterns and solutions for data platforms with a focus on ETL, ELT, ALT, lambda, and kappa architectures
• Define SLAs, SLIs, and SLOs with inputs from clients, product owners, and engineers to deliver data-driven interactive experiences
• Provide expertise, proof-of-concept, prototype, and reference implementations of architectural solutions for cloud, on-prem, hybrid, and edge-based data platforms
• Provide technical inputs to agile processes, such as epic, story, and task definition to resolve issues and remove barriers throughout the lifecycle of client engagements
• Creation and maintenance of infrastructure-as-code for cloud, on-prem, and hybrid environments using tools such as Terraform, CloudFormation, Azure Resource Manager, Helm, and Google Cloud Deployment Manager
• Mentor, support and manage team members

Qualifications

Your Skills And Experience
• Demonstrable experience in enterprise-level data platforms involving implementation of end-to-end data pipelines
• Hands-on experience with at least one of the leading public cloud data platforms (Amazon Web Services, Azure or Google Cloud)
• Experience with column-oriented database technologies (e.g., BigQuery, Redshift, Vertica), NoSQL database technologies (e.g., DynamoDB, BigTable, Cosmos DB, etc.) and traditional database systems (e.g., SQL Server, Oracle, MySQL)
• Experience in architecting data pipelines and solutions for both streaming and batch integrations using tools/frameworks like Glue ETL, Lambda, Google Cloud DataFlow, Azure Data Factory, Spark, Spark Streaming, etc.
• Metadata definition and management via data catalogs, service catalogs, and stewardship tools such as OpenMetadata, DataHub, Alation, AWS Glue Catalog, Google Data Catalog.
• Test plan creation and test programming using automated testing frameworks, data validation and quality frameworks, and data lineage frameworks
• Data modeling, querying, and optimization for relational, NoSQL, timeseries, graph databases, data warehouses and data lakes
• Data processing programming using SQL, DBT, Python, and similar tools
• Logical programming in Python, Spark, PySpark, Java, JavaScript, and/or Scala
• Cloud-native data platform design with a focus on streaming and event-driven architectures
• Participate in integrated validation and analysis sessions of components and subsystems on production servers
• Data ingest, validation, and enrichment pipeline design and implementation
• SDLC optimization across workstreams within a solution
• Bachelor’s degree in Computer Science, Engineering, or related field

Additional Information

Set Yourself Apart With:
• Certifications for any of the cloud services like AWS, GCP or Azure
• Experience working with code repositories and continuous integration
• Understanding of development and project methodologies

Benefits of Working Here:
• Flexible vacation policy: time is not limited, allocated, or accrued
• 15 paid holidays throughout the year
• Generous parental leave and new parent transition program
• Tuition reimbursement
• Corporate gift matching program
Apply Here
For Remote Manager Data Engineering roles, visit Remote Manager Data Engineering Roles

********

Sr. Data Engineer-Remote at Infinity Consulting Solutions

Location: Chicago

TITLE: Sr. Data Engineer- Remote

Location: Remote

Do you get excited when your users get the data they need? Are you experienced across cloud, on-premises, platforms, and technologies? Do you love getting direction, then getting to work and iterating through a solution? This role is for a Senior Data Engineer. This person will design, develop, and support solutions to transport, store, and analyze our analytical data.

What You’ll Do:
• General
• Create data architecture, pipelines, and analytical solutions to meet software and data science requirements for various PG Healthcare Products
• Build out infrastructure, create custom solutions, and leverage industry tools to accomplish project and ad-hoc objectives
• Identify and implement improvements for data reliability, efficiency and quality
• Identify, evaluate, select, and prove out new technologies and toolsets
• Create and execute Proofs of Concept and Proofs of Technology
• Building Data Pipelines
• Build, extend, or leverage solutions to acquire, transform, and store data for use in analytical context
• Analyze and improve performance, scalability, and fault tolerance
• Cloud Compute and Data Lake
• Use large data sets to solve business problems
• Leverage analytical skills with unstructured or minimally structured datasets
• Build solutions using massively parallel processing cloud technologies (e.g., Databricks); see the PySpark sketch after this list
• Use appropriate languages to assemble and analyze datasets
• Data Storage Design
• Collaborate with software development, business teams, and data scientists to establish data storage, pipeline, and structure requirements
• Identify and plan for data storage performance requirements
• Application Interface and Data Storage Implementation
• Collaborate with software development, business teams, and data scientists to create and execute implementations
• Identify impact of implementation on other applications and databases
• Lead and mentor data engineers on data projects
• Master Data Management and Data Governance
• Assist team to build and evolve Trusted Record systems to manage entities across the enterprise
• Design, implement, and evolve solutions around person identity management
• Data Wrangling
• Identify patterns in data
• Identify opportunities to automate and optimize tasks
• Perform root cause analysis and remediate data issues
• Mentorship
• Identify areas of development and need
• Provide targeted training and exploration for team members
• Train and mentor data engineers on standards and best practices
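For the massively parallel processing work mentioned in the list above, here is a minimal PySpark sketch, assuming a local Spark session; the dataset and column names are illustrative only:

```python
# A tiny in-memory aggregation standing in for a large analytical job;
# "member_id" and "claim_amount" are hypothetical columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_rollup").getOrCreate()

events = spark.createDataFrame(
    [("a", 10.0), ("a", 5.0), ("b", 7.5)],
    ["member_id", "claim_amount"],
)

# Aggregations like this are distributed across the cluster
# (or across local cores when running locally).
totals = events.groupBy("member_id").agg(
    F.sum("claim_amount").alias("total_claims"),
    F.count("*").alias("claim_count"),
)
totals.show()
spark.stop()
```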

Skills / Experience You Will Need:
• Demonstrated knowledge of Azure data technologies (Databricks, ADF, Stream Analytics, ADLS, Synapse) and on-premises Microsoft tools (SQL DB and SSIS), plus familiarity with AWS data technologies
• Fluent in SQL and Python, and familiar with PowerShell and APIs
• Significant experience with analytical solutions in relational databases such as MS SQL Server, Oracle, and DB2, as well as experience with NoSQL databases and solutions such as data lakes, document-oriented databases, and graph databases
• Skill and experience in data modeling (e.g., with tools like ER/Studio, Erwin, or others)
• Great verbal and written communication skills at both team and leadership levels
• Preferred experience with major EHR and data-exchange technologies (e.g., HL7, FHIR)
• Preferred experience with person identity management and entity resolution processing.
• Preferred experience in Data Science statistical analysis and machine learning
• Minimum of 5 years Information Management experience in an enterprise environment

About Infinity Consulting Solutions

At Infinity Consulting Solutions, our mission is to cultivate successful long-term relationships with candidates and clients, matching the right candidate with the right client. We believe technology cannot replace the real personal relationships we cultivate. We reject the notion that technology alone is the answer to staffing, which is why our successful partnerships rely on collaboration, not automation. ICS has been providing flexible staffing solutions for over 20 years in Information Technology, Compliance, Accounting/Finance, and Corporate Support. Our staffing solutions include Contract, Temp-to-Perm, and Permanent Placement.

ICS is an Equal Opportunity Employer.
Apply Here
For Remote Sr. Data Engineer-Remote roles, visit Remote Sr. Data Engineer-Remote Roles

********

Data Engineering: Senior Associate at TheMathCompany

Location: Chicago

COMPANY OVERVIEW

At TheMathCompany, we enable viable and valuable data and analytics transformations for our clients. Our mission is to help Fortune 500 organizations build core capabilities that set them on a path to achieve analytics self-sufficiency. We are changing the way companies go about executing enterprise-wide data engineering and data science initiatives by defining and delivering comprehensive and robust analytics engagements. With a holistic range of services across data engineering, data science, and management consulting, we are in the business of disrupting the analytics services and product space. To help us achieve our objectives, we are looking for passionate and experienced practitioners to join our US organization and be part of the growth story of one of the fastest-growing AI/ML startups in the world.

We, as an organization, are committed to your personal success and professional development. TheMathCompany offers and supports our employees’ development of their personal brand through professional experiences, best in-class learning opportunities, inclusion, collaboration, and personal well-being.

ROLE DESCRIPTION

• We are looking for passionate individuals to help our clients solve complex challenges, enabling their sustained analytics transformation.

• As a member of the Customer Success team, you will have the opportunity to design, execute, and implement cutting-edge analytical solutions for customers.

• You will be responsible for institutionalizing data-driven insights and recommendations to bring analytics findings and customer strategies to life.

• The ideal candidate is enthusiastic, has experience working with large datasets, is comfortable building relationships with a variety of stakeholders, and loves solving complex business problems through data and analytics while leading a team in a fast-paced environment.

• This role will involve understanding the customer’s business, generating insights, developing recommendations, and presenting these to clients and their management. The candidate will also be accountable for communicating the strategy and delivery plan to clients, ensuring full alignment across client and internal teams, and mitigating potential risks and escalations.

As a Senior Associate, you will:

● Collaborate with teams of business stakeholders and lead the design and execution of a roadmap to drive business outcomes

● Build relationships with customers to become a trusted partner who provides thought leadership and strategic guidance

● Leverage the global delivery model by working with cross-cultural teams

● Lead, mentor, and coach a team of Associates in our US offices

● Solve complex business problems for organizations, leveraging conventional and new-age data sources and applying cutting-edge advanced analytics techniques

● Proactively identify business development opportunities for your existing client engagements & departments

REQUIRED QUALIFICATIONS

● 4+ years of design and implementation experience with distributed applications

● 4+ years of experience architecting/operating solutions built on GCP

● End-to-end hands-on experience carrying out complex POC, pilot, and limited production rollout assignments requiring the development of new or improved techniques and procedures

● Participate in deep architectural discussions to build confidence and ensure customer success when building new, or migrating existing, applications, software, and services on Google Cloud Platform

● Strong experience in data pipelines, ETL design (both implementation and maintenance), data warehousing, and data modeling (preferably in dbt)

● Worked collaboratively with cross-functional teams and stakeholders to achieve an organizational goal

● Worked in an agile environment and comfortable running an agile process for the data and analytics team

● Advanced skills in SQL, data modeling, ETL/ELT development, and data warehousing

● Strong skills in optimization: performance, pipelines, Spark

● Experience with PySpark

● Proficiency in engineering best practices (e.g., using Git)

● Implementation and tuning experience in Google Cloud, including data analytics (Dataproc, Airflow, Hadoop, Spark, Hive), GCP AI and ML services, data warehousing (BigQuery, schema design, query tuning and optimization), and data migration and integration (a brief BigQuery example follows)
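To make the BigQuery query-tuning item concrete, here is a brief sketch, assuming the google-cloud-bigquery Python client and default credentials; the project, dataset, and table names are placeholders:

```python
# Hypothetical BigQuery example: select only needed columns and filter on
# the partition column, which keeps scanned bytes (and cost) down on large
# partitioned tables.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT customer_id, SUM(order_amount) AS total_spend
    FROM `my_project.analytics.orders`      -- placeholder table
    WHERE order_date = DATE '2022-09-14'    -- partition filter
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.customer_id, row.total_spend)
```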

PREFERRED QUALIFICATIONS

● Bachelor’s degree

● Consulting background is a plus

PLEASE NOTE: Candidates must be legally authorized to work in the United States
Apply Here
For Remote Data Engineering: Senior Associate roles, visit Remote Data Engineering: Senior Associate Roles

********

Lead Data Engineer at 84.51°

Location: Chicago

84.51° Overview:

84.51° is a retail data science, insights and media company. We help the Kroger company, consumer packaged goods companies, agencies, publishers and affiliated partners create more personalized and valuable experiences for shoppers across the path to purchase.

Powered by cutting-edge science, we leverage first-party retail data from nearly 1 in 2 US households and 2BN+ transactions to fuel a more customer-centric journey, utilizing 84.51° Insights, 84.51° Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.

Join us at 84.51°

__________________________________________________________

Responsibilities
Take ownership of features and drive them to completion through all phases of the entire 84.51° SDLC. This includes internal and external facing applications as well as process improvement activities:
• Lead the design of and develop Cloud and Hadoop based solutions
• Perform unit and integration testing
• Participate in implementation of BI visualizations
• Collaborate with architecture and lead engineers to ensure consistent development practices
• Provide mentoring to junior engineers
• Participate in retrospective reviews
• Participate in the estimation process for new work and releases
• Collaborate with other engineers to solve and bring new perspectives to complex problems
• Drive improvements in people, practices, and procedures
• Embrace new technologies and an ever-changing environment

Requirements:
Bachelor’s degree, typically in Computer Science, Management Information Systems, Mathematics, Business Analytics, or another STEM field.
• 5+ years of proven professional data development experience
• 3+ years of proven experience developing with Hadoop/HDFS
• 3+ years of development experience with either Java or Python
• 3+ years of experience with PySpark/Spark
• 3+ years of experience with Airflow
• Full understanding of ETL concepts
• Exposure to VCS (Git, SVN)
• Strong understanding of Agile principles (Scrum)

Preferred Skills – Experience in the following
• Exposure to NoSQL (Mongo, Cassandra)
• Exposure to Service Oriented Architecture
• Exposure to cloud platforms (Azure/GCP/AWS)
• Exposure to BI tooling, e.g., Tableau, Power BI, Cognos
• Proficient with relational data modeling and/or data mesh principles
• Continuous Integration/Continuous Delivery

ABOUT 84.51

At 84.51° people are the key to everything.

We are dedicated to always doing what’s right and never compromising on our values. That’s why we have created a culture where every opinion is respected, and contrary thinking is encouraged.

Making life more rewarding for our community and our associates is an essential part of the process. Our team is known for their enthusiasm, camaraderie and sense of fun. We work hard and we play just as hard. That’s why we engage in a range of official and unofficial activities to get together, serve the community, and have fun.

We continually seek people who make us better. In order to continue to grow, we need more great people who want to join us in doing cool, industry-changing, brain-stimulating work.

Apply Here
For Remote Lead Data Engineer roles, visit Remote Lead Data Engineer Roles

********

Data Engineer – Hiring Immediately at Journera

Location: Chicago

Journera is building the first real-time data platform for the travel industry, and we’re backed by some of the biggest brands in the world. We’re changing the way the industry works by unlocking the power of its data and are seeking a data engineer to join the Technology team.

We are building an Experience Management Platform that analyzes data from airlines, hotels, transportation and beyond. That requires us to handle a lot of real-time data in a secure, timely, and scalable way, which is where you come in.

Data engineering is core to what we do and we are looking for folks to join our team who share our passion for data.

Diversity is one of our core company values. We are committed to fostering an inclusive, equitable, and supportive working and learning environment for all our employees. We believe the diverse experiences of our employees enrich the way we identify and overcome challenges, as well as design and deliver solutions.

What you bring
• Bachelor’s in science or engineering, or relevant work experience
• Real passion for wrangling data
• Experience building secure and scalable data solutions
• Software design and development skills
• Experience with relational and NoSQL datastores
• Automated test and continuous integration experience
• Experience with languages such as Python, SQL, R, Scala, and Go

What you bring (bonus edition)
• Advanced academic degree
• Stats skills
• Your personal story about using data to save the world
• Cloud experience – AWS preferable
• Experience working in a startup or other entrepreneurial environment
• Your ideas on how to make travel easier by using data
• Experience with Redshift, Airflow or Glue
• Remote work experience

In addition to your technical skills, we value experiences that demonstrate a commitment to diversity, inclusion, and community involvement. Please feel free to include your involvement with clubs, meetups, ERGs, non-profits, your community, etc., on your resume or cover letter, and ask us what we do at Journera.

What you’ll do
• Work closely with data scientists and software engineers to build and run a world-class data streaming environment (a small consumer sketch follows this list)
• Solve data engineering problems at a scale and speed most don’t need to worry about
• Design and build new analytical and data solutions
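For the real-time streaming work described above, here is a small, hypothetical consumer sketch, assuming the kafka-python package; the topic name, broker address, consumer group, and payload fields are illustrative assumptions:

```python
# Hypothetical Kafka consumer: reads JSON travel events from a placeholder
# topic and hands each one to placeholder processing logic.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "travel-events",                       # placeholder topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="demo-consumer-group",        # hypothetical consumer group
)

for message in consumer:
    event = message.value
    # Placeholder handling: a real pipeline would validate, enrich, and
    # route each event to downstream storage.
    print(event.get("event_type"), event.get("timestamp"))
```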

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

While we don’t have an active sponsorship program at this time, we are open to sponsorship for the successful candidate.
Apply Here
For Remote Data Engineer – Hiring Immediately roles, visit Remote Data Engineer – Hiring Immediately Roles

********

Data Engineer at Emonics, LLC

Location: Chicago

Job Description:
Expert experience with RDBMS, NoSQL DBs, data modeling, and ETL/ELT processes

Expert experience with Python and SQL, with focus on data manipulation and analysis

Expert experience with building, deploying, and maintaining data pipelines

Expert experience with sourcing and profiling highly variable data

Strong experience in software craftsmanship, unit testing, and behavior-driven development

Strong people skills, must be able to form strong, meaningful, and lasting collaborative relationships

Strong experience with collaborating on scrum teams

Strong experience with cloud infrastructure services with preference for experience with Google Cloud Platform (GCP)

Strong experience with MongoDB, BigQuery, Jira, Git, Kubernetes, Jenkins, Terraform, Apache Airflow, Apache Beam, and Apache Spark

Bachelor’s degree (master’s preferred) in Computer Science, Applied Mathematics, Engineering, or another technology-related field; an equivalent of this educational requirement in working experience is also acceptable
Apply Here
For Remote Data Engineer roles, visit Remote Data Engineer Roles

********
