Job Requirements of Sr. Data Engineer, PySpark and AWS:
- Employment Type: Contractor
- Manage Others: No
- Location: Charlotte, NC (Onsite)
Sr. Data Engineer, PySpark and AWS
We are seeking a Senior Data Engineer to join our client's team onsite in Charlotte, North Carolina. This engineer will design and develop data/batch processing, data manipulation, data mining, and data extraction/transformation/loading (ETL) into large data domains using Python, PySpark, and AWS tools.
Client Details
This opportunity is with a large organization in the Financial Services industry, known for its innovative approach to technology and data-driven solutions. The team is dedicated to delivering excellence and supporting the company's technological initiatives.
Description
- Lead end‑to‑end project delivery, including scoping, estimating, planning, design, development, testing, implementation, and ongoing support.
- Design and build ETL pipelines, batch processes, and data solutions by collaborating with developers, business analysts, and business teams to translate requirements into technical designs.
- Ensure high-quality delivery by following SDLC standards, performing defect tracking and analysis, resolving or escalating risks, and coordinating with vendors or contractors as needed.
- Produce clear technical documentation, present solution options, and maintain awareness of industry best practices and emerging technologies.
MPI does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, disability, veteran status, marital status, or based on an individual's status in any group or class protected by applicable federal, state or local law. MPI encourages applications from minorities, women, the disabled, protected veterans and all other qualified applicants.
Profile
- Develop large-scale ETL solutions using Python and PySpark, leveraging AWS services such as Glue, Lambda, MSK (Kafka), S3, Step Functions, RDS, and EKS to build, optimize, and automate data pipelines.
- Design and implement complex data models and transformations, working with SQL (queries, stored procedures, tuning), relational databases (Postgres, SQL Server, Oracle, Sybase), REST APIs, and UNIX scripting.
- Contribute to full lifecycle development by creating mapping specs and technical designs (HLD/LLD), coding, unit testing, data profiling, and utilizing ETL tools like DataStage, Informatica, and Pentaho.
- Support modern engineering practices through CI/CD (GitHub, Jenkins), Agile/Scrum delivery, workload automation (Control-M, Autosys), and knowledge of MDM, data warehousing, and analytics.
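To make the extract-transform-load pattern at the heart of this role concrete, here is a minimal sketch in plain Python. The data, field names, and tax rate are illustrative assumptions only; in the role described above this pattern would run at scale with PySpark DataFrames on AWS Glue rather than the standard library.

```python
import csv
import io

# Illustrative ETL sketch: extract rows from CSV text, transform them
# (filter + derive a column), and "load" into an output list.
# All values below are made up for demonstration.

RAW = """order_id,amount,region
1001,250.00,NC
1002,75.50,SC
1003,410.25,NC
"""

def extract(text: str) -> list[dict]:
    """Parse CSV text into a list of row dicts (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[dict]:
    """Keep NC orders and add a derived tax column (illustrative 7% rate)."""
    out = []
    for row in rows:
        if row["region"] == "NC":
            amount = float(row["amount"])
            out.append({**row, "amount": amount, "tax": round(amount * 0.07, 2)})
    return out

def load(rows: list[dict]) -> list[dict]:
    """Stand-in for writing to a target table (e.g., Postgres or S3)."""
    return rows

result = load(transform(extract(RAW)))
print(len(result))  # prints 2 (the two NC orders)
```

The same three stages map directly onto a PySpark job: `extract` becomes a `spark.read` call against S3, `transform` becomes DataFrame filters and column expressions, and `load` becomes a write to the warehouse.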
Job Offer
- Long-term contract position
- Collaborative and innovative work environment
If you are passionate about data engineering and are eager to contribute to the success of a large organization in Charlotte, we encourage you to apply.