Hiring - Python & PySpark Developer - Remote US
Role/Title: Python & PySpark Developer
Location: Remote
Client - Accenture
End client - Capital One
Duration - 6+ Months
Responsibilities:
Design and develop data processing pipelines using Spark components such as Spark SQL, Spark Streaming, and MLlib.
Write clean, efficient, and well-documented Python code.
Work with various data sources, including structured, semi-structured, and unstructured data.
Perform data cleaning, transformation, and manipulation tasks.
Integrate data pipelines with other systems and applications.
Optimize PySpark applications for performance and scalability.
Troubleshoot and debug issues within data pipelines.
Collaborate with data analysts, data scientists, and other engineers.
Stay up-to-date on the latest advancements in Python, PySpark, and big data technologies.
Must-Have Qualifications:
3+ years of experience in software development using Python.
Proven experience in designing and developing big data applications using Apache Spark.
Strong understanding of data processing concepts and methodologies.
Experience working with cloud platforms (AWS, Azure, GCP) is a plus.
Experience with data warehousing and data lake architectures is a plus.
Familiarity with SQL and data modeling techniques is a plus.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
Ability to work independently and as part of a team.
Thanks & Regards
Rahul Yadav
Technical Recruiter
Tigerbells LLC
Suite 52, 1405 Chews Landing Rd.
Laurel Springs, NJ 08021
Phone +1 (609) 375-8176
Email: rahul@tigerbells.com