
Senior Hadoop Developer


Iselin, NJ

Job Details

Position Title: Senior Hadoop Developer
Job Code: 388158
Job Location: New Jersey-Iselin

This is an exciting time to be part of the Retirement Technology team. The technology organization has rolled out a new foundation with multi-year investments in Digital, SOA, Data and Analytics technologies and solution-delivery capabilities. The primary driver for this investment is to help deliver solutions that are more responsive, agile, adaptive, and cost-efficient to meet changing business needs and customer-experience expectations. Information Management and related platforms are a key driver of this foundational work and critical to the company’s plans.

The Prudential Retirement Information Technology organization plays a key role in enabling positive business outcomes and meeting customer expectations as part of our mission as an industry leader in Retirement Services. As Prudential becomes a more data-driven organization with a focus on data analytics and Big Data, this role will help enable this shift.

This role is a senior programmer analyst/technical lead role specializing in Big Data technology. It requires hands-on development and delivery experience. The ideal candidate has deep technology acumen and experience, including delivery of Hadoop ecosystem projects, and can handle competing demands and manage the workload within a distributed team. This role is a critical contributor to Retirement Information Technology's vision and will help implement the technology roadmap by supporting one or more projects.

The candidate will:

• Use hands-on programming skills to design, code, and test complex distributed application components; work with a team of technology and business data specialists to execute the technology functions required to establish data environments, develop data maps, extract and transform data, and analyze and reconcile data errors and anomalies. This is a dynamic environment where the right candidate will be comfortable and capable of operating in “test and learn” mode.

• Build distributed and scalable data pipelines that ingest and process data using components from the Hadoop ecosystem. 

• Evaluate Big Data technologies and prototypes to improve data processing architecture. 

• Demonstrate proficiency in programming, specializing in Scala, Spark, Hive, and Impala development.

• Demonstrate proficiency in Shell and Python scripting for file validation and processing, job scheduling, distribution, and automation.

• Strong object-oriented design and analysis skills. 

• Experience in performance tuning and analysis in both relational and NoSQL databases.

• Strong analytical and problem-solving skills.

• Experience with Agile and lean development practices. 

• Desire to learn and use different technologies to overcome complex problems. 

• Ten years of software development IT experience, including 2-3 years of relevant Big Data/Hadoop experience.

• Strong core knowledge of Hadoop architecture and components in the Hadoop ecosystem. 

• Strong experience programming with Spark and Scala, and with other components of the framework, such as Impala. Should also be able to code in Python.

• Strong experience in Hive Query Language including performance aspects.

• Strong knowledge of and development experience with Pig, Sqoop, and Impala.

• Python and Shell programming experience. 

• Experience with Linux or Solaris is required.

• Working knowledge of a version control system such as PVCS or Git.

• Software mind-set with a focus on reusability and testability. 

• Excellent organizational, analytical, written, and oral communication skills.

• Excellent interpersonal skills, with a proven ability to interface effectively with end users and development teams to conduct systems analysis and planning.

• BS or MS in Computer Science, Information Systems, or equivalent.


• Knowledge of Data Science, data warehouse, and data visualization concepts to support analytics. Knowledge of R.

• Experience with analytic and reporting technologies such as Tableau and SAS.

• Experience with JSON and REST web services.
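To make the file-validation and reconciliation work described above concrete, here is a minimal sketch in plain Python. Everything in it is hypothetical — the column names, the feed format, and the single numeric check stand in for whatever rules a real data feed would require — and a production version would be built on the Hadoop/Spark stack the posting names:

```python
# Hypothetical sketch of a file-validation step: check a CSV feed's header,
# coerce a numeric field, and separate good rows from error records.
import csv
import io

EXPECTED_COLUMNS = ["account_id", "plan_code", "balance"]  # assumed schema


def validate_feed(text):
    """Return (good_rows, errors) for a CSV feed with a header row."""
    reader = csv.DictReader(io.StringIO(text))
    if reader.fieldnames != EXPECTED_COLUMNS:
        return [], [f"unexpected header: {reader.fieldnames}"]
    good, errors = [], []
    for lineno, row in enumerate(reader, start=2):  # line 1 is the header
        try:
            row["balance"] = float(row["balance"])  # reconcile the numeric field
            good.append(row)
        except (TypeError, ValueError):
            errors.append(f"line {lineno}: bad balance {row['balance']!r}")
    return good, errors


feed = "account_id,plan_code,balance\nA1,P9,100.50\nA2,P9,oops\n"
rows, errs = validate_feed(feed)
print(len(rows), len(errs))  # 1 good row, 1 rejected row
```

The same validate-then-partition pattern scales up naturally: in a Spark job the per-row check would run inside a map over a distributed dataset rather than a local loop.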

Prudential is a multinational financial services leader with operations in the United States, Asia, Europe, and Latin America. Leveraging its heritage of life insurance and asset management expertise, Prudential is focused on helping individual and institutional customers grow and protect their wealth. The company's well-known Rock symbol is an icon of strength, stability, expertise and innovation that has stood the test of time. Prudential's businesses offer a variety of products and services, including life insurance, annuities, retirement-related services, mutual funds, asset management, and real estate services.

We recognize that our strength and success are directly linked to the quality and skills of our diverse associates. We are proud to be a place where talented people who want to make a difference can grow as professionals, leaders, and as individuals. Visit to learn more about our values, our history and our brand.

Prudential is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, genetics, disability, age, veteran status, or any other characteristic protected by law.

Note that this posting is intended for individual applicants. Search firms or agencies should email Staffing at for more information about doing business with Prudential.
Job Function: Information Technology
Schedule: Full-time
