Capgemini Software Engineer in North Carolina
A global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the entire breadth of clients’ opportunities in the evolving world of cloud, digital and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. Capgemini is driven by the conviction that the business value of technology comes from and through people. It is a multicultural company of 200,000 team members in over 40 countries. The Group reported 2017 global revenues of EUR 12.8 billion.
People matter, results count.
Job Title: Spark Developer/Architect
Position Type: Permanent/Full-time
Duties & Responsibilities:
• Develop and deploy distributed Big Data applications using Apache Spark on the MapR Hadoop distribution (Hortonworks or Cloudera experience is also acceptable)
• Help drive cross-team design and development through technical leadership and mentoring
• Work with business partners to develop business rules and business rule execution
• Perform process improvement and re-engineering with an understanding of technical problems and solutions as they relate to the current and future business environment.
• Design and develop innovative solutions for demanding business situations
• Analyze complex distributed production deployments, and make recommendations to optimize performance
Required Skills & Qualifications:
• At least 9 years of professional programming experience in Java or Scala
• 3 or more years of experience with the Hadoop Stack
• 2+ years of experience with distributed computing frameworks such as Apache Spark and Hadoop
• Experience with Elasticsearch is a plus
• Strong knowledge of Object-Oriented Analysis and Design, software design patterns, and Java coding principles
• A core Java development background is a must
• Familiarity with Agile engineering practices
• Proficiency with MapR Hadoop distribution components and custom packages is a huge plus
• Proven understanding of and related experience with Hadoop, HBase, Hive, Pig, Sqoop, Flume, and/or MapReduce
• Excellent RDBMS (Oracle, SQL Server) knowledge for development using SQL and PL/SQL
• Solid UNIX and shell-scripting skills
• Strong initiative with the ability to identify areas of improvement with little direction
• Team-player excited to work in a fast-paced environment; Agile experience preferred
• Bachelor’s degree in computer science/data processing or equivalent
Disclaimer: Capgemini America Inc and its U.S. affiliates are EEO/AA employers. Capgemini conducts all employment-related activities without regard to race, religion, color, national origin, age, sex, marital status, sexual orientation, gender identity/expression, disability, citizenship status, genetics, or status as a Vietnam-era, special disabled and other covered veteran status.
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.