Hadoop Big Data Developer

Our client, a leading global supplier of IT services, requires an experienced Hadoop Big Data Developer to be based at their client’s office in Windsor, UK.

This is a hybrid role – you can work remotely in the UK and attend the Windsor office once a week.

This is a 6-month temporary contract, starting ASAP.

Day rate: competitive market rate.

The right candidate will have strong experience in PySpark, Spark SQL, Hive, Python, and Kafka.

Key Responsibilities:

  • Work closely with the development team to assess existing Big Data infrastructure
  • Design and code Hadoop applications to analyse data compilations
  • Create data processing frameworks
  • Extract and isolate data clusters
  • Test scripts, analyse results and troubleshoot bugs
  • Create data tracking programs and documentation
  • Maintain security and data privacy

Key Requirements:

  • 10+ years of total IT experience
  • 5+ years Hadoop development experience
  • Strong expertise in PySpark, Spark SQL, Hive, Python and Kafka
  • Strong experience in data collection and integration, scheduling, data storage and management, and ETL (Extract, Transform, Load) processes
  • Knowledge of relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB)
  • Experience building, scheduling and maintaining data pipelines
  • Experience managing business stakeholders and clarifying requirements
  • Good written and verbal communication skills

Due to the volume of applications received, we are unfortunately unable to respond to everyone.

If you do not hear back from us within 7 days of sending your application, please assume that you have not been successful on this occasion.

Please do keep an eye on our website https://projectrecruit.com/jobs/ for future roles.

Upload your CV/resume or any other relevant file. Max. file size: 32 MB.