Remote
Full Time

What you will do:

  • Create or update features of the data processing pipelines using GCP components such as Dataflow, BigQuery, and Bigtable
  • Develop new services and dashboards to ensure data quality in real time, and design a monitoring process
  • Participate in the design and implementation of new microservices and machine-learning features plugged into the pipeline
  • Explore and implement new data sources
  • Monitor the pipeline

What we're looking for:

  • Strong knowledge of at least one of the following technologies: Apache Beam, Spark, or Hadoop
  • Knowledge of Java, Python, and/or Golang
  • At least 2 years of experience as a (Big) Data Engineer
  • Experience with a cloud platform is a plus
  • Good level of English (written and spoken)
  • Knowledge of ClickHouse

Our mission

The data team works closely with both the DevOps and Data Science teams and is mainly in charge of developing and monitoring the data collection pipeline. The collection pipeline, which processes a few terabytes of data per day, is deployed on a Google Cloud Platform environment and is critical to us. Several GCP environments are available to support the development and deployment of features, as well as data modelling and documentation.

You will be required to work on different major areas:

  • Streaming data pipeline (Dataflow, Pub/Sub, Bigtable, BigQuery): performance and cost optimisation
  • Data restitution: SQL / API / architecture optimisations / performance
  • Data QA: ensure the quality of collected data, both from a technical point of view (data consistency) and for the client's business analysis (time series analysis)
  • Innovation / research:
      • Test and learn new technologies
      • Propose innovative and valuable features for us and our clients
  • Internal analysis: for example, analyse how our clients use the platform to optimise the way we pre-process results

Besides the projects planned annually in the roadmap, you will have other tasks related to the continuous improvement of the platform:
  • Keep the versions of the technologies and tools we use up to date
  • Improve the monitoring of the platform for each service / pipeline KPI
  • Maintain public and internal documentation

Soft Skills required:

  • You understand the "CRO" and/or web analytics business
  • You know what it means to work in a team, and work with other teams
  • You know how to put yourself in the client's shoes when it comes to analyzing their needs

Not convinced yet?

In addition to the company's human-centered values, joining the data team means being a real player in the daily life of the team by participating in "Chapter" meetings to share data culture and articles. Needless to say, we are flexible when it comes to remote work.

Now it's time to join the best team as a Data Engineer.

Submit your application
