Big Data Engineer

Minsk, Belarus

Striving for excellence is in our DNA. Since 1993, we have been helping the world’s leading companies imagine, design, engineer, and deliver software and digital experiences that change the world. We are more than just specialists; we are experts.


We are currently looking for an experienced, self-managed Big Data Engineer for our Minsk office to make the team even stronger.

The project involves migrating from third-party DNS Security subscriptions to the customer’s own database of DNS Security-related records, while continuing to serve the customer’s clients without interruption. This requires fast processing of a large amount of data (100k records/sec) and optimizing storage volume by consolidating raw data received from several sources.
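The consolidation idea can be sketched in plain Scala. Note that the record shape and field names below are illustrative assumptions, not the project’s actual schema; in production this logic would run as a Spark job over Kafka input rather than over in-memory collections:

```scala
// Hypothetical record type: one DNS Security verdict from one upstream feed.
case class DnsRecord(domain: String, verdict: String, source: String, timestamp: Long)

// Consolidate raw records from several sources: for each domain, keep only
// the most recent verdict, collapsing duplicates to shrink storage volume.
def consolidate(records: Seq[DnsRecord]): Map[String, DnsRecord] =
  records
    .groupBy(_.domain)                                  // one bucket per domain
    .map { case (domain, recs) => domain -> recs.maxBy(_.timestamp) }
```

The same group-by-key / reduce-per-key pattern maps directly onto Spark’s `reduceByKey`, which is what makes it practical at 100k records/sec.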

We provide great opportunities for specialists who are ready for challenges and strive to extend their technical skills and knowledge. Moreover, business trips to the customer’s location in Santa Clara, CA are possible.


Our client is a global technology company from California’s Silicon Valley, operating in 25 countries, which delivers DDI (DNS, DHCP, IPAM) technology to help its customers control their networks. The company has more than 8,000 clients around the world, including airlines, government institutions, technology companies, and mobile operators.

Project technologies and tools

  • Scala;
  • Spark;
  • Kafka;
  • DynamoDB;
  • S3.


Responsibilities

  • Prepare data for the UI – gather statistics;
  • Provide data filtering;
  • Implement machine learning solutions;
  • Implement ETL pipelines;
  • Deploy clusters in AWS using the existing DevOps toolchain;
  • Work both independently and in close collaboration with others in the team and across the business;
  • Communicate with the customer to clarify business requirements.


Requirements

  • Strong algorithmic skills;
  • Experience developing high-performance applications/algorithms (the system must process at least 100k messages per second);
  • Excellent knowledge of collections (lists, arrays, hash maps, etc.) with a focus on performance and efficiency, independent of any specific language;
  • Experience with the Linux shell – ability to troubleshoot and work with Linux from the command line;
  • Good communication skills;
  • At least Pre-Intermediate+ (A2+) English; a higher level is a big advantage.

Nice to have

  • Big Data knowledge;
  • Experience with the following tools: Spark, Kafka, DynamoDB, Amazon S3;
  • Scala.

We offer

  • Experience exchange with colleagues all around the world;
  • Competitive compensation depending on experience and skills;
  • Regular assessments and salary reviews;
  • Social package: medical care, sports, family care;
  • Free English classes;
  • Opportunities for self-realization;
  • Friendly team and enjoyable working environment;
  • Flexible working schedule;
  • Corporate and social events.