Senior Big Data Developer

Seattle, WA, USA

Striving for excellence is in our DNA. Since 1993, we have been helping the world's leading companies imagine, design, engineer, and deliver software and digital experiences that change the world. We are more than just specialists; we are experts.

DESCRIPTION

We are currently looking for a Senior Big Data Developer to join our Seattle, WA office and make the team even stronger.

In this position, you will be a key on-site resource for our clients, supporting the design and development of leading-edge solutions for EPAM.

We do not believe in matching against a list of buzzwords. We look for smart people with strong general programming skills and experience with Big Data platforms and technologies in the cloud, because we believe that clever developers can learn new technologies quickly and well.

Responsibilities

  • Work with the rest of the team to design, develop, and manage end-to-end engineering solutions for business opportunities using existing on-premises or new cloud-based technology platforms;
  • Tenaciously keep the Big Data infrastructure (Hadoop and peripheral tools) operational across various environments, in data centers and in the cloud;
  • Deploy and manage applications on AWS built on services such as EMR, Redshift, and DynamoDB;
  • Lift and shift on-premises Big Data applications and tools to the cloud;
  • Work with the team to build Big Data platform solutions for different business needs;
  • Develop scripts and glue code to integrate multiple software components;
  • Proactively monitor environments and drive troubleshooting and tuning;
  • Apply deep knowledge of Hadoop and cloud technologies to troubleshoot issues;
  • Evaluate and build different compute frameworks for all tiers of the cloud technology stack.

Requirements

  • 5+ years of relevant experience, preferably in e-commerce and Big Data technologies;
  • Strong programming skills in at least one programming language: Scala, Java, or Python;
  • Experience implementing the full life cycle of massive datasets, including ETL, cleaning, data analysis, and deployment in the cloud;
  • Experience processing large amounts of structured and unstructured data from various sources using technologies such as the Hadoop ecosystem, MapReduce programming, Hive, Spark, and more;
  • Experience with a wide variety of AWS technologies such as S3, EMR, Redshift, Lambda, Aurora, SNS, and EC2;
  • Experience with relational databases such as Postgres and with NoSQL databases such as MongoDB or Cassandra;
  • Experience with Linux and hands-on shell scripting;
  • Agile development methodologies, including Scrum, code reviews, and pair programming;
  • Object-oriented design and development;
  • Performance and scalability tuning, algorithms, and computational complexity;
  • Open-source libraries and tools such as Spring, Maven, Guava, Apache Commons, Eclipse, Git, Jira, and Jenkins;
  • Unit testing;
  • All things Linux (Bash scripting, grep, sed, awk, etc.);
  • An MS/BS degree in computer science or a related discipline is nice but not essential.