Ranked as #12 on Forbes’ List of 25 Fastest Growing Public Tech Companies for 2017, EPAM is committed to providing our global team of over 24,000 people with inspiring careers from day one. EPAMers lead with passion and honesty, and think creatively. Our people are the source of our success and we value collaboration, try to always understand our customers’ business, and strive for the highest standards of excellence. No matter where you are located, you’ll join a dedicated, diverse community that will help you discover your fullest potential.
You are curious, persistent, logical and clever – a true techie at heart. You enjoy living by the code of your craft and developing elegant solutions for complex problems. If this sounds like you, this could be the perfect opportunity to join EPAM as a Senior Big Data Engineer. Scroll down to learn more about the position’s responsibilities and requirements.
EPAM is looking for a Senior Big Data Engineer to join our growing team in Hartford, CT. Our ideal candidate is a subject matter expert across big data technologies and data science. This role spans architecture analysis, design, development, and more.
Responsibilities:
Develop proposals for implementation and design of scalable Big Data architecture;
Participate in customer workshops and present proposed solutions;
Develop scalable, production-ready data integration and processing solutions;
Convert large volumes of structured and unstructured customer data;
Design, implement, and deploy high-performance, custom applications at scale on Hadoop;
Define and develop network infrastructure solutions that enable partners and clients to scale NoSQL and relational database architectures for growing demand and traffic;
Define common business and development processes, platform and tools usage for data acquisition, storage, transformation, and analysis;
Develop roadmaps and implementation strategy around data science initiatives including recommendation engines, predictive modeling, and machine learning;
Review and audit existing solutions, designs, and system architecture;
Profile and troubleshoot existing solutions;
Create technical documentation.
Requirements:
Experience with major big data technologies and frameworks, including but not limited to Hadoop, Apache Spark, Spark Streaming, Spark SQL, DataFrames, and Kafka, along with experience in Hive, Pig, Sqoop, HDFS, and NoSQL databases (Cassandra, MongoDB, or HBase);
Strong programming skills in Scala, Java, and Python;
Experience in client-driven large-scale implementation projects;
Data Science and Analytics experience is a plus (Machine Learning, Recommendation Engines, Search Personalization);
Experience with major data integration technologies and frameworks, including but not limited to Informatica Integration Hub;
Experience building data lakes and processing data streams;
Strong experience in application design, development, and maintenance;
Solid knowledge of design patterns and refactoring concepts;
Practical expertise in performance tuning, optimization, and bottleneck analysis;