Ranked as #12 on Forbes’ List of 25 Fastest Growing Public Tech Companies for 2017, EPAM is committed to providing our global team of over 24,000 people with inspiring careers from day one. EPAMers lead with passion and honesty, and think creatively. Our people are the source of our success and we value collaboration, try to always understand our customers’ business, and strive for the highest standards of excellence. No matter where you are located, you’ll join a dedicated, diverse community that will help you discover your fullest potential.
You are curious, persistent, logical and clever – a true techie at heart. You enjoy living by the code of your craft and developing elegant solutions for complex problems. If this sounds like you, this could be the perfect opportunity to join EPAM as a Big Data Solution Architect. Scroll down to learn more about the position’s responsibilities and requirements.
EPAM is looking to build and strengthen our Big Data Solution Architect teams across the country. Our ideal candidate is a subject-matter expert in a range of Big Data technologies and data science. The role entails architecture analysis, solution design, client management, and more.

Responsibilities:
Develop proposals for the design and implementation of scalable Big Data architectures;
Participate in customer workshops and present proposed solutions;
Participate in the design, development, and maintenance of distributed applications;
Design, implement, and deploy high-performance, custom applications at scale on Hadoop;
Define and develop network infrastructure solutions that enable partners and clients to scale NoSQL and relational database architectures to meet growing demands and traffic;
Define common business and development processes, platform and tools usage for data acquisition, storage, transformation, and analysis;
Develop roadmaps and implementation strategy around data science initiatives including recommendation engines, predictive modeling, and machine learning;
Review and audit existing solutions, designs, and system architectures;
Profile and troubleshoot existing solutions;
Create technical documentation.

Requirements:
Strong knowledge of programming and scripting languages such as Java, Python, or Scala;
Experience with major Big Data technologies and frameworks including but not limited to Hadoop, MapReduce, Apache Spark, Hive, Kafka, Apache Flink, Flume, ZooKeeper, HBase, MongoDB, and Cassandra;
Experience with Big Data solutions developed on large cloud computing infrastructures such as Amazon Web Services, Microsoft Azure, or Google Cloud;
Experience in client-driven large-scale implementation projects;
Data Science and Analytics experience is a plus (Machine Learning, Recommendation Engines, Search Personalization, Deep Learning);
Technical team leadership and team management experience; deep understanding of Agile (Scrum) and RUP development processes;
Strong experience in applications design, development, and maintenance;
Solid knowledge of design patterns, refactoring concepts, unit tests, and CI/CD;
Practical expertise in performance tuning and optimization, including bottleneck analysis;
Solid technical expertise and troubleshooting skills.