USA Jobs, Careers and Recruitments

Contact Information

Company: ACRO STAFF
Contact Name: DAVE
Website: http://www.acrostaff.com

Job Details: Hadoop Engineer
Date: Jun 11, 2019

Job Reference: 4686
Job Category: IT Jobs
Company Type: Recruiter
Employment type: Contract
Degree: Associate
Experience: 3 years
Location: Pleasanton, Nebraska, 68866
Job Skills: Hadoop Engineer

Job Description

Technical Knowledge and Skills:
Consultant resources shall possess most of the following technical knowledge and experience:
• Provide technical leadership, develop vision, gather requirements, and translate client requirements into technical architecture.
• Strong hands-on experience in building, deploying, and productionizing ML models using software such as Spark MLlib, TensorFlow, PyTorch, and scikit-learn is mandatory (see the model-training sketch after this list).
• Ability to evaluate and choose the best-suited ML algorithms, perform feature engineering, and optimize machine learning models is mandatory.
• Strong fundamentals in algorithms, data structures, statistics, predictive modeling, and distributed systems are a must.
• Design and implement an integrated Big Data platform and analytics solution
• Design and implement data collectors to collect and transport data to the Big Data Platform.
• 4+ years of hands-on development, deployment, and production support experience in a Hadoop environment.
• 4-5 years of programming experience in Java, Scala, and Python.
• Proficient in SQL and relational database design and methods for data retrieval.
• Knowledge of NoSQL systems like HBase or Cassandra
• Hands-on experience in Cloudera Distribution 5.x
• Hands-on experience creating and indexing Solr collections in a SolrCloud environment.
• Hands-on experience building data pipelines using Hadoop ecosystem components such as Sqoop, Hive, Pig, Solr, MapReduce, Spark, and Spark SQL (see the pipeline sketch after this list).
• Must have experience developing HiveQL and UDFs for analyzing semi-structured/structured datasets.
• Must have experience with the Spring framework.
• Hands-on experience ingesting and processing various file formats such as Avro, Parquet, SequenceFile, and text files.
• Hands-on experience with real-time analytics using Spark, Kafka, or Storm.
• Experience with graph databases such as Neo4j, TigerGraph, and OrientDB.
• Must have working experience with data warehousing and business intelligence systems.
• Expertise in Unix/Linux environments, writing scripts and scheduling/executing jobs.
• Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience with the production support issue resolution process.
• Experience with R and Jupyter/Zeppelin.
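
For illustration only, here is a minimal model-training sketch in the spirit of the ML requirements above, assuming scikit-learn and joblib are installed; the synthetic dataset, feature count, and hyperparameters are placeholders, not anything specified in this posting.

```python
# Minimal sketch: train, evaluate, and persist a classifier with scikit-learn.
# The synthetic dataset and hyperparameters are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import joblib

# Generate a small synthetic dataset standing in for real feature-engineered data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a simple model; in practice, algorithm choice and tuning follow evaluation.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split and persist the fitted model for deployment.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
joblib.dump(model, "model.joblib")
```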
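
Similarly, a minimal pipeline sketch of the Spark / Spark SQL work referenced above, assuming PySpark is available; the input path, column names, and output location are hypothetical.

```python
# Minimal sketch: read Parquet into Spark, expose it to Spark SQL, and aggregate.
# The paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# On a Cloudera cluster this would typically be an HDFS or Hive-managed location.
events = spark.read.parquet("/data/events")

# Register the DataFrame so it can be queried with HiveQL-style Spark SQL.
events.createOrReplaceTempView("events")

daily_counts = spark.sql("""
    SELECT event_date, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date
    ORDER BY event_date
""")

# Write the aggregated result back out for downstream consumers.
daily_counts.write.mode("overwrite").parquet("/data/daily_counts")
spark.stop()
```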
