Amazon EMR Jobs in Delhi, NCR and Gurgaon

Apply to 11+ Amazon EMR Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Amazon EMR Job opportunities across top companies like Google, Amazon & Adobe.

codersbrain
Posted by Tanuj Uppal
Delhi
4 - 8 yrs
₹2L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
  • Mandatory: hands-on experience in Python and PySpark.
  • Build PySpark applications using Spark DataFrames in Python, using Jupyter Notebook and PyCharm (IDE) (see the sketch after this list).
  • Experience optimizing Spark jobs that process huge volumes of data.
  • Hands-on experience with version control tools such as Git.
  • Experience with Amazon analytics services such as Amazon EMR and AWS Lambda.
  • Experience with Amazon compute services such as AWS Lambda and Amazon EC2, storage services such as S3, and other services such as SNS.
  • Experience/knowledge of Bash/shell scripting is a plus.
  • Experience working with fixed-width, delimited, and multi-record file formats.
  • Hands-on experience with tools like Jenkins to build, test, and deploy applications.
  • Awareness of DevOps concepts and the ability to work in an automated release pipeline environment.
  • Excellent debugging skills.
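
For context, here is a minimal sketch of the kind of PySpark DataFrame application this role describes: reading raw data from S3, aggregating it, and writing curated Parquet back. The bucket, paths, and column names below are hypothetical.

```python
# Minimal PySpark sketch; bucket, paths, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Read raw CSV data from S3 (the Amazon EMR + S3 pattern named above).
orders = spark.read.csv("s3://example-bucket/raw/orders/",
                        header=True, inferSchema=True)

# Typical DataFrame work: filter, aggregate, rename.
daily = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

# Write the result back to S3 as Parquet, partitioned by date.
daily.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_orders/")

spark.stop()
```

On EMR, a script like this would typically be submitted as a cluster step via spark-submit.
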
Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.


Role & Responsibilities:

Your role is focused on the design, development, and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch and real-time modes (see the streaming sketch after this list)

• Build functionality for data analytics, search and aggregation
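
As an illustration of the real-time ingestion called out above, here is a minimal Spark Structured Streaming sketch that consumes from Kafka. The broker address and topic name are hypothetical, and the job needs the spark-sql-kafka connector package on its classpath.

```python
# Illustrative real-time ingestion: Spark Structured Streaming reading Kafka.
# Requires the spark-sql-kafka connector package; broker/topic are made up.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Subscribe to a Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; cast key and payload to strings for downstream parsing.
parsed = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

# Console sink for demonstration; a real pipeline would write to HDFS/S3/etc.
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```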

Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 5+ years of IT experience, with 3+ years in data-related technologies.

2. Minimum 2.5 years of experience in Big Data technologies and working exposure to the related data services of at least one cloud platform (AWS / Azure / GCP).

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery.

6. Well-versed working knowledge of data-platform-related services on at least one cloud platform, IAM, and data security.


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation.

3. Knowledge of distributed messaging frameworks such as ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, and code quality.

6. Cloud data specialty and other related Big Data technology certifications.


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Consulting & implementation services in the area of Oil & Gas, Mining and Manufacturing Industry
Agency job via Jobdost by Sathish Kumar
Ahmedabad, Hyderabad, Pune, Delhi
5 - 7 yrs
₹18L - ₹25L / yr
AWS Lambda
AWS Simple Notification Service (SNS)
AWS Simple Queuing Service (SQS)
Python
PySpark
+9 more
1. Data Engineer

Required skill set: AWS Glue, AWS Lambda, AWS SNS/SQS, AWS Athena, Spark, Snowflake, Python

Mandatory Requirements

  • Experience in AWS Glue
  • Experience in Apache Parquet
  • Proficiency with AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages: Python and PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS
  • Ingest data from different sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies (see the ingestion sketch after this list)
  • Process and transform data using technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to make sure the right data enters the platform, and verify the results of calculations
  • Develop infrastructure to collect, transform, combine, and publish/distribute customer data
  • Define process improvement opportunities to optimize data collection, insights, and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
  • Identify and interpret trends and patterns in complex data sets
  • Construct a framework that uses data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
  • Be a key participant in regular Scrum ceremonies with the agile teams
  • Be proficient at developing queries, writing reports, and presenting findings
  • Mentor junior members and bring best industry practices
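
To make the ingestion responsibility concrete, here is a small PySpark sketch of the flat-file-to-data-lake step (AWS Glue jobs are commonly written as PySpark scripts along these lines; the bucket, delimiter, and key column are hypothetical):

```python
# Sketch of file-based ingestion into an S3 data lake; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-customers").getOrCreate()

# Ingest a pipe-delimited flat file landed in S3.
raw = (
    spark.read.option("header", True)
    .option("delimiter", "|")
    .csv("s3://example-lake/landing/customers/")
)

# Basic quality gate: reject rows missing the primary key.
clean = raw.filter(raw["customer_id"].isNotNull())

# Persist as Parquet, the columnar format named in the requirements; from
# here the data could be loaded into Snowflake via an external stage.
clean.write.mode("append").parquet("s3://example-lake/curated/customers/")
```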

QUALIFICATIONS

  • 5-7+ years of experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science, or a related discipline
  • Advanced knowledge of one of the languages Java, Scala, Python, or C#
  • Production experience with HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, and Snowflake
  • Proficiency with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies, and techniques

  

Familiarity and experience in the following is a plus:  

  • AWS certification
  • Spark Streaming 
  • Kafka Streaming / Kafka Connect 
  • ELK Stack 
  • Cassandra / MongoDB 
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Orboai
Posted by Neha T
Mumbai, Delhi
4 - 7 yrs
₹8L - ₹22L / yr
OpenCV
Image Processing
Image segmentation
Deep Learning
Python
+7 more

Who Are We

At its core, Orbo is a research-oriented company with expertise in computer vision and artificial intelligence, offering a comprehensive stack of AI-based visual enhancement products. This way, companies can find a product suited to their needs, where deep-learning-powered technology automatically improves their imagery.

ORBO's solutions are helping digital transformation in BFSI and beauty and personal care, and e-commerce image retouching, among other industries, in multiple ways.
WHY US

  • Join a top AI company
  • Grow with your best companions
  • Continuous pursuit of excellence, equality, respect
  • Competitive compensation and benefits

You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.

To learn more about how we work, please check out https://www.orbo.ai/.

Description:

We are looking for a computer vision engineer to lead our team in developing a factory-floor analytics SaaS product. This is a fast-paced role, and the person will get an opportunity to develop an industrial-grade solution from concept to deployment.

Responsibilities:

  • Research and develop computer vision solutions for industries (BFSI, beauty and personal care, e-commerce, defence, etc.)
  • Lead a team of ML engineers in developing an industrial AI product from scratch
  • Set up an end-to-end deep learning pipeline for data ingestion, preparation, model training, validation, and deployment
  • Tune models to achieve high accuracy and minimum latency
  • Deploy the developed computer vision models on edge devices after optimizing them to meet customer requirements (see the export sketch after this list)
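
By way of illustration, one common optimize-for-edge step is exporting a trained PyTorch model to ONNX so that runtimes such as ONNX Runtime, TensorRT, or OpenVINO can consume it. This is a hedged sketch: the model choice and input shape are placeholders, not a specific Orbo model.

```python
# Export a (stand-in) PyTorch model to ONNX for edge runtimes; the model
# choice and input size here are placeholders.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None)  # stand-in for a trained net
model.eval()

# ONNX export traces the model with a dummy input of the expected shape.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},  # allow variable batch at inference
)
```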

 

 

Requirements:

  • Bachelor's degree
  • Understanding of the depth and breadth of computer vision and deep learning algorithms
  • 4+ years of industrial experience in computer vision and/or deep learning
  • Experience taking an AI product from scratch to commercial deployment
  • Experience with image enhancement, object detection, image segmentation, and image classification algorithms
  • Experience in deployment with OpenVINO, ONNX Runtime, and TensorRT
  • Experience deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
  • Experience with machine/deep learning frameworks such as TensorFlow and PyTorch
  • Proficient understanding of code versioning tools such as Git

Our perfect candidate is someone who:

  • is proactive and an independent problem solver
  • is a constant learner. We are a fast-growing start-up. We want you to grow with us!
  • is a team player and a good communicator

 

What We Offer:

  • You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
  • You will be in charge of what you build and be an integral part of the product development process
  • Technical and financial growth!
Leading pharmacy provider
Agency job via Econolytics by Jyotsna Econolytics
Noida, NCR (Delhi | Gurgaon | Noida)
4 - 10 yrs
₹18L - ₹24L / yr
Data Science
Python
R Programming
Algorithms
Predictive modelling
Job Description:

  • Help build a Data Science team that will be engaged in researching, designing, implementing, and deploying full-stack, scalable data analytics and machine learning solutions to address various business issues.
  • Model complex algorithms, discover insights, and identify business opportunities through the use of algorithmic, statistical, visualization, and mining techniques.
  • Translate business requirements into quick prototypes and enable the development of big data capabilities that drive business outcomes.
  • Be responsible for data governance and for defining data collection and collation guidelines.
  • Advise, guide, and train junior data engineers in their jobs.

Must Have:

  • 4+ years of experience in a leadership role as a Data Scientist
  • Preferably from the retail, manufacturing, or healthcare industry (not mandatory)
  • Willing to start from scratch and build up a team of Data Scientists
  • Open to taking up challenges with end-to-end ownership
  • Confident, with excellent communication skills, and a good decision maker
Infogain
Agency job via Technogen India Pvt Ltd by RAHUL BATTA
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
Data engineering
skill iconPython
SQL
Spark
PySpark
+10 more
1. Sr. Data Engineer:

Core skills: Data Engineering, Big Data, PySpark, Spark SQL, and Python

Candidates with a prior Palantir Foundry or clinical trial data model background are preferred.

Major accountabilities:

  • Responsible for data engineering: Foundry data pipeline creation, Foundry analysis and reporting, Slate application development, reusable code development and management, and integrating internal or external systems with Foundry for high-quality data ingestion.
  • Good understanding of the Foundry platform landscape and its capabilities.
  • Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
  • Define company data assets (data models) and the PySpark / Spark SQL jobs that populate them (see the sketch after this list).
  • Design data integrations and a data quality framework.
  • Design and implement integrations with internal and external systems and the F1 AWS platform, using Foundry Data Connector or Magritte agents.
  • Collaborate with data scientists, data analysts, and technology teams to document and leverage their understanding of the Foundry integration with different data sources; actively participate in agile work practices.
  • Coordinate with quality engineers to ensure that all quality controls, naming conventions, and best practices are followed.
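
Outside Foundry's proprietary transform APIs, a "job to populate a data model" usually reduces to Spark SQL along these lines. All paths, table names, and column names below are invented for illustration:

```python
# Generic Spark SQL sketch of populating a conformed data model table;
# all paths and column names are invented for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("populate-subject-visits").getOrCreate()

spark.read.parquet("/data/raw/subjects/").createOrReplaceTempView("subjects")
spark.read.parquet("/data/raw/visits/").createOrReplaceTempView("visits")

# Join the raw entities into the model table with plain Spark SQL.
model = spark.sql("""
    SELECT s.subject_id, s.site_id, v.visit_date, v.visit_type
    FROM subjects s
    JOIN visits v ON v.subject_id = s.subject_id
""")

model.write.mode("overwrite").parquet("/data/model/subject_visits/")
```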

Desired Candidate Profile:

  • Strong data engineering background
  • Experience with clinical data models is preferred
  • Experience in:
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, with 2+ years of experience on the Palantir Foundry platform and 4+ years of experience on Big Data platforms
  • 5+ years of Python and PySpark development experience
  • Strong troubleshooting and problem-solving skills
  • B.Tech or a master's degree in computer science or a related technical field
  • Experience designing, building, and maintaining big data pipeline systems
  • Hands-on experience with the Palantir Foundry platform and Foundry custom app development
  • Able to design and implement data integration between Palantir Foundry and external apps based on the Foundry data connector framework
  • Hands-on in programming languages, primarily Python, R, Java, and Unix shell scripts
  • Hands-on experience with the AWS / Azure cloud platforms and stacks
  • Strong in API-based architectures and concepts; able to do quick PoCs using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users
  • Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision

SmartJoules
Posted by Saksham Dutta
Remote, NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹8L - ₹12L / yr
Machine Learning (ML)
Python
Big Data
Apache Spark
Deep Learning

Responsibilities:

  • Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when the model is deployed in the real world.
  • Verify data quality, and/or ensure it via data cleaning.
  • Adapt and work fast to produce output that upgrades stakeholders' decision-making using ML.
  • Design and develop machine learning systems and schemes.
  • Perform statistical analysis and fine-tune models using test results.
  • Train and retrain ML systems and models as and when necessary.
  • Deploy ML models in production and manage the cost of the cloud infrastructure.
  • Develop machine learning apps according to client and data scientist requirements.
  • Analyze the problem-solving capabilities and use cases of ML algorithms and rank them by how successfully they meet the objective.


Technical Knowledge:

  • Has worked on real-time problems, solved them using ML and deep learning models deployed in real time, and has some impressive projects to showcase.
  • Proficiency in Python and experience working with the Jupyter framework, Google Colab, and cloud-hosted notebooks such as AWS SageMaker, Databricks, etc.
  • Proficiency with libraries such as scikit-learn, TensorFlow, OpenCV, PySpark, pandas, NumPy, and related libraries.
  • Expert at visualising and manipulating complex datasets.
  • Proficiency with visualisation libraries such as seaborn, plotly, matplotlib, etc.
  • Proficiency in the linear algebra, statistics, and probability required for machine learning.
  • Proficiency in ML algorithms, for example gradient boosting, stacked machine learning, classification algorithms, and deep learning algorithms; needs experience hyperparameter-tuning various models and comparing the results of algorithm performance (see the tuning sketch after this list).
  • Big Data technologies such as the Hadoop stack and Spark.
  • Basic use of clouds (VMs, e.g. EC2).
  • Brownie points for Kubernetes and task queues.
  • Strong written and verbal communication.
  • Experience working in an Agile environment.
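
A small sketch of the gradient-boosting and hyperparameter-tuning experience described above, using scikit-learn on a synthetic dataset; all grid values are illustrative rather than recommended settings:

```python
# Hyperparameter tuning a gradient-boosting classifier; synthetic data,
# and the grid values are illustrative rather than recommended settings.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Grid-search a few hyperparameters with cross-validation.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid={
        "n_estimators": [100, 300],
        "learning_rate": [0.05, 0.1],
        "max_depth": [2, 3],
    },
    cv=3,
    scoring="accuracy",
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```
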
Sagacito
Posted by Neha Verma
NCR (Delhi | Gurgaon | Noida)
3 - 9 yrs
₹6L - ₹18L / yr
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Data Science
Location: Gurgaon

Role:

  • The person will be part of the data science team, working closely with the business analysts and the technology team to deliver the data science portion of the project and product.
  • The data science contribution to a project can range between 30% and 80%.
  • Day-to-day activities will include data exploration to solve a specific problem, researching methods to be applied as solutions, setting up the ML process to be applied in the context of a specific engagement/requirement, contributing to building a DS platform, coding the solution, interacting with clients on explanations, integrating the DS solution with the technology solution, and data cleaning and structuring.
  • The nature of the work will depend on the stage of a specific engagement, available engagements, and individual skill.

At least 2-6 years of experience in:

  • Machine learning (including deep learning methods): algorithm design, analysis, development, and performance improvement
    • Strong understanding of statistical and predictive modeling concepts, machine-learning approaches, clustering, classification, regression techniques, and recommendation (collaborative filtering) algorithms
    • Time series analysis
    • Optimization techniques and work experience with solvers for MILP and global optimization (see the toy solver sketch after this section)
  • Data science
    • Good experience in exploratory data analysis and feature design and development
    • Experience applying and evaluating ML algorithms in practical predictive modeling scenarios across various verticals, including (but not limited to) FMCG, media, e-commerce, and hospitality
  • Proficiency in programming in Python (must have) and PySpark (good to have); parallel ML algorithm design, development, and usage for maximal performance on multi-core, distributed, and/or GPU architectures
  • Must be able to write production-ready code with reusable components and integration into the data science platform
  • Strong inclination to write structured code as per prevailing coding standards and best practices
  • Ability to design a data science architecture for repeatability of solutions
  • Preparedness to manage the whole cycle, from data preparation to algorithm design to client presentation, at an individual level
  • Comfort working on AWS, including managing data science AWS servers
  • Team player with good communication and interpersonal skills
  • Good experience in Natural Language Processing and its applications (good to have)
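
Since the role names MILP solver experience, here is a toy sketch using the open-source PuLP library and its bundled CBC solver. The ad-slot allocation framing and all coefficients are invented purely for illustration:

```python
# Toy MILP: allocate a fixed inventory of ad slots between two clients to
# maximize revenue. The scenario and coefficients are invented.
import pulp

prob = pulp.LpProblem("ad_slot_allocation", pulp.LpMaximize)

# Integer decision variables: slots sold to each client.
x = pulp.LpVariable("slots_client_a", lowBound=0, cat="Integer")
y = pulp.LpVariable("slots_client_b", lowBound=0, cat="Integer")

prob += 30 * x + 45 * y   # objective: total revenue
prob += x + y <= 100      # total inventory constraint
prob += y <= 40           # client B contractual cap

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(x), pulp.value(y))
```
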
SpotDraft
Posted by Madhav Bhagat
Noida, NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹3L - ₹24L / yr
Python
TensorFlow
Caffe
We are building the AI core for a legal workflow solution. You will be expected to build and train models to extract relevant information from contracts and other legal documents (see the toy model sketch below).

Required skills/experience:

  • Python
  • Basics of deep learning
  • Experience with one ML framework (such as TensorFlow, Keras, or Caffe)

Preferred skills/experience:

  • Exposure to ML concepts such as LSTMs, RNNs, and ConvNets
  • Experience with NLP and the Stanford POS tagger
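
As a toy illustration of the model family hinted at (an LSTM over tokenized contract text), here is a hedged Keras sketch. The vocabulary size, sequence length, label count, and the random stand-in data are all placeholders, and real preprocessing of legal documents is out of scope here:

```python
# Toy LSTM classifier over token sequences (e.g., tagging clause types);
# sizes and the random training data are placeholders.
import numpy as np
import tensorflow as tf

VOCAB, SEQ_LEN, NUM_CLASSES = 20000, 200, 5  # hypothetical sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),      # token embeddings
    tf.keras.layers.LSTM(64),                   # sequence encoder
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # clause-type head
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Random stand-in data just to show the call shape.
X = np.random.randint(0, VOCAB, size=(256, SEQ_LEN))
y = np.random.randint(0, NUM_CLASSES, size=(256,))
model.fit(X, y, epochs=1, batch_size=32)
```
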
Paysense Services India Pvt Ltd
Posted by Pragya Singh
NCR (Delhi | Gurgaon | Noida), Mumbai
2 - 7 yrs
₹10L - ₹30L / yr
Python
Data Analytics
Hadoop
Data Warehouse (DWH)
Machine Learning (ML)
About the job:

  • You will work with data scientists to architect, code, and deploy ML models.
  • You will solve problems of storing and analyzing large-scale data in milliseconds, and architect and develop data processing and warehouse systems.
  • You will code, drink, breathe, and live Python, sklearn, and pandas. Experience with these is good to have but not a necessity, as long as you're super comfortable in a language of your choice.
  • You will develop tools and products that provide analysts ready access to the data.

About you:

  • Strong CS fundamentals
  • Strong experience working with production environments
  • You write code that is clean, readable, and tested
  • Instead of doing it a second time, you automate it
  • You have worked with some of the commonly used databases and computing frameworks (PostgreSQL, S3, Hadoop, Hive, Presto, Spark, etc.)
  • It will be great if you have a Kaggle or GitHub profile to share
  • You are an expert in one or more programming languages (Python preferred); experience with Python-based application development and data science libraries is also good to have
  • Ideally, you have 2+ years of experience in tech and/or data
  • Degree in CS/Maths from a Tier-1 institute
YCH Logistics
Posted by Sanatan Upmanyu
NCR (Delhi | Gurgaon | Noida)
0 - 5 yrs
₹2L - ₹5L / yr
Python
Deep Learning
MySQL
Job Description: Data Science Analyst / Data Science Senior Analyst

KSTYCH is seeking a Data Science Analyst to join our Data Science team. Individuals in this role are expected to be comfortable working as both a software engineer and a quantitative researcher, and should have a significant theoretical foundation in mathematical statistics. The ideal candidate will have a keen interest in the study of the pharma sector, network biology, text mining, and machine learning, and a passion for identifying and answering questions that help us build the best consulting resource and continuous support for other teams.

Responsibilities:

  • Work closely with product, scientific, medical, business development, and commercial teams to identify and answer important healthcare/pharma/biology questions.
  • Answer questions by using appropriate statistical techniques and tools on available data.
  • Communicate findings to project managers and team managers.
  • Drive the collection of new data and the refinement of existing data sources.
  • Analyze and interpret the results of experiments.
  • Develop best practices for instrumentation and experimentation, and communicate them to other teams.

Requirements:

  • B.Tech, M.Tech, M.S., or Ph.D. in a relevant technical field, or 1+ years of experience in a relevant role
  • Extensive experience solving analytical problems using quantitative approaches
  • Comfort manipulating and analyzing complex, high-volume, high-dimensionality data from varying sources
  • A strong passion for empirical research and for answering hard questions with data
  • A flexible analytic approach that allows for results at varying levels of precision
  • Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner
  • Fluency with at least one scripting language, such as Python or PHP
  • Familiarity with relational databases and SQL
  • Experience working with large data sets; experience with distributed computing tools (KNIME, Map/Reduce, Hadoop, Hive, etc.) is a plus