
10+ Amazon EMR Jobs in India

Apply to 10+ Amazon EMR Jobs on CutShort.io. Find your next job, effortlessly. Browse Amazon EMR Jobs and apply today!

Gipfel & Schnell Consultings Pvt Ltd
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

Qualifications & Experience:


▪ 2-4 years of overall experience in ETL, data pipelines, data warehouse development, and database design

▪ Software solution development using Hadoop technologies such as MapReduce, Hive, Spark, Kafka, and YARN/Mesos

▪ Expert in SQL, with at least 2 years of advanced SQL work (see the sketch after this list)

▪ Good development skills in Java, Python, or other languages

▪ Experience with Amazon EMR and S3

▪ Knowledge of and exposure to BI applications, e.g. Tableau, QlikView

▪ Comfortable working in an agile environment
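
As a rough illustration of the kind of advanced SQL this role calls for, here is a minimal sketch of a window-function query run through Spark SQL on a Hive-backed table; the table and column names are hypothetical.

```python
# Hypothetical sketch: rank each customer's orders by amount with a
# window function and keep the top three, via Spark SQL.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("advanced-sql-sketch")
         .enableHiveSupport()
         .getOrCreate())

top_orders = spark.sql("""
    SELECT customer_id, order_id, amount
    FROM (
        SELECT customer_id, order_id, amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY amount DESC) AS rn
        FROM sales.orders
    ) ranked
    WHERE rn <= 3
""")
top_orders.show()
```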

Kloud9 Technologies
Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹20L / yr
Amazon Web Services (AWS)
Amazon EMR
EMR
Spark
PySpark

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers, and developers helps retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce cloud adoption time and effort, so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by the substantial finances spent on physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers, and developers has been designing, building, and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform- and technology-independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


What we are looking for:

● 3+ years' experience developing data and analytics solutions

● Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive, and Spark (a sketch follows this list)

● Experience with relational SQL

● Experience with scripting languages such as Shell and Python

● Experience with source control tools such as GitHub and related development processes

● Experience with workflow scheduling tools such as Airflow

● In-depth knowledge of scalable cloud architectures

● A passion for data solutions

● Strong understanding of data structures and algorithms

● Strong understanding of solution and technical design

● A strong problem-solving and analytical mindset

● Experience working with agile teams

● Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders

● Able to quickly pick up new programming languages, technologies, and frameworks

● Bachelor's degree in Computer Science
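
As a hedged sketch of the data lake pattern mentioned above (EMR, S3, Hive, and Spark), the snippet below registers Parquet files in S3 as a Hive external table and queries it through Spark. The bucket, database, table, and column names are all placeholders.

```python
# Hypothetical data lake sketch: expose raw Parquet files in S3 as a
# Hive external table, then query it through Spark.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("datalake-sketch")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("CREATE DATABASE IF NOT EXISTS lake")
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS lake.events (
        user_id STRING,
        event_type STRING,
        ts TIMESTAMP
    )
    STORED AS PARQUET
    LOCATION 's3://example-datalake-bucket/events/'
""")

spark.sql("""
    SELECT event_type, COUNT(*) AS n
    FROM lake.events
    GROUP BY event_type
""").show()
```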


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations across the US, London, Poland, and Bengaluru, we help you build a career path in cutting-edge technologies like AI, machine learning, and data science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Mactores Cognition Private Limited
Remote only
5 - 15 yrs
₹5L - ₹21L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
Amazon S3

Mactores is a trusted leader in providing modern data platform solutions to businesses. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with digital transformation via assessments, migration, or modernization.


We are looking for a DataOps Engineer with expertise in operating a data lake. The data lake is built with Amazon S3, Amazon EMR, and Apache Airflow for workflow management.


You have experience building and running data lake platforms on AWS, exposure to operating PySpark-based ETL jobs in Apache Airflow and Amazon EMR, and expertise in monitoring services like Amazon CloudWatch.


If you love solving problems, you will enjoy our casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.


What will you do?


  • Operate the current data lake deployed on AWS with Amazon S3, Amazon EMR, and Apache Airflow (see the sketch after this list).
  • Debug and fix production issues in PySpark.
  • Perform root cause analysis (RCA) for production issues.
  • Collaborate with product teams on L3/L4 production issues in PySpark.
  • Contribute to enhancing ETL efficiency.
  • Build CloudWatch dashboards to optimize operational efficiency.
  • Handle escalation tickets from L1 monitoring engineers.
  • Assign tickets to L1 engineers based on their expertise.
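
To make the stack concrete, here is a minimal, hypothetical Airflow DAG that submits a PySpark ETL step to an existing EMR cluster, roughly the kind of workflow this role would operate. The cluster ID, S3 script path, and schedule are placeholders, and it assumes the Airflow Amazon provider package is installed.

```python
# Hypothetical sketch of an Airflow DAG submitting a PySpark step to EMR.
# Assumes apache-airflow-providers-amazon is installed; the cluster id
# and S3 script path below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

SPARK_STEP = [{
    "Name": "run-pyspark-etl",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": ["spark-submit", "--deploy-mode", "cluster",
                 "s3://example-bucket/jobs/etl_job.py"],
    },
}]

with DAG(
    dag_id="emr_pyspark_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the Spark step; the operator pushes the new step ids to XCom.
    add_step = EmrAddStepsOperator(
        task_id="submit_etl_step",
        job_flow_id="j-XXXXXXXXXXXXX",  # placeholder EMR cluster id
        steps=SPARK_STEP,
    )
    # Block until the submitted step completes (or fails).
    wait_for_step = EmrStepSensor(
        task_id="wait_for_etl_step",
        job_flow_id="j-XXXXXXXXXXXXX",
        step_id="{{ task_instance.xcom_pull(task_ids='submit_etl_step')[0] }}",
    )
    add_step >> wait_for_step
```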


What are we looking for?


  • AWS DataOps engineer.
  • 5+ years of overall experience in the software industry, including experience architecting and developing data applications using Python or Scala, Airflow, and Kafka on an AWS data platform.
  • Must have set up, or led the project to enable, DataOps on AWS or another cloud data platform.
  • Strong data engineering experience on a cloud platform, preferably AWS.
  • Experience designing data pipelines for reuse, using parameterization.
  • Experience with pipelines designed to solve common ETL problems.
  • Understanding of, or experience with, codifying AWS services such as Amazon EMR and Apache Airflow to enable DataOps.
  • Experience building data pipelines using CI/CD infrastructure.
  • Understanding of infrastructure as code for DataOps enablement.
  • Ability to work with ambiguity and create quick PoCs.


You will be preferred if


  • Expertise in Amazon EMR, Apache Airflow, Terraform, and CloudWatch.
  • Exposure to MLOps using Amazon SageMaker is a plus.
  • AWS Solutions Architect Professional or Associate level certificate.
  • AWS DevOps Professional certificate.


Life at Mactores


We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles, which honor decision-making, leadership, collaboration, and curiosity, drive how we work.


1. Be one step ahead

2. Deliver the best

3. Be bold

4. Pay attention to detail

5. Enjoy the challenge

6. Be curious and take action

7. Take leadership

8. Own it

9. Deliver value

10. Be collaborative


You can read more about our work culture at https://mactores.com/careers


The Path to Joining the Mactores Team

At Mactores, our recruitment process is structured around three distinct stages:


Pre-Employment Assessment: 

You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.


Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.


HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.


At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.


Multinational company providing energy & automation digital solutions

Agency job
via Jobdost by Sathish Kumar
Hyderabad
7 - 12 yrs
₹12L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

Skills

  • Minimum 7 years of proficient, hands-on experience with Hadoop.
  • Minimum 2 years of hands-on experience with AWS EMR/S3 and other AWS services and dashboards.
  • Minimum 2 years of good experience with the Spark framework.
  • Good understanding of the Hadoop ecosystem, including Hive, MR, Spark, and Zeppelin.
  • Responsible for troubleshooting and recommendations for Spark and MR jobs; should be able to use existing logs to debug issues.
  • Responsible for implementation and ongoing administration of Hadoop infrastructure, including monitoring, tuning, and troubleshooting; triage production issues when they occur with other operational teams.
  • Hands-on experience troubleshooting incidents: formulating theories, testing hypotheses, and narrowing down possibilities to find the root cause.
codersbrain

Posted by Shreya Dubey
Mumbai, Kolkata
5 - 9 yrs
₹2L - ₹12L / yr
Ruby
Ruby on Rails (ROR)
MySQL
Docker
Terraform

Hello,

Greetings from CodersBrain!

 

CodersBrain is a global leader in IT services, digital, and business solutions that partners with its clients to simplify, strengthen, and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise, and a global network of innovation and delivery centers.

 

This is regarding an urgent opening for the ROR Developer role. We found your profile in the Cutshort database, and it seems like a good fit for the organization. If you are interested, please revert with your updated CV along with the following details:

Permanent Payroll: CodersBrain

 

Location: Mumbai/Kolkata

Notice Period: Immediate/15 days

Job Description

Scope of work:

  • This role is an exciting opportunity to build highly interactive, customer-facing applications and products.
  • The candidate will help transform vast collections of data into actionable insights with intuitive, easy-to-use interfaces and visualizations.
  • The candidate will leverage the power of JavaScript and Ruby on Rails to build novel features and improvements to our current suite of tools.
  • This team is responsible for customer-facing applications that deliver SEO data insights.
  • Through the applications, customers are able to access insights, workflows, and aggregations of information above and beyond core data offerings.

Responsibility:

  • Build and maintain the core frontend application.
  • Work collaboratively with the engineers on the Frontend team to ensure quality and performance of the systems through code reviews, documentation, analysis, and engineering best practices that ensure high-quality software.
  • Contribute to the org's devops culture by maintaining our systems, including creating documentation, runbooks, monitoring, alerting, and integration tests, etc.
  • Participate in architecture design and development for new features and capabilities, and for migration of legacy systems, to meet business and customer needs.
  • Take turns in the on-call rotation, handling systems and operations issues as they arise, including responding to off-hours alerts.
  • Collaborate with other teams on dependent work and integrations, and be vigilant for activities happening outside the team that would have an impact on your team's work.
  • Work with Product Managers and UX Designers to deliver new features and capabilities.
  • Use good security practices to protect code and systems.
  • Pitch in where needed during major efforts or when critical issues arise.
  • Give constructive, critical feedback to other team members through pull requests, design reviews, and other methods.
  • Seek out opportunities and work to grow skills and expertise.

C) Skills Required

Essential Skills:
  • JavaScript (ExtJS framework preferred)
  • Ruby
  • Ruby on Rails
  • MySQL
  • Docker

Desired Skills:
  • Terraform
  • Basic Unix/Linux administration
  • Redis
  • Resque
  • TravisCI
  • AWS ECS
  • AWS RDS
  • AWS EMR
  • AWS S3
  • AWS Step Functions
  • AWS Lambda Functions
  • Experience working remotely with a distributed team
  • Great problem-solving skills

D) Other Information

Educational Qualifications: Bachelor's degree/MCA
Experience: 5-8 years

 

If you are interested in this position, please confirm by mail with your updated CV and share the below-mentioned details:

 


Current CTC:
Expected CTC:
Current Company:
Notice Period:

Are you okay with a 1-week notice period? If not, are you comfortable doing freelance work with us until joining:
Current Location:
Preferred Location:
Total-experience:
Relevant experience:
Highest qualification:

DOJ (if offer in hand from another company):

Offer in hand:

Alternate number:

Interview availability:

codersbrain

Posted by Tanuj Uppal
Delhi
4 - 8 yrs
₹2L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
  • Mandatory: hands-on experience in Python and PySpark.

  • Build PySpark applications using Spark DataFrames in Python, using Jupyter Notebook and PyCharm (IDE); a sketch follows this list.

  • Experience optimizing Spark jobs that process huge volumes of data.

  • Hands-on experience with version control tools like Git.

  • Experience with Amazon's analytics services, such as Amazon EMR, Lambda functions, etc.

  • Experience with Amazon's compute services, such as Lambda and EC2; its storage services, such as S3; and a few other services, such as SNS.

  • Experience/knowledge of bash/shell scripting will be a plus.

  • Experience working with fixed-width, delimited, and multi-record file formats.

  • Hands-on experience with tools like Jenkins to build, test, and deploy applications.

  • Awareness of DevOps concepts and ability to work in an automated release pipeline environment.

  • Excellent debugging skills.
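
A minimal sketch of the kind of PySpark DataFrame application described above: reading a delimited file from S3, aggregating it, and writing Parquet back out. The paths, column names, and delimiter are hypothetical.

```python
# Hypothetical PySpark sketch: read a pipe-delimited file from S3,
# aggregate it with DataFrame operations, and write Parquet back out.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-etl-sketch").getOrCreate()

df = (spark.read
      .option("header", "true")
      .option("delimiter", "|")
      .csv("s3://example-bucket/input/transactions/"))

# Cast the amount column and compute one total per transaction date.
daily_totals = (df
                .withColumn("amount", F.col("amount").cast("double"))
                .groupBy("transaction_date")
                .agg(F.sum("amount").alias("total_amount")))

(daily_totals.write
 .mode("overwrite")
 .parquet("s3://example-bucket/output/daily_totals/"))
```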
Genesys

Posted by Manojkumar Ganesh
Chennai, Hyderabad
4 - 10 yrs
₹10L - ₹40L / yr
ETL
Data Warehousing
Business Intelligence (BI)
Big Data
PySpark

Join our team

 

We're looking for an experienced and passionate Data Engineer to join our team. Our vision is to empower Genesys to leverage data to drive better customer and business outcomes. Our batch and streaming solutions turn vast amounts of data into useful insights. If you're interested in working with the latest big data technologies, using industry-leading BI analytics and visualization tools, and bringing the power of data to our customers' fingertips, then this position is for you!

 

Our ideal candidate thrives in a fast-paced environment, enjoys the challenge of highly complex business contexts (that are typically being defined in real time), and, above all, is passionate about data and analytics.

 

 

What you'll get to do

 

  • Work in an agile development environment, constantly shipping and iterating.
  • Develop high quality batch and streaming big data pipelines.
  • Interface with our Data Consumers, gathering requirements, and delivering complete data solutions.
  • Own the design, development, and maintenance of datasets that drive key business decisions.
  • Support, monitor and maintain the data models
  • Adopt and define the standards and best practices in data engineering including data integrity, performance optimization, validation, reliability, and documentation.
  • Keep up-to-date with advances in big data technologies and run pilots to design the data architecture to scale with the increased data volume using cloud services.
  • Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment.

 

Your experience should include

 

  • Bachelor’s degree in CS or related technical field.
  • 5+ years of experience in data modelling, data development, and data warehousing.
  • Experience working with Big Data technologies (Hadoop, Hive, Spark, Kafka, Kinesis).
  • Experience with large scale data processing systems for both batch and streaming technologies (Hadoop, Spark, Kinesis, Flink).
  • Experience in programming using Python, Java or Scala.
  • Experience with data orchestration tools (Airflow, Oozie, Step Functions).
  • Solid understanding of database technologies including NoSQL and SQL.
  • Strong in SQL queries (experience with the Snowflake Cloud Data Warehouse is a plus).
  • Work experience with Talend is a plus.
  • Track record of delivering reliable data pipelines with solid test infrastructure, CICD, data quality checks, monitoring, and alerting.
  • Strong organizational and multitasking skills with ability to balance competing priorities.
  • Excellent communication (verbal and written) and interpersonal skills and an ability to effectively communicate with both business and technical teams.
  • An ability to work in a fast-paced environment where continuous innovation is occurring, and ambiguity is the norm.

 

Good to have

  • Experience with AWS big data technologies - S3, EMR, Kinesis, Redshift, Glue
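
As a hedged illustration of the batch-and-streaming work this listing describes, here is a minimal Spark Structured Streaming sketch that reads events from Kafka and lands them in S3. The broker address, topic, and paths are placeholders, and it assumes the spark-sql-kafka connector is on the classpath.

```python
# Hypothetical streaming sketch: consume a Kafka topic with Spark
# Structured Streaming and continuously write payloads to S3 as Parquet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "events")
          .load())

query = (events
         .selectExpr("CAST(value AS STRING) AS payload")
         .writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/streams/events/")
         .option("checkpointLocation",
                 "s3://example-bucket/checkpoints/events/")
         .start())

query.awaitTermination()
```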
AI-powered cloud-based SaaS solution provider

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹60L / yr
Data engineering
Big Data
Spark
Apache Kafka
Cassandra
Responsibilities

● Contribute to the gathering of functional requirements, developing technical specifications, and test case planning
● Demonstrate technical expertise, solving challenging programming and design problems
● 60% hands-on coding, with architecture ownership of one or more products
● Articulate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units to drive forward results

Requirements

● BS/MS in Computer Science or equivalent work experience
● 8-12 years' experience designing and developing applications in data engineering
● Hands-on experience with big data ecosystems
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, and Zookeeper
● Expertise in any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing and Test-Driven Development (TDD)
● Business acumen: strategic thinking and strategy development
● Experience on cloud or AWS is preferable
● Good understanding of, and ability to develop, software, prototypes, or proofs of concept (POCs) for various data engineering requirements
● Experience with Agile development, Scrum, or Extreme Programming methodologies
Angel One

Posted by Andleeb Mujeeb
Remote only
2 - 6 yrs
₹12L - ₹18L / yr
Amazon Web Services (AWS)
PySpark
Python
Scala
Go Programming (Golang)

Designation: Specialist - Cloud Service Developer (ABL_SS_600)

Position description:

  • Primarily responsible for developing solutions using AWS services, e.g. Fargate, Lambda, ECS, ALB, NLB, S3, etc.
  • Apply advanced troubleshooting techniques to provide solutions to issues pertaining to service availability, performance, and resiliency
  • Monitor and optimize performance using AWS dashboards and logs
  • Partner with engineering leaders and peers in delivering technology solutions that meet business requirements
  • Work with the cloud team in an agile approach and develop cost-optimized solutions

 

Primary Responsibilities:

  • Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc. (a sketch follows the Required Skills list below)

 

Reporting Team

  • Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
  • Reporting Department: Application Development (2487)

Required Skills:

  • AWS certification would be preferred
  • Good understanding of monitoring (CloudWatch alarms, logs, custom metrics, trust/SNS configuration)
  • Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora, and other AWS services
  • Knowledge of storage (S3, lifecycle management, event configuration) is preferred
  • Good grasp of data structures and programming in PySpark, Python, Golang, or Scala
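
As flagged above, here is a hedged sketch of the kind of AWS service development the role mentions: a minimal Lambda handler that reacts to an S3 event notification. The event shape follows the standard S3 notification format; the processing itself is a placeholder.

```python
# Hypothetical AWS Lambda sketch: handle an S3 object-created event and
# read the new object's contents. Error handling is omitted for brevity.
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Standard S3 event notification shape: bucket name and object key.
    record = event["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"],
                        Key=record["object"]["key"])
    body = obj["Body"].read()
    return {"statusCode": 200,
            "body": json.dumps({"bytes_read": len(body)})}
```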
Rely

Posted by Hizam Ismail
Bengaluru (Bangalore)
2 - 10 yrs
₹8L - ₹35L / yr
Python
Hadoop
Spark
Amazon Web Services (AWS)
Big Data

Intro

Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.


What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.

  • Optimize and automate ingestion processes for a variety of data sources such as click stream, transactional, and many other sources.
  • Create and maintain optimal data pipeline architecture and ETL processes.
  • Assemble large, complex data sets that meet functional/non-functional business requirements.
  • Develop data pipelines and infrastructure to support real-time decisions.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.


What will you need
  • 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
  • Experience dealing with large-scale data
  • Proficiency in writing and debugging complex SQL
  • Experience working with AWS big data tools
  • Ability to lead the project and implement best data practices and technology

Data Pipelining

  • Strong command of building and optimizing data pipelines, architectures, and data sets
  • Strong command of relational SQL and NoSQL databases, including Postgres
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Big Data: Strong experience in big data tools & applications

  • Tools: Hadoop, Spark, HDFS, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
  • Message queuing: RabbitMQ, Spark, etc.

Software Development & Debugging

  • Strong experience in object-oriented programming/object function scripting languages: Python, Java, C++, Scala, etc.
  • Strong hold on data structures and algorithms

What would be a bonus

  • Prior experience working in a fast-growth startup
  • Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data