Apache Sqoop jobs

9+ Apache Sqoop Jobs in India

Apply to 9+ Apache Sqoop Jobs on CutShort.io. Find your next job, effortlessly. Browse Apache Sqoop Jobs and apply today!

Miracle Software Systems, Inc
Posted by Ratnakumari Modhalavalasa
Visakhapatnam
3 - 5 yrs
₹2L - ₹4L / yr
Hadoop
Apache Sqoop
Apache Hive
Apache Spark
Apache Pig
+9 more
Position : Data Engineer

Duration : Full Time

Location : Visakhapatnam, Bangalore, Chennai

Years of experience : 3+ years

Job Description :

- 3+ years of working as a Data Engineer, with a thorough understanding of data frameworks that collect, manage, transform, and store data from which business insights can be derived.

- Strong communication skills (written and verbal), along with being a good team player.

- 2+ years of experience within the Big Data ecosystem (Hadoop, Sqoop, Hive, Spark, Pig, etc.)

- 2+ years of strong experience with SQL and Python (Data Engineering focused).

- Experience with GCP Data Services such as BigQuery, Dataflow, Dataproc, etc. is an added advantage and preferred.

- Any prior experience in ETL tools such as DataStage, Informatica, DBT, Talend, etc. is an added advantage for the role.
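Since Sqoop appears in nearly every listing above, here is a minimal sketch of what working with it looks like in practice: assembling a typical `sqoop import` invocation that copies an RDBMS table into HDFS. The connection string, table, and HDFS path are hypothetical placeholders, and the command is built in Python rather than run, so it can be inspected or logged before execution.

```python
import subprocess

def build_sqoop_import(jdbc_url, table, target_dir, num_mappers=4):
    """Assemble a typical `sqoop import` command that copies an
    RDBMS table into HDFS. All connection details are placeholders."""
    return [
        "sqoop", "import",
        "--connect", jdbc_url,            # e.g. jdbc:mysql://host/db
        "--table", table,                 # source table to import
        "--target-dir", target_dir,       # HDFS destination directory
        "--num-mappers", str(num_mappers),  # parallel map tasks
        "--as-parquetfile",               # store the data as Parquet
    ]

cmd = build_sqoop_import("jdbc:mysql://dbhost/sales", "orders", "/data/raw/orders")
# subprocess.run(cmd, check=True)  # would run on a cluster edge node
```

On a real cluster the commented `subprocess.run` line would execute the import; keeping the command as a list avoids shell-quoting issues.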
One of the leading payments banks

Agency job
via Mavin RPO Solutions Pvt. Ltd. by Kshiteej Jagtap
Navi Mumbai
3 - 5 yrs
₹7L - ₹18L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more

Requirements:

  • Proficiency in shell scripting.
  • Proficiency in automation of tasks.
  • Proficiency in PySpark/Python.
  • Proficiency in writing and understanding Sqoop jobs.
  • Understanding of Cloudera Manager.
  • Good understanding of RDBMS.
  • Good understanding of Excel.
  • Familiarity with Hadoop ecosystem and its components.
  • Understanding of data loading tools such as Flume, Sqoop etc.
  • Ability to write reliable, manageable, and high-performance code.
  • Good knowledge of database principles, practices, structures, and theories.
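The "automation of tasks" and shell-scripting requirements above usually mean chaining cluster commands (a Sqoop import, then a Spark job) with sane failure handling. A minimal, stdlib-only sketch of that pattern follows; the step names and `echo` commands are placeholders standing in for real `sqoop`/`spark-submit` invocations.

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def run_pipeline(steps):
    """Run shell steps in order, stopping at the first failure.
    Returns the names of the steps that completed successfully."""
    done = []
    for name, cmd in steps:
        logging.info("running step %s", name)
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            logging.error("step %s failed (rc=%d)", name, result.returncode)
            break
        done.append(name)
    return done

completed = run_pipeline([
    ("ingest", "echo ingest"),        # placeholder for a sqoop import
    ("transform", "echo transform"),  # placeholder for spark-submit
])
```

Stopping on the first non-zero return code mirrors `set -e` behaviour in a shell script, but with structured logging of which step broke.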
Leading StartUp Focused On Employee Growth

Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
2 - 6 yrs
₹25L - ₹45L / yr
Data engineering
Data Analytics
Big Data
Apache Spark
Airflow
+8 more
● 2+ years of experience in a Data Engineer role.
● Proficiency in Linux.
● Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
● Must have SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
● Must have experience with Python/Scala.
● Must have experience with Big Data technologies like Apache Spark.
● Must have experience with Apache Airflow.
● Experience with data pipelines and ETL tools like AWS Glue.
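The Airflow requirement above boils down to declaring ETL tasks as a DAG and letting the scheduler run them in dependency order. As an illustration of that idea without requiring Airflow itself, here is a toy DAG resolved with the standard library's `graphlib`; the task names (`extract`, `transform`, `load`, `check`) are invented for the example.

```python
from graphlib import TopologicalSorter

# A toy DAG of ETL tasks, in the spirit of what Airflow schedules:
# extract -> transform -> {load, check}. Keys depend on their values.
dag = {
    "transform": {"extract"},
    "load": {"transform"},
    "check": {"transform"},
}

# A valid execution order that respects every dependency edge.
order = list(TopologicalSorter(dag).static_order())
```

Airflow's `DAG`/operator API adds scheduling, retries, and backfills on top, but the core contract is exactly this topological ordering of tasks.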
Rakuten
Agency job
via zyoin by Rakesh Ranjan
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹38L / yr
Big Data
Spark
Hadoop
Apache Kafka
Apache Hive
+4 more

Company Overview:

Rakuten, Inc. (TSE's first section: 4755) is the largest e-commerce company in Japan, and the third largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer- and business-focused services including e-commerce, e-reading, travel, banking, securities, credit card, e-money, portal and media, online marketing, and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen.

In Japanese, Rakuten stands for 'optimism.' It means we believe in the future. It's an understanding that, with the right mind-set, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications, and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.


Website : https://www.rakuten.com/

Crunchbase : Rakuten has raised a total of $42.4M in funding over 2 rounds (https://www.crunchbase.com/organization/rakuten)

Company size : 10,001+ employees

Founded : 1997

Headquarters : Tokyo, Japan

Work location : Bangalore (M.G.Road)


Please find the job description below.


Role Description – Data Engineer for AN group (Location - India)

 

Key responsibilities include:

 

We are looking for an engineering candidate for our Autonomous Networking Team. The ideal candidate must have the following abilities:

 

  • Hands-on experience in big data computation technologies (at least one, and potentially several, of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streaming, Flink, etc.)
  • Familiarity with other related big data technologies, such as big data storage technologies (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, BigTable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud- and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, SocketIO, ElasticSearch, RabbitMQ, Redis, Couchbase, Java, and Go.
  • Partner with product management and delivery teams to align and prioritize current and future new product development initiatives in support of our business objectives
  • Work with cross functional engineering teams including QA, Platform Delivery and DevOps
  • Evaluate current state solutions to identify areas to improve standards, simplify, and enhance functionality and/or transition to effective solutions to improve supportability and time to market
  • Not afraid of refactoring existing systems and guiding the team through the process.
  • Experience with Event driven Architecture, Complex Event Processing
  • Extensive experience building and owning large-scale distributed backend systems.
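The "Complex Event Processing" bullet above is easiest to see with a toy example: watching a stream of events and raising an alert when too many errors land inside a sliding window. This stdlib-only sketch is an invented illustration of the idea, not any particular CEP engine's API.

```python
from collections import deque

def cep(events, window=3, threshold=2):
    """Flag the positions where `threshold` or more 'error' events
    fall within the last `window` events -- a toy form of complex
    event processing over a stream."""
    recent = deque(maxlen=window)   # sliding window of recent events
    alerts = []
    for i, ev in enumerate(events):
        recent.append(ev)
        if sum(1 for e in recent if e == "error") >= threshold:
            alerts.append(i)        # alert fires at this position
    return alerts

alerts = cep(["ok", "error", "ok", "error", "error", "ok"])
```

Real engines (Flink CEP, Kafka Streams windows) add event-time semantics and pattern matching, but the sliding-window aggregation shown here is the core primitive.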
MNC

Agency job
via Fragma Data Systems by Harpreet Kour
Bengaluru (Bangalore)
3 - 6 yrs
₹6L - ₹15L / yr
Apache Hadoop
Hadoop
HDFS
Apache Sqoop
Apache Flume
+5 more
1. Design and development of data ingestion pipelines.
2. Perform data migration and conversion activities.
3. Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns, taking into account critical performance characteristics and security measures.
4. Collaborate with Business Analysts, Architects and Senior Developers to establish the physical application framework (e.g. libraries, modules, execution environments).
5. Perform end-to-end automation of ETL processes for various datasets that are being ingested into the big data platform.
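The transform stage of an ingestion pipeline like the one described above typically cleanses raw rows before they are loaded. A minimal sketch, with invented field names and validation rules, might look like this:

```python
def transform(records):
    """Cleanse raw rows before loading: reject rows with a missing id,
    coerce types, and normalise names -- typical of the middle stage
    of an extract-transform-load pipeline."""
    out = []
    for row in records:
        if not row.get("id"):
            continue  # reject incomplete rows
        out.append({
            "id": int(row["id"]),                    # coerce to integer
            "name": row["name"].strip().title(),     # normalise casing
        })
    return out

raw = [
    {"id": "1", "name": " alice "},
    {"id": "", "name": "bob"},       # dropped: missing id
    {"id": "2", "name": "CAROL"},
]
clean = transform(raw)
```

In production the same logic would run inside Spark or an ETL tool, but the contract is identical: raw rows in, validated and normalised rows out.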
Lymbyc
Posted by Venky Thiriveedhi
Bengaluru (Bangalore), Chennai
4 - 8 yrs
₹9L - ₹14L / yr
Apache Spark
Apache Kafka
Druid Database
Big Data
Apache Sqoop
+5 more
Key skill set : Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures

Responsibilities :
- Build a scalable, reliable, operable and performant big data platform for both streaming and batch analytics
- Design and implement data aggregation, cleansing and transformation layers

Skills :
- Around 4+ years of hands-on experience designing and operating large data platforms
- Experience in big data ingestion, transformation and stream/batch processing technologies using Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, Hive, etc.
- Experience in designing and building streaming data platforms in Lambda and Kappa architectures
- Should have working experience in one of the NoSQL or OLAP data stores, like Druid, Cassandra, Elasticsearch, Pinot, etc.
- Experience in one of the data warehousing tools, like Redshift, BigQuery, Azure SQL Data Warehouse
- Exposure to other data ingestion, data lake and querying frameworks, like Marmaray, Kylo, Drill, Presto
- Experience in designing and consuming microservices
- Exposure to security and governance tools like Apache Ranger, Apache Atlas
- Any contributions to open source projects a plus
- Experience in performance benchmarks will be a plus
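The Lambda architecture mentioned above serves queries by merging a precomputed batch view with fresh counts from the speed layer. A miniature sketch of that merge, with invented page names and counts, fits in a few lines:

```python
from collections import Counter

def merged_view(batch_view, speed_events):
    """Lambda architecture in miniature: answer queries by combining
    a precomputed batch view with counts derived from events the
    real-time speed layer has seen since the last batch run."""
    view = Counter(batch_view)   # start from the batch layer's totals
    view.update(speed_events)    # fold in the speed layer's events
    return dict(view)

batch = {"page_a": 100, "page_b": 40}    # from the batch layer
recent = ["page_a", "page_a", "page_c"]  # events since the last batch
view = merged_view(batch, recent)
```

The Kappa variant drops the batch layer and recomputes everything from the event log, trading precomputation for a single code path.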
GeakMinds Technologies Pvt Ltd
Posted by John Richardson
Chennai
1 - 5 yrs
₹1L - ₹6L / yr
Hadoop
Big Data
HDFS
Apache Sqoop
Apache Flume
+2 more
• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms, like Pig, Spark, Shark.
• Hands-on experience with data pipeline tools like Kafka, Storm, Spark Streaming.
• Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
• Hands-on experience in managing Big Data on a cluster with HDFS and MapReduce.
• Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm.
• Experience with Azure cloud, Cognitive Services, Databricks is preferred.
Pion Global Solutions LTD
Posted by Sheela P
Mumbai
3 - 100 yrs
₹4L - ₹15L / yr
Spark
Big Data
Hadoop
HDFS
Apache Sqoop
+2 more
Looking for Big Data developers in Mumbai.
Get to hear about interesting companies hiring right now
Follow Cutshort
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.
Find more jobs