Apache Aurora Jobs

11+ Apache Aurora Jobs in India

Apply to 11+ Apache Aurora Jobs on CutShort.io. Find your next job, effortlessly. Browse Apache Aurora Jobs and apply today!

Angel One
4 recruiters
Posted by Andleeb Mujeeb
Remote only
2 - 6 yrs
₹12L - ₹18L / yr
Amazon Web Services (AWS)
PySpark
Python
Scala
Go Programming (Golang)
+19 more

Designation: Specialist - Cloud Service Developer (ABL_SS_600)

Position description:

  • The person would be primarily responsible for developing solutions using AWS services, e.g. Fargate, Lambda, ECS, ALB, NLB, S3, etc.
  • Apply advanced troubleshooting techniques to provide solutions to issues pertaining to service availability, performance, and resiliency
  • Monitor and optimize performance using AWS dashboards and logs
  • Partner with engineering leaders and peers in delivering technology solutions that meet business requirements
  • Work with the cloud team in an agile approach and develop cost-optimized solutions

 

Primary Responsibilities:

  • Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc.

 

Reporting Team

  • Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
  • Reporting Department: Application Development (2487)

Required Skills:

  • AWS certification would be preferred
  • Good understanding of monitoring (CloudWatch, alarms, logs, custom metrics, SNS trust configuration); see the sketch below
  • Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora, and other AWS services
  • Knowledge of storage (S3, lifecycle management, event configuration) is preferred
  • Good grasp of data structures and programming in PySpark / Python / Golang / Scala
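For illustration, a minimal boto3 sketch of the monitoring and storage items above, assuming an AWS account with the relevant permissions; the namespace, metric, bucket, and prefix names are hypothetical placeholders, not Angel One's actual resources.

# Hypothetical example: publish a custom CloudWatch metric and configure an S3 lifecycle rule.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")
s3 = boto3.client("s3", region_name="ap-south-1")

# Publish a custom metric that a CloudWatch alarm could watch.
cloudwatch.put_metric_data(
    Namespace="MyApp/Orders",
    MetricData=[{"MetricName": "FailedOrders", "Value": 3, "Unit": "Count"}],
)

# Lifecycle management: transition objects to infrequent access after 30 days,
# expire them after 365 days (bucket and prefix are placeholders).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)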
Marktine
1 recruiter
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹10L / yr
Data Warehouse (DWH)
Spark
Data engineering
Python
PySpark
+5 more

Basic Qualifications

- Working knowledge of AWS Redshift.

- Minimum 1 year of experience designing and implementing a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse.

- 3 years of hands-on experience building productized data ingestion and processing pipelines using Spark, Scala, and Python (a brief sketch follows below).

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions.

- Expertise in and an excellent understanding of Snowflake internals and of integrating Snowflake with other data processing and reporting technologies.

- Excellent presentation and communication skills, both written and verbal.

- Ability to problem-solve and architect in an environment with unclear requirements.
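A minimal sketch of the kind of Spark-to-Snowflake ingestion pipeline described above, assuming the Snowflake Spark connector is available on the cluster; the S3 path, connection options, and table names are hypothetical placeholders.

# Illustrative PySpark job that curates raw events and writes them to Snowflake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("snowflake-ingest-demo").getOrCreate()

# Read raw events and derive a simple daily aggregate.
raw = spark.read.json("s3://example-bucket/raw/events/")
daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .count()
)

# Write the result to Snowflake via the Snowflake Spark connector (placeholders below).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "***",  # use a secrets manager in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
(daily.write.format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "DAILY_EVENT_COUNTS")
      .mode("overwrite")
      .save())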

Leading Grooming Platform

Agency job
via Qrata by Blessy Fernandes
Remote, Ahmedabad
3 - 6 yrs
₹15L - ₹25L / yr
Spotfire
QlikView
Tableau
Power BI
Data Visualization
+3 more
  • Extensive exposure to at least one Business Intelligence platform (if possible, QlikView/Qlik Sense); if not Qlik, then ETL tool knowledge, e.g. Informatica/Talend
  • At least one data query language – SQL/Python
  • Experience in creating breakthrough visualizations
  • Understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
Dori AI
5 recruiters
Posted by Nitin Gupta
Bengaluru (Bangalore)
2 - 8 yrs
₹8L - ₹20L / yr
Python
Data Science
Machine Learning (ML)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+5 more

Dori AI enables enterprises with AI-powered video analytics to significantly increase human productivity and improve process compliance. We leverage a proprietary full-stack end-to-end computer vision and deep learning platform to rapidly build and deploy AI solutions for enterprises. The platform was built with enterprise considerations including time-to-value, time-to-market, security, and scalability across a range of use cases. Capture visual data across multiple sites, leverage AI + Computer Vision to gather key insights, and make decisions with actionable visual insights. Launch CV applications in a matter of weeks that are optimized for both cloud and edge deployments.

 


Job brief: Sr. Software Engineer/Software Engineer


All of our team members are expected to learn, learn, and learn! We are working on cutting-edge technologies and areas of artificial intelligence that have never been explored before. We are looking for motivated software engineers with strong coding skills who want to work on problems and challenges they have never worked on before. All of our team members wear multiple hats, so you will be expected to work simultaneously on multiple aspects of the products we ship.


Responsibilities

  • Participate heavily in the brainstorming of system architecture and feature design
  • Interface with external customers and key stakeholders to understand and document design requirements
  • Work cross-functionally with Engineering, Data Science, Product, UX, and Infrastructure teams
  • Drive best coding practices across the company (e.g. documentation, code reviews, coding standards, etc.)
  • Perform security, legal, and license reviews of committed code
  • Complete projects with little or no supervision from senior leadership


Required Qualifications

  • Built and deployed customer-facing services and products at scale
  • Developed unit and integration tests
  • Worked on products where experimentation and data science are core to the development
  • Experience with large-scale distributed systems that have thousands of microservices and manage millions of transactions per day
  • Solid understanding of object-oriented design, data structures, and software engineering principles
  • At least 4 years of experience in back-end web development with the following tools: Python, Flask, FastAPI, AWS or Azure, GCP, Java or C/C++, ORM, Mongo, Postgres, TimescaleDB, CI/CD (a brief sketch follows below)
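As a rough sketch of the back-end stack listed above, here is a minimal FastAPI service; the endpoint names and payload model are hypothetical, not Dori AI's actual API.

# Minimal FastAPI service sketch (names are illustrative placeholders).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-api")

class Detection(BaseModel):
    label: str
    confidence: float

@app.get("/health")
def health() -> dict:
    # Lightweight liveness probe for the load balancer or orchestrator.
    return {"status": "ok"}

@app.post("/detections", response_model=Detection)
def create_detection(detection: Detection) -> Detection:
    # Echo back the validated payload; a real service would persist it
    # (e.g. to Postgres via an ORM) and enqueue downstream processing.
    return detection

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)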


Desired Experience/Skills

  • You have a strong background in software development 
  • Experience with the following tools: Google Cloud Platform, Objective-C/Swift, GitHub, Docker
  • Experience with open-source projects in a startup environment
  • BS, MS, or Ph.D. in Computer Science, Software Engineering, Math, Electrical Engineering, or other STEM degree


Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
In this role, you will be part of a growing, global team of data engineers who collaborate in DevOps mode in order to enable Merck's business with state-of-the-art technology to leverage data as an asset and to take better-informed decisions.

The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).

The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premises in Merck's own data centers. Developing pipelines and applications on Foundry requires:

• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases (e.g. JDBC, MySQL, Microsoft SQL Server); not all types are required

This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.

Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and unstructured – into Palantir Foundry (a brief transform sketch follows after this list)
• Participate in the end-to-end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst for developing requirements for Foundry pipelines
• Review code developed by other data engineers and check it against platform-specific standards, cross-cutting concerns, coding and configuration standards, and the functional specification of the pipeline
• Document technical work in a professional and transparent way; create high-quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implement changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• Set up DevOps projects following Agile principles (e.g. Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems, and debug issues across the full stack of Foundry and code based on Python, PySpark, and Java
• Work closely with business users and data scientists/analysts to design physical data models
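A minimal sketch of a Foundry-style PySpark transform of the kind described above, assuming the transforms.api decorators available on Palantir Foundry; the dataset paths and column names are hypothetical placeholders.

# Hypothetical Foundry ingestion/cleaning transform (dataset paths are placeholders).
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output

@transform_df(
    Output("/project/clean/orders"),          # hypothetical curated output dataset
    raw_orders=Input("/project/raw/orders"),  # hypothetical raw input dataset
)
def clean_orders(raw_orders):
    # Standardize column names, drop malformed rows, and derive a date column.
    return (
        raw_orders
        .withColumnRenamed("ORDER_ID", "order_id")
        .filter(F.col("order_id").isNotNull())
        .withColumn("order_date", F.to_date("order_ts"))
    )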
Fragma Data Systems
8 recruiters
Agency job
via Fragma Data Systems by Minakshi Kumari
Remote only
7 - 13 yrs
₹15L - ₹35L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Experience
Experience Range: 2 - 10 years
Function: Information Technology
Desired Skills
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (a brief sketch follows after this list)
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business-rules processing and data extraction from the Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
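A minimal PySpark sketch of the DataFrame and Spark SQL work called out above, assuming a local Spark session; the columns and aggregation are purely illustrative.

# Illustrative PySpark aggregation using both the DataFrame API and Spark SQL.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-demo").getOrCreate()

orders = spark.createDataFrame(
    [("A-1", "electronics", 1200.0), ("A-2", "grocery", 80.5), ("A-3", "electronics", 450.0)],
    ["order_id", "category", "amount"],
)

# DataFrame API: total and average amount per category.
summary_df = orders.groupBy("category").agg(
    F.sum("amount").alias("total_amount"),
    F.avg("amount").alias("avg_amount"),
)

# Equivalent Spark SQL over a temporary view.
orders.createOrReplaceTempView("orders")
summary_sql = spark.sql(
    "SELECT category, SUM(amount) AS total_amount, AVG(amount) AS avg_amount "
    "FROM orders GROUP BY category"
)

summary_df.show()
summary_sql.show()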
Education
Education Type: Engineering
Degree / Diploma: Bachelor of Engineering, Bachelor of Computer Applications, Any Engineering
Specialization / Subject: Any Specialisation
Job Type: Full Time
Job ID: 000018
Department: Software Development
AI Platform

Agency job
via Purple Hirez by Aditya K
Hyderabad
5 - 10 yrs
₹7L - ₹40L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Kubernetes
+2 more

Be a part of the growth story of a rapidly growing organization in AI. We are seeking a passionate Machine Learning (ML) Engineer, with a strong background in developing and deploying state-of-the-art models on Cloud. You will participate in the complete cycle of building machine learning models from conceptualization of ideas, data preparation, feature selection, training, evaluation, and productionization.

On a typical day, you might build data pipelines, develop a new machine learning algorithm, train a new model, or deploy the trained model on the cloud. You will have a high degree of autonomy, ownership, and influence over your work, the evolution of the machine learning organization, and the direction of the company.

Required Qualifications

  • Bachelor's degree in computer science/electrical engineering or equivalent practical experience
  • 7+ years of industry experience in Data Science and ML/AI projects, including experience productionizing machine learning in an industry setting
  • Strong grasp of statistical machine learning, linear algebra, deep learning, and computer vision
  • 3+ years of experience with one or more general-purpose programming languages, including but not limited to R and Python
  • Experience with PyTorch, TensorFlow, or other ML frameworks
  • Experience using cloud services such as AWS, GCP, or Azure, and an understanding of the principles of cloud-native application development

In this role you will:

  • Design and implement ML components, systems and tools to automate and enable our various AI industry solutions
  • Apply research methodologies to identify the machine learning models to solve a business problem and deploy the model at scale.
  • Own the ML pipeline from data collection through prototype development to production (a brief training-loop sketch follows after this list)
  • Develop high-performance, scalable, and maintainable inference services that communicate with the rest of our tech stack
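A minimal PyTorch sketch of the train-and-evaluate cycle referenced above, using synthetic data; the model, data, and hyperparameters are illustrative assumptions rather than this team's actual pipeline.

# Tiny PyTorch training loop on synthetic data (illustrative only).
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 10)                      # synthetic features
y = (X.sum(dim=1, keepdim=True) > 0).float()  # synthetic binary labels

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Simple evaluation: accuracy on the training data (a real pipeline would hold out a test set).
with torch.no_grad():
    accuracy = ((torch.sigmoid(model(X)) > 0.5).float() == y).float().mean().item()
print(f"final loss={loss.item():.4f}, accuracy={accuracy:.2%}")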
UAE Client

Agency job
via Fragma Data Systems by Harpreet kour
Dubai, Bengaluru (Bangalore)
4 - 8 yrs
₹6L - ₹16L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Apache Spark
+3 more
• Responsible for developing and maintaining applications with PySpark 
• Contribute to the overall design and architecture of the application developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc. (a brief tuning sketch follows after the skills list below)
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business-rules processing and data extraction from the Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
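A minimal sketch of the executor-sizing and partition-tuning work mentioned in the responsibilities above, assuming a standalone or YARN Spark deployment; the specific numbers are illustrative and would normally be derived from cluster capacity and data volume.

# Illustrative Spark tuning: explicit executor sizing plus partition control.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-demo")
    .config("spark.executor.instances", "4")       # executors to request
    .config("spark.executor.cores", "4")           # cores per executor
    .config("spark.executor.memory", "8g")         # heap per executor
    .config("spark.sql.shuffle.partitions", "64")  # shuffle parallelism for joins/aggregations
    .getOrCreate()
)

df = spark.range(0, 10_000_000)

# Repartition by a key before wide transformations to limit skewed shuffles,
# and coalesce before writing to avoid producing thousands of tiny output files.
tuned = df.repartition(64, "id")
tuned.coalesce(8).write.mode("overwrite").parquet("/tmp/tuning_demo_output")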
Yottaasys AI LLC
5 recruiters
Posted by Dinesh Krishnan
Bengaluru (Bangalore), Singapore
2 - 5 yrs
₹9L - ₹20L / yr
Data Science
Deep Learning
R Programming
Python
Machine Learning (ML)
+2 more
We are a US-headquartered product company looking to hire a few passionate deep learning and computer vision team players with 2-5 years of experience! If you are any of these:
1. An expert in deep learning and machine learning techniques,
2. Extremely good at image/video processing,
3. Someone with a good understanding of linear algebra, optimization techniques, statistics, and pattern recognition,
then you are the right fit for this position.
Uber
1 video
10 recruiters
Posted by Suvidha Chib
Bengaluru (Bangalore)
7 - 15 yrs
₹0L / yr
Big Data
Hadoop
Kafka
Spark
Apache Hive
+9 more

Data Platform Engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that can operate at scale. A 5a Engineer at Uber exhibits the following qualities:

 

  • Demonstrate tech expertise: demonstrate the technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
  • Execute large-scale projects: define, plan, and execute complex and impactful projects. You communicate the vision to peers and stakeholders.
  • Collaborate across teams: act as a domain resource for engineers outside your team and help them leverage the right solutions. Facilitate technical discussions and drive to a consensus.
  • Coach engineers: coach and mentor less experienced engineers and deeply invest in their learning and success. You give and solicit feedback, both positive and negative, to help improve the entire team.
  • Tech leadership: lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.


What You’ll Do

  • Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
  • Work alongside the pioneers of big data systems such as Hive, YARN, Spark, Presto, Kafka, and Flink to build out a highly reliable, performant, easy-to-use software system for Uber's planet scale of data.
  • Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a highly performant, large-scale service while building these capabilities for Uber's engineers and operations folks.

 

What You’ll Need

  • 7+ years of experience in building large-scale products, data platforms, and distributed systems in a high-caliber environment.
  • Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve, or, when needed, build solutions to address architectural gaps or technical debt.
  • Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. You have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, Go, and Scala.
  • Data Engineering: Expertise in one of the big data analytics technologies we currently use, such as Apache Hadoop (HDFS and YARN), Apache Hive, Impala, Drill, Spark, Tez, Presto, Calcite, Parquet, Arrow, etc. Under-the-hood experience with similar systems such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon EMR, Amazon Redshift, Docker, Kubernetes, Mesos, etc.
  • Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
  • A team player: You believe that you can achieve more on a team and that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
  • Business acumen: You understand requirements beyond the written word. Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
Deemsoft
1 recruiter
Posted by Shreedhar shree
Bengaluru (Bangalore)
3 - 7 yrs
₹6L - ₹20L / yr
Java
Machine Learning (ML)
Python
Natural Language Processing (NLP)
  • BS/Masters in Computer Science with 4+ years of experience.
  • Excellent knowledge of algorithms and data structures and experience implementing them in Java.
  • Experience with distributed systems architectures, including multithreading and concurrency issues.
  • Experience working in an Agile/Scrum environment is desired.
  • Exceptional debugging, testing, and problem-solving skills.
  • Self-starter with a quick learning curve.
  • Strong written and verbal communication skills, and an ability and interest to mentor other junior engineers.
  • Working experience within product development teams is a must.
  • Must have demonstrated the capability to create patentable ideas.
  • Working experience with Natural Language Processing (NLP)/Natural Language Understanding (NLU) based solutions is a plus.