DMS Jobs in Delhi, NCR and Gurgaon


Apply to 11+ DMS Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest DMS Job opportunities across top companies like Google, Amazon & Adobe.

A fast-growing Big Data company
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
skill iconPython
PySpark
Data engineering
+6 more

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must-Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration, and DataOps

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

➢ 7 years of work experience with ETL, data modelling, and data architecture.

➢ Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.

➢ Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions.

➢ Orchestrate workflows using Airflow.


Technical Experience:

➢ Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines.

➢ Experience building data pipelines and applications to stream and process large datasets at low latency.


➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python, Pyspark.

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will work in collaboration with other teams; good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
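The transformation duties above can be sketched without any AWS dependency. The following is a minimal, framework-free stand-in for the kind of row-level cleansing and deduplication a Glue/PySpark job applies before loading into Redshift; field names and formats are purely illustrative:

```python
from datetime import datetime

def clean_record(raw):
    """Normalize one raw event before loading (illustrative fields and formats)."""
    return {
        "order_id": str(raw["order_id"]).strip(),
        "amount": round(float(raw.get("amount") or 0), 2),
        "order_date": datetime.strptime(raw["order_date"], "%d/%m/%Y").date().isoformat(),
    }

def dedupe(records):
    """Keep the first occurrence of each order_id, preserving input order."""
    seen, out = set(), []
    for r in records:
        if r["order_id"] not in seen:
            seen.add(r["order_id"])
            out.append(r)
    return out

raw = [
    {"order_id": " A1 ", "amount": "20.5", "order_date": "01/02/2024"},
    {"order_id": "A1", "amount": "20.5", "order_date": "01/02/2024"},
]
cleaned = dedupe([clean_record(r) for r in raw])
```

In a real Glue job the same logic would run as a PySpark `map`/`dropDuplicates` over a DynamicFrame rather than plain Python lists.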


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes.

➢ Expert-level skills in writing and optimizing SQL.

➢ Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Series B funded product startup
Agency job
via Qrata by Blessy Fernandes
Delhi
2 - 5 yrs
₹8L - ₹14L / yr
skill iconData Science
skill iconMachine Learning (ML)
skill iconPython
skill iconJava

Job Title: Data Scientist

 

Job Duties

  1. Data Scientist responsibilities include planning projects and building analytics models.
  2. You should have a strong problem-solving ability and a knack for statistical analysis.
  3. If you're also able to align our data products with our business goals, we'd like to meet you. Your ultimate goal will be to help improve our products and business decisions by making the most out of our data.

 

Responsibilities

Own end-to-end business problems and metrics, build and implement ML solutions using cutting-edge technology.

Create scalable solutions to business problems using statistical techniques, machine learning, and NLP.

Design, experiment with, and evaluate highly innovative models for predictive learning

Work closely with software engineering teams to drive real-time model experiments, implementations, and new feature creations

Establish scalable, efficient, and automated processes for large-scale data analysis, model development, deployment, experimentation, and evaluation.

Research and implement novel machine learning and statistical approaches.

 

Requirements

 

2-5 years of experience in data science.

In-depth understanding of modern machine learning techniques and their mathematical underpinnings.

Demonstrated ability to build PoCs for complex, ambiguous problems and scale them up.

Strong programming skills (Python, Java)

High proficiency in at least one of the following broad areas: machine learning, statistical modelling/inference, information retrieval, data mining, NLP

Experience with SQL and NoSQL databases

Strong organizational and leadership skills

Excellent communication skills

Fintech lead
Agency job
via The Hub by Sridevi Viswanathan
Gurugram, Noida
3 - 8 yrs
₹5L - ₹15L / yr
Natural Language Processing (NLP)
BERT
skill iconMachine Learning (ML)
skill iconData Science
skill iconPython
+1 more

Who we are looking for

· A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.

Roles and Responsibilities

· Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.

· Mentor and coach other team members

· Evaluate the performance of NLP models and ideate on how they can be improved

· Support internal and external NLP-facing APIs

· Keep up to date on current research around NLP, Machine Learning and Deep Learning

Mandatory Requirements

·       Any graduation with at least 2 years of demonstrated experience as a Data Scientist.

Behavioural Skills

· Strong analytical and problem-solving capabilities.

· Proven ability to multi-task and deliver results within tight time frames

· Must have strong verbal and written communication skills

· Strong listening skills and eagerness to learn

· Strong attention to detail and the ability to work efficiently in a team as well as individually

Technical Skills

Hands-on experience with

· NLP

· Deep Learning

· Machine Learning

· Python

· BERT
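Since BERT is called out explicitly, a toy version of its WordPiece tokenization illustrates the subword idea candidates are expected to know. The vocabulary below is a made-up fragment, not BERT's real 30k-piece vocabulary:

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first subword split, the core of BERT's WordPiece."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # continuation-piece marker
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no subword cover exists for this word
        tokens.append(piece)
        start = end
    return tokens

# Made-up vocabulary fragment for illustration only.
vocab = {"un", "##aff", "##ord", "##able"}
```

For example, `wordpiece_tokenize("unaffordable", vocab)` splits the word into `un / ##aff / ##ord / ##able`, mirroring how BERT handles out-of-vocabulary words.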

Preferred Requirements

· Experience in Computer Vision is preferred

Role: Data Scientist

Industry Type: Banking

Department: Data Science & Analytics

Employment Type: Full Time, Permanent

Role Category: Data Science & Machine Learning

Epik Solutions
Sakshi Sarraf
Posted by Sakshi Sarraf
Bengaluru (Bangalore), Noida
4 - 13 yrs
₹7L - ₹18L / yr
skill iconPython
SQL
databricks
skill iconScala
Spark
+2 more

Job Description:


As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:


Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.


Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.


Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.


Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.


Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.


Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.


Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
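The data-quality responsibility above can be sketched as a tiny rule engine. This is a hedged, framework-free illustration; the rule names and checks are made up and not a real Databricks or ADF API:

```python
def validate(rows, rules):
    """Apply named validation rules to each row; return (passing rows, violations)."""
    good, violations = [], []
    for i, row in enumerate(rows):
        failed = [name for name, check in rules.items() if not check(row)]
        if failed:
            violations.append((i, failed))
        else:
            good.append(row)
    return good, violations

# Illustrative rules of the kind a pipeline would enforce before loading.
rules = {
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "has_customer_id": lambda r: bool(r.get("customer_id")),
}
rows = [{"customer_id": "c1", "amount": 10}, {"customer_id": "", "amount": -5}]
good, bad = validate(rows, rules)
```

In production the same checks would typically run as PySpark column expressions, with violations routed to a quarantine table rather than dropped.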


Skills and Qualifications:


Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.

Proficiency in designing and developing data pipelines and ETL processes.

Solid understanding of data modeling concepts and database design principles.

Familiarity with data integration and orchestration using Azure Data Factory.

Knowledge of data quality management and data governance practices.

Experience with performance tuning and optimization of data pipelines.

Strong problem-solving and troubleshooting skills related to data engineering.

Excellent collaboration and communication skills to work effectively in cross-functional teams.

Understanding of cloud computing principles and experience with Azure services.


Cloth software company
Agency job
via Jobdost by Sathish Kumar
Delhi
1 - 3 yrs
₹1L - ₹6L / yr
SQL
skill iconData Analytics

What you will do:

  • Understand the process of CaaStle business teams, KPIs, and pain points
  • Build scalable data products, self-service tools, data cubes to analyze and present data associated with acquisition, retention, product performance, operations, client services, etc.
  • Closely partner with data engineering, product, and business teams and participate in requirements capture, research design, data collection, dashboard generation, and translation of results into actionable insights that can add value for business stakeholders
  • Leverage advanced analytics to drive key success metrics for business and revenue generation
  • Operationalize, implement, and automate changes to drive data-driven decisions
  • Attend and play an active role in answering questions from the executive and/or business teams through data mining and analysis

We would love for you to have:

  • Education: Advanced degree in Computer Science, Statistics, Mathematics, Engineering, Economics, Business Analytics or related field is required
  • Experience: 2-4 years of professional experience
  • Proficiency in data visualization/reporting tools (e.g., Tableau, QlikView)
  • Experience in A/B testing and measuring the performance of experiments
  • Strong proficiency with SQL-based languages. Experience with large-scale data analytics technologies (e.g., Hadoop, Spark)
  • Strong analytical skills and business mindset with the ability to translate complex concepts and analysis into clear and concise takeaways to drive insights and strategies
  • Excellent communication, social, and presentation skills with meticulous attention to detail
  • Programming experience in Python, R, or other languages
  • Knowledge of Data mining, statistical modeling approaches, and techniques
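The A/B-testing requirement above boils down to comparing two conversion rates. A standard two-proportion z-test, written with only the standard library, looks roughly like this (the traffic figures are illustrative):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf gives the two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative figures: variant B converts 26% vs. control's 20% on 1,000 users each.
z, p = two_proportion_z(200, 1000, 260, 1000)
```

With these numbers the lift is significant at the usual 5% level; in practice a library such as `statsmodels` would be used instead of hand-rolling the test.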

 

CaaStle is committed to equality of opportunity in employment. It has been and will continue to be the policy of CaaStle to provide full and equal employment opportunities to all employees and candidates for employment without regard to race, color, religion, national or ethnic origin, veteran status, age, sexual orientation, gender identity, or physical or mental disability. This policy applies to all terms, conditions and privileges of employment, such as those pertaining to training, transfer, promotion, compensation and recreational programs.

A Leading Edtech Company
Noida
3 - 6 yrs
₹12L - ₹15L / yr
skill iconMongoDB
MySQL
SQL
  • Sound knowledge of MongoDB as the primary skill
  • Hands-on experience with MySQL as a secondary skill is sufficient
  • Experience with replication, sharding, and scaling
  • Design, install, and maintain highly available systems (including monitoring, security, backup, and performance tuning)
  • Implement secure database and server installations (privileged-access methodology / role-based access)
  • Help the application team with query writing, performance tuning, and other day-to-day issues
  • Deploy automation techniques for day-to-day operations
  • Must possess good analytical and problem-solving skills
  • Must be willing to work flexible hours as needed
  • Scripting experience is a plus
  • Ability to work independently and as a member of a team
  • Good verbal and written communication skills
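Sharding, mentioned above, is at heart deterministic key-to-shard routing. This stdlib sketch mimics what MongoDB's hashed shard keys achieve; the MD5 choice here is illustrative, not MongoDB's actual hash function:

```python
import hashlib

def shard_for(key, n_shards):
    """Stable hashed routing: the same key always lands on the same shard."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_shards

# Routing is deterministic, so reads and writes for one key hit one shard.
placements = {k: shard_for(k, 4) for k in ("user:1", "user:2", "user:3")}
```

Hashed keys spread monotonically increasing keys (like ObjectIds) evenly across shards, at the cost of making range queries scatter-gather.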
Accolite Digital
Nitesh Parab
Posted by Nitesh Parab
Bengaluru (Bangalore), Hyderabad, Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹5L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SSIS
SQL Server Integration Services (SSIS)
+10 more

Job Title: Data Engineer

Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.

Responsibilities:

  • Design, build, and maintain data pipelines to collect, store, and process data from various sources.
  • Create and manage data warehousing and data lake solutions.
  • Develop and maintain data processing and data integration tools.
  • Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
  • Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
  • Ensure data quality and integrity across all data sources.
  • Develop and implement best practices for data governance, security, and privacy.
  • Monitor data pipeline performance and errors, and troubleshoot issues as needed.
  • Stay up-to-date with emerging data technologies and best practices.

Requirements:

Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience with ETL tools such as Matillion, SSIS, or Informatica.

Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle.

Experience in writing complex SQL queries

Strong programming skills in languages such as Python, Java, or Scala.

Experience with data modeling, data warehousing, and data integration.

Strong problem-solving skills and ability to work independently.

Excellent communication and collaboration skills.

Familiarity with big data technologies such as Hadoop, Spark, or Kafka.

Familiarity with data warehouse/data lake technologies such as Snowflake or Databricks.

Familiarity with cloud computing platforms such as AWS, Azure, or GCP.

Familiarity with Reporting tools
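The "complex SQL queries" requirement can be exercised with the standard library alone. This `sqlite3` sketch shows a typical aggregate-and-filter query; the table and data are made up:

```python
import sqlite3

# In-memory table standing in for a loaded fact table (data is made up).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'alice', 120.0), (2, 'bob', 80.0), (3, 'alice', 40.0);
""")

# Aggregate per customer, then filter on the aggregate with HAVING.
rows = conn.execute("""
    SELECT customer, COUNT(*) AS n_orders, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()
```

The same shape of query (GROUP BY plus HAVING on an aggregate) carries over directly to SQL Server, PostgreSQL, or Oracle.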

Teamwork / Growth Contribution

  • Helping the team conduct interviews and identify the right candidates
  • Adhering to timelines
  • Timely status communication and upfront communication of any risks
  • Teach, train, and share knowledge with peers
  • Good communication skills
  • Proven ability to take initiative and be innovative
  • Analytical mind with a problem-solving aptitude

Good to have :

Master's degree in Computer Science, Information Systems, or a related field.

Experience with NoSQL databases such as MongoDB or Cassandra.

Familiarity with data visualization and business intelligence tools such as Tableau or Power BI.

Knowledge of machine learning and statistical modeling techniques.

If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.

Chegg India Private Limited

Naveen Ghiya
Posted by Naveen Ghiya
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 9 yrs
Best in industry
skill iconMachine Learning (ML)
skill iconData Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+4 more

Senior Data Scientist

Your goal: To improve the education process and improve the student experience through data.

 

The organization: Data Science for Learning Services. Data Science and Machine Learning are core to Chegg. As a Student Hub, we want to ensure that students discover the full breadth of learning solutions we have to offer to get full value on their learning time with us. To create the most relevant and engaging interactions, we are solving a multitude of machine learning problems so that we can better model student behavior, link various types of content, optimize workflows, and provide a personalized experience.

 

The Role: Senior Data Scientist

As a Senior Data Scientist, you will focus on conducting research and development in NLP and ML. You will be responsible for writing production-quality code for data product solutions at Chegg. You will lead the identification and implementation of key projects for data processing and knowledge discovery.

 

Responsibilities:

• Translate product requirements into AIML/NLP solutions

• Be able to think out of the box and be able to design novel solutions for the problem at hand

• Write production-quality code

• Be able to design data and annotation collection strategies

• Identify key evaluation metrics and release requirements for data products

• Integrate new data and design workflows

• Innovate, share, and educate team members and community

 

Requirements:

• Working experience in machine learning, NLP, recommendation systems, experimentation, or related fields, with a specialization in NLP

• Working experience with large language models that cater to multiple tasks such as text generation, Q&A, summarization, and translation is highly preferred

• Knowledge of MLOps and deployment pipelines is a must

• Expertise in supervised, unsupervised, and reinforcement ML algorithms.

• Strong programming skills in Python

• Strong data wrangling skills using SQL or NoSQL queries

• Experience using containers to deploy real-time prediction services

• Passion for using technology to help students

• Excellent communication skills

• Good team player and a self-starter

• Outstanding analytical and problem-solving skills

• Experience working with ML pipeline products such as AWS SageMaker, Google ML, or Databricks is a plus.
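The real-time prediction-service requirement above usually reduces to a small, well-validated request handler behind an HTTP route inside a container. The sketch below uses a dummy averaging "model" and made-up field names; it is not any specific framework's API:

```python
import json

def handle_predict(body, model=lambda xs: sum(xs) / len(xs)):
    """Validate a JSON request and return (status, response); model is a stand-in."""
    try:
        payload = json.loads(body)
        features = payload["features"]
        if not isinstance(features, list) or not features:
            raise ValueError("features must be a non-empty list")
    # json.JSONDecodeError subclasses ValueError, so malformed JSON is caught too.
    except (KeyError, ValueError) as exc:
        return 400, {"error": str(exc)}
    return 200, {"prediction": model(features)}
```

In deployment this function would sit behind a web framework route, with the container image pinning the model artifact and dependencies.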

 

Why do we exist?

Students are working harder than ever before to stabilize their future. Our recent research study called State of the Student shows that nearly 3 out of 4 students are working to support themselves through college and 1 in 3 students feel pressure to spend more than they can afford. We founded our business on providing affordable textbook rental options to address these issues. Since then, we've expanded our offerings to supplement many facets of higher educational learning through Chegg Study, Chegg Math, Chegg Writing, Chegg Internships, Thinkful Online Learning, and more, to support students beyond their college experience. These offerings lower financial concerns for students by modernizing their learning experience. We exist so students everywhere have a smarter, faster, more affordable way to student.

 

Video Shorts

Life at Chegg: https://jobs.chegg.com/Video-Shorts-Chegg-Services

Certified Great Place to Work!: http://reviews.greatplacetowork.com/chegg

Chegg India: http://www.cheggindia.com/

Chegg Israel: http://insider.geektime.co.il/organizations/chegg

Thinkful (a Chegg Online Learning Service): https://www.thinkful.com/about/#careers

Chegg out our culture and benefits!

http://www.chegg.com/jobs/benefits

https://www.youtube.com/watch?v=YYHnkwiD7Oo

http://techblog.chegg.com/

Chegg is an equal-opportunity employer

SpotDraft

Madhav Bhagat
Posted by Madhav Bhagat
Noida, NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹3L - ₹24L / yr
skill iconPython
TensorFlow
Caffe
We are building the AI core for a Legal Workflow solution. You will be expected to build and train models to extract relevant information from contracts and other legal documents.

Required Skills/Experience:
- Python
- Basics of Deep Learning
- Experience with one ML framework (such as TensorFlow, Keras, or Caffe)

Preferred Skills/Experience:
- Exposure to ML concepts like LSTM, RNN, and ConvNets
- Experience with NLP and the Stanford POS tagger
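Extraction from contracts often starts with a regex baseline before any deep model is trained. This illustrative sketch (the patterns, field names, and sample text are all made up) shows the kind of fields such models target:

```python
import re

DATE_RE = re.compile(
    r"\b\d{1,2}\s+(?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{4}\b")
PARTY_RE = re.compile(r'between\s+(.+?)\s+\("[^"]+"\)\s+and\s+(.+?)\s+\("[^"]+"\)')

def extract_contract_fields(text):
    """Regex baseline for pulling dates and parties out of contract boilerplate."""
    m = PARTY_RE.search(text)
    return {
        "dates": DATE_RE.findall(text),
        "parties": list(m.groups()) if m else [],
    }

sample = ('This Agreement is made on 5 March 2021 between Acme Corp ("Vendor") '
          'and Beta LLC ("Client").')
fields = extract_contract_fields(sample)
```

A trained sequence model (e.g., an LSTM tagger) would replace these brittle patterns, but the baseline gives labels and a quality floor to beat.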
FarmGuide

Anupam Arya
Posted by Anupam Arya
NCR (Delhi | Gurgaon | Noida)
0 - 8 yrs
₹7L - ₹14L / yr
Computer Security
Image processing
OpenCV
skill iconPython
Rational ClearCase
+8 more
FarmGuide is a data-driven tech startup aiming to digitize periodic processes and bring information symmetry to the agriculture supply chain through transparent, dynamic, and interactive software solutions. We at FarmGuide (https://angel.co/farmguide) help the Government make relevant and efficient policy by ensuring a seamless flow of information between stakeholders.

Job Description:

We are looking for individuals who want to help us design cutting-edge, scalable products to meet our rapidly growing business. We are building out the data science team and looking to hire across levels.

- Solving complex problems in the agri-tech sector, which are long-standing open problems at the national level.
- Applying computer vision techniques to satellite imagery to deduce artefacts of interest.
- Applying various machine learning techniques to digitize the existing physical corpus of knowledge in the sector.

Key Responsibilities:

- Develop computer vision algorithms for production use on satellite and aerial imagery.
- Implement models and data pipelines to analyse terabytes of data.
- Deploy built models in a production environment.
- Develop tools to assess algorithm accuracy.
- Implement algorithms at scale in the commercial cloud.

Skills Required:

- B.Tech/M.Tech in CS or related fields such as EE or MCA from IIT/NIT/BITS (not compulsory).
- Demonstrable interest in Machine Learning and Computer Vision, such as coursework or open-source contributions.
- Experience with digital image processing techniques.
- Familiarity/experience with geospatial, planetary, or astronomical datasets is valuable.
- Experience in writing algorithms to manipulate geospatial data.
- Hands-on knowledge of GDAL or open-source GIS tools is a plus.
- Familiarity with cloud systems (AWS/Google Cloud) and cloud infrastructure is a plus.
- Experience with high-performance or large-scale computing infrastructure might be helpful.
- Coding ability in R or Python.
- Self-directed team player who thrives in a continually changing environment.

What is on offer:

- High-impact role in a young start-up with colleagues from IITs and other Tier 1 colleges.
- Chance to work on the cutting edge of ML (yes, we do train neural nets on GPUs).
- Lots of freedom in terms of the work you do and how you do it.
- Flexible timings.
- Best start-up salary in the industry with additional tax benefits.
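As a concrete example of the satellite-imagery work described above: NDVI is a standard vegetation index computed per pixel from the near-infrared and red bands. Pure Python is used here for clarity; a real pipeline would do this with NumPy over GDAL-read rasters:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel: (NIR - Red) / (NIR + Red)."""
    if nir + red == 0:
        return 0.0  # guard against division by zero on empty pixels
    return (nir - red) / (nir + red)

def ndvi_grid(nir_band, red_band):
    """Apply NDVI pixel-wise over two equally shaped bands (lists of rows)."""
    return [[ndvi(n, r) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
```

Values near 1 indicate dense vegetation and values near 0 indicate bare soil, which is why NDVI maps are a common first artefact in crop-monitoring work.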
YCH Logistics

Sanatan Upmanyu
Posted by Sanatan Upmanyu
NCR (Delhi | Gurgaon | Noida)
0 - 5 yrs
₹2L - ₹5L / yr
skill iconPython
skill iconDeep Learning
MySQL
Job Description: Data Science Analyst / Data Science Senior Analyst

KSTYCH is seeking a Data Science Analyst to join our Data Science team. Individuals in this role are expected to be comfortable working as both a software engineer and a quantitative researcher, and should have a significant theoretical foundation in mathematical statistics. The ideal candidate will have a keen interest in the study of the pharma sector, network biology, text mining, and machine learning, and a passion for identifying and answering questions that help us build the best consulting resource and provide continuous support to other teams.

Responsibilities:

- Work closely with scientific, medical, business development, and commercial teams to identify and answer important healthcare/pharma/biology questions.
- Answer questions by using appropriate statistical techniques and tools on available data.
- Communicate findings to project managers and team managers.
- Drive the collection of new data and the refinement of existing data sources.
- Analyze and interpret the results of experiments.
- Develop best practices for instrumentation and experimentation and communicate those to other teams.

Requirements:

- B.Tech, M.Tech, M.S., or Ph.D. in a relevant technical field, or 1+ years of experience in a relevant role.
- Extensive experience solving analytical problems using quantitative approaches.
- Comfort manipulating and analyzing complex, high-volume, high-dimensionality data from varying sources.
- A strong passion for empirical research and for answering hard questions with data.
- A flexible analytic approach that allows for results at varying levels of precision.
- Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner.
- Fluency with at least one scripting language such as Python or PHP.
- Familiarity with relational databases and SQL.
- Experience working with large data sets; experience with distributed computing tools is a plus (KNIME, Map/Reduce, Hadoop, Hive, etc.).
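The text-mining interest mentioned above can be illustrated with a minimal TF-IDF weighting over a toy tokenized corpus; the documents below are made up:

```python
from collections import Counter
from math import log

def tfidf(docs):
    """Per-document TF-IDF weights over a tokenized corpus (natural-log IDF)."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append({t: (c / len(doc)) * log(n / df[t]) for t, c in tf.items()})
    return weighted

# Toy tokenized corpus; in practice tokens would come from real abstracts or notes.
docs = [["gene", "pathway", "gene"], ["pathway", "trial"], ["trial", "drug"]]
weights = tfidf(docs)
```

Terms frequent in one document but rare across the corpus score highest, which is why TF-IDF remains a common first pass before heavier NLP models.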