MLS Jobs in Delhi, NCR and Gurgaon


Apply to 11+ MLS Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest MLS Job opportunities across top companies like Google, Amazon & Adobe.

PAGO Analytics India Pvt Ltd
Posted by Vijay Cheripally
Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
2 - 8 yrs
₹8L - ₹15L / yr
Python
PySpark
Microsoft Windows Azure
SQL Azure
Data Analytics
+6 more
Be an integral part of large scale client business development and delivery engagements
Develop the software and systems needed for end-to-end execution on large projects
Work across all phases of SDLC, and use Software Engineering principles to build scaled solutions
Build the knowledge base required to deliver increasingly complex technology projects


Object-oriented languages (e.g. Python, PySpark, Java, C#, C++) and frameworks (e.g. J2EE or .NET)
Database programming using any flavour of SQL
Expertise in relational and dimensional modelling, including big data technologies
Exposure across the entire SDLC process, including testing and deployment
Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc.
Good knowledge of Python and Spark is required
Good understanding of how to enable analytics using cloud technology and MLOps
Experience in Azure infrastructure and Azure DevOps will be a strong plus
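As a lightweight illustration of the SQL database programming requirement above, a minimal sketch using Python's built-in sqlite3 module (the table, columns, and data here are hypothetical, invented for illustration):

```python
import sqlite3

# In-memory database with a hypothetical "jobs" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (title TEXT, city TEXT, min_lpa REAL, max_lpa REAL)")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?, ?)",
    [
        ("Data Engineer", "Gurgaon", 8, 15),
        ("ML Engineer", "Noida", 10, 25),
        ("Data Scientist", "Delhi", 10, 15),
    ],
)

# Parameterized query: average midpoint salary per city, highest first.
rows = conn.execute(
    """
    SELECT city, AVG((min_lpa + max_lpa) / 2.0) AS avg_mid
    FROM jobs
    WHERE max_lpa >= ?
    GROUP BY city
    ORDER BY avg_mid DESC
    """,
    (15,),
).fetchall()
print(rows)  # [('Noida', 17.5), ('Delhi', 12.5), ('Gurgaon', 11.5)]
conn.close()
```

The same parameterized style carries over to Azure SQL or any other SQL flavour; only the driver and connection string change.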
codersbrain
Posted by Tanuj Uppal
Delhi
4 - 8 yrs
₹2L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
  • Mandatory: hands-on experience in Python and PySpark.
  • Build PySpark applications using Spark DataFrames in Python, using Jupyter Notebook and PyCharm (IDE).
  • Experience optimizing Spark jobs that process huge volumes of data.
  • Hands-on experience with version control tools like Git.
  • Experience with Amazon's analytics services, such as Amazon EMR and Lambda functions.
  • Experience with Amazon's compute services, such as AWS Lambda and Amazon EC2, its storage service S3, and a few other services like SNS.
  • Experience/knowledge of bash/shell scripting will be a plus.
  • Experience working with fixed-width, delimited, and multi-record file formats.
  • Hands-on experience with tools like Jenkins to build, test, and deploy applications.
  • Awareness of DevOps concepts and the ability to work in an automated release pipeline environment.
  • Excellent debugging skills.
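The fixed-width and delimited file formats mentioned above can be parsed with plain Python before they ever reach Spark; a minimal sketch (the field widths and sample records are hypothetical):

```python
import csv
import io

# Hypothetical fixed-width layout: name (10 chars), city (8 chars), amount (6 chars).
FIELD_WIDTHS = [("name", 10), ("city", 8), ("amount", 6)]

def parse_fixed_width(line):
    """Slice one fixed-width record into a dict using the declared widths."""
    record, offset = {}, 0
    for field, width in FIELD_WIDTHS:
        record[field] = line[offset:offset + width].strip()
        offset += width
    return record

fixed = "Asha Rao  " + "Delhi   " + "    42"
print(parse_fixed_width(fixed))  # {'name': 'Asha Rao', 'city': 'Delhi', 'amount': '42'}

# The same data as a pipe-delimited record, read with the stdlib csv module.
delimited = io.StringIO("name|city|amount\nAsha Rao|Delhi|42\n")
rows = list(csv.DictReader(delimited, delimiter="|"))
print(rows[0]["amount"])  # 42
```

In a Spark job the same logic typically becomes a `substring`/`split` expression over a text DataFrame, but the offsets and delimiters are declared the same way.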
Career Forge
Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram


Hello,


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer

Location: Gurugram (Gurgaon)

Experience: 5+ years


Key Skills:

- Python

- Spark, PySpark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data.

- Ensure technical, methodological, and quality standards are met.

- Support CI/CD processes.

- Foster know-how development and transfer, and the continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/Numpy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash Scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Information Solution Provider Company

Agency job
via Jobdost by Sathish Kumar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 7 yrs
₹10L - ₹15L / yr
Spark
Scala
Hadoop
Big Data
Data engineering
+2 more

Responsibilities:

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing, and tooling to improve quality.
  • Reviewing and approving high-level & detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
  • Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access in the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.

Requirements:

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expertise in building and optimizing 'big data' data/ML pipelines, architectures, and data sets.
  • Knowledge of modelling unstructured data into structured data designs.
  • Experience in big data access and storage techniques.
  • Experience in cost estimation based on design and development.
  • Excellent debugging skills for the technical stack mentioned above, including analyzing server logs and application logs.
  • Highly organized, self-motivated, and proactive, with the ability to propose the best design solutions.
  • Good time management and multitasking skills to meet deadlines, working both independently and as part of a team.

Client is a Machine Learning company based in New Delhi.

Agency job
via Jobdost by Sathish Kumar
NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹10L - ₹25L / yr
Data Science
R Programming
Python
Machine Learning (ML)
Entity Framework
+2 more

Job Responsibilities

  • Design machine learning systems
  • Research and implement appropriate ML algorithms and tools
  • Develop machine learning applications according to requirements
  • Select appropriate datasets and data representation methods
  • Run machine learning tests and experiments
  • Perform statistical analysis and fine-tuning using test results
  • Train and retrain systems when necessary


Requirements for the Job


  1. Bachelor's/Master's/PhD in Computer Science, Mathematics, Statistics, or an equivalent field from a tier-one college, and a minimum of 2 years of overall experience.
  2. Minimum 1 year of experience working as a Data Scientist deploying ML at scale in production.
  3. Experience in machine learning techniques (e.g. NLP, Computer Vision, BERT, LSTM, etc.) and frameworks (e.g. TensorFlow, PyTorch, Scikit-learn, etc.).
  4. Working knowledge of deploying Python systems (using Flask, TensorFlow Serving).
  5. Preferred prior experience in Natural Language Processing (NLP): using LSTM and BERT; chatbots or dialogue systems; machine translation; text comprehension; text summarization.
  6. Computer Vision: deep neural networks/CNNs for object detection and image classification, transfer learning pipelines, and object detection/instance segmentation (Mask R-CNN, YOLO, SSD).
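The "statistical analysis and fine-tuning using test results" responsibility above can be sketched without any ML framework; a toy example that sweeps a decision threshold on a training split and reports held-out accuracy (the data and the threshold grid are invented):

```python
import random

random.seed(0)

# Hypothetical 1-D dataset: a score per example, where positives tend to score higher.
data = [(random.gauss(1.0 if label else -1.0, 1.0), label)
        for label in [0, 1] * 200]
random.shuffle(data)
train, holdout = data[:300], data[300:]

def accuracy(split, threshold):
    """Fraction of examples where (score >= threshold) matches the label."""
    return sum((x >= threshold) == bool(y) for x, y in split) / len(split)

# Fine-tune: sweep a grid of thresholds on the training split...
grid = [t / 10 for t in range(-20, 21)]
best = max(grid, key=lambda t: accuracy(train, t))

# ...then measure held-out performance once, on the untouched split.
print(best, round(accuracy(holdout, best), 3))
```

The pattern (tune on one split, evaluate once on another) is the same whether the "model" is a threshold or a deep network; only the search space grows.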
Information Solution Provider Company

Agency job
via Jobdost by Sathish Kumar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science
+3 more

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.


Our ideal candidate

The role would be a client-facing one, hence good communication skills are a must.

The candidate should have the ability to communicate complex models and analysis in a clear and precise manner.


The candidate would be responsible for:

  • Comprehending business problems properly: what to predict, how to build the DV (dependent variable), what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant for the business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code.
  • Experience in PySpark, machine learning, and deep learning
  • Big data experience, e.g. familiarity with Spark and Hadoop, is highly preferred
  • Familiarity with SQL or other databases.
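As a toy illustration of the stacking/ensembling idea mentioned above, averaging two weak base models (both hand-written and hypothetical, not real fitted models) and comparing Brier scores against each alone:

```python
# Two hypothetical base "models": each predicts a probability from one noisy feature.
def model_a(x):  # leans on the first feature
    return min(1.0, max(0.0, 0.5 + 0.4 * x[0]))

def model_b(x):  # leans on the second feature
    return min(1.0, max(0.0, 0.5 + 0.4 * x[1]))

def ensemble(x):
    """Simple averaging ensemble of the two base models."""
    return (model_a(x) + model_b(x)) / 2

def brier(model, data):
    """Mean squared error between the predicted probability and the 0/1 label."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Hypothetical labelled points; each model's favoured feature is noisy on some rows.
data = [((0.9, 0.1), 1), ((0.2, 1.0), 1), ((-0.8, 0.3), 0), ((0.4, -0.9), 0)]

scores = {m.__name__: brier(m, data) for m in (model_a, model_b, ensemble)}
print(scores)  # the ensemble's error is lower than either base model's
```

The averaging step is the simplest ensemble; stacking replaces it with a second-level model trained on the base models' predictions, but the error-cancellation intuition is the same.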
iLink Systems
Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Noida, Bengaluru (Bangalore)
5 - 15 yrs
₹10L - ₹15L / yr
Apache Kafka
Big Data
Java
Spark
Hadoop
+1 more
  • KSQL
  • Data Engineering spectrum (Java/Spark)
  • Spark Scala / Kafka Streaming
  • Confluent Kafka components
  • Basic understanding of Hadoop


Infogain
Agency job
via Technogen India PvtLtd by RAHUL BATTA
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
Data engineering
Python
SQL
Spark
PySpark
+10 more
Sr. Data Engineer:

Core Skills – Data Engineering, Big Data, PySpark, Spark SQL, and Python

Candidates with a prior Palantir Cloud Foundry or Clinical Trial Data Model background are preferred.

Major accountabilities:

  • Responsible for data engineering, Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, re-usable code development & management, and integrating internal or external systems with Foundry for data ingestion with high quality.
  • Has a good understanding of the Foundry Platform landscape and its capabilities.
  • Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
  • Defines company data assets (data models) and the PySpark/Spark SQL jobs that populate them.
  • Designs data integrations and the data quality framework.
  • Designs & implements integration with internal and external systems and the F1 AWS platform using Foundry Data Connector or Magritte Agent.
  • Collaborates with data scientists, data analysts, and technology teams to document and leverage their understanding of the Foundry integration with different data sources; actively participates in agile work practices.
  • Coordinates with the Quality Engineer to ensure that all quality controls, naming conventions & best practices have been followed.

Desired Candidate Profile:

  • Strong data engineering background
  • Experience with Clinical Data Model is preferred
  • Experience in
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for our back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, 2+ years' experience with the Palantir Foundry Platform, 4+ years' experience with Big Data platforms
  • 5+ years of Python and PySpark development experience
  • Strong troubleshooting and problem-solving skills
  • BTech or master's degree in computer science or a related technical field
  • Experience designing, building, and maintaining big data pipelines systems
  • Hands-on experience on Palantir Foundry Platform and Foundry custom Apps development
  • Able to design and implement data integration between Palantir Foundry and external Apps based on Foundry data connector framework
  • Hands-on in programming languages, primarily Python, R, Java, and Unix shell scripts
  • Hands-on experience with the AWS / Azure cloud platform and stack
  • Strong in API-based architecture and concepts; able to do a quick PoC using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.

Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision

SmartJoules
Posted by Saksham Dutta
Remote, NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹8L - ₹12L / yr
Machine Learning (ML)
Python
Big Data
Apache Spark
Deep Learning

Responsibilities:

  • Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world.
  • Verifying data quality, and/or ensuring it via data cleaning.
  • Able to adapt and work fast in producing output that upgrades the decision-making of stakeholders using ML.
  • To design and develop machine learning systems and schemes.
  • To perform statistical analysis and fine-tune models using test results.
  • To train and retrain ML systems and models as and when necessary.
  • To deploy ML models in production and manage the cost of cloud infrastructure.
  • To develop machine learning apps according to client and data scientist requirements.
  • To analyze the problem-solving capabilities and use-cases of ML algorithms and rank them by how successful they are in meeting the objective.


Technical Knowledge:


  • Worked with real-time problems, solved them using ML and deep learning models deployed in production, with some strong projects to showcase.
  • Proficiency in Python and experience working with the Jupyter framework, Google Colab, and cloud-hosted notebooks such as AWS SageMaker, Databricks, etc.
  • Proficiency in working with libraries such as scikit-learn, TensorFlow, OpenCV, PySpark, pandas, NumPy, and related libraries.
  • Expert in visualising and manipulating complex datasets.
  • Proficiency in working with visualisation libraries such as seaborn, plotly, matplotlib, etc.
  • Proficiency in the linear algebra, statistics, and probability required for machine learning.
  • Proficiency in ML algorithms, for example gradient boosting, stacked machine learning, classification algorithms, and deep learning algorithms. Needs experience in hyperparameter tuning of various models and comparing the results of algorithm performance.
  • Big data technologies such as the Hadoop stack and Spark.
  • Basic use of cloud VMs (e.g. EC2).
  • Brownie points for Kubernetes and task queues.
  • Strong written and verbal communications.
  • Experience working in an Agile environment.
market-leading fintech company dedicated to providing credit

Agency job
via Talent Socio Bizcon LLP by Hema Latha N
Noida, NCR (Delhi | Gurgaon | Noida)
1 - 4 yrs
₹8L - ₹18L / yr
Analytics
Predictive analytics
Linear regression
Logistic regression
Python
+1 more
Job Description:

Role: Analytics Scientist - Risk Analytics
Experience Range: 1 to 4 Years
Job Location: Noida

Key responsibilities include:

  • Building models to predict risk and other key metrics
  • Coming up with data-driven solutions to control risk
  • Finding opportunities to acquire more customers by modifying/optimizing existing rules
  • Doing periodic upgrades of the underwriting strategy based on business requirements
  • Evaluating 3rd-party solutions for predicting/controlling the risk of the portfolio
  • Running periodic controlled tests to optimize underwriting
  • Monitoring key portfolio metrics and taking data-driven actions based on performance

Business Knowledge: Develop an understanding of the domain/function. Manage business process(es) in the work area. The individual is expected to develop domain expertise in his/her work area.

Teamwork: Develop cross-site relationships to enhance leverage of ideas. Set and manage partner expectations. Drive implementation of projects with the Engineering team while partnering seamlessly with cross-site team members.

Communication: Responsibly perform end-to-end project communication across the various levels in the organization.

Candidate Specification:

Skills:

  • Knowledge of an analytical tool: R or Python
  • Established competency in predictive analytics (logistic & linear regression)
  • Experience in handling complex data sources
  • Dexterity with MySQL and MS Excel is good to have
  • Strong analytical aptitude and logical reasoning ability
  • Strong presentation and communication skills

Preferred:

  • 1 - 3 years of experience in the Financial Services/Analytics industry
  • Understanding of the financial services business
  • Experience in working on advanced machine learning techniques

If interested, please send your updated profile in Word format with the below details for further discussion at the earliest:

  1. Current Company
  2. Current Designation
  3. Total Experience
  4. Current CTC (Fixed & Variable)
  5. Expected CTC
  6. Notice Period
  7. Current Location
  8. Reason for Change
  9. Availability for a face-to-face interview on weekdays
  10. Education Degree

Thanks & Regards,
Hema, Talent Socio
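As a small, hypothetical illustration of the logistic-regression-style risk scoring this role calls for (the weights, bias, and features below are invented for the sketch, not a real underwriting model):

```python
import math

# Hypothetical fitted weights for a toy credit-risk score.
WEIGHTS = {"utilization": 2.0, "late_payments": 1.5}
BIAS = -3.0

def default_probability(features):
    """Logistic regression: sigmoid of a weighted sum of borrower features."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

low_risk = default_probability({"utilization": 0.2, "late_payments": 0})
high_risk = default_probability({"utilization": 0.9, "late_payments": 2})
print(round(low_risk, 3), round(high_risk, 3))
```

In practice the weights come from fitting on historical repayment data (e.g. with scikit-learn or statsmodels), and the probability feeds a cutoff rule in the underwriting strategy.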
Sagacito
Posted by Neha Verma
NCR (Delhi | Gurgaon | Noida)
3 - 9 yrs
₹6L - ₹18L / yr
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Data Science
Location: Gurgaon

Role:

  • The person will be part of the data science team, working closely with the business analysts and the technology team to deliver the data science portion of the project and product.
  • The data science contribution to a project can range between 30% and 80%.
  • Day-to-day activities will include data exploration to solve a specific problem, researching methods to be applied as the solution, setting up the ML process to be applied in the context of a specific engagement/requirement, contributing to building a DS platform, coding the solution, interacting with the client on explanations, integrating the DS solution with the technology solution, data cleaning and structuring, etc.
  • The nature of the work will depend on the stage of a specific engagement, available engagements, and individual skill.

At least 2-6 years of experience in:

  • Machine learning (including deep learning methods): algorithm design, analysis, development, and performance improvement
    • Strong understanding of statistical and predictive modeling concepts, machine-learning approaches, clustering, classification, regression techniques, and recommendation (collaborative filtering) algorithms
    • Time series analysis
    • Optimization techniques and work experience with solvers for MILP and global optimization
  • Data science
    • Good experience in exploratory data analysis and feature design & development
    • Experience applying and evaluating ML algorithms in practical predictive modeling scenarios in various verticals, including (but not limited to) FMCG, Media, E-commerce, and Hospitality
  • Proficiency in programming in Python (must have) & PySpark (good to have); parallel ML algorithm design, development, and usage for maximal performance on multi-core, distributed, and/or GPU architectures
  • Must be able to write production-ready code with reusable components and integration into the data science platform
  • Strong inclination to write structured code as per prevailing coding standards and best practices
  • Ability to design a data science architecture for repeatability of solutions
  • Preparedness to manage the whole cycle from data preparation to algorithm design to client presentation at an individual level
  • Comfort in working on AWS, including managing data science AWS servers
  • Team player with good communication and interpersonal skills
  • Good experience in Natural Language Processing and its applications (good to have)
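As a toy sketch of the recommendation (collaborative filtering) techniques listed above, a user-based similarity example in plain Python (the ratings matrix is hypothetical):

```python
from math import sqrt

# Hypothetical user -> item -> rating matrix.
ratings = {
    "u1": {"A": 5, "B": 3, "C": 4},
    "u2": {"A": 4, "B": 2, "C": 5},
    "u3": {"A": 1, "B": 5},
}

def cosine(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    return dot / (sqrt(sum(u[i] ** 2 for i in common)) *
                  sqrt(sum(v[i] ** 2 for i in common)))

def recommend(user):
    """Score unseen items by similarity-weighted ratings from other users."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("u3"))  # items u3 hasn't rated, best first
```

Production systems replace this brute-force loop with matrix factorization or approximate nearest neighbours, but the similarity-weighting idea is the same.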