
11+ Microsoft App-V Jobs in India

Apply to 11+ Microsoft App-V Jobs on CutShort.io. Find your next job, effortlessly. Browse Microsoft App-V Jobs and apply today!

Streetmark
Agency job
via STREETMARK Info Solutions by Mohan Guttula
Remote, Bengaluru (Bangalore), Chennai
3 - 9 yrs
₹3L - ₹20L / yr
SCCM
PL/SQL
App-V
Stani's Python Editor
AWS Simple Notification Service (SNS)
+3 more

Hi All,

We are hiring a Data Engineer for one of our clients, for the Bangalore and Chennai locations.


Requirements:

• Strong knowledge of SCCM, App-V, and Intune infrastructure

• Scripting in PowerShell, VBScript, or Python (a small Python sketch follows this list)

• Windows Installer

• Knowledge of the Windows 10 registry

• Application repackaging

• Application sequencing with App-V

• Deploying and troubleshooting applications, packages, and task sequences

• Security patch deployment and remediation

• Windows operating system patching and Defender updates
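For illustration, a small Python sketch of scripted package inspection on an App-V client, assuming the App-V client PowerShell module is present (Get-AppvClientPackage is the client cmdlet); this is a sketch, not part of the role's actual tooling:

    import subprocess

    # List App-V packages published on this machine via the client cmdlet.
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-AppvClientPackage -All | Select-Object Name, Version"],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout)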

 

Thanks,
Mohan.G

Curl Analytics
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹30L / yr
ETL
Big Data
Data engineering
Apache Kafka
PySpark
+11 more
What you will do
  • Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects with or without an AI component (a minimal PySpark sketch follows this list):
    • programmatically ingesting data from several static and real-time sources (incl. web scraping)
    • rendering results through dynamic interfaces, incl. web / mobile / dashboard, with the ability to log usage and granular user feedback
    • performance tuning and optimal implementation of complex Python scripts (using Spark), SQL (using stored procedures, Hive), and NoSQL queries in a production environment
  • Industrialize ML / DL solutions, deploy and manage production services, and proactively handle data issues arising on live apps
  • Perform ETL on large and complex datasets for AI applications; work closely with data scientists on performance optimization of large-scale ML/DL model training
  • Build data tools to facilitate fast data cleaning and statistical analysis
  • Ensure the data architecture is secure and compliant
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
  • Work closely with the APAC CDO and coordinate with a fully decentralized team across different locations in APAC and the global HQ (Paris)
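As a flavour of the pipeline work above, a minimal PySpark batch step; a sketch only, with hypothetical paths, columns, and bucket names:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingest-usage-logs").getOrCreate()

    # Ingest raw event logs (a static source here; streaming would use readStream).
    raw = spark.read.json("s3://example-bucket/usage-logs/")

    # Aggregate granular user feedback into a daily summary.
    daily = (
        raw.filter(F.col("event_type") == "feedback")
        .groupBy(F.to_date("timestamp").alias("day"))
        .agg(F.count("*").alias("feedback_count"))
    )

    # Partitioned Parquet keeps downstream reads and tuning manageable.
    daily.write.mode("overwrite").partitionBy("day").parquet(
        "s3://example-bucket/curated/feedback/"
    )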

You should be

  • An expert in structured and unstructured data in traditional and Big Data environments: Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
  • Excellent at Python programming in both traditional and distributed models (PySpark)
  • An expert in shell scripting and writing schedulers
  • Hands-on with Cloud: deploying complex data solutions in hybrid cloud / on-premise environments, for both data extraction/storage and computation
  • Hands-on in deploying production apps using large volumes of data with state-of-the-art technologies like Docker, Kubernetes, and Kafka
  • Strong on data security best practices
  • Experienced: 5+ years in a data engineering role
  • A Science / Engineering graduate from a Tier-1 university in the country
  • And most importantly, a passionate coder who really cares about building apps that can help people do things better, smarter, and faster, even while they sleep
A fast-growing Big Data company
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering
+6 more

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must-have skills: AWS Glue, DMS, SQL, Python, PySpark, data integration and DataOps

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

➢ 7 years of work experience with ETL, data modelling, and data architecture.

➢ Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.

➢ Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions.

➢ Orchestration using Airflow.


Technical Experience:

➢ Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines.

➢ Experience building data pipelines and applications to stream and process large datasets at low latency.


➢ Enhancements, new development, defect resolution, and production support of big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions, and Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, and Athena.

➢ Author ETL processes using Python and PySpark (a minimal Glue job sketch follows this list).

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ Monitor ETL processes using CloudWatch Events.

➢ You will be working in collaboration with other teams. Good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
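To illustrate the Glue/PySpark authoring above, a minimal Glue job skeleton; a sketch under assumptions, with a hypothetical database, table, and S3 path:

    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog, deduplicate, and land Parquet in S3.
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="example_db", table_name="raw_orders"
    )
    df = dyf.toDF().dropDuplicates(["order_id"])

    df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
    job.commit()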


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes.

➢ Expert-level skills in writing and optimizing SQL.

➢ Extensive real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Kloud9 Technologies
Remote only
8 - 14 yrs
₹35L - ₹45L / yr
Google Cloud Platform (GCP)
Agile/Scrum
SQL
Python
Apache Kafka
+1 more

Senior Data Engineer


Responsibilities:

●      Clean, prepare and optimize data at scale for ingestion and consumption by machine learning models

●      Drive the implementation of new data management projects and the restructuring of the current data architecture

●      Implement complex automated workflows and routines using workflow scheduling tools (a minimal Airflow sketch follows this list)

●      Build continuous integration, test-driven development and production deployment frameworks

●      Drive collaborative reviews of design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards 

●      Anticipate, identify and solve issues concerning data management to improve data quality

●      Design and build reusable components, frameworks and libraries at scale to support machine learning products 

●      Design and implement product features in collaboration with business and Technology stakeholders 

●      Analyze and profile data for the purpose of designing scalable solutions 

●      Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues

●      Mentor and develop other data engineers in adopting best practices 

●      Influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
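A minimal sketch of such a scheduled workflow, assuming Apache Airflow 2.x; the DAG name, task, and schedule are hypothetical:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def clean_and_prepare():
        # Placeholder for the actual cleaning/optimization logic.
        print("clean, prepare and optimize data for ML consumption")

    with DAG(
        dag_id="daily_feature_refresh",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ):
        PythonOperator(task_id="clean_and_prepare", python_callable=clean_and_prepare)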






Qualifications:

●      8+ years of experience developing scalable Big Data applications or solutions on distributed platforms

●      Experience with Google Cloud Platform (GCP); experience with other cloud platform tools is good to have

●      Experience working with Data warehousing tools, including DynamoDB, SQL, and Snowflake

●      Experience architecting data products in streaming, serverless, and microservices architectures and platforms

●      Experience with Spark (Scala/Python/Java) and Kafka

●      Work experience using Databricks (Data Engineering and Delta Lake components)

●      Experience working with Big Data platforms, including Dataproc, Databricks, etc.

●      Experience working with distributed technology tools including Spark, Presto, Databricks, Airflow

●      Working knowledge of Data warehousing, Data modeling


●      Experience working in Agile and Scrum development process

●      Bachelor's degree in Computer Science, Information Systems, Business, or other relevant subject area


Role: Senior Data Engineer

Total No. of Years: 8+ years of relevant experience

To be onboarded by: Immediate

Notice Period:

Skills (minimum to maximum years of project experience; a minimal BigQuery sketch follows this list):

• GCP exposure: Mandatory, 3 to 7 years
• BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, Spark and PySpark: Mandatory, 5 to 9 years
• Relational SQL: Mandatory, 4 to 8 years
• Shell scripting language: Mandatory, 4 to 8 years
• Python / Scala language: Mandatory, 4 to 8 years
• Airflow/Kubeflow workflow scheduling tool: Mandatory, 3 to 7 years
• Kubernetes: Desirable, 1 to 6 years
• Scala: Mandatory, 2 to 6 years
• Databricks: Desirable, 1 to 6 years
• Google Cloud Functions: Mandatory, 2 to 6 years
• GitHub source control tool: Mandatory, 4 to 8 years
• Machine Learning: Desirable, 1 to 6 years
• Deep Learning: Desirable, 1 to 6 years
• Data structures and algorithms: Mandatory, 4 to 8 years
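As an illustration of the BigQuery item above, a minimal query from Python, assuming the google-cloud-bigquery client and ambient credentials; the table name is hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()  # project and credentials come from the environment

    query = """
        SELECT city, COUNT(*) AS rides
        FROM `example-project.mobility.trips`
        GROUP BY city
        ORDER BY rides DESC
        LIMIT 10
    """
    for row in client.query(query).result():
        print(row.city, row.rides)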

Amagi Media Labs

Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational databases, NoSQL, ETL (such as Informatica, DataStage) / ELT, and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions; Data Lake knowledge is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of Digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms, generating a staggering amount of user data along the way. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Kaplan

Posted by Akshata Ranka
Bengaluru (Bangalore)
7 - 10 yrs
₹15L - ₹20L / yr
Statistical Analysis
Data mining
Data Visualization
Data Science
R Programming
+7 more

Senior Data Scientist - Job Description

The Senior Data Scientist is a creative problem solver who utilizes statistical/mathematical principles and modelling skills to uncover new insights that will significantly and meaningfully impact business decisions and actions. They apply their data science expertise to identifying, defining, and executing state-of-the-art techniques for academic opportunities and business objectives, in collaboration with other Analytics team members. The Senior Data Scientist will execute analyses & outputs spanning test design and measurement, predictive analytics, multivariate analysis, data/text mining, pattern recognition, artificial intelligence, and machine learning.

 

Key Responsibilities:

  • Perform the full range of data science activities, including test design and measurement, predictive/advanced analytics, data mining, and analytic dashboards.
  • Extract, manipulate, analyse & interpret data from various corporate data sources, developing advanced analytic solutions, deriving key observations, findings, and insights, and formulating actionable recommendations.
  • Generate clearly understood and intuitive data science / advanced analytics outputs.
  • Provide thought leadership and recommendations on business process improvement and analytic solutions to complex problems.
  • Participate in best-practice sharing and communication platforms for advancement of the data science discipline.
  • Coach and collaborate with other data scientists and data analysts.
  • Present impact, insights, outcomes & recommendations to key business partners and stakeholders.
  • Comply with established Service Level Agreements to ensure timely, high-quality deliverables with value-add recommendations and clearly articulated key findings and observations.

Qualification:

  • Bachelor's degree (B.A./B.S.) or Master's degree (M.A./M.S.) in Computer Science, Statistics, Mathematics, Machine Learning, Physics, or a similar field
  • 5+ years of experience in data science in a digitally advanced industry, focusing on strategic initiatives, marketing and/or operations.
  • Advanced knowledge of best-in-class analytic software tools and languages: Python, SQL, R, SAS, Tableau, Excel, PowerPoint.
  • Expertise in statistical methods, statistical analysis, data visualization, and data mining techniques.
  • Experience in test design, design of experiments, A/B testing, and measurement science (a minimal A/B-test sketch follows this list); strong influencing skills to drive a robust testing agenda and data-driven decision making for process improvements
  • Strong critical thinking skills to track down complex data and engineering issues, evaluate different algorithmic approaches, and analyse data to solve problems.
  • Experience partnering with IT, marketing operations & business operations to deploy predictive analytic solutions.
  • Ability to translate/communicate complex analytical/statistical/mathematical concepts to a non-technical audience.
  • Strong written and verbal communication skills, as well as presentation skills.
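A minimal A/B-test significance check, assuming statsmodels; the counts are hypothetical:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical results: conversions and sample sizes for variants A and B.
    conversions = [412, 380]
    samples = [5000, 5000]

    stat, p_value = proportions_ztest(conversions, samples)
    print(f"z = {stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("The difference between variants is statistically significant.")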

 

Analytics Consulting Company | REMOTE
Remote, Bengaluru (Bangalore)
2 - 4 yrs
₹18L - ₹20L / yr
Data Science
R Programming
Python
MongoDB
SQL
+3 more
If you want your software skills to contribute meaningfully to technology-driven solutions for various businesses while you grow your career, then read on.
 
Our client provides data-based process optimization and analytics solutions to businesses worldwide. Their innovative algorithms and customized IT solutions cater to complex problems in every field or industry, through non-standard tools backed by extensive research. They serve startups as well as large, medium, and small enterprises, and a majority of their clients are industry leaders.
 
With registered offices in India, the USA, and the UAE, their projects support various sectors and functions like logistics, IT, retail, e-commerce, and the healthcare industry, across Asia, America, and Europe. The founder holds a Master's degree from IIT and a PhD in Operations Research from the USA, with rich experience in optimization and analytics for various industries. His team of top scientists and pedagogy experts focuses on innovative revenue-generation ideas with minimum operational costs.
 
As a Data Scientist, you will apply expertise in machine-learning, data mining and statistical methods to design, prototype, and build the next-generation analytics engines and services.
 
What you will do:
  • Conducting advanced statistical analysis to provide actionable insights, identify trends, and measure performance
  • Performing data exploration, cleaning, preparation and feature engineering; in addition to executing tasks such as building a POC, validation/ AB testing
  • Collaborating with data engineers & architects to implement and deploy scalable solutions
  • Communicating results to diverse audiences with effective writing and visualizations
  • Identifying and executing high-impact projects, triaging external requests, and ensuring timely completion so that the results remain useful
  • Providing thought leadership by researching best practices, conducting experiments, and collaborating with industry leaders

 

 

What you need to have:
  • 2-4 years' experience with machine learning algorithms, predictive analytics, and demand forecasting in real-world projects (a minimal forecasting sketch follows this list)
  • Strong statistical background in descriptive and inferential statistics, regression, and forecasting techniques.
  • Strong programming background in Python (including packages like TensorFlow), R, D3.js, Tableau, Spark, SQL, MongoDB.
  • Preferred exposure to optimization & meta-heuristic algorithms and related applications
  • Background in a highly quantitative field like Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, Industrial Engineering, or similar fields.
  • Should have 2-4 years of experience in data science algorithm design and implementation, and data analysis in different applied problems.
  • DS mandatory skills: Python, R, SQL, deep learning, predictive analysis, applied statistics
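A minimal demand-forecasting sketch, assuming statsmodels' Holt-Winters implementation; the series is synthetic:

    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Synthetic monthly demand with yearly seasonality; replace with real data.
    demand = pd.Series(
        [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118] * 3,
        index=pd.date_range("2021-01-01", periods=36, freq="MS"),
    )

    model = ExponentialSmoothing(
        demand, trend="add", seasonal="add", seasonal_periods=12
    ).fit()
    print(model.forecast(6))  # demand forecast for the next six months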
Propellor.ai

Posted by Anila Nair
Remote only
2 - 5 yrs
₹5L - ₹15L / yr
SQL
API
Python
Spark

Job Description - Data Engineer

About us
Propellor aims to bring Marketing Analytics and other business workflows to the Cloud ecosystem. We work with international clients to make their analytics ambitions come true by deploying the latest tech stack and data science and engineering methods, making their business data insightful and actionable.

 

What is the role?
This team is responsible for building a Data Platform for many different units. The platform will be built on the Cloud, so in this role the individual will be organizing and orchestrating different data sources and giving recommendations on the services that fulfil goals based on the type of data.

Qualifications:

• Experience with Python, SQL, and Spark
• Basic knowledge of JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong in data, software, and system design patterns and architecture
• Building and maintaining APIs
• Strong soft skills and communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.

Key Responsibilities
• Design and develop the platform based on a microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Design and develop APIs for the front end to consume (a minimal sketch follows this list).
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meet both technical and consumer needs.
• Stay abreast of developments in web applications and programming languages.
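A minimal sketch of a backend endpoint for the front end to consume, assuming FastAPI; the route and fields are hypothetical, not Propellor's actual API:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Metric(BaseModel):
        name: str
        value: float

    @app.get("/metrics/{source}")
    def read_metrics(source: str) -> list[Metric]:
        # A real service would query the underlying data platform here.
        return [Metric(name=f"{source}_rows_ingested", value=42.0)]

Run locally with, for example, uvicorn main:app --reload.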

What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them; we are open to promising candidates who are passionate about their work and are team players.
• Education: BE/MCA or equivalent.
• Agnostic/polyglot across multiple tech stacks.
• Worked with open-source technologies: NodeJS, ReactJS, MySQL, NoSQL, MongoDB, DynamoDB.
• Good experience with front-end technologies like ReactJS.
• Backend exposure: good knowledge of building APIs.
• Worked with serverless technologies.
• Efficient at building microservices that combine the server and the front end.
• Knowledge of cloud architecture.
• Sound working experience with relational and columnar DBs.
• Innovative and communicative in approach.
• Will be responsible for the functional/technical track of a project.

Whom will you work with?
You will closely work with the engineering team and support the Product Team.

The hiring process includes:

a. A written test on Python and SQL

b. 2-3 rounds of interviews

Immediate joiners will be preferred.

Artivatic

Posted by Layak Singh
Bengaluru (Bangalore)
2 - 7 yrs
₹5L - ₹12L / yr
OpenCV
Machine Learning (ML)
Deep Learning
Python
Artificial Intelligence (AI)
+1 more
About Artivatic:
Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products & solutions for finance, healthcare & insurance businesses. It is based out of Bangalore with a 20+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million+ people to get insurance, financial access, and health benefits with alternative data sources, increasing their productivity, efficiency, automation power, and profitability, hence improving their way of doing business more intelligently & seamlessly. Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, automated decisions, monitoring, claims processing, sentiment/psychology behaviour, auto insurance claims, travel insurance, disease prediction for insurance, and more. We have raised US $300K earlier, built products successfully, and also done a few PoCs successfully with some top enterprises in the insurance, banking & health sectors. Currently, we are 4 months away from generating continuous revenue.

Skills:
We at Artivatic are seeking a passionate, talented, and research-focused computer engineer with a strong machine learning and computer vision background to help build industry-leading technology with a focus on document text extraction and parsing using OCR across different languages.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Computer Vision, or a related field, with a specialization in image processing or machine learning.
- Research experience in deep learning models for image processing or OCR-related fields is preferred.
- A publication record in deep learning models at computer vision conferences/journals is a plus.

Required Skills:
- Excellent Python development skills in a Linux environment; programming skills with multi-threaded GPU CUDA computing and API solutions.
- Experience applying machine learning and computer vision principles to real-world data, and working with scanned and documented images.
- Good knowledge of computer science, math, and statistics fundamentals (algorithms and data structures, meshing, sampling theory, linear algebra, etc.).
- Knowledge of data science technologies such as Python, Pandas, SciPy, NumPy, matplotlib, etc.
- Broad computer vision knowledge: construction, feature detection, segmentation, classification; machine/deep learning: algorithm evaluation, preparation, analysis, modeling, and execution.
- Familiarity with OpenCV, Dlib, YOLO, Capsule Networks, or similar, and with open-source AR platforms and products.
- Strong problem-solving and logical skills.
- A go-getter attitude with a willingness to learn new technologies.
- Well versed in software design paradigms and good development practices.

Responsibilities:
- Developing novel algorithms and modeling techniques to advance the state of the art in document and text extraction.
- Image recognition, object identification, and visual recognition.
- Working closely with R&D and machine learning engineers, implementing algorithms that power user- and developer-facing products.
- Being responsible for measuring and optimizing the quality of your algorithms.

Experience: 3 years+
Location: Sony World Signal, Koramangala 4th Block, Bangalore
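A minimal sketch of the document-text-extraction task described above, assuming OpenCV and pytesseract; the image path is hypothetical, and pytesseract also needs the Tesseract binary installed:

    import cv2
    import pytesseract

    # Hypothetical scanned page; replace with a real image path.
    image = cv2.imread("scanned_page.png")

    # Basic preprocessing: grayscale + Otsu binarization usually helps OCR on scans.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    print(pytesseract.image_to_string(binary))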
Yulu Bikes

Posted by Keerthana k
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹28L / yr
Big Data
Spark
Scala
Hadoop
Apache Kafka
+5 more
Job Description
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities
• Working with Big Data tools and frameworks to provide requested capabilities
• Identifying development needs in order to improve and streamline operations
• Developing and managing BI solutions
• Implementing ETL processes and data warehousing
• Monitoring performance and managing infrastructure

Skills 
• Proficient understanding of distributed computing principles
• Proficiency with Hadoop and Spark
• Experience building stream-processing systems using solutions such as Kafka and Spark Streaming (a minimal sketch follows this list)
• Good knowledge of data querying tools: SQL and Hive
• Knowledge of various ETL techniques and frameworks
• Experience with at least one of Python, Java, or Scala
• Experience with cloud services such as AWS or GCP
• Experience with NoSQL databases such as DynamoDB and MongoDB is an advantage
• Excellent written and verbal communication skills
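A minimal stream-processing sketch, assuming PySpark Structured Streaming with the Kafka source; the topic and servers are hypothetical, and the Kafka connector package must be on the Spark classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("ride-events").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "ride-events")
        .load()
    )

    # Kafka delivers the payload as bytes; cast it to a string and echo it.
    query = (
        events.select(col("value").cast("string").alias("payload"))
        .writeStream.format("console")
        .start()
    )
    query.awaitTermination()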
CloudMoyo

Posted by Sarabjeet Singh
Pune
10 - 16 yrs
₹10L - ₹20L / yr
Machine Learning (ML)
Python
Artificial Intelligence (AI)
Deep Learning
Natural Language Processing (NLP)
+3 more

Job Description:

Roles & Responsibilities:

· You will be involved in every part of the project lifecycle, right from identifying the business problem and proposing a solution, to data collection, cleaning, and preprocessing, to training and optimizing ML/DL models and deploying them to production.

· You will often be required to design and execute proof-of-concept projects that can demonstrate business value and build confidence with CloudMoyo’s clients.

· You will be involved in designing and delivering data visualizations that utilize the ML models to generate insights and intuitively deliver business value to CXOs.


Desired Skill Set:

· Candidates should have strong Python coding skills and be comfortable working with various ML/DL frameworks and libraries.

· Hands-on skills and industry experience in one or more of the following areas are necessary:

1)      Deep Learning (CNNs/RNNs, Reinforcement Learning, VAEs/GANs)

2)      Machine Learning (Regression, Random Forests, SVMs, K-means, ensemble methods; a minimal sketch follows this section)

3)      Natural Language Processing

4)      Graph Databases (Neo4j, Apache Giraph)

5)      Azure Bot Service

6)      Azure ML Studio / Azure Cognitive Services

7)      Log Analytics with NLP/ML/DL

· Previous experience with data visualization, C# or Azure Cloud platform and services will be a plus.

· Candidates should have excellent communication skills and be highly technical, with the ability to discuss ideas at any level from executive to developer.

· Creative problem-solving, unconventional approaches, and a hacker mindset are highly desired.
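A minimal sketch of area 2 above (Random Forests as an ensemble method), assuming scikit-learn and one of its built-in toy datasets:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))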
