Beautiful Soup Jobs in Chennai


Apply to 11+ Beautiful Soup Jobs in Chennai on CutShort.io. Explore the latest Beautiful Soup Job opportunities across top companies like Google, Amazon & Adobe.

OJCommerce

3 recruiters
Posted by Rajalakshmi N
Chennai
2 - 5 yrs
₹7L - ₹12L / yr
Beautiful Soup
Web Scraping
Python
Selenium

Role: Web Scraping Engineer

Experience: 2 to 3 years

Job Location: Chennai

About OJ Commerce: 


OJ Commerce (OJC), a rapidly expanding and profitable online retailer, is headquartered in Florida, USA, with a fully-functional office in Chennai, India. We deliver exceptional value to our customers by harnessing cutting-edge technology, fostering innovation, and establishing strategic brand partnerships to enable a seamless, enjoyable shopping experience featuring high-quality products at unbeatable prices. Our advanced, data-driven system streamlines operations with minimal human intervention.

Our extensive product portfolio encompasses over a million SKUs and more than 2,500 brands across eight primary categories. With a robust presence on major platforms such as Amazon, Walmart, Wayfair, Home Depot, and eBay, we directly serve consumers in the United States.

As we continue to forge new partner relationships, our flagship website, www.ojcommerce.com, has rapidly emerged as a top-performing e-commerce channel, catering to millions of customers annually.

Job Summary:

We are seeking a Web Scraping Engineer and Data Extraction Specialist who will play a crucial role in our data acquisition and management processes. The ideal candidate will be proficient in developing and maintaining efficient web crawlers capable of extracting data from large websites and storing it in a database. Strong expertise in Python, web crawling, and data extraction, along with familiarity with popular crawling tools and modules, is essential. Additionally, the candidate should demonstrate the ability to effectively utilize API tools for testing and retrieving data from various sources. Join our team and contribute to our data-driven success!


Responsibilities:


  • Develop and maintain web crawlers in Python.
  • Crawl large websites and extract data.
  • Store data in a database.
  • Analyze and report on data.
  • Work with other engineers to develop and improve our web crawling infrastructure.
  • Stay up to date on the latest crawling tools and techniques.
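The responsibilities above amount to a parse-and-store loop. Here is a minimal sketch using Beautiful Soup and SQLite — the sample HTML, CSS selectors, and table schema are invented for illustration, and fetching (e.g., via Requests, with politeness delays and robots.txt handling) is omitted; the sketch starts from already-downloaded HTML:

```python
import sqlite3

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Stand-in for a page the crawler has already downloaded.
SAMPLE_HTML = """
<ul class="products">
  <li class="product"><span class="name">Desk</span><span class="price">$120</span></li>
  <li class="product"><span class="name">Chair</span><span class="price">$45</span></li>
</ul>
"""

def extract_products(html):
    """Pull (name, price) pairs out of a listing page."""
    soup = BeautifulSoup(html, "html.parser")
    return [
        (li.select_one(".name").get_text(), li.select_one(".price").get_text())
        for li in soup.select("li.product")
    ]

def store_products(rows, conn):
    """Persist extracted rows into a SQLite table."""
    conn.execute("CREATE TABLE IF NOT EXISTS products (name TEXT, price TEXT)")
    conn.executemany("INSERT INTO products VALUES (?, ?)", rows)
    conn.commit()

rows = extract_products(SAMPLE_HTML)
conn = sqlite3.connect(":memory:")
store_products(rows, conn)
count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
```

In practice the same extract/store pair would be called once per crawled page, with deduplication handled by the database.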



Required Skills and Qualifications:


  • Bachelor's degree in computer science or a related field.
  • 2-3 years of experience with Python and web crawling.
  • Familiarity with tools/modules such as Scrapy, Selenium, Requests, and Beautiful Soup.
  • API tools such as Postman or equivalent. 
  • Working knowledge of SQL.
  • Experience with web crawling and data extraction.
  • Strong problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Excellent communication and documentation skills.


What we offer:

• Competitive salary

• Medical benefits / accident cover

• Flexible office working hours

• Fast-paced startup environment

Top Management Consulting Company

Agency job
via People First Consultants by Naveed Mohd
Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
DevOps
Microsoft Windows Azure
gitlab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+15 more
Greetings!!

We are looking for a technically driven MLOps Engineer for one of our premium clients.

COMPANY DESCRIPTION:
This company is a global management consulting firm and a trusted advisor to the world's leading businesses, governments, and institutions, working with leading organizations across the private, public, and social sectors.


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP) with strong knowledge of cloud services integration, and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, Gitlab, Argo)
• Experience with modern development methods and tooling: Containers (e.g., docker) and
container orchestration (K8s), CI/CD tools (e.g., Circle CI, Jenkins, GitHub actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAGs tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on Python 3 coding skills, including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tooling (e.g., experiment tracking,
model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least
one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
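The testing expectation above (pytest-style automated tests around Python services) can be illustrated with a toy handler. Both the handler and the tests below are invented for this sketch; they run as plain asserts but would also be collected by pytest:

```python
def health_handler(request: dict) -> dict:
    """Hypothetical API endpoint: returns a status payload for GET only."""
    if request.get("method") != "GET":
        return {"status": 405, "body": "method not allowed"}
    return {"status": 200, "body": "ok"}

def test_health_handler_accepts_get():
    assert health_handler({"method": "GET"}) == {"status": 200, "body": "ok"}

def test_health_handler_rejects_post():
    assert health_handler({"method": "POST"})["status"] == 405

# pytest would discover the test_* functions automatically; they also run directly:
test_health_handler_accepts_get()
test_health_handler_rejects_post()
```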
one-to-one, one-to-many, and many-to-many

Agency job
via The Hub by Sridevi Viswanathan
Chennai
5 - 9 yrs
₹1L - ₹15L / yr
PowerBI
Python
Spark
Data Analytics
Databricks

Position Overview: We are seeking a talented Data Engineer with expertise in Power BI to join our team. The ideal candidate will be responsible for designing and implementing data pipelines, as well as developing insightful visualizations and reports using Power BI. Additionally, the candidate should have strong skills in Python, data analytics, PySpark, and Databricks. This role requires a blend of technical expertise, analytical thinking, and effective communication skills.

Key Responsibilities:

  1. Design, develop, and maintain data pipelines and architectures using PySpark and Databricks.
  2. Implement ETL processes to extract, transform, and load data from various sources into data warehouses or data lakes.
  3. Collaborate with data analysts and business stakeholders to understand data requirements and translate them into actionable insights.
  4. Develop interactive dashboards, reports, and visualizations using Power BI to communicate key metrics and trends.
  5. Optimize and tune data pipelines for performance, scalability, and reliability.
  6. Monitor and troubleshoot data infrastructure to ensure data quality, integrity, and availability.
  7. Implement security measures and best practices to protect sensitive data.
  8. Stay updated with emerging technologies and best practices in data engineering and data visualization.
  9. Document processes, workflows, and configurations to maintain a comprehensive knowledge base.
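Responsibility 2 describes a standard extract-transform-load loop. Here is a stdlib-only sketch of the pattern — a real pipeline in this role would use PySpark and Databricks, and the CSV sample and table name are invented:

```python
import csv
import io
import sqlite3

# Extract: read raw rows from a CSV source (an in-memory file here).
raw = io.StringIO("region,sales\nsouth,100\nnorth,250\nsouth,50\n")
rows = list(csv.DictReader(raw))

# Transform: cast types and aggregate sales per region.
totals = {}
for r in rows:
    totals[r["region"]] = totals.get(r["region"], 0) + int(r["sales"])

# Load: write the aggregates into a warehouse-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE region_sales (region TEXT PRIMARY KEY, total INTEGER)")
conn.executemany("INSERT INTO region_sales VALUES (?, ?)", totals.items())
result = dict(conn.execute("SELECT region, total FROM region_sales").fetchall())
```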

Requirements:

  1. Bachelor’s degree in Computer Science, Engineering, or related field. (Master’s degree preferred)
  2. Proven experience as a Data Engineer with expertise in Power BI, Python, PySpark, and Databricks.
  3. Strong proficiency in Power BI, including data modeling, DAX calculations, and creating interactive reports and dashboards.
  4. Solid understanding of data analytics concepts and techniques.
  5. Experience working with Big Data technologies such as Hadoop, Spark, or Kafka.
  6. Proficiency in programming languages such as Python and SQL.
  7. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
  8. Excellent analytical and problem-solving skills with attention to detail.
  9. Strong communication and collaboration skills to work effectively with cross-functional teams.
  10. Ability to work independently and manage multiple tasks simultaneously in a fast-paced environment.

Preferred Qualifications:

  • Advanced degree in Computer Science, Engineering, or related field.
  • Certifications in Power BI or related technologies.
  • Experience with data visualization tools other than Power BI (e.g., Tableau, QlikView).
  • Knowledge of machine learning concepts and frameworks.


A Product Based Client, Chennai

Agency job
via SangatHR by Anna Poorni
Chennai
4 - 8 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Spark
PySpark
+2 more

Analytics Job Description

We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will partner closely with leaders across the organization, working together to understand the how and why of people, team, and company challenges, workflows, and culture. The team is responsible for delivering data and insights that drive decision-making, execution, and investments for our product initiatives.

You will work cross-functionally with product, marketing, sales, engineering, finance, and our customer-facing teams, enabling them with data and narratives about the customer journey. You'll also work closely with other data teams, such as data engineering and product analytics, to ensure we are creating a strong data culture at Blend that enables our cross-functional partners to be more data-informed.

Role: Data Engineer

Please find the JD for the Data Engineer role below.

Location: Guindy, Chennai

How you'll contribute:

• Develop objectives and metrics, ensure priorities are data-driven, and balance short-term and long-term goals

• Develop deep analytical insights to inform and influence product roadmaps and business decisions, and help improve the consumer experience

• Work closely with GTM and supporting operations teams to author and develop core data sets that empower analyses

• Deeply understand the business and proactively spot risks and opportunities

• Develop dashboards and define metrics that drive key business decisions

• Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch, and Workato

• Design our Analytics and Business Intelligence architecture, assessing and implementing new technologies that fit

• Work with our engineering teams to continually make our data pipelines and tooling more resilient

Who you are:

• Bachelor's degree or equivalent required from an accredited institution with a quantitative focus such as Economics, Operations Research, Statistics, or Computer Science, OR 1-3 years of experience as a Data Analyst, Data Engineer, or Data Scientist

• Strong SQL and data modeling skills, with experience applying them to thoughtfully create data models in a warehouse environment

• A proven track record of using analysis to drive key decisions and influence change

• A strong storyteller, able to communicate effectively with managers and executives

• Demonstrated ability to define metrics for product areas, understand the right questions to ask, push back on stakeholders in the face of ambiguous, complex problems, and work with diverse teams with different goals

• A passion for documentation

• A solution-oriented growth mindset; you'll need to be a self-starter who thrives in a dynamic environment

• A bias towards communication and collaboration with business and technical stakeholders

• Quantitative rigor and systems thinking

• Prior startup experience preferred, but not required

• Interest or experience in machine learning techniques (such as clustering, decision trees, and segmentation)

• Familiarity with a scientific computing language, such as Python, for data wrangling and statistical analysis

• Experience with a SQL-focused data transformation framework such as dbt

• Experience with a Business Intelligence tool such as Mode or Tableau


Mandatory Skillset:

• Very strong SQL
• Spark, PySpark, or Python
• Shell scripting
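The SQL transformation work described above (e.g., with a framework such as dbt) boils down to layering models over raw tables. A small sketch against SQLite — the table names and filter logic are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT);
INSERT INTO raw_orders VALUES
    (1, 120.0, 'complete'),
    (2, 80.0, 'cancelled'),
    (3, 200.0, 'complete');

-- Staging model: filter and rename, the way a dbt model would.
CREATE VIEW stg_orders AS
SELECT id AS order_id, amount AS order_amount
FROM raw_orders
WHERE status = 'complete';
""")
total = conn.execute("SELECT SUM(order_amount) FROM stg_orders").fetchone()[0]
```

dbt generates and manages such views/tables from templated SELECT statements, plus testing and lineage on top.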


Mobile Programming LLC

1 video
34 recruiters
Posted by Sukhdeep Singh
Chennai
4 - 7 yrs
₹13L - ₹15L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
+10 more

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (remote and Chennai office)
Experience: 4+ years
Budget: 16-18 LPA

Responsibilities:

  • Parse data using Python, create dashboards in Tableau.
  • Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
  • Migrate Datastage jobs to Snowflake, optimize performance.
  • Work with HDFS, Hive, Kafka, and basic Spark.
  • Develop Python scripts for data parsing, quality checks, and visualization.
  • Conduct unit testing and web application testing.
  • Implement Apache Airflow and handle production migration.
  • Apply data warehousing techniques for data cleansing and dimension modeling.
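The data-parsing and quality-check responsibilities above can be sketched in a few lines of stdlib Python — the sensor CSV and validity range are invented for illustration:

```python
import csv
import io

# Stand-in for a raw extract: one missing value, one out-of-range reading.
RAW = io.StringIO("id,temperature\n1,21.5\n2,\n3,-999\n4,22.1\n")

def quality_check(rows, low=-50.0, high=60.0):
    """Split readings into valid values and rejected row ids."""
    valid, rejected = {}, []
    for r in rows:
        try:
            t = float(r["temperature"])
        except ValueError:  # missing or unparseable value
            rejected.append(r["id"])
            continue
        if low <= t <= high:
            valid[r["id"]] = t
        else:  # physically implausible reading
            rejected.append(r["id"])
    return valid, rejected

valid, rejected = quality_check(csv.DictReader(RAW))
```

In a pipeline, the rejected ids would be routed to a quarantine table or alert rather than silently dropped.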

Requirements:

  • 4+ years of experience as a Platform Engineer.
  • Strong Python skills, knowledge of Tableau.
  • Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
  • Proficient in Unix Shell Scripting and SQL.
  • Familiarity with ETL tools like DataStage and DMExpress.
  • Understanding of Apache Airflow.
  • Strong problem-solving and communication skills.

Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.

Amagi Media Labs

3 recruiters
Posted by Rajesh C
Chennai
15 - 18 yrs
Best in industry
Data architecture
Architecture
Data Architect
Architect
Java
+5 more
Job Title: Data Architect
Job Location: Chennai
Job Summary

The Engineering team is seeking a Data Architect. As a Data Architect, you will drive the Data Architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable, and distributed data platforms using cloud-based solutions to process high-volume, high-velocity, and a wide variety of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities

The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management, and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g., GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype, and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies, and in-memory databases.
• Experience with DevOps, Continuous Integration, and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design, and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills

• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills: demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT, and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.
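The workflow-orchestration requirement is, at its core, about executing tasks in dependency order. Tools like Airflow model this as a DAG; the standard library can illustrate the idea (the task names are invented):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
    "report": {"load"},
}

# An orchestrator would run these in topological order,
# parallelizing tasks whose dependencies are all satisfied.
order = list(TopologicalSorter(dag).static_order())
```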

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups: Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Amagi Media Labs

3 recruiters
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive the Data Architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable, and distributed data platforms using cloud-based solutions to process high-volume, high-velocity, and a wide variety of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities

The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management, and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g., GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype, and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies, and in-memory databases.
• Experience with DevOps, Continuous Integration, and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design, and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills

• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills: demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT, and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups: Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
Tredence
Posted by Sharon Joseph
Bengaluru (Bangalore), Gurugram, Chennai, Pune
7 - 10 yrs
Best in industry
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Python
+1 more

Job Summary

As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.

  1. Work with the team to define business requirements, come up with analytical solutions, and deliver the solution with a specific focus on the big picture to drive robustness of the solution.
  2. Work with teams of smart collaborators. Be responsible for their appraisals and career development.
  3. Participate in and lead executive presentations with client leadership stakeholders.
  4. Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life.
  5. See how your work contributes to building an organization, and be able to drive org-level initiatives that will challenge and grow your capabilities.

Role & Responsibilities

  1. Serve as expert in Data Science, build framework to develop Production level DS/AI models.
  2. Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
  3. Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
  4. Lead and manage the onsite-offshore relation, at the same time adding value to the client.
  5. Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
  6. Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
  7. Present results, insights, and recommendations to senior management with an emphasis on the business impact.
  8. Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
  9. Lead or contribute to org level initiatives to build the Tredence of tomorrow.

 

Qualification & Experience

  1. Bachelor's/Master's/PhD degree in a quantitative field (CS, Machine Learning, Mathematics, Statistics, Data Science) or equivalent experience.
  2. 6-10+ years of experience in data science, building hands-on ML models.
  3. Expertise in ML: regression, classification, clustering, time series modeling, graph networks, recommender systems, Bayesian modeling, deep learning, computer vision, NLP/NLU, reinforcement learning, federated learning, meta learning.
  4. Proficient in some or all of the following techniques: linear and logistic regression, decision trees, random forests, k-nearest neighbors, support vector machines, ANOVA, principal component analysis, gradient boosted trees, ANN, CNN, RNN, Transformers.
  5. Knowledge of programming languages: SQL, Python/R, Spark.
  6. Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
  7. Experience with cloud computing services (AWS, GCP, or Azure).
  8. Expert in statistical modeling and algorithms, e.g., hypothesis testing, sample size estimation, A/B testing.
  9. Knowledge of mathematical programming (linear programming, mixed integer programming, etc.) and stochastic modeling (Markov chains, Monte Carlo, stochastic simulation, queuing models).
  10. Experience with optimization solvers (Gurobi, CPLEX) and algebraic modeling languages (PuLP).
  11. Knowledge of GPU code optimization and Spark MLlib optimization.
  12. Familiarity with deploying and monitoring ML models in production, delivering data products to end users.
  13. Experience with ML CI/CD pipelines.
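The A/B testing expertise in item 8 can be illustrated with a two-proportion z-test built from the normal CDF. The conversion counts below are invented, and a production analysis would typically use scipy or statsmodels rather than hand-rolled math:

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic and two-sided p-value for comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail, two-sided
    return z, p_value

# Variant B converts at 13% vs A's 10%, with 2,000 users in each arm.
z, p_value = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
```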
SUVI

Agency job
via SUVI BUSINESS VENTURE by VINOTH KUMAR
Chennai
3 - 4 yrs
₹3L - ₹7L / yr
DevOps
Bash
Linux/Unix
Python
Java
+3 more
  • 3 to 4 years of professional experience as a DevOps / System Engineer
  • Command-line experience with Linux, including writing bash scripts
  • Programming in Python, Java, or similar
  • Fluent in Python and Python testing best practices
  • Extensive experience working within AWS and with its managed products (EC2, ECS, ECR, R53, SES, ElastiCache, RDS, VPCs, etc.)
  • Strong experience with containers (Docker, Compose, ECS)
  • Version control system experience (e.g., Git)
  • Networking fundamentals
  • Ability to learn and apply new technologies through self-learning

Responsibilities

  • As part of a team, implement DevOps infrastructure projects
  • Design and implement secure automation solutions for development, testing, and production environments
  • Build and deploy automation, monitoring, and analysis solutions
  • Manage our continuous integration and delivery pipeline to maximize efficiency
  • Implement industry best practices for system hardening and configuration management
  • Secure, scale, and manage Linux virtual environments
  • Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring
  • Continuously evaluate existing systems against industry standards and make recommendations for improvement
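The monitoring responsibility above can be sketched as a simple threshold check; in a real setup such checks would feed an alerting system, and the path and threshold below are placeholders:

```python
import shutil

def check_disk(path="/", threshold=0.9):
    """Return (used_fraction, alert) for a mount point."""
    usage = shutil.disk_usage(path)
    used = usage.used / usage.total
    return used, used >= threshold

used, alert = check_disk("/")
print(f"disk used: {used:.1%}, alert: {alert}")
```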
codeMantra

3 recruiters
Posted by Ranjith PR
Chennai
13.5 - 28 yrs
₹15L - ₹37L / yr
Data Science
Machine Learning (ML)
Deep Learning
OpenCV
Solution architecture
+4 more

GREETINGS FROM CODEMANTRA !!!

 

EXCELLENT OPPORTUNITY FOR DATA SCIENCE/AI AND ML ARCHITECT !!!

 

Skills and Qualifications

 

* Strong hands-on experience in Python programming

* Working experience with Computer Vision models: object detection and image classification

* Good experience in feature extraction, feature selection techniques, and transfer learning

* Working experience in building deep learning NLP models for text classification and image analytics (CNN, RNN, LSTM)

* Working experience in any of the AWS/GCP cloud platforms, with exposure to fetching data from various sources

* Good experience in exploratory data analysis, data visualisation, and other data pre-processing techniques

* Knowledge of any one of the DL frameworks such as TensorFlow, PyTorch, Keras, or Caffe

* Good knowledge of statistics, data distributions, and supervised and unsupervised machine learning algorithms

* Exposure to OpenCV

* Familiarity with GPUs and CUDA

* Experience with NVIDIA software for cluster management and provisioning, such as nvsm, dcgm, and DeepOps

We are looking for a candidate with 9+ years of relevant experience who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:

* Experience with big data tools: Hadoop, Spark, Kafka, etc.
* Experience with AWS cloud services: EC2, RDS, AWS SageMaker (added advantage)
* Experience with object-oriented or functional scripting languages: Python, Java, C++, Scala, etc.

Responsibilities

* Selecting features, building, and optimizing classifiers using machine learning techniques
* Data mining using state-of-the-art methods
* Enhancing data collection procedures to include information that is relevant for building analytic systems
* Processing, cleansing, and verifying the integrity of data used for analysis
* Creating automated anomaly detection systems and constantly tracking their performance
* Assembling large, complex data sets that meet functional and non-functional business requirements
* Securing and managing GPU cluster resources for events when needed
* Writing comprehensive internal feedback reports and finding opportunities for improvements
* Managing GPU instances/machines to increase the performance and efficiency of ML/DL models
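The anomaly-detection responsibility, in its simplest form, is a z-score filter over a metric stream. The latency numbers and threshold below are invented, and production systems would use rolling windows or model-based detectors rather than a single global mean:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

latencies = [102, 98, 101, 99, 103, 100, 97, 500]  # one obvious outlier
anomalies = zscore_anomalies(latencies, threshold=2.0)
```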

 

Regards

Ranjith PR
Agency job
via UpgradeHR by Lavanya G N
Hyderabad, Chennai
3 - 6 yrs
₹8L - ₹12L / yr
Java
C++
Python
open source
OpenDaylight
+1 more
Our client is the largest U.S. wireless company, with the largest 4G LTE network, available in more than 500 markets and covering more than 301 million people. It ranked No. 1 in the telecommunications sector of Fortune magazine's 2013 "World's Most Admired Companies" list and has more than 180,900 employees worldwide.