Generalized linear model Jobs in Bangalore (Bengaluru)


Apply to 11+ Generalized linear model Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Generalized linear model Job opportunities across top companies like Google, Amazon & Adobe.

Banyan Data Services
1 recruiter
Posted by Sathish Kumar
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr
Data Science
Data Scientist
MongoDB
Java
Big Data
+14 more

Senior Big Data Engineer 

Note: Notice period: 45 days

Banyan Data Services (BDS) is a US-based, data-focused company headquartered in San Jose, California, that specializes in comprehensive data solutions and services.

 

We are looking for a Senior Hadoop Big Data Engineer with expertise in solving complex data problems across a big data platform. You will be part of our development team based out of Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure.

 

It's a once-in-a-lifetime opportunity to join our rocket-ship startup run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges.

 

 

Key Qualifications

 

·   5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience with Spark on big data, including experience with data profiling and building transformations

· Knowledge of microservices architecture is a plus

· Experience with NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or other streaming tools

· Knowledge of Scala is preferable

· Experience with agile application development

· Exposure to cloud technologies, including containers and Kubernetes

· Demonstrated experience performing DevOps for platforms

· Strong grasp of data structures and algorithms, with the ability to write efficient code

· Exposure to graph databases

· Passion for learning new technologies and the ability to do so quickly

· A Bachelor's degree in a computer-related field or equivalent professional experience is required
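The data profiling mentioned above can be illustrated in miniature. This is a plain-Python sketch; at scale the same logic would run as Spark aggregations over a DataFrame, and the record fields and values here are hypothetical:

```python
# Minimal data-profiling sketch: per-field null rate and distinct count.
from collections import defaultdict

def profile(records):
    """Return {field: {"null_rate": float, "distinct": int}} for a list of dicts."""
    counts = defaultdict(lambda: {"nulls": 0, "values": set()})
    for rec in records:
        for field, value in rec.items():
            if value is None:
                counts[field]["nulls"] += 1
            else:
                counts[field]["values"].add(value)
    n = len(records)
    return {
        field: {"null_rate": c["nulls"] / n, "distinct": len(c["values"])}
        for field, c in counts.items()
    }

# Hypothetical records standing in for rows of a big data table.
records = [
    {"user_id": 1, "city": "Bangalore"},
    {"user_id": 2, "city": None},
    {"user_id": 3, "city": "Chennai"},
    {"user_id": 4, "city": "Bangalore"},
]
stats = profile(records)
```

In Spark the equivalent would be a handful of `agg` calls (count of nulls, `countDistinct`) per column; the sketch just makes the computed quantities concrete.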

 

Key Responsibilities

 

· Scope and deliver solutions, with the ability to design them independently based on high-level architecture

· Design and develop big-data-focused microservices

· Work on big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies on the cloud

· Willingness to learn new technologies and take on research-oriented projects

· Strong interpersonal skills; contribute to team efforts by delivering results as needed

It's a deep-tech and research company.

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹25L / yr
Data Science
Python
Natural Language Processing (NLP)
Deep Learning
Long short-term memory (LSTM)
+8 more
Job Description: 
 
We are seeking passionate engineers experienced in software development using Machine Learning (ML) and Natural Language Processing (NLP) techniques to join our development team in Bangalore, India. We're a fast-growing startup working on an enterprise product: an intelligent data-extraction platform for various types of documents.
 
Your responsibilities: 
 
• Build, improve, and extend NLP capabilities
• Research and evaluate different approaches to NLP problems
• Write well-designed code that produces deliverable results
• Write code that scales and can be deployed to production
 
You must have: 
 
• A solid grounding in statistical methods is a must
• Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data, and neural networks (RNN, LSTM)
• A solid foundation in Python, data structures, algorithms, and general software development skills
• Ability to apply machine learning to problems that deal with language
• Engineering ability to build robustly scalable pipelines
• Ability to work in a multi-disciplinary team with a strong product focus
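One building block from the list above, vector representations of textual data, can be sketched as a minimal bag-of-words vectorizer. Real systems would use learned embeddings or LSTM states; the documents below are invented for illustration:

```python
# Bag-of-words: map each document to a fixed-length count vector
# over a shared vocabulary.
def build_vocab(texts):
    """Assign each distinct lowercase token an index, in sorted order."""
    vocab = sorted({tok for text in texts for tok in text.lower().split()})
    return {tok: i for i, tok in enumerate(vocab)}

def vectorize(text, vocab):
    """Count occurrences of each vocabulary token in the text."""
    vec = [0] * len(vocab)
    for tok in text.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1
    return vec

# Hypothetical snippets from the document-extraction domain.
docs = ["invoice total due", "total amount due on invoice"]
vocab = build_vocab(docs)
v = vectorize("invoice due", vocab)
```

The resulting vectors can feed any downstream classifier; neural approaches replace the counts with dense, learned representations.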
Top Management Consulting Company

Agency job
Bengaluru (Bangalore), Gurugram
2 - 8 yrs
₹10L - ₹35L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Python
+11 more
Greetings!!

We are looking for a Machine Learning engineer for one of our premium clients.
Experience: 2-9 years
Location: Gurgaon/Bangalore
Tech Stack:

• Python, PySpark, and the Python scientific stack
• MLflow, Grafana, and Prometheus for machine learning pipeline management and monitoring
• SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, and Dask/RAPIDS
• Django, GraphQL, and ReactJS for horizontal product development
• Container technologies such as Docker and Kubernetes, with CircleCI/Jenkins for CI/CD
• Cloud solutions such as AWS, GCP, and Azure, as well as Terraform and CloudFormation for deployment
Tier 1 MNC

Agency job
Chennai, Pune, Bengaluru (Bangalore), Noida, Gurugram, Kochi (Cochin), Coimbatore, Hyderabad, Mumbai, Navi Mumbai
3 - 12 yrs
₹3L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
Greetings,
We are hiring for a Tier 1 MNC for a software developer role requiring good knowledge of Spark, Hadoop, and Scala.
Kaleidofin
3 recruiters
Posted by Poornima B
Chennai, Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
Data Science
Machine Learning (ML)
Python
SQL
Natural Language Processing (NLP)
• 4+ years of experience in advanced analytics, model building, and statistical modeling
• Solid technical and data-mining skills and the ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL, and other programming/scripting languages to translate data into business decisions and results
• Be data-driven and outcome-focused
• Must have good business judgment with a demonstrated ability to think creatively and strategically
• Must be an intuitive, organized analytical thinker, with the ability to perform detailed analysis
• Takes personal ownership; self-starter; able to drive projects with minimal guidance and focus on high-impact work
• Learns continuously; seeks out knowledge, ideas, and feedback
• Looks for opportunities to build own skills, knowledge, and expertise
• Experience with big data and cloud computing, viz. Spark and Hadoop (MapReduce, Pig, Hive)
• Experience in risk and credit score domains preferred
• Comfortable with ambiguity and frequent context-switching in a fast-paced environment
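The "extract and manipulate large datasets using Python and SQL" requirement above can be sketched against an in-memory SQLite database. The `loans` table and its columns are hypothetical, chosen to echo the risk and credit domain mentioned in the listing:

```python
# Extract and aggregate with SQL from Python, shown on an in-memory
# SQLite database (a stand-in for a production warehouse).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (borrower TEXT, amount REAL, defaulted INTEGER)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [("A", 1000, 0), ("B", 2500, 1), ("C", 1500, 0), ("D", 3000, 1)],
)
# Average loan amount and count per default status.
rows = conn.execute(
    "SELECT defaulted, COUNT(*), AVG(amount) FROM loans "
    "GROUP BY defaulted ORDER BY defaulted"
).fetchall()
```

The same GROUP BY pattern is how summary features for a credit model would typically be pulled from a relational store before modeling in Python.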
Kloud9 Technologies
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Google Cloud Platform (GCP)
PySpark
Python
Scala

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers, and developers helps retailers launch a successful cloud initiative so they can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce adoption time and effort, so clients benefit directly from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by the heavy spend on physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to the retail industry, empowering our clients to take their business to the next level. Our team of proficient architects, engineers, and developers has been designing, building, and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


●    Overall 8+ years of experience in web application development

●    5+ years of development experience with Java 8, Spring Boot, microservices, and middleware

●    3+ years of experience designing middleware on the Node.js platform

●    Good to have: 2+ years of experience using Node.js with the AWS serverless platform

●    Good experience with JavaScript/TypeScript, event loops, Express.js, GraphQL, SQL databases (MySQL), NoSQL databases (MongoDB), and YAML templates

●    Good experience with test-driven development (TDD) and automated unit testing

●    Good experience exposing and consuming REST APIs on the Java 8 / Spring Boot platform, with Swagger API contracts

●    Good experience building Node.js middleware performing transformations, routing, aggregation, orchestration, and authentication (JWT/OAuth)

●    Experience supporting and working with cross-functional teams in a dynamic environment

●    Experience working in Agile Scrum methodology

●    Very good problem-solving skills

●    A quick learner with a passion for technology

●    Excellent verbal and written communication skills in English

●    Ability to communicate effectively with team members and business stakeholders


Secondary Skill Requirements:

 

● Experience working with any of LoopBack, NestJS, hapi.js, Sails.js, or Passport.js


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help build your career path in the cutting-edge technologies of AI, machine learning, and data science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

NeuranceAI Technologies Private Limited
Bengaluru (Bangalore)
2 - 5 yrs
₹4L - ₹12L / yr
Named-entity recognition
Long short-term memory (LSTM)
Natural Language Processing (NLP)
BERT
NLU
+5 more
Eligibility Criteria

2-5 yrs of proven experience in ML, DL, and preferably NLP.

Preferred Educational Background - B.E/B.Tech, M.S./M.Tech, Ph.D.


What will you work on?
1) Problem formulation and solution design for ML/NLP applications across complex, well-defined as well as open-ended healthcare problems.
2) Cutting-edge machine learning, data mining, and statistical techniques to analyse and utilise large-scale structured and unstructured clinical data.
3) End-to-end development of company-proprietary AI engines: data collection, cleaning, data modelling, model training and testing, monitoring, and deployment.
4) Research and innovation on novel ML algorithms and their applications suited to the problem at hand.


What are we looking for?
1) Deep understanding of business objectives and the ability to formulate a problem as a data science problem.
2) Solid expertise in knowledge graphs, graph neural nets, clustering, and classification.
3) Strong understanding of data normalization techniques, SVMs, random forests, and data visualization techniques.
4) Expertise in RNNs, LSTMs, and other neural network architectures.
5) DL frameworks: TensorFlow, PyTorch, Keras.
6) High proficiency with standard database skills (e.g., SQL, MongoDB, graph databases), data preparation, cleaning, and wrangling/munging.
7) Comfortable with web scraping, extracting, manipulating, and analyzing complex, high-volume, high-dimensionality data from varying sources.
8) Experience deploying ML models on cloud platforms like AWS or Azure.
9) Familiarity with version control using Git, Bitbucket, SVN, or similar.
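The clustering expertise asked for above can be illustrated with a toy one-dimensional k-means. In practice one would use scikit-learn or the graph and neural methods named in the listing; the points and initial centroids here are made up:

```python
# Toy k-means with k = 2 on one-dimensional points and fixed initial
# centroids, alternating assignment and centroid-update steps.
def kmeans_1d(points, c0, c1, iters=10):
    """Cluster points around two centroids; returns final centroids."""
    for _ in range(iters):
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]  # assign
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0 = sum(a) / len(a)                                   # update
        c1 = sum(b) / len(b)
    return c0, c1

# Two obvious clusters around 1.0 and 8.0.
points = [1.0, 1.2, 0.8, 8.0, 8.4, 7.6]
c0, c1 = kmeans_1d(points, 0.0, 10.0)
```

The sketch omits the real-world concerns (empty clusters, random restarts, convergence checks) that library implementations handle.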


Why choose us?
1) We offer competitive remuneration.
2) We offer opportunities to work on exciting, cutting-edge machine learning problems, so you contribute towards transforming the healthcare industry.
3) We offer the flexibility to choose your tools, methods, and ways to collaborate.
4) We value new ideas and encourage creative thinking.
5) We offer an open culture where you will work closely with the founding team and have the chance to influence product design and execution.
6) And, of course, the thrill of being part of an early-stage startup, launching a product, and seeing it in the hands of users.
CommerceIQ
3 recruiters
Posted by Abhijit Ravuri
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹35L / yr
Data Science
Data Scientist
R Programming
Python
Machine Learning (ML)

CommerceIQ is Hiring Data Scientist (3-5 yrs)

 

At CommerceIQ, we are building the world’s most sophisticated E-commerce Channel Optimization software to help brands leverage Machine Learning, Analytics and Automation to grow their E-commerce business on all channels, globally.

Using CommerceIQ as a single source of truth, customers have driven 40% increase in incremental sales, 20% improvement in profitability and 32% reduction in out of stock rates on Amazon.

 

What You’ll Be Doing

As a Senior Data Scientist, you will work closely with the Engineering, Product, and Operations teams to build state-of-the-art ML-based solutions for B2B SaaS products. This entails not only leveraging advanced techniques for prediction, time-series forecasting, topic modelling, and optimisation, but also a deep understanding of business and product.

  • Apply excellent problem solving skills to deconstruct and formulate solutions from first-principles
  • Work on data science roadmap and build the core engine of our flagship CommerceIQ product
  • Collaborate with product and engineering to design product strategy, identify key metrics to drive and support with proof of concept
  • Perform rapid prototyping of experimental solutions and develop robust, sustainable and scalable production systems
  • Work with large-scale e-commerce data from the biggest brands on Amazon
  • Apply out-of-the-box, advanced algorithms to complex problems in real-time systems
  • Drive productization of techniques to be made available to a wide range of customers
  • Work with and mentor fellow team members on your owned charter

What we are looking for -

  • Bachelor’s or Masters in Computer Science or Maths/Stats from a reputed college with 4+ years of experience in solving data science problems that have driven value to customers
  • Good depth and breadth in machine learning (theory and practice), optimization methods, data mining, statistics and linear algebra. Experience in NLP would be an advantage
  • Hands-on programming skills and ability to write modular and scalable code in Python/R. Knowledge of SQL is required
  • Familiarity with distributed computing architecture like Spark, Map-Reduce paradigm and Hadoop will be an added advantage
  • Strong spoken and written communication skills, able to explain complex ideas in a simple, intuitive manner, write/maintain good technical documentation on projects
  • Experience with building ML data products in an engineering organization interfacing with other teams and departments to deliver impact
  • We are looking for candidates who are curious self-starters and who obsess over customer problems to deliver maximum value to them

Job Type: Full-time

Experience:

  • Data Scientist: 3 years (Required)

Application Question:

  • Looking for product-based industry experience and candidates from tier 1/tier 2 colleges (NIT, BIT, IIT, IIIT, BITS, or other strong profiles)
MNC

Agency job
via Fragma Data Systems by Geeti Gaurav Mohanty
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹12L / yr
Spark
Big Data
Data engineering
Hadoop
Apache Kafka
+5 more
Data Engineer

• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical, and physical data models for the implementation teams
• Strong SQL is a must: advanced working knowledge of SQL, experience with a variety of relational databases, and SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow, and Kafka
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda, and Elasticsearch
• Ability to use a major programming language (e.g., Python or Java) to process data for modelling
Dataweave Pvt Ltd
32 recruiters
Posted by Pramod Shivalingappa S
Bengaluru (Bangalore)
5 - 7 yrs
Best in industry
Python
Data Science
R Programming
(Senior) Data Scientist Job Description

About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science@DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale! 

What do we offer?
● Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
● Ability to see the impact of your work and the value you're adding to our customers almost immediately.
● Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
● A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
● Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
● Last but not least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem.

You are also expected to develop capabilities that open up new business productization opportunities.

We are looking for someone with a Master's degree and 1+ years of experience working on problems in NLP or Computer Vision.

If you have 4+ years of relevant experience with a Master's degree (PhD preferred), you will be considered for a senior role.

Key problem areas
● Preprocessing and feature extraction on noisy and unstructured data, both text and images.
● Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
● Document clustering, attribute tagging, data normalization, classification, summarization, and sentiment analysis.
● Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, and recommender systems.
● Ensemble approaches for all the above problems using multiple text- and image-based techniques.
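The keyphrase extraction problem above can be sketched, very crudely, as frequency-based keyword scoring; production systems would use sequence labeling or embedding-based methods. The stopword list and sample text are illustrative:

```python
# Frequency-based keyword extraction: tokenize, drop stopwords,
# return the most frequent remaining terms.
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "and", "to", "in", "on", "for", "is"}

def keywords(text, top_n=3):
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

# A made-up retail-domain snippet.
text = ("Price of the product and price history of the product "
        "drive competitive price intelligence.")
top = keywords(text)
```

Even this naive scorer surfaces the domain terms; the research problem is doing the same robustly on noisy, multi-domain web text.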

Relevant set of skills
● Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms, and complexity.
● Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
● Excellent coding skills in multiple programming languages, with experience building production-grade systems. Prior experience with Python is a bonus.
● Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
● Experience building robust clustering and classification models on unstructured data (text, images, etc.). Experience working with retail domain data is a bonus.
● Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
● Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, and Tensorflow.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Be a self-starter: someone who thrives in fast-paced environments with minimal 'management'.
● It's a huge bonus if you have some personal projects (including open-source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.

Role and responsibilities
● Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
● Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the retail domain.
● Build robust clustering and classification models in an iterative manner that can be used in production.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Take end to end ownership of the projects you are working on. Work with minimal supervision.
● Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
● Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
● Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
● Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.
Rely
3 recruiters
Posted by Hizam Ismail
Bengaluru (Bangalore)
2 - 10 yrs
₹8L - ₹35L / yr
Python
Hadoop
Spark
Amazon Web Services (AWS)
Big Data
+2 more

Intro

Our data and risk team is the core pillar of our business; it harnesses alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.


What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.

• Optimize and automate ingestion processes for a variety of data sources, such as click-stream, transactional, and many other sources.

  • Create and maintain optimal data pipeline architecture and ETL processes
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Develop data pipelines and infrastructure to support real-time decisions
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies
  • Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
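The real-time pipeline responsibilities above can be illustrated with a tumbling-window aggregation over a click-stream, written in plain Python; a production system would use Spark Streaming, Flink, or Kafka consumers. The event data is hypothetical:

```python
# Tumbling-window event counts: bucket each event by the start of its
# fixed-size time window and count events per window.
from collections import defaultdict

def windowed_counts(events, window_secs):
    """events: (timestamp, user_id) pairs -> {window_start: event_count}."""
    counts = defaultdict(int)
    for ts, _user in events:
        window_start = ts - (ts % window_secs)
        counts[window_start] += 1
    return dict(counts)

# Hypothetical click-stream events: (timestamp in seconds, user id).
events = [(0, "u1"), (5, "u2"), (12, "u1"), (19, "u3"), (21, "u2")]
counts = windowed_counts(events, 10)
```

Stream processors apply the same bucketing incrementally as events arrive, plus handling for late and out-of-order data that this batch sketch ignores.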


What will you need
• 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
• Experience dealing with large-scale data

  • Proficiency in writing and debugging complex SQL
  • Experience working with AWS big data tools
  • Ability to lead projects and implement best data practices and technology

Data Pipelining

  • Strong command of building and optimizing data pipelines, architectures, and data sets
  • Strong command of relational SQL and NoSQL databases, including Postgres
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Big Data: Strong experience in big data tools & applications

  • Tools: Hadoop, Spark, HDFS, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
  • Message queuing: RabbitMQ, etc.

Software Development & Debugging

  • Strong experience in object-oriented and functional programming/scripting languages: Python, Java, C++, Scala, etc.
  • Strong grasp of data structures and algorithms

What would be a bonus

  • Prior experience working in a fast-growth Startup
  • Prior experience in the payments, fraud, lending, advertising companies dealing with large scale data