Foundry Jobs in Bangalore (Bengaluru)


Apply to 11+ Foundry Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Foundry Job opportunities across top companies like Google, Amazon & Adobe.

Infogain
Agency job
via Technogen India Pvt Ltd by Rahul Batta
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
Data engineering
Python
SQL
Spark
PySpark
Sr. Data Engineer

Core Skills – Data Engineering, Big Data, PySpark, Spark SQL, and Python

Candidates with a prior Palantir Foundry or clinical trial data model background are preferred.

Major accountabilities:

  • Responsible for data engineering, Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, reusable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion.
  • Develop a good understanding of the Foundry platform landscape and its capabilities.
  • Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
  • Define company data assets (data models) and the PySpark/Spark SQL jobs that populate them.
  • Design data integration and data quality frameworks.
  • Design and implement integrations with internal and external systems and the F1 AWS platform, using Foundry Data Connector or Magritte agents.
  • Collaborate with data scientists, data analysts, and technology teams to document and leverage their understanding of Foundry's integration with different data sources; actively participate in agile work practices.
  • Coordinate with quality engineers to ensure that all quality controls, naming conventions, and best practices are followed.
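A Foundry pipeline step of the kind described above is normally authored as a PySpark transform; as an illustrative sketch only, here is the same filter/derive shape in plain Python (the record fields and quality rules are hypothetical, not from the posting):

```python
# Illustrative only: the filter/derive shape of a typical data pipeline
# cleaning step, written in plain Python. In Foundry the same logic would
# be a PySpark transform; all field names here are hypothetical.

def clean_trial_records(records):
    """Drop rows with missing subject IDs and derive a completion flag."""
    cleaned = []
    for row in records:
        if not row.get("subject_id"):
            continue  # data-quality rule: subject_id is mandatory
        cleaned.append({
            "subject_id": row["subject_id"],
            "site": row.get("site", "UNKNOWN"),
            "completed": row.get("visits_done", 0) >= row.get("visits_planned", 0),
        })
    return cleaned

raw = [
    {"subject_id": "S-001", "site": "BLR", "visits_done": 4, "visits_planned": 4},
    {"subject_id": None, "site": "PUN", "visits_done": 1, "visits_planned": 4},
    {"subject_id": "S-002", "visits_done": 2, "visits_planned": 4},
]
print(clean_trial_records(raw))
```

In an actual Foundry repository this logic would live inside a decorated transform function operating on Spark DataFrames rather than lists of dicts.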

Desired Candidate Profile :

  • Strong data engineering background
  • Experience with Clinical Data Model is preferred
  • Experience in:
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, including 2+ years with the Palantir Foundry platform and 4+ years with big data platforms
  • 5+ years of Python and PySpark development experience
  • Strong troubleshooting and problem-solving skills
  • BTech or master's degree in computer science or a related technical field
  • Experience designing, building, and maintaining big data pipeline systems
  • Hands-on experience with the Palantir Foundry platform and Foundry custom app development
  • Able to design and implement data integration between Palantir Foundry and external apps based on the Foundry data connector framework
  • Hands-on experience with programming languages, primarily Python, R, Java, and Unix shell scripting
  • Hands-on experience with the AWS / Azure cloud platform and stack
  • Strong grasp of API-based architecture and concepts; able to build quick PoCs using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users

 Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision

Conversational AI – Product Development company

Agency job
via The Hub by Sridevi Viswanathan
Bengaluru (Bangalore)
5 - 10 yrs
₹12L - ₹15L / yr
Artificial Intelligence (AI)
Chatbot
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)

Chatbot Developer

We are a conversational AI product development company located in the USA and Bangalore.

 

We are looking for a Senior Chatbot/JavaScript Developer to join the Avaamo PSG (delivery) team.

 

Responsibilities:

  • Work as an independent team member analyzing requirements and designing, coding, and implementing Conversational AI products.
  • Act as a product expert, working closely with IT managers and business groups to gather requirements and translate them into the required technical solution.
  • Drive solution implementation using the conversational design approach.
  • Develop, deploy, and maintain customized extensions to the Avaamo platform specific to customer requirements.
  • Conduct training and technical guidance sessions for partner and customer development teams.
  • Evaluate reported defects and correct prioritized ones.
  • Travel onsite to customer locations for close support.
  • Document how-tos and implement best practices for Avaamo product solutions.

 

Requirements:

  • Strong programming experience in JavaScript and HTML/CSS.
  • Experience creating and consuming REST APIs and SOAP services.
  • Strong knowledge and awareness of web technologies and current web trends.
  • Working knowledge of security in web applications and services.
  • Experience using the Node.js framework, with a good understanding of the underlying architecture.
  • Experience deploying web applications on Linux servers in a production environment.
  • Excellent communication skills.

 

Good to haves:

  • Full-stack experience; UI and UX design experience or insights
  • Working knowledge of AI, ML, and NLP.
  • Experience integrating enterprise systems such as MS Dynamics CRM, Salesforce, ServiceNow, and MS Active Directory.
  • Experience building single sign-on into web/mobile applications.
  • Ability to learn the latest technologies and lead small engineering teams.
Read more
Anicaa Data

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹25L / yr
TensorFlow
PyTorch
Machine Learning (ML)
Data Science
data scientist

Job Title – Data Scientist (Forecasting)

Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply their skill set to solve complex and challenging problems. The role centers on applying deep learning models to real-world applications. The candidate should have experience training and testing deep learning architectures, and is expected to work on existing codebases or write optimized new code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.

 

Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)

 

Required Skills:

  • 3+ years of experience in a Data Scientist role
  • Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
  • Experience with large data sets, big data, and analytics
  • Exposure to statistical modeling, forecasting, and machine learning; deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, and time series forecasting
  • Training machine learning (ML) algorithms in the areas of forecasting and prediction
  • Experience developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
  • Research and enhance existing in-house and open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
  • Experience translating business needs into problem statements, prototypes, and minimum viable products
  • Experience managing complex projects, including scoping, requirements gathering, resource estimation, sprint planning, and management of internal and external communication and resources
  • Write C++ and Python code, using TensorFlow and PyTorch, to build and enhance the platform used for training ML models

Preferred Experience

  • Worked on forecasting projects – both classical and ML models
  • Experience training time series forecasting methods such as Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as neural network (NN) models such as feed-forward and nonlinear autoregressive networks
  • Strong background in forecasting accuracy drivers
  • Experience in Advanced Analytics techniques such as regression, classification, and clustering
  • Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
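As a minimal sketch of the simplest of the methods listed above, a Moving Average (MA) baseline forecast can be rolled forward in a few lines of plain Python (illustrative numbers; a real pipeline would use statsmodels or an equivalent library):

```python
# A Moving Average baseline: each forecast point is the mean of the last
# `window` observed (or previously forecast) values, rolled forward.

def moving_average_forecast(series, window, horizon):
    """Forecast `horizon` future points from the trailing `window` mean."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        avg = sum(history[-window:]) / window
        forecasts.append(avg)
        history.append(avg)  # roll the forecast forward
    return forecasts

demand = [100, 102, 101, 105, 107, 106]
print(moving_average_forecast(demand, window=3, horizon=2))
```

This is the baseline that ARIMA and neural models are typically benchmarked against for forecasting accuracy.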
A fast growing Big Data company

Agency job
via Careerconnects by Kumar Narayanan
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must-Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration, and DataOps

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

7 years of work experience with ETL, data modelling, and data architecture. Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark. Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions, with orchestration using Airflow.


Technical Experience:

Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latency.


➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python and PySpark.

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will be working in collaboration with other teams; good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
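The orchestration mentioned above (Glue/Lambda steps sequenced by Airflow) boils down to running tasks in dependency order. A stdlib-only sketch of that idea, with hypothetical task names (a real deployment would declare an Airflow DAG instead):

```python
# Topological ordering is the core idea behind DAG orchestration:
# each task runs only after all of its upstream dependencies.

from graphlib import TopologicalSorter

# task -> set of tasks it depends on (all names hypothetical)
dag = {
    "ingest_s3": set(),
    "glue_transform": {"ingest_s3"},
    "load_redshift": {"glue_transform"},
    "data_quality_check": {"glue_transform"},
    "publish_athena": {"load_redshift", "data_quality_check"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)
```

Airflow adds scheduling, retries, and monitoring (e.g. via CloudWatch) on top of exactly this ordering guarantee.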


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, EC2 clusters highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Mobile Programming LLC

Posted by Sukhdeep Singh
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
Snowflake schema
Snowflake

Job Title: AWS-Azure Data Engineer with Snowflake

Location: Bangalore, India

Experience: 4+ years

Budget: 15 to 20 LPA

Notice Period: Immediate joiners or less than 15 days

Job Description:

We are seeking an experienced AWS-Azure Data Engineer with expertise in Snowflake to join our team in Bangalore. As a Data Engineer, you will be responsible for designing, implementing, and maintaining data infrastructure and systems using AWS, Azure, and Snowflake. Your primary focus will be on developing scalable and efficient data pipelines, optimizing data storage and processing, and ensuring the availability and reliability of data for analysis and reporting.

Responsibilities:

  1. Design, develop, and maintain data pipelines on AWS and Azure to ingest, process, and transform data from various sources.
  2. Optimize data storage and processing using cloud-native services and technologies such as AWS S3, AWS Glue, Azure Data Lake Storage, Azure Data Factory, etc.
  3. Implement and manage data warehouse solutions using Snowflake, including schema design, query optimization, and performance tuning.
  4. Collaborate with cross-functional teams to understand data requirements and translate them into scalable and efficient technical solutions.
  5. Ensure data quality and integrity by implementing data validation, cleansing, and transformation processes.
  6. Develop and maintain ETL processes for data integration and migration between different data sources and platforms.
  7. Implement and enforce data governance and security practices, including access control, encryption, and compliance with regulations.
  8. Collaborate with data scientists and analysts to support their data needs and enable advanced analytics and machine learning initiatives.
  9. Monitor and troubleshoot data pipelines and systems to identify and resolve performance issues or data inconsistencies.
  10. Stay updated with the latest advancements in cloud technologies, data engineering best practices, and emerging trends in the industry.
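Point 5 above (validation, cleansing, transformation) can be sketched as a simple quality gate that coerces types and quarantines bad rows before load; the column names and rules here are hypothetical:

```python
# A minimal data-quality gate: coerce types, reject invalid rows,
# and report both streams so bad records can be quarantined.

def validate_rows(rows):
    good, bad = [], []
    for row in rows:
        try:
            cleaned = {
                "order_id": str(row["order_id"]).strip(),
                "amount": float(row["amount"]),  # coercion failure -> quarantined
            }
            if cleaned["amount"] < 0:
                raise ValueError("negative amount")
            good.append(cleaned)
        except (KeyError, ValueError, TypeError) as err:
            bad.append((row, str(err)))
    return good, bad

good, bad = validate_rows([
    {"order_id": " A1 ", "amount": "19.99"},
    {"order_id": "A2", "amount": "oops"},
    {"order_id": "A3", "amount": -5},
])
print(len(good), len(bad))
```

In an AWS/Azure/Snowflake pipeline the same gate would typically run inside the ETL job, with quarantined rows landed in a separate table or bucket for review.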

Requirements:

  1. Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
  2. Minimum of 4 years of experience as a Data Engineer, with a focus on AWS, Azure, and Snowflake.
  3. Strong proficiency in data modelling, ETL development, and data integration.
  4. Expertise in cloud platforms such as AWS and Azure, including hands-on experience with data storage and processing services.
  5. In-depth knowledge of Snowflake, including schema design, SQL optimization, and performance tuning.
  6. Experience with scripting languages such as Python or Java for data manipulation and automation tasks.
  7. Familiarity with data governance principles and security best practices.
  8. Strong problem-solving skills and ability to work independently in a fast-paced environment.
  9. Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams and stakeholders.
  10. Immediate joiner or notice period less than 15 days preferred.

If you possess the required skills and are passionate about leveraging AWS, Azure, and Snowflake to build scalable data solutions, we invite you to apply. Please submit your resume and a cover letter highlighting your relevant experience and achievements in the AWS, Azure, and Snowflake domains.

Kloud9 Technologies
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Google Cloud Platform (GCP)
PySpark
skill iconPython
skill iconScala

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers, and developers helps retailers launch a successful cloud initiative so they can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce cloud adoption time and effort, so clients benefit directly from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by, and spends heavily on, physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is providing cloud expertise to the retail industry, empowering our clients to take their business to the next level. Our team of proficient architects, engineers, and developers has been designing, building, and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


●    8+ years of overall experience in web application development.

●    5+ years of development experience with Java 8, Spring Boot, microservices, and middleware.

●    3+ years designing middleware on the Node.js platform.

●    Good to have: 2+ years using Node.js with the AWS serverless platform.

●    Good experience with JavaScript/TypeScript, event loops, Express.js, GraphQL, SQL databases (MySQL), NoSQL databases (MongoDB), and YAML templates.

●    Good experience with test-driven development and automated unit testing.

●    Good experience exposing and consuming REST APIs on Java 8/Spring Boot, and with Swagger API contracts.

●    Good experience building Node.js middleware performing transformations, routing, aggregation, orchestration, and authentication (JWT/OAuth).

●    Experience supporting and working with cross-functional teams in a dynamic environment.

●    Experience working with the Agile Scrum methodology.

●    Very good problem-solving skills.

●    A quick learner with a passion for technology.

●    Excellent verbal and written communication skills in English.

●    Ability to communicate effectively with team members and business stakeholders.
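The authentication bullet above mentions JWT; at its core, verifying an HS256 token is a single HMAC comparison. A stdlib-only sketch (a production middleware would use a vetted JWT library and also validate claims such as `exp`):

```python
# HS256 JWT signing and verification, reduced to the essentials:
# base64url-encode header and payload, HMAC-SHA256 the pair, compare.

import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign_hs256({"sub": "user-42"}, b"s3cret")
print(verify_hs256(token, b"s3cret"), verify_hs256(token, b"wrong"))
```

The same check is what a Node.js middleware does under the hood before routing an authenticated request onward.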


Secondary Skill Requirements:

 

● Experience working with any of LoopBack, NestJS, Hapi.js, Sails.js, Passport.js


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help you build a career path in cutting-edge technologies such as AI, machine learning, and data science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

upGrad

Posted by Priyanka Muralidharan
Bengaluru (Bangalore), Mumbai
4 - 6 yrs
₹19L - ₹24L / yr
SQL
Python
Tableau
Team Management
Statistical Analysis

Role Summary

We are looking for an analytically inclined, insights-driven product analyst to make our organisation more data-driven. In this role you will be responsible for creating dashboards to drive insights for product and business teams. From day-to-day decisions to long-term impact assessment, and from measuring the efficacy of different products to that of individual teams, you'll be empowering each of them. The growing nature of the team will require you to be in touch with all of the teams at upGrad. If you are the go-to person everyone turns to for data, then this role is for you.

 

Roles & Responsibilities

  • Lead and own the analysis of highly complex data sources, identifying trends and patterns in the data, and provide insights/recommendations based on the analysis results
  • Build, maintain, own, and communicate detailed reports to assist the marketing, growth/learning-experience, and other business/executive teams
  • Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions
  • Analyze data and generate insights in the form of user analysis, user segmentation, performance reports, etc.
  • Facilitate review sessions with management, business users, and other team members
  • Design and create visualizations to present actionable insights related to the data sets and business questions at hand
  • Develop intelligent models around channel performance, user profiling, and personalization
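The user-segmentation work above typically starts as a group-by over usage data. A minimal stdlib sketch with hypothetical segment thresholds:

```python
# Bucket users into segments by activity level, then count each segment —
# the kind of aggregation that feeds a Tableau dashboard. Thresholds are
# illustrative, not from the posting.

from collections import Counter

def segment_users(sessions_per_user):
    def bucket(n):
        if n >= 20:
            return "power"
        if n >= 5:
            return "regular"
        return "casual"
    return Counter(bucket(n) for n in sessions_per_user.values())

segments = segment_users({"u1": 32, "u2": 7, "u3": 2, "u4": 21, "u5": 5})
print(segments)
```

In practice the same logic is usually expressed as a SQL `CASE ... GROUP BY` over an events table before visualization.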

Skills Required

  • 4-6 years of hands-on experience with product analytics and reporting
  • Experience building dashboards in Tableau or other data visualization tools such as D3
  • Strong data, statistics, and analytical skills with a good grasp of SQL
  • Programming experience in Python is a must
  • Comfortable managing large data sets
  • Good Excel/data management skills
Our Client company is into Computer Software. (EC1)

Agency job
via Multi Recruit by Manjunath Multirecruit
Bengaluru (Bangalore)
14 - 18 yrs
₹36.4L - ₹38.7L / yr
ADF
Snowflake
Data Architect
Data architecture
Data engineering
  • Establish and maintain a trusted advisor relationship within the company’s IT, Commercial Digital Solutions, Functions, and Businesses you interact with
  • Establish and maintain close working relationships with teams responsible for delivering solutions to the company’s businesses and functions
  • Provide key management and thought leadership in the areas of advanced data techniques, including data modeling, data access, data integration, data visualization, big data solutions, text mining, data discovery, statistical methods, and database design
  • Work with business partners to define ways to leverage data to develop platforms and solutions to drive business growth
  • Engage collaboratively with project teams to support project objectives through the application of sound data architectural principles; support the project with knowledge of existing data assets and provide guidance on reusable data structures
  • Share knowledge of external and internal data capabilities and trends, provide leadership, and facilitate the evaluation of vendors and products
  • Utilize advanced data analysis, including statistical analysis and data mining techniques
  • Collaborate with others to set an enterprise data vision with solid recommendations, and work to gain business and IT consensus

Basic Qualifications

  • 15+ years of overall IT experience
  • Solid background as a data architect/cloud architect, with a minimum of 5 years as a core architect
  • Architecture experience with cloud data warehouse solutions (Snowflake preferred; BigQuery/Redshift/Synapse Analytics nice to have)
  • Strong architecture and design skills using Azure services (e.g., ADF, Data Flows, Event Grid, IoT Hub, Event Hubs, ADLS Gen2, serverless Azure Functions, Logic Apps, Azure Analysis Services cube design patterns, Azure SQL DB)
  • Working knowledge of Lambda/Kappa frameworks within data lake designs and architecture solutions
  • Deep understanding of DevOps/DataOps patterns
  • Architecting semantic models
  • Data modeling experience with Data Vault principles
  • Cloud-native batch and real-time ELT/ETL patterns
  • Familiarity with Lucidchart/Visio
  • Logging and monitoring pattern designs in the data lake / data warehouse context
  • Metadata-driven design patterns and solutioning expertise
  • Data catalog integration experience in data lake designs
  • 3+ years of experience partnering with business managers to develop technical strategies and architectures to support their objectives
  • 2+ years of hands-on experience with analytics deployment in the cloud (Azure preferred, but AWS knowledge is acceptable)
  • 5+ years of delivering analytics in modern data architectures (Hadoop, massively parallel processing database platforms, and semantic modeling)
  • Demonstrable knowledge of ETL and ELT patterns and when to use each; experience selecting among the different tools that could be leveraged to accomplish this (Talend, Informatica, Azure Data Factory, SSIS, SAP Data Services)
  • Demonstrable knowledge of and experience with different scripting languages (Python, JavaScript, Pig) or object-oriented programming (e.g., Java or .NET)

Preferred Qualifications

  • Bachelor’s degree in Computer Science, MIS, related field, or equivalent experience
  • Experience working with solutions delivery teams using Agile/Scrum or similar methodologies
  • 2+ years of experience designing solutions leveraging the Microsoft Cortana Intelligence Suite of products (Azure SQL, Azure SQL DW, Cosmos DB, HDInsight, Databricks)
  • Experience with enterprise systems, like CRM, ERP, Field Services Management, Supply Chain solutions, HR systems
  • Ability to work independently, establishing strategic objectives, project plans, and milestones
  • Exceptional written, verbal & presentation skills

 

 

Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹20L / yr
Data Science
Python
Machine Learning (ML)
Deep Learning
SQL
Work-days: Sunday through Thursday
Work shift: Day time


  • Strong problem-solving skills with an emphasis on product development.
  • Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
  • Experience building ML pipelines with Apache Spark and Python.
  • Proficiency in implementing the end-to-end data science life cycle.
  • Experience in model fine-tuning and advanced grid search techniques.
  • Experience working with and creating data architectures.
  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
  • Excellent written and verbal communication skills for coordinating across teams.
  • A drive to learn and master new technologies and techniques.
  • Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
  • Develop custom data models and algorithms to apply to data sets.
  • Use predictive modeling to increase and optimize customer experience, revenue generation, ad targeting, and other business outcomes.
  • Develop the company's A/B testing framework and test model quality.
  • Coordinate with different functional teams to implement models and monitor outcomes.
  • Develop processes and tools to monitor and analyze model performance and data accuracy.
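The A/B-testing bullet above can be made concrete with the classic two-proportion z-test; a pure-Python sketch with illustrative numbers:

```python
# Two-proportion z-test: compare conversion rates between variants A and B
# under a pooled-proportion standard error.

import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2))
```

If |z| exceeds 1.96, the difference in conversion rates is significant at the 5% level (two-sided); a full framework would add power analysis and sequential-testing corrections on top.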

Key skills:
● Strong knowledge in Data Science pipelines with Python
● Object-oriented programming
● A/B testing framework and model fine-tuning
● Proficiency in using the scikit-learn, NumPy, and pandas packages in Python
Nice to have:
● Ability to work with containerized solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing, Test-driven development practice
● DevOps, Continuous integration/ continuous deployment experience
● Agile development environment experience, familiarity with SCRUM
● Deep learning knowledge
Graphene Services Pte Ltd
Swetha Seshadri
Posted by Swetha Seshadri
Remote, Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
PyTorch
Deep Learning
Natural Language Processing (NLP)
Python
Machine Learning (ML)
ML Engineer
WE ARE GRAPHENE

Graphene is an award-winning AI company, developing customized insights and data solutions for corporate clients. With a focus on healthcare, consumer goods and financial services, our proprietary AI platform is disrupting market research with an approach that allows us to get into the mind of customers to a degree unprecedented in traditional market research.

Graphene was founded by corporate leaders from Microsoft and P&G and works closely with the Singapore Government & universities in creating cutting edge technology. We are gaining traction with many Fortune 500 companies globally.

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the few start-ups which are self-funded, yet profitable and debt free.

We already have a strong bench of leaders in place. Now, we are looking to groom more talent for our expansion into the US. Join us and let's take our growth to the next level together!

 

WHAT WILL THE ENGINEER-ML DO?

 

  • Primary Purpose: As part of a highly productive and creative AI (NLP) analytics team, optimize algorithms/models for performance and scalability, engineer & implement machine learning algorithms into services and pipelines to be consumed at web-scale
  • Daily Grind: Interface with data scientists, project managers, and the engineering team to achieve sprint goals on the product roadmap, and ensure healthy models, endpoints, and CI/CD.
  • Career Progression: Senior ML Engineer, ML Architect

 

YOU CAN EXPECT TO

  • Work in a product-development team capable of independently authoring software products.
  • Guide junior programmers, set up the architecture, and follow modular development approaches.
  • Design and develop code which is well documented.
  • Optimize the application for maximum speed and scalability.
  • Adhere to the best information security and DevOps practices.
  • Research and develop new approaches to problems.
  • Design and implement schemas and databases with respect to the AI application.
  • Cross-pollinate ideas with other teams.

 

HARD AND SOFT SKILLS

Must Have

  • Problem-solving abilities
  • Extremely strong programming background – data structures and algorithms
  • Advanced Machine Learning: TensorFlow, Keras
  • Python, spaCy, NLTK, Word2Vec, Graph databases, Knowledge-graph, BERT (derived models), Hyperparameter tuning
  • Experience with OOPs and design patterns
  • Exposure to RDBMS/NoSQL
  • Test Driven Development Methodology
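Hyperparameter tuning, listed above, is at its simplest an exhaustive grid search. A toy sketch minimizing a stand-in loss function (a real search would score cross-validated error of, e.g., a Keras/TensorFlow model):

```python
# Exhaustive grid search: evaluate every combination of hyperparameter
# values and keep the one with the lowest loss. The grid and loss here
# are illustrative stand-ins.

from itertools import product

def grid_search(grid, loss):
    best_params, best_loss = None, float("inf")
    for combo in product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        current = loss(params)
        if current < best_loss:
            best_params, best_loss = params, current
    return best_params, best_loss

grid = {"lr": [0.1, 0.01], "layers": [2, 4]}
toy_loss = lambda p: abs(p["lr"] - 0.01) + abs(p["layers"] - 4) * 0.1

best, best_loss = grid_search(grid, toy_loss)
print(best, best_loss)
```

Libraries such as scikit-learn's `GridSearchCV` wrap the same loop with cross-validation and parallelism.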

 

Good to Have

  • Working in cloud-native environments (preferably Azure)
  • Microservices
  • Enterprise Design Patterns
  • Microservices Architecture
  • Distributed Systems
Computer Power Group Pvt Ltd
Bengaluru (Bangalore), Chennai, Pune, Mumbai
7 - 13 yrs
₹14L - ₹20L / yr
R Programming
Python
Data Science
SQL server
Business Analysis
Requirement Specifications:

Job Title: Data Scientist
Experience: 7 to 10 years
Work Location: Mumbai, Bengaluru, Chennai
Job Role: Permanent
Notice Period: Immediate to 60 days

Job description:

  • Support delivery of one or more data science use cases, leading on data discovery and model building activities
  • Conceptualize and quickly build POCs for new product ideas; should be willing to work as an individual contributor
  • Open to learning and implementing newer tools/products
  • Experiment with and identify the best methods, techniques, and algorithms for analytical problems
  • Operationalize: work closely with the engineering, infrastructure, service management, and business teams to operationalize use cases

Essential Skills:

  • Minimum 2-7 years of hands-on experience with statistical software tools: SQL, R, Python
  • 3+ years' experience in business analytics, forecasting, or business planning with an emphasis on analytical modeling, quantitative reasoning, and metrics reporting
  • Experience working with large data sets to extract business insights or build predictive models
  • Proficiency in one or more statistical tools/languages (Python, Scala, R, SPSS, or SAS) and related packages such as pandas, SciPy/scikit-learn, NumPy, etc.
  • Good data intuition and analysis skills; SQL and PL/SQL knowledge is a must
  • Manage and transform a variety of datasets to cleanse, join, and aggregate them
  • Hands-on experience running methods such as regression, random forest, k-NN, k-means, boosted trees, SVM, neural networks, text mining, NLP, statistical modelling, data mining, exploratory data analysis, and statistics (hypothesis testing, descriptive statistics)
  • Deep domain knowledge (BFSI, manufacturing, auto, airlines, supply chain, retail & CPG)
  • Demonstrated ability to work under time constraints while delivering incremental value
  • Education: minimum a Master's in Statistics, or a PhD in domains linked to applied statistics, applied physics, artificial intelligence, computer vision, etc.; BE/BTech/BSc Statistics/BSc Maths
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.