11+ Applied mathematics Jobs in Pune | Applied mathematics Job openings in Pune
Advanced degree in computer science, mathematics, statistics, or a related discipline (a master's degree is required)
Extensive data modeling and data architecture skills
Programming experience in Python, R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important):
- Linear algebra
- Discrete math
- Differential equations (ODEs and numerical methods)
- Theory of statistics 1
- Numerical analysis 1 (numerical linear algebra) and 2 (quadrature)
- Abstract algebra
- Number theory
- Real analysis
- Complex analysis
- Intermediate analysis (point-set topology)
Strong written and verbal communications
Hands-on experience with NLP and NLG
Experience with advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and their practical application
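To illustrate the GLM/regression techniques named above, here is a minimal sketch of a logistic regression (a GLM with a logit link) fitted by plain gradient descent; the single feature and the synthetic data are invented purely for the example:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic regression by full-batch
    gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x                          # gradient w.r.t. w
            gb += (p - y)                              # gradient w.r.t. b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Synthetic, separable data: the label is 1 when x is large.
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(xs, ys)
print(predict(w, b, 1) < 0.5, predict(w, b, 9) > 0.5)  # True True
```

In practice one would use statsmodels or scikit-learn rather than hand-rolled gradient descent; the point is only the shape of the model.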
Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges
a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combination of them best solves the problem
b. Improve model accuracy to deliver greater business impact
c. Estimate the business impact of deploying a model
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
3. Work with tools and scripts for pre-processing data and feature engineering for model development: Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Use Deep Learning models with text, speech, image, and video data
a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools such as spaCy and open-source frameworks such as TensorFlow and PyTorch
b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools such as OpenCV
c. Maintain knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules (e.g., Power BI, Tableau) to explore and analyze outcomes and to validate models
8. Work with application teams to deploy models on the cloud as a service or on-premises
a. Deploy models in a test/control framework for tracking
b. Build CI/CD pipelines for ML model deployment
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay current with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.
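The text-classification work in point 5a would normally use spaCy, TensorFlow, or PyTorch; as a self-contained illustration of the underlying idea, here is a minimal bag-of-words naive Bayes classifier in plain Python (the toy corpus and labels are invented for the example):

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial naive Bayes over a bag-of-words, with add-one smoothing."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)   # label -> word frequencies
        self.label_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        total = sum(self.label_counts.values())
        best, best_lp = None, -math.inf
        for label, count in self.label_counts.items():
            lp = math.log(count / total)          # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

docs = ["refund my payment", "card payment failed",
        "great rewards offer", "love the cashback offer"]
labels = ["complaint", "complaint", "marketing", "marketing"]
clf = NaiveBayesText().fit(docs, labels)
print(clf.predict("payment failed again"))   # -> complaint
```

A production system would replace the bag-of-words with learned embeddings, but the train/predict structure is the same.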
· Technology/Subject Matter Expertise
- Sufficient expertise in machine learning, mathematical and statistical sciences
- Use of versioning & Collaborative tools like Git / Github
- Good understanding of landscape of AI solutions – cloud, GPU based compute, data security and privacy, API gateways, microservices based architecture, big data ingestion, storage and processing, CUDA Programming
- Develop prototype-level ideas into solutions that can scale to industrial-grade strength
- Ability to quantify & estimate the impact of ML models.
· Soft Skills Profile
- Curiosity to think in fresh and unique ways with the intent of breaking new ground.
- Must have the ability to share, explain, and “sell” their thoughts, processes, ideas, and opinions, even outside their own span of control
- Ability to think ahead and anticipate the needs of the problem will be important
· Ability to communicate key messages effectively, and articulate strong opinions in large forums
· Desirable Experience:
- Keen contributor to open source communities, and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development & Application of Reinforcement Learning
- Knowledge of Optimization/Genetic Algorithms
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
- Optimizing and tuning Deep Learning models for the best possible accuracy
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- An appreciation of digital ethics and data privacy will be important
- Experience working with AI and cognitive services platforms such as Azure ML, IBM Watson, AWS SageMaker, and Google Cloud will be a big plus
- Experience with platforms such as DataRobot, CognitiveScale, and H2O.ai will be a big plus
The role is with a fintech credit card company (OneCard) based in Pune, within the Decision Science team.
About
Credit cards haven't changed much for over half a century, so our team of seasoned bankers, technologists, and designers set out to redefine the credit card for you - the consumer. The result is OneCard - a credit card reimagined for the mobile generation. OneCard is India's best metal credit card built with full-stack tech. It is backed by the principles of simplicity, transparency, and giving back control to the user.
The Engineering Challenge
“Re-imaging credit and payments from First Principles”
Payments is an interesting engineering challenge in itself with requirements of low latency, transactional guarantees, security, and high scalability. When we add credit and engagement into the mix, the challenge becomes even more interesting with underwriting and recommendation algorithms working on large data sets. We have eliminated the current call center, sales agent, and SMS-based processes with a mobile app that puts the customers in complete control. To stay agile, the entire stack is built on the cloud with modern technologies.
Purpose of Role :
- Develop and implement the collection analytics and strategy function for the credit cards. Use analysis and customer insights to develop optimum strategy.
CANDIDATE PROFILE :
- Successful candidates will have in-depth knowledge of statistical modelling / data analysis tools (Python, R, etc.) and techniques. They will be adept communicators with good interpersonal skills, able to work with senior stakeholders in India to grow revenue, primarily by identifying, delivering, and creating new, profitable analytics solutions.
We are looking for someone who:
- Has a proven track record in collections and risk analytics, preferably in the Indian BFSI industry (this is a must)
- Can identify and deliver appropriate analytics solutions
- Is experienced in analytics team management
Essential Duties and Responsibilities :
- Responsible for delivering high-quality analytical and value-added services
- Responsible for automating insights and taking proactive actions on them to mitigate collection risk
- Work closely with internal team members to deliver the solution
- Engage business/technical consultants and delivery teams appropriately so that there is a shared understanding of, and agreement on, the proposed solution
- Use analysis and customer insights to develop value propositions for customers
- Maintain and enhance the suite of suitable analytics products.
- Actively seek to share knowledge within the team
- Share findings with peers from other teams and management where required
- Actively contribute to setting best practice processes.
Knowledge, Experience and Qualifications :
Knowledge :
- Good understanding of collection analytics, preferably in the retail lending industry
- Knowledge of statistical modelling / data analysis tools (Python, R, etc.), techniques, and market trends
- Knowledge of different modelling frameworks such as linear regression, logistic regression, multiple regression, LOGIT, PROBIT, time-series modelling, CHAID, CART, etc.
- Knowledge of Machine Learning and AI algorithms such as gradient boosting, KNN, etc.
- Understanding of decisioning and portfolio management in banking and financial services would be an added advantage
- Understanding of credit bureau data would be an added advantage
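As a sketch of how the tree-based frameworks above (CART and its relatives) choose splits, here is the Gini-impurity split search at the heart of a single CART node, in plain Python; the one-feature collections dataset is invented for the example:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Find the single-feature threshold minimising the weighted Gini
    impurity of the two resulting partitions (one CART split)."""
    best_thr, best_impurity = None, float("inf")
    for thr in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        if not left or not right:
            continue
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if weighted < best_impurity:
            best_thr, best_impurity = thr, weighted
    return best_thr, best_impurity

# Toy data: days past due vs. whether the account eventually paid.
xs = [5, 10, 15, 40, 60, 90]
ys = ["paid", "paid", "paid", "default", "default", "default"]
print(best_split(xs, ys))   # a threshold of 15 separates the classes cleanly
```

A full CART implementation recurses on each partition; boosting and random forests then combine many such trees.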
Experience :
- 4 to 8 years of work experience in core analytics function of a large bank / consulting firm.
- Experience working on collection analytics is a must
- Experience handling large data volumes using data analysis tools and generating good data insights
- Demonstrated ability to communicate ideas and analysis results effectively, both verbally and in writing, to technical and non-technical audiences
- Excellent communication, presentation, and writing skills; strong interpersonal skills
- Motivated to meet and exceed stretch targets
- Ability to make the right judgments in the face of complexity and uncertainty
- Excellent relationship and networking skills across our different businesses and geographies
Qualifications :
- Master's degree in Statistics, Mathematics, Economics, Business Management, or Engineering from a reputed college
Hi,
Enterprise Minds is looking for a Data Architect for the Pune location.
Req Skills:
Python, PySpark, Hadoop, Java, Scala
at Concinnity Media Technologies
- Develop, train, and optimize machine learning models using Python, ML algorithms, deep learning frameworks (e.g., TensorFlow, PyTorch), and other relevant technologies.
- Implement MLOps best practices, including model deployment, monitoring, and versioning.
- Utilize Vertex AI, MLFlow, KubeFlow, TFX, and other relevant MLOps tools and frameworks to streamline the machine learning lifecycle.
- Collaborate with cross-functional teams to design and implement CI/CD pipelines for continuous integration and deployment using tools such as GitHub Actions, TeamCity, and similar platforms.
- Conduct research and stay up-to-date with the latest advancements in machine learning, deep learning, and MLOps technologies.
- Provide guidance and support to data scientists and software engineers on best practices for machine learning development and deployment.
- Assist in developing tooling strategies by evaluating various options, vendors, and product roadmaps to enhance the efficiency and effectiveness of our AI and data science initiatives.
Graas uses predictive AI to turbo-charge growth for eCommerce businesses. We are “Growth-as-a-Service”. Graas is a technology solution provider using predictive AI to turbo-charge growth for eCommerce businesses. Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace store fronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last mile logistics - all of which impacts a brand’s bottom line, driving profitable growth.
Location – Pune
Job Responsibilities
- Work closely with data scientists and data analysts to build models and continuous data monitoring workflows.
- Implement algorithms/models within the company's recommendation engine framework.
- Own the MLOps life-cycle: build and own the ML model life-cycle management process, from coding to building robust model monitoring workflows.
- Consult with product and business teams to build prototypes and then deploy holistic machine learning solutions.
- Recommend and implement architecture to deploy machine learning pipelines and CI/CD processes at scale
Skills Needed
- Minimum 3 years of experience as a Machine Learning Engineer
- Knowledge of machine learning and statistics
- Strong experience in time series analysis, reinforcement learning, NLP, optimization, and heuristics-based implementations to solve real-world problems
- Experienced in architecting solutions with Continuous Integration and Continuous Delivery in mind
- Strong knowledge of coding in Python and libraries such as pandas, NumPy, scikit-learn, PyTorch, etc.
- Experience handling big data leveraging technologies such as Snowflake and Spark. Ability to work in a big data ecosystem: expert in SQL and able to work with distributed databases.
- Able to refactor data science code; has collaborated with data scientists in developing ML solutions.
- Experience playing the role of a full-stack data scientist and taking solutions to production.
- Educational qualifications should be preferably in Computer Science, Statistics, Engineering or a related area.
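The model-monitoring part of the MLOps life-cycle described above can be sketched very simply; this illustrative stdlib-only drift check (the 3-sigma threshold and all data values are invented) flags when a feature's recent mean moves away from its training-time baseline:

```python
import statistics

def drift_score(baseline, window):
    """Return how many baseline standard deviations the recent window's
    mean has moved from the baseline mean (a crude drift signal)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(window) - mu) / sigma

# Baseline: feature values seen at training time; windows: recent production data.
baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1, 10.3, 9.7]
stable   = [10.0, 10.4, 9.9]
shifted  = [14.0, 14.5, 13.8]

print(drift_score(baseline, stable) < 3.0)    # True: no alert
print(drift_score(baseline, shifted) > 3.0)   # True: raise an alert
```

Real monitoring stacks use richer tests (e.g., population stability index or KS tests) and tooling such as MLflow, but the baseline-vs-window comparison is the core pattern.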
- KSQL
- Data Engineering spectrum (Java/Spark)
- Spark Scala / Kafka Streaming
- Confluent Kafka components
- Basic understanding of Hadoop
at Persistent Systems
Location: Pune / Nagpur / Goa / Hyderabad
Job Requirements:
- 9+ years of total experience, preferably in the big data space.
- Creating Spark applications using Scala to process data.
- Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
- Experience in Spark job performance tuning and optimization.
- Should have experience processing data using Kafka/Python.
- Should have experience with, and an understanding of, configuring Kafka topics to optimize performance.
- Should be proficient in writing SQL queries to process data in a data warehouse.
- Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
- Experience on AWS services like EMR.
Company Profile:
Easebuzz is a payment solutions (fintech) company that enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We focus on building plug-and-play products, including the payment infrastructure, to solve complete business problems. It is a wonderful place where everything related to payments, lending, subscriptions, and eKYC is happening at the same time.
We have been consistently profitable and are constantly developing new, innovative products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a $4M fundraise in March 2021 from prominent VC firms and angel investors. The company is based in Pune and has a total strength of 180 employees. Easebuzz’s corporate culture is tied to the vision of building a workplace that breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.
Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.
Salary: As per company standards.
Designation: Data Engineer
Location: Pune
Experience with ETL, Data Modeling, and Data Architecture
Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party services
- Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue.
Experience with an AWS cloud data lake for the development of real-time or near-real-time use cases
Experience with messaging systems such as Kafka/Kinesis for real time data ingestion and processing
Build data pipeline frameworks to automate high-volume and real-time data delivery
Create prototypes and proof-of-concepts for iterative development.
Experience with NoSQL databases such as DynamoDB, MongoDB, etc.
Create and maintain optimal data pipeline architecture.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Evangelize a very high standard of quality, reliability and performance for data models and algorithms that can be streamlined into the engineering and sciences workflow
Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.
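The extract-transform-load responsibilities above can be sketched end to end; this illustrative pure-Python version uses an inline CSV and an in-memory SQLite store with invented sample rows in place of the AWS services named:

```python
import csv
import io
import sqlite3

# Extract: in production this would read from S3 or Kafka; here, an inline CSV.
RAW = """order_id,amount,currency
1,100.50,INR
2,,INR
3,250.00,INR
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with a missing amount and cast amounts to float."""
    return [
        {"order_id": int(r["order_id"]), "amount": float(r["amount"])}
        for r in rows if r["amount"]
    ]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (:order_id, :amount)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)   # 350.5 (the malformed row is dropped)
```

At scale the same extract/transform/load stages map onto Glue or Spark jobs feeding Redshift, with the transform step carrying the data-quality rules.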
Employment Type
Full-time
● Frame ML / AI use cases that can improve the company’s product
● Implement and develop ML / AI / data-driven rule-based algorithms as software items
● For example, build a chatbot that replies with an answer from the relevant FAQ, and reinforce the system with a feedback loop so that the bot improves
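A minimal version of the FAQ-matching step in the chatbot example can be sketched with stdlib fuzzy string matching; the FAQ entries and the 0.5 cutoff are invented for the example, and a production bot would use embeddings rather than string similarity:

```python
import difflib

FAQ = {
    "how do i reset my password": "Go to Settings > Security and tap 'Reset password'.",
    "how do i block my card": "Open the app, select your card, and tap 'Block'.",
    "what is the annual fee": "There is no annual fee on the card.",
}

def answer(question, threshold=0.5):
    """Return the answer whose FAQ question best matches the user's
    question, or a fallback when nothing is close enough."""
    matches = difflib.get_close_matches(
        question.lower(), FAQ.keys(), n=1, cutoff=threshold
    )
    return FAQ[matches[0]] if matches else "Sorry, I couldn't find an answer."

print(answer("How do I block my card?"))
```

The feedback loop mentioned above would log which answers users accept and use that signal to adjust the matching (or retrain a ranking model).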
Must have skills:
● Data extraction and ETL
● Python (NumPy, pandas; comfortable with OOP)
● Django
● Knowledge of basic Machine Learning / Deep Learning / AI algorithms and ability to
implement them
● Good understanding of SDLC
● Experience deploying an ML / AI model in a mobile / web product
● Soft skills: strong communication skills and critical-thinking ability
Good to have:
● Full stack development experience
Required Qualification:
B.Tech. / B.E. degree in Computer Science or equivalent software engineering discipline
Years of Exp: 3-6+ years
Skills: Scala, Python, Hive, Airflow, Spark
Languages: Java, Python, Shell Scripting
GCP: BigTable, DataProc, BigQuery, GCS, Pubsub
OR
AWS: Athena, Glue, EMR, S3, Redshift
MongoDB, MySQL, Kafka
Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time