11+ GLM Jobs in India
Advanced degree in computer science, math, statistics, or a related discipline (master's degree required)
Extensive data modeling and data architecture skills
Programming experience in Python, R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important):
- Linear algebra
- Discrete math
- Differential equations (ODEs and numerical methods)
- Theory of statistics 1
- Numerical analysis 1 (numerical linear algebra) and 2 (quadrature)
- Abstract algebra
- Number theory
- Real analysis
- Complex analysis
- Intermediate analysis (point-set topology)
Strong written and verbal communication skills
Hands-on experience with NLP and NLG
Experience with advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and their practical application
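Since GLMs appear throughout these requirements, a minimal sketch may help make the term concrete: logistic regression is a GLM with a logit link, fit here by plain gradient descent on the negative log-likelihood. The data, learning rate, and epoch count are invented for illustration; real work would use a library such as scikit-learn or statsmodels.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic regression (a GLM with a logit link)
    by gradient descent on the negative log-likelihood."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x                          # gradient w.r.t. weight
            gb += (p - y)                              # gradient w.r.t. bias
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

def predict(w, b, x):
    """Classify as positive when the predicted probability reaches 0.5."""
    return 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5

# Hypothetical toy data: the label flips to 1 once the feature passes ~3.
xs = [1.0, 2.0, 2.5, 3.5, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
print(predict(w, b, 1.5), predict(w, b, 4.5))
```

Swapping the link and likelihood (e.g. log link with a Poisson likelihood) gives the other common GLMs mentioned in such postings.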
About us:
Hypersonix.ai is revolutionizing the e-commerce landscape by harnessing the power of AI, ML, and advanced decision capabilities to deliver real-time business insights. Built from the ground up with cutting-edge technology, Hypersonix.ai simplifies data consumption for our diverse range of customers across various industry verticals.
Roles and Responsibilities:
- Collaborate with cross-functional teams in designing, developing, and deploying traditional machine learning models and algorithms for supply chain optimization.
- Conduct research, experimentation, and implementation of state-of-the-art traditional machine learning techniques and frameworks to address complex challenges.
- Develop and enhance forecasting models for demand prediction, inventory management, and pricing optimization.
- Optimize traditional machine learning models for tasks such as inventory forecasting, pricing strategies, and demand forecasting.
- Stay abreast of the latest advancements in traditional machine learning, forecasting techniques, and optimization methods, integrating them into our projects.
- Collaborate closely with data scientists, software engineers, and product teams to seamlessly integrate machine learning solutions into production environments.
- Document research findings, methodologies, and codebase for effective knowledge sharing and team collaboration.
- Troubleshoot and resolve issues in production environments to ensure system reliability and performance.
- Conduct root cause analysis of product defects and implement effective solutions.
- Design, develop, and maintain components of the product to drive customer adoption.
- Utilize various data science methodologies to tackle complex business problems effectively.
Qualifications:
- Strong problem-solving skills and the ability to tackle complex, open-ended challenges.
- Self-motivated individual with a strong work ethic, capable of working independently and collaboratively within a team.
- Proven experience in traditional machine learning, forecasting, pricing, and inventory optimization with a strong portfolio of projects (7-9 years).
- Experience working on NLP and deep learning.
- Proficiency in Python programming and the ability to write efficient, maintainable code.
- Expertise in traditional machine learning libraries and frameworks such as scikit-learn, XGBoost, and LightGBM.
- Experience with cloud-based AI services and infrastructure (e.g., AWS).
- Demonstrated experience in API development and integration.
- Previous experience working in production environments, ensuring system stability and performance.
- 3+ years of experience in the practical implementation and deployment of ML-based systems preferred.
- BE/B Tech or M Tech (preferred) in CS/Engineering with strong mathematical/statistical background
- Strong mathematical and analytical skills, especially statistical and ML techniques, with familiarity with different supervised and unsupervised learning algorithms
- Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
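The demand-forecasting responsibilities listed above can be illustrated with a minimal simple-exponential-smoothing sketch; the demand history and smoothing factor below are invented, and production forecasting would use richer models (seasonality, covariates) from libraries like statsmodels.

```python
def ses_forecast(demand, alpha=0.3):
    """Simple exponential smoothing: each level blends the newest
    observation with the previous level; the final level serves as
    the one-step-ahead demand forecast."""
    level = demand[0]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level
    return level

weekly_units = [120, 132, 101, 134, 90, 110, 130]  # hypothetical demand history
forecast = ses_forecast(weekly_units)
print(round(forecast, 2))  # next week's forecast, ~117.17
```

A smaller `alpha` smooths out noise more aggressively; a larger one reacts faster to recent demand shifts, which is the basic trade-off in inventory forecasting.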
Location: Ahmedabad / Pune
Team: Technology
Company Profile
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, having a branch office in Pune.
We have worked on / are working on AI projects and algorithm-heavy projects with applications ranging across finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports, and computer gaming domains. All of this is based on the core concepts of data science, computer vision, machine learning (with emphasis on deep learning), cloud computing, biomedical signal processing, text and natural language processing, distributed systems, embedded systems, and the Internet of Things.
PRIMARY RESPONSIBILITIES:
● Applying machine learning, deep learning, and signal processing on large datasets (Audio, sensors, images, videos, text) to develop models.
● Architecting large scale data analytics/modeling systems.
● Designing and programming machine learning methods and integrating them into our ML framework/pipeline.
● Analyzing data collected from various sources.
● Evaluating and validating the analysis with statistical methods, and presenting the results in a lucid form to people not familiar with the domain of data science/computer science.
● Writing specifications for algorithms, reports on data analysis, and documentation of algorithms.
● Evaluating new machine learning methods and adapting them for our purposes.
● Feature engineering to add new features that improve model performance.
KNOWLEDGE AND SKILL REQUIREMENTS:
● Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing with at least 3 years of professional work experience working on real-world data.
● Strong programming background, e.g. Python, C/C++, R, Java, and knowledge of software engineering concepts (OOP, design patterns).
● Knowledge of machine learning libraries: TensorFlow, JAX, Keras, scikit-learn, PyTorch.
● Excellent mathematical skills and background, e.g. accuracy, significance tests, visualization, advanced probability concepts.
● Ability to perform both independent and collaborative research.
● Excellent written and spoken communication skills.
● A proven ability to work in a cross-discipline environment in defined time frames. Knowledge and experience of deploying large-scale systems using distributed and cloud-based systems (Hadoop, Spark, Amazon EC2, Dataflow) is a big plus.
● Knowledge of systems engineering is a big plus.
● Some experience in project management and mentoring is also a big plus.
EDUCATION:
- B.E./B.Tech/B.S. candidates with significant prior experience in the aforementioned fields will be considered.
- M.E./M.S./M.Tech/PhD, preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics, preferred.
- B.E. in Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications, including practical experience with and theoretical understanding of algorithms for classification, regression and clustering.
- Hands-on experience in computer vision and deep learning projects solving real-world problems involving vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation, etc.
- Strong understanding of modern and traditional Computer Vision Algorithms.
- Experience in one of the deep learning frameworks / networks: PyTorch, TensorFlow, Darknet (YOLOv4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc.
- Proficiency with CNN architectures such as ResNet, VGG, U-Net, MobileNet, pix2pix, and CycleGAN.
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib and pandas.
- Ability to transform research articles into working solutions to solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
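To make the "traditional computer vision" fundamentals above concrete, here is a hand-rolled Sobel edge filter: the kind of operation libraries like OpenCV implement far more efficiently. The 4x4 image is invented for illustration.

```python
def convolve2d(img, kernel):
    """'Valid' 2D convolution (strictly, cross-correlation, as in most
    CV libraries) between a grayscale image and a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Hypothetical 4x4 image with a vertical edge down the middle.
img = [[0, 0, 9, 9]] * 4
sobel_x = [[-1, 0, 1],   # horizontal-gradient kernel: responds to
           [-2, 0, 2],   # intensity changes along the x axis
           [-1, 0, 1]]
edges = convolve2d(img, sobel_x)
print(edges)  # strong uniform response: [[36, 36], [36, 36]]
```

The same sliding-window idea, with learned kernels instead of fixed ones, is what the CNN architectures listed above (ResNet, VGG, U-Net, ...) build on.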
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description
Experience : 7+ years
Location : Pune / Hyderabad
Skills :
- Drive and participate in requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Participate and contribute in Solution Design and Solution Architecture for implementing Big Data Projects on-premise and on cloud
- Hands-on technical experience in the design, coding, development, and management of large Hadoop implementations
- Proficient in SQL, Hive, Pig, Spark SQL, shell scripting, Kafka, Flume, and Sqoop on large Big Data and Data Warehousing projects, with a Java, Python, or Scala based Hadoop programming background
- Proficient with various development methodologies like waterfall, agile/scrum and iterative
- Good interpersonal skills and excellent communication skills for US- and UK-based clients
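The SQL skills listed above transfer largely unchanged across engines. As a minimal sketch (table and column names are hypothetical), here is a GROUP BY aggregation using Python's stdlib sqlite3; the same statement would run on Hive or Spark SQL against a distributed table.

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 100.0), ("north", 50.0), ("south", 75.0)])

# Aggregate revenue per region; standard SQL shared by Hive/Spark SQL.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 150.0), ('south', 75.0)]
```

On Hadoop, the difference is not the query but the execution plan: Hive or Spark compiles the same SQL into distributed jobs over partitioned data.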
About Us!
A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their data, workloads, ETL, and analytics to the cloud, leveraging automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum platforms, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle – Data warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years have been key to our success.
Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
The Role
We are looking for a tech enthusiast who can help further our product development with Augmented Reality and keep us ahead of the technology curve. We have a tight product roadmap that needs enthusiastic people solving problems in computer vision systems, building towards a high-accuracy SLAM solution.
Qualified candidates will be driven self-starters, robust thinkers, strong collaborators, and adept at operating in a highly dynamic environment. We look for colleagues that are passionate about our product and embody our values.
Some of the main responsibilities include:
- Develop objectives and design research projects
- Design, build and maintain high-performance, reusable and reliable code
- Work with the core team to bring ideas to life and keep pace with the latest research in Computer Vision and Deep Learning.
Qualifications, Skills & Competencies
- Master's / PhD in Computer Science, Mathematics or relevant experience
- 3+ years of experience in geometric computer vision, algorithms, SfM / SLAM, visual inertial odometry / 3D reconstruction / sensor fusion.
- Experience in Deep Learning for SLAM and related frameworks
- Experience in sensor fusion (IMU, camera) and in probabilistic filters - EKF, UKF
- Strong mathematical understanding - linear algebra, 3D geometry, probability.
- Proficiency in programming - C++, Python - and knowledge of algorithms
- Proven experience in product development (monocular SLAM, multi-view camera calibration)
- Strong background in computer vision and ML/DL
- Experience in optimization of SLAM algorithms
- Comfort with communication and collaboration across teams. The ability to multitask, manage tasks with varying priorities and align with stakeholders.
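To illustrate the probabilistic-filtering requirement above, here is a minimal 1-D Kalman filter sketch; the noise parameters and measurements are invented. A real EKF/UKF for IMU+camera fusion generalizes this same predict/update loop to a nonlinear, multi-dimensional state.

```python
def kalman_1d(measurements, q=0.01, r=0.5):
    """Minimal 1-D Kalman filter fusing noisy position readings.
    q is process noise, r is measurement noise (both invented here)."""
    x, p = measurements[0], 1.0   # state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                    # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain: trust in the measurement
        x += k * (z - x)          # update: blend prediction and reading
        p *= (1 - k)              # updated uncertainty shrinks
        estimates.append(x)
    return estimates

noisy = [1.2, 0.8, 1.1, 0.9, 1.05]   # hypothetical readings around 1.0
est = kalman_1d(noisy)
print([round(e, 3) for e in est])
```

The estimate settles near the true value faster than any single noisy reading; in visual-inertial odometry, the same gain mechanism arbitrates between the IMU-propagated state (predict) and camera observations (update).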
- 3-5 years of practical DS experience working with varied data sets. Retail banking experience is preferred but not necessary.
- Strong in the concepts of statistical modelling - particularly practical knowledge learnt from work experience (should be able to give "rule of thumb" answers)
- Strong problem-solving skills and the ability to articulate ideas well.
- Ideally, the data scientist should have interfaced with data engineering and model deployment teams to bring models/solutions live in production.
- Strong working knowledge of the Python ML stack is very important here.
- Willing to work on a diverse range of tasks building ML-related capability on the Corridor Platform as well as client work.
- Someone with a strong interest in the data engineering aspects of ML is highly preferred, i.e. can play the dual role of Data Scientist and someone who can code robust modules on our Corridor Platform.
Structured ML techniques for candidates:
- GBM
- XGBoost
- Random Forest
- Neural Net
- Logistic Regression
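To make the boosting entries above concrete, here is a minimal gradient-boosting sketch for squared loss using depth-1 regression stumps; this is the core idea behind GBM and XGBoost, minus regularization and second-order terms. The data and hyperparameters are invented for illustration.

```python
def best_stump(xs, residuals):
    """Find the one-feature regression stump minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if x <= t else rm)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]

def gbm_fit(xs, ys, n_rounds=20, lr=0.3):
    """Gradient boosting for squared loss: repeatedly fit a stump to the
    current residuals and add a learning-rate-shrunken copy to the ensemble."""
    base = sum(ys) / len(ys)
    stumps, preds = [], [base] * len(xs)
    for _ in range(n_rounds):
        resid = [y - p for y, p in zip(ys, preds)]
        t, lm, rm = best_stump(xs, resid)
        stumps.append((t, lm, rm))
        preds = [p + lr * (lm if x <= t else rm) for x, p in zip(xs, preds)]
    return base, stumps

def gbm_predict(model, x, lr=0.3):
    base, stumps = model
    return base + sum(lr * (lm if x <= t else rm) for t, lm, rm in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 3.1, 2.9, 3.2]   # hypothetical: a step around x = 3.5
model = gbm_fit(xs, ys)
```

Random forests differ by averaging independently trained trees on bootstrap samples rather than fitting residuals sequentially; logistic regression and neural nets fit the same loss-minimization frame with different model classes.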
Job Title: Power BI Developer(Onsite)
Location: Park Centra, Sec 30, Gurgaon
CTC: 8 LPA
Time: 1:00 PM - 10:00 PM
Must Have Skills:
- Power BI Desktop Software
- Dax Queries
- Data modeling
- Row-level security
- Visualizations
- Data Transformations and filtering
- SSAS and SQL
Job description:
We are looking for a PBI Analytics Lead responsible for efficient data visualization, DAX queries, and data modeling. The candidate will create complex Power BI reports, write complex M and DAX queries, and work on data modeling, row-level security, visualizations, data transformations, and filtering, collaborating closely with the client team to provide solutions and suggestions on Power BI.
Roles and Responsibilities:
- Accurate, intuitive, and aesthetic Visual Display of Quantitative Information: We generate data, information, and insights through our business, product, brand, research, and talent teams. You would assist in transforming this data into visualizations that represent easy-to-consume visual summaries, dashboards and storyboards. Every graph tells a story.
- Understanding Data: You would be performing and documenting data analysis, data validation, and data mapping/design. You would be mining large datasets to determine their characteristics and select appropriate visualizations.
- Project Owner: You would develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions, and would be continuously reviewing and improving existing systems and collaborating with teams to integrate new systems. You would also contribute to the overall data analytics strategy by knowledge sharing and mentoring end users.
- Perform ongoing maintenance & production of Management dashboards, data flows, and automated reporting.
- Manage upstream and downstream impact of all changes on automated reporting/dashboards
- Independently apply problem-solving ability to identify meaningful insights to business
- Identify automation opportunities and work with a wide range of stakeholders to implement the same.
- The ability and self-confidence to work independently and increase the scope of the service line
Requirements:
- 3+ years of work experience as an Analytics Lead / Senior Analyst / Sr. PBI Developer.
- Sound understanding and knowledge of PBI Visualization and Data Modeling with DAX queries
- Experience in leading and mentoring a small team.
Working closely with the Product group and other teams, the Data Engineer is responsible for the development, deployment and maintenance of our data infrastructure and applications. With a focus on quality, error-free data delivery, the Data Engineer works to ensure our data is appropriately available and fully supporting various constituencies across our organization. This is a multifaceted opportunity to work with a small, talented team on impactful projects that are essential to ACUE’s higher education success. As an early member of our tech group, you’ll have the unique opportunity to build critical systems and features while helping shape the direction of the team and the product.