Linear algebra Jobs in Hyderabad

11+ Linear algebra Jobs in Hyderabad | Linear algebra Job openings in Hyderabad

Apply to 11+ Linear algebra Jobs in Hyderabad on CutShort.io. Explore the latest Linear algebra Job opportunities across top companies like Google, Amazon & Adobe.

Monarch Tractors India
Hyderabad
2 - 8 yrs
Best in industry
Machine Learning (ML)
Data Science
Algorithms
Python
C++
+10 more

Designation: Perception Engineer (3D) 

Experience: 0 years to 8 years 

Position Type: Full Time 

Position Location: Hyderabad 

Compensation: As Per Industry standards 

 

About Monarch: 

At Monarch, we’re leading the digital transformation of farming. Monarch Tractor augments both muscle and mind with fully loaded hardware, software, and service machinery that will spur future generations of farming technologies. 

With our farmer-first mentality, we are building a smart tractor that will enhance (not replace) the existing farm ecosystem, alleviate labor availability and cost issues, and provide an avenue for competitive organic and beyond farming by providing mechanical solutions to replace harmful chemical solutions. Despite all the cutting-edge technology we will incorporate, our tractor will still plow, till, and haul better than any other tractor in its class. We have all the necessary ingredients to develop, build, and scale the Monarch Tractor and digitally transform farming around the world. 

 

Description: 

We are looking for engineers to work on applied research problems related to perception in autonomous driving of electric tractors. The team works on classical and deep learning-based techniques for computer vision. Problems such as SfM, SLAM, 3D image processing, and multiple-view geometry are being solved for deployment on resource-constrained hardware. 

 

Technical Skills: 

  • Background in linear algebra, probability and statistics, graphical algorithms, and optimization problems is necessary. 
  • Solid theoretical background in 3D computer vision, computational geometry, SLAM, and robot perception is desired. Deep learning background is optional. 
  • Knowledge of some of the following numerical algorithms or libraries: Bayesian filters, SLAM, Eigen, Boost, g2o, PCL, Open3D, ICP. 
  • Experience in two-view and multi-view geometry. 
  • Necessary skills: Python, C++, Boost, Computer Vision, Robotics, OpenCV. 
  • For freshers, academic experience in vision for robotics is preferred.  
  • Experienced candidates in robotics with no prior deep learning experience who are willing to apply their knowledge to vision problems are also encouraged to apply. 
  • Software development experience on low-power embedded platforms is a plus. 
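Bayesian filters are listed among the required numerical algorithms above; as a rough illustration of the idea (not Monarch's actual code, and with made-up noise parameters), a minimal one-dimensional Kalman filter that fuses noisy position measurements might look like:

```python
# Minimal 1D Kalman filter sketch: fuse noisy scalar measurements.
# Illustrative only -- real perception stacks use multivariate filters
# (e.g. via Eigen in C++); q, r, and the measurements are made up.

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Filter a scalar state with process noise q and measurement noise r."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q              # predict: uncertainty grows over time
        k = p / (p + r)        # Kalman gain: how much to trust the measurement
        x = x + k * (z - x)    # update: move estimate toward measurement
        p = (1.0 - k) * p      # update: uncertainty shrinks
        estimates.append(x)
    return estimates

est = kalman_1d([1.2, 0.9, 1.1, 1.0, 1.05])
```

Each estimate blends the prediction with the new measurement, converging toward the true value as uncertainty shrinks.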

 

Responsibilities: 

  • Strong grasp of engineering principles and a clear understanding of data structures and algorithms. 
  • Ability to understand, optimize, and debug imaging algorithms. 
  • Ability to drive a project from conception to completion, from research papers to code, with a disciplined approach to software development on the Linux platform. 
  • Demonstrated ability to perform innovative and significant research in the form of technical papers, theses, or patents. 
  • Optimize runtime performance of designed models. 
  • Deploy models to production, monitor performance, and debug inaccuracies and exceptions. 
  • Communicate and collaborate with team members in India and abroad in fulfillment of your duties and organizational objectives. 
  • Thrive in a fast-paced environment and own the project end to end with minimal hand-holding. 
  • Learn and adapt to new technologies and skill sets. 
  • Work on projects independently with timely delivery and a defect-free approach. 
  • Candidates whose thesis focused on the above skill set may be given preference. 

 

What you will get: 

At Monarch Tractor, you’ll play a key role on a capable, dedicated, high-performing team of rock stars. Our compensation package includes a competitive salary and excellent health benefits commensurate with the role you’ll play in our success.  

 

Read more
Hyderabad
3 - 8 yrs
₹23L - ₹28L / yr
Machine Learning (ML)
Algorithms
Data mining
Pattern recognition
Digital Signal Processing
Algorithm Engineer
Experience 3 to 8 Years

Skill Set
  • Experience in algorithm development with a focus on signal processing, pattern recognition, machine learning, classification, data mining, and other areas of machine intelligence.
  • Ability to analyse data streams from multiple sensors and develop algorithms to extract accurate and meaningful sport metrics.
  • Deep understanding of IMU sensors and biosensors such as HRM and ECG.
  • Good understanding of power and memory management on embedded platforms.
Responsibilities
  • Expertise in the design of multitasking, event-driven, real-time firmware using C, and understanding of RTOS concepts.
  • Knowledge of machine learning, Python, and analytical, methodical approaches to data analysis and verification.
  • Prior experience in fitness algorithm development using IMU sensors.
  • Interest in fitness activities and knowledge of human anatomy.
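To give a flavor of the sport-metric extraction described above, here is a toy step counter that runs peak detection over an accelerometer magnitude trace. The threshold and the synthetic signal are illustrative only; a production algorithm would filter the signal and adapt thresholds per activity.

```python
# Toy sport-metric extraction: count steps from an IMU accelerometer
# magnitude trace via simple peak detection. Threshold and signal
# values are illustrative, not production-tuned.

def count_steps(magnitude, threshold=1.2):
    """Count local maxima above `threshold` (in g) as steps."""
    steps = 0
    for i in range(1, len(magnitude) - 1):
        if (magnitude[i] > threshold
                and magnitude[i] > magnitude[i - 1]
                and magnitude[i] >= magnitude[i + 1]):
            steps += 1
    return steps

# Synthetic trace: baseline around 1 g with four impact spikes
trace = [1.0, 1.5, 1.0, 0.9, 1.6, 1.0, 1.0, 1.4, 1.0, 1.7, 1.0]
steps = count_steps(trace)
```

Real IMU pipelines add band-pass filtering and debounce windows on top of this basic peak test.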
Read more
Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

Big Data Engineer + Spark Responsibilities

  • At least 3 to 4 years of relevant experience as a Big Data Engineer.
  • Minimum 1 year of relevant hands-on experience with the Spark framework.
  • Minimum 4 years of application development experience using a programming language such as Scala, Java, or Python.
  • Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, or Impala.
  • Strong programming experience building applications/platforms using Scala, Java, or Python.
  • Experienced in implementing Spark RDD transformations and actions to implement business analysis.
  • An efficient interpersonal communicator with sound analytical, problem-solving, and management capabilities.
  • Strives to keep the slope of the learning curve high; able to quickly adapt to new environments and technologies.
  • Good knowledge of the agile methodology of software development.
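The Spark RDD transformations-and-actions pattern mentioned above can be sketched, for intuition, with plain-Python equivalents of the classic word count (no cluster needed; in real PySpark these steps would be `flatMap`, `map`, and `reduceByKey` calls on an RDD):

```python
# Plain-Python sketch of the classic Spark word count. Each step mirrors
# an RDD operation: flatMap -> map -> reduceByKey are lazy transformations
# in Spark; the final materialization corresponds to an action like collect().
from collections import defaultdict

lines = ["spark makes big data simple", "big data big results"]

# flatMap: split every line into words
words = [w for line in lines for w in line.split()]

# map: pair each word with a count of 1
pairs = [(w, 1) for w in words]

# reduceByKey: sum counts per word
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n

result = dict(counts)
```

In Spark, the transformations build a lineage graph and nothing executes until an action is called, which is what lets the engine optimize and distribute the work.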
Read more
Accolite Digital
Posted by Nitesh Parab
Bengaluru (Bangalore), Hyderabad, Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹5L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SSIS
SQL Server Integration Services (SSIS)
+10 more

Job Title: Data Engineer

Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.

Responsibilities:

  • Design, build, and maintain data pipelines to collect, store, and process data from various sources.
  • Create and manage data warehousing and data lake solutions.
  • Develop and maintain data processing and data integration tools.
  • Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
  • Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
  • Ensure data quality and integrity across all data sources.
  • Develop and implement best practices for data governance, security, and privacy.
  • Monitor data pipeline performance and errors, and troubleshoot issues as needed.
  • Stay up-to-date with emerging data technologies and best practices.
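The pipeline, loading, and data-quality responsibilities above can be condensed into a minimal extract-transform-load sketch. The schema, field names, and records here are hypothetical, and SQLite stands in for a real warehouse:

```python
# Minimal ETL sketch: extract raw records, transform (clean and type-cast),
# load into a warehouse table, then run a basic data-quality check.
# Table and field names are hypothetical.
import sqlite3

raw = [
    {"id": 1, "amount": "10.50", "country": "in"},
    {"id": 2, "amount": "7.25", "country": "us"},
    {"id": 3, "amount": None, "country": "in"},   # bad record, dropped below
]

# Transform: drop rows failing quality checks, normalize types and casing
clean = [
    (r["id"], float(r["amount"]), r["country"].upper())
    for r in raw
    if r["amount"] is not None
]

# Load into an in-memory stand-in for the warehouse
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER, amount REAL, country TEXT)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)

# Data-quality check: no NULL amounts made it through
nulls = con.execute("SELECT COUNT(*) FROM sales WHERE amount IS NULL").fetchone()[0]
total = con.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
```

Production versions replace the in-memory lists with source connectors and add monitoring around each stage, but the extract/transform/load/validate shape is the same.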

Requirements:

Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience with ETL tools like Matillion, SSIS, or Informatica

Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle.

Experience in writing complex SQL queries

Strong programming skills in languages such as Python, Java, or Scala.

Experience with data modeling, data warehousing, and data integration.

Strong problem-solving skills and ability to work independently.

Excellent communication and collaboration skills.

Familiarity with big data technologies such as Hadoop, Spark, or Kafka.

Familiarity with data warehouse/Data lake technologies like Snowflake or Databricks

Familiarity with cloud computing platforms such as AWS, Azure, or GCP.

Familiarity with Reporting tools

Teamwork / growth contribution

  • Helping the team conduct interviews and identify the right candidates
  • Adhering to timelines
  • Timely status communication and upfront communication of any risks
  • Teaching, training, and sharing knowledge with peers
  • Good communication skills
  • Proven ability to take initiative and be innovative
  • Analytical mind with a problem-solving aptitude

Good to have :

Master's degree in Computer Science, Information Systems, or a related field.

Experience with NoSQL databases such as MongoDB or Cassandra.

Familiarity with data visualization and business intelligence tools such as Tableau or Power BI.

Knowledge of machine learning and statistical modeling techniques.

If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.

Read more
Hyderabad
4 - 8 yrs
₹6L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
  1. Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
  2. Experience in developing Lambda functions with AWS Lambda
  3. Expertise with Spark/PySpark – candidates should be hands-on with PySpark code and able to implement transformations with Spark
  4. Should be able to code in Python and Scala
  5. Snowflake experience will be a plus
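As a rough sketch of the Lambda development in point 2 (the event shape, field names, and response format here are hypothetical; a real function would pull data from S3 via boto3 and hand off to Glue or Athena):

```python
# Sketch of an AWS Lambda handler for a data-pipeline validation step.
# The event structure is a made-up example; real triggers (S3, SNS, etc.)
# deliver their own event shapes.
import json

def handler(event, context):
    """Validate an incoming record batch and return a summary."""
    records = event.get("records", [])
    good = [r for r in records if "id" in r]   # hypothetical validity rule
    return {
        "statusCode": 200,
        "body": json.dumps({"received": len(records), "valid": len(good)}),
    }

# Local invocation with a sample event (Lambda passes event and context)
resp = handler({"records": [{"id": 1}, {"id": 2}, {"name": "no-id"}]}, None)
```

Keeping the handler a plain function like this makes it easy to unit-test locally before deploying.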

 

Read more
Hyderabad
5 - 12 yrs
₹10L - ₹35L / yr
Analytics
Kubernetes
Apache Kafka
Data Analytics
Python
+3 more
  • 3+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, the ELK Stack, Fluentd, and streaming databases like Druid
  • Strong industry expertise with containerization technologies, including Kubernetes and docker-compose
  • 2+ years of industry experience developing scalable data ingestion processes and ETLs
  • Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and managed Kafka
  • Experience with scripting languages; Python experience highly desirable
  • 2+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Demonstrated expertise in building cloud-native applications
  • Experience in API development using Swagger
  • Implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration (Jenkins)
Responsibilities
  • Design and implement large-scale data processing pipelines using Kafka, Fluentd, and Druid
  • Develop data ingestion processes and ETLs
  • Design and implement APIs
  • Assist in DevOps operations
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
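Conceptually, the streaming side of a Kafka-to-Druid pipeline reduces to windowed aggregation over an unbounded event stream. A toy in-process version (no broker involved; window size and events are illustrative) might look like:

```python
# Toy streaming aggregation: a fixed-size sliding window over an event
# stream, computing a running average -- conceptually what a Kafka ->
# Druid pipeline does at scale, minus partitioning and fault tolerance.
from collections import deque

class SlidingWindowAverage:
    def __init__(self, size):
        # deque with maxlen drops the oldest event automatically
        self.window = deque(maxlen=size)

    def ingest(self, value):
        """Consume one event and return the current windowed average."""
        self.window.append(value)
        return sum(self.window) / len(self.window)

agg = SlidingWindowAverage(size=3)
averages = [agg.ingest(v) for v in [10, 20, 30, 40]]
```

Real streaming engines add event-time handling, watermarks, and persistence, but the per-window aggregation logic is the same core idea.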
Read more
Genesys

Posted by Praveen Immanuel
Chennai, Hyderabad
2 - 5 yrs
₹1L - ₹12L / yr
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

Location: Chennai

Summary of the Position:

Job Purpose:    

Genesys is the global omnichannel customer experience and contact center solution leader. With over 4,500 successful customers, our customer experience platform and solutions help companies engage effortlessly with their customers, across all touchpoints, channels and interactions to deliver differentiated customer journeys, while maximizing revenue and loyalty. 

Genesys is embracing AI and Machine Learning in delivering AI-powered products like Predictive Routing, chatbots, and Customer Journey management. The PS Data Scientist shares a passion for AI and data, working with global project teams and customers to extend data science services across Genesys solutions, both on premise and in the cloud.

Join us and be a part of this journey as we write Customer Success stories with these products.

Location: (Chennai/Hyderabad/Bangalore)

WHAT YOU DO:

  • Interface with business customers, gathering and understanding requirements
  • Interface with customer and Genesys data science teams in discovery, extraction, loading, data transformation, and analysis of results
  • Define and utilize data intuition process to cleanse and verify the integrity of customer & Genesys data to be used for analysis
  • Implement, own, and improve data pipelines using best practices in data modeling, ETL/ELT processes
  • Build, improve, and provide ongoing optimization of high quality models
  • Work with PS & Engineering to deliver specific customer requirements and report back customer feedback, issues and feature requests. Continuous improvement in reporting, analysis, overall process.
  • Visualize, present and demonstrate findings as required. Perform knowledge transfer to customer and internal teams.
  • Communicate within the global community respecting cultural, language and time zone variations
  • Demonstrate flexibility to adjust working hours to match customer and team interactions

 

 

ABOUT YOU:

  • Bachelor’s / Master’s degree in a quantitative field (e.g. Computer Science, Statistics, Engineering)
  • 2-4 years of relevant experience in Data Science or Data Engineering
  • 2+ years of hands-on experience performing statistical data analysis across large datasets, writing highly optimized SQL queries, and using Python (NumPy, Pandas) or similar software (Primary)
  • Expertise with major statistical and analytical software such as Python, R, or SAS (Primary)
  • Experience in Snowflake, Tableau, Elasticsearch, Kibana, and real-time analytics solution development will be a major plus (Secondary)
  • Application development background using contact center product suites such as Genesys, Avaya, or Cisco
  • Expertise with data modelling, data warehousing, and ETL/ELT development
  • Expertise with database solutions such as SQL, MongoDB, Redshift, Hadoop, and Hive
  • Proficiency with REST APIs, JSON, and AWS/Azure/GCP (Primary)
  • Experience working on and delivering projects independently; ability to multi-task and context-switch between projects and tasks
  • Curiosity, passion, and drive for data queries, analysis, quality, and models
  • Excellent communication, initiative, and coordination skills with great attention to detail; ability to explain and discuss complex topics with both experts and business leaders
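The SQL-plus-Python analysis style described above can be sketched with the standard library alone. The interactions table, its columns, and the values are hypothetical, and SQLite stands in for a warehouse like Snowflake or Redshift:

```python
# Sketch of SQL-plus-Python analysis: aggregate contact-center
# interactions per channel in SQL, then compute a statistic in Python.
# Table, column names, and data are hypothetical.
import sqlite3
import statistics

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE interactions (channel TEXT, handle_time REAL)")
con.executemany(
    "INSERT INTO interactions VALUES (?, ?)",
    [("chat", 120.0), ("chat", 180.0), ("voice", 300.0), ("voice", 240.0)],
)

# Aggregate in the database rather than in application code
rows = con.execute(
    "SELECT channel, AVG(handle_time) FROM interactions "
    "GROUP BY channel ORDER BY channel"
).fetchall()
avg_by_channel = dict(rows)

# Follow-up statistic in Python over the raw values
overall_mean = statistics.mean(
    t for _, t in con.execute("SELECT channel, handle_time FROM interactions")
)
```

Pushing the GROUP BY into the database is the "highly optimized SQL" habit the role asks for; Python then handles the statistics the database does not express well.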

How You Do It:

How You Think: Understands the business and takes a non-traditional approach to solving common problems. Willing to draw outside the lines and find new ways to make an impact on old problems.

How You Interact: Can easily build collaborative relationships that energize individuals, teams, and the company into action. You are a global thinker and can work across locations and time zones. You are an excellent communicator and listener, and can easily persuade to drive a vision and purpose.

How You Own It: You are a hands-on executor who can drive change and clearly communicate across all stakeholders.

How You Show Up: 

Embodies Genesys core cultural values and pushes to create an authentic employee experience. You are the type of person who can succeed through ambiguity, bringing clarity where there is no roadmap, who can reset when a change in direction is needed without getting derailed or frustrated. You are authentic and instill trust in others.

About Us

Genesys ® powers more than 25 billion of the world’s best customer experiences each year. We put the customer at the center of everything we do and passionately believe that great customer engagement drives great business outcomes. More than 10,000 companies in more than 100 countries trust the industry’s #1 customer experience platform to orchestrate omnichannel customer journeys that eliminate silos and build lasting relationships. With a strong track record of innovation and a never-ending desire to be first, Genesys is the only company recognized by top industry analysts as a leader in both cloud and on-premise customer engagement solutions. Connect with Genesys via www.genesys.com , Twitter , Facebook , YouTube , LinkedIn , and the Genesys blog.

Genesys is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.

Read more
El Corte Inglés
Posted by Saradhi Reddy
Hyderabad
3 - 7 yrs
₹10L - ₹25L / yr
skill iconData Science
skill iconR Programming
skill iconPython
Read more
DataMetica

Posted by Nikita Aher
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr
Big Data
Hadoop
Spark
Apache Spark
Apache Hive
+3 more

Job description

Role: Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)

Primary Location: India-Pune, Hyderabad

Experience: 7 - 12 Years

Management Level: 7

Joining Time: Immediate joiners are preferred


  • Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Experience in solution design and solution architecture for the data engineering model, to build and implement Big Data projects on-premises and in the cloud.
  • Align architecture with business requirements and stabilize the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture, and delivery planning activities for data-intensive and high-throughput platforms and applications
  • Able to benchmark systems, analyse system bottlenecks, and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning, and governance of implementation projects of any kind
  • Develop, construct, test, and maintain architectures, and run sprints for development and rollout of functionalities
  • Data analysis and code development experience, ideally in Big Data: Spark, Hive, Hadoop, Java, Python, PySpark
  • Execute projects of various types, i.e. design, development, implementation, and migration of functional analytics models/business logic across architecture approaches
  • Work closely with business analysts to understand the core business problems and deliver efficient IT solutions for the product
  • Deploy sophisticated analytics programs using cloud applications


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

Read more
Claim Genius

Posted by KalyaniMuley
Nagpur, Hyderabad
3 - 10 yrs
₹5L - ₹25L / yr
Data Science
Deep Learning
Python
Image Processing
CNN
+2 more

Responsibilities: 
 

  • The Machine Learning & Deep Learning Software Engineer (with expertise in Computer Vision) will be an early member of a growing team, with responsibility for designing and developing highly scalable machine learning solutions that impact many areas of our business. 
  • The individual in this role will help design and develop neural network (especially Convolutional Neural Network) and ML solutions based on our reference architecture, which is underpinned by big data and cloud technology, a micro-service architecture, and high-performing compute infrastructure. 
  • Typical daily activities include contributing to all phases of algorithm development, including ideation, prototyping, design, development, and production implementation. 


Required Skills: 
 

  • An ideal candidate will have a background in software engineering and data science with expertise in machine learning algorithms, statistical analysis tools, and distributed systems. 
  • Experience in building machine learning applications, and broad knowledge of machine learning APIs, tools, and open-source libraries 
  • Strong coding skills and fundamentals in data structures, predictive modeling, and big data concepts 
  • Experience in designing full stack ML solutions in a distributed computing environment 
  • Experience working with Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, and AWS GPU
  • Excellent communication skills with multiple levels of the organization 
  • Experience with image CNNs, image processing, Mask R-CNN, and Faster R-CNN is a must.
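At the heart of every CNN listed above is a 2D convolution. As a bare-bones illustration (frameworks like TensorFlow/Keras run this on GPUs; the tiny image and kernel here are made up), a valid-mode version can be written as:

```python
# Bare-bones 2D "valid" cross-correlation, the core op of a CNN layer.
# Real networks use optimized TensorFlow/Keras kernels; this only
# shows the arithmetic on a tiny example.

def conv2d_valid(image, kernel):
    """Slide `kernel` over `image` with no padding, summing products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# Horizontal edge detector on a tiny image with a bright bottom half
img = [[0, 0, 0],
       [0, 0, 0],
       [1, 1, 1],
       [1, 1, 1]]
edge = conv2d_valid(img, [[-1, -1, -1], [1, 1, 1]])
```

The kernel responds only at the dark-to-bright boundary, which is exactly how early CNN layers learn to detect edges.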
Read more
Hyderabad
2 - 4 yrs
₹10L - ₹15L / yr
Python
PySpark
Knowledge in AWS
  • Desire to explore new technology and break new ground.
  • Are passionate about Open Source technology, continuous learning, and innovation.
  • Have the problem-solving skills, grit, and commitment to complete challenging work assignments and meet deadlines.

Qualifications

  • Engineer enterprise-class, large-scale deployments, and deliver Cloud-based Serverless solutions to our customers.
  • You will work in a fast-paced environment with leading microservice and cloud technologies, and continue to develop your all-around technical skills.
  • Participate in code reviews and provide meaningful feedback to other team members.
  • Create technical documentation.
  • Develop thorough Unit Tests to ensure code quality.
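For the thorough-unit-tests expectation above, a minimal stdlib `unittest` sketch looks like the following (the function under test is invented for illustration, not project code):

```python
# Minimal unit-test sketch with the stdlib unittest module.
# normalize_sku is a made-up example function, not project code.
import unittest

def normalize_sku(raw):
    """Strip surrounding whitespace and upper-case a SKU string."""
    return raw.strip().upper()

class NormalizeSkuTest(unittest.TestCase):
    def test_strips_and_uppercases(self):
        self.assertEqual(normalize_sku("  ab-123 "), "AB-123")

    def test_already_clean_input_unchanged(self):
        self.assertEqual(normalize_sku("AB-123"), "AB-123")

# Run the suite in-process (a test runner would call unittest.main())
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeSkuTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same structure scales up: one TestCase per unit, run by CI on every commit.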

Skills and Experience

  • Advanced skills in troubleshooting and tuning AWS Lambda functions developed with Java and/or Python.
  • Experience with event-driven architecture design patterns and practices
  • Experience in database design and architecture principles and strong SQL abilities
  • Message brokers like Kafka and Kinesis
  • Experience with Hadoop, Hive, and Spark (either PySpark or Scala)
  • Demonstrated experience owning enterprise-class applications and delivering highly available distributed, fault-tolerant, globally accessible services at scale.
  • Good understanding of distributed systems.
  • Candidates will be self-motivated and display initiative, ownership, and flexibility.

 

Preferred Qualifications

  • AWS Lambda function development experience with Java and/or Python.
  • Lambda triggers such as SNS, SES, or cron.
  • Databricks
  • Cloud development experience with AWS services, including:
  • IAM
  • S3
  • EC2
  • AWS CLI
  • API Gateway
  • ECR
  • CloudWatch
  • Glue
  • Kinesis
  • DynamoDB
  • Java 8 or higher
  • ETL data pipeline building
  • Data Lake Experience
  • Python
  • Docker
  • MongoDB or similar NoSQL DB.
  • Relational Databases (e.g., MySQL, PostgreSQL, Oracle, etc.).
  • Gradle and/or Maven.
  • JUnit
  • Git
  • Scrum
  • Experience with Unix and/or macOS.
  • Immediate Joiners

Nice to have:

  • AWS / GCP / Azure Certification.
  • Cloud development experience with Google Cloud or Azure

 

Read more