Ubuntu Jobs in Hyderabad

11+ Ubuntu Jobs in Hyderabad | Ubuntu Job openings in Hyderabad

Apply to 11+ Ubuntu Jobs in Hyderabad on CutShort.io. Explore the latest Ubuntu Job opportunities across top companies like Google, Amazon & Adobe.

Monarch Tractors India
Hyderabad
5 - 8 yrs
Best in industry
Python
Amazon Web Services (AWS)
PostgreSQL
Ubuntu
Web Services Description Language (WSDL)

Designation: Principal Data Engineer

Experience: Experienced

Position Type: Full Time Position

Location: Hyderabad

Office Timings: 9 AM to 6 PM

Compensation: As per industry standards

 

About Monarch:

 

At Monarch, we’re leading the digital transformation of farming. Monarch Tractor augments both muscle and mind with fully loaded hardware, software, and service machinery that will spur future generations of farming technologies. With our farmer-first mentality, we are building a smart tractor that will enhance (not replace) the existing farm ecosystem, alleviate labor availability and cost issues, and provide an avenue for competitive organic and beyond farming by providing mechanical solutions to replace harmful chemical solutions. Despite all the cutting-edge technology we will incorporate, our tractor will still plow, till, and haul better than any other tractor in its class. We have all the necessary ingredients to develop, build and scale the Monarch Tractor and digitally transform farming around the world.

 

Description:

 

Monarch Tractor would like to invite an experienced Python data engineer to lead our internal data engineering team in India. This is a unique opportunity to work on computer vision AI data pipelines for electric tractors. You will be dealing with data from a farm environment, such as videos, images, tractor logs, GPS coordinates and map polygons, and you will be responsible for collecting data for research and development. For example, this includes setting up ETL data pipelines to extract data from tractors, loading this data into the cloud and recording AI training results.
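
To make the ETL work concrete: below is a minimal sketch of the "load" step of such a pipeline, written against boto3 (the standard AWS SDK for Python). The bucket name, log directory, and key layout are illustrative assumptions, not Monarch's actual pipeline.

    # Sketch: upload tractor log files to S3 as the load step of an ETL pipeline.
    # Assumes boto3 is installed and AWS credentials are configured; the bucket
    # and log directory below are hypothetical.
    import pathlib
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-tractor-logs"              # hypothetical bucket
    LOG_DIR = pathlib.Path("/var/log/tractor")   # hypothetical log directory

    def upload_logs(tractor_id: str) -> None:
        """Upload every log file for one tractor, keyed by tractor id and filename."""
        for path in LOG_DIR.glob("*.log"):
            key = f"raw/{tractor_id}/{path.name}"
            s3.upload_file(str(path), BUCKET, key)
            print(f"uploaded s3://{BUCKET}/{key}")

    if __name__ == "__main__":
        upload_logs("tractor-001")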

 

This role includes, but is not limited to, the following tasks:

 

● Lead the data engineering team

● Own and contribute to more than 50% of the data engineering code base

● Scope out new project requirements

● Cost out data pipeline solutions

● Create data engineering tooling

● Design custom data structures for efficient processing of data

 

Data engineering skills we are looking for:

 

● Able to work with large amounts of text log data, image data, and video data

● Fluently use AWS cloud solutions like S3, Lambda, and EC2

● Able to work with data from the Robot Operating System (ROS); see the sketch below
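
As an illustration of the ROS point above, here is a minimal sketch of reading GPS fixes out of a ROS 1 bag file with the rosbag Python API; the bag filename and topic name are hypothetical.

    # Sketch: extract GPS fixes from a ROS 1 bag file for downstream ETL.
    # Assumes a ROS 1 environment with the rosbag package; the bag path and
    # the /gps/fix topic are hypothetical.
    import rosbag

    with rosbag.Bag("drive_2023-01-01.bag") as bag:
        for topic, msg, t in bag.read_messages(topics=["/gps/fix"]):
            # For this topic, msg is a sensor_msgs/NavSatFix message
            print(t.to_sec(), msg.latitude, msg.longitude)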

 

Required Experience:

 

● 3 to 5 years of experience using Python

● 3 to 5 years of experience using PostgreSQL

● 3 to 5 years of experience using AWS EC2, S3, Lambda

● 3 to 5 years of experience using Ubuntu OS or WSL

 

Good to have experience:

 

● Ray

● Robot Operating System

 

What you will get:

 

At Monarch Tractor, you’ll play a key role on a capable, dedicated, high-performing team of rock stars. Our compensation package includes a competitive salary, excellent health, dental and vision benefits, and company equity commensurate with the role you’ll play in our success. 

Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions, utilizing a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions. You will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on the design, development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time (see the sketch after this list)

• Build functionality for data analytics, search and aggregation
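
As a sketch of what batch and real-time ingestion can look like in practice (PySpark here, since the stack below names Spark), with hypothetical paths, topic, and broker; the Kafka connector is assumed to be on Spark's classpath:

    # Sketch: batch and streaming ingestion with PySpark (illustrative only).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingestion-demo").getOrCreate()

    # Batch: ingest flat files into a partitioned Parquet layout.
    batch_df = spark.read.option("header", "true").csv("s3://raw-zone/orders/*.csv")
    batch_df.write.mode("append").partitionBy("order_date").parquet("s3://curated-zone/orders/")

    # Real-time: ingest a Kafka topic and land micro-batches alongside the batch data.
    stream_df = (spark.readStream.format("kafka")
                 .option("kafka.bootstrap.servers", "broker:9092")
                 .option("subscribe", "orders")
                 .load())
    query = (stream_df.selectExpr("CAST(value AS STRING) AS payload")
             .writeStream.format("parquet")
             .option("path", "s3://curated-zone/orders_stream/")
             .option("checkpointLocation", "s3://curated-zone/_checkpoints/orders/")
             .start())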


Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies

2. Minimum 1.5 years of experience in Big Data technologies

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala and Python; Java preferable

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):

# Competency

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD – infra provisioning on cloud, automated build & deployment pipelines, code quality

6. Working knowledge of data platform related services on at least one cloud platform, plus IAM and data security

7. Cloud data specialty and other related Big Data technology certifications



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Softobiz Technologies Private limited
Posted by Adlin Asha
Hyderabad
8 - 18 yrs
₹15L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Redshift
PostgreSQL

Experience: 8+ Years

Work Location: Hyderabad

Mode of work: Work from Office


Senior Data Engineer / Architect

 

Summary of the Role

 

The Senior Data Engineer / Architect will play a key role within the data and technology team, responsible for engineering and building data solutions that enable seamless use of data within the organization.

 

Core Activities

-         Work closely with the business teams and business analysts to understand and document data usage requirements

-         Develop designs relating to data engineering solutions including data pipelines, ETL, data warehouse, data mart and data lake solutions (a warehouse load-step sketch follows this list)

-         Develop data designs for reporting and other data use requirements

-         Develop data governance solutions that provide data governance services including data security, data quality, data lineage etc.

-         Lead implementation of data use and data quality solutions

-         Provide operational support for users for the implemented data solutions

-         Support development of solutions that automate reporting and business intelligence requirements

-         Support development of machine learning and AI solutions using large-scale internal and external datasets
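
To give one concrete flavor of the pipeline / warehouse work above, here is a minimal sketch of a load step that bulk-loads Parquet files from S3 into Amazon Redshift via COPY; the cluster endpoint, credentials, table, bucket, and IAM role are all hypothetical.

    # Sketch: bulk-load curated Parquet data from S3 into Redshift with COPY.
    # Assumes psycopg2; every identifier below is a placeholder.
    import psycopg2

    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="etl_user", password="...",
    )
    with conn, conn.cursor() as cur:
        cur.execute("""
            COPY analytics.sales
            FROM 's3://example-curated-zone/sales/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
            FORMAT AS PARQUET;
        """)
    conn.close()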

 

Other activities

-         Work on and manage technology projects as and when required

-         Provide user and technical training on data solutions

 

Skills and Experience

-         At least 5-8 years of experience in a senior data engineer / architect role

-         Strong experience with AWS-based data solutions including AWS Redshift, analytics and data governance solutions

-         Strong experience with industry-standard data governance / data quality solutions

-         Strong experience with managing a PostgreSQL data environment

-         A background as a software developer working in AWS / Python will be beneficial

-         Experience with BI tools like Power BI and Tableau

-         Strong written and oral communication skills

 

Picture the future

Agency job
via Jobdost by Sathish Kumar
Hyderabad
4 - 7 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 
  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies (see the sketch after this list)
  • Data processing/transformation using various technologies such as Spark and cloud services. You will need to understand the relevant business logic and implement it using the language supported by the base data platform
  • Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 
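
The sketch below illustrates the first two responsibilities under stated assumptions: a PySpark JDBC read from an RDBMS followed by an automated quality gate. The JDBC URL, table, and key column are hypothetical, and the PostgreSQL JDBC driver is assumed to be on Spark's classpath.

    # Sketch: RDBMS ingestion plus an automated data quality check (illustrative).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("rdbms-ingest").getOrCreate()

    df = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://db-host:5432/loans")
          .option("dbtable", "public.payments")
          .option("user", "ingest").option("password", "...")
          .load())

    # Quality gate: reject the batch if the primary key is ever null or duplicated.
    null_keys = df.filter(F.col("payment_id").isNull()).count()
    dupe_keys = df.count() - df.select("payment_id").distinct().count()
    if null_keys or dupe_keys:
        raise ValueError(f"quality check failed: {null_keys} null keys, {dupe_keys} duplicates")

    df.write.mode("overwrite").parquet("s3://curated-zone/payments/")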

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or a related discipline
  • Advanced knowledge of one of the following languages: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
  • Proficient with:
  • Data mining/programming tools (e.g. SAS, SQL, R, Python)
  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.


Mandatory Requirements 

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting languages - Python & PySpark (see the Glue sketch below)
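
A minimal Glue job sketch tying the mandatory items together: read a crawled table from the Glue Data Catalog and write it to the S3 data lake as partitioned Parquet. The database, table, path, and partition key are hypothetical; the skeleton follows the standard Glue PySpark job structure.

    # Sketch: AWS Glue PySpark job, raw catalog table -> partitioned Parquet in S3.
    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())

    # Read via the Glue Data Catalog (table assumed to be crawled already).
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="events_csv")

    # Write to the lake as Parquet, partitioned by event date.
    glue_context.write_dynamic_frame.from_options(
        frame=dyf,
        connection_type="s3",
        connection_options={"path": "s3://example-lake/curated/events/",
                            "partitionKeys": ["event_date"]},
        format="parquet",
    )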

 

RandomTrees
Posted by Amareswarreddt yaddula
Hyderabad
5 - 16 yrs
₹1L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
SQL

We are #hiring an AWS Data Engineer expert to join our team


Job Title: AWS Data Engineer

Experience: 5 to 10 yrs

Location: Remote

Notice: Immediate or Max 20 Days

Role: Permanent Role


Skillset: AWS, ETL, SQL, Python, PySpark, Postgres DB, Dremio.


Job Description:

Able to develop ETL jobs.

Able to help with data curation/cleanup, data transformation, and building ETL pipelines (a small sketch follows below).

Strong Postgres DB experience is required; knowledge of Dremio as a data visualization/semantic layer between the DB and the application is a plus.

SQL, Python, and PySpark are a must.

Communication should be good.
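
As promised above, a small ETL sketch in the spirit of this role: extract from PostgreSQL, clean in pandas, load to Parquet. The connection string, table, and cleanup rules are hypothetical; writing straight to s3:// additionally assumes pyarrow and s3fs.

    # Sketch: PostgreSQL -> pandas cleanup -> Parquet (all names are placeholders).
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://etl_user:secret@db-host:5432/appdb")

    # Extract
    df = pd.read_sql("SELECT * FROM public.customers", engine)

    # Transform: basic curation/cleanup
    df["email"] = df["email"].str.strip().str.lower()
    df = df.dropna(subset=["customer_id"]).drop_duplicates(subset=["customer_id"])

    # Load (s3:// targets require pyarrow + s3fs; a local path also works)
    df.to_parquet("s3://example-bucket/curated/customers.parquet", index=False)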





Quadratic Insights
Posted by Praveen Kondaveeti
Hyderabad
7 - 10 yrs
₹15L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

About Quadratyx:

We are a global product-centric insight & automation services company. We help the world’s organizations make better & faster decisions using the power of insight & intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences & analytics than most other companies in India.

We firmly believe in Excellence Everywhere.


Job Description

Purpose of the Job/ Role:

• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.


Key Requisites:

• Expertise in Data structures and algorithms.

• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.

• Scaling of cloud-based infrastructure.

• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.

• Leading and mentoring a team of data engineers.

• Hands-on experience in test-driven development (TDD).

• Expertise in NoSQL databases like MongoDB, Cassandra, etc. (MongoDB preferred) and strong knowledge of relational databases.

• Good knowledge of Kafka and Spark Streaming internal architecture (a small consumer sketch follows this list).

• Good knowledge of any Application Servers.

• Extensive knowledge of big data platforms like Hadoop, Hortonworks, etc.

• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, Azure, etc.
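
For the Kafka point above, a small consumer sketch using the kafka-python client; the broker, topic, and group id are hypothetical.

    # Sketch: minimal Kafka consumer loop (kafka-python; all names are placeholders).
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "sensor-events",
        bootstrap_servers="broker:9092",
        group_id="analytics-etl",
        auto_offset_reset="earliest",
    )
    for message in consumer:
        # message.value is raw bytes; decode/parse per the topic's schema
        print(message.topic, message.partition, message.offset, message.value[:80])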


Skills/ Competencies Required

Technical Skills

• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python or Java.

• Clear end-to-end experience in designing, programming, and implementing large software systems.

• Passion and analytical abilities to solve complex problems.

Soft Skills

• Always speaking your mind freely.

• Communicating ideas clearly in speech and writing; integrity to never copy or plagiarize the intellectual property of others.

• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.


Academic Qualifications & Experience Required

Required Educational Qualification & Relevant Experience

• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.

• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).

Phenom People
Posted by Srivatsav Chilukoori
Hyderabad
3 - 6 yrs
₹10L - ₹18L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Deep Learning

JOB TITLE - Product Development Engineer - Machine Learning
● Work Location: Hyderabad
● Full-time
 

Company Description

Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
 

Job Responsibilities:

  • Design and implement machine learning, information extraction, probabilistic matching algorithms and models
  • Research and develop innovative, scalable and dynamic solutions to hard problems
  • Work closely with Machine Learning Scientists (PhDs), ML engineers, data scientists and data engineers to address challenges head-on.
  • Use the latest advances in NLP, data science and machine learning to enhance our products and create new experiences
  • Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
  • Be a valued contributor in shaping the future of our products and services
  • You will be part of our Data Science & Algorithms team and collaborate with product management and other team members
  • Be part of a fast-paced, fun-focused, agile team

Job Requirement:

  • 4+ years of industry experience
  • Ph.D. / MS / B.Tech in computer science, information systems, or a similar technical field
  • Strong mathematics, statistics, and data analytics skills
  • Solid coding and engineering skills, preferably in Machine Learning (not mandatory)
  • Proficient in Java, Python, and Scala
  • Industry experience building and productionizing end-to-end systems
  • Knowledge of Information Extraction, NLP algorithms coupled with Deep Learning
  • Experience with data processing and storage frameworks like Hadoop, Spark, Kafka etc.


Position Summary

We’re looking for a Machine Learning Engineer to join our team at Phenom. We expect the following from candidates for this role.

  • Building accurate machine learning models is the main goal of a machine learning engineer
  • Linear Algebra, Applied Statistics and Probability
  • Building Data Models
  • Strong knowledge of NLP (a minimal model-building sketch follows this list)
  • Good understanding of multithreaded and object-oriented software development
  • Mathematics, Mathematics and Mathematics
  • Collaborate with Data Engineers to prepare data models required for machine learning models
  • Collaborate with other product team members to apply state-of-the-art AI methods that include dialogue systems, natural language processing, information retrieval and recommendation systems
  • Build large-scale software systems and work on numerical computation topics
  • Use predictive analytics and data mining to solve complex problems and drive business decisions
  • Should be able to design an accurate ML end-to-end architecture including the data flows, algorithm scalability, and applicability
  • Tackle situations where both the problem and the solution are unknown
  • Solve analytical problems, and effectively communicate methodologies and results to customers
  • Adept at translating business needs into technical requirements and translating data into actionable insights
  • Work closely with internal stakeholders such as business teams, product managers, engineering teams, and customer success teams.
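
For the NLP point above, a deliberately minimal model-building sketch with scikit-learn: TF-IDF features feeding a logistic regression classifier. The four toy documents stand in for real candidate/job text, which this posting does not specify.

    # Sketch: tiny text-classification model (TF-IDF + logistic regression).
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["senior java developer", "python data engineer",
             "registered nurse icu", "pediatric nurse practitioner"]
    labels = ["tech", "tech", "healthcare", "healthcare"]

    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(texts, labels)
    print(model.predict(["machine learning engineer"]))  # expected: ['tech']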

Benefits

  • Competitive salary for a startup
  • Gain experience rapidly
  • Work directly with the executive team
  • Fast-paced work environment

 

About Phenom People

At PhenomPeople, we believe candidates (Job seekers) are consumers. That’s why we’re bringing e-commerce experience to the job search, with a view to convert candidates into applicants. The Intelligent Career Site™ platform delivers the most relevant and personalized job search yet, with a career site optimized for mobile and desktop interfaces designed to integrate with any ATS, tailored content selection like Glassdoor reviews, YouTube videos and LinkedIn connections based on candidate search habits and an integrated real-time recruiting analytics dashboard.

 

 Use Company career sites to reach candidates and encourage them to convert. The Intelligent Career Site™ offers a single platform to serve candidates a modern e-commerce experience from anywhere on the globe and on any device.

 We track every visitor that comes to the Company career site. Through fingerprinting technology, candidates are tracked from the first visit and served jobs and content based on their location, click-stream, behavior on site, browser and device to give each visitor the most relevant experience.

 Like consumers, candidates research companies and read reviews before they apply for a job. Through our understanding of the candidate journey, we are able to personalize their experience and deliver relevant content from sources such as corporate career sites, Glassdoor, YouTube and LinkedIn.

 We give you clear visibility into the Company's candidate pipeline. By tracking up to 450 data points, we build profiles for every career site visitor based on their site visit behavior, social footprint and any other relevant data available on the open web.

 Gain a better understanding of Company’s recruiting spending and where candidates convert or drop off from Company’s career site. The real-time analytics dashboard offers companies actionable insights on optimizing source spending and the candidate experience.

 

Kindly explore the company Phenom: https://www.phenom.com/
YouTube - https://www.youtube.com/c/PhenomPeople
LinkedIn - https://www.linkedin.com/company/phenompeople/

Phenom | Talent Experience Management - https://www.phenom.com/

Hyderabad
4 - 8 yrs
₹5L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
Experience in developing Lambda functions with AWS Lambda (a minimal handler sketch follows below)
Expertise with Spark/PySpark – the candidate should be hands-on with PySpark code and able to do transformations with Spark
Should be able to code in Python and Scala.
Snowflake experience will be a plus
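
As flagged above, a minimal Lambda handler sketch for the Glue -> Athena -> QuickSight flow: the function kicks off an Athena query over a cataloged table, and QuickSight can then read the refreshed results. The database, table, and output bucket are hypothetical.

    # Sketch: AWS Lambda handler that starts an Athena query (names are placeholders).
    import boto3

    athena = boto3.client("athena")

    def lambda_handler(event, context):
        response = athena.start_query_execution(
            QueryString="SELECT region, SUM(amount) AS total FROM sales GROUP BY region",
            QueryExecutionContext={"Database": "curated_db"},
            ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
        )
        return {"queryExecutionId": response["QueryExecutionId"]}
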
Statusneo
Posted by Yashika Sharma
Hyderabad, Bengaluru (Bangalore)
2 - 4 yrs
₹2L - ₹4L / yr
Data Science
Computer Vision
Natural Language Processing (NLP)
Machine Learning (ML)
Python

Responsibilities Description:

Responsible for the development and implementation of machine learning algorithms and techniques to solve business problems and optimize member experiences. Primary duties may include, but are not limited to: Design machine learning projects to address specific business problems determined in consultation with business partners. Work with data sets of varying degrees of size and complexity, including both structured and unstructured data. Pipe and process massive data streams in distributed computing environments such as Hadoop to facilitate analysis. Implement batch and real-time model scoring to drive actions (a batch-scoring sketch follows below). Develop machine learning algorithms to build customized solutions that go beyond standard industry tools and lead to innovative solutions. Develop sophisticated visualization of analysis output for business users.
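
As referenced above, a minimal batch-scoring sketch, assuming a pre-trained scikit-learn model serialized with joblib; the file paths and feature columns are hypothetical.

    # Sketch: batch model scoring over a member table (all names are placeholders).
    import joblib
    import pandas as pd

    model = joblib.load("models/churn_model.joblib")
    members = pd.read_parquet("data/members_today.parquet")

    features = members[["tenure_months", "claims_last_year", "age"]]
    members["churn_score"] = model.predict_proba(features)[:, 1]

    # Persist scores so downstream systems can act on them.
    members[["member_id", "churn_score"]].to_parquet("data/churn_scores.parquet")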

 

Experience Requirements:

BS/MA/MS/PhD in Statistics, Computer Science, Mathematics, Machine Learning, Econometrics, Physics, Biostatistics or related quantitative disciplines. 2-4 years of experience in predictive analytics and advanced expertise with software such as Python, or any combination of education and experience which would provide an equivalent background. Experience in the healthcare sector. Experience in Deep Learning is strongly preferred.

 

Required Technical Skill Set:

  • Full cycle of building machine learning solutions,

o   Understanding of a wide range of algorithms and their corresponding problems to solve

o   Data preparation and analysis

o   Model training and validation

o   Model application to the problem

  • Experience using the full suite of open-source programming tools and utilities
  • Experience in working on end-to-end data science project implementation.
  • 2+ years of experience with development and deployment of Machine Learning applications
  • 2+ years of experience with NLP approaches in a production setting
  • Experience in building models using bagging and boosting algorithms
  • Exposure/experience in building Deep Learning models for NLP/Computer Vision use cases preferred
  • Ability to write efficient code with a good understanding of core data structures/algorithms is critical
  • Strong Python skills following software engineering best practices
  • Experience in using code versioning tools like Git, Bitbucket
  • Experience in working on Agile projects
  • Comfort & familiarity with SQL and the Hadoop ecosystem of tools, including Spark
  • Experience managing big data with efficient query programs is good to have
  • Good to have experience in training ML models in tools like SageMaker, Kubeflow, etc.
  • Good to have experience in frameworks to depict interpretability of models, using libraries like LIME, SHAP, etc.
  • Experience with the healthcare sector is preferred
  • MS/M.Tech or PhD is a plus
15 years US based Product Company

Agency job
Chennai, Bengaluru (Bangalore), Hyderabad
4 - 10 yrs
₹9L - ₹20L / yr
Informatica
informatica developer
Informatica MDM
Data integration
Informatica Data Quality
  • Should have good hands-on experience in Informatica MDM Customer 360, Data Integration (ETL) using PowerCenter, and Data Quality.
  • Must have strong skills in Data Analysis, Data Mapping for ETL processes, and Data Modeling.
  • Experience with the SIF framework including real-time integration
  • Should have experience in building C360 Insights using Informatica
  • Should have good experience in creating performant design using Mapplets, Mappings, Workflows for Data Quality(cleansing), ETL.
  • Should have experience in building different data warehouse architectures like Enterprise, Federated, and Multi-Tier architecture.
  • Should have experience in configuring Informatica Data Director in reference to the Data Governance of users, IT Managers, and Data Stewards.
  • Should have good knowledge of developing complex PL/SQL queries.
  • Should have working experience on UNIX and shell scripting to run the Informatica workflows and to control the ETL flow.
  • Should know about Informatica Server installation and have knowledge of the Administration console.
  • Working experience as a Developer with Administration knowledge is a plus.
  • Working experience in Amazon Web Services (AWS) is an added advantage, particularly AWS S3, Data Pipeline, Lambda, Kinesis, DynamoDB, and EMR.
  • Should be responsible for the creation of automated BI solutions, including requirements, design, development, testing, and deployment
MNC

Agency job
via Fragma Data Systems by Geeti Gaurav Mohanty
Bengaluru (Bangalore), Hyderabad
3 - 6 yrs
₹10L - ₹15L / yr
Big Data
Spark
ETL
Apache
Hadoop
Desired Skill, Experience, Qualifications, and Certifications:
• 5+ years’ experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years’ experience with Healthcare Payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift & Jupyter Notebooks
• Strong in Spark Scala & Python pipelines (ETL & Streaming)
• Strong experience in metadata management tools like AWS Glue
• Strong experience in coding with languages like Java, Python
• Worked on designing ETL & streaming pipelines in Spark Scala / Python
• Good experience in requirements gathering, design & development
• Working with cross-functional teams to meet strategic goals
• Experience in high-volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good to have: AWS Developer certification, Scala coding experience, Postman API, and Apache Airflow or similar scheduler experience
• Nice to have: experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills