RF Jobs in Bangalore (Bengaluru)

11+ RF Jobs in Bangalore (Bengaluru) | RF Job openings in Bangalore (Bengaluru)

Apply to 11+ RF Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest RF Job opportunities across top companies like Google, Amazon & Adobe.

Banyan Data Services

at Banyan Data Services

1 recruiter
Sathish Kumar
Posted by Sathish Kumar
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr
Data Science
Data Scientist
MongoDB
Java
Big Data
+14 more

Senior Big Data Engineer 

Note: Notice period: 45 days

Banyan Data Services (BDS) is a US-based, data-focused company headquartered in San Jose, California, that specializes in comprehensive data solutions and services.

 

We are looking for a Senior Hadoop/Big Data Engineer with expertise in solving complex data problems across a big data platform. You will be part of our development team based out of Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure.

 

It's a once-in-a-lifetime opportunity to join our rocket-ship startup run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges.

 

 

Key Qualifications

 

· 5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience working with Spark on big data, including experience with data profiling and building transformations (see the sketch after this list)

· Knowledge of microservices architecture is a plus

· Experience with NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or other streaming tools

· Knowledge of Scala is preferable

· Experience with agile application development

· Exposure to cloud technologies, including containers and Kubernetes

· Demonstrated experience performing DevOps for platforms

· Strong skills in data structures and algorithms, with a focus on writing efficient, low-complexity code

· Exposure to graph databases

· Passion for learning new technologies and the ability to do so quickly

· A Bachelor's degree in a computer-related field or equivalent professional experience is required
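A minimal PySpark sketch of the kind of data profiling and transformation work this role describes; the table layout, column names, and S3 paths are illustrative assumptions, not Banyan's actual pipeline.

```python
# Minimal sketch: profile a raw events table and build a cleaned, aggregated view.
# Paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-profiling").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical source

# Simple profiling: count nulls per column
profile = raw.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls") for c in raw.columns]
)
profile.show()

# Transformation: drop bad rows and aggregate events per user per day
cleaned = (
    raw.dropna(subset=["user_id", "event_time"])
       .withColumn("event_date", F.to_date("event_time"))
)
daily = cleaned.groupBy("user_id", "event_date").agg(F.count("*").alias("events"))
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_events/")
```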

 

Key Responsibilities

 

· Scope and deliver solutions with the ability to design solutions independently based on high-level architecture

· Design and develop big data-focused microservices

· Work on big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies on the cloud

· Be willing to learn new technologies and take on research-oriented projects

· Contribute to team efforts with proven interpersonal skills, accomplishing related results as needed

With a reputed service-based company

Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹15L / yr
SQL
MySQL
MySQL DBA
MariaDB
MS SQLServer
Role Description
As a Database Administrator, you will be responsible for designing, testing, planning, implementing, protecting, operating, managing and maintaining our company’s databases. The goal is to provide a seamless flow of information throughout the company, considering both backend data structure and frontend accessibility for end-users. You get to work with some of the best minds in the industry at a place where opportunity lurks everywhere and in everything.
Responsibilities
Your responsibilities are as follows.
• Build database systems of high availability and quality depending on each end user’s specialised role
• Design and implement databases in accordance with end users’ information needs and views
• Define users and enable data distribution to the right user, in the appropriate format and in a timely manner
• Use high-speed transaction recovery techniques and back up data (a minimal backup sketch follows this list)
• Minimise database downtime and manage parameters to provide fast query responses
• Provide proactive and reactive data management support and training to users
• Determine, enforce and document database policies, procedures and standards
• Perform tests and evaluations regularly to ensure data security, privacy and integrity
• Monitor database performance, implement changes and apply new patches and versions when required
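As a rough illustration of the backup responsibility above, here is a minimal nightly logical-backup sketch built around mysqldump; the user, database name, and backup directory are placeholders, and a production setup would add credential handling (e.g., a .my.cnf file), compression, and retention.

```python
# Sketch of a scheduled logical backup using mysqldump; values are placeholders.
import subprocess, datetime, pathlib

BACKUP_DIR = pathlib.Path("/var/backups/mysql")  # assumed location

def backup_database(db_name: str, user: str = "backup_user") -> pathlib.Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    outfile = BACKUP_DIR / f"{db_name}_{stamp}.sql"
    # --single-transaction gives a consistent snapshot for InnoDB without long locks
    cmd = ["mysqldump", "--single-transaction", "-u", user, db_name]
    with open(outfile, "w") as fh:
        subprocess.run(cmd, stdout=fh, check=True)
    return outfile

if __name__ == "__main__":
    print(backup_database("app_db"))  # hypothetical database name
```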
Required Qualifications
We are looking for individuals who are curious, excited about learning, and comfortable navigating the uncertainties and complexities that come with a growing company. Some qualifications that we think would help you thrive in this role are:
• Minimum 4 years of experience as a Database Administrator
• Hands-on experience with database standards and end-user applications
• Excellent knowledge of data backup, recovery, security, integrity and SQL
• Familiarity with database design, documentation and coding
• Previous experience with DBA case tools (frontend/backend) and third-party tools
• Familiarity with programming language APIs
• Problem-solving skills and the ability to think algorithmically
• Bachelor’s/Master’s in CS/IT Engineering, BCA/MCA, or B.Sc/M.Sc in CS/IT

Preferred Qualifications
• Sense of ownership and pride in your performance and its impact on the company’s success
• Critical thinking and problem-solving skills
• Team player
• Good time-management skills
• Great interpersonal and communication skills
A stealth mode realty tech start-up

Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
1 - 5 yrs
₹10L - ₹32L / yr
Data Science
Natural Language Processing (NLP)
R Programming
Python
SQL
+2 more
1. A good understanding of the fundamentals of data science/algorithms or software engineering
2. Preferably, some project or internship experience related to the field
3. Knowledge of SQL is a plus
4. A deep desire to learn new things and be a part of a vibrant start-up
5. You will have a lot of free hand, and this comes with immense responsibility, so it is expected that you will be willing to master new things that come along!

Job Description:
1. Design and build a pipeline to train models for NLP problems like Classification and NER
2. Develop APIs that showcase our models' capabilities and enable third-party integrations (a minimal serving sketch follows this list)
3. Work across a microservices architecture that processes thousands of documents per day.
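For point 2, a minimal sketch of serving a trained text-classification model behind an API; FastAPI and the joblib artifact are assumptions for illustration, since the posting does not specify the stack.

```python
# Hedged sketch: a thin API over a hypothetical text-classification model.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
# Assumed artifact, e.g. a scikit-learn text pipeline saved with joblib
model = joblib.load("models/intent_classifier.joblib")

class Document(BaseModel):
    text: str

@app.post("/classify")
def classify(doc: Document):
    label = model.predict([doc.text])[0]
    return {"label": str(label)}
```

Run with any ASGI server (e.g. `uvicorn app:app`) and POST JSON like `{"text": "sample document"}` to `/classify`.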
MindTickle

at MindTickle

1 video
11 recruiters
Shama Afroj
Posted by Shama Afroj
Pune, Bengaluru (Bangalore)
6 - 10 yrs
₹30L - ₹65L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+6 more

About Us


Mindtickle provides a comprehensive, data-driven solution for sales readiness and enablement that fuels revenue growth and brand value for dozens of Fortune 500 and Global 2000 companies and hundreds of the world’s most recognized companies across technology, life sciences, financial services, manufacturing, and service sectors.


With purpose-built applications, proven methodologies, and best practices designed to drive effective sales onboarding and ongoing readiness, Mindtickle enables company leaders and sellers to continually assess, diagnose, and develop the knowledge, skills, and behaviors required to engage customers and drive growth effectively. We are funded by great investors, like Softbank, Canaan Partners, NEA, Accel Partners, and others.



Job Brief


We are looking for a rockstar researcher at the Center of Excellence for Machine Learning. You will be responsible for thinking outside the box, crafting new algorithms, developing end-to-end artificial intelligence-based solutions, and selecting the most appropriate architecture for the system(s) so that it suits the business needs and achieves the desired results under the given constraints.


Credibility:

  • You must have a proven track record in research and development, with adequate publications/patents and/or academic credentials in data science.
  • You should be able to connect business problems directly to research problems and to the latest emerging technologies.


Strategic Responsibility:

  • To understand problem statements, connect the dots between high-level business statements and deep technology algorithms, and craft new systems and methods in the space of structured data mining, natural language processing, computer vision, speech technologies, robotics, the Internet of Things, etc.
  • To be responsible for end-to-end production-level coding with data science and machine learning algorithms, unit and integration testing, deployment, and optimization and fine-tuning of models on cloud, desktop, mobile or edge, etc.
  • To learn continuously, upgrade and upskill, publish novel articles in journals and conference proceedings and/or file patents, and be involved in evangelism activities and ecosystem development.
  • To share knowledge; mentor colleagues, partners, and customers; take sessions on artificial intelligence topics both online and in person; and participate in workshops, conferences, and seminars/webinars as a speaker, instructor, demonstrator or jury member.
  • To design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance.
  • To collaborate within the product streams and team to bring best practices and leverage a world-class tech stack.
  • To set up all the essentials (tracking/alerting) to make sure the infrastructure/software built is working as expected.
  • To search for, collect, and clean data for analysis, and to set up efficient storage and retrieval pipelines (an illustrative sketch follows this list).
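As an illustrative sketch of the last point only: a tiny collect-clean-store pipeline using pandas and Redis. The input file, schema, and choice of Redis are assumptions, not Mindtickle's actual infrastructure.

```python
# Illustrative only: clean records and store them for fast keyed retrieval.
import json
import pandas as pd
import redis

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Drop duplicates and rows missing required fields; normalise the text column
    df = df.drop_duplicates().dropna(subset=["id", "text"])
    df["text"] = df["text"].str.strip().str.lower()
    return df

def store(df: pd.DataFrame, r: redis.Redis) -> None:
    # Keyed storage so downstream jobs can retrieve individual records quickly
    for row in df.to_dict(orient="records"):
        r.set(f"doc:{row['id']}", json.dumps(row))

if __name__ == "__main__":
    raw = pd.read_csv("data/raw_docs.csv")  # hypothetical input file
    store(clean(raw), redis.Redis(host="localhost", port=6379))
```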


Personality:

  • Requires excellent communication skills – written, verbal, and presentation.
  • You should be a team player.
  • You should be positive towards problem-solving and have a very structured thought process to solve problems.
  • You should be agile enough to learn new technology if needed.


Qualifications:

  • B Tech / BS / BE / M Tech / MS / ME in CS or equivalent from Tier I / II or Top Tier Engineering Colleges and Universities.
  • 6+ years of strong software (application or infrastructure) development experience and software engineering skills (Python, R, C, C++ / Java / Scala / Golang).
  • Deep expertise and practical knowledge of operating systems, MySQL, and NoSQL databases (Redis, Couchbase, MongoDB, Elasticsearch, or any graph DB).
  • Good understanding of Machine Learning Algorithms, Linear Algebra and Statistics.
  • Working knowledge of Amazon Web Services (AWS).
  • Experience with Docker and Kubernetes will be a plus.
  • Experience with Natural Language Processing, Recommendation Systems, or Search Engines.


Our Culture


As an organization, it’s our priority to create a highly engaging and rewarding workplace. We offer tons of awesome perks, great learning opportunities & growth.


Our culture reflects the globally diverse backgrounds of our employees along with our commitment to our customers, each other, and a passion for excellence.


To know more about us, feel free to go through these videos:

1. Sales Readiness Explained: https://www.youtube.com/watch?v=XyMJj9AlNww&t=6s

2. What We Do: https://www.youtube.com/watch?v=jv3Q2XgnkBY

3. Ready to Close More Deals, Faster: https://www.youtube.com/watch?v=nB0exreVU-s


To view more videos, please access the below-mentioned link:

https://www.youtube.com/c/mindtickle/videos



Mindtickle is proud to be an Equal Opportunity Employer


All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.


Your Right to Work - In compliance with applicable laws, all persons hired will be required to verify identity and eligibility to work in the respective work locations and to complete the required employment eligibility verification document form upon hire.

Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design and develop high-performance, scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects, and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. (see the pipeline sketch after this list).

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the big data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.
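A minimal Airflow-style sketch of the kind of automation described above (Airflow is one of the frameworks listed under competencies); the DAG id, schedule, and task bodies are placeholders rather than Cubera's real pipelines.

```python
# Hedged sketch of a daily extract-transform-load DAG; tasks are stand-ins.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source")    # stand-in for a real extractor

def transform():
    print("clean and enrich records")     # stand-in for Spark/Python logic

def load():
    print("write curated data to store")  # stand-in for a warehouse/NoSQL load

with DAG(
    dag_id="example_daily_ingest",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```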

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP, etc.

Data Store: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

Collaborative, lively and an upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Ganit Business Solutions

at Ganit Business Solutions

3 recruiters
Viswanath Subramanian
Posted by Viswanath Subramanian
Chennai, Bengaluru (Bangalore), Mumbai
4.5 - 6 yrs
₹15L - ₹18L / yr
Data Science
Machine Learning (ML)
What you will be doing:
Understand business problems and translate business requirements into technical requirements.
Conduct complex data analysis to ensure data quality and reliability, i.e., make the data talk by extracting, preparing, and transforming it.
Identify, develop and implement statistical techniques and algorithms to address business challenges and add value to the organization.
Gather requirements and communicate findings in the form of a meaningful story to the stakeholders.
Build and implement data models using predictive modelling techniques (a minimal sketch follows this list). Interact with clients and provide support for queries and delivery adoption.
Lead and mentor data analysts.
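A minimal sketch of building and validating a predictive model with scikit-learn; the churn dataset, target column, and choice of logistic regression are hypothetical and only illustrate the workflow, not a Ganit deliverable.

```python
# Illustrative predictive-modelling workflow; assumes numeric feature columns.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("data/customer_churn.csv")  # hypothetical dataset
X = df.drop(columns=["churned"])
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```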

What we are looking for:
Apart from your love for data and the ability to code even while sleeping, you would need the following.
Minimum of 2 years of experience in designing and delivering data science solutions.
You should have successful retail/BFSI/FMCG/Manufacturing/QSR projects in your kitty to show off.
Deep understanding of various statistical techniques, mathematical models, and algorithms to start the conversation with the data in hand.
Ability to choose the right model for the data and translate that into code using R, Python, VBA, SQL, etc.
Bachelor's/Master's degree in Engineering/Technology, an MBA from a Tier-1 B-school, or an M.Sc. in Statistics or Mathematics.
Cloudbloom Systems LLP

at Cloudbloom Systems LLP

5 recruiters
Sahil Rana
Posted by Sahil Rana
Bengaluru (Bangalore)
6 - 10 yrs
₹13L - ₹25L / yr
Machine Learning (ML)
Python
Data Science
Natural Language Processing (NLP)
Computer Vision
+3 more
Description
Duties and Responsibilities:
• Research and develop innovative use cases, solutions and quantitative models
• Quantitative models in video and image recognition and signal processing for cloudbloom’s cross-industry business (e.g., Retail, Energy, Industry, Mobility, Smart Life and Entertainment)
• Design, implement and demonstrate proofs-of-concept and working prototypes
• Provide R&D support to productize research prototypes
• Explore emerging tools, techniques, and technologies, and work with academia for cutting-edge solutions
• Collaborate with cross-functional teams and ecosystem partners for mutual business benefit
• Team management skills
Academic Qualification
• 7+ years of professional hands-on work experience in data science, statistical modelling, data engineering, and predictive analytics assignments
• Mandatory requirement: Bachelor’s degree with a STEM background (Science, Technology, Engineering and Mathematics) and a strong quantitative flavour
• Innovative and creative in data analysis, problem solving and presentation of solutions
• Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment
• Strong experience in handling multinational client engagements
• Good verbal, writing and presentation skills
Core Expertise
• Excellent understanding of basics in mathematics and statistics (such as differential equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics, eigenvectors, Markov models, Fourier analysis)
• Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge of database query languages like SQL
• Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM and decision forests (a small comparison sketch follows this list)
• Strong math skills (multivariable calculus and linear algebra): understanding the fundamentals of multivariable calculus and linear algebra is important, as they form the basis of a lot of predictive-performance and algorithm-optimization techniques
• Deep learning: CNNs, RNNs, neural networks, TensorFlow, PyTorch, computer vision
• Large-scale data extraction/mining, data cleansing, diagnostics and preparation for modeling
• Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, and multivariate techniques; predictive modeling including cluster analysis, discriminant analysis, CHAID, and logistic and multiple regression analysis
• Experience with data visualization tools like Tableau, Power BI and Qlik Sense that help to visually encode data
• Excellent communication skills: it is incredibly important to describe findings to both technical and non-technical audiences
• Capability for continuous learning and knowledge acquisition
• Mentor colleagues for growth and success
• Strong software engineering background
• Hands-on experience with data science tools
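A small sketch comparing the classical methods named above (k-NN, Naive Bayes, SVM, decision forests) with scikit-learn on a toy dataset; the dataset and cross-validation setup are illustrative choices only.

```python
# Compare a few classical classifiers under 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)  # toy stand-in dataset

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```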
Marktine

at Marktine

1 recruiter
Vishal Sharma
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹10L / yr
Data Warehouse (DWH)
Spark
Data engineering
Python
PySpark
+5 more

Basic Qualifications

- Need to have a working knowledge of AWS Redshift.

- Minimum 1 year of designing and implementing a fully operational production-grade large-scale data solution on Snowflake Data Warehouse.

- 3 years of hands-on experience building productized data ingestion and processing pipelines using Spark, Scala, and Python (a minimal PySpark sketch follows this list)

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions

- Expertise and excellent understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies

- Excellent presentation and communication skills, both written and verbal

- Ability to problem-solve and architect in an environment with unclear requirements
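A minimal PySpark sketch of a productized ingestion and processing step; the S3 paths and schema are assumptions, and in production the curated output would typically be loaded into Snowflake or Redshift via their respective connectors rather than left as Parquet.

```python
# Hedged sketch: ingest raw CSV orders, clean them, and stage a daily aggregate.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

orders = (
    spark.read.option("header", True).csv("s3://example-bucket/landing/orders/")
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .filter(F.col("amount").cast("double") > 0)   # drop refunds/bad rows
)

daily_revenue = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(F.sum(F.col("amount").cast("double")).alias("revenue"))
)

# Staged as Parquet; a warehouse load (COPY INTO / connector write) would follow
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/staged/daily_revenue/")
```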

BDI Plus Lab

at BDI Plus Lab

2 recruiters
Puja Kumari
Posted by Puja Kumari
Bengaluru (Bangalore)
1 - 3 yrs
₹3L - ₹6L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more
Experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, and experience using the following software/tools:
● Big data tools: Hadoop, Hive, Spark, Kafka, etc.
● Querying multiple SQL/NoSQL databases, including Oracle, MySQL, MongoDB, etc.
● Experience with Redis, RabbitMQ and Elasticsearch is desirable
● Strong experience with object-oriented/functional/scripting languages: Python (preferred), Core Java, JavaScript, Scala, shell scripting, etc.
● Must be able to debug complex code; experience with ML/AI algorithms is a plus
● Experience with a version control tool (Git or equivalent) is mandatory
● Experience with AWS cloud services: EC2, EMR, RDS, Redshift, S3
● Experience with stream-processing systems: Storm, Spark Streaming, etc. (a minimal Structured Streaming sketch follows this list)
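A minimal Spark Structured Streaming sketch for the Kafka/Spark stack listed above; the broker address, topic name, and checkpoint path are assumptions, and running it requires the spark-sql-kafka connector package on the classpath.

```python
# Hedged sketch: count Kafka events per one-minute window and print to console.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
         .option("subscribe", "clickstream")                   # assumed topic
         .load()
)

# Kafka delivers key/value as binary; decode and count events per minute
counts = (
    events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

query = (
    counts.writeStream.outputMode("complete")
          .format("console")
          .option("checkpointLocation", "/tmp/chk/clickstream")  # assumed path
          .start()
)
query.awaitTermination()
```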
Lincode Labs India Pvt Ltd

at Lincode Labs India Pvt Ltd

1 video
4 recruiters
Ritika Nigam
Posted by Ritika Nigam
Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹20L / yr
OpenCV
Deep Learning
Artificial Intelligence (AI)
TensorFlow
Python
+1 more
Please apply only if you enjoy engineering, wish to write a lot of code, wish to do a lot of hands-on Python experimentation, and already have in-depth knowledge of deep learning. This position is strictly for people who understand various neural network architectures and can customize them (a minimal sketch follows below), not for people whose experience is limited to downloading and running off-the-shelf AI/ML code.

This position is not for freshers. We are looking for candidates with at least 4 years of AI/ML/CV experience in the industry.
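A minimal sketch of the kind of hands-on network definition the posting asks for, using TensorFlow/Keras (TensorFlow is listed in the required skills); the input size, layer sizes, and class count are arbitrary assumptions.

```python
# Illustrative only: a small custom CNN defined layer by layer in Keras.
# Input shape (224x224 RGB) and 10 output classes are arbitrary placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),  # keeps the classification head small
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # model.fit(...) would follow with real image data
```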
MediaMelon Inc

at MediaMelon Inc

1 video
2 recruiters
Katreddi Kiran Kumar
Posted by Katreddi Kiran Kumar
Bengaluru (Bangalore)
1 - 7 yrs
₹0L / yr
Scala
Spark Streaming
Aero spike
Cassandra
Apache Kafka
+2 more
Develop analytic tools, working on big data and distributed systems.
- Provide technical leadership on developing our core analytic platform
- Lead development efforts on product features using Scala/Java
- Demonstrable excellence in innovation, problem solving, analytical skills, data structures and design patterns
- Expert in building applications using Spark and Spark Streaming
- Exposure to NoSQL (HBase/Cassandra), Hive, Pig Latin, Mahout
- Extensive experience with Hadoop and machine learning algorithms