KDB Jobs in Bangalore (Bengaluru)


Apply to 11+ KDB Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest KDB Job opportunities across top companies like Google, Amazon & Adobe.

Wissen Technology

Posted by Lokesh Manikappa
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data modeling
Spark
+5 more

Job Description

The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.

Good to have experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development, and enhancements.

You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing, and maintaining data processing jobs using Python, DB2, Greenplum, Autosys, and other technologies

 

Skills/Expertise Required:

Work experience developing large-volume databases (DB2/Greenplum/Oracle/Sybase).

Good experience writing stored procedures, integrating database processing, and tuning and optimizing database queries.

Strong knowledge of table partitions, high-performance loading, and data processing.
Good to have hands-on experience working with Perl or Python.
Hands-on development on the Spark / KDB / Greenplum platforms will be a strong plus.
Designing, developing, maintaining, and supporting Data Extract, Transform and Load (ETL) software using Informatica, shell scripts, DB2 UDB, and Autosys.
Coming up with system architecture/redesign proposals for greater efficiency and ease of maintenance, and developing software to turn those proposals into implementations.

Need to work with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills
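As a rough picture of the ETL work this posting describes, here is a toy extract-and-aggregate pass in plain Python. It is illustrative only: in this role the equivalent logic would live in Informatica mappings or DB2/Greenplum stored procedures, and the record format is invented.

```python
# Toy extract-transform-load pass in plain Python; a stand-in for the
# Informatica / DB2 stored-procedure work described above.

def extract(rows):
    """Parse raw 'id,amount' strings into (id, float) records, skipping bad rows."""
    out = []
    for line in rows:
        parts = line.split(",")
        if len(parts) == 2 and parts[1].strip():
            try:
                out.append((parts[0].strip(), float(parts[1])))
            except ValueError:
                continue  # malformed amount: drop the record
    return out

def transform(records):
    """Aggregate amounts per id, mirroring a grouped load into a warehouse table."""
    totals = {}
    for key, amount in records:
        totals[key] = totals.get(key, 0.0) + amount
    return totals

raw = ["a1,10.5", "a2,3.0", "a1,4.5", "bad-row", "a3,"]
print(transform(extract(raw)))  # {'a1': 15.0, 'a2': 3.0}
```

The separation into extract and transform steps mirrors the stage boundaries an ETL tool imposes.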

Marktine

Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹20L / yr
Data Science
R Programming
Python
SQL
Natural Language Processing (NLP)

- Modeling complex problems, discovering insights, and identifying opportunities through the use of statistical, algorithmic, mining, and visualization techniques

- Experience working with the business to understand requirements, frame the problem statement, and build scalable and dependable analytical solutions

- Must have hands-on and strong experience in Python

- Broad knowledge of fundamentals and state-of-the-art in NLP and machine learning

- Strong analytical & algorithm development skills

- Deep knowledge of techniques such as linear regression, gradient descent, logistic regression, forecasting, cluster analysis, decision trees, linear optimization, text mining, etc.

- Ability to collaborate across teams and strong interpersonal skills

 

Skills

- Sound theoretical knowledge of ML algorithms and their applications

- Hands-on experience in statistical modeling tools such as R, Python, and SQL

- Hands-on experience in Machine learning/data science

- Strong knowledge of statistics

- Experience in advanced analytics / Statistical techniques – Regression, Decision trees, Ensemble machine learning algorithms, etc

- Experience in Natural Language Processing & Deep Learning techniques 

- pandas, NLTK, scikit-learn, spaCy, TensorFlow
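One of the techniques listed above, gradient descent for simple linear regression, can be shown end to end in plain Python with no external libraries. This is a teaching sketch, not production code; in practice the posting's stack (scikit-learn, R) would do this in one call.

```python
# Gradient descent for simple linear regression (y = w*x + b), written in
# plain Python to show the mechanics behind the library calls.

def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Data generated from y = 2x + 1; the fit should recover those coefficients.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```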

Epik Solutions
Posted by Sakshi Sarraf
Bengaluru (Bangalore), Noida
4 - 13 yrs
₹7L - ₹18L / yr
Python
SQL
Databricks
Scala
Spark
+2 more

Job Description:


As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:


Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.


Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.


Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.


Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.


Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.


Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.


Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.


Skills and Qualifications:


Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.

Proficiency in designing and developing data pipelines and ETL processes.

Solid understanding of data modeling concepts and database design principles.

Familiarity with data integration and orchestration using Azure Data Factory.

Knowledge of data quality management and data governance practices.

Experience with performance tuning and optimization of data pipelines.

Strong problem-solving and troubleshooting skills related to data engineering.

Excellent collaboration and communication skills to work effectively in cross-functional teams.

Understanding of cloud computing principles and experience with Azure services.
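One way to picture the "data quality checks and validation rules" responsibility above is a tiny plain-Python sketch. On the actual stack this logic would live in PySpark/Databricks filters; the column names and rules here are invented for illustration.

```python
# A data-quality validation pass of the kind described above, in plain
# Python; records failing any rule are quarantined rather than loaded.

REQUIRED = ("customer_id", "amount")

def validate(record):
    """Return a list of rule violations for one record (empty list = clean)."""
    problems = []
    for col in REQUIRED:
        if record.get(col) in (None, ""):
            problems.append(f"missing {col}")
    if isinstance(record.get("amount"), (int, float)) and record["amount"] < 0:
        problems.append("negative amount")
    return problems

def split_clean_dirty(records):
    clean = [r for r in records if not validate(r)]
    dirty = [r for r in records if validate(r)]
    return clean, dirty

batch = [
    {"customer_id": "c1", "amount": 42.0},
    {"customer_id": "", "amount": 10.0},
    {"customer_id": "c2", "amount": -5.0},
]
clean, dirty = split_clean_dirty(batch)
print(len(clean), len(dirty))  # 1 2
```

Keeping rules as small named checks makes the same validation reusable in both batch pipelines and monitoring.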


Accolite Digital
Posted by Nitesh Parab
Bengaluru (Bangalore), Hyderabad, Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹5L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SSIS
SQL Server Integration Services (SSIS)
+10 more

Job Title: Data Engineer

Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.

Responsibilities:

  • Design, build, and maintain data pipelines to collect, store, and process data from various sources.
  • Create and manage data warehousing and data lake solutions.
  • Develop and maintain data processing and data integration tools.
  • Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
  • Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
  • Ensure data quality and integrity across all data sources.
  • Develop and implement best practices for data governance, security, and privacy.
  • Monitor data pipeline performance and errors, and troubleshoot issues as needed.
  • Stay up-to-date with emerging data technologies and best practices.

Requirements:

Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience with ETL tools such as Matillion, SSIS, and Informatica.

Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle.

Experience writing complex SQL queries.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with data modeling, data warehousing, and data integration.

Strong problem-solving skills and ability to work independently.

Excellent communication and collaboration skills.

Familiarity with big data technologies such as Hadoop, Spark, or Kafka.

Familiarity with data warehouse/data lake technologies such as Snowflake or Databricks.

Familiarity with cloud computing platforms such as AWS, Azure, or GCP.

Familiarity with reporting tools.
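To make the "complex SQL queries" requirement concrete, here is a small self-contained example using Python's built-in sqlite3 module: a CTE computes per-customer totals, then a join filters to customers above a threshold. The two-table schema and data are invented for illustration.

```python
# A CTE-plus-join query of the kind the requirements above describe,
# run against an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (1, 100.0), (1, 250.0), (2, 80.0);
""")
rows = conn.execute("""
    WITH totals AS (
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
    )
    SELECT c.name, t.total
    FROM totals t JOIN customers c ON c.id = t.customer_id
    WHERE t.total > 100
    ORDER BY t.total DESC
""").fetchall()
print(rows)  # [('Asha', 350.0)]
```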

Teamwork/growth contribution

  • Helping the team conduct interviews and identify the right candidates
  • Adhering to timelines
  • Timely status updates and upfront communication of any risks
  • Teach, train, and share knowledge with peers
  • Good communication skills
  • Proven ability to take initiative and be innovative
  • An analytical mind with a problem-solving aptitude

Good to have :

Master's degree in Computer Science, Information Systems, or a related field.

Experience with NoSQL databases such as MongoDB or Cassandra.

Familiarity with data visualization and business intelligence tools such as Tableau or Power BI.

Knowledge of machine learning and statistical modeling techniques.

If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.

Impetus Technologies

Posted by Gangadhar T.M
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Data Science
Pricing Strategy
Python
Predictive analytics
Pricing models
+1 more
Looking for a Data Scientist with strong expertise in classical machine learning algorithms, SQL, and Python.
Experience with pricing models will be a definite plus.
Indium Software

Posted by Swaathipriya P
Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹1L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more
2+ years of analytics experience, predominantly in SQL, SAS, statistics, R, Python, and visualization
Experienced in writing complex SQL SELECT queries (window functions & CTEs), with advanced SQL experience
Should be an individual contributor for the initial few months; based on project movement, a team will be aligned
Strong in querying logic and data interpretation
Solid communication and articulation skills
Able to handle stakeholders independently, with minimal intervention from the reporting manager
Develop strategies to solve problems in logical yet creative ways
Create custom reports and presentations accompanied by strong data visualization and storytelling
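One concrete instance of the "window functions & CTEs" skill mentioned above, using Python's built-in sqlite3 module (window functions require SQLite 3.25+; the table and data are invented): rank months by revenue within each region and keep the top one.

```python
# RANK() OVER (PARTITION BY ...) inside a CTE, against in-memory SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('south', '2024-01', 100), ('south', '2024-02', 150),
        ('north', '2024-01', 90),  ('north', '2024-02', 60);
""")
rows = conn.execute("""
    WITH ranked AS (
        SELECT region, month, revenue,
               RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS rnk
        FROM sales
    )
    SELECT region, month FROM ranked WHERE rnk = 1 ORDER BY region
""").fetchall()
print(rows)  # [('north', '2024-01'), ('south', '2024-02')]
```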
Accolite Software

Posted by Nikita Sadarangani
Remote, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹24L / yr
Data Science
R Programming
Python
Deep Learning
Neural networks
+3 more
  • Adept at machine learning techniques and algorithms
  • Feature selection, dimensionality reduction, and building and optimizing classifiers using machine learning techniques
  • Data mining using state-of-the-art methods
  • Doing ad-hoc analysis and presenting results
  • Proficiency in using query languages such as N1QL and SQL
  • Experience with data visualization tools such as D3.js, ggplot, Plotly, PyPlot, etc.
  • Creating automated anomaly detection systems and constant tracking of their performance
  • Strong Python is a must
  • Strong data analysis and mining skills are a must
  • Deep learning, neural networks, CNNs, image processing (must)
  • Building analytic systems: data collection, cleansing, and integration
  • Experience with NoSQL databases such as Couchbase, MongoDB, Cassandra, and HBase
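The "automated anomaly detection" item above can be sketched minimally as a z-score rule in plain Python. This is illustrative only; real systems layer seasonality handling, drift tracking, and alerting on top, and the data here is invented.

```python
# Flag points more than `threshold` standard deviations from the mean.
from statistics import mean, stdev

def anomalies(series, threshold=2.0):
    mu, sigma = mean(series), stdev(series)
    return [x for x in series if sigma and abs(x - mu) / sigma > threshold]

latencies = [100, 102, 98, 101, 99, 103, 100, 500]  # one obvious outlier
print(anomalies(latencies))  # [500]
```

Note that a single large outlier inflates the standard deviation (masking), which is why robust variants use the median and MAD instead.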

Infogain
Agency job
via Technogen India PvtLtd by RAHUL BATTA
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Mumbai, Pune
7 - 8 yrs
₹15L - ₹16L / yr
Data steward
MDM
Tamr
Reltio
Data engineering
+7 more
  1. Data Steward:

The Data Steward will collaborate and work closely with the group's software engineering and business divisions. The Data Steward has overall accountability for the group's/division's data and reporting posture, responsibly managing data assets, data lineage, and data access, and supporting sound data analysis. This role focuses on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes. The Data Steward makes well-thought-out decisions on complex or ambiguous data issues, establishes the data stewardship and information management strategy and direction for the group, and effectively communicates with individuals at various levels of the technical and business communities. This individual will become part of the corporate Data Quality and Data Management/entity resolution team supporting various systems across the board.

 

Primary Responsibilities:

 

  • Responsible for data quality and data accuracy across all group/division delivery initiatives.
  • Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
  • Responsible for reviewing and governing data queries and DML.
  • Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
  • Accountable for the performance, quality, and alignment to requirements for all data query design and development.
  • Responsible for defining standards and best practices for data analysis, modeling, and queries.
  • Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
  • Responsible for the development and maintenance of an enterprise data dictionary that is aligned to data assets and the business glossary for the group.
  • Responsible for the definition and maintenance of the group's data landscape, including overlays with the technology landscape, end-to-end data flows/transformations, and data lineage.
  • Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
  • Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
  • Owns group's data assets including reports, data warehouse, etc.
  • Understand customer business use cases and be able to translate them to technical specifications and vision on how to implement a solution.
  • Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
  • Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
  • Responsible for solving data-related issues and communicating resolutions with other solution domains.
  • Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
  • Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
  • Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
  • Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
  • Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.

 

Additional Responsibilities:

 

  • Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
  • Knowledge and understanding of industry-standard/best practices requirements gathering methodologies.
  • Knowledge and understanding of Information Technology systems and software development.
  • Experience with data modeling and test data management tools.
  • Experience in data integration projects.
  • Good problem-solving & decision-making skills.
  • Good communication skills within the team, site, and with the customer

 

Knowledge, Skills and Abilities

 

  • Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
  • Solid understanding of key DBMS platforms like SQL Server, Azure SQL
  • Results-oriented, diligent, and works with a sense of urgency. Assertive, responsible for his/her own work (self-directed), with a strong affinity for defining work in deliverables and a willingness to commit to deadlines.
  • Experience in MDM tools like MS DQ, SAS DM Studio, Tamr, Profisee, Reltio etc.
  • Experience in Report and Dashboard development
  • Statistical and Machine Learning models
  • Python (scikit-learn, NumPy, pandas, gensim)
  • Nice to have:
  • 1 year of ETL experience
  • Natural Language Processing
  • Neural networks and deep learning
  • Experience with the Keras, TensorFlow, spaCy, NLTK, and LightGBM Python libraries

 

Interaction: Frequently interacts with subordinate supervisors.

Education: Bachelor's degree, preferably in Computer Science, B.E., or another quantitative field related to the area of assignment. Professional certification related to the area of assignment may be required.

Experience: 7 years of pharmaceutical/biotech/life sciences experience; 5 years of clinical trials experience and knowledge; excellent documentation, communication, and presentation skills, including PowerPoint.

 

App-based lending platform. ( AF1)


Agency job
via Multi Recruit by Ayub Pasha
Bengaluru (Bangalore)
2 - 5 yrs
₹25L - ₹28L / yr
Data Science
Machine Learning (ML)
Data Scientist
Python
Logistic regression
+2 more
  • Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
  • Implement data pipelines, new features, and algorithms that are critical to our production models
  • Create scalable strategies to deploy and execute your models
  • Write well designed, testable, efficient code
  • Identify valuable data sources and automate collection processes.
  • Preprocess structured and unstructured data.
  • Analyze large amounts of information to discover trends and patterns.

 

Requirements:

  • 2+ years of experience in applied data science or engineering with a focus on machine learning
  • Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g. pandas, scikit-learn, XGBoost, LightGBM, logistic regression, random forest classifiers, gradient boosting regressors, etc.)
  • Strong quantitative and programming skills with a product-driven sensibility
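As a toy illustration of the logistic-regression item in the requirements above, here is a bare-bones one-feature version in plain Python. It is a sketch only; real work would use the scikit-learn stack the posting names, and the "risk score" data is invented.

```python
# One-feature logistic regression trained by gradient descent on log-loss.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the log-loss with respect to each parameter.
        dw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        db = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Toy 'risk score' data: label 1 when the feature exceeds roughly 2.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
preds = [1 if sigmoid(w * x + b) > 0.5 else 0 for x in xs]
print(preds)  # [0, 0, 0, 1, 1, 1]
```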

 

 

Niki.ai

Posted by Alyeska Araujo
Bengaluru (Bangalore)
5 - 12 yrs
₹20L - ₹40L / yr
Natural Language Processing (NLP)
Machine Learning (ML)
Python
We at niki.ai are currently looking for an NLP Engineer to join our team. We would love to tell you a little more about the position and learn a few things about you. Below is the job description for the Natural Language Processing Engineer role for your reference.

Job Description
Niki is an artificially intelligent ordering application (niki.ai/app). Our founding team is from IIT Kharagpur, and we are looking for a Natural Language Processing Engineer to join our engineering team.

The ideal candidate will have industry experience solving language-related problems using statistical methods on vast quantities of data available from Indian mobile consumers and elsewhere.

Major responsibilities would be:

1. Create language models from text data. These language models draw heavily on recent statistical, deep learning, and rule-based research around building taggers, parsers, knowledge-graph-based dictionaries, etc.

2. Develop highly scalable classifiers and tools leveraging machine learning, data regression, and rule-based models

3. Work closely with product teams to implement algorithms that power user- and developer-facing products

We work mostly in Java and Python, and object-oriented concepts are a must to fit in with the team. Basic eligibility criteria are:

1. Graduate/Post-Graduate/M.S./Ph.D. in Computer Science, Mathematics, Machine Learning, NLP, or allied fields.
2. A minimum of 5 years of industry experience.
3. Strong background in Natural Language Processing and Machine Learning.
4. Some experience leading a team, big or small.
5. Experience with Hadoop/HBase/Pig or MapReduce/Sawzall/Bigtable is a plus.
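The "create language models from text data" responsibility above can be illustrated with a toy unigram model in plain Python. This is purely a sketch: the corpus is invented, and production models add smoothing, n-grams, and neural scoring on top of these maximum-likelihood counts.

```python
# A unigram language model: each token's probability is its relative
# frequency in the training corpus.
from collections import Counter

def train_unigram(corpus):
    tokens = [tok for sentence in corpus for tok in sentence.lower().split()]
    counts = Counter(tokens)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

corpus = ["book a cab", "book a recharge", "track my cab"]
model = train_unigram(corpus)
print(model["book"], model["cab"])  # both 2/9 ≈ 0.222
```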

Competitive Compensation.

What We're Building
We are building an automated messaging platform to simplify the ordering experience for consumers. We have launched the Android app: niki.ai/app. In its current avatar, Niki can process mobile phone recharges and book cabs for consumers. It assists in finding the right recharge plans across top-up, 2G, and 3G, and completes the transaction. In cab booking, it supports end-to-end booking along with tracking and cancellation within the app. You may also compare to get the nearest or the cheapest cab among those available.

Being an instant messaging app, it works seamlessly on 2G/3G/Wi-Fi and is lightweight at around 3.6 MB. You may check it out at niki.ai.
 
A Fintech startup in Dubai


Agency job
via Jobbie by Sourav Nandi
Remote, Dubai, Bengaluru (Bangalore), Mumbai
2 - 18 yrs
₹14L - ₹38L / yr
skill iconData Science
skill iconPython
skill iconR Programming
RESPONSIBILITIES AND QUALIFICATIONS

The mission of the Marcus Surveillance Analytics team is to deliver a platform which detects security incidents that have a tangible business impact and an actionable response. You will work alongside industry-leading technologists who have recently joined Goldman from across consumer security, technology, fintech, finance, and quant firms. The role has a broad scope which will involve interacting with senior leaders of Goldman and the consumer business on a regular basis. The position is hands-on and requires a driven and "take ownership" oriented individual who is intently focused on execution. You will work directly with developers, business leaders, vendors, and partners in order to deliver security assets to the consumer business.

  • Develop a team, vision, and platform which identifies/prioritizes actionable security & fraud risks that have a tangible business impact across Goldman's consumer and commercial banking businesses.
  • Develop response and recovery technology and programs to ensure resilience from fraud and abuse events.
  • Manage, develop, and operationalize analytics which discover security & fraud events and identify risks for all of Goldman's consumer businesses.
  • Partner with fraud/abuse operations and leadership to ensure consumer fraud rates are within industry norms, and own outcomes related to fraud improvements.

Skills And Experience We Are Looking For

  • BA/BS degree in Computer Science, Cybersecurity, or another relevant computer/data/engineering degree
  • 2+ years of experience as a security professional or data analyst/scientist/engineer
  • Python, PySpark, R, Bash, SQL, Splunk (search, ES, UBA)
  • Experience with cloud infrastructure/big data tool sets
  • Visualization tools such as Tableau or D3
  • Research and development to create innovative predictive detections for security and fraud
  • Build a 24/7 real-time monitoring system with a long-term vision for scaling to new lines of consumer business
  • Strong focus on customer experience and product usability
  • Ability to work closely with the business, fraud, and security incident response teams on creating actionable detections