Foundry Jobs in Bangalore (Bengaluru)


Apply to 11+ Foundry Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Foundry Job opportunities across top companies like Google, Amazon & Adobe.

Infogain
Agency job
via Technogen India Pvt Ltd by Rahul Batta
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
Data engineering
Python
SQL
Spark
PySpark
+10 more
1. Sr. Data Engineer

Core skills: Data Engineering, Big Data, PySpark, Spark SQL, and Python

Candidates with a prior Palantir Cloud Foundry or clinical trial data model background are preferred.

Major accountabilities:

  • Responsible for data engineering: Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, reusable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion.
  • Has a good understanding of the Foundry platform landscape and its capabilities.
  • Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
  • Defines company data assets (data models) and the PySpark/Spark SQL jobs that populate them (a minimal pipeline sketch follows this list).
  • Designs data integrations and the data quality framework.
  • Designs and implements integrations with internal and external systems and the F1 AWS platform using Foundry Data Connector or the Magritte agent.
  • Collaborates with data scientists, data analysts, and technology teams to document and leverage their understanding of Foundry's integration with different data sources; actively participates in agile work practices.
  • Coordinates with quality engineers to ensure that all quality controls, naming conventions, and best practices are followed.
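A minimal sketch of the kind of PySpark + Spark SQL job described above: cleaning a raw dataset and populating an aggregated data model. The dataset, columns, and table names are hypothetical, and the sketch uses a plain SparkSession; inside Foundry this logic would typically live in a Foundry pipeline transform rather than a standalone script.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Stand-in for a raw ingested dataset.
raw = spark.createDataFrame(
    [("s001", "2024-01-05", 120.0),
     ("s001", "2024-01-06", None),
     ("s002", "2024-01-05", 80.5)],
    ["subject_id", "visit_date", "measurement"],
)

# Basic cleaning: drop rows with missing measurements, normalise types.
clean = (
    raw.dropna(subset=["measurement"])
       .withColumn("visit_date", F.to_date("visit_date"))
)

# Register a temp view so the aggregation can be written in Spark SQL.
clean.createOrReplaceTempView("clean_visits")
summary = spark.sql("""
    SELECT subject_id,
           COUNT(*)         AS visit_count,
           AVG(measurement) AS avg_measurement
    FROM clean_visits
    GROUP BY subject_id
""")
summary.show()
```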

Desired Candidate Profile:

  • Strong data engineering background
  • Experience with Clinical Data Model is preferred
  • Experience in
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for our back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, including 2+ years with the Palantir Foundry platform and 4+ years on big data platforms
  • 5+ years of Python and PySpark development experience
  • Strong troubleshooting and problem-solving skills
  • B.Tech or master's degree in Computer Science or a related technical field
  • Experience designing, building, and maintaining big data pipeline systems
  • Hands-on experience with the Palantir Foundry platform and Foundry custom app development
  • Able to design and implement data integration between Palantir Foundry and external apps based on the Foundry data connector framework
  • Hands-on with programming languages, primarily Python, R, Java, and Unix shell scripting
  • Hands-on experience with the AWS/Azure cloud platform and stack
  • Strong grasp of API-based architecture and concepts; able to build quick PoCs using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iterating with users

 Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision

Institutional-grade tools to understand digital assets

Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹35L / yr
SQL
Python
Metrics management
Data Analytics

Responsibilities

  • Work with large and complex blockchain data sets and derive investment relevant metrics in close partnership with financial analysts and blockchain engineers.
  • Apply knowledge of statistics, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to the development of fundamental metrics needed to evaluate various crypto assets.
  • Build a strong understanding of existing metrics used to value various decentralized applications and protocols.
  • Build customer-facing metrics and dashboards (a minimal metric sketch follows this list).
  • Work closely with analysts, engineers, and product managers, and provide feedback as we develop our data analytics and research platform.
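A toy version of the metric work described above: deriving a daily-active-addresses series from raw transaction records with pandas. The columns and data are hypothetical stand-ins for real on-chain data.

```python
import pandas as pd

# Hypothetical raw transaction records.
txs = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 09:00", "2024-01-01 17:30", "2024-01-02 11:00"]),
    "from_addr": ["0xabc", "0xdef", "0xabc"],
    "to_addr":   ["0xdef", "0x123", "0x456"],
})

# An address counts as "active" on a day if it sent or received a transaction.
active = pd.concat([
    txs[["timestamp", "from_addr"]].rename(columns={"from_addr": "addr"}),
    txs[["timestamp", "to_addr"]].rename(columns={"to_addr": "addr"}),
])
daily_active = (
    active.assign(date=active["timestamp"].dt.date)
          .groupby("date")["addr"]
          .nunique()
)
print(daily_active)  # date -> count of unique active addresses
```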

Qualifications

  • Bachelor's degree in Mathematics, Statistics, or another analytical field (e.g. Computer Science, Engineering, Operations Research, Management Science), or equivalent practical experience
  • 3+ years of experience with data analysis and metrics development
  • 3+ years of experience analyzing and interpreting data, drawing conclusions, defining recommended actions, and reporting results to stakeholders
  • 2+ years of experience writing SQL queries
  • 2+ years of experience scripting in Python
  • Demonstrated curiosity in and excitement for Web3/blockchain technologies
Acuity Knowledge Partners
Posted by Gangadhar S
Bengaluru (Bangalore)
4 - 9 yrs
₹16L - ₹40L / yr
Python
Amazon Web Services (AWS)
CI/CD
MongoDB
MLOps
+1 more

Job Responsibilities:

1. Develop and debug applications using Python.

2. Improve code quality and code coverage for existing and new programs.

3. Deploy and integrate machine learning models (a minimal serving sketch follows the skills list below).

4. Test and validate the deployments.

5. Perform MLOps functions.


Technical Skills

1. Graduate in Engineering or Technology with strong academic credentials

2. 4 to 8 years of experience as a Python developer

3. Excellent understanding of SDLC processes

4. Strong knowledge of unit testing and code-quality improvement

5. Cloud-based deployment and integration of applications/microservices

6. Experience with NoSQL databases such as MongoDB and Cassandra

7. Strong applied statistics skills

8. Knowledge of creating CI/CD pipelines and touchless deployment

9. Knowledge of APIs and data engineering techniques

10. Experience with AWS

11. Knowledge of machine learning and large language models
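A minimal sketch of the "deploy and integrate ML models" responsibility above: wrapping a trained model in a small HTTP service with Flask. The model, route, and port are hypothetical illustration choices; a production deployment would add input validation, unit tests, and the CI/CD pipeline the posting mentions.

```python
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Placeholder model trained at startup; a real service would load a
# versioned artifact from object storage instead.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [5.1, 3.5, 1.4, 0.2]
    pred = model.predict([features])[0]
    return jsonify({"prediction": int(pred)})

if __name__ == "__main__":
    app.run(port=8000)  # dev server; run behind gunicorn/uwsgi in production
```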


Nice to Have

1. Exposure to financial research domain

2. Experience with JIRA, Confluence

3. Understanding of scrum and Agile methodologies

4. Experience with data visualization tools such as Grafana, ggplot, etc.

It's a deep-tech firm

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹20L / yr
Data Science
Python
Natural Language Processing (NLP)
Deep Learning
TensorFlow
+2 more
Your responsibilities:
  • Build, improve, and extend NLP capabilities
  • Research and evaluate different approaches to NLP problems
  • Write code that is well designed and produces deliverable results
  • Write code that scales and can be deployed to production
You must have:
  • A solid grounding in statistical methods
  • Experience with named entity recognition, POS tagging, lemmatization, vector representations of textual data, and neural networks (RNNs, LSTMs); a short sketch follows this list
  • A solid foundation in Python, data structures, algorithms, and general software development skills
  • Ability to apply machine learning to problems that deal with language
  • Engineering ability to build robustly scalable pipelines
  • Ability to work in a multi-disciplinary team with a strong product focus
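A short sketch covering the NLP fundamentals listed above (NER, POS tagging, lemmatization) using spaCy. It assumes the en_core_web_sm model has been installed with `python -m spacy download en_core_web_sm`.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in Bengaluru next year.")

# POS tag and lemma for each token.
for token in doc:
    print(token.text, token.pos_, token.lemma_)

# Named entities recognised in the sentence.
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Apple" ORG, "Bengaluru" GPE
```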
MNC Company - Product Based

Agency job
via Bharat Headhunters by Ranjini C. N
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 9 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Python
Google Cloud Platform (GCP)
+2 more

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player, and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer-review work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject; a master's degree in any data subject is a strong advantage.
  • 4-6 years of experience in data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and data processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc).
  • Good working experience with at least one ETL tool (Airflow preferred; a minimal DAG sketch follows this list).
  • Strong analytical and problem-solving skills.
  • Good-to-have skills: Apache PySpark, CircleCI, Terraform.
  • Motivated, self-directed, able to work with ambiguity, and interested in emerging technologies and agile, collaborative processes.
  • Understanding of and experience with agile/scrum delivery methodology.
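A minimal Airflow DAG sketch for the Python + SQL ETL work described above. The task bodies, DAG id, and schedule are placeholders; on GCP this would typically run under Cloud Composer, often with BigQuery operators instead of plain Python tasks.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")

def transform():
    print("apply business rules and data mappings")

def load():
    print("write validated rows to the warehouse")

# Placeholder DAG wiring the three ETL steps in sequence.
with DAG(
    dag_id="corporate_dwh_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```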

 

Hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
Python
Amazon Redshift
Amazon Web Services (AWS)
PySpark
Data engineering
+3 more

About the role:

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science, and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.


Here’s what will be expected out of you:

➢ Ability to work with a fast-paced startup mindset; should be able to manage all aspects of data extraction, transfer, and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable executing ETL (Extract, Transform, Load) processes, including data ingestion, data cleaning, and curation into a data warehouse, database, or data platform.

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with DevOps and senior architects to develop scalable system and model architectures that enable real-time and batch services.


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well versed in data warehousing, data modelling, and/or data analysis concepts.

➢ Experience (2+ years) using and building pipelines and performing ETL on Redshift with industry-standard best practices.

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing, and query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow.

➢ Strong Python and SQL coding skills.

➢ Strong experience with distributed systems like Spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with data extraction techniques such as CDC and time/batch-based extraction, plus the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction; a bulk-load sketch follows this list.
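A sketch of one common Redshift ingestion step implied above: bulk-loading extracted files from S3 with COPY via psycopg2. The cluster endpoint, credentials, IAM role, table, and S3 path are all hypothetical placeholders.

```python
import os
import psycopg2

# Hypothetical connection details; in practice pull them from a secrets manager.
conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password=os.environ["REDSHIFT_PASSWORD"],
)

# COPY loads the S3 files into the staging table in parallel.
copy_sql = """
    COPY staging.orders
    FROM 's3://my-bucket/extracts/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # the with-block commits on success
conn.close()
```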


Note:

Experience at product-based or e-commerce companies is an added advantage.

Leading MNC Service firm

Agency job
Mumbai, Bengaluru (Bangalore), Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹1L - ₹15L / yr
Credit Risk Modelling
Predictive modelling
Risk analysis
Forecasting
Credit risk
+8 more

Hi,

We are looking for a Credit Risk Modeler for one of our leading MNC service firms, with positions across India.

Experience: 2-9 years

Qualification: full-time graduation

JD:

  • Candidates must have experience in credit risk management (development or validation)
  • Should have good exposure to at least 3 of the following:

1. Predictive modelling

2. Risk modeling (PD, EAD, LGD)

3. Regression techniques, e.g. logistic and linear (a toy PD-model sketch follows this list)

4. Time series modeling

5. Economic forecasting models

6. Stress testing and back-testing

7. Decision/scorecard modelling (behavior, acquisition, and collection scorecards)

8. Financial regulations (any one): Basel, IFRS 9, CCAR, CECL, PPNR, ICAAP, etc.

9. Tools (any one): SAS, Python, R
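A toy probability-of-default (PD) sketch using logistic regression, the most common of the regression techniques above. The features and labels are synthetic; a real PD model would use bureau/behavioural features and go through validation, stress testing, and back-testing.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Two synthetic borrower features: utilisation ratio and credit score.
X = np.column_stack([
    rng.normal(0.35, 0.15, n),
    rng.normal(620, 50, n),
])

# Synthetic ground truth: higher utilisation and lower score raise default risk.
logit = -4 + 3.0 * X[:, 0] - 0.01 * (X[:, 1] - 620)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pd_model = LogisticRegression().fit(X_tr, y_tr)

# Discrimination check on the holdout, as in model validation work.
auc = roc_auc_score(y_te, pd_model.predict_proba(X_te)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```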

 

Kloud9 Technologies
Posted by Manjula Komala
Bengaluru (Bangalore)
3 - 6 yrs
₹18L - ₹27L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

About Kloud9:


Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.


Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by the heavy cost of physical data infrastructure.


At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.


Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.


We are a cloud vendor that is both platform- and technology-independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.



What we are looking for:


●       3+ years’ experience developing Big Data & Analytic solutions

●       Experience building data lake solutions leveraging Google Data Products (e.g. Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, etc.), Hive, Spark

●       Experience with relational SQL and NoSQL databases

●       Experience with Spark (Scala/Python/Java) and Kafka (a streaming sketch follows this list)

●       Work experience with Databricks (Data Engineering and Delta Lake components)

●       Experience with source control tools such as GitHub and related dev process

●       Experience with workflow scheduling tools such as Airflow

●       In-depth knowledge of a scalable cloud vendor (GCP preferred)

●       Has a passion for data solutions

●       Strong understanding of data structures and algorithms

●       Strong understanding of solution and technical design

●       Has a strong problem solving and analytical mindset

●       Experience working with Agile Teams.

●       Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders

●       Able to quickly pick up new programming languages, technologies, and frameworks

●       Bachelor's degree in Computer Science
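A sketch of the Spark + Kafka item above: a Structured Streaming job that reads a Kafka topic and maintains a running count per key. The broker, topic, and sink are hypothetical, and running it requires the spark-sql-kafka connector package on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Read a hypothetical retail-events topic as a streaming DataFrame.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "retail-events")
         .load()
)

# Kafka delivers key/value as binary; decode the key and count events per key.
counts = (
    events.select(F.col("key").cast("string").alias("key"))
          .groupBy("key")
          .count()
)

# Console sink keeps the sketch self-contained; a real job would write to a table.
query = (
    counts.writeStream.outputMode("complete")
          .format("console")
          .start()
)
query.awaitTermination()
```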


Why Explore a Career at Kloud9:


With job opportunities in prime locations across the US, London, Poland, and Bengaluru, we help you build a career path in the cutting-edge technologies of AI, machine learning, and data science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers!

Rudhra Info Solutions
Posted by Monica Devi
Bengaluru (Bangalore), Chennai
5 - 6 yrs
₹7L - ₹15L / yr
Data engineering
Python
Django
SQL
  • Analyze and organize raw data 
  • Build data systems and pipelines
  • Evaluate business needs and objectives
  • Interpret trends and patterns
  • Conduct complex data analysis and report on results 
  • Build algorithms and prototypes
  • Combine raw information from different sources (a small pandas sketch follows this list)
  • Explore ways to enhance data quality and reliability
  • Identify opportunities for data acquisition
  • Should be a senior developer with experience in Python and Django microservices, with a financial services/investment banking background
  • Develop analytical tools and programs
  • Collaborate with data scientists and architects on several projects
  • Should have 5+ years of experience as a data engineer or in a similar role
  • Technical expertise with data models, data mining, and segmentation techniques
  • Should have experience with programming languages such as Python
  • Hands-on experience with SQL database design
  • Great numerical and analytical skills
  • Degree in Computer Science, IT, or similar field; a Master’s is a plus
  • Data engineering certification (e.g. IBM Certified Data Engineer) is a plus
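A small pandas sketch of the "combine raw information from different sources" and data-quality items above; the sources, columns, and values are hypothetical stand-ins.

```python
import pandas as pd

# Hypothetical raw sources.
customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "segment": ["retail", "corporate", "retail"]})
trades = pd.DataFrame({"customer_id": [1, 1, 2, 9],
                       "notional": [100.0, 250.0, None, 75.0]})

# Combine the sources; a left join keeps every trade so quality can be checked.
merged = trades.merge(customers, on="customer_id", how="left")

# Simple data-quality report: orphaned keys and missing values.
quality = {
    "orphan_trades": int(merged["segment"].isna().sum()),     # trade for unknown customer 9
    "missing_notional": int(merged["notional"].isna().sum()),
}
print(quality)

# Clean view for downstream analysis.
clean = merged.dropna(subset=["segment", "notional"])
print(clean.groupby("segment")["notional"].sum())
```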
Fragma Data Systems
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore)
3.5 - 8 yrs
₹5L - ₹18L / yr
PySpark
Data engineering
Data Warehouse (DWH)
SQL
Spark
+1 more
Must-Have Skills:
• Good experience with PySpark, including DataFrame core functions and Spark SQL (a short aggregation sketch follows this list)
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Good grasp of ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
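A short sketch of the PySpark must-haves above: DataFrame core functions applying a business rule and an aggregation. The dataset and rule are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("elt-sketch").getOrCreate()

# Hypothetical payments extracted from a data lake.
payments = spark.createDataFrame(
    [("upi", 250.0, "SUCCESS"),
     ("card", 900.0, "FAILED"),
     ("upi", 1200.0, "SUCCESS")],
    ["channel", "amount", "status"],
)

# Business rule: only successful payments feed the consumption layer.
per_channel = (
    payments.filter(F.col("status") == "SUCCESS")
            .groupBy("channel")
            .agg(F.count("*").alias("txn_count"),
                 F.sum("amount").alias("total_amount"))
)
per_channel.show()
```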
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
  • Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion, and transformation functions.
  • Azure Synapse or Azure SQL Data Warehouse.
  • Spark on Azure (available in HDInsight and Databricks).
 
Good to Have: 
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/StreamSets and Informatica
  • Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
  • Capacity Planning and Performance Tuning on Azure Stack and Spark.
Our client company is into IT & Services (GI1)

Agency job
via Multi Recruit by Ravish E
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹12L / yr
Python
Machine Learning (ML)
R
  • Proficient in R and Python
  • 1+ years of work experience, including at least 6 months working with Python
  • Prior experience with building ML models
  • Prior experience with SQL
  • Knowledge of statistical techniques
  • Experience with working on Spatial Data will be an added advantage