11+ Shared services Jobs in India
We are looking for an exceptional Software Developer for our Data Engineering India team who can contribute to building a world-class big data engineering stack that will be used to fuel our Analytics and Machine Learning products. This person will contribute to the architecture, operation, and enhancement of:
- Our petabyte-scale data platform, with a key focus on finding solutions that can support the Analytics and Machine Learning product roadmap. Every day, terabytes of ingested data need to be processed and made available for querying and insight extraction across various use cases.
- Our bespoke Machine Learning pipelines. This will also provide opportunities to contribute to the prototyping, building, and deployment of Machine Learning models.
About the Organisation:
- We offer a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology, and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom, and India.
- You will gain work experience in a global environment: our team speaks over 20 different languages, represents more than 16 nationalities, and over 42% of our staff are multilingual.
Job Description
Position: Software Developer, Data Engineering team
Location: Pune (initially 100% remote for the coming year due to COVID-19)
You:
- Have at least 4 years' experience.
- Deep technical understanding of Java or Golang.
- Production experience with Python is a big plus and an extremely valuable supporting skill for us.
- Exposure to modern Big Data tech: Cassandra/Scylla, Kafka, Ceph, the Hadoop stack, Spark, Flume, Hive, Druid, etc., while understanding that certain problems may require completely novel solutions.
- Exposure to one or more modern ML tech stacks (Spark MLlib, TensorFlow, Keras, the GCP ML stack, AWS SageMaker) is a plus; see the sketch after this list.
- Experience working in an Agile/Lean model.
- Experience supporting and troubleshooting large systems.
- Exposure to configuration management tools such as Ansible or Salt.
- Exposure to IaaS platforms such as AWS, GCP, or Azure.
- Good addition: experience working with large-scale data.
- Good addition: experience architecting, developing, and operating data warehouses, big data analytics platforms, and high-velocity data pipelines.
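Since Keras and TensorFlow appear in the preferred ML stacks above, here is a minimal prototyping sketch; it assumes TensorFlow 2.x is installed, and the data, shapes, and architecture are illustrative placeholders rather than anything this team actually ships.

```python
# Minimal Keras prototyping sketch (assumption: TensorFlow 2.x installed).
# The features and labels are random stand-ins for ingested event data.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 10).astype("float32")               # 256 rows, 10 features
y = np.random.randint(0, 2, size=(256,)).astype("float32")  # binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the toy data
```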
Note: We are not looking for a Big Data Developer / Hadoop Developer.
About DeepIntent:
DeepIntent is a marketing technology company that helps healthcare brands strengthen communication with patients and healthcare professionals by enabling highly effective and performant digital advertising campaigns. Our healthcare technology platform, MarketMatch™, connects advertisers, data providers, and publishers to operate the first unified, programmatic marketplace for healthcare marketers. The platform’s built-in identity solution matches digital IDs with clinical, behavioural, and contextual data in real-time so marketers can qualify 1.6M+ verified HCPs and 225M+ patients to find their most clinically-relevant audiences and message them on a one-to-one basis in a privacy-compliant way. Healthcare marketers use MarketMatch to plan, activate, and measure digital campaigns in ways that best suit their business, from managed service engagements to technical integration or self-service solutions. DeepIntent was founded by Memorial Sloan Kettering alumni in 2016 and acquired by Propel Media, Inc. in 2017. We proudly serve major pharmaceutical and Fortune 500 companies out of our offices in New York, Bosnia and India.
What You’ll Do:
- Establish a formal data practice for the organisation.
- Build and operate scalable and robust data architectures.
- Create pipelines for the self-service introduction and usage of new data.
- Implement DataOps practices.
- Design, develop, and operate data pipelines which support data scientists and machine learning engineers.
- Build simple, highly reliable data storage, ingestion, and transformation solutions which are easy to deploy and manage.
- Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
Who You Are:
- Experience designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data.
- Experience working with public clouds like GCP/AWS.
- Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
- Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
- Proficient with SQL, Java, Spring Boot, Python or a JVM-based language, and Bash.
- Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.; see the Airflow sketch after this list.
- Good communication skills, with the ability to collaborate with both technical and non-technical people.
- Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious.
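Since the list above names Airflow among the Apache projects, here is a minimal sketch of a configurable three-step pipeline expressed as an Airflow DAG. It assumes Apache Airflow 2.x is installed; the DAG id, task names, and task bodies are hypothetical placeholders, not this team's actual pipelines.

```python
# Minimal Airflow 2.x DAG sketch: extract -> transform -> load (hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull a day's raw events from an upstream source.
    return ["event-a", "event-b"]

def transform(ti):
    # Read the upstream task's output from XCom and normalise it.
    raw = ti.xcom_pull(task_ids="extract")
    return [event.upper() for event in raw]

def load(ti):
    # Placeholder: write the transformed rows to the warehouse.
    print(ti.xcom_pull(task_ids="transform"))

with DAG(
    dag_id="example_ingest",          # hypothetical name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```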
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and you will independently drive design discussions to ensure the health of the overall solution.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and you will independently drive design discussions to ensure the health of the overall solution.
The role requires a hands-on technologist with a strong programming background in a language like Java, Scala, or Python, experience in data ingestion, integration, wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.
Role & Responsibilities:
Your role focuses on the design, development, and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch and real-time mode (see the sketch after this list)
• Build functionality for data analytics, search and aggregation
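To make the batch and real-time ingestion bullet concrete, here is a minimal PySpark sketch. It assumes pyspark plus the Spark-Kafka connector package are available; the paths, broker address, and topic name are hypothetical placeholders.

```python
# Minimal PySpark ingestion sketch: one batch source, one streaming source.
# Assumptions: pyspark installed, the spark-sql-kafka connector available;
# all paths, the broker address, and the topic name are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# Batch mode: read a daily CSV drop from a landing zone into curated storage.
batch_df = spark.read.option("header", "true").csv("/data/landing/2021-06-01/")
batch_df.write.mode("append").parquet("/data/curated/events/")

# Real-time mode: read the same events from a Kafka topic as a stream.
stream_df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)
query = (
    stream_df.selectExpr("CAST(value AS STRING) AS raw_event")
    .writeStream.format("parquet")
    .option("path", "/data/curated/events_stream/")
    .option("checkpointLocation", "/data/checkpoints/events_stream/")
    .start()
)
query.awaitTermination()
```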
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 5+ years of IT experience, with 3+ years in data-related technologies.
2. Minimum 2.5 years of experience in Big Data technologies, with working exposure to the related data services on at least one cloud platform (AWS / Azure / GCP).
3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines.
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, working knowledge of data platform related services on at least one cloud platform, IAM, and data security.
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.
4. Performance tuning and optimization of data pipelines.
5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, and code quality.
6. Cloud data specialty and other related Big Data technology certifications.
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
SQL / RDBMS
Concepts of RDBMS, normalization techniques
Entity-Relationship diagram / ER model
Transactions, commit, rollback, ACID properties
Transaction log
Difference in column behavior when the column is nullable
SQL statements
Join operations
DDL, DML, data modelling
Optimal query writing with aggregate functions, GROUP BY, HAVING clause, ORDER BY, etc.; should be hands-on with scenario-based query writing
Query optimization techniques, indexing in depth
Understanding query plans
Batching
Locking schemes
Isolation levels
Concepts of stored procedures, cursors, triggers, views
Beginner-level PL/SQL: procedure and function writing skills
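A minimal, self-contained illustration of several of the topics above (a join, aggregates with GROUP BY/HAVING/ORDER BY, and transaction commit/rollback), using Python's built-in sqlite3 module; the schema and data are invented for the example.

```python
# SQL topics sketch using the standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL NOT NULL
    );
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 40.0);
""")

# Join + aggregate: total order value per customer, filtered with HAVING.
cur.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    HAVING SUM(o.amount) > 50
    ORDER BY total DESC
""")
print(cur.fetchall())  # [('Asha', 2, 200.0)]

# Transactions: a rollback discards the uncommitted insert (ACID atomicity).
try:
    cur.execute("INSERT INTO orders VALUES (4, 2, 10.0)")
    raise ValueError("simulated failure before commit")
except ValueError:
    conn.rollback()

cur.execute("SELECT COUNT(*) FROM orders")
print(cur.fetchone())  # (3,) -- the rolled-back row is gone
conn.close()
```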
Spring JPA and Spring Data basics
Hibernate mappings
UNIX
Basic concepts of Unix
Commonly used Unix commands with their options
Combining Unix commands using pipes, filters, etc.
The vi editor and its different modes
Basic-level scripting and basic knowledge of how to execute jar files from the host
File and directory permissions
Application-based scenarios
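For the Unix items above, here is a small Python sketch of the same ideas: composing commands with a pipe and launching a jar from the host. It assumes ps, grep, and java are on the PATH; app.jar is a hypothetical placeholder.

```python
# Pipe-and-filter composition and jar execution, driven from Python.
import subprocess

# Equivalent of the shell pipeline: ps aux | grep python
ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
grep = subprocess.run(
    ["grep", "python"], stdin=ps.stdout, capture_output=True, text=True
)
ps.stdout.close()
print(grep.stdout)

# Executing a jar from the host (app.jar is a placeholder; java assumed on PATH).
subprocess.run(["java", "-jar", "app.jar"], check=False)
```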
What you will do:
- Identifying alternate data sources beyond financial statements and implementing them as a part of assessment criteria
- Automating appraisal mechanisms for all newly launched products and revisiting the same for an existing product
- Back-testing investment appraisal models at regular intervals to improve the same
- Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
- Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
- Identifying relevant sub-sector criteria to score and rate investment opportunities internally
Desired Candidate Profile
What you need to have:
- Bachelor’s degree with at least 3 years of relevant work experience, along with a CA/MBA (mandatory)
- Experience working in lending/investing fintech (mandatory)
- Strong Excel skills (mandatory)
- Previous experience in credit rating or credit scoring or investment analysis (preferred)
- Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
- Proficiency in data analysis (preferred)
- Good verbal and written communication skills
• Strong experience working with Big Data technologies like Spark (Scala/Java), Apache Solr, Hive, HBase, Elasticsearch, MongoDB, Airflow, Oozie, etc.
• Experience working with relational databases like MySQL, SQL Server, Oracle, etc.
• Good understanding of large-system architecture and design
• Experience working in an AWS/Azure cloud environment is a plus
• Experience using version control tools such as Bitbucket/Git
• Experience using tools like Maven, Jenkins, and JIRA
• Experience working in an Agile software delivery environment, with exposure to continuous integration and continuous delivery tools
• Passionate about technology and delivering solutions to solve complex business problems
• Great collaboration and interpersonal skills
• Ability to work with team members and lead by example in code, feature development, and knowledge sharing
- 6+ years of recent hands-on Java development
- Developing data pipelines in AWS or Google Cloud
- Java, Python, JavaScript programming languages
- Great understanding of designing for performance, scalability, and reliability of data-intensive applications
- Hadoop MapReduce, Spark, Pig; understanding of database fundamentals and advanced SQL knowledge
- In-depth understanding of object-oriented programming concepts and design patterns
- Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
- Understanding of full software development life cycle, agile development and continuous integration
- Experience in Agile methodologies including Scrum and Kanban
- Design AWS data ingestion frameworks and pipelines based on the specific needs driven by the Product Owners and user stories…
- Experience building a data lake using AWS, with hands-on experience in S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose, and EMR
- Experience with Apache Spark programming on Databricks
- Experience working with NoSQL databases such as Cassandra, HBase, and Elasticsearch
- Hands-on experience leveraging CI/CD to rapidly build and test application code
- Expertise in data governance and data quality
- Experience working with PCI data and working with data scientists is a plus
- At least 4 years of experience with the following Big Data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
- 5+ years of experience designing and developing data pipelines for data ingestion or transformation using AWS technologies
- Strong experience in Scala/Spark
End client: Sapient
Mode of hiring: FTE
Notice period should be less than 30 days