11+ Data mapping Jobs in Hyderabad | Data mapping Job openings in Hyderabad
Apply to 11+ Data mapping Jobs in Hyderabad on CutShort.io. Explore the latest Data mapping Job opportunities across top companies like Google, Amazon & Adobe.
15-year-old US-based product company
- Should have good hands-on experience in Informatica MDM Customer 360, Data Integration (ETL) using PowerCenter, and Data Quality.
- Must have strong skills in Data Analysis, Data Mapping for ETL processes, and Data Modeling.
- Experience with the SIF framework including real-time integration
- Should have experience in building C360 Insights using Informatica
- Should have good experience in creating performant design using Mapplets, Mappings, Workflows for Data Quality(cleansing), ETL.
- Should have experience in building different data warehouse architectures, such as Enterprise, Federated, and Multi-Tier.
- Should have experience in configuring Informatica Data Director for the Data Governance of users, IT Managers, and Data Stewards.
- Should have good knowledge in developing complex PL/SQL queries.
- Should have working experience on UNIX and shell scripting to run the Informatica workflows and to control the ETL flow.
- Should know about Informatica Server installation and have knowledge of the Administration console.
- Working experience with Informatica Developer along with Administration is an added advantage.
- Working experience in Amazon Web Services (AWS) is an added advantage. Particularly on AWS S3, Data pipeline, Lambda, Kinesis, DynamoDB, and EMR.
- Should be responsible for the creation of automated BI solutions, including requirements, design, development, testing, and deployment
Job Description-
Responsibilities:
* Work on real-world computer vision problems
* Write robust industry-grade algorithms
* Leverage OpenCV, Python and deep learning frameworks to train models.
* Use deep learning technologies such as Keras, TensorFlow, PyTorch, etc.
* Develop integrations with various in-house or external microservices.
* Must have experience in deployment practices (Kubernetes, Docker, containerization, etc.) and model compression practices
* Research latest technologies and develop proof of concepts (POCs).
* Build and train state-of-the-art deep learning models to solve Computer Vision related problems, including, but not limited to:
* Segmentation
* Object Detection
* Classification
* Object Tracking
* Visual Style Transfer
* Generative Adversarial Networks
* Work alongside other researchers and engineers to develop and deploy solutions for challenging real-world problems in the area of Computer Vision
* Develop and plan Computer Vision research projects, defining the scope of work, including formal definition of research objectives and outcomes
* Provide specialized technical / scientific research to support the organization on different projects for existing and new technologies
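To make the object-detection responsibility above concrete, here is a minimal, illustrative sketch (not taken from the posting) of Intersection-over-Union (IoU), the standard metric for comparing a predicted bounding box against ground truth:

```python
# Illustrative sketch: Intersection-over-Union (IoU) for axis-aligned
# bounding boxes, the core evaluation metric in object detection.
def iou(box_a, box_b):
    """Compute IoU of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partially overlapping boxes
```

In practice a detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.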
Skills:
* Object Detection
* Computer Science
* Image Processing
* Computer Vision
* Deep Learning
* Artificial Intelligence (AI)
* Pattern Recognition
* Machine Learning
* Data Science
* Generative Adversarial Networks (GANs)
* Flask
* SQL
Company Profile :
Merilytics, an Accordion company, is a fast-growing analytics firm offering advanced and intelligent analytical solutions to clients globally. We combine domain expertise, advanced analytics, and technology to provide robust solutions for clients' business problems. You can find further details about the company at https://merilytics.com.
We partner with clients in the Private Equity, CPG, Retail, Healthcare, Media & Entertainment, Technology, Logistics, and other industries by providing analytical solutions that generate superior returns. We solve clients' business problems by analyzing large amounts of data to help guide their Operations, Marketing, Pricing, Customer Strategies, and much more.
Position :
- A Business Associate at Merilytics works on complex analytical projects and is the primary owner of the work streams involved.
- The Business Associates are expected to lead the team of Business Analysts to deliver robust analytical solutions consistently and mentor the Analysts for professional development.
Location : Hyderabad
Roles and Responsibilities :
The roles and responsibilities of a Business Associate will include the below:
- Proactively provide thought leadership to the team and have complete control on the delivery process of the project.
- Understand the client's point of view and translate it into sound judgment calls in ambiguous analytical situations.
- Highlight potential analytical issues upfront and resolve them independently.
- Synthesize the analysis and derive insights independently.
- Identify the crux of the client problem and leverage it to draw relevant actionable insights from the analysis/work.
- Ability to manage multiple Analysts and provide customized guidance for individual development.
- Resonate with our five core values - Client First, Excellence, Integrity, Respect and Teamwork.
Pre-requisites and skillsets required to apply for this role :
- Undergraduate degree (B.E./B.Tech.) from a tier-1/tier-2 college is preferred.
- Should have 2-4 years of experience.
- Strong leadership & proactive communication to coordinate with the project team and other internal stakeholders.
- Ability to use business judgement and a structured approach towards solving complex problems.
- Experience in client-facing/professional services environment is a plus.
- Strong hard skills in analytics tools such as R, Python, SQL, and Excel are a plus.
Why Explore a Career at Merilytics :
- High growth environment: Semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
- Cross Domain Exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial Environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: Non-bureaucratic and fun working environment; Strong peer environment that will challenge you and accelerate your learning curve.
Other benefits for full time employees:
(i) Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision, dental) for employee and family members, free doctor's consultations, counselors, etc.
(ii) Corporate Meal card options for ease of use and tax benefits.
(iii) Work dinners, team lunches, company sponsored team outings and celebrations.
(iv) Reimbursement support for travel to the office, as and when promulgated by the Company.
(v) Cab reimbursement for women employees beyond a certain time of the day.
(vi) Robust leave policy to support work-life balance. Specially designed leave structure to support women employees for maternity and related requests.
(vii) Reward and recognition platform to celebrate professional and personal milestones.
(viii) A positive & transparent work environment including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
Data Engineer
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages - Python & PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies.
- Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
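The automated data-quality responsibility above can be sketched as a small validation pass over ingested rows. This is a minimal, hedged illustration — the field names and rules are hypothetical, not from the posting:

```python
# Illustrative data-quality check: validate rows before they enter the
# platform. Field names ("id", "amount") are hypothetical examples.
def check_rows(rows, required_fields=("id", "amount")):
    """Return (valid_rows, errors). A row fails if a required field is
    missing/empty or its 'amount' is not numeric."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if not row.get(f)]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
            continue
        try:
            float(row["amount"])
        except (TypeError, ValueError):
            errors.append((i, "non-numeric amount"))
            continue
        valid.append(row)
    return valid, errors

rows = [{"id": "1", "amount": "10.5"},
        {"id": "", "amount": "3"},
        {"id": "2", "amount": "oops"}]
valid, errors = check_rows(rows)
print(len(valid), len(errors))  # 1 valid row, 2 rejected
```

In a production pipeline these checks would typically run as a Glue/Spark stage, quarantining rejected rows for review rather than discarding them.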
QUALIFICATIONS
- 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of at least one language: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Multinational company providing energy & automation digital solutions
Roles and Responsibilities
We are hiring an AWS Data Engineer expert to join our team.
Job Title: AWS Data Engineer
Experience: 5 to 10 years
Location: Remote
Notice: Immediate or Max 20 Days
Role: Permanent Role
Skillset: AWS, ETL, SQL, Python, PySpark, Postgres DB, Dremio.
Job Description:
Able to develop ETL jobs.
Able to help with data curation/cleanup, data transformation, and building ETL pipelines.
Strong Postgres DB experience; knowledge of Dremio as a data visualization/semantic layer between the DB and the application is a plus.
SQL, Python, and PySpark are a must.
Good communication skills.
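The ETL pipeline work this role describes can be sketched in miniature. The example below is purely illustrative — the source data is in-memory, SQLite stands in for Postgres, and the table/column names are hypothetical:

```python
import sqlite3

# Minimal ETL sketch. Assumptions: in-memory source rows and SQLite
# standing in for Postgres; "txns" schema is a hypothetical example.
def extract():
    # In a real job this would read from S3, an API, or flat files.
    return [("a1", " 100 "), ("a2", "250"), ("a3", "")]

def transform(rows):
    # Data curation/cleanup: trim whitespace, drop rows with no amount.
    return [(acc, int(amt.strip())) for acc, amt in rows if amt.strip()]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS txns (account TEXT, amount INTEGER)")
    conn.executemany("INSERT INTO txns VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM txns").fetchone())  # (2, 350)
```

The same extract/transform/load shape carries over to PySpark, where `transform` becomes DataFrame operations and `load` a write to the warehouse.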
About Quadratyx:
We are a global, product-centric insight & automation services company. We help the world's organizations make better and faster decisions using the power of insight and intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing, and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences & analytics than most other companies in India.
We firmly believe in Excellence Everywhere.
Job Description
Purpose of the Job/ Role:
• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.
Key Requisites:
• Expertise in Data structures and algorithms.
• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.
• Scaling of cloud-based infrastructure.
• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.
• Experience leading and mentoring a team of data engineers.
• Hands-on experience in test-driven development (TDD).
• Expertise in NoSQL databases such as MongoDB and Cassandra (MongoDB preferred), and strong knowledge of relational databases.
• Good knowledge of Kafka and Spark Streaming internal architecture.
• Good knowledge of any Application Servers.
• Extensive knowledge of big data platforms like Hadoop, Hortonworks, etc.
• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, Azure, etc.
Skills/ Competencies Required
Technical Skills
• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python or Java.
• Clear end-to-end experience in designing, programming, and implementing large software systems.
• Passion and analytical abilities to solve complex problems.
Soft Skills
• Always speaking your mind freely.
• Communicating ideas clearly in talking and writing, integrity to never copy or plagiarize intellectual property of others.
• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.
Academic Qualifications & Experience Required
Required Educational Qualification & Relevant Experience
• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.
• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).
4-6 years of total experience in data warehousing and business intelligence
3+ years of solid Power BI experience (Power Query, M-Query, DAX, Aggregates)
2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)
Strong experience building visually appealing UI/UX in Power BI
Understand how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)
Experience building Power BI using large data in direct query mode
Expert SQL background (query building, stored procedure, optimizing performance)
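The query-building side of the SQL requirement above can be illustrated with a small aggregate query. This is a hedged sketch run against SQLite purely for demonstration; the `sales` schema is a hypothetical example, not from the posting:

```python
import sqlite3

# Illustrative aggregate query of the kind a BI/Power BI role builds
# against warehouse data. SQLite and the "sales" table are stand-ins.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('North', 100), ('North', 50), ('South', 70);
""")
query = """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
"""
for region, total in conn.execute(query):
    print(region, total)  # North 150, then South 70
```

In a Power BI context the same aggregation would usually be pushed into the source (direct query mode) or expressed as a DAX measure over the imported model.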
Skills- Informatica with Big Data Management
1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems, technical requirements to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R is a plus
Experience in Java/Scala is a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud