11+ Data Transformation Services Jobs in India
Apply to 11+ Data Transformation Services Jobs on CutShort.io. Find your next job, effortlessly. Browse Data Transformation Services Jobs and apply today!
Deep-Rooted.Co is on a mission to get Fresh, Clean, Community (Local farmer) produce from harvest to reach your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.
Founded out of Bangalore by Arvind, Avinash, Guru and Santosh, and backed by Accel, Omnivore and Mayfield, we have raised $7.5 million to date across Seed, Series A and debt funding. Our brand Deep-Rooted.Co, launched in August 2020, was the first of its kind in India's Fruits & Vegetables (F&V) space. We are present in Bangalore and Hyderabad and on a journey of expansion to newer cities, managed seamlessly through a tech platform designed and built to transform the Agri-Tech sector.
Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.
How is this possible? It's because we work with smart people. We are looking for Engineers in Bangalore to work with the Product Leader (Founder) (https://www.linkedin.com/in/gururajsrao/) and CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it, as it touches everyday life and is fun. This will be a virtual consultation.
We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.
Purpose of the role:
- As a startup, we have data distributed across various sources such as Excel, Google Sheets and databases. As we grow, we need swift decision-making based on all the data that exists. You will help us bring this data together and put it into a data model that can be used in business decision-making (see the ingestion sketch below).
- Handle the nuances of the Excel and Google Sheets APIs.
- Pull data in and manage its growth, freshness and correctness.
- Transform data into a format that aids easy decision-making for Product, Marketing and Business Heads.
- Understand the business problem, solve it using technology and take it to production. No hand-offs: the full path to production is yours.
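As a rough illustration of the ingestion work described above, here is a minimal sketch that pulls a Google Sheet into a pandas DataFrame using the gspread library. The credentials path, sheet name, and column names are all hypothetical, not part of the actual stack described in this listing.

```python
import gspread
import pandas as pd

# Authenticate with a Google service account (credentials path is hypothetical).
gc = gspread.service_account(filename="service_account.json")

# Open a sheet by title and read every row as a dict (header row becomes the keys).
worksheet = gc.open("daily_harvest_log").sheet1
records = worksheet.get_all_records()

# Land the raw rows in a DataFrame for typing, deduplication, and freshness checks.
df = pd.DataFrame(records)
df["harvest_date"] = pd.to_datetime(df["harvest_date"], errors="coerce")
df = df.drop_duplicates().dropna(subset=["harvest_date"])

print(df.head())
```

From here the cleaned frame would typically be written into the warehouse model that the business teams query.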
Technical expertise:
- Good knowledge of and experience with programming languages: Java, SQL, Python.
- Good knowledge of data warehousing and data architecture.
- Experience with data transformations and ETL.
- Experience with API tools and more closed systems such as Excel and Google Sheets.
- Experience with the AWS cloud platform and Lambda (a minimal Lambda sketch follows the skills list below).
- Experience with distributed data processing tools.
- Experience with container-based deployments on the cloud.
Skills:
Java, SQL, Python, Data Build Tool (dbt), AWS Lambda, HTTP, REST API, Extract Transform Load (ETL).
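To illustrate the AWS Lambda requirement above, here is a minimal sketch of a Lambda handler that transforms a file landed in S3 and writes the result back. The bucket layout and the uppercase transform are placeholders, not a description of the actual pipeline.

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by an S3 upload event; bucket and key come from the event payload.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    # Read the raw export, apply a trivial placeholder transformation, write it back.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = [line.upper() for line in body.splitlines()]

    s3.put_object(
        Bucket=bucket,
        Key=f"transformed/{key}",
        Body="\n".join(rows).encode("utf-8"),
    )
    return {"statusCode": 200, "body": json.dumps({"rows": len(rows)})}
```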
Job Title: SQL Query Writer - Analytics Automation
Location: Thane (West), Mumbai
Experience: 4-5 years
Responsibilities:
- Develop and optimize SQL queries for efficient data retrieval and analysis (a window-function example appears after the qualifications list below).
- Automate analytics processes using tools such as SQL, Python, ACL, Alteryx, Analyzer, Excel macros, and Access queries.
- Collaborate with cross-functional teams to understand analytical requirements and provide effective solutions.
- Ensure data accuracy, integrity, and security in automated processes.
- Troubleshoot and resolve issues related to analytics automation.
Qualifications:
- Minimum 3 years of experience in SQL query writing and analytics automation.
- Proficiency in SQL, Python, ACL, Alteryx, Analyzer, Excel Macros, and Access Query.
- Strong analytical skills and attention to detail.
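As a rough example of the query writing and automation this role calls for, here is a self-contained sketch that runs a window-function query from Python against an in-memory SQLite database; the schema and data are invented for illustration.

```python
import sqlite3
import pandas as pd

# An in-memory database stands in for the production warehouse (purely illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL, order_date TEXT);
    INSERT INTO orders VALUES
        (1, 'West', 120.0, '2024-01-05'),
        (2, 'West', 80.0,  '2024-01-06'),
        (3, 'East', 200.0, '2024-01-05');
""")

# A window function computes per-region running totals in one pass, avoiding the
# correlated subquery a naive version of this report would need.
query = """
    SELECT region,
           order_date,
           amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY order_date) AS running_total
    FROM orders
    ORDER BY region, order_date;
"""
print(pd.read_sql_query(query, conn))
```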
Duration: Full Time
Location: Vishakhapatnam, Bangalore, Chennai
Years of experience: 3+ years
Job Description :
- 3+ years of working as a Data Engineer, with a thorough understanding of data frameworks that collect, manage, transform and store data from which business insights can be derived.
- Strong communications (written and verbal) along with being a good team player.
- 2+ years of experience within the Big Data ecosystem (Hadoop, Sqoop, Hive, Spark, Pig, etc.)
- 2+ years of strong experience with SQL and Python (Data Engineering focused).
- Experience with GCP data services such as BigQuery, Dataflow and Dataproc is preferred and an added advantage (a minimal BigQuery sketch follows this list).
- Any prior experience in ETL tools such as DataStage, Informatica, DBT, Talend, etc. is an added advantage for the role.
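For a sense of the GCP work mentioned above, here is a minimal BigQuery sketch using the google-cloud-bigquery client. The project, dataset, and table names are hypothetical, and credentials are assumed to come from the environment.

```python
from google.cloud import bigquery

# The client picks up credentials from GOOGLE_APPLICATION_CREDENTIALS.
client = bigquery.Client()

# Aggregate daily event counts; `my_project.analytics.events` is a made-up table.
sql = """
    SELECT DATE(event_ts) AS event_day, COUNT(*) AS events
    FROM `my_project.analytics.events`
    GROUP BY event_day
    ORDER BY event_day
"""
for row in client.query(sql).result():
    print(row.event_day, row.events)
```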
at vThink Global Technologies
Responsibilities
- Identify, analyze, and interpret trends or patterns in complex data sets to develop a thorough understanding of users, and acquisition channels.
- Run exploratory analyses to uncover new areas of opportunity, generate hypotheses, and quickly assess the potential upside of a given opportunity.
- Help execute projects to drive insights that lead to growth.
- Work closely with marketing, design, product, support, and engineering to anticipate analytics needs and to quantify the impact of existing features, future product changes, and marketing campaigns.
- Work with data engineering to develop and implement new analytical tools and improve our underlying data infrastructure. Build tracking plans for new and existing products and work with engineering to ensure proper implementation.
- Analyze, forecast, and build custom reports to make key performance indicators and insights available to the entire company.
- Monitor, optimize, and report on marketing and growth metrics and split-test results. Make recommendations based on analytics and test findings.
- Drive optimization and data minded culture inside the company.
- Develop frameworks, models, tools, and processes to ensure that analytical insights can be incorporated into all key decision making.
- Effectively present and communicate analysis to the company to drive business decisions.
- Create a management dashboard including all important KPIs to be tracked at a company and department level.
- Establish an end-to-end campaign ROI tracking mechanism to attribute sales to specific Google and Facebook campaigns (a minimal attribution sketch follows this list).
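As a rough sketch of the campaign ROI tracking described above, the following pandas snippet attributes revenue to campaigns and computes ROI per campaign; the campaign names and figures are invented.

```python
import pandas as pd

# Hypothetical exports: ad spend per campaign, and sales attributed via UTM tags.
spend = pd.DataFrame({
    "campaign": ["google_brand", "fb_retarget"],
    "cost": [1500.0, 900.0],
})
sales = pd.DataFrame({
    "campaign": ["google_brand", "google_brand", "fb_retarget"],
    "revenue": [1200.0, 1800.0, 700.0],
})

# Attribute revenue to campaigns, then compute ROI = (revenue - cost) / cost.
roi = (
    sales.groupby("campaign", as_index=False)["revenue"].sum()
         .merge(spend, on="campaign")
         .assign(roi=lambda d: (d["revenue"] - d["cost"]) / d["cost"])
)
print(roi)
```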
Skills and Experience
- Minimum 1-2 years of proven work experience in a data analyst role.
- Excellent analytical skills and problem-solving ability; the ability to answer unstructured business questions and work independently to drive projects to conclusion.
- Strong analytical skills with the capacity to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
- Experience extracting insights using advanced SQL or a similar tool to work efficiently at scale. Advanced expertise with commonly used analytics tools, including Google Analytics, Data Studio and Excel.
- Strong knowledge of statistics, including experimental design for optimization, statistical significance, confidence intervals and predictive analytics techniques.
- Must be self-directed, organized and detail oriented as well as have the ability to multitask and work effectively in a fast-paced environment.
- Active team player, excellent communication skills, positive attitude and good work ethic.
A leading global information technology and business process services company.
Python + Data Scientist:
• Build data-driven models to understand the characteristics of engineering systems.
• Train, tune, validate, and monitor predictive models.
• Sound knowledge of statistics.
• Experience in developing data processing tasks using PySpark, such as reading, merging, enriching, and loading data from external systems into target destinations (see the sketch after this list).
• Working knowledge of Big Data and/or Hadoop environments.
• Experience creating CI/CD pipelines using Jenkins or similar tools.
• Practiced in eXtreme Programming (XP) disciplines.
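To illustrate the PySpark read/merge/enrich/load tasks above, here is a minimal sketch; the paths, schemas, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("enrichment-sketch").getOrCreate()

# Read: the sources here are Parquet paths, but they could equally be JDBC or Hive.
readings = spark.read.parquet("s3://bucket/raw/sensor_readings/")
devices = spark.read.parquet("s3://bucket/ref/devices/")

# Merge and enrich: join readings to reference data, then derive a normalized metric.
enriched = (
    readings.join(devices, on="device_id", how="left")
            .withColumn("reading_norm", F.col("reading") / F.col("calibration_factor"))
)

# Load: write partitioned output to the target destination.
enriched.write.mode("overwrite").partitionBy("ingest_date").parquet(
    "s3://bucket/curated/readings/"
)
```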
Basic Qualifications
- Need to have a working knowledge of AWS Redshift.
- Minimum 1 year of designing and implementing a fully operational production-grade large-scale data solution on Snowflake Data Warehouse.
- 3 years of hands-on experience with building productized data ingestion and processing pipelines using Spark, Scala, Python
- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions
- Expertise in and an excellent understanding of Snowflake internals and of integrating Snowflake with other data processing and reporting technologies (a minimal connector sketch follows this list).
- Excellent presentation and communication skills, both written and verbal
- Ability to problem-solve and architect in an environment with unclear requirements
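As a rough illustration of working with Snowflake from Python, here is a minimal sketch using the snowflake-connector-python library; the account, warehouse, stage, and table names are placeholders.

```python
import snowflake.connector

# Connection parameters are placeholders; in practice they come from a secrets store.
conn = snowflake.connector.connect(
    user="ETL_USER",
    password="<secret>",   # never hard-code credentials in a real pipeline
    account="my_account",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

cur = conn.cursor()
try:
    # Bulk-load staged CSV files into a raw table (stage and table are hypothetical).
    cur.execute(
        "COPY INTO RAW.EVENTS FROM @EVENTS_STAGE FILE_FORMAT = (TYPE = 'CSV')"
    )
    # Quick sanity check on the load.
    cur.execute("SELECT COUNT(*) FROM RAW.EVENTS")
    print(cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```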
Job Description
The applicant must have a minimum of 5 years of hands-on IT experience, working on a full software lifecycle in Agile mode.
Good to have experience in data modeling and/or systems architecture.
Responsibilities will include technical analysis, design, development and enhancements.
You will participate in all/most of the following activities:
- Working with business analysts and other project leads to understand requirements.
- Modeling and implementing database schemas in DB2 UDB or other relational databases.
- Designing, developing and maintaining data processing using Python, DB2, Greenplum, Autosys and other technologies.
Skills/Expertise Required:
Work experience in developing large-volume databases (DB2/Greenplum/Oracle/Sybase).
Good experience in writing stored procedures, integrating database processing, and tuning and optimizing database queries.
Strong knowledge of table partitions, high-performance loading and data processing (a bulk-load sketch follows this section).
Good to have hands-on experience working with Perl or Python.
Hands on development using Spark / KDB / Greenplum platform will be a strong plus.
Designing, developing, maintaining and supporting Data Extract, Transform and Load (ETL) software using Informatica, Shell Scripts, DB2 UDB and Autosys.
Coming up with system architecture/re-design proposals for greater efficiency and ease of maintenance and developing software to turn proposals into implementations.
Need to work with business analysts and other project leads to understand requirements.
Strong collaboration and communication skills
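To illustrate high-performance loading on a PostgreSQL-compatible database such as Greenplum, here is a minimal sketch using psycopg2's COPY support; the connection string and table are hypothetical.

```python
import io
import psycopg2

# Greenplum speaks the PostgreSQL wire protocol, so psycopg2 works against it too.
conn = psycopg2.connect("dbname=warehouse user=etl host=gp-master")

# A small in-memory CSV stands in for a real extract file.
buf = io.StringIO("2024-01-05,WEST,120.0\n2024-01-06,WEST,80.0\n")

with conn, conn.cursor() as cur:
    # COPY streams all rows through a single command, which is far faster than
    # row-by-row INSERTs on the high-volume partitioned tables described above.
    cur.copy_expert(
        "COPY sales_daily (sale_date, region, amount) FROM STDIN WITH (FORMAT csv)",
        buf,
    )
```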
● Working on an awesome AI product for the eCommerce domain.
● Build a next-generation information extraction and computer vision product powered by state-of-the-art AI and deep learning techniques.
● Work with an international top-notch engineering team with full commitment to
Machine Learning development.
Desired Candidate Profile
● Passionate about search & AI technologies. Open to collaborating with colleagues &
external contributors.
● Good understanding of the mainstream deep learning models from multiple domains:
computer vision, NLP, reinforcement learning, model optimization, etc.
● Hands-on experience with deep learning frameworks (e.g. TensorFlow, PyTorch, MXNet) and models such as BERT. Able to implement the latest DL models using existing APIs and open-source libraries in a short time (see the sketch after this list).
● Hands-on experience with cloud-native techniques. Good understanding of web services and modern software technologies.
● Maintained or contributed to machine learning projects; familiar with the agile software development process, CI/CD workflow, ticket management, code review, version control, etc.
● Skilled in the following programming languages: Python 3.
● Good English skills, especially for writing and reading documentation.
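As an example of standing up an existing DL model quickly through open-source APIs, here is a minimal sketch using the Hugging Face transformers pipeline; the checkpoint choice and image URL are illustrative only.

```python
from transformers import pipeline

# Any image-classification checkpoint from the Hub would work here.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# Classify a product image by URL (the URL is a placeholder).
predictions = classifier("https://example.com/product.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```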
at NeenOpal Intelligent Solutions Private Limited
We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from 3rd party data sources by writing custom automated ETL jobs using Python. The role will work in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients. This includes designing database import/export processes used to generate client data warehouse deliverables.
- 2+ years of experience as an ETL developer, with strong data architecture knowledge of data warehousing concepts, SQL development and optimization, and operational support models.
- Experience using Python to automate ETL and data processing jobs (a minimal sketch appears at the end of this listing).
- Design and develop ETL and data processing solutions using data integration tools, Python scripts, and AWS, Azure, or on-premise environments.
- Experience / Willingness to learn AWS Glue / AWS Data Pipeline / Azure Data Factory for Data Integration.
- Develop and create transformation queries, views, and stored procedures for ETL processes, and process automation.
- Document data mappings, data dictionaries, processes, programs, and solutions as per established standards for data governance.
- Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points, such as validating control totals at intake and again after transformation, and transparently build lessons learned into future data quality assessments.
- Solid experience with data modeling, business logic, and RESTful APIs.
- Solid experience in the Linux environment.
- Experience with NoSQL and PostgreSQL preferred.
- Experience working with databases such as MySQL and Postgres as well as NoSQL stores, and enterprise-level connectivity experience (such as connecting over TLS and through proxies).
- Experience with NGINX and SSL.
- Performance tune data processes and SQL queries, and recommend and implement data process optimization and query tuning techniques.
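As a rough sketch of the custom automated ETL jobs this listing describes, the following pulls records from a hypothetical third-party REST API and upserts them into Postgres; the endpoint, token, and schema are all placeholders.

```python
import requests
import psycopg2
from psycopg2.extras import execute_values

# Extract: fetch orders from a generic third-party API (endpoint/token are made up).
resp = requests.get(
    "https://api.example.com/v1/orders",
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
rows = [(o["id"], o["customer"], o["total"]) for o in resp.json()["orders"]]

# Load: upsert into a staging table so reruns stay idempotent; control totals can
# then be validated against the source, as described above.
conn = psycopg2.connect("dbname=client_dw user=etl host=localhost")
with conn, conn.cursor() as cur:
    execute_values(
        cur,
        """INSERT INTO staging.orders (id, customer, total) VALUES %s
           ON CONFLICT (id) DO UPDATE
           SET customer = EXCLUDED.customer, total = EXCLUDED.total""",
        rows,
    )
```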