
11+ Microsoft SSAS Jobs in India

Apply to 11+ Microsoft SSAS Jobs on CutShort.io. Find your next job, effortlessly. Browse Microsoft SSAS Jobs and apply today!

Hyderabad
6 - 9 yrs
₹10L - ₹15L / yr
SQL
Databases
SQL Server Reporting Services (SSRS)
SQL Server Integration Services (SSIS)
SQL Server Analysis Services (SSAS)

Designation: Senior - DBA

Experience: 6-9 years

CTC: INR 17-20 LPA

Night Allowance: INR 800/Night

Location: Hyderabad, Hybrid

Notice Period: NA

Shift Timing: 6:30 pm to 3:30 am

Openings: 3

Roles and Responsibilities:

As a Senior Database Administrator, you will be responsible for the physical design, development, administration, and optimization of properly engineered database systems to meet agreed business and technical requirements. The candidate will work as part of (but not limited to) the Onsite/Offsite DBA group.

• Administration and management of databases in Dev, Stage, and Production environments.
• Performance tuning of database schemas, stored procedures, etc.
• Providing technical input on the setup and configuration of database servers and the SAN disk subsystem on all database servers.
• Troubleshooting and handling all database-related issues and tracking them through to resolution.
• Proactive monitoring of databases, from both a performance and a capacity-management perspective.
• Performing database maintenance activities such as backup/recovery and rebuilding and reorganizing indexes (a minimal index-maintenance sketch follows this list).
• Ensuring that all database releases are properly assessed and measured from a functionality and performance perspective.
• Ensuring that all databases are up to date with the latest service packs, patches & security fixes.
• Taking ownership and ensuring high-quality, timely delivery of projects on hand.
• Collaborating with application/database developers, quality assurance, and operations/support staff.
• Helping manage large, high-transaction-rate SQL Server production environments.
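As an illustration of the index-maintenance work above, here is a minimal sketch in Python using pyodbc; the connection string is hypothetical, and the 5%/30% reorganize/rebuild thresholds follow the commonly cited guideline rather than anything specified in this posting.

    import pyodbc

    # Hypothetical connection string; substitute the real server and database.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=db-host;"
        "DATABASE=AppDB;Trusted_Connection=yes",
        autocommit=True,  # ALTER INDEX runs outside an explicit transaction
    )
    cur = conn.cursor()

    # List fragmented indexes from the standard dynamic management view.
    FRAG_QUERY = """
    SELECT OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
           OBJECT_NAME(ps.object_id)        AS table_name,
           i.name                           AS index_name,
           ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
    JOIN sys.indexes i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE ps.index_id > 0 AND ps.avg_fragmentation_in_percent > 5
    """

    for schema, table, index, frag in cur.execute(FRAG_QUERY).fetchall():
        # Common guideline: reorganize light fragmentation, rebuild above ~30%.
        action = "REBUILD" if frag > 30 else "REORGANIZE"
        cur.execute(f"ALTER INDEX [{index}] ON [{schema}].[{table}] {action}")

In a real environment this would typically run as a scheduled SQL Agent job or maintenance-plan step rather than an ad-hoc script.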

Eligibility:

• Bachelor's/Master's degree (BE/BTech/MCA/MTech/MS).
• 6-8 years of solid experience in SQL Server 2016/2019 database administration and maintenance on the Azure and AWS clouds.
• Experience handling and managing large SQL Server databases (greater than 200 GB) in a real-time production environment.
• Experience in troubleshooting and resolving database integrity issues, performance issues, blocking/deadlocking issues, connectivity issues, data replication issues, etc.
• Experience configuring and troubleshooting SQL Server High Availability (HA).
• Ability to detect and troubleshoot database-related CPU, memory, I/O, disk space, and other resource contention issues.
• Experience with database maintenance activities such as backup/recovery and capacity monitoring/management, and with Azure Backup Services.
• Experience with HA/failover technologies such as clustering, SAN replication, log shipping & mirroring.
• Experience collaborating with development teams on physical database design activities and performance tuning.
• Experience in managing and making software deployments/changes in real-time production environments.
• Ability to work on multiple projects at one time with minimal supervision and ensure high-quality, timely delivery.
• Knowledge of tools like SQL LiteSpeed, SQL Diagnostic Manager, and AppDynamics.
• Strong understanding of data warehousing concepts and SQL Server architecture.
• Certified DBA, proficient in T-SQL, and proficient in storage technologies such as ASM, SAN, NAS, RAID, and multipathing.
• Strong analytical and problem-solving skills; proactive, independent, and a proven ability to work under tight targets and pressure.
• Experience working in a highly regulated environment, such as a financial services institution.
• Expertise in SSIS and SSRS.

Skills:

SSIS

SSRS


The world’s first real-time marketing automation built on an Intelligent and Secure Customer Data Platform, orchestrating 1-to-1 personalization and cross-channel customer journeys at scale that increases conversion, retention, & growth for enterprises.

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
6 - 12 yrs
₹25L - ₹35L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning

Experience Required:

  • 6+ years of data science experience.
  • Demonstrated experience in leading programs.
  • Prior experience in the customer data platform/finance domain is a plus.
  • Demonstrated ability in developing and deploying data-driven products.
  • Experience working with large datasets and developing scalable algorithms.
  • Hands-on experience working with tech, product, and operations teams.


Key Responsibilities:

Technical Skills:

  • Deep understanding of and hands-on experience with machine learning and deep learning algorithms; a good understanding of NLP and LLM concepts and fair experience in developing NLU and NLG solutions.
  • Experience with the Keras/TensorFlow/PyTorch deep learning frameworks (a minimal PyTorch sketch follows this list).
  • Proficiency in scripting languages (Python/Shell) and SQL.
  • Good knowledge of statistics.
  • Experience with big data, cloud, and MLOps.
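To make the deep-learning requirement concrete, here is a minimal, self-contained PyTorch training loop; the architecture, sizes, and synthetic data are purely illustrative and not taken from the posting.

    import torch
    import torch.nn as nn

    # Toy binary classifier: 16 input features, one hidden layer, 2 classes.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    X = torch.randn(64, 16)          # 64 synthetic samples
    y = torch.randint(0, 2, (64,))   # synthetic binary labels

    for epoch in range(10):          # short illustrative training loop
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()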

Soft Skills:

  • Strong analytical and problem-solving skills.
  • Excellent presentation and communication skills.
  • Ability to work independently and deal with ambiguity.

Continuous Learning:

  • Stay up to date with emerging technologies.


Qualifications:

A degree (B.Tech or higher) in Computer Science, Statistics, Applied Mathematics, Machine Learning, or a related field.

My Client is the world’s largest media investment company.

Agency job
via Merito by Jinita Sumaria
Gurugram
3 - 5 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
Python
SQL

The Client is the world’s largest media investment company. Our team of experts supports clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, e-commerce, and across traditional channels. We are currently looking for a Manager Analyst – Analytics to join us. In this role, you will work on various projects for the in-house team across data management, reporting, and analytics.


Responsibility:

•       Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
•       Develop data extraction and manipulation code based on business rules (a minimal extraction sketch follows this list)
•       Design and construct data stores and the procedures for their maintenance
•       Develop and maintain strong relationships with stakeholders
•       Write high-quality code as per prescribed standards
•       Participate in internal projects as required
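As an illustration of rule-driven data extraction, here is a minimal Python sketch using pandas and SQLAlchemy; the connection string, table, and business rule are all hypothetical.

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical warehouse connection; substitute real credentials.
    engine = create_engine("postgresql+psycopg2://user:pass@warehouse-host/analytics")

    # Illustrative business rule: spend by channel over the last 30 days.
    query = """
    SELECT channel, SUM(spend) AS total_spend
    FROM campaign_facts
    WHERE activity_date >= CURRENT_DATE - INTERVAL '30 days'
    GROUP BY channel
    """
    df = pd.read_sql(query, engine)
    df.to_csv("channel_spend_last_30_days.csv", index=False)  # feed for reporting/BI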


Requirements:

•       2-5 years of strong experience working with SQL, Python, and ETL development.

•       Strong experience in writing complex SQL queries.

•       Good communication skills.

•       Good experience working with any BI tool, such as Tableau or Power BI.

•       Familiarity with various cloud technologies and their offerings within the data specialization and data warehousing.

•       Snowflake and AWS are good to have.

Minimum qualifications:

•       B.Tech./MCA or equivalent preferred.

•       At least 2 years of excellent hands-on experience in big data, ETL development, and data processing.

Optisol Business Solutions Pvt Ltd
Posted by Veeralakshmi K
Remote, Chennai, Coimbatore, Madurai
4 - 10 yrs
₹10L - ₹15L / yr
Python
SQL
Amazon Redshift
Amazon RDS
AWS Simple Notification Service (SNS)

Role Summary


As a Data Engineer, you will be an integral part of our Data Engineering team, supporting an event-driven, serverless data engineering pipeline on the AWS cloud. You will be responsible for assisting in the end-to-end analysis, development & maintenance of data pipelines and systems (DataOps), and you will work closely with fellow data engineers & production support to ensure the availability and reliability of data for analytics and business intelligence purposes.


Requirements:


·      Around 4 years of working experience in data warehousing / BI systems.

·      Strong hands-on experience with Snowflake AND strong programming skills in Python.

·      Strong hands-on SQL skills.

·      Knowledge of any of the cloud databases, such as Snowflake, Redshift, Google BigQuery, RDS, etc.

·      Knowledge of dbt for cloud databases.

·      AWS services such as SNS, SQS, ECS, Kinesis & Lambda functions, plus Docker (a minimal Lambda sketch follows this list).

·      Solid understanding of ETL processes and data warehousing concepts.

·      Familiarity with version control systems (e.g., Git/Bitbucket) and collaborative development practices in an agile framework.

·      Experience with Scrum methodologies.

·      Infrastructure build tools such as CFT / Terraform are a plus.

·      Knowledge of Denodo, data cataloguing tools & data quality mechanisms is a plus.

·      Strong team player with good communication skills.
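To picture one step of such an event-driven, serverless pipeline, here is a minimal sketch of an SQS-triggered AWS Lambda handler in Python; the message fields and the downstream Snowflake load are hypothetical.

    import json

    def handler(event, context):
        """Process a batch of SQS messages delivered to the Lambda."""
        rows = []
        for record in event["Records"]:        # SQS delivers messages in batches
            body = json.loads(record["body"])
            rows.append((body["id"], body["payload"]))
        # A real pipeline would write the rows to Snowflake/Redshift here,
        # e.g. via snowflake-connector-python.
        print(f"processed {len(rows)} messages")
        return {"batchItemFailures": []}       # no partial-batch failures to report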


Overview: OptiSol Business Solutions


OptiSol was named to this year's Best Companies to Work For list by Great Place to Work. We are a team of 500+ Agile employees with a development center in India and global offices in the US, UK (United Kingdom), Australia, Ireland, Sweden, and Dubai. In a joyful journey of 16+ years, we have built about 500+ digital solutions, and we have 200+ happy and satisfied clients across 24 countries.


Benefits of working with OptiSol


·      Great Learning & Development program

·      Flextime, Work-at-Home & Hybrid Options

·      A knowledgeable, high-achieving, experienced & fun team.

·      Spot Awards & Recognition.

·      The chance to be a part of the next success story.

·      A competitive base salary.


More than just a job, we offer an opportunity to grow. Are you the one who is looking to build your future and your dream? We have the job for you, to make that dream come true.

Information Solution Provider Company

Agency job
via Jobdost by Sathish Kumar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 7 yrs
₹10L - ₹20L / yr
PowerBI
Data modeling
SQL
SSIS
SSAS
  • Good experience with Power BI visualizations and DAX queries in Power BI
  • Experience in implementing Row-Level Security
  • Can understand data models and can implement simple-to-medium data models
  • Quick learner, able to pick up the application data design and processes
  • Expert in SQL; can analyze the current ETL/SSIS process
  • Hands-on experience in data modeling
  • Data warehouse development and work with SSIS & SSAS (good to have)
  • Can lead a team of 2-3 developers and own deliverables
Amagi Media Labs


3 recruiters
Posted by Rajesh C
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹14L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
1. 2 to 4 years of experience
2. Hands-on experience using Python, SQL, Tableau
3. Data Analyst
About Amagi (www.amagi.com): Amagi is a market leader in cloud-based media technology services for channel creation, distribution and ad monetization. Amagi’s cloud technology and managed services are used by TV networks, content owners, sports rights owners and pay TV / OTT platforms to create 24x7 linear channels for OTT and broadcast and deliver them to end consumers. Amagi’s pioneering and market-leading cloud platform has won numerous accolades and is deployed in over 40 countries by 400+ TV networks. Customers of Amagi include A+E Networks, Comcast, Google, NBC Universal, Roku, Samsung and Warner Media. This is a unique and transformative opportunity to participate in and grow a world-class technology company that changes the tenets of TV. Amagi is a private-equity-backed firm with investments from KKR (Emerald Media Fund), Premji Invest and Mayfield. Amagi has offices in New York, Los Angeles, London, New Delhi and Bangalore.

LinkedIn page: https://www.linkedin.com/company/amagicorporation

News: https://www.amagi.com/about/newsroom/amagi-clocks-120-yoy-quarterly-growth-as-channels-on-its-platform-grows-to-400/

Cofounder on YouTube: https://www.youtube.com/watch?v=EZ0nBT3ht0E

About Amagi & Growth


Amagi Corporation is a next-generation media technology company that provides cloud broadcast and targeted advertising solutions to broadcast TV and streaming TV platforms. Amagi enables content owners to launch, distribute and monetize live linear channels on Free-Ad-Supported TV and video services platforms. Amagi also offers 24x7 cloud managed services bringing simplicity, advanced automation, and transparency to the entire broadcast operations. Overall, Amagi supports 500+ channels on its platform for linear channel creation, distribution, and monetization with deployments in over 40 countries. Amagi has offices in New York (Corporate office), Los Angeles, and London, broadcast operations in New Delhi, and our Development & Innovation center in Bangalore. Amagi is also expanding in Singapore, Canada and other countries.

Amagi has seen phenomenal growth as a global organization over the last 3 years. Amagi has been a profitable firm for the last 2 years, and is now looking at investing in multiple new areas. Amagi has been backed by 4 investors - Emerald, Premji Invest, Nadathur and Mayfield. As of the fiscal year ending March 31, 2021, the company witnessed stellar growth in the areas of channel creation, distribution, and monetization, enabling customers to extend distribution and earn advertising dollars while saving up to 40% in cost of operations compared to traditional delivery models. Some key highlights of this include:

·   Annual revenue growth of 136%
·   44% increase in customers
·   50+ Free Ad Supported Streaming TV (FAST) platform partnerships and 100+ platform partnerships globally
·   250+ channels added to its cloud platform taking the overall tally to more than 500
·   Approximately 2 billion ad opportunities every month supporting OTT ad-insertion for 1000+ channels
·   60% increase in workforce in the US, UK, and India to support strong customer growth (current headcount being 360 full-time employees + Contractors)
·   5-10x growth in ad impressions among top customers
 
Over the last 4 years, Amagi has grown more than 400%. Amagi now has an aggressive growth plan for the next 3 years – to grow 10X in terms of revenue. In terms of headcount, Amagi is looking to grow to more than 600 employees over the next year. Amagi is building several key organizational processes to support the high-growth journey and has gone digital in a big way.
 
Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
In this role, you will be part of a growing, global team of data engineers who collaborate in DevOps mode to enable Merck's business with state-of-the-art technology to leverage data as an asset and to make better-informed decisions.

The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).

The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premises in Merck’s own data centers. Developing pipelines and applications on Foundry requires:

• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation (a minimal sketch follows this list)
• Familiarity with Postgres and Elasticsearch
• Familiarity with HTML, CSS, and JavaScript, and basic design/visual competency
• Familiarity with common databases (e.g. JDBC, MySQL, Microsoft SQL Server). Not all types required
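As a small illustration of PySpark-style pipeline work (outside of Foundry's own wrappers), here is a sketch; the input path, schema, and output layout are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("ingest-example").getOrCreate()

    # Hypothetical raw JSON events; Foundry wraps this pattern in its own APIs.
    raw = spark.read.json("s3://example-bucket/raw/events/")

    clean = (
        raw.filter(F.col("event_type").isNotNull())    # drop malformed events
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])
    )

    # Partitioned columnar output for downstream analytics.
    clean.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/clean/events/"
    )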

This position is project-based; you may work across multiple smaller projects or on a single large project, using an agile project methodology.

Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and unstructured – into Palantir Foundry
• Participate in the end-to-end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst for developing requirements for Foundry pipelines
• Review code developed by other data engineers and check it against platform-specific standards, cross-cutting concerns, coding and configuration standards, and the functional specification of the pipeline
• Document technical work in a professional and transparent way; create high-quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on the Foundry platform infrastructure with clearly defined checks
• Implement changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• Set up DevOps projects following Agile principles (e.g. Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems; debug problems across a full stack of Foundry and code based on Python, PySpark, and Java
• Work closely with business users and data scientists/analysts to design physical data models
Planet Spark


5 recruiters
Posted by Maneesh Dhooper
Gurugram
2 - 5 yrs
₹7L - ₹18L / yr
Data engineering
Data Engineer
Data Warehouse (DWH)
skill iconPython
SQL
Responsibilities:
  1. Create and maintain optimal data pipeline architecture
  2. Assemble large, complex data sets that meet business requirements
  3. Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
  4. Work with the Data, Analytics & Tech team to extract, arrange, and analyze data
  5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies (a minimal ETL sketch follows this list)
  6. Build analytical tools that utilize the data pipeline, providing actionable insight into key business performance metrics, including operational efficiency and customer acquisition
  7. Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture
  8. Work with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs while assisting with data-related technical issues
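One hop of such an extract-transform-load flow could look like the following Python sketch using boto3 and pandas; the bucket, key, and warehouse URL are hypothetical.

    import io

    import boto3
    import pandas as pd
    from sqlalchemy import create_engine

    # Extract: pull a raw CSV drop from S3 (hypothetical bucket/key).
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="example-raw-data", Key="signups/2024-01-01.csv")
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    # Transform: normalise emails and drop obvious duplicates.
    df["email"] = df["email"].str.strip().str.lower()
    df = df.drop_duplicates(subset=["email"])

    # Load: append into a staging table in the warehouse (hypothetical URL).
    engine = create_engine("postgresql+psycopg2://user:pass@warehouse-host/analytics")
    df.to_sql("stg_signups", engine, if_exists="append", index=False)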
Skill Requirements
  1. SQL
  2. Ruby or Python (Ruby preferred)
  3. Apache-Hadoop based analytics
  4. Data warehousing
  5. Data architecture
  6. Schema design
  7. ML
Experience Requirement
  1. Prior experience of 2 to 5 years as a Data Engineer.
  2. Ability to manage and communicate data warehouse plans to internal teams.
  3. Experience designing, building, and maintaining data processing systems.
  4. Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions.
  5. Excellent analytic skills associated with working on unstructured datasets.
  6. Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata.
Commoditize data engineering.

Agency job
via Multi Recruit by savitha Rajesh
Bengaluru (Bangalore)
9 - 20 yrs
₹40L - ₹44L / yr
skill iconPython
SQL
Lead Technical Trainer
Trainer
IT Trainer

This is the first senior person we are bringing in for this role. This person will start with the training program but will go on to build a team and will eventually also be responsible for the entire training program plus the Bootcamp.

We are looking for someone fairly senior who has experience in data + tech. At some level, we have all the technical expertise to teach you the data stack as needed, so it's not super important that you know all the tools; however, basic knowledge of the stack is a requirement. The training program covers 2 parts – Technology (our stack) and Process (how we work with clients) – both of which are super important.

  • Full-time flexible working schedule and ownership of end-to-end training
  • Self-starter who can communicate effectively and proactively
  • Function effectively with minimal supervision
  • You can train and mentor potential 5x engineers on Data Engineering skillsets
  • You can spend time on self-learning and teaching new technology when needed
  • You are an extremely proactive communicator who understands the challenges of remote/virtual classroom training and the need to over-communicate to offset those challenges.

Requirements

  • Proven experience as a corporate trainer, or a passion for teaching/providing training
  • Expertise in the Data Engineering space, with good experience in data collection, data ingestion, data modeling, data transformation, and data visualization technologies and techniques
  • Experience training working professionals on in-demand skills like Snowflake, dbt, Fivetran, Google Data Studio, etc.
  • Training/implementation experience using Fivetran, dbt Cloud, Heap, Segment, Airflow, and Snowflake is a big plus

 

TechChefs Software


2 recruiters
Posted by Shilpa Yadav
Remote, Anywhere from India
5 - 10 yrs
₹1L - ₹15L / yr
ETL
Informatica
Python
SQL

Responsibilities

  • Installing and configuring Informatica components, including high availability; managing server activations and de-activations for all environments; ensuring that all systems and procedures adhere to organizational best practices
  • Day-to-day administration of the Informatica suite of services (PowerCenter, IDS, Metadata, Glossary, and Analyst)
  • Informatica capacity planning and ongoing monitoring (e.g. CPU, memory, etc.) to proactively increase capacity as needed (a minimal monitoring sketch follows this list)
  • Manage backup and security of the Data Integration Infrastructure
  • Design, develop, and maintain all data warehouse, data marts, and ETL functions for the organization as part of an infrastructure team
  • Consult with users, management, vendors, and technicians to assess computing needs and system requirements
  • Develop and interpret organizational goals, policies, and procedures
  • Evaluate the organization's technology use and needs and recommend improvements, such as software upgrades
  • Prepare and review operational reports or project progress reports
  • Assist in the daily operations of the Architecture Team, analyzing workflow, establishing priorities, developing standards, and setting deadlines
  • Work with vendors to manage support SLAs and influence the vendor product roadmap
  • Provide leadership and guidance in technical meetings, define standards, and assist with/provide status updates
  • Work with cross-functional operations teams such as systems, storage, and network to design technology stacks
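For the capacity-monitoring item above, a minimal sketch (assuming the psutil package) might look like this; the thresholds are illustrative and not Informatica-specific – a real setup would feed such readings into an alerting system.

    import psutil

    # Illustrative alert thresholds, in percent.
    THRESHOLDS = {"cpu": 85.0, "memory": 90.0, "disk": 80.0}

    def check_capacity():
        """Sample host-level resource usage and flag anything over threshold."""
        readings = {
            "cpu": psutil.cpu_percent(interval=1),
            "memory": psutil.virtual_memory().percent,
            "disk": psutil.disk_usage("/").percent,
        }
        for resource, value in readings.items():
            if value > THRESHOLDS[resource]:
                print(f"WARNING: {resource} at {value:.1f}% "
                      f"(threshold {THRESHOLDS[resource]}%)")
        return readings

    check_capacity()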

 

Preferred Qualifications

  • Minimum 6+ years’ experience in an Informatica Engineer and Developer role
  • Minimum of 5+ years’ experience in an ETL environment as a developer
  • Minimum of 5+ years of experience in SQL coding and understanding of databases
  • Proficiency in Python
  • Proficiency in command-line troubleshooting
  • Proficiency in writing code in Perl/Shell scripting languages
  • Understanding of Java and concepts of object-oriented programming
  • Good understanding of systems, networking, and storage
  • Strong knowledge of scalability and high availability
Cemtics


1 recruiter
Posted by Tapan Sahani
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹5L - ₹12L / yr
Big Data
Spark
Hadoop
SQL
skill iconPython

JD:

Required Skills:

  • Intermediate to expert-level hands-on programming in one programming language – Java, Python, PySpark, or Scala
  • Strong practical knowledge of SQL
  • Hands-on experience with Spark/Spark SQL (a minimal sketch follows this list)
  • Data structures and algorithms
  • Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications based on Big Data technologies
  • Experience with Big Data application tools, such as Hadoop, MapReduce, Spark, etc.
  • Experience with NoSQL databases like HBase, etc.
  • Experience with the Linux OS environment (shell script, AWK, SED)
  • Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a big RDBMS (100+ tables)
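As a small illustration of the Spark/Spark SQL skill above, here is a PySpark sketch joining a fact table to a dimension; the table paths and columns are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sparksql-example").getOrCreate()

    # Hypothetical datasets registered as temporary SQL views.
    spark.read.parquet("hdfs:///data/orders").createOrReplaceTempView("orders")
    spark.read.parquet("hdfs:///data/customers").createOrReplaceTempView("customers")

    # Complex-relation query: total order amount per customer, top 10.
    top_customers = spark.sql("""
        SELECT c.customer_id, c.region, SUM(o.amount) AS total_amount
        FROM orders o
        JOIN customers c ON o.customer_id = c.customer_id
        GROUP BY c.customer_id, c.region
        ORDER BY total_amount DESC
        LIMIT 10
    """)
    top_customers.show()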