Health care administration Jobs in Hyderabad


Apply to 11+ Health care administration Jobs in Hyderabad on CutShort.io. Explore the latest Health care administration Job opportunities across top companies like Google, Amazon & Adobe.

One of the largest IT Companies in India

Agency job
via Innovalus Technologies by Martin Antony
Bengaluru (Bangalore), Hyderabad
8 - 10 yrs
₹20L - ₹30L / yr
MEDITECH
Medical Data
Clinical Data
Health care administration
Implementation
  • Perform complex and varied application support; build, maintain, and modify within a large environment.
  • Identify issues, evaluate possible solutions, and recommend the most appropriate action.
  • Design, build, test, and document solution-specific information, and perform related work as required.
  • Demonstrate functional knowledge of the relevant clinical applications (OE, PCS, PHA, EDM, ITS, RAD, LAB, MIC, BBK, PTH).
  • Demonstrate functional knowledge of Clinical Documentation.
  • Must have hands-on build and troubleshooting experience.
  • Must be able to quickly identify and resolve complex system issues.
  • Must be able to build from technical specifications.

 

Basic Qualifications:

  • Implementation and support experience within a MEDITECH Environment
  • Bachelor's degree in Business, Healthcare Administration, Communication, Marketing, or a related field, or equivalent relevant work experience

Preferred Qualifications:

  • Knowledge of ITIL Incident, Problem, and Change management
  • Working knowledge of Agile, product, and project methodologies
master works
Posted by Spandana Bomma
Hyderabad
3 - 7 yrs
₹6L - ₹15L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+8 more

Job Description:

Responsibilities:

* Work on real-world computer vision problems

* Write robust industry-grade algorithms

* Leverage OpenCV, Python and deep learning frameworks to train models.

* Use Deep Learning technologies such as Keras, Tensorflow, PyTorch etc.

* Develop integrations with various in-house or external microservices.

* Must have experience in deployment practices (Kubernetes, Docker, containerization, etc.) and model compression practices

* Research latest technologies and develop proof of concepts (POCs).

* Build and train state-of-the-art deep learning models to solve Computer Vision related problems, including, but not limited to:

* Segmentation

* Object Detection

* Classification

* Object Tracking

* Visual Style Transfer

* Generative Adversarial Networks

* Work alongside other researchers and engineers to develop and deploy solutions for challenging real-world problems in the area of Computer Vision

* Develop and plan Computer Vision research projects, defining the scope of work, including formal research objectives and outcomes

* Provide specialized technical / scientific research to support the organization on different projects for existing and new technologies
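
As a small illustration of the object-detection work listed above, most detection pipelines score predicted boxes against ground truth with intersection-over-union (IoU); a minimal sketch, assuming boxes are given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the top-left corners, min of the bottom-right.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 2x2 boxes overlapping in a 1x1 region: IoU = 1 / (4 + 4 - 1) = 1/7.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```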

Skills:

* Object Detection

* Computer Science

* Image Processing

* Computer Vision

* Deep Learning

* Artificial Intelligence (AI)

* Pattern Recognition

* Machine Learning

* Data Science

* Generative Adversarial Networks (GANs)

* Flask

* SQL

A fast-growing Big Data company

Agency job
via Careerconnects by Kumar Narayanan
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering
+6 more

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data integrations and Data Ops

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

➢ 7 years of work experience with ETL, data modelling, and data architecture.

➢ Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.

➢ Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions.

➢ Orchestration using Airflow.


Technical Experience:

➢ Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines.

➢ Experience building data pipelines and applications to stream and process large datasets at low latency.


➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python, Pyspark.

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will be working in collaboration with other teams; good communication is a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
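
As a hedged illustration of the data-ingestion responsibilities above: Glue/PySpark jobs typically land data in S3 under Hive-style partition prefixes that Glue crawlers and Athena recognize as table partitions. A minimal sketch of that key-building logic (bucket and table names are hypothetical):

```python
from datetime import date


def s3_partition_prefix(bucket: str, table: str, run_date: date) -> str:
    """Build a Hive-style partition prefix (year=/month=/day=) under an S3 bucket.

    Both Glue crawlers and Athena treat this layout as table partitions,
    so downstream queries can prune by date.
    """
    return (
        f"s3://{bucket}/{table}/"
        f"year={run_date.year}/month={run_date.month:02d}/day={run_date.day:02d}/"
    )


prefix = s3_partition_prefix("analytics-lake", "events", date(2024, 1, 5))
# "s3://analytics-lake/events/year=2024/month=01/day=05/"
```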


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes.

➢ Expert-level skills in writing and optimizing SQL.

➢ Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, Dynamo DB, Athena, Glue in AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, EC2 clusters highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Product and service-based company

Agency job
via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake
Python
Spark
+13 more

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages: Python & PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Ingest data from different sources that expose data through different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies.

  • Process/transform data using various technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform.

  • Develop automated data quality checks to make sure the right data enters the platform and to verify the results of calculations.

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices to the team.
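
The automated data-quality checks mentioned above can start as simply as counting missing required fields per column before a batch is loaded. A minimal, dependency-free sketch (column names are hypothetical examples from a consumer-finance context):

```python
def quality_report(rows, required_columns):
    """Count missing (None or empty-string) values per required column.

    `rows` is an iterable of dicts, as produced by most ingestion readers.
    """
    missing = {col: 0 for col in required_columns}
    for row in rows:
        for col in required_columns:
            value = row.get(col)
            if value is None or value == "":
                missing[col] += 1
    # A real pipeline would fail or quarantine the batch when any count is non-zero.
    return missing


rows = [
    {"customer_id": "c1", "loan_amount": 1000},
    {"customer_id": "", "loan_amount": None},
]
report = quality_report(rows, ["customer_id", "loan_amount"])
# {"customer_id": 1, "loan_amount": 1}
```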

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of at least one language: Java, Scala, Python, C#

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:

  • Data mining/programming tools (e.g. SAS, SQL, R, Python)

  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

  • Data visualization (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

DAZN

Posted by Shivani Sharma
Hyderabad
4 - 8 yrs
Best in industry
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
+7 more

Is your next career move to work in a team that uses data, reporting, and analytical skills to answer business questions and make DAZN a data-driven company?

 

DAZN is a tech-first sport streaming platform that reaches millions of users every week. We are challenging a traditional industry and giving power back to the fans. Our new Hyderabad tech hub will be the engine that drives us forward to the future. We’re pushing boundaries and doing things no-one has done before. Here, you have the opportunity to make your mark and the power to make change happen - to make a difference for our customers. When you join DAZN you will work on projects that impact millions of lives thanks to your critical contributions to our global products

 

This is the perfect place to work if you are passionate about technology and want an opportunity to use your creativity to help grow and scale a global range of IT systems, Infrastructure, and IT Services. Our cutting-edge technology allows us to stream sports content to millions of concurrent viewers globally across multiple platforms and devices. DAZN’s Cloud based architecture unifies a range of technologies in order to deliver a seamless user experience and support a global user base and company infrastructure.

 

This role will be based in our brand-new Hyderabad office. Join us in India’s beautiful “City of Pearls” and bring your ambition to life.

 

Responsibilities:

 

  • Communicate with different stakeholders, such as Ad Tech engineers and Product Owners.
  • Work extensively in Google Analytics; strong SQL knowledge is expected.
  • Strong analytical skills.

 

Key Competencies:

 

  • 4-8 years of experience as a Data Analyst
  • Advanced Microsoft Excel skills
  • Strong command of Google Analytics
  • Reporting-platform UI experience (Tableau, Looker, etc.)
  • Experience with VAST tags, pixel trackers, etc.
  • Experience with DSPs & third-party ad platforms (GAM, YoSpace, etc.)

 

At DAZN, we bring ambition to life. We are innovators, game-changers and pioneers. So, if you want to push boundaries and make an impact, DAZN is the place to be.

 

As part of our team, you'll have the opportunity to make your mark and the power to make change happen. We're doing things no-one has done before, giving fans and customers access to sport anytime, anywhere. We're using world-class technology to transform sports and revolutionise the industry and we're not going to stop.

 

 

Persistent Systems

Agency job
via Milestone Hr Consultancy by Haina khan
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Big Data
Scala
Spark
Hadoop
Python
+1 more
Greetings,
 
We have an urgent requirement for the post of Big Data Architect at a reputed MNC.
 
Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the big data space.
  • Experience creating Spark applications using Scala to process data.
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in Spark job performance tuning and optimization.
  • Experience processing data using Kafka/Python.
  • Experience and understanding in configuring Kafka topics to optimize performance.
  • Proficient in writing SQL queries to process data in a data warehouse.
  • Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
  • Experience with AWS services like EMR.
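
The Kafka topic tuning mentioned above usually begins with how record keys map to partitions, since that determines per-key ordering and consumer load balance. A minimal, dependency-free stand-in sketch (Kafka's default partitioner actually hashes with murmur2; CRC32 is used here purely for illustration):

```python
import zlib


def partition_for(key: str, num_partitions: int) -> int:
    """Map a record key to a topic partition, mimicking keyed partitioning.

    Deterministic hashing means the same key always lands on the same
    partition, preserving per-key ordering; more partitions spread load
    across more consumers.
    """
    return zlib.crc32(key.encode("utf-8")) % num_partitions


p = partition_for("user-42", 6)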
Edubridge Learning

Posted by Hemal Thakker
Mumbai, Pune, Hyderabad, Gurugram
2 - 6 yrs
₹4L - ₹7L / yr
Data Analytics
Python
R Programming
SAS
Machine Learning (ML)
+1 more

JOB DESCRIPTION

  • 2 to 6 years of experience in imparting technical training/mentoring
  • Must have very strong concepts of data analytics
  • Must have hands-on and training experience with Python, advanced Python, R programming, SAS, and machine learning
  • Must have good knowledge of SQL and advanced SQL
  • Should have basic knowledge of statistics
  • Should be good with operating systems (GNU/Linux) and network fundamentals
  • Must have knowledge of MS Office (Excel/Word/PowerPoint)
  • Self-motivated and passionate about technology
  • Excellent analytical and logical skills; a team player
  • Must have exceptional communication and presentation skills
  • Good aptitude skills are preferred

Responsibilities:                                                                                         

  • Ability to quickly learn any new technology and impart the same to other employees
  • Ability to resolve all technical queries of students
  • Conduct training sessions and drive the placement driven quality in the training
  • Must be able to work independently without the supervision of a senior person
  • Participate in reviews/ meetings                                                                                                          

Qualification:                                                                               

  • UG: Any Graduate in IT/Computer Science, B.Tech/B.E. – IT/ Computers
  • PG: MCA/MS/MSC – Computer Science
  • Any Graduate/ Post graduate, provided they are certified in similar courses

ABOUT EDUBRIDGE

EduBridge is an Equal Opportunity employer and we believe in building a meritorious culture where everyone is recognized for their skills and contribution.

Launched in 2009, EduBridge Learning is a workforce development and skilling organization with 50+ training academies in 18 states pan India. The organization has been providing skilled manpower to corporates for over 10 years and is a leader in its space. We have trained over a lakh semi-urban and economically underprivileged youth in relevant life skills and industry-specific skills, and provided placements in over 500 companies. Our latest product, E-ON, is committed to complementing our training delivery with an online training platform, enabling students to learn anywhere and anytime.

To know more about EduBridge please visit: http://www.edubridgeindia.com/

You can also visit us on Facebook (https://www.facebook.com/Edubridgelearning/) and LinkedIn (https://www.linkedin.com/company/edubridgelearning/) for our latest initiatives and products.

Advanced technology to solve business problems (A1)

Agency job
via Multi Recruit by Ranjini A R
Hyderabad
2 - 4 yrs
₹10L - ₹15L / yr
Python
PySpark
Knowledge in AWS
  • Desire to explore new technology and break new ground.
  • Are passionate about Open Source technology, continuous learning, and innovation.
  • Have the problem-solving skills, grit, and commitment to complete challenging work assignments and meet deadlines.

Responsibilities

  • Engineer enterprise-class, large-scale deployments, and deliver Cloud-based Serverless solutions to our customers.
  • You will work in a fast-paced environment with leading microservice and cloud technologies, and continue to develop your all-around technical skills.
  • Participate in code reviews and provide meaningful feedback to other team members.
  • Create technical documentation.
  • Develop thorough Unit Tests to ensure code quality.

Skills and Experience

  • Advanced skills in troubleshooting and tuning AWS Lambda functions developed with Java and/or Python.
  • Experience with event-driven architecture design patterns and practices
  • Experience in database design and architecture principles and strong SQL abilities
  • Message brokers like Kafka and Kinesis
  • Experience with Hadoop, Hive, and Spark (either PySpark or Scala)
  • Demonstrated experience owning enterprise-class applications and delivering highly available distributed, fault-tolerant, globally accessible services at scale.
  • Good understanding of distributed systems.
  • Candidates will be self-motivated and display initiative, ownership, and flexibility.
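
A minimal sketch of the serverless, event-driven work described above: an AWS Lambda handler for an SNS trigger. The event shape follows the standard SNS record envelope; the processing step is a placeholder, and the message fields are hypothetical:

```python
import json


def handler(event, context):
    """AWS Lambda entry point for an SNS trigger: process each published message."""
    processed = 0
    for record in event.get("Records", []):
        # SNS delivers the published payload as a JSON string under Sns.Message.
        message = json.loads(record["Sns"]["Message"])
        # Placeholder for real work: validate, enrich, write downstream, etc.
        _ = message
        processed += 1
    return {"processed": processed}


# Locally simulated invocation with one SNS record:
result = handler(
    {"Records": [{"Sns": {"Message": json.dumps({"order_id": 7})}}]},
    None,
)
# {"processed": 1}
```

Because the handler is a plain function, it can be unit-tested locally with fake events before any deployment, which fits the "thorough unit tests" requirement above.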

 

Preferred Qualifications

  • AWS Lambda function development experience with Java and/or Python.
  • Lambda triggers such as SNS, SES, or cron.
  • Databricks
  • Cloud development experience with AWS services, including:
  • IAM
  • S3
  • EC2
  • AWS CLI
  • API Gateway
  • ECR
  • CloudWatch
  • Glue
  • Kinesis
  • DynamoDB
  • Java 8 or higher
  • ETL data pipeline building
  • Data Lake Experience
  • Python
  • Docker
  • MongoDB or similar NoSQL DB.
  • Relational Databases (e.g., MySQL, PostgreSQL, Oracle, etc.).
  • Gradle and/or Maven.
  • JUnit
  • Git
  • Scrum
  • Experience with Unix and/or macOS.
  • Immediate Joiners

Nice to have:

  • AWS / GCP / Azure Certification.
  • Cloud development experience with Google Cloud or Azure

 

Agency job
via UpgradeHR by Sangita Deka
Hyderabad
6 - 10 yrs
₹10L - ₹15L / yr
Big Data
Data Science
Machine Learning (ML)
R Programming
Python
+2 more
It is one of the largest communication technology companies in the world. They operate America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.
Helical IT Solutions
Posted by Niyotee Gupta
Hyderabad
1 - 5 yrs
₹3L - ₹8L / yr
ETL
Big Data
TAC
PL/SQL
Relational Database (RDBMS)
+1 more

ETL Developer – Talend

Job Duties:

  • The ETL Developer is responsible for the design and development of ETL jobs that follow standards and best practices and are maintainable, modular, and reusable.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • The ETL Developer will analyze and review complex object and data models and the metadata repository in order to structure the processes and data for better management and efficient access.
  • Work on multiple projects, delegating work to junior analysts to deliver projects on time.
  • Train and mentor junior analysts, building their proficiency in the ETL process.
  • Prepare mapping documents to extract, transform, and load data, ensuring compatibility with all tables and requirement specifications.
  • Experience in ETL system design and development with Talend / Pentaho PDI is essential.
  • Create quality rules in Talend.
  • Tune Talend / Pentaho jobs for performance optimization.
  • Write relational (SQL) and multidimensional (MDX) database queries.
  • Functional knowledge of Talend Administration Center / Pentaho Data Integrator, job servers and load-balancing setup, and all their administrative functions.
  • Develop, maintain, and enhance unit test suites to verify the accuracy of ETL processes, dimensional data, OLAP cubes, and various forms of BI content including reports, dashboards, and analytical models.
  • Exposure to the MapReduce components of Talend / Pentaho PDI.
  • Comprehensive understanding and working knowledge of data warehouse loading, tuning, and maintenance.
  • Working knowledge of relational database theory and dimensional database models.
  • Creating and deploying Talend / Pentaho custom components is an added advantage.
  • Java knowledge is nice to have.

Skills and Qualification:

  • BE, B.Tech / MS Degree in Computer Science, Engineering or a related subject.
  • 3+ years of experience.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • Ability to work independently.
  • Ability to handle a team.
  • Good written and oral communication skills.
FR Tech Innovations
Posted by Kumar Kamaepalli
Hyderabad
4 - 6 yrs
₹6L - ₹10L / yr
Matlab
C++
Image processing
Vision
C
Role: To research, implement, and test computer-vision solutions to meet the requirements of the products developed at FR Tech. Work in collaboration with, and under the guidance of, a senior computer vision lead, and actively participate in technical discussions while arriving at conclusive solutions.

Experience: 4-6 years

Skills & Experience Required:

  • Experience in the field of computer vision
  • Solid theoretical background in 3D/2D image processing, computer vision, and machine learning
  • Experience in computer vision for object recognition and tracking, image registration, and image calibration and correction
  • In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers
  • Working experience in deep learning algorithms/approaches (neural networks); platforms like Caffe/TensorFlow and Matlab will be considered an added advantage
  • Knowledge of machine learning for pattern recognition
  • Knowledge of 3D, 2D, and RGB-D imaging devices and their software integration with larger systems
  • Ability to understand, optimize, and debug imaging algorithms
  • Experience in at least one of the programming languages C, C++, and Python
  • Should have worked on Ubuntu OS
  • Demonstrated outstanding ability to perform innovative and significant research in the form of technical papers, theses, or patents