Plotly Jobs in Bangalore (Bengaluru)


Apply to 11+ Plotly Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Plotly Job opportunities across top companies like Google, Amazon & Adobe.

Novo

at Novo

2 recruiters
Posted by Dishaa Ranjan
Bengaluru (Bangalore), Gurugram
4 - 6 yrs
₹25L - ₹35L / yr
SQL
Python
pandas
Scikit-Learn
TensorFlow
+1 more

About Us: 

Small businesses are the backbone of the US economy, comprising almost half of the GDP and the private workforce. Yet, big banks don’t provide the access, assistance and modern tools that owners need to successfully grow their business. 


We started Novo to challenge the status quo—we’re on a mission to increase the GDP of the modern entrepreneur by creating the go-to banking platform for small businesses (SMBs). Novo is flipping the script of the banking world, and we’re excited to lead the small business banking revolution.


At Novo, we’re here to help entrepreneurs, freelancers, startups and SMBs achieve their financial goals by empowering them with an operating system that makes business banking as easy as iOS. We developed modern bank accounts and tools to help save time and increase cash flow. Our unique product integrations enable easy access to tracking payments, transferring money internationally, managing business transactions and more. We’ve made a big impact in a short amount of time, helping thousands of organizations access powerfully simple business banking.



We are looking for a Senior Data Scientist who is enthusiastic about using data and technology to solve complex business problems. If you're passionate about leading and helping to architect and develop thoughtful data solutions, then we want to chat. Are you ready to revolutionize the small business banking industry with us?


About the Role:


  • Build and manage predictive models focused on credit risk, fraud, conversions, churn, consumer behaviour, etc. (see the sketch after this list)
  • Provide best practices and direction for data analytics and business decision-making across multiple projects and functional areas
  • Implement performance optimizations and best practices for scalable data models, pipelines and modelling
  • Resolve blockers and help the team stay productive
  • Take part in building the team and iterating on hiring processes
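As a rough illustration of the kind of predictive modelling this covers (a minimal sketch, not Novo's actual stack; the CSV path, feature names and label are hypothetical), a churn/credit-risk style classifier in scikit-learn might look like:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per customer with a binary "churned" label.
df = pd.read_csv("customers.csv")
features = ["tenure_months", "avg_monthly_balance", "txn_count_90d", "overdraft_count"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# AUC is a common headline metric for imbalanced risk/churn problems.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```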

Requirements for the Role:


  • 4+ years of experience in data science roles focused on managing data processes, modelling and dashboarding
  • Strong experience in Python and SQL, and an in-depth understanding of modelling techniques
  • Experience working with pandas, scikit-learn, and visualization libraries like Plotly and Bokeh (see the sketch after this list)
  • Prior experience with credit risk modelling is preferred
  • Deep knowledge of Python for writing scripts that manipulate data and generate automated reports
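On the visualization side, a minimal Plotly Express sketch (the data frame and column names below are made up for illustration):

```python
import pandas as pd
import plotly.express as px

# Hypothetical monthly cash-flow figures for a single business account.
df = pd.DataFrame({
    "month": pd.date_range("2023-01-01", periods=6, freq="MS"),
    "cash_flow": [120, 95, 140, 160, 150, 175],
})

fig = px.line(df, x="month", y="cash_flow", title="Monthly cash flow")
fig.show()  # or fig.write_html("cash_flow.html") as part of an automated report
```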

How We Define Success:


  • Expand access to data-driven decision-making across the organization
  • Solve problems in risk, marketing, growth and customer behaviour through analytics models that increase efficacy

Nice To Have, but Not Required:

  • Experience with dashboarding libraries like Dash (see the sketch after this list) and exposure to CI/CD
  • Exposure to big data tools like Spark, and some core tech knowledge around APIs, data streaming, etc.
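Since Dash is called out above, here is a minimal dashboard sketch (assumes Dash 2.x; the figure and numbers are placeholders):

```python
import plotly.express as px
from dash import Dash, dcc, html

# Placeholder model-performance figures.
fig = px.bar(x=["credit risk", "fraud", "churn"], y=[0.81, 0.76, 0.69],
             labels={"x": "model", "y": "AUC"})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Model performance overview"),
    dcc.Graph(figure=fig),
])

if __name__ == "__main__":
    app.run(debug=True)  # older Dash releases use app.run_server(...)
```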


Novo values diversity as a core tenet of the work we do and the businesses we serve. We are an equal opportunity employer, regardless of race, religion, ethnicity, national origin, citizenship, gender, gender identity, sexual orientation, age, veteran status, disability, genetic information or any other protected characteristic.

Read more
Curl


Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹25L / yr
Data Visualization
PowerBI
ETL
Business Intelligence (BI)
Data Analytics
+6 more
Main Responsibilities:

  • Work closely with different Front Office and Support Function stakeholders, including but not restricted to Business Management, Accounts, Regulatory Reporting, Operations, Risk, Compliance and HR, on all data collection and reporting use cases
  • Collaborate with Business and Technology teams to understand enterprise data, and create an innovative narrative to explain, engage and enlighten regular staff members as well as executive leadership with data-driven storytelling
  • Solve data consumption and visualization through a data-as-a-service distribution model
  • Articulate findings clearly and concisely for different target use cases, including through presentations, design solutions and visualizations
  • Perform ad hoc / automated report generation tasks using Power BI, Oracle BI and Informatica
  • Perform data access/transfer and ETL automation tasks using Python, SQL, OLAP/OLTP, RESTful APIs, and IT tools (CFT, MQ-Series, Control-M, etc.) – see the sketch after this list
  • Provide support and maintain the availability of BI applications irrespective of the hosting location
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy and availability, and provide incident-related communications promptly
  • Work to strict deadlines on high-priority regulatory reports
  • Serve as a liaison between business and technology to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning
  • Work for the APAC Chief Data Office and coordinate with a fully decentralized team across different locations in APAC and the global HQ (Paris)
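As a hedged illustration of the Python/SQL/REST automation mentioned in the list above (the endpoint, fields and target database are hypothetical, not the bank's actual systems):

```python
import pandas as pd
import requests
from sqlalchemy import create_engine

# Hypothetical REST endpoint exposing trade data for a reporting date.
resp = requests.get(
    "https://api.example.com/v1/trades",
    params={"date": "2024-01-31"},
    timeout=30,
)
resp.raise_for_status()
trades = pd.DataFrame(resp.json()["results"])

# Light transformation before loading into the reporting store.
trades["trade_date"] = pd.to_datetime(trades["trade_date"])
summary = trades.groupby(["desk", "trade_date"], as_index=False)["notional"].sum()

# SQLite used here as a stand-in; the real target would be the reporting RDBMS.
engine = create_engine("sqlite:///reporting.db")
summary.to_sql("daily_desk_summary", engine, if_exists="replace", index=False)
```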

General Skills:
  • Excellent knowledge of RDBMS and hands-on experience with complex SQL is a must; some experience with NoSQL and Big Data technologies like Hive and Spark would be a plus
  • Experience with industrialized reporting on BI tools like PowerBI and Informatica
  • Knowledge of data-related industry best practices in the highly regulated CIB industry, and experience with regulatory report generation for financial institutions
  • Knowledge of industry-leading data access, data security, Master Data and Reference Data Management, and establishing data lineage
  • 5+ years of experience in Data Visualization / Business Intelligence / ETL developer roles
  • Ability to multi-task and manage various projects simultaneously
  • Attention to detail
  • Ability to present to Senior Management and ExCo; excellent written and verbal communication skills
Read more
Global Media Agency - A client of Merito


Agency job
via Merito by Merito Talent
Gurugram, Bengaluru (Bangalore), Mumbai
4 - 9 yrs
Best in industry
Machine Learning (ML)
Data Science
media analytics
SQL
Python
+4 more

Our client combines Adtech and Martech platform strategy with data science & data engineering expertise, helping our clients make advertising work better for people.

 
Key Role:
  • Act as the primary day-to-day contact on analytics for agency-client leads
  • Develop bespoke analytics proposals for presentation to agencies and clients, for delivery within the teams
  • Ensure delivery of projects and services across the analytics team meets our stakeholders' requirements (time, quality, cost)
  • Work hands-on with platforms to perform data pre-processing, involving both data transformation and data cleaning
  • Ensure data quality and integrity
  • Interpret and analyse data problems
  • Build analytic systems and predictive models
  • Increase the performance and accuracy of machine learning algorithms through fine-tuning
  • Visualize data and create reports
  • Experiment with new models and techniques
  • Align data projects with organizational goals


Requirements

  • Min 6 - 7 years’ experience working in Data Science
  • Prior experience as a Data Scientist within digital media is desirable
  • Solid understanding of machine learning
  • A degree in a quantitative field (e.g. economics, computer science, mathematics, statistics, engineering, physics, etc.)
  • Experience with SQL/ Big Query/GMP tech stack / Clean rooms such as ADH
  • A knack for statistical analysis and predictive modelling
  • Good knowledge of R, Python
  • Experience with SQL, MySQL and PostgreSQL databases
  • Knowledge of data management and visualization techniques
  • Hands-on experience with BI/visual analytics tools like PowerBI, Tableau or Data Studio
  • Evidence of technical comfort and good understanding of internet functionality desirable
  • Analytical pedigree - evidence of having approached problems from a mathematical perspective and working through to a solution in a logical way
  • Proactive and results-oriented
  • A positive, can-do attitude with a thirst to continually learn new things
  • An ability to work independently and collaboratively with a wide range of teams
  • Excellent communication skills, both written and oral
Read more
British Telecom
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Data warehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: Resolving production issues to fix P1-4 service issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with the use of a wide range of programming concepts, and is also aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand and evaluate both obvious and subtle commercial risks, especially in relation to a programme.

Experience you would be expected to have:
• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but willingness to learn essential
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, with experience working with data (ETL) – see the sketch below
• DevOps: understanding of how to work in a DevOps and agile way / versioning / automation / defect management – mandatory
• Agile methodology: knowledge of Jira
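For the GCP/ETL points above, a minimal sketch using the google-cloud-bigquery Python client (the project, dataset and table names are invented; assumes application-default credentials are configured):

```python
from google.cloud import bigquery

# Hypothetical project; authentication comes from application-default credentials.
client = bigquery.Client(project="my-analytics-project")

sql = """
    SELECT customer_id, COUNT(*) AS event_count
    FROM `my-analytics-project.raw.events`
    WHERE event_date = '2024-01-31'
    GROUP BY customer_id
"""

# result().to_dataframe() needs the pandas/db-dtypes extras installed.
df = client.query(sql).result().to_dataframe()
df.to_parquet("daily_event_counts.parquet")  # a downstream load step would pick this up
```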
Read more
Streamoid Technologies Pvt Ltd
Agency job
via HyreSpree by HyreSpree Team
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹20L / yr
Natural Language Processing (NLP)
PyTorch
Python
Java
Solr
+1 more
Skill Set:
  • 4+ years of experience. Solid understanding of Python, Java and general software development skills (source code management, debugging, testing, deployment, etc.).
  • Experience working with Solr and Elasticsearch. Experience with NLP technologies and the handling of unstructured text. Detailed understanding of text pre-processing and normalisation techniques such as tokenisation, lemmatisation, stemming, POS tagging, etc. (see the sketch after this list).
  • Prior experience implementing traditional ML solutions for classification, regression or clustering problems. Expertise in text analytics (sentiment analysis, entity extraction, language modelling) and associated sequence learning models (RNN, LSTM, GRU).
  • Comfortable working with deep-learning libraries (e.g. PyTorch).
  • Candidates can even be relatively fresh, with 1 or 2 years of experience. IIIT, BITS Pilani and top 5 local colleges and universities are preferred.
  • A Master's candidate in machine learning.
  • Candidates can be sourced from Mu Sigma and Manthan.
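A minimal sketch of the text pre-processing steps named above, using NLTK (resource names may vary slightly between NLTK releases):

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize

# One-time downloads; names current as of recent NLTK releases.
for resource in ("punkt", "wordnet", "averaged_perceptron_tagger"):
    nltk.download(resource)

text = "The cats were chasing mice across the gardens."
tokens = word_tokenize(text)                                  # tokenisation
stems = [PorterStemmer().stem(t) for t in tokens]             # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]   # lemmatisation
pos_tags = nltk.pos_tag(tokens)                               # POS tagging

print(tokens, stems, lemmas, pos_tags, sep="\n")
```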
Read more
Marktine

at Marktine

1 recruiter
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹20L / yr
Big Data
Spark
PySpark
Data engineering
Data Warehouse (DWH)
+5 more

Azure – Data Engineer

  • At least 2 years of hands-on experience working with an Agile data engineering team on big data pipelines using Azure in a commercial environment.
  • Experience dealing with senior stakeholders/leadership
  • Understanding of Azure data security and encryption best practices. [ADFS/ACLs]

Databricks – experience writing in and using Databricks, using Python to transform and manipulate data (a minimal sketch follows).
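A minimal sketch of that kind of PySpark transformation (paths and column names are made up; on Databricks a SparkSession is normally already available as `spark`):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-transform").getOrCreate()

# Hypothetical raw orders table mounted from the lake.
orders = spark.read.parquet("/mnt/raw/orders")

cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

daily = cleaned.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))
daily.write.mode("overwrite").parquet("/mnt/curated/daily_orders")
```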

Data Factory – experience using Data Factory in an enterprise solution to build data pipelines. Experience calling REST APIs.

Synapse/data warehouse – experience using Synapse/a data warehouse to present data securely and to build and manage data models.

Microsoft SQL Server – we’d expect the candidate to have come from a SQL/data background and progressed into Azure.

PowerBI – experience with this is preferred.

Additionally

  • Experience using GIT as a source control system
  • Understanding of DevOps concepts and application
  • Understanding of Azure Cloud costs/management and running platforms efficiently
Read more
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements in order to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve the data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehousing
Fluency with Python
Experience developing web services using REST approaches (see the sketch after this list)
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
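For the REST web services point above, a minimal Flask sketch (the route and payload are hypothetical; a real service would call a model or feature store):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/v1/score", methods=["POST"])
def score():
    # Hypothetical scoring endpoint: echo the customer id with a dummy score.
    payload = request.get_json(force=True)
    return jsonify({"customer_id": payload.get("customer_id"), "score": 0.42})

if __name__ == "__main__":
    app.run(port=8080)
```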
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus 
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
Read more
IT Consulting, System Integrator & Software Services Company


Agency job
via Jobdost by Ankitha Vyas
Chennai, Bengaluru (Bangalore)
3 - 8 yrs
₹5L - ₹12L / yr
Tableau
SQL
PL/SQL
Responsibilities


In this role, the candidate will be responsible for developing Tableau reports, writing effective and scalable code, and improving the functionality of existing reports/systems.

·       Design stable, scalable code.

·       Identify potential improvements to the current design/processes.

·       Participate in multiple project discussions as a senior member of the team.

·       Serve as a coach/mentor for junior developers.


Minimum Qualifications

·       3 - 8 Years of experience

·       Excellent written and verbal communication skills

 

Must have skills

·       Meaningful work experience

·       Extensive work on the BI reporting tool Tableau for developing reports that fulfil end-user requirements.

·       Experienced in interacting with business users to analyze business processes and requirements, and in refining requirements into visualizations and reports.

·       Must have knowledge of the selection of appropriate data visualization strategies (e.g., chart types) for specific use cases. Ability to showcase complete dashboard implementations that demonstrate standard visual methodologies (e.g., color themes, visualization layout, interactivity, drill-down capabilities, filtering, etc.).

·       You should be an independent player and have experience working with senior leaders.

·       Able to explore options and suggest new solutions and visualization techniques to the customer.

·       Experience crafting joins, including joins with custom SQL, and blending data from different data sources using Tableau Desktop.

·       Experience using sophisticated calculations in Tableau Desktop (Aggregate, Date, Logical, String, Table and LOD expressions).

·       Working with relational data sources (like Oracle / SQL Server / DB2) and flat files.

·       Optimizing user queries and dashboard performance.

·       Knowledge in SQL, PL/SQL.

·       Knowledge of crafting DB views and materialized views.

·       Excellent verbal and written communication skills and interpersonal skills are required.

·       Excellent documentation and presentation skills; should be able to build business process mapping documents and functional solution documents, and own the acceptance/sign-off process end to end (E2E).

·       Ability to make the right graph choices, use the data blending feature, and connect to several DB technologies.

·       Must stay up to date on new and upcoming visualization technologies.

 

Preferred location: Chennai (priority) / Bengaluru

Read more
Ingrainhub

at Ingrainhub

1 recruiter
Posted by Karthik Kulkarni
Bengaluru (Bangalore)
3 - 7 yrs
₹3L - ₹12L / yr
Python
MS-Excel
R Programming
Good knowledge of SQL and Microsoft Excel, plus one programming language among SAS, Python or R
Read more
LendingKart

at LendingKart

5 recruiters
Posted by Mohammed Nayeem
Bengaluru (Bangalore), Ahmedabad
2 - 5 yrs
₹2L - ₹13L / yr
Python
Data Science
SQL
Roles and Responsibilities:
  • Mine large volumes of credit behaviour data to generate insights around product holdings and monetization opportunities for cross-sell
  • Use data science to size the opportunity and product potential for the launch of any new products/pilots
  • Build propensity models using heuristics and campaign performance to maximize efficiency (see the sketch after this list)
  • Conduct portfolio analysis and establish key metrics for cross-sell partnerships
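A stripped-down sketch of a cross-sell propensity model of the sort described above (the data source and feature names are hypothetical):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical campaign history: did the customer accept the cross-sell offer?
df = pd.read_csv("campaign_history.csv")
features = ["bureau_score", "avg_utilization", "months_on_book", "prior_offers_accepted"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["accepted_offer"], test_size=0.25, random_state=7
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Propensity scores rank customers for the next campaign wave.
propensity = model.predict_proba(X_test)[:, 1]
deciles = pd.qcut(propensity, 10, labels=False, duplicates="drop")
print("AUC:", roc_auc_score(y_test, propensity))
```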

Desired profile/Skills:
  • 2-5 years of experience with a degree in any quantitative discipline such as Engineering, Computer Science, Economics, Statistics or Mathematics
  • Excellent problem-solving and comprehensive analytical skills – ability to structure ambiguous problem statements, perform detailed analysis and derive crisp insights
  • Solid experience using Python and SQL
  • Prior work experience in the financial services space would be highly valued

Location: Bangalore/ Ahmedabad
Read more
Bengaluru (Bangalore)
3 - 12 yrs
₹3L - ₹25L / yr
Java
Python
Spark
Hadoop
MongoDB
+3 more
We are a start-up in India seeking excellence in everything we do, with an unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data Analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions keeping the user at the center of everything. As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and problem-solve complex technical customer challenges.

Expectations and Tasks:
  • Total of 7+ years of experience, with a minimum of 2 years in Hadoop technologies like HDFS, Hive and MapReduce
  • Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
  • Experience with core data science techniques such as regression, classification or clustering, and experience with deep learning frameworks
  • Experience in NLP, R and Python
  • Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
  • Ability to communicate clearly and concisely across technology and business teams
  • Excellent problem-solving and technical troubleshooting skills
  • Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical Skills: Core Java, Multithreading, Collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at [email protected]. I will respond to you within 24 hours. Online assessments and pre-employment screening are part of the selection process.
Read more