Speaker

Anand Bagmar

Software Quality Evangelist

Pune, India

Anand is a Software Quality Evangelist with 20+ years in the software testing field. He is passionate about shipping quality products, and specializes in Product Quality strategy & execution, as well as building automated testing tools, infrastructure, and frameworks.

Anand writes testing-related blogs and has built open-source tools related to Software Testing – WAAT (Web Analytics Automation Testing Framework), TaaS (for automating integration testing of disparate systems), and TTA (Test Trend Analyzer).

You can follow him on Twitter @BagmarAnand, connect with him on LinkedIn at https://in.linkedin.com/in/anandbagmar or visit essenceoftesting.com.

Area of Expertise

  • Information & Communications Technology
  • Health & Medical
  • Finance & Banking
  • Consumer Goods & Services
  • Media & Information

Topics

  • Quality
  • Testing and Quality
  • Quality Engineering
  • Quality Assurance
  • Code Quality
  • Software Quality
  • Agile Testing
  • API Testing
  • Android testing
  • Automated Testing
  • Quality & Testing
  • Android Automation Testing
  • Exploratory Testing
  • Automation
  • Automation Testing
  • Automation & CI/CD
  • Test Automation
  • CI
  • CD
  • Continuous Testing
  • Continuous Delivery
  • Continuous Integration
  • Continuous Deployment
  • open source
  • Open Source Software
  • Selenium
  • Appium
  • DevOps
  • DevOps & Automation
  • Cloud & DevOps
  • Development
  • Mobile Development
  • Developing Android Apps
  • DevOps Transformation
  • Front-End Development
  • Android Development
  • Web Development
  • Agile software development
  • Android Software Development
  • Software Development
  • WebdriverIO
  • Selenium WebDriver
  • Analytics
  • Software Analytics
  • Big Data Machine Learning AI and Analytics
  • Web Apps
  • Mobile web
  • Web Applications
  • Modern Web and UX
  • selenium
  • webdriver
  • automation
  • Mobile Testing
  • Mobile Accessibility
  • Web & Mobile
  • Mobile Apps
  • MobileDevOps
  • Mobile Banking
  • Web Accessibility
  • PWA
  • Testing Automation
  • Web
  • progressive web apps
  • Web Frontend
  • Modern Web
  • Azure Automation
  • Mobile
  • mobile app development
  • PWAs
  • Android
  • Android Tools
  • Android & iOS Application Engineering
  • Android Engineering
  • Android Applications
  • Android Design
  • Firebase
  • Android Things / IoT
  • Progressive Web Apps
  • Machine Learning and AI
  • Robotics and Drone Technologies
  • TensorFlow
  • Testing
  • Software testing
  • UI Testing
  • Performance Testing
  • Test-Driven Development
  • QA&Testing
  • Test management
  • A/B testing
  • Automation Technology
  • Cloud Automation
  • build-automation
  • Codeless Automation Testing
  • OpenSource
  • Agile Games
  • Agile Lean
  • Scrum & Agile
  • Agile Mindset
  • Agile Methodologies
  • DevOps Agile Methodology & Culture
  • Agile Engineering
  • Agile Architecture
  • Process Automation
  • Software Development
  • Tools and Frameworks
  • DevTools
  • Developer Tools
  • Tooling

Streamlining End-to-End Testing Automation with Azure DevOps Build & Release Pipelines

Automating end-to-end (e2e) tests for Android and iOS native apps, and web apps, within Azure build and release pipelines poses several challenges. This session dives into the key hurdles and the repeatable solutions implemented across multiple teams at a leading Indian telecom disruptor, renowned for its affordable 4G/5G services, digital platforms, and broadband connectivity.

Challenge #1. Ensuring Test Environment Consistency: Establishing a standardized test execution environment across hundreds of Azure DevOps agents is crucial for achieving dependable testing results. This uniformity must seamlessly span from Build pipelines to various stages of the Release pipeline.

Challenge #2. Coordinated Test Execution Across Environments: Executing distinct subsets of tests using the same automation framework across diverse environments, such as the build pipeline and specific stages of the Release Pipeline, demands flexible and cohesive approaches.

Challenge #3. Testing on Linux-based Azure DevOps Agents: Conducting tests, particularly for web and native apps, on Azure DevOps Linux agents lacking browser or device connectivity presents specific challenges in attaining thorough testing coverage.

This session delves into how these challenges were addressed by:
1. Automating the setup of essential dependencies to ensure a consistent testing environment.

2. Creating standardized templates for executing API tests, API workflow tests, and end-to-end tests in the Build pipeline, streamlining the testing process.

3. Implementing task groups in Release pipeline stages to facilitate the execution of tests, ensuring consistency and efficiency across deployment phases.

4. Deploying browsers within Docker containers for web application testing, enhancing the portability and scalability of testing environments (see the sketch after this list).

5. Leveraging diverse device farms dedicated to Android, iOS, and browser testing to cover a wide range of platforms and devices.

6. Integrating AI technology, such as Applitools Visual AI and Ultrafast Grid, to automate test execution and validation, improving accuracy and efficiency.

7. Utilizing an AI/ML-powered central test automation reporting server through platforms like reportportal.io, providing consolidated and real-time insights into test performance and issues.
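To make point 4 concrete: on CI agents without a display or attached devices, web tests can target a browser running inside a Docker container via Selenium's RemoteWebDriver. The following is a minimal Java sketch under assumed details (grid URL, container image, options); it is not the exact setup from this case study.

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DockerizedBrowserTest {
    public static void main(String[] args) throws Exception {
        // Assumes a browser container was started beforehand, e.g.:
        //   docker run -d -p 4444:4444 selenium/standalone-chrome
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless=new"); // Linux agents have no display
        WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), options);
        try {
            driver.get("https://example.com");
            System.out.println("Loaded: " + driver.getTitle());
        } finally {
            driver.quit(); // release the containerized browser session
        }
    }
}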

These solutions not only facilitate comprehensive testing across platforms but also promote the principles of shift-left testing, enabling early feedback, implementing quality gates, and ensuring repeatability. By adopting these techniques, teams can effectively automate and execute tests, accelerating software delivery while upholding high-quality standards across Android, iOS, and web applications.

Note: While the case study (challenges & solutions) refers to Azure DevOps, the solutions can be easily implemented in any CI tool.

Enabling Continuous Deployment in Enterprises with Testing

The key objective of any organization is to provide, and derive, value from the products / services it offers. To achieve this, it needs to be able to deliver its offerings in the quickest time possible, and with good quality!

In such a fast moving environment, CI (Continuous Integration) and CD (Continuous Delivery) are now a necessity and not a luxury!

There are various practices that organizations need to implement to enable CD. Changes in requirements (a reality in all projects) need to be managed better. Along with this, processes and practices need to be tuned based on the team's capability, skills, and distribution.

Testing (automation) is one of the important practices that needs to be set up correctly for CD to be successful. But, this is tricky and requires a lot of discipline, rigor and hard work by all the team members involved in the product delivery.

All the challenges faced in smaller organizations get amplified in Enterprises. There are various reasons for this, but the most common are scale, complexity of the domain, complexity of integrations (with internal / external systems), involvement of various partners / vendors, long product life-cycles, etc.

In such situations, the Testing complexity and challenges also increase exponentially!

Learn, via a case study of an Enterprise - a large Bank - the Testing approach required to take it on the journey to achieving CD.

Testing & Release strategy for Native Android & iOS Apps

Experimentation and quick feedback are the key to the success of any product - while, of course, ensuring a good-quality product with new and better features is shipped to users at a decent / regular frequency.

In this session, we will discuss how to enable experimentation, get quick feedback, and reduce risk for the product by using a case study of a media / entertainment domain product used by millions of users across 10+ countries - i.e. we will discuss the Testing Strategy and the Release process of an Android & iOS Native app that will help enable CI & CD.

To understand these techniques, we will quickly recap the challenges and quirks of testing Native Apps and how that is different from Web / Mobile Web Apps.

The majority of the discussion will focus on different techniques / practices related to Testing & Releases that can be established to achieve our goals, some of which are listed below:

* Functional Automation approach - identify and automate user scenarios, across supported regions
* Testing approach - what to test, when to test, how to test!
* Manual Sanity before release - and why it was important!
* Staged roll-outs via Google’s Play Store and Apple’s App Store
* Extensive monitoring of the release as users come on board, and comparing the key metrics (ex: consumer engagement) with prior releases
* Understanding Consumer Sentiments (Google’s Play Store / Apple’s App Store review comments, Social Media scans, Issues reported to / by Support, etc.)

You can find links to some of my earlier conference videos here - https://www.youtube.com/user/abagmar/videos, and my slides here - http://slideshare.net/abagmar

Eradicate Flaky Tests

Have you heard of “flaky tests”?

Many articles, blog posts, podcasts, and conference talks discuss what "flaky tests" are and how to avoid them.

Some of the ideas proposed are:

* Automatically rerun failed tests a couple of times, and hope they pass
* Automatically retry certain operations in the test (ex: retry click/checking for visibility of elements, etc.) and hope the test can proceed

Unfortunately, I do not agree with the above ideas, and I would term these as anti-patterns for fixing flaky/intermittent tests.

We need to understand the reasons for flaky/intermittent tests. Some of these reasons could be issues like:

* timing issues (e.g. page loading taking time) - see the sketch after this list
* network issues
* browser-timing issues (different for different browsers/devices)
* data related (dynamic, changing, validity, etc.)
* poor locator strategy (ex: weird & hard-wired xpaths/locators)
* environment issues
* actual issues in the product-under-test
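For the timing category, the fix is to wait for an explicit application condition instead of re-running the test and hoping. A minimal Java/Selenium sketch of this idea (the locator and timeout are illustrative assumptions, not from a specific product):

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWaits {
    // Wait for a concrete condition (element is clickable) rather than using
    // a fixed sleep or a blind retry, so slow page loads stop causing
    // intermittent failures.
    static WebElement checkoutButton(WebDriver driver) {
        return new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.elementToBeClickable(By.id("checkout")));
    }
}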

In this session, with the help of demos, we will look at the following techniques you can use to reduce/eliminate the flakiness of your test execution:

* Reduce the number of UI tests
* Use Visual Assertions instead of Functional Assertions
* Remove external dependencies via Intelligent Virtualization

Demos will be done using the following, and sample code will be shared with the participants:

* Specmatic
* teswiz
* Applitools Eyes

This will be done as a live demo. 45-60 min is a good time for this session. Internet connectivity will be required.

You can find links to some of my earlier conference videos here - https://www.youtube.com/user/abagmar/videos, and my slides here - http://slideshare.net/abagmar

Next Generation Functional & Visual Testing powered by AI - a workshop!

The Test Automation Pyramid is not a new concept.

The top of the pyramid is our UI / end-2-end functional tests, which simulate end-user behavior and interactions with the product-under-test.

While Automation helps validate the functionality of your product, aspects of UX validation can only be seen and captured by the human eye and are hence mostly a manual activity. This is an area where AI & ML can truly help.

With everyone wanting to be Agile and make quick releases, the look & feel / UX validation, which is typically a manual, slow, and error-prone activity, quickly becomes a huge bottleneck.

In addition, any UX-related issues that crop up cause huge brand-value and revenue loss, may lead to social trolling, and - worse - dilute your user base.

In this hands-on workshop, using numerous examples, we will explore:

* Why Automated Visual Validation is essential to be part of your Test Strategy
* How Visual AI increases the coverage of your functional testing, while reducing the code, and increasing stability of your automated tests
* Potential solutions / options for Automated Visual Testing, with pros & cons of each
* How an AI-powered tool, Applitools Eyes, can solve this problem
* Hands-on look at Applitools Visual AI and how to get started using it (a minimal sketch follows this list)
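For orientation, this is roughly what a visual check with Applitools Eyes looks like in Java. It follows the public Applitools Java SDK tutorials; the app and test names are placeholders, and an APPLITOOLS_API_KEY environment variable is assumed.

import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualCheckSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes(); // picks up APPLITOOLS_API_KEY from the environment
        try {
            eyes.open(driver, "Demo App", "Login page renders correctly");
            driver.get("https://example.com/login");
            // One visual checkpoint replaces many fine-grained functional assertions.
            eyes.check("Login page", Target.window().fully());
            eyes.close();
        } finally {
            eyes.abortIfNotClosed(); // clean up if the test failed mid-way
            driver.quit();
        }
    }
}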

Refer here for machine setup instructions - https://anandbagmar.github.io/assets/pdfs/GettingStartedWithVisualAI-Workshop.pdf

You can find links to some of my earlier conference videos here - https://www.youtube.com/user/abagmar/videos, and my slides here - http://slideshare.net/abagmar

This can be done as a 2-hour or 4-hour workshop.

Increase E2E coverage by Intelligent Virtualisation & Visual AI

Have you heard of “flaky tests”?

There are many articles, blog posts, podcasts, and conference talks about what "flaky tests" are and how to avoid them.

Some of the ideas proposed are:

* Automatically rerun failed tests a couple of times, and hope they pass
* Automatically retry certain operations in the test (ex: retry click/checking for visibility of elements, etc.) and hope the test can proceed

Unfortunately, I do not agree with the above ideas, and I would term these as anti-patterns for fixing flaky/intermittent tests.

We need to understand the reasons for flaky/intermittent tests. Some of these reasons could be issues like:
* timing issues (e.g. page loading taking time)
* network issues
* browser-timing issues (different for different browsers/devices)
* data related (dynamic, changing, validity, etc.)
* poor locator strategy (ex: weird & hard-wired xpaths / locators)
* environment issues
* actual issues in the product-under-test

In this session, with the help of demos, we will look at the following techniques you can use to reduce/eliminate the flakiness of your test execution:
* Reduce the number of UI tests
* Remove external dependencies via Intelligent Virtualization (a generic sketch of this idea follows the tools list below)
* Use Visual Assertions instead of Functional Assertions

The demo will be done using the following, and sample code will be shared with the participants:
* [Specmatic](https://specmatic.in)
* [teswiz](https://github.com/znsio/teswiz)
* [Applitools Visual AI](https://applitools.com)
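The demos use Specmatic for the virtualization piece. As a generic, tool-agnostic illustration of the idea (this is not Specmatic's API), a flaky external dependency can be replaced by a local stub that always returns a known response - here using only the JDK's built-in HTTP server, with a hypothetical endpoint:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class OrderStatusStub {
    public static void main(String[] args) throws Exception {
        // Serve a deterministic response for a hypothetical order-status API,
        // so tests no longer fail on network or third-party flakiness.
        HttpServer stub = HttpServer.create(new InetSocketAddress(9000), 0);
        stub.createContext("/orders/123/status", exchange -> {
            byte[] body = "{\"status\":\"DELIVERED\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        stub.start(); // point the product-under-test at http://localhost:9000
    }
}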

Learning Outcomes:
* Explore the real reason for flaky tests
* Implement Visual Assertions instead of Functional Assertions to increase test coverage, while reducing code and increasing test stability
* Use Intelligent and Dynamic Virtualization to remove external dependencies

While there are many reasons for flaky tests, I will focus on a few specific aspects of the same in this session.
This will be a demo-based session to show a concrete example of how one can reduce flaky tests in a complex product environment. The tools used in the demo are mainly to show the concept - teams can use whatever tools that work for them.

I have been speaking at conferences and events for the past 13 years, and have spoken at some of the following conferences:
Agile USA
Agile India
Selenium Conference
Appium Conference
StarWest
TestBash
vodQA
TestTalks
Future of Testing
StepIn
Unicom

You can find links to some of my earlier conference videos here - https://www.youtube.com/user/abagmar/videos, and my slides here - http://slideshare.net/abagmar

Automating real-user scenarios across multi-apps and multi-devices

Simulating real-user scenarios as part of your automation is a solved problem. You need to understand the domain, the product, the user, and then define and implement your scenario.

But there are some types of scenarios that are complex to implement. These are the real-world scenarios having multiple personas (users) interacting with each other to use some business functionalities. These personas may be on the same platform or different (web / mobile-web / native apps / desktop applications).

Example scenarios:
- How do you check that more than 1 person is able to join a Zoom / Teams meeting? And that they can interact with each other?
- How do you check if the end-2-end scenario that involves multiple users, across multiple apps works as expected?
Given user places order on Amazon (app / browser)
When delivery agent delivers the order (using Delivery app)
Then user can see the order status as "Delivered"

Even though we will automate and test each application in such interactions independently, or test each persona's scenarios independently, we need a way to build confidence that these multiple personas and applications can work together. These scenarios are critical to automate!

In this session, I will demonstrate teswiz, an open-source framework that can easily automate these multi-user, multi-app, multi-device scenarios. I will also show how to run these tests locally and in CI pipelines.

Example: Multi-user, Multi-device test scenario
@multiuser-android-web @videoRequest
Scenario: Host (on Android) requests Guest (on Web) to start video
Given "Host" signs up (using API), logs-in and starts an instant meeting on "android"
And "Guest" joins the meeting from "web"
When "Host" asks "Guest" to turn on the "video"
Then "Guest" should be able to select "later" from video request
And "Host" should receive a toast saying, "Guest" will switch on the "video" later
When "Host" asks "Guest" to turn on "video"
Then "Guest" should be able to select "allow" from video request

Example: Multi-user, Multi-app, Multi-device test scenario
@multiuser-android
Scenario: Verify my order status when the delivery-agent delivers the order successfully
Given "I" order a "One Plus 9" phone from "amazon" using the "android app"
When "delivery-agent" delivers the order using the "delivery-agent" "android app"
Then "I" can see the item delivered in Order Status

Teswiz enables and guides you to implement your automated tests while adhering to the principles of test automation - e.g., independent tests that run in parallel, against multiple environments, using environment-specific test data - and generates rich, contextual reports (and test execution trends) in ReportPortal.

Test coverage is increased by using Applitools Visual AI, along with Applitools Ultrafast Test Cloud.

In addition, teswiz takes away the pain of managing your browsers and android / ios / windows devices for automation. The automated tests can run on local browsers / devices, or against any cloud provider, such as HeadSpin, BrowserStack, SauceLabs, or pCloudy.

The following features make teswiz unique:
- Multi-user scenario automation between all platforms (android, iOS, Web, Windows desktop applications)
- Managing browsers / devices and parallel execution automatically
- Completely configurable options - meaning no code change is required to run different combinations of tests, on demand
- Integration with Applitools Visual AI and Applitools Ultrafast Test Cloud
- Rich contextual reports (including screenshots, browser / device logs) and trend analysis via ReportPortal.io

- This will be a live demo-based session.
- I will need Internet access for the same.
- Target audience: QA, SDET, developers, leads, and managers involved with building and testing applications that have multiple personas interacting with each other - ex: chat, video calls, etc.

Testing & Automation: Measuring Effectiveness & ROI

Automation is considered the silver bullet that will enable teams to shift left and accomplish CI/CD. However, how confident are you with the value added by the automation implementation in your organization and teams?

In this interactive discussion, let's explore the following factors together:

- What are the commonly used metrics to measure effectiveness and ROI from your Testing & Automation activities?
-- Ex: code coverage, defect leakage, test execution status, test execution time, # of tests, etc.
- Do these metrics help in identifying blockers and root causes of issues?
- How do these metrics help to make decisions to better product quality?
- Discuss some other metrics that can help each role take meaningful decisions to make product quality better!
-- Ex: Cycle Time, CLT, defect analysis with RCA, MTTR, Functional coverage, etc.

A modern look at AI-powered Cross-Browser Testing

The traditional approach to cross-browser testing involves ensuring compatibility and functionality across different browsers and versions. However, modern browsers generally comply with W3C specifications, meaning that if a functionality works in one browser, it will likely work in others as well. The main differences lie in rendering, which still needs to be validated.

Therefore, the traditional cross-browser testing strategy may need reconsidering and updating. It is inefficient to run the same tests on all browsers. Instead, we can leverage AI to simplify and streamline the cross-browser testing process, increasing testing coverage while running tests only once.

In this session, we will explore how AI-powered Applitools Execution Cloud and Applitools Ultrafast Grid can make cross-browser testing more manageable and seamless. By utilizing AI, we can enhance testing efficiency and achieve broader coverage, ultimately improving the overall quality of web applications.
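As a rough sketch of what this looks like with the Applitools Java SDK: the test executes once locally, and the Ultrafast Grid renders the captured snapshots across the configured browsers and devices. Class names follow the public Applitools tutorials; the browser matrix, concurrency, and URL are illustrative assumptions.

import com.applitools.eyes.BrowserType;
import com.applitools.eyes.selenium.Configuration;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import com.applitools.eyes.visualgrid.model.DeviceName;
import com.applitools.eyes.visualgrid.services.RunnerOptions;
import com.applitools.eyes.visualgrid.services.VisualGridRunner;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class UltrafastGridSketch {
    public static void main(String[] args) {
        VisualGridRunner runner = new VisualGridRunner(new RunnerOptions().testConcurrency(5));
        Eyes eyes = new Eyes(runner);
        Configuration config = eyes.getConfiguration();
        // The test runs once; the grid re-renders it on each configuration below.
        config.addBrowser(1280, 800, BrowserType.CHROME);
        config.addBrowser(1280, 800, BrowserType.FIREFOX);
        config.addBrowser(1280, 800, BrowserType.SAFARI);
        config.addDeviceEmulation(DeviceName.iPhone_X);
        eyes.setConfiguration(config);

        WebDriver driver = new ChromeDriver();
        try {
            eyes.open(driver, "Demo App", "Cross-browser home page check");
            driver.get("https://example.com");
            eyes.check("Home page", Target.window().fully());
            eyes.closeAsync();
        } finally {
            driver.quit();
            eyes.abortAsync();
            // Waits for the grid to finish rendering all configurations.
            System.out.println(runner.getAllTestResults(false));
        }
    }
}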

As icing on the cake, we can use Applitools' Self-Healing capability to reduce test flakiness related to locator changes and increase the stability of the tests.

I need WiFi access for the demos.

Metrics to measure the effectiveness of your engineering processes, practices, and implementation

Business teams implement product ideas to provide value to their customers. The engineering team needs to deliver on this promise by implementing the ideas correctly (with the right quality) and in time, to provide the said value to customers.

All roles typically use standard metrics to measure whether they are on track to deliver this objective. And if they are not, we need to know at the earliest, so the plan can be updated accordingly.

In this session, we will focus on answering these questions:
- What metrics do you use to measure the effectiveness of your engineering processes, practices, and implementation?
- Are these metrics effective? Do they help you make decisions to get better?
- Which metrics (example: CLT, MTTR, etc.), if used, can help you and your team get better?

This is an interactive discussion with the participants to understand their pain points and challenges, and jointly brainstorm solutions.

Devfest Mumbai, 2023 Sessionize Event

December 2023 Mumbai, India

Android Worldwide January 2023 Sessionize Event

January 2023

Devfest Pune 2022 Sessionize Event

December 2022 Pune, India

Mobile DevOps Summit Sessionize Event

November 2022

Android Worldwide October 2022 Sessionize Event

October 2022

DevOpsDays Zurich 2022 Sessionize Event

May 2022 Winterthur, Switzerland

Global AI Bootcamp 2022 Sessionize Event

March 2022 Madrid, Spain

Web Day 2022 Sessionize Event

March 2022
