Panel Discussion: The Future of Testing - Impact of AI in Quality Assurance and Beyond [Spartans Summit 2024]

LambdaTest
7 min read · Apr 16, 2024

As organizations increasingly adopt AI-driven solutions, understanding their implications for testing methodologies becomes paramount. The integration of AI brings about a fundamental change in how software testing is approached, providing remarkable capabilities for automation, predictive analysis, and intelligent decision-making. This transformation requires a deep understanding of how AI affects different aspects of testing, such as test design, execution, analysis, and optimization. Furthermore, as AI advances, organizations must stay updated on emerging trends and best practices to fully leverage its potential in guaranteeing software quality and reliability.

In this panel discussion of Spartans Summit 2024, the panelists (Priscilla Bilavendran, Naveen Khunteta, Ibironke Yekinni, Parveen Khan, Steve Caprara, and Mahathee Dandibhotla) discuss the future of testing and how AI is impacting Quality Assurance.

In this session, the host, Priscilla Bilavendran, facilitates a dynamic exchange of ideas, encouraging lively discussion and debate. Her expertise and professionalism are evident as she navigates through various topics, ensuring that all participants have the opportunity to contribute their thoughts and expertise.

The panel clears up common doubts among testers and shares ideas and experiences about implementing AI by discussing how AI can improve QA. The panel showcases how AI can benefit businesses and industries. Through this conversation, they hope to make AI adoption in testing more accessible and practical.

If you couldn’t catch all the sessions live, don’t worry! You can access the recordings conveniently by visiting the LambdaTest YouTube Channel.

About the Panel

This panel includes some of the brightest minds in the industry to delve into the fascinating topic of AI in quality assurance and the future of testing.

  • Priscilla Bilavendran (Host) is an experienced quality engineering leader with over a decade of experience in software testing. Currently serving as a Team Lead at Fanum ID Services Malaysia, she oversees testing processes and collaborates with cross-functional teams to implement effective quality assurance strategies. She is known for her strategic mindset and dedication to continuous improvement in software testing.
  • Naveen Khunteta is a seasoned quality engineering professional with over a decade of experience in the field. Currently serving as the Director of Quality Engineering at a leading software development company, he is known for his expertise in test automation, performance testing, and quality assurance strategies. With a passion for leveraging innovative technologies to enhance testing processes, he has contributed significantly to advancing quality engineering practices in his organization.
  • Ibironke Yekinni is a highly skilled software testing engineer with a quality assurance and test automation background. With several years of experience in the industry, Ibironke has worked across various domains, including finance, healthcare, and e-commerce. Currently employed as a Senior QA Analyst at a multinational corporation, she specializes in designing and implementing comprehensive testing frameworks to ensure the delivery of high-quality software products. Ibironke is a valuable asset to any testing team because of her attention to detail and analytical mindset.
  • Parveen Khan is a dynamic software development professional focusing on quality assurance and software testing. With a career spanning over eight years, she has held key roles in leading technology companies, where she has spearheaded various testing initiatives and quality improvement projects. Currently working as a Test Manager at a prominent IT consulting firm, she oversees the testing process and ensures adherence to quality standards. Her proactive approach and dedication to continuous improvement make her a trusted leader in software testing.
  • Steve Caprara is an experienced software engineer specializing in quality assurance and test automation. With over a decade of experience in the industry, Steve has a proven track record of delivering high-quality software solutions for diverse clients and industries. Currently serving as a Senior QA Engineer at a leading technology company, Steve is responsible for designing and implementing robust testing strategies to identify and mitigate software defects. Known for his technical expertise and problem-solving skills, Steve is a valuable asset to any software development team.
  • Mahathee Dandibhotla is a seasoned quality assurance professional with extensive experience in software testing and quality engineering. With a background in computer science and engineering, Mahathee has a strong foundation in software development methodologies and testing best practices. Currently employed as a QA Lead at a global IT services company, Mahathee oversees the testing process for complex software projects, ensuring the delivery of high-quality products to clients. Her strategic approach to quality assurance and attention to detail have earned her recognition as a top performer in the field.

Let’s delve into this insightful session.

Embracing Dynamic Element Locators and Context-Aware Testing

The discussion begins with questions posed by Priscilla, the host of the session. Each panelist, including the host, shares their experiences, perspectives, or thoughts on those questions.

Priscilla initiates the discussion by asking, “How can testers adapt to using dynamic element locators in automation testing?”

In response to this question, panelist Ibironke Yekinni shares her insights into adopting dynamic element locators in automation testing and emphasizes the significance of context-aware testing. She highlights the necessity for testers to move beyond static locators and adapt their scripts to accommodate dynamic changes in software environments, user interactions, and system states. By harnessing AI-driven tools, testers can enhance efficiency and accuracy while ensuring robust test coverage across various scenarios.
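The idea of moving beyond static locators can be sketched in plain Python. This is a minimal illustration, not any specific tool's API: the "DOM" is mocked as a dictionary, and the selectors are invented for the example. Real self-healing layers (and tools like Selenium or Playwright under them) apply the same fallback idea against a live page, often ranking candidates with learned models.

```python
# A sketch of the "dynamic locator" idea: instead of one brittle
# selector, a test keeps an ordered list of candidate selectors and
# falls back until one matches the current DOM. The DOM here is a
# mock dict of selector -> element for illustration only.

def resolve_locator(dom, candidates):
    """Return the first candidate selector present in the DOM, or None."""
    for selector in candidates:
        if selector in dom:
            return selector
    return None

# The submit button's id changed between releases; the test still
# finds the element because old and new selectors are both listed.
page = {"button[data-test=submit]": "<button>", "input#email": "<input>"}
found = resolve_locator(page, ["#submit-btn", "button[data-test=submit]"])
print(found)  # → button[data-test=submit]
```

The key design point is that the locator list encodes history: when the UI changes, the new selector is appended rather than replacing the old one, so the same script runs against both old and new builds.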

The discussion progresses with Priscilla posing another question: “Why is context-aware testing necessary, and how can AI help?”

In response, Ibironke highlights real-world examples illustrating how AI has significantly improved the efficiency, accuracy, and innovation of testing processes. She elaborates on tools like Selenium and Playwright for automated testing, and on AI-driven solutions for test data generation and visual testing, such as Applitools. These examples underscore the transformative potential of AI in streamlining testing efforts and delivering high-quality software products.
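Context-aware testing, as discussed here, can be sketched in plain Python as well. This is a hedged illustration under assumed names (the menu items, roles, and feature flags are invented): the same check adapts its expectation to the context the application is running in, rather than hard-coding one expected state.

```python
# Context-aware testing sketched as plain Python: the expected UI
# state depends on the app context (user role, feature flags), so
# one test covers several system states without duplication.

def expected_menu_items(context):
    """Expected navigation entries for the given app context."""
    items = ["Home", "Reports"]
    if context.get("role") == "admin":
        items.append("User Management")
    if context.get("feature_flags", {}).get("beta_dashboard"):
        items.append("Dashboard (Beta)")
    return items

def check_menu(rendered, context):
    """True if the rendered menu matches what this context should show."""
    return rendered == expected_menu_items(context)

admin_ctx = {"role": "admin", "feature_flags": {"beta_dashboard": False}}
print(check_menu(["Home", "Reports", "User Management"], admin_ctx))  # → True
```

AI tooling pushes this further by inferring the relevant context dimensions (environments, user journeys, prior failures) instead of requiring testers to enumerate them by hand.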

As the conversation continues, Priscilla asks the panelists to share examples of how AI has improved testing efficiency.

Concrete Examples of AI’s Impact on Testing Efficiency

AI is revolutionizing testing practices, enhancing both efficiency and accuracy. The discussion turns to concrete examples, led by Priscilla’s question: “Can you share concrete examples of how AI has improved testing efficiency, accuracy, or innovation?”

In response, panelists Naveen, Ibironke, Parveen, Steve, and Mahathee, along with the host Priscilla, share their perspectives. The panelists highlight how tools like Selenium and Playwright can be augmented with AI capabilities to enhance automated testing. AI-driven layers built on these tools can intelligently interact with web elements, identify dynamic locators, and adapt to changes in software environments. By integrating AI into automated testing frameworks, testers can enhance efficiency and accuracy in their test execution, ultimately leading to higher-quality software products.

Furthermore, the panelists emphasize another example of AI’s impact: the use of AI-driven solutions for test data generation and visual testing, exemplified by platforms like Applitools. These platforms leverage sophisticated AI algorithms to automate the creation of test data sets, detect visual anomalies, and validate the graphical user interface (GUI) of software applications. By incorporating AI into these processes, testers can streamline their workflows, improve defect detection efficiency, and ensure a consistent user experience across different devices and platforms.
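The mechanics of automated test data generation can be sketched with the standard library alone. This is a deliberately simple stand-in for what AI-driven platforms do: the record shape, field names, and value ranges here are assumptions for illustration, and a seeded generator is used so that any failure it uncovers is reproducible.

```python
import random
import string

# A sketch of automated test data generation: a seeded generator
# producing structurally valid user records for data-driven tests.
# AI-driven platforms generate far richer, domain-shaped data; this
# shows only constraint-respecting generation and reproducibility.

def make_user(rng):
    """Build one synthetic user record within the assumed constraints."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "username": name,
        "email": f"{name}@example.com",
        "age": rng.randint(18, 90),  # stays inside the valid range
    }

rng = random.Random(42)  # fixed seed, so failing inputs can be replayed
users = [make_user(rng) for _ in range(3)]
assert all(18 <= u["age"] <= 90 for u in users)
assert all(u["email"].endswith("@example.com") for u in users)
print(len(users))  # → 3
```

Seeding is the practical detail worth copying: when a generated record exposes a defect, rerunning with the same seed regenerates the exact input, which keeps data-driven failures debuggable.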

Skills Development for Testers in an AI-Driven Landscape

As the discussion progresses, the host encourages the panelists to share their personal and professional insights on the essential skills that testers should focus on developing.

Priscilla initiates the discussion with a question for the panelists: “Based on your personal and professional experience, what skills do testers need to evolve to adapt to an AI-driven future?”

In response, Naveen, Ibironke, Parveen, Steve, Mahathee, and Priscilla herself highlight the essential skills required for testers to excel in an AI-centric landscape. They emphasize the criticality of continuous learning, effective collaboration, and clear communication among testers.

Moreover, proficiency in programming/scripting and strong domain knowledge are crucial for successfully navigating AI-driven environments. Furthermore, ethical responsibility in AI practices, encompassing data privacy, transparency, and accountability, is important to ensure integrity in testing processes.

Following the interactive discussion on skill development for testers, the host encourages the panelists to share their perspectives on the feasibility of implementing the strategies in an AI-driven context.

Strategies for Adapting to an AI-Driven Future

Priscilla poses another question to the panelists: “What are some practical strategies for testers to adapt to an AI-driven future?”

In response, Naveen, Ibironke, Parveen, Steve, Mahathee, and Priscilla provide valuable insights into practical strategies for testers to navigate an AI-driven future. They emphasize the importance of embracing AI tools while ensuring adherence to ethical standards and regulatory requirements during testing. Moreover, they highlight the proactive stance required for learning and integrating AI technologies into testing workflows. This proactive approach enables testers to stay abreast of advancements, effectively utilize AI to improve testing practices, and uphold integrity and compliance with ethical guidelines.

That’s a Wrap

As the exploration of the AI-driven future of software testing concludes, it becomes evident that testers face both challenges and opportunities in embracing AI technologies. By embracing dynamic element locators, leveraging AI-driven tools, and honing their skills, testers can drive innovation, efficiency, and quality in testing processes. With a commitment to continuous learning, collaboration, and ethical responsibility, testers can confidently navigate the path forward in an AI-driven world.

The key takeaways from this session include embracing dynamic element locators and context-aware testing, which are essential for testers to adapt to dynamic software environments effectively. Tools like Selenium and Playwright, combined with AI-driven solutions for test data generation and visual testing, have tangibly improved testing efficiency and accuracy.

Continuous learning, collaboration, and programming proficiency emerge as crucial skills for testers in an AI-driven landscape. Maintaining ethical responsibility in AI practices while leveraging AI tools is paramount for ensuring data privacy and accountability. Testers can effectively navigate the evolving software testing landscape by honing their skills and staying updated on AI developments.

Did this panel discussion answer your questions? If you have any further inquiries, please feel free to drop them on the LambdaTest Community.
