AI is becoming an important QA tool, helping with faster scenario generation, risk detection, and test planning. At the online TestConf, Arbaz Surti demonstrated how effective prompts using roles, contexts, and output formats can help create clear, relevant, and actionable test scenarios. AI can augment testers, but human judgment is still required to ensure relevance and quality.
AI will be part of the QA toolkit, and testers who learn how to form prompts effectively will use AI to speed up scenario generation, uncover risks faster, and focus their time on higher-value work, Surti explained.
For my Baskin-Robbins project, I needed to test how menu item availability was synchronized with an ordering system. We gave the AI detailed prompts describing our application and asked it to generate edge cases. As a result, it surfaced scenarios I hadn’t considered, including one where an item marked as “in stock” in the app could actually be out of stock in the store.
By discovering this, Surti said, the company was able to prevent serious bugs that could have caused customers to place orders that couldn’t be fulfilled or stores to deal with unnecessary refunds.
Prompt engineering is an extension of what we’re already good at: asking good questions. If you give an AI a vague prompt, you will usually get a vague answer, Surti said. Applying the same discipline that testers bring to writing test cases will yield results much closer to what testers actually need: clear, relevant, reliable, and actionable test scenarios.
Creating effective prompts for test cases is similar to creating clear requirements. The more context you provide, the better the output. He suggested using a simple structure like this:
- Define a role: for example, a senior QA engineer, a performance tester focused on load and stress scenarios, a security tester skilled at identifying vulnerabilities in authentication flows, or a QA lead responsible for risk-based prioritization.
- Add context: What features or systems are you testing? Are there any rules or constraints?
- Set the output format.
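As an illustration, the role/context/format structure above can be assembled programmatically before being sent to a model. A minimal sketch; the function name and prompt text here are my own, not from the talk:

```python
def build_test_prompt(role: str, context: str, output_format: str) -> str:
    """Assemble a test-generation prompt from the three parts Surti
    describes: a role, the feature context, and the output format."""
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Format: {output_format}",
    ])

prompt = build_test_prompt(
    role="You are a senior QA engineer.",
    context="Feature under test: menu item availability sync with the ordering system.",
    output_format="Return a Markdown table with columns: Scenario, Input, Expected Result, Priority.",
)
print(prompt)
```

Keeping the three parts as separate arguments makes it easy to swap roles (performance tester, security tester) while reusing the same context and format.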
Doing this often results in structured, prioritized scenarios that can be incorporated directly into QA workflows rather than a random list of test cases, Surti said. It’s about guiding the AI the same way you would guide a junior tester.
AI-powered testing workflows empower testers, not replace them. Surti explained that AI is being incorporated into various parts of the QA cycle.
During test planning, AI helps quickly brainstorm coverage. In test case creation, it structures scenarios and suggests edge cases that might otherwise be missed. In execution and automation, it helps generate scripts and prioritize high-value tests. In reporting, it can summarize results, identify patterns, and translate technical findings into language appropriate for the business.
The workflow is the same as before, but the AI acts like a smart assistant at each step, helping testers act faster and focus on higher-value thinking, Surti said.
Regarding prompts in QA, Surti suggested focusing on guidance rather than specific steps.
- Clarity over completeness: Ask for exactly what you need instead of trying to get everything in one prompt.
- Format for reuse: Structure output so it can be incorporated directly into test plans, tables, or checklists.
- Think like a tester: Treat the AI’s responses as something to be verified and iterated on, just like test results.
- Risk-aware prompts: Guide the AI to focus on high-impact and edge-case scenarios instead of generic answers.
- Reflect and iterate: Each prompt is an opportunity for refinement and learning, improving results over time.
Surti said AI is a powerful accelerator, not a replacement for testers. It’s great for brainstorming, scenario generation, or surfacing edge cases that might otherwise be missed due to time pressure. But its real value comes when you guide it with the right prompts and apply human judgment to filter what’s important. He concluded that while AI can help cover more ground faster, it is testers who ensure the depth and relevance of that coverage.
InfoQ interviewed Arbaz Surti about accelerating software testing.
InfoQ: How can testers create effective prompts for generating test cases?
Arbaz Surti: Here is an example prompt with the output it produces:
Role: You are a Senior QA Engineer with experience in e-commerce checkout testing.
Context: The functionality being tested is an e-commerce checkout flow for a web app. It should support guest and authenticated checkout, multiple payment methods (credit cards, gift cards, PayPal), promo codes, multi-device continuity, shipping options, and tax calculations. The system must enforce payment validation and protect against common security attacks (SQLi, XSS). There is a 15 minute session/checkout timeout.
Format: Create a Markdown table with the following columns:
| Test Scenario | Category (Functional/Edge Case/Security/Usability) | Input/Action | Expected Result | Priority (High/Medium/Low) |
Requirements:
– Provide 15–20 distinct scenarios covering functionality, edge cases, security, and usability.
– Include core happy path, negative/failed payment cases, edge cases (empty cart, timeouts, multi-device), and security testing (SQL injection, XSS).
…
Output only the Markdown table (no additional commentary).
Result:

| Test Scenario | Category | Input/Action | Expected Result | Priority |
|---|---|---|---|---|
| Add item to cart | Functional | Select a product and click “Add to Cart” | The product appears in the cart at the correct price | High |
| Apply discount code | Functional | Enter a valid/invalid coupon | Valid: discount is applied. Invalid: error message is shown | High |
| Log in at checkout | Functional | Log in during checkout | Cart is retained and the user is redirected to checkout | Medium |
| … | | | | |
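Because the prompt instructs the model to output only the Markdown table, the result can also be consumed mechanically, for example to load generated scenarios into a test plan. A minimal sketch; the parser name and its simplifications (no escaped pipes, no multi-line cells) are my own:

```python
def parse_markdown_table(markdown: str) -> list[dict]:
    """Parse a simple Markdown table (header row, separator row,
    data rows) into a list of dicts keyed by the header cells."""
    lines = [ln.strip() for ln in markdown.strip().splitlines() if ln.strip()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

table = """
| Test Scenario | Category | Priority |
|---|---|---|
| Add item to cart | Functional | High |
| Apply discount code | Functional | High |
"""
cases = parse_markdown_table(table)
print(cases[0]["Test Scenario"])
```

This is one way to act on the “format for reuse” advice: once the output has a fixed shape, it can flow into checklists or test-management tooling without manual copying.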
InfoQ: What prompting techniques exist and how are these techniques used?
Arbaz Surti: Here are the techniques with the biggest impact on testers, taken from A Systematic Survey of Prompt Engineering Techniques:
- Role prompts help shape the AI’s “voice” and expertise. For example, AI can act like a senior QA engineer.
- Decomposition is breaking a large task into smaller steps that are easier to test.
- Structured output is also important because requesting results in tables, JSON, or test charters gives you something you can incorporate into your workflow.
- Self-assessment lets the AI evaluate its own coverage, helping you find gaps.
There are also more advanced techniques like chain-of-thought (CoT) prompting, which instructs the AI to reason step by step. But for most testers, starting with these core techniques can already make a noticeable difference in the quality of the output.
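Structured output pays off most when you validate it before it enters your workflow. A minimal sketch of checking AI-generated JSON test cases against a schema; the field names and function are hypothetical, not from the interview:

```python
import json

# Assumed schema: every generated test case must carry these fields.
REQUIRED_FIELDS = {"scenario", "category", "expected_result", "priority"}

def validate_test_cases(raw_json: str) -> list[dict]:
    """Validate AI-generated test cases: the output must be a JSON list
    where every entry has the required fields. Raises ValueError on
    malformed output so bad generations never reach the test plan."""
    cases = json.loads(raw_json)
    if not isinstance(cases, list):
        raise ValueError("expected a JSON list of test cases")
    for i, case in enumerate(cases):
        missing = REQUIRED_FIELDS - case.keys()
        if missing:
            raise ValueError(f"case {i} is missing fields: {sorted(missing)}")
    return cases

raw = ('[{"scenario": "Empty cart checkout", "category": "Edge Case", '
       '"expected_result": "Checkout button disabled", "priority": "High"}]')
cases = validate_test_cases(raw)
print(len(cases))
```

This mirrors the “treat AI responses like test results” mindset: verify the output programmatically, then apply human judgment to what passes.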
