Lab 7 - Using AI and AI Tools in Software Testing
In this tutorial, I’ll guide you through using Artificial Intelligence (AI) and AI-powered tools in software testing to generate test plans, test cases, user story sub-tasks, and more. This tutorial builds on our previous work with the Spring Boot Book Management API and Express.js User Management API projects, integrating AI into the existing Jira, Xray, Zephyr, Confluence, and GitHub workflows. We’ll explore how AI can automate and enhance testing processes, focusing on practical applications, and recommend the best AI tool compatible with Jira.
Overview
AI is revolutionizing software testing by automating repetitive tasks, generating test artifacts, and improving test coverage. We’ll use AI tools to:
- Generate a test plan for the APIs.
- Create test cases based on user stories.
- Derive sub-tasks for user stories.
- Integrate findings into our Jira ecosystem.
We’ll continue with the Book Management Scrum (BMS) and User Management Scrum (UMS) projects from previous tutorials, leveraging the Spring Boot API (http://localhost:8080/api/books) and the Express.js API (http://localhost:3001/api/users).
Recommended AI Tool for Jira
Several AI tools integrate well with Jira: Testim (AI-powered test automation), Jira’s native AI capabilities (e.g., Jira Automation with AI suggestions), and third-party plugins such as Katalon TestOps and SmartBear’s offerings, which leverage AI for test generation and management. For this tutorial, we’ll primarily use ChatGPT (or a similar model, such as Grok from xAI) as a general-purpose AI assistant to generate test artifacts, which can then be imported manually into Jira/Xray/Zephyr. For automated execution within Jira, Katalon TestOps is the recommended choice thanks to its seamless Jira integration, AI-driven test optimization, and support for test case generation and execution.
- Why Katalon TestOps?
- Integrates directly with Jira for test case and defect management.
- Uses AI to suggest test cases, optimize test suites, and predict flaky tests.
- Supports manual and automated testing, with reporting synced to Jira.
- Compatible with Xray and Zephyr workflows.
For this tutorial, we’ll use ChatGPT to generate initial artifacts (since it’s widely accessible), then show how to integrate them into Katalon TestOps and Jira.
Part 1: Using AI to Generate a Test Plan
Step 1: Define the Scope with AI
AI can analyze project requirements and suggest a test plan structure. Let’s use ChatGPT to generate a test plan for the Book Management Scrum (BMS) project.
- Input Prompt to ChatGPT:
- Open ChatGPT (or a similar AI model) and provide the following prompt:
As an AI assistant, help me create a test plan for a Spring Boot REST API project called "Book Management API" that manages books with endpoints GET /api/books, GET /api/books/{id}, and POST /api/books. The project uses an H2 in-memory database and is part of a Scrum workflow in Jira. Include sections for scope, objectives, test strategy, test environment, test schedule, and deliverables.
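If you prefer scripting the prompt rather than pasting it into the chat UI, the request can be assembled programmatically. A minimal sketch, assuming an OpenAI-style Chat Completions payload (the model name here is a placeholder; adjust it for the provider you actually use):

```python
import json

def build_test_plan_prompt(project, endpoints, database):
    """Build a Chat Completions request body for the test-plan prompt."""
    prompt = (
        f'As an AI assistant, help me create a test plan for a Spring Boot '
        f'REST API project called "{project}" with endpoints {", ".join(endpoints)}. '
        f'The project uses {database} and is part of a Scrum workflow in Jira. '
        f'Include sections for scope, objectives, test strategy, test environment, '
        f'test schedule, and deliverables.'
    )
    return {
        "model": "gpt-4o",  # assumed model name -- substitute your own
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_test_plan_prompt(
    "Book Management API",
    ["GET /api/books", "GET /api/books/{id}", "POST /api/books"],
    "an H2 in-memory database",
)
print(json.dumps(payload, indent=2))
```

The same builder can be reused for the test-case and sub-task prompts later in this tutorial by swapping the prompt text.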
- Generated Test Plan (Example Output):
- Test Plan for Book Management API
- Scope: Test the functionality, performance, and reliability of the Book Management API endpoints (GET /api/books, GET /api/books/{id}, POST /api/books) using an H2 in-memory database.
- Objectives: Ensure 100% functional coverage of CRUD operations, validate performance under 100 concurrent users, and verify error handling.
- Test Strategy: Combine manual testing (via Xray), automated unit/integration tests (via JUnit), E2E tests (via Cypress), and stress tests (via JMeter).
- Test Environment: Local development environment with Spring Boot running at http://localhost:8080, an H2 database, and a React frontend at http://localhost:3000.
- Test Schedule: Conduct testing during Sprint 1 (April 3, 2025 - April 17, 2025), with daily execution and a final review on April 17, 2025.
- Deliverables: Test cases, test execution reports (Xray), stress test results (JMeter), and a Confluence documentation page.
- Refine and Import:
- Copy the AI-generated plan into a document (e.g., Google Docs or a text editor).
- Create a new page in the Book Management Documentation (BMDOC) Confluence space called Test Plan - Sprint 1.
- Paste the plan, adjust as needed, and embed related Jira issues (e.g., sprint stories) using the Jira Issue/Filter macro.
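The copy-paste step can also be scripted. A hedged sketch of the JSON body the Confluence Cloud REST API (`POST /wiki/rest/api/content`) expects when creating such a page; the HTML body here is a placeholder for the refined plan:

```python
# Sketch: build the create-page payload for the Confluence Cloud REST API.
# The space key "BMDOC" comes from this tutorial; the HTML is a placeholder.
def build_confluence_page(space_key, title, html_body):
    return {
        "type": "page",
        "title": title,
        "space": {"key": space_key},
        "body": {
            "storage": {"value": html_body, "representation": "storage"}
        },
    }

page = build_confluence_page(
    "BMDOC",
    "Test Plan - Sprint 1",
    "<h1>Test Plan for Book Management API</h1><p>Scope, objectives, strategy...</p>",
)
# POST this JSON to https://<your-site>.atlassian.net/wiki/rest/api/content
```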
Step 2: Integrate with Katalon TestOps
- Set Up Katalon TestOps:
- Sign up for a Katalon TestOps account (free tier available) and integrate it with your Jira instance:
- In Jira, go to Apps > Manage Apps > Install Katalon TestOps for Jira.
- Follow the setup wizard to connect Katalon to your BMS project.
- In Katalon TestOps, create a new project linked to the BMS project.
- Import the Test Plan:
- In Katalon TestOps, go to Test Planning > Create Test Plan.
- Use the AI-generated plan as a basis, inputting the scope, objectives, and strategy.
- Assign the test plan to Sprint 1 - Book CRUD and save it. Katalon will sync this with Jira as a custom issue or link to existing Xray/Zephyr test plans.
Part 2: Using AI to Generate Test Cases
Step 1: Generate Test Cases with AI
AI can analyze API endpoints and user stories to suggest test cases.
- Input Prompt to ChatGPT:
- Provide the following prompt:
As an AI assistant, generate test cases for a Spring Boot REST API endpoint GET /api/books that returns a list of books from an H2 database. Include test case ID, title, description, preconditions, steps, expected result, and priority. Also, generate test cases for POST /api/books to add a new book, considering valid and invalid inputs.
- Generated Test Cases (Example Output):
- Test Case TC-BK-001
- Title: “Verify GET /api/books returns all books”
- Description: “Ensure the API returns a list of all books in the database.”
- Preconditions: H2 database contains at least two books.
- Steps:
  - Send a GET request to http://localhost:8080/api/books.
  - Check the response status code.
  - Verify the response body contains a JSON array.
- Expected Result: Status 200, response contains all books (e.g., [{"id": 1, "title": "The Great Gatsby", "author": "F. Scott Fitzgerald"}]).
- Priority: High
- Test Case TC-BK-002
- Title: “Verify POST /api/books with valid data”
- Description: “Ensure a new book is added successfully with valid input.”
- Preconditions: API is running and accessible.
- Steps:
  - Send a POST request to http://localhost:8080/api/books with body {"title": "New Book", "author": "New Author"}.
  - Check the response status code.
  - Verify the response contains the new book.
- Expected Result: Status 200, response contains the new book with a generated ID.
- Priority: High
- Test Case TC-BK-003
- Title: “Verify POST /api/books with invalid data (missing title)”
- Description: “Ensure the API handles missing required fields gracefully.”
- Preconditions: API is running.
- Steps:
  - Send a POST request to http://localhost:8080/api/books with body {"author": "No Title Author"}.
  - Check the response status code.
- Expected Result: Status 400 (Bad Request) with an error message.
- Priority: Medium
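To see how these test cases translate into executable checks, here is a sketch using a hypothetical in-memory stand-in for the API. Swap in real HTTP calls (e.g., the `requests` library against http://localhost:8080/api/books) once the service is running:

```python
# Hypothetical in-memory stand-in for the Book Management API, mirroring
# the behaviors exercised by TC-BK-001..003. Not the real Spring Boot app.
class FakeBookApi:
    def __init__(self):
        self._books = []
        self._next_id = 1

    def get_books(self):
        # TC-BK-001: return status and the full list of books
        return 200, list(self._books)

    def post_book(self, body):
        # TC-BK-003: reject missing required fields with a 400
        if not body.get("title") or not body.get("author"):
            return 400, {"error": "title and author are required"}
        # TC-BK-002: assign a generated ID and store the book
        book = {"id": self._next_id, **body}
        self._next_id += 1
        self._books.append(book)
        return 200, book

api = FakeBookApi()
# Precondition for TC-BK-001: at least two books in the database
api.post_book({"title": "The Great Gatsby", "author": "F. Scott Fitzgerald"})
api.post_book({"title": "1984", "author": "George Orwell"})

status, books = api.get_books()
assert status == 200 and len(books) == 2          # TC-BK-001

status, created = api.post_book({"title": "New Book", "author": "New Author"})
assert status == 200 and created["id"] == 3       # TC-BK-002

status, error = api.post_book({"author": "No Title Author"})
assert status == 400                              # TC-BK-003
```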
- Import into Katalon TestOps:
- In Katalon TestOps, go to Test Design > Create Test Case.
- Manually input each test case from the AI output, or use Katalon’s AI feature to upload the text and let it parse and suggest cases.
- Link each test case to the corresponding Jira story (e.g., Add a New Book, View All Books) via the Katalon-Jira integration.
- Sync with Xray/Zephyr by exporting the test cases as Jira issues (e.g., Test issue type).
Step 2: Execute and Validate
- Run the test cases manually or automate them using Katalon Studio (which integrates with Katalon TestOps).
- Mark the execution status in Katalon TestOps, which will update the linked Jira issues.
Part 3: Using AI to Generate User Story Sub-Tasks
Step 1: Generate Sub-Tasks with AI
AI can break down user stories into actionable sub-tasks based on technical and testing requirements.
- Input Prompt to ChatGPT:
- Provide the following prompt:
As an AI assistant, generate sub-tasks for the Jira user story "As a librarian, I want to add a new book to the catalog so that it’s available for users" in a Spring Boot project. Include sub-task titles, descriptions, and estimated effort in hours.
- Generated Sub-Tasks (Example Output):
- Sub-Task: Implement POST /api/books Endpoint
- Description: “Develop the Spring Boot controller and service logic to handle POST requests for adding a new book.”
- Estimated Effort: 4 hours
- Sub-Task: Add Input Validation
- Description: “Implement validation to ensure title and author fields are not null or empty.”
- Estimated Effort: 2 hours
- Sub-Task: Write Unit Tests for POST Endpoint
- Description: “Create JUnit tests to verify the POST /api/books endpoint functionality.”
- Estimated Effort: 3 hours
- Sub-Task: Perform Integration Testing
- Description: “Test the POST endpoint with the H2 database using Spring Boot integration tests.”
- Estimated Effort: 2 hours
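As a quick sanity check before sprint planning, the estimated efforts above can be totalled:

```python
# Total the AI-estimated effort for the four sub-tasks above.
sub_tasks = [
    ("Implement POST /api/books Endpoint", 4),
    ("Add Input Validation", 2),
    ("Write Unit Tests for POST Endpoint", 3),
    ("Perform Integration Testing", 2),
]
total_hours = sum(hours for _, hours in sub_tasks)
print(total_hours)  # → 11
```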
- Import into Jira:
- In the Book Management Scrum (BMS) project, open the Add a New Book story.
- Click Create Sub-Task for each AI-generated sub-task, copying the title, description, and effort.
- Assign sub-tasks to team members and estimate story points based on the total effort (the four sub-tasks above total 11 hours).
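Creating the sub-tasks can likewise be scripted via the Jira create-issue endpoint; sub-tasks use the "Sub-task" issue type plus a `parent` field. A sketch, with "BMS-1" as a hypothetical issue key for the Add a New Book story:

```python
# Sketch: build a Jira create-issue payload for one AI-generated sub-task.
# "BMS-1" is a placeholder parent key -- use your story's real key.
def sub_task_payload(project_key, parent_key, title, description):
    return {
        "fields": {
            "project": {"key": project_key},
            "parent": {"key": parent_key},
            "summary": title,
            "description": description,
            "issuetype": {"name": "Sub-task"},
        }
    }

payload = sub_task_payload(
    "BMS", "BMS-1",
    "Implement POST /api/books Endpoint",
    "Develop the Spring Boot controller and service logic to handle "
    "POST requests for adding a new book.",
)
# POST this JSON to https://<your-site>.atlassian.net/rest/api/2/issue
```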
Step 2: Track Progress
- Use the Jira sprint board to move sub-tasks from To Do to In Progress to Done as the team completes them.
- Sync with Confluence by adding a section on the Sprint 1 - Book CRUD page to list sub-tasks and their status.
Hands-On Labs
Lab 1: Generate and Import a Test Plan
- Task: Use ChatGPT to generate a test plan for the User Management Scrum (UMS) project, covering the Express.js API endpoints (GET /api/users, GET /api/users/:id, POST /api/users). Import it into Confluence and Katalon TestOps.
- Steps:
- Prompt ChatGPT with: “Create a test plan for an Express.js REST API project called ‘User Management API’ with endpoints GET /api/users, GET /api/users/:id, and POST /api/users, using SQLite and integrated with Jira.”
- Refine the output and create a Test Plan - Sprint 1 page in the User Management Documentation (UMDOC) space.
- Import the plan into Katalon TestOps and link it to the UMS project.
Lab 2: Generate and Execute Test Cases
- Task: Use ChatGPT to generate test cases for the new PUT /api/users endpoint (from the stress testing lab), import them into Katalon TestOps, and execute one manually.
- Steps:
- Prompt ChatGPT with: “Generate test cases for an Express.js REST API endpoint PUT /api/users/:id to update a user, including valid updates and invalid inputs (e.g., non-existent ID).”
- Import the test cases into Katalon TestOps and link them to the Update User story (create this story in Jira if needed).
- Manually execute the “Verify PUT /api/users/:id with valid data” test case and mark the result in Katalon.
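As with the book API earlier, the Lab 2 test cases can be sketched as executable checks against a hypothetical in-memory stand-in for PUT /api/users/:id; swap in real HTTP calls against http://localhost:3001/api/users/:id to execute them for real:

```python
# Hypothetical in-memory stand-in for the User Management API's update
# endpoint, covering the two cases Lab 2 asks for: a valid update and a
# non-existent ID.
class FakeUserApi:
    def __init__(self, users):
        self._users = {u["id"]: u for u in users}

    def put_user(self, user_id, body):
        if user_id not in self._users:
            return 404, {"error": f"User {user_id} not found"}
        self._users[user_id].update(body)
        return 200, self._users[user_id]

api = FakeUserApi([{"id": 1, "name": "Alice", "email": "alice@example.com"}])

# "Verify PUT /api/users/:id with valid data"
status, user = api.put_user(1, {"name": "Alice Smith"})
assert status == 200 and user["name"] == "Alice Smith"

# Invalid input: non-existent ID
status, _ = api.put_user(999, {"name": "Ghost"})
assert status == 404
```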
Lab 3: Generate and Assign Sub-Tasks
- Task: Use ChatGPT to generate sub-tasks for the View All Users story in the UMS project, import them into Jira, and assign them to team members.
- Steps:
- Prompt ChatGPT with: “Generate sub-tasks for the Jira user story ‘As an admin, I want to view all users in the system to manage accounts’ in an Express.js project.”
- Create the sub-tasks in Jira under the View All Users story.
- Assign sub-tasks to team members (e.g., Alice for “Implement GET /api/users”, Bob for “Write Unit Tests”) and estimate effort.
Key Concepts and Best Practices
- AI in Test Planning:
- Use AI to draft test plans based on project requirements, saving time while allowing manual refinement.
- Integrate with tools like Katalon TestOps for execution and tracking.
- AI in Test Case Generation:
- Leverage AI to suggest test cases based on endpoints and user stories, ensuring comprehensive coverage.
- Validate and adjust AI-generated cases to fit real-world scenarios.
- AI in Sub-Task Creation:
- Break down user stories into sub-tasks using AI, improving planning accuracy and task distribution.
- Use estimated effort to inform sprint planning and story point estimation.
- Integration with Jira Ecosystem:
- Sync AI-generated artifacts with Jira, Xray/Zephyr, and Confluence for traceability.
- Use Katalon TestOps to bridge AI automation with Jira workflows.
Additional Notes
- Alternative AI Tools:
- Testim: Excellent for AI-driven test automation and maintenance, with Jira integration.
- Mabl: AI-powered E2E testing with self-healing tests, compatible with Jira.
- Functionize: Uses NLP and ML for test case generation, with API testing support.
- Limitations: AI-generated artifacts may require human oversight to ensure accuracy and context-specific details.
- Automation: Combine AI tools with CI/CD pipelines (e.g., Jenkins) to automate test execution and report results to Jira/Katalon.
This tutorial introduces AI and tools like Katalon TestOps into your software testing process, enhancing efficiency and coverage for the Spring Boot and Express.js APIs. By completing the labs, you’ll gain hands-on experience in leveraging AI within your Jira-based Scrum workflow.
By Wahid Hamdi