
Status Workflow

Understand the defect lifecycle from discovery through resolution and verification.

Defects in TestKase follow a defined lifecycle that ensures every bug is properly tracked from discovery through resolution and verification. Understanding the status workflow helps your team maintain a clean, accurate defect backlog.

Status Transition Flow

Open -> In-Progress -> Closed -> Achieved

Transition Rules and Guidance

Transition | Who | When to Use
Open -> In-Progress | Developer / Assignee | The assignee has begun investigating or fixing the defect. Move to In-Progress as soon as active work starts so the team knows the defect is being handled.
In-Progress -> Closed | Developer / Assignee | The fix has been implemented, code-reviewed, and merged or deployed. The developer believes the issue is resolved. Move to Closed to signal that the defect is ready for verification.
Closed -> Achieved | Tester / QA | A tester has re-tested the scenario and confirmed the fix works correctly. Move to Achieved only after successful verification. This is the final state for a resolved defect.
Closed -> Open (Reopen) | Tester / QA | Re-testing revealed that the fix is incomplete or introduced a regression. Move back to Open with a comment explaining what still fails. The defect re-enters the triage queue.
In-Progress -> Open (Reopen) | Developer / Assignee | The developer determined the defect cannot be reproduced, is blocked by another issue, or needs more information. Move back to Open with a comment explaining why.
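The transition rules above can be modeled as a small state machine. The following is a minimal sketch in Python; the function and dictionary names are illustrative only, not part of any TestKase API:

```python
# Allowed defect status transitions, mirroring the transition table.
ALLOWED_TRANSITIONS = {
    "Open": {"In-Progress"},
    "In-Progress": {"Closed", "Open"},  # reopen if blocked or not reproducible
    "Closed": {"Achieved", "Open"},     # verified, or reopened after re-test
    "Achieved": set(),                  # final state, no transitions out
}

def transition(current: str, new: str, comment: str = "") -> str:
    """Validate a status change; reopening always requires a comment."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    if new == "Open" and not comment:
        raise ValueError("Reopening requires a comment explaining why")
    return new
```

For example, `transition("Closed", "Open", comment="Still failing on Chrome 120")` succeeds, while `transition("In-Progress", "Achieved")` raises an error, matching the rule that verification cannot be skipped.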

Always add a comment when changing a defect's status, especially when reopening. A brief note like "Fix verified in staging -- confirmed working" or "Still failing on Chrome 120 after fix -- see attached screenshot" gives the team immediate context.

Example Workflow

Here is a typical defect lifecycle from discovery to resolution:

  1. A tester executes the test case "Verify checkout with discount code" and it fails. They create a defect: "[Checkout] Discount code applies twice when page is refreshed" with priority Major. Status is automatically set to Open.
  2. During the daily stand-up, the team triages the defect and assigns it to a developer. The developer picks it up and moves the status to In-Progress.
  3. The developer identifies the root cause (discount is applied on each page render instead of once), implements the fix, and merges the pull request. They move the defect to Closed.
  4. A tester re-executes the original test case with the fix deployed. The discount code now applies correctly. The tester moves the defect to Achieved.

Reopening Defects

If re-testing reveals that a fix is incomplete or the original issue reoccurs, the tester should move the defect from Closed back to Open. When reopening a defect:

  • Add a comment explaining what still fails and under what conditions.
  • Attach new screenshots or logs that demonstrate the remaining issue.
  • Update the description if the problem manifests differently than originally reported.
  • Consider whether the priority should change -- a defect that was initially Minor but persists across multiple fix attempts may warrant escalation.

Do not skip the Achieved step. Moving a defect directly from In-Progress to Achieved bypasses verification and can lead to unverified fixes shipping to production. Always have a tester verify the fix by re-running the associated test case(s) before marking a defect as Achieved.
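The verification rule above can be expressed as a guard that only a recorded passing re-test unlocks the Achieved state. This is an illustrative sketch; the `Defect` model and `mark_achieved` helper are hypothetical and not part of TestKase:

```python
from dataclasses import dataclass, field

@dataclass
class Defect:
    # Hypothetical model for illustration only, not the TestKase data model.
    status: str = "Closed"
    verification_runs: list = field(default_factory=list)  # re-test results

def mark_achieved(defect: Defect) -> None:
    """Allow Closed -> Achieved only after a passing re-test is recorded."""
    if defect.status != "Closed":
        raise ValueError("Only Closed defects can be marked Achieved")
    if "Pass" not in defect.verification_runs:
        raise ValueError("No passing re-test recorded; verify the fix first")
    defect.status = "Achieved"
```

The guard encodes the process rule directly: a defect with no passing re-test on record cannot reach the final state, no matter who attempts the change.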

Defects from Execution

The most common way to create a defect in TestKase is directly from a failed test execution. This workflow is designed to minimize context-switching and ensure that defects are automatically linked to the test case and execution where they were discovered.

Step-by-Step: Creating a Defect During Execution

  1. Open a test cycle and begin executing test cases.
  2. When you encounter a failure, set the test case execution result to Fail.
  3. After marking the test as failed, click the Link Defect button that appears in the execution view.
  4. You will be presented with two options:
    • Create New Defect -- Opens the defect creation form pre-populated with context from the current test execution. Fill in the title, description, priority, and any additional details, then save.
    • Link Existing Defect -- Search for and select a defect that has already been logged. This is useful when the same defect causes multiple test cases to fail.
  5. Once saved or linked, the defect is automatically associated with the current test case. You can continue executing the remaining test cases in the cycle.

What Gets Auto-Linked

When you create a defect from the execution view, TestKase automatically creates the following link:

  • Defect -> Test Case: The defect is linked to the test case that was being executed when the failure occurred.

If you also want to link the defect to a requirement, you can do so manually from the defect detail view after creation (see Link to Requirements).

Do not create duplicate defects for the same root cause. If multiple test cases fail due to the same underlying issue, create the defect once and then use Link Existing Defect for subsequent failures. This keeps your defect count accurate and avoids confusion during triage.
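The deduplication guidance above amounts to a many-to-one relationship: many failed test cases link to a single defect. A minimal sketch of that structure, using hypothetical identifiers rather than TestKase internals:

```python
from collections import defaultdict

# Maps a defect id to the set of test case ids linked to it
# (one root-cause defect, many failing test cases).
defect_links: dict = defaultdict(set)

def link_defect(defect_id: str, test_case_id: str) -> None:
    """Link a failing test case to an existing defect instead of duplicating it."""
    defect_links[defect_id].add(test_case_id)

# One root cause ("DEF-42") breaks three checkout test cases:
for tc in ("TC-101", "TC-102", "TC-103"):
    link_defect("DEF-42", tc)
```

Because every failure links back to the same defect id, the defect count stays at one, which is exactly what keeps triage accurate.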

Linking an Existing Defect to Multiple Failures

When a single defect causes multiple test cases to fail within the same cycle (or across cycles), you should link the existing defect rather than creating a new one each time. To do this:

  1. Mark the test case as Fail.
  2. Click Link Defect and choose Link Existing Defect.
  3. Search for the defect by title or browse the list of open defects.
  4. Select the defect and confirm. The test case is now linked to the existing defect.

Best Practices for Execution-Based Defects

  • Report immediately: Create the defect as soon as you encounter the failure while the context is fresh. Delayed reporting leads to missing details and vague reproduction steps.
  • Search before creating: Before creating a new defect, quickly search the existing defect list to check if the same issue has already been logged by another tester.
  • Add execution context: Include the specific test data, environment, and browser or device you were using when the failure occurred. This context is invaluable for reproduction.