Best Practices & FAQ
Guidelines for effective test cycle management and answers to common questions.
Best Practices
Following these best practices will help your team get the most out of test cycles in TestKase. These recommendations are based on patterns observed in high-performing QA teams and industry standards for test management.
- Use descriptive, consistent cycle names. Adopt a naming convention that includes the version number, test type, and environment. For example: "v2.5.0 Regression - Staging", "Sprint 22 Smoke Tests", or "Payment Module UAT - Round 3". Consistent naming makes cycles easy to find, sort, and compare in the listing view and in reports. Avoid generic names like "Test Run 1" or "Cycle" that provide no context.
- Keep cycle scope focused. Resist the temptation to put every test case into a single massive cycle. Instead, create separate cycles for different test types (regression, smoke, UAT) or feature areas (payments, authentication, reporting). Smaller, focused cycles are easier to manage, assign, track to completion, and analyze in reports. A focused cycle also makes it easier to identify the root cause when pass rates drop.
- Set up regression cycle templates. Create a "master" regression cycle with your standard regression test cases. When a new build is ready, clone this template instead of manually selecting test cases each time. Periodically update the template as your regression suite evolves (adding new cases for new features, removing cases for deprecated features). This saves significant setup time and ensures consistency across regression runs.
- Use smoke test cycles for quick validation. Maintain a lean smoke test cycle containing 10 to 30 critical test cases that cover core functionality (login, main workflow, key integrations). Run this cycle first when a new build is deployed to quickly determine if the build is stable enough for full regression testing. This avoids wasting hours on a detailed regression run against a fundamentally broken build. If the smoke test cycle fails, the build goes back to development immediately.
- Always add comments on failures and blocks. A status of "Fail" or "Blocked" without context is nearly useless. Train your team to always add a comment explaining: the step that failed, the actual result observed, the expected result, and any relevant error messages, screenshots, or logs. This information is critical for developers during bug triage and saves significant time by eliminating the need to re-run the test to understand the failure.
- Assign testers by domain expertise. Rather than distributing test cases randomly or in a round-robin fashion, assign tests to the team members who best understand the feature area. A tester familiar with the payment module will execute payment tests more effectively, catch subtle issues, and provide more useful feedback than someone unfamiliar with the domain. Cross-training is valuable, but for time-critical cycles, leverage existing expertise.
- Clone and re-test rather than re-executing in place. When a bug fix is deployed and you need to re-test failed cases, clone the cycle rather than resetting statuses in the original. This preserves the original cycle's results as a historical record of the first test pass, and the cloned cycle documents the re-test results separately. Both cycles can be compared in reports to show the before-and-after impact of the bug fix.
- Review progress daily during active testing. During a test execution phase, check the cycle progress bar and completion percentage at least once a day. Identify bottlenecks early: too many blocked tests may indicate an environment issue, one tester falling behind may require workload redistribution, and a rising failure count may signal a systemic problem with the build. Use the Reports dashboard for a broader view across all active cycles in the project.
Consider creating a "Definition of Done" for test cycles that specifies the minimum completion percentage and maximum acceptable failure rate before a release can proceed. For example: "A cycle is considered complete when 100% of test cases have been executed and the pass rate is above 95%. All critical and high-priority failures must have linked defects." This gives your team a clear, objective exit criterion for each testing round.
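Such an exit criterion can also be expressed as a simple automated gate. The following Python sketch is illustrative only; the function name, field names, and thresholds are assumptions taken from the example above, not TestKase API values:

```python
# Sketch of a "Definition of Done" gate for a test cycle.
# The inputs and thresholds are illustrative assumptions, not part of
# any TestKase API: 100% executed, pass rate above 95%, and every
# critical/high-priority failure already linked to a defect.

def cycle_meets_exit_criteria(
    executed: int,
    total: int,
    passed: int,
    untriaged_critical_failures: int,
    min_pass_rate: float = 0.95,
) -> bool:
    """Return True when the cycle satisfies the example exit criterion."""
    if total == 0 or executed < total:
        return False  # not every test case has been executed yet
    pass_rate = passed / executed
    return pass_rate > min_pass_rate and untriaged_critical_failures == 0

# 200 cases, all executed, 194 passed (97%), all failures triaged -> True
print(cycle_meets_exit_criteria(executed=200, total=200, passed=194,
                                untriaged_critical_failures=0))
```

A gate like this can run in a release pipeline so that "done" is decided by the numbers rather than by judgment calls under deadline pressure.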
FAQ
▶Can I add a test case to multiple cycles?
Yes. The same test case can be included in as many test cycles as needed. Each cycle maintains its own independent execution status, assignee, comments, and defect links for that test case. Executing a test in one cycle has no effect on the same test case in other cycles. This is by design -- it allows you to run the same tests across different builds, environments, and sprints while keeping results completely isolated.
▶What happens when I update a test case that is included in a cycle?
Changes to the test case definition (title, steps, expected results, description, priority, etc.) are reflected in all cycles that include that test case, because the cycle references the live test case from your library. However, execution data (status, comments, defects, assignee) is scoped to each individual cycle and is never affected by changes to the underlying test case definition.
▶Can I remove a test case from a cycle after execution has started?
Yes, you can remove a test case from a cycle at any time, even if it has already been executed. When you remove a test case, its execution record (status, comments, defects) for that cycle is deleted. The test case itself remains in your test case library and in any other cycles where it is included. This action is recorded in the cycle history for audit purposes.
▶How do I re-test failed test cases after a bug fix?
You have two options. First, you can change the status of the failed test cases back to "Not Executed" in the current cycle and re-execute them against the new build. Second (and recommended for traceability), you can clone the cycle and re-execute only the previously failed tests in the new cycle. The cloning approach preserves the original cycle's results as a historical record while the new cycle tracks the re-test effort separately.
▶Is there a limit to how many test cases I can add to a single cycle?
There is no hard limit on the number of test cases in a cycle. TestKase is designed to handle large cycles efficiently with pagination and filtering. However, for practical manageability, we recommend keeping cycles under 500 test cases. Very large cycles can be harder to track, assign, and analyze. If your regression suite has thousands of test cases, consider splitting it into multiple focused cycles organized by module or feature area.
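If you track suite metadata outside TestKase, the split itself can be scripted. A minimal Python sketch, assuming each test case carries a module tag (the dict shape and the "module" field name are illustrative, not an export format):

```python
# Sketch: partition a large regression suite into smaller, focused
# cycles, grouped by module and capped at a maximum cycle size.
# The test-case dicts and the "module" key are illustrative assumptions.
from collections import defaultdict

def split_into_cycles(test_cases, key="module", max_size=500):
    """Group test cases by `key`, then chunk each group so that no
    resulting cycle exceeds `max_size` cases."""
    groups = defaultdict(list)
    for case in test_cases:
        groups[case[key]].append(case)
    cycles = {}
    for module, cases in groups.items():
        for i in range(0, len(cases), max_size):
            suffix = "" if i == 0 else f" ({i // max_size + 1})"
            cycles[f"{module} Regression{suffix}"] = cases[i:i + max_size]
    return cycles

# 1200 cases split across two modules -> four cycles of at most 500 cases
suite = [{"id": n, "module": "payments" if n % 2 else "auth"}
         for n in range(1200)]
for name, cases in sorted(split_into_cycles(suite).items()):
    print(name, len(cases))
```

The 500-case cap mirrors the manageability recommendation above; adjust it to whatever your team finds workable.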
▶Can I automate test execution reporting into a cycle?
Yes. Use Automation to report execution results from your automated tests into a test cycle. The @testkase/reporter CLI parses test output files (JUnit XML, Playwright JSON, Cypress, and more) and pushes results to a specified cycle using Automation ID matching. Each test case's status is then updated based on the corresponding automated test result.
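To make the matching concrete, here is a rough Python sketch of the kind of parsing a reporter performs on a JUnit XML file. This illustrates the general technique only; it is not the @testkase/reporter implementation, and the status-mapping rules are assumptions:

```python
# Sketch: parse a JUnit XML report and map each automated test to an
# execution status. The status names mirror TestKase's statuses, but
# the mapping rules (e.g. skipped -> Blocked) are illustrative choices.
import xml.etree.ElementTree as ET

JUNIT_SAMPLE = """<testsuite tests="3">
  <testcase classname="checkout" name="pays_with_card"/>
  <testcase classname="checkout" name="refund_flow">
    <failure message="expected 200, got 500"/>
  </testcase>
  <testcase classname="checkout" name="gift_cards">
    <skipped message="feature flag off"/>
  </testcase>
</testsuite>"""

def junit_to_statuses(xml_text):
    """Return a {test name: status} mapping from a JUnit XML string."""
    results = {}
    for case in ET.fromstring(xml_text).iter("testcase"):
        name = f'{case.get("classname")}.{case.get("name")}'
        if case.find("failure") is not None or case.find("error") is not None:
            results[name] = "Fail"
        elif case.find("skipped") is not None:
            results[name] = "Blocked"  # or "Not Executed", per your policy
        else:
            results[name] = "Pass"
    return results

print(junit_to_statuses(JUNIT_SAMPLE))
# -> {'checkout.pays_with_card': 'Pass', 'checkout.refund_flow': 'Fail',
#     'checkout.gift_cards': 'Blocked'}
```

In a real pipeline the resulting mapping would then be matched against test cases by Automation ID and pushed to the target cycle.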
▶What is the difference between a test cycle and a test plan?
A test cycle is a single round of test execution -- it contains test cases with execution statuses, assignees, and results. A test plan is a higher-level organizational container that can group multiple test cycles together under a common testing objective. For example, a "v3.0 Release" test plan might contain a Smoke Test cycle, a Regression cycle, and a UAT cycle. Plans provide a holistic view of all testing activities for a milestone.
▶Can I filter the cycle view to see only my assigned test cases?
Yes. The cycle view supports filtering by assignee, execution status, priority, and test case title. Use the filter controls at the top of the cycle to select your name from the assignee filter, and the view will display only the test cases assigned to you. Combine multiple filters (e.g., show only "Not Executed" tests assigned to you) to focus on your remaining work.
▶What happens to a cycle when it passes its end date?
Nothing changes functionally. The end date is an informational field used for planning and reporting purposes only. A cycle remains fully functional after its end date -- testers can still execute tests, change statuses, add comments, and link defects. The end date simply serves as a target to help your team track whether testing is progressing on schedule. It does not auto-close or lock the cycle.
▶Can I delete a test cycle?
Yes, you can delete a test cycle from the cycle listing view or from the cycle detail view. Deleting a cycle permanently removes all execution data (statuses, comments, defects, history) associated with that cycle. The underlying test cases in your library are not affected -- only the execution records within that cycle are deleted. This action cannot be undone, so consider exporting the cycle as a CSV before deleting if you need to preserve the results for compliance or historical reference.
▶Can I export cycle results for compliance audits?
Yes. Use the Download CSV feature to export the full execution results, including test case IDs, execution statuses, tester names, timestamps, and comments. The cycle history tab also provides a detailed audit trail of all changes with timestamps and user attribution. For richer visual reporting, use Reports to generate analytics and charts that can be shared with auditors and stakeholders.
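If auditors want headline numbers rather than raw rows, the exported CSV can be summarized with a few lines of scripting. The column names below are assumptions about the export format; match them to your actual CSV header:

```python
# Sketch: summarize a cycle's CSV export for an audit. The header names
# ("Test Case ID", "Status", ...) and sample rows are illustrative
# assumptions, not a guaranteed TestKase export schema.
import csv
import io

CSV_EXPORT = """Test Case ID,Status,Tester,Executed At
TC-101,Pass,alice,2024-05-01T10:02:00Z
TC-102,Fail,bob,2024-05-01T10:15:00Z
TC-103,Pass,alice,2024-05-01T11:40:00Z
TC-104,Blocked,carol,2024-05-01T12:05:00Z
"""

def summarize(csv_text):
    """Return (status counts, pass rate over executed tests)."""
    counts = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[row["Status"]] = counts.get(row["Status"], 0) + 1
    executed = sum(n for status, n in counts.items()
                   if status != "Not Executed")
    pass_rate = counts.get("Pass", 0) / executed if executed else 0.0
    return counts, round(pass_rate, 2)

print(summarize(CSV_EXPORT))
# -> ({'Pass': 2, 'Fail': 1, 'Blocked': 1}, 0.5)
```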
▶How do I handle test cases that are consistently blocked across cycles?
If a test case is repeatedly marked as Blocked across multiple cycles, it usually indicates an underlying issue that needs to be addressed at the root level. Review the blocking reason in the execution comments across cycles to identify the pattern. Common causes include: environment instability, missing test data that is never provisioned, or a dependency on a feature that is always incomplete. Address the root cause by fixing the environment, creating proper test data setup procedures, or marking the test case as Draft in the test case library until the blocker is permanently resolved.
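If you export results from several cycles, spotting chronically blocked cases can also be scripted. A minimal Python sketch, assuming you can assemble one {test case ID: status} mapping per cycle (the data shape is an assumption, not an export format):

```python
# Sketch: flag test cases marked Blocked in at least `threshold` cycles,
# so their root cause can be investigated once instead of per cycle.
# The per-cycle {case id: status} dicts are an illustrative assumption.
from collections import Counter

def chronically_blocked(cycle_results, threshold=3):
    """cycle_results: list of {test case ID: status} dicts, one per cycle.
    Returns the sorted IDs blocked in at least `threshold` cycles."""
    blocked = Counter()
    for results in cycle_results:
        for case_id, status in results.items():
            if status == "Blocked":
                blocked[case_id] += 1
    return sorted(cid for cid, n in blocked.items() if n >= threshold)

history = [
    {"TC-7": "Blocked", "TC-8": "Pass"},
    {"TC-7": "Blocked", "TC-8": "Pass"},
    {"TC-7": "Blocked", "TC-8": "Blocked"},
]
print(chronically_blocked(history))  # -> ['TC-7']
```

Running a report like this before each planning round gives you a short, objective list of cases whose blockers deserve root-cause attention.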