Best Practices & FAQ
Guidelines for effective reporting and answers to common questions.
Best Practices
Follow these best practices to get the most value from TestKase's reporting suite.
1. Establish quality gates with reports
Define minimum pass rate thresholds (e.g., 95% for critical tests, 85% overall) and check the Execution Summary report before every release. Use the Release Readiness gauge as a formal go/no-go metric.
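Such a gate can be sketched as a simple threshold check. Everything below is illustrative: the thresholds come from the example above, and the result records are hypothetical stand-ins for data you would read off the Execution Summary report, not a TestKase API.

```python
# Illustrative quality-gate check; field names and sample data are
# hypothetical stand-ins for Execution Summary data, not a TestKase API.
CRITICAL_THRESHOLD = 0.95  # minimum pass rate for Critical-priority tests
OVERALL_THRESHOLD = 0.85   # minimum overall pass rate

def gate_passes(results):
    """results: list of dicts with 'priority' and 'status' keys."""
    executed = [r for r in results if r["status"] in ("passed", "failed")]
    critical = [r for r in executed if r["priority"] == "Critical"]
    overall_rate = sum(r["status"] == "passed" for r in executed) / len(executed)
    critical_rate = (sum(r["status"] == "passed" for r in critical) / len(critical)
                     if critical else 1.0)
    return overall_rate >= OVERALL_THRESHOLD and critical_rate >= CRITICAL_THRESHOLD

results = [
    {"priority": "Critical", "status": "passed"},
    {"priority": "High", "status": "passed"},
    {"priority": "High", "status": "failed"},
]
print(gate_passes(results))  # False: 2/3 overall pass rate is below 85%
```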
2. Review reports in daily stand-ups
Start each daily stand-up by pulling up the Execution Summary and Tester Workload reports. This takes 30 seconds and gives the entire team visibility into where things stand and who needs help.
3. Use trend reports for retrospectives
During sprint retrospectives, review Execution Trend and Created vs Executed reports. Are failure rates improving? Is test creation keeping pace with feature development? These trend lines tell the story of your team's quality journey.
4. Act on AI insights, do not just read them
AI reports like Flaky Tests, Predictive Failure, and Suite Optimization are only valuable if you act on their findings. Create action items: fix flaky tests, run predicted failures first, archive redundant tests. Track progress over time.
5. Maintain requirement traceability
Link every test case to at least one requirement and vice versa. Regularly check the Uncovered Requirements and Unlinked Test Cases reports to keep your traceability matrix complete. This is especially important for regulated industries.
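The two traceability reports boil down to set differences between requirements, test cases, and the links between them. A minimal sketch with made-up IDs (the data here is illustrative, not TestKase's data model):

```python
# Hypothetical IDs and links; real data would come from your project.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
test_cases = {"TC-1", "TC-2", "TC-3"}
links = {("TC-1", "REQ-1"), ("TC-2", "REQ-1")}  # (test case, requirement) pairs

uncovered_requirements = requirements - {req for _, req in links}
unlinked_test_cases = test_cases - {tc for tc, _ in links}
print(sorted(uncovered_requirements))  # ['REQ-2', 'REQ-3']
print(sorted(unlinked_test_cases))     # ['TC-3']
```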
6. Structure folders to match your architecture
Reports like Execution by Folder, Scorecard by Folder, and Risk Heatmap (Folder) are most meaningful when your folder structure mirrors your product architecture. Create folders for each module or feature area so these reports provide actionable, module-level insights.
7. Use burn-down charts for deadline tracking
At the start of each test cycle, check the Execution Burn-Down chart daily. If the line is not declining steadily toward zero, you are falling behind. Take corrective action early (reassign tests, add testers, reduce scope) rather than discovering the delay at the end.
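"Declining steadily toward zero" can be made concrete by comparing the actual remaining count against a straight-line burn-down. A sketch, with hypothetical dates and counts:

```python
from datetime import date

# Illustrative schedule check; dates and counts are example inputs,
# not values read from TestKase.
def behind_schedule(initial, remaining, start, deadline, today):
    """True if remaining tests exceed a straight-line burn toward zero."""
    total_days = (deadline - start).days
    elapsed = (today - start).days
    expected_remaining = initial * (1 - elapsed / total_days)
    return remaining > expected_remaining

# 200 tests in a 10-day cycle: halfway through, ~100 should remain.
print(behind_schedule(200, 130, date(2024, 6, 3), date(2024, 6, 13),
                      date(2024, 6, 8)))  # True: 130 remaining vs ~100 expected
```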
8. Schedule quarterly suite optimization
Every quarter, generate the Suite Optimization and Stale Tests AI reports. Archive or update tests that are no longer providing value. A leaner, more focused test suite executes faster and produces more meaningful results.
FAQ
How do I access the Reports tab?
Navigate to your project from the left sidebar, then click the Reports tab in the project navigation. All available reports are displayed as cards organized by category. Click any card to open that report with its default filters.
Can I export reports as PDF or images?
Report charts can be viewed in the Reports tab within the application. For raw data exports, use the CSV export functionality available on test cycles and test cases to extract the underlying data. PDF and image export features are on the roadmap.
How often are reports updated?
Reports are generated in real time based on the latest data. There is no caching delay -- what you see reflects the current state of your project. Every time you open a report or change a filter, the chart is recalculated from live data.
Do AI reports cost credits every time I view them?
Yes, each AI report generation consumes credits; standard (Tier 1 and Tier 2) reports do not consume any credits. The credit cost varies by report type and is displayed on the report card before you generate it. See the AI Features guide for details on credit pricing and management.
What is the difference between Tier 1, Tier 2, and Tier 3 reports?
Tier 1 (Core) reports are available on all plans and show point-in-time snapshots: execution summaries, coverage percentages, defect distributions, and team workload. Tier 2 (Trends) reports require a Pro plan and add a time dimension with trend lines, burn-up/down charts, and scorecards. Tier 3 (AI) reports use machine learning and require AI credits to generate.
Why does a report show no data?
Reports require the relevant data to exist. If an execution report shows no data, verify that test cases have been executed within the selected date range and cycle. If a coverage report shows no data, verify that requirements have been created and linked to test cases. Check your filters -- a narrow date range or specific cycle filter may exclude all matching data.
Can I see reports across multiple projects?
Reports are scoped to a single project. Each project has its own Reports tab with data isolated to that project. Cross-project analytics are not currently available but are being considered for future releases.
What does the granularity filter do?
The granularity filter controls how time periods are grouped on the X-axis of trend charts. Daily shows one data point per day -- best for sprint-level analysis. Weekly aggregates data by week -- best for release-level views. Monthly aggregates by month -- best for quarterly reviews. Granularity is available on all Tier 2 trend reports and the Execution Velocity AI report.
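Conceptually, granularity is just a choice of bucketing key for the X-axis. A sketch of how daily/weekly/monthly buckets might be derived (the bucketing rules are assumptions based on the behavior described above, not TestKase's actual implementation):

```python
from collections import Counter
from datetime import date

# Hypothetical bucketing logic for a trend chart's X-axis.
def bucket(d: date, granularity: str) -> str:
    if granularity == "daily":
        return d.isoformat()             # e.g. 2024-05-06
    if granularity == "weekly":
        year, week, _ = d.isocalendar()
        return f"{year}-W{week:02d}"     # ISO week, e.g. 2024-W19
    if granularity == "monthly":
        return f"{d.year}-{d.month:02d}"  # e.g. 2024-05
    raise ValueError(f"unknown granularity: {granularity}")

# Example execution dates: May 6-7 share ISO week 19; May 13 is in week 20.
executions = [date(2024, 5, 6), date(2024, 5, 7), date(2024, 5, 13)]
weekly = Counter(bucket(d, "weekly") for d in executions)
print(weekly)  # Counter({'2024-W19': 2, '2024-W20': 1})
```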
How does the Release Readiness gauge calculate its score?
The Release Readiness score is a weighted composite of three factors: overall pass rate (what percentage of executed tests passed), execution progress (what percentage of tests in scope have been executed), and critical test pass rate (what percentage of Critical-priority tests passed). The AI model combines these factors into a single 0-100 score. A score above 80 is typically considered green (ready), 60-80 is yellow (caution), and below 60 is red (not ready).
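As a mental model, a weighted composite of those three factors might look like the sketch below. The weights are purely hypothetical: the documentation does not specify how the AI model weighs each factor.

```python
# Hypothetical weights; the real AI model's weighting is not documented.
WEIGHTS = {"pass_rate": 0.4, "progress": 0.3, "critical_pass_rate": 0.3}

def readiness_score(pass_rate, progress, critical_pass_rate):
    """Each factor is a fraction in [0, 1]; returns a 0-100 score."""
    return round(100 * (WEIGHTS["pass_rate"] * pass_rate
                        + WEIGHTS["progress"] * progress
                        + WEIGHTS["critical_pass_rate"] * critical_pass_rate))

def band(score):
    """Map a score to the gauge's green/yellow/red thresholds."""
    return "green" if score > 80 else "yellow" if score >= 60 else "red"

score = readiness_score(0.92, 0.85, 1.0)  # 92% pass, 85% executed, criticals all pass
print(score, band(score))  # 92 green
```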
How can I identify and fix flaky tests?
Generate the Flaky Tests AI report to get a list of tests with inconsistent results. For each flaky test, review recent execution history to understand the pattern. Common root causes include timing-dependent assertions, shared test data, external service dependencies, and non-deterministic ordering. Fix the root cause, then re-run the test several times to confirm stability.
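"Inconsistent results" can be quantified: a test whose outcome flips frequently between consecutive runs is flakier than one that fails every time. A minimal heuristic for reviewing execution history yourself (the scoring rule is an assumption, not the Flaky Tests report's actual model):

```python
# Illustrative flakiness heuristic; not the AI model behind the report.
def flip_ratio(history):
    """Fraction of consecutive runs whose outcome changed (oldest first)."""
    if len(history) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(history, history[1:]))
    return flips / (len(history) - 1)

print(flip_ratio(["fail"] * 6))  # 0.0 -> consistently failing, not flaky
print(flip_ratio(["pass", "fail", "pass", "pass", "fail", "pass"]))  # 0.8 -> flaky
```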