Reports & Analytics
Explore TestKase's comprehensive reporting suite with 27+ reports across 6 categories.
Overview
TestKase provides a reporting and analytics suite designed to give your team full visibility into testing progress, quality trends, and risk areas. Whether you need a quick snapshot of execution status before a release or a deep dive into historical defect patterns, the Reports tab delivers the insight you need to make confident, data-driven decisions.
The reporting suite includes 27+ reports organized into 6 categories that span execution analytics, coverage tracking, defect metrics, team performance, trend analysis, and AI-powered insights. Reports are available directly within each project -- navigate to the Reports tab in your project sidebar to get started.
Every report can be filtered by date range, test cycle, test plan, and other contextual dimensions. Trend-based reports support configurable granularity (daily, weekly, or monthly) so you can zoom in on sprint-level details or zoom out for release-level overviews.
27+ reports
Comprehensive analytics across every dimension of your testing
6 categories
Execution, Coverage, Defects, Team, Trends, and AI Insights
AI-powered insights
Machine learning detects flaky tests, predicts failures, and scores quality
Flexible filters
Filter by date range, cycle, plan, folder, and granularity
Trend tracking
Burn-up, burn-down, area charts, and dual-line comparisons over time
Release readiness
AI-computed readiness gauge combining pass rate, progress, and risk
Reports update in real time. Every chart reflects the latest data in your project -- there is no caching delay. Open the Reports tab at any time to see the current state of your testing.
Report Categories
Reports are organized into three tiers based on complexity and access level. Tier 1 reports cover the fundamentals every team needs. Tier 2 unlocks time-series trend analysis for teams that want to track progress across sprints and releases. Tier 3 applies machine learning to surface hidden patterns, predict outcomes, and optimize your test suite.
| Tier | Categories Included | Report Count | Access Level |
|---|---|---|---|
| Tier 1 -- Core | Execution Reports, Coverage & Traceability, Defect Analytics, Team Metrics | 14 reports | All plans |
| Tier 2 -- Trends | Trend Analysis (area charts, burn-up/down, scorecards, cycle comparison, dual-line) | 12 reports | Pro plans and above |
| Tier 3 -- AI | AI Insights (flaky tests, risk heatmaps, predictive failure, quality scores, release readiness) | 11 reports | Requires AI credits |
Tier 1 (Core) reports are available on every plan and provide the essential views your team needs day-to-day: execution status breakdowns, requirement coverage percentages, defect distribution, and team workload visibility. These reports answer the question "where are we right now?"
Tier 2 (Trends) reports add a time dimension. They show how your metrics evolve over days, weeks, or months. Use burn-up and burn-down charts to track sprint progress, area charts to visualize execution velocity, and scorecards to compare folder or tester performance. These reports answer the question "how are we trending?"
Tier 3 (AI Insights) reports use machine learning to go beyond what traditional charts can show. They detect flaky tests, predict which tests are most likely to fail, score test case quality, generate risk heatmaps, and compute release readiness. Each AI report consumes AI credits when generated. These reports answer the question "what should we act on?"
The tier of each report is shown as a badge on the report card in the Reports tab. Tier 3 (AI) reports also display the credit cost before you generate them.
Filters & Date Ranges
Most reports support a consistent set of filters that let you narrow the data to exactly the scope you need. Filters are applied via the filter bar at the top of the report view.
| Filter | Available On | Description |
|---|---|---|
| Date Range | Most reports | Filter by execution date, creation date, or update date depending on the report. Each report shows a hint next to the date picker (e.g., "Execution / Updated date" or "Created date"). |
| Test Cycle | Most reports | Restrict the report to a specific test cycle. When selected, only executions within that cycle are included. |
| Test Plan | Most reports | Restrict the report to a specific test plan. When selected, only cycles and executions belonging to that plan are included. |
| Folder | Execution by Folder, Scorecard by Folder, Defects by Folder, coverage reports | Filter by test case folder. Useful for focusing on a specific module or feature area. |
| Granularity | Trend reports (Tier 2), Execution Velocity (Tier 3) | Choose how time periods are grouped: Daily, Weekly, or Monthly. Affects the X-axis resolution of trend charts. |
| Cycle IDs | Cycle Comparison | Select two or more test cycles to compare side by side. This is a multi-select filter specific to the Cycle Comparison report. |
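To make the Granularity filter concrete, here is a minimal sketch of how execution dates can be grouped into daily, weekly, or monthly buckets for a trend chart's X-axis. This is an illustration of the bucketing concept, not TestKase's actual implementation; the function names are hypothetical.

```python
from datetime import date

def period_key(day: date, granularity: str) -> str:
    """Map a date to its bucket label for the chosen granularity."""
    if granularity == "daily":
        return day.isoformat()                 # e.g. "2024-03-14"
    if granularity == "weekly":
        year, week, _ = day.isocalendar()      # ISO week numbering
        return f"{year}-W{week:02d}"           # e.g. "2024-W11"
    if granularity == "monthly":
        return f"{day.year}-{day.month:02d}"   # e.g. "2024-03"
    raise ValueError(f"unknown granularity: {granularity}")

def bucket_counts(days: list[date], granularity: str) -> dict[str, int]:
    """Count executions per period -- the data behind a trend chart."""
    counts: dict[str, int] = {}
    for d in days:
        key = period_key(d, granularity)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Coarser granularity smooths out day-to-day noise; finer granularity exposes individual bursts of activity.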
Date range tips:
- Use last 7 days for sprint-level analysis during daily stand-ups.
- Use last 30 days for release-level overviews and stakeholder reporting.
- Use custom range to align the report with specific sprint or release boundaries.
- Pay attention to the date filter hint next to the date picker. Some reports filter by execution date (when the test was run) while others filter by creation date (when the test case was created). Choosing the wrong date type may return unexpected results.
When comparing sprints, set the date range to match each sprint's start and end dates exactly. This ensures you are comparing equal time windows and not mixing data across sprints.
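The sprint-alignment rule above can be sketched as a pair of inclusive date windows. The sprint dates and two-week length below are hypothetical examples, not values from TestKase:

```python
from datetime import date, timedelta

def sprint_window(start: date, length_days: int = 14) -> tuple[date, date]:
    """Inclusive (start, end) range for a sprint of the given length."""
    return start, start + timedelta(days=length_days - 1)

def in_window(day: date, window: tuple[date, date]) -> bool:
    start, end = window
    return start <= day <= end

# Two consecutive two-week sprints: the windows share no days, so data
# filtered by these ranges never mixes across sprints.
sprint_a = sprint_window(date(2024, 3, 4))   # Mar 4 - Mar 17
sprint_b = sprint_window(date(2024, 3, 18))  # Mar 18 - Mar 31
```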
Coverage reports (Requirement Coverage, Traceability Matrix, Uncovered Requirements) do not support date range or cycle filters because they reflect the current state of requirement-test linkages, which are not time-bounded.
Reading Reports
TestKase uses several chart types across its reports. Understanding how to read each chart type will help you extract maximum value from the analytics.
Donut Charts
Donut charts show proportional breakdowns of a single metric. Each segment represents a category (e.g., Passed, Failed, Blocked), and the segment size is proportional to its count or percentage. The center of the donut typically displays the total count or a key percentage.
How to read: Look at the relative size of each segment. A large green (Passed) segment with small red (Failed) and yellow (Blocked) segments indicates healthy execution. If the red segment dominates, investigate the failures. Hover over any segment to see the exact count and percentage.
Reports using donut charts: Execution Summary, Requirement Coverage, Test Case Coverage, Defects by Folder, Defects by Cycle, Defects by Tester, Test Case Distribution.
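The percentages shown on hover follow directly from the segment counts. A minimal sketch of that calculation (illustrative only; the status names are the ones used above):

```python
def donut_segments(counts: dict[str, int]) -> dict[str, float]:
    """Percentage of the total for each status segment, one decimal place."""
    total = sum(counts.values())
    if total == 0:
        return {status: 0.0 for status in counts}
    return {status: round(100 * n / total, 1) for status, n in counts.items()}
```

A breakdown like `{"Passed": 45, "Failed": 4, "Blocked": 1}` yields 90% / 8% / 2% -- the healthy pattern described above, with the green segment dominating.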
Stacked Bar Charts
Stacked bar charts compare multiple items (cycles, testers, folders) across the same set of categories. Each bar is divided into stacked colored segments, making it easy to compare both the total and the breakdown across items.
How to read: Compare bar heights for totals and segment proportions for breakdowns. A tall bar with a large red segment indicates a high-volume area with many failures. A short bar with all green indicates a small, stable area.
Reports using stacked bars: Execution by Cycle, Execution by Tester, Execution by Priority, Execution by Environment, Execution by Folder, Cycle Comparison, Execution by Automation Type, Tester Workload.
Line Charts
Line charts plot a single metric over time. The X-axis represents time periods and the Y-axis represents the metric value. Upward trends indicate growth, downward trends indicate decline.
How to read: Look for the overall direction (trending up, down, or flat) and note any sharp inflection points. A sudden drop in the burn-down chart, for example, indicates a burst of execution activity. A plateau indicates stalling.
Reports using line charts: Execution Burn-Up, Execution Burn-Down, Test Creation Trend, Requirement Coverage Trend, Execution Velocity.
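Burn-up and burn-down are two views of the same data: cumulative executions against total scope. A sketch of how both series derive from per-period execution counts (hypothetical helper, not TestKase's implementation):

```python
def burn_series(total_scope: int,
                executed_per_period: list[int]) -> tuple[list[int], list[int]]:
    """Cumulative burn-up (executed) and burn-down (remaining) per period."""
    burn_up: list[int] = []
    burn_down: list[int] = []
    done = 0
    for executed in executed_per_period:
        done += executed
        burn_up.append(done)               # trends up toward total_scope
        burn_down.append(total_scope - done)  # trends down toward zero
    return burn_up, burn_down
```

With scope 100 and per-period counts `[10, 0, 30]`, the burn-down runs 90, 90, 60: the flat stretch is the plateau described above, and the final step is the sudden drop that signals a burst of execution activity.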
Area Charts
Area charts are similar to line charts but fill the space below each line with color. When multiple series are stacked, the total height represents the combined value while the colored bands show the individual contributions.
How to read: Focus on the thickness of each colored band over time. In the Execution Trend report, a growing green band (passed) and shrinking red band (failed) shows quality improvement.
Reports using area charts: Execution Trend.
Dual Line Charts
Dual line charts plot two related metrics on the same time axis. This makes it easy to compare rates and identify divergence or convergence between the two metrics.
How to read: If the two lines are diverging, the rates are becoming unbalanced. In the Created vs Executed report, a large gap between creation and execution lines means tests are accumulating faster than they are being run.
Reports using dual line charts: Created vs Executed.
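The divergence described above can be measured as the gap between the two cumulative series. A sketch of that backlog calculation (illustrative, not TestKase's implementation):

```python
def creation_execution_gap(created_per_period: list[int],
                           executed_per_period: list[int]) -> list[int]:
    """Backlog per period: cumulative created minus cumulative executed.
    A growing value means tests accumulate faster than they are run."""
    gap: list[int] = []
    created_total = 0
    executed_total = 0
    for created, executed in zip(created_per_period, executed_per_period):
        created_total += created
        executed_total += executed
        gap.append(created_total - executed_total)
    return gap
```

A steadily growing gap (e.g. 5, 10, 15) is the diverging-lines pattern: creation is outpacing execution and the unexecuted backlog is compounding.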
Heatmaps
Heatmaps use color intensity to represent risk or severity levels. Darker/warmer colors (red, orange) indicate higher risk, while lighter/cooler colors (green, blue) indicate lower risk.
How to read: Scan for the hottest cells -- these are your highest-risk areas. Focus your testing and stabilization efforts there. Cool cells can be deprioritized.
Reports using heatmaps: Risk Heatmap (Folder), Risk Heatmap (Feature).
Scorecards
Scorecards present tabular data with calculated metrics. Each row represents an entity (folder or tester) with columns for key metrics and a calculated score or pass rate. Rows are typically sortable by any column.
How to read: Sort by pass rate to find the weakest performers. Look at the total counts to understand volume -- a 50% pass rate on 100 tests is more concerning than 50% on 4 tests.
Reports using scorecards: Scorecard by Folder, Scorecard by Tester.
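The volume caveat above -- 50% on 100 tests is worse news than 50% on 4 -- can be made precise with a confidence bound on the pass rate. This is a statistical illustration using the Wilson score upper bound, not a metric TestKase computes:

```python
import math

def pass_rate_upper_bound(passed: int, total: int, z: float = 1.96) -> float:
    """Wilson score upper bound on the true pass rate at ~95% confidence.
    A low upper bound means the entity is weak with high confidence,
    not merely weak on a small sample."""
    if total == 0:
        return 1.0  # no evidence either way
    p = passed / total
    z2 = z * z
    centre = p + z2 / (2 * total)
    margin = z * math.sqrt(p * (1 - p) / total + z2 / (4 * total * total))
    return (centre + margin) / (1 + z2 / total)
```

At 50 passed of 100, the true pass rate is confidently below about 60%; at 2 of 4, it could plausibly be as high as 85%. Sorting ascending by this bound surfaces the confidently weak rows first.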
Gauges
Gauge charts display a single aggregate score (0-100) on a dial. The dial position and color indicate the health level: green for good, yellow for caution, red for critical.
How to read: The score is the key takeaway. Below the gauge, you will find the contributing factors (pass rate, execution progress, critical test pass rate) that explain why the score is where it is. Use these factors to determine what needs to improve.
Reports using gauges: Release Readiness.
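To illustrate how a gauge combines its contributing factors into one score, here is a hypothetical weighted blend. The weights and thresholds below are invented for illustration -- TestKase's Release Readiness score is AI-computed and does not necessarily use this formula:

```python
def readiness_score(pass_rate: float, progress: float,
                    critical_pass_rate: float) -> int:
    """Hypothetical 0-100 blend of the gauge's contributing factors.
    Inputs are fractions in [0, 1]; weights are illustrative only."""
    for value in (pass_rate, progress, critical_pass_rate):
        if not 0.0 <= value <= 1.0:
            raise ValueError("factors must be fractions in [0, 1]")
    score = 0.4 * pass_rate + 0.3 * progress + 0.3 * critical_pass_rate
    return round(100 * score)

def band(score: int) -> str:
    """Dial colour band: green for good, yellow for caution, red otherwise."""
    if score >= 80:
        return "green"
    if score >= 60:
        return "yellow"
    return "red"
```

The point of the breakdown below the gauge is exactly this decomposition: a mediocre score tells you *that* something is wrong, while the factor values tell you *which* term is dragging it down.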
Next Steps
- Execution Reports -- Pass/fail/blocked breakdowns by cycle, tester, priority, environment, and folder.
- Coverage & Traceability Reports -- Track requirement coverage, test case linkage, and the traceability matrix.
- Defect Reports -- Analyze defect distribution by folder, cycle, and tester.
- Team Metrics & Trend Analysis -- Track team workload, test case distribution, and quality trends over time.
- AI Insights Reports -- ML-powered analytics for flaky tests, risk heatmaps, predictive failure, and more.
- Best Practices & FAQ -- Guidelines for effective reporting and answers to common questions.