TestKase Docs
Reports & Analytics

Team Metrics & Trend Analysis

Track team workload, test case distribution, and quality trends over time.

Team Metrics

Team metrics reports provide visibility into how testing work is distributed across your team and how your test case library is composed. Both reports are Tier 1.

Tester Workload

Chart type: Stacked Bar | Tier: 1

Displays assigned vs. executed test cases per tester. Each bar represents a team member, with stacked segments showing how many of their assigned tests have been executed vs. how many remain pending.

Key insights: Identify testers who are overloaded with assignments and falling behind on execution. Spot testers who have finished early and can take on additional work. This report is critical for workload balancing and sprint planning.

When to use: Mid-sprint to check execution progress per team member. If some testers are falling behind, reassign tests to those who are ahead of schedule.

Test Case Distribution

Chart type: Donut | Tier: 1

Shows a breakdown of your test case library by priority, status, and automation type. This gives you a bird's-eye view of the composition of your test suite.

Key insights: Check the balance of priorities: if the majority of your test cases are "Low" priority, your suite may not be focused on the right things. Review automation type distribution to track your manual-to-automated test ratio.

When to use: During quarterly test suite reviews to assess the health and composition of your library. Also useful when setting automation goals (e.g., "increase automated coverage to 60%").
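To make the distribution concrete, here is a minimal sketch of how the percentage breakdown behind a donut chart can be computed. The `cases` list, its field names, and the `share` helper are illustrative assumptions, not the TestKase data model or API.

```python
from collections import Counter

# Hypothetical test case library; the "priority" and "automation"
# fields are illustrative assumptions, not TestKase's schema.
cases = [
    {"priority": "High", "automation": "Automated"},
    {"priority": "Low", "automation": "Manual"},
    {"priority": "Low", "automation": "Manual"},
    {"priority": "Medium", "automation": "Automated"},
    {"priority": "Low", "automation": "Hybrid"},
]

def share(cases, field):
    """Percentage breakdown of the library by one field."""
    counts = Counter(c[field] for c in cases)
    total = len(cases)
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

# share(cases, "automation") gives the manual-to-automated ratio,
# which is the number you would track toward a 60% automation goal.
```

The same helper works for any categorical field, which mirrors how the report breaks the library down by priority, status, and automation type.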

Pair Tester Workload with Execution by Tester for a complete picture: workload shows assignment balance, while execution shows outcome distribution.

Trend Analysis

Trend analysis reports add a time dimension to your testing data. Instead of showing a point-in-time snapshot, they plot metrics over days, weeks, or months so you can see how your quality posture is evolving. All trend reports are Tier 2 (Pro plans and above) and support configurable granularity.

Execution Trend

Chart type: Area | Tier: 2

Plots pass/fail/blocked counts over time as a stacked area chart. Each time period shows the cumulative execution outcomes, making it easy to see whether your pass rate is improving or degrading.

Use case: Track sprint-over-sprint quality improvement. A growing green (passed) area and a shrinking red (failed) area indicate positive momentum.

Execution Burn-Up

Chart type: Line | Tier: 2

Shows cumulative test executions vs. total scope over time. Two lines are plotted: one for the total number of tests in scope, and one for the cumulative number of tests executed. When the two lines converge, execution is complete.

Use case: Sprint progress tracking. Share with stakeholders to show how execution is progressing toward the total scope. If scope keeps increasing (the top line rises), it signals scope creep.

Execution Burn-Down

Chart type: Line | Tier: 2

Shows remaining unexecuted tests over time. The line starts at the total number of tests and decreases as tests are executed. The ideal trajectory is a steady downward slope reaching zero by the end of the cycle.

Use case: Predict whether your team will finish execution on time. If the burn-down line flattens, the team is stalling and may need additional resources or scope reduction.
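The burn-up and burn-down charts above are two views of the same cumulative series. The sketch below shows how both lines can be derived from an execution log; the `executions` records and the `burn_series` helper are hypothetical names for illustration, not TestKase's internals.

```python
from datetime import date

# Hypothetical execution log: (execution_date, test_id) pairs.
# Data and field names are illustrative assumptions.
executions = [
    (date(2024, 5, 1), "TC-1"),
    (date(2024, 5, 1), "TC-2"),
    (date(2024, 5, 2), "TC-3"),
    (date(2024, 5, 3), "TC-4"),
]
total_scope = 6  # total tests planned for the cycle

def burn_series(executions, total_scope):
    """Return (burn_up, burn_down): cumulative executed and remaining per day."""
    per_day = {}
    for day, _test in executions:
        per_day[day] = per_day.get(day, 0) + 1
    burn_up, burn_down, cumulative = [], [], 0
    for day in sorted(per_day):
        cumulative += per_day[day]
        burn_up.append((day, cumulative))                   # rises toward total_scope
        burn_down.append((day, total_scope - cumulative))   # falls toward zero
    return burn_up, burn_down

up, down = burn_series(executions, total_scope)
```

A flattening burn-down is simply `total_scope - cumulative` staying constant across periods, which is the stalling signal the use case describes.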

Test Creation Trend

Chart type: Line | Tier: 2

Tracks the number of new test cases created per period. This report shows how actively your team is expanding the test suite.

Use case: Ensure test cases are being created at a pace that matches feature development. A declining creation trend while new features ship signals a growing coverage gap.

Cycle Comparison

Chart type: Stacked Bar | Tier: 2

Provides a side-by-side status comparison of selected test cycles. Choose two or more cycles and see their pass/fail/blocked distributions next to each other.

Use case: Compare this week's regression cycle against last week's to see if quality improved. Compare smoke test results across different builds.

Scorecard by Folder

Chart type: Scorecard | Tier: 2

A detailed pass/fail scorecard for each folder. Each row shows the folder name along with total test count, passed count, failed count, blocked count, unexecuted count, and the calculated pass rate as a percentage.

Use case: Module-level quality scoring. Quickly identify which product areas are meeting quality targets and which need attention. Sort by pass rate to find the weakest areas.
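A rough sketch of the aggregation behind this scorecard, including the pass-rate column and the "sort by pass rate" use case. The `results` data and `folder_scorecard` helper are assumptions for illustration; in particular, pass rate is computed here over all tests in the folder, and the product may instead compute it over executed tests only.

```python
from collections import Counter

# Hypothetical per-test results keyed by folder; statuses mirror the
# scorecard columns. Illustrative data, not the TestKase data model.
results = [
    ("Checkout", "passed"), ("Checkout", "passed"), ("Checkout", "failed"),
    ("Login", "passed"), ("Login", "blocked"), ("Login", "unexecuted"),
]

def folder_scorecard(results):
    """One row per folder: status counts plus pass rate as a percentage."""
    per_folder = {}
    for folder, status in results:
        per_folder.setdefault(folder, Counter())[status] += 1
    rows = []
    for folder, c in per_folder.items():
        total = sum(c.values())
        rows.append({
            "folder": folder, "total": total,
            "passed": c["passed"], "failed": c["failed"],
            "blocked": c["blocked"], "unexecuted": c["unexecuted"],
            # Assumption: pass rate over all tests in the folder.
            "pass_rate": round(100 * c["passed"] / total, 1) if total else 0.0,
        })
    # Weakest areas first, matching the "sort by pass rate" use case.
    return sorted(rows, key=lambda r: r["pass_rate"])
```

The same row shape, swapped to a tester key, would produce the Scorecard by Tester report described next.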

Scorecard by Tester

Chart type: Scorecard | Tier: 2

A pass rate scorecard for each tester. Each row shows the tester name, total assigned tests, tests executed, passed, failed, and their overall pass rate.

Use case: Review individual execution progress and results during sprint retrospectives. Recognize team members with strong pass rates and investigate patterns behind low pass rates.

Created vs Executed

Chart type: Dual Line | Tier: 2

Compares the test case creation rate vs. execution rate over time. Two lines are plotted: one for the number of test cases created per period, and one for the number of executions per period.

Use case: Ensure your team is not just creating tests but also executing them. A large gap between creation and execution rates means tests are piling up without being run.
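The "tests piling up" signal in this report is the cumulative gap between the two lines. A minimal sketch, assuming per-period counts are already available (the weekly data and `backlog_gap` helper are illustrative, not a TestKase API):

```python
# Hypothetical weekly counts (period -> count); illustrative data only.
created  = {"W1": 20, "W2": 25, "W3": 30}
executed = {"W1": 18, "W2": 15, "W3": 12}

def backlog_gap(created, executed):
    """Cumulative created minus cumulative executed per period.

    A value that keeps growing means tests are being written
    faster than they are being run.
    """
    gap, c_sum, e_sum = {}, 0, 0
    for period in sorted(created):
        c_sum += created[period]
        e_sum += executed.get(period, 0)
        gap[period] = c_sum - e_sum
    return gap
```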

Execution by Automation Type

Chart type: Stacked Bar | Tier: 2

Splits execution results by automation type (Manual, Automated, Hybrid). Each bar shows how tests of each type performed.

Use case: Track automation effectiveness. If automated tests have a higher failure rate than manual tests, investigate whether the automation scripts are brittle or the test environments are misconfigured.

Requirement Coverage Trend

Chart type: Line | Tier: 2

Plots the requirement coverage percentage over time. This shows whether your team is progressively improving coverage or if coverage is stagnating.

Use case: Set a coverage target (e.g., 90%) and track progress toward it over multiple sprints. Share the trend line with stakeholders to demonstrate continuous improvement.

All trend reports support three granularity levels: Daily, Weekly, and Monthly. Choose daily for sprint-level analysis, weekly for release-level views, and monthly for quarterly reviews.
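As a sketch of what the three granularity levels mean in practice, here is one way to map execution dates to period buckets. The `bucket` function is hypothetical; in particular, the weekly bucketing uses ISO week numbers, which is an assumption about how TestKase groups weeks.

```python
from datetime import date

def bucket(day: date, granularity: str) -> str:
    """Map a date to a period key for daily/weekly/monthly granularity.

    Assumption: weekly buckets follow ISO week numbering; the product
    may group weeks differently (e.g. starting on Sunday).
    """
    if granularity == "daily":
        return day.isoformat()
    if granularity == "weekly":
        iso_year, iso_week, _ = day.isocalendar()
        return f"{iso_year}-W{iso_week:02d}"
    if granularity == "monthly":
        return f"{day.year}-{day.month:02d}"
    raise ValueError(f"unknown granularity: {granularity}")
```

Grouping any of the trend series in this section by these keys yields the daily, weekly, or monthly views described above.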