---
title: Re-designing image reading data reporting to stress-test the reporting roadmap
description: Using a design sprint to better define the future of image reading insight and validate the platform and tooling needed for dashboards going forward
date: 2026-04-17
tags:
  - breast screening
  - reporting
  - image reading
  - BSIS
  - FRQA
  - dashboards
  - design sprint
author:
  - Laboni Paul
---

Breast Screening Information Systems (BSIS) primarily enables Quality Assurance (QA) staff to monitor and ensure the safety of breast screening services by identifying any failures to comply with performance standards. The Film Reading Quality Assurance (FRQA) report in BSIS, alongside the Interval Cancer report, is used mainly by image readers, unit directors and QA to understand image reading performance across services. QA use this report to find instances where image reading may be poorer than expected and to follow up with services to understand why.

This work is helping us rethink image reading data reporting so that we can improve and replace BSIS’ FRQA report and prepare for the future breast screening service. It also gives us a way to test the platform and tooling needed to replace other breast screening reports over time.

## Finding the right platform

As part of our early dashboard testing, we explored using the Federated Data Platform (FDP) for breast screening reporting. This helped us better understand how it supports our needs, particularly for user-facing dashboards.

We are now exploring where and how data should be presented so that it best meets the needs of screening users, including usability, accessibility and delivery speed.

## Why image reading is a useful stress test

FRQA represents the most complex reporting domain within BSIS, so it gave us a deliberate stress test for our future reporting approach. It involves complex logic and visualisations, the need to reconcile different instances of NBSS data, and 5 different user groups with different views of the data. Having dashboards across different platforms is hard to maintain and creates a frustrating experience for users, so we are using FRQA, the most demanding reporting domain, to help us define our dashboard strategy going forward.

The new breast screening service also introduces additional needs. It is creating a dedicated digital workflow for image readers, including structured recording of image quality, batch reading and enhanced reader interfaces. The data generated will be richer, more timely and more structured than under NBSS, which creates both an opportunity and a responsibility to rethink how image reading analysis is improved and replaced.

This work helps answer 2 connected questions:

- how do we safely replace FRQA for QA?
- how can we redesign image reading data analysis in preparation for the new breast screening service?

By tackling these questions early, we can better understand the platforms and tools needed for this use case, and de-risk the roadmap for replacing other BSIS reports.
## The design sprint

We ran a 10-day time-boxed design sprint to:

- map the underpinning logic and users of FRQA as it works today
- explore what modern image reading insight should look like, using visual concepts to test our thinking
- create a data model to validate requirements and the data needed in future
- test which data platform and visualisation tools could best support the solution
- create an early view of delivery steps and further work
## Understanding user needs and gaps

Across user groups, there is a need for:

- more timely insight
- better benchmarking against peers and national standards
- clearer explanations of complex metrics
- easier access to definitions and standards
- improved visibility of trends over time
- better workload visibility

There is also a need for more actionable insight. Users find BSIS difficult to access, with limited interactivity and static reports that make information harder to explore and interpret. FRQA and similar reports were designed mainly for QA oversight and reconciliation, which means image readers and unit directors are often underserved.
## Testing visual concepts

During the sprint, we prioritised the needs of image readers, screening directors and QA. Image readers and screening directors are currently the more underserved groups and need more frequent access to performance insight, so we designed visual concepts tailored to their needs to better understand their data requirements.

For QA users, we focused on recreating the most complex FRQA infographics with improved interactivity through technical exploration.

Below is an example of a visual concept that we tested with screening directors and image readers.
## A common data model

Identifying KPIs for each user group helped us create an early data model for future image reading reporting. This confirmed the fields and formats required, the identifiers needed, and the synthetic data needed to test platforms and visualisation tools.

It also helped us think through how future dashboards might use critical data such as when image reading took place and the decisions taken during arbitration.
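As an illustration only, a record in such a data model might look something like the sketch below. The field names and values are assumptions made for this example, not the agreed model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# A hypothetical, simplified record for one read of one screening episode.
# All field names here are illustrative assumptions, not the agreed model.
@dataclass
class ImageReadingEvent:
    episode_id: str    # pseudonymised identifier linking reads to an episode
    service_code: str  # the screening service the episode belongs to
    reader_id: str     # pseudonymised image reader identifier
    read_number: int   # 1 = first read, 2 = second read
    opinion: str       # e.g. "normal" or "recall"
    read_at: datetime  # when the image reading took place
    arbitration_outcome: Optional[str] = None  # decision if readers disagreed

event = ImageReadingEvent(
    episode_id="EP-0001",
    service_code="ABC",
    reader_id="R-42",
    read_number=1,
    opinion="recall",
    read_at=datetime(2026, 3, 2, 9, 30),
)
print(event.opinion)  # "recall"
```

A structure like this would let a dashboard answer both timing questions (from `read_at`) and disagreement questions (from `arbitration_outcome`) without joining extra tables.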

Below is an initial pass at modelling the data needed for image reading.
## Technical exploration

We drafted a data flow diagram to review feasibility with Information Governance and architects. Then, using dummy data that is representative of real records, we tested visualisation tools to recreate KPIs and complex charts for QA. Feedback on this helped refine the data model further.
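To make this kind of test concrete, here is a minimal sketch: generate dummy reading records and recompute a simple measure from them. The field names and the recall-rate calculation are illustrative assumptions for this example, not the actual FRQA logic.

```python
from collections import defaultdict

# Dummy reading records standing in for real NBSS data (all values invented).
dummy_reads = [
    {"reader": "R-1", "opinion": "recall"},
    {"reader": "R-1", "opinion": "normal"},
    {"reader": "R-1", "opinion": "normal"},
    {"reader": "R-1", "opinion": "normal"},
    {"reader": "R-2", "opinion": "recall"},
    {"reader": "R-2", "opinion": "recall"},
    {"reader": "R-2", "opinion": "normal"},
    {"reader": "R-2", "opinion": "normal"},
]

def recall_rate_by_reader(reads):
    """Percentage of reads given a 'recall' opinion, per reader."""
    totals, recalls = defaultdict(int), defaultdict(int)
    for read in reads:
        totals[read["reader"]] += 1
        if read["opinion"] == "recall":
            recalls[read["reader"]] += 1
    return {reader: 100 * recalls[reader] / totals[reader] for reader in totals}

rates = recall_rate_by_reader(dummy_reads)
print(rates)  # {'R-1': 25.0, 'R-2': 50.0}
```

Recomputing known measures from dummy data like this is what lets us check that a candidate visualisation tool renders the same numbers the current report would show.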

We compared FDP’s visualisation tools with other options, including default dashboarding tools, open-source options and custom web builds. We also explored whether data processed in one platform could serve dashboards both within the image reader workflow and in a separate location for QA users who need to review multiple dashboards in one place.

Below is an example of a chart found in the current BSIS FRQA report, re-created using open-source tooling.
## What we found

Data for the new breast screening service will reside in Azure. Based on our testing so far, Azure-based platforms and code-driven visualisation tools better supported the requirements of image reading analysis.

This is because they better support:

- a single source of truth, by keeping reporting closer to the source data used by the new breast screening service
- user interface flexibility, including the ability to embed dashboards more easily within the new service interface
- efficient development, with greater scope to iterate, duplicate dashboards and create custom visualisations more quickly

## What happens next

We have been testing some of the riskiest assumptions in the proposed architecture and plan of work. These include the ability to obscure identifiable data and to provide role-based access to users - both critical needs for a secure and scalable data platform. Having achieved this, we are now demonstrating the core set-up of the data platform, which provides a workspace for ad-hoc analysis and can automate data dashboards.
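As a sketch of what these two assumptions involve, the example below pairs keyed hashing with per-role filtering. This is an illustrative stand-in written in plain Python, not the platform's actual de-identification or access-control mechanism, and all names and values are invented.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key-not-real"  # in practice, held in a managed key store

def pseudonymise(identifier: str) -> str:
    """Replace an identifiable value with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:12]

# Rows a dashboard query might return (all values invented).
rows = [
    {"nhs_number": "9434765919", "service": "ABC", "opinion": "recall"},
    {"nhs_number": "9434765870", "service": "XYZ", "opinion": "normal"},
]

# Role-based view: QA sees every service; a unit-level user sees only their own.
ROLE_SERVICES = {"qa": None, "unit_abc": {"ABC"}}

def view_for(role: str, data):
    allowed = ROLE_SERVICES[role]
    visible = [r for r in data if allowed is None or r["service"] in allowed]
    # Obscure the identifiable column before anything leaves the platform.
    return [{**r, "nhs_number": pseudonymise(r["nhs_number"])} for r in visible]

print(len(view_for("qa", rows)))        # 2
print(len(view_for("unit_abc", rows)))  # 1
```

The point of testing this early is that both controls sit in the data layer, so every dashboard built on top inherits them rather than re-implementing them.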

## Looking further ahead

In future, we will be developing the performance viewer dashboard, which shows oversight metrics like coverage and uptake in one place. This is a prototype designed in an earlier sprint that brings higher-level metrics and demographic breakdowns together in one dashboard. It is intended to replace simpler reports, such as invitation monitoring, KC63 and deprivation reports in BSIS and other platforms, while also helping establish our ways of working in Azure.

The image reading dashboards will require further understanding of the fields we can extract from NBSS, how we extract them and how we calculate the required measures. We expect to begin with pilot dashboards for a small number of services, then expand incrementally as more NBSS data can be integrated and automated. Over time, this work will help us replace FRQA for QA, improve reporting for services, and create better principles and standards for image reading data across the future breast screening service.