Making Interpreting Study Data Dramatically More Efficient

Apollo is Charles River Labs' new online study platform, designed to let scientists view pharmaceutical data in near real time. Clinical Observations is the module for recording data points taken daily over the course of a pharmaceutical study, encompassing hundreds of readings every day. Pivoting from a table of raw numbers to a graphical format let users spot anomalies and spikes at a glance and quickly pinpoint areas to review in more detail, dramatically increasing the speed of their study data reviews.


Client: Charles River Labs

Role: Lead Product Designer

Team: Product Owner, 2 Engineers, Data Team, Design Director

Timeline: 8 weeks plus development time

The original Clinical Observations screen

The problem: too much numerical data

The initial version of the Clinical Observations module was designed before I started on the project and consisted of a huge table of data that could be sorted and reviewed in more detail. I spent several weeks on user interviews with study monitors (the personnel at pharma companies who track studies of specific drugs) and study directors (the CRL personnel who run the studies) to learn how users were currently using this and other modules of Apollo. One huge takeaway was that while the data table did show all the needed readings, it was very difficult to find meaningful patterns in such a huge data set, and the table wasn't especially useful if a user didn't already know what they were looking for.

In these drug studies a scientist is generally looking for anomalies, trying to find spikes or changes that can then be investigated to determine the cause. The biggest shortcoming in the initial version of Clin Obs was that the data couldn't be filtered or ordered by frequency of occurrence, meaning a scientist had to scan the entire table looking for those anomalies (imagine having to compare numbers on a scrolling bingo card the size of a newspaper). Studies often lasted three to six months or more, so a user also had to scroll horizontally through a long timeline of entries to view all of the data and evaluate when things occurred. As for the data set itself, large studies could take a very long time to load, a problem that would only grow as the platform added more customers and studies. All of this made the Clinical Observations module ripe for UX improvements.

Exploring visual ways to display data

I spent some time sketching and researching different graphs and other methods of visualizing data, looking for ideas that we could translate into better ways to quickly analyze Clin Obs data. I explored a variety of charts and graphs, looking at everything from pie charts and bar graphs to ways to graph occurrences of clinical signs over time. My favorite concept that went unused was the idea of a sparkline, a tiny line chart that shows a quick view of trends over time (bottom).
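To illustrate the concept: a sparkline is small enough to render as a few lines of inline SVG. This is a minimal, hypothetical sketch (the function shape and 100x20 viewBox are illustrative choices, not anything from Apollo), assuming at least two readings:

```typescript
// Hypothetical sketch: render a series of readings as a tiny inline SVG
// sparkline. Dimensions are illustrative; assumes at least two readings.
function sparkline(readings: number[], width = 100, height = 20): string {
  const max = Math.max(...readings);
  const min = Math.min(...readings);
  const range = max - min || 1; // avoid dividing by zero on flat data
  const points = readings
    .map((value, i) => {
      const x = (i / (readings.length - 1)) * width;
      const y = height - ((value - min) / range) * height;
      return `${x.toFixed(1)},${y.toFixed(1)}`;
    })
    .join(" ");
  return `<svg viewBox="0 0 ${width} ${height}"><polyline fill="none" stroke="currentColor" points="${points}"/></svg>`;
}
```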

Clinical Observations concepts
User testing the Power BI prototype

User testing in Power BI

Some members of our data team had started to explore graphing Clinical Observations in Power BI, taking recorded study data and displaying it in bar graph format in different ways. The visual design and UX of the prototype were far from ideal (literally any item that was clicked would trigger loading something else), but it provided a great starting point for testing these concepts with users.

Comments ranged from "game changer" to
"can I have this now?"

I tested the Power BI prototype with over a dozen external users and a panel of internal study directors, asking them first to talk me through their thoughts as they explored it, then showing them specifically how it worked. Users were thrilled with the visual format (comments ranged from "game changer" to "can I have this now?") and had lots of feedback. We confirmed that being able to quickly find anomalies was key: if users could easily spot places where the data varied in a way they didn't expect, they could investigate to learn more about what was happening. Being able to find those spikes and then filter the numeric data down to those specific occurrences would have huge benefits, not only greatly increasing their efficiency in interpreting the study data but also making it much easier for them to summarize and present findings to their teams.

The new Clinical Observations screen on initial load

The solution: graphing clinical signs over time

After reviewing our user interviews, we decided that showing occurrences over time, frequency of occurrence by symptom, and number of occurrences by group would provide much more meaningful and digestible results and cover most use cases. I designed graphs for each of these views, then worked extensively with the data team to make sure we had access to the data streams needed to load those graphs up front.
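As a rough illustration of the data behind those three graphs, here is a minimal sketch of the aggregations involved. The record shape and field names are hypothetical, not Apollo's actual schema:

```typescript
// Hypothetical observation record; field names are illustrative only.
interface Observation {
  date: string;     // ISO date of the reading, e.g. "2024-03-12"
  sign: string;     // clinical sign observed, e.g. "salivation"
  group: string;    // dose group the subject belongs to
  animalId: string; // individual animal identifier
}

// Generic tally: count observations by any key.
function countBy(
  observations: Observation[],
  key: (o: Observation) => string
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const o of observations) {
    const k = key(o);
    counts.set(k, (counts.get(k) ?? 0) + 1);
  }
  return counts;
}

// The three chart-ready summaries loaded up front:
const occurrencesOverTime = (obs: Observation[]) => countBy(obs, o => o.date);
const frequencyBySymptom = (obs: Observation[]) => countBy(obs, o => o.sign);
const occurrencesByGroup = (obs: Observation[]) => countBy(obs, o => o.group);
```

Computing these tallies ahead of time means the initial page only needs a handful of counts per graph rather than the full table of readings.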

The module was designed so that a user could review the graphs, then choose a clinical sign to see detailed data on that sign. We also added the ability to dig deeper and see all data on a specific animal in the study, an often-requested feature. This not only made it much faster for users to find issues that needed to be investigated, but also greatly reduced page load times by reducing the amount of data being requested from the CRL servers.
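One way to picture that drill-down flow: load the pre-aggregated summaries up front and fetch detail rows only on selection. The endpoint paths below are hypothetical, purely to sketch the pattern, and are not Apollo's actual API:

```typescript
// Hypothetical sketch of the drill-down flow. Endpoint paths are
// illustrative, not Apollo's actual API.

// Initial page load: a small payload of pre-aggregated counts
// for the three summary graphs.
async function loadSummary(studyId: string) {
  const res = await fetch(`/api/studies/${studyId}/clin-obs/summary`);
  return res.json();
}

// Fetched only when the user selects a clinical sign, so the page
// never requests the full observation table at once.
async function loadSignDetail(studyId: string, sign: string) {
  const res = await fetch(
    `/api/studies/${studyId}/clin-obs?sign=${encodeURIComponent(sign)}`
  );
  return res.json();
}

// Deeper drill-down: every reading for a single animal.
async function loadAnimalDetail(studyId: string, animalId: string) {
  const res = await fetch(
    `/api/studies/${studyId}/animals/${animalId}/clin-obs`
  );
  return res.json();
}
```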

The original page was hard to interpret, too data heavy, and slow to load (the giant bingo card)
The new screen provided a streamlined view that made it very efficient to find and understand anomalies in the study data

End result: reducing time spent from hours to minutes

Follow-up testing received rave reviews from our beta users. Clinical Observations had evolved from a giant table of data into a quick-to-interpret set of graphs that solved many of our users' pain points and made their use of the application far more efficient. In early testing, users estimated that the new version would reduce the time to find an anomaly from as much as an hour to only minutes every time they reviewed the study data. The direct result is thousands of dollars saved in time spent analyzing study data, and the update has the potential to save companies hundreds of thousands of dollars by enabling scientists to be proactive rather than reactive in their responses to new data. More user testing is planned after the module launches to a wider audience, to continue refining the graphs and improve the usability of the filters and table data selection.