Consolidate scientifically linked data through an automated standardization platform.
We’ve seen that many biotech and pharma companies are rethinking their data strategy to reflect the importance of the data they produce and collect. Many current workflows and datasets, while interlinked from a scientific perspective, are isolated from one another. As a result, there’s immense value in establishing lab connectivity to prioritized instruments and capturing data in a centralized, standardized repository.
A great example of these isolated data sets is the bioprocess workflow. A typical bioprocess workflow for a protein-based biologic (or microbial, viral, etc.) involves 15+ instruments, many of which are different makes of the same type of instrument. Each stage depends on its predecessor, which makes it all the more important to compare data sets easily.
The status quo is for scientists and informatics teams to rely heavily on the manual collection, transfer, manipulation, and reporting of their instrument data. One bioprocess director allots 40% of his team’s week to data collection, massaging, and prep. Spending so much time on data collection, rather than on running more experiments, can hamper a department’s (and the organization’s) ability to scale effectively.
“One bioprocess director allots 40% of his team’s week to data collection, massaging, and prep.”
In previous blogs, we discussed automating data collection, transfer, and standardization from GE AKTA purification systems and from analytical HPLCs running Waters Empower. In the context of protein biologics, for example, the product of the AKTA purification system is sampled and analyzed with an Empower-connected HPLC. The data generated by both systems need to be linked together in order to make sense of the results. And these are only two of the 15+ instruments involved in bioprocess workflows; to complete the production of the biologic, many other instruments also need to be incorporated. Below is a list of just a few instruments that TetraScience integrates within the bioprocess workflow:
- Blood gas analyzer (BGA)
- Plate readers (Spectrophotometers)
- Cell analyzers
- Chemistry analyzers
- Shaking incubators
- Filtration (TFF, etc.)
- CO2 sensors
- Isoelectric focusing
- Molecular interaction characterization
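To make the AKTA-to-Empower linking described above concrete, here is a minimal sketch of joining purification fractions to their analytical results on a shared sample ID. All field names (`sample_id`, `uv_280_mau`, `main_peak_purity_pct`) are hypothetical illustrations, not the actual instrument output formats or the TetraScience schema.

```python
# Hypothetical sketch: link AKTA purification fractions to Empower HPLC
# results via a shared sample ID. Field names are illustrative only.

akta_fractions = [
    {"sample_id": "F-101", "elution_volume_ml": 12.5, "uv_280_mau": 850.0},
    {"sample_id": "F-102", "elution_volume_ml": 14.0, "uv_280_mau": 430.0},
]

empower_results = [
    {"sample_id": "F-101", "main_peak_purity_pct": 98.7},
    {"sample_id": "F-102", "main_peak_purity_pct": 96.2},
]

def link_by_sample_id(purification, analytics):
    """Join the two data sets on sample_id into one coherent record set."""
    analytics_by_id = {row["sample_id"]: row for row in analytics}
    linked = []
    for fraction in purification:
        merged = dict(fraction)  # copy so the source records stay untouched
        merged.update(analytics_by_id.get(fraction["sample_id"], {}))
        linked.append(merged)
    return linked

linked = link_by_sample_id(akta_fractions, empower_results)
print(linked[0]["main_peak_purity_pct"])  # 98.7
```

With the data joined this way, each purification fraction carries its analytical result, so comparing stages no longer requires manually cross-referencing two systems.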
Through our Bioprocess Data Management Platform, data collection, transfer, and standardization are automated for the instruments participating in the bioprocess workflow. This, in turn, enables disparate data to be linked into coherent sets, without hundreds of manual steps and with minimal human error.
Here's a proof-of-concept (POC) example of an integration with an ELN, in this case IDBS’s E-Workbook.
Biotech and pharma organizations can employ the TetraScience Data Management Platform to ensure that scientists are dedicated to higher-value activities, that data quality stays above acceptable levels, and that historical data will enable, not hamper, new scientific discoveries.
The core functionality of the Bioprocess Data Management Platform is the ability to automatically collect, parse, normalize, and centralize future and historical data sets from an organization's instrumentation. Instrument data can then be accessed by downstream applications and tools, such as visualization programs or an ELN/LIMS, eliminating roughly 90% of the data-related labor along the way. The diagram below shows a high-level overview of how this is possible. (Note: it shows only a selection of instruments; it does not represent the entire process end-to-end.)
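The "parse and normalize" step can be illustrated with a small sketch: two vendors report the same measurement under different column names, and a per-vendor field mapping converts both into one standardized schema before the records land in a central repository. The vendor names, column headers, and schema fields here are all hypothetical, not TetraScience's actual mappings.

```python
import csv
import io

# Hypothetical sketch of parse-and-normalize: map each vendor's column
# names onto one common schema. All names here are illustrative only.
VENDOR_FIELD_MAPS = {
    "vendor_a": {"Sample": "sample_id", "OD600": "optical_density"},
    "vendor_b": {"SampleID": "sample_id", "Density(600nm)": "optical_density"},
}

def normalize(raw_csv, vendor):
    """Parse a vendor CSV export and rename its fields to the common schema."""
    mapping = VENDOR_FIELD_MAPS[vendor]
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        rows.append({mapping[k]: v for k, v in row.items() if k in mapping})
    return rows

central_repo = []  # stand-in for a centralized data repository
central_repo += normalize("Sample,OD600\nS1,0.42\n", "vendor_a")
central_repo += normalize("SampleID,Density(600nm)\nS2,0.55\n", "vendor_b")
print(central_repo)
```

Once both exports share one schema, downstream tools can query the repository without knowing which make of instrument produced each record (values are kept as strings here, as CSV parsing yields; a real pipeline would also coerce types and units).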
Leveraging integrations with various instrument manufacturers and software vendors, TetraScience standardizes data sets through a variety of collection methods, including Internet-of-Things sensors, file-based capture, and direct software integrations.
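As one example of these collection methods, file-based capture can be sketched as a scan of an instrument's export directory that picks up files not yet collected. The directory layout and file names are invented for illustration; a production agent would also handle partial writes, retries, and upload to the central repository.

```python
import os
import tempfile

# Minimal sketch of file-based data collection. Paths and file names are
# illustrative only.
def new_files(path, seen):
    """Return export files not yet collected; update `seen` in place."""
    fresh = [name for name in sorted(os.listdir(path)) if name not in seen]
    seen.update(fresh)
    return [os.path.join(path, name) for name in fresh]

# Simulate an instrument writing an export file into a watched directory.
export_dir = tempfile.mkdtemp()
open(os.path.join(export_dir, "run_001.csv"), "w").close()

seen = set()
collected = new_files(export_dir, seen)        # picks up run_001.csv
collected_again = new_files(export_dir, seen)  # nothing new on the next scan
```

Each collected file would then be handed to the parsing and normalization step before landing in the repository.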
Through automation, standardization, and centralization, the TetraScience platform makes it possible to add critical metadata from additional sources (LIMS IDs, batch records, chemical/compound lot numbers, asset management systems). This enables users to easily search and retrieve data, a capability our customers have identified as the most critical component of method improvement and analytics across the bioprocess workflow.
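The metadata enrichment and search described above can be sketched as attaching LIMS, batch, and lot identifiers to a standardized record, then filtering the repository by those identifiers. All field names and IDs here are hypothetical.

```python
# Hypothetical sketch: attach metadata from other systems (LIMS ID, batch
# record, reagent lot) to standardized instrument records, then search by
# that metadata. All names and IDs are illustrative only.

record = {"sample_id": "S1", "optical_density": 0.42}
metadata = {"S1": {"lims_id": "LIMS-0042", "batch": "B-2019-07", "lot": "L-883"}}

def enrich(rec, meta):
    """Merge any known metadata for this sample into the record."""
    return {**rec, **meta.get(rec["sample_id"], {})}

repository = [enrich(record, metadata)]

def search(repo, **criteria):
    """Find records whose fields match all given key/value pairs."""
    return [r for r in repo if all(r.get(k) == v for k, v in criteria.items())]

matches = search(repository, batch="B-2019-07")
print(matches[0]["lims_id"])  # LIMS-0042
```

Because the batch and LIMS identifiers travel with the instrument data, a scientist can retrieve every measurement for a batch in one query instead of reconstructing the linkage by hand.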
Given its structure, the Bioprocess Data Management Platform allows new data streams and instruments to be added through the TetraScience Datahub, a device through which software updates are pushed and which provides greater security than existing methods. Scientists can easily add new instruments or swap out replaced instrumentation.
The most notable results from the data management platform have come in the form of FTE hours saved. For example, we’ve seen teams regain up to 10% of their week simply from having data automatically collected and transported.
One customer estimates that for every visualization an individual runs, it takes up to 2 hours to collect and normalize the data and enter it into the analytics platform. It may not seem like much, but for one user, over the course of a year, that could be over 2.5 weeks of their time! Near-instant data availability brings this process down to minutes rather than hours, which in turn allows the user to make decisions faster as well as run more experiments.
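A quick back-of-the-envelope check shows where the 2.5-week figure comes from, assuming one visualization per week (an assumption for illustration; the estimate above does not state the frequency).

```python
# Back-of-the-envelope check of the time-savings claim. The one-per-week
# rate is an assumption, not a figure from the customer estimate.
hours_per_visualization = 2
visualizations_per_year = 52   # assumed: one visualization per week
hours_per_work_week = 40

total_hours = hours_per_visualization * visualizations_per_year  # 104
weeks_of_effort = total_hours / hours_per_work_week
print(weeks_of_effort)  # 2.6 -- i.e., "over 2.5 weeks" of working time
```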
There are softer benefits as well. Human error is virtually eliminated, since there’s no longer a need to manually copy and paste or transcribe data from printouts. And tools like an ELN, which organizations had invested heavily in but which weren’t being used to their potential because entering every collected data point was too cumbersome, can now access and process data more easily and quickly through the Bioprocess Data Management Platform.
Most importantly, as organizations begin to file patents, INDs, and other FDA documents (or face the dreaded audit), scientists can find relevant data within minutes or even seconds.
If you’re interested in learning more about managing Bioprocessing data or any of our additional integrations, please reach out to us.