
A Machine Vision Approach for Bioreactor Foam Sensing.

Jonas Austerjost1, Robert Söldner1, Christoffer Edlund2, Johan Trygg2, David Pollard3, Rickard Sjögren2.   

Abstract

Machine vision is a powerful technology that has become increasingly popular and accurate during the last decade due to rapid advances in the field of machine learning. The majority of machine vision applications are currently found in consumer electronics, automotive applications, and quality control, yet the potential for bioprocessing applications is tremendous. For instance, detecting and controlling foam emergence is important for all upstream bioprocesses, but the lack of robust foam sensing often leads to batch failures from foam-outs or overaddition of antifoam agents. Here, we report a new low-cost, flexible, and reliable foam sensor concept for bioreactor applications. The concept applies convolutional neural networks (CNNs), a state-of-the-art machine learning system for image processing. The implemented method shows high accuracy for both binary foam detection (foam/no foam) and fine-grained classification of foam levels.


Keywords:  bioprocessing; deep learning; foam sensor; machine vision; process analytical technology


Year:  2021        PMID: 33874798      PMCID: PMC8293757          DOI: 10.1177/24726303211008861

Source DB:  PubMed          Journal:  SLAS Technol        ISSN: 2472-6303            Impact factor:   3.047


Introduction

Foam emergence is a commonly observed phenomenon in upstream bioprocessing. Foam, which is provoked mainly by the combination of the gassing needed to support the cell culture and the release of lipids and proteins from cells, adversely affects bioreactor operation and cell culture productivity. Loss of cell viability, caused by a lack of specific nutrients or by mechanical stresses from bursting gas bubbles and agitation, leads to cell rupture and further promotes foaming. The generated foam can then develop rapidly, in some cases within minutes, and block exhaust gas filters, resulting in reactor overpressure, reduced sterility integrity, and ultimately batch failure, stressing the need to detect and prevent foam emergence. Established strategies to prevent and eliminate foaming within a bioprocess rely predominantly on chemical methods, although mechanical and physical methods are used as well. Mechanical and physical methods can only destroy existing foam, whereas chemical methods can both prevent the emergence of foam and eliminate foam already present. Typical mechanical strategies to break foam include liquid sprayers, centrifugal foam breakers, and orifice foam breakers, while physical strategies include the application of ultrasonic or thermal probes to break existing foam.[4-6] Chemical strategies rely on the addition of so-called antifoam agents to the cell culture broth. These antifoam agents are surface-active substances that alter the surface properties of the medium toward a decreased foaming ability; commonly used agents include silicone oils, polypropylene glycol, and glycerol esters. Because they are relatively inexpensive and easy to handle and add to bioprocessing equipment, chemical foam prevention and elimination strategies are the most widely used.
Nevertheless, antifoam agents influence mass transport, and high concentrations may negatively affect the volumetric oxygen mass transfer coefficient as well as the dissolved oxygen concentration, both important parameters for aerobic cell culture processes. This can result in decreased cell growth and reduced product titers.[9,10] Furthermore, antifoam agents may cause fouling of filters and membranes in subsequent downstream processing applications, accelerating material fatigue and decreasing purification efficiency.[11,12] This underlines the need for a well-considered antifoam agent feeding strategy based on reliable sensor data. Established foam sensors in bioprocessing include conductivity or capacitance probes placed within the bioreactor. The disadvantage of these contact-based sensing strategies is the fouling and coating of probes, which typically results in false-positive signals. The outcome is high maintenance costs and possible overaddition of chemical antifoams, which can ultimately lead to batch failure. Contactless foam sensing can be achieved with ultrasound sensors, but these are prone to errors from temperature shifts, humidity, and false-positive signals caused by splashing from agitation. As foam is a key bioprocess parameter that can be identified visually, machine vision-based approaches are promising candidates for detecting foam emergence. Traditional machine vision workflows rely on extensive feature engineering, where complex algorithms are hand-engineered for the task at hand.[14,15] Achieving good predictive performance from such systems is difficult, and they suffer from low robustness to changes in imaging conditions. In the past decade, however, deep convolutional neural networks (CNNs), together with openly available large-scale annotated data sets, have driven exceptional progress in machine vision.
By training CNNs end to end on large data sets, CNN-based machine vision has outperformed traditional methods on a wide variety of vision tasks and now dominates the field. In this study, we present a machine vision-based strategy to detect foam within a small-scale (250 mL), single-use bioreactor setup ( ). The concept was implemented using off-the-shelf hardware components and open-source machine learning software libraries. The established system showed high accuracy in both binary foam detection and fine-grained classification and is a promising approach to overcome the drawbacks of conventional foam sensor systems, such as fouling and coating, as well as limited single-level functionality. The noninvasive system provides a proof of concept applicable to both single-use and stainless steel bioreactor formats.
Schematic diagram depicting the implemented machine vision-based foam detection in a single-use bioreactor setup. First, an image is acquired by a camera module and then classified by a CNN. The classification can be performed by either the implemented binary classification model or the developed fine-grained classification model.

Materials and Methods

Experimental Setup

All bioreactor experiments were performed using an Ambr250 high-throughput multiparallel bioreactor system, which has become the biotech/biopharma industry standard for bioprocessing research and development (The Automation Partnership [Cambridge] Ltd., Cambridge, UK, part of the Sartorius Stedim Biotech Group). The system comprises 12 or 24 disposable vessels (250 mL each) integrated into a liquid handling system for fully automated bioprocessing operation. The system is housed inside a biosafety cabinet to enable aseptic automated sample removal and collection. Two different camera modules were placed in front of the Ambr250 system and used to acquire the image material for model development: a smartphone (Google Pixel 3a XL, Google LLC, Menlo Park, CA) and an action camera (apeman A79, Apeman International Co., Ltd., Shenzhen, China). An additional light-emitting diode (LED) light source (Godox LED64, GODOX Photo Equipment Co., Ltd., Shenzhen, China) was used to introduce lighting variations into the image data set during acquisition, in addition to varying the standard Ambr250 clean bench lighting (clean bench light of the biosafety cabinet on/off) (see Table 1 for the performed experiments). Each device was fixed to the clean bench window using a dedicated suction cup holder ( ).
Table 1.

Experimental Plan, Which Resulted from a Full-Factorial Design DoE with 2 Levels for Each Factor (Volume, Dye Addition, Clean Bench Light).

Experiment No.                            1    2    3    4    5    6    7    8
Run order                                 2    6    3    7    4    1    5    8
Volume (mL)                             200  240  200  240  200  240  200  240
Dye addition                             No   No  Yes  Yes   No   No  Yes  Yes
Clean bench light of biosafety cabinet  Off  Off  Off  Off   On   On   On   On
(A) Experimental setup used for image acquisition. A smartphone, an action camera, and an LED light source were mounted on the clean bench glass in front of a multiparallel small-scale bioreactor system via suction cup holders. (B) Six-step workflow performed to obtain a CNN able to distinguish between different levels of foam in single-use, small-scale bioreactors. The workflow shown is an example for the fine-grained classification model.
To include and identify important process and environmental parameters with respect to model quality, a design of experiments (DoE) was performed using the software MODDE (Sartorius Stedim Data Analytics AB, Umeå, Sweden). The experimental plan is shown in Table 1 and was generated using a full-factorial design (FFD) with two levels for each factor. The “Volume (mL)” entry within the experimental plan corresponds to the filling volume of the cultivation vessel (200 mL/240 mL). The “Dye addition” entry indicates whether 50 µL of food dye (Orange Red, Suchuangyi Technology Co., Ltd., Shenzhen, China) was added to the media (yes/no). The “Clean bench light” entry specifies whether the clean bench light (part of the safety cabinet in which the Ambr250 system is placed) was turned on or off (on/off). To prevent experimenter bias, the order of execution was assigned at random (“Run order” row). In addition, the external light source (“LED light”) ( ) was arbitrarily turned on and off to further introduce diversity into the acquired image material.
To provoke foam levels of varying intensity, different levels of air supply (5–50 mL/min) were applied and different volumes (100 µL to 1 mL) of a 0.5 g/mL bovine serum albumin (BSA) solution (BSA acquired from Sigma-Aldrich Chemie GmbH, Taufkirchen, Germany) were added to the medium used (4Cell XtraCHO Stock & Adaptation, Sartorius Stedim Cellca GmbH, Ulm, Germany). Furthermore, stirrer speed adjustments (500–1500 rpm) were performed during the image acquisition phase. Video material was recorded first, and the video frames were subsequently extracted. A video of every experiment was recorded for 20 to 25 min to collect a diverse data set of different foam quantities. The resolution of the acquired images was 1920 × 1080 pixels for the smartphone camera and 1520 × 2688 pixels for the action camera, respectively.
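The frame extraction step above amounts to keeping every Nth video frame. As a minimal illustration (the function names are ours, not from the study's code), the sampling interval and the resulting frame indices can be computed as follows:

```python
def frame_step(fps: int, seconds_per_image: float) -> int:
    """Number of video frames between two extracted still images."""
    return max(1, round(fps * seconds_per_image))

def sample_frame_indices(total_frames: int, fps: int, seconds_per_image: float):
    """Indices of the frames that would be saved as still images."""
    step = frame_step(fps, seconds_per_image)
    return list(range(0, total_frames, step))

# A 20 min smartphone recording at 30 fps, one image every 3 s:
# every 90th frame is kept, yielding 400 images.
indices = sample_frame_indices(total_frames=20 * 60 * 30, fps=30,
                               seconds_per_image=3.0)
```

At 30 frames per second, one image every 3 s corresponds to the "every 90th frame" sampling used for the smartphone video described in the Model Training section.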

Model Training

After image material acquisition, regions of interest (ROIs) were manually annotated using the cloud-based image annotation platform Dataloop (Dataloop AI, Herzliya, Israel). The ROIs containing the bioreactor vessel from a side view were cropped out, rescaled to 250 × 250 pixels, and assigned to a class (no foam, low foam, medium foam, high foam) by a single subject matter expert trained in bioprocessing scenarios, to reduce the risk of introducing inconsistent labels (see Supplemental Material for example images and their assigned classes). The resulting data set, which formed the foundation for model generation and validation, is specified in Table 2.
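The crop-and-rescale preprocessing can be sketched in a few lines with Pillow; the ROI coordinates below are placeholders for illustration, not the coordinates used in the study:

```python
from typing import Tuple
from PIL import Image

def crop_and_rescale(image: Image.Image, roi: Tuple[int, int, int, int],
                     size: int = 250) -> Image.Image:
    """Crop an annotated ROI (left, upper, right, lower) from a video frame
    and rescale it to the fixed model input resolution of 250 x 250 pixels."""
    return image.crop(roi).resize((size, size), Image.BILINEAR)

# Example on a blank synthetic frame at the smartphone resolution:
frame = Image.new("RGB", (1920, 1080))
patch = crop_and_rescale(frame, roi=(600, 100, 1100, 900))
```

In the study, ROIs came from manual Dataloop annotations; in a deployed system the same crop could be applied to every incoming frame once the vessel position is known.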
Table 2.

Acquired Image Data Set and Manually Annotated Classes.

Data Set        Whole Data Set   Action Camera   Smartphone
No foam                    982              17          965
Low foam                  2183             142         2041
Medium foam               1542             124         1418
High foam                  428              61          367
Total                     5135             344         4791
The data shown in Table 2 comprise the annotated image material generated by the action camera, which took an image every 10 s, and the annotated image material originating from the smartphone camera video, from which every 90th frame was kept (one image every 3 s, downsampled from the original acquisition at 30 frames per second). For experiments on binary classification, which distinguishes between no foam and foam, the classes low foam, medium foam, and high foam were combined into the single class foam, whereas all classes were used for fine-grained classification. The annotation data and the cropped raw image data were exported and used to train CNN models for image classification, using the Python programming language (version 3.6) and the deep learning framework PyTorch (version 1.4) (Facebook Research, Menlo Park, CA). For both binary foam detection and fine-grained classification, a ResNet-18 neural network was used. ResNet variants are widely used for image classification, and ResNet-18 is the smallest model in this family. Both the binary and fine-grained models were trained with cross-entropy loss for 30 epochs, a batch size of 50 images (the largest batch size fitting on the graphics processing unit [GPU] used for training), a learning rate of 1.2e−5 (chosen after pilot experiments evaluating the influence of the learning rate on model convergence on the validation set), the Adam optimizer, and random horizontal flips for data augmentation. For each training run, the model with the lowest validation loss was saved and used for evaluation (see Supplemental Material for corresponding loss plots). The validation data were created by taking 10% of the training data at random and using that split for all models performing the same task (binary or fine-grained classification).
Because the data set contains more foam than no-foam images, a class weight of 0.4, corresponding to the ratio of foam to no-foam images, was applied to the foam images when training the binary classifier; no class weighting was used for the fine-grained model.

Model Validation

To validate the models, a subset of the images was excluded from model training and used only to evaluate the models’ classification performance as a test set. For the binary classifier, all images from one smartphone video capture were excluded to constitute a test set of 672 images, of which 477 contained foam and 195 did not. Due to the low number of high-foam images in any given video capture, another test set with 512 images was designed and used for the fine-grained model, containing 98 no-foam, 218 low-foam, 154 medium-foam, and 386 high-foam images. Since the classes are imbalanced, evaluation accuracy as commonly defined, that is, the ratio of correct classifications, is biased toward the class with the most labels. Instead, classification performance was evaluated by calculating the F1 score, defined as

F1 = 2 × (Precision × Recall) / (Precision + Recall),

where Precision = TP / (TP + FP) and Recall = TP / (TP + FN). Here, TP, TN, FP, and FN denote the true-positive, true-negative, false-positive, and false-negative predictions, respectively. Precision indicates, out of all images classified as containing foam, what ratio was correctly predicted. Recall indicates the ratio of all images containing foam that were correctly classified as containing foam. The F1 score is the harmonic mean of precision and recall and is widely used to provide a single evaluation metric for classification models when classes are imbalanced. To qualitatively validate the models’ predictions, the image regions the models attend to when making predictions were visualized using the GradCAM++ method.[21,22] GradCAM++ uses a weighted combination of the positive partial derivatives of the last CNN layer to produce a heat map over the image highlighting the regions the model pays most attention to when making its prediction. Although GradCAM++ does not provide a full explanation of the prediction, the heat maps provide intuition as to whether the model predictions are based on sensible information.
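The metric definitions above translate directly into code; the counts in the example are hypothetical, not the study's results:

```python
from typing import Tuple

def precision_recall_f1(tp: int, fp: int, fn: int) -> Tuple[float, float, float]:
    """Precision, recall, and F1 score from prediction counts.
    (True negatives do not enter any of the three formulas.)"""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for a binary foam/no-foam test set:
p, r, f1 = precision_recall_f1(tp=470, fp=10, fn=15)
```

A perfect classifier (no false positives or false negatives) yields precision, recall, and F1 all equal to 1.0.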

Results and Discussion

Two foam classification models were generated following a six-step workflow ( ). The first is a binary model able to distinguish between foam and no foam. The second is a fine-grained model able to classify bioreactor images into the classes no foam, low foam, medium foam, and high foam. The binary foam classification model showed strong performance, with an F1 score higher than 97% on an independent test set ( ), indicating that the machine vision system reliably detects foam buildup. The fine-grained classifier showed promising results, with an F1 score of around 76% on the considerably more difficult task of fine-grained classification. Inspecting the confusion matrix for the fine-grained classifier shows that the main source of error is high-foam images mistaken for medium-foam ones and, to a slightly lower degree, medium-foam images mistaken for low-foam ones ( ). The models’ predictions were visualized using the GradCAM++ method ( ). This method provides a heat map representation indicating the regions most important for the model’s decision, allowing qualitative investigation of the predictions. For the binary classifier, the model correctly focuses on the liquid–gas interface when making a correct prediction, but focuses on other parts of the vessel when making incorrect predictions ( ). Similar behavior can be observed for the fine-grained classification model ( ): here, too, the model focuses on the foam area for correct predictions and on other parts of the vessel environment for incorrect predictions.
Table 3.

CNN Classification Performance on Foam Detection.

Model          Precision (%)   Recall (%)   F1 Score (%)
Binary                 97.95        96.95          97.45
Fine-grained           76.35        78.68          75.58
Confusion matrices of the developed classification models. (A) Confusion matrix for the binary classifier model. (B) Confusion matrix for the fine-grained classifier model. Each row indicates the true image labels; columns indicate the respective model’s prediction. Values indicate the proportion of model predictions for images of the labels within each row.
Exemplary GradCAM++ visualizations of the developed classification models. (A) Visualizations of the binary classification model. (B) Visualizations of the fine-grained classification model. Image boxes from left to right: raw input image; respective GradCAM++ heat map, where blue means low attention and red means high; and the input image with the corresponding heat map overlay.
These visualizations allow interpretation of the CNN classifiers’ behavior and indicate that the developed models are capable of recognizing the foam region within the vessel area and using it for the classification tasks. The incorrect classifications observed result from the CNNs not focusing on the foam area of the vessel or from edge cases introduced by the manual annotation of images. However, these failures may be avoided by averaging the predictions over time-consecutive sequences of images, instead of relying on a single image, to obtain a more robust signal. For example, at the video acquisition rate of 30 frames per second, the worst-case error of the binary foam detector corresponds to one wrong prediction every 48 frames on average, assuming failures are uniformly distributed over time. In this case, averaging the predictions over each second of video (30 frames) may drastically reduce the impact of misclassifications. This approach is applicable to actual cultivation setups, where foam emergence usually takes several seconds to minutes.
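One simple way to realize the temporal averaging suggested above is a majority vote over a sliding window of per-frame predictions. The sketch below is our illustration of the idea, not the study's implementation:

```python
from collections import Counter, deque

def smoothed_prediction(frame_preds, window: int = 30):
    """Majority vote over the last `window` single-frame predictions,
    e.g. one second of video at 30 frames per second."""
    buf = deque(maxlen=window)
    smoothed = []
    for pred in frame_preds:
        buf.append(pred)
        # Most common label in the current window wins.
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed

# A lone misclassification inside a foam sequence is voted away:
preds = ["foam"] * 20 + ["no_foam"] + ["foam"] * 20
smoothed = smoothed_prediction(preds)
```

Since foam emergence unfolds over seconds to minutes, a one-second voting window suppresses isolated single-frame errors without noticeably delaying the sensor signal.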

Concluding Remarks and Outlook

Conventional foam sensor probes have several disadvantages: they are prone to fouling and coating and are sensitive to reactor conditions such as humidity, splashing from agitation, and temperature. This can result in false-positive sensing and overdosing of antifoam, leading to batch failure. Furthermore, their high cost and lack of robustness do not justify their application in single-use bioprocessing setups. The proposed combination of commodity camera modules and the developed CNN approach for real-time foam identification and quantification overcomes these drawbacks. The models can be deployed in real-life setups either by hardcoding the ROI into the image acquisition modules or, in the case of flexible module/equipment positions, via a preceding object detection step that delivers the appropriate ROI, such as a deep cropping approach. The developed algorithm demonstrated high performance in foam identification (binary classification, foam/no foam), with an F1 score higher than 97% on an independent test set. Furthermore, fine-grained classification of foam levels (no foam, low foam, medium foam, high foam) showed good results; the model achieved an F1 score of 76% over all classes, indicating great promise for image-based foam quantification. The main source of error here was distinguishing between medium-foam and high-foam images, a task difficult even for a subject matter expert if no metric scale is provided for orientation. Furthermore, the foam height is usually not distributed evenly across the foam surface, which adds further complexity to this machine vision task. Nevertheless, the established system has been shown to provide an inexpensive, accurate, and flexible alternative to traditional foam-sensing systems.
Going forward, implementing an antifoam agent feed strategy based on the resulting fine-grained sensor signal, so that antifoam is added on demand, would minimize negative effects on bioprocessing equipment as well as on cell behavior. The resulting reduction of batch failures caused by antifoam overdosing would improve the efficiency of process development as well as of manufacturing processes. Other useful additions to the concept system include outlier detection capabilities to reduce the impact of process artifacts, for example, accidental blocking of the camera view of the bioreactor by an object or an operator's hand during routine operation. Further accuracy could potentially be added to the model by introducing exact foam metrics (via surface or volume measurements) as annotation data. Additionally, to further reduce the risk of biased or inconsistent labels, a diverse data set labeled by multiple subject matter experts whose assessments are then aggregated, for instance by majority voting, is preferable, especially for difficult-to-judge edge cases. Furthermore, both models presented in this work can likely be optimized for higher performance by tuning the model architecture, learning rate, loss function, and so on, which we leave for future work. To conclude, the presented concept shows great promise for the application of machine vision to implement cheap, flexible, and robust foam monitoring and control for upstream bioprocessing. It is anticipated that this machine vision methodology will be further expanded to other areas of bioprocessing.
Supplemental material for this article (sj-pdf-1-jla-10.1177_24726303211008861) is available online.
References:  7 in total

1.  Foam control in fermentation bioprocess: from simple aeration tests to bioreactor.

Authors:  A Etoc; F Delvigne; J P Lecomte; P Thonart
Journal:  Appl Biochem Biotechnol       Date:  2006       Impact factor: 2.926

2. [Review]  Foam and its mitigation in fermentation systems.

Authors:  Beth Junker
Journal:  Biotechnol Prog       Date:  2007-06-13

3.  Fouling effects of yeast culture with antifoam agents on microfilters.

Authors:  M K Liew; A G Fane; P L Rogers
Journal:  Biotechnol Bioeng       Date:  1997-01-05       Impact factor: 4.530

4.  Transfer learning based deep CNN for segmentation and detection of mitoses in breast cancer histopathological images.

Authors:  Noorul Wahab; Asifullah Khan; Yeon Soo Lee
Journal:  Microscopy (Oxf)       Date:  2019-06-01       Impact factor: 1.571

5. [Review]  Scale-Down Model Development in ambr systems: An Industrial Perspective.

Authors:  Viktor Sandner; Leon P Pybus; Graham McCreath; Jarka Glassey
Journal:  Biotechnol J       Date:  2018-11-26       Impact factor: 4.677

6.  The effect of dissolved oxygen on the production and the glycosylation profile of recombinant human erythropoietin produced from CHO cells.

Authors:  Veronica Restelli; Ming-Dong Wang; Norman Huzel; Martin Ethier; Helene Perreault; Michael Butler
Journal:  Biotechnol Bioeng       Date:  2006-06-20       Impact factor: 4.530

7. [Review]  Beyond de-foaming: the effects of antifoams on bioprocess productivity.

Authors:  Sarah J Routledge
Journal:  Comput Struct Biotechnol J       Date:  2012-12-01       Impact factor: 7.271

