Lensless optical fluid microscopy is of great significance to the miniaturization, portability, and cost reduction of cell-detection instruments. However, the resolution of directly collected cell images is low, because the physical pixel size of the image sensor is on the same order of magnitude as the cell size. To solve this problem, this paper proposes a super-resolution scanning algorithm using a dual-line array sensor and a microfluidic chip. From the dual-line array sensor images, multiple groups of velocities and accelerations of cells flowing across the line array sensor are calculated. A reconstruction model of the super-resolution image is then constructed under variable acceleration. By setting an angle between the line array image sensor and the direction of cell flow, super-resolution scanning and reconstruction are achieved in both the horizontal and vertical directions. In addition, a row-by-row extraction algorithm for the cell foreground image is studied. In this paper, the dual-line array sensor is implemented by adjusting the acquisition window of an image sensor with a pixel size of 2.2 μm. At a tilt angle of 21 degrees, the equivalent pixel size is 0.79 μm, a 2.8-fold improvement, and the average size error after de-diffraction is 3.249%. As the angle decreases, the image resolution increases, but the amount of information collected decreases. This super-resolution scanning algorithm can be integrated on-chip and used with a microfluidic chip to realize an on-chip instrument.
Collecting and analyzing cell images of biological tissues is an important basis for disease diagnosis, health monitoring, and new drug development in modern medicine [1,2]. Flow cytometry can perform cell detection quickly and accurately, but its promotion and application are hindered by its cost and lack of portability. With the popularization of concepts such as smart medicine and telemedicine, lensless optical fluid microscope technology for the miniaturization, automation, and cost reduction of cell image acquisition instruments was proposed in 2006 [3]. Since the pixel size of the image sensor is on the same order of magnitude as the cell size, the resolution of the image collected by the lensless optical fluid microscope is low. To solve this problem, a method of passing the target over a special aperture array was proposed, which reduces the effective pixel size and achieves super-resolution imaging [4,5].

Scholars worldwide are trying to overcome the low resolution of lensless imaging results through super-resolution reconstruction. One proposed method achieves real super-resolution reconstruction by generating a micro-lens effect above or on the surface of the object [6,7]. To obtain more cell detail, multi-angle micro-displacements of the optical path can be generated and the cells scanned with micro-displacement [8,9]; a high-resolution image is then synthesized from a group of low-resolution sub-pixel-shifted images. However, this requires a precise optical path system, and the implementation cost is high. Similarly, fluid flow can first generate multiple low-resolution frames of the target, from which a single super-resolution image is reconstructed by a multi-frame super-resolution algorithm [10-12].
Differently, the convolutional neural network structure has been improved to establish a feature mapping between low-resolution and high-resolution images [13-15]. Multi-wavelength phase recovery and multi-angle light-source diffraction tomography have been used to realize high-resolution imaging with a lensless system and to restore the depth image [16,17]. An up-sampling phase retrieval scheme has also been proposed to bypass the resolution limit set by the imager's pixel size [18]. These methods introduce additional optical devices and improve resolution through corresponding phase recovery algorithms. Our research team previously proposed a super-resolution scan imaging method using a single-line array detector, which places an oblique linear array image sensor under the microfluidic channel to scan the flowing cells; after reconstruction, a super-resolution scan of the cells is obtained. Compared with an area array image sensor, this method greatly reduces the power consumed by the pixel units. However, it requires very high control accuracy of the cell flow rate, and the reconstructed image is easily distorted.

In this article, our proposed solution is to build a super-resolution scanning system using a dual-line image sensor, which can accurately calculate cell flow velocity and acceleration. Firstly, two single-line array detectors in a parallel, micro-pitch structure are adopted to construct the dual-line array structure. The time difference between a cell flowing through the two independent linear array sensors is used to accurately calculate the instantaneous flow velocity and acceleration of the cell. Secondly, the single-line scan imaging process is re-modeled, and the transformation between line-scan image coordinates and object image coordinates is derived to reconstruct the line-scan image and restore a super-resolution image of the object.
In addition, issues such as foreground separation of the line-scan image and speed calculation have been studied in depth. Based on mean background modeling, a multi-threshold coarse foreground segmentation method is proposed to update the background model, and the foreground of the line-scan image is extracted using this background model. Feature detection and feature matching algorithms are used to determine the time difference and displacement difference of cells as they pass the two linear array sensors, from which the instantaneous flow velocity and acceleration of the cells are accurately calculated.
Materials and methods
System structure and basic model
The system structure (Fig 1A) of the dual-line array image sensor consists of a 405 nm laser plane-wave source, a microfluidic chip, and a CMOS area array image sensor MT9P031. In such a system, the smaller the pixel size of the image sensor, the higher the resolution and the clearer the reconstructed image. However, current commercial linear array image sensors have pixels that are too large, so we chose an area array image sensor with a smaller pixel size (2.2 μm) and a high sampling rate. Its region-of-interest (ROI) function adjusts the size of the image acquisition window so that the sensor can act as a dual-line array sensor; only the pixel readout of the window area is activated, so the line rate can be greatly increased. The schematic diagram of the dual-line array sensor structure is shown in Fig 1B; the two linear array sensors are placed in parallel with a pitch of d. When the linear array sensor is at an acute angle to the direction of cell flow, the lateral resolution of scanning imaging is increased; this is the principle of super-resolution scanning imaging. In this section, the basic mathematical modeling of this structure is carried out, including the establishment of the coordinate systems and the speed, resolution, and distance models.
Fig 1
The structure and imaging principle.
(A) The system structure of the dual-line array image sensor. (B) The process of acquisition by the dual-line array sensor.
As shown in Fig 1B, the acquisition resolution in the inclined placement mode is smaller than that in the vertical placement mode. Taking the first intersection of the object flow direction and the linear array sensor as the origin, the flow direction of the channel as the axis y (with the flow direction as positive), and the direction perpendicular to the channel as the axis x (with the direction pointing toward one side of the channel as positive), the rectangular coordinate system of the channel object image is established and named C1. By a similar process, the rectangular coordinate system of the line-scan image, called C2, is established with axes x′ and y′. Special attention should be paid to the fact that the intersection of the axis x′ and the axis y′ is not the zero point of the axis y′, but the y′ coordinate at the moment the cell passes the linear array sensor. Suppose that there is an object flowing at a speed V in the channel, as shown in Fig 2A. As calculated using Eq (1), the velocity components Vx, Vy, Vx′ and Vy′ can be obtained by decomposing the velocity in the coordinate systems C1 and C2, respectively. Similarly, the transformation of acceleration between the two coordinate systems is given in Eq (2).
Fig 2
Models of the tilted linear array sensor.
(A) The velocity decomposition model. (B) The super-resolution model.
Fig 2B is an enlarged view of the intersection of the axes. When the pixel spacing of the linear array sensor is d, the imaging resolution in the axis x′ direction is dx′ and that in the axis x direction is dx; the transformation between d and dx can then be deduced as Eq (3). To ensure that the scale of the reconstructed image matches that of the real object, the resolution in the axis x direction should be equal to the resolution in the axis y direction. When the imaging sample reaches the origin, the linear array sensor starts acquiring images; suppose there is a point P1 on the object at this time. As shown in Fig 3, its coordinate is (x,y), and its distances from the axis x and the axis y are Sy and Sx, respectively. If the object flows without a lateral velocity, the pixel corresponding to point P1 on the linear array sensor is L1; otherwise it is L2, offset from L1 by the lateral flow distance. The coordinate of the corresponding point P2 on the line-scan image is (x′,y′), where the ordinate y′ represents the number of frames acquired between the start of imaging and the acquisition of P1. The true distance between point P2 and the axis y is denoted S. From the relationship between imaging resolution, pixel size and pixel coordinates, Eq (4) can be obtained.
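The resolution gain of the tilted geometry reduces to projecting the physical pixel pitch through the tilt angle. A minimal numeric sketch of this relation (the function name and unit convention are ours):

```python
import math

def equivalent_pixel_size(pixel_um, tilt_deg):
    """Effective lateral pixel size of a linear array tilted at tilt_deg
    to the flow direction: the physical pitch projected by sin(theta)."""
    return pixel_um * math.sin(math.radians(tilt_deg))

# 2.2 um pixels tilted 21 degrees give about 0.79 um equivalent pitch,
# a ~2.8-fold improvement, matching the values reported in this paper.
print(round(equivalent_pixel_size(2.2, 21.0), 2))  # → 0.79
```

As the tilt angle shrinks, the projected pitch shrinks with it, which is why a smaller angle yields higher resolution but fewer independent samples per line.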
Fig 3
The distance calculation model in the case of the tilted linear array sensor.
Methods of cell foreground extraction
When linear array sensor scanning is used to image cells in a microfluid, the influence of background impurities in the microfluid can be avoided, and only the dynamic information of the flowing cells is collected. However, because of image sensor pixel noise and the non-uniformity of the light source, uneven fringe noise forms on the scanned image. After the system starts, this noise remains stable: in the line-scan image, the pixels and light intensity of each line are the same. As a result, over a continuous, short period of time, the background can be considered almost constant. Based on this assumption, the background pixel values can be obtained by simple mean modeling built while no cells flow. Further, in this paper, the background model is updated in real time by identifying pixels through which cells flow with a multi-threshold method, which reduces their interference with the background model.

Firstly, let i be the current acquisition count; when the sensor first collects, i is 1. N rows of background images, from row i−N to row i−1, are buffered to establish the initial background mean model. P(Fi) is the line of pixel values of the currently collected row Fi, and the background model is the mean of rows Fi−N to Fi−1; P(Fi,j) is the pixel value of column j in row Fi. The initial background mean model is then given by Eq (5). The initial foreground difference information EP of line Fi is obtained after line Fi is cached, as in Eq (6). Based on this information, the background mask MP of line Fi is given by Eq (7), where T1 and T2 are the lower and upper thresholds of the background, respectively. The pixels in which cells are present are filtered out by this mask, and the new value of the background mean model is recalculated by Eq (8). Finally, the new foreground difference information of the cells is given by Eq (9).
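The row-by-row scheme above can be sketched as follows. This is a simplification, not the paper's exact Eqs (5)-(9): the thresholds, cache length, and the incremental mean update are our assumptions.

```python
import numpy as np

def extract_foreground(lines, n_cache=20, t_lo=-15.0, t_hi=15.0):
    """Row-by-row foreground extraction (sketch). For each newly acquired
    row, subtract the running background mean, mark pixels whose
    difference lies inside [t_lo, t_hi] as background, update the
    background model only at those pixels, and keep the rest as the
    foreground difference."""
    lines = lines.astype(float)
    bg = lines[:n_cache].mean(axis=0)           # initial mean background model
    fg = np.zeros_like(lines)
    for i in range(n_cache, lines.shape[0]):
        diff = lines[i] - bg                    # initial foreground difference
        mask = (diff >= t_lo) & (diff <= t_hi)  # True where background-like
        # incremental mean update, restricted to background pixels
        bg[mask] += (lines[i][mask] - bg[mask]) / n_cache
        fg[i] = np.where(mask, 0.0, diff)       # final foreground difference
    return fg
```

Restricting the update to masked pixels is what keeps a passing cell from contaminating the background model, which is the point of the multi-threshold step.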
Methods of instantaneous velocity calculation
The accuracy of the cell velocity calculation determines the distortion of the reconstructed scanned images. In lensless imaging, when the object is far from the imaging surface, light diffracts through the object to form diffraction rings, and the gray level of each ring is uniform. Therefore, the maximally stable extremal regions (MSER) algorithm is used to detect the alternating light and dark diffraction rings, and the feature points on the boundaries of the maximally stable extremal regions are then screened out. Finally, these feature points are matched against the image acquired by the other linear array sensor using the sum of squared differences (SSD) algorithm, and the resulting set of feature point pairs on the two linear array sensors is used to calculate the cell flow velocity.

MSER, similar to the watershed algorithm, can detect connected regions such as the rings in cell diffraction images. To keep the detection effect clear while reducing computation time, we compress the dynamic range of the image before MSER detection. Owing to the shape of the diffraction rings, corner features are mostly distributed at the upper and lower vertices of each MSER region, so the coordinates of a corner point can be determined quickly: its ordinate is an extremum of the MSER region's ordinates, and its abscissa is the mean of the abscissas at that ordinate extremum. Each MSER region thus yields two corner features, from which suitable corners are selected for matching. After the initial extraction of corner features, the coordinates of each corner must be analyzed: corner points must not be too close to one another, and when two corners are close, the one with the more obvious corner feature should be kept. The minimum allowed distance between corner ordinates can be determined from the maximum difference of the corners' ordinates.
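The corner rule just described (ordinate extremum of the region, mean abscissa at that extremum) translates directly into code. In this sketch, `region_xy` is a hypothetical (N, 2) array of (x, y) pixel coordinates of one detected MSER region:

```python
import numpy as np

def mser_corners(region_xy):
    """The two corner candidates of one MSER region: for each ordinate
    extremum (min and max y), take the mean abscissa of the region
    pixels lying at that extreme ordinate."""
    xs, ys = region_xy[:, 0], region_xy[:, 1]
    corners = []
    for y_ext in (ys.min(), ys.max()):
        x_mean = xs[ys == y_ext].mean()       # mean x at the extreme row
        corners.append((float(x_mean), float(y_ext)))
    return corners
```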
Corner points with more obvious features can be screened using the corner-feature calculation in Eq (10). One matrix is a window centered on the corner point; the other is a corner-feature calculation matrix, which is related to the actual line array direction and is obtained by experiment. The two matrices are the same size, and the larger the resulting value V, the more obvious the feature. Assume that K feature points are extracted from the scanned image of the first linear array sensor, and that an image of size (H+1)×(W+1) is extracted around feature point k. This feature point, the center of the image, is denoted M(0,0,k). The SSD matching over the image of the second linear array sensor is then given by Eq (11),
where M(i,j) is the pixel value at coordinates (i,j) in the scanned image of the second linear array sensor, and V(i,j,k) is the SSD matching value between that pixel and feature point k of the first linear array sensor. The larger the SSD matching value, the higher the matching degree between the two feature points; during the search, the point with the largest value is selected as the final match. Because the distance between the first and second linear array sensors is short, the displacement difference of the cell images acquired by the two sensors is small. Therefore, an SSD search area on the second linear array sensor is set up around the coordinates of each feature point of the first linear array sensor, reducing the search cost of the SSD matching algorithm. Assume that the line rate of the line array sensor is f, the pixel size is s, and the line array pitch is d. Let the coordinates of two adjacent feature points on the first linear array sensor be (x1,y1) and (x2,y2), and the coordinates of their matching points on the second linear array sensor be denoted (x̄1,ȳ1) and (x̄2,ȳ2). The time difference for a point on the cell between the first and second linear array sensors is then (ȳ1−y1)/f, and its lateral displacement is (x̄1−x1)s. The lateral and longitudinal velocities of this point in the coordinate system C2 are therefore given by Eq (12). Similarly, the velocities of the adjacent feature point in C2 can be calculated. The lateral acceleration ax′ and longitudinal acceleration ay′ over this period can then be calculated from the velocities V1 and V2 of the two adjacent feature points: the time difference between the two feature points after passing the first linear array sensor is (y2−y1)/f, and the acceleration of the cell during this period is given by Eq (13). In this way, the velocity and acceleration information in the coordinate system C1 can also be obtained through Eqs (1) and (2).
By a similar process, the K velocities and the K−1 accelerations of all feature points are calculated.
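The matching-and-velocity step can be sketched as follows, assuming (row, col) image coordinates. Note one deliberate difference: the paper's Eq (11) is formulated so that larger values indicate better matches, while the plain SSD below selects the minimum; the window around `center_guess` plays the role of the restricted search area and is assumed to stay inside the image.

```python
import numpy as np

def ssd_match(img2, patch, center_guess, search=10):
    """Find the best match for an odd-sized patch (cut around a feature
    point on the first sensor's image) inside img2, the second sensor's
    image, by minimum sum of squared differences over a small window
    around center_guess."""
    h, w = patch.shape[0] // 2, patch.shape[1] // 2
    cy, cx = center_guess
    best_ssd, best_pos = None, center_guess
    for y in range(cy - search, cy + search + 1):
        for x in range(cx - search, cx + search + 1):
            cand = img2[y - h:y + h + 1, x - w:x + w + 1]
            ssd = float(((cand - patch) ** 2).sum())
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos

def velocity_from_match(p1, p2, line_rate_hz, pixel_um, pitch_um):
    """Instantaneous velocity of one cell point from its (row, col)
    coordinates on the first sensor (p1) and its match on the second
    sensor (p2): the row difference over the line rate is the transit
    time across the array pitch."""
    dt = (p2[0] - p1[0]) / line_rate_hz   # transit time between the arrays
    vy = pitch_um / dt                    # longitudinal velocity, um/s
    vx = (p2[1] - p1[1]) * pixel_um / dt  # lateral velocity, um/s
    return vx, vy
```

Accelerations then follow by differencing the velocities of adjacent feature points over their row-time separation, as in Eq (13).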
The reconstruction with variable acceleration
Suppose that an object flows in the microchannel with Vx and Vy as the initial velocities along the axis x and axis y, and ax and ay as the accelerations along the axis x and axis y. From the physical relationship between distance, speed and acceleration, Eq (14) can be obtained
where the intermediate terms are defined accordingly. Then the coordinate transformation formula mapping the object coordinate system into the line-scan coordinate system is
where the coefficients follow from Eq (14), so the solution y′ of the quadratic equation in one variable can be written in closed form. In reality, it is difficult for small objects to maintain constant-acceleration flow; the flow is mostly one of variable acceleration. Assume the object is moving at speed V0 as shown in Fig 4A, and the linear array sensor starts acquiring at time t0, so that the instantaneous flow velocities of the object at t1, t2 and t3 are V1, V2 and V3, respectively. That is, the object has three accelerations a0, a1 and a2 over the three time periods while flowing through the linear array sensor. In this case, this paper adopts an iterative mapping method that maps the acceleration-change times from the line-scan coordinate system onto the object coordinate system, so that the different acceleration regions of the object correspond one-to-one to regions of the line scan; the object coordinate system is then mapped onto the line-scan coordinate system to reconstruct the image. Fig 4B is a schematic diagram of an object passing the linear array sensor under this speed change, showing the position of the linear array sensor on the object at each time point. At time t1, the object and the linear array sensor intersect at the two points a and b, and the region of the object between the origin and the points a and b flowed through the linear array sensor with V0 as the initial speed and a0 as the acceleration. Similarly, the region between the four points a, b, c, and d flowed through the line array sensor with V1 as the initial speed and a1 as the acceleration. If acquisition starts from line y0, then from t0 to t1 the distance the object moves along the axis y′ is
Fig 4
The situation with the variable speed.
(A) Object flows at the variable speed. (B) The position of the line array sensor on the object.
Then the distance from the point b to the axis x is
Considering the linear array sensor as a straight line, the equation of the straight line at time t1 can be obtained from the slope of the linear array sensor in the coordinate system C1. Similarly, the equation at time t2 is
where the terms are defined by the accompanying expressions. Then the three acceleration regions can be mapped into the object coordinate system. To generalize to K accelerations, Eq (22) can be written as
where the segment quantities are defined by the accompanying expressions, and Eq (23) can be written in the corresponding generalized form. The coordinates of the object coordinate system are then mapped to the line-scan imaging coordinate system, and the speed information of the corresponding region is associated with the corresponding pixel value. By solving the quadratic equation of each acceleration region, the line-scan coordinates mapped from the coordinates (x,y) of the object coordinate system can be obtained. It should be noted that these are coordinates relative to each acceleration segment; the coordinates in the coordinate system C2 are obtained by adding the offsets of the preceding segments. The pixel value at the coordinates (x,y) can then be calculated from the pixels around the mapped coordinate.
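The core of each per-segment mapping is solving the constant-acceleration quadratic for the transit time and converting it to a row index. A one-segment sketch (symbol names and units are ours; the paper chains K such segments and accumulates the row offsets):

```python
import math

def map_y_to_row(y_um, v0_um_s, a_um_s2, line_rate_hz):
    """Map an object-space ordinate y (within one constant-acceleration
    segment) to a line-scan row index: solve (1/2) a t^2 + v0 t - y = 0
    for the transit time t, then convert time to rows via the line rate."""
    if abs(a_um_s2) < 1e-12:                        # uniform-velocity limit
        t = y_um / v0_um_s
    else:
        disc = v0_um_s ** 2 + 2.0 * a_um_s2 * y_um
        t = (-v0_um_s + math.sqrt(disc)) / a_um_s2  # physically valid root
    return t * line_rate_hz
```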
Results and discussion
Analysis of cell foreground extraction
We used 20 μm microspheres as test objects, and the angle of the dual-line array sensor was 21 degrees. With the sensor's acquisition window set to 10 lines, the frame rate is 1230 fps. The flow rate of the solution is related to the sampling rate of the image sensor: the higher the sampling rate, the more samples can be processed per unit time. Considering these factors, this paper chooses a suitable solution flow rate of 5 μL/min to 10 μL/min. Fig 5 shows one image extracted every 10 frames; the microsphere passed through in 0.12 s. In our system, only two lines of pixels are used to reconstruct super-resolution images. This section analyzes the foreground extraction of the scanned image.
Fig 5
Test results of cell foreground extraction for 20μm microspheres.
Fig 6A shows 500 scanned lines from the first linear array sensor concatenated into one image. Due to the unevenness of the light source, there are vertical stripes of uneven brightness and width on the scanned image. With the number of cached lines N set to 20, the plain background mean model and the algorithm in this paper were both tested. Fig 6B shows the microsphere image extracted by the background mean model alone: the microsphere foreground can be separated, but the uneven background noise is not eliminated well. In contrast, in our method the mask pixel is first assigned 1 when the foreground difference lies between −15 and 15 and 0 otherwise, as shown in Fig 6C; this roughly divides the image into the white background part and the black foreground part. After the influence of the foreground on the background mean model is thus avoided, the extracted microsphere image is as shown in Fig 6D. Compared with the image extracted by the plain background mean model, the algorithm proposed in this paper leaves less background noise; background interference is largely reduced and cell images are extracted more accurately.
Fig 6
Test results of cell foreground extraction for 20μm microspheres.
(A) The 500 lines of raw images from the first linear sensor are connected to one image. (B) The extracted microsphere image directly by background mean model. (C) The mask image with the threshold of plus-minus 15. (D) The foreground image extracted by the improved method.
Analysis of speed calculation
The detection effect of MSER is mainly affected by the parameters A and Δ; the detected connected regions can, of course, also be further filtered by size limits. In this paper, A is set to the relatively large value of 20 so that more diffraction rings are detected. First, with A = 20 and Δ = 1, the dynamic range of the image is compressed in steps of 10. As shown in Fig 7A, each color is one MSER region. Only the first and second diffraction rings are detected when the dynamic range is below 190; above 190, the third-order diffraction ring appears. Beyond 200, however, there are too many subdivided detection regions, which increases the processing burden. Therefore, the dynamic range of the diffraction image collected by scanning imaging is compressed to 190, where good MSER detection is obtained. Second, Δ is tested in steps of 0.3 in Fig 7B at a dynamic range of 190. As Δ increases, the detection of the diffraction rings worsens; as Δ decreases, more and more subdivided regions are detected. When Δ is between 2 and 2.3, the detection effect is in an intermediate state.
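One plausible reading of "compressing the dynamic range to 190" is a linear rescale of the gray levels so that they span 190 values; a sketch under that assumption:

```python
import numpy as np

def compress_dynamic_range(img, target_range=190):
    """Linearly rescale an 8-bit image so its gray levels span
    target_range (190 worked well for MSER detection here),
    keeping the minimum at 0."""
    img = img.astype(float)
    span = img.max() - img.min()
    if span == 0:
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - img.min()) * (target_range / span)).astype(np.uint8)
```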
Fig 7
The analysis of MSER.
(A) MSER with the different values of image dynamic range. (B) MSER with the different Δ.
Finally, with the dynamic range at 190 and Δ = 2, corner features are extracted directly from the MSER features in the leftmost panel of Fig 8. Based on the scanned images from our experimental structure, the window radius of the corner window matrix in Eq (25) was set to 4 pixels by experimental analysis, and the results are shown on the far right of Fig 8. As shown, our method quickly extracts and screens corner features that meet the requirements.
Fig 8
The feature point after extracted and screened from MSER when the value of the image dynamic range is 190, A is 20 and Δ is 2.
To reduce the search cost of the SSD matching algorithm, this paper sets up the SSD search area on the second linear array sensor based on the feature point coordinates of the first linear array sensor. Using this algorithm with H = W = 20, 11 feature points are detected and matched on the scanned image (Fig 9A). The first and third rows are the image patches corresponding to the feature points on the first linear array, and the second and fourth rows are the patches corresponding to the feature points on the second linear array. As Fig 9B shows, the feature point matching is accurate.
Fig 9
The analysis of the matching feature points.
(A) The feature maps of this point in the image of the dual-line array sensor by the SSD algorithm. (B) The corresponding feature points in the image of the second linear array sensor.
The velocities of all 11 feature points and the 10 accelerations, shown in Table 1, can then be calculated by applying Eqs (1), (2), (12) and (13). As the microspheres flow in the channel, the maximum change of the transverse velocity is about 216 μm/s and that of the longitudinal velocity along the channel direction is about 514 μm/s. It is obvious that the horizontal and vertical velocities are not stable during cell flow, and the instantaneous flow velocity can be accurately calculated with the method presented in this paper.
Table 1
Velocity and acceleration information of each characteristic point of the microsphere.
Feature point | Vx′ (μm/s) | Vy′ (μm/s) | ax′ (μm/s²) | ay′ (μm/s²) | Vx (μm/s) | Vy (μm/s)
1  | 1040.771 | 416.308 | 15.679     | -831.264   | -7.852   | 1120.917
2  | 996.431  | 386.571 | -128.123   | 6792.612   | -7.292   | 1040.852
3  | 1353.003 | 541.200 | 4695.305   | -5043.000  | -10.208  | 1457.192
4  | 1353.002 | 451.000 | -14346.766 | 15409.167  | 73.773   | 1424.280
5  | 1353.003 | 676.500 | 13324.547  | -11346.750 | -136.180 | 1506.561
6  | 1476.003 | 492.000 | -475.913   | -2909.423  | 80.480   | 1553.760
7  | 1248.925 | 416.308 | -3624.000  | 1641.213   | 68.098   | 1314.720
8  | 1127.502 | 451.000 | 0.000      | 0.000      | -8.507   | 1214.327
9  | 1127.502 | 451.000 | -38.049    | 2017.200   | -8.507   | 1214.327
10 | 1230.003 | 492.000 | 5728.734   | 2881.714   | -9.280   | 1324.720
Reconstruction results
Having obtained the velocity information of the microsphere, the scanned image of the 20 μm microsphere was reconstructed. To demonstrate the superiority of the variable-acceleration algorithm, we reconstructed the microsphere with both the uniform-velocity algorithm and the variable-acceleration algorithm; the results are shown in Fig 10A and 10B, respectively. The latter is clearly much better than the former, and the multi-order diffraction rings of the 20 μm microsphere can be observed in the reconstructed image with little distortion. This shows that the variable-acceleration algorithm is necessary when the actual flow direction and speed of the cell vary. The resolution is related to the pixel size and the tilt angle of the linear array sensor: according to Eq (3), the equivalent pixel size is 0.79 μm, an improvement of 2.78 times over the 2.2 μm pixel of the area array sensor. When the microsphere is collected by an area array sensor with the same 2.2 μm pixel size, the resolution is very low, as shown in Fig 10C. Even when that image is enlarged 2.78 times in Fig 10D, the details are much fuzzier than in Fig 10B.
Fig 10
The comparison of the reconstructed image of the dual-line array sensor and the image of the area array sensor.
(A) The image reconstructed by the algorithm derived from the uniform velocity hypothesis. (B) The image reconstructed by the algorithm derived from the variable acceleration hypothesis. (C) The area array sensor image. (D) 2.78x magnification of the image (C) using a cubic spline interpolation algorithm.
The reconstructed super-resolution image is a diffraction image of the microsphere, and the image of the microsphere can be recovered by the de-diffraction algorithm. This algorithm was studied in [19] and is used directly here. Fig 11A shows an image of a 20 μm microsphere under a 10x microscope. After being magnified four times, the de-diffracted image of Fig 10D is shown in Fig 11B, and the de-diffracted image of Fig 10B in Fig 11C. The pixel values along the white line are plotted in Fig 11D; comparing these images, the recovery result in this paper has smoother edges and clearer details.
Fig 11
The analysis of de-diffraction image.
(A) The microscope image under a 10x microscope. (B) The de-diffraction image of Fig 10D. (C) The de-diffraction image of Fig 10B. (D) The pixel values of the white line in (B) and (C) are plotted.
We calculated the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) of the enlarged image from the area array sensor and the de-diffraction image from the dual-line array sensor. In Table 2, because an ideal image is used as the reference image, the PSNR and SSIM of all images are relatively low. However, the de-diffraction image of the dual-line array sensor has a higher PSNR (improved 1.62 times), and its SSIM is closest to 1 (improved 3.96 times).
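For reference, the PSNR used above can be computed directly. This is a minimal NumPy sketch; the microsphere images and the ideal reference are not reproduced here, so a toy 8-bit example stands in:

```python
import numpy as np

def psnr(reference: np.ndarray, image: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, for 8-bit images by default."""
    diff = reference.astype(np.float64) - image.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float(10.0 * np.log10(peak * peak / mse))

# Toy example: a constant offset of 16 gray levels gives MSE = 256.
ref = np.zeros((8, 8), dtype=np.uint8)
img = np.full((8, 8), 16, dtype=np.uint8)
print(round(psnr(ref, img), 2))  # about 24.05 dB
```

SSIM is more involved (local luminance, contrast and structure terms); in practice a library implementation such as scikit-image's `structural_similarity` is typically used.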
Table 2
The PSNR, SNR and SSIM of the images from the different sensors, before and after de-diffraction.

Image                       | PSNR (dB) | SNR (dB) | SSIM
Bilinear Interpolation      | 11.945    | 10.623   | 0.053
Cubic Spline Interpolation  | 11.981    | 10.659   | 0.054
Dual-line Array Sensor      | 19.301    | 17.979   | 0.210
After de-diffraction, the measured size of this microsphere is 20.7375μm, and the size error in our experiment is less than 10%. We calculated the size and its error for 50 microsphere images from the dual-line array sensor in Fig 12. Their calculated diameters are shown in Fig 12A, where the white columns represent the portion above the real diameter and the black columns the portion below it. It can be seen that the error between the calculated diameter and the real diameter is small, almost within 2μm. Meanwhile, the diameter error of each microsphere is also shown in Fig 12B; the reconstructed size errors are within the tolerance of the microspheres, and the average error is 3.249%.
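The size-error statistic quoted above is a mean relative error against the nominal 20μm diameter. With hypothetical measured diameters (the 50 measured values themselves are not listed in the text), it would be computed as:

```python
import numpy as np

NOMINAL_UM = 20.0
# Hypothetical measured diameters (um); the paper's 50 values are not listed.
measured = np.array([20.7375, 19.5, 20.2])

errors_pct = np.abs(measured - NOMINAL_UM) / NOMINAL_UM * 100.0
print(round(float(errors_pct.mean()), 3))
```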
Fig 12
The analysis of the 50 microsphere images of the dual-line array sensor.
(A) The dimensions. (B) The dimension errors.
In the above test, the angle between the micro-channel and the linear array sensor is 21 degrees, and the sensor pixel size is 2.2μm. If a sensor with smaller pixels or a smaller angle is used, the method in this paper is still applicable, and the equivalent pixel size becomes smaller, as shown in Fig 13. The resolution magnification is related only to the tilt angle, not to the pixel size.
Fig 13
The relationship between the equivalent pixel size and the angle between the linear array and the micro-channel, for different pixel sizes of the dual-line array sensor.
We performed the same experiment with angles of 15 and 10 degrees and a pixel size of 2.2μm (Fig 14). At an angle of 15 degrees, the equivalent pixel size is 0.569μm; at 10 degrees, it is 0.382μm. This is smaller than the equivalent pixel sizes of 0.775μm in paper [8] and 0.770μm in paper [20], both obtained with a 1.67μm-pixel image sensor.
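The equivalent pixel sizes quoted for 21, 15 and 10 degrees are consistent with projecting the physical pixel pitch onto the flow direction, i.e. p_eq = p·sin(θ). This is our reading of Eq (3), inferred from the reported numbers and stated here as an assumption:

```python
import math

def equivalent_pixel_size(pixel_um: float, tilt_deg: float) -> float:
    # Projection of the physical pixel pitch onto the flow direction
    # (assumed form of Eq (3), inferred from the reported values).
    return pixel_um * math.sin(math.radians(tilt_deg))

for angle in (21, 15, 10):
    print(angle, round(equivalent_pixel_size(2.2, angle), 3))
```

This also makes explicit why the resolution magnification p/p_eq = 1/sin(θ) depends only on the tilt angle, not on the pixel size.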
Fig 14
The reconstructed super-resolution images and their de-diffraction images at different angles between the linear array and the micro-channel.
The quantity of information can be evaluated by image entropy: the higher the value, the more information. In Table 3, as the angle decreases, the image entropy at the real size becomes smaller and smaller. When these images are enlarged to the same size, the image entropy decreases greatly. This means that a smaller angle yields higher image resolution but less information. Therefore, the tilt angle can be conveniently selected to meet different needs simply by rotating the microfluidic chip.
Table 3
The image entropy at different angles.

Image size                | Angle | Image entropy (bit/pixel)
Real size                 | 10˚   | 1.6469
                          | 15˚   | 2.1064
                          | 21˚   | 2.3829
Enlarged to the same size | 10˚   | 1.6469
                          | 15˚   | 1.6455
                          | 21˚   | 1.6176
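Image entropy as used in Table 3 is typically the Shannon entropy of the gray-level histogram. A minimal sketch follows; the actual images are not available here, so a toy two-level image stands in:

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy (bit/pixel) of an 8-bit image's gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                     # 0*log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

# Toy image with two equally frequent gray levels -> exactly 1 bit/pixel.
toy = np.array([[0, 255], [0, 255]], dtype=np.uint8)
print(image_entropy(toy))
```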
Conclusion
In summary, a super-resolution scanning system using the dual-line array image sensor is demonstrated to obtain super-resolution images of cells. Firstly, a method combining a background mean model and a multi-threshold foreground coarse segmentation is designed to extract the cell foreground information from the line-scan image. Secondly, multiple sets of velocities and accelerations of cells passing through the linear array sensor are calculated with the MSER and SSD algorithms. Then the reconstruction model of the scanning image is derived for uniform-speed, uniform-acceleration and variable-acceleration flow. Finally, the super-resolution image of the cells is reconstructed. When the pixel size of the linear array sensor is 2.2μm and the angle is 21 degrees, the equivalent pixel size is 0.79μm (improved 2.8 times, compared with 2.15 times in papers [8,20]). After de-diffraction, the size error of the 20μm microsphere was 3.249%, the PSNR was improved 1.62 times, and the SSIM was improved 3.96 times. With the same system structure, the equivalent pixel size can reach 0.382μm at an angle of 10 degrees, although the image entropy also decreases. Furthermore, the resolution and the solution flow rate can be improved by using image sensors with smaller pixels and higher sampling rates, or by using multi-channel high-throughput microfluidic chips, with which high-throughput analysis can be achieved [21]. These results demonstrate that the proposed super-resolution scanning algorithm and system are effective. Applying the algorithm in lensless optofluidic microscopy can provide a more convenient approach to cell detection instruments.

29 May 2020
PONE-D-20-10408
A super-resolution scanning algorithm for lensless microfluidic imaging using the dual-line array image sensor
PLOS ONE

Dear Dr. Yu,

Thank you for submitting your manuscript to PLOS ONE.
After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process, in particular points 1-4 raised by reviewer 2.

Please submit your revised manuscript by Jul 13 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future.
For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Christof Markus Aegerter
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 3 in your text; if accepted, production will need this reference to link the reader to the Table.

Additional Editor Comments (if provided):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A
Reviewer #2: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file).
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript studies an optofluidic on-chip holographic microscope using angularly tilted dual linear sensors. Using this framework the authors demonstrate super-resolution of the on-chip holography, which is one of the limitations of lensfree imaging. The tradeoff in the current implementation is the overall throughput of the device (for example, in comparison to: https://www.nature.com/articles/s41377-018-0067-0). I think that the authors should add a discussion on the throughput / super-resolution for the suggested system. Also, although the overall English level is OK, I suggest having this manuscript proofed by someone who is not involved in the research, so that it is written in a more concise manner.

Reviewer #2: The manuscript reports a super-resolution scanning system with a tilted dual-line image sensor.
It can reconstruct cell images that bypass the pixel size limit. The dual line array is used to accurately track the cell velocity when flowing through the channel. This manuscript is in general technically sound. I recommend its publication if the authors can better address the following concerns:

1) The sensor, in fact, is based on the MT9P031 area sensor chip. The authors only read out two lines of the chip. It is unclear to me whether this two-line scheme has a real benefit compared to the original subpixel optofluidic microscope developed at Yang's group at Caltech, where hundreds of lines are read out, also at a high speed.

2) From line 120 to 124, the author tries to declare a formula for the initial background calculation. However, these parameters are not well defined. I assume Fi is the row number, which should belong to N rows, i∈N. If so, the meanings of Fi-N and Fi-1 are unclear to me. What is the range of index i? The parameter k in equation (5) hasn't been defined either.

3) What is the distance between the channel and the sensor chip? In the original optofluidic microscope device, the cover glass was removed and the channel is directly placed on top of the sensing area. If the distance is large, did they propagate the light to the sample plane using a phase retrieval process?

4) For the reconstructed results in Fig 10: as the author demonstrates, the result of variable acceleration (Fig 10B) is much better than the uniform speed (Fig 10A). I am curious about this performance difference. Does that mean the current system could only perform well in a certain condition? I hope the author could discuss it.

5) Only a 20-μm microsphere is demonstrated in the manuscript. This reviewer feels that the impact may be a little short for the lab-on-a-chip community.
It will be better if the authors can provide images of certain cells.

6) I also suggest the authors give a better introduction and review of the super-resolution lensless microscopy approaches that bypass the pixel size limit in their second paragraph. New developments in this field include the use of an up-sampling phase retrieval process to bypass the pixel size limit; see, for example, "Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation," Lab on a Chip, 20, 1058-1065, 2020.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

4 Jun 2020

Additional Editor Comments Response:

1.
The bold words mean the sentences have been revised in the "Revised Manuscript with Track Changes" file and the "Manuscript" file.

2. Table 3 was not referred to in the text of my first manuscript. I'm sorry that this is an error that should not have occurred. It has been corrected in the revision file.

3. My figure files have been uploaded to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, and I have ensured that the figures meet PLOS requirements.

4. We have updated the data availability statement.

Notice: The "Revised Manuscript with Track Changes" file differs from the "Manuscript" file in the deleted and added sentences; the former marks these sentences in red and the latter does not.

Reviewer #1: The manuscript studies an optofluidic on-chip holographic microscope using angularly tilted dual linear sensors. Using this framework the authors demonstrate super-resolution of the on-chip holography, which is one of the limitations of lensfree imaging. The tradeoff in the current implementation is the overall throughput of the device (for example, in comparison to: https://www.nature.com/articles/s41377-018-0067-0). I think that the authors should add a discussion on the throughput / super-resolution for the suggested system. Also, although the overall English level is OK, I suggest having this manuscript proofed by someone who is not involved in the research, so that it is written in a more concise manner.

Response: Thanks to the reviewer for these suggestions; we have noted the mentioned paper and discussed it in the new manuscript. The experiment in the first manuscript shows that the solution flow rate is about 5μL/min ~ 10μL/min, which is only the data for a single channel of 100μm width. The method in our paper collects as much sub-pixel information as possible through the image sensor's high sampling rate and control of the solution flow rate.
The sampling rate of the image sensor and the actual flow rate of the cell affect the setting of the solution flow rate. In our experiment, we chose a flux that is suitable for the sampling rate of the image sensor. Therefore, our focus is different from that of the paper mentioned by the reviewer: we focus on high-resolution imaging of flowing cells, which are smaller in size than in the mentioned paper. Furthermore, our system can also increase the solution flow rate by using image sensors with faster sampling rates or multi-channel high-throughput microfluidic chips. Finally, the manuscript has been revised by someone proficient in English.

Reviewer #2:

1) The sensor, in fact, is based on the MT9P031 area sensor chip. The authors only read out two lines of the chip. It is unclear to me whether this two-line scheme has a real benefit compared to the original subpixel optofluidic microscope developed at Yang's group at Caltech, where hundreds of lines are read out, also at a high speed.

Response: The subpixel optofluidic microscope developed at Yang's group at Caltech is based on an area array image sensor. The idea is to collect multi-frame low-resolution images of the area array, and then reconstruct high-resolution images with a multi-frame super-resolution algorithm. Our system is based on the dual-line array image sensor. It collects the sub-pixel information of the cell through the dual-line array image sensor: one line is used as a basic super-scan image, and the other is used to calculate the instantaneous flow rate of the cell at multiple points. In contrast, the advantages of our system are: 1. The system only samples the flowing cells through two rows of pixels, which naturally reduces the noise caused by the low cleanliness of the microfluidic chip. 2. Current commercial linear array image sensors have pixels that are too large, so we chose an area array image sensor with smaller pixels and a high sampling rate, used with a region of interest (ROI).
Our research team is also designing the corresponding dual-line array image sensor, which means that there are only two rows of pixels and the sampling rate is higher. Reading only two rows of pixels can reduce the area and power consumption caused by too many pixels while acquiring higher resolution images. This is more conducive to the development of lab-on-chip systems and portable mobile monitoring equipment.

2) From line 120 to 124, the author tries to declare a formula for the initial background calculation. However, these parameters are not well defined. I assume Fi is the row number, which should belong to N rows, i∈N. If so, the meanings of Fi-N and Fi-1 are unclear to me. What is the range of index i? The parameter k in equation (5) hasn't been defined either.

Response: I'm sorry that the description in our manuscript is unclear. In the manuscript, i is the current number of collection times; when the sensor collects for the first time, i is 1. N rows of background images, from i-N to i-1, are buffered to establish the initial background mean model. For example, assuming that N is 20, when the 1000th acquisition is performed, that is, i=1000, then Fi-N to Fi-1 represent the 980th to 999th rows. I'm also sorry that there is an error in equation (5), where k is the loop variable in the cumulative calculation, and its range is i-N to i-1. Thanks to the reviewer for the careful inspection; these two issues have been revised in the new manuscript.

3) What is the distance between the channel and the sensor chip? In the original optofluidic microscope device, the cover glass was removed and the channel is directly placed on top of the sensing area. If the distance is large, did they propagate the light to the sample plane using a phase retrieval process?

Response: The distance between the channel and the sensor chip, that is, between the object plane and the imaging plane, is about 600μm.
The original optofluidic microscope device mentioned by the reviewer removes the cover glass on the image sensor surface to reduce the distance between the object plane and the imaging plane, which reduces diffraction. In this paper, during the test, the cover glass was not removed; instead, the acquired diffraction image was reconstructed, and then the phase information of the object was recovered using a phase retrieval process. Since this algorithm is not the focus of this paper, and our research team has published the corresponding paper, this method is directly cited as Ref. 19. It is worth noting that the method in this paper is also applicable when the distance between the object plane and the imaging plane is small.

4) For the reconstructed results in Fig 10: as the author demonstrates, the result of variable acceleration (Fig 10B) is much better than the uniform speed (Fig 10A). I am curious about this performance difference. Does that mean the current system could only perform well in a certain condition? I hope the author could discuss it.

Response: I'm sorry that the description in our manuscript is unclear. Fig 10A is an image reconstructed by the algorithm derived under the assumption that the microspheres have a uniform velocity. Fig 10B is an image reconstructed by the algorithm derived under the assumption that the microspheres have variable acceleration. The flow velocity of the microspheres is changing, and the direction of flow is also changing. The results of Fig 10B are better than those of Fig 10A, which shows that the algorithm derived from uniform velocity alone does not fit the real situation well. The algorithm derived under the variable-acceleration assumption can more accurately reconstruct the image under real conditions. Therefore, this does not mean that the system can only perform well under certain conditions.
Instead, it means that the system can compensate for the distortion of the reconstructed image caused by uneven cell flow velocity by estimating the acceleration at different points.

5) Only a 20-μm microsphere is demonstrated in the manuscript. This reviewer feels that the impact may be a little short for the lab-on-a-chip community. It will be better if the authors can provide images of certain cells.

Response: This is a very good suggestion. We analyzed the 20-μm microsphere in the experiment and reached the conclusions described in the paper. In later experiments, we plan to test red blood cells, white blood cells, and algae, and design an embedded, on-chip detection device. However, due to the impact of the COVID-19 epidemic, we cannot return to the laboratory for related experiments. Therefore, we hope to publish the conclusions obtained first.

6) I also suggest the authors give a better introduction and review of the super-resolution lensless microscopy approaches that bypass the pixel size limit in their second paragraph. New developments in this field include the use of an up-sampling phase retrieval process to bypass the pixel size limit; see, for example, "Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation," Lab on a Chip, 20, 1058-1065, 2020.

Response: Thanks to the reviewer for providing us with a new paper; we have added an analysis of it to the new manuscript. The mentioned paper uses an up-sampling phase retrieval process to bypass the pixel size limit: it introduces some optical devices at the optical level and improves the resolution through a phase recovery algorithm. Our system builds on the original system without increasing its complexity, using the high-sampling-rate linear array sensor and the flow of the cell itself to collect more sub-pixel information and improve the imaging resolution.
The research perspectives of the two are different.

Submitted filename: Response to Reviewers.docx

10 Jun 2020
A super-resolution scanning algorithm for lensless microfluidic imaging using the dual-line array image sensor
PONE-D-20-10408R1

Dear Dr. Yu,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Christof Markus Aegerter
Section Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors have addressed my previous comments. I think the presented results are technically sound and it is fine to accept.

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

16 Jun 2020
PONE-D-20-10408R1
A super-resolution scanning algorithm for lensless microfluidic imaging using the dual-line array image sensor

Dear Dr. Yu:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Prof.
Christof Markus Aegerter
Section Editor
PLOS ONE