Algorithms for the Detection of Blob Defects in
High Speed Glass Tube Production Lines
Gabriele Antonio De Vitis, Dip. di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italia
Pierfrancesco Foglia, Dip. di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italia, [email protected]
Cosimo Antonio Prete, Dip. di Ingegneria dell’Informazione, Università di Pisa, Pisa, Italia, [email protected]
Abstract—Glass tube production requires high-quality checks that may be performed via inspection systems based on image processing. Such processing must be fast enough to guarantee real-time inspection and to meet the increasing production speed and quality required by the market. Among the various classes of defects, blobs, which appear on the tube surface as circular lenses, are critical as they are hard to detect and can suggest to consumers that the contained product is of low quality. In this paper, we present a system based on a defect detection algorithm that outperforms, in terms of processing time, state-of-the-art solutions without increasing the number of undetected defective tubes. The algorithm exploits the features of the defects and the properties of the tube images that are relevant in the production process.
Keywords—Defect detection, glass tube, real time inspection, image processing, inspection systems.
I. INTRODUCTION
Borosilicate glass tubes are usually converted into containers for pharmaceutical applications such as vials, syringes and carpules and for other application domains. Due to imperfections in the raw materials used in the furnace, this type of glass may have defects such as knot inclusions (blobs) or flexible fragments called lamellae, which can cause subsequent problems and recalls [1],[2],[3]. More than 100 million units of drugs packaged in vials or syringes have been withdrawn from the market due to recalls associated with glass quality problems [4],[5].
Among the various classes of glass defects, an important one is represented by knot inclusions (blobs): they appear on the tube surface as circular lenses, while they appear on the captured image as dark patches, orthogonal to the frame (Fig. 1). Blobs affect the cosmetic look of the container and can suggest to consumers that the contained product is of low quality.
In this paper we focus on aesthetic defects and in particular on blob defects, as they are the most critical defects to be detected by conventional techniques (operators) due to their geometrical features and visible sizes [6].
An inspection system [7],[8],[9],[10] based on machine vision consists of the Image Acquisition Subsystem and the Host Computer. The Image Acquisition Subsystem is devoted to the acquisition of the digitized images (frames); the key components of such a system are a LED-based illuminator, a line scan camera, and a frame grabber, which groups the single sequential lines captured by the camera into a single frame and transfers it to the Host Computer. To achieve a 360-degree inspection, 3 cameras are utilized. The Host Computer implements defect detection and classification algorithms (in the Defect Detection and Classification subsystems) and takes the discard decisions (via the Discard Decision and Data Analysis subsystem), communicating them to a Cutting and Discarding Machine. Discard decisions are taken considering process parameters, some of which are set via an operator GUI, while other parameters are communicated by the Drawing Machine, which establishes the tube speed. The defect detection and classification algorithms implemented in the Defect Detection and Classification subsystems usually work on a part of the acquired image (Region Of Interest – ROI), as only a portion of the acquired image represents the tube.
Fig. 1. Example of a blob defect in a glass tube (highlighted by a red oval).
The evolution of the glass production process requires both high accuracy in defect detection and faster production lines. A discard decision for a tube slice can be taken once the features of the defects on that slice are known. These features can be derived from the defects present in each acquired frame of the tube slice. The detection and classification of these defects imposes temporal constraints on the system. The inspection system, in fact, works in a pipeline, and the Image Acquisition Subsystem feeds the pipeline at a rate determined by the sampling rate of the line scan camera divided by the number of lines in a frame. The Defect Detection and Classification module must work at the same rate to avoid frame loss, i.e., the sampling rate of the line scan camera enforces an upper bound on the processing time of the defect detection and classification algorithms. An increase in production speed requires line scan cameras with a higher sampling rate to keep the accuracy of defect detection constant or to improve it. Consequently, increases in production speed or quality requirements determine the need to reduce the processing time of defect detection and classification algorithms.
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
In this paper, we propose an algorithm for the detection of blob defects that can satisfy the requirements on reduced processing time imposed by the evolution of the production process. The performance of the system based on such an algorithm is compared with that achieved by adopting conventional algorithms. Results show that, with respect to conventional solutions, we achieve a 66% reduction in processing time with higher accuracy, while throughput increases by about 3X, and by about 6X when adopting ROI reduction techniques. These results are a consequence of the exploitation of defect features and of the properties of the tube images.
II. STATE OF THE ART
As usually done in inspection systems, the set of elaborations performed on each frame can mainly be divided into 3 stages [8],[11],[12].
1. Image preprocessing
2. Defects detection
3. Defects classification
In the pre-processing phase, algorithms are used to prepare the image for the following stages, with the aim of reducing detection errors due to the acquisition process and/or speeding up the calculation by excluding the regions not to be investigated. The steps generally adopted concern noise reduction, contrast enhancement [13], elimination of unwanted regions and identification of the region of interest (ROI) (Fig. 2).
State-of-the-art ROI extraction techniques consist in identifying the visible part of the tube inside the frame (called hereinafter the internal part of the tube), excluding also the edges of the tube. The edges appear dark as the light rays of the illuminator, having a critical angle of incidence on them (according to Snell's law), are reflected by the glass tube and do not reach the camera sensor. An algorithm to extract the internal part of the tube has been proposed in [7].
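The specific steps of the ROI extraction algorithm of [7] are not reproduced here. Purely as an illustration of the idea, the two dark edge bands can be located from the per-column mean brightness and the columns between them kept as the internal part of the tube; the function name and the brightness level below are hypothetical and do not come from [7].

// Illustrative ROI extraction sketch (NOT the algorithm of [7]): locate the two
// dark tube edges from the per-column mean brightness and keep the columns in
// between. The threshold `edgeLevel` is a hypothetical tuning parameter.
#include <opencv2/opencv.hpp>

cv::Rect extractTubeRoi(const cv::Mat& frame, double edgeLevel = 60.0)
{
    CV_Assert(frame.type() == CV_8UC1);
    cv::Mat colMean;                                    // 1 x M row vector of per-column means
    cv::reduce(frame, colMean, 0, cv::REDUCE_AVG, CV_64F);

    int left = -1, right = -1;
    for (int i = 0; i < colMean.cols; ++i) {            // first dark column from the left
        if (colMean.at<double>(0, i) < edgeLevel) { left = i; break; }
    }
    for (int i = colMean.cols - 1; i >= 0; --i) {       // first dark column from the right
        if (colMean.at<double>(0, i) < edgeLevel) { right = i; break; }
    }
    if (left < 0 || right <= left)                      // edges not found: keep the whole frame
        return cv::Rect(0, 0, frame.cols, frame.rows);

    // Internal part of the tube: columns strictly between the two dark edges.
    return cv::Rect(left + 1, 0, right - left - 1, frame.rows);
}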
In the defect detection stage, algorithms are used to determine image regions whose pixels may identify a defect. To extract these regions, segmentation techniques are adopted [11], typically based on thresholds [9] or on edge detection [14],[15].
The classification stage consists of algorithms that extract a series of characteristics of the segmented regions, possibly assigning them to predetermined classes of defects.
State-of-the-art techniques for feature/defect detection and extraction are the edge detection techniques [14]. Edge detection aims to identify points in a digital image where the image brightness changes sharply compared to the surrounding area. Among the various edge detection techniques, the algorithm proposed by Canny [15] (Canny algorithm) is considered the ideal one for noisy images [14]. The image is first smoothed with a Gaussian filter and then the gradient magnitude is computed at each pixel of the smoothed image; edges are determined by applying non-maximum suppression, double thresholding, and hysteresis. The algorithm has been usefully adopted in various application domains (inspection of semiconductor wafer surfaces [16], detection of defects in satin glass [17], measuring icing shape on conductors [18], studies on bubble formation in co-fed gas-liquid flows [19]) and it has also been adopted for the detection of defects in similar inspection systems [7].
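For reference, the Canny-based detection stage can be reproduced with standard OpenCV primitives. This is a minimal sketch: the 5x5 Gaussian kernel is an assumption, while the hysteresis thresholds (35, 80) and the final dilation are those used for the comparison in Section V.

// Sketch of the Canny-based detection stage used as baseline. The 5x5 Gaussian
// kernel is an assumption; the hysteresis thresholds (35, 80) are those listed
// in Table I. The final dilation merges edge fragments into connected regions
// (see Section V).
#include <opencv2/opencv.hpp>

cv::Mat cannyDetection(const cv::Mat& roi)
{
    cv::Mat smoothed, edges;
    cv::GaussianBlur(roi, smoothed, cv::Size(5, 5), 0);   // noise reduction
    cv::Canny(smoothed, edges, 35, 80);                    // gradient + NMS + hysteresis
    cv::dilate(edges, edges,
               cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));
    return edges;                                          // binary map of edge pixels
}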
Other techniques for defect detection and extraction are based on thresholds, which can be global (fixed for the whole image) or local, i.e., variable in different regions of the image [20].
As for global threshold techniques, [9] presents an inspection method for float glass fabrication. The authors utilize a benchmark image to remove bright and dull stripes that are present in their glasses. Then, they utilize adaptive global thresholds based on the OTSU algorithm [21] to separate distortions from defects. The OTSU algorithm selects threshold values (one for each image) that maximize the inter-class variance of the image histogram [22]. It is useful for separating the background from defects/foreground and produces satisfactory results when images present bimodal or multimodal histograms [11]. It has been successfully utilized in [11] to derive a configurable industrial vision system for the surface inspection of transparent parts (in particular, it has been tested on headlamp lenses), in [23] to separate defects from the background in a float glass defect classifier, and in [24] for glass inspection vision systems.
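As a minimal illustration of the global-threshold approach, OpenCV computes the OTSU threshold directly from the image histogram; the inverted binarization below assumes dark defects on a bright background and is only a sketch, not the method of [9].

// Minimal illustration of OTSU global thresholding: the threshold is derived
// from the image histogram, and pixels darker than it are marked as candidate
// defects (THRESH_BINARY_INV assumes dark defects on a bright background).
#include <opencv2/opencv.hpp>

cv::Mat otsuSegmentation(const cv::Mat& gray)
{
    cv::Mat mask;
    double t = cv::threshold(gray, mask, 0, 255,
                             cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    (void)t;   // t is the automatically selected OTSU threshold
    return mask;
}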
Fig. 2. Image taken by the line-scan camera. The array of CCD sensors in the line scan camera is orthogonal to the direction of the movement of the tube (tube direction). The Region of Interest is also highlighted. It represents the portion of the image which is further analyzed for detecting defects. The frame is composed of 1000x2048 pixels, which corresponds to a tube length of 500 mm and a tube diameter of about 20 mm. We must note that a (square) pixel in the picture corresponds to a rectangular portion of the tube, so that the original shape of the tube is deformed in the image.
By considering the peculiarities of the glass tube production process, in which tubes rotate and vibrate, the most important task of stage 1 is the ROI extraction. Only a portion of the image is related to the tube, and only a slice of it may contain visible defects (Fig. 2); such a portion cannot be derived with background subtraction or other template matching techniques [25] due to vibration and the imperfect circular cross-section of the tube (“sausage” shape). For stage 2, by considering the characteristics of glass tube production, the use of single or multiple constant thresholds does not allow the detection of defects (Fig. 5 highlights that the luminous intensity of columns belonging to the defect is similar to that of columns not including a defect) due to the features of the defects, so a Canny edge detector can be applied. The 3 stages can therefore be simplified as follows (a baseline sketch of this pipeline is given after the list):
1) ROI Extraction
2) Canny edge detection (Canny Algorithm).
3) Edges classification inside the ROI and defects classification (Classification).
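Under the assumption that the three stages are implemented as separate routines, the baseline pipeline and the per-stage timing used to obtain Fig. 3 can be sketched as follows; the function names are placeholders for the stages described above, not the production code.

// Baseline pipeline sketch: ROI extraction, Canny detection, classification,
// with per-stage timing as in Fig. 3. extractTubeRoi(), cannyDetection() and
// classifyRegions() are placeholders for the stages described in the text.
#include <chrono>
#include <iostream>
#include <opencv2/opencv.hpp>

cv::Rect extractTubeRoi(const cv::Mat& frame, double edgeLevel = 60.0);  // stage 1 (sketched above)
cv::Mat  cannyDetection(const cv::Mat& roi);                             // stage 2 (sketched above)
void     classifyRegions(const cv::Mat& binaryMap);                      // stage 3 (see Section IV.C)

void processFrame(const cv::Mat& frame)
{
    using clk = std::chrono::steady_clock;

    auto t0 = clk::now();
    cv::Rect roiRect = extractTubeRoi(frame);          // 1) ROI extraction
    auto t1 = clk::now();
    cv::Mat edges = cannyDetection(frame(roiRect));    // 2) Canny edge detection
    auto t2 = clk::now();
    classifyRegions(edges);                            // 3) classification
    auto t3 = clk::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration<double, std::milli>(b - a).count();
    };
    std::cout << "ROI: "      << ms(t0, t1) << " ms, "
              << "Canny: "    << ms(t1, t2) << " ms, "
              << "Classify: " << ms(t2, t3) << " ms\n";
}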
In Fig. 3, we report the processing time for each of the 3 stages previously described. The stage with the longest processing time is the application of the Canny algorithm, which takes 74.6% of the total processing time. This is not surprising, as it performs time-consuming tasks such as the application of the Gaussian filter for noise reduction, the computation of the intensity gradient, and the dual-threshold comparison. Being very general, Canny provides high-quality detection without exploiting the characteristics of the specific defect to be detected.
Fig. 3. Processing Time on an Intel i7-940 CPU (2.93 GHz) for each stage of the defect detection algorithm performed on an image of 1000x2048 pixels (ROI Extraction: 8.04 ms, Canny Algorithm: 65.85 ms, Classification: 9.43 ms). The total Processing Time is 83.2 ms.
In the following, by exploiting the features of the luminous intensity of the defects relevant to the specific domain, we derive an algorithm for defect detection (stage 2) that achieves faster processing times, so that it can be utilized in advanced production processes.
III. RATIONALE OF THE ALGORITHM
The region of interest (ROI) of the captured images appears as a light background whose luminous intensity varies mainly in the direction orthogonal to the direction of movement of the glass; longitudinally, instead, there are no significant variations in intensity.
We consider knot inclusions, or blobs. They appear on the tube surface as circular lenses, while they appear on the captured image as dark patches of small dimensions, orthogonal to the frame (Fig. 1), due to the different orthogonal and longitudinal dimensions of the pixels. They are relevant because they are difficult to detect and because of their effect on the final quality of the produced tube [7], [26]. The blob affects the cosmetic look of the tube and can suggest to consumers that the product is of low quality.
Fig. 4. Frame composed of 1000x2048 pixels with a blob defect. Two columns are highlighted: column b is located on the blob defect, while column a contains no blob defect and is located near the edge of the tube image.
Fig. 4 represents a frame with a blob defect, while Fig. 5 shows the luminous intensity of a column containing this defect (column b in Fig. 4). Pixels where the defect is present have a lower intensity than those where the defect is not present. This suggests using a threshold algorithm to detect defects: pixels with luminous intensity values below a threshold can be classified as defects. However, since the luminous intensity of pixels near the edge of the tube (column a in Fig. 4, whose intensity is depicted in Fig. 5) is lower than the luminous intensity of pixels belonging to the blob defect, it is not possible to use a fixed global threshold for the detection of defects. Such a threshold, in fact, must be adaptive with respect to the average brightness of each column, which varies among columns, and to the dispersion of the values around that average, due to image noise. For this reason, to select adaptive thresholds, we adopted an approach based on the Sigma Rule [27]. To identify blob defects, we calculate the mean value (μ) and the standard deviation (σ) of the luminous intensity of each column and set the threshold to the average luminous intensity of the column minus kc times its standard deviation (Fig. 6). Instead of the standard deviation, more robust estimators such as the Median Absolute Deviation [28] (MAD) could be used (more resilient to outliers), but our goal is to reduce processing time while maintaining the same quality of defect detection, so solutions like MAD are not considered because of their computational cost.
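As a minimal sketch of the rule on a single column, assuming the ROI is an 8-bit grayscale image, the adaptive threshold can be obtained from the column mean and standard deviation (kc = 3.91 is the value later reported in Table I):

// Illustration of the sigma rule on a single column i of the ROI: the threshold
// is the column mean minus kc times the column standard deviation, so pixels
// darker than (mu - kc*sigma) are flagged as candidate blob points.
#include <opencv2/opencv.hpp>

cv::Mat flagColumn(const cv::Mat& roi, int i, double kc = 3.91)
{
    cv::Scalar mu, sigma;
    cv::meanStdDev(roi.col(i), mu, sigma);        // mu(i), sigma(i) of column i
    double threshold = mu[0] - kc * sigma[0];     // adaptive per-column threshold
    return roi.col(i) < threshold;                // 255 where a candidate defect pixel lies
}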
Fig. 5. Luminous intensity (0-255) of the pixels belonging to columns a and b of Fig. 4. Column b includes a blob defect, while column a contains no blob defect and is located near the edge of the tube image.
Fig. 6. Application of the sigma rule to the intensity values of the column with a blob defect. Line μ represents the average value of the luminous intensity of column b of Fig. 4, while lines μ-σ, μ-2σ and μ-3σ represent thresholds equal to the average value minus 1, 2 and 3 times the standard deviation of the column.
IV. THE ALGORITHM
A. Initialization
We assume that a line scan camera collects an image, composed of grey-scale pixels, in a frame grabber. The region of interest (ROI) has previously been extracted using the algorithm presented in [7], and the algorithm starts from this image, with a size of N rows and M columns. We denote by I(i, j) the luminous intensity of the pixel with coordinates (i, j), where i denotes the column index and j the row index, and we assume that the luminous intensity is represented with 8-bit values in the range [0, 255], where 0 is black and 255 is white (grayscale notation).
A result image R(i, j) is created with the same dimensions as the image to be processed, and its pixels are initialized to 0. This image is written by the algorithm and will contain the results of the detection. After that, the blob detection algorithm is executed as follows.
B. Columns processing to detect blobs
1. The average value m of the luminous intensity of column i is evaluated:

   m(i) = (1/N) Σ_{j=1..N} I(i, j)

2. The standard deviation of the luminous intensity of column i is calculated with the simplified formula:

   σ(i) = sqrt( (1/N) Σ_{j=1..N} I(i, j)² − m(i)² )

3. The difference E between the average value of the column and each pixel value is calculated:

   E(i, j) = m(i) − I(i, j)

4. If E(i, j) is higher than kc times the standard deviation of its column, the point (i, j) belongs to an anomaly and the corresponding point in the result image is set to white (i.e., 255):

   if E(i, j) > kc σ(i) then R(i, j) = 255

5. After each column has been processed, the result image R is sent to the classification stage.
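A possible vectorized implementation of steps 1-5, assuming an 8-bit single-channel ROI, computes the per-column statistics with cv::reduce and applies the comparison of step 4 to all columns at once; it is a sketch of the procedure above, not the production code.

// Sketch of the column-processing steps 1-5: per-column mean m(i) and standard
// deviation sigma(i) are computed with cv::reduce (simplified formula
// sigma^2 = E[I^2] - m^2), and R(i,j) is set to 255 where
// m(i) - I(i,j) > kc * sigma(i), i.e. where I(i,j) < m(i) - kc * sigma(i).
#include <opencv2/opencv.hpp>

cv::Mat detectBlobs(const cv::Mat& roi, double kc)
{
    CV_Assert(roi.type() == CV_8UC1);
    cv::Mat I;
    roi.convertTo(I, CV_32F);

    cv::Mat m, m2, sigma;
    cv::reduce(I, m, 0, cv::REDUCE_AVG);                 // 1) per-column mean m(i)
    cv::reduce(I.mul(I), m2, 0, cv::REDUCE_AVG);          //    per-column mean of I^2
    cv::sqrt(m2 - m.mul(m), sigma);                        // 2) sigma(i)

    cv::Mat thr = m - kc * sigma;                          // per-column threshold m(i) - kc*sigma(i)
    cv::Mat thrFull;
    cv::repeat(thr, I.rows, 1, thrFull);                   // replicate the 1 x M threshold row

    // 3)-4) mark anomalies: R(i,j) = 255 where I(i,j) is below the column threshold
    cv::Mat R = (I < thrFull);                             // CV_8U result image, 0 or 255

    return R;                                              // 5) passed on to the classification stage
}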
C. Classification
A classification algorithm is needed to identify groups of points whose features make them candidate defects and to separate them from noisy points. The adopted method consists in building, for each group of adjacent pixels, the smallest rectangle that contains them (the container of the recognized pixels). Considerations on the number of pixels included in such a container permit distinguishing real defects, and blob defects in particular, from noise.
We considered the following parameters:
MA: Minimum Area, in terms of number of pixels;
LHRThres: threshold on the Length and Height Ratio.
For each container, we evaluate the following quantities:
A: Area of the container, in terms of number of pixels;
LHR: Length and Height Ratio.
The classification is performed according to the algorithm:
If A ≥ MA and LHR ≤ LHRThres the defect is classified as a blob, otherwise the defect is classified as noise.
Based on experts’ experience, the values of the parameters used in our experiments are MA = 20 pixels,
LHRThres = 2. According to the MA setting, the area of the smallest detectable defect is equal to 0.15 mm².
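A possible implementation of this classification stage uses connected-component analysis on the result image R; interpreting the container area A as the width-by-height of the bounding rectangle and LHR as the ratio between its longer and shorter side are our assumptions, since the text does not fix these details.

// Sketch of the classification stage: each connected group of white pixels in
// the result image R is enclosed by its bounding rectangle (the "container");
// a container is classified as a blob when its area is at least MA pixels and
// its length/height ratio does not exceed LHRThres, otherwise it is noise.
// Interpreting the container area as width*height is our assumption.
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

std::vector<cv::Rect> classifyBlobs(const cv::Mat& R,
                                    int MA = 20, double LHRThres = 2.0)
{
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(R, labels, stats, centroids, 8);

    std::vector<cv::Rect> blobs;
    for (int k = 1; k < n; ++k) {                          // label 0 is the background
        int w = stats.at<int>(k, cv::CC_STAT_WIDTH);
        int h = stats.at<int>(k, cv::CC_STAT_HEIGHT);
        int area = w * h;                                   // area of the container
        double lhr = static_cast<double>(std::max(w, h)) / std::max(1, std::min(w, h));
        if (area >= MA && lhr <= LHRThres)
            blobs.push_back(cv::Rect(stats.at<int>(k, cv::CC_STAT_LEFT),
                                     stats.at<int>(k, cv::CC_STAT_TOP), w, h));
        // otherwise the region is treated as noise
    }
    return blobs;
}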
D. Setting of the kc value
In the choice of kc we must consider that:
High k values can cause false negatives, i.e., real defects that are not recognized by the algorithm.
Low k values can cause false positives, i.e., points that the algorithm recognizes as a defect but that do not belong to a defect.
We choose kc as the smallest value that recognizes the defects present in our sample images.
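The choice can be supported by a simple parameter sweep over labeled sample frames. The sketch below only reports, for each candidate k, how many regions are classified as blobs on defective and on defect-free frames; the frame lists and the candidate range are hypothetical, not taken from the paper.

// Hypothetical parameter sweep to support the choice of kc: for each candidate
// k, run detection and classification on labeled sample frames and report how
// many blob containers are found on defective and on defect-free frames.
// The frame lists and candidate range are illustrative only.
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

cv::Mat detectBlobs(const cv::Mat& roi, double kc);                               // Section IV.B sketch
std::vector<cv::Rect> classifyBlobs(const cv::Mat& R, int MA, double LHRThres);   // Section IV.C sketch

void sweepKc(const std::vector<cv::Mat>& defective,
             const std::vector<cv::Mat>& defectFree)
{
    for (double k = 3.0; k <= 5.0; k += 0.25) {
        int onDefective = 0, onClean = 0;
        for (const auto& roi : defective)
            onDefective += static_cast<int>(classifyBlobs(detectBlobs(roi, k), 20, 2.0).size());
        for (const auto& roi : defectFree)
            onClean += static_cast<int>(classifyBlobs(detectBlobs(roi, k), 20, 2.0).size());
        std::cout << "k = " << k << ": " << onDefective
                  << " detections on defective frames, "
                  << onClean << " on defect-free frames\n";
    }
}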
V. RESULTS
The inspection system has been implemented and is operating on the production lines of a real glass tube foundry [7]. The Host Computer is a PC based on an Intel i7-940 CPU. The algorithm runs as a task activated by the frame grabber when a new frame is ready in main memory. The transfer of a frame from the frame grabber buffer to main memory is overlapped with the processing of the previous frame. As the Cutting Machine is located more than 5 m away, once a frame is processed there is more than 1 s available to produce the discarding command. Such time is overlapped with the elaboration of consecutive frames, so it has not been considered in our evaluation.
System performance has been evaluated using a dataset of 30 frames of glass tubes acquired during the production phase. All frames are composed of 1000 lines acquired by the line scan camera sensor (2K pixels). These frames contain 10 blobs; one frame contains 3 blobs, while 22 frames contain no blobs. The machine on which the tests are executed has a configuration similar to the production one and is equipped with an Intel Core i7-940 CPU running at 2.93 GHz.
Tests are executed to measure the following parameters: processing time, number and type of defects classified, and blob area of the detected blobs. As for processing time, we perform 400 executions of the whole dataset of images [31], and we take the average execution time of the defect detection phase and the maximum total processing time. Processing time is the main parameter; the others are related to detection accuracy: a good algorithm should be fast but should not reduce the inspection quality. As for the number and type of defects, we considered the number of true positives (TP), i.e., the number of defects that are properly classified by the algorithm; the number of false positives (FP), i.e., defects that are classified by the algorithm but do not correspond to real defects in the tube; and the number of false negatives (FN), i.e., defects that are present in the tube but are not recognized by the algorithm. We also consider the number of tubes discarded based on a similar classification.
TABLE I. HOST COMPUTER AND DEFECT DETECTION SYSTEM CONFIGURATION

Host Computer hardware configuration
  Processor: Intel® Core™ i7-940 (8 MB cache, 2.93 GHz)
  RAM: 8 GB

Defect detection system
  Algorithm name   | Canny            | Sigma
  Pre-processing   | ROI Extraction   | ROI Extraction
  Defect detection | Canny Algorithm  | Local Threshold
  Parameters       | Thresholds 35-80 | kc = 3.91
  Post-processing  | Classification   | Classification
The real defects have been identified by human experts, with a visual inspection of the tubes.
Performance figures of the system have been compared with those achieved by a version that adopts, for the defect detection phase, the Canny edge detection algorithm [15]. As for the preprocessing phase, the same version of the ROI extraction algorithm has been used in both cases. The proposed algorithm (hereinafter called Sigma) has been configured with kc = 3.91, while for the Canny edge detector we adopted the thresholds 35-80. Such values generate the best quality of detection in terms of TP, FP, and FN.
Table I summarizes the main features of the Host Computer, the algorithms utilized for the various stages of defect detection and their configuration parameters. All the image processing algorithms have been implemented using the OpenCV library [29], [30] and compiled with the Visual Studio compiler.
Table II reports the accuracy in defect classification, in terms of number and type of defects detected (TP), not detected (FN) and wrongly detected (FP). We also report the accuracy in terms of the number of tubes that should be discarded by the system, relying on right (TP) or wrong (FP) classifications, as more than one defect can be present in a frame. The two algorithms have similar classification accuracy, with a slight reduction in the number of FPs with the Sigma algorithm. The same holds for the number of tubes to be discarded.
TABLE III. BLOB AREA [PIXELS]

Frame             | Expected Value | Canny  | Sigma
1.3               | 135            | 270    | 46
1.5               | 81             | 198    | 48
1.7               | 97             | 186    | 94
1.9               | 100            | 132    | 126
3.2               | 18             | 182    | 19
3.3               | 110            | 236    | 132
3.5               | 1172           | 1958   | 1286
3.7               | 18             | 38     | 22
3.7               | 43             | 55     | 57
3.7               | 22             | 38     | 33
Cumulative sum    | 1796           | 3293   | 1863
Percentage        | 100            | 183.35 | 103.73
Avg Abs Error (%) | 0              | 167.27 | 27.58
Table III reports the accuracy in defect classification, measured as the blob area of the TP defects, as detected by visual inspection of an operator and by the algorithms. The blob area is related to the cosmetic look of the tube and can suggest to consumers that the product is of low quality. We also report: i) the cumulative sum of the defect areas and the fraction of area (in percentage, row Percentage in Table III) detected by the algorithms with respect to the operator; and ii) the average absolute error in percentage, i.e., the relative error (row Avg Abs Error in Table III). The Sigma algorithm improves accuracy with respect to Canny in blob detection: it detects the cumulative blob area with an error of about 3%, and it has an Average Absolute Error of 27%. The significant error of the Canny algorithm is due to the morphological dilate operation that we apply after the Canny detection process to better detect connected regions [29].
In terms of processing time (Fig. 7), the execution time of the Sigma algorithm is 13 ms while the execution time of the Canny algorithm is 61 ms, a 78% reduction of Sigma with respect to Canny. The total processing time is 90 ms with Canny and 30 ms with the Sigma algorithm, a decrease of 66%. This corresponds to a throughput increase of about 3X.
Fig. 7. Processing Time (milliseconds) and Throughput (FPS) for the Canny and Sigma algorithms: defect detection time, total processing time, and frames per second of each algorithm.
Results on the accuracy of detection (Table II) indicate that the Sigma algorithm does not worsen the quality of inspection: the number of True Positives is the same as that obtained with the Canny algorithm, while the number of False Positives decreases. The Sigma algorithm is also more precise than Canny with reference to the size and area of the detected defects. As for the processing time, the analysis of the execution times indicates that the Sigma algorithm is faster than Canny, both when considering the defect detection phase alone and when considering the whole execution. In summary, our proposed algorithm reduces the processing time of inspection with respect to Canny, with better accuracy in detection.
Further improvements can be achieved by adopting the DSDRR (Detrended Standard Deviation ROI Reduction) algorithm [32], a lightweight technique that reduces the ROI by excluding portions inside the tube that do not contain defects. In particular (Fig. 8), the combined use of the Sigma and DSDRR algorithms increases the throughput by 5.72X with respect to the baseline Canny solution.
A reduction of processing time allows the use of cameras with a higher sampling rate. It thus becomes possible to improve the quality of the collected frames, catching defects of smaller size, or to increase the production speed without loss of precision and accuracy in defect recognition.
TABLE II. NUMBER OF RECOGNIZED BLOBS / PORTION OF TUBES DISCARDED AND THEIR CLASSIFICATION

                          | Expected value (TP) | Canny TP/FP (FN) | Sigma TP/FP (FN)
Blobs                     | 10                  | 10/5 (0)         | 10/3 (0)
Number of Tubes Discarded | 8                   | 8/2 (0)          | 8/1 (0)
Fig. 8. Total Processing Time (milliseconds) and Throughput (FPS) for Canny, Canny with DSDRR, Sigma, and Sigma with DSDRR.
VI. CONCLUSIONS
A vision system can be exploited to inspect the quality of glass tubes during the production process. Improvements in such a process and the need to increase detection accuracy suggest the adoption of faster algorithms for the detection of defects. In this paper, starting from the analysis of the defects that are relevant in the pharmaceutical glass tube domain, we have developed an algorithm for blob detection that provides quality of detection comparable with state-of-the-art schemes and reduces the detection time by 78% and the overall processing time by 66%. In terms of throughput, the gain is about 3X, and it increases to about 6X when adopting ROI reduction techniques. As future work, we plan to investigate strategies to parallelize all the phases involved in detection, considering CMPs [33] and the adoption of GPUs.
ACKNOWLEDGMENT
This work has been partially supported by the Italian Ministry of Education and Research (MIUR) in the framework of the CrossLab project (Departments of Excellence – LAB Advanced Manufacturing and LAB Cloud Computing, Big data & Cybersecurity) and by the PRA Project “Guida, Navigazione, Comunicazione e Controllo ad Elevata Sicurezza per Veicoli Senza Pilota”, supported by the University of Pisa.
REFERENCES
[1] Reynolds G, Peskiest D. Glass delamination and breakage, new answers for a growing problem; BioProcess International 9(11):52-57, 2011.
[2] Iacocca R.G., Toltl N., et al. Factors Affecting the Chemical Durability of Glass Used in the Pharmaceutical Industry. AAPS PharmSciTech, v.11(3):1340-1349, 2010.
[3] Schaut, Robert A., and W. Porter Weeks. "Historical review of glasses used for parenteral packaging." PDA journal of pharmaceutical science and technology, 71.4 (2017): 279-296.
[4] Food and Drug Administration. Recalls, Market Withdrawals, & Safety Alerts Search. https://www.fda.gov/Safety/Recalls/, visited February 2019.
[5] Bee J.S., et al. Effects of surfaces and leachables on the stability of biopharmaceuticals. Journal of Pharmaceutical Sciences, 4158–4170, 2011.
[6] Bernhard Hinsch, Visual and Automated Inspection Technologies for Glass Syringes, Pharmaceutical Technology, Vo. 38(10), 2014.
[7] P. Foglia, C.A. Prete, M. Zanda, An inspection system for pharmaceutical glass tubes, WSEAS Transactions on Systems, Vol. 14, Art. #12, PP. 123-136, 2015.
[8] Kumar, Ajay. "Computer-vision-based fabric defect detection: A survey." IEEE Transactions on Industrial Electronics 55.1 (2008): 348-363.
[9] Peng, Xiangqian, et al. An online defects inspection method for float glass fabrication based on machine vision. The Intern. Journal of Advanced Manufacturing Technology 39.11-12 (2008): 1180-1189.
[10] Bradski, Gary, and Adrian Kaehler. Learning OpenCV: Computer vision with the OpenCV library. O'Reilly Media, Inc., 2008.
[11] Martínez, S. Satorres, et al. "An industrial vision system for surface quality inspection of transparent parts." The International Journal of Advanced Manufacturing Technology 68.5-8 (2013): 1123-1136
[12] Malamas, N., et al. A survey on industrial vision systems, applications and tools. Image and vision computing 21.2 (2003): 171-188.
[13] Li, Di, Lie-Quan Liang, and Wu-Jie Zhang. "Defect inspection and extraction of the mobile phone cover glass based on the principal components analysis." The International Journal of Advanced Manufacturing Technology 73.9-12 (2014): 1605-1614.
[14] Kumar, Mukesh, Rohini Saxena. "Algorithm and technique on various edge detection: A survey."Signal & Image Processing 4.3 (2013): 65.
[15] J.F. Canny, A computational approach to edge detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 8(6):679–698, 1986
[16] N.G. Shankar, Z.W. Zhong, Defect detection on semiconductor wafer surfaces, Microelectronic Engineering, 77 (3–4), 337-346, 2005
[17] Adamo, F., et al. "A low-cost inspection system for online defects assessment in satin glass." Measurement 42.9 (2009): 1304-1311.
[18] Xinbo Huang, et al, An Online Technology for Measuring Icing Shape on Conductor Based on Vision and Force Sensors, IEEE Transactions on Instrumentation and Measurement, 66 (12) 3180-3189, 2017.
[19] M.M. de Beer, et al., Bubble formation in co-fed gas–liquid flows in a rotor-stator spinning disc reactor, International Journal of Multiphase Flow, Volume 83, 2016, pp 142-152.
[20] Saxena, Lalit Prakash. "Niblack’s binarization method and its modifications to real-time applications: a review." Artificial Intelligence Review (2017): 1-33.
[21] Otsu N, A threshold selection using an iterative selection method. IEEE Trans Syst Man Cybern (1979) 9:62–66
[22] Zouhir Wakaf et al., Defect detection based on extreme edge of defective region histogram, Journal of King Saud University, -Computer and Information Sciences, Vol. 30(1), 2018, Pp. 33-40
[23] L. Huai-guang et al., A classification method of glass defect based on multiresolution and information fusion, The Int. Journal of Advanced Manufacturing Technology, Vol. 56 (9), pp. 1079-1090, 2011.
[24] Masoumeh Aminzadeh, Thomas Kurfess, Automatic thresholding for defect detection by background histogram mode extents, Journal of Manufacturing Systems, Volume 37, Part 1, pp. 83-92, 2015.
[25] H. Kong, et al., "Accurate and Efficient Inspection of Speckle and Scratch Defects on Surfaces of Planar Products," IEEE Transactions on Industrial Informatics, vol. 13, no. 4, pp. 1855-1865, Aug. 2017.
[26] Berry H. Pharmaceutical aspects of glass and rubber; J. of Pharmacy and Pharmacology, v.5(11):1008-1023; Wiley, 2011.
[27] Pukelsheim, Friedrich. "The three-sigma rule." The American Statistician 48.2 (1994): 88-91.
[28] Jain, Raj. The art of computer systems performance analysis: techniques for experimental design, measurement, simulation, and modeling. John Wiley & Sons, 1990.
[29] Bradski, Gary, and Adrian Kaehler. Learning OpenCV: Computer vision with the OpenCV library. O'Reilly Media, Inc., 2008.
[30] https://opencv.org/
[31] J. Abella, et al., Measurement-Based Worst-Case Execution Time Estimation Using the Coefficient of Variation, ACM Trans. Des. Autom. Electron. Syst. 22, 4, Article 72 (June 2017), 29 pages.
[32] G. A. De Vitis, P. Foglia, C.A. Prete, A technique to reduce the processing time of defect detection in glass tubes, Computing Conference 2019, July 16-17 2019, London, UK. Springer Advances in Intelligent Systems and Computing.
[33] S. Bartolini, P. Foglia, C.A. Prete, Exploring the Relationship between Architectures and Management Policies in the design of NUCA-based Chip Multicore Systems, Future Generation Computer Systems, Vol. 78, Part 2, January 2018, pp. 481-501.