For full instructions, visit the IEEE IST 2015 website: http://ist2015.ieee-ims.org
Program for 2015 IEEE International Conference on Imaging Systems and Techniques (IST)
Tuesday, September 15
Tuesday, September 15, 19:00 - 20:00
Food Paradise Canteen, 1/F, E5 Central Teaching Building, University of Macau
Wednesday, September 16
Wednesday, September 16, 07:30 - 09:00
Wednesday, September 16, 09:00 - 09:10
Welcome message from General Chair
Wednesday, September 16, 09:10 - 09:30
Welcome Message from Local Chairs
Wednesday, September 16, 09:30 - 10:30
Lecture 1: Impact of Molecular Imaging for Diagnosis and Treatment of Cancer
Lecture 1: Impact of Molecular Imaging for Diagnosis and Treatment of Cancer
September 16, 9:30 AM
James P. Basilion, PhD
Case Western Reserve University; Case Center for Imaging Research, Cleveland, USA
Molecular Imaging (MI) seeks to non-invasively visualize and quantitatively study normal and pathological processes that occur at the molecular level within living organisms. MI exploits differences in biochemical functioning to derive images representative of ongoing normal biology and/or disease. This approach differs significantly from current clinical imaging modalities, which generate images based on anatomical differences between normal and diseased tissues. MI can therefore be thought of as adding another dimension to current clinical images, that of biochemical information, making the images more useful for disease assessment, diagnosis, and the choice of therapeutic approach. To generate images rich in biochemical information, MI takes advantage of molecular differences between normal and diseased tissues (biomarkers), probing them with biomarker-specific molecules that can be measured non-invasively and quantitatively. For example, in many cancers, biomarkers such as proteases or cell-surface receptors are differentially regulated, making them potential targets for imaging probes that interrogate biologies unique to the disease state. Last year at the School we discussed the use of these markers to engineer new techniques to improve the surgical removal of cancers. This year we will study the use of MI and biomarkers as they impact diagnosis and the delivery of therapeutic payloads.
Wednesday, September 16, 10:30 - 10:50
Wednesday, September 16, 11:00 - 12:00
Lecture 2: Biomedical Image Processing - Advanced Concepts in Breast Cancer Evaluation
Lecture 2: Biomedical Image Processing - Advanced Concepts in Breast Cancer Evaluation
September 16, 11:00 AM
Michalis Zervakis and George Livanos
Department of Electronic and Computer Engineering, Technical University of Crete, Chania, Greece
Research interest in, and utilization of, digital imaging systems is becoming increasingly important in modern healthcare, especially for medical diagnostics and biomedical applications. Distinct imaging modalities, equipped with digital sensors and driven by advanced technology, can provide medical experts with multivariate information about the human organism at both the microscopic and macroscopic levels. One of the most important applications of biomedical imaging is the classification of cancerous tissue segments. In laboratory studies, tissue examination and evaluation are directly related to disease treatment and patient survival. Based on digital imaging techniques, the entire spectrum of digital image processing is now applicable in medicine. Yet the interpretation of medical images still relies mainly on subjective visual estimates by an expert, yielding only semi-quantitative results open to inter-observer variation. Consequently, there is a need to establish more objective, automated or semi-automated methods to qualify and quantify laboratory assessments of human tissue in combination with the adopted imaging modality. However, commercial algorithmic procedures are not open to targeted analysis and extension. Thus, the integration of individual algorithmic steps implemented in simple, low-cost software constitutes a "semi-automated approach" to the development of effective image analysis tools. This direction presents a great challenge for commercial developers: to provide an efficient, accurate, and financially accessible tool for any pathologist.
Towards this end, we review algorithmic approaches for extracting qualitative and quantitative results regarding disease diagnosis and/or prognosis from biomedical images. We emphasize the increasingly important role of such methods, as well as the necessity of the expert's intervention to resolve ambiguous cases. In addition, we address the need for successive validation in order to achieve standard, repeatable, and generalizable outcomes in clinical practice. These issues are illustrated with extended examples of exploiting and fusing image processing schemes for the classification of breast tissue from microscope image samples. Overall, we discuss an integrated evaluation protocol based on computer-assisted image processing approaches, which forms a promising scientific area and a challenge for the industry.
Wednesday, September 16, 12:00 - 13:20
Lecture 3: Light Scattering Techniques: Novel Applications in Cancer Research and Pharmaceutical Industry
Lecture 3: Light Scattering Techniques: Novel Applications in Cancer Research and Pharmaceutical Industry
September 16, 12:00 PM
Tannaz Farrahi (1), Suman Shrestha (2), Aditi Deshpande (3), Thomas Cambria (4), George Livanos (5), Ying Na (6), Keerthi Srivastav Valluru (7), and George C. Giakos (4)
(1) Dept. of Electrical and Computer Engineering, University of Virginia, USA; (2) University of Massachusetts Medical Center, USA; (3) University of Akron, USA; (4) Manhattan College, USA; (5) Dept. of Electronic and Computer Engineering, Technical University of Crete, Greece; (6) Institute of Communication Engineering, Hangzhou Dianzi University, China; (7) Stanford University Medical School, USA
Optical diagnostic technology based on light scattering, for minimally invasive detection of precancerous and early cancerous changes, will be presented. Among these techniques, near-infrared (NIR) optical imaging is a newer modality that shows much promise for the earlier detection and characterization of many cancers. Giakos and coworkers pioneered the use of NIR polarimetric detection in the design of efficient lung cancer detection techniques and instrumentation. Specifically, the observation of inherent (label-free) NIR polarimetric diffuse reflectance signatures of cellular components (as opposed to any external stains or labels used to treat the sample) is extremely useful and directly applicable in classical cytopathology and histopathology, as well as during real-time surgical excision of tumors, where it contributes to reliable assessment of tumor margins. An important biomarker of precancerous change that has been characterized accurately with these techniques is cell enlargement.
This lecture will present the latest optical scattering techniques for early cancer diagnosis, followed by the principles of NIR polarimetric detection. The challenge in analyzing protein aggregates lies in the characterization of the formed aggregates as well as their wide size range, spanning from nanometers to a few millimeters in diameter. Since no single currently available technique covers this size range, the synergistic use of several techniques is necessary. However, each technique has its own strengths and weaknesses, and the available modalities differ in their physical measuring principles and, consequently, in the results and type of information obtained. The lecture will focus specifically on the physical changes that occur in proteins and their contribution to overall stability relevant to product development, with emphasis on assessing the physical stability of protein pharmaceuticals using innovative light scattering techniques. Comparisons among the different modalities will be presented.
Wednesday, September 16, 13:20 - 14:30
Wednesday, September 16, 14:30 - 15:30
Innovative Detection Systems and Techniques
- 14:30 A Novel Front-End Electronic System with Full-Customized Readout ASIC and Post Digital Pulse Shaping for CZT-Based PET Imaging
- This paper presents a novel front-end electronic system dedicated to CZT detectors for positron-emission tomography (PET) imaging applications. It is implemented with a full-custom readout application-specific integrated circuit (ASIC) and a post digital pulse shaping algorithm in an FPGA. In the front-end readout ASIC, the preamplifier using a split-let topology, the variable-gain amplifier, and the multiple-point-sampling ramp ADC are discussed in detail. The design techniques of digital CR-RC shaping and digital trapezoidal shaping are also introduced. A prototype ASIC has been implemented in 0.35 μm CMOS, and preliminary results have been obtained: the detection range for gamma rays is 11.2 keV to 550 keV, the nonlinearity of the output voltage is less than 1%, and the gain of the readout channel is 40.2 V/pC. The test results show that the proposed front-end electronics is appropriate for PET imaging applications.
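The digital CR-RC shaping mentioned in the abstract can be illustrated as a pair of first-order IIR filters, a CR differentiator followed by an RC integrator. This is a generic sketch of the technique with an arbitrary example time constant, not the authors' FPGA implementation:

```python
import numpy as np

def cr_rc_shaper(x, tau=10.0):
    """Digital CR-RC pulse shaping: a CR (high-pass) stage followed by
    an RC (low-pass) stage, both first-order IIR filters with time
    constant tau in samples. Turns a detector step into a unipolar pulse."""
    a = tau / (tau + 1.0)
    x = np.asarray(x, float)
    # CR (differentiator) stage
    hp = np.zeros_like(x)
    for n in range(1, len(x)):
        hp[n] = a * (hp[n - 1] + x[n] - x[n - 1])
    # RC (integrator) stage
    out = np.zeros_like(x)
    for n in range(1, len(x)):
        out[n] = out[n - 1] + (1.0 - a) * (hp[n] - out[n - 1])
    return out
```

Fed a step input (an idealized preamplifier output), the shaper produces a pulse that peaks roughly one time constant after the step and then decays back to the baseline.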
- 14:45 An FPGA-based Lock-In Detection System to Enable Chemical Species Tomography Using TDLAS
- This paper presents the design, implementation, and testing of a compact, low-cost, fully digital signal recovery system for tunable diode laser absorption spectroscopy (TDLAS) in narrow-linewidth gas sensing applications. An FPGA-based digital lock-in amplifier (DLIA), in conjunction with TDLAS using the wavelength modulation spectroscopy (WMS) technique, is used to demodulate and extract the first (1f) and second (2f) harmonic signals for a narrow CO2 feature in the spectral region around 1997.2 nm. The spectrum in this wavelength region shows suitably weak water absorption, enabling high-resolution CO2 detection. Gas-cell experiments were carried out using the DLIA and a conventional rack-mounted commercial lock-in amplifier. The comparison between the two systems shows good agreement, validating the feasibility of this approach and demonstrating the prospect of extension to a massively multi-channel system implementing Chemical Species Tomography.
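The core of a digital lock-in amplifier, extraction of the 1f and 2f harmonic amplitudes, can be sketched as multiplication by quadrature references followed by averaging over an integer number of reference periods. This is a minimal textbook illustration, not the FPGA design described in the paper:

```python
import numpy as np

def lockin_demodulate(signal, fs, f_ref, harmonic=1):
    """Recover the amplitude and phase of one harmonic of f_ref.

    Multiplies the signal by in-phase and quadrature references and
    low-pass filters by averaging, as a digital lock-in amplifier does.
    Averaging over an integer number of periods rejects other harmonics.
    """
    t = np.arange(len(signal)) / fs
    ref_i = np.cos(2 * np.pi * harmonic * f_ref * t)
    ref_q = np.sin(2 * np.pi * harmonic * f_ref * t)
    x = np.mean(signal * ref_i)   # in-phase component
    y = np.mean(signal * ref_q)   # quadrature component
    r = 2.0 * np.hypot(x, y)      # recovered harmonic amplitude
    phase = np.arctan2(y, x)
    return r, phase
```

For a WMS-style signal containing both a 1f and a 2f component, calling the function with `harmonic=1` and `harmonic=2` separates the two amplitudes.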
- 15:00 Development of 36M-Pixel Micro-CT Using Digital Single-Lens Reflex Camera
- A high-resolution, large field-of-view micro-CT system is indispensable for visualizing fine three-dimensional (3-D) structures of large human lung specimens, a combination that drastically increases the required number of detector pixels. At the SPring-8 synchrotron radiation facility, a micro-CT system based on a 10M-pixel CCD camera was developed over a decade ago for 3-D imaging of centimeter-sized specimens with approximately 7 μm spatial resolution and a field of view of 23.6 mm width x 15.5 mm height. Recent studies require systems with higher spatial resolution and a wider field of view: detectors with a spatial resolution of around 5 μm can visualize capillaries in lung specimens. Accordingly, a wide field-of-view micro-CT system with a spatial resolution of about 5 μm is under development using a 36M-pixel digital single-lens reflex camera.
- 15:15 Coke Deposition Detection Through the Analysis of Catalyst Images
- Coke deposition on a catalyst not only reduces catalytic activity and selectivity but also affects the product yield, the reaction residence time, the regenerator temperature, and so on. It is therefore necessary to measure the amount of coke deposited on the catalyst. This paper proposes a new method based on image analysis. An image acquisition system consisting of a flatbed scanner and an opaque cover is used to obtain catalyst images. After image processing and analysis, the gray layer is selected as the most effective colour layer for colour feature extraction, based on a discriminability index D. Eight colour features (mean, variance, skewness, entropy, energy, H, S, V) are extracted from the images and show a good ability to classify catalysts with different coke amounts. Furthermore, the results show a significant linear correlation between the H value and the amount of coke deposited on the catalyst, which can reflect the coke deposition state and coking progress effectively.
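Several of the first-order statistical features listed in the abstract (mean, variance, skewness, entropy, energy) can be computed directly from a grayscale image and its gray-level histogram. A minimal sketch of that feature-extraction step, not the authors' exact pipeline:

```python
import numpy as np

def gray_features(img, bins=256):
    """First-order statistical features of a grayscale image.

    Mean, variance, and skewness come from the pixel values; entropy
    and energy from the normalized gray-level histogram, the kind of
    features the paper extracts from the selected colour layer.
    """
    img = np.asarray(img, dtype=float).ravel()
    mean = img.mean()
    var = img.var()
    std = np.sqrt(var)
    skew = 0.0 if std == 0 else float(np.mean(((img - mean) / std) ** 3))
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before the log
    entropy = -np.sum(p * np.log2(p))  # 0 for a uniform-valued image
    energy = np.sum(p ** 2)            # 1 for a uniform-valued image
    return {"mean": mean, "variance": var, "skewness": skew,
            "entropy": entropy, "energy": energy}
```

The resulting feature vector (optionally extended with the H, S, V channel statistics mentioned in the abstract) would then feed a classifier or a regression against coke amount.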
Remote Sensing, Ladars, Lidars
- 14:30 A Thresholding Technique for Differentiation of Materials Used in Remote Sensing
- This paper introduces a new remote sensing technique based on polarimetric detection principles, aimed at enhancing the discrimination of unresolved materials, namely amorphous silicon and polysilicon. Polarimetric measurements of these materials performed at different angles of incidence (angular rotations) are presented, using backscattered polarimetric signals obtained with a polarimetric test bed. A new technique for discriminating these materials based on their angle of rotation is presented; it uses a thresholding method to calculate the rotation angle at which maximum discrimination between the materials is possible. Initial results indicate that maximum discrimination is achieved at off-axis angles rather than at the angle of incidence.
- 14:45 Reflectance Modelling Using Terrestrial LiDAR Intensity Data
- With the increasing use of Terrestrial Laser Scanners (TLSs) to sense various environments, it becomes increasingly necessary to develop automated processing techniques for the large amounts of data generated. To aid automatic processing, researchers have recently turned to the "intensity" data returned by TLSs as an additional source of information. Ideally, a value is desired that is independent of distance and incidence angle and instead relates to the properties of the surface being scanned; for diffuse surfaces this value is termed the reflectance. A method for modelling the reflectance of a diffuse surface using the returned intensity, angle of incidence, and range obtained from TLSs is presented. The model is applied to two different TLS instruments, a Faro Focus3D and a Riegl VZ-400, and parametrized for each instrument using data obtained in an underground potash mine. For the Riegl instrument the model is verified using a data set obtained above ground, in a grass playing field; the standard deviation of the error is 0.064, or 6.4%. For the Faro instrument the model is fitted using only a subset of the acquired data set and verified with the remainder; the standard deviation for the Faro model is 0.061, or 6.1%.
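The idea of removing range and incidence-angle dependence from the intensity can be illustrated with a simple Lambertian model. The model form and the calibration constant below are illustrative assumptions; the paper fits instrument-specific models to calibration data:

```python
import numpy as np

def estimate_reflectance(intensity, range_m, incidence_rad, k=1.0):
    """Invert a simple diffuse (Lambertian) lidar intensity model.

    Assumes the recorded intensity follows I = k * rho * cos(theta) / R**2,
    so the surface reflectance is rho = I * R**2 / (k * cos(theta)).
    Both the 1/R**2 falloff and the constant k are idealizations; a real
    TLS needs an empirically fitted, instrument-specific model.
    """
    cos_t = np.cos(np.asarray(incidence_rad, float))
    return np.asarray(intensity, float) * np.asarray(range_m, float) ** 2 / (k * cos_t)
```

Under this idealized model, the estimated reflectance of a surface comes out constant regardless of the range and incidence angle at which it was scanned, which is exactly the property the paper seeks.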
- 15:00 A Novel Analysis Method of Deformation Accuracy for Spaceborne PS-DInSAR
- Spaceborne PS-DInSAR is an advanced remote sensing technology for achieving global surface deformation measurement. It has a series of advantages, such as all-day, all-weather operation, low cost, and high deformation accuracy. This paper focuses on the deformation accuracy of PS-DInSAR. Starting from the basic principles of spaceborne DInSAR, the paper introduces the principle of PS-DInSAR, sets up a mathematical model of PS-DInSAR deformation accuracy, and derives the accuracy model of PS-DInSAR deformation detection by a novel method. PS-DInSAR deformation accuracy is also analyzed by simulation. This research offers guidance for the analysis of overall spaceborne PS-DInSAR performance.
- 15:15 PS-DInSAR Deformation Velocity Estimation by the Compressive Sensing
- PS-DInSAR has become a tool for detecting surface micro-deformation. However, the technique is constrained by the number of SAR images required, which is usually more than 30. Compressive Sensing (CS) is a new signal processing method that allows stable signal recovery from fewer measurements. After analyzing the sparsity of the data, this paper applies CS to PS-DInSAR and proposes a novel method to estimate the deformation velocity with high accuracy using fewer SAR images, reducing the redundant data. A scene with a cone-shaped peak is designed to generate SAR images, and simulation results are presented to validate the method.
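The CS principle invoked above, recovering a sparse signal from fewer measurements than unknowns, can be demonstrated with a greedy solver. Orthogonal Matching Pursuit is used here as a generic stand-in; the paper's own recovery algorithm and sensing matrix for PS-DInSAR are not specified in the abstract:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x from y = A @ x.

    A greedy stand-in for the l1 solvers typically used in compressive
    sensing; it shows how a signal that is sparse in some basis can be
    recovered from far fewer measurements than unknowns.
    """
    residual = np.asarray(y, float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

With a random Gaussian sensing matrix, a 3-sparse vector of 80 unknowns is typically recovered exactly from only 30 measurements, which is the kind of data reduction the paper exploits.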
Wednesday, September 16, 15:30 - 16:30
Electrical Tomographic Imaging Techniques
- 15:30 Image Reconstruction for Electrical Resistance Tomography Based on Extended Sensitivity Matrix
- Image reconstruction for electrical resistance tomography (ERT) is an ill-posed and ill-conditioned problem. To enhance image quality, a regularized method should be applied. However, most existing algorithms are based on a standard sensitivity matrix. In this paper, a new extended sensitivity matrix is designed and used for image reconstruction; it consists of many block bases of mixed sizes. An image reconstruction algorithm for ERT based on the extended sensitivity matrix and Landweber iteration is developed. Simulation results show that the proposed method performs well.
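The Landweber iteration named above is a simple gradient-type scheme for the linearized problem S x = b. A minimal sketch, with the standard step-size bound; the extended sensitivity matrix itself is the paper's contribution and is not reproduced here:

```python
import numpy as np

def landweber(S, b, n_iter=200, alpha=None, x0=None):
    """Landweber iteration for the linearized ERT problem S @ x = b.

    Update: x_{k+1} = x_k + alpha * S.T @ (b - S @ x_k).
    Convergence requires 0 < alpha < 2 / ||S||_2**2; the default uses
    alpha = 1 / ||S||_2**2.
    """
    S = np.asarray(S, float)
    b = np.asarray(b, float)
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2
    x = np.zeros(S.shape[1]) if x0 is None else np.asarray(x0, float).copy()
    for _ in range(n_iter):
        x = x + alpha * S.T @ (b - S @ x)
    return x
```

In ERT practice the iteration is usually truncated early (acting as a regularizer) and combined with constraints; here it is run to convergence on a tiny well-conditioned example.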
- 15:45 Excitation Strategy for Three-dimensional Electrical Capacitance Tomography Sensor
- The multiple-electrode excitation strategy has been applied in two-dimensional electrical capacitance tomography (ECT) sensors to improve image quality. The aim of this research is to investigate the performance of multi-electrode excitation strategies in a three-dimensional (3D) ECT sensor. Three excitation strategies are compared: single-electrode excitation, dual-electrode excitation in the same plane, and dual-electrode excitation in different planes. Images are reconstructed from simulation data by the Landweber iteration algorithm, with the optimal number of iterations chosen by the maximum correlation coefficient. Simulation results show that the single-electrode excitation strategy outperforms the other two strategies.
- 16:00 A Modified L-curve Method for Choosing Regularization Parameter in Electrical Resistance Tomography
- Regularization methods are widely used to deal with the ill-posed inverse problem of electrical resistance tomography (ERT). One of the best-known regularization techniques is the Tikhonov method, which is parameter-dependent: a proper choice of regularization parameter is crucial for obtaining an efficient regularized solution of the inverse problem. The L-curve method is one of the most popular regularization parameter choice rules, but it can fail in some situations. Investigation of these failure cases shows that a new corner point often appears on the L-curve, and the parameter corresponding to this new corner yields a better solution than the one corresponding to the traditional global corner. Based on this observation, a modified L-curve method using the new corner point is proposed. Two strategies are provided to implement the modified method: one based on the second-order differential of the L-curve, the other on its curvature. The modified L-curve method is examined by numerical simulations for typical conductivity distributions, and the results indicate that it achieves a more efficient solution and improves the quality of the reconstructed images compared with the traditional L-curve method. The modified method extends the application of the L-curve to the choice of regularization parameter and can also be used in other kinds of tomography.
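The curvature-based strategy mentioned in the abstract can be sketched as follows: plot log residual norm against log solution norm over a sweep of regularization parameters and pick the point of maximum discrete curvature. This is the standard L-curve corner criterion, not the paper's modified variant:

```python
import numpy as np

def lcurve_corner(res_norms, sol_norms):
    """Index of the L-curve corner, found as the point of maximum
    curvature of (log residual norm, log solution norm).

    Uses finite-difference derivatives and the plane-curve curvature
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2).
    """
    x = np.log(np.asarray(res_norms, float))
    y = np.log(np.asarray(sol_norms, float))
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return int(np.argmax(curvature))
```

On a synthetic L-shaped curve the maximum-curvature index lands exactly on the corner; the paper's modification concerns which corner to choose when the real curve exhibits more than one.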
- 16:15 Electrical Capacitance Tomography Based Imaging of High-contrast Gas-liquid Annular Flow
- Gas-liquid annular flow with a thin liquid film often occurs in industrial scenarios and can be generated in the laboratory for research. Imaging gas-liquid annular flow yields the thickness of the annular film, from which the gas-liquid ratio can be calculated, and helps monitor the status of the flow in the process. In this paper, an electrical capacitance tomography (ECT) based method for imaging gas-liquid annular flow is introduced, focusing on image reconstruction and estimation of annulus thickness with a high-contrast dielectric. The cross-section image is reconstructed from the compensated capacitances of an ECT sensor and is used to estimate the liquid film thickness.
Imaging Instrumentation Design and Techniques
- 15:30 Extended Focused Imaging in a Holographic Microscopy Imaging System
- In most optical imaging systems, a three-dimensional object lying within the depth-of-field (DOF) will produce a clear and sharp image, while the parts outside will become blurry. Therefore, especially in microscopy, increasing the DOF is highly desirable, which can be achieved computationally through extended focused imaging (EFI). To construct the EFI image, we first use a depth-from-focus algorithm to create a depth map for each pixel by calculating its entropy. Based on the depth map, we show how to achieve EFI in a holographic microscopy imaging system called optical scanning holography. Computational results on objects with multiple axial sections are presented to validate the proposed approach.
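The depth-from-focus step described above can be sketched per pixel: score the sharpness of each slice of a focal stack, take the argmax as the depth map, and sample the stack at that depth to form the EFI image. The paper scores sharpness with local entropy; a Laplacian-energy measure is substituted here as a simpler stand-in with the same role:

```python
import numpy as np

def local_sharpness(img):
    """Per-pixel focus measure: squared response of a 4-neighbour
    Laplacian (an assumption standing in for the paper's entropy measure)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def extended_focus(stack):
    """Fuse a focal stack into an all-in-focus image.

    For each pixel, the slice with the highest focus measure gives the
    depth map; the stack is then sampled at that depth to build the EFI
    composite.
    """
    stack = np.asarray(stack, float)
    sharp = np.stack([local_sharpness(s) for s in stack])
    depth = np.argmax(sharp, axis=0)        # per-pixel depth map
    rows, cols = np.indices(depth.shape)
    efi = stack[depth, rows, cols]          # extended-focus image
    return efi, depth
```

Given two slices each sharp in a different region, the depth map selects the correct slice index in each region and the composite keeps both in focus.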
- 15:45 A Portable Single-Pixel Camera Based on Coarse-To-Fine Coding Light
- Single-pixel imaging based on compressive sensing theory is a novel technique that has emerged in the past decade. However, the majority of existing single-pixel cameras are too complex to realize outside the laboratory. In this paper, we propose a portable compressive imaging camera platform consisting of a pocket projector and an optical power sensor. The projector casts flexible compressive sensing patterns generated by a computer program. In addition, this work introduces a quick coarse-to-fine model that explores the object's outline information and consequently reduces redundant computations in advance. Simulated and real experiments show that, compared with existing techniques, the proposed platform and approach achieve higher recovery accuracy with less time consumption.
- 16:00 Through-the-Wall Human Sensing Based on Change Detection
- The ability to sense the presence of a human through visually obscure barriers can be useful in many critical missions, such as search and rescue operations. Because a human target moves regularly, the Change Detection (CD) principle can in theory be employed for detection. Experiments were performed in a controlled environment with a 10 cm thick brick wall to validate this theory. It was found that the CD scheme, in combination with a delay-and-sum (DAS) beamformer, is sensitive to positional, postural, and breathing changes. It is also possible to differentiate between a sitting and a standing target from images in the vertical plane.
- 16:15 A Portable 3D White Light Imaging System for CAD of Facial Prosthesis
- Accurate acquisition of 3D facial surface data is important for computer-aided design (CAD) of facial prostheses in order to restore the patient's appearance. However, most existing facial imaging systems must be fixed in a special room with controlled lighting conditions. In this paper, a portable 3D white light imaging system for CAD of facial prostheses is developed. The system consists of two measurement sensors that can scan and register both sides of the patient's face. Compared with other methods, this system is fast, compact, safe, and robust to the surroundings. A detailed mathematical model of the system is derived, and a calibration method is proposed to ensure measurement accuracy. The system has been successfully applied in clinical practice.
Wednesday, September 16, 16:30 - 16:45
Wednesday, September 16, 16:45 - 17:45
Efficient Segmentation Techniques
- 16:45 Container Based Parallelization for Faster and Reliable Image Segmentation
- In this paper, we describe a scalable and economical architecture for container-based parallelization to obtain the best possible quantized image using different quantization techniques on the cloud. This container-based approach can be scaled to huge datasets. The quantization techniques used in this paper are fuzzy entropy and genetic algorithm based techniques, with different types of membership functions used in each technique to calculate the fuzzy entropy. The best quantized image is determined using the Structural Similarity Index (SSIM). This is a forward-looking approach for solving lengthy, repetitive serial problems in a parallel and economical way. As expected, the results are significantly better than with the serial approach.
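The SSIM-based selection step can be sketched with a global (single-window) SSIM: score each candidate quantized image against the original and keep the highest-scoring one. Real implementations usually compute SSIM over local windows and average; the global form below is a simplification:

```python
import numpy as np

def ssim_global(a, b, data_range=255.0):
    """Global Structural Similarity index between two images.

    Single-window form of SSIM with the standard stabilizing constants
    c1 = (0.01 L)^2 and c2 = (0.03 L)^2; returns 1.0 for identical images.
    """
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
            ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2)))
```

Ranking candidates then reduces to `max(candidates, key=lambda q: ssim_global(original, q))`, which is the per-image comparison the containers would run in parallel.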
- 17:00 A Robust Algorithm for Segmentation of Blood Vessels in the Presence of Lesions in Retinal Fundus Images
- Diabetic Retinopathy is a vascular disorder caused by variations in the blood vessel pattern of the retina, which appear as lesions. Segmentation of blood vessels in the presence of lesions is a challenging task. This paper proposes a robust method for accurate segmentation of blood vessels in retinal images even in the presence of lesions. The proposed method involves three main steps to handle false vessels after an initial vessel segmentation. First, region-based features are extracted; the approach classifies regions using a feature vector of 10 discriminating features. Then feature selection is performed to obtain the best features. Lastly, each region is classified as a true or false vessel using a Support Vector Machine (SVM) classifier. The proposed approach is tested on a dataset of fundus images from normal and DR patients and gives good segmentation results even in the presence of lesions.
- 17:15 Prostate Segmentation in CT Data Using Active Shape Model Built by HoG and Non-Rigid Iterative Closest Point Registration
- In this paper, a new method for prostate segmentation in computed tomography (CT) data is proposed. First, corresponding points of the training data sets are found by generating point clouds with the Marching Cubes algorithm and applying non-rigid Iterative Closest Point registration. With the corresponding points available, a statistical model of the prostate is built with the Active Shape Model (ASM), using a histogram of image gradients (HoG) as the feature vector. Finally, the ASM is used for the target prostate segmentation: the statistical prostate model is fitted to the CT data. The efficiency of the proposed segmentation algorithm is validated with the Dice coefficient, reaching a value of 0.807 with a standard deviation of 0.045. The method can also cope with data anisotropy.
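The Dice coefficient used for validation above is a simple overlap metric between a segmentation mask and a reference mask. A minimal sketch:

```python
import numpy as np

def dice(seg, ref):
    """Dice coefficient between two binary masks:
    2 |A ∩ B| / (|A| + |B|); 1.0 means perfect agreement, 0.0 no overlap."""
    seg = np.asarray(seg, bool)
    ref = np.asarray(ref, bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:        # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(seg, ref).sum() / denom
```

A reported value such as 0.807 means that, on average, just over 80% of the combined voxel mass of the automatic and reference segmentations is shared.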
- 17:30 Segmentation of Far-infrared Pedestrians for Advanced Driver-assistance Systems
- Robust and efficient far-infrared (FIR) pedestrian segmentation in outdoor environments is challenging for advanced driver-assistance systems (ADAS): existing methods are easily disturbed by background targets, varying pedestrian scales, and image noise. This paper proposes a pedestrian segmentation method for FIR images that addresses these three problems. Firstly, to reduce interference from background targets, a road horizontal plane estimation algorithm locates the area-of-interest (AOI), and a pixel-intensity vertical projection is applied within the AOI. Secondly, to reduce the interference of varying pedestrian scales, the width of each estimated vertical image stripe (denoting the width of a pedestrian) is used as a key parameter to guide the binary segmentation algorithm. Thirdly, appropriate morphological operations deal with image noise. Experiments conducted on extensive urban image sequences indicate that, compared with two state-of-the-art algorithms, the proposed method is more reliable and feasible for practical applications.
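The pixel-intensity vertical projection step can be sketched as summing intensity down each column of the AOI and thresholding the resulting profile to find candidate vertical stripes. The threshold rule below is an illustrative assumption, not the paper's exact criterion:

```python
import numpy as np

def vertical_projection_stripes(roi, thresh_ratio=0.5):
    """Pixel-intensity vertical projection over an area-of-interest.

    Sums intensity down each column, thresholds the profile at a
    fraction of its maximum (an assumed rule), and returns the
    (start, end) column ranges of the bright vertical stripes that
    would be treated as pedestrian candidates.
    """
    profile = np.asarray(roi, float).sum(axis=0)
    thresh = thresh_ratio * profile.max()
    mask = profile > thresh
    stripes, start = [], None
    for i, on in enumerate(mask):
        if on and start is None:
            start = i
        elif not on and start is not None:
            stripes.append((start, i))
            start = None
    if start is not None:
        stripes.append((start, len(mask)))
    return stripes
```

In the paper, the width of each returned stripe then parameterizes the subsequent binary segmentation, adapting it to the apparent pedestrian scale.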
Wednesday, September 16, 16:45 - 17:30
Non-Invasive Biomedical Imaging
- 16:45 Detection of Macular Whitening and Retinal Hemorrhages for Diagnosis of Malarial Retinopathy
- Retinopathy deals with abnormalities of the human retina. Retinal whitening and hemorrhages are the most common lesions present in such images, and the presence of retinal whitening in the macular region contributes to vision loss. An automated system based on image processing and machine learning tools is used for early detection of hemorrhages and whitening to prevent further severe disease and vision loss. For the detection of retinal whitening, white regions are localized first; then a set of features is extracted, of which the most relevant are used for classification. Hemorrhages are detected by applying illumination equalization, followed by segmentation of hemorrhages and vessels; small false regions are removed to detect hemorrhages accurately. The proposed system achieves average accuracies of 90% and 97.3% for retinal whitening and hemorrhages, respectively.
- 17:00 A Review Analysis on Early Glaucoma Detection Using Structural Features
- Glaucoma is an eye disease that can cause severe damage and permanent blindness if not detected at an early stage; it is also called the silent thief of sight. Glaucoma can be detected using structural and functional features. Functional features are observed by visual field testing, while Optical Coherence Tomography (OCT) and fundus images are the most widely used medical imaging techniques for observing structural features. The optic nerve head (ONH) and the retinal layers are the key and most repeatable structural features for detecting structural changes in the retina of glaucomatous eyes. This paper presents a review of different glaucoma detection techniques from clinical and machine learning perspectives. It also highlights the functional and structural features and their significance with respect to digital fundus and OCT images for glaucoma detection. It concludes that structural features are more precise for early glaucoma detection than functional features. Moreover, using hybrid features to train classifiers and correlating results from both fundus and OCT images can yield more accurate results.
- 17:15 Review of OCT and Fundus Images for Detection of Macular Edema
- The macula is an oval-shaped area near the center of the human retina, about 5500 microns across; at its center is a small pit known as the fovea, with a diameter of 1500 microns. Macular disorders comprise a group of diseases that damage the macula, resulting in blindness or vision loss. Macular Edema (ME) is the most common disease related to the macula. Its symptoms usually appear to the patient only in the final stages, when it is very difficult to cure and can cause severe damage to central vision; if detected in its early stages, however, it can be cured easily. Techniques used to detect ME include fundus photography (also known as fundography), Fluorescein Angiography (FA), and Optical Coherence Tomography (OCT). Earlier, fundography was the most widely used test for ME, but nowadays OCT is widely used due to its ability to detect small changes in the sub-retinal layers. In this paper, we present a detailed comparison between fundography and OCT imaging for the detection of macular edema, using a dataset provided to us by the Armed Forces Institute of Ophthalmology (AFIO), Rawalpindi. A total of 64 patients were studied by examining their fundus and OCT images; 15 patients had ME while 49 were healthy. OCT images provide a more objective evaluation of early macular edema than fundography.
Wednesday, September 16, 19:30 - 20:30
Fortune Inn Restaurant, Ground floor of UM Guest House, University of Macau
Thursday, September 17
Thursday, September 17, 09:00 - 10:00
Advanced Medical Imaging Systems and Techniques
- 09:00 A Microwave Breast Imaging System Using Elliptical Uniplanar Antennas in a Circular-Array Setup
- Microwave tomography has attracted significant research interest as it offers a non-ionizing diagnostic technique for breast cancer. A Microwave Breast Imaging (MBI) system comprises an array of antennas that illuminate the tissue and measure the scattered energy in order to spatially map the dielectric permittivity and conductivity of the tissue. The radiating elements of an MBI system should be compact and wideband. This paper describes an elliptical uniplanar antenna of 40 mm x 50 mm that operates in the 1.53-3.33 GHz range when placed against a breast phantom. The antenna is used as the radiating element in a circular-array setup around a hemispherical phantom. Simulated and measured data of the proposed array are presented and show satisfactory agreement and system performance.
- 09:15 Breast Cancer Detection in Digital Mammograms
- This paper discusses an approach for the automatic detection of abnormalities in mammograms. Image processing techniques are applied to accurately segment the suspicious region-of-interest (ROI) prior to abnormality detection. Unsharp masking is applied to enhance the mammogram, and noise is removed using median filtering. A discrete wavelet transform is then applied to the filtered image to obtain an accurate result prior to segmentation. The suspicious ROI is segmented using fuzzy C-means with a thresholding technique. Tamura features, shape-based features and moment invariants are extracted from the segmented ROI to detect abnormalities in the mammograms. The proposed algorithm has been validated on the Mini-MIAS dataset.
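The enhancement front end described above (median filtering for noise removal, then unsharp masking) can be sketched as follows; the kernel size, blur sigma and sharpening gain are illustrative values, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def preprocess_mammogram(img, amount=1.5, sigma=2.0, median_size=3):
    """Denoise with a median filter, then sharpen by unsharp masking:
    add back a scaled high-frequency residual (image minus its blur)."""
    img = img.astype(float)
    denoised = median_filter(img, size=median_size)    # impulse-noise removal
    blurred = gaussian_filter(denoised, sigma=sigma)   # low-pass estimate
    return denoised + amount * (denoised - blurred)    # boost high frequencies

# Synthetic example: a bright blob on a noisy background
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
noisy = img + rng.normal(0, 0.05, img.shape)
enhanced = preprocess_mammogram(noisy)
```

A DWT stage and fuzzy C-means segmentation would follow this step in the pipeline the abstract describes.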
- 09:30 Bioinformatics of Lung Cancer
- The objective of this study is to explore a novel bioinformatics technique, Polarimetric Exploratory Data Analysis (pEDA), for the early identification and discrimination of precancerous and cancerous lung tissues. The outcome of this study indicates that the full-width-at-half-maximum (FWHM) and dynamic range (DR) extracted from histograms of inherent (label-free) near-infrared (NIR) diffused-polarimetric reflectance signals provide important metrics for the characterization of cancerous tissue. Application of pEDA to the acquired data has proved to be an effective diagnostic tool for discriminating optical information among normal, precancerous, and cancerous lung tissue samples. It may therefore eventually prove a useful diagnostic tool, alongside histopathology, for the early detection of Non-Small Cell Lung Cancer (NSCLC).
- 09:45 Lung Tissue Evaluation Detecting and Measuring Morphological Characteristics of Cell Regions
- The goal of this study is to develop an automated, accurate and time-efficient image processing algorithmic scheme capable of segmenting lung tissue slides and quantitatively detecting any morphological characteristic that may differentiate healthy cells from adenocarcinoma. Microscopy images are segmented into key regions via a proposed sequential fusion methodology combining image clustering, the watershed transform and mathematical morphology, and are analyzed using an innovative tissue evaluation approach based on quantitative assessments of the shape and size of the extracted cell regions. The preliminary results of this work indicate that it is possible to discriminate healthy cells from cancerous ones by considering their overall morphology within the tissue and measuring indices that may reveal an evolving neoplasia, tumor growth or a malfunction in cell proliferation. Our next step is to apply the proposed method to a much larger and more varied dataset in order to validate the robustness and accuracy of the proposed classification scheme, making it a valuable assisting tool for medical experts in cancer diagnosis and prognosis.
Translational Medical Imaging
- 09:00 Modeling Human-perceived Quality for the Assessment of Digitized Histopathology Color Standardization
- Color consistency is still one of the most significant problems in whole-slide imaging, since even subtle variations of color appearance in digitized slides can cause image misinterpretation by pathologists or by computer-aided diagnosis systems. These variations are mainly caused by differences in laboratory protocols and imaging device manufacturers. In this paper we propose a model for assessing color standardization algorithms in whole-slide histopathology imaging based on two metrics: (i) the color similarity between a well-stained template image and the resulting color-standardized image, and (ii) the structural distortion caused by the application of a color standardization algorithm. We employ the chi-square histogram distance as the color distance measure and the Universal Quality (Q) index to quantify structural distortion. The developed model produces an overall quality score (OQS) in the range [0,10] that correlates well with human-perceived color standardization quality. To the best of our knowledge, this is the first attempt to measure the efficacy of color standardization algorithms in digital pathology.
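The chi-square histogram distance used as the color similarity metric above has a standard, compact form; this is a generic sketch of that measure, not the authors' exact weighting or binning:

```python
import numpy as np

def chi2_histogram_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two histograms (normalized first).
    Returns 0 for identical histograms; larger values mean less similar."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Example: identical vs. reversed intensity histograms
a = np.array([10.0, 20.0, 30.0, 40.0])
b = np.array([40.0, 30.0, 20.0, 10.0])
```

With normalized inputs the distance is bounded by 1, which makes it convenient to map onto a fixed-range quality score such as the OQS described above.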
- 09:15 Liposomes Labeled with Indium-111 by a Novel Surface Labeling Method Exhibit Good Biodistribution in Vivo
- Radioliposomes have potential applications as theranostic nanoparticles, and several techniques have been developed to label liposomes with radionuclides. Methods: In this study, a new In-111 surface labeling method for 1,4,7,10-tetraazacyclododecane-N,N′,N″,N′″-tetraacetic acid (DOTA) derivative liposomes (DLs) is described. The in vitro stability and in vivo molecular imaging properties of the labeled liposomes were compared with those of conventional 111In-liposomes (111In-Ls) prepared by a remote loading method. In vitro stability tests were performed in normal saline, rat plasma, and human plasma at 37°C. Imaging characteristics of both radioliposomal preparations were determined in LS174T tumor-bearing mice. Results: The labeling efficiency of In-111-labeled DLs (111In-DLs) was greater than 95%; accordingly, no post-labeling purification was required, in contrast to the 111In-Ls. The specific activity of 111In-DLs was higher (>10 111In per liposome) than that of 111In-Ls (<2 111In per liposome). The two radioliposomes showed similar in vitro stability. Non-invasive longitudinal monitoring by micro-single-photon emission computed tomography/computed tomography showed a similar in vivo tumor distribution for the two radioliposomes (48 h post-injection, P > 0.05). Conclusion: The new 111In surface labeling method for liposomes was rapid, efficient, of high specific activity, and easy to perform. Although the two radioliposomes showed similar in vitro and in vivo characteristics, 111In-DLs have benefits for clinical drug preparation, and this surface labeling method is a promising platform for the preparation of radioliposomal theranostic nanoparticles.
- 09:30 NanoSPECT/CT Imaging and Biodistribution of Rhenium-188-HSA Microspheres Using a New Radiolabeling Process in a GP7TB Hepatoma Rat Model
- The human serum albumin (HSA) microsphere is biodegradable and biocompatible. The SPECT/CT imaging and biodistribution of Rhenium-188-HSA microspheres prepared using a new radiolabeling process were investigated in a GP7TB hepatoma rat model. The labeling efficiency of the Rhenium-188-HSA microspheres using the new radiolabeling process was about 97%. The level of radioactivity within the liver peaked at 1 h (68.325777±1.896218 %ID/organ) and then declined slowly. By image analysis, the highest liver uptake was 69.9±3.8 %ID/organ and 59.7±5.7 %ID/organ at 48 and 72 h after administration of Rhenium-188-HSA microspheres via transarterial chemoembolization (TACE). The liver uptake of Rhenium-188-HSA microspheres was steadily maintained. These results show the potential benefit and advantage of Rhenium-188-HSA microspheres delivered via TACE for the treatment of hepatocellular carcinoma.
- 09:45 Longitudinal Therapeutic Evaluation of 188Re-Human Serum Albumin Microspheres in a Hepatoma Model by Three-Dimensional Ultrasound Imaging
- The aim of this study was to investigate the utility of three-dimensional (3D) high-frequency ultrasound in the longitudinal therapeutic evaluation of 188Re-human serum albumin microspheres (188Re-HSAM) in a GP7TB hepatoma model. Male F344 rats were inoculated intrahepatically with 1 mm3 GP7TB cubes, and studies were performed 26 days after tumor inoculation. The efficacy of 188Re-HSAM was assessed with a single-dose treatment administered to GP7TB hepatoma rats via the intraarterial route. Rats were monitored for survival until death, and body weight was measured once a week. To monitor tumor growth, longitudinal tumor volumes were obtained once a week from 3D segmentation of ultrasound images of the F344 rats with GP7TB hepatoma. Tumor volumes were inhibited over time after i.a. injection of 188Re-HSAM. In contrast to the mean tumor volume of 1803.2 ± 306.8 mm3 in the normal-saline-treated group at 54 days, the mean tumor volumes of the treated groups were 381 ± 95.1 and 267.4 ± 54.7 mm3 at 54 days after administration of 2.8 mCi and 6.5 mCi 188Re-HSAM, respectively. The mean growth inhibition rates achieved by 2.8 mCi and 6.5 mCi 188Re-HSAM were 0.21 and 0.15, respectively. 3D high-frequency ultrasound, with its high spatial resolution and soft-tissue contrast, can become an imaging modality for preclinical rat studies. The longitudinal therapeutic evaluation of 188Re-HSAM demonstrated better tumor growth inhibition with increased dose in the GP7TB hepatoma model. These results suggest that intraarterial administration of 188Re-HSAM could provide a beneficial and promising strategy for the delivery of radiotherapeutics in oncology applications.
Thursday, September 17, 10:00 - 11:00
Advanced Imaging Techniques
- 10:00 Ghost Imaging of Binary-valued Objects by Using a CCD and an Equivalent Photodiode
- In this paper, compressive sensing was employed in pseudothermal ghost imaging to reconstruct binary-valued objects. A rotating ground glass illuminated by a laser with a wavelength of 635 nm produced a beam of pseudothermal light. A splitter separated the beam into two, the transmitted and the reflected light, which were recorded by an equivalent photodiode and a CCD, respectively. The object was placed between the splitter and the photodiode. Total variation minimization was used to implement the image reconstruction by applying the compressive sensing method. As a result, a ghost imaging setup was effectively established that generates feasible results for binary-valued objects.
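For intuition, the two-arm geometry above can be simulated numerically. The sketch below uses the classical second-order correlation reconstruction rather than the paper's total-variation-minimization compressive sensing (which needs a dedicated solver); the object shape, pattern statistics and shot count are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, shots = 32, 4000
obj = np.zeros((n, n))
obj[10:22, 12:20] = 1.0                             # binary-valued object

# Speckle patterns recorded by the CCD (reference arm)
patterns = rng.random((shots, n, n))
# Bucket (photodiode) signal: total light transmitted through the object
bucket = patterns.reshape(shots, -1) @ obj.ravel()

# Correlation ghost image: <B * I(x)> - <B><I(x)>
recon = (bucket[:, None, None] * patterns).mean(0) \
        - bucket.mean() * patterns.mean(0)
```

The reconstruction sharpens as the number of shots grows; compressive sensing, as in the paper, recovers comparable images from far fewer measurements.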
- 10:15 Inexact Newton Backtracking Method for Solving Microwave Tomography Inverse Problem
- An inexact Newton backtracking method (INBM) has been developed to obtain a stable solution of the microwave tomography inverse problem. The problem is posed in terms of a nonlinear objective functional and solved iteratively by linearization. Instead of finding a direct regularized solution of the linear ill-posed problem, an iteratively regularized approximate solver in the form of the INBM is proposed as an alternative method. The iteration is guarded by a forcing term determined on the nonlinear and linearized steps of the microwave inverse problem. The proposed method is tested using numerical examples and experimental data, and its quality is evaluated by comparison with the Levenberg-Marquardt (LM) method.
- 10:30 The Simulation Study of a Microprobe for Investigation of Electrical Impedance/ Temperature Property of Biological Tissues
- The measurement of the electrical impedance/temperature properties of biological tissue is of great importance for developing electrical impedance methods for monitoring the hyperthermia process. Through software simulation and theoretical analysis, the authors design and realize an impedance microprobe to measure the electrical impedance/temperature properties of biological tissues in a minimally invasive way. The microprobe has the advantages of small size, a small sensing volume and convenience of application. A basic performance test has been carried out, which indicates that the proposed impedance microprobe can be applied to experimental research on the electrical impedance/temperature properties of biological tissue and can provide a reference for the monitoring of the hyperthermia process based on electrical impedance technology.
- 10:45 Robust Feature Learning by Improved Auto-encoder From non-Gaussian Noised Images
- Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks (DBN) and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language datasets. These learning algorithms aim to find good representations of data, which can be used for classification, reconstruction, visualization and so on. Despite this progress, most existing algorithms are fragile to non-Gaussian noise and outliers because of the mean square error (MSE) and cross-entropy (CE) criteria. In this paper, we propose a robust auto-encoder called the correntropy-based contractive auto-encoder (C-CAE) to learn robust features from data with non-Gaussian noise and outliers. The maximum correntropy criterion (MCC) is adopted as the reconstruction cost function, and a well-chosen penalty term is added to it. By replacing cross-entropy with the MCC, the proposed method can learn robust features from data containing non-Gaussian noise and outliers. The penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input; adding it improves the anti-noise ability of the proposed method. The proposed method is evaluated on the MNIST benchmark dataset. Experimental results show that, compared with traditional auto-encoders, the proposed method learns robust features, improves classification accuracy and reduces the reconstruction error, demonstrating that it is capable of learning robust features from noisy data.
Electrical Tomography Measurements and Optimization Techniques
- 10:00 Sensitivity Matrix Construction Based on Ultrasound Modulation for Electrical Resistance Tomography
- Image reconstruction for electrical resistance tomography (ERT) is an inherently nonlinear inverse problem. A sensitivity matrix is widely adopted to linearize this inverse problem; it relates the change in the boundary voltages of an object to the change in conductivity inside the object that gave rise to it. Usually, the sensitivity matrix is calculated for a uniform conductivity distribution. However, owing to the 'soft-field' effect this differs from the sensitivity matrix of the measured object, which degrades the quality of the reconstructed image. Aiming to improve the resolution of reconstructed images, a novel way of constructing the sensitivity matrix based on ultrasound modulation is proposed. With focused ultrasound perturbing the measured object, the conductivity in the focal region is altered according to the acousto-electric effect. By measuring the change in the boundary voltages as the conductivity changes, the sensitivity matrix of the measured object can be approximately constructed. Sensitivity matrices constructed from Geselowitz's sensitivity theorem and from ultrasound modulation are presented, and image reconstruction is carried out with the sensitivity-coefficient algorithm using both kinds of sensitivity matrix. Simulation results indicate that reconstructed images of higher quality can be obtained with the sensitivity matrix constructed by ultrasound modulation.
- 10:15 Image Reconstruction Based on Hopfield Neural Networks for Electrical Impedance Tomography
- Electrical impedance tomography (EIT) is a technology developed in recent years that aims to estimate the electrical properties at the interior of an object from voltage measurements on its boundary. Image reconstruction for EIT is an inverse problem that is both nonlinear and ill-posed. Many traditional methods have been proposed for this task, but they cannot avoid producing artifacts in the reconstructed images. In this paper, a Hopfield neural network (HNN) is used to solve the inverse problem for EIT. Hopfield neural networks have been used successfully to address many different problems, including image restoration and reconstruction. The method shows significant advantages over other traditional techniques in accuracy and in the quality of the reconstructed images in both size and position.
- 10:30 A Simplified PIV-based Method for Flame Velocity Distribution Measurement
- In this paper, a simplified method based on particle image velocimetry (PIV) for flame velocity distribution measurement is proposed. A high-speed camera is first used to capture consecutive images, and multilevel thresholding segmentation based on Otsu's method is used to enhance the image contrast. The velocity distribution is then calculated by means of the DFT. Experiments were carried out on a methane/air premixed burner to validate the feasibility of the proposed method. The calculation results show that the flame velocity distribution can be fully recovered with the aid of multilevel thresholding segmentation, and the experimental results show that it agrees well with the flame structure and motion features. The proposed method can further be applied to the monitoring of industrial combustion processes.
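The DFT step of PIV estimates inter-frame displacement as the peak of an FFT-based cross-correlation. A minimal sketch of that core operation on a single pair of frames is below; in practice the estimate is computed per interrogation window to obtain a distribution, and the random "flame" patch here is purely synthetic:

```python
import numpy as np

def displacement_fft(frame_a, frame_b):
    """Estimate the dominant circular shift (in pixels) from frame_a to
    frame_b via the peak of the FFT-based cross-correlation."""
    f = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = corr.shape
    # Wrap indices into the signed range [-n/2, n/2)
    return ((dy + n // 2) % n - n // 2, (dx + m // 2) % m - m // 2)

# Shift a synthetic texture by (3, -2) pixels and recover the shift
rng = np.random.default_rng(2)
a = rng.random((64, 64))
b = np.roll(a, (3, -2), axis=(0, 1))
```

Dividing the recovered pixel displacement by the inter-frame time and pixel pitch yields the local velocity vector.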
- 10:45 A Faster Measurement Strategy of Electrical Capacitance Tomography Using Less Sensing Data
- Electrical Capacitance Tomography (ECT) is a process imaging modality offering real-time cross-sectional permittivity distribution information within a vessel in a non-invasive manner. The common measurement strategy is to take capacitance data from all non-redundant electrode pairs for image reconstruction. In this paper, a novel and faster measurement strategy for ECT using less capacitance data is proposed. The key benefits of this work are improved sampling speed through a significantly reduced number of measurements and a smaller dynamic range of the capacitance measurements, which will make it possible to reduce the axial electrode dimension. An online image reconstruction algorithm based on Tikhonov regularization and the second-order Gaussian-Laplace operator is employed to test the image quality. Both simulation and experimental results show that the proposed measurement strategy can significantly reduce the number of measurements while keeping comparable or even better image quality compared with the common measurement strategies.
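Linearized ECT reconstruction of the kind referenced above solves c = S g for the permittivity image g from the capacitance vector c via Tikhonov regularization. The sketch below uses a random stand-in for the sensitivity matrix S and a plain identity regularizer (the paper's online algorithm uses a second-order Gaussian-Laplace operator instead); sizes and the regularization parameter are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_pix = 28, 100              # e.g. 8 electrodes -> 28 electrode pairs
S = rng.random((n_meas, n_pix))      # stand-in sensitivity matrix
g_true = np.zeros(n_pix)
g_true[40:45] = 1.0                  # a small permittivity inclusion
c = S @ g_true                       # simulated normalized capacitances

# Tikhonov-regularized solution: g = argmin ||S g - c||^2 + lam ||g||^2
lam = 0.1
g = np.linalg.solve(S.T @ S + lam * np.eye(n_pix), S.T @ c)
```

Using fewer electrode pairs, as the paper proposes, corresponds to deleting rows of S and c; the regularization then carries more of the burden of stabilizing the under-determined solve.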
Thursday, September 17, 11:00 - 11:20
Thursday, September 17, 11:20 - 12:20
Efficient Characterization Techniques
- 11:20 Rapid Quantification of Total Polyphenol Content in EVOO Using NIR Sensor with Wavelength Selection and FS-MLR
- Olive oil quality is regulated by a European normative, and according to different chemical and organoleptic markers olive oil can be labelled as lampante, virgin or extra virgin olive oil (EVOO), the last being the top quality. Since these parameters determine only the chemical stability of the oil, in recent years different works have studied the favourable influence of other minor olive oil components on human health. Polyphenols are one of these minor components. This research studies the correlation between the polyphenols contained in EVOO and its spectral response in the near infrared. Selecting the best correlated spectral wavelengths could be the first step towards a rapid and low-cost NIR sensor installed in the elaboration process to monitor the presence of these components. During the process, several olive oil variables such as temperature, humidity and turbidity could affect the NIR response; accordingly, this work has studied the influence of olive oil turbidity on the error of the polyphenol content prediction. For this study 63 olive oil samples obtained from an olive oil mill were employed, and the reference values were obtained by chemical analysis at an external laboratory. The spectra were filtered with an ANOVA analysis, and the best correlated band was identified between 1690 and 1870 nm. The FS-MLR regression algorithm achieved an RMSECV of 75 ppm, an R2 coefficient of 0.94 and an RPD value of 4.76. The results worsened when samples with different turbidity indexes were used.
- 11:40 Automatic Determination of Peroxides and Acidity of Olive Oil Using Machine Vision in Olive Fruits Before Milling Process
- Two of the most important quality parameters of olive oil are acidity and the peroxide index; currently, they are measured in a laboratory using samples of olives extracted from a batch during the reception process. The aim of this work is to provide an automatic inspection system, based on computer vision (visible and infrared (IR) channels), to infer these parameters automatically. The proposal uses the differences in surface textures, the defects visible in IR pictures and a color estimation in CIELab. Furthermore, different image preprocessing steps have been employed and artificial neural networks have been used as the estimation technique. The system has achieved good estimation results, with R=96.3 for acidity and R=93.9 for the peroxide index.
- 12:00 Olive Batches Automatic Classification in Mill Reception Using Computer Vision
- Olive batch classification is crucial to obtain the best possible oil in the milling extraction process. Nowadays, this selection is done manually and takes place before the milling process starts. This work proposes an automatic classification system for olive batches, based on computer vision, whose goal is the online differentiation of olive batches according to their quality level. The proposed system has been installed and tested under production conditions, working with real lots of olives brought to the mill by the farmers. Two full setups have been installed in the factory: before and after the olive washing process. The proposed methodology uses a feature vector that concatenates the olive image histograms from different colour spaces with the values of two texture measures (image entropy and the grey-level co-occurrence matrix). An artificial neural network (ANN) was used as the classifier. For the experimental validation, 6325 images from 100 batches were analysed, showing good classification results (success ratios of 97.1% before the washing stage and 96.4% after).
Emerging Electromagnetic Imaging, Systems and Techniques
- 11:20 Automatic Detection of Concealed Pistols Using Passive Millimeter Wave Imaging
- A method is proposed for the automatic detection of concealed pistols in passive millimeter wave images for security applications. In this paper, we extend four half-surround Haar-like features and use the integral image to rapidly calculate the rectangle features. We then obtain a multi-layer classifier, cascaded from several strong classifiers trained with the AdaBoost algorithm, to detect the contraband. Various passive millimeter wave images from both the published literature and our own measurements are used for training and testing. The experimental results show that metallic pistols of different sizes, shapes and angles can be accurately detected, so this method is useful for the automatic detection of pistols.
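The integral image trick mentioned above reduces any rectangle sum, and hence any Haar-like feature, to a constant number of table lookups. A minimal sketch with a classic two-rectangle feature (the paper's extended half-surround features follow the same mechanics but different rectangle layouts, which are not reproduced here):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column, so any
    rectangle sum needs exactly four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] via four corner lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)
```

Because every feature costs O(1) after one O(N) table build, AdaBoost can evaluate huge feature pools over sliding windows in real time.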
- 11:35 Characteristics of Electromagnetic Holographic Measurement Sensitivity Field for Flow Imaging
- The flow imaging procedure of electromagnetic holographic measurement requires an adequately established sensitivity field. The Geselowitz sensitivity field widely adopted for most electric/magnetic imaging is not applicable to flow imaging by electromagnetic holographic measurement. In this paper, in accordance with the mathematical basis of tomography (the Radon transform and its inverse), and combining the physical meaning of the sensitivity field with the characteristics of the electromagnetic holographic measurement approach, a holographic measurement sensitivity field is established through the gradient of the electric potential. Numerical tests by the Finite Element Method show that the new sensitivity field helps overcome the "soft field effect", and validation against electromagnetic holographic measurement data shows that the response obtained with the new sensitivity field agrees better with theory, indicating that the holographic measurement sensitivity field established in this paper is applicable to the flow imaging procedure of electromagnetic holographic measurement.
- 11:50 Eddy Current Nondestructive Testing Method Based on the Spatial Fuzzy Entropy
- A new method is proposed to estimate geometrical characteristics of defects on the surface of, or inside, nonferromagnetic metal materials. Pulsed currents were injected into a pair of planar coils, and the distribution of the three-dimensional magnetic flux density on the material surface was measured with a magnetoresistive sensor. Along the normal symmetry axis of the exciting coil, the spatial sliding fuzzy entropy of the modulus of the three-dimensional magnetic flux density is calculated. Both simulation and experimental results show that the local conductivity discontinuity caused by a defect in the metal increases the degree of disorder in the distribution of the magnetic field, and hence the entropy. Within a certain range, the defect size and the entropy follow a monotonically increasing relationship. With the aid of this novel method, tiny defects, both on the surface and inside an aluminum plate, can be effectively detected.
- 12:05 Temperature Measurement of Gas Turbine Swirling Flames Using Tomographic Imaging Techniques
- This paper presents the 3-D (three-dimensional) temperature measurement of swirling flames of a well-characterized tangential swirl burner using a RGB (red, green and blue) CMOS (Complementary metal-oxide-semiconductor) camera associated with four flexible imaging fiber bundles for flame image acquisition. Optical tomographic algorithms were used to reconstruct the 3-D model of grey-level intensity of the flame and the two-color pyrometric technique was applied for computing the flame temperature based on the reconstructed 3-D model. Three R-type thermocouples were also employed to measure the flame temperature which was then used as a reference for validating the temperature derived from the flame images. Experimental results obtained showed that the proposed technique is capable of determining flame temperature profiles, and consequently can be an effective means of characterizing the 3-D swirling flame behaviors, including stability limits such as flame blow-off/flashback, thus reducing the event probability by changing inlet conditions.
Thursday, September 17, 12:20 - 13:20
Enhanced Detection and Identification Imaging Techniques
- 12:20 A Combined Similarity Measure for Multimodal Image Registration
- Mutual information (MI) and local self-similarity (LSS) are considered more suitable for multimodal image registration than several other existing similarity measures. MI reflects the correspondence of pixel intensities, and LSS matches features describing the local texture layout between visible (VS) and far-infrared (FIR) images. However, each has shortcomings when used alone: MI is sensitive to the size of the matching window, and LSS is limited by differences in texture layout between VS and FIR images. We devise a new similarity measure, LSMI, that combines MI and LSS linearly, since there is no conflict between them. Two fusion schemes are discussed in detail and one is chosen to prove the effectiveness of the approach. Experiments were carried out on 87 image pairs; in more than 30% of cases LSMI works better than MI, and in more than 50% it works better than LSS. The performance of the three measures is similar in the remaining cases.
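The MI half of the combined measure above is typically estimated from the joint intensity histogram of the two images. A minimal plug-in estimator (bin count illustrative; the paper's windowing and the LSS term are not reproduced here):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in MI estimate from the joint intensity histogram of two
    images: KL divergence between the joint and the product of marginals."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                     # joint distribution
    px, py = p.sum(1), p.sum(0)         # marginals
    nz = p > 0                          # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(4)
img = rng.random((64, 64))
```

An image is maximally informative about itself and nearly uninformative about an independent one, which is why MI peaks at correct alignment even across modalities with different intensity mappings.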
- 12:35 Share-Use of the Motion Vectors From Video Encoding for High Relative Speed Moving Vehicle Detection
- Detection of moving vehicles is a crucial part of Advanced Driver Assistance Systems (ADAS) in automobiles. Accurately detecting approaching vehicles may help to avoid threats while driving, and alerting the driver in a timely manner can improve road safety. To this end, real-time processing is an essential requirement. In this paper, a vision-based and block-based method is proposed, which uses spatial constraints to detect independent moving vehicles, employs the motion vectors of video encoding instead of optical flows, and also adopts a rough road region detection to reduce the number of moving blocks that are to be clustered and refined to yield independent moving vehicles. Experimental results in complex traffic scenarios demonstrate that our method is robust and real-time for on-road vehicle detection.
- 12:50 Multimodal Medical Image Registration Using Discrete Wavelet Transform and Gaussian Pyramids
- In this paper, the authors propose multimodal brain image registration using the wavelet transform followed by Gaussian pyramids. The reference and target images are decomposed into their LL, LH, HL and HH DWT coefficients and then processed for registration using Gaussian pyramids. For comparison, registration is also performed using Gaussian pyramids only and wavelet transforms only. The quality of registration is measured by comparing the maximum MI values obtained by the three methods and also using correlation coefficients. The proposed technique shows better results than the other two methods, with mathematical implications on real-time samples.
- 13:05 A Skeleton Reconstruction Algorithm for Identifying Individual Fish Fry in a Population Image
- Owing to rapid advances in computer vision technology, computer-assisted image analysis is starting to play an important role in several areas, including aquaculture. In recent years, several computer vision-based methods have been applied to major operations, e.g. automated fish counting, inspection and measurement. In this paper we address the problem of overlapping objects in a population image, which frequently occurs when the objects under investigation are allowed to move freely during operations. We propose a new skeleton reconstruction algorithm for identifying and isolating individual objects in a cluster of overlapping objects. The algorithm re-assembles the initial skeleton of an object cluster based on a combination of edge and geometric measures, in order to form correct skeletons of the individual objects in the cluster. The skeletons produced by our algorithm can be used as a basis for further automated inspection and measurement tasks. Here we apply the algorithm in aquaculture to automatically identify individual fish fry in an overlapping-fry cluster. Our algorithm achieves 93.33 percent accuracy for skeleton reconstruction of each individual fry in clusters of 2-7 overlapping fry. The results also show the effectiveness of the algorithm in dealing with various overlapping patterns.
Pattern Recognition and Features Extraction
- 12:20 A Semiautomatic Method for Pedestrian Ground Truth Generation in Thermal Infrared On-Board Videos
- Currently, pedestrian detection and tracking algorithms for thermal infrared (TIR) on-board videos suffer from a lack of comprehensive pedestrian datasets for benchmarking, and generating ground truth is a tedious and error-prone part of establishing a dataset of annotated videos. This paper presents a novel semiautomatic video annotation method to facilitate annotating pedestrians in TIR on-board videos. The proposed method consists of two phases: in the first, pedestrian appearance models are learned online; in the second, the learned models are used to automatically annotate the pedestrians in the remaining frames. We present a video annotation tool to verify the effectiveness and reliability of our method. A comparison between our tool and state-of-the-art on-board video annotation tools shows that our tool provides high ground truth quality with shorter annotation time when annotating pedestrians in TIR on-board videos with bounding boxes.
- 12:35 Self-Organization of Dynamic Spectral Imaging Data Based on Bootstrapping and Clustering Approaches
- This study introduces a novel technique for self-organizing data, without any prior knowledge of their statistical distribution, by fusing efficient strategies from clustering and resampling. The proposed methodology searches for hidden characteristics within the processed dataset and reveals additional data structures or subclasses that can be used to identify irregular groups of particular importance in disease modeling. The algorithm is evaluated on in vivo dynamic optical data from cervical cancer, using sample vectors representing the temporal response of tissue areas obtained through Dynamic Contrast Enhanced Optical Imaging. The results show that stratified, repeated applications of simple clustering schemes can effectively organize big data, supporting the application of the proposed method to tissue classification for accurate and early disease diagnosis.
- 12:50 Facial Expression Recognition Using Deep Neural Networks
- We develop a deep neural network technique for human facial expression recognition. Images of human faces are preprocessed with photometric normalization and histogram manipulation to remove illumination variance. Facial features are then extracted by convolving each preprocessed image with 40 Gabor filters. Kernel PCA is applied to the features before feeding them into a deep neural network consisting of one input layer, two hidden layers, and a softmax classifier. The deep network is trained using a greedy layer-wise strategy. We use the Extended Cohn-Kanade Dataset for training and testing. Recognition tests are performed on six basic expressions (surprise, fear, disgust, anger, happiness, and sadness). To further test the robustness of the classification system, and for benchmark comparison, we add a seventh emotion, "contempt", for additional recognition tests. We construct a confusion matrix to evaluate the performance of the deep network. The network generalizes to new images fairly successfully, with an average recognition rate of 96.8% for six emotions and 91.7% for seven emotions. Compared with shallower neural networks and SVM methods, the proposed deep network provides better recognition performance.
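The 40-filter Gabor bank mentioned in the abstract above (commonly 5 scales x 8 orientations) can be sketched as follows. This is only an illustrative construction, not the authors' code; the kernel size, wavelength spacing, and envelope parameters are assumptions.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma=2.0, gamma=0.5):
    """Real part of a Gabor kernel of odd size, given wavelength and orientation."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates by theta, then apply Gaussian envelope x cosine carrier
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def gabor_bank(scales=5, orientations=8):
    """Build a 5 x 8 = 40 filter bank of the kind used for feature extraction."""
    bank = []
    for s in range(scales):
        wavelength = 4.0 * (2 ** (s / 2.0))  # illustrative half-octave scale spacing
        for o in range(orientations):
            theta = o * math.pi / orientations
            bank.append(gabor_kernel(9, wavelength, theta))
    return bank
```

Each kernel would be convolved with the preprocessed face image; the concatenated responses form the feature vector passed to kernel PCA.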
- 13:05 Robust Gait Recognition Based on Partitioning and Canonical Correlation Analysis
- Gait recognition is greatly affected by covariate factors such as clothing type and carried objects, and finding an approach robust to these factors is the most challenging problem. In this paper, we propose a method based on canonical correlation analysis (CCA) to model the correlation between gait sequences under two different walking conditions. Correlation strength is used as the similarity measure in a KNN classifier. GEIs are partitioned into several parts, and majority voting is employed among these parts to reduce the effect of the covariate factors. Experimental results show that our proposed method outperforms other classical methods over all views.
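The partition-and-vote step described above can be sketched as follows. This is a simplified illustration: squared Euclidean distance stands in for the paper's CCA-based correlation strength, and the toy feature vectors and part count are assumptions.

```python
from collections import Counter

def split_parts(vec, n_parts):
    """Partition a flattened GEI feature vector into contiguous parts."""
    step = len(vec) // n_parts
    return [vec[i * step:(i + 1) * step] for i in range(n_parts)]

def dist(a, b):
    """Squared Euclidean distance (stand-in for CCA correlation strength)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify_by_voting(probe, gallery, n_parts=4):
    """Per-part nearest-neighbour labels, combined by majority voting.

    gallery: list of (feature_vector, label) pairs."""
    probe_parts = split_parts(probe, n_parts)
    votes = []
    for p in range(n_parts):
        best = min(gallery,
                   key=lambda g: dist(split_parts(g[0], n_parts)[p], probe_parts[p]))
        votes.append(best[1])
    return Counter(votes).most_common(1)[0][0]
```

Because each part votes independently, a covariate that corrupts one region (e.g. a carried bag) is outvoted by the unaffected parts.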
Thursday, September 17, 13:20 - 14:30
Thursday, September 17, 14:30 - 15:30
Novel Computed Tomography (CT) Imaging Techniques
- 14:30 2D Versus 3D Total Variation Minimization in Digital Breast Tomosynthesis
- Total variation (TV) minimization has become an important tool for sparse image reconstruction. In this study, a realistic 3D digital breast tomosynthesis (DBT) dataset was reconstructed and compared using two different forms of TV: (i) 2D reconstruction applying TV layer by layer and (ii) 3D reconstruction of the entire volume. One might assume that a 3D reconstruction should perform better; however, in real DBT the resolution in the z direction is about 10-15 times lower than in the x-y plane. This study investigates the performance of the reconstruction when a TV term in the z direction is added to the cost function of TV minimization.
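The distinction between the two TV forms can be sketched as an anisotropic TV cost with a separate weight on the z-gradient term. This is only an illustrative formulation under assumed weighting, not the paper's cost function; the down-weighting value is hypothetical, and setting it to zero recovers slice-by-slice 2D TV.

```python
def tv_3d(volume, z_weight=0.1):
    """Anisotropic total variation of a volume given as nested z-y-x lists.

    In-plane (x, y) gradients get unit weight; the z gradient is down-weighted
    (z_weight is an illustrative value) to reflect the much coarser DBT
    resolution in z. With z_weight=0 this reduces to 2D TV applied per slice."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    tv = 0.0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                v = volume[z][y][x]
                if x + 1 < nx:                       # x-gradient
                    tv += abs(volume[z][y][x + 1] - v)
                if y + 1 < ny:                       # y-gradient
                    tv += abs(volume[z][y + 1][x] - v)
                if z + 1 < nz:                       # weighted z-gradient
                    tv += z_weight * abs(volume[z + 1][y][x] - v)
    return tv
```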
- 14:45 Sparse Tomographic Image Reconstruction Method Using Total Variation and Non-Local Means
- Patient radiation dose is a major issue in computerized tomography (CT) imaging. Therefore, many improvements to the classical reconstruction algorithms have been suggested to achieve reasonable image quality with less patient dose. The aim of this work is to improve the well-known algebraic reconstruction technique (ART) in order to obtain good image quality with fewer or limited projection angles. We achieve this by sequential application of the ART update, total variation (TV) minimization, and non-local means (NLM), both of which are widely used, high-performance tools in imaging algorithms. To show the improvement of ART by TV and NLM, we used a Shepp-Logan phantom simulation and real data from a digital tomosynthesis imaging system. Our results indicate that the proposed method outperforms two widely used methods, ART and ART+TV, in terms of Structural SIMilarity (SSIM), signal-to-noise ratio (SNR), and root-mean-squared error (RMSE).
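The ART stage of the sequential scheme above is, in its textbook form, a Kaczmarz sweep over the system rows. The sketch below shows only that stage under a toy linear system; the TV and NLM steps that the paper interleaves after each sweep are omitted, and the relaxation value is an assumption.

```python
def art_sweep(A, b, x, relax=1.0):
    """One ART (Kaczmarz) sweep: project x onto each hyperplane a_i . x = b_i."""
    for a_i, b_i in zip(A, b):
        norm_sq = sum(a * a for a in a_i)
        if norm_sq == 0:
            continue
        residual = (b_i - sum(a * xj for a, xj in zip(a_i, x))) / norm_sq
        x = [xj + relax * residual * a for xj, a in zip(x, a_i)]
    return x

def reconstruct(A, b, n_sweeps=50):
    """Iterate ART sweeps from a zero image. In the paper's scheme each sweep
    would be followed by TV minimization and NLM filtering (not shown here)."""
    x = [0.0] * len(A[0])
    for _ in range(n_sweeps):
        x = art_sweep(A, b, x)
    return x
```

On a consistent toy system the sweeps converge to the exact solution; with few projection angles the system is underdetermined, which is where the TV and NLM regularization steps matter.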
- 15:00 Digital Breast Tomosynthesis Imaging Using Total Variation and Non-Local Means
- Digital breast tomosynthesis (DBT) generates 3D images of the breast from 2D projections taken over a limited view angle. Due to the ill-posed nature of image reconstruction in DBT, alternative reconstruction methods have been introduced for better image quality. In this study, an efficient DBT image reconstruction algorithm is introduced, formulated as a combination of the algebraic reconstruction technique (ART), total variation (TV), and non-local means (NLM). A real DBT dataset and a commercially available 3D phantom were used for performance evaluation. The results show that ART+TV+NLM yields better reconstructed images than ART and ART+TV in terms of reduced background noise and out-of-plane artifacts, while keeping the details well preserved.
- 15:15 Reconstruction of Knee Joint Image From CT Data Using Positioning Doll
- Instead of a phantom with fixed joints, a positioning doll that is the size of an actual adult and has movable joints can be used to practice positioning for radiography. This paper describes the processing of a knee joint image created from computed tomography (CT) data of a positioning doll in order to provide radiographic practice that involves bending the knee joint. First, pre-processing is performed to adjust the transverse plane images obtained by the CT scanner. Next, each adjusted transverse plane image is divided into femur, patella, tibia, and fibula parts. These are reconstructed according to the bending angle, and the lateral knee joint image is produced. Images for various knee joint angles and body rotation angles can be produced from the CT data. We expect that students will grasp the three-dimensional structure of the organ being examined and deepen their anatomical knowledge using the proposed method, and we believe it will help them learn positioning techniques.
Techniques Aimed at Enhanced Diagnosis and Imaging
- 14:30 Hand Dorsal Vein Recognition: Sensor, Algorithms and Evaluation
- Biometric recognition involves identifying individuals based on their physical and/or behavioral traits. Among the various biometric modalities, the dorsal hand vein has been known for improved accuracy, stability, and resistance to spoofing. Accurate biometric recognition strongly depends on the quality of the dorsal hand vein images that can be captured in real-life scenarios. In this paper, we present a new dorsal hand vein sensor that captures good-quality dorsal hand vein images. The sensor is based on near-infrared illumination at 940 nm, which is used to illuminate the dorsal hand region. It employs a single camera with a simple structure that further improves the quality of the light to properly illuminate the dorsal hand region. Extensive experiments are carried out on our newly collected database of 50 subjects, comprising 100 unique dorsal hand veins. We also present an extensive evaluation of eight different state-of-the-art techniques, which demonstrates the outstanding performance of the Log-Gabor and Sparse Representation Classifier with an EER of 0.7%.
- 14:42 Visible Iris Imaging: A Novel Imaging Solution for Improved Iris Recognition
- Imaging dark irises in the visible spectrum has had limited success due to low texture visibility caused by the light-scattering and absorption properties of the cells in the iris. Traditional iris imaging employs near-infrared (NIR) illumination to address this problem; however, this precludes the use of iris biometrics with regular cameras such as those on smartphones. In this work, we propose a new iris imaging framework that resolves the iris texture pattern without NIR illumination. The proposed setup employs a simple illumination source placed at an acute angle to the axis of the eye and the imaging device to maximize texture visibility. The setup is used to acquire a new iris image database for evaluating verification performance. The newly constructed database of dark iris images comprises 62 unique iris patterns with 10 samples each, captured in different sessions using an iPhone 5S smartphone. Furthermore, a benchmark comparison against NIR images is provided for a subset of the database to measure the robustness of the proposed method. Detailed experiments with five well-established state-of-the-art iris recognition algorithms indicate the superior performance of the proposed imaging setup, with a Genuine Match Rate (GMR) of 85.98% at a False Match Rate (FMR) of 0.01% and an Equal Error Rate (EER) of 3.54%.
- 14:54 Varying Energy CT Imaging Method About Complicated Structural Components
- For complicated structural components with wide X-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot capture all structural information: CT information is lost because the effective thickness of the component along the direction of X-ray penetration exceeds the dynamic range of the X-ray imaging system. To address this problem, a varying-energy CT imaging method is proposed. In this method, the tube voltage is adjusted several times in fixed small increments. Gray-consistency fusion and logarithmic demodulation are then applied to obtain complete, low-noise, high-dynamic-range projections, and conventional CT reconstruction is applied to obtain the high-dynamic-range CT image. An accompanying experiment demonstrates that this technique can extend the dynamic range of X-ray imaging systems and provide complete representations of the internal structures of complicated components.
- 15:06 Improving Authenticity and Robustness of Medical Images Watermarking Schemes Based on Multi-resolution Decomposition
- The handling of digital information in hospitals, especially in radiology systems, and the storage and electronic transfer of medical images between hospital services have become central to the information infrastructure of modern healthcare systems. Medical images transmitted over communication networks require more robust and reliable algorithms. In this paper, we propose a new medical image watermarking scheme based on the wavelet transform. The main contribution lies in the extraction process, which improves the robustness and efficiency of the watermarking scheme against several kinds of attacks. The results are very encouraging: our images show very good resistance against several damaging attacks well known in the image processing field (rotation and noise).
- 15:18 Multi-GPU Based Evaluation and Analysis of Prehistoric Ice Cores Using OpenCL
- The analysis of prehistoric ice cores is a well-established instrument in the field of climate research. Until recently, common methods were often based on the analysis of carbon dioxide and methane concentrations. Using computed tomography-based 3D reconstructions for the evaluation and analysis of prehistoric ice cores makes it possible to improve the accuracy of age determination by an order of magnitude, from several hundred years to decades. This, in turn, allows improvement of the underlying model of climatic development over the last several hundred thousand years. The use of 3D volumes allows a much more detailed analysis of the size, amount, distribution, and connectivity of air bubbles in the ice cores as a new climatic proxy. In this setting, we present a GPU-based approach for the efficient evaluation and analysis of air bubbles using OpenCL. As the raw data size can grow up to 10 TB per meter of ice core, we focus on a distributable and scalable approach, based on component labeling, that can be scaled to multiple GPUs via OpenCL.
Thursday, September 17, 15:30 - 16:30
Image Analysis and Processing I
- 15:30 Coefficients Training Methods for Image and Video Quality Measures
- Combining multiple color image attribute measures by linear addition has been shown to be an effective method for evaluating color image quality; appropriate selection of the linear combination coefficients therefore determines the performance of the image quality measure. Different methods for training the linear combination coefficients are used for different applications. In this paper, training methods for obtaining the linear combination coefficients are discussed for two practical applications: (1) image processing applications, where the images before and after processing have the same source; and (2) video processing, where consecutive frames have different contents and suffer from different types of distortions. With the obtained linear combination coefficients, the overall image quality measure, the Color Quality Measure (CQM), is shown to be effective in benchmarking the quality of images and video frames.
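One standard way to train such linear combination coefficients is an ordinary least-squares fit of the individual attribute measures to reference quality scores. The sketch below is a generic illustration, not the paper's training procedure; the toy data and two-measure setup are assumptions.

```python
def fit_coefficients(measures, scores):
    """Fit w so that scores ~= measures @ w, by solving the normal equations
    with Gaussian elimination (no external libraries).

    measures: one row per image, each row holding the attribute measures;
    scores: the target quality scores."""
    k = len(measures[0])
    # Normal equations: (M^T M) w = M^T s
    ata = [[sum(row[i] * row[j] for row in measures) for j in range(k)] for i in range(k)]
    atb = [sum(row[i] * s for row, s in zip(measures, scores)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    # Back-substitution
    w = [0.0] * k
    for r in range(k - 1, -1, -1):
        w[r] = (atb[r] - sum(ata[r][c] * w[c] for c in range(r + 1, k))) / ata[r][r]
    return w
```

The two applications in the abstract differ only in how the (measures, scores) training pairs are collected, not in the fitting step itself.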
- 15:45 A New Correlation-Differential Denoising Algorithm
- We propose a new correlation-differential denoising algorithm for Gaussian noise based on two novel concepts: correlation-weighted and differential-weighted filters. The correlation-weighted filter uses the correlation between pixel sequences along different directions of an image as filtering weights; this filter preserves texture information, especially edge information. Derived from the Gaussian filter, the differential-weighted filter uses the actual difference of pixel values as a parameter instead of the spatial distance between pixels. The two filters are complementary, preserving texture information while simultaneously removing noise. The algorithm is shown to outperform standard denoising methods, including Gaussian filtering, non-local means, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding, for both moderate and high levels of Gaussian noise.
- 16:00 Image Reconstruction for Wet Granules by Electrical Capacitance and Microwave Tomography
- The moisture content of granules in fluidized bed drying, granulation, and coating processes is typically between 1% and 25%, which causes the permittivity and conductivity to change during the process. Under such conditions, the application of electrical capacitance tomography has limitations. Considering that microwave tomography works over a wide frequency range (up to 2.5 GHz) and can measure materials with high permittivity and high conductivity, the objective of this research is to combine capacitance and microwave tomography to investigate solids concentration at different moisture contents. The reconstructed images show that both capacitance and microwave tomography respond to moisture content, and the measurement results demonstrate that the two techniques complement each other. The results can be used for process control in fluidized bed drying, granulation, and coating to improve operational efficiency.
- 16:15 3D Positioning for Revision Total Hip Replacement Surgery by Dual-modality Tomography
- A new approach based on electrical capacitance tomography (ECT) and electrical resistance tomography (ERT) is proposed to provide real-time 3D images for revision total hip replacements (THRs), aiming to navigate a drilling/milling tool in the femoral bone during surgery. An ECT/ERT dual-modality sensor adopts the conventional ECT sensor structure with internal electrodes and voltage excitation. The capacitance and conductance are measured by an impedance analyzer-based system. With prior knowledge of the shape and diameter of the femoral bone and the drilling/milling tool, the 3D imaging process during revision THR surgery can be simplified to estimating the cross-sectional position of the femur and the cross-sectional and axial position of the drilling/milling head. Experiments were carried out with an aluminum rod inserted into the empty cavity of a cemented femoral bone surrounded by saline solution. The cross-sectional position of the femur is derived by a weighted mean method and the axial position by a linear function. 3D images visualizing the revision THR process are generated in MATLAB. The initial results are promising, demonstrating the possibility of visualizing the surgical process using the ECT/ERT dual modality.
Thursday, September 17, 16:00 - 17:30
- 16:00 A Fast CU Encoding Scheme Based on the Joint Constraint of Best and Second-best PU Modes for HEVC Inter Coding
- The emerging High Efficiency Video Coding (HEVC) standard has shown greatly improved coding efficiency by adopting hierarchical structures of coding units (CU), prediction units (PU), and transform units (TU). However, the encoding time increases substantially due to the numerous combinations of CU, PU, and TU. In this paper, we propose a fast CU encoding scheme for HEVC inter coding based on the joint constraint of the best PU mode, the second-best PU mode, and the CBF information. The experimental results show that the proposed scheme reduces the encoding time by more than 35% for RA and LB cases with negligible BD-rate loss over the HEVC test model reference software, HM 16.4.
- 16:08 Adaptive Rate Binning in Real Time Transmission
- In this paper, an adaptive-rate random binning code is developed for point-to-point (P2P) delay-constrained source streaming. We study the error exponent (error convergence rate), which measures the asymptotic rate at which the error probability decays with the time delay. An achievable lower bound on the error exponent is derived for the proposed binning. Numerical results show that the proposed adaptive-rate random binning code achieves a better exponent than the existing fixed-length (FL) random binning code. This general approach can easily be extended to real-time image processing, signal processing with delay, and source compression.
- 16:16 A Coding Protocol for Relay Network
- This paper proposes a coding protocol for a relay network. Its performance is achieved by means of cooperative successive-cancellation decoding combined with a sliding-window linear encoding scheme at the relays. The linear mapping at the relay encoder is designed to give an optimal constellation for the signals transmitted by the relays. The proposed scheme is shown to achieve full diversity when the relays can decode the original source symbols error-free. The paper then proposes a selection-relaying approach capable of recovering full diversity even when the relay nodes decode erroneously. Simulation results confirm that the proposed relaying schemes achieve the desired performance.
- 16:24 Application of Non-Negative Matrix Factorization for the Deconvolution of Petroleum Mixtures Using Mid FTIR Analysis
- The aim of this study was to develop an efficient (in terms of both time and cost) yet reliable methodology capable of identifying the chemical fractions in complex commercial petroleum products. We demonstrate the performance of a methodology based on Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) analytical signals, combined with a modified factorization algorithm, to solve this "mixture problem". The results, regarding both the application of the adapted deconvolution technique to petroleum analysis and its self-adaptation to data without any prior initialization, indicate that it is possible to reveal the composition of a chemically complex petroleum mixture working solely with the infrared signals of a limited number of samples and without any other a priori information. A key application of the proposed methodology is the quality control of commercial gasoline by identifying and quantifying the individual fractions used in its formulation.
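The factorization underlying this kind of spectral deconvolution is non-negative matrix factorization (NMF). The sketch below shows only the textbook Lee-Seung multiplicative updates for V ~= W H, not the paper's modified algorithm; the toy spectra, rank, and iteration count are assumptions.

```python
def nmf(V, rank, n_iter=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= W H (Frobenius norm).

    V: list-of-lists of non-negative spectra (rows = samples). Returns the
    factor matrices W (mixing weights) and H (component spectra)."""
    import random
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() + eps for _ in range(rank)] for _ in range(n)]
    H = [[random.random() + eps for _ in range(m)] for _ in range(rank)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(col) for col in zip(*A)]

    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H), elementwise
        WH = matmul(W, H)
        WtV, WtWH = matmul(T(W), V), matmul(T(W), WH)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        # W <- W * (V H^T) / (W H H^T), elementwise
        WH = matmul(W, H)
        VHt, WHHt = matmul(V, T(H)), matmul(WH, T(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H
```

Applied to FTIR data, each row of H would play the role of a pure-fraction spectrum and each row of W the fraction content of one sample.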
- 16:32 A New Method of Reducing Boundary Artifacts for JPEG2000 Multi-Tile Coding
- In this paper, we present a pre-/post-filtering framework to reduce the boundary artifacts introduced by JPEG2000 multi-tile coding. Instead of the commonly used one-dimensional array input model, a two-dimensional (2D) matrix is adopted to make the derivations more precise. To build the statistical relationship between the input image and the distortion of the reconstructed image, we analyze the 2D discrete wavelet transform (DWT) with the pre-filter, the 2D inverse DWT with the post-filter, and the distortion model of scalar dead-zone quantization. Furthermore, the evaluation criterion is formulated in terms of the memory cost and the relative mean square error on boundary versus non-boundary coefficients. Based on several experiments, optimal pre-/post-filters of typical sizes are obtained, and their effectiveness in reducing boundary artifacts is demonstrated.
- 16:41 Licence Plate Images Deblurring with Binarization Threshold
- The principal purpose of this paper is to develop a new method to deblur licence plate images. Our statistical analyses of plate images indicate that the binarization threshold is a reasonable parameter for distinguishing blurred plate images from clean ones. Our approach defines a new regularization term that includes both intensity and gradient priors, and gives an effective, convergent solution. A large number of experiments on a real-world plate image dataset have been conducted using different algorithms; compared with other representative deblurring algorithms, the proposed method yields higher-quality results. Moreover, further experiments apply our algorithm to non-plate blurred images, such as composite images, saturated images, and other common images. The results demonstrate that the proposed method achieves state-of-the-art performance on both document and non-document images.
- 16:49 Improved Directional Weighted Interpolation Method Combination with Anti-aliasing FIR Filter
- In this paper, we present an improved directional weighted interpolation method for single-sensor camera imaging. Observing that conventional directional weighted interpolation methods rely on unreliable assumptions about spectral correlation, one contribution of this work is the use of an anti-aliasing finite impulse response filter to improve interpolation accuracy by exploiting robust spectral correlation. We also refine the interpolation result using gradient inverse weighted filtering. An experimental analysis revealed that our proposed algorithm provides superior performance in terms of both objective and subjective image quality compared to conventional directional weighted demosaicking algorithms. Our implementation has very low complexity and is therefore well suited for real-time applications.
- 16:57 Deep Convolutional Neural Network for Kinship Verification From Facial Images
- In this paper, we propose to learn a relational feature for kinship verification from facial images. Taking a father-son relationship as an example, we first use a convolutional neural network (CNN) to extract the father's identity feature for face recognition. We then set up a deep CNN-autoencoder (AE) model to establish the relational features. The CNN-AE is a supervised deep learning model trained with backpropagation, with the target values set close to the inputs. The relational features are the activations of the last hidden layer of the deep model, where the son's facial image is the input and the father's identity feature is the target value. This deep model learns the process by which a son gradually comes to resemble his father, and the activations of the higher layers strongly represent the father-son kinship visually. Experimental results show that the relational feature is effective for kinship verification.
- 17:05 Color Characteristics for the Evaluation of Suspended Sediments
- This study focuses on a significant issue in environmental monitoring: the estimation of suspended sediment concentration. More specifically, the purpose of the current work is to provide a new non-intrusive way to estimate the suspended sediment (SS) distribution. The proposed methodology uses the color characteristics of river flow images and achieves a high correlation with suspended sediment measurements. The importance of this work derives from the fact that it provides an alternative, effective way of estimating the SS distribution, as opposed to the conventional method that requires human presence, especially considering the difficulty of measuring river pollution during flash flood events, when the sediment distribution is increased and is directly related to water quality.
- 17:13 Multi-pedestrian Tracking for Far-infrared Pedestrian Detection On-board Using Particle Filter
- Target tracking is important for on-board pedestrian detection in vision applications for effective traffic accident prevention. Faced with complex traffic scenes, including background change, varied pedestrian appearance, and multiple targets, existing target tracking algorithms such as Kalman and particle filters show shortcomings in accuracy, robustness, and availability. A heuristic tracking scheme that iterates between feature model learning and target tracking is used for far-infrared (FIR) multi-pedestrian tracking on-board. Partial least squares regression (PLSR) and heuristic computation are adopted to learn and update feature models for each pedestrian, and an improved particle filter algorithm combining an adaptive search region and double feature models is proposed to track multiple pedestrians simultaneously. Experiments on several FIR video sequences demonstrate that the improved scheme outperforms other particle filter algorithms for multi-pedestrian tracking, even with partial occlusion and scale and posture variation.
- 17:22 Compound Poisson Noise Verification for X-ray Flat Panel Imager
- The physical effects that contribute to noise in digital radiography can be represented as an overall variance that includes the Poisson statistics of the x-rays. For a real system, the noise response must be integrated over the x-ray spectrum to obtain the total response to the spectral fluence, so the detected x-ray signal follows a compound Poisson distribution. To verify the noise property of our flat panel detector (FPD), we formulate a Poisson-based model and verify the image noise statistics against measured data. The results demonstrate that the FPD noise property is independent of the tube voltage, with only a 1.6% model-fitting error. The noise property of our FPD can therefore be accurately represented by the modeled function.
Thursday, September 17, 16:30 - 16:45
Thursday, September 17, 16:45 - 17:45
Image Analysis and Processing II
- 16:45 3D Shape and Color Estimation Using Linear Light Sources and Cameras
- This paper proposes a 3D shape and color estimation method based on the photometric stereo method, using images taken under multiple linear light sources. The proposed method can estimate partial object shapes accurately; hence, we estimate multiple object shapes with different light source positions, extract the partial object shapes, and integrate them. In addition, we use the binocular stereo method to estimate depth at edge parts and integrate its result. An actual experimental situation is simulated, and the 3D shape of the target object is measured using the synthetic images. The experimental results show that the proposed method is effective for measuring the whole 3D shape of an object.
- 17:00 Secure Image Processing Inside Cloud File Sharing Environment Using Lightweight Containers
- The growth of cloud file sharing storage as infrastructure for serving large amounts of images over the internet inspires new data analytics paradigms. In this paper, we sketch the idea of expanding cloud file sharing capabilities from only storing images to also performing encryption and analytics, by moving and executing user-defined programs near the data inside an object storage cloud. In big data contexts such as cloud file sharing storage, the arbitrary separation of storage and computation increases latency and decreases performance. The philosophy behind this approach is to package applications and move them to the data, rather than moving data to where the application is located. The encoding of the image is done using the P-Fibonacci transform of Discrete Cosine Coefficients (PFCC) algorithm. This paper describes Docker Machine and Docker Swarm, an environment supporting containerized user-defined applications running remotely inside the cloud storage. Furthermore, detailed simulations have been carried out to test the encryption service on a cloud file sharing environment such as OpenStack Object Storage.
- 17:15 Image Fusion for Surface Finishing Inspection
- This paper presents an automatic defect detection system for machined metallic surfaces. Depending on the type of surface treatment, a characteristic texture may be added. The aim is to detect flaws even when their orientation and shape are very similar to the surface finishing. For this purpose, a procedure based on merging features obtained under different lighting conditions has been developed. All the devices involved in the image acquisition process are also detailed. Results of the automated inspection suggest that the system works effectively with a low false-rejection rate. Finally, ways to further improve the defect detection rate are also discussed.
- 17:30 Autonomous Facial Recognition Based on the Human Visual System
- This paper presents a real-time facial recognition system utilizing our human visual system algorithms coupled with logarithmic Local Binary Pattern feature descriptors and our region-weighted model. The architecture can quickly find and rank the closest matches of a test image against a database of stored images. There are many potential applications for this work, including homeland security applications such as identifying persons of interest, and other robot vision applications such as search and rescue missions. This new method significantly improves the performance of the previous Local Binary Pattern method. For our prototype application, we supplied the system with testing images and found their best matches in the database of training images. In addition, the results were further improved by weighting the contribution of the most distinctive facial features. The system evaluates and selects the best matching image using the chi-squared statistic.
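The baseline Local Binary Pattern matching with chi-squared selection that the abstract above builds on can be sketched as follows. This is the classic 3x3 LBP, not the paper's logarithmic variant, and the region weighting is omitted; the toy images and labels are hypothetical.

```python
def lbp_histogram(img):
    """256-bin histogram of classic 3x3 Local Binary Pattern codes."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # Eight neighbours in a fixed clockwise order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist

def chi_squared(h1, h2, eps=1e-9):
    """Chi-squared distance between two histograms (lower = better match)."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def best_match(test_img, gallery):
    """Return the label of the gallery image with minimal chi-squared distance.

    gallery: list of (image, label) pairs."""
    th = lbp_histogram(test_img)
    return min(gallery, key=lambda g: chi_squared(lbp_histogram(g[0]), th))[1]
```

Region weighting, as in the paper, would compute one histogram per face region and sum the chi-squared distances with per-region weights instead of using a single global histogram.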
Friday, September 18
Friday, September 18, 09:00 - 10:00
Lecture 4: Computational Vision: Current Progress to Future Perspectives
Lecture 4: Computational Vision: Current Progress to Future PerspectivesSeptember 18, 9:00 AMSos Agaian, Ph.D., Peter T. Flawn Distinguished Professor, University of Texas at San AntonioThe last two decades have seen remarkable scientific and industrial advances, few greater than in the area of big data and cloud computing. The rapid proliferation of hand-held mobile computing devices, coupled with the acceleration of ‘Internet-of-Things' connectivity and data-producing systems such as embedded sensors, mobile phones, and surveillance cameras, has certainly contributed to these advances. One of the fields in which scientific computing has made particular inroads is big image-data analytics and computational vision systems. In our modern, digitally connected societies, we are producing, storing, and using ever-increasing volumes of digital image and video content. Every day, we create 2.5 quintillion bytes of data (over 2.5 billion photos uploaded to Facebook every month, and over 300 hours of video uploaded to YouTube per minute by over 1 billion users) - so much that 90% of the data stored in the world today has been created in just the last two years. How can we possibly make sense of all this visual-centric data? And how can we be sure that the derived computations and analyses are fully relevant to human vision, understanding, and interpretation? Well managed and properly analyzed, this wealth of data can be used to unlock new sources of economic value and improved societal prosperity. The current state of the art in computational vision analytics affords us a variety of tools and methods to solve various classes of computer vision problems. We are then posed with the following questions: how large a class of vision problems are we currently able to solve, compared with the totality of what humans can do?
Can we duplicate human vision abilities in a computational device? This talk will give an overview of the main areas of vision-based technology being investigated by Agaian's visual computation and analysis research team. It will also discuss how we render, interpret, and communicate big data securely, and summarize some of the unique high-performance engineering and computational challenges in applying vision technology to real-world problems. Finally, Agaian will summarize his ongoing research objectives and current results in the area of computational vision technology, and offer some perspective on new challenges, including sharing his methods with the basic sciences through interdisciplinary collaborations.