Although machine learning (ML) has shown promise in numerous domains, there are concerns about generalizability to out-of-sample data. This is currently addressed by centrally sharing ample, and importantly diverse, data from multiple sites. However, such centralization is challenging to scale (or even infeasible) due to various limitations. Federated learning (FL) provides an alternative for training accurate and generalizable ML models by sharing only numerical model updates. Here we present findings from the largest FL study to date, involving data from 71 healthcare institutions across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, utilizing the largest dataset of such patients ever used in the literature (25,256 MRI scans from 6,314 patients). We demonstrate a 33% improvement over a publicly trained model in delineating the surgically targetable tumor, and a 23% improvement over the tumor's entire extent. We anticipate our study to: 1) enable more studies in healthcare informed by large and diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further quantitative analyses for glioblastoma via performance optimization of our consensus model for eventual public release, and 3) demonstrate the effectiveness of FL at such scale and task complexity as a paradigm shift for multi-site collaborations, alleviating the need for data sharing.
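The key mechanism described above, sharing only numerical model updates, can be illustrated with a minimal federated-averaging sketch: each site reports a weight vector and its local sample count, and a consensus model is their weighted average. All names and numbers below are illustrative, not the study's actual implementation.

```python
# Minimal sketch of federated averaging: each site trains locally and
# shares only numerical model updates (here, a weight vector plus its
# local sample count); no patient data ever leaves a site.
# Illustrative only, not the study's actual aggregation code.

def federated_average(site_updates):
    """Sample-count-weighted average of per-site weight vectors.

    site_updates: list of (weights, n_samples) pairs, where weights is
    a list of floats and n_samples the number of local training cases.
    """
    total = sum(n for _, n in site_updates)
    dim = len(site_updates[0][0])
    consensus = [0.0] * dim
    for weights, n in site_updates:
        for i, w in enumerate(weights):
            consensus[i] += w * (n / total)
    return consensus

# Three hypothetical sites holding different amounts of data:
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 200), ([5.0, 6.0], 100)]
print(federated_average(updates))  # → [3.0, 4.0]
```

Sites with more training cases pull the consensus proportionally harder, which is how such weighted aggregation approximates training on the pooled data without centralizing it.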
Combining Registration Errors and Supervoxel Classification for Unsupervised Brain Anomaly Detection
Samuel B. Martins, Alexandre X. Falcão, Alexandru C. Telea
Biomedical Engineering Systems and Technologies, Springer International Publishing, pp. 140–164, 2021.
Automatic detection of brain anomalies in MR images is challenging and complex due to the intensity similarity between lesions and healthy tissues as well as the large variability in shape, size, and location among different anomalies. Even though discriminative models (supervised learning) are commonly used for this task, they require high-quality annotated training images, which are unavailable for most medical image analysis problems. Inspired by groupwise shape analysis, we adapt a recent fully unsupervised supervoxel-based approach (SAAD)—designed for abnormal asymmetry detection of the hemispheres—to detect brain anomalies from registration errors. Our method, called BADRESC, extracts supervoxels inside the right and left hemispheres, cerebellum, and brainstem, models registration errors for each supervoxel, and treats outliers as anomalies. Experimental results on MR-T1 brain images of stroke patients show that BADRESC outperforms a convolutional-autoencoder-based method and attains similar detection rates for hemispheric lesions in comparison to SAAD with substantially fewer false positives. It also presents promising detection scores for lesions in the cerebellum and brainstem.
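The per-supervoxel outlier test at the core of BADRESC can be sketched as follows. This is a hedged illustration only: it reduces each supervoxel to a single scalar registration-error value and uses a simple z-score rule in place of the method's actual one-class classifier.

```python
import math

# Hedged sketch: model the registration-error magnitude of each supervoxel
# from healthy training images, then flag test supervoxels whose error falls
# outside the normal range. The z-score rule is an illustrative stand-in for
# BADRESC's actual outlier classifier.

def fit_supervoxel_models(training_errors):
    """training_errors: {supervoxel_id: [error values from healthy subjects]}.
    Returns a (mean, std) model per supervoxel."""
    models = {}
    for sv, values in training_errors.items():
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        models[sv] = (mean, math.sqrt(var))
    return models

def detect_anomalies(models, test_errors, z_thresh=3.0):
    """Return ids of supervoxels deviating more than z_thresh sigmas."""
    anomalous = []
    for sv, value in test_errors.items():
        mean, std = models[sv]
        if std > 0 and abs(value - mean) / std > z_thresh:
            anomalous.append(sv)
    return sorted(anomalous)

models = fit_supervoxel_models({1: [0.1, 0.2, 0.15], 2: [0.05, 0.06, 0.05]})
print(detect_anomalies(models, {1: 0.18, 2: 0.9}))  # → [2]
```

Fitting one model per supervoxel keeps the decision local, so a large error in one brain region cannot be masked by low errors elsewhere.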
BADRESC: Brain Anomaly Detection based on Registration Errors and Supervoxel Classification
Samuel B. Martins, Alexandre X. Falcão, Alexandru C. Telea
Automatic detection of brain anomalies in MR images is very challenging and complex due to the intensity similarity between lesions and normal tissues as well as the large variability in shape, size, and location among different anomalies. Inspired by groupwise shape analysis, we adapt a recent fully unsupervised supervoxel-based approach (SAAD) — designed for abnormal asymmetry detection of the hemispheres — to detect brain anomalies from registration errors. Our method, called BADRESC, extracts supervoxels inside the right and left hemispheres, cerebellum, and brainstem, models registration errors for each supervoxel, and treats outliers as anomalies. Experimental results on MR-T1 brain images of stroke patients show that BADRESC attains similar detection rates for hemispheric lesions in comparison to SAAD with substantially fewer false positives. It also presents promising detection scores for lesions in the cerebellum and brainstem.
Investigating the impact of supervoxel segmentation for unsupervised abnormal brain asymmetry detection
Samuel B. Martins, Alexandru C. Telea, Alexandre X. Falcão
Computerized Medical Imaging and Graphics, 85, pp. 101770, 2020.
Several brain disorders are associated with abnormal brain asymmetries (asymmetric anomalies), and various computer-based methods aim to detect such anomalies automatically. Recent advances in this area use automatic unsupervised techniques that extract pairs of symmetric supervoxels in the hemispheres, model normal brain asymmetries for each pair from healthy subjects, and treat outliers as anomalies. Yet, there is no deep understanding of the impact of supervoxel segmentation quality on abnormal asymmetry detection, especially for small anomalies, nor of the added value of using a specialized model for each supervoxel pair instead of a single global appearance model. We aim to answer these questions through a detailed evaluation of different scenarios for supervoxel segmentation and classification for detecting abnormal brain asymmetries. Experimental results on 3D MR-T1 brain images of stroke patients confirm the importance of high-quality supervoxel segmentation that fits the anomalies and of using a specific classifier for each supervoxel. Next, we present a refinement of the detection method that reduces the number of false-positive supervoxels, thereby making the method easier to use for visual inspection and analysis of the found anomalies.
An adaptive probabilistic atlas for anomalous brain segmentation in MR images
Samuel B. Martins, Jordão Bragantini, Alexandre X. Falcão, Clarissa L. Yasuda
Purpose: Automated segmentation of brain structures (objects) in MR three-dimensional (3D) images for quantitative analysis has been a challenge, and probabilistic atlases (PAs) are among the most successful approaches. However, the existing models do not adapt to possible object anomalies caused by a disease or a surgical procedure. Post-processing operations, such as tissue classification to detect and remove such anomalies inside the resulting segmentation mask, do not solve the problem because segmentation errors in healthy tissues cannot be fixed. Such anomalies very often alter the shape and texture of the brain structures, making them differ from the appearance of the model. In this paper, we present an effective and efficient adaptive probabilistic atlas, named AdaPro, to circumvent the problem and evaluate it on a challenging task: the segmentation of the left hemisphere, right hemisphere, and cerebellum, without pons and medulla, in 3D MR-T1 brain images of epilepsy patients. This task is challenging due to temporal lobe resections, artifacts, and the absence of contrast in some parts between the structures of interest.
Methods: In AdaPro, we first build one probabilistic atlas per object of interest from a training set with normal 3D images and the corresponding 3D object masks. Second, we incorporate a texture classifier based on convex optimization that dynamically indicates the regions of the target 3D image where the PAs (shape constraints) should be further adapted. This strategy is mathematically more elegant and avoids problems with post-processing. Third, we add a new object-based delineation algorithm based on combinatorial optimization and diffusion filtering. AdaPro can then be used to locate and delineate the objects in the coordinate space of the atlas or of the test image. We also compare AdaPro with three other state-of-the-art methods: a statistical shape model based on synergistic object search and delineation, and two methods based on multi-atlas label fusion (MALF).
Results: We evaluate the methods quantitatively on 3D MR-T1 brain images of 2T and 3T from epilepsy patients, before and after temporal lobe resections, and on the template and native coordinate spaces. The results show that AdaPro is considerably faster and consistently more accurate than the baselines with statistical significance in both coordinate spaces.
Conclusion: AdaPro can be used as a fast and effective step for brain tissue segmentation and it can also be easily extended to segment subcortical brain structures. By choice of its components, probabilistic atlas, texture classifier, and delineation algorithm, it can also be extended to other organs and imaging modalities.
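The probabilistic-atlas idea underlying AdaPro can be sketched by averaging registered binary object masks into a per-voxel probability map and thresholding it as a shape prior. The adaptive texture classifier and the delineation algorithm of the actual method are omitted; the masks below are illustrative flat lists standing in for registered 3D volumes.

```python
# Hedged sketch of a probabilistic atlas: average registered binary object
# masks from training subjects into a per-voxel probability map, then
# threshold it as a shape prior. AdaPro's adaptive texture classifier and
# delineation step are not reproduced here.

def build_atlas(masks):
    """masks: list of equally-sized registered binary masks (flat 0/1 lists).
    Returns the per-voxel fraction of training subjects with the object."""
    n = len(masks)
    size = len(masks[0])
    return [sum(m[i] for m in masks) / n for i in range(size)]

def shape_prior(atlas, threshold=0.5):
    """Voxels whose training-set object frequency exceeds the threshold."""
    return [1 if p > threshold else 0 for p in atlas]

masks = [[0, 1, 1, 0],
         [0, 1, 1, 1],
         [0, 0, 1, 1]]
atlas = build_atlas(masks)   # ≈ [0.0, 0.67, 1.0, 0.67]
print(shape_prior(atlas))    # → [0, 1, 1, 1]
```

Voxels with probability 1.0 are object in every training subject; the adaptive step in AdaPro decides, per region, how strictly such a prior should constrain the final delineation.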
ALTIS: A fast and automatic lung and trachea CT-image segmentation method
Azael M. Sousa, Samuel B. Martins, Alexandre X. Falcão, Fabiano Reis, Ericson Bagatin, Klaus Irion
Purpose: The automated segmentation of each lung and the trachea in CT scans is commonly taken as a solved problem. However, existing approaches may easily fail in the presence of abnormalities caused by a disease, trauma, or previous surgery. For robustness, we present ALTIS (implementation available at http://lids.ic.unicamp.br/downloads), a fast automatic lung and trachea CT-image segmentation method that relies on image features and relative shape- and intensity-based characteristics less affected by most appearance variations of abnormal lungs and trachea.
Methods: ALTIS consists of a sequence of image foresting transforms (IFTs) organized in three main steps: (a) lung-and-trachea extraction, (b) seed estimation inside background, trachea, left lung, and right lung, and (c) their delineation such that each object is defined by an optimum-path forest rooted at its internal seeds. We compare ALTIS with two methods based on shape models (SOSM-S and MALF), and one algorithm based on seeded region growing (PTK).
Results: The experiments involve the highest number of scans found in the literature (1255 scans from multiple public data sets containing many anomalous cases), with only 50 normal scans used for training and 1205 scans used for testing the methods. Quantitative experiments are based on two metrics, DICE and ASSD. Furthermore, we also demonstrate the robustness of ALTIS in seed estimation. On the test set, the proposed method achieves an average DICE of 0.987 for both lungs and 0.898 for the trachea, and an average ASSD of 0.938 for the right lung, 0.856 for the left lung, and 1.316 for the trachea. These results indicate that ALTIS is statistically more accurate and considerably faster than the compared methods, being able to complete segmentation in a few seconds on modern PCs.
Conclusion: ALTIS is the most effective and efficient choice among the compared methods to segment left lung, right lung, and trachea in anomalous CT scans for subsequent detection, segmentation, and quantitative analysis of abnormal structures in the lung parenchyma and pleural space.
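Of the two evaluation metrics above, DICE measures volumetric overlap between a segmentation and its ground truth (1.0 means perfect agreement), while ASSD measures the average distance between their surfaces. A minimal Dice implementation on binary masks, with flat lists standing in for 3D volumes:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (flat 0/1 lists):
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for perfect overlap."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

# Two masks agreeing on 2 of their 3 foreground voxels each:
print(dice([1, 1, 1, 0], [0, 1, 1, 1]))  # → 0.6666666666666666
```

ASSD additionally requires extracting the mask surfaces and computing nearest-surface distances in both directions, which is why it is sensitive to boundary errors that Dice barely registers on large objects.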
Extending Supervoxel-based Abnormal Brain Asymmetry Detection to the Native Image Space
Samuel B. Martins, Alexandru C. Telea, Alexandre X. Falcão
IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, pp. 450-453, 2019.
Most neurological diseases are associated with abnormal brain asymmetries. Recent advances in automatic unsupervised techniques model normal brain asymmetries from healthy subjects only and treat anomalies as outliers. Outlier detection is usually done in a common standard coordinate space, which limits its usability. To alleviate the problem, we extend a recent fully unsupervised supervoxel-based approach (SAAD) for abnormal asymmetry detection to the native image space of MR brain images. Experimental results show that our new method, called N-SAAD, can achieve higher detection accuracy with considerably fewer false positives than a method based on unsupervised deep learning on a large set of MR-T1 images.
A Supervoxel-Based Approach for Unsupervised Abnormal Asymmetry Detection in MR Images of the Brain
Samuel B. Martins, Guilherme Ruppert, Fabiano Reis, Clarissa L. Yasuda, Alexandre X. Falcão
International Symposium on Biomedical Imaging (ISBI), IEEE, pp. 882-885, 2019.
Several pathologies are associated with abnormal asymmetries in brain images, and their automated detection can improve diagnosis, segmentation, and automatic analysis of abnormal brain tissues (e.g., lesions). In this paper, we introduce a fully unsupervised supervoxel-based approach for abnormal asymmetry detection in MR images of the brain. We also present a new method for symmetrical supervoxel extraction called SymmISF. Experiments over a large set of MR-T1 images reveal higher detection rates and considerably fewer false positives in comparison to a deep learning autoencoder approach.
Modeling normal brain asymmetry in MR images applied to anomaly detection without segmentation and data annotation
Samuel B. Martins, Barbara C. Benato, Bruna F. Silva, Clarissa L. Yasuda, Alexandre X. Falcão
SPIE Medical Imaging, vol. 10950, pp. 71-80, 2019.
While the human brain presents natural structural asymmetries between the left and right hemispheres in MR images, most neurological diseases are associated with abnormal brain asymmetries. Due to the great variety of such anomalies, we present a framework to model normal structural brain asymmetry from control subjects only, independently of the neurological disease. The model dismisses data annotation by exploiting generative deep neural networks and one-class classifiers. We also propose a patch-based model to localize volumes of interest with reduced background sizes around selected brain structures, and a one-class classifier based on an optimum-path forest. This model makes the framework independent of segmentation, which may fail, especially in abnormal images, or may not be available for a given structure. We validate the framework on the detection of abnormal hippocampal asymmetry using distinct groups of epilepsy patients and test controls. The results of validation using the original feature space and a two-dimensional space based on non-linear projection show the potential to extend the framework to abnormal asymmetry detection in other parts of the brain and to develop intelligent and interactive virtual environments. For instance, the approach can be used for screening, inspection, and annotation of the detected anomaly type, allowing the development of CADx systems.
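The normal-asymmetry modeling can be illustrated with a toy sketch: describe each pair of mirrored patches by voxel-wise intensity differences, and accept a test pair only if its asymmetry stays within the envelope observed in controls. The envelope rule below is a deliberately simple stand-in for the paper's one-class classifiers, and all values are illustrative.

```python
# Hedged sketch of one-class normal-asymmetry modeling: the "classifier"
# is a per-voxel envelope of the asymmetry seen in healthy controls.
# This stands in for the paper's actual one-class classifiers.

def asymmetry(left, right):
    """Voxel-wise absolute differences between mirrored patches (flat lists)."""
    return [abs(l - r) for l, r in zip(left, right)]

def fit_envelope(control_pairs):
    """Per-voxel maximum asymmetry observed across healthy controls."""
    feats = [asymmetry(l, r) for l, r in control_pairs]
    return [max(f[i] for f in feats) for i in range(len(feats[0]))]

def is_abnormal(envelope, left, right, margin=1.0):
    """Flag a test pair whose asymmetry exceeds the control envelope."""
    return any(a > e + margin for a, e in zip(asymmetry(left, right), envelope))

controls = [([10, 10], [11, 10]), ([10, 12], [10, 10])]
env = fit_envelope(controls)                  # per-voxel bounds: [1, 2]
print(is_abnormal(env, [10, 10], [10, 20]))   # → True
```

Because the envelope is fit on controls only, no annotated lesions are needed, which is the property that lets the framework dismiss data annotation.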
Graph-Based Image Segmentation Using Dynamic Trees
Jordão Bragantini, Samuel B. Martins, Cesar Castelo-Fernandez, Alexandre X. Falcão
Iberoamerican Congress on Pattern Recognition (CIARP), Springer, pp. 470-478, 2018.
Image segmentation methods have been actively investigated, with graph-based approaches among the most popular for object delineation from seed nodes. In this context, one can design segmentation methods by distinct choices of the image graph and connectivity function—i.e., a function that measures how strongly connected a seed and a node are through a given path. The framework is known as the Image Foresting Transform (IFT), and it can define, by seed competition, each object as one optimum-path forest rooted in its internal seeds. In this work, we extend the general IFT algorithm to extract object information as the trees evolve from the seed set and use that information to estimate arc weights, positively affecting the connectivity function, during segmentation. The new framework is named Dynamic IFT (DynIFT), and it can make object delineation more effective by exploiting color, texture, and shape information from those dynamic trees. In comparison with other state-of-the-art graph-based approaches, the experimental results on natural images show that DynIFT-based object delineation methods can be significantly more accurate.
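The seed-competition mechanism of the IFT can be sketched with a Dijkstra-like propagation on a small grid, using the common f_max connectivity function (a path's cost is its maximum arc weight, with arc weight taken as the absolute intensity difference). This illustrates the optimum-path forest idea only; DynIFT's dynamic arc-weight estimation from the growing trees is not reproduced here.

```python
import heapq

# Hedged sketch of IFT seed competition on a 4-neighbor image grid: each
# pixel is conquered by the seed offering the cheapest path under the
# f_max connectivity function. Illustrative only; not DynIFT itself.

def ift_segment(image, seeds):
    """image: 2D list of intensities; seeds: {(row, col): label}.
    Returns a 2D label map defined by seed competition."""
    rows, cols = len(image), len(image[0])
    cost = [[float("inf")] * cols for _ in range(rows)]
    label = [[None] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in seeds.items():
        cost[r][c] = 0
        label[r][c] = lab
        heapq.heappush(heap, (0, r, c, lab))
    while heap:
        cst, r, c, lab = heapq.heappop(heap)
        if cst > cost[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # f_max: a path's cost is its maximum arc weight
                new_cost = max(cst, abs(image[nr][nc] - image[r][c]))
                if new_cost < cost[nr][nc]:
                    cost[nr][nc] = new_cost
                    label[nr][nc] = lab
                    heapq.heappush(heap, (new_cost, nr, nc, lab))
    return label

img = [[0, 0, 9, 9],
       [0, 0, 9, 9]]
print(ift_segment(img, {(0, 0): "obj", (0, 3): "bg"}))
# → [['obj', 'obj', 'bg', 'bg'], ['obj', 'obj', 'bg', 'bg']]
```

Each seed's tree grows along low-contrast paths and stops at strong edges, so the label map is exactly the optimum-path forest rooted at the seed set.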
A Fast and Robust Negative Mining Approach for Enrollment in Face Recognition Systems
Samuel B. Martins, Giovani Chiachia, Alexandre X. Falcão
Conference on Graphics, Patterns and Images (SIBGRAPI), IEEE, pp. 201-208, 2017.
Consider a face image data set from the clients of a company and the problem of building a face recognition system from it. Video cameras can be used to acquire several images per client in order to maximize the robustness of the system. However, as the data set grows large, the accuracy of the system might be seriously compromised, since the number of negative samples for each user increases. We propose a first solution for this problem, which (i) limits the number of negative samples in the training set to preserve responsiveness during user enrollment, (ii) selects the most informative negative samples with respect to each user to preserve accuracy, and (iii) builds a user-specific classification model. We combine a high-dimensional data representation from deep learning with a method that selects negative samples from a large mining set and builds, within interactive times, an effective user-specific training set and classifier using linear support vector machines. The method can also be used with other feature extractors. It has shown superior performance compared to five baseline methods on three unconstrained data sets.
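Step (ii), selecting the most informative negatives for a user, can be illustrated by ranking the mining set by proximity to the user's samples and keeping the k closest (hardest) ones. The distance-to-centroid criterion below is an illustrative stand-in for the paper's actual selection method.

```python
# Hedged sketch of user-specific negative mining: from a large mining set,
# keep only the k negatives closest to the user's own samples, i.e. the
# hardest ones for a linear classifier to separate. The centroid-distance
# criterion is illustrative, not the paper's exact selection rule.

def mine_hard_negatives(user_feats, mining_set, k):
    """user_feats / mining_set: lists of feature vectors (lists of floats).
    Returns the k mining-set vectors nearest to the user's mean feature."""
    dim = len(user_feats[0])
    centroid = [sum(f[i] for f in user_feats) / len(user_feats)
                for i in range(dim)]

    def dist2(v):
        # squared Euclidean distance to the user centroid
        return sum((a - b) ** 2 for a, b in zip(v, centroid))

    return sorted(mining_set, key=dist2)[:k]

user = [[1.0, 1.0], [1.0, 3.0]]                        # centroid = [1.0, 2.0]
negatives = [[9.0, 9.0], [1.5, 2.0], [5.0, 5.0], [0.5, 2.5]]
print(mine_hard_negatives(user, negatives, 2))  # → [[1.5, 2.0], [0.5, 2.5]]
```

Capping the training set at k hard negatives is what keeps per-user enrollment fast while discarding only the easy negatives a linear SVM would separate anyway.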
A multi-object statistical atlas adaptive for deformable registration errors in anomalous medical image segmentation
Samuel B. Martins, Thiago V. Spina, Clarissa L. Yasuda, Alexandre X. Falcão
Statistical atlases have played an important role in automated medical image segmentation. However, a challenge has been to make the atlas more adaptable to possible errors in the deformable registration of anomalous images, given that the body structures of interest might present significant differences in shape and texture. Recently, deformable registration errors have been accounted for by a method that locally translates the statistical atlas over the test image, after registration, and evaluates candidate objects from a delineation algorithm in order to choose the best one as the final segmentation. In this paper, we improve its delineation algorithm and extend the model to a multi-object statistical atlas, built from control images and adaptable to anomalous images by incorporating a texture classifier. As a first proof of concept, we instantiate the new method for segmenting, object-by-object and all objects simultaneously, the left and right brain hemispheres and the cerebellum, without the brainstem, and evaluate it on MR-T1 images of epilepsy patients before and after brain surgery, which removed portions of the temporal lobe. The results show efficiency gains with statistically significantly higher accuracy, as measured by the mean Average Symmetric Surface Distance, with respect to the original approach.
Interactive Medical Image Segmentation by Statistical Seed Models
Thiago V. Spina, Samuel B. Martins, Alexandre X. Falcão
Conference on Graphics, Patterns and Images (SIBGRAPI), IEEE, pp. 273-280, 2016.
Interactive 3D object segmentation is an important and challenging activity in medical imaging, though tedious and error-prone. Automatic segmentation methods aim to replace the user altogether, but require user interaction to produce training data sets of segmented masks and to make error corrections. We propose a complete framework for interactive medical image segmentation that reduces user effort by automatically providing an initial segmentation result. To this end, we develop a Statistical Seed Model (SSM) that learns from seed sets selected by robot users when reconstructing masks of previously segmented images. The SSM outputs a seed set that may be used to automatically delineate a new test image. The seeds provide both an implicit object shape constraint and a flexible way of interactively correcting segmentation. We demonstrate that our framework decreases the amount of user interaction by a factor of three when segmenting MR images of the cerebellum.
Diagnosis of Human Intestinal Parasites by Deep Learning
Alan Z. Peixinho, Samuel B. Martins, John E. Vargas, Alexandre X. Falcão, Jeancarlo F. Gomes, Celso T. N. Suzuki
Eccomas Thematic Conference on Computational Vision and Medical Image Processing (VipIMAGE), 2015.
Intestinal parasitic infections can cause serious health problems, especially in children and immunodeficient adults. In order to make the diagnosis of intestinal parasites fast and effective, we have developed an automated system based on optical microscopy image analysis. This work presents a deep learning approach to discover more effective parasite image features from a small training set. We also discuss how to prepare the training set in order to cope with object scale and pose variations. By using random kernels, our approach considerably simplifies the learning task of a suitable convolutional network architecture for feature extraction. The results demonstrate significant accuracy gains in classification of the 15 most common species of human intestinal parasites in Brazil with respect to our previous state-of-the-art solution.
Medical image segmentation using object shape models: A critical review on recent trends, and alternative directions
A. X. Falcão, T. V. Spina, S. B. Martins, R. Phellan
Eccomas Thematic Conference on Computational Vision and Medical Image Processing (VipIMAGE), 2015.
Segmentation is important to define the spatial extension of anatomic body structures (objects) in medical images for quantitative analysis. In this context, it is desirable to eliminate (or at least minimize) user interaction. This aim is feasible by combining object delineation algorithms with Object Shape Models (OSMs). While the former can better capture the actual shape of the object in the image, the latter provides shape constraints to assist its location and delineation. We review two important classes of OSMs for medical image segmentation: Statistical (SOSMs) and Fuzzy (FOSMs). SOSMs rely on the image mapping onto a reference coordinate system, which indicates the probability of each voxel to be in the object (a probabilistic atlas built from a set of training images and their segmentation masks). Imperfect mappings due to shape and texture variations ask for object delineation algorithms, but the methods usually assume that the atlas is at the best position for delineation. Multiple atlases per object can mitigate the problem, and a recent trend is to use each training mask as an individual atlas. By mapping them onto the coordinate system of a new image, object delineation can be accomplished by label fusion. However, the processing time for deformable registration is critical to make SOSMs suitable for large-scale studies. FOSMs appear as a recent alternative that avoids reference systems (deformable registration) by translating the training masks to a common reference point for model construction. This relaxes the shape constraints, but asks for a more effective object delineation algorithm and an efficient approach for the object's location. One of the solutions, named optimum object search, translates the model inside an estimated search region in the image while a criterion function guides translation and determines the best delineated object among the candidates. This makes segmentation with FOSMs considerably faster than with SOSMs, but SOSMs that adopt the optimum object search can be more effective with fewer atlases per object. We then discuss the pros and cons of the recent FOSM and SOSM approaches and provide alternative directions, which also include involving the user to correct segmentation errors and improve the models.
I have been acting as an expert reviewer for the following academic outlets: