Posts

Showing posts from 2020

Reading notes: Federated Learning with Only Positive Labels

This blog post is a reading note for the paper "Federated Learning with Only Positive Labels" by Yu, Felix X., et al., ICML 2020. Broadly speaking, the authors consider learning a multi-class classification model in the federated setting, where each user has access to positive data from only a single class. Specifically, they propose a generic framework named Federated Averaging with Spreadout (FedAwS), in which the server imposes a geometric regularizer after each round to encourage class embeddings to spread out in the embedding space. They show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning. Introduction In this work, the authors consider learning a classification model in the federated learning setup, where each user only has access to a single class. Examples of such settings include decentralized training of face recognition or speaker identification models, where the classifier of the users has sensitive
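To make the "geometric regularizer" concrete, here is a minimal sketch of a spreadout-style penalty: a hinge on the pairwise Euclidean distance between class embeddings, so that classes closer than a margin are pushed apart. The function name, the margin value, and the exact hinge form are illustrative assumptions; the paper's precise regularizer may differ.

```python
import numpy as np

def spreadout_regularizer(W, margin=1.0):
    """Sketch of a spreadout-style geometric regularizer (illustrative,
    not the paper's exact formula): penalize every pair of class
    embeddings whose Euclidean distance falls below `margin`."""
    n = W.shape[0]
    reg = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(W[i] - W[j])
            # Squared hinge: zero once the pair is at least `margin` apart.
            reg += max(0.0, margin - d) ** 2
    return reg

# Well-separated class embeddings incur no penalty; crowded ones do.
far = np.array([[0.0, 0.0], [3.0, 0.0]])
close = np.array([[0.0, 0.0], [0.1, 0.0]])
```

In the federated setting the server would add such a term when aggregating, since no single client sees more than its own class embedding.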

Reading notes: Degenerative Adversarial NeuroImage Nets: Generating Images that Mimic Disease Progression

This blog post is a reading note for the paper "Degenerative Adversarial NeuroImage Nets: Generating Images that Mimic Disease Progression" by Ravi, Daniele, et al., MICCAI 2019. Broadly speaking, the authors try to simulate images representative of neurodegenerative diseases. Specifically, they design a novel network named Degenerative Adversarial NeuroImage Net (DaniNet) to produce accurate and convincing synthetic images that emulate disease progression. Introduction Disease progression modelling helps map out longitudinal change during chronic diseases. However, modelling the temporal neurodegeneration of full-resolution MRI remains a challenging problem. Several traditional simulators proposed in the literature are extremely resource-demanding and are not scalable to high-resolution images. [1] proposed a deep learning framework based on GANs to manipulate MRI, but their assumptions are over-simplified: 1. disease progression is modelled linearly, and 2. morphological chan

Reading notes: Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

This blog post is a reading note for the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" by Utku Ozbulak, et al., MICCAI 2019. Broadly speaking, the authors try to generate more realistic adversarial attacks on medical image segmentation tasks. Specifically, they design a novel generation method to produce an adversarial example that misleads a trained model into predicting a targeted, realistic prediction mask with limited perturbation. Introduction Recent studies have developed various deep learning-based models for medical imaging systems. They have shown strong performance on some clinical prediction tasks with minimal labour expenses. However, deep learning-based models come with a major security problem: they can be completely fooled by so-called adversarial examples. Adversarial examples are model inputs with imperceptibly small perturbations that lead to misclassification. Consequently, the users of these syst
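As a minimal illustration of what "imperceptibly small perturbation" means, here is a sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al.) applied to a toy logistic model. This is not the paper's segmentation attack, which targets a full prediction mask; it only shows the basic mechanism of gradient-aligned, bounded perturbations. The toy weights and epsilon are illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """FGSM step: move each pixel by at most `eps` in the direction that
    increases the loss, then clip back to the valid input range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy logistic "model" p = sigmoid(w.x) with an analytic loss gradient.
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
p = 1.0 / (1.0 + np.exp(-w @ x))        # confidence in the true label
grad_x = (p - 1.0) * w                  # d(-log p)/dx for true label 1
x_adv = fgsm_perturb(x, grad_x, eps=0.05)
p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
```

The perturbation is bounded component-wise by epsilon, yet the model's confidence in the correct label drops; the paper's contribution is extending this idea so that the wrong output is itself a realistic-looking mask.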