Posts

Reading notes: Federated Learning with Only Positive Labels

This post is a reading note for the paper "Federated Learning with Only Positive Labels" by Yu, Felix X., et al., ICML 2020. Broadly speaking, the authors consider learning a multi-class classification model in the federated setting, where each user has access to positive data from only a single class. Specifically, they propose a generic framework named Federated Averaging with Spreadout (FedAwS), where the server imposes a geometric regularizer after each round to encourage the class embeddings to spread out in the embedding space. They show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning.

Introduction

In this work, the authors consider learning a classification model in the federated learning setup, where each user only has access to data from a single class. Examples of such settings include decentralized training of face recognition or speaker identification models, where each user's classifier holds sensitive…
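To make the spreadout idea concrete, below is a minimal PyTorch sketch of a hinge-based pairwise regularizer of the kind the paper describes; the function name, the margin nu, and the server-side update in the trailing comment are illustrative assumptions rather than the authors' exact code.

import torch

def spreadout_regularizer(W: torch.Tensor, nu: float = 1.0) -> torch.Tensor:
    # W: (C, d) matrix of class embeddings held by the server; nu: separation margin.
    C = W.size(0)
    i, j = torch.triu_indices(C, C, offset=1)   # all unordered class pairs
    d = (W[i] - W[j]).norm(dim=1)               # pairwise Euclidean distances
    # Hinge penalty: only pairs of classes closer than the margin contribute.
    return torch.clamp(nu - d, min=0.0).pow(2).sum()

# After averaging the client updates, the server would take a gradient step
# on the class-embedding matrix alone, e.g.:
#   loss = spreadout_regularizer(class_embeddings, nu=0.9)
#   loss.backward()   # then update class_embeddings with the server learning rate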

Reading notes: Degenerative Adversarial NeuroImage Nets: Generating Images that Mimic Disease Progression

This post is a reading note for the paper "Degenerative Adversarial NeuroImage Nets: Generating Images that Mimic Disease Progression" by Ravi, Daniele, et al., MICCAI 2019. Broadly speaking, the authors try to simulate images representative of neurodegenerative diseases. Specifically, they design a novel network, the Degenerative Adversarial NeuroImage Net (DaniNet), to produce accurate and convincing synthetic images that emulate disease progression.

Introduction

Disease progression modelling helps to map out longitudinal change during chronic diseases. However, modelling temporal neurodegeneration in full-resolution MRI remains a challenging problem. Several traditional simulators proposed in the literature are extremely resource-demanding and do not scale to high-resolution images. [1] proposed a GAN-based deep learning framework to manipulate MRI, but its assumptions are over-simplified: 1. disease progression is modelled linearly, and 2. morphological changes…

Reading notes: Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

This post is a reading note for the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" by Utku Ozbulak, et al., MICCAI 2019. Broadly speaking, the authors try to generate more realistic adversarial attacks on medical image segmentation tasks. Specifically, they design a novel generation method that produces an adversarial example which misleads the trained model into predicting a targeted, realistic prediction mask with limited perturbation.

Introduction

Recent studies have developed various deep learning-based models for medical imaging systems, showing strong performance on several clinical prediction tasks with minimal labor expense. However, deep learning-based models come with a major security problem: they can be completely fooled by so-called adversarial examples. Adversarial examples are model inputs with imperceptibly small perturbations that lead to misclassification. Consequently, the users of these systems…
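The paper's generation method is specific to realistic target masks, but the underlying mechanics resemble a targeted, iterative gradient attack. Below is a minimal PyTorch sketch of such an attack against a segmentation model; the budget eps, step size alpha, and step count are assumed hyperparameters, not values from the paper.

import torch
import torch.nn.functional as F

def targeted_segmentation_attack(model, x, target_mask, eps=8/255, alpha=1/255, steps=40):
    # x: input image batch in [0, 1]; target_mask: (N, H, W) desired class labels.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                          # (N, classes, H, W)
        loss = F.cross_entropy(logits, target_mask)    # distance to the *target* mask
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # descend toward the target
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)     # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()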

Reading notes: Improving Adversarial Robustness (two papers in ICCV 2019)

This post is a reading note for two papers: "Improving Adversarial Robustness via Guided Complement Entropy" by Hao-Yun Chen, et al., ICCV 2019, and "Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks" by Aamir Mustafa, et al., ICCV 2019. Broadly speaking, both papers deal with the problem of adversarial attacks on natural images. Specifically, the authors of both papers design novel training methods that force the network to learn well-separated feature representations for different classes, which makes it much harder for an attacker to fool the network within a restricted perturbation budget.

Introduction

Deep neural networks are vulnerable to adversarial attacks: inputs intentionally crafted to fool them by adding imperceptibly small perturbations to an image. Several papers have pointed out that a main reason for the existence of such perturbations is the close proximity of different classes…
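As a concrete illustration of the first paper's idea, here is a sketch of a guided complement-entropy style objective in PyTorch: it flattens the probability mass the model assigns to the wrong classes, weighted by its confidence on the true class. The exponent alpha and the normalization details are assumptions; consult the paper for the exact formulation.

import torch
import torch.nn.functional as F

def guided_complement_entropy(logits, targets, alpha=0.2, eps=1e-12):
    probs = F.softmax(logits, dim=1)
    y_true = probs.gather(1, targets.unsqueeze(1))        # (N, 1) prob of the true class
    # Renormalized distribution over the complement (wrong) classes.
    comp = probs.scatter(1, targets.unsqueeze(1), 0.0)
    comp = comp / (1.0 - y_true + eps)
    neg_entropy = (comp * (comp + eps).log()).sum(dim=1)  # lower = flatter complement
    # Guide by confidence on the true class; minimizing flattens wrong-class probabilities.
    return (y_true.squeeze(1).pow(alpha) * neg_entropy).mean()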

Reading notes: Non-Local Context Encoder: Robust Biomedical Image Segmentation against Adversarial Attacks

This post is a reading note for the paper "Non-Local Context Encoder: Robust Biomedical Image Segmentation against Adversarial Attacks" by He, Xiang, et al., AAAI 2019. Broadly speaking, this paper deals with the problem of adversarial attacks on biomedical image segmentation models. Specifically, it proposes a non-local context encoder that models short- and long-range spatial dependencies and encodes global contexts to enhance adversarial robustness.

Introduction

Medical image segmentation is a fundamental part of image analysis for computer-aided diagnosis, offering pixel-level annotations of regions of interest (ROIs) such as organs, lesions, and structures in medical images (e.g., MRI, CT, X-ray). It is challenging, however, due to the limited size and diversity of training datasets. With advances in hardware, CNN-based methods such as U-Net and FCN have achieved great success in medical segmentation. In parallel to this progress, recent work shows that semantic segmentation…
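Although the paper's non-local context encoder has its own design, it builds on the familiar non-local (self-attention) operation. Below is a minimal PyTorch sketch of a generic non-local block with an embedded-Gaussian pairing; the layer names and the choice of halving the channel dimension are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class NonLocalBlock2D(nn.Module):
    # Every spatial position attends to every other, capturing both short-
    # and long-range dependencies in a single layer.
    def __init__(self, channels: int):
        super().__init__()
        self.inter = channels // 2
        self.theta = nn.Conv2d(channels, self.inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, self.inter, kernel_size=1)
        self.g = nn.Conv2d(channels, self.inter, kernel_size=1)
        self.out = nn.Conv2d(self.inter, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (N, HW, C')
        k = self.phi(x).flatten(2)                    # (N, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (N, HW, C')
        attn = torch.softmax(q @ k, dim=-1)           # (N, HW, HW) pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(n, self.inter, h, w)
        return x + self.out(y)                        # residual connection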