Our Slim Scissors enables quick extraction of elongated thin parts by simply brushing some coarse scribbles.
Existing interactive segmentation algorithms typically fail when segmenting objects with elongated thin structures (e.g., bicycle spokes). Though some recent efforts attempt to address this challenge by introducing a new synthetic dataset and a three-stream network design, they suffer from two limitations: 1) a large performance gap when tested on real images; 2) still requiring extensive user interaction (clicks) if the thin structures are not well segmented. To address these issues, we develop Slim Scissors, which enables quick extraction of elongated thin parts by simply brushing some coarse scribbles. Our core idea is to segment thin parts by learning to compare the original image to a synthesized background without thin structures. Our method is model-agnostic and seamlessly applicable to existing state-of-the-art interactive segmentation models. To further reduce the annotation burden, we devise a similarity detection module, which enables the model to automatically synthesize backgrounds for other similar thin structures from only one or two scribbles. Extensive experiments on COIFT, HRSOD and ThinObject-5K clearly demonstrate the superiority of Slim Scissors for thin object segmentation: it outperforms TOS-Net by 5.9% IoU_thin and 3.5% F-score on the real dataset HRSOD.
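The core idea of comparing the image against a thin-structure-free background can be sketched in a few lines of NumPy (a toy illustration only: the function name, fixed threshold, and the assumption that a synthesized background is already available are ours, not the paper's actual learned model):

```python
import numpy as np

def thin_structure_mask(image, synthesized_bg, threshold=0.1):
    """Toy sketch: mark thin parts as pixels where the original image
    differs from a synthesized background with thin structures removed
    (e.g., obtained via inpainting). Illustrative only."""
    diff = np.abs(image.astype(np.float32) - synthesized_bg.astype(np.float32))
    # Average the difference over color channels, normalize, threshold.
    diff_map = diff.mean(axis=-1) / 255.0
    return diff_map > threshold

# Toy example: a flat background with one thin bright vertical "spoke".
bg = np.full((8, 8, 3), 100, dtype=np.uint8)
img = bg.copy()
img[:, 4] = 255  # the thin structure
mask = thin_structure_mask(img, bg)
```

In this toy case, the mask recovers exactly the one-pixel-wide column, which is the kind of structure that standard segmentation networks tend to miss; the actual method learns this comparison rather than using a hand-set threshold.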
Demos are coming.
Kunyang Han, Jun Hao Liew, Jiashi Feng, Huawei Tian, Yao Zhao, and Yunchao Wei
In ECCV 2022.
@InProceedings{han2022slim,
title = {Slim Scissors: Segmenting Thin Object from Synthetic Background},
author = {Han, Kunyang and Liew, Jun Hao and Feng, Jiashi and Tian, Huawei and Zhao, Yao and Wei, Yunchao},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
}
This template was originally made by Phillip Isola and Richard Zhang for a colorful project, and inherits the modifications made by Jason Zhang. The code can be found here.