Nogueira K, Dalla Mura M, Chanussot J, Schwartz WR & dos Santos JA (2016) Learning to semantically segment high-resolution remote sensing images. In: 2016 23rd International Conference on Pattern Recognition (ICPR) Proceedings. 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 04.12.2016-08.12.2016. Piscataway, NJ: IEEE, pp. 3566-3571. https://doi.org/10.1109/icpr.2016.7900187
Abstract Land cover classification is a task that requires methods capable of learning high-level features while dealing with high volumes of data. Convolutional Networks (ConvNets) address both challenges: they learn features that are specific and adaptable to the data while simultaneously learning classifiers. In this work, we propose a novel technique to automatically perform pixel-wise land cover classification. To the best of our knowledge, no other work in the literature performs pixel-wise semantic segmentation based on data-driven feature descriptors for high-resolution remote sensing images. The main idea is to exploit the power of ConvNet feature representations to learn how to semantically segment remote sensing images. First, our method learns each label in a pixel-wise manner, taking into account the spatial context of each pixel. In the prediction phase, the probability of a pixel belonging to a class is likewise estimated according to its spatial context and the learned patterns. We conducted a systematic evaluation of the proposed algorithm using two remote sensing datasets with very distinct properties. Our results show that the proposed algorithm yields accuracy improvements ranging from 5% to 15% over traditional and state-of-the-art methods.
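The abstract describes classifying each pixel from its spatial context. The sketch below illustrates that general patch-based scheme in Python/NumPy: a context window is extracted around every pixel and passed to a classifier. The helper names (`extract_patch`, `predict_pixelwise`), the patch size, and the toy threshold classifier are illustrative assumptions, not the paper's actual ConvNet architecture.

```python
import numpy as np

def extract_patch(image, row, col, patch_size):
    """Extract a square context window centered on (row, col).
    Image borders are reflect-padded so every pixel gets a full-size
    patch. (Hypothetical helper, not taken from the paper.)"""
    half = patch_size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    # (row, col) in the original image maps to (row + half, col + half)
    # in the padded image, so this slice is centered on the pixel.
    return padded[row:row + patch_size, col:col + patch_size]

def predict_pixelwise(image, classify_patch, patch_size=7):
    """Assign a label to every pixel from its surrounding patch."""
    h, w, _ = image.shape
    labels = np.empty((h, w), dtype=np.int64)
    for r in range(h):
        for c in range(w):
            labels[r, c] = classify_patch(extract_patch(image, r, c, patch_size))
    return labels

# Toy stand-in for a trained ConvNet: threshold the patch's mean intensity.
rng = np.random.default_rng(0)
image = rng.random((16, 16, 3))
labels = predict_pixelwise(image, lambda p: int(p.mean() > 0.5), patch_size=7)
print(labels.shape)  # (16, 16)
```

In practice the per-patch classifier would be a trained ConvNet producing class probabilities, and the per-pixel loop would be replaced by batched inference for speed; this sketch only shows the data flow.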