Show simple item record

dc.contributor.author: Shuhao Ran
dc.contributor.author: Xianjun Gao
dc.contributor.author: Yuanwei Yang
dc.contributor.author: Shaohua Li
dc.contributor.author: Guangbin Zhang
dc.contributor.author: Ping Wang
dc.contributor.other: School of Geosciences, Yangtze University, Wuhan 430100, China
dc.contributor.other: School of Geosciences, Yangtze University, Wuhan 430100, China
dc.contributor.other: School of Geosciences, Yangtze University, Wuhan 430100, China
dc.contributor.other: School of Geosciences, Yangtze University, Wuhan 430100, China
dc.contributor.other: School of Geosciences, Yangtze University, Wuhan 430100, China
dc.contributor.other: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
dc.date.accessioned: 2025-10-09T04:53:27Z
dc.date.available: 2025-10-09T04:53:27Z
dc.date.issued: 01-07-2021
dc.identifier.uri: https://www.mdpi.com/2072-4292/13/14/2794
dc.identifier.uri: http://digilib.fisipol.ugm.ac.id/repo/handle/15717717/40825
dc.description.abstract: Deep learning approaches have been widely used in automatic building extraction tasks and have made great progress in recent years. However, missed and false detections caused by spectral confusion remain a great challenge. Existing fully convolutional networks (FCNs) cannot effectively distinguish whether feature differences arise within a single building or between a building and its adjacent non-building objects. To overcome these limitations, this paper presents a building multi-feature fusion refined network (BMFR-Net) to extract buildings accurately and completely. BMFR-Net is based on an encoder-decoder structure and consists mainly of two parts: the continuous atrous convolution pyramid (CACP) module and the multiscale output fusion constraint (MOFC) structure. The CACP module, positioned at the end of the contracting path, effectively minimizes the loss of effective information during multiscale feature extraction and fusion by using parallel continuous small-rate atrous convolutions. To improve the network's ability to aggregate contextual semantic information, the MOFC structure produces a predictive output at each stage of the expanding path and integrates the results back into the network. Furthermore, a multilevel joint weighted loss function effectively updates parameters far from the output layer, enhancing the network's capacity to learn low-level abstract features. The experimental results demonstrate that the proposed BMFR-Net outperforms the other five state-of-the-art approaches in both visual interpretation and quantitative evaluation.
dc.language.iso: EN
dc.publisher: MDPI AG
dc.subject.lcc: Science
dc.title: Building Multi-Feature Fusion Refined Network for Building Extraction from High-Resolution Remote Sensing Images
dc.type: Article
dc.description.keywords: high-resolution remote sensing images
dc.description.keywords: building extraction
dc.description.keywords: multiscale features
dc.description.keywords: aggregate semantic information
dc.description.keywords: feature pyramid
dc.description.doi: 10.3390/rs13142794
dc.title.journal: Remote Sensing
dc.identifier.e-issn: 2072-4292
dc.identifier.oai: oai:doaj.org/journal:b0a985190eb849c7849483871159ae53
dc.journal.info: Volume 13, Issue 14
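The abstract's CACP module rests on atrous (dilated) convolution: sampling the input with gaps of a given rate enlarges the receptive field without extra parameters, and running several small-rate branches in parallel fuses multiscale context. The sketch below illustrates only that generic mechanism in NumPy; it is not the paper's implementation, and the function names (`atrous_conv2d`, `cacp_like`) and the sum-fusion of branches are illustrative assumptions.

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """2D atrous (dilated) convolution, zero padding, stride 1.

    A rate-r k x k kernel covers an effective window of
    k + (k - 1) * (r - 1) pixels, enlarging the receptive
    field without adding parameters.
    """
    k = kernel.shape[0]
    eff = k + (k - 1) * (rate - 1)      # effective receptive field
    pad = eff // 2                       # keep output size equal to input
    xp = np.pad(x, pad, mode="constant")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # sample the padded input with step `rate` (the dilation)
            patch = xp[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def cacp_like(x, kernel, rates=(1, 2, 3)):
    """Parallel small-rate atrous branches fused by summation
    (an illustrative stand-in for the CACP idea, not the paper's module)."""
    return sum(atrous_conv2d(x, kernel, r) for r in rates)
```

With a kernel whose only nonzero weight is the center, each branch reproduces the input regardless of rate, which makes the dilation bookkeeping easy to sanity-check.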

