ZENODO
Dataset
Data sources: ZENODO

DLSR-FireCNet: A deep learning framework for burned area mapping based on decision level super-resolution

Authors: Seydi, Seyd Teymoor; Sadegh, Mojtaba

Abstract

Associated Publication: Seydi, S.T. & Sadegh, M. (2025). DLSR-FireCNet: A deep learning framework for burned area mapping based on decision level super-resolution. Remote Sensing Applications: Society and Environment, 37, 101513. https://doi.org/10.1016/j.rsase.2025.101513

Input Features Source: MODIS surface reflectance product
Spectral bands used: Red (Band 1) and Near-Infrared / NIR (Band 2)
Native input resolution: 250 m
Image structure: Bi-temporal pairs, one pre-fire image and one post-fire image per event
Architecture target: The model is trained to produce burned area maps at 30 m effective resolution via decision-level super-resolution
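To illustrate how such bi-temporal Red/NIR inputs and a decision-level upsampling step fit together, the sketch below uses a simple NDVI-drop threshold at the coarse 250 m grid and then upsamples the resulting burned/unburned decision map toward a finer grid. The threshold value, the integer upscale factor, and the nearest-neighbour upsampling are illustrative assumptions; they are not the learned network described in the associated publication.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """NDVI from MODIS Band 1 (Red) and Band 2 (NIR) reflectance."""
    return (nir - red) / (nir + red + eps)

def burned_mask_sketch(pre_red, pre_nir, post_red, post_nir,
                       dndvi_threshold=0.1, upscale=8):
    """Toy stand-in for the pipeline shape: threshold a bi-temporal NDVI
    drop at the native 250 m grid, then upsample the *decision map*
    (not the reflectance) to a finer grid, mirroring the decision-level
    ordering of operations. All parameters here are hypothetical."""
    dndvi = ndvi(pre_red, pre_nir) - ndvi(post_red, post_nir)  # vegetation loss
    coarse_decision = (dndvi > dndvi_threshold).astype(np.uint8)  # 1 = burned
    # Nearest-neighbour upsampling of the binary decision via a Kronecker product.
    return np.kron(coarse_decision, np.ones((upscale, upscale), dtype=np.uint8))

# Synthetic 2x2 scene: top-left pixel loses NIR reflectance after the fire.
pre_red  = np.full((2, 2), 0.10); pre_nir  = np.full((2, 2), 0.50)
post_red = np.full((2, 2), 0.10); post_nir = np.array([[0.10, 0.50],
                                                       [0.50, 0.50]])
mask = burned_mask_sketch(pre_red, pre_nir, post_red, post_nir)
```

Note that 250 m / 30 m is roughly 8.3, not an integer, so a real implementation would resample onto the 30 m target grid rather than use a plain integer factor as done here for simplicity.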
