Research products: 195 results, page 1 of 20
Filters: Publications; Research data; Other literature type; Countries: GB, AT, IL; Source: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Sorted by relevance, 10 results per page

  • Publication . Article . Conference object . Other literature type . 2019
    Open Access
    Authors: 
    D. Backes; G. Schumann; F. N. Teferle; J. Boehm;
    Publisher: Copernicus GmbH
    Countries: Luxembourg, United Kingdom

    Abstract. The occurrence of urban flooding following strong rainfall events may increase as a result of climate change. Urban expansion, aging infrastructure and a growing number of impervious surfaces further exacerbate flooding. To increase resilience and support flood mitigation, bespoke, accurate flood modelling and reliable prediction are required. However, modelling floods in urban areas is particularly challenging. State-of-the-art flood inundation modelling is still often based on relatively low-resolution 2.5D bare-earth models with 2–5 m GSD. Current systems suffer from a lack of precise input data, numerical instabilities, and missing ancillary data such as drainage networks. In particular, the quality and resolution of the topographic input data represent a major source of uncertainty in urban flood modelling. A benchmark study is needed to define the accuracy requirements for highly detailed urban flood modelling and to improve our understanding of important threshold processes and the limitations of current methods and 3D mapping data alike. This paper presents the first steps in establishing a new, innovative multiscale data set suitable for benchmarking urban flood modelling. The final data set will consist of high-resolution 3D mapping data acquired from different airborne platforms, focusing on the use of drones (optical and LiDAR). The case study includes residential as well as rural areas in Dudelange, Luxembourg, which have been prone to localized flash flooding following strong rainfall events in recent years. The project also represents a cross-disciplinary collaboration between the geospatial and flood modelling communities. In this paper, we introduce the first steps towards building up the new benchmark data set together with some initial flood modelling results; more detailed investigations will follow in the next phases of this project.
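
To make the resolution gap concrete, the sketch below (not from the paper; the file name and target GSD are placeholders) resamples a high-resolution drone DSM down to the 2 m grid spacing typical of the 2.5D flood-model inputs mentioned above, using rasterio's average resampling.

```python
# Hypothetical example: coarsen a drone DSM to a flood-model-style grid.
import rasterio
from rasterio.enums import Resampling

TARGET_GSD = 2.0  # metres; typical grid spacing of current 2.5D flood-model inputs

with rasterio.open("drone_dsm.tif") as src:       # placeholder file name
    native_gsd = src.res[0]                       # e.g. 0.05 m from a drone survey
    scale = native_gsd / TARGET_GSD               # < 1, so fewer output cells
    out_shape = (max(1, int(src.height * scale)), max(1, int(src.width * scale)))
    coarse = src.read(1, out_shape=out_shape, resampling=Resampling.average)

print(f"resampled from {native_gsd} m to {TARGET_GSD} m GSD, grid {coarse.shape}")
```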

  • Publication . Article . Other literature type . 2019
    Open Access English
    Authors: 
    Y. Ding; Xianwei Zheng; Hanjiang Xiong; Y. Zhang;

    Abstract. With the rapid development of new indoor sensors and acquisition techniques, the number of indoor three-dimensional (3D) point cloud models has increased significantly. However, these massive "blind" point clouds struggle to satisfy the demands of many location-based indoor applications and GIS analyses. The robust semantic segmentation of 3D point clouds remains a challenge. In this paper, a segmentation with layout estimation network (SLENet)-based 2D–3D semantic transfer method is proposed for robust segmentation of image-based indoor 3D point clouds. Firstly, SLENet is devised to simultaneously derive semantic labels and an indoor spatial layout estimate from 2D images. A pixel labeling pool is then constructed, incorporating a visual graphical model, to realize efficient 2D–3D semantic transfer to the 3D point cloud while avoiding time-consuming pixel-wise label transfer and reprojection error. Finally, a 3D contextual refinement, which exploits extra-image consistency under 3D constraints, is developed to suppress the labeling contradictions caused by multi-superpixel aggregation. The experiments were conducted on an open dataset (NYUDv2 indoor dataset) and a local dataset. Compared with state-of-the-art 2D semantic segmentation methods, SLENet learns sufficiently discriminative features for inter-class segmentation while preserving clear boundaries for intra-class segmentation. Building on SLENet, the final 3D semantic segmentation tested on the point cloud created from the local image dataset reaches a total accuracy of 89.97%, expressing both object semantics and indoor structural information.
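
As a rough illustration of the generic 2D-to-3D label transfer this entry builds on (the paper's own pipeline uses a pixel labeling pool, a visual graphical model and a 3D contextual refinement rather than this naive projection), here is a minimal sketch with hypothetical camera intrinsics, pose and label map.

```python
# Hypothetical sketch: copy per-pixel semantic labels onto 3D points.
import numpy as np

def transfer_labels(points, K, R, t, label_map, unlabeled=-1):
    """Project 3D points into a semantically labeled image and copy labels back."""
    cam = (R @ points.T + t.reshape(3, 1)).T            # world -> camera coordinates
    labels = np.full(len(points), unlabeled, dtype=int)
    in_front = cam[:, 2] > 0                             # keep points in front of the camera
    uv = (K @ cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).round().astype(int)    # perspective division
    h, w = label_map.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[ok]
    labels[idx] = label_map[uv[ok, 1], uv[ok, 0]]        # image row = v, column = u
    return labels
```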

  • Publication . Conference object . Article . Other literature type . 2018
    Open Access
    Authors: 
    Haval Abdul-Jabbar Sadeq; Jane Drummond; Zhenhong Li;

    Abstract. A 2010 study examining ASTER GDEM v1 data revealed accuracies of 12-25 m and strong negative discrepancy biases compared with precise GPS observations at several test sites in China. Rather than investigating these further, with the advent of ASTER GDEM v2 a new series of tests was performed, again using precise GPS observations but also other DEMs. These tests found accuracies better than the expected 17 m (RMSE values of 3.9 m to 15.3 m) and no strong biases.
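
The accuracy figures quoted above are the usual mean bias and RMSE of DEM heights against GPS check points; a minimal sketch with made-up numbers, not the study's data:

```python
# Hypothetical vertical accuracy assessment of a DEM against GPS check points.
import numpy as np

dem_height = np.array([512.3, 498.7, 530.1, 505.9])   # heights sampled from the DEM (m)
gps_height = np.array([508.9, 497.2, 526.4, 503.0])   # precise GPS heights at the same points (m)

error = dem_height - gps_height
bias = error.mean()                        # negative value = DEM systematically below GPS
rmse = np.sqrt(np.mean(error ** 2))

print(f"bias = {bias:+.2f} m, RMSE = {rmse:.2f} m")
```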

  • Publication . Article . Other literature type . 2012
    Open Access
    Authors: 
    Wolfgang Thaller; Ulrich Krispel; Sven Havemann; Ivan Redi; Andrea Redi; Dieter W. Fellner;
    Publisher: Copernicus GmbH

    Abstract. In the course of a project related to green building design, we have created a group of eight parametric building models that can be manipulated interactively with respect to dimensions, number of floors, and a few other parameters. We report on the commonalities and differences between the models and the abstractions that we were able to identify.
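
A minimal sketch, not one of the paper's eight models, of what a parametric building model means here: derived geometry is regenerated whenever parameters such as the dimensions or the number of floors change.

```python
# Hypothetical parametric building: geometry follows from a few parameters.
from dataclasses import dataclass

@dataclass
class ParametricBuilding:
    width: float = 20.0        # footprint width in metres
    depth: float = 12.0        # footprint depth in metres
    floors: int = 4            # number of floors
    floor_height: float = 3.0  # storey height in metres

    def total_height(self) -> float:
        return self.floors * self.floor_height

    def floor_slabs(self):
        """Elevation of each floor slab; changing `floors` regenerates the list."""
        return [i * self.floor_height for i in range(self.floors)]

b = ParametricBuilding(floors=6)
print(b.total_height(), b.floor_slabs())
```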

  • Open Access English
    Authors: 
    Milto Miltiadou; Mark Warren; Mike Grant; Matthew Brown;
    Countries: Cyprus, United Kingdom

    Presented at the 36th International Symposium on Remote Sensing of Environment, Berlin, Germany, 11-15 May 2015.

    The overarching aim of this paper is to enhance the visualisation and classification of airborne remote sensing data for remote forest surveys. A new open source tool is presented for aligning hyperspectral and full-waveform LiDAR data. The tool produces coloured polygon representations of the scanned areas and aligned metrics from both datasets. Using data provided by NERC ARSF, tree coverage maps are generated and projected into the polygons. The 3D polygon meshes show well-separated structures and are suitable for direct rendering with commodity 3D-accelerated hardware, allowing smooth visualisation. The intensity profile of each wave sample is accumulated into a 3D discrete density volume, building a 3D representation of the scanned area. The 3D volume is then polygonised using the Marching Cubes algorithm. Further, three user-defined bands from the hyperspectral images are projected onto the polygon mesh as RGB colours. Regarding the classification of full-waveform LiDAR data, previous work extracted point clouds, while this paper introduces a new approach that derives information from the 3D volume representation and the hyperspectral data. We generate aligned metrics at multiple resolutions, including the standard deviation of the hyperspectral bands and the width of the reflected waveform derived from the volume. Tree coverage maps are then generated using a Bayesian probabilistic model and, because the two data sources are combined, higher-accuracy classification results are expected.
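
A minimal sketch of the volumetric step described above, using synthetic waveform samples rather than NERC ARSF data and an arbitrary voxel size and iso-level: accumulate intensities into a 3D density volume, then polygonise it with Marching Cubes.

```python
# Hypothetical example: waveform intensities -> density volume -> iso-surface.
import numpy as np
from skimage import measure

rng = np.random.default_rng(0)
xyz = rng.uniform(0, 10, size=(5000, 3))        # waveform sample positions (m)
intensity = rng.uniform(0, 1, size=5000)        # corresponding return intensities

voxel = 0.5                                     # voxel size in metres (arbitrary)
dims = np.ceil(xyz.max(axis=0) / voxel).astype(int) + 1
volume = np.zeros(dims)

idx = (xyz / voxel).astype(int)
np.add.at(volume, (idx[:, 0], idx[:, 1], idx[:, 2]), intensity)  # accumulate density

# Extract a polygon mesh from the accumulated density volume.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(len(verts), "vertices,", len(faces), "triangles")
```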

  • Open Access English
    Authors: 
    B Anbaroglu; Benjamin Heydecker; Tao Cheng;
    Publisher: Copernicus Publications
    Project: UKRI | Integrated Spatio-Tempora... (EP/G023212/1)

    Abstract. The occurrence of non-recurrent traffic congestion hinders the economic activity of a city, as travellers could miss appointments or be late for work or important meetings. Similarly, for shippers, unexpected delays may disrupt just-in-time delivery and manufacturing processes, which could result in lost payments. Consequently, research on non-recurrent congestion detection on urban road networks has recently gained attention. By analysing large amounts of traffic data collected on a daily basis, traffic operation centres can improve their methods to detect non-recurrent congestion rapidly and then revise their existing plans to mitigate its effects. Space-time clusters of high link journey time estimates correspond to non-recurrent congestion events. Existing research, however, has not considered the effect of travel demand on the effectiveness of non-recurrent congestion detection methods. Therefore, this paper investigates how travel demand affects the detection of non-recurrent traffic congestion on urban road networks. Travel demand is classified into three categories: low, normal and high. The experiments are carried out on London's urban road network, and the results demonstrate the necessity to adjust the relative importance of the component evaluation criteria depending on the travel demand level.
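
A minimal sketch of one ingredient discussed above: judging whether a link journey time is unusually high for its travel-demand level. The paper's actual method detects space-time clusters of high link journey time estimates; all thresholds and observations below are hypothetical.

```python
# Hypothetical demand-dependent check of link journey times.
import numpy as np

def demand_level(flow, low_q, high_q):
    """Classify demand as 'low', 'normal' or 'high' using flow quantiles."""
    if flow < low_q:
        return "low"
    if flow > high_q:
        return "high"
    return "normal"

# Hypothetical journey-time thresholds (seconds) per demand level for one link.
threshold = {"low": 90.0, "normal": 120.0, "high": 180.0}

flows = np.array([220, 540, 910])                # observed flows on the link (veh/h)
journey_times = np.array([85.0, 150.0, 170.0])   # estimated link journey times (s)
low_q, high_q = np.quantile(flows, [0.25, 0.75])

for flow, jt in zip(flows, journey_times):
    level = demand_level(flow, low_q, high_q)
    flagged = jt > threshold[level]
    print(f"demand={level:6s} journey_time={jt:5.1f}s unusually_high={flagged}")
```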

  • Open Access English
    Authors: 
    Isabella Toschi; Pablo Rodríguez-Gonzálvez; Fabio Remondino; S. Minto; S. Orlandini; A. Fuller;

    Abstract. This paper discusses a methodology to evaluate the precision and the accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric potential of this emerging mapping technology has been studied in only a few papers, which generally assume that errors follow a normal distribution. In fact, this hypothesis should be carefully verified in advance, in order to test how well classic Gaussian statistics can adapt to datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier filtering process. Finally, non-parametric statistical models are applied, in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL, the VMX-450 Mobile Laser Scanning System, is tested here. The test area is the historic city centre of Trento (Italy), selected in order to assess the system performance in dealing with a challenging and historic urban scenario. Reference measures are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a marked lack of symmetry, which leads to the conclusion that standard normal parameters are not adequate for assessing this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
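
A minimal sketch, with synthetic errors rather than the Trento survey data, contrasting Gaussian accuracy measures with robust non-parametric ones such as the median and NMAD, the kind of estimators typically used when errors are asymmetrical.

```python
# Hypothetical comparison of Gaussian vs robust error-dispersion measures.
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 0.02, 1000)                          # well-behaved errors (m)
errors = np.concatenate([errors, rng.uniform(0.2, 0.6, 40)])  # asymmetrical gross errors

mean, std = errors.mean(), errors.std()
median = np.median(errors)
nmad = 1.4826 * np.median(np.abs(errors - median))   # matches sigma for normal data

print(f"Gaussian: mean={mean:.3f} m, std={std:.3f} m")
print(f"Robust:   median={median:.3f} m, NMAD={nmad:.3f} m")
```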

  • Open Access English
    Authors: 
    Anna Lobovikov-Katz;
    Publisher: Copernicus Publications

    Abstract. Recognition of the value of the basic freehand sketch by the information and communication technology research and development community has led to advanced developments that use sketches as free-form input to complex computerized visualization processes, making those processes more widely accessible. However, the sharp reduction, and even exclusion, of this and other basic visual disciplines from education in science, technology, engineering and architecture dramatically reduces the number of future users of such applications. The unique needs of cultural heritage conservation pose specific challenges and encourage the formulation of innovative development tasks in related areas of information and communication technologies (ICT). This paper argues that introducing basic visual disciplines to both communities is essential for effectively integrating heritage conservation needs with advanced ICT development of conservation value, and beyond. It provides insight into the challenges and advantages of introducing these subjects in a relevant educational context, presents examples of their teaching and learning in the modern environment, including e-learning, and outlines perspectives for their application.

  • Publication . Article . Other literature type . 2016
    Open Access
    Authors: 
    Thomas Blaschke; Stefan Lang; Dirk Tiede; Manos Papadakis; A. Györi;
    Publisher: Copernicus GmbH

    We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place, incorporating spatial analysis and mapping techniques using methods from different fields such as environmental psychology, geography, and computer science. The methodological linchpin for this, when aiming to delineate place in terms of objects, is object-based image analysis (OBIA).
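
A minimal sketch of the core OBIA step named above, on a synthetic image with arbitrary parameters: segment the scene into image objects first, then describe (and eventually classify) the objects rather than individual pixels.

```python
# Hypothetical OBIA-style segmentation into image objects with per-object stats.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

rng = np.random.default_rng(2)
image = rng.random((120, 120, 3))                 # stand-in for a remote sensing scene

# Group pixels into image objects (superpixels).
objects = slic(image, n_segments=60, compactness=10, start_label=1)

# Per-object attributes (area, mean intensity) that a classifier would work on.
for region in regionprops(objects, intensity_image=image[:, :, 0]):
    if region.label <= 3:                         # just show the first few objects
        print(region.label, region.area, round(region.mean_intensity, 3))
```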

  • Publication . Article . Other literature type . 2018
    Open Access English
    Authors: 
    Andrew P. McClune; P. E. Miller; Jon P. Mills; David A. Holland;
    Publisher: Copernicus Publications

    Abstract. Over the last 20 years, the use of, and demand for, three-dimensional (3D) building models has driven a vast amount of research into automating the extraction and reconstruction of these models from airborne sensors. Whilst many different approaches have been suggested, full automation is yet to be achieved, and research suggests that combining data from multiple sources is required in order to achieve it. Developments in digital photogrammetry have delivered improvements in spatial resolution, whilst higher image overlap, which increases the number of pixel correspondences between images and gives rise to the name multi-ray photogrammetry, has improved the resolution and quality of its by-products. This paper covers the extraction of roof geometry from multi-ray photogrammetry, which underpins 3D building reconstruction. Roof vertices are extracted from orthophotos using the Canny edge detector. Roof planes are detected from digital surface models (DSMs) by extracting information from 2D cross-sections and measuring height differences. To eliminate overhanging vegetation, the segmentation of trees is investigated by calculating the characteristics of a point within a local neighbourhood of the photogrammetric point cloud. The results highlight the complementary nature of these information sources, and a methodology for integration and reconstruction of roof geometry is proposed.
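
A minimal sketch of the Canny step mentioned above, with a hypothetical file name and arbitrary thresholds, as a first move towards extracting roof outlines from an orthophoto.

```python
# Hypothetical edge detection on an orthophoto as a precursor to roof extraction.
import numpy as np
from skimage import io, color, feature

ortho = io.imread("orthophoto.tif")                  # placeholder RGB orthophoto
gray = color.rgb2gray(ortho)

# Canny edge map; sigma and thresholds would need tuning to the imagery.
edges = feature.canny(gray, sigma=2.0, low_threshold=0.05, high_threshold=0.15)

print("edge pixels:", int(edges.sum()), "of", edges.size)
```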
