ZENODO
Other literature type · 2022 · License: CC BY · Data source: ZENODO
Project deliverable · 2022 · License: CC BY · Data source: Datacite
(2 versions available)


D3.1 Visual Analysis for Real Sensing

Authors: Krestenitis, Marios; Ioannidis, Konstantinos
Abstract

The main purpose of this document is to report the algorithms deployed for extracting features of the inspected constructions, which form the baseline for higher-level implementations. The report is divided into three distinct sections. First, it presents the developments made with respect to the 3D representation pipeline, deployed and applied to demo sites #1, #4, #6 and #7, which include bridges and industrial buildings. These include tools for Structure from Motion and dense 3D point cloud generation, applied to images captured at the ASHVIN demo sites; a single-image 3D depth prediction pipeline is also presented. Second, it describes the approach and implementation of an AI-based defect detection service with pixel-level segmentation, whose aim is to detect and segment, at pixel level, the different types of defects present in realistic inspection scenarios at demonstration site #3 (airport operational areas); convolutional neural network architectures were trained and validated for this task. Finally, the report presents the results of training and deploying a state-of-the-art object detection algorithm to detect objects at construction sites for monitoring construction progress. The implemented model, based on the YOLO v5 detector, was applied to images obtained from demo site #4 (construction of an industrial building).
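The abstract does not include code, but detection pipelines such as the YOLO v5 model it mentions are typically evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch of that metric follows; the function name and the (x1, y1, x2, y2) box format are illustrative assumptions, not taken from the report:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2).

    Illustrative sketch only; not code from the D3.1 deliverable.
    """
    # Coordinates of the intersection rectangle (empty if boxes don't overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    # Union = sum of areas minus the overlap counted twice
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

In practice a detection counts as correct when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), which is how per-class precision and recall are computed for progress-monitoring detectors.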

Keywords

Digital Twin, Object Detection, Semantic Segmentation, 3D Representation


Metrics (BIP!): citations 0 · popularity Average · influence Average · impulse Average
Usage (OpenAIRE UsageCounts): 80 views · 41 downloads
Access: Green Open Access