 
October 22, 2019 
Scientific Initiatives

In 2014, ISPRS introduced the so-called Scientific Initiatives to support projects of interest to the ISPRS community. Calls are normally launched in the autumn of even-numbered years. Details of the regulations can be found at http://www.isprs.org/documents/orangebook/app9.aspx.
 
Reports of previous calls are available at


ISPRS Scientific Initiatives 2019

For the 2019 competition, Council approved six projects for funding. The following provides a brief summary of the awarded projects, together with information on their principal investigator(s) and co-investigator(s):

Title | PI(s) | TC
ISPRS benchmark on multi-sensorial indoor mapping and positioning | Cheng Wang | I
Development of an open source multi-view and multimodal feature matching tool for photogrammetric applications | Diego González-Aguilera | II
International Benchmarking of terrestrial Image-based Point Clouds for Forestry | Markus Hollaus, Martin Mokroš, Yunsheng Wang | III
GeoBIM benchmark: reference study on software support for open standards of city and building models | Francesca Noardo | IV
The ISPRS Benchmark Test on Indoor Modelling | Kourosh Khoshelham | IV
Capacity building for object detection and tracking in UAV videos using deep learning | K. Vani | V

ISPRS benchmark on multi-sensorial indoor mapping and positioning 

Principal Investigator: Cheng Wang, Xiamen University, China

Co-investigators: Naser Elsheimy, University of Calgary, Canada; Chenglu Wen, Xiamen University, China; Guenther Retscher, TU Wien - Vienna University of Technology, Austria; Zhizhong Kang, China University of Geosciences, China; Andrea Lingua, Polytechnic University of Turin, Italy

Indoor environments are essential to people's lives, and indoor mapping and positioning technologies have been in great demand in recent years. Visualization, positioning and location-based services (LBS), routing and navigation in large public buildings, navigational assistance for disabled or elderly people, and evacuation under different emergency conditions are just a few examples of the emerging applications that require 3D mapping and positioning of indoor environments. SLAM-based indoor mobile laser scanning (IMLS) systems provide an effective tool for indoor applications. During the IMLS procedure, 3D point clouds and a high-accuracy trajectory with position and orientation are acquired. Although considerable efforts have been made in the last few years to improve SLAM algorithms and the extraction of geometric/semantic information from point clouds and images, some challenges remain. First, there is a lack of efficient or real-time methods for generating 3D point clouds of as-built indoor environments. Second, extracting building information model (BIM) features in cluttered and occluded indoor environments remains difficult. In addition, given its relatively high accuracy, the IMLS trajectory provides a good reference or ground truth for low-cost indoor positioning solutions. This ISPRS Scientific Initiative aims to stimulate and promote research in three fields: (1) SLAM-based indoor point cloud generation; (2) automated BIM feature extraction from point clouds, with an emphasis on elements involved in building management and navigation tasks such as floors, walls, ceilings, doors, windows, stairs, lamps, switches and air outlets; and (3) low-cost multi-sensor indoor positioning, focusing on smartphone platform solutions.
In the absence of standard datasets for evaluating indoor mapping and positioning methodologies, this initiative also aims to create a benchmark dataset including several point clouds captured by IMLS in indoor environments of various complexity. The initiative will provide a common framework for the evaluation and comparison of LiDAR-based SLAM, BIM feature extraction and smartphone indoor positioning methods. Datasets will be available from a dedicated webpage on the ISPRS website, and interested participants will be invited to test their methods and submit their results for evaluation. The submitted models will be evaluated from geometric, semantic and topological points of view, and the results will be published on the webpage.
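Comparing a smartphone positioning track against the high-accuracy IMLS reference trajectory typically comes down to a positional error statistic such as the root-mean-square error. The benchmark's actual evaluation code is not published here; the following is only a minimal sketch of that idea, with hypothetical function and variable names:

```python
import math

def trajectory_rmse(estimated, reference):
    """Root-mean-square positional error between an estimated trajectory
    and a time-aligned reference trajectory, both given as equal-length
    lists of (x, y, z) positions in metres."""
    assert len(estimated) == len(reference)
    squared = [
        (ex - rx) ** 2 + (ey - ry) ** 2 + (ez - rz) ** 2
        for (ex, ey, ez), (rx, ry, rz) in zip(estimated, reference)
    ]
    return math.sqrt(sum(squared) / len(squared))

# Toy example: a smartphone track offset 0.1 m in x from the IMLS reference
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (2.1, 0.0, 0.0)]
print(round(trajectory_rmse(est, ref), 3))  # constant 0.1 m offset -> RMSE 0.1
```

In practice the two trajectories must first be time-synchronised and aligned in a common coordinate frame before such a statistic is meaningful.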

 

Development of an open source multi-view and multimodal feature matching tool for photogrammetric applications

Principal Investigator: Diego González-Aguilera

Co-Investigators: Francesco Nex, University of Twente, Netherlands; Isabella Toschi, FBK Trento, Italy; Andrea Fusiello, University of Udine, Italy; Pablo Rodriguez-Gonzalvez, University of Leon, Spain; Luis Lopez Fernandez, TIDOP Research Unit, Avila, Spain; David Hernández-Lopez,  Institute for Regional Development (IDR), Spain

The photogrammetric problem of 3D reconstruction from multiple images has received a great deal of attention in the last decade, especially its two main pillars: (i) image orientation and self-calibration, and (ii) dense matching reconstruction. However, the overall performance of both steps depends strongly on the quality of the initial feature (keypoint) extraction and matching stage. Therefore, determining which feature detectors and descriptors offer the most discriminative power and the best matching performance is of significant interest to a large part of the photogrammetry and computer vision community. Methods for performing these tasks are usually based on representing an image using global or local image properties and comparing them with a similarity measure, or on machine/deep learning approaches. Nevertheless, most existing methods are designed for matching images within the same modality and under similar geometric conditions. In addition, public datasets for the quantitative evaluation of multi-view and multimodal feature extraction and matching algorithms are very limited. Motivated by the lack of multi-view and multimodal datasets and the limitations of existing feature extraction and matching tools, especially in close-range photogrammetric applications, we propose to develop a novel tool that encloses and combines different state-of-the-art detectors and descriptors, together with different matching strategies. All these algorithms will be validated using a novel multi-view and multimodal dataset created in the framework of this Scientific Initiative. In particular, the dataset will cover different indoor and outdoor scenarios that go beyond existing stereo-image datasets, adding images from different modalities (visible, infrared, thermal, range map, etc.).
Last but not least, the results for the proposed dataset will be evaluated in terms of repeatability in the case of detectors, through ROC (Receiver Operating Characteristic) curves in the case of descriptors, and by the number and distribution of keypoints correctly matched according to a robust estimator.
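Detector repeatability is conventionally measured by mapping keypoints from one image into the other with the known ground-truth transform and counting how many fall near a detection there. This is not the initiative's evaluation code, merely a minimal sketch of that metric with a toy translation as the ground-truth transform:

```python
def repeatability(kp_a, kp_b, transform, tol=2.0):
    """Fraction of keypoints detected in image A that, after mapping into
    image B with the ground-truth transform, lie within `tol` pixels of a
    keypoint detected in B (one-to-one matching is not enforced here)."""
    repeated = 0
    for x, y in kp_a:
        px, py = transform(x, y)
        if any((px - bx) ** 2 + (py - by) ** 2 <= tol ** 2 for bx, by in kp_b):
            repeated += 1
    return repeated / len(kp_a)

# Toy ground truth: a pure translation by (5, 0) pixels
shift = lambda x, y: (x + 5.0, y)
kp_a = [(10.0, 10.0), (20.0, 20.0), (30.0, 30.0)]
kp_b = [(15.0, 10.0), (25.0, 20.5), (90.0, 90.0)]
print(repeatability(kp_a, kp_b, shift))  # 2 of 3 keypoints re-detected
```

A full evaluation would use homographies or depth-based reprojection rather than a translation, and would enforce one-to-one assignments.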

 

International Benchmarking of terrestrial Image-based Point Clouds for Forestry          

Principal Investigators: Markus Hollaus, TU Wien, Austria; Martin Mokroš, Czech University of Life Sciences, Czech Republic; Yunsheng Wang, Finnish Geospatial Research Institute, Finland

Co-investigators: Livia Piermattei, TU Wien, Austria; Peter Surový, Czech University of Life Sciences, Czech Republic; Xinlian Liang, Finnish Geospatial Research Institute, Finland; Milan Koreň, and Julián Tomaštík, Technical University in Zvolen, Slovakia; Lin Cao, Nanjing Forestry University, China

The project aims to evaluate the performance of terrestrial image-based point clouds in plot-level forest inventory through international benchmarking, and to investigate whether image-based point clouds can be an alternative to the more expensive point clouds derived from terrestrial laser scanning (TLS). In recent years, photogrammetric techniques based on structure from motion (SfM) and dense image matching have shown the capability of generating accurate, dense point clouds from different platforms and for different purposes. However, implementing this technology in real forest environments is practically challenging because of the difficulty of correspondence recognition in complex forest stands.

Based on recent studies on TLS, it is recognised that terrestrial point clouds are competitive for the estimation of tree characteristics such as the stem curve and the stem volume, which are hardly achievable with other non-destructive measurements. Several applications have shown the similarity of TLS and terrestrial image-based point clouds. Moreover, for practical use, terrestrial photogrammetry offers cheaper and lighter equipment (e.g. a handheld camera) than TLS, and data acquisition for photogrammetric point clouds requires less expertise. The main differences between image- and TLS-based point clouds concern geometric precision, point density, noise ratio and plot coverage, which can lead to differences in the measurement of tree attributes such as tree position and diameter at breast height (DBH). Thus, an essential question for image-based point clouds is: under which forest stand conditions, and with which image acquisition strategy, can image-based point clouds perform similarly to TLS for tree detection and modelling?

To answer this question, we will acquire image-based point clouds of ten typical forest plots situated in five countries (two per country): Austria, China, the Czech Republic, Finland and Slovakia. These test plots differ in size, tree species composition, density, shape (e.g. circular, square, rectangular) and amount of understory. Participants from the five countries will conduct the image acquisition and point cloud generation for the test plots in their own country, so that image-based point clouds of ten test plots will be available for benchmarking. All participants will then process the point cloud data of all test plots with their own algorithms for tree mapping and modelling. The results will be evaluated against field-collected reference data as well as TLS data. Findings of the project will be published in the ISPRS journal and in application-oriented forest journals (e.g., country-specific forest journals), and presented at various ISPRS conferences and workshops. Furthermore, the images and point clouds collected for the project will be made available for non-commercial use by all interested communities.
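A common way to derive DBH from a terrestrial point cloud, whether photogrammetric or TLS, is to slice the stem at 1.3 m above ground and fit a circle to the slice. The participants' own algorithms will differ; the following is only an illustrative sketch using the classical algebraic (Kåsa) least-squares circle fit:

```python
def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    x = [0.0] * 3
    for r in range(2, -1, -1):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def fit_circle(points):
    """Algebraic (Kasa) circle fit: least squares on
    x^2 + y^2 + D*x + E*y + F = 0. Returns (cx, cy, radius)."""
    Sxx = sum(x * x for x, y in points); Syy = sum(y * y for x, y in points)
    Sxy = sum(x * y for x, y in points)
    Sx = sum(x for x, y in points); Sy = sum(y for x, y in points)
    Sxz = sum(x * (x * x + y * y) for x, y in points)
    Syz = sum(y * (x * x + y * y) for x, y in points)
    Sz = sum(x * x + y * y for x, y in points)
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, len(points)]]
    D, E, F = solve3(M, [-Sxz, -Syz, -Sz])
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, (cx * cx + cy * cy - F) ** 0.5

# Synthetic stem slice: points on a circle of radius 0.15 m around (1, 2)
slice_points = [(1.15, 2.0), (1.0, 2.15), (0.85, 2.0), (1.0, 1.85)]
cx, cy, r = fit_circle(slice_points)
print(round(2 * r, 3))  # fitted DBH in metres -> 0.3
```

Real slices are noisy and partially occluded, so robust variants (e.g. RANSAC around such a fit) are normally used.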

 

GeoBIM benchmark: reference study on software support for open standards of city and building models      

Principal Investigator: Francesca Noardo, TU Delft, Netherlands

Co-Investigators: Ken Arroyo Ohori, Jantien Stoter and Giorgio Agugiaro, TU Delft, Netherlands; Filip Biljecki, National University of Singapore; Claire Ellul, University College London, UK; Lars Harrie, Lund University GIS Centre, Sweden; Margarita Kokla, National Technical University of Athens, Greece; Thomas Krijnen, TU Eindhoven, Netherlands

Many data capture methodologies provide very high-quality 3D models covering individual buildings at a high level of detail or entire cities at a lower level of detail. Combined, these models can provide essential data for many applications and use cases such as Smart Cities, asset management and sustainable construction, and can be an effective and powerful base for further research and technological improvements.

This benchmark investigates the current state of the art regarding the interoperability and integration of open standards for two of these 3D models - 3D City Models (from GIS) and Building Information Models (BIM). The aim is to identify what sort of tools are needed to better support the integration of the two kinds of models, working towards GeoBIM: a single, integrated view of the built environment. Reciprocal integration is critical to take better advantage of the high quantity of information that is costly to produce, resulting in an increased return on data capture investment. The key to effective integration is the standardisation of data formats and structures, regarding both geometry and semantics. Open standards are available for this purpose, and among these, the Open Geospatial Consortium (OGC) CityGML and the buildingSMART Industry Foundation Classes (IFC) are the most accepted and widespread. However, the support provided by available software for open standards is often ineffective and incomplete, and varies greatly across software packages. Additionally, while correct geo-referencing is essential to fuse heterogeneous datasets, in practice geo-referencing BIM models is still far from straightforward (Arroyo Ohori et al., 2018). Both of these issues, along with semantic differences, result in difficulties when integrating CityGML and IFC. The benchmark will: 1) test a significant number of software tools (at least 10), including the most frequently used ones, to validate support for open standards (CityGML and IFC); 2) test effective management of geo-referencing by BIM software (for IFC data); 3) test available conversion procedures between CityGML and IFC.

The PI and Co-Is (from here on, 'proponents') will make available test data (in IFC and CityGML) together with detailed descriptions, as well as questionnaires covering the features to be tested in the software tools and the criteria to be followed. People with an interest in these fields (researchers, expert professionals, software developers, students and other experts) will be able to participate by testing one or more tools and delivering the results, which will be evaluated, synthesised and shared as a common reference for anyone working with geo-information, BIM or GeoBIM. The outcome of the benchmark will be a reference framework for researchers and practitioners working on GeoBIM integration, along with a wish list for further developments. Importantly, it will also establish a baseline against which to test new tools and the increasing quantities of available CityGML and IFC datasets for their suitability for GeoBIM.
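To make the geo-referencing issue concrete: IFC4 can express a map conversion as an eastings/northings offset, a scale, and a rotation given by the direction cosines of the local X axis (the IfcMapConversion entity). A minimal sketch of applying such parameters to local BIM coordinates follows; the site values are purely hypothetical and the code is an illustration, not a reference implementation of the IFC specification:

```python
import math

def map_coords(x, y, eastings, northings, x_axis_abscissa, x_axis_ordinate, scale=1.0):
    """Transform local engineering coordinates (x, y) into map coordinates
    using IfcMapConversion-style parameters: a rotation defined by the
    direction cosines of the local X axis, a uniform scale, and an offset."""
    # Normalise the rotation direction (abscissa, ordinate) = (cos a, sin a)
    norm = math.hypot(x_axis_abscissa, x_axis_ordinate)
    cos_a, sin_a = x_axis_abscissa / norm, x_axis_ordinate / norm
    e = eastings + scale * (cos_a * x - sin_a * y)
    n = northings + scale * (sin_a * x + cos_a * y)
    return e, n

# Hypothetical site origin at E=500000, N=4649776, rotated 90 degrees
# (the local X axis points towards grid north)
e, n = map_coords(10.0, 0.0, 500000.0, 4649776.0, 0.0, 1.0)
print(round(e, 3), round(n, 3))  # local +x maps to +north
```

Even this simple case hides pitfalls the benchmark probes, such as large map offsets stored as model coordinates, unit mismatches, and software that silently ignores the rotation.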

 

The ISPRS Benchmark Test on Indoor Modelling            

Principal Investigator: Kourosh Khoshelham, University of Melbourne, Australia

Co-Investigators: Lucía Díaz-Vilariño, University of Vigo, Spain; Zhizhong Kang, China University of Geosciences, China; Sagi Dalyot, Technion - Israel Institute of Technology, Israel

This scientific initiative aims to organise a benchmark test to evaluate and compare indoor modelling methods. Currently, there is little knowledge of their comparative performance: in the literature, different methods have been evaluated on different datasets and against different criteria, which makes it difficult to compare and benchmark them. The benchmark test will therefore carry out an experimental evaluation and comparison of indoor modelling methods using the benchmark dataset and evaluation framework established in a previous Scientific Initiative. The results will be disseminated through a dedicated ISPRS web page, continuously updated with the results of new submissions, and several publications. The expected outcome is new knowledge of the strengths and limitations of existing indoor modelling methods, which will stimulate and facilitate further research to overcome those limitations and improve performance.
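Benchmark evaluations of reconstructed indoor models commonly report completeness (how much of the reference model was reconstructed) and correctness (how much of the reconstruction is supported by the reference). The framework established by the previous initiative is more elaborate; this is only a minimal sketch of the two metrics, matching elements by a distance tolerance on hypothetical 2D wall centre points:

```python
def completeness_correctness(reconstructed, reference, tol=0.1):
    """Element-level completeness and correctness: an element counts as
    matched if an element of the other set lies within `tol` metres.
    Elements are represented here as 2D (x, y) centre points."""
    def matched(p, pool):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2 for q in pool)
    comp = sum(matched(r, reconstructed) for r in reference) / len(reference)
    corr = sum(matched(r, reference) for r in reconstructed) / len(reconstructed)
    return comp, corr

# Toy case: 2 of 4 reference walls found; 2 of 3 reconstructed walls correct
ref = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
rec = [(0.05, 0.0), (1.02, 0.0), (5.0, 5.0)]
comp, corr = completeness_correctness(rec, ref)
print(comp, corr)
```

Real evaluations additionally assess geometric accuracy of the matched elements and, as noted above, semantic and topological correctness.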

 

Capacity building for object detection and tracking in UAV videos using deep learning               

Principal Investigator: K. Vani, Anna University, India

Co-investigators: S. Sanjeevi, Anna University, India; Chao-Hung Lin, National Cheng-Kung University, Taiwan

Recently, the advent of UAVs and deep learning has been creating significant changes in remote sensing applications. UAVs can image objects at both close and far range, which enables them to obtain more detailed information than satellite images or static video cameras. UAVs are widely deployed in volcanic gas monitoring, surveillance, land mapping, species monitoring and tracking, smart city planning, earth science and atmospheric research, ecological and forestry monitoring, and disaster assessment and recovery. These applications require precise object detection and tracking, which motivates the use of deep learning. In recent years, deep learning techniques have provided higher accuracy than traditional vision-based algorithms for object detection and tracking. Traditional vision algorithms require feature engineering to choose appropriate features for accurate detection; the rich feature-learning capacity of deep learning removes this need while additionally providing high precision. The emergence of open-source software allows students to explore image-processing methodologies at low cost. This proposal will help students and researchers explore the possibilities of deep learning with open-source software for object detection and tracking in UAV videos.
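In a typical pipeline, a deep detector produces per-frame bounding boxes and a tracker links them across frames; a widely used baseline links detections by bounding-box overlap. The curriculum materials themselves are not reproduced here; this is only a toy sketch of such an IoU-based tracker:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def iou_tracker(frames, threshold=0.3):
    """Greedy IoU tracker: each detection extends the track whose last box
    overlaps it most (above `threshold`); otherwise it starts a new track.
    `frames` is a list of per-frame detection lists."""
    tracks = []  # each track is the list of boxes assigned to one object
    for detections in frames:
        for det in detections:
            best = max(tracks, key=lambda t: iou(t[-1], det), default=None)
            if best is not None and iou(best[-1], det) >= threshold:
                best.append(det)
            else:
                tracks.append([det])
    return tracks

# Two frames of a UAV video: one slowly moving vehicle, one new detection
frames = [
    [(0, 0, 10, 10)],
    [(1, 0, 11, 10), (50, 50, 60, 60)],
]
print(len(iou_tracker(frames)))  # 2 tracks
```

Production trackers add optimal per-frame assignment, motion prediction and appearance features on top of this overlap test.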

The expansion of UAV applications and the emergence of deep learning in various fields create a need for knowledge about both. The lack of curriculum materials on UAVs, together with the need for deep learning techniques to process UAV-captured information, has encouraged us to create materials that broaden the vision of students, trainers and researchers. Educating students in upcoming technologies such as UAVs, deep learning and open-source tools is indispensable. Hence, this proposal will apprise the remote sensing community of the evolution of UAVs, the practical issues of processing UAV visual information, the unresolved challenges of object tracking in UAV videos, and the use of deep learning for object detection and tracking with open-source software. The proposal also aims to deliver accurate object detection, long-term object tracking and object trajectory estimation, enabling students and researchers to extend the applicability of UAVs and deep learning to unexplored research areas.