October 22 - 25, 2024, Fremantle, Perth, Australia

The tutorials will be run during the symposium

TU 1:  Approaches for simplifying 3D data exchange between systems

This tutorial will examine a range of techniques for simplifying the problem of exchanging data between different systems using 3D information as a case in point. 3D data is of interest because different systems use quite different approaches for 3D depending on application needs, yet common semantics need to be established when assessing things like building permits.

The techniques to be explored are:

    1)  Profiling to limit scope and simplify exchange specifications

    2)  “Semantic uplift” to match different schemas to a common model

    3)  Reusable schema “Building Blocks” for common patterns such as 3D topology
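To give a flavour of the “semantic uplift” technique listed above, here is a minimal, purely illustrative Python sketch that renames fields from two hypothetical source schemas onto one common model. All schema and field names (bldg_id, measuredHeight, GlobalId, and so on) are invented for demonstration and are not drawn from any real standard or from the tutorial’s tooling:

```python
# Sketch of "semantic uplift": map records from two hypothetical
# source schemas onto one common model. Field names are illustrative.

COMMON_FIELDS = ["building_id", "height_m", "footprint"]

# Per-source mappings: source field name -> common model field name
MAPPINGS = {
    "source_a": {"bldg_id": "building_id",
                 "measuredHeight": "height_m",
                 "lod0Footprint": "footprint"},
    "source_b": {"GlobalId": "building_id",
                 "OverallHeight": "height_m",
                 "FootprintGeometry": "footprint"},
}

def uplift(record: dict, source: str) -> dict:
    """Rename source-specific fields to the common model's names."""
    mapping = MAPPINGS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# Two records about the same building, expressed in different schemas,
# become directly comparable after uplift.
a = uplift({"bldg_id": "B1", "measuredHeight": 12.5}, "source_a")
b = uplift({"GlobalId": "B1", "OverallHeight": 12.5}, "source_b")
assert a == b == {"building_id": "B1", "height_m": 12.5}
```

In practice this mapping step is expressed declaratively (for example with RDF or schema annotations) rather than hard-coded, but the principle — a per-source mapping onto shared semantics — is the same.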

The CHEK project and others will be used as case studies, and tutorial participants will be introduced to open-source tooling to help with these approaches.

Duration: 1.5 h

Teacher: Rob Atkinson, OGC.

Rob Atkinson is a researcher at the OGC (Open Geospatial Consortium).

Rob has 25 years’ experience in the development of standards across a wide range of application domains. He is a leader in the adoption of semantic approaches that bridge the needs of different user communities. His current research focus is supporting a variety of pragmatic implementation solutions to address different aspects of common concerns - different communities needing different data about the same "real world" things.

Rob is working on improving data interoperability standards through integration of conceptual and implementation modelling approaches, continuous testing and deployment (CI/CT/CD) of examples and test cases, and architecture design patterns for systems integration through APIs and metadata. Recently he has been leading the ANZLIC 3D Cadastre Data Model for the exchange of 3D surveying data, and exploring the challenges of the implicit commonalities and varying degrees of support for 2D, 3D and 4D topology in different implementation approaches.

Tutorial organised by ISPRS WG IV/1

TU 2:  AI for Geospatial Science


In recent years, Artificial Intelligence (AI) has attracted increasing attention from the Geospatial Science community, particularly after the advent of Deep Learning technology. On the topic of AI for Geospatial Science, a number of special issues have been published and special workshops have been held, and various new terms have been coined, such as GeoAI, GeospatialAI, MapAI, AI Cartography and AI GIS.

On 30 Nov 2022, OpenAI launched ChatGPT, which has impressive abilities in language understanding, reasoning and expression, as well as considerable knowledge. On 14 March 2023, OpenAI launched GPT-4, a large multimodal model that “exhibits human-level performance on various professional and academic benchmarks”, marking a breakthrough in artificial intelligence. Such technologies have also been applied in geospatial science, resulting in GeoGPT, MapGPT and Image GPT.

As GPT-4 moves towards artificial general intelligence (AGI), it will have a great impact on geospatial science. It is therefore timely to hold a tutorial discussing the relevant issues, such as

    1) AI-empowered spatial data handling,

    2) AI-empowered spatial representation and visual understanding,

    3) AI-empowered spatial cognition and spatial reasoning,

    4) AI-empowered spatial information service,

    5) AI-empowered modeling of dynamic spatial systems,

    6) Hybrid computing theory and methods for AI in Geospatial Science,

    7) How GPT will shape the discipline of geospatial science, and

    8) Research agenda for GPT-empowered Geospatial Science.

Call for Presentations:

This tutorial will include presentations and a discussion forum. The first half of the tutorial will be for participants to present lightning talks: each presenter will be given 5 minutes to showcase either one major challenge or one significant opportunity they see at the intersection of AI and Geospatial Science. The second half will be a discussion forum.

We take great pleasure in inviting you to participate in this tutorial.

Duration:  half day 

Chair: Prof. Zhilin Li, Southwest Jiaotong University, China

Prof. Li is currently a professor at Southwest Jiaotong University. He obtained his PhD from the University of Glasgow in 1990. Since then, he has also worked at the Technical University of Berlin (Germany) and at the universities of Southampton and Newcastle upon Tyne (UK) as a researcher, at Curtin University of Technology (Australia) as a lecturer, and at The Hong Kong Polytechnic University as assistant/associate/full/chair professor.

Prof Li has been working on multi-scale modelling and representation of geospatial data, remote sensing image processing, the information theory of cartography and AI-powered cartography. His work (such as the book "Digital Terrain Modeling: Principles and Methodology", the natural principle for objective generalization, and the Li-Openshaw algorithm) has been widely recognized, receiving the Schwidefsky Medal (2004) and the Gino Cassinis Award (2008) from the ISPRS, and a Natural Science Award from the Chinese government.

Prof. Li has published more than 200 journal papers and 3 authored research monographs.

TU 3:  3D spatial modeling and intelligence for scene-realistic analysis and digital twin


This tutorial will examine a range of techniques for solving the problem of 3D modeling and reconstruction in digital twins using geospatial information, analytics and intelligence, with the following goals:

    1) To enhance the understanding and application of spatial modeling and digital twin methodologies in scene-realistic 3D analysis among researchers, experts, and industrial professionals.

    2) To offer a platform for the exchange of innovative ideas and the latest research outcomes in the field of spatial modeling, 3D analysis, and digital twin technologies.

    3) To foster advancements in technological tools and methodologies leveraged in spatial modeling and digital twins, stimulating future cutting-edge research and development.

    4) To encourage collaboration between academic, governmental, and industrial entities, thereby driving innovation and practical applications within the field.

    5) To stimulate meaningful dialogue around the challenges, opportunities, and future directions of spatial modeling and digital twin implementations across diverse sectors.

The topics to be explored are:

    1) "Enhancing Spatial Intelligence: The Future of 3D Analysis": Exploring advancements in spatial modeling tools and techniques for improved scene-realistic 3D analysis.

    2) "Digital Twin Technology - Bridging the Gap Between Virtual and Physical Worlds": Discussing the evolution, applications, and future of digital twins in diverse fields.

    3) "The Synergy of Spatial Modeling and Digital Twins": Examining the benefits, challenges, and best practices of integrating spatial modeling with digital twin technologies.

    4) "Spatial Intelligence in Real-World Applications": Highlighting exciting case studies where meticulous spatial modeling and digital twins have made significant impacts.

    5) "The Role of Artificial Intelligence in Spatial 3D Modeling and Analysis": Analyzing the transformative impact of AI on spatial modeling, scene-realistic 3D analysis, and the development of digital twins.

Call for Presentations:

We take great pleasure in inviting you to participate in the discussion of the latest research works and findings related to the field of scene-realistic 3D modeling and understanding. This tutorial provides a premier opportunity for researchers, practitioners, and students from around the globe to present their latest contributions, exchange ideas, develop collaborations, and advance the understanding of digital twins based on geospatial 3D modeling.

Invited talks and a panel discussion will serve as the presentation format and platform, and participants will be introduced to cutting-edge techniques, bringing them up to date with the current state of development and the challenges in the field. We look forward to your participation and communication, which will add value to this tutorial and help expand the knowledge base in the realm of scene-realistic 3D modeling and understanding.

Duration: half day 

Chair: Prof. Bisheng Yang, Wuhan University, China 

Dr. Bisheng Yang is a full Professor in GeoInformatics at Wuhan University, China, and director of the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing.

His research expertise includes LiDAR and UAV photogrammetry, point cloud processing, and GIS and remote sensing applications. Dr. Yang has so far published more than 100 papers in peer-reviewed journals and in conference and workshop proceedings.

He has been Co-Chair of the Point Cloud Processing Working Group in the Photogrammetry Commission of the International Society for Photogrammetry and Remote Sensing (ISPRS) from 2016 to 2024. He is an Editorial Board Member of the ISPRS Journal of Photogrammetry and Remote Sensing, and the recipient of numerous national and international academic awards, including the Carl Pulfrich Award (2019).

TU 4:  Advancing Air Quality Research and Public Awareness Through Innovative Geospatial Technologies

Urban and industrial regions frequently encounter stagnant air patterns caused by geographic characteristics, intensifying pollution from diverse sources. Elevated concentrations of PM, O3, NO2 and SO2 contribute to health concerns. A data-driven strategy that combines pollutant levels with meteorological and morphological factors will facilitate environmental state modeling and spatial mapping. The approach we propose integrates Internet of Things (IoT) and Earth Observation (EO) technologies, leveraging geo-intelligence and implementing extended reality (XR) technologies for virtual visualization. It aims to raise societal awareness and foster a healthier community.

The tutorial is divided into two parts:

In the first part of the tutorial, innovative techniques will be demonstrated for creating open data cubes containing air quality information. These data cubes result from processing satellite-based and model-based data. To monitor air pollution and analyze exposure patterns, a combination of satellite-based, model-based, and ground sensor data will be utilized. The research emphasizes digital and open data sources for pollutants and meteorological variables, accessible programmatically at the highest space-time resolution.
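To illustrate the data-cube idea in the simplest possible terms, the sketch below builds a small pollutant cube as a (time, latitude, longitude) NumPy array and runs two typical queries against it. The grid, values and point of interest are entirely made up for demonstration and are not part of the tutorial's actual pipeline:

```python
import numpy as np

# Illustrative air-quality "data cube": a 3D array indexed by
# (time, latitude, longitude), one cube per pollutant.
times = ["2024-01-01T00", "2024-01-01T01", "2024-01-01T02"]
lats = np.linspace(45.0, 45.2, 5)   # 5 grid rows
lons = np.linspace(9.0, 9.3, 4)     # 4 grid columns

rng = np.random.default_rng(0)      # synthetic NO2 values for the demo
no2 = rng.uniform(10, 60, size=(len(times), len(lats), len(lons)))

# Query 1: grid-wide mean NO2 at the second time step
mean_t1 = no2[1].mean()

# Query 2: time series at the grid cell nearest a point of interest
i = int(np.abs(lats - 45.11).argmin())
j = int(np.abs(lons - 9.21).argmin())
series = no2[:, i, j]               # one value per time step
assert series.shape == (len(times),)
```

Real open data cubes layer coordinate metadata, chunked storage and programmatic access on top of this basic structure, but the core abstraction — gridded values addressed by space-time coordinates — is the same.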

In the second part, attention will turn to XR solutions, which will be unveiled following the development of meticulous spatial models derived from EO/IoT inputs on air quality. This section will delve into wearable and mobile applications operating in an XR 3D environment, their technical implementation using interoperable data kept up to date from GIS, and the definition of rules and a standard data layer to feed the applications with real-time data.

Duration: 1.5 h 

Teachers: Maria Antonia Brovelli (part 1) and Eva S. Malinverni (part 2) 

Maria Antonia Brovelli holds a Ph.D. in Geodesy and Cartography and serves as a Professor of GIS and The Copernicus Green Revolution for Sustainable Development at Politecnico di Milano (PoliMI). With a career spanning from researcher to Full Professor and Vice-Rector for the Como Campus at PoliMI, she also lectures at ETH Zurich and holds prominent roles in international organizations. As Vice President of the ISPRS Technical Commission on Spatial Information Science, co-chair of the United Nations Open GIS Initiative, chair of the UN-GGIM Academic Network, and curator of the GEO series at AI For Good Summit, she influences global spatial information science. Brovelli's extensive publication record and involvement in national and European projects have earned her prestigious awards and editorial roles, highlighting her significant contributions and leadership in the field.

Eva S. Malinverni is a Full Professor in Geomatics at DICEA, Engineering Faculty, Università Politecnica delle Marche, Italy. Her research spans different fields of Geomatics: from Cultural Heritage to Land Use, and from acquisition with digital tools to the management of increasingly complex data in GIS/(H)BIM, 3D and CityGML. She has participated in MAECI, CONCYTEC, EU, COST and other international projects. She is involved in the National Project (PRIN) (2023-25) “Geo-Intelligence for improved air quality monitoring and analysis (GeoAIr)”, which uses Earth Observation data and AI, focusing her attention on the sharing and display of predictions as 3D heat maps through Web and Cloud visualization. Her current h-index is 17.

TU 5:  Building an Underwater Heritage VR Experience with Unity

The Unity game engine is one of the most widely used platforms for displaying content in virtual reality thanks to its cross-platform functionality and user-friendly interface that caters to novice and veteran developers alike. In this tutorial, you’ll learn the basics of how to build a simple underwater heritage photogrammetry VR viewer in Unity that can display both surface meshes and point clouds. We’ll cover project setup and configuration, along with VR interactions and mixed reality integration. The tutorial will focus on developing for the Meta Quest virtual reality headset, but Unity’s cross-platform architecture means that the skills you learn will allow you to build experiences for other head-mounted displays, as well as desktop and smartphone applications.


Presenter:  Dr Michael Ovens, HIVE, Curtin University.

Dr Michael Ovens completed his PhD in Medieval and Early Modern Studies at the University of Western Australia and spent several years working as an early career researcher and sessional lecturer/tutor before making an abrupt turn into a second career as an extended reality software developer specialising in serious games for research and education. He is currently employed as a Visualisation Technology Specialist at the Curtin HIVE (Hub for Immersive Visualisation and eResearch).

TU 6:  Creating Immersive GIS Experiences with XR Technology and Real World Data

Have you ever been intrigued by the possibility of developing applications that not only offer an immersive GIS experience but also enable users to navigate and interact within the virtual environment of a digital twin, or overlay real-world visuals with geospatial data through smartphone cameras in real time? This comprehensive tutorial is designed to guide you through the process of building augmented reality (AR) and virtual reality (VR) applications by leveraging game engines alongside the ArcGIS Maps SDK for Unity or Unreal Engine.

In this tutorial, we will delve into:

     1) A quick introduction to game engine editors and the ArcGIS Maps SDK package dependency. 

     2)  An exploration of exemplary projects available in GitHub's public repositories, which can be used to bootstrap application creation.

     3)  The step-by-step creation of AR applications compatible with both iOS and Android smartphones, bringing geospatial data to life right before your eyes.

    4) The development process for VR applications that can be deployed on desktops and experienced through VR headsets, offering a fully immersive geospatial experience.

    5) A demonstration of select projects to showcase the potential and impact of integrating GIS with XR technology.

Join us to unlock the potential of cutting-edge technology in creating immersive, interactive maps and environments that bridge the gap between the digital and the physical worlds.

Presenter: Dr Morakot Pilouk, Esri.

Dr. Morakot Pilouk is a Senior Principal Software Development Engineer on the Real-Time Visualization & Analytics team. Currently, he spearheads the 3D Tech Center and champions IPS Technology for Esri. With a tenure exceeding 30 years in GIS and software development, Dr. Pilouk has dedicated more than 28 years to Esri, marked by significant contributions across a spectrum of domains including raster imagery, spatial analysis, 3D technologies, real-time GIS, game engine integration, indoor GIS, and Indoor Positioning Systems. His role involves not only pivotal product development but also technical evangelism, through which he has played a crucial role in advancing Esri's technological frontiers and fostering innovation within the GIS community.

Submission of papers/extended abstracts: March 31st, 2024
Notification of acceptance: May 1st, 2024
Final paper submission: May 26th, 2024
Submission of abstracts-only: June 1st, 2024
Notification for abstracts-only: June 6th, 2024
Early bird registration: August 2nd, 2024