GSA Smartphone GNSS Raw Data Task Force

Until recently, GNSS positioning on smartphones was computed entirely on the internal GNSS chips. The algorithm, the correction data used, and thus also the achievable position accuracy were therefore determined by the hardware manufacturer.
However, since the release of GNSS raw data on smartphones with Android 7.0, this is no longer true for many devices. The GSA Smartphone GNSS Raw Data Task Force (https://www.gsa.europa.eu/gnss-applications/gnss-raw-measurements/gnss-raw-measurements-task-force) therefore brings together research and industry to develop a market for high-accuracy positioning using smartphone GNSS raw data.

The R&D at HSKA in the frame of the NAVKA project deals with cycle-slip- and ambiguity-robust GNSS algorithms and software developments (DGNSS, PPP) using GNSS raw data - e.g. from GPS and Galileo satellites - provided on the smartphone. The algorithms target applications for positioning only, as well as for the georeferencing and identification of objects in mobile GIS applications, by additionally using the MEMS sensors and camera of the smartphone as well as pluggable commercial laser disto data in the fusion algorithms.
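As background, every raw-data algorithm starts by forming a pseudorange from the clock fields of the Android API. A minimal sketch in Python (the field names follow the Android GnssClock/GnssMeasurement API; the numeric example is illustrative, not a real measurement):

```python
# Sketch: deriving a GPS pseudorange from Android GNSS raw measurements.
# Field names follow the Android GnssClock/GnssMeasurement API.

C = 299_792_458.0   # speed of light [m/s]
WEEK_S = 604_800    # seconds per GPS week

def gps_pseudorange(time_nanos, full_bias_nanos, bias_nanos,
                    received_sv_time_nanos):
    """Pseudorange [m] for a GPS satellite from the raw clock fields."""
    # receiver time in the GPS time scale [ns]
    t_rx_gnss = time_nanos - (full_bias_nanos + bias_nanos)
    # time of reception within the current GPS week [s]
    t_rx_week = (t_rx_gnss % (WEEK_S * int(1e9))) * 1e-9
    # transmit time as reported by the satellite [s]
    t_tx = received_sv_time_nanos * 1e-9
    return (t_rx_week - t_tx) * C
```

A signal travel time of 70 ms, for instance, corresponds to a pseudorange of roughly 21,000 km; carrier-phase processing (DGNSS, PPP) then refines this code-based observable.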

  

Screenshot of the plugin for logging and streaming of binary data in the UBX protocol, integrated into the Google GnssLogger app

Left: Evaluation of the possible accuracy of smartphone GNSS Raw Data for a static setup with a good sky view
Right: Deviation of the Smartphone Position from True Position in East (E), North (N) and Up (U)

With the use of a Virtual Reference Station in DGNSS mode, an accuracy in the range of several decimeters is achievable for this setup (blue), while with PPP processing using IGS correction data only submeter accuracy could be reached (green).
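The E/N/U deviations shown on the right are obtained by rotating the ECEF coordinate difference (smartphone position minus true position) into the local topocentric frame at the station. A minimal sketch using the standard rotation:

```python
import math

def ecef_to_enu(dx, dy, dz, lat_deg, lon_deg):
    """Rotate an ECEF difference vector (dx, dy, dz) into the local
    East/North/Up frame at geodetic latitude/longitude (degrees)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    e = -so * dx + co * dy
    n = -sl * co * dx - sl * so * dy + cl * dz
    u = cl * co * dx + cl * so * dy + sl * dz
    return e, n, u
```

At latitude/longitude (0, 0), for example, the ECEF x-axis maps to Up and the y-axis to East, which is a quick sanity check of the rotation.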

R&D project HOBA

1. Introduction

The R&D project "Homogeneous soils assistant for the automatic, construction site-specific recording of soil classes according to the new VOB 2016" (HOBA for short) deals with the development of a system for automatic classification, detection & segmentation and a georeferenced, voxel-based 3D volume model generation for excavation-site-specific soil types according to the new VOB 2016.

HOBA is financed as a so-called ZIM (Central Innovation Programme for SMEs) research and development project by the Federal Ministry for Economic Affairs and Energy (BMWi) up to 03/2023. HOBA is located at the Institute for Applied Research (IAF) of the Center for Applied Research (CAR) at Karlsruhe University of Applied Sciences (HKA). Research and development are carried out in the GNSS & Navigation Laboratory (goca.info/Labor.GNSS.und.Navigation) in collaboration with the main industry partner MTS Schrode AG (www.mts-online.de) and their partner VEMCON GmbH (www.vemcon.de).

Figure 1: Excavator with distributed MTS sensors and HKA HOBA-Box located in the area of the excavator bucket

The aim of the R&D at the HKA is the development of the hardware and software of a compact sensor and a computing system unit, mounted on the excavator with data interfaces to the excavator IT - hereinafter referred to as "HKA HOBA-Box".

The hardware and software development of the HKA HOBA-Box is an innovative contribution to the BIM-compliant digital real-time documentation of excavation work. The HKA HOBA-Box enables a multi-sensory 3D geo-referencing of the excavation in the ETRF89/ITRF in connection with the sensor-based acquisition (GNSS, MEMS, RGB and ToF 3D camera optics), and thus the calculation of a so-called "voxel"-based 3D model of the excavation volume. This means the box allows the classification of the soil types on site using image-based AI/ML algorithms and, finally, the re-calculation of the classified and georeferenced 2D images into the geo-referenced 3D voxel model according to the soil types.
The complete geo-referencing of the box is based on the algorithmic fusion and SLAM of all internal sensor data of the HKA HOBA-Box (IMU, magnetometer, barometer, inclinometer, GNSS, ToF 3D camera and RGB camera) in the general NAVKA multi-sensor multi-platform lever-arm design. All sensor data contribute to a Bayesian sensor fusion estimating the navigation state vector y(t).

In the case of SLAM (Simultaneous Localization and Mapping), the fusion estimates the extended state vector y(t)_SLAM = (y(t), m(t)). The extension of y(t) is based on the optical sensor data of the ToF and digital RGB cameras and on the parameter space of the 3D map m(t).
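To illustrate the Bayesian fusion principle only (this is not the NAVKA implementation itself), a one-dimensional Kalman measurement update fusing a predicted height with a barometric height might look as follows; all noise values are assumptions:

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman measurement update: predicted state x_pred with
    variance p_pred, measurement z with variance r (direct observation)."""
    k = p_pred / (p_pred + r)        # Kalman gain
    x = x_pred + k * (z - x_pred)    # fused state estimate
    p = (1.0 - k) * p_pred           # fused variance (always <= p_pred)
    return x, p

# Example: predicted height 102.0 m (variance 4.0), barometer 100.0 m (variance 1.0)
x, p = kalman_update(102.0, 4.0, 100.0, 1.0)
```

The fused estimate is pulled toward the more precise measurement, and its variance is always smaller than either input variance; the full NAVKA fusion applies the same principle to the complete state vector y(t).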

The self-sufficient box variant 1 of the HKA HOBA-Box (Fig. 1, Fig. 2) uses only the data from the HKA HOBA-Box itself for the sensor fusion with respect to the estimation of y(t), or the SLAM parameters y(t)_SLAM = (y(t), m(t)), respectively.

In the case of box variant 2, y(t) and the vector y(t)_SLAM = (y(t), m(t)) are again calculated on the HKA HOBA-Box from all sensors except GNSS. The reason is the unfavorable placement of the GNSS antenna at the location of the box in the vicinity of the bucket with regard to signal shadowing, multipath and cycle slips. Instead, the sensor fusion and SLAM on box variant 2 additionally make use of the MTS navigation partial solution y(t)' = (x_e, y_e, z_e)^T, provided via the local machine server (see fig. 2), for the localization of the body (b) frame and box origin, respectively.

The HKA HOBA-Box - in both variants - enables a multi-sensory 3D geo-referencing and a voxel-based, classified 3D model of the excavation volume in the ETRF89/ITRF, based on the (GNSS, MEMS, RGB and ToF 3D camera) sensor data. The classification of soil types on site is based on image-related AI/ML algorithms, followed by the re-calculation of the classified and geo-referenced 2D images into the above-mentioned geo-referenced 3D voxel model according to the soil types.
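The voxel model itself can be pictured as a regular 3D grid in which every classified, georeferenced point increments a per-voxel soil-class counter. A minimal sketch (pure Python; voxel size and class labels are illustrative, not the HOBA configuration):

```python
from collections import defaultdict

def build_voxel_model(points, voxel_size=0.25):
    """Accumulate classified 3D points (x, y, z, soil_class) into a
    voxel grid; each voxel keeps a count per soil class and reports
    its majority class."""
    grid = defaultdict(lambda: defaultdict(int))
    for x, y, z, soil_class in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key][soil_class] += 1
    # per voxel: the soil class with the highest count wins
    return {key: max(counts, key=counts.get) for key, counts in grid.items()}
```

Summing the voxels of one class and multiplying by the voxel volume then yields the excavated volume per soil type, which is the quantity needed for the VOB-compliant documentation.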

2.  Hardware Developments

The concept of the hardware design is shown in fig. 2. An NVIDIA Jetson TX2 with a 256-core NVIDIA Pascal GPU, a hex-core ARMv8 64-bit CPU complex and 8 GB LPDDR4 memory with a 128-bit interface is used as the central processing unit on the box. The CPU complex combines a dual-core NVIDIA Denver 2 and a quad-core ARM Cortex-A57. The internal sensors of the HKA HOBA-Box include a ZED-F9 GNSS receiver, a compact ICM-20948 sensor (3-axis gyroscope, 3-axis magnetometer and 3-axis accelerometer), an MS5611 barometer and an SCL3300 inclinometer (tilt meter). This is the above box variant 1, while variant 2 uses a part of the navigation state y(t)', namely the position y(t)' = (x_e, y_e, z_e), calculated by MTS from the two GNSS receivers and tilt meters on the excavator machine (fig. 1) by the use of the Denavit-Hartenberg transformation.
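The Denavit-Hartenberg transformation mentioned above chains one homogeneous 4x4 matrix per joint of the excavator kinematics (boom, stick, bucket). A generic sketch of the standard DH convention (pure Python; any parameter values used are illustrative, not the MTS calibration):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform as a 4x4
    matrix (list of rows) for one joint/link."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product: chains successive DH link transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Chaining the per-link matrices from the machine body to the bucket yields the bucket pose from the joint angles measured by the tilt meters, which is how the position y(t)' at the box location can be derived.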

The backup drive is a WD Red SA500 NAS SATA SSD (1 TB), used for data backup triggered by defined threshold values and/or at a defined backup rate (e.g. hourly).
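The described trigger logic, combining a data-volume threshold with a time-based rate, can be sketched as follows; the concrete threshold and interval values here are assumptions, not the HOBA configuration:

```python
def backup_due(bytes_pending, seconds_since_last,
               threshold_bytes=512 * 1024**2, interval_s=3600):
    """Trigger a backup when either the pending data volume exceeds a
    size threshold or the configured interval (e.g. hourly) has elapsed."""
    return bytes_pending >= threshold_bytes or seconds_since_last >= interval_s
```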

Figure 2: Design concept of the HKA HOBA-Box and the data communication on the Excavation machine

The LUCID Helios2 ToF camera with integrated RGB IP67 kit (Triton 3.2 MP) is used as the optical component of the HKA HOBA-Box. The additional digital camera of the LUCID Helios2 ToF kit is used mainly for the ML/AI-based classification, as well as for texturing the ToF-generated point clouds.

The prototype box (fig. 3) is set up as a full system for running the navigation and SLAM, image processing, voxel generation and georeferencing algorithms and software developments, while the final smart box is under development in parallel.

Figure 3: Prototype box for system algorithms and software development. 1.) ToF, 2.) Backup drive, 3.) Additional Gigabit Ethernet ports, 4.) Digital camera (MIPI CSI), 5.) Additional pins for CAN and other functions.

3.  Algorithms and Software Developments

The hardware design described above serves as the base for the algorithm and software implementations described in fig. 4.

All calculations are processed centrally on the HKA HOBA-Box, i.e. on the NVIDIA Jetson TX2 computer (NVIDIA Pascal architecture GPU, 8 GB 128-bit LPDDR4 memory, 32 GB eMMC 5.1 flash memory and 1.0 TB external memory). The operating system of the HKA HOBA-Box is Ubuntu 18.04 LTS "Bionic Beaver" with the ROS distribution Melodic (Robot Operating System). The system is already prepared for deep learning based on Python and C/C++, with many of the necessary dependencies installed.

Initial experimental tests using TF1.x and PyTorch have successfully been carried out on image classification, detection and segmentation using transfer learning with different pre-trained models, e.g. ResNet, FCN-ResNet, SSD-MobileNet, Inception V3, etc.

Figure 4:  Data flow of the HKA HOBA-Box

Harby water robot


The "Harby" water robot is an automatically moving catamaran that is scalable in size for different applications. Control is carried out Internet-based in bidirectional communication, with transmission of the current navigation state vector and image data. Harby's current robotic application is aimed at cleaning docks. In addition to multi-sensor SLAM, automated detection and avoidance of obstacles will be developed and implemented.

The Harby R&D project comprises two research and development stages. In the first, multi-sensor navigation and remote control are developed for the controllable catamaran built by Weico. Stage 2 realizes automated driving in an ITRF-based global reference system.

Project overview:

Project stage 1: completed April 2019
Project stage 2: May 2019 - April 2020
Cooperation partners: Companies Weico and Soleon, Italy

ZIM Network High-Precision Realtime Navigation Baden-Württemberg

The ZIM-funded network "High-precision real-time navigation Baden-Württemberg (RTK B.W.)" is engaged along the development lines
- Construction (civil engineering 3D+, SLAM, BIM)
- Autonomous driving and flying
- Georeferencing objects
- Logistics
- Rescue
in developing different multi-sensor GNSS / MEMS / optics-based navigation technologies and intelligent systems.

The ZIM consortium includes - in addition to the Laboratory for GNSS & Navigation of the Karlsruhe University of Applied Sciences (HsKA) as the research center - the eight companies 2E mechatronic GmbH & Co. KG, AReS Ingenieurgesellschaft mbH, Convexis GmbH, geomer GmbH, Heidelberg Mobil International GmbH, Ingenieurbüro Bernd Hölle GmbH, Krämer Automotive Systems GmbH and MTS Maschinentechnik Schrode AG.

Link to news at www.geonet-mrn.de

Link to news at www.esnc-bw.de

ZIM consortium "High-Precision Realtime Navigation B.W." at the Kick-Off-Meeting at Technologiepark Tübingen-Reutlingen (TTR)

 

MSM - Multisensor Selfreferencing 3D-Mapping System

Download MSM summary

The MSM project at HSKA, funded by the Baden-Württemberg Ministry of Science, Research and Art (MWK) in the frame of the research programme "Innovative Projects", has predominantly focused on indoor navigation and mapping (without GNSS). Indoor scenes pose a challenge for navigation systems, as GNSS access is virtually impossible. The MSM project builds on the already developed NAVKA navigation and sensor fusion algorithms by implementing the system in ROS (Robot Operating System). This involves the development of ROS interfaces and wrappers that enable the system to seamlessly integrate with other open-source ROS-based mapping, navigation and SLAM packages.

The mapping component of the system is based on the Velodyne VLP16 3D laser scanner, a stereo camera rig and wheel odometry. Although sensors like laser scanners and cameras are usually used only for mapping, the MSM project uses data from such exteroceptive sensors to estimate the platform's state and motion trajectory, where the platform can be a robot, a UAV or even a hand-held mapping/navigation platform. The MSM mapping and navigation system substitutes such sensors to compensate for the lack of GNSS in indoor scenarios. Furthermore, the system detects loop closures so as to constrain drift and hence enhance the navigation state estimation. The final output of the system is an accurately registered 3D point cloud map with the respective platform trajectory estimate.
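The effect of a loop closure can be illustrated by linearly distributing the accumulated drift along the trajectory. This is a deliberately simplified stand-in for the graph optimization actually used in such systems; the poses and error values are illustrative:

```python
def distribute_loop_closure(poses, closure_error):
    """Linearly distribute an observed loop-closure error (dx, dy, dz)
    over a list of (x, y, z) trajectory poses; the first pose is held
    fixed and the last pose absorbs the full correction."""
    n = len(poses) - 1
    corrected = []
    for i, (x, y, z) in enumerate(poses):
        w = i / n  # weight: 0 at the start, 1 at the loop-closing pose
        corrected.append((x - w * closure_error[0],
                          y - w * closure_error[1],
                          z - w * closure_error[2]))
    return corrected
```

Real SLAM back-ends instead minimize the error over a pose graph with many constraints, but the principle is the same: the loop-closure observation bounds the drift of the whole trajectory, and the corrected trajectory in turn improves the point-cloud registration.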

A top view of part of a foyer scene with the registered sparse point clouds from the VLP16 laser scanner. The integration of a navigation component significantly improves the accuracy of such sparse 3D point cloud registration and of the optimized trajectory computation.

News

Indoor 3D Mapping with VolksBot, 3D laser scanner and NAVKArine-MSM

Indoor 3D Mapping based on the VolksBot mobile robot platform, the VLP16 3D laser scanner and
NAVKArine-MSM, as a further development of the G1MC multi-sensor navigation module and the NAVKA software.
In addition, a small computer (NUC) was used as the central input processing unit (GPIO) for the extended NAVKA algorithms.

As a further hardware component, a laser scanner (Velodyne) was used.

This short animation shows a 3D point cloud map of the HSKA Building B ground floor, mapped with
the multi-sensor self-referencing 3D mapping platform NAVKArine-MSM developed at HSKA under the project "MSM - Multisensor Selfreferencing 3D-Mapping System", funded by the Baden-Württemberg Ministry of Science, Research and Art (MWK) in the frame of the "Innovative Projects" research programme.

Development platform: ROS
Hardware: VolksBot mobile robot,
Velodyne VLP16 3D laser scanner
NAVKArine-G1MC navigation module (GNSS receiver, accelerometer, gyroscope, barometer and magnetometer)
Packages and algorithms used: NAVKA navigation and sensor fusion algorithms, together with Lidar Odometry and Mapping + Generalized ICP

Project PREGON-X: Mobile GIS and Precise Object Georeferencing with Smartphones

The research and developments of the HSKA in the context of PREGON-X include mathematical models, algorithms and software for precise positioning and contactless object geo-referencing with smartphones, up to cm-level precision. The HSKA basic algorithms and software for satellite geodesy, multi-sensor navigation and mathematical geodesy are taken up by the company Disy, Karlsruhe (http://www.disy.net/nc/home.html), further developed and integrated into innovative apps as well as general server-client technologies.

The focus of the HSKA R&D is on DGNSS and PPP positioning algorithms for the GNSS raw data on smartphones, which have only been available since the end of 2016. Even in international comparison, this R&D field - mathematical models, algorithms and software for tightly coupled GNSS & MEMS data in the low-cost domain - is challenging. The poor signal and antenna technology as well as multipath effects of ubiquitous systems are investigated with innovative approaches to parameter estimation, ambiguity resolution and the elimination of cycle slips. A focus here is also on the signal structures of Galileo, which are advantageous compared to the other GNSS.
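As an illustration of one of the simplest cycle-slip screening ideas (not the NAVKA algorithm itself), the second difference of the carrier phase over consecutive epochs can be tested against a threshold; the threshold value and phase series below are assumptions:

```python
def detect_cycle_slips(phase_cycles, threshold=0.5):
    """Flag epochs where the second difference of the carrier phase
    (in cycles) exceeds `threshold`, indicating a possible cycle slip.
    A single slip shows up at the jump epoch and the epoch after it."""
    slips = []
    for i in range(2, len(phase_cycles)):
        d2 = (phase_cycles[i] - 2 * phase_cycles[i - 1]
              + phase_cycles[i - 2])
        if abs(d2) > threshold:
            slips.append(i)
    return slips
```

On smartphone-grade receivers the phase is noisy and slips are frequent, which is why robust estimation and the more favorable Galileo signal structures matter; production algorithms use combinations of observables rather than a single raw phase series.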

One of the first developments will focus on the integration of external GNSS receivers and miniaturized GNSS/MEMS sensor platforms into smartphone apps, utilizing the smartphone as controller and processing unit:  NAVKA_smartphoneRtk.pdf

The R&D of PREGON-X means, in general, a technological revolution for ubiquitous systems, with sustained interdisciplinary potential extending well beyond the fields of geodesy and geoinformatics.

Project overview

Start: 01. April 2017
Project leader: Prof. Dr.-Ing. Reiner Jäger