CONVERGE

Telecommunications and Computer Vision Convergence Tools for Research Infrastructures

Funding

Horizon Europe, 8.8M€

Duration

2023—2026

Description

INESC TEC leads a European project that combines radio communications with computer vision on the path towards 6G. The CONVERGE project aims to develop a set of innovative tools, aligned with the motto “view-to-communicate and communicate-to-view”, to support research infrastructures. This toolset is a world-first and consists of vision-aided large intelligent surfaces, vision-aided fixed and mobile base stations, a vision-radio simulator and 3D environment modeller, and machine learning algorithms for multimodal data, including radio signals, video streams, RF sensing, and traffic traces. In the future, research supported by these tools is expected to play a major role in the healthcare, industry, automotive, telecommunications, and media sectors. The toolset will provide the scientific community with exclusive, open data and improve the competitiveness of the research infrastructures and companies involved.

Scientific Advances

Telecommunications and computer vision have evolved as separate scientific areas. This is expected to change with the advent of wireless communications operating at millimetre-wave and sub-THz frequencies. These radios are characterised by line-of-sight operating ranges and could benefit from visual data to accurately predict wireless channel dynamics, for example by anticipating future received power and blockages, as well as by constructing high-definition 3D maps for positioning.
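As a rough, self-contained illustration of this idea, the following Python sketch trains a simple classifier that maps camera-derived features of a moving obstacle to a future blockage label. The features, the time horizon, and the synthetic data are assumptions chosen for illustration only, not a CONVERGE specification.

"""Minimal sketch (assumption, not CONVERGE code): predict an upcoming
line-of-sight blockage from camera-derived object features.

Features are hypothetical per-frame detections of a moving obstacle:
distance to the Tx-Rx line (m), speed towards it (m/s), object size (m).
The label says whether the link was blocked 500 ms later."""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data standing in for annotated video + radio traces.
n = 2000
dist_to_link = rng.uniform(0.0, 10.0, n)    # m
speed_towards = rng.uniform(-2.0, 2.0, n)   # m/s (positive = approaching)
obj_size = rng.uniform(0.2, 2.0, n)         # m

# Toy ground truth: close, large, approaching objects tend to block the link.
logit = 2.0 - 0.8 * dist_to_link + 1.5 * speed_towards + 1.0 * obj_size
blocked_soon = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([dist_to_link, speed_towards, obj_size])
clf = LogisticRegression().fit(X, blocked_soon)

# Predict blockage probability for a new detection: 1.5 m away, approaching fast.
p = clf.predict_proba([[1.5, 1.2, 1.0]])[0, 1]
print(f"predicted blockage probability in the next 500 ms: {p:.2f}")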

On the other hand, computer vision applications will become more robust against occlusion and low luminosity if aided by radio-based imaging, for example using the high-frequency radio signals generated by 6G large reconfigurable intelligent surfaces, which can also provide high-resolution sensing. This new and emerging joint research field relies on a range of technologies in wireless communications, computer vision, sensing, computing, and machine learning, and is closely aligned with the recent research trend of Integrated Sensing and Communications (ISAC).

The CONVERGE toolset will be truly innovative and unique worldwide. Currently, only the COSMOS RI, a partner of the CONVERGE consortium, provides access to cameras for research that improves communications by using video frames to better sense the environment. However, these cameras do not share the point of view of the base stations and the UEs, nor do they provide the trace resolution or the full repeatability and reproducibility that CONVERGE is aiming for. To the best of the consortium's knowledge, the CONVERGE concept is a world-first.

The toolset to be developed by the CONVERGE project will be a world-first, consisting of the following tools:
• Vision-aided large intelligent surface with communications and sensing capabilities.
• Vision-aided fixed and mobile base stations.
• Vision-radio simulator and 3D environment modeller.
• Machine Learning algorithms for processing multi-modal data.
This toolset is expected to enable the following outcomes:
• Better communications solutions which, dynamically and in real time, will take advantage of vision and sensing information.
• Better vision solutions which will take advantage of networks of cameras, sensing and radio information.
• Enable new classes of applications with enormous business potential in the following verticals:
- Telecommunications (enabling the control of the electromagnetic response of the environment and allowing for ubiquitous communications and high accuracy localization and high-resolution sensing services).
- Automotive (improved perception of vehicle’s surroundings and external conditions that may affect the driving quality).
- Health (improved fall detection and non-contact detection of vital signs, automatic assessment of subject posture and prosthetic device alignment in physical rehabilitation).
- Manufacturing (improved understanding of the factory floor).
- Media (lower computational complexity imaging of objects in cluttered environments).
• Advance the state of the art of a set of Research Infrastructures (RIs), in close alignment with SLICES-RI, a distributed ESFRI selected for the ESFRI Roadmap in 2021, and make the toolset totally or partially available at 7 independent RIs.
• Provide the scientific community with unique and open datasets.
• Improve the competitiveness of the associated research infrastructures and of the companies involved.
• Develop new research infrastructures, namely the large Portuguese CONVERGE-PT RI, currently under implementation, which is fully aligned with the proposed concept.


Impact

The toolset developed by CONVERGE includes: 1) Vision-aided Large Intelligent Surface, 2) Vision-aided base station, 3) Vision-radio simulator and 3D environment modeller, and 4) Machine Learning (ML) algorithms.
The basis of the CONVERGE vision-radio research infrastructure is the CONVERGE anechoic Chamber, which enables the collection of experimental data from radio communications, radio sensing, and vision sensing. To set up and collect this experimental data, an extension to the 5G core is deployed and a set of autonomous processes is in place, covering the setup of the Chamber and of the Core equipment and functions, as well as data collection, exchange, and storage. These processes can follow a publish-subscribe approach, with the main output being a dataset containing synchronized data and experiment annotations.
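The following Python sketch illustrates how such a publish-subscribe collection step could look, assuming a simple in-process message bus rather than any particular broker; the topic names, record fields, and experiment annotation are illustrative only.

"""Minimal sketch (assumption, not the CONVERGE implementation) of a
publish-subscribe data-collection step: radio and vision processes publish
timestamped records to topics, and a collector subscribes to both and stores
them as one annotated dataset."""
from collections import defaultdict
import json
import time


class Bus:
    """In-process stand-in for a message broker."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, record):
        for callback in self._subs[topic]:
            callback(topic, record)


bus = Bus()
dataset = []  # synchronized records, e.g. written to disk per experiment

# Collector: every record is tagged with its topic, arrival time and an
# experiment annotation before being appended to the dataset.
def collect(topic, record):
    dataset.append({"topic": topic, "t_rx": time.time(),
                    "experiment": "chamber_run_001", **record})

bus.subscribe("radio/csi", collect)
bus.subscribe("vision/frame_meta", collect)

# Producers standing in for the Chamber equipment and the 5G core extension.
bus.publish("radio/csi", {"t_tx": time.time(), "subcarrier_mag_db": [-62.1, -63.4]})
bus.publish("vision/frame_meta", {"t_tx": time.time(), "camera": "cam0", "frame_id": 42})

print(json.dumps(dataset, indent=2))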

• CSI-based Path Loss Model for Reproducible Experiments using ns-3:
A Network Simulator 3 (ns-3) path loss model capable of assimilating real Channel State Information (CSI) traces, captured in previous 5G experiments, and reproducing them in the simulation scenario. This way, an experimenter is able to repeat and reproduce the same RF propagation conditions in a controlled simulation environment, without depending on the availability of the testbed. This work builds on previous work by INESC TEC on the trace-based simulation approach for reproducing past Wi-Fi experiments in ns-3. While the previous work focused on reproducing the same SNR as in the real experiments, this model uses the ns-3 Spectrum PHY capabilities to reproduce a more realistic phase and magnitude of the signal per subcarrier, leading to more accurate simulations.
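As a conceptual illustration of trace-based replay (deliberately not using the ns-3 API), the Python sketch below applies recorded per-subcarrier CSI to a flat transmitted spectrum, so that the replayed link exhibits the same per-subcarrier magnitude and phase as the original experiment; the trace shape, units, and looping behaviour are assumptions.

"""Conceptual sketch (not the ns-3 API) of trace-based channel replay:
recorded per-subcarrier CSI from a past 5G experiment is applied to a
transmitted spectrum so the simulation sees the same magnitude and phase
per subcarrier as the real link did."""
import numpy as np

# Hypothetical trace: one row per snapshot, one complex CSI value per subcarrier.
n_subcarriers = 12
rng = np.random.default_rng(1)
csi_trace = (rng.normal(size=(100, n_subcarriers))
             + 1j * rng.normal(size=(100, n_subcarriers)))

tx_power_dbm_per_sc = np.full(n_subcarriers, 10.0)  # flat transmit PSD, dBm per subcarrier
tx_amplitude = 10 ** (tx_power_dbm_per_sc / 20.0)   # amplitude in sqrt(mW), so 20*log10 gives dBm


def replay_snapshot(t_index: int) -> np.ndarray:
    """Return the received complex spectrum for snapshot t_index of the trace."""
    h = csi_trace[t_index % len(csi_trace)]          # wrap around like a looped trace
    return tx_amplitude * h                          # per-subcarrier magnitude and phase


rx = replay_snapshot(3)
print("per-subcarrier received power (dBm):", np.round(20.0 * np.log10(np.abs(rx)), 1))
print("per-subcarrier phase (rad):", np.round(np.angle(rx), 2))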

• NimbusRT:
This software performs ray-tracing-based radio signal path computation when a point-cloud-based 3D model of the radio environment is used. The output can serve as a basis for computing electromagnetic models for point-to-point connections.
The software is available at: https://github.com/nvaara/NimbusRT
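The Python sketch below illustrates one way ray-traced paths could feed a point-to-point electromagnetic model: each path contributes a free-space loss over its length, an extra interaction loss, and a phase set by its delay, and the complex contributions are summed. The path-list format is an assumption and does not reflect NimbusRT's actual output schema.

"""Sketch (assumptions throughout) of turning ray-traced paths into a simple
point-to-point channel model."""
import numpy as np

C = 3.0e8          # speed of light, m/s
FREQ = 28e9        # carrier frequency, Hz (mmWave example)
LAMBDA = C / FREQ

# Hypothetical ray-tracer output: path length in metres and extra loss in dB
# accumulated at reflections/diffractions along that path.
paths = [
    {"length_m": 25.0, "interaction_loss_db": 0.0},    # line-of-sight
    {"length_m": 31.4, "interaction_loss_db": 9.0},    # one reflection
    {"length_m": 38.2, "interaction_loss_db": 15.0},   # two reflections
]

h = 0.0 + 0.0j
for p in paths:
    fspl_db = 20 * np.log10(4 * np.pi * p["length_m"] / LAMBDA)   # free-space path loss
    gain = 10 ** (-(fspl_db + p["interaction_loss_db"]) / 20.0)   # amplitude gain
    phase = -2 * np.pi * p["length_m"] / LAMBDA                   # phase from propagation delay
    h += gain * np.exp(1j * phase)

print(f"total path gain: {20 * np.log10(abs(h)):.1f} dB")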

• Vision Radio Simulator:
VR-S is a Vision Radio Simulator tool that combines radio and vision-based communication and sensing technologies in the spirit of “view-to-communicate and communicate-to-view”. It serves as a virtual testing platform that enables a series of experiments to be carried out to overcome the limitations of physical facilities and to ensure that radio-vision simulations accurately replicate the real experiments. The vision-radio simulator takes advantage of a 3D environment modeller to implement a digital twin of the vision-radio environment using vision and radio sensing information. The simulator will provide researchers with unrivalled tools for advanced RIS-assisted communications and sensing planning. Project results will also be exploited to extend the services proposed by the SLICES-RI facility to larger communities, including industry, researchers, and MSc and PhD students.
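As a minimal sketch of the digital-twin idea, assuming a toy scene representation rather than the actual VR-S internals, the following Python example lets hypothetical vision detections move an obstacle box through a 3D model and re-evaluates, per frame, whether the Tx-Rx link is line-of-sight.

"""Minimal sketch (assumption, not the VR-S implementation) of a digital-twin
update loop: camera detections move an obstacle box inside a 3D scene model,
and the radio side re-checks whether the Tx-Rx link is line-of-sight by
intersecting the link segment with the obstacle (slab test)."""
import numpy as np


def segment_hits_box(p0, p1, box_min, box_max):
    """Return True if the segment p0->p1 intersects the axis-aligned box."""
    d = p1 - p0
    tmin, tmax = 0.0, 1.0
    for i in range(3):
        if abs(d[i]) < 1e-12:
            if p0[i] < box_min[i] or p0[i] > box_max[i]:
                return False
        else:
            t1 = (box_min[i] - p0[i]) / d[i]
            t2 = (box_max[i] - p0[i]) / d[i]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
            if tmin > tmax:
                return False
    return True


tx = np.array([0.0, 0.0, 3.0])    # fixed base station position (m)
rx = np.array([10.0, 0.0, 1.5])   # UE position (m)

# "Vision detections": the obstacle's centre at successive frames, approaching the UE.
for frame, centre_x in enumerate([4.0, 6.0, 8.0, 9.0]):
    centre = np.array([centre_x, 0.0, 1.0])
    half = np.array([0.5, 0.5, 1.0])          # obstacle half-extents (m)
    blocked = segment_hits_box(tx, rx, centre - half, centre + half)
    print(f"frame {frame}: link is {'NLOS' if blocked else 'LOS'}")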
