Current Issue

2025 Vol. 14, No. 3
Special Topic Papers: Special Issue on LiDAR Detection Technology
With the expansion of China’s space interests and the growth of on-orbit assets, high-precision detection of dim, weak, noncooperative space targets has become a core bottleneck in space security defense and debris removal. Traditional optical and radar detection technologies are limited by the diffraction limit and signal-to-noise ratio constraints, and their detection and identification accuracy for “fast, far, small, and dark” targets is insufficient. Light Detection and Ranging (LiDAR), with its high precision and anti-jamming advantages, has gradually become a core technical means of accurately detecting space targets. Technologies such as sub-pixel scanning, synthetic aperture, and reflective tomography enable long-range super-resolution imaging by breaking through the physical limitations of conventional LiDAR systems. This paper begins by summarizing and organizing the critical problems associated with LiDAR super-resolution technology. The progress of key technological research is then reported, typical experimental systems and their results are analyzed, and the characteristics, advantages, and shortcomings of each system are described with respect to the requirements of space exploration, remote sensing, and mapping missions. Finally, the application prospects and development trends are presented.
As an important method of 3D (Three-Dimensional) data processing, point cloud fusion technology has shown great potential and promising applications in many fields. This paper systematically reviews the basic concepts, commonly used techniques, and applications of point cloud fusion and thoroughly analyzes the current status and future development trends of various fusion methods. Additionally, the paper explores the practical applications and challenges of point cloud fusion in fields such as autonomous driving, architecture, and robotics. Special attention is given to balancing algorithmic complexity with fusion accuracy, particularly in addressing issues like noise, data sparsity, and uneven point cloud density. This study serves as a strong reference for the future development of point cloud fusion technology by providing a comprehensive overview of the existing research progress and identifying possible research directions for further improving the accuracy, robustness, and efficiency of fusion algorithms.
Small-footprint full-waveform Light Detection And Ranging (LiDAR) exhibits significant application potential owing to its high penetration capability and ability to capture complete echo data. However, the efficient and accurate processing of massive echo signals remains a crucial challenge for practical use, particularly in advancing waveform decomposition technology. In small-footprint full-waveform LiDAR systems, most echoes are single-target, while only multi-target echoes require detailed decomposition. Current solutions often sacrifice precision by employing simplified rapid waveform decomposition algorithms or process all echoes indiscriminately, resulting in low efficiency and the inability to balance accuracy and speed effectively. This study proposes a spatiotemporal coupling model-driven lightweight algorithm for detecting multi-target echoes in small-footprint full-waveform LiDAR. For the first time, it achieves efficient and accurate detection of multi-target echoes from waveform data with unknown echo counts. The proposed method eliminates redundant computations caused by indiscriminate processing of single-target echoes, significantly reducing waveform decomposition iterations. The technical contributions include constructing a spatiotemporal coupling echo signal model that captures the spatiotemporal characteristics of echo transmission, implementing model-driven lightweight waveform parameter estimation through double Gaussian function superposition fitting, and introducing an adaptive correlation discrimination method based on a signal-to-noise ratio approach. By leveraging the consistency of system-emitted pulses, the proposed method enables lightweight yet accurate multi-target echo detection. Experimental results on terrestrial and airborne waveform datasets demonstrate that our algorithm achieves 98.4% detection accuracy with a 93.1% recall rate. 
When integrated with four waveform decomposition methods, it improves processing efficiency by 2–3 times. The efficiency gain becomes even more pronounced as the proportion of single-target echoes increases.
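The model-driven parameter estimation step described above rests on fitting a superposition of two Gaussian pulses to a candidate multi-target echo. The following is a minimal sketch of such a double Gaussian fit on a synthetic waveform; the pulse parameters, noise level, and initial guess are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(t, a1, m1, s1, a2, m2, s2):
    """Superposition of two Gaussian pulses (two-target echo model)."""
    return (a1 * np.exp(-(t - m1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(t - m2) ** 2 / (2 * s2 ** 2)))

# Synthetic two-target echo: two overlapping returns plus receiver noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 100, 500)              # sample times (ns, assumed)
truth = (1.0, 40.0, 4.0, 0.6, 55.0, 4.0)  # (amp, center, width) x 2
wave = double_gaussian(t, *truth) + rng.normal(0, 0.01, t.size)

# In practice the initial guess comes from detected local maxima.
p0 = (0.9, 38.0, 5.0, 0.5, 57.0, 5.0)
popt, _ = curve_fit(double_gaussian, t, wave, p0=p0)
print(popt[1], popt[4])                   # recovered pulse centers, ~40 and ~55
```

With a reasonable initial guess, the fit separates the two returns even when they partially overlap; the spatiotemporal consistency checks in the paper decide beforehand whether a waveform warrants this two-component decomposition at all.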
The vertical characteristics of biological optical parameters in the upper ocean are essential for evaluating marine primary productivity and the carbon cycle. Although ocean LiDAR can effectively detect these parameters, the inversion results are usually highly biased due to regional differences in the adaptability of empirical models. This study uses multiplatform LiDAR observations collected in a certain sea area of China (2023–2024), combined with a region-adaptive bio-optical model, to achieve high-precision profiling of bio-optical parameters in the region. The derived vertical profiles of chlorophyll-a concentration showed strong agreement with in-situ measurements, with a coefficient of determination (R²) of 0.84 and an average root mean square error of 0.14 μg·L⁻¹. Further quantitative analysis using an error transfer model revealed that differences in band-specific optical sensitivity considerably affected the error distribution. The effective detection depth in the blue band was 70 m, notably greater than the 58 m depth in the green band. In addition, at the subsurface chlorophyll maximum layer, the inversion bias in the blue band was 0.18 μg·L⁻¹ lower than that in the green band, highlighting the intrinsic relationship between the optical characteristics of each wavelength and its associated bias. This result provides an effective method for improving the reliability of profile inversion of bio-optical parameters in complex waters and performing error analysis.
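The agreement metrics quoted above, R² and RMSE, are computed from paired in-situ and inverted values. A minimal sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination (R^2)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical in-situ vs. inverted chlorophyll-a values (ug/L); these
# numbers are stand-ins for illustration only.
in_situ  = np.array([0.31, 0.45, 0.62, 0.88, 1.10, 0.74, 0.52])
inverted = np.array([0.35, 0.41, 0.66, 0.95, 1.02, 0.70, 0.58])
r2 = r_squared(in_situ, inverted)
err = rmse(in_situ, inverted)
```

The same two statistics, evaluated per depth bin, also expose the band-dependent bias structure the abstract describes.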
Antarctic Digital Elevation Models (DEMs) provide critical topographic support for polar scientific expeditions and enable the estimation of melt pond volumes. However, conventional ground calibration methods face implementation challenges in extreme Antarctic environments. Spaceborne Light Detection And Ranging (LiDAR) effectively addresses this limitation by directly acquiring high-precision surface elevation data. ICESat-2, a next-generation laser altimetry satellite, features an exceptionally small laser footprint spacing of merely 0.7 m. The elevation data products of ICESat-2 over the Antarctic ice sheet achieve centimeter-level accuracy using the Reference Elevation Model of Antarctica (REMA) source data. This study first validated the elevation accuracy of the ICESat-2 ATL06 (Advanced Topographic Laser Altimeter System Land Ice Height) data products using the IDHDT4 (IceBridge HiCARS Depth Digitizer Time Series, Version 4) data from the 2015 Operation IceBridge campaign of NASA in the McMurdo Dry Valleys region and mitigated disturbances from cloud cover, snowfall, and other factors through a quality control algorithm. Building upon this validation, this study systematically assessed the elevation accuracy of the 32 m resolution REMA DEM across selected low-ablation regions of the Antarctic ice sheet, delineated according to Antarctic drainage basin boundaries, using the ATL06 data as a reference. Results showed that REMA DEM achieves submeter accuracy (comparable to laser altimetry precision) in flat terrains with slopes below 5°, with a Root-Mean-Square Error (RMSE) of 0.72 m and a Mean Absolute Error (MAE) of 0.31 m. For moderate slopes of 5°–10°, the RMSE and MAE increased to 1.91 and 1.06 m, respectively; meanwhile, slopes of 10°–15° yielded values of 2.30 m (RMSE) and 1.57 m (MAE). Even at steeper slopes of 30°, the elevation error remained controlled, with the RMSE not exceeding 3.5 m. 
This study further quantified the impact of ground track orientation relative to slope aspect and of seasonal variations. Track-aspect angles perpendicular to slopes intensify errors (e.g., RMSE increases by 170% at a slope of 15°), whereas seasonal differences in elevation errors remain minimal (i.e., <5%). The validation framework demonstrates the robustness of REMA DEM across diverse Antarctic terrains, providing a theoretical foundation for applications such as lake ice surface bathymetry inversion.
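The slope-stratified accuracy assessment above amounts to binning DEM-minus-reference residuals by slope and reporting RMSE and MAE per bin. A sketch of that procedure on synthetic data (the slope-dependent noise model is an assumption made purely to mimic the reported error growth):

```python
import numpy as np

def binned_errors(dem_elev, ref_elev, slope, bins):
    """RMSE and MAE of DEM-minus-reference residuals per slope bin (deg)."""
    results = {}
    for lo, hi in bins:
        mask = (slope >= lo) & (slope < hi)
        resid = dem_elev[mask] - ref_elev[mask]
        results[(lo, hi)] = (float(np.sqrt(np.mean(resid ** 2))),  # RMSE
                             float(np.mean(np.abs(resid))))        # MAE
    return results

# Synthetic elevations with noise that grows with slope, loosely mimicking
# the behavior reported for REMA; all values are illustrative.
rng = np.random.default_rng(1)
slope = rng.uniform(0, 15, 10000)
ref = rng.uniform(0, 100, 10000)
dem = ref + rng.normal(0, 0.1 + 0.15 * slope)     # slope-dependent error
stats = binned_errors(dem, ref, slope, [(0, 5), (5, 10), (10, 15)])
```

In the actual validation, `ref_elev` would be quality-controlled ATL06 heights sampled at REMA pixels, and the bins follow the 5° intervals used in the study.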
In recent years, surface ship target tracking has been an important problem for autonomous ship navigation. For three-dimensional environmental perception, LiDAR offers high resolution and high precision. By adding one-dimensional scanning, long-line-array LiDAR achieves a larger field of view than single-point and area-array LiDAR, offering unique advantages in environmental perception. Owing to the inconsistency between the characteristics of surface ships and ground targets, and the lack of relevant datasets, the commonly used fitting methods cannot effectively perceive surface target characteristics. In this paper, an efficient target tracking method for ships is proposed based on the characteristics of single-photon point clouds and long-distance target detection. The method performs synchronous clustering and denoising of neighboring points; it then uses prior knowledge of the geometric features of ships to fit the extracted ship feature points and surfaces, further reducing the influence of noise. Combined with an extended Kalman filter and a velocity estimation method, real-time, stable trajectory tracking of a target at 600 m is realized. The root mean square error of tracking is 0.5 m, with a single-frame processing time of 1.02 s, which meets real-time engineering requirements. The proposed method has also been tested in a complex environment and shows good tracking performance for large ships, outperforming the common fitting-based tracking method. This provides better information for the subsequent autonomous navigation of intelligent ships and enables better obstacle avoidance and path planning.
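The tracking backbone described above is a Kalman filter fed with per-frame position fixes from the point-cloud fitting stage. The sketch below uses a linear constant-velocity Kalman filter rather than the paper's extended Kalman filter, and all tuning values (noise covariances, time step, ship trajectory) are assumptions for illustration.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],        # state transition: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],         # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                # process noise (assumed)
R = np.eye(2) * 0.25                # measurement noise, 0.5 m std (assumed)

x = np.zeros(4)                     # initial state
P = np.eye(4) * 10.0                # initial covariance

def kf_step(x, P, z):
    """One predict/update cycle for a position measurement z = [x, y]."""
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a ship moving at (1.0, 0.5) m/s observed with noisy position fixes.
rng = np.random.default_rng(2)
for k in range(1, 60):
    z = np.array([k * 1.0, k * 0.5]) + rng.normal(0, 0.5, 2)
    x, P = kf_step(x, P, z)
```

After a few dozen frames the velocity states converge near the true ship velocity, which is what enables the stable trajectory and speed estimates reported in the abstract.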
Hyperspectral LiDAR (HSL) can obtain high precision and resolution spatial data along with the spectral information of the target, which can provide effective and multidimensional data for various research and application fields. However, differences in transmitting signal intensities of HSL at various wavelengths lead to variations in corresponding echo intensities, making it challenging to directly reconstruct accurate optical characteristics (reflectance spectral profile) of the target with echo intensities. To obtain the target reflectance spectral profile, a common solution is to correct the echo intensity (standard reference correction method) using standard diffuse reflectance whiteboards. However, in complex detection environments, whiteboards are susceptible to contamination, and the transmitting intensity of the laser may fluctuate due to changes in the environment and equipment conditions, which may potentially impact the calculation accuracy. The direct transmission of information from the full-waveform signals to the reconstruction of the reflectance spectral profiles is a more efficient approach. Therefore, we propose an echo intensity correction method based on HSL full-waveform data for the rapid generation of reflectance spectral profiles of targets. The initial step is to conduct a theoretical analysis that illustrates the similarity between the echo signals and the transmitting signals in terms of their waveforms. A skew-normal Gaussian function is then employed to fit the transmitting and echo signals of the HSL full waveform. Thereafter, the transmit-to-echo signal peak ratios (normalization factors) of the standard diffuse reflectance whiteboard at different wavelengths are calculated under ideal conditions. Finally, the reflectance spectral profile of the target is constructed by combining the normalization factor of the standard diffuse reflectance whiteboard with that of the target. 
To verify the effectiveness of the proposed method, we conducted experiments comparing the reconstructed reflectance spectral profiles with those calculated using the standard reference correction method. Moreover, we performed wood-leaf separation and target classification experiments to assess its reliability and usability. The experimental results reveal the following: (1) Reflectance spectral profiles reconstructed by correcting the echo intensity with the transmitting signals are similar to those obtained by the standard reference correction method and demonstrate excellent stability under various temperature and lighting conditions. Compared with the standard reference correction method, this approach effectively overcomes the influence of laser emission energy fluctuations, thereby considerably improving the measurement accuracy and consistency of reflectance spectral curves, especially under prolonged HSL operation. (2) Wood-leaf separation and multiple-target classification can be conducted using the reconstructed target reflectance spectral profiles, with a classification accuracy of over 90%. Overall, the proposed method simplifies echo intensity correction for full-waveform HSL and is suitable for the rapid reconstruction of target hyperspectral information during data acquisition.
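The core of the correction is the per-wavelength transmit-to-echo peak ratio (normalization factor): the target's factor is referenced against the whiteboard's factor measured once under ideal conditions. A minimal single-wavelength sketch follows; the peak-based ratio, the Gaussian pulse shapes (standing in for the paper's skew-normal fits), and all numeric values are illustrative assumptions.

```python
import numpy as np

t = np.linspace(0, 50, 500)          # waveform sample times (ns, assumed)

def pulse(a, m, s=2.0):
    """Gaussian stand-in for a fitted transmit or echo pulse."""
    return a * np.exp(-(t - m) ** 2 / (2 * s ** 2))

def normalization_factor(tx, echo):
    """Transmit-to-echo peak ratio for one wavelength."""
    return np.max(tx) / np.max(echo)

# Whiteboard reference recorded once under ideal conditions.
tx_w, echo_w = pulse(1.0, 10), pulse(0.40, 30)
# Target measurement later, after the laser emission energy has drifted up;
# the ratio cancels the drift because transmit and echo scale together.
tx_t, echo_t = pulse(1.2, 10), pulse(0.24, 30)

rho_whiteboard = 0.99                # assumed whiteboard reflectance
rho_target = (rho_whiteboard
              * normalization_factor(tx_w, echo_w)
              / normalization_factor(tx_t, echo_t))
```

Because each factor is a ratio within the same shot, fluctuations in emitted energy divide out, which is the stability advantage over a whiteboard-only correction noted in point (1).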
Light Detection And Ranging (LiDAR) systems lack texture and color information, while cameras lack depth information; thus, the information obtained from the two sensor types is highly complementary. Combining them yields rich observation data and improves the accuracy and stability of environmental perception. Accurate joint calibration of the extrinsic parameters of the two sensors is the premise of data fusion. At present, most joint calibration methods rely on calibration targets and manual point selection, which makes them unusable in dynamic application scenarios. This paper presents ResCalib, a deep neural network model for the online joint calibration of LiDAR and a camera. The method takes LiDAR point clouds, monocular images, and the camera intrinsic parameter matrix as input to solve for the extrinsic parameters between the LiDAR and the camera, with low dependence on external features or targets. ResCalib is a geometrically supervised deep neural network that automatically estimates the six-degree-of-freedom extrinsic relationship between LiDAR and camera by learning to maximize the geometric and photometric consistency of input images and point clouds. Experiments show that the proposed method can correct initial calibration errors of up to ±10° in rotation and ±0.2 m in translation. The mean absolute errors of the rotation and translation components of the calibration solution are 0.35° and 0.032 m, respectively, and the time required for single-group calibration is 0.018 s, which provides technical support for automatic joint calibration in dynamic environments.
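The geometric and photometric consistency losses rest on projecting LiDAR points into the image with the current extrinsic estimate (R, t) and the intrinsic matrix K. A minimal sketch of that projection; the matrices and points are assumed example values, not ResCalib internals.

```python
import numpy as np

def project_points(points_lidar, R, t, K):
    """Project LiDAR points into the image plane using extrinsics (R, t)
    and the camera intrinsic matrix K; returns pixel coords and depths."""
    pts_cam = points_lidar @ R.T + t        # LiDAR frame -> camera frame
    depths = pts_cam[:, 2]
    uv = pts_cam @ K.T                      # homogeneous image coordinates
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    return uv, depths

# Assumed example: identity rotation, small lateral offset, simple K.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])
pts = np.array([[0.0, 0.0, 5.0], [1.0, -0.5, 10.0]])
uv, depths = project_points(pts, R, t, K)
```

During training, the network's predicted (R, t) feeds this projection, and the mismatch between the projected depth map and the image drives the supervised loss.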
To address the low turbulence recognition rate of LiDAR in low-altitude airport areas, a clear-air turbulence recognition method based on an improved Squeeze-and-Excitation Residual Network with 50 layers (SE-ResNet50) is proposed. By introducing the squeeze-and-excitation module and improving the network structure, the model’s excessive sensitivity to feature location is reduced, enabling the network to selectively highlight informative features during learning. A sample dataset was established using measured data from Lanzhou Zhongchuan International Airport; for model training, a balanced dataset was created by extracting equal amounts of weak, moderate, and strong turbulence data according to the turbulence classification level. Under the same experimental conditions, the recognition accuracy of the improved SE-ResNet50 was higher by 7.44%, 6.52%, and 4.11% than those of the convolutional neural network, MobileNetV2, and ShuffleNetV1 networks, respectively. A comparison of the confusion matrices generated by each model showed that the accuracy of the proposed method reached 95%, verifying its feasibility.
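The squeeze-and-excitation module mentioned above squeezes each channel to a scalar by global average pooling, passes the result through a small two-layer bottleneck, and rescales the channels by the resulting sigmoid weights. A NumPy sketch of one forward pass; the random weights are stand-ins (in the paper they are learned end-to-end), and the shapes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a feature map x of shape (C, H, W):
    squeeze via global average pooling, excite through a bottleneck
    (reduction FC + ReLU, expansion FC + sigmoid), then rescale channels."""
    s = x.mean(axis=(1, 2))                 # squeeze: (C,)
    z = np.maximum(0.0, w1 @ s)             # reduction FC + ReLU: (C/r,)
    w = sigmoid(w2 @ z)                     # expansion FC + sigmoid: (C,)
    return x * w[:, None, None]             # channel-wise recalibration

# Toy example: 8 channels, reduction ratio r = 4.
rng = np.random.default_rng(3)
x = rng.normal(size=(8, 16, 16))
w1 = rng.normal(size=(2, 8)) * 0.1
w2 = rng.normal(size=(8, 2)) * 0.1
y = se_block(x, w1, w2)
```

Because the recalibration weights depend only on channel-wise statistics, not on where a feature appears, the block reduces sensitivity to feature location, which is the property the improved SE-ResNet50 exploits.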
The airport docking guidance system is essential for enhancing airport safety and operational efficiency. This study introduces a deep learning-based point cloud completion network designed for accurate aircraft localization using LiDAR technology. Initially, the aircraft parking process is simulated in a realistic virtual environment to generate complete point cloud data. Subsequently, partial point clouds caused by occlusions or sensor limitations are processed through the proposed network to reconstruct their complete geometric structures. Then the restored point cloud is aligned with a predefined aircraft model, enabling precise calculation of the aircraft’s center coordinates in the simulated coordinate system through spatial transformation. Experimental results demonstrate that the network effectively recovers structural details from incomplete point clouds, enabling accurate computation of aircraft centroid coordinates. This approach achieves high-precision position detection for aircraft during docking, showing significant potential for practical airport applications. The codes are available at: https://www.scidb.cn/anonymous/UXZFZkFm.
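The final localization step, aligning the completed point cloud with the predefined aircraft model and transforming the model center into the scene frame, can be sketched with a least-squares rigid alignment (the Kabsch/SVD solution). This is an assumed stand-in for the paper's registration step, and the toy geometry below is illustrative.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (least-squares rigid alignment via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Toy "aircraft": random model points, observed rotated 30 deg and shifted.
rng = np.random.default_rng(4)
model = rng.normal(size=(50, 3))
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
scene = model @ R_true.T + np.array([2.0, -1.0, 0.5])

R, t = kabsch(model, scene)
model_center = np.zeros(3)                  # assumed model center point
aircraft_center = R @ model_center + t      # center in scene coordinates
```

With the completed point cloud as `scene` and the reference aircraft model as `model`, the recovered (R, t) carries the model's known center into the simulated coordinate system, yielding the centroid used for docking guidance.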
New System Radar
This paper addresses the challenges of adapting complex perception systems to perform core tasks such as detection, tracking, and countermeasures in highly dynamic and adversarial battlefield environments. We propose an information-driven theoretical model and a systematic methodology for system construction. Specifically, a multi-layered information description framework based on syntax, semantics, and pragmatics is introduced to overcome the application barriers of traditional static modeling and fixed-pattern design, as well as the limitations of single syntactic structures and surface-level semantics. A dynamic evolution architecture is incorporated into the system. The theoretical achievements are also applied to the practice of distributed radar detection systems. Moreover, a structured hierarchical optimization algorithm with a finite-scenario interactive learning mechanism is designed to achieve ordered system organization and capability emergence, thereby addressing the complexity of system optimization. This study provides a theoretical framework and technical approach for the design of intelligent perception systems in complex battlefield environments.
Electromagnetic (EM) metasurfaces are a novel type of artificial EM material exhibiting great advantages for wireless communication and signal processing. By introducing external excitation (mechanical, thermal, electrical, optical, and magnetic excitations), the EM metasurface realizes a more flexible dynamic control of the EM response. On the basis of the dynamic control method, the EM metasurface can accurately control the phase, amplitude, polarization mode, propagation mode, and other characteristics of EM waves to realize wavefront control in different application scenarios. In this paper, we first summarize the research progress of dynamic control technology for EM metasurfaces. Then, the research status of EM metasurfaces in the fields of holographic imaging, polarization conversion, metalensing, beam steering, and intelligent systems based on the application scenarios is discussed. Finally, the development modes of EM metasurfaces and the development trends of intelligent control in the future are summarized and explored.
In this paper, a Frequency Diverse Array (FDA)-based Synthetic Aperture Radar (FDA-SAR) imaging method based on sub-band shifting and splicing is proposed to address the conflicting parameter-design requirements of multi-mode SAR imaging, such as those between resolution and imaging swath. Using the multiple-sub-band concurrent mode of an FDA radar, a radar waveform with an adjustable bandwidth is designed. The time-frequency-domain expressions of the synthesized signals with arbitrary bandwidths are derived in detail, enabling compensation for the involved azimuth-time-delay differences and inconsistent frequency bands. The effect of the spectrum distribution of the synthesized signals on the imaging performance is analyzed, and spectrum synthesis based on non-uniform sub-band shifting is adopted, which reduces the peak sidelobe level and improves the imaging performance. Finally, simulations verify the effectiveness of the proposed method in simultaneously achieving signal-level fusion processing for coarse-resolution imaging of large observation scenes and fine-resolution imaging of key areas.
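The sub-band shifting and splicing idea can be sketched at its simplest: two concurrently received sub-band signals are frequency-shifted so their spectra sit side by side, and their sum forms one wider synthetic band. The sketch below uses uniform shifting and omits the paper's azimuth-time-delay and phase compensation and its non-uniform shifting refinement; the sample rate, bandwidths, and pulse model are assumptions.

```python
import numpy as np

fs = 1.0e9                          # sample rate, Hz (assumed)
n = 4096
t = np.arange(n) / fs
T = n / fs                          # pulse duration

def lfm(f0, bw):
    """Analytic linear-FM pulse starting at f0 with bandwidth bw over T."""
    return np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * (bw / T) * t ** 2))

# Two concurrently transmitted 100 MHz sub-bands, both received at baseband.
sub1 = lfm(0.0, 100e6)
sub2 = lfm(0.0, 100e6)

# Sub-band shifting: move sub2 up by 100 MHz so that, after splicing, the
# spectra form one contiguous 200 MHz synthetic band.
sub2_shifted = sub2 * np.exp(1j * 2 * np.pi * 100e6 * t)
synth = sub1 + sub2_shifted         # spliced wideband signal

spec = np.abs(np.fft.fft(synth)) ** 2
freqs = np.fft.fftfreq(n, d=1.0 / fs)
```

Doubling the synthesized bandwidth halves the range resolution limit; the non-uniform shifting analyzed in the paper additionally shapes the composite spectrum to suppress the peak sidelobe level.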
Radar Remote Sensing Application
Synthetic Aperture Radar (SAR) ocean remote sensing simulation is an important analytical tool for designing SAR systems for ocean applications. It can also provide training samples for detecting and recognizing complex ocean phenomena in SAR images and therefore plays an important role in the design and application of SAR ocean remote sensing systems. The motion, time-varying, and decoherence characteristics of the sea surface make the simulation of SAR ocean remote sensing far more difficult and computationally expensive than that of fixed land targets. Improving simulation efficiency while ensuring simulation accuracy is therefore key to achieving high-precision, high-efficiency simulation of SAR ocean imaging. This study introduces the main methods, development status, and main problems of dynamic ocean SAR imaging simulation and provides methods for solving the key problems in high-precision simulation of dynamic ocean SAR imaging. The method can complete a simulation at 4 m resolution over a 400 km² scene within 10 min while ensuring high fidelity. Under typical working conditions, the spectral peak error of a simulated SAR image is 3%, and the spectral width error is 4%. Typical applications of dynamic ocean surface SAR imaging simulation in wave spectrum inversion, wave texture suppression based on depth cancellation networks, and ship wake detection based on the Wake2Wake network are introduced. On the one hand, these applications verify that the fidelity of the high-precision simulation of dynamic sea SAR imaging presented in this study can satisfy the requirements of intelligent simulation training. On the other hand, the high-precision simulation offers good prospects for the intelligent application of SAR ocean images and can be an important means of providing samples for the intelligent application of SAR ocean remote sensing.
Crop and soil parameters serve as fundamental indicators for characterizing crop growth status and monitoring vegetation dynamics. Radar remote sensing presents unique advantages, such as all-weather and day-and-night observation capabilities, as well as insensitivity to meteorological conditions. Furthermore, the penetration ability of microwaves enhances the sensitivity to soil parameter variations beneath crop canopies, demonstrating significant potential for retrieving crop and soil parameters. This article presents a comprehensive review and analysis of inversion models used for crop and soil parameters based on the microwave scattering theory. First, it discusses the evolution of microwave scattering models from theoretical frameworks to semiempirical approaches, demonstrating key trends in theoretical advancements and methodological refinements. Subsequently, it systematically examines inversion methods for crop parameters, soil parameters, and crop-soil interactions, revealing their underlying microwave scattering mechanisms. Finally, the article discusses current model limitations and proposes future research directions aligned with emerging technological developments to provide novel insights for subsequent investigations.
Communications
Maritime target detection and identification technology is developed using large-scale, high-quality multi-sensor measurement data. Therefore, the Sea Detection Radar Data Sharing Program (SDRDSP) was upgraded to the Maritime Target Data Sharing Program (MTDSP), integrating multiple observation modalities, such as HH-polarized radar, VV-polarized radar, electro-optical devices, and Automatic Identification System (AIS) equipment, to conduct multisource observation experiments on maritime vessel targets. The program collects various data types, including radar intermediate-frequency/video echo slice data, visible and infrared imagery, AIS static and dynamic messages, and meteorological and hydrological data, covering representative sea conditions and multiple vessel types. A comprehensive multisource observation dataset was constructed, enabling the matching and annotation of multimodal data for the same target. Moreover, an automated data management system was implemented to support data storage, conditional retrieval, and batch export, providing a solid foundation for the automated acquisition, long-term accumulation, and efficient use of maritime target characteristic data. Based on this system and the measured data, the time/frequency-domain features of the same and different vessel targets under different sea states, attitudes, and polarization conditions are compared and analyzed, and statistical conclusions on the variation of target features are obtained.