Citation: | SUN Xian, WANG Zhirui, SUN Yuanrui, et al. AIR-SARShip-1.0: High-resolution SAR ship detection dataset[J]. Journal of Radars, 2019, 8(6): 852–862. doi: 10.12000/JR19097 |
In recent years, deep-learning technology has been widely adopted. In research on Synthetic Aperture Radar (SAR) ship target detection, however, the difficulty of data acquisition and the small scale of available samples make it hard to support the training of deep-learning models. This paper provides a SAR ship detection dataset with high resolution and large-scale images. The dataset comprises 31 images from Gaofen-3 satellite SAR imagery, covering harbors, islands, reefs, and sea surfaces under different conditions. The backgrounds include various scenarios such as the nearshore and the open sea. We conducted experiments using both traditional detection algorithms and deep-learning algorithms and observed that a densely connected end-to-end neural network achieved the highest average precision, 88.1%. Based on the experiments and performance analysis, corresponding benchmarks are provided as a basis for further research on SAR ship detection using this dataset.
Synthetic Aperture Radar (SAR) is an active microwave imaging radar that can provide all-weather, day-and-night imaging capability. SAR has broad application prospects in the military and civilian fields. With the development of China’s ground observation technology in recent years, many high-resolution SAR satellites, such as Gaofen-3, have been put into use. The quality and quantity of SAR data both continue to increase.
The interpretation of SAR images faces many challenges. SAR imaging differs from optical imaging: its characterization is not intuitive, and speckle and layover effects during imaging usually interfere with target interpretation. Most existing daily operations rely on manual interpretation, which is time-consuming and labor-intensive, and meeting the needs of real-time interpretation of massive SAR images is difficult.
Continuous monitoring of ships in ports and maritime areas is an important application task[1], and ship detection has long been a research focus in SAR image interpretation. Ship detection divides into two types: nearshore and offshore. In general, the background of offshore ships is relatively uniform, making foreground extraction somewhat easier. By contrast, the nearshore area contains more ships of a wider variety, and ports lie in the land-sea transition zone, where background noise and ground interference are strong. Detecting nearshore ships is therefore more difficult.
The classic ship detection methods mainly combine statistical learning with the Constant False Alarm Rate (CFAR). In research on ship detection in single-polarization SAR, Rey et al.[2] proposed a detection method that combines a K-distribution ocean clutter model with CFAR. Novak et al.[3] developed a two-parameter CFAR using a Gaussian model. Stagliano et al.[4] proposed a SAR ship detection algorithm based on the combination of CFAR and the wavelet transform. He Jinglu et al.[5] proposed an automatic ship detection method for polarimetric SAR based on superpixel-level local information measurement; the algorithm generates multiscale superpixels, calculates measurement values between each superpixel and its surrounding pixels, and converts the different metrics from the superpixel level to the pixel level for discrimination and detection. These traditional methods have been widely used in the ship detection field and rely on hand-crafted feature classifiers to extract ship features. For example, the performance of the CFAR algorithm depends on the modeling of ocean clutter. Hand-crafted feature classifiers often work well for offshore ships against relatively uniform backgrounds, but in nearshore scenarios the performance deteriorates because the traditional methods cannot fully distinguish real ships from false-alarm targets such as islands, reefs, and man-made facilities near the shore.
In recent years, with the progressive development of deep learning, many target detection algorithms using deep neural network models have been proposed, remedying the deficiencies of traditional learning methods to a certain extent. Commonly used network models include autoencoders, Boltzmann machines, and Convolutional Neural Networks (CNNs). For CNNs in particular, many basic networks have emerged, such as AlexNet, VGG, GoogLeNet, and the Residual Network (ResNet). Many target detection models have been developed on these backbones, including classic detection models such as SSD, YOLOv1, and the Faster Region-based CNN (Faster-RCNN). These methods have gradually become the mainstream in the field of SAR ship detection.
However, deep learning methods often require large amounts of data for training. In the field of computer vision, many public sample datasets are available, such as ImageNet[6], VOC[7], and COCO[8]. The data scale reaches thousands of types of targets and millions of slices. In the past two years, some datasets such as DOTA[9], HRRSD[10], and RSOD[11,12] have been released successively in the field of optical remote sensing, thereby facilitating the research and test of many algorithms.
By contrast, the existing datasets in the field of SAR ship detection are relatively limited. Publicly available datasets include SSDD[13], OpenSARShip[14], and the dataset provided in Ref. [15]. These three datasets are mainly based on civilian ship slices. The slice size is generally 256 × 256 pixels, and the resolutions include 3 m, 5 m, 8 m, 10 m, and 20 m. Most backgrounds feature offshore scenarios, and nearshore scenarios are limited. The release of these three datasets has promoted the application of deep neural network models in SAR ship detection, and their benchmarks are defined based on mainstream deep learning algorithms.
In actual application, the ship detection task is often realized on the whole scene image, whose coverage area is generally tens of square kilometers or more. Under this condition, the environment around the target, such as docks, roads, outbuildings, and even waves, has a great impact on ship detection performance, especially in the nearshore and island reef scenarios. Therefore, a dataset that contains more realistic and diverse scenarios such as the distant sea and near shore and covers multiple types of ship targets will contribute to training a model with better performance, stronger robustness, and higher practicality.
To promote research on ship detection for SAR and improve the utilization rate of localized data, this paper publishes AIR-SARShip-1.0, a SAR ship dataset based on Gaofen-3 satellite data. The dataset contains 31 SAR images. The scene types include ports, islands, reefs, and the sea. The labeling information is the ship position, which has been confirmed by professional interpreters. Currently, the dataset is mainly used for ship target detection in complex scenarios. The dataset is free to download from the link on the official website of the Journal of Radar. The paper also uses several common deep learning networks for comparative experiments and analysis. The performance indexes form a benchmark for SAR ship detection, which is convenient for other scholars to cite as a reference in related research.
Gaofen-3 is a civilian microwave remote sensing imaging satellite developed under the major project of the National High-Resolution Earth Observation System, and it is also the first Chinese C-band multipolarization high-resolution synthetic aperture radar satellite[16]. The AIR-SARShip-1.0 dataset, collected from Gaofen-3, contains 31 images of large scenes. Detailed information on the dataset is shown in Tab. 1. The resolutions of the SAR images are 1 m and 3 m, and the imaging modes include both spotlight and strip. All images are in single-polarization mode, about 3000 × 3000 pixels in size, and saved in TIFF format. Details of each image, including image number, pixel size, resolution, sea state, scene type, and the number of ships, are presented in App. Tab. 1 of this paper.
Resolution | Imaging mode | Polarization mode | Format |
1 m, 3 m | Spotlight, Strip | Single | TIFF |
The AIR-SARShip-1.0 dataset is labeled according to the annotation format of the PASCAL VOC dataset, and the results are saved as XML files. Fig. 1(a) shows an example of annotated rectangular boxes, and Fig. 1(b) shows part of the corresponding XML label file. The file in Fig. 1(b) actually contains the rectangle information of all ships in Fig. 1(a); here, only one target box is listed as an example. The XML file records the image file name, pixel size, number of channels, resolution, and the category and position of each target box. The top-left corner of the SAR image is the origin of coordinates, and each target is labeled by a rectangular box defined by its top-left corner (xmin, ymin) and bottom-right corner (xmax, ymax), whose coordinates are actual pixel positions in the image. Fig. 2 presents some typical scenes in this dataset. The SAR images also contain surrounding harbors, sea, and inland areas, which makes the dataset close to the real ship detection task.
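A minimal sketch of reading such a label file with the standard library, assuming the usual PASCAL VOC tag names (`object`, `bndbox`, `xmin`, ...); the exact field set in AIR-SARShip-1.0 XML files may differ slightly, and the sample annotation below is illustrative rather than taken from the dataset:

```python
# Parse a PASCAL VOC style annotation string into (xmin, ymin, xmax, ymax)
# tuples, one per labeled ship. Tag names follow the VOC convention.
import xml.etree.ElementTree as ET

def parse_voc_boxes(xml_text):
    """Return a list of (xmin, ymin, xmax, ymax) tuples for every object."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append(tuple(int(bb.find(k).text)
                           for k in ("xmin", "ymin", "xmax", "ymax")))
    return boxes

# Hypothetical annotation fragment in the format described above:
sample = """<annotation>
  <filename>SARShip_1.tif</filename>
  <object><name>ship</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>180</xmax><ymax>140</ymax></bndbox>
  </object>
</annotation>"""
print(parse_voc_boxes(sample))  # [(120, 80, 180, 140)]
```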
The proportion of training and test sets is an important setting in the training process. Given that this dataset contains 31 large-scene images, we take 21 images as training data and the remaining 10 as test data. The area distribution of the bounding boxes is shown in Fig. 3, where the horizontal axis represents the bounding-box area in pixels and the vertical axis is the proportion of all ships falling in the corresponding bin. For example, the first column indicates that 6% of ships have an area of less than 1000 pixels, and the second that 13% have an area between 1000 and 2000 pixels. Fig. 3 shows that most target areas lie between 2000 and 5000 pixels, a small fraction of the whole image. Even when the large image is cropped into 500 × 500 pixel slices, the average area ratio of a ship target within a slice is between 0.008 and 0.020. Compared with the COCO dataset, one of the most challenging datasets in computer vision with about 41% small targets, AIR-SARShip-1.0 has more small targets in large scenes. AIR-SARShip-1.0 therefore mainly focuses on detection performance for small-scale targets.
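The statistics above can be reproduced from the parsed boxes with a short helper; the binning (1000-pixel bins, as in Fig. 3) and the example boxes are illustrative:

```python
# Bin bounding-box areas into 1000-pixel bins, as in Fig. 3, and return the
# proportion of ships per bin. The last bin collects all larger boxes.
def area_histogram(boxes, bin_width=1000, n_bins=6):
    areas = [(x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes]
    counts = [0] * n_bins
    for a in areas:
        counts[min(a // bin_width, n_bins - 1)] += 1
    total = len(areas)
    return [c / total for c in counts]

# Sanity check of the slice area ratio quoted above: a 3000-pixel ship in a
# 500 x 500 slice occupies 3000 / (500 * 500) = 0.012 of the slice,
# inside the 0.008-0.020 range.
print(3000 / (500 * 500))  # 0.012
```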
Before deep learning became popular, researchers worldwide conducted in-depth research on SAR ship detection and proposed many classical algorithms, such as the two-parameter CFAR algorithm, the optimal entropy automatic threshold method (KSW), and the CFAR method based on the K distribution. The optimal entropy automatic threshold method applies the Shannon entropy of information theory to image segmentation and, by selecting double thresholds, overcomes the problems of ship detection disconnection and false alarms in high-resolution images. The CFAR detection method is one of the most commonly used and effective algorithms in radar signal detection. Its core idea is to compute the threshold for detecting ship targets from a constant false alarm rate and the statistical characteristics of ocean clutter in SAR images, i.e., the probability density function of the ocean clutter. When the ocean background clutter is modeled by a Gaussian distribution, the two-parameter CFAR algorithm is obtained. However, in many cases, the Gaussian model does not describe ocean clutter well. Hence, in 1976, Jakeman and Pusey introduced the K distribution to describe ocean clutter; the CFAR method based on the K distribution further improves the accuracy of ship detection and is widely accepted. In the experimental part of this paper, these three classical ship detection algorithms are used to test and analyze the AIR-SARShip-1.0 dataset.
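The two-parameter (Gaussian) CFAR idea can be sketched as follows: estimate the local clutter mean and standard deviation in a background window around each pixel and flag pixels exceeding mean + k·std. This is a simplified illustration, not the paper's implementation; the window size and scale factor k are assumptions, and a full detector would exclude the cell under test with guard cells:

```python
# Minimal two-parameter CFAR sketch on a synthetic Gaussian-clutter scene.
import numpy as np

def two_param_cfar(img, bg=9, k=3.0):
    """Flag pixels brighter than local mean + k * local std (bg x bg window)."""
    h, w = img.shape
    pad = bg // 2
    padded = np.pad(img, pad, mode="reflect")
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + bg, j:j + bg]
            mask[i, j] = img[i, j] > win.mean() + k * win.std()
    return mask

rng = np.random.default_rng(0)
sea = rng.normal(0.0, 1.0, (40, 40))   # Gaussian ocean clutter
sea[20, 20] += 12.0                    # one bright point target
hits = two_param_cfar(sea)
print(bool(hits[20, 20]))  # True
```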
In recent years, with the development of deep learning, many object detection algorithms have been proposed in the vision field, mainly divided into two categories: single-stage and two-stage detectors. SSD[17], YOLOv1[18], and RetinaNet[19] are representative single-stage detection algorithms. YOLOv1 contains only two parts: feature extraction and object-box detection. YOLOv1 divides the image into an S × S grid; the grid cell containing an object's center is responsible for predicting the position and category of that object's box, and each cell can predict objects of only a single class. SSD differs from YOLOv1 in adding anchor boxes and multiscale feature extraction layers, thereby improving on YOLOv1's coarse grid and poor detection accuracy for small targets. Representative two-stage algorithms include R-CNN[20], Fast-RCNN[21], Faster-RCNN[22], and feature pyramid networks[23]. The most representative, Faster-RCNN, consists of three parts: a basic network that extracts high-level features from the image; a Region Proposal Network (RPN) that proposes candidate boxes that may contain targets; and a prediction-box regression network that further classifies each candidate box and regresses its location. Because it has a candidate-box extraction stage, the two-stage detection network is better than the single-stage network at controlling the proportion of positive and negative samples and at refining box positions, but it also greatly increases detection time.
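The YOLOv1 grid assignment described above can be sketched in a few lines: the cell containing a box's center is responsible for it, so two ships whose centers fall in the same cell cannot both be predicted, which is one reason dense small targets are hard for YOLOv1. The image size and S below are illustrative:

```python
# Map a box (xmin, ymin, xmax, ymax) to the YOLOv1 grid cell responsible
# for predicting it: the cell containing the box center.
def responsible_cell(box, img_size=500, S=7):
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    return int(cx * S // img_size), int(cy * S // img_size)

# Two nearby ships land in different cells here, but shrinking their
# separation would collapse them into one cell and lose a prediction.
print(responsible_cell((100, 100, 140, 140)))  # (1, 1)
print(responsible_cell((150, 150, 160, 160)))  # (2, 2)
```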
Target detection algorithms in the visual domain have similar basic networks, such as VGG and ResNet. VGG is mainly divided into two parts: convolutional network and fully connected network. ResNet is mainly used to solve the problem of the network performance degrading as the network depth increases. It cleverly designs a jumper module to form a residual block, which greatly increases the available network depth. Commonly used ResNet networks include ResNet50, ResNet101, and ResNet152.
At present, the main means of data augmentation are flipping, random image scaling, and 90-degree rotation. However, SAR satellites often image the same location at multiple times and from multiple angles, and the angle is uncertain, being neither a 90-degree rotation nor a 180-degree flip. As shown in Fig. 4, two SAR images of the same place look different because of the imaging angle. SAR imaging differs from optical imaging, and the imaging results from different angles differ[24]. Using only 90-degree rotations for data augmentation therefore limits the detection performance improvement. To solve this problem, this paper adopts Faster-RCNN with dense-rotation (Faster-RCNN-DR) augmentation at a small angle interval to obtain diversity in data angles and further improve SAR ship detection performance. Fig. 5 shows the original image and the image after 20°, 40°, and 60° counterclockwise rotations.
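For the box labels, dense rotation amounts to rotating each box's four corners about the image center by a small angle and taking the axis-aligned envelope as the new label. This is a sketch of that geometry only (the image itself would be rotated by the same angle); the 10-degree interval mirrors the Faster-RCNN-DR setting used in the experiments, and the box and image size are illustrative:

```python
# Rotate a bounding box about the image center by angle_deg (counter-
# clockwise) and return the axis-aligned envelope of the rotated corners.
import numpy as np

def rotate_box(box, angle_deg, img_size=500):
    x1, y1, x2, y2 = box
    c = img_size / 2.0
    th = np.deg2rad(angle_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], float) - c
    rot = corners @ R.T + c
    return (rot[:, 0].min(), rot[:, 1].min(), rot[:, 0].max(), rot[:, 1].max())

# Dense augmentation: one label per 10-degree rotation of a single box.
augmented = [rotate_box((200, 200, 260, 240), a) for a in range(0, 360, 10)]
print(len(augmented))  # 36
```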
SAR images have diverse resolutions depending on the application and imaging mode. The same ship in images of different resolutions, and different ships in an image of the same resolution, appear at different sizes. The multiscale nature of ships in multiresolution SAR images poses great challenges to object detection. In deep CNNs, the feature maps of low-level convolutional layers contain rich spatial information but little semantic information, whereas higher-layer feature maps contain more semantic information but less spatial information. Smaller-scale objects are left with little information after multilayer convolution, which hurts small-object detection and recognition. Therefore, to solve the multiscale ship detection problem in SAR images of different resolutions, Ref. [25] proposed a Densely Connected End-to-end Neural Network (DCENN) for ship detection. The main structure of this network, which uses ResNet101 as the backbone, is shown in Fig. 6. As the convolutional network deepens and the image is convolved repeatedly, the feature map contains increasing semantic information but its resolution decreases. To combine the high-resolution feature maps with the semantic information of the high-level feature maps, the high-level and low-level feature maps are iteratively connected, as shown in Fig. 7. After the basic network and the RPN, a two-stage detection subnetwork is formed (the dashed box in Fig. 6), divided into a proposal-box pooling part and a fully connected part for classification and regression. Making these two parts lightweight not only preserves detection accuracy but also reduces memory consumption and improves processing speed.
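The high/low-level connection idea of Fig. 7 can be illustrated with plain arrays: upsample a coarse semantic map and concatenate it with a finer map along the channel axis. This is a shape-level sketch only; the real DCENN wraps each connection in learned convolutions, and the channel counts below are assumptions:

```python
# Fuse a fine, spatially rich feature map with an upsampled coarse,
# semantically rich one by channel concatenation.
import numpy as np

def fuse(low, high):
    """low: (C1, H, W) fine map; high: (C2, H//2, W//2) coarse map."""
    up = high.repeat(2, axis=1).repeat(2, axis=2)  # 2x nearest-neighbour upsample
    return np.concatenate([low, up], axis=0)

low = np.zeros((64, 32, 32))    # low-level map: fine resolution
high = np.ones((128, 16, 16))   # high-level map: coarse but semantic
fused = fuse(low, high)
print(fused.shape)  # (192, 32, 32)
```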
We conducted experiments on the AIR-SARShip-1.0 dataset to verify the superiority of the deep learning methods. The experiments used an Intel Xeon E5-2630 CPU under Ubuntu 16.04 with 32 GB of memory and an NVIDIA Tesla P100 GPU for the deep learning algorithms; the traditional algorithms run on the CPU without GPU acceleration. The dataset is divided into a test set of 10 images and a training set of 21 images, and provides train.txt and test.txt files recording the training and test file names, respectively. In the CFAR algorithm, the ocean clutter is assumed to obey a standard Gaussian distribution N(0, 1); in the CFAR algorithm based on the K distribution, the parameter K is 2; and in the KSW algorithm, the optimal threshold is selected automatically from the image. The traditional algorithms require no training data and are thus tested directly on the test set. The test accuracy is shown in Tab. 2.
Algorithm | AP(%) |
CFAR | 27.1 |
CFAR method based on K distribution | 19.2 |
KSW | 28.2 |
The AP is computed as in Eq. (1), where the interpolated precision $p_{\mathrm{interp}}(r_{n+1})$ is given by Eq. (2) and $p(\tilde{r})$ denotes the precision measured at recall $\tilde{r}$. Precision $p$ and recall $r$ are defined in Eqs. (3) and (4) from the numbers of true positives (TP), false positives (FP), and false negatives (FN), and a detected box $B_p$ is matched to a ground-truth box $B_{gt}$ using the Intersection Over Union (IOU) of Eq. (5):

$$\mathrm{AP} = \sum_{n} \left(r_{n+1} - r_n\right) p_{\mathrm{interp}}\left(r_{n+1}\right), \quad r_n \in [0, 1] \tag{1}$$

$$p_{\mathrm{interp}}\left(r_{n+1}\right) = \max_{\tilde{r}:\,\tilde{r} \ge r_{n+1}} p\left(\tilde{r}\right) \tag{2}$$

$$p = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} \tag{3}$$

$$r = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} \tag{4}$$

$$\mathrm{IOU} = \frac{\mathrm{area}\left(B_p \cap B_{gt}\right)}{\mathrm{area}\left(B_p \cup B_{gt}\right)} \tag{5}$$
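A minimal sketch of Eqs. (1), (2), and (5): IoU for box matching and interpolated AP over a precision-recall curve. The recall grid and the example curve below are illustrative, not results from the paper:

```python
# IoU of two boxes (Eq. (5)) and interpolated AP (Eqs. (1)-(2)).
def iou(bp, bgt):
    ix1, iy1 = max(bp[0], bgt[0]), max(bp[1], bgt[1])
    ix2, iy2 = min(bp[2], bgt[2]), min(bp[3], bgt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    a1 = (bp[2] - bp[0]) * (bp[3] - bp[1])
    a2 = (bgt[2] - bgt[0]) * (bgt[3] - bgt[1])
    return inter / (a1 + a2 - inter)

def average_precision(recalls, precisions):
    """Eq. (1): at each recall step, use the maximum precision attained at
    any recall >= that step (Eq. (2)), then sum the weighted increments."""
    ap, prev_r = 0.0, 0.0
    for r in recalls:
        p_interp = max(p for rr, p in zip(recalls, precisions) if rr >= r)
        ap += (r - prev_r) * p_interp
        prev_r = r
    return ap

print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333
print(average_precision([0.5, 1.0], [1.0, 0.5]))      # 0.75
```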
The deep learning detection algorithms SSD, YOLOv1, Faster-RCNN, and the rotation-enhancement-based detection network are tested using the open-source framework PyTorch; Jiao Jiao et al. used the DCENN algorithm with the open-source framework TensorFlow. In the experiments, each SAR image is divided into 500 × 500 pixel slices, and the data are then augmented by image flipping, rotation, contrast enhancement, and random scaling. The training sets used by Faster-RCNN, SSD-512, SSD-300, and YOLOv1 are augmented by 90-degree rotation, while the training set used by the Faster-RCNN-based rotation enhancement algorithm is augmented by dense rotation at 10-degree intervals. SSD is tested at two input sizes, SSD-300 and SSD-512. The learning rate is 0.00001, and the momentum is set to 0.99. Within the GPU memory limit, the batch sizes of SSD-300, SSD-512, Faster-RCNN, and DCENN are 24, 4, 12, and 12, respectively. Other hyperparameters are set as in Ref. [22]. The hyperparameter settings of Faster-RCNN-DR are identical to those of Faster-RCNN.
The ship detection performance of each deep learning algorithm is shown in Tab. 3, in which the running speed is measured in FPS, the number of images the algorithm can detect per second. The input test image size of DCENN, Faster-RCNN-DR, Faster-RCNN, and YOLOv1 is 500 × 500; that of SSD-512 is 512 × 512 and that of SSD-300 is 300 × 300. The table shows that, among the algorithms whose training set was augmented by 90-degree rotation, YOLOv1 has the worst performance but the fastest running speed, while the SAR ship detection algorithm of Ref. [25] has the best performance but the slowest speed. Among the single-stage detectors, YOLOv1 does not use anchor boxes; it divides the image into an S × S grid, and each cell can predict only one target. Thus, YOLOv1 performs poorly on the dense small targets of AIR-SARShip-1.0 but, without anchors, runs fastest. SSD adds anchor boxes during training and predicts on multiple feature layers of the network, which makes up for YOLOv1's deficiency and improves detection performance, although its running time is slightly slower. As a typical two-stage algorithm, Faster-RCNN uses the RPN to propose candidate boxes so that the subsequent network can regress the target box position more accurately; it performs better than the single-stage detectors but, as is typical of two-stage algorithms, runs noticeably slower. Compared with Faster-RCNN, Faster-RCNN-DR increases performance by 4.9% because dense rotation improves the richness and angle diversity of the dataset to a certain extent.
Given that no additional calculation is performed in the test stage, the running time is basically the same as that of the Faster-RCNN detection algorithm. The DCENN ship detection algorithm can better extract ship features because of the use of dense connections and prediction on multiple feature layers. Hence, this algorithm has the best performance. Nevertheless, the dense connection also requires a high computational amount, thereby lowering the processing efficiency.
Performance ranking | Algorithm | AP(%) | FPS |
1 | DCENN | 88.1 | 24 |
2 | Faster-RCNN-DR | 84.2 | 29 |
3 | Faster-RCNN | 79.3 | 30 |
4 | SSD-512 | 74.3 | 64 |
5 | SSD-300 | 72.4 | 151 |
6 | YOLOv1 | 64.7 | 160 |
Tab. 4 gives the detection results of three representative algorithms in two different scenarios: nearshore and offshore. The detection accuracy in the offshore scenario is obviously higher than in the nearshore scenario. The highest accuracy in the offshore scenario on this dataset exceeds 95%, while nearshore performance drops by more than 20%. This accords with the fact that the offshore background is relatively uniform and less noisy, whereas the nearshore scenario suffers interference from wharves, buildings, and land. To a certain extent, this finding also shows that a large gap still exists between scientific research on nearshore ship detection and practical use, which remains a challenging research topic.
Performance ranking | Algorithm | Nearshore ship AP(%) | Offshore ship AP(%) |
1 | DCENN | 68.1 | 96.3 |
2 | Faster-RCNN-DR | 57.6 | 94.6 |
3 | SSD-512 | 40.3 | 89.4 |
To show the detection effect on the AIR-SARShip-1.0 dataset intuitively, we take one SAR image as an example and use the Faster-RCNN algorithm to detect ships. The results are shown in Fig. 8, where the number in each green box is the confidence of the detection box. Most ships are detected with correct rectangles, as shown in Fig. 8(c). However, some unsatisfactory results remain, e.g., a false alarm (Fig. 8(a)), a ship detected with an inaccurate rectangle (Fig. 8(b)), and an overlooked ship (Fig. 8(d)), indicating that further research and improvement are needed.
To promote the application of deep learning in SAR ship detection, this paper publishes a large-scale high-resolution dataset called AIR-SARShip-1.0, which includes both nearshore and offshore scenarios. Both traditional ship detection algorithms and common deep learning detection algorithms are tested experimentally; the deep learning algorithms have significantly better detection performance than the traditional algorithms. Built on densely connected network structures, the DCENN detection algorithm uses multiple connections for prediction and achieves the highest AP, but its running speed is the slowest. The data augmentation method using dense angular rotation increases the angular diversity of the data to a certain extent, which improves model performance without adding computation at prediction time. In addition, different algorithms are tested in the nearshore and offshore scenarios; the performance difference is small offshore but significant nearshore, indicating that the nearshore environment is more complicated and that the ship detection task there faces more challenges. The experimental results establish a performance benchmark for the AIR-SARShip-1.0 dataset, enabling other scholars to conveniently perform further related research on SAR ship detection.
The high-resolution SAR ship detection dataset AIR-SARShip-1.0 is published on the official website of the Journal of Radars and has been uploaded to the “data / SAR sample dataset” page (App. Fig. 1).
To increase the utilization rate of domestic data and promote research on advanced technologies, such as SAR target detection, the AIR-SARShip-1.0 dataset was built based on the major scientific and technological specialties of the National High-Resolution Earth Observation System. This dataset has large-scene images and covers typical types of ships, which is close to practical applications. AIR-SARShip-1.0 is owned by the National Science and Technology Major Project of High-Resolution Earth Observation System and the Aerospace Information Research Institute, Chinese Academy of Sciences. The editorial department of the Journal of Radar has editorial rights.
Image No. | Size | Sea condition | Scenario | Resolution (m) | Ship number |
1 | 3000×3000 | Level 2 | nearshore | 3 | 5 |
2 | 3000×3000 | Level 0 | nearshore | 1 | 7 |
3 | 3000×3000 | Level 3 | offshore | 3 | 10 |
4 | 3000×3000 | Level 2 | offshore | 3 | 8 |
5 | 3000×3000 | Level 1 | nearshore | 3 | 15 |
6 | 3000×3000 | Level 4 | offshore | 3 | 3 |
7 | 3000×3000 | Level 4 | offshore | 3 | 5 |
8 | 3000×3000 | Level 1 | nearshore | 1 | 2 |
9 | 3000×3000 | Level 2 | nearshore | 1 | 7 |
10 | 3000×3000 | Level 1 | offshore | 1 | 50 |
11 | 3000×3000 | Level 1 | nearshore | 1 | 80 |
12 | 3000×3000 | Level 2 | nearshore | 1 | 18 |
13 | 4140×4140 | Level 1 | nearshore | 1 | 21 |
14 | 3000×3000 | Level 1 | nearshore | 1 | 15 |
15 | 3000×3000 | Level 1 | nearshore | 1 | 77 |
16 | 3000×3000 | Level 3 | nearshore | 3 | 13 |
17 | 3000×3000 | Level 3 | nearshore | 3 | 3 |
18 | 3000×3000 | Level 3 | nearshore | 3 | 2 |
19 | 3000×3000 | Level 3 | nearshore | 3 | 1 |
20 | 3000×3000 | Level 2 | nearshore | 3 | 7 |
21 | 3000×3000 | Level 2 | nearshore | 3 | 9 |
22 | 3000×3000 | Level 1 | nearshore | 3 | 14 |
23 | 3000×3000 | Level 1 | offshore | 3 | 4 |
24 | 3000×3000 | Level 4 | offshore | 3 | 6 |
25 | 3000×3000 | Level 4 | offshore | 1 | 20 |
26 | 3000×3000 | Level 2 | nearshore | 3 | 15 |
27 | 3000×3000 | Level 2 | nearshore | 3 | 19 |
28 | 3000×3000 | Level 1 | nearshore | 3 | 8 |
29 | 3000×3000 | Level 3 | offshore | 3 | 6 |
30 | 3000×3000 | Level 2 | offshore | 3 | 8 |
31 | 3000×3000 | Level 1 | nearshore | 3 | 3 |
[1] |
ZHANG Jie, ZHANG Xi, FAN Chenqing, et al. Discussion on application of polarimetric synthetic aperture radar in marine surveillance[J]. Journal of Radars, 2016, 5(6): 596–606. doi: 10.12000/JR16124
|
[2] |
REY M T, CAMPBELL J, and PETROVIC D. A comparison of ocean clutter distribution estimators for CFAR-based ship detection in RADARSAT imagery[R]. Technical Report No. 1340, 1998.
|
[3] |
NOVAK L M, BURL M C, and IRVING W W. Optimal polarimetric processing for enhanced target detection[J]. IEEE Transactions on Aerospace and Electronic Systems, 1993, 29(1): 234–244. doi: 10.1109/7.249129
|
[4] |
STAGLIANO D, LUPIDI A, and BERIZZI F. Ship detection from SAR images based on CFAR and wavelet transform[C]. 2012 Tyrrhenian Workshop on Advances in Radar and Remote Sensing, Naples, Italy, 2012: 53–58.
|
[5] |
HE Jinglu, WANG Yinghua, LIU Hongwei, et al. A novel automatic PolSAR ship detection method based on superpixel-level local information measurement[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(3): 384–388. doi: 10.1109/LGRS.2017.2789204
|
[6] |
DENG Jia, DONG Wei, SOCHER R, et al. ImageNet: A large-scale hierarchical image database[C]. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009: 248–255.
|
[7] |
EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL Visual Object Classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303–338. doi: 10.1007/s11263-009-0275-4
|
[8] |
LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft coco: Common objects in context[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 740–755.
|
[9] |
XIA Guisong, BAI Xiang, DING Jian, et al. DOTA: A large-scale dataset for object detection in aerial images[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3974–3983.
|
[10] |
ZHANG Yuanlin, YUAN Yuan, FENG Yachuang, et al. Hierarchical and robust convolutional neural network for very high-resolution remote sensing object detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(8): 5535–5548. doi: 10.1109/TGRS.2019.2900302
|
[11] |
LONG Yang, GONG Yiping, XIAO Zhifeng, et al. Accurate object localization in remote sensing images based on convolutional neural networks[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(5): 2486–2498. doi: 10.1109/TGRS.2016.2645610
|
[12] |
XIAO Zhifeng, LIU Qing, TANG Gefu, et al. Elliptic Fourier transformation-based histograms of oriented gradients for rotationally invariant object detection in remote-sensing images[J]. International Journal of Remote Sensing, 2015, 36(2): 618–644. doi: 10.1080/01431161.2014.999881
|
[13] |
LI Jianwei, QU Changwen, and SHAO Jiaqi. Ship detection in SAR images based on an improved faster R-CNN[C]. 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 2017: 1–6.
|
[14] |
HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi: 10.1109/JSTARS.2017.2755672
|
[15] |
WANG Yuanyuan, WANG Chao, ZHANG Hong, et al. A SAR dataset of ship detection for deep learning under complex backgrounds[J]. Remote Sensing, 2019, 11(7): 765. doi: 10.3390/rs11070765
|
[16] |
ZHANG Qingjun. System design and key technologies of the GF-3 satellite[J]. Acta Geodaetica et Cartographica Sinica, 2017, 46(3): 269–277. doi: 10.11947/j.AGCS.2017.20170049
|
[17] |
LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: Single shot MultiBox detector[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 21–37.
|
[18] |
REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 779–788.
|
[19] |
LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 2999–3007.
|
[20] |
GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 580–587.
|
[21] |
GIRSHICK R. Fast R-CNN[C]. 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1440–1448.
|
[22] |
REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]. The 28th International Conference on Neural Information Processing Systems, Montreal, Canada, 2015: 91–99.
|
[23] |
LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 936–944.
|
[24] |
LIU Peng and JIN Yaqiu. A study of ship rotation effects on SAR image[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(6): 3132–3144. doi: 10.1109/TGRS.2017.2662038
|
[25] |
JIAO Jiao, ZHANG Yue, SUN Hao, et al. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection[J]. IEEE Access, 2018, 6: 20881–20892. doi: 10.1109/ACCESS.2018.2825376
|
| Resolution | Imaging mode | Polarization mode | Format |
|---|---|---|---|
| 1 m, 3 m | Spotlight, Strip | Single | Tiff |
| Algorithm | AP (%) |
|---|---|
| CFAR | 27.1 |
| CFAR based on K-distribution | 19.2 |
| KSW | 28.2 |
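The traditional baselines above are thresholding detectors; their exact implementations are not specified in this excerpt. As a reference point, a minimal two-dimensional cell-averaging CFAR sketch, assuming exponentially distributed clutter intensity and hypothetical window parameters, could look like the following:

```python
import numpy as np

def ca_cfar_2d(img, guard=2, train=4, pfa=1e-3):
    """Cell-averaging CFAR over a 2D intensity image.

    For each cell under test, clutter power is estimated from a ring of
    training cells (a guard band around the cell is excluded), and the
    cell is declared a detection if it exceeds a scaled clutter estimate.
    The scale factor alpha follows from the desired false-alarm rate
    under an exponential clutter model.
    """
    k = guard + train
    n_train = (2 * k + 1) ** 2 - (2 * guard + 1) ** 2
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)

    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(k, h - k):
        for j in range(k, w - k):
            window = img[i - k:i + k + 1, j - k:j + k + 1]
            inner = img[i - guard:i + guard + 1, j - guard:j + guard + 1]
            clutter = (window.sum() - inner.sum()) / n_train  # training-ring mean
            mask[i, j] = img[i, j] > alpha * clutter
    return mask
```

In practice the sliding-window sums would be vectorized (e.g. with integral images), and the K-distribution variant replaces the exponential clutter model with a K-distributed one when estimating the threshold.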
| Performance ranking | Algorithm | AP (%) | FPS |
|---|---|---|---|
| 1 | DCENN | 88.1 | 24 |
| 2 | Faster-RCNN-DR | 84.2 | 29 |
| 3 | Faster-RCNN | 79.3 | 30 |
| 4 | SSD-512 | 74.3 | 64 |
| 5 | SSD-300 | 72.4 | 151 |
| 6 | YOLOv1 | 64.7 | 160 |
| Performance ranking | Algorithm | Nearshore ship AP (%) | Offshore ship AP (%) |
|---|---|---|---|
| 1 | DCENN | 68.1 | 96.3 |
| 2 | Faster-RCNN-DR | 57.6 | 94.6 |
| 3 | SSD-512 | 40.3 | 89.4 |
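The AP values in the benchmark tables are areas under the precision-recall curve. A minimal sketch of this computation (assuming all-point interpolation; the excerpt does not state which interpolation variant was used) is:

```python
import numpy as np

def average_precision(scores, matched, n_gt):
    """AP as the area under the precision-recall curve.

    scores  : confidence score of each detection
    matched : 1 if the detection matches a ground-truth ship
              (e.g. IoU >= 0.5), 0 for a false alarm
    n_gt    : total number of ground-truth ships
    """
    order = np.argsort(scores)[::-1]               # rank detections by confidence
    tp = np.asarray(matched, dtype=float)[order]
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / n_gt
    precision = tp_cum / (tp_cum + fp_cum)

    # All-point interpolation: make precision monotonically
    # non-increasing in recall, then integrate over recall.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

For example, a detector that finds both of two ground-truth ships with no false alarms attains an AP of 1.0, while one false alarm ranked above the only true detection halves AP to 0.5.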
| Image No. | Size | Sea condition | Scenario | Resolution (m) | Ship number |
|---|---|---|---|---|---|
| 1 | 3000×3000 | Level 2 | nearshore | 3 | 5 |
| 2 | 3000×3000 | Level 0 | nearshore | 1 | 7 |
| 3 | 3000×3000 | Level 3 | offshore | 3 | 10 |
| 4 | 3000×3000 | Level 2 | offshore | 3 | 8 |
| 5 | 3000×3000 | Level 1 | nearshore | 3 | 15 |
| 6 | 3000×3000 | Level 4 | offshore | 3 | 3 |
| 7 | 3000×3000 | Level 4 | offshore | 3 | 5 |
| 8 | 3000×3000 | Level 1 | nearshore | 1 | 2 |
| 9 | 3000×3000 | Level 2 | nearshore | 1 | 7 |
| 10 | 3000×3000 | Level 1 | offshore | 1 | 50 |
| 11 | 3000×3000 | Level 1 | nearshore | 1 | 80 |
| 12 | 3000×3000 | Level 2 | nearshore | 1 | 18 |
| 13 | 4140×4140 | Level 1 | nearshore | 1 | 21 |
| 14 | 3000×3000 | Level 1 | nearshore | 1 | 15 |
| 15 | 3000×3000 | Level 1 | nearshore | 1 | 77 |
| 16 | 3000×3000 | Level 3 | nearshore | 3 | 13 |
| 17 | 3000×3000 | Level 3 | nearshore | 3 | 3 |
| 18 | 3000×3000 | Level 3 | nearshore | 3 | 2 |
| 19 | 3000×3000 | Level 3 | nearshore | 3 | 1 |
| 20 | 3000×3000 | Level 2 | nearshore | 3 | 7 |
| 21 | 3000×3000 | Level 2 | nearshore | 3 | 9 |
| 22 | 3000×3000 | Level 1 | nearshore | 3 | 14 |
| 23 | 3000×3000 | Level 1 | offshore | 3 | 4 |
| 24 | 3000×3000 | Level 4 | offshore | 3 | 6 |
| 25 | 3000×3000 | Level 4 | offshore | 1 | 20 |
| 26 | 3000×3000 | Level 2 | nearshore | 3 | 15 |
| 27 | 3000×3000 | Level 2 | nearshore | 3 | 19 |
| 28 | 3000×3000 | Level 1 | nearshore | 3 | 8 |
| 29 | 3000×3000 | Level 3 | offshore | 3 | 6 |
| 30 | 3000×3000 | Level 2 | offshore | 3 | 8 |
| 31 | 3000×3000 | Level 1 | nearshore | 3 | 3 |