Ship object classification using Sentinel-1 GRD images

Hi everyone,

I am a total newbie in this.

I am embarking on a project to detect ships in Sentinel-1 GRD images. I plan to use machine learning techniques to train a model to identify ship objects in my downloaded area of interest.

I have thought out the steps I will take. Currently I am trying this on one image.
I will divide the image into smaller sub-images (I haven't decided on the pixel count for each sub-segment, or whether it's possible in SNAP). I am trying to extract the pixels that look like ships and cut them out at a standard pixel size to generate my training set.

My test set will come from one of the sub-images.

I will also be using Python with snappy, working in a Jupyter Notebook to test my code.

STEPS

  1. Preprocessing with SNAP (no clue how to start here)
  2. Create a training set of 200 ship objects (cross-validated with optical images from Sentinel-2)
    • These will basically be cut-out pictures from the original image with the ship centred. I do not know how to get these to a standard size of 80 by 80 at the moment, or if there is a better pixel size I should consider.
  3. Create a training set of 400 non-ship objects such as islands, coastal areas, and small blobs of land.
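The sub-image step could be sketched outside SNAP in plain NumPy. This is only a sketch under my own assumptions: `tile_image` is a hypothetical helper name, and the 80-pixel chip size is just the value mentioned above, not a recommendation.

```python
import numpy as np

def tile_image(img, chip=80, stride=80):
    """Cut a 2-D intensity array into fixed-size square chips.

    Returns the stacked chips plus the (row, col) of each chip's
    top-left corner, so detections can be mapped back to the scene.
    """
    chips, origins = [], []
    rows, cols = img.shape
    for r in range(0, rows - chip + 1, stride):
        for c in range(0, cols - chip + 1, stride):
            chips.append(img[r:r + chip, c:c + chip])
            origins.append((r, c))
    return np.stack(chips), origins
```

With `stride` smaller than `chip` the tiles overlap, which helps avoid ships being cut in half at chip borders.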

Once these are done, I will use a simple logistic regression to classify whether a picture is a ship or not. Once the model is trained, I will test it on another region of the map.
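A minimal plain-NumPy version of that logistic-regression step could look like the following. This is a sketch, not tuned for SAR data; the function names are my own, and in practice scikit-learn's `LogisticRegression` would do the same job.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Logistic regression fitted by batch gradient descent.

    X: (n_samples, n_features) array, e.g. flattened 80x80 chips.
    y: (n_samples,) array of labels, 1 = ship, 0 = non-ship.
    Returns the weight vector (last entry is the bias term).
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # sigmoid probabilities
        w -= lr * Xb.T @ (p - y) / len(y)          # average gradient step
    return w

def predict(w, X):
    """Return 0/1 labels for new chips using the trained weights."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
```

Feeding raw flattened pixels works for a first experiment; normalising intensities (e.g. per-chip mean/std) usually helps convergence.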

Once that is done, I would cross check it with optical image from Sentinel 2 on the same region.

I would like to know if this plan sounds good.

I plan to use SIFT or SURF on the ship images before training. Currently I am quite lost as to which method I should use.

Any help and advice from the gurus here would be much appreciated.

Did you know SNAP has an operator for ship detection called “Ocean Object Detection” under Radar -> SAR Applications -> Ocean Applications? It may be useful for your project. For pre-processing, you may want to do calibration and speckle filtering.

Hi Junlu,

Thanks for the info. The main bulk of my project is the machine learning part, so I will need to do some of my own coding in Python.

You can always automate the process in SNAP using the Graph Builder / batch processing. A few days ago there was a webinar about ship detection and Sentinel-1. It may help you, at least with the preprocessing of S1 images in SNAP.

For creating your ‘sub-images’ you could use the Subset operator (Raster -> Subset). Use the ‘pixel coordinates’ option and define your size. If you want to create several sub-images at the same time, use the Graph Builder: add as many Subset operators as sub-images you want, and set different pixel coordinates for each one.

Hope it helps :slight_smile:
M

2 Likes

Hi there,

Do you know what the recommended pixel size is for each ship training subset? Currently I am going by the rule that the subset should encompass the whole ship. Thanks for the video!! Much appreciated!

If I get your question properly, you are asking what size the sub-image you extract should be so that it contains the complete ship. If so, I would say it really depends on the type of ships you want to detect. If your targets are very big cargo ships, you will obviously need more pixels than for a small one.

Take into account that the S1 pixel spacing in your case is 10 m x 10 m, so targets smaller than this will not be identified.

If you don’t have a specific size, you can always extract bigger areas to be sure you do not cut off any targets, and then train your classifier with this type of input.
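To turn that into numbers: at 10 m pixel spacing, a chip just has to cover the longest ship you expect, plus some margin. A hypothetical helper (the name `chip_size_pixels` and the margin factor of 2 are my own assumptions, not anything from SNAP):

```python
import math

def chip_size_pixels(ship_length_m, pixel_spacing_m=10.0, margin=2.0):
    """Smallest chip side (in pixels) that holds the longest expected
    ship plus a margin factor, at the given pixel spacing."""
    return math.ceil(ship_length_m * margin / pixel_spacing_m)
```

For a 300 m cargo ship this gives 60 pixels, so an 80x80 chip has room to spare; for small boats most of an 80x80 chip is sea.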

1 Like

Hi MCG,

I have another question. After some reading, I found out that I have to consider ambiguities such as range/azimuth ambiguities, and possible small blobs of land in the picture. How do I know if there are false ships in the pictures?

Are you working in an area with small islands? To use SNAP’s algorithm you have to understand how it works. It defines several background windows and then checks whether there is a large increase in reflectance inside (caused by a ship, or by any other object that produces the same effect, such as some small islands). Of course this is related to the size of the background window.
You have to try different configurations. And if you have validation data, it can help you a lot to discriminate false detections.
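The two-window idea can be sketched in a few lines of NumPy. This is a toy, brute-force version of an adaptive-threshold (CFAR-style) test, not SNAP’s actual implementation; the guard/background window sizes and the factor are made-up parameters you would have to tune.

```python
import numpy as np

def simple_cfar(img, guard=2, bg=8, factor=3.0):
    """Toy two-window detector on a float intensity image.

    For each pixel, estimate the sea background from the ring of
    pixels between a small guard window (which excludes the target
    itself) and a larger background window, then flag the pixel if
    it is more than `factor` times brighter than that background.
    """
    h, w = img.shape
    hits = np.zeros((h, w), dtype=bool)
    for y in range(bg, h - bg):
        for x in range(bg, w - bg):
            win = img[y - bg:y + bg + 1, x - bg:x + bg + 1].astype(float)
            # blank out the guard area so the target does not
            # contaminate its own background estimate
            win[bg - guard:bg + guard + 1, bg - guard:bg + guard + 1] = np.nan
            hits[y, x] = img[y, x] > factor * np.nanmean(win)
    return hits
```

A small island brighter than the surrounding sea triggers this test exactly like a ship does, which is why the discrimination step (and validation data) matters.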

Check AIS data as well. Although historical data are not free, real-time data can be screenshotted, and that could help you somehow…

Hope it helps :slight_smile:
M

MCG, thank you so much! I was thinking of web-scraping marinetraffic.com to generate a database, although it will take time and it will be hard to match with the satellite pass. I will take it slowly.

Just a few thoughts.

If a generic maritime surveillance processing chain is composed of:
ship detection -> discrimination -> ship classification
where

  • ship detection: detects the pixels likely to be ships
  • discrimination: goes through those pixels and tries to remove false alarms
  • classification: classifies the detected ships according to their ship type. This step is often skipped, and there is relatively little research on it.

if I understand your first post correctly, then I’d put your project in the ‘discrimination’ step.

People are doing all sorts of things with machine learning, but IMO these techniques are more suited to discrimination and classification than to detection. To be used for detection, I’d expect that some training on ‘sea’ pixels or areas would be needed. But you are not planning to do that.

The main hurdle in discrimination is that many false alarms (ambiguities and wave features) look very much like true small ships or boats, and getting 100% reliable ground truth is impossible. AIS is the best source of validation data, but it is not perfect: many vessels (especially boats) don’t transmit it; even if they transmit it you may not receive it; and time synchronisation with the image is tricky. Essentially, it is very difficult to reliably label many of the samples as ‘ship’ or ‘other object’. You may end up labeling only easy examples of ‘ships’ and easy examples of ‘other objects’, reducing a hard problem to an easy but unrealistic task.

For a small-scale experiment focused on the machine learning part, you may have a better chance in ship classification. You would definitely need AIS to know the ship type, but with a bit of work (or a lot of it!) you will be able to build reliable training and test sets. For an example of this, see http://elib.dlr.de/103689/1/FuSec2016_Proceeding.pdf

3 Likes

Hi css,

Exact same thoughts on discrimination of the ships! Hence I was thinking of ways to obtain AIS information from the web, since it is available online daily. My original plan was to do ship detection first before going into ship discrimination, but as soon as I started to think about what training data to generate, I realised it was going to be quite challenging, as I do not have information on whether something is a ship or not.
I am actually planning to train the model on ‘sea pixels’ and ‘ship pixels’, but at this stage the accuracy of the model is determined by how good the training dataset is. For one, I could detect all the white blobs with some technique like SIFT and cross-validate them with AIS or optical images.
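Detecting “all the white blobs” can start much simpler than SIFT: threshold the image and group bright pixels into connected components, then cross-check each blob against AIS positions or Sentinel-2 imagery. A minimal pure-NumPy sketch (4-connectivity; the function name and threshold are hypothetical):

```python
import numpy as np

def bright_blobs(img, thresh):
    """Group pixels above `thresh` into 4-connected components.

    Returns a list of blobs, each a list of (row, col) pixel
    coordinates; blob centres could then be matched against AIS
    or optical imagery for validation.
    """
    mask = img > thresh
    seen = np.zeros_like(mask)
    blobs = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # flood-fill from this unvisited bright pixel
        stack, blob = [(sy, sx)], []
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            blob.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        blobs.append(blob)
    return blobs
```

Blob size in pixels also gives a first discrimination cue: single-pixel blobs at 10 m spacing are below the resolvable ship size mentioned earlier in the thread.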

Thanks for the tip!!!