Snappy: run Maximum Likelihood Classification

Hello!

I’m trying to run the Maximum Likelihood Classification in snappy, but I can’t find out how to do it. Could you point me in the right direction, please?

You should have a look at this wiki page. It describes the configuration and usage of snappy in general.
In the examples directory you’ll find the snappy_subset.py script, which shows the usage of an operator.
In the end you need to call GPF.createProduct with the name of the operator (Maximum-Likelihood-Classifier), the parameters, and the source product.
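For example, a minimal sketch (the path, band names, and vector names are only placeholders; the parameter names are the ones used later in this thread, and the full list can be checked with gpt Maximum-Likelihood-Classifier -h):

from snappy import ProductIO, GPF, jpy

# read the source product (hypothetical path)
source = ProductIO.readProduct('D:\\tmp\\some_product.dim')

HashMap = jpy.get_type('java.util.HashMap')
parameters = HashMap()
parameters.put('trainOnRaster', False)
parameters.put('trainingVectors', 'forest_Polygon,houses_Polygon')  # placeholder vector names
parameters.put('featureBands', 'red,green,blue')                    # placeholder band names

classified = GPF.createProduct('Maximum-Likelihood-Classifier', parameters, source)
ProductIO.writeProduct(classified, 'D:\\tmp\\classified', 'BEAM-DIMAP')  # hypothetical output path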
You can find out which parameters are available by running on the command line:

gpt Maximum-Likelihood-Classifier -h

or you can configure the operator in the GUI and then select File / Display Parameters… from the menu.
If you search here in the forum for GPF.createProduct you will find several additional examples.

Marpet, thanks for the fast response!
I wrote this script:

import snappy
from snappy import ProductIO

red = ProductIO.readProduct('D:\\tmp\\T41VPD_20170603T065021_B02.jp2')
green = ProductIO.readProduct('D:\\T41VPD_20170603T065021_B03.jp2')
blue = ProductIO.readProduct('D:\\T41VPD_20170603T065021_B08.jp2')
forest = 'D:\\tmp\\forest_Polygon.shp'
houses = 'D:\\tmp\\houses_Polygon.shp'

def loadVector(file, product):
    separateShapes = False
    HashMap = snappy.jpy.get_type('java.util.HashMap')
    parameters = HashMap()
    parameters.put('vectorFile', file)
    parameters.put('separateShapes', separateShapes)
    result = snappy.GPF.createProduct('Import-Vector', parameters, product)
    return result

result = loadVector(forest, red)
result = loadVector(houses, result)

trainingVectors = []
vdGroup = result.getVectorDataGroup()
for i in range(vdGroup.getNodeCount()):
    vec = vdGroup.get(i)
    trainingVectors.append(vec.getName())

HashMap = snappy.jpy.get_type('java.util.HashMap')
parameters = HashMap()
parameters.put('trainOnRaster', False)
parameters.put('featureBands', ','.join([result.getName(), green.getName(), blue.getName()]))
parameters.put('trainingVectors', ','.join(trainingVectors))
product_classifier = snappy.GPF.createProduct('Maximum-Likelihood-Classifier', parameters, result)
print(product_classifier)

And I have a problem:

Error: RuntimeError: org.esa.snap.core.gpf.OperatorException: java.lang.NullPointerException

I don’t know what I’m doing wrong.

One problem I see is that you use the names of the products for the 'featureBands' parameter.
To get the name of a band you can do:

product.getBandAt(0).getName()
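So for the 'featureBands' parameter you can build the value from the actual band names, for example (a small sketch, assuming 'product' is the product that holds all feature bands):

featureBands = ','.join([product.getBandAt(i).getName() for i in range(product.getNumBands())])
# or simply:
featureBands = ','.join(list(product.getBandNames()))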

Also, you provide only the result product to the MLC operator. This one contains only the red band and the vector data.
I think you should use the Merge operator as a first step to merge all three products into one, and then load the vector data into it.

In addition, are you sure that B08 is the red channel and B02 the blue channel?

When I’m merging 2 products I get only one band in the result.

red = ProductIO.readProduct('D:\\tmp\\T41VPD_20170603T065021_B04.jp2')
green = ProductIO.readProduct('D:\\tmp\\T41VPD_20170603T065021_B03.jp2')
sourceProducts = HashMap()
sourceProducts.put('masterProduct', red)
sourceProducts.put('slaveProduct', green)
parameters = HashMap()
target = GPF.createProduct('Merge', parameters, sourceProducts)
bands = target.getBandNames()
print(list(bands))

Marpet, thank you for your comment about bands.


Well, I rewrote the script.

import snappy
from snappy import ProductIO, GPF, jpy

red = ProductIO.readProduct('D:\\tmp\\T41VPD_20170603T065021_B04.jp2')
green = ProductIO.readProduct('D:\\tmp\\T41VPD_20170603T065021_B03.jp2')
blue = ProductIO.readProduct('D:\\tmp\\T41VPD_20170603T065021_B02.jp2')
forest = 'D:\\tmp\\forest_Polygon.shp'
houses = 'D:\\tmp\\houses_Polygon.shp'

GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
HashMap = jpy.get_type('java.util.HashMap')

sourceProducts = HashMap()
sourceProducts.put('masterProduct', red)
sourceProducts.put('slaveProduct1', green)
sourceProducts.put('slaveProduct2', blue)

NodeDescriptor = jpy.get_type('org.esa.snap.core.gpf.common.MergeOp$NodeDescriptor')
include_1 = NodeDescriptor()
include_1.setProductId('masterProduct')
include_1.setNamePattern('band_1')
include_1.setNewName('red')
include_2 = NodeDescriptor()
include_2.setProductId('slaveProduct1')
include_2.setName('band_1')
include_2.setNewName('green')
include_3 = NodeDescriptor()
include_3.setProductId('slaveProduct2')
include_3.setName('band_1')
include_3.setNewName('blue')

included_bands = jpy.array('org.esa.snap.core.gpf.common.MergeOp$NodeDescriptor', 3)
included_bands[0] = include_1
included_bands[1] = include_2
included_bands[2] = include_3

parameters = HashMap()
parameters.put('includes', included_bands)
mergedProduct = GPF.createProduct('Merge', parameters, sourceProducts)

def loadVector(file, product):
    HashMap = jpy.get_type('java.util.HashMap')
    parameters = HashMap()
    parameters.put('vectorFile', file)
    parameters.put('separateShapes', False)
    result = GPF.createProduct('Import-Vector', parameters, product)
    return result

result = loadVector(forest, mergedProduct)
result = loadVector(houses, result)

classifierParameters = HashMap()
classifierParameters.put('trainOnRaster', False)
classifierParameters.put('featureBands', ','.join([result.getBandAt(0).getName(), result.getBandAt(1).getName(), result.getBandAt(2).getName()]))
classifierParameters.put('trainingVectors', 'forest_Polygon,houses_Polygon')
print(classifierParameters)

classifierResult = GPF.createProduct('Maximum-Likelihood-Classifier', classifierParameters, result)
print(classifierResult)

I get a product with 3 bands and 2 training vectors.
After I set the parameters and run the algorithm, I get the error RuntimeError: org.esa.snap.core.gpf.OperatorException: java.lang.NullPointerException
If I write the product to disk and run the algorithm in the SNAP UI, the classification works.
Maybe I’m not setting the parameters correctly?

Hard to say what’s wrong. Maybe the default settings are different in the Desktop.
After processing you can have a look at the metadata. There should be an element 'Processing_Graph'. Find the MLC operator there and check the parameters.
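If you want to do that check from Python, something along these lines should work (a rough sketch; it assumes the product processed in the Desktop was saved to a hypothetical path and re-opened with ProductIO.readProduct as in your scripts above):

processed = ProductIO.readProduct('D:\\tmp\\classified_in_desktop.dim')  # hypothetical path
graph = processed.getMetadataRoot().getElement('Processing_Graph')
for i in range(graph.getNumElements()):
    node = graph.getElementAt(i)
    if node.getAttributeString('operator', '') == 'Maximum-Likelihood-Classifier':
        params = node.getElement('parameters')
        for name in params.getAttributeNames():
            print('%s = %s' % (name, params.getAttributeString(name)))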
Also you can try to invoke this operator from the command line with gpt. That should be easy if you write out your merged product and use it as the source.
Then more error information should be shown, if it happens there too.
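For example, something like this (a rough sketch; merged.dim and classified.dim are just placeholder file names, and the parameter values are taken from your script above):

gpt Maximum-Likelihood-Classifier -PtrainOnRaster=false -PfeatureBands=red,green,blue -PtrainingVectors=forest_Polygon,houses_Polygon -t classified.dim merged.dim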

I had the same problem with the Random-Forest-Classifier.
In addition to the java.lang.NullPointerException error I had a Java HeadlessException.
This is because the classifier asks in the GUI whether an existing classifier should be overwritten.
I added:

parameters.put('doLoadClassifier', True)

and it worked; I receive the same results as in the GUI.
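In the script above, that would mean adding the line below before creating the classifier product (a hedged note: whether the Maximum-Likelihood-Classifier accepts the same parameter is best checked with gpt Maximum-Likelihood-Classifier -h):

# assumption: the MLC operator exposes the same parameter as the Random-Forest-Classifier
classifierParameters.put('doLoadClassifier', True)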