Understanding the SNAP cross-validation file


Would you please help me understand the information displayed in the cross-validation file (for the Random Forest classification)? For instance, the file mentions "Using Testing dataset, % correct predictions". However, I couldn't tell whether this percentage represents the mean accuracy or the mean precision over the whole testing dataset.
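To make the distinction in the question concrete, here is a small pure-Python sketch (not SNAP's actual code; the labels are made up) showing how accuracy and precision differ on the same test set:

```python
# Hypothetical illustration: accuracy ("% correct predictions") vs. precision
# on a binary testing dataset. y_true/y_pred are made-up example labels.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

# Accuracy: fraction of ALL test samples that were predicted correctly.
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)

# Precision (for class 1): of the samples PREDICTED as 1, how many truly are 1.
predicted_pos = sum(p == 1 for p in y_pred)
true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
precision = true_pos / predicted_pos

print(accuracy)   # 6/8 = 0.75
print(precision)  # 2/3 = 0.666...
```

On this toy data the two numbers differ (0.75 vs. about 0.67), which is exactly why the label "% correct predictions" is ambiguous without documentation.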
I can't find detailed information about this topic in the SNAP Help. Is there any SNAP documentation that gives more details about the cross-validation calculation methods?
Thank you!


Would you mind uploading your "cross-validation file"?
Are you already familiar with cross-validation? If not, maybe you should read some papers or other material about it first.

Thank you for the answer. Here is my cross-validation file.
Actually, I understand the performance measures. However, I don't understand how the feature importance score is calculated or how exactly it should be interpreted.
RandomForest classifier validation.txt (3.8 KB)

Hi chayma, the attachment is the documentation of the randomForest package for R. In the details of the `importance` function (page 6) you can see how the feature importance score is calculated.
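One of the measures that documentation describes is the permutation importance ("mean decrease in accuracy"): shuffle one feature's values across the samples and see how much the accuracy drops. This is not SNAP's or randomForest's code, just a minimal pure-Python sketch of that idea with a hypothetical stand-in classifier:

```python
# Rough sketch of permutation importance ("mean decrease in accuracy").
# The "model" is a toy stand-in, not a real random forest: it uses only
# feature 0, so feature 1 should come out with zero importance.
import random

def model(x):
    # Toy classifier: the prediction depends only on feature 0.
    return 1 if x[0] > 0.5 else 0

random.seed(42)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]  # feature 1 is pure noise

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

baseline = accuracy(X, y)  # 1.0 by construction of the toy data

importances = []
for j in range(2):
    # Permute column j, keeping everything else fixed.
    col = [x[j] for x in X]
    random.shuffle(col)
    X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
    drop = baseline - accuracy(X_perm, y)
    importances.append(drop)
    print(f"feature {j}: importance ~ {drop:.3f}")
```

A feature the model actually relies on (feature 0) shows a large accuracy drop when shuffled, while an ignored feature (feature 1) shows none; randomForest averages such drops over its trees and out-of-bag samples.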
By the way, how did you run the random forest calculation? Would you mind sharing your random forest code if you are using R too?
Recently I have been reading some technical documentation about random forests, but it is really hard for me. Thanks in advance!

randomForest.pdf (202.3 KB)

Thank you for the reply. For the Random Forest process, I am using the RF implementation included in the latest version of SNAP, under Raster -> Classification -> Supervised Classification -> Random Forest Classifier. You can run it directly on your images without any coding.


Yes, I did the same as you, but did you figure out what some of the items in the cross-validation file mean?