Maybe we first need to clarify which accuracy you mean. The accuracy given to you by SNAP is the training accuracy: it tells you how well your model represents the training data.
If you later validate your classification against actual validation areas (which should be different from your training samples), you get the classification accuracy. It tells you how well the model predicts the classes outside the training areas.
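To illustrate the difference outside SNAP, here is a minimal sketch in Python with scikit-learn (not the SNAP classifier itself); the synthetic data just stands in for pixel values extracted from your training polygons:

```python
# Sketch (scikit-learn, not SNAP) of training vs. validation accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for band/texture features (X) and class labels (y)
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Hold back part of the samples as independent "validation areas"
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("Training accuracy:  ", accuracy_score(y_train, rf.predict(X_train)))  # what SNAP reports
print("Validation accuracy:", accuracy_score(y_val, rf.predict(X_val)))      # predictive quality
```

The first number is usually optimistic; only the second tells you how the classification performs beyond the trained areas.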
I think it strongly depends on the classifier used: Random Forest, for example, automatically selects the input layers with the highest information content. The more layers you use, the better your training accuracy gets (in most cases), and it doesn't matter much if some of the input rasters are redundant in terms of information content (which is often the case for texture measures).
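You can see this behaviour in a quick scikit-learn sketch (again, just an illustration of the concept, not SNAP's implementation): duplicating features, much like adding several highly correlated texture measures, barely changes the Random Forest's training accuracy because the redundant copies simply share the importance.

```python
# Sketch: redundant input layers don't hurt Random Forest training accuracy much.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
# Append near-duplicates of the first four layers (stand-in for correlated textures)
X_redundant = np.hstack([X, X[:, :4] + 0.01 * np.random.default_rng(0).normal(size=(1000, 4))])

for name, data in [("original layers      ", X), ("with redundant layers", X_redundant)]:
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data, y)
    print(name, "training accuracy:", rf.score(data, y))
    # rf.feature_importances_ would show the redundant copies splitting the importance
```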
I could, however, imagine that MaxLike or MinDist have problems with larger numbers of input rasters, and that training accuracy may indeed go down there.