Yes, I also find this interesting. Maybe it's an accidental coincidence. Actually, I can't really think of any reason why this helps prevent the memory error. … Maybe because the data can be transferred faster and less data needs to be kept in memory? Not sure.
I updated my Java and selected the "Use FileCache" option, and this worked for me. I was having issues before, even on a 16 GB RAM system.
Simple solution that worked for me: I closed all the unnecessary products in the Product Explorer panel. But I have fairly powerful hardware (i7-4720 at 2.6 GHz, 16 GB RAM, a dedicated graphics card, and a dual SSD/classic hard drive setup), so this error was quite unexpected.
16 GB should definitely be enough, but more RAM never hurts.
I have a 16 GB RAM system and I am processing a stack of 30 dates of Sentinel-1 data in order to apply a multi-temporal filter. My study area is not covered by the same track on all dates (for some dates I have two S1 images for the same date), so I can't apply the same subset to all images.
Due to the "Cannot construct data buffer" problem when processing the whole image, I wrote a shell/Python script that adds dates to my master image subset (January 1st, 2016) month by month, and it performed very well until November.
It is impossible to perform the stack with all the December images. I increased the memory parameters as specified below, and I even tried to stack just the December images (6 dates), but I always get the same "Cannot construct data buffer" error. Stacking the December images 2 by 2 works, but with the 6-date stack I get the famous error! I find that very strange. Does anyone have an idea or a suggestion? Thanks in advance for your help!
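For readers trying the same workaround, the month-by-month approach can be sketched as a small script that builds one `gpt` stacking call per month, always adding the new dates onto the running result. This is a minimal, hypothetical sketch: the graph file name, the `-t` target convention, and all file names are assumptions, not the original poster's actual script.

```python
# Hypothetical sketch of month-by-month stacking with gpt.
# The graph file "create_stack_graph.xml" and all file names are
# assumed for illustration; adapt them to your own data.
def monthly_stack_commands(dates_by_month, master="stack_master.dim",
                           graph="create_stack_graph.xml"):
    commands = []
    for month, date_files in sorted(dates_by_month.items()):
        target = f"stack_until_{month}.dim"
        sources = " ".join([master] + date_files)
        commands.append(f"gpt {graph} -t {target} {sources}")
        master = target  # next month stacks onto this intermediate result
    return commands

cmds = monthly_stack_commands({
    "2016-02": ["S1_05Feb2016.dim", "S1_17Feb2016.dim"],
    "2016-03": ["S1_12Mar2016.dim"],
})
```

Each command could then be run with `subprocess.run`, keeping memory use bounded because every `gpt` invocation is a fresh JVM.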
You may have installed the 32-bit SNAP while your Java is 64-bit, and the data should not be on an external hard drive.
Have you changed the snappy.ini file?
Hi Marpet, thank you for the reply. No, I didn't, because I'm not using snappy. My Python script just renames the stacked images and the .dim file after each stack (the image names were getting too long with the master name appended each time, e.g. Sigma0_VV_slv10_25Mar2016_mst_01Jan2016.img to Sigma0_VV_slv10_25Mar2016.img).
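For anyone wanting to do the same cleanup, the renaming step described above can be sketched in a few lines. This is only an illustration based on the example file name in the post, not the original script: it strips the appended `_mst_<date>` suffix from a band file name.

```python
import re

# Strip the appended "_mst_<date>" master suffix from stacked band
# file names, e.g.
#   Sigma0_VV_slv10_25Mar2016_mst_01Jan2016.img
#     -> Sigma0_VV_slv10_25Mar2016.img
def strip_master_suffix(name):
    # Date pattern like 01Jan2016, only just before the .img extension
    return re.sub(r"_mst_\d{2}[A-Za-z]{3}\d{4}(?=\.img$)", "", name)

renamed = strip_master_suffix("Sigma0_VV_slv10_25Mar2016_mst_01Jan2016.img")
# -> "Sigma0_VV_slv10_25Mar2016.img"
```

Applying this with `os.rename` over the `.data` folder (and editing the band names inside the `.dim` XML to match) would reproduce the described behaviour.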
But my problem was only solved when working on a 32 GB RAM computer…
I would like to share my solution in case somebody "suffered" the same as I did with this error. I am working with the snappy module on Python 2.7. I tried everything discussed here on my own laptop, which has 10 GB of RAM, and nothing worked. So I thought there was nothing else to do and that it was all about the RAM. I then tried on the computer in my office, where I don't have administrator permissions and which has 32 GB… There it worked. I nevertheless decided to reinstall everything on my own laptop in the same way I did at the office. Since I had no administrator permissions there, I had installed both SNAP and the Intel Distribution for Python in my C:\Users\user\AppData\Local folder, where no admin rights are required. This is what finally did the trick for me! The only other things I changed were enabling the file cache option and writing 10G in the snappy.ini file. I'm new to Python and this is probably a very basic solution, which is perhaps why it wasn't discussed here, but for all the new users, there it is! I hope it helps you!
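For reference, the snappy.ini change mentioned above is a one-line edit. On a typical install the file (in the snappy folder) looks roughly like this; the exact paths and commented-out options vary by installation, so treat this as an illustrative fragment:

```
[DEFAULT]
snap_home = <path to your SNAP installation>
# java_options: -Djava.awt.headless=false
java_max_mem: 10G
```

Raising `java_max_mem` increases the heap available to the JVM that snappy starts.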
I know this adds to an old post, but I keep getting the "Cannot construct data buffer" error on a 32 GB RAM computer. I work in Linux (launching a graph with GPT for SLC to GRD with the latest SNAP version), and the RAM consumption does not climb above 10 GB.
Have you configured your gpt.vmoptions file? Please see, for example, the following post:
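For context, gpt.vmoptions lives in the `bin` folder of the SNAP installation and simply lists one JVM option per line. Raising the heap limit would look something like the following (the 16G value is just an example to adjust to your machine):

```
-Xmx16G
```

This sets the maximum Java heap for `gpt` launches, independently of the tile cache size passed with `-c` on the command line.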
Thanks for the quick answer. There are indeed many posts on the topic, but I couldn't find the best solution. Previously I tried with -c 30G or -c 16G without success.
I've tried with -Xmx16G, but then I get another error:
```
INFO: org.hsqldb.persist.Logger: dataFileCache open start
...10%. Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000003f2f80000, 1206386688, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 1206386688 bytes for committing reserved memory.
```
Is there a compromise between -c and -Xmx16G? (Like setting both to 16G?)
Update: I was testing -Xmx together with -c and it didn't work, but using -Xmx alone it worked fine. Thank you for the tips.