I'm facing the following problem:
My program needs to perform calculations that allocate large amounts of data, and a single run may last minutes. While the calculations run, memory blocks are continually allocated, used, and freed (this allocation pattern cannot be changed).
I have found that once the program has initialized some structures with data, the calculation speed drops drastically. As I later discovered, more time was spent in GC than when these structures were not [fully] initialized. My assumption is that the GC needs more time to traverse the more complicated object graph when searching for objects that can be removed from the managed heap.
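For what it's worth, here is roughly how I compared the two situations (a minimal sketch; RunCalculations is a placeholder for the loop shown below, and collection counts are only a crude proxy for the "% Time in GC" counter a profiler would report):

var sw = System.Diagnostics.Stopwatch.StartNew();
int gen0Before = GC.CollectionCount(0);
int gen2Before = GC.CollectionCount(2);

RunCalculations(); // placeholder for the allocation-heavy loop below

sw.Stop();
Console.WriteLine("elapsed: {0}, gen0 collections: {1}, gen2 collections: {2}",
    sw.Elapsed,
    GC.CollectionCount(0) - gen0Before,
    GC.CollectionCount(2) - gen2Before);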
In general, the calculations run in a loop:
while (workRemains)      // loop condition is application-specific
    Calculate();         // perform a single calculation
A call to Calculate allocates [say] 50 MB on the heap, organized in an object hierarchy. When Calculate runs in a tight loop, memory is inevitably exhausted and the GC has to run to reclaim space. The problem is that when the structures mentioned above are fully initialized, the GC spends a lot of time traversing them. The result is a high percentage of time spent in GC and poor overall performance.
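For concreteness, a stripped-down sketch of the allocation pattern inside Calculate (the class names and sizes are invented; the real hierarchy is more involved):

using System.Collections.Generic;

class Node
{
    public double[] Payload;
    public List<Node> Children = new List<Node>();
}

static class Worker
{
    public static void Calculate()
    {
        // Build roughly 50 MB of short-lived objects organized in a
        // hierarchy, use them, then let the whole graph become garbage.
        var roots = new List<Node>();
        for (int i = 0; i < 1000; i++)
        {
            var node = new Node { Payload = new double[6000] }; // ~48 KB
            node.Children.Add(new Node { Payload = new double[100] });
            roots.Add(node);
        }
        // ... the actual computation reads and writes roots here ...
    }
}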
The question is:
Is it possible to give a hint to the CLR memory manager that these long-lived structures are still needed, so that it does not have to traverse them on every collection?
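For illustration, the closest existing mechanisms I am aware of are coarse and process-wide rather than per-structure (and they assume .NET Framework 4.6 or newer):

// Favor throughput over latency for sustained, allocation-heavy work.
System.Runtime.GCSettings.LatencyMode =
    System.Runtime.GCLatencyMode.SustainedLowLatency;

// Or suspend collections entirely while a bounded amount is allocated.
// Note: EndNoGCRegion throws if a collection occurred in the meantime.
if (GC.TryStartNoGCRegion(100L * 1024 * 1024)) // allocation budget in bytes
{
    try { Worker.Calculate(); }
    finally { GC.EndNoGCRegion(); }
}

A per-object-graph hint, if one exists, is what I am really after.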