Garbage collection should run on project load

Opening solutions in VS2008 does not cause R# to run the garbage collector. You can watch the memory use in the toolbar continually climb as you open solutions. I think that the GC should run on solution open. Anybody else feel that way? I'm running build 750. At least in 750 running the garbage collector takes you back down to where it should be. Builds before that continually climbed in memory usage.

8 comments

Hello,

Opening solutions in VS2008 does not cause R# to run the garbage
collector.


If memory usage starts causing problems, the .NET Runtime runs the GC on its
own. If memory usage is not causing the program any problems, the GC is not
run. We do not override the default GC behavior, because running the GC
blindly can make things worse, not better.
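For the sake of discussion, this is roughly what "running the GC blindly" means in .NET; a minimal sketch using only the standard `System.GC` API:

```csharp
// Illustration of a forced, "blind" collection in .NET.
// GC.Collect() with default arguments forces a full, blocking,
// compacting collection of all generations -- exactly the expensive
// operation the runtime normally schedules for you only when needed.
using System;

class ForcedCollection
{
    static void Main()
    {
        long before = GC.GetTotalMemory(forceFullCollection: false);

        // A blind full collection: pauses managed threads, promotes
        // short-lived survivors, and flushes warm caches.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect(); // sweep objects released by the finalizers

        long after = GC.GetTotalMemory(forceFullCollection: false);
        Console.WriteLine($"Heap before: {before} bytes, after: {after} bytes");
    }
}
```

The double `Collect()` around `WaitForPendingFinalizers()` is the usual pattern when one insists on a full cleanup, and the pauses it causes are the cost being weighed against in the reply above.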

Also, reopening large solutions without restarting Visual Studio is, generally,
not a good idea.


Serge Baltic
JetBrains, Inc — http://www.jetbrains.com
“Develop with pleasure!”



In general, there would be no harm in having R# force a GC collection per project, but the CLR does a pretty good job of GC on its own.

But I would not be surprised if there are some memory problems with subsequent opens of a solution/project -- stale information kept around due to some missing cleanup code, etc. Since both VS and R# watch for background modifications of files (and automatically process such changes), there are probably still some bugs with properly flushing stale information on reloads or rescans.


Also, reopening large solutions without restarting
Visual Studio is, generally, not a good idea.


And yet, people do it all the time, especially those who regularly interact with a source control system on a rapidly changing solution. These are usually the very same people who rely most on R#.

I'm with the other repliers: yes, in a perfect world you shouldn't have to force GC, but this isn't a perfect world. Project/solution (re)load is a moment when R# knows full well that its structures are about to be thrown away, so a collection there can genuinely reduce memory pressure. We can click the button ourselves, but it would be better still if R# forced a GC between dropping its old references and loading up the new ones. Heck, make it configurable if you want, and default it to off if you like, but it is the least you could do to help users (especially those with large projects and/or solutions) avoid OOM by reducing spikes in memory pressure.
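The suggestion above can be sketched in a few lines. Note that `SolutionCache` and `OnSolutionClosing` are hypothetical names invented for illustration; they do not come from the actual R# codebase:

```csharp
// Sketch of the proposal: drop references to the old solution's data,
// then force a collection while the heap is at its emptiest, before
// the next solution starts allocating.
using System;

class SolutionReloadSketch
{
    // Hypothetical stand-in for R#'s per-solution caches and indices.
    static byte[] solutionCache;

    static void OnSolutionClosing()
    {
        // 1. Drop all references to the old solution's structures.
        solutionCache = null;

        // 2. Force a full compacting collection between "dumping the
        //    old references" and "loading up the new ones".
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }

    static void Main()
    {
        solutionCache = new byte[50 * 1024 * 1024]; // pretend solution data
        OnSolutionClosing();
        Console.WriteLine("Old solution data released and collected.");
    }
}
```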


I don't understand this - how are people getting OOM if there's memory that would be collected by forcing a GC? Surely the .NET collector isn't that seriously broken, is it?


From what I understand, it is due to memory fragmentation and a problem with how Visual Studio handles memory allocation.


Hello,

I don't understand this - how are people getting OOM if there's memory
which would be collected by forcing a GC?


When it comes to a managed OOM, there is already no memory left for the GC
to collect, as we've been assured.

When running on a 32-bit system with only 2 GB of virtual address space
available, that address space gets severely fragmented, and at some point a
virtual allocation (one not yet backed by any RAM or swap) will fail. This
does not mean that all 2 GB have been used up, only that no more can be
allocated.

On a 64-bit system (Vista, at least), Visual Studio has about 3.5 GB of
virtual address space available, and the problem does not manifest itself.
Memory management is also a little more robust in Vista.
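The fragmentation failure mode described here can be illustrated with a toy model (invented numbers, not a real reproduction): the total free space can be ample while no single contiguous hole is large enough for one allocation.

```csharp
// Toy model of virtual-address-space fragmentation: plenty of free
// bytes overall, yet a single contiguous request cannot be satisfied.
using System;
using System.Linq;

class FragmentationToy
{
    static void Main()
    {
        // Sizes (MB) of the free holes left in a fragmented 2 GB space.
        int[] freeHoles = { 30, 12, 25, 8, 30, 17, 28 };

        int totalFree = freeHoles.Sum();   // 150 MB free in total...
        int largestHole = freeHoles.Max(); // ...but only 30 MB contiguous
        int request = 64;                  // one 64 MB segment wanted

        Console.WriteLine($"Free: {totalFree} MB, largest hole: {largestHole} MB");
        Console.WriteLine(request <= largestHole
            ? "Allocation succeeds"
            : "OutOfMemoryException despite free memory");
    }
}
```

This is why a process can throw OOM long before its address space is actually exhausted.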


Serge Baltic
JetBrains, Inc — http://www.jetbrains.com
“Develop with pleasure!”



Unfortunately, the problem does manifest itself even on 64-bit, depending on what managed add-ins you have running. Right now it is not as serious (i.e. frequent) a problem as with 32-bit, but as long as all add-ins share the same address space there will always be a hard limit.

Since GC includes compaction, I am not confident that it is a fragmentation issue, but I freely admit I have no knowledge of what GC memory ends up being pinned during a session.

This is the kind of problem that WeakReference support should help with, but there is typically very little use of WeakReference. The lifetime of event handlers is probably also a big factor, and addressing it involves a pretty wide-scale refactoring.
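The two lifetime issues mentioned here can be sketched side by side; the `Publisher`/`Subscriber` names are invented for illustration:

```csharp
// Two lifetime patterns: a WeakReference lets the GC reclaim a cached
// object, while a never-unsubscribed event handler keeps its
// subscriber alive for the whole lifetime of the publisher.
using System;

class Publisher
{
    public event EventHandler SomethingChanged;
    public void Raise() => SomethingChanged?.Invoke(this, EventArgs.Empty);
}

class Subscriber
{
    public void OnChanged(object sender, EventArgs e) { /* react */ }
}

class LifetimeSketch
{
    static void Main()
    {
        // WeakReference: the cache holds the object without rooting it,
        // so the GC is free to collect the array whenever it pleases.
        var weak = new WeakReference(new byte[1024]);
        GC.Collect();
        Console.WriteLine($"Weakly-held object still alive: {weak.IsAlive}");

        // Event handler: publisher.SomethingChanged holds a strong
        // reference to subscriber until it is removed explicitly.
        var publisher = new Publisher();
        var subscriber = new Subscriber();
        publisher.SomethingChanged += subscriber.OnChanged;
        // ...the cleanup that is easy to forget:
        publisher.SomethingChanged -= subscriber.OnChanged;
    }
}
```

Forgetting that final `-=` is exactly the kind of "missing cleanup code" that could keep a whole solution's worth of stale objects reachable across reloads.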


Simple. Using a 32-bit system (without any /3GB or PAE tricks) as an example, push an app domain up near 800MB of memory consumed and then rapidly remove and reapply roughly equal amounts of memory pressure. Heck, put it anywhere even remotely near 800MB and you can still remove and reapply enough memory pressure, rapidly enough, that the allocations fall between collection cycles and cause an OOM. This is the kind of situation where, if the application developer has good knowledge of the application's memory usage patterns -- better knowledge than the runtime can have -- a forced GC collection may be appropriate. Not a subject to be treated lightly, of course, or we'll find ourselves in a world where applications unnecessarily force GC willy-nilly.
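The spike pattern described above can be modeled in a few lines; `GC.CollectionCount` lets you watch how many full collections the runtime scheduled on its own during the churn (the 80 MB figure is an arbitrary choice for illustration):

```csharp
// Toy model of rapid memory-pressure spikes: repeatedly apply and
// drop a large allocation, then report how many gen-2 collections
// the runtime ran unprompted while it happened.
using System;

class PressureSpikes
{
    static void Main()
    {
        int gen2Before = GC.CollectionCount(2);

        for (int i = 0; i < 10; i++)
        {
            // ~80 MB applied and released each iteration; on a heap
            // already near the address-space limit, a new spike can
            // land before the previous one has been collected.
            byte[] spike = new byte[80 * 1024 * 1024];
            spike[0] = 1; // touch it so the buffer is really committed
            spike = null; // drop the pressure again
        }

        Console.WriteLine("Gen-2 collections during spikes: " +
                          (GC.CollectionCount(2) - gen2Before));
    }
}
```

If the count stays low while the spikes are large, allocations really are landing between collection cycles, which is the window where an informed, explicit `GC.Collect()` after a known release phase could help.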

