Proposal: let ReSharper go out of proc!

One of the biggest adoption blockers for R#, in my opinion, is that for medium to large projects R# kills VS performance, and it happens because the VS process runs out of memory.

For example, after loading the solution I currently work on into VS 2013, the status bar shows that R# consumes 500 MB of RAM. After some activity it often grows to 1 GB. Unfortunately, VS is still a 32-bit process (and I would not hold my breath for a 64-bit version any time soon), and even on a 64-bit OS, when the VS process takes more than 2 GB of memory (maybe closer to 2.5 GB), VS gets really slow and, what is worse, unstable!

R# is most useful for big projects, which take a lot of VS memory by themselves. R# in turn needs even more memory to handle them, and as a result, by the time you really need it, it is barely usable.

I wonder if the R# team has considered extracting the memory-hungry portions of it to run outside the VS process? This could be taken to different levels. As a minimum, R# could start a helper process (64-bit if the environment allows) and use it to keep its data structures as an in-memory cache, while most of its code still runs inside VS; a rough sketch of this level follows below. At the maximum level, most of R# would run as a separate process, with only proxy code left inside VS.
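
Just to illustrate the minimal level, here is a very rough C# sketch of the kind of thing I mean: a VS-side proxy that parks cache entries in a separate (potentially 64-bit) helper process over anonymous pipes. CacheHost.exe and the one-line-per-request protocol are made up for the example, of course, not anything R# actually has:

    // VS-side proxy: stores key/value cache entries in a helper process.
    // The helper (CacheHost.exe, hypothetical) would read requests from one
    // pipe and write answers to the other.
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.IO.Pipes;

    class OutOfProcCache : IDisposable
    {
        private readonly Process _helper;
        private readonly StreamWriter _request;
        private readonly StreamReader _response;

        public OutOfProcCache(string helperPath)
        {
            // The client ends of both pipes are inherited by the helper process.
            var toHelper = new AnonymousPipeServerStream(PipeDirection.Out, HandleInheritability.Inheritable);
            var fromHelper = new AnonymousPipeServerStream(PipeDirection.In, HandleInheritability.Inheritable);
            _helper = Process.Start(new ProcessStartInfo(helperPath,
                toHelper.GetClientHandleAsString() + " " + fromHelper.GetClientHandleAsString())
                { UseShellExecute = false });
            toHelper.DisposeLocalCopyOfClientHandle();
            fromHelper.DisposeLocalCopyOfClientHandle();
            _request = new StreamWriter(toHelper) { AutoFlush = true };
            _response = new StreamReader(fromHelper);
        }

        // A deliberately trivial text protocol: "PUT key value" / "GET key".
        public void Put(string key, string value) { _request.WriteLine("PUT " + key + " " + value); }

        public string Get(string key)
        {
            _request.WriteLine("GET " + key);
            return _response.ReadLine(); // the helper answers with the value, or an empty line
        }

        public void Dispose()
        {
            _request.WriteLine("QUIT");
            _helper.WaitForExit();
        }
    }

The data would live in the helper's (much larger) address space, and VS would only pay for the proxy and the marshalling.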

Obviously this move would make R# slower than it is now, but I honestly believe that a substantial part of R# customers would trade it for a slower version that affects VS less, any day! An additional benefit would be stability: an R# crash or malfunction in a separate process has much less chance of disturbing VS.

Ideally this out-of-proc mode would be optional or even automatic, so that the out-of-proc data structures and components are used when memory pressure in the VS process rises above a predefined threshold.

Thank you for consideration!

Konstantin


ReSharper already hosts some content out of process for these reasons (it's the snappily titled JetBrains.ReSharper.ExternalProcessStorage.Process.exe you sometimes see in Task Manager). It's only used for some solution-wide analysis data, where the cached information can safely be pushed to an external process without breaking things or impacting performance too negatively.

ReSharper 9 is changing the implementation of some of the caches, which can take up a lot of in-process memory for large projects. Instead of keeping the whole cache in memory at all times, it uses Google's very fast and efficient leveldb key-value store to enable on-demand loading of parts of the cache. This reduces physical memory usage as well as startup time (the cache isn't loaded until it's needed, and only what is needed gets loaded).
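
As a rough sketch of that pattern (not ReSharper's actual code; IPersistentStore here stands in for a leveldb binding):

    // On-demand cache loading: nothing is deserialized at startup; entries
    // are fetched from the on-disk key-value store on first use and kept in
    // a small in-memory layer afterwards.
    using System.Collections.Generic;
    using System.Text;

    interface IPersistentStore
    {
        byte[] Get(byte[] key);            // null when the key is absent
        void Put(byte[] key, byte[] value);
    }

    class OnDemandCache
    {
        private readonly IPersistentStore _disk;
        private readonly Dictionary<string, byte[]> _hot = new Dictionary<string, byte[]>();

        public OnDemandCache(IPersistentStore disk) { _disk = disk; }

        public byte[] Get(string key)
        {
            byte[] value;
            if (_hot.TryGetValue(key, out value))
                return value;                               // already resident
            value = _disk.Get(Encoding.UTF8.GetBytes(key)); // touch the disk only on a miss
            if (value != null)
                _hot[key] = value;                          // load only what is actually used
            return value;
        }
    }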

Performance is taken very seriously. For one thing, we use ReSharper while building ReSharper, and our solution is currently 310 projects large, so we quite often see performance issues before they reach customers, and we do our best to make the product efficient even for these very large solutions. We don't tend to see real slowdowns and aren't currently aware of any instability (do you mean Visual Studio crashes?). If you're encountering serious issues, you can start a profiling session from the ReSharper -> Help menu and send us performance information that we can take a look at.


Thanks a lot, Matt!

Great to hear you are going to utilize Google's leveldb!

Thanks for telling me how to start profiling VS from the R# -> Help menu! I never knew it was there. It would be great to publish some tips on how to use dotTrace specifically to profile VS and R# (or at least on how to provide useful traces to the R# support team).

Yes, I have noticed JetBrains.ReSharper.ExternalProcessStorage.Process.exe and don't mind seeing it more often, especially when R# reports that it is using more than 1 GB of memory in the VS process.

When I mentioned VS instability, what I specifically meant is that when VS memory consumption gets close to 2.5 GB (according to Windows Resource Monitor), it does not necessarily crash (although sometimes it does), but slows down almost to a complete stop, when trivial operations make its UI unresponsive for many minutes. By the way, I have never seen the VS process consume MORE than 2.5 GB (on a 64-bit OS), and I wonder if that is because of memory fragmentation or some other reason?

I understand the R# 9 EAP publishes Debug builds, which makes it almost impossible to notice (let alone measure) any performance improvements for big projects. It would be great to publish fully optimized builds for the EAP once in a while to let people check the performance aspects of the new version.

Thanks again!
Konstantin


Hello,

>> Great to hear you are going to utilize Google's leveldb!


It's already there in the EAP builds )

>> Yes, I have noticed JetBrains.ReSharper.ExternalProcessStorage.Process.exe and don't mind seeing it more often,


I'm afraid you won't ;) It's been superseded by the leveldb storage, which does disk swapping instead of interprocess communication.

>> When I mentioned VS instability, what I specifically meant is that when VS memory consumption gets close to 2.5 GB (according to Windows Resource Monitor), it does not necessarily crash (although sometimes it does), but slows down almost to a complete stop, when trivial operations make its UI unresponsive for many minutes. By the way, I have never seen the VS process consume MORE than 2.5 GB (on a 64-bit OS), and I wonder if that is because of memory fragmentation or some other reason?


It depends on what's shown as "memory consumption". Usually you want to look at the private commit size there, which indicates the memory pressure the process puts on the system. But OOMs would come from virtual address space exhaustion in the first place. My devenv instance with our largest solution currently shows about 3.1 GB of address space allocated, while there's only 2.3 GB of private commit size.
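
If you want to watch these counters without Resource Monitor, something like this (plain System.Diagnostics, nothing R#-specific; "devenv" is just the process you'd presumably want to inspect) prints all three:

    using System;
    using System.Diagnostics;

    class MemoryCounters
    {
        static void Main()
        {
            foreach (var p in Process.GetProcessesByName("devenv"))
                Console.WriteLine("PID {0}: virtual = {1:N0} MB, private = {2:N0} MB, working set = {3:N0} MB",
                    p.Id,
                    p.VirtualMemorySize64 / (1024 * 1024),   // address space allocated
                    p.PrivateMemorySize64 / (1024 * 1024),   // private commit
                    p.WorkingSet64 / (1024 * 1024));         // physical RAM in use
        }
    }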

>> I understand the R# 9 EAP publishes Debug builds, which makes it almost impossible to notice (let alone measure) any performance improvements for big projects. It would be great to publish fully optimized builds for the EAP once in a while to let people check the performance aspects of the new version.


After reworking all of our infrastructure to allow multiple products to live in one devenv instance without overhead, we haven't yet gotten optimized builds into a deployable form. But that's coming soon; we're working on it right now. I'm not sure whether non-checked builds will be published for the EAP before the RC or around then, though, because that way we don't get the exceptions back.


Serge Baltic
JetBrains, Inc — http://www.jetbrains.com
“Develop with pleasure!”



-- It's already there in the EAP builds )

I think I can see an improvement and hope it is not a placebo, because the R# memory load indicator in the VS status bar has mysteriously disappeared, despite the option for it being set.

-- It's been superseded by the leveldb storage, which does disk swapping instead of interprocess communication.

I don't really care too much which swapping paradigm you choose, as long as it leaves more memory for VS itself. I'm just genuinely surprised that disk swapping (however efficiently it is implemented) is faster than or even comparable with interprocess communication, which is memory-to-memory after all.
But I guess devs with abundant RAM should just invest in a better caching solution, like PrimoCache, which works so much better than Windows' built-in disk caching.

-- Depends on what's shown for "memory consumption"

I use the Windows 8 built-in Resource Monitor (simply because it is always there) and look at the first column on the Memory tab, labelled "Commit". I don't know exactly what they mean by it, but the numbers there are always a little less than Working Set (the second column) and a little more than Private (the third column).

-- I'm not sure whether non-checked builds will be published for the EAP before the RC or around then, because that way we don't get the exceptions back.

You always have at least two options:
1. Produce an optimized build which does not suppress exceptions and still lets them be reported. I think my R# reports about 20 exceptions per day :)
2. As I said, publish optimized builds alternately with debug builds, specifically for perf-related beta testing.

Kostya


Hello,

>>> It's been superseded by the leveldb storage, which does disk swapping instead of interprocess communication.

>> I'm just genuinely surprised that disk swapping (however efficiently it is implemented) is faster than or even comparable with interprocess communication, which is memory-to-memory after all.

Probably it's not, but the change goes beyond just switching backends; the feature has been largely rewritten. With its current memory needs and in-memory cache strategy, it's OK with leveldb.

I guess the fastest alternative for extending the process memory space would be to use memory-mapped files (without a disk file backend). It would be something like PAE, but for WOW64 processes.
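
For illustration, a pagefile-backed section in .NET looks something like this (the name and size are made up for the example):

    using System;
    using System.IO.MemoryMappedFiles;

    class PagefileBackedSection
    {
        static void Main()
        {
            // CreateNew makes a section backed by the pagefile, not by a file
            // on disk; a second process could open the same name to share it.
            using (var section = MemoryMappedFile.CreateNew("RsCacheSection", 256L * 1024 * 1024))
            using (var view = section.CreateViewAccessor())
            {
                view.Write(0, 12345L);                // stash a value...
                Console.WriteLine(view.ReadInt64(0)); // ...and read it back
            }
        }
    }

The catch for a 32-bit host is that a mapped view still consumes its address space, so you'd map small windows on demand rather than the whole section.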

>> I use the Windows 8 built-in Resource Monitor (simply because it is always there) and look at the first column on the Memory tab, labelled "Commit". I don't know exactly what they mean by it, but the numbers there are always a little less than Working Set (the second column) and a little more than Private (the third column).

It looks as if that's all about monitoring the health of the system as a whole (like who's eaten up all the memory), rather than monitoring the health of a specific process. You could try something like http://live.sysinternals.com/procexp.exe for the "Virtual Size" column (or the field in process details), or http://live.sysinternals.com/vmmap.exe for process memory details.

>>> I'm not sure whether non-checked builds will be published for the EAP before the RC or around then, because that way we don't get the exceptions back.

>> You always have at least two options:
>> 1. Produce an optimized build which does not suppress exceptions and still lets them be reported. I think my R# reports about 20 exceptions per day :)

That's where it gets complicated: an optimized build optimizes away many of the exception throwers, and those that are left won't supply detailed metadata. On the other hand, if you leave all the exception machinery intact, the build is not so optimized anymore.
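
For example, guards of this kind are typically compiled away completely when DEBUG is not defined, call sites included (Assertion here is an illustrative stand-in, not our actual helper):

    using System;
    using System.Diagnostics;

    static class Assertion
    {
        [Conditional("DEBUG")] // in a release build the compiler drops every call to this
        public static void Assert(bool condition, string message)
        {
            if (!condition)
                throw new InvalidOperationException(message);
        }
    }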

>> 2. As I said, publish optimized builds alternately with debug builds, specifically for perf-related beta testing.

I'll talk to QA, but I'm not sure the checked builds would still get their audience if we did it this way.


Serge Baltic
JetBrains, Inc — http://www.jetbrains.com
“Develop with pleasure!”

