Displayr is cloud-based software, so a document uses resources on both the cloud server and your local browser/computer. As a document grows with more data and outputs, it requires more resources. At some point you may notice slow performance or strange errors (e.g., needing to restart). Our memory profiling tool lets you investigate the memory load of your document and evaluate the load from individual items to see what might be causing the slowness. This article describes how to start the memory profiler and how to interpret the memory usage profile.
Start the memory profiler
From an open Displayr document, click the Tools menu on the top left of the screen and select Memory Profiling.
This will open a new tab in your browser, with an output like this:
Interpret the Memory Usage Profile
The following categories can appear in the memory profiler:
- Non-profiled memory: The profiler does not try to capture every object in the running process; it samples the objects we know to be large. Non-profiled memory represents the objects that are not sampled, as well as memory reserved by the .NET runtime. Custom objects in the document that do not fall into one of the defined categories are also counted in the non-profiled portion.
- Transform manager: This is where certain cached results are stored. If there are lots of constructed variables in the file, especially ones that depend on other constructed items, the intermediate results are cached to make recalculations quicker, at the cost of using more memory. Examples include R data sets, R variables, and JavaScript variables. (A way to gauge the size of an individual R result is sketched after this list.)
- Data sets: This is the memory required by the raw data file loaded into the document, plus the final results of any constructed variables made in Displayr.
- R grids
- R items
- Images
- Tables
- All other analysis-based derived items (anything other than the four categories immediately above).
- Subscriptions: This mainly represents the size of the cache that the server keeps in response to network requests from the client.
- QPack stream
- Document XML: This is the size of the .Q file if it is loaded in memory.
- Project history (not the undo stack).
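If you want a rough idea of how large a particular R result is, one option is to check it from within the R item itself using base R's object.size(). The sketch below is a minimal illustration; the object built here is hypothetical and stands in for whatever your R item actually returns.

```
# Minimal sketch: estimate the in-memory size of an R item's output.
# 'result' is a hypothetical object; substitute the output your R item returns.
result <- data.frame(
  id     = seq_len(100000),
  rating = sample(1:10, 100000, replace = TRUE)
)

# object.size() is base R; it reports an approximate memory footprint.
format(object.size(result), units = "MB")
```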
Note that:
- The memory used by large data sets can be reduced by using a (leaner) .sav or .QDat version of the raw data file; see Create a Separate Data Preparation Document and Reducing the Number of Unnecessary Variables. Reducing the number of variables (in the raw data or constructed in Displayr) or the number of cases will also reduce the memory required. Text data is also stored less efficiently than other data types, so trimming large open-ended text variables can be very helpful (one approach is sketched after this list).
- A large non-profiled memory figure may indicate the project has been open for a long time (so the undo history has built up in memory); in this case, closing all tabs related to the project and reopening them after 20 minutes may help.
- Transform manager - a large value here indicates large cached R results. You may be able to reduce this by deleting R data sets and variables that are not necessary for your project.
- Subscriptions - this caches all network requests. Large objects, R results in particular, get cached. How to optimize this depends on what is large in the project.
- Document XML - not much can be done about this, as it keeps track of the structure of the project.
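If you prefer to prepare a leaner data file outside Displayr before uploading it, the sketch below shows one way to keep only the variables you need and drop a large open-ended text variable. It assumes the haven R package is installed; the file and variable names are hypothetical placeholders for your own data.

```
# Minimal sketch (assumes the haven package; names below are hypothetical).
library(haven)

# Read the original SPSS file.
dat <- read_sav("survey_full.sav")

# Keep only the variables actually used in the Displayr document.
dat <- dat[, c("RespondentID", "Q1", "Q2", "Weight")]

# Alternatively, drop a single large open-ended text variable.
# dat$OpenEndedComments <- NULL

# Write a leaner .sav to upload to Displayr in place of the original.
write_sav(dat, "survey_lean.sav")
```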