Your Python code reads some data, processes it, and uses too much memory; maybe it even dies due to an out-of-memory error. To reduce memory usage, you first need to figure out:
- Where peak memory usage is, also known as the high-water mark.
- What code was responsible for allocating the memory that was present at that peak moment.
That’s exactly what Fil will help you find. Fil is an open source memory profiler designed for data processing applications written in Python, and it includes native support for Jupyter.
Fil is designed for offline profiling: it has enough of a performance impact that you won’t want to use it on production workloads, but it can profile even small amounts of memory.
If you want memory (and performance!) profiling for your Python batch jobs in production, consider using Sciagraph.
In this section you’ll learn how to:
Fil requires macOS or Linux, and Python 3.7 or later. You can either use Conda, a sufficiently new version of Pip, or higher-level tools like Poetry or Pipenv.
To install on Conda:
$ conda install -c conda-forge filprofiler
To install the latest version of Fil you’ll need Pip 19 or newer. You can check the current version like this:
$ pip --version
pip 19.3.0
If you’re using something older than v19, you can upgrade by doing:
$ pip install --upgrade pip
If that doesn’t work, try running your code in a virtualenv (always a good idea in general):
$ python3 -m venv venv/
$ source venv/bin/activate
(venv) $ pip install --upgrade pip
Assuming you have a new enough version of pip, you can now install Fil:
$ pip install filprofiler
First, install Fil.
Then, create a Python file called example.py with the following code:
import numpy as np

def make_big_array():
    return np.zeros((1024, 1024, 50))

def make_two_arrays():
    arr1 = np.zeros((1024, 1024, 10))
    arr2 = np.ones((1024, 1024, 10))
    return arr1, arr2

def main():
    arr1, arr2 = make_two_arrays()
    another_arr = make_big_array()

main()
Now, you can run it with Fil:
$ fil-profile run example.py
This will run the program under Fil, and pop up the results.
In the next section, we’ll look at the results and see what they tell us.
Let’s look at the result of the Fil run from the previous section:
What does this mean?
What you’re seeing is a flamegraph, a visualization that shows a tree of callstacks and which ones were most expensive. In Fil’s case, it shows the callstacks responsible for memory allocations at the point in time when memory usage was highest.
The wider or redder the frame, the higher percentage of memory that function was responsible for. Each line is an additional call in the callstack.
This particular flamegraph is interactive:
- Click on a frame to see a zoomed in view of that part of the callstack. You can then click “Reset zoom” in the upper left corner to get back to the main overview.
- Hover over a frame with your mouse to get additional details.
To optimize your code, focus on the wider and redder frames.
These are the frames that allocated most of the memory.
In this particular example, you can see that the most memory was allocated by a line of code in the make_big_array() function.
Having found the source of the memory allocations at the moment of peak memory usage, you can then go and reduce memory usage. You can then validate your changes reduced memory usage by re-running your updated program with Fil and comparing the result.
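For instance, one way to shrink the example above is to switch to a smaller dtype; this is a sketch that assumes float64 precision isn’t actually needed by your computation:

```python
import numpy as np

def make_big_array():
    # float32 instead of NumPy's default float64 halves this
    # allocation from ~400 MiB to ~200 MiB.
    return np.zeros((1024, 1024, 50), dtype=np.float32)

arr = make_big_array()
```

Re-running fil-profile on the modified script should show the make_big_array() frame shrink accordingly.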
In this section you’ll learn:
There are two distinct patterns of Python usage, each with its own source of memory problems.
In a long-running server, memory usage can grow indefinitely due to memory leaks. That is, some memory is not being freed.
- If the issue is in Python code, tools like tracemalloc and Pympler can tell you which objects are leaking and what is preventing them from being freed.
- If you’re leaking memory in C code, you can use tools like Valgrind.
Fil, however, is not specifically aimed at memory leaks, but at the other use case: data processing applications. These applications load in data, process it somehow, and then finish running.
The problem with these applications is that they can, on purpose or by mistake, allocate huge amounts of memory. It might get freed soon after, but if you allocate 16GB RAM and only have 8GB in your computer, the lack of leaks doesn’t help you.
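For example, this sketch never leaks, yet its peak memory usage is briefly double the size of the input, because `data * 2` materializes a full temporary copy:

```python
import numpy as np

def process(data):
    # doubled is a full temporary copy: while this line runs, memory
    # usage is twice the size of data, even though nothing leaks.
    doubled = data * 2
    return doubled.sum()

data = np.ones((1024, 1024, 10))   # ~80 MiB
result = process(data)             # peak is ~160 MiB, then drops back
```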
Fil will therefore tell you, in an easy to understand way:
- Where peak memory usage is, also known as the high-water mark.
- What code was responsible for allocating the memory that was present at that peak moment.
- This includes C/Fortran/C++/whatever extensions that don’t use Python’s memory allocation API (tracemalloc only sees Python’s memory APIs).
This allows you to optimize that code in a variety of ways.
Fil uses the LD_PRELOAD (Linux) / DYLD_INSERT_LIBRARIES (macOS) mechanism to preload a shared library at process startup.
This is why Fil can’t be used as regular library and needs to be started in a special way: it requires setting up the correct environment before Python starts.
This shared library intercepts all the low-level C memory allocation and deallocation API calls, and keeps track of the corresponding allocation.
For example, instead of a malloc() memory allocation going directly to your operating system, Fil will intercept it, keep note of the allocation, and then call the underlying implementation of malloc().
At the same time, the Python tracing infrastructure (the same infrastructure used by coverage.py) is used to figure out which Python callstack/backtrace is responsible for each allocation.
In this section you will learn how to use Fil to profile:
You will also learn how to use Fil to debug:
You want to get a memory profile of your Python program end-to-end, from when it starts running to when it finishes.
Let’s say you usually run your program like this:
$ python yourscript.py --input-file=yourfile
Instead, run it under Fil like this:
$ fil-profile run yourscript.py --input-file=yourfile
And it will generate a report and automatically try to open it for you in a browser.
Reports will be stored in the fil-result/ directory in your current working directory.
You can also use this alternative syntax:
$ python -m filprofiler run yourscript.py --input-file=yourfile
If your program is usually run as a module:
$ python -m yourapp.yourmodule --args
You can run it with Fil like this:
$ fil-profile run -m yourapp.yourmodule --args
Or like this:
$ python -m filprofiler run -m yourapp.yourmodule --args
To measure peak memory usage of some code in Jupyter you need to do three things:
Jupyter notebooks run with a particular “kernel”, which most of the time just determines which programming language the notebook is using, like Python or R. Fil support in Jupyter requires a special kernel, so instead of using the “Python 3” kernel you’ll use the “Python 3 with Fil” kernel.
There are two ways to choose this kernel:
- You can choose this kernel when you create a new notebook.
- You can switch an existing notebook in the Kernel menu. There should be a “Change Kernel” option in there in both Jupyter Notebook and JupyterLab.
In one of the cells in your notebook, load the Fil extension:
%load_ext filprofiler
You can now do memory profiles of particular cells:
- Load the extension by running %load_ext filprofiler in a cell.
- Add the %%filprofile magic as the first line of the cell with the code you wish to profile.
Here’s an example session:
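For instance, a cell profiling a NumPy allocation might look like this (illustrative cell contents; the %%filprofile line must be the very first line of the cell):

```
%%filprofile
import numpy as np
arr = np.ones((1024, 1024, 50))
```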
Sometimes you only want to profile your Python program part of the time. For this use case, Fil provides a Python API.
Important: This API turns profiling on and off for the whole process! If you want more fine-grained profiling, e.g. per thread, please file an issue.
Let’s say you have some code that does the following:
def main():
    config = load_config()
    result = run_processing(config)
    generate_report(result)
You only want to get memory profiling for the run_processing() call. You can do so in the code like so:
from filprofiler.api import profile

def main():
    config = load_config()
    result = profile(lambda: run_processing(config), "/tmp/fil-result")
    generate_report(result)
You could also make it conditional, e.g. based on an environment variable:
import os
from filprofiler.api import profile

def main():
    config = load_config()
    if os.environ.get("FIL_PROFILE"):
        result = profile(lambda: run_processing(config), "/tmp/fil-result")
    else:
        result = run_processing(config)
    generate_report(result)
You still need to run your program in a special way. If previously you did:
$ python yourscript.py --config=myconfig
Now you would do:
$ fil-profile python yourscript.py --config=myconfig
Notice that you’re doing fil-profile python, rather than fil-profile run as you would if you were profiling the full script.
Only functions running for the duration of the filprofiler.api.profile() call will have memory profiling enabled, including of course the function you pass in.
The rest of the code will run at (or close to) normal speed and configuration.
Each call to profile() will generate a separate report.
The memory profiling report will be written to the directory specified as the output destination when calling profile(); in our example above that was /tmp/fil-result.
Unlike full-program profiling:
- The directory you give will be used directly; there won’t be timestamped sub-directories.
- If there are multiple calls to profile(), it is your responsibility to ensure each call writes to a unique directory.
- The report(s) will not be opened in a browser automatically, on the presumption you’re running this in an automated fashion.
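One way to keep the reports from multiple profile() calls separate is to put a timestamp in each output path. The helper and directory layout below are hypothetical, just one possible convention:

```python
import time

def report_path(label, base="fil-reports"):
    # One output directory per profile() call, e.g.
    # fil-reports/processing-1700000000 (the timestamp keeps it unique).
    return f"{base}/{label}-{int(time.time())}"

# Usage under Fil (requires running via `fil-profile python yourscript.py`):
# from filprofiler.api import profile
# result = profile(lambda: run_processing(config), report_path("processing"))
```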
Typically when your program runs out of memory, it will crash, get killed mysteriously by the operating system, or suffer other unfortunate side-effects.
To help you debug these problems, Fil will heuristically try to catch out-of-memory conditions, and dump a report if it thinks your program is out of memory. It will then exit with exit code 53.
$ fil-profile run oom.py
...
=fil-profile= Wrote memory usage flamegraph to fil-result/2020-06-15T12:37:13.033/out-of-memory.svg
Fil uses three heuristics to determine if the process is close to running out of memory:
- A failed allocation, indicating insufficient memory is available.
- The operating system or memory-limited cgroup (e.g. a Docker container) only has 100MB of RAM available.
- The process swap is larger than available memory, indicating heavy swapping by the process.
In general you want to avoid swapping, and e.g. explicitly use mmap() if you expect to be using disk as a backfill for memory.
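If you genuinely need more data than RAM, one option is to back the array with a file explicitly, for example via numpy.memmap, so the operating system pages data in and out deliberately rather than swapping behind your back. A minimal sketch:

```python
import os
import tempfile
import numpy as np

# A file-backed array: pages are loaded from disk on demand instead of
# the whole array occupying RAM (or being swapped out unpredictably).
path = os.path.join(tempfile.mkdtemp(), "big.dat")
arr = np.memmap(path, dtype=np.float64, mode="w+", shape=(1024, 1024))
arr[0, :] = 1.0
arr.flush()  # write dirty pages back to the file
```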
For a more detailed example of out-of-memory detection with Fil, see this article on debugging out-of-memory crashes.
Sometimes the out-of-memory detection heuristic will kick in too soon, shutting down the program even though in practice it could finish running. You can disable the heuristic by doing:
$ fil-profile --disable-oom-detection run yourprogram.py
Is your program suffering from a memory leak? You can use Fil to debug it.
Fil works by reporting the moment in your process lifetime where memory is highest. If your program has a memory leak, eventually the highest memory usage point is always the present, as leaked memory accumulates.
If for example your Python web application is leaking memory, you can:
- Start it under Fil.
- Generate lots of traffic that causes memory leaks.
- When enough memory has leaked that it’s noticeable, cleanly kill the process (e.g. Ctrl-C).
Fil will then dump a report that will help pinpoint the leaking code.
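A minimal leaking program you could try this workflow on might look like this (a hypothetical example, not taken from a real server):

```python
cache = []  # grows forever: nothing ever removes entries

def handle_request(i):
    # Simulates a handler that accidentally keeps every response alive.
    cache.append(bytearray(1024 * 1024))  # "leaks" ~1 MiB per request

for i in range(30):
    handle_request(i)
```

Run it under fil-profile, interrupt it with Ctrl-C once memory has grown, and the report should show the callstack through handle_request() dominating the peak.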
For a more in-depth tutorial, read this article on debugging Python server memory leaks with Fil.
By default, Fil will open the result of a profiling run in a browser.
As of version 2021.04.2, you can disable this by using the --no-browser option (see fil-profile --help for details).
If you want to serve the report files from a static directory using a web server, you can do:
$ cd fil-result/
$ python -m http.server
- What Fil tracks.
- How threads are tracked.
- How Fil’s behavior impacts NumPy (BLAS), Zarr, BLOSC, OpenMP, and numexpr.
- Known limitations.
- Getting help.
- Added initial Python 3.11 support; unfortunately this increased performance overhead a little. (#381)
- If /proc is in an unexpected format, try to keep running anyway. This can happen, for example, on very old versions of Linux. (#433)
- Complex flamegraphs should render faster. (#427)
- Hopefully fixed segfault on macOS, on Python 3.7 and perhaps other versions. (#412)
- Added wheels for ARM on Linux (aarch64), useful for running native Docker images on ARM Macs. (#395)
- Stopped using jemalloc on Linux, for better compatibility with certain libraries. (#389)
- Speed up rendering of flamegraphs in cases where there are many smaller allocations, by filtering out allocations smaller than 0.2% of total memory. Future releases may re-enable showing smaller allocations if a better fix can be found. (#390)
- Added wheels for macOS ARM/Silicon machines. (#383)
- Fix a number of potential deadlock scenarios when writing out reports. (#374, #365, #349)
- Give more accurate message when running in no-browser mode (thanks to Paul-Louis NECH). (#347)
- Don’t include memory usage from NumPy imports in the profiling output. This is somewhat inaccurate, but is a reasonable short-term workaround. (#308)
- Added explanation of why error messages are printed on macOS when opening browser. (#334)
- The directories where reports are stored now avoid the characters ‘:’ and ‘.’, for better compatibility with other operating systems. (#336)
- Python 3.6 support was dropped. (#342)
- The jemalloc package used on Linux was unmaintained and old, and broke Conda-Forge builds; switched to a newer one. (#302)
- Reports now have a “open in new tab” button. Thanks to @petergaultney for the suggestion. (#298)
- Improved explanations in report of what it is that Fil tracks, and what a flamegraph tells you. (#185)
- Fix bad command name in the API documentation, thanks to @kdebrab. (#291)
- Work on versions of Linux with weird glibc versions. (#277)
- Build 3.10 wheels for macOS too. (#268)
- Added Python 3.10 support. (#242)
- Added back wheels for macOS Catalina (10.15). (#253)
- Fixed crashes on macOS Monterey. (#248)
- SIGUSR2 previously did not actually dump memory. Thanks to @gaspard-quenard for the bug report. (#237)
- Fix problem on macOS where certain subprocesses (e.g. from Homebrew) would fail to start from Python processes running under Fil. Thanks to @dreid for the bug report. (#230)
- Fix Apache Beam (and other libraries that depend on pickling the __main__ module) when using fil-profile run -m. (#202)
- Fixed potential reentrancy bugs; unclear if this had any user-facing impacts, though. (#215)
- Fixed segfault on some Linux versions (regression in release 2021.7.0). (#208)
- Added a --disable-oom-detection option to disable the out-of-memory detection heuristic. (#201)
- When using the Jupyter %%filprofile magic, locals defined in the cell are now stored in the Jupyter session as usual. (#167)
- Emulate Python’s module running code more faithfully, to enable profiling things like Apache Beam. (#202)
- Fixed bug where certain allocations went missing during thread creation and cleanup. (#179)
- Fixed race condition in threads that resulted in the wrong allocation being removed in the tracking code. (#175)
- Major bugfix: mmap() was usually not added correctly on Linux, and when it was, munmap() was ignored. (#161)
- Added --no-browser option to disable automatically opening reports in a browser. (#59)
- Fixed bug where aligned_alloc()-created allocations were untracked when using pip packages with Conda; specifically this is relevant to libraries written in C++. (#152)
- Improved output in the rare case where allocations go missing. (#154)
- Fixed potential problem with threads noticing profiling is enabled. (#156)
- Fixed bug where reverse SVG sometimes was generated empty, e.g. if source code used tabs. (#150)
- Fil no longer blows up if checking cgroup memory is not possible, e.g. on CentOS 7. (#147)
- Try to ensure monospace font is used for reports. (#143)
- Number of allocations in the profiling results are now limited to 10,000. If there are more than this, they are all quite tiny, so probably less informative, and including a massive number of tiny allocations makes report generation (and report display) extremely resource intensive. (#140)
- The out-of-memory detector should work more reliably on Linux. (#144)
- Improve error messages when using API in subprocesses, so it’s clear it’s not (yet) possible. (#133)
- On Linux, use a more robust method of preloading the shared library (requires glibc 2.30+, i.e. a Linux distribution released in 2020 or later). (#133)
- Fixed in regression in Fil v0.15 that made it unusable on macOS. (#135)
- Fewer spurious warnings about launching subprocesses. (#136)
- Fil now supports profiling individual functions in normal Python scripts; previously this was only possible in Jupyter. (#71)
- Fil now works better with subprocesses. It doesn’t support memory tracking in subprocesses yet, but it doesn’t break them either. (#117)
- Report memory stats when out-of-memory event is detected. (#114)
- Correctly handle bad data from cgroups about memory limits, fixing erroneous out-of-memory caused by Docker. (#113)
- Out-of-memory detection should work in many more cases than before. (#96)
- Fil now supports Python 3.9. (#83)
- Fil no longer uses a vast amount of memory to generate the SVG report. (#102)
- Fixed bug that would cause crashes when thread-local storage destructors allocated or freed memory. Thanks to @winash12 for reporting the issue. (#99)
- Allocations in C threads are now considered allocations by the Python code that launched the thread, to help give some sense of where they came from. (#72)
- It’s now possible to run Fil by doing python -m filprofiler in addition to running it as fil-profile.
- Small performance improvements reducing overhead of malloc()/free() tracking. (#88 and #95)
- When running in Jupyter, NumPy/BLOSC/etc. thread pools are only limited to one thread when actually running a Fil profile. This means Fil’s Jupyter kernel is even closer to running the way a normal Python 3 kernel would. (#72)
- When tracking large numbers of allocations, Fil now runs much faster, and has much less memory overhead. (#65)
- Added support for tracking allocations done using
- Fixed edge case for large allocations, where wrong number of bytes was recorded as freed. (#66)
- Switched to using jemalloc on Linux, which should deal better both in terms of memory usage and speed with many small allocations. It also simplifies the code. (#42)
- Further reduced memory overhead for tracking objects, at the cost of slightly lower resolution when tracking allocations >2GB. Large allocations >2GB will only be accurate to a resolution of ~1MB, i.e. they might be off by approximately 0.05%. (#47)
- Significantly reduced the memory used to generate the SVG report. (#38)
- Reduced memory overhead of Fil somewhat, specifically when tracking large numbers of small allocations. (#43)
- Fix bug that prevented Fil from running on macOS Mojave and older. (#36)
- C++ allocations get tracked more reliably, especially on macOS. (#10)
- Validated that Fortran 90 allocations are tracked by Fil. (#11)
- Anonymous mmap()s are now tracked by Fil. (#29)
- macOS is now supported. (#15)
- fil-profile with no arguments now prints the help. (#21)
- Fil now helps debug out-of-memory crashes by dumping memory usage at the time of the crash to an SVG. This feature is experimental.
- Generating the report should run faster.
- Allocations from the realloc() allocation API are now tracked by Fil.
- Fix a bug that corrupted the SVGs.
- Hovering over a frame now shows the relevant details on top, where it’s visible.
- Command-line arguments after the script/module now work. To make it easier to implement, changed the code so you do fil-profile run script.py instead of fil-profile script.py.
- The flame graphs now include the actual code that was responsible for memory use.
Fil will track memory allocated by:
- Normal Python code.
- C code using malloc() and related allocation APIs.
- C++ code using new.
- Fortran 90 explicitly allocated memory (tested with gcc’s gfortran; let me know if other compilers don’t work).
Still not supported, but planned:
- File-backed mmap(). The semantics are somewhat different than normal allocations or anonymous mmap(), since the OS can swap it in or out from disk transparently, so supporting this will involve a different kind of resource usage and reporting.
- Other forms of shared memory, need to investigate if any of them allow sufficient allocation.
- mmap()s created via /dev/zero (not common, since it’s not cross-platform, e.g. macOS doesn’t support this).
- memfd_create(), a Linux-only mechanism for creating in-memory files.
- reallocarray(). These are all rarely used, as far as I can tell.
In general, Fil will track allocations in threads correctly.
First, if you start a thread via Python, running Python code, that thread will get its own callstack for tracking who is responsible for a memory allocation.
Second, if you start a C thread, the calling Python code is considered responsible for any memory allocations in that thread.
This works fine... except for thread pools. If you start a pool of threads that are not Python threads, the Python code that created those threads will be responsible for all allocations created during the thread pool’s lifetime. Fil therefore disables thread pools for a number of commonly-used libraries.
Fil can’t know which Python code was responsible for allocations in C threads.
Therefore, in order to ensure correct memory tracking, Fil disables thread pools in BLAS (used by NumPy), BLOSC (used e.g. by Zarr), OpenMP, and numexpr.
They are all set to use 1 thread, so calls should run in the calling Python thread and everything should be tracked correctly.
This has some costs:
- This can reduce performance in some cases, since you’re doing computation with one CPU instead of many.
- Insofar as these libraries allocate memory proportional to number of threads, the measured memory usage might be wrong.
Fil does this for the whole program when using fil-profile run.
When using the Jupyter kernel, anything run with the %%filprofile magic will have thread pools disabled, but other code should run normally.
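The effect is similar to what you’d get by pinning the libraries’ documented thread-count environment variables to one thread yourself before importing them. How Fil does this internally is an implementation detail; this sketch just illustrates the equivalent manual configuration:

```python
import os

# These must be set before NumPy, Zarr, numexpr, etc. are imported,
# since the libraries read them at startup.
for var in ("OMP_NUM_THREADS",       # OpenMP
            "OPENBLAS_NUM_THREADS",  # OpenBLAS (NumPy's BLAS on many installs)
            "MKL_NUM_THREADS",       # Intel MKL (NumPy's BLAS on others)
            "BLOSC_NTHREADS",        # BLOSC (used by Zarr)
            "NUMEXPR_NUM_THREADS"):  # numexpr
    os.environ[var] = "1"
```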
While every single allocation is tracked, for performance reasons only the largest allocations are reported, with a minimum of 99% of allocated memory reported. The remaining <1% is highly unlikely to be relevant when trying to reduce usage; it’s effectively noise.
This is planned, but not yet implemented.
See the list in the page on what Fil tracks.
On Linux, Fil replaces the standard glibc allocator with jemalloc, though this is an implementation detail that may change in the future.
On all platforms, Fil will not work with custom allocators like
If you need help using Fil: