Jan 16, 2015

Towards Out-of-core ND-Arrays -- Spilling to Disk

This work is supported by Continuum Analytics and the XDATA Program as part of the Blaze Project.

tl;dr We implement a dictionary that spills to disk when we run out of memory. We connect this to our scheduler.

Introduction

This is the fifth in a sequence of posts constructing an out-of-core nd-array using NumPy, Blaze, and dask. You can view these posts here:

  1. Simple task scheduling
  2. Frontend usability
  3. A multi-threaded scheduler
  4. Matrix Multiply Benchmark

We now present chest, a dict type that spills to disk when we run out of memory. We show how it prevents large computations from flooding memory.

Intermediate Data

(Figure: a case where our scheduling algorithm fails to avoid intermediates)

If you read the post on scheduling you may recall our goal to minimize intermediate storage during a multi-worker computation. The figure above shows a trace of our scheduler as it traverses a task dependency graph. We want to compute the entire graph quickly while keeping only a small amount of data in memory at once.

Sometimes we fail and our scheduler stores many large intermediate results. In these cases we want to spill excess intermediate data to disk rather than flooding local memory.

Chest

Chest is a dict-like object that writes data to disk once it runs out of memory.

>>> from chest import Chest
>>> d = Chest(available_memory=1e9) # Only use a GigaByte

It satisfies the MutableMapping interface so it looks and feels exactly like a dict. Below we show an example using a chest with only enough memory to store one Python integer.

>>> d = Chest(available_memory=24) # enough space for one Python integer

>>> d['one'] = 1
>>> d['two'] = 2
>>> d['three'] = 3
>>> d['three']
3
>>> d.keys()
['one', 'two', 'three']

We keep some data in memory

>>> d.inmem
{'three': 3}

While the rest lives on disk

>>> d.path
'/tmp/tmpb5ouAb.chest'
>>> os.listdir(d.path)
['one', 'two']

By default we store data with pickle, but chest supports any protocol with the dump/load interface (pickle, json, cbor, joblib, ...).
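For example, a json-backed chest might look like the following sketch; the dump=, load=, and mode= keyword names here are assumptions based on that dump/load interface:

>>> import json
>>> c = Chest(dump=json.dump, load=json.load, mode='t')  # keyword names assumed
>>> c['numbers'] = [1, 2, 3]
>>> c['numbers']
[1, 2, 3]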

A quick point about pickle. Frequent readers of my blog may know of my sadness at how this library serializes functions and the crippling effect on multiprocessing. That sadness does not extend to normal data. Pickle is fine for data if you use the protocol= keyword to pickle.dump correctly. Pickle isn't a good cross-language solution, but that doesn't matter in our application of temporarily storing numpy arrays on disk.
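As a quick illustration (not from the original post), that just means asking pickle for a binary protocol rather than the old text-based default:

>>> import pickle
>>> import numpy as np
>>> x = np.random.random((1000, 1000))
>>> with open('/tmp/x.pkl', 'wb') as f:
...     pickle.dump(x, f, protocol=pickle.HIGHEST_PROTOCOL)  # binary framing is fast for arrays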

Recent tweaks

In using chest alongside dask with any reasonable success I had to make the following improvements to the original implementation:

  1. A basic LRU mechanism to write only infrequently used data (see the sketch after this list)
  2. A policy to avoid writing the same data to disk twice if it hasn’t changed
  3. Thread safety
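
To make the first point concrete, here is a toy sketch of the LRU idea. This is illustrative only and not chest's actual implementation: we track access order and spill the least-recently-used entries once the in-memory set grows too large.

from collections import OrderedDict

class LRUSpill(object):
    """Toy LRU container: illustrative only, not chest's implementation."""
    def __init__(self, nkeep, write):
        self.nkeep = nkeep          # number of items to keep in memory
        self.write = write          # callback that persists (key, value) to disk
        self.inmem = OrderedDict()  # ordering tracks recency of use

    def __setitem__(self, key, value):
        self.inmem[key] = value
        self.inmem.move_to_end(key)                    # mark as most recently used
        while len(self.inmem) > self.nkeep:
            old, val = self.inmem.popitem(last=False)  # least recently used
            self.write(old, val)                       # spill to disk

    def __getitem__(self, key):
        value = self.inmem[key]        # (a full version would fall back to disk)
        self.inmem.move_to_end(key)    # touching a key refreshes its recency
        return value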

Now we can execute more dask workflows without risk of flooding memory

A = ...          # large array expressions (definitions elided)
B = ...
expr = A.T.dot(B) - B.mean(axis=0)

cache = Chest(available_memory=1e9)    # spill intermediates to disk past one gigabyte

into('myfile.hdf5::/result', expr, cache=cache)    # store the result in an HDF5 file
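
The same chest can also be handed to the scheduler directly. Below is a small sketch on a toy task graph; it assumes the threaded scheduler's get accepts a cache= keyword for storing intermediate results:

>>> from operator import add
>>> from dask.threaded import get
>>> dsk = {'x': 1, 'y': (add, 'x', 10), 'z': (add, 'y', 100)}
>>> get(dsk, 'z', cache=cache)   # intermediates such as 'y' land in the chest
111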

Now we incur only moderate slowdown when we schedule poorly and run into large quantities of intermediate data.

Conclusion

Chest is only useful when we fail to schedule well. We can still improve scheduling algorithms to avoid keeping data in memory but it's nice to have chest as a backup for when these algorithms fail. Resilience is reassuring.