Why is the SLAB size 2GB?


#1

I ran a query on MapD against large tables (over 20GB each) and got a
“not enough GPU memory” exception, even though the query was not complex
and my GPU card has about 11GB of global memory, which should be enough
for the query. I then found in the open-source code that the default
SLAB size is 2GB, and that is what triggered the exception. After
changing the size, the query ran successfully. So why does MapD set the
slab size to 2GB rather than to the total size of global memory? The
latter choice would avoid some unnecessary exceptions and make the
system more highly available.
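
To make this concrete, here is a minimal CUDA sketch of what I mean (the 3GB request is just an illustrative size above the 2GB default, not my query’s actual footprint): the device reports plenty of free memory and a raw allocation of that size succeeds, yet an allocator that caps every buffer at one 2GB slab would reject the same request.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo(&free_bytes, &total_bytes);
    std::printf("free: %.1f GB / total: %.1f GB\n",
                free_bytes / 1e9, total_bytes / 1e9);

    // A raw 3GB allocation succeeds on an 11GB card...
    void* buf = nullptr;
    cudaError_t err = cudaMalloc(&buf, 3ULL << 30);
    std::printf("3GB cudaMalloc: %s\n", cudaGetErrorString(err));
    if (err == cudaSuccess) cudaFree(buf);

    // ...but the same 3GB request fails inside MapD, because no single
    // buffer may exceed the 2GB slab it has to live in.
    return 0;
}
```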


#2

Hi,

MapD will allocate multiple 2GB (or smaller) slabs, up to the maximum GPU memory it can access.
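
Roughly like this; a simplified sketch of the idea only, not the actual buffer manager code, and `kSlabSize` is just the illustrative 2GB default:

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t kSlabSize = 2ULL << 30;  // illustrative 2GB slab
    std::vector<void*> slabs;

    // Reserve 2GB slabs one at a time until the device cannot
    // satisfy another one.
    for (;;) {
        void* slab = nullptr;
        if (cudaMalloc(&slab, kSlabSize) != cudaSuccess) {
            cudaGetLastError();  // clear the expected out-of-memory error
            break;
        }
        slabs.push_back(slab);
    }
    std::printf("reserved %zu slabs (~%zu GB)\n",
                slabs.size(), 2 * slabs.size());

    // Individual buffers are then carved out of these slabs, which is
    // why no single buffer can be larger than one slab.
    for (void* s : slabs) cudaFree(s);
    return 0;
}
```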

Historically there were issues with allocating larger single chunks of memory on GPUs, so we needed to use a series of smaller managed allocations.
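
If you want to see what your own card and driver will accept, you can probe the ceiling on a single allocation with something like this sketch (the binary search and its 1MB resolution are just an illustration, not anything MapD does internally):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo(&free_bytes, &total_bytes);

    // Binary-search the largest single cudaMalloc that succeeds,
    // stopping once the search window is under 1MB.
    size_t lo = 0, hi = free_bytes;
    while (hi - lo > (1ULL << 20)) {
        size_t mid = lo + (hi - lo) / 2;
        void* p = nullptr;
        if (cudaMalloc(&p, mid) == cudaSuccess) {
            cudaFree(p);
            lo = mid;  // mid bytes fit; try larger
        } else {
            cudaGetLastError();  // clear the error and try smaller
            hi = mid;
        }
    }
    std::printf("largest single allocation: %.2f GB\n", lo / 1e9);
    return 0;
}
```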

We are reviewing all the combinations of platforms and versions we support to see whether we can move to larger sizes, or at least allow a larger default to be configured.

regards