NVIDIA Low-End Card to Start the POC Test?


Hello team,

I would like to build an on-premises lab on CentOS. Please let us know which low-end card I can use for a POC with 2-3 TB of total storage. However, each query will only fetch records from 50-100 GB of that storage.


Hi Sumit -

As far as “low-end” cards go, any of the NVIDIA GeForce cards will work with MapD. However, the more important thing to think about is how much data you want to keep ‘hot’ in GPU RAM (this data size is calculated after serialization into the MapD format). Even with a 1080 Ti’s 11 GB of RAM, you likely won’t have enough capacity to keep your full dataset in GPU RAM, which is where a lot of MapD’s performance comes from.

Is there a reason you don’t want to use the cloud for your POC? Paying by the hour, you’ll be able to work with a much larger set of GPUs and really evaluate how MapD performs on a dataset of your size.



We would like to test this product for the telecom domain, as we are an ISV. Due to government regulations, this industry prefers to host data on-premises. That is why we would prefer to test the product on a dedicated on-premises solution.




Makes perfect sense, Sumit, about wanting to keep data on-premises.

As a frame of reference for you during your POC, we have a telco demo on our website:

We are running this on a cloud instance with the following hardware:

24 vCPU
224 GB RAM
1440 GB SSD
4 Tesla K80 GPUs

So if you’re not seeing comparable performance to this demo, adding more hardware might be necessary. If you have any other questions during your POC, please feel free to contact me directly via this message board, or post your questions on the forum and we’d be glad to help!


Hi Randy,
Thank you for your kind feedback. I have asked my vendor for prices and delivery dates for the cards suggested above.

In the meantime, we will install the CPU version quickly and start verifying the functionality of the MapD GPU database.

As per my basic understanding, after adding the GPU card the functionality remains the same, and the GPU adds a performance benefit.

Bangalore, India


When you test on CPU, I suggest choosing the right fragment size on big tables so that query execution uses all of your CPU cores.
The default fragment size is 32 million records, and each CPU core works on one fragment. So with the default, a table of 320M records will use just 10 cores, a table of 640M records 20 cores, and so on.
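As a sketch of the arithmetic above (the table and column names are hypothetical), the fragment size is set per table in the WITH clause of CREATE TABLE:

```sql
-- Hypothetical 640M-row call-records table. With the default 32M fragment
-- size, 640M / 32M = 20 fragments, so at most 20 cores run in parallel.
-- Halving the fragment size to 16M yields 40 fragments, engaging 40 cores.
CREATE TABLE call_records (
  caller_id    BIGINT,
  call_start   TIMESTAMP,
  duration_sec INTEGER
) WITH (fragment_size = 16000000);
```

Note that smaller fragments add per-fragment overhead, so there is little point going far below the row count divided by your core count.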
When choosing a GPU, aim for the one with the largest possible amount of RAM: a Tesla P40 or a Quadro P6000 with 24 GB of RAM (same performance), or a Quadro P5000 with 16 GB. Quadro GPUs cost less than Teslas, but they are aimed at workstations, so their cooling solutions are less efficient when mounted in rackable servers.
Pay attention to the data types and encodings on your columns; the right fixed encoding and data type can lead to significant memory savings.
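As a sketch of the point above (table and column names are hypothetical), MapD's fixed-width and dictionary encodings shrink columns that don't need the full width of their declared type:

```sql
-- Hypothetical CDR table using fixed encodings to save memory:
CREATE TABLE cdr (
  cell_id    INTEGER ENCODING FIXED(16),   -- values fit in 16 bits: half the space
  call_start TIMESTAMP ENCODING FIXED(32), -- 4 bytes per row instead of 8
  carrier    TEXT ENCODING DICT(8)         -- few distinct values: 1 byte per row
);
```

Since hot data has to fit in GPU RAM to get the full performance benefit, these per-column savings translate directly into how much of your dataset can stay on the GPU.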