How do you handle a database table larger than 30 GB?


#1

Is it possible to process a table that large?

Can you share an example of how you handle this?


#2

Hi,

Yes, it is possible to have a database over 30 GB.

If you look at some of our example demos, Taxis and Ships, you will see very large datasets being queried at interactive speeds via Immerse. Those tables are hundreds of GB.

Regards


#3

The P100 has 12 GB of GPU memory.

Do you recommend distributed processing?


#4

Only the (encoded) columns needed for filtering, processing, and grouping are loaded onto the GPU, so the database needs to fit only the columns a query actually uses into GPU memory.
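To make that concrete, here is a rough back-of-the-envelope sketch in plain Python (not MapD code; the column names, encodings, and row count are hypothetical) of the GPU memory a query's columns would need:

```python
# Rough estimate of GPU memory needed by a query: only the columns
# referenced in filters, projections, and GROUP BY are loaded, at
# their encoded width. Column names and widths here are hypothetical.
encoded_width_bytes = {
    "pickup_time": 4,   # e.g. TIMESTAMP ENCODING FIXED(32)
    "fare":        4,   # e.g. FLOAT
    "vendor":      1,   # e.g. TEXT ENCODING DICT(8)
}

def query_gpu_bytes(row_count, columns_used):
    """Bytes of GPU memory for the encoded columns a query touches."""
    return row_count * sum(encoded_width_bytes[c] for c in columns_used)

rows = 1_000_000_000  # a 1-billion-row table
needed = query_gpu_bytes(rows, ["pickup_time", "fare", "vendor"])
print(f"~{needed / 2**30:.1f} GiB")  # ~8.4 GiB
```

So even on a 1-billion-row table, a query touching three narrow encoded columns fits comfortably within a single 12 GB card.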

There are also parameters that let the software split the query into subsets of rows (with a performance penalty) or run the query on CPU (the software is quite fast even when using the CPU only).
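For reference, the CPU fallback is controlled through the server configuration. A minimal mapd.conf sketch, assuming the flag names I recall from recent builds (verify against your version's documentation):

```
# mapd.conf (sketch; flag names as I recall them, check your version)
port = 9091
data = "/var/lib/mapd/data"
allow-cpu-retry = true   # let queries that don't fit in GPU memory retry on CPU
# cpu-only = true        # uncomment to run the server entirely on CPU
```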


#5

Just wanted to point out (in case it's not obvious) that MapD scales to multiple GPUs in a server, and those capabilities are in the open source and community editions. You can then scale further by going multi-node, which is an Enterprise edition feature. Note that many of our customers easily handle 3-10 billion records on a single node, depending on the width of the table and query patterns.
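As a purely hypothetical sizing exercise (my numbers, not official guidance): with eight 12 GB GPUs in one server, a query touching about 10 bytes of encoded columns per row has headroom for roughly ten billion rows:

```python
# Hypothetical single-node sizing: aggregate GPU memory across cards
# versus the encoded bytes per row a query actually touches.
gpus = 8
mem_per_gpu = 12 * 2**30           # e.g. eight 12 GB P100s
bytes_per_row = 10                 # encoded columns used by the query
rows = gpus * mem_per_gpu // bytes_per_row
print(f"~{rows / 1e9:.1f} billion rows")  # ~10.3 billion
```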