Use of the JDBC Driver in a connection pool


#1

Does anyone have any working examples of using the JDBC driver in a connection pool? I’m trying to connect to MapD in a simple Spring app using the Hikari connection pool. When I attempt to run a query via the connection pool, I see the pool start up…
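For reference, here's roughly the kind of setup I mean — a minimal sketch, not my exact app. The driver class name comes from the MapD JDBC jar; the JDBC URL format, host, port, database, and credentials below are placeholders/assumptions:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class MapDPoolExample {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        // Driver class as seen in the stack trace below; URL format is an assumption
        config.setDriverClassName("com.mapd.jdbc.MapDDriver");
        config.setJdbcUrl("jdbc:mapd:localhost:9091:mapd"); // placeholder host/port/db
        config.setUsername("mapd");                          // placeholder credentials
        config.setPassword("password");
        config.setPoolName("MapD");

        // Run a trivial query through the pool
        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Received " + rs.getInt(1));
            }
        }
    }
}
```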

2017-10-15 16:32:09.042  INFO 5 --- [tp1357563986-11] com.zaxxer.hikari.HikariDataSource       : MapD - Starting...
Received 1
2017-10-15 16:32:09.252  INFO 5 --- [tp1357563986-11] com.zaxxer.hikari.pool.PoolBase          : MapD - Driver does not support get/set network timeout for connections. (Not supported yet, line:481 class:com.mapd.jdbc.MapDConnection method:getNetworkTimeout)
Received 2
2017-10-15 16:32:09.305  INFO 5 --- [tp1357563986-11] com.zaxxer.hikari.HikariDataSource       : MapD - Start completed.
Received 1
Received 1
Received 1
Received 1
Received 1
Received 1
Received 1

…and it just hangs there. Eventually after 15 minutes (!) the query is run and the request returns. From then on, queries work immediately every time. Very weird.

When I take a thread dump of the server when it’s in this hung state, I see this thread in RUNNABLE:

2017-10-15 03:05:28
Full thread dump OpenJDK 64-Bit Server VM (25.131-b11 mixed mode):

"MapD connection adder" #26 daemon prio=5 os_prio=0 tid=0x000055a1b3277000 nid=0x3f runnable [0x00007fe9fa4bb000]
   java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        - locked <0x00000000f8211538> (a java.io.BufferedInputStream)
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:424)
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:321)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:225)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
        at com.mapd.thrift.server.MapD$Client.recv_connect(MapD.java:255)
        at com.mapd.thrift.server.MapD$Client.connect(MapD.java:240)
        at com.mapd.jdbc.MapDConnection.<init>(MapDConnection.java:104)
        at com.mapd.jdbc.MapDDriver.connect(MapDDriver.java:52)
        at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:112)
        at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:118)
        at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:376)
        at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:205)
        at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:448)
        at com.zaxxer.hikari.pool.HikariPool.access$200(HikariPool.java:72)
        at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:635)
        at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:621)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

Seems like it’s stuck reading from the server in the Thrift codepath. But I don’t have the Thrift source to see what exactly it’s doing.

Any ideas?

Thanks!
BP


#2

Hi,

MapD by default only has a small number of connections available (this is about to change in the next release). I suspect that in your case the connections in the connection pool are exhausting the actual connections available on the server.

Please see Unable to connect after a few hours for additional discussion.

regards


#3

Yup, that seems to be what it was. The pool had a default size of 10. Tailing the MapD logs, the hang started right after the 8th connection was established. Setting the pool size to fewer than 8 connections makes the problem go away. Thanks!
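For anyone else who hits this: the fix is just to cap Hikari's pool below the server's session limit. A minimal sketch (the value 6 is arbitrary — anything under the limit works):

```java
import com.zaxxer.hikari.HikariConfig;

public class PoolSizeFix {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        // Hikari's default maximumPoolSize is 10; the server hung after the
        // 8th connection, so keep the pool under that limit.
        config.setMaximumPoolSize(6);
        System.out.println(config.getMaximumPoolSize()); // prints 6
    }
}
```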

Do you have an ETA on the MapD release that fixes this constraint? Also, what will be the maximum number of connections a MapD instance will be able to handle after this release?

Thanks again.
BP


#4

Hi,

Currently, if you are not doing backend rendering, you can set it to whatever number you want.

The current constraint is related only to backend rendering.

regards


#5

Got it. Thanks again.