Inserted BIGINT values incorrect


#1

Hi,

I’m seeing some unexpected behaviour around BIGINTs. [I’m running MapD v3.2.2 on Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1030-aws x86_64).]

If I create a table as follows:

CREATE TABLE some_ints( intval BIGINT );

then insert a row as follows:

INSERT INTO some_ints VALUES(4489949424181064562);

and then select the contents of that table, I get a single row with the value 4489949424181064700.

As far as I’m aware the value inserted should be well inside the max range of BIGINT. I’m assuming that BIGINT will be in line with Calcite, in the range -9223372036854775808 to 9223372036854775807.

Is there something I’m missing?

I also see the same behaviour if I stream insert the data.

Thanks, Owen


#3

Tried it (I am running Ubuntu 16.04 and MapD 3.2.2):

mapdql> CREATE TABLE some_ints( intval BIGINT );
mapdql> INSERT INTO some_ints VALUES(4489949424181064562);
Execution time: 63 ms, Total time: 102 ms
mapdql> select * from some_ints;
4489949424181064562
1 rows returned.
Execution time: 122 ms, Total time: 123 ms
mapdql> select intval from some_ints;
4489949424181064562
1 rows returned.
Execution time: 21 ms, Total time: 21 ms
mapdql> select intval from some_ints where intval=4489949424181064562;
4489949424181064562
1 rows returned.
Execution time: 120 ms, Total time: 121 ms


#4

Thanks, that’s interesting. Maybe it’s an Immerse issue. I’ll try mapdql, see if I can repro.


#5

Yep, so with mapdql the values appear to be correct. So it must be an Immerse issue.

Thanks again.


#6

Ouch, I hadn’t tried Immerse; with Immerse I have the same issue.

Tried inserting smaller values:

mapdql> INSERT INTO some_ints VALUES(448994942418106456);
Execution time: 4 ms, Total time: 30 ms
mapdql> INSERT INTO some_ints VALUES(44899494241810645);
Execution time: 4 ms, Total time: 33 ms
mapdql> INSERT INTO some_ints VALUES(4489949424181064);
Execution time: 5 ms, Total time: 34 ms

and with Immerse the results are:
select * from some_ints
intval
4489949424181064700 (original)
448994942418106430
44899494241810650
4489949424181064
Success: 4 rows in 77ms

There is definitely an issue in the Immerse client.
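For what it’s worth, the rounded values above are exactly what you get if the inserted literals pass through an IEEE-754 double on the way to the display, which is what JavaScript’s Number type does. A minimal sketch (plain Node.js assumed) reproducing the table from this post:

```javascript
// The values Immerse displayed match parsing each 64-bit integer
// literal as an IEEE-754 double, i.e. JavaScript's Number type.
const inserted = [
  "4489949424181064562",
  "448994942418106456",
  "44899494241810645",
  "4489949424181064",
];

for (const s of inserted) {
  // Number() rounds to the nearest representable double; toString()
  // then prints the shortest decimal string that round-trips.
  console.log(s, "->", Number(s).toString());
}
// 4489949424181064562 -> 4489949424181064700
// 448994942418106456  -> 448994942418106430
// 44899494241810645   -> 44899494241810650
// 4489949424181064    -> 4489949424181064   (below 2^53, so exact)
```

Note the last value survives unchanged: it is below 2^53, the largest magnitude at which doubles can represent every integer exactly.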


#7

Yeah, I raised an issue on the Github repo.

No biggie, I guess it’s to be expected as the SQL Editor is still in beta.


#8

I tried with Immerse objects and the problem is there too.


#9

My guess is this is happening because in JavaScript numeric values are automatically treated as doubles, which do not have a full 64 bits of integer precision. Will discuss with the team to see if there are any ways in which we could preserve the full precision on the frontend.
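To illustrate the boundary involved, and one possible way to keep full precision: this is a sketch only, assuming the server could deliver the value to the client as a string rather than a JSON number (BigInt is a later ECMAScript addition, so it may not be available in every client runtime):

```javascript
// Doubles store integers exactly only up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// Parsing the literal into a Number silently rounds it:
console.log(Number("4489949424181064562")); // 4489949424181064700

// If the value arrives as a string, BigInt preserves all 64 bits:
const exact = BigInt("4489949424181064562");
console.log(exact.toString()); // "4489949424181064562"
```

The key point is that the precision is lost the moment the literal becomes a Number, so any fix has to keep the value in string (or BigInt) form end to end.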