Can immerse produce GeoTiff?


#1

I have a large geographic, scientific data set stored in MapD. The query result sets from MapD are just (TIME/LAT/LON) plus a number of variables aggregated from the original data, e.g. AVG(TEMP), MIN(speed), etc.

I’d like to be able to render this result in a browser on some open map API, e.g. the ArcGIS JavaScript API or Leaflet.

I am planning to do some in-browser WebGL Shader rendering for the scientific data queries.

What I am missing is a good, efficient way to get the large data cubes back to the browser. In the browser I eventually need to convert them to textures, which is CPU-based and slow. Ideally, I could get textures (GeoTiff?) directly from MapD or Immerse.

Is anything like this available in MapD or Immerse?

Any other suggestions you have for this are very welcome.

Thanks,

Garth


#2

I know this isn’t the most helpful response, but I think you could code this yourself using the Vega API.

Here are some docs:
https://www.mapd.com/docs/latest/mapd-core-guide/vega/


#3

At this point we do not support GeoTiff, though we may consider adding support for it if there’s enough interest.

However, based on your description of your query results, it’s very likely our backend rendering engine supports the kind of rendering you hope to do in WebGL. As the previous post suggests, look at: https://www.mapd.com/docs/latest/mapd-core-guide/vega/

Immerse uses Mapbox for browser-based map rendering and composites backend-rendered PNGs using Mapbox’s APIs. The pointmap, heatmap, and scatterplot charts all employ backend rendering using Vega descriptions of the visualization. Perhaps one of those charts will suffice; if not, and you intend to write your own custom app, the backend rendering API using Vega will likely give you what you need.

Note that backend rendering is only available in our community or enterprise editions.

If you can describe in more detail what you were intending to render with WebGL, we can give you more insight.

Thanks,

Chris


#4

Hi, thanks for the feedback. I drilled into the Vega API. Nice. It does a lot of nice rendering on the server. However, I am looking for client-side animation, something similar to this:

https://mapbox.github.io/webgl-wind/demo/

So, what I had planned to try was to grab the flow field from my MapD database. I need it in a texture for WebGL, but I thought I could translate GeoTiff or PNG on the client side. The key is that the transfer format needs to be lossless, i.e. I need the data in the received 2D array to be identical to the data fetched from MapD, with no compression artifacts. JSON would work, but the results are too bloated with these large arrays. In fact, the data is 4D, with the time slices and the elevations included.

I would be grateful for any suggestions you have.

Thanks,

Garth


#5

This is an interesting use case, one that we should consider supporting better in the future. But at present your client-side simulation is the best way to achieve this effect.

We may be able to advise you on how best to grab the vector field data though. It sounds as though you’re trying to grab all the data aggregated in different time slices, is this correct? Doing so would ultimately equate to a 3D texture where the Z dimension of the texture is time, right? If we can find a solution that can generate your vector field in a texture from the server, you may be able to request this texture on demand in the client and not require any caching, effectively working with one time slice at a time.
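
As a sketch of that 3D-texture idea, assuming each time slice arrives as a flat, row-major array of width × height floats (the helper name and slice format here are illustrative, not part of MapD):

```javascript
// Stack per-time-slice 2D arrays into one flat buffer laid out as a 3D
// texture, where Z is time. The index math for texel (x, y, z) is:
//   z * width * height + y * width + x
function stackTimeSlices(slices, width, height) {
  const depth = slices.length;
  const volume = new Float32Array(width * height * depth);
  slices.forEach((slice, z) => {
    // Each slice is a flat row-major Float32Array of length width * height.
    volume.set(slice, z * width * height);
  });
  return volume;
}

// Example: two 2x2 time slices.
const t0 = Float32Array.from([1, 2, 3, 4]);
const t1 = Float32Array.from([5, 6, 7, 8]);
const vol = stackTimeSlices([t0, t1], 2, 2);
// Texel (x=0, y=1) of slice z=1 sits at index 1*4 + 1*2 + 0 = 6, value 7.
```

This is the same layout a WebGL2 `TEXTURE_3D` upload expects, so requesting one slice at a time and appending it is straightforward.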

The problem with the backend rendering approach at present is that, since we only support PNGs, you’re stuck with 8 bits/channel of precision. We can be creative though and pack vector data into these channels by doing some math in the query.

For example, if you can get away with only a 2D vector field with 16-bits/channel, you can generate a PNG from the backend using the following vega template:

const imgwidth = <img width>;
const imgheight = <img height>;
const mercxmin = <min mercator x coord>;
const mercxmax = <max mercator x coord>;
const mercymin = <min mercator y coord>;
const mercymax = <max mercator y coord>;
const rectbinwidth = <width of the bin in pixels, use 1 if you want a bin/pixel>;
const rectbinheight = <height of the bin in pixels, use 1 if you want a bin/pixel>;
const query = `SELECT rect_pixel_bin_x(conv_4326_900913_x(lon), ${mercxmin}, ${mercxmax}, ${rectbinwidth}, 0, ${imgwidth}) as x, rect_pixel_bin_y(conv_4326_900913_y(lat), ${mercymin}, ${mercymax}, ${rectbinheight}, 0, ${imgheight}) as y, CAST(AVG(TEMP) AS BIGINT) * 65536 + CAST(MIN(speed) AS BIGINT) as packed_vector_data FROM <data table> WHERE (conv_4326_900913_x(lon) >= ${mercxmin} AND conv_4326_900913_x(lon) <= ${mercxmax}) AND (conv_4326_900913_y(lat) >= ${mercymin} AND conv_4326_900913_y(lat) <= ${mercymax}) AND (<add your time slice filter>) GROUP BY x, y`
const vega = {
  "width": imgwidth,
  "height": imgheight,
  "data": [
    {
      "name": "texture_query",
      "sql": query
    }
  ],
  "marks": [
    {
      "type": "symbol",
      "from": {
        "data": "texture_query"
      },
      "properties": {
        "shape": "square",
        "xc": {
          "field": "x"
        },
        "yc": {
          "field": "y"
        },
        "width": rectbinwidth,
        "height": rectbinheight,
        "fillColor": {
          "field": "packed_vector_data"
        }
      }
    }
  ]
}

This would generate an image that packs a 2D vector into the 4 channels of the PNG at 16-bit precision. I’m not sure how you’re generating the vector field data though, so I just made a quick example putting AVG(temp) and MIN(speed) into the RG and BA channels of a texture respectively. The part of the query that packs the 2 values into a 32-bit integer, which is subsequently unpacked as a color during the render, is CAST(AVG(TEMP) AS BIGINT) * 65536 + CAST(MIN(speed) AS BIGINT) as packed_vector_data. This code assumes that AVG(temp) and MIN(speed) will be in the range 0-65535. However, you’d want to maximize the use of the 16 bits at your disposal by normalizing AVG(temp) and MIN(speed). You could do this by running a prequery first, such as this:
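
To make the bit layout concrete, here is the same packing arithmetic in plain JavaScript. The RGBA byte order shown assumes the renderer writes the 32-bit value big-endian into the channels; that’s an assumption worth verifying against an actual rendered PNG:

```javascript
// Pack two values, each assumed to already be integers in 0..65535, into one
// 32-bit unsigned integer exactly as the SQL expression does:
//   packed = hi * 65536 + lo
function pack16x2(hi, lo) {
  return hi * 65536 + lo; // hi occupies bits 16-31, lo occupies bits 0-15
}

// Split the 32-bit value into 4 bytes, big-endian, as RGBA channels
// (assumed channel order, verify against a real render):
function toRGBA(packed) {
  return [
    (packed >>> 24) & 0xff, // R: high byte of hi
    (packed >>> 16) & 0xff, // G: low byte of hi
    (packed >>> 8) & 0xff,  // B: high byte of lo
    packed & 0xff,          // A: low byte of lo
  ];
}

const packed = pack16x2(513, 7); // e.g. AVG(TEMP)=513, MIN(speed)=7
// 513 * 65536 + 7 = 33619975, which splits into bytes [2, 1, 0, 7]
```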

SELECT MAX(c.avgtemp) as maxavgtemp, MAX(c.minspeed) as maxminspeed FROM (SELECT rect_pixel_bin_x(conv_4326_900913_x(lon), ${mercxmin}, ${mercxmax}, ${rectbinwidth}, 0, ${imgwidth}) as x, rect_pixel_bin_y(conv_4326_900913_y(lat), ${mercymin}, ${mercymax}, ${rectbinheight}, 0, ${imgheight}) as y, AVG(temp) as avgtemp, MIN(speed) as minspeed FROM <data table> WHERE (conv_4326_900913_x(lon) >= ${mercxmin} AND conv_4326_900913_x(lon) <= ${mercxmax}) AND (conv_4326_900913_y(lat) >= ${mercymin} AND conv_4326_900913_y(lat) <= ${mercymax}) AND (<time slice filter>) GROUP BY x, y) as c

Then you can take this prequery data (maxavgtemp & maxminspeed) to normalize the values in the texture query and maximize use of the 16 bits at your disposal. The packing of the channels in the query could now look like this:

CAST((AVG(temp) / ${maxavgtemp}) * 65535.0 AS BIGINT) * 65536 + CAST((MIN(speed) / ${maxminspeed}) * 65535.0 AS BIGINT) as packed_vector_data

This would produce a texture with much less quality loss. You obviously would need to properly unpack this texture in the client to build out your vectors and advect your particles.
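
Purely as a sketch of that client-side unpacking step, assuming the byte order matches the packing described above (the parameter names maxA and maxB, standing in for the normalization maxima from the prequery, are illustrative):

```javascript
// Recover the two normalized values from 4 RGBA bytes at offset i.
// rgba is a Uint8ClampedArray like the one canvas getImageData() returns;
// maxA / maxB are the maxima from the prequery used to normalize.
function unpackTexel(rgba, i, maxA, maxB) {
  const hi = rgba[i] * 256 + rgba[i + 1];     // 16-bit value from R,G
  const lo = rgba[i + 2] * 256 + rgba[i + 3]; // 16-bit value from B,A
  return {
    a: (hi / 65535) * maxA, // e.g. the AVG(temp) component
    b: (lo / 65535) * maxB, // e.g. the MIN(speed) component
  };
}

const texel = unpackTexel(Uint8ClampedArray.from([2, 1, 0, 7]), 0, 100, 50);
// texel.a ≈ (513 / 65535) * 100, texel.b ≈ (7 / 65535) * 50
```

One caveat: reading pixels back through a 2D canvas with getImageData can round-trip through premultiplied alpha and corrupt the low bytes wherever alpha < 255, so it’s safer to decode the PNG directly or upload it as a WebGL texture with premultiplication disabled.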

If you need more precision, or more channels for data, you can modify this approach to run multiple backend renders to get multiple textures.

Or, rather than using backend rendering, you can just run queries in the client similar to:

SELECT rect_pixel_bin_x(conv_4326_900913_x(lon), ${mercxmin}, ${mercxmax}, ${rectbinwidth}, 0, ${imgwidth}) as x, rect_pixel_bin_y(conv_4326_900913_y(lat), ${mercymin}, ${mercymax}, ${rectbinheight}, 0, ${imgheight}) as y, CAST(AVG(TEMP) AS BIGINT) * 65536 + CAST(MIN(speed) AS BIGINT) as packed_vector_data FROM <data table> WHERE (conv_4326_900913_x(lon) >= ${mercxmin} AND conv_4326_900913_x(lon) <= ${mercxmax}) AND (conv_4326_900913_y(lat) >= ${mercymin} AND conv_4326_900913_y(lat) <= ${mercymax}) AND (<add your time slice filter>) GROUP BY x, y

This query essentially aggregates your data per-pixel. This would equate to your texture data, and should drastically reduce the amount of data passed to the client. You can create your textures on the client from this data yourself rather than going through the backend rendering piece if you’re having precision-related issues.
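
As a sketch of that client-side route, assuming each result row arrives as an object with x, y, and packed_vector_data fields (the row shape and helper name are illustrative, not a MapD client API):

```javascript
// Turn per-pixel query rows into a flat RGBA buffer suitable for
// gl.texImage2D. Pixels the query didn't return (no data in that bin)
// stay zero-filled.
function rowsToTexture(rows, width, height) {
  const data = new Uint8Array(width * height * 4); // RGBA, zero-filled
  for (const { x, y, packed_vector_data: p } of rows) {
    const i = (y * width + x) * 4;
    data[i] = (p >>> 24) & 0xff;
    data[i + 1] = (p >>> 16) & 0xff;
    data[i + 2] = (p >>> 8) & 0xff;
    data[i + 3] = p & 0xff;
  }
  return data;
}

// In WebGL you would then upload it with something like:
// gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
//               gl.RGBA, gl.UNSIGNED_BYTE, data);
```

Since you control both the packing query and this unpacking, you could also skip the 8-bit split entirely and keep the values in a Float32Array if your target WebGL version supports float textures.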

Hopefully that gives you some clues. Sorry if some of this is not properly documented yet (such as the rect_pixel_bin_x/y functions), but if you have any questions, feel free to ask.

Chris


#6

Ahhh… Wow! This is very helpful. It will take me a while to dig into, though.

I’ll come back to this thread when I’ve had time to digest the ideas.

Thanks very much.