I am sooo not worthy…

[Sarah Battersby has on previous occasions shown how outrageously smart she is, but she has blown my mind today. I am sooo not worthy…]

I’ve been asked on occasion if we can show point data from one data source (containing explicit lat/lon fields) along with custom shapes drawn from a shapefile. My response in the past has been that you can, if you convert the explicit lat/lon fields to a point shape so you can just change the GEOMETRY field on a dual axis map. The reason for this thinking was that I believed you needed to use the Latitude (generated) and Longitude (generated) fields for the GEOMETRY field, and that you couldn’t dual-axis these with other lat/lon fields.

Well, it turns out I was wrong. I found this out when reading Sarah’s blog post about dual axis mapping (and specifically the section on combining a shapefile with a CSV):

https://community.tableau.com/people/sarah.battersby.0/blog/2017/06/24/dual-axis-mapping-many-ways

There were two particular tricks she used in that section that caught my eye. The first was the “faux union” done by a full outer join on the criterion of 1=0. What a neat trick – I’d never thought of doing that before! But the real winner was how she used ZN(Latitude) and ZN(Longitude) to allow us to plot the GEOMETRY fields when the lat/lon values were NULL. I had no idea you could do that. Forest for the trees, or something like that…
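For anyone who wants to replicate it, here’s the gist as I understand it (the field names are placeholders for your own data): the faux union is a full outer join using join calculations of 1 and 0 on each side – a condition that never matches, so every row from both sources comes through with NULLs in the other source’s fields. Then, instead of plotting the CSV’s lat/lon fields directly, you plot calculated fields that wrap them in ZN():

// Plot Latitude – ZN() turns the NULLs on the shapefile rows into 0
ZN([Latitude])

// Plot Longitude
ZN([Longitude])

With no NULLs left in the plotted fields, the shapefile rows survive onto the viz and the GEOMETRY field can sit on the dual axis alongside the points.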

Anyhow – the short of it is that I now know that it’s relatively easy to create a viz like this:

[Image: Dots&Polys]

Thanks, Sarah! Genius!


Tableau Data Day Out – Recorded Sessions

As a follow-up to my last post, we now have all the presentation recordings from our Data Day Out event in Melbourne available for viewing.

Ideas that Should Die (32 min)
Scribbles and Lines (20 min)
Playing it Safe: Freedom and Governance in a Self-Service Analytics Environment (26 min)
More Than Dots on a Map (26 min)
So You Want to be a Data Scientist? (30 min)
Traditional BI versus Modern BI – Does it Matter? (29 min)
What’s the Story? (28 min)

Congratulations and thanks to all the presales team members who helped make this event such a success!


More than Dots on a Map

At the recent Tableau Data Day Out in Melbourne, I presented a session entitled “More than Dots on a Map”. The idea was to show how Tableau could allow users to do spatial analysis of their data – irrespective of whether it had implicit or explicit location fields. By this I mean to go beyond just plotting shapes and empower the user to interact in powerful ways, asking lots of different questions that require the data to be shaped and visualised differently.

Below is a recording I made of the session – enjoy!


More on spatial file rendering

Following on from my post yesterday, a colleague asked me: “So what happens when you force client-side rendering at the detailed level? Does it blow up with too much data?”

Interesting question. Let’s find out.

(For those of you who don’t want to read the rest of the post… the answer is “no – we’re too clever for that!”)

As outlined in the documentation here, you can adjust the complexity threshold at which we switch between client-side and server-side rendering, or you can force either mode by adding “:render=true” or “:render=false” to the URL. So here is my workbook from yesterday with the “:render=true” flag set at the raw level – notice that the map is being rendered in the browser:

Clearly it didn’t blow up. This is because Tableau Server does dynamic simplification of the polygons based on the zoom level of the viewer. We reduce the complexity of the data we deliver to the browser, removing data points that can’t be seen. Clever, huh? This means that for the above map we only ship 7.2MB of data to the browser (see the bootstrap package in the debug window) so there is no danger of the runtime exploding due to too many points.
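Tableau’s actual simplification algorithm isn’t something I can show here, but conceptually it’s like running a simplifier whose tolerance is tied to how much ground one screen pixel covers at the current zoom level – vertices closer together than a pixel can safely be dropped. A rough sketch of the idea (my own illustration in Python/Shapely, not Tableau code):

import math
from shapely.geometry import Polygon

def pixel_resolution_m(zoom, latitude=0.0):
    # Metres of ground covered by one pixel of a 256px Web Mercator tile
    return 156543.03392 * math.cos(math.radians(latitude)) / (2 ** zoom)

def simplify_for_zoom(poly, zoom):
    # Drop vertices that fall within roughly one pixel of each other
    return poly.simplify(pixel_resolution_m(zoom), preserve_topology=True)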

Performance was acceptable for the initial view, but bear in mind I’m on gigabit Ethernet, local to my server. As previously documented, it would be slower on a low bandwidth connection, but once the map is rendered, responsiveness is great! You can test it yourself here:

https://tab.databender.net/views/ShapefileRenderingTest/GCCMap?:embed=y&:render=true&:jsdebug=true

The simplification of the polygons is done on the server, and this becomes apparent when you zoom. From the above map, I zoomed right in to Port Melbourne – a much closer view. Initially the map renders with fuzzy polygon outlines – it has just zoomed the existing client-side rendered image:

However, the server is busy in the background computing the new model. Watching the trace, the pane-zoom event sits in “pending” state for a few seconds, then it starts to stream down the more granular polygon definitions and the boundaries sharpen up:

Additionally, to manage the data we ship to the browser, we clip the data to the viewport so we only transfer the vertices we need to render. You can see the pane-pan-server event in pending state after I pan:

The viewport is updated once the data has arrived:

So – you can safely force client-side rendering even with very complex shapefiles; however, there are performance trade-offs, as the server must compute simplified polygons for each visual model. The upside is that you can zoom in a long way and still have detailed boundaries.

Kudos to our product and development teams for developing this feature in such a robust way!


Spatial files and client/server rendering

In the pre-10.3 world, if we wanted to show custom regions on a map we had to use custom polygons. This approach was less than optimal for a number of reasons (probably the most painful of which was changing the LOD of the viz), but a little-known one was that the polygon mark type forced server-side rendering when the workbook was published to Tableau Server. See here for the reference.

In the post-10.3 world we can now use spatial files to display custom regions which is a great big bucket of awesome sauce. However, the question came to my mind – “these are still effectively polygons, so do they trigger server-side rendering too?”

The answer, I’m pleased to report, is not necessarily.

Check out this workbook:
https://tab.databender.net/views/ShapefileRenderingTest/GCCMap?:embed=y&:jsdebug=true

This workbook shows Greater Capital City statistical areas for Australia – selected because it only has a few polygons but the source data is very detailed (~1.96M vertices). When you view the raw data from the shapefile, it causes server-side rendering:

I generated (via Alteryx) some generalised versions of the same polygons – at 250m, 100m, 50m, 10m and 1m resolution.
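If you don’t have Alteryx handy, a similar generalisation can be sketched in Python with GeoPandas/Shapely. This is just my rough equivalent of that workflow – the file names are made up, and it assumes the shapefile’s coordinate units are metres:

import geopandas as gpd

gdf = gpd.read_file('GCC_raw.shp')  # hypothetical file name
for tolerance_m in (250, 100, 50, 10, 1):
    out = gdf.copy()
    # Douglas-Peucker style simplification at the given tolerance
    out['geometry'] = gdf.geometry.simplify(tolerance_m, preserve_topology=True)
    out.to_file('GCC_{}m.shp'.format(tolerance_m))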

This results in polygons with fewer vertices, which means a simpler data set to render. The number of vertices in the data set at each resolution was:

Detail (km)    Number of vertices
0.25           50,905
0.1            110,332
0.05           188,466
0.01           483,807
0.001          1,256,083
raw            1,960,032
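For what it’s worth, vertex counts like these can be produced with a few lines of Python – a minimal sketch, assuming GeoPandas/Shapely 2.x and the same hypothetical file names as above:

import geopandas as gpd
import shapely

def vertex_count(path):
    # Sum the coordinate pairs across every geometry in the file
    gdf = gpd.read_file(path)
    return sum(shapely.count_coordinates(geom) for geom in gdf.geometry)

print(vertex_count('GCC_raw.shp'))  # ~1.96M for the raw file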

When we select the simpler polygons for our map, we see that the rendering mode flips over to client-side rendering, which is awesome as it gives a much smoother experience for tooltips, selections and highlighting.

In this exercise I found that when viewing all of the polygons (i.e. the whole of Australia) the cutover between client-side and server-side rendering was between 10m and 1m resolution – i.e. between 483K and 1.25M vertices. However, I also noticed that when viewing the raw shapefile resolution, if I filtered the data set to reduce the number of polygons (e.g. by selecting Victoria only – ~222K vertices) this brought me back under the threshold and allowed rendering to go back to client-side:

So the short of all of this is that, when it comes to client-side vs. server-side rendering, shapefile regions work the same way as any other viz. If the complexity of the viz is over the complexity threshold, we use server-side rendering; if below, we use client-side rendering. The takeaway from a performance perspective is: if you are working with shapefiles and find yourself experiencing server-side rendering, consider either filtering the number of polygons or using a lower-resolution version of the shapefile.

As I also pointed out in my previous blog post about filled maps and low bandwidth connections, the polygon data for client-side rendering can add significantly to the size of the bootstrap package, so in low bandwidth environments it might be preferable to trade responsiveness for rendering time. For the above dashboard the client-side rendering bootstrap package was 2.3MB for all of Australia at 10m resolution, vs. 5.3KB for the bootstrap and 182KB for the image tile when using server-side rendering.

Enjoy!


Beyond shape files…

[This is one of those moments when you realise you haven’t been seeing the big picture. Digging around the edges of a new concept you suddenly see the foundations are much deeper than you thought. So – hats off to our wonderful dev team for being several steps ahead…]

I finally had a few moments of spare time the other day, so I got to watching some internal training videos for Tableau 10.2. These particular videos are what we call WINK (what I need to know) training and are deep dive sessions on new features we have released. One of them immediately caught my eye with the following abstract:

“Extract API supports geospatial data”

Wait… what!?!

Sure enough – when I went looking I found that one of the new features in 10.2 is that the Extract API now supports the spatial data type. You can find more about this feature in the Tableau SDK Reference. The really cool part is that it’s super simple to use – all you have to do is insert spatial data in WKT format. This means you can easily fabricate your own spatial data or import it from a spatial database using a function like ST_AsText().
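For example, here’s a minimal sketch of pulling WKT out of PostGIS, ready to feed to the Extract API – the connection details and the regions table are hypothetical:

import psycopg2

conn = psycopg2.connect('dbname=gisdb')  # hypothetical connection
cur = conn.cursor()
# ST_AsText() converts PostGIS geometries to WKT strings
cur.execute('SELECT name, ST_AsText(geom) FROM regions')
for name, wkt in cur.fetchall():
    print(name, wkt[:60])  # each wkt value can go straight into setSpatial()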

It’s been a long time since I flexed my coding muscles but my Google-fu is mighty, so without too much hassle I was able to install Python, install the Tableau SDK and fiddle with the SDK sample. The code was easy (once I realised that indenting is apparently important in Python 🙂) and the relevant lines are the setSpatial() calls:

# Insert Data (excerpt from the SDK sample; `schema` and `table` are created earlier)
row = Row( schema )
row.setDateTime( 0, 2012, 7, 3, 11, 40, 12, 4550 )  # Purchased
row.setCharString( 1, 'Beans' )                     # Product
row.setString( 2, u'uniBeans' )                     # Unicode Product
row.setDouble( 3, 1.08 )                            # Price
row.setDate( 6, 2029, 1, 1 )                        # Expiration Date
row.setCharString( 7, 'Bohnen' )                    # Produkt
for i in range( 10 ):
    row.setInteger( 4, i * 10 )                     # Quantity
    row.setBoolean( 5, i % 2 == 1 )                 # Taxed
    inner = str( i * 3 )
    outer = str( i * 5 )
    # Build a closed square ring in WKT (the first and last points must match)
    row.setSpatial( 8, 'POLYGON ((' + inner + ' ' + inner + ', ' +
                       inner + ' ' + outer + ', ' +
                       outer + ' ' + outer + ', ' +
                       outer + ' ' + inner + ', ' +
                       inner + ' ' + inner + '))' )
    table.insert( row )

The result was this:

I was also able to load the file with mixed spatial types:

# Insert Data (same excerpt, now alternating points and polygons)
row = Row( schema )
row.setDateTime( 0, 2012, 7, 3, 11, 40, 12, 4550 )  # Purchased
row.setCharString( 1, 'Beans' )                     # Product
row.setString( 2, u'uniBeans' )                     # Unicode Product
row.setDouble( 3, 1.08 )                            # Price
row.setDate( 6, 2029, 1, 1 )                        # Expiration Date
row.setCharString( 7, 'Bohnen' )                    # Produkt
for i in range( 10 ):
    row.setInteger( 4, i * 10 )                     # Quantity
    row.setBoolean( 5, i % 2 == 1 )                 # Taxed
    inner = str( i * 3 )
    outer = str( i * 5 )
    if ( i % 2 == 0 ):
        # Even rows get a point...
        row.setSpatial( 8, 'POINT (' + inner + ' ' + inner + ')' )
    else:
        # ...odd rows get a closed square polygon
        row.setSpatial( 8, 'POLYGON ((' + inner + ' ' + inner + ', ' +
                           inner + ' ' + outer + ', ' +
                           outer + ' ' + outer + ', ' +
                           outer + ' ' + inner + ', ' +
                           inner + ' ' + inner + '))' )
    table.insert( row )

Note that mixing spatial types isn’t fully supported in Tableau, but you can use them if you are careful not to mix them in the same mark. Get it wrong and you’ll see this:

Get it right and you’ll see this:

The result of all this is that we are not limited to just bringing in spatial data from spatial files – with a little bit of effort we can bring it in from anywhere. This is very exciting and I look forward to seeing what you all create.


How to visualize polar projection data in Tableau

[This ice cap data is the gift that just keeps giving. Today’s guest post is courtesy of Sarah Battersby – Chief Crazy Map Lady in Residence – where she explains how she weaves her dark magic.

And then she gloats. Fair enough – she earned it.]

The first step in attempting something like this is to wait for someone else to find a nice dataset for you to work with and call you out in their blog lamenting the challenges of working with polar data in a Web Mercator projected base map.

Then get to work.

The National Snow and Ice Data Center data is delivered in a polar stereographic projection. For the southern hemisphere data that I used, the projection was the NSIDC Sea Ice Polar Stereographic South. This projection takes our latitude and longitude coordinates out of angular units and puts them onto a nice, flat Cartesian plane, with the center of the projection (the South Pole) as our new 0,0 coordinate, and all locations plotted on a grid measured in meters away from the center.
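To make that concrete, here’s a quick illustration of the South Pole becoming the origin – my own sketch using pyproj, not part of Sarah’s workflow:

from pyproj import Transformer

# WGS84 lat/lon -> NSIDC Sea Ice Polar Stereographic South (EPSG:3412)
to_polar = Transformer.from_crs('EPSG:4326', 'EPSG:3412', always_xy=True)
x, y = to_polar.transform(0.0, -90.0)  # lon, lat of the South Pole
print(x, y)  # ~(0, 0): the pole is the new origin; everything else is metres from it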

Here is what that projection looks like (graphic from NSIDC):


And with this great dataset, founded on solid sea and ice data science, and represented in a carefully selected projection that is mathematically appropriate for the data…we will start doing some serious lying with data to bend it to our will.

1. Projections are just mathematical transformations from angular coordinates to planar coordinates. So, using an open source GIS (QGIS) we’ll tell our first lie: set the coordinate system to Web Mercator. Do not re-project into Web Mercator! We just want the dataset to think it is in Web Mercator coordinates.


That essentially shifts the center of our projection from (0°, -90°) to (0°, 0°). That’s right, we just moved the South Pole to a spot right off the west coast of Africa. I am already a little ashamed of myself, but now I can show polar data in a system that uses Web Mercator.


But there is a problem… my coordinates are still in “Web Mercator” meters. While Tableau can work with shapefiles in non-latitude/longitude coordinates if there is a projection defined for the dataset, I still wanted to force the data back into latitude and longitude, so I then reprojected (or, perhaps, un-projected) back to latitude and longitude using QGIS.
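In code terms, the two steps look something like this – a hedged GeoPandas sketch of assign-don’t-transform followed by a genuine reprojection, with hypothetical file names:

import geopandas as gpd

gdf = gpd.read_file('seaice_south.shp')  # NSIDC polar stereographic (EPSG:3412)
# The lie: ASSIGN Web Mercator without touching the coordinate values
lie = gdf.set_crs(epsg=3857, allow_override=True)
# Now genuinely reproject, which "un-projects" the metres into lat/lon
latlon = lie.to_crs(epsg=4326)
latlon.to_file('seaice_latlon.shp')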

2. And there is an even bigger problem – I don’t want a base map that shows Antarctica at the equator! I need a new custom base map… off to Mapbox I go. With Mapbox I can style a new set of basemap tiles starting from a blank canvas. That means I can lie about the coordinate system of all sorts of spatial files and have them show up in the same wrong location in the world! I am totally going to lose my cartographic license for this…

I grabbed a dataset with boundaries of the world countries (in latitude and longitude) -> used QGIS to project to Polar Stereographic to match the real sea ice data (to get it in the right coordinate system) -> changed the projection definition to Web Mercator (introducing the lie to make it think it was really located at the equator) -> reprojected back to latitude and longitude.

I spent way too much time searching for some imagery to bling up the base map, and eventually found a nice geotiff (a tiff with geographic coordinates attached to it) from IBCSO. I jumped through the same lie-about-the-projection-then-send-back-to-lat/lon hoops (it was originally in the polar stereographic projection, just like the sea ice data).

Using Mapbox Studio I put all of the data onto a blank basemap and published it for use in Tableau.


3. Load up the new tiles in Tableau, and then lock down the pan/zoom to hide the fact that this is a polar stereographic wearing Web Mercator clothing (where did the rest of those continents go??? That’s right, I deleted them because I found them inconvenient and ugly in my map… cartographic license at work!).


4. In the battle of polar data: Sarah – 1, Alan – 0


I am not worthy…

This is why it’s fun working with people who are smarter than me. Way, way smarter…

Less than 24 hours after I post about my issues with polar data, Crazy Map Lady Extraordinaire Sarah Battersby tears it up and produces this:

[Video: Little Polar]

In her own words:

NOAA polar ice files come in using a Polar Stereographic projection. I (ahem) just modified the definition to make it think it was Web Mercator. I reprojected into WGS84 to make it think it was lat/lon (which then places Antarctica roughly over the equator). First step down – data is in Tableau, ready to be analyzed.

To get a bit of context, I used the same projection trick with some continent data that I had lying around – the data round-tripped from WGS84 -> Polar Stereographic -> tell the data that it’s in Web Mercator (but it is really in Polar Stereographic) -> WGS84.

I threw the bastardized-projection version of the continents into Mapbox to run off some quick tiles. Added them to Tableau, and… see video.

On Tableau Public here.

So not worthy…


The Importance of Projections

A few days ago I found a wonderful story about polar ice caps melting that led me to some wonderful data. I thought I could potentially make a viz that showed the changing extent of the sea ice at the poles – some line charts for the temporal view and a map with the shape files. I figured some animation would be even sexier.

I pulled the shape files down from the web and used Alteryx to union them together into a single file (+1 vote for shape file union). I loaded it into Tableau and drew my map but I got this:

WTF!?! Something is very broken. I decided to take a closer look at a single shape file:

Yep – definitely borked somewhere. My initial thought was that maybe there was a problem with the polygon data – that perhaps the polygons weren’t being closed properly. Because look here…

But surely not. I mean, these people are professionals. I’m sure their data is used all the time and an error like this would certainly be flagged. I downloaded and installed QGIS, pointed it at the file (and the Tableau tile service) and voila! One sea ice polygon:

BTW – check out how QGIS takes our Tile Service and projects it nicely. Very cool!

Anyhow – it turns out the problem is with the projection in the data. The shape file has the data in EPSG:3412 (NSIDC Sea Ice Polar Stereographic South). However, Tableau only understands WGS 84 (displayed on a Web Mercator map) and so it is doing on-the-fly transformations. Here’s what happens when I transform the data into a Mercator projection in QGIS:

BOOM! Also borked. I’m no GIS expert (looks around for Sarah Battersby) but it looks like Mercator can’t handle polygons that cross the +/- 180 degree meridian – vertices just either side of it land on opposite edges of the map, smearing the polygon across the globe. So until Tableau can support more projections, I’m going to have to park this project. Or, as Sarah suggested, map it to completely different coordinates somewhere else on the globe.

Learnin’ every day, folks.


Using GEOMETRY Fields in Calculations

In Tableau 10.2, we have a new data type that is read from spatial files – the GEOMETRY field. Right now, it would seem there is not much we can do directly with these fields other than display them.

[Image: map]

The GEOMETRY field is presented as a measure object with a single aggregation function COLLECT(). This aggregation makes a group of polygons and/or points – GEOMETRY fields can contain both – act together as a collection (hence the name) based on the dimensions included in the viz. This means they are coloured, labelled, selected, highlighted, etc. as a single mark.

Right now there are no other built-in functions for GEOMETRY fields, but we can use them in calculated fields. Here’s a simple yet interesting application that allows us to dynamically select different levels of detail.

The Australian Bureau of Statistics reports its data spatially via Statistical Areas (SA). There are multiple levels of detail in this model, from SA4 down to SA1 (and further down to mesh blocks). The boundary definitions for these areas are available from the ABS website in ESRI and MapInfo formats.

To bring this data together, we can download the shape files and join them in a single data source:

[Image: joins]

With some cleanup we end up with the following – a GEOMETRY field sourced from each file, containing the boundaries of the associated SA level:

[Image: measures]

We can create a parameter that allows the user to select which level they would like to display:

[Image: parameter]

We can use this parameter in a calculated field, returning a different GEOMETRY field based on the parameter value:

CASE [Select Level]
  WHEN "SA1" THEN [SA1 Geometry]
  WHEN "SA2" THEN [SA2 Geometry]
  WHEN "SA3" THEN [SA3 Geometry]
  WHEN "SA4" THEN [SA4 Geometry]
END

Double-clicking on this calculated GEOMETRY field and exposing the parameter allows the end user to display the required SA level dynamically. However, because there is no dimension in the viz, the COLLECT() aggregation makes all the polygons act as a single mark. To have each area act as a separate mark, we can use the parameter again to create a dynamic code dimension:

CASE [Select Level]
  WHEN "SA1" THEN [Sa1 7Dig16]
  WHEN "SA2" THEN [Sa2 Name16]
  WHEN "SA3" THEN [Sa3 Name16]
  WHEN "SA4" THEN [Sa4 Name16]
END

Check out this workbook for an example of this technique.

I look forward to finding more cool things to do with this new spatial capability in Tableau 10.2, and reading about your tricks as well.

Enjoy!
