Snowflake multi-cluster warehouse vs. TabJolt load

One of Snowflake’s key features is something called multi-cluster warehouses.

By default, a virtual warehouse consists of a single cluster of servers that determines the total resources available to the warehouse for executing queries. As queries are submitted to a warehouse, the warehouse allocates resources to each query and begins executing the queries. If sufficient resources are not available to execute all the queries submitted to the warehouse, Snowflake queues the additional queries until the necessary resources become available.

With multi-cluster warehouses, Snowflake supports allocating a larger pool of resources to each warehouse. As the number of concurrent user sessions and/or queries for the warehouse increases, and queries start to queue due to insufficient resources, Snowflake automatically starts additional clusters, up to the maximum number defined for the warehouse.
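
For a sense of what this looks like in practice, here's a rough Python sketch (using the Snowflake connector) of how a multi-cluster warehouse might be defined. The warehouse name and cluster counts are illustrative only, not the configuration used in the test below:

import snowflake.connector

# Connection details are placeholders; substitute your own account and credentials.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...", role="SYSADMIN"
)

# Snowflake will run between 1 and 4 clusters for this warehouse,
# adding clusters automatically as queries start to queue.
conn.cursor().execute("""
    CREATE WAREHOUSE IF NOT EXISTS reporting_wh
      WAREHOUSE_SIZE    = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4
      AUTO_SUSPEND      = 300
      AUTO_RESUME       = TRUE
""")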


Similarly, as the load on the warehouse decreases, Snowflake automatically shuts down clusters to reduce the number of running servers and, correspondingly, the number of credits used by the warehouse.

As you can imagine, this capability is very useful for maintaining consistent response times for users in BI and reporting scenarios, where varying user loads are common. Imagine that you normally have 5 concurrent users on your system, but on Monday mornings you have a spike of 20 concurrent users, all wanting to view reports on the weekend’s business. A multi-cluster warehouse can ensure that users during the peak load experience the same response times (from the DB, that is… saturation of the BI server is a separate factor) as users during non-peak periods.

To show this capability in action, I created a dashboard that queries roughly 57M records of Citibike data hosted in a Snowflake database. I published this dashboard to Tableau Server and then used TabJolt to simulate a) a single-user baseline, and then b) a peak period with multiple concurrent users. Note that caching was turned off in both Tableau Server and the Snowflake data source to ensure that every query was actually run in the database.

You can see this experiment and the results in the following video. Apologies for just providing a download link; I tried posting the video to Vimeo, but the quality was terrible. That’s most likely my fault, but for now this is an easier way to share:

https://www.dropbox.com/s/1j1437irfcjc1g1/Multi-cluster%20Scaling.mp4?dl=0

Enjoy!


Snowflake and Tableau – in action!

A few days ago I was invited to present Snowflake as part of a webinar run with our partner BockCorp. Check out the recording of the session here; it includes an overview of the Snowflake architecture as well as a demo showing all the cool capabilities like instant elasticity, semi-structured data support, data sharing and more.

Oh – if you want to jump straight to the demo, it starts around the 29min point. Go ahead, I won’t be offended. 🙂

Enjoy!


Best Practices for Using Tableau with Snowflake

Update 18 July 2018:
The whitepaper has now been released as an official Snowflake whitepaper so I’ve updated the download link to point to the document on the Snowflake website. The new whitepaper is much prettier and has been through editing to clean up all my bad writing habits. Thanks to Vincent Morello and Marta Bright in our content marketing team for all their help in making this happen.

As announced in my last post, since joining Snowflake I’ve been working on a whitepaper that provides best practice guidance for using Tableau with our built-for-the-cloud data warehouse.

Well, I’m pleased to report that it’s done. Or at least, done enough to release. You can download it from here:

https://resources.snowflake.net/ecosystem/best-practices-for-using-tableau-with-snowflake

I hope you find it useful, and please let me know if you have any feedback or corrections.


Tableau and Snowflake

Happy New Year everyone!

I’ve been a bit quiet of late. Probably to be expected, what with getting my head around all the new stuff here at Snowflake. Also, properly relaxing over Christmas and summer requires a degree of focus (ah, the joy of the southern hemisphere!). But I’ve not been completely idle. Over the past few weeks I’ve been steadily working on a new whitepaper.


Here’s an overview of the document scope…

  • Introduction
  • What is Tableau?
  • What is Snowflake?
  • What you DON’T have to worry about with Snowflake
  • Creating Efficient Tableau Workbooks
  • Connecting to Snowflake
  • Working with Semi-Structured Data
  • Working with Snowflake Time Travel
  • Working with Snowflake Data Sharing
  • Implementing Role-Based Security
  • Using Custom Aggregations
  • Scaling Snowflake Warehouses
  • Caching
  • Other Performance Considerations
  • Measuring Performance

Of course, it’s turning out to be a lengthy read – it seems I know no other way. 🙂 But believe me, a lot of that is screenshots and SQL. The document is being reviewed at the moment, but I plan to break it into consumable chunks and release material as posts over the next couple of weeks. Maybe here, maybe on the Snowflake or Tableau blogs.

So, keep your eyes peeled…


Endings and Beginnings

Well, it was a bittersweet day on Friday. After 6+ years at Tableau, I have decided that it’s time for a new challenge. Tomorrow I start my first day at Snowflake Computing, a company that is revolutionising the cloud analytic database market.


I’m going to continue to blog here, and given that I still have a deep love for Tableau, some (many) of my posts will continue to be about it and data visualisation in general. However, I’ll also be posting about Snowflake and interesting things I’m learning as I settle into my new role. Given that a primary use case for Snowflake is BI and analytics, the two topics should be quite complementary.

Thanks for your support and questions over the past few years and I hope you continue to find my ramblings informative.


Hexbin Scatterplot in Tableau

An interesting tweet came across my Twitter stream the other day, showing a hexbin scatterplot chart type for Power BI.

Having just presented a session at TC17 on working with dense data where Sarah Battersby and I covered (among other things) hexbinning in Tableau, I was intrigued by this viz type and wondered if it could be created in Tableau. I was a little wary as mixing polygons and points together can be complicated, but I hoped it could be done.

Let’s just say that I’m glad I was bald when I started this exercise because it involved quite a bit of hair-pulling. But after a few hours of trial and error and a well-timed break to go sit in the sun and ruminate, I managed to produce this little beauty:

[Animation: the finished hexbin scatterplot]

I started with Alberto Cairo’s Datasaurus dataset – a group of datasets that behave similarly to Anscombe’s quartet. Really I was just being lazy as I had it lying around and therefore didn’t need to mock up my own sample scatterplots. The source data looks like this:

dataset | record id | x       | y
------- | --------- | ------- | -------
dino    | 1         | 55.3846 | 97.1795
dino    | 2         | 51.5385 | 96.0256
dino    | 3         | 46.1538 | 94.4872
dino    | 4         | 42.8205 | 91.4103
dino    | 5         | 40.7692 | 88.3333
dino    | 6         | 38.7179 | 84.8718
dino    | 7         | 35.641  | 79.8718

With the data in this format there are two approaches for generating the hexbins – one uses densification to generate the polygon vertex records, and the other generates them through a join to a scaffolding table. I opted to use the scaffolding approach as a) I have a manageable amount of data and b) it makes life easier when you have hexbins that contain just a single point. The scaffold table looks like this:

Point ID
0
1
2
3
4
5
6

And the join of these tables in Tableau looks like this (the join simulates a Cartesian product of the two tables):

The result of this is 7 rows of data for each point on the scatterplot:
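
Outside Tableau, the same scaffold join is easy to sketch in Python with pandas (the file name and column names here are assumptions based on the sample data above):

import pandas as pd

points = pd.read_csv("datasaurus.csv")            # columns: dataset, record id, x, y
scaffold = pd.DataFrame({"Point ID": range(7)})   # the 7-row scaffold table

# Cross join: every data row is repeated once per scaffold row, giving 7 rows
# per original point (Point ID 0 for the point itself, 1-6 for the hexagon vertices).
joined = points.merge(scaffold, how="cross")
assert len(joined) == len(points) * 7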

I’ll use one of these (PointID=0) to plot the actual point location, and the other 6 to plot the hexagon shape. I’ve blogged on several occasions on how to generate a dynamic hexbin polygon and we’re going to use the same techniques here:

Generate the hexbin center point:
[HexbinX]: HEXBINX([X]/[Hexbin Size], [Y]/[Hexbin Size]) * [Hexbin Size]
[HexbinY]: HEXBINY([X]/[Hexbin Size], [Y]/[Hexbin Size]) * [Hexbin Size]
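
If you want to experiment with the same pattern outside Tableau, here's a minimal Python sketch: scale the coordinates down by the bin size, snap to a unit hexagonal grid, then scale back up. The hexbin() helper is my own approximation of what HEXBINX/HEXBINY do; the exact grid orientation Tableau uses may differ:

import math

SQRT3 = math.sqrt(3.0)

def hexbin(x, y):
    # Snap (x, y) to the nearest centre of a unit hexagonal grid. Centres sit on
    # two interleaved rectangular grids; the closer candidate is the bin centre.
    ax, ay = float(round(x)), round(y / SQRT3) * SQRT3                    # "even row" grid
    bx, by = math.floor(x) + 0.5, (math.floor(y / SQRT3) + 0.5) * SQRT3   # "odd row" grid
    if (x - ax) ** 2 + (y - ay) ** 2 <= (x - bx) ** 2 + (y - by) ** 2:
        return ax, ay
    return bx, by

def scaled_hexbin(x, y, size):
    # The Tableau pattern: divide by the bin size, bin, then multiply back.
    hx, hy = hexbin(x / size, y / size)
    return hx * size, hy * size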

 

Generate a unique identifier for each hexbin. As you may know, I’m an advocate for efficiency so I use a numeric function for this (based on Cantor’s pairing function) instead of a string function:
[HexbinID]: ([HexbinX]^2 + 3*[HexbinX] + 2*[HexbinX]*[HexbinY] + [HexbinY] + [HexbinY]^2)/2
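
As a quick sanity check on that trick, here's the same algebra in Python; for non-negative integer pairs it produces a distinct ID for every distinct pair (in the viz it's applied to the scaled bin centres):

def hexbin_id(hx, hy):
    # Same algebra as the Tableau calc: an expanded Cantor pairing of the bin coordinates.
    return (hx * hx + 3 * hx + 2 * hx * hy + hy + hy * hy) / 2

# Every pair on a 50 x 50 grid of bins gets its own ID.
ids = {hexbin_id(i, j) for i in range(50) for j in range(50)}
assert len(ids) == 50 * 50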

 

Generate the actual plot points keeping the original location when PointID=0 and using trigonometry to generate the hexagon vertices when PointID=(1..6):
[PointType]: IF [Point ID] = 0 THEN 0 ELSE 1 END
[Angle]: (1.047198 * INDEX())
[PlotX]: IF MIN([PointType]) = 0 THEN MIN([X]) ELSE WINDOW_AVG(MIN([HexbinX])) + [Hexbin Size]*COS([Angle]) END
[PlotY]: IF MIN([PointType]) = 0 THEN MIN([Y]) ELSE WINDOW_AVG(MIN([HexbinY])) + [Hexbin Size]*SIN([Angle]) END
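
The magic number 1.047198 in the [Angle] calc is simply pi/3 (60 degrees), so each step of INDEX() moves to the next hexagon vertex. Here's a small Python equivalent of the vertex maths, assuming (as in the calcs above) that the vertices sit at a distance of [Hexbin Size] from the bin centre:

import math

def hexagon_vertices(cx, cy, size):
    # Six vertices around a hexbin centre, one per 60-degree step,
    # mirroring the PlotX/PlotY calcs for Point IDs 1-6.
    return [(cx + size * math.cos(k * math.pi / 3),
             cy + size * math.sin(k * math.pi / 3)) for k in range(1, 7)]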

We can now start plotting our viz – first let’s just get the points up:

You can see that the blue marks are the original data points and the orange points are the vertices for the hexagons. Because we want two mark types (a polygon and a point) we need a dual axis chart:

We need to isolate the orange marks on one axis and the blue marks on the other. We can’t filter them out, so we have to make some clever use of the “hide” function. I duplicated the [PointType] calculation from before so that one copy drives the colour on one axis and the other copy drives the colour on the second axis:

We then hide the marks we don’t need on each axis (right-click on the colour swatch in each legend and select “Hide”):

We can now make the hexagon marks on one axis, and circle marks on the other. Tidy up the colours and other formatting:

Finally, we set the second axis to “dual axis”, synchronise the axes, hide the unwanted top axis, and voila:

The last couple of steps I put in were to a) colour the hexbins by the number of points they contain, b) tidy up the tooltips for each mark type, and c) set up a hover action to highlight the elements in a hexbin:

This ended up being quite a challenging viz and required quite a few techniques to get it done. But being able to do it at all reinforces for me that an expressive presentation model that allows you to natively create complex chart types (i.e. the Tableau approach) is faster and more reliable than a model where you are reliant on a developer to write a custom chart widget (i.e. the Power BI model). Even accounting for the trial and error needed to nut out the final successful method, Tableau allowed me to achieve the result much faster than a solution based on coding.

And of course, now that I know how, I can reproduce this solution in minutes.

You can download the workbook from here. Enjoy.

 

PS. I couldn’t help myself. The workbook now includes solution examples using both the scaffolding and the densification approaches.


It was a mental itch that needed scratching.


Loupe Tooltips

Well, it’s been a hectic week in Las Vegas for our customer conference but I have a brief window to put up this post. If you’ve been here, I hope you had a great time and learned heaps!

One of the sessions I co-delivered with the ever amazing Sarah Battersby (@MapsOverlord) was “Masters of Hex: Interpreting Dense Data with Tableau”. We’ve presented this session for the last 3 years, but as always we update our material for any new features and techniques. This year we have access to the 10.5 beta, and its new viz-in-tooltip feature gave me an idea for a way to dynamically zoom in on dense data.

I’ve called this idea a “loupe” tooltip – after the magnifying eyepiece used by photographers and watchmakers:

Here’s my starting data – a scatter plot with 100,000 data points packed densely:

As you can see, it’s impossible to make out what is going on in the bottom middle of the chart – there is too much overplotting of the marks even when we make the marks as small as possible and ramp up the transparency. But what if we could dynamically zoom in on a small section – like we were looking through a loupe?

To achieve this, I’ve created binning calculations that let me select a small group of marks (I had to use calculations rather than native bins because a bin field can’t be referenced inside a calculation, which I need to do later):

BinX:    FLOAT(INT([X]*10)/10)
BinY:    FLOAT(INT([Y]*10)/10)
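
In Python terms, the calcs simply truncate each coordinate to the lower edge of a 0.1-wide bin (a sketch only; Tableau's INT truncates towards zero, so negative coordinates behave slightly differently):

def bin_coord(value):
    # Truncate to the lower edge of a 0.1-wide bin, like INT([X]*10)/10.
    return int(value * 10) / 10

# A point at (0.537, 0.481) lands in the bin whose lower-left corner is (0.5, 0.4).
assert (bin_coord(0.537), bin_coord(0.481)) == (0.5, 0.4)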

I’ve also created another sheet which we will use in the tooltip – essentially it’s just a duplicate of the primary scatter plot, but I’ve cleaned up the axes and the formatting to make it clean and minimal:

I also made a loupe title sheet to just show the count of the marks in the scope of the loupe:

Now we add them to the tooltip of the primary scatter plot, and set the filter fields to be BinX and BinY:

And voila!

However, we have a problem when we loupe over a sparse bin – the loupe axes zoom in to show just the single point:

It would be preferable to fix the loupe to always show the extent of the bin, so we can use the neat trick of placing reference lines to pad out the axes to a larger size than the data demands. We create a couple of boundary calculations for X and Y:

XLowerBound:    MIN([BinX])
XUpperBound:    [XLowerBound] + 0.1
YLowerBound:    MIN([BinY])
YUpperBound:    [YLowerBound] + 0.1
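
Putting the binning and boundary calcs together: whatever the loupe contains, its extent is always the full 0.1 x 0.1 bin. A small Python sketch of the idea:

def loupe_extent(x, y, width=0.1):
    # Fixed axis extent for the loupe: always the full bin, regardless of where
    # the points inside it sit (mirrors the four boundary calcs above).
    x_lo = int(x * 10) / 10   # XLowerBound
    y_lo = int(y * 10) / 10   # YLowerBound
    return (x_lo, x_lo + width), (y_lo, y_lo + width)

# A sparse bin with a single point still gets the full 0.1 x 0.1 window.
print(loupe_extent(0.537, 0.481))   # roughly ((0.5, 0.6), (0.4, 0.5))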

We then put these calcs on the detail shelf of the loupe and we can create a reference band:

Now when we loupe over a sparse bin we have a much nicer view:

The workbook can be downloaded here, but remember that you’ll need Tableau 10.5 to view it, so make sure you enroll in the beta program!

Enjoy!
