Elastic Pool: When to use it

This post is part of the series “Stairway to being an Azure SQL DBA”, in which I plan to cover everything related to working as an Azure SQL DBA.

In the previous two blogs about Elastic Pools we discussed the concept of an Elastic Pool and its implementation.

In this blog we’ll see how to identify pre-existing Azure SQL Databases that can be grouped together into an Elastic Pool.

Before looking at the counters for existing databases, consider a common scenario: I have seen Azure SQL Databases used extensively in microservices and SaaS applications. In these architectures the databases are generally very small, sometimes only 5–10 MB each, with no database using more than 1–2 DTUs even at peak, and their count easily grows to 1,000+.
The cheapest standalone Azure SQL Database still has 5 DTUs and costs $5 per month, so for 1,000 databases the cumulative cost becomes $5,000+ per month. Using two Basic Elastic Pools with 300 eDTUs each instead (a Basic Elastic Pool can hold at most 500 databases) costs only around $900, which is less than 20% of the standalone cost.
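The arithmetic behind that comparison can be sketched as follows. This is a minimal illustration using the figures from this post ($5/month per Basic single database, roughly $450/month for a 300 eDTU Basic pool); check the Azure pricing page for current rates.

```python
# Rough cost comparison: standalone Basic databases vs Basic Elastic Pools.
# Prices are the illustrative figures from the post, not live Azure pricing.
db_count = 1000
standalone_cost = db_count * 5           # $5/month per Basic single database

pool_cost_each = 450                     # ~300 eDTU Basic pool per month
max_dbs_per_pool = 500                   # Basic pool database limit
pools_needed = -(-db_count // max_dbs_per_pool)  # ceiling division
pooled_cost = pools_needed * pool_cost_each

savings_pct = 100 * (standalone_cost - pooled_cost) / standalone_cost
print(standalone_cost, pooled_cost, round(savings_pct))  # 5000 900 82
```

So for this workload shape the pooled model runs at under a fifth of the standalone cost.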

If you have an existing Azure environment and want to evaluate moving your databases to the Elastic Pool model, you need to check some additional parameters, not just the current pricing and configuration (DTUs).

The things to consider when you want to move to a DTU-based Elastic Pool are:

  1. Check the total size of the databases; different Elastic Pools have different maximum size limits. For example, a 50 eDTU Elastic Pool can hold only 5 GB of cumulative storage. Standard and Premium Elastic Pools can have more storage, but it is billed at a monthly rate of $0.20/GB and $0.41/GB respectively.
  2. Check the total maximum concurrent workers for each database you are planning to add.
  3. Check the concurrent sessions for each database, as at any given time there can’t be more than 30,000 concurrent sessions for the whole Elastic Pool.
  4. Check the tempdb size: the maximum size of the tempdb data file is 166.7 GB for Basic, Standard (S0–S2), and Premium Elastic Pools, and 384 GB for Standard S3 and above.
  5. Most importantly, check the cumulative DTU load across all the databases. The Elastic Pool should be sized about 20% above the peak cumulative DTU load at any point in time.
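The sizing step above can be sketched as a small calculation. This assumes you have per-database DTU usage sampled at the same timestamps (the sample numbers below are invented for illustration):

```python
# Size a DTU Elastic Pool from per-database DTU samples.
# Each list holds DTU usage sampled at the same points in time;
# the values are made up for this sketch.
dtu_samples = {
    "db1": [3, 5, 2, 4],
    "db2": [1, 2, 6, 1],
    "db3": [2, 1, 1, 5],
}

# Cumulative DTU load at each sample time across all databases.
cumulative = [sum(vals) for vals in zip(*dtu_samples.values())]
peak = max(cumulative)          # peak combined load

# Recommended pool size: 20% headroom over the combined peak.
recommended_edtus = peak * 1.2
print(cumulative, peak, recommended_edtus)  # [6, 8, 9, 10] 10 12.0
```

Note that the combined peak (10 here) can be much lower than the sum of the individual database peaks (5 + 6 + 5 = 16), which is exactly why pooling pays off.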

The things to consider when making the move to a vCore-based Elastic Pool:

  1. Check the vCore % utilization for every database.
  2. Check the total storage for all the databases, including both data and log file sizes.
  3. Check the tempdb size for all the databases.
  4. Confirm the maximum IOPS for every database’s data file; the maximum IOPS depends on the Elastic Pool model chosen.
  5. Check the maximum concurrent logins, sessions, and workers.
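As a rough sketch of this cross-check, the per-database peaks from the list above can be rolled up and compared against a candidate pool’s limits. All the limit numbers below are placeholders, not the published limits for any real tier; look those up in the Azure SQL Elastic Pool resource-limits documentation.

```python
# Roll up per-database peaks and compare them against a candidate
# vCore pool's limits. Every number here is a placeholder for the
# sketch, not a real Azure limit.
databases = [
    # (peak vCores used, storage GB, peak IOPS, peak workers)
    (0.5, 20, 300, 40),
    (1.0, 35, 500, 60),
    (0.25, 10, 150, 20),
]

candidate_pool = {"vcores": 4, "storage_gb": 512, "iops": 4000, "workers": 420}

totals = {
    "vcores": sum(d[0] for d in databases) * 1.2,  # 20% headroom on compute
    "storage_gb": sum(d[1] for d in databases),
    "iops": sum(d[2] for d in databases),
    "workers": sum(d[3] for d in databases),
}

fits = all(totals[k] <= candidate_pool[k] for k in candidate_pool)
print(totals, fits)
```

If `fits` comes back `False` for every affordable pool, the databases are not good pooling candidates and should stay standalone.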

Once you have collected the relevant information, check the resource limits for each Elastic Pool model and choose a pool that has roughly 120% of the resources observed at peak.

These metrics are much easier to collect if you have configured Azure Log Analytics for your databases; a simple KQL query can provide all the relevant information. If you are not using Azure Log Analytics, you can still get much of this information from the database DMVs, but collecting that data from each database using ADF or SSIS is a sizeable task in itself.

The above considerations can make or break the case for an Elastic Pool, but if everything checks out, changing the model from single Azure SQL Databases to an Elastic Pool can save you a great deal.

