I covered some of the basics of table geometry in this post: Synapse Fundamentals for Performance Tuning – Distribution; however, it’s critical to dive deeper into the performance impact of choosing the correct table distribution. Sometimes we think we have the correct distribution chosen for a table, only to be painfully mistaken when performance suffers. Generally, that is somewhat obvious when execution plans show shuffle or broadcast moves taking longer than expected. But what happens when you don’t see data movement and queries on an individual node still take too long? Since replicated tables by design eliminate data movement, their performance impact looks a little different from that of round robin or hash distributed tables.

So here I want to drill into replicated tables and their pitfalls. Replicated tables are often chosen in the name of limiting data movement, but they can have the opposite effect on performance. Below I briefly list some causes of poor performance and the symptoms you should watch out for when a replicated table is used incorrectly. (Reminder: you may want to review the basics of distribution and how replicated tables work in the post mentioned above.)
- Large replicated tables are a problem because they can take too long to cache, which consumes resources for extended periods of time and can cause queuing.
- Replicated tables that are consistently joined to other replicated tables can be a problem because they limit the compute power of the environment. Replicated tables essentially remove the MPP aspect of the environment by concentrating all data processing on a single node when they are not joined to a round robin or hash distributed table. This is problematic for large tables, which benefit from being spread across multiple compute nodes and could be more efficient with parallel processing rather than symmetric (single-node) processing, which is the result of joining two or more replicated tables. Note that this is the same principle as choosing the wrong column for the hash key on a distributed table: don’t consolidate processing onto a single processor when the work could be spread evenly across multiple processors.
- Dynamic replicated tables (tables that are written to frequently) can be troublesome because the in-memory cache is invalidated by every write action (insert, update, and delete). This means that every subsequent read incurs the additional performance impact of re-caching the entire table to every node. This is a very common issue, and one that may go unnoticed in initial development only to become a problem in production when the table is used differently than planned.
- Frequently scaled dedicated SQL pools are a problem because the cached tables are invalidated when the environment is scaled. Paused environments typically retain the cache (unless maintenance is performed during pause/resume), but scaling adds or removes compute nodes, which results in the loss of the in-memory cache. After the environment is scaled, the cache must be rebuilt on the first read of each table (see the sketch after this list for checking cache state and warming the cache).
- Too many replicated tables can be a problem if they are all processed at the same time daily and are then all asked to re-cache their data at the same time upon first read. Upon first read, a background system process runs the “BuildReplicatedTableCache” operation using a small resource class. More importantly, only two tables are cached in parallel, so multiple simultaneous re-cache requests will cause significant queuing. Note that you can see this process as an additional request in the DMV “sys.dm_pdw_exec_requests”:
-- Watch for cache-build requests triggered by first reads of replicated tables
SELECT *
FROM sys.dm_pdw_exec_requests
WHERE command LIKE '%BuildReplicatedTableCache%'
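If you suspect any of the issues above, it helps to see which tables are replicated and whether their caches are currently built. The sketch below (table names are illustrative) queries the catalog views for distribution policy and cache state; a cache state of NotReady on a frequently read table is a hint that writes or scaling are repeatedly invalidating the cache.

-- List replicated tables and whether their in-memory cache is currently built.
-- A state of NotReady means the next read will trigger BuildReplicatedTableCache.
SELECT t.name AS table_name,
       dp.distribution_policy_desc,
       cs.[state] AS cache_state
FROM sys.tables t
JOIN sys.pdw_table_distribution_properties dp
    ON t.object_id = dp.object_id
LEFT JOIN sys.pdw_replicated_table_cache_state cs
    ON t.object_id = cs.object_id
WHERE dp.distribution_policy_desc = 'REPLICATE';

-- After a load or a scale operation, a lightweight read is enough to warm the cache
-- ahead of user queries (dbo.DimDate is a hypothetical table name).
SELECT TOP 1 * FROM dbo.DimDate;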
Replicated tables can be a boon to performance if they are implemented correctly and sparingly. Too often, users see that making a table replicated solves a performance problem in one instance and then go overboard, making many or most tables replicated, which comes back to bite them. Replicated tables should not be the default but the exception.
There is at least one anti-pattern that I do want to highlight here. While academically speaking we should only replicate tables with less than 2GB of data, it is not unusual for tables to be many times larger than 2GB. If you have a query pattern where statistics cannot be calculated for a table as part of a join (which results in unpredictable data movement, possibly of the wrong dataset), you should consider replicating the smaller dimension table even though it may be larger than 2GB. Note that this decision should be proportional to the size of the fact table. For example, replicating a 5GB dimension table should be considered instead of the potential data movement of a 50TB fact table.
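If you do decide a dimension table warrants replication, keep in mind that distribution is set at table creation, so the common approach is to rebuild an existing round robin or hash distributed table with CTAS. A minimal sketch using hypothetical table names:

-- Rebuild an existing dimension table with replicated distribution.
CREATE TABLE dbo.DimCustomer_replicate
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM dbo.DimCustomer;

-- Swap the new table in place of the original.
RENAME OBJECT dbo.DimCustomer TO DimCustomer_old;
RENAME OBJECT dbo.DimCustomer_replicate TO DimCustomer;

Remember that the new replicated copy still has to be cached on first read, so validate query plans and cache behavior after the change rather than assuming the rebuild alone fixes performance.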
To dive deeper into replicated tables, be sure to check out this post: All you need to know about Replicated Tables in Synapse dedicated SQL pool – Microsoft Community Hub