marcingminski / sqlwatch

SQL Server Performance Monitor

Home Page: https://docs.sqlwatch.io

License: Other

PLpgSQL 0.43% C# 5.10% TSQL 83.28% PowerShell 7.57% HTML 3.59% Shell 0.03%
sqlserver powerbi performance-monitoring sqlwatch grafana-dashboard sql-server-monitoring database

sqlwatch's Introduction

About


SQLWATCH is a decentralised, real- to near-real-time SQL Server monitoring solution. It is designed to provide comprehensive monitoring out of the box and to serve as a monitoring framework for your own projects or applications. It collects performance data in a local database, with an option for centralised reporting for convenience.

Server Overview: SQLWATCH Grafana Instance Overview

Detailed charts: SQLWATCH Grafana Dashboard

Features

  • 5-second granularity to capture spikes in your workload.
  • Grafana for real-time dashboarding and Power BI for in-depth analysis.
  • Minimal performance impact (around 1% on a single-core SQL instance when using the broker for invocation).
  • Out-of-the-box collection with minimal configuration required to get it up and running.
  • Extensive configuration available for your convenience.
  • Zero maintenance. It has been designed to maintain itself.
  • Unlimited scalability. As each instance monitors itself, you are not constrained by the capacity of the monitoring server.
  • Works with all supported SQL Server versions (with some limitations on 2008 R2).


Architecture

SQLWATCH uses SQL Agent jobs to trigger data collection on a schedule and writes the results to a local database. For that reason, each monitored SQL Server instance must have SQLWATCH deployed. The destination database can be an existing "dbatools" database, msdb, or a dedicated SQLWATCH database. For performance reasons, it is advisable to deploy into a dedicated database: the deployment sets Read Committed Snapshot Isolation, which it will not do when deploying into an existing database. The data can be consumed and analysed by the Power BI report.
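
For reference, a minimal sketch of enabling Read Committed Snapshot Isolation on a dedicated database (the database name and the SINGLE_USER step are illustrative, not the project's deployment script):

-- Enable RCSI on a dedicated SQLWATCH database (illustrative).
-- SINGLE_USER avoids waiting indefinitely on open transactions.
ALTER DATABASE [SQLWATCH] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [SQLWATCH] SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE [SQLWATCH] SET MULTI_USER;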

Requirements

Tested on the following SQL Server versions:

  • 2008 R2 SP3 (with some limitations)
  • 2012
  • 2014
  • 2016
  • 2017
  • 2019

Docker and Linux work, but the Windows-only, WMI-based disk utilisation collector will fail there.

Installation

The easiest way to install SQLWATCH is to use dbatools:

Install-DbaSqlWatch -SqlInstance SQLSERVER1,SQLSERVER2,SQLSERVER3 -Database SQLWATCH

Alternatively, SQLWATCH can be deployed manually from the included dacpac, either via the command line using SqlPackage.exe:

SqlPackage.exe 
   /Action:Publish 
   /SourceFile:C:\Temp\SQLWATCH.dacpac 
   /TargetDatabaseName:SQLWATCH 
   /TargetServerName:YOURSQLSERVER 
   /p:RegisterDataTierApplication=True

or by deploying the Data-tier Application in SQL Server Management Studio.

sqlwatch's People

Contributors

ben-thul, chrgraefe, chvol, claudioessilva, codacy-badger, colindouglas92, csatnic, daniellohmann, halinegv, j-dc, kekcjkee, kevchant, leedsdba, marcingminski, mctsql, oliveruwehahn, pierreletter, playman86, pluim003, schenkfab, siavashgolchoobian, steffenengelhamburg, vkotturi


sqlwatch's Issues

Collect Database and individual tables growth history

Is your feature request related to a problem? Please describe.
Record the size of databases and of individual tables (clustered/non-clustered indexes and heaps) at regular time intervals.

Describe the solution you'd like
Ideally, the ability to adjust how frequently the collection is taken; however, regular intervals should be used to enable the best report display.
Collection of the size of clustered/non-clustered indexes and heaps at the time taken, stored in the repository.
DB size - overall DB size, or file size?

Describe alternatives you've considered
None.

Additional context
We need to be careful with this collection mechanism, as certain indexes can be very large and collection could impact performance. We should aim to test this with very large heaps/clustered/non-clustered indexes.

Raising this as an official feature request so we can open up comments about the implementation. A possible collection query is sketched below.
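
A minimal sketch of one way to collect this without scanning the data, assuming a hypothetical target table dbo.sqlwatch_table_size_history; sys.dm_db_partition_stats reads only allocation metadata, so even very large heaps and indexes are cheap to measure:

-- Snapshot per-object, per-index size from partition metadata (no data scan).
insert into dbo.sqlwatch_table_size_history
        (snapshot_time, [object_id], index_id, used_mb, reserved_mb, [row_count])
select getutcdate(),
       ps.[object_id],
       ps.index_id,
       sum(ps.used_page_count) * 8 / 1024.0,
       sum(ps.reserved_page_count) * 8 / 1024.0,
       sum(ps.[row_count])
from sys.dm_db_partition_stats as ps
group by ps.[object_id], ps.index_id;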

Collation conflict on sp_sql_perf_mon_logger

Describe the bug
The documentation here states the following:

We use Latin1_General_CI_AS collation

The server I'm trying to deploy the solution to is configured under that same collation.

Collation conflict on server that is configured to Latin1_General_CI_AS

To Reproduce
Steps to reproduce the behavior:

  1. Pull down latest version of dbatools
  2. Run Install-DbaSqlWatch -SqlInstance myserver -Database SQLWATCH
  3. Receive error on collation conflict:
WARNING: [03:26:50][Publish-DbaDacPackage] Deployment failed | Could not deploy package.
Error SQL72014: .Net SqlClient Data Provider: Msg 468, Level 16, State 9, Procedure sp_sql_perf_mon_logger, Line 130
Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the like
operation.
Error SQL72045: Script execution error.  The executed script:
CREATE PROCEDURE [dbo].[sp_sql_perf_mon_logger]
AS
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DECLARE @product_version AS NVARCHAR (128);
DECLARE @product_version_major AS DECIMAL (10, 2);
DECLARE @product_version_minor AS DECIMAL (10, 2);
DECLARE @sql_memory_mb AS INT;
DECLARE @os_memory_mb AS INT;
DECLARE @memory_available AS INT;
DECLARE @percent_idle_time AS REAL;
DECLARE @percent_processor_time AS REAL;
DECLARE @date_snapshot_current AS DATETIME;
DECLARE @date_snapshot_previous AS DATETIME;
DECLARE @sp_whoisactive_destination_table AS VARCHAR (255);
DECLARE @sql AS NVARCHAR (4000);
SET @product_version = CONVERT (NVARCHAR (128), serverproperty('productversion'));
SELECT @product_version_major = substring(@product_version, 1, charindex('.', @product_version) + 1),
       @product_version_minor = parsename(CONVERT (VARCHAR (32), @product_version), 2);
SELECT @sql_memory_mb = CONVERT (INT, value)
FROM   sys.configurations
WHERE  name = 'max server memory (mb)';
I


ComputerName : cma-cac1-sql
InstanceName : MSSQLSERVER
SqlInstance  : myserver
Database     : SQLWATCH
Status       : (Failed)

Expected behavior
Install to create database and deploy solution.

Screenshots
Unable to provide due to client environment.

Windows Server (please complete the following information):

  • OS Version: Windows Server 2016

SQL Server (please complete the following information):

  • SQL Version: SQL Server 2016 SP2
  • SQL Edition: Enterprise

better job management

Is your feature request related to a problem? Please describe.
Currently, jobs are simply scripted in the post-deploy script. This is only good for a small number of jobs. We need a metadata-driven solution.

Describe the solution you'd like
A simple approach where I can simply give JOB-NAME -> PROC-NAME-1,PROC-NAME-2, which would automatically create a job named JOB-NAME with two steps, PROC-NAME-1 and PROC-NAME-2.
This will only be used during deployment, but it will simplify how we do dev, as I am expecting quite a few more jobs; a rough sketch follows below.
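
A minimal sketch of what such a metadata-driven deployment loop could look like, assuming a hypothetical dbo.sqlwatch_config_job table with (job_name, step_no, proc_name) rows; the msdb procedures are the documented SQL Agent API:

declare @job_name sysname = 'JOB-NAME', @proc_name sysname, @command nvarchar(max);

-- Create the job shell once.
if not exists (select 1 from msdb.dbo.sysjobs where [name] = @job_name)
    exec msdb.dbo.sp_add_job @job_name = @job_name;

-- Add one step per procedure listed in the (hypothetical) config table.
declare step_cursor cursor local fast_forward for
    select proc_name from dbo.sqlwatch_config_job
    where job_name = @job_name order by step_no;
open step_cursor;
fetch next from step_cursor into @proc_name;
while @@fetch_status = 0
begin
    set @command = N'EXEC ' + @proc_name;
    exec msdb.dbo.sp_add_jobstep
         @job_name = @job_name,
         @step_name = @proc_name,
         @subsystem = 'TSQL',
         @command = @command,
         @database_name = 'SQLWATCH';
    fetch next from step_cursor into @proc_name;
end;
close step_cursor; deallocate step_cursor;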

rename sp_ to usp_

Not strictly a bug, but closer to a bug than a feature. User objects should not use the sp_ prefix. This prefix is reserved for MS objects; although the risk of MS releasing objects with the same names as ours is low, best practice is not to use the sp_ prefix.

create integrated unit testing

Currently we have an AppVeyor pipeline set up, but with hardcoded tests. As we add new procedures, I want AppVeyor to automatically detect and execute them.

point-in-time Performance Counters must be averaged over the period

Describe the bug
When we calculate the report, we take two snapshots and calculate the difference between them. This works great for delta counters and wait_stats, but not for point-in-time counters.

For a 15-minute interval, we could have the following scenario:

minute 1  - CPU 0%
minute 2  - CPU 100%
minute 3  - CPU 100%
minute 4  - CPU 100%
minute 5  - CPU 100%
minute 6  - CPU 100%
minute 7  - CPU 100%
minute 8  - CPU 100%
minute 9  - CPU 100%
minute 10 - CPU 100%
minute 11 - CPU 100%
minute 12 - CPU 100%
minute 13 - CPU 100%
minute 14 - CPU 100%
minute 15 - CPU 1%

And because of this piece:

when pc.cntr_type = 65792 then isnull(pc.cntr_value,0) -- point-in-time
.
.
.
from dbo.sql_perf_mon_perf_counters as pc
inner join [dbo].[vw_sql_perf_mon_time_intervals] s
	on pc.snapshot_time = s.last_snapshot_time 

we can see that the point-in-time performance value will be the one at s.last_snapshot_time, so 1%, and NOT averaged over the 15-minute period, which would give around 86%.

I will fix this.
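
A minimal sketch of the averaged form, reusing the tables and interval view from the query above (not the final fix):

-- Average point-in-time counters (cntr_type 65792) over the whole interval
-- instead of reading the single value at s.last_snapshot_time.
select s.snapshot_interval_end,
       pc.[object_name], pc.counter_name, pc.instance_name,
       cntr_value = avg(convert(real, isnull(pc.cntr_value, 0)))
from dbo.sql_perf_mon_perf_counters as pc
inner join [dbo].[vw_sql_perf_mon_time_intervals] s
    on pc.snapshot_time between s.first_snapshot_time and s.last_snapshot_time
where pc.cntr_type = 65792
group by s.snapshot_interval_end, pc.[object_name], pc.counter_name, pc.instance_name;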

Hard-coded Database Name "SQLWATCH" in DACPAC file

Describe the bug
Hard-coded database name still exists in the postdeploy script.

To Reproduce
Steps to reproduce the behavior:

  1. Download the dacpac from the development branch
  2. Double-click on the dacpac to extract the files
  3. Open the postdeploy.sql file extracted from the dacpac
  4. Line 844: @database_name=N'SQLWATCH',

Expected behavior
Line 844 should be @database_name=@database,

The maximum key length for a clustered index exceeded.

Describe the bug
When running the setup for the first time, I got two warnings.

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'setup.sql'
  2. Create 'pk_sql_perf_mon_perf_counters' and 'pk_sql_perf_mon_config_perf_counters'
  3. See error
Warning! The maximum key length for a clustered index is 900 bytes. The index 'pk_sql_perf_mon_perf_counters' has maximum length of 12008 bytes. For some combination of large values, the insert/update operation will fail.
Warning! The maximum key length for a clustered index is 900 bytes. The index 'pk_sql_perf_mon_config_perf_counters' has maximum length of 1536 bytes. For some combination of large values, the insert/update operation will fail.

Expected behavior
The key length should be within the limit.

Additional context
I can see differences in the type VARCHAR vs NVARCHAR + max size.

[object_name] varchar(4000) not null,
[instance_name] varchar(4000) not null,
[counter_name] varchar(4000) not null,

as well as

[object_name] nvarchar(256) not null,
[instance_name] nvarchar(256) not null,
[counter_name] nvarchar(256) not null,
[base_counter_name] nvarchar(256) null,

The usual object_name won't be longer than 128 characters.
The instance_name will most likely be OK at the same length (considering FQDNs).
Per MSDN, the counter_name type is nchar(128).

Unless there is a reason for longer fields, the table could use NVARCHAR(128) or, to be safe(?), NVARCHAR(256); a sketch follows below.

Any thoughts?
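
For illustration, the narrower definitions suggested above would look like this (a sketch, not the committed schema); at nvarchar(128), the three key columns use at most 3 × 256 = 768 bytes, safely under the 900-byte clustered key limit:

[object_name]       nvarchar(128) not null,
[instance_name]     nvarchar(128) not null,
[counter_name]      nvarchar(128) not null,
[base_counter_name] nvarchar(128) null,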

Procedure sp_sql_perf_mon_logger, Line 130 Cannot resolve the collation conflict

Describe the bug
Installing the DACPAC using SSMS, I receive this:

Could not deploy package.
Error SQL72014: .Net SqlClient Data Provider: Msg 468, Level 16, State 9, Procedure sp_sql_perf_mon_logger, Line 130 Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the like operation.

To Reproduce
Steps to reproduce the behavior:

  1. follow the guide DACPAC deployment using SSMS

Expected behavior
It should install, right?

Screenshots

collation


Smartphone (please complete the following information):

  • Device: Desktop
  • OS: Windows 10 Enterprise

SSMS:

Microsoft SQL Server Management Studio 13.0.16106.4
Microsoft Analysis Services Client Tools 13.0.1700.441
Microsoft Data Access Components (MDAC) 10.0.15063.0
Microsoft MSXML 3.0 4.0 6.0
Microsoft Internet Explorer 9.11.15063.0
Microsoft .NET Framework 4.0.30319.42000
Operating System 6.3.15063

Additional context
The whole error is:

TITLE: Microsoft SQL Server Management Studio

Could not deploy package.
Error SQL72014: .Net SqlClient Data Provider: Msg 468, Level 16, State 9, Procedure sp_sql_perf_mon_logger, Line 130 Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the like operation.
Error SQL72045: Script execution error. The executed script:
CREATE PROCEDURE [dbo].[sp_sql_perf_mon_logger]
AS
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DECLARE @product_version AS NVARCHAR (128);
DECLARE @product_version_major AS DECIMAL (10, 2);
DECLARE @product_version_minor AS DECIMAL (10, 2);
DECLARE @sql_memory_mb AS INT;
DECLARE @os_memory_mb AS INT;
DECLARE @memory_available AS INT;
DECLARE @percent_idle_time AS REAL;
DECLARE @percent_processor_time AS REAL;
DECLARE @date_snapshot_current AS DATETIME;
DECLARE @date_snapshot_previous AS DATETIME;
DECLARE @sp_whoisactive_destination_table AS VARCHAR (255);
DECLARE @Sql AS NVARCHAR (4000);
SET @product_version = CONVERT (NVARCHAR (128), serverproperty('productversioN'));
SELECT @product_version_major = substring(@product_version, 1, charindex('.', @product_version) + 1),
@product_version_minor = parsename(CONVERT (VARCHAR (32), @product_version), 2);
SELECT @sql_memory_mb = CONVERT (INT, value)
FROM sys.configurations
WHERE name = 'max server memory (mb)';
I
(Microsoft.SqlServer.Dac)


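
The usual remedy for this class of failure is to pin the collation at the comparison site. A sketch of the kind of change involved (illustrative, not the project's actual patch), applied to the LIKE join used by the perf-counter queries:

-- Force both sides of the comparison to the database collation so the
-- module compiles regardless of the server or instance collation.
on rtrim(pc.object_name) collate database_default
   like ('%' + sc.object_name) collate database_default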

Missing index report

Is your feature request related to a problem? Please describe.
N/A

Describe the solution you'd like
Likely a daily or weekly snapshot of the missing indexes SQL Server reports; this will allow us to track how they change over time, if at all. See the sketch at the end of this issue.

Describe alternatives you've considered
N/A

Additional context
N/A
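
A minimal sketch of the kind of snapshot this could take, assuming a hypothetical dbo.sqlwatch_missing_index_history table; the three missing-index DMVs are standard:

insert into dbo.sqlwatch_missing_index_history
        (snapshot_time, [statement], equality_columns, inequality_columns,
         included_columns, user_seeks, avg_user_impact)
select getutcdate(), d.[statement], d.equality_columns, d.inequality_columns,
       d.included_columns, gs.user_seeks, gs.avg_user_impact
from sys.dm_db_missing_index_details as d
inner join sys.dm_db_missing_index_groups as g
        on g.index_handle = d.index_handle
inner join sys.dm_db_missing_index_group_stats as gs
        on gs.group_handle = g.index_group_handle;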

move performance calculations from Power BI to the database

It has been suggested to me that it would be handy to have performance results exposed in the database itself as opposed to just Power BI. This would allow DevOps integrations to have pre and post deployment performance based tests. I like this suggestion.

Currently most of the calculations happen in Power BI (DAX). There are two reasons for this:

  1. I can be implementing Power BI changes without having to deploy database changes, potentially in production environments.
  2. I can use DAX measures.

I imagine DevOps would only require raw snapshot pre and post deployment deltas to establish performance impact and not the entire suite of DAX calculations.

However, once the projects settles and we capture most of what we need the queries will not be changing a lot between versions and therefore the point one above will not have a strong argument anymore. Also, when the project gains trust subsequent and automated schema changes may not be an issue.

I will keep this in mind.

time change or time-zone change are not supported

Upon changing the clocks backwards (from summer to winter time), or changing the time zone to one behind the current zone, data collection will fail with a primary key violation as it tries to insert the same time again.
The fix is to use GETUTCDATE() on insert and a time-offset conversion when selecting from the tables; a sketch follows below.
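
A minimal sketch of that pattern (the time zone name is illustrative; AT TIME ZONE requires SQL Server 2016+):

-- Collect in UTC so DST or time-zone changes can never repeat a key value.
declare @snapshot_time datetime = getutcdate();
-- ...all inserts then use @snapshot_time as the snapshot key...

-- Convert back to a local time only when reading (SQL Server 2016+).
select snapshot_time_local = snapshot_time
           at time zone 'UTC' at time zone 'GMT Standard Time'
from dbo.sql_perf_mon_wait_stats;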

Ability to handle reboots

Is your feature request related to a problem? Please describe.
In SQL Server performance data is zeroed upon reboot. This means that any cumulative counters that we calculate deltas for will come back as negative completely obscuring results.

Describe the solution you'd like
The solution is to check whether the second snapshot value is greater than the first, as counters are ever-increasing until a reboot. See the sketch after this issue.

Describe alternatives you've considered
N/A

Additional context
N/A
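
A minimal sketch of the guard, as a fragment using the w1/w2 aliases from the wait-stats delta query shown earlier on this page:

-- If the later snapshot is smaller, the counters were zeroed by a restart;
-- fall back to the raw second value instead of a negative delta.
[wait_time_ms] = case when w2.wait_time_ms >= isnull(w1.wait_time_ms, 0)
                      then w2.wait_time_ms - isnull(w1.wait_time_ms, 0)
                      else w2.wait_time_ms
                 end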

[New Feature] Backup and maintenance history

Is your feature request related to a problem? Please describe.
N/A

Describe the solution you'd like
We want to be able to show and track the status of backups, including size, to help capacity planning. Possibly find good candidates for DIFF/FULL backups by calculating extent changes.

Describe alternatives you've considered
N/A

Additional context
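
Backup history is already recorded in msdb, so the collector mostly needs to copy new rows; an illustrative query (not the project's implementation):

select bs.database_name,
       bs.[type],                                  -- D = full, I = diff, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       backup_size_mb = bs.backup_size / 1024.0 / 1024.0
from msdb.dbo.backupset as bs
where bs.backup_finish_date > dateadd(day, -7, getutcdate());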

[New Feature] Auto-update consideration

Once we have deployment via dbatools, we could have a job triggering a deployment/update once a week, which would provide a simple self-update mechanism. It would mean the server would have to have access to the internet, so it's not for everyone, but it may be something to consider if there is an appetite.

Collation issue in LIKE operator

Did you check DOCS to make sure there is no workaround?
https://sqlwatch.io/docs/

Describe the bug

(212,1): SQL72014: .Net SqlClient Data Provider: Msg 468, Level 16, State 9, Procedure sp_sql_perf_mon_logger, Line 130 Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Cyrillic_General_CI_AS" in the like operation.

To Reproduce
Steps to reproduce the behavior:

  1. Install sqlwatch in SQL Server with Cyrillic_General_CI_AS

Expected behavior
It should install without error.

Windows Server (please complete the following information):

  • Windows Server 2016 Standard 10.0 (Build 14393: )

SQL Server (please complete the following information):

  • Microsoft SQL Server 2017 (RTM-CU11) (KB4462262) - 14.0.3038.14 (X64) Developer Edition (64-bit)

SQLWATCH version (from DACPAC or from sysinstances)

  • 1.3.0.0
  • files date at 2018-10-07

Additional context
Published from VS Community 15.8.7

[New Feature] Capturing Drive Space

Is your feature request related to a problem? Please describe.
Capturing drive space usage (drive name, total drive space, free space, etc.). This will allow the user to track how drive space usage changes. This collection could be beneficial if there is an appetite to add a reporting mechanism in the future, as low drive space can be a "show stopper" in some cases. It could also be used to forecast future drive growth.

Describe the solution you'd like
There are a number of ways this could be collected, each with pros and cons. SQL Server does not provide a DMV that covers all drives, so important values like total drive size are missing for some of the drives that may reside on a server.

The best solution would be to use command line/powershell to pull the information for all drives, but this would be assuming that the user had xp_cmdshell enabled which wouldn't be ideal.

The second-best solution would be to use the sys.dm_os_volume_stats system function. This returns all of the necessary information, but only for drives that contain SQL Server data and log files. This gets us closer to the requested feature but still has its pitfalls: a drive that contains the OS but no SQL Server data/log files would not be included in the collection.

Describe alternatives you've considered
Options above. - The PhysicalDisk PerfMon counters do not include total drive space.

Additional context
Let me know if there is a better solution than the ones described above. If the xp_cmdshell requirement is not an issue, then I would say that's the route to go. There could always be a check: if xp_cmdshell is enabled, collect; if not, skip. Thanks! A sketch of the DMV route follows below.
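
For reference, a minimal sketch of the sys.dm_os_volume_stats route described above (illustrative query; it only sees volumes hosting SQL Server files, but needs no xp_cmdshell):

select distinct
       vs.volume_mount_point,
       total_mb = vs.total_bytes / 1024 / 1024,
       free_mb  = vs.available_bytes / 1024 / 1024
from sys.master_files as mf
cross apply sys.dm_os_volume_stats(mf.database_id, mf.[file_id]) as vs;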

[New Feature] Server configuration overview

Is your feature request related to a problem? Please describe.
N/A

Describe the solution you'd like
Configuration change history, to be able to correlate performance issues with configuration changes. A collateral benefit will be a dashboard page showing in-depth information about the instance. A collection sketch follows below.

Describe alternatives you've considered
N/A

Additional context
N/A
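
A minimal sketch of the collection side, assuming a hypothetical dbo.sqlwatch_config_history table; diffing consecutive snapshots then surfaces the changes:

insert into dbo.sqlwatch_config_history (snapshot_time, configuration_id, [name], value_in_use)
select getutcdate(), c.configuration_id, c.[name], convert(bigint, c.value_in_use)
from sys.configurations as c;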

[New Feature] Top queries report

Is your feature request related to a problem? Please describe.
N/A

Describe the solution you'd like
Similar to the missing index report, this would be a snapshot of the top queries in a given time slice; see the sketch after this issue.

Describe alternatives you've considered
N/A

Additional context
N/A
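
An illustrative snapshot query over the plan cache (not the project's implementation):

-- Top 20 statements by cumulative CPU since they entered the plan cache.
select top (20)
       total_worker_time_ms = qs.total_worker_time / 1000,
       qs.execution_count,
       query_text = substring(st.[text], (qs.statement_start_offset / 2) + 1,
           ((case when qs.statement_end_offset = -1 then datalength(st.[text])
                  else qs.statement_end_offset end
             - qs.statement_start_offset) / 2) + 1)
from sys.dm_exec_query_stats as qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as st
order by qs.total_worker_time desc;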

Index usage history

Is your feature request related to a problem? Please describe.
N/A

Describe the solution you'd like
Index utilisation history, to be able to see which indexes are used and which are not; see the sketch after this issue.

Describe alternatives you've considered
N/A

Additional context
N/A
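
An illustrative per-snapshot query; note that sys.dm_db_index_usage_stats resets on restart, so a collector would also need the reboot-delta handling discussed above:

select us.[object_id], us.index_id,
       reads  = us.user_seeks + us.user_scans + us.user_lookups,
       writes = us.user_updates,
       us.last_user_seek
from sys.dm_db_index_usage_stats as us
where us.database_id = db_id();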

DACPAC 1.0.0.0 Server Collation SQL_Latin1_General_CP1_CI_AS conflict

Describe the bug
Deployment Error DACPAC 1.0.0.0

Could not deploy package.
Error SQL72014: .Net SqlClient Data Provider: Msg 468, Level 16, State 9, Procedure sp_sql_perf_mon_logger, Line 130 Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the like operation.
Error SQL72045: Script execution error. The executed script:
CREATE PROCEDURE [dbo].[sp_sql_perf_mon_logger]
AS
...
To Reproduce
Just deploy the DACPAC to a new SQL Server.

Expected behavior
deployment without error


Desktop (please complete the following information):

  • OS: W2k12R2
  • SQL2k16Sp2 Remote

Additional context
Server Collation: SQL_Latin1_General_CP1_CI_AS
Server Lang: EN
SSMS 2017 17.7 EN

[New Feature] SQL ERRORLOG Analysis

Is your feature request related to a problem? Please describe.
Inability to easily see and filter ERRORLOG across multiple instances

Describe the solution you'd like
Ideally, read a filtered ERRORLOG into a table, excluding successful logins and backups, for better reporting and analysis; see the sketch after this issue.

Describe alternatives you've considered
N/A

Additional context
N/A
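
A minimal sketch using the documented xp_readerrorlog parameters (log 0 = current, type 1 = SQL Server error log); the LIKE filters are illustrative:

declare @errorlog table (log_date datetime, process_info nvarchar(64), [text] nvarchar(max));

insert into @errorlog
exec sys.xp_readerrorlog 0, 1;

select log_date, process_info, [text]
from @errorlog
where [text] not like '%Login succeeded%'
  and [text] not like 'BACKUP DATABASE successfully%';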

aligned naming standard

Is your feature request related to a problem? Please describe.
The current naming standard is quite long. This is because this used to be a stand-alone, ad-hoc solution that would often go into tempdb or any other user db, and therefore naming had to be distinct.

Describe the solution you'd like
Now that we have a VS solution with its own database, something like:
dbo.sql_perf_mon_config_perf_counters --> dbo.*_config_perf_counters

where * is vw, tbl, usp, fn....

Some more ideas :)

Hi Marcin,

Great project - and one I think a lot of people will find value in!

Agree with Chrissy's comments - also some ideas from me:

A mechanism for people to raise new feature requests or bugs via Issues (dbatools have a great mechanism in place for this already, perhaps this could use something similar?)

  • Potential issue with setup.sql later down the line if you make changes
    If you were to make changes to setup.sql later down the line and we deployed to a server which already had the earlier version deployed, it currently wouldn't upgrade some objects to the new version. This is less of an issue, I guess, while this is deployed to tempdb, as it would be dropped if the instance restarted :) Could it be moved to SSDT with a PowerShell cmdlet to push out changes to servers?

  • Centralised perf repo
    Would be great to monitor multiple servers and push that data into a centralised repository. For this, you'd need to define a standard mechanism to pull the data from the source servers and enter into a centralised repo. You could then hook the PowerBI reports into the centralised repo. It would be great if this centralised repo could be on-prem or in the cloud, so that we could hook PowerBI reports straight into DB from Azure for example.

dbo.sp_sql_perf_mon_retention not found in the tempdb database after the installation

I have installed the solution. When I try to run the two jobs manually, DBA-PERF-LOGGER succeeds but DBA-PERF-LOGGER-RETENTION fails with this message:
Message
Executed as user: NT AUTHORITY\NETWORK SERVICE. Could not find stored procedure 'dbo.sp_sql_perf_mon_retention'. [SQLSTATE 42000] (Error 2812). The step failed.

The description of latency (ms) is misleading

reported by @larspl on slack (Thank you)

The disk latency (ms) that comes from the io_stall_read/write AVG counters needs to be renamed; it should show the word "stall" in the dashboard.
I got a call yesterday from a customer: he thinks 26523426 ms is too high for one IO.
I confirm.

My interpretation: it is understood that storage IO latency would be measured per storage transaction and averaged over the measurement period or, preferably, the max value taken from the measurement period. However, the dashboard shows cumulative stall across all files and all IO transactions, and therefore:

  • either description must be changed to reflect this (as suggested by @larspl)
  • or, the actual IO latency calculated

I will investigate further.
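
For reference, the per-IO form of the calculation (delta of stall divided by delta of IO count) already appears in the Power BI fn_get_file_statistics query later on this page; in isolation, the fragment is:

-- Average latency per IO over the interval: delta stall / delta IO count.
[io_latency_ms] = case
    when (fs2.num_of_reads - fs1.num_of_reads)
       + (fs2.num_of_writes - fs1.num_of_writes) <= 0 then 0
    else ((fs2.io_stall_read_ms - fs1.io_stall_read_ms)
        + (fs2.io_stall_write_ms - fs1.io_stall_write_ms))
       / ((fs2.num_of_reads - fs1.num_of_reads)
        + (fs2.num_of_writes - fs1.num_of_writes))
    end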

Autoconfiguration of available databases

Is your feature request related to a problem? Please describe.
When a new database is added after the solution has been installed it is not being monitored.

Describe the solution you'd like
An autoconfiguration job that will be periodically checking for and adding newly found databases.

Describe alternatives you've considered
Using sys.databases, but this would require permission on sys.databases for users who just want to view the report; not a good idea.

Additional context
We need a database dimension for the report. This is currently stored in dbo.sql_perf_mon_database and populated at the time of installation. When a new database is added after the solution has been deployed, it will not be shown on the report, although performance data will still be collected.
This is because the solution was originally designed as a temporary one, to be run on an ad-hoc basis. A sketch of the autoconfiguration step is below.
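
A minimal sketch of such a step, run from the existing collector job so that report viewers never need sys.databases rights themselves (the database_name column is assumed from context):

-- Add any database not yet present in the report dimension.
insert into dbo.sql_perf_mon_database (database_name)
select d.[name]
from sys.databases as d
where not exists (select 1
                  from dbo.sql_perf_mon_database as t
                  where t.database_name = d.[name]);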

[New Feature] Alerts

Utilise Database Mail to send critical alerts.
In the future alerts will be configurable, but for now we will start with the basic high-severity alerts.

This will require Database Mail to be preconfigured and a Database Mail profile to be saved in SQLWATCH; a sketch follows below.
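
A minimal sketch of the send call, assuming a preconfigured profile name and an illustrative recipient:

exec msdb.dbo.sp_send_dbmail
     @profile_name = 'SQLWATCH',              -- assumed profile name
     @recipients   = 'dba@example.com',       -- illustrative
     @subject      = 'SQLWATCH alert: high severity error',
     @body         = 'See the SQLWATCH dashboard for details.';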

Alerting when "Days Until Full" is between 0 and 2 (configurable)

Is your feature request related to a problem? Please describe.
The DiskUtilisation-Volume Data page shows Growth, Disk Free and Days Until Full.

Describe the solution you'd like
It would be a perfect alert for every SQL admin to be notified when this value is between 0 and 2 days (configurable).
If the value is smaller, there may be log shipping or clean-up jobs happening, so the alert only needs to fire if the value is between 0 and, I think, 2 days.

Describe alternatives you've considered
We would need to move the calculation of "Days Until Full" into a new SQL stored procedure and call this alerting proc from SQL Agent each hour, maybe in the same job where the IO data is collected
(SQLWATCH-LOGGER-DISK-UTILISATION), or in a new one (SQLWATCH-ALERT) or similar.

For development, we can take a look at
exec sys.sp_check_log_shipping_monitor_alert ... here we get different error/alert outputs from one proc.

[New Feature] Show relations between databases and disks

we have two reports for disk utilisation:

  • database
  • os disks

It would be good to marry them up so we know which databases sit on which disk and how much space they occupy. This is currently not as transparent as I would have wished; a sketch of the join is below.
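
An illustrative join of database files to their hosting volumes (sys.master_files reports sizes in 8 KB pages):

select [database_name] = db_name(mf.database_id),
       vs.volume_mount_point,
       file_size_mb = mf.size * 8 / 1024
from sys.master_files as mf
cross apply sys.dm_os_volume_stats(mf.database_id, mf.[file_id]) as vs;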

deploying the DACPAC on SQL Server 2016 SP2, Windows 2012 R2

Did you check DOCS to make sure there is no workaround?
https://sqlwatch.io/docs/

Describe the bug
When I try to deploy the DACPAC for the first time, I get the errors below. Please let me know if you need anything else from me.

Windows Server (please complete the following information):

  • OS Version: Windows 2012 R2

SQL Server (please complete the following information):

  • SQL Version: SQL Server 2016 SP3 CU3
  • SQL Edition: Enterprise

SQL Server Management Studio (SSMS -> about -> copy info):
Microsoft SQL Server Management Studio 14.0.17028.0
Microsoft Analysis Services Client Tools 14.0.338.80
Microsoft Data Access Components (MDAC) 6.3.9600.16384
Microsoft MSXML 3.0 6.0
Microsoft Internet Explorer 9.11.9600.18817
Microsoft .NET Framework 4.0.30319.42000
Operating System 6.3.9600

SQLWATCH version (from DACPAC or from sysinstances)

  • 1.3.2.0

Additional context
Here is the error message:
Could not deploy package.
Error SQL72014: .Net SqlClient Data Provider: Msg 14607, Level 16, State 1, Procedure sysmail_verify_profile_sp, Line 42 profile name is not valid
Error SQL72045: Script execution error. The executed script:
CREATE DATABASE [$(DatabaseName)]
ON
PRIMARY(NAME = [$(DatabaseName)], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_Primary.mdf')
LOG ON (NAME = [$(DatabaseName)_log], FILENAME = N'$(DefaultLogPath)$(DefaultFilePrefix)_Primary.ldf') COLLATE SQL_Latin1_General_CP1_CI_AS;

(Microsoft.SqlServer.Dac)

[New Feature] Capturing Deadlocks

Is your feature request related to a problem? Please describe.
Capturing deadlock information will help a user identify the queries/processes, lock types, graphs, etc. involved which will assist in implementing a solution to handling the deadlock(s).

Describe the solution you'd like
From a technical perspective, there are a number of ways to capture the deadlock information. My suggestion would be either using an extended event or just capturing what's provided in the ring buffer; see the sketch after this issue. As far as displaying the data, I assume we would want a visualization similar to the whoisactive tab.

Describe alternatives you've considered
N/A

Additional context
N/A
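
A minimal sketch of the ring-buffer route, reading deadlock graphs from the always-on system_health Extended Events session:

select event_time     = xed.value('@timestamp', 'datetime2'),
       deadlock_graph = xed.query('(data/value/deadlock)[1]')
from (
    select target_data = cast(t.target_data as xml)
    from sys.dm_xe_sessions as s
    inner join sys.dm_xe_session_targets as t
            on s.[address] = t.event_session_address
    where s.[name] = 'system_health'
      and t.target_name = 'ring_buffer'
) as src
cross apply src.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') as x(xed);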

Installation Options - Finding Power BI Template files

I can see the benefit of including an installation command within dbatools. However, the downside is that a user is not made aware of where to find the PBIT files that are pulled down from GitHub when they perform the installation.

I would recommend removing the commands from dbatools and simply building out your own PowerShell module. You can always put a dependency upon dbatools and utilize the structure for deployments, but you get more visibility by letting your solution be a sqlwatch module on the PowerShell Gallery. In addition to that you can then set the path to be a more easily accessible (or known) directory such as the root of the module folder.

As it stands right now, you are using the temporary folder for dbatools, which is buried within a user's profile path on Windows. If, for example, you had your own module that pulled the repository down, I would know to just go look in my PSModulePath for sqlwatch and find a folder of, say, PowerBITemplates (I would also put the templates in their own folder for better organization).

Some ideas

Awesome project! Looking forward to seeing more.

Something to consider - making a PowerShell module that helps with the installation. You can include Invoke-SqlCmd2.ps1 in an internal\functions directory, then use it to execute commands without an SMO dependency.

The one thing that's 🤔 is which prefix to use (to avoid clashing with Microsoft). I'm thinking SW and then aliasing two commands:
Install-SWatch -> Install-SqlWatch
Uninstall-SWatch -> Uninstall-SqlWatch

Then you can make a command like
Install-SWatch -SqlInstance sql01 -Database DBA
Uninstall-SqlWatch
Reset-SWDatabase -SqlInstance sql01 -Database DBA
Read-SWData -SqlInstance sql01 -Database DBA

If you set a dependency to PSFramework, you can write the app config locally then do like
Get-SWConfig
Set-SWConfig
Which writes the default sqlinstance and db, so users can just execute

Read-SWData

Without -SqlInstance sql01 -Database DBA and it'll still work.

In the future, if you'd be interested, we can host you at /sqlcollaborative - you'd have admin privs to your repo but have greater exposure.

Object Reference not set to an instance of an object - PowerBI

I am setting up a new report. Within Power BI, after applying the changes following the editing of the list of servers, I got the following error multiple times after being prompted to run the Native Database Queries. Cancelling then triggers a cascade of further errors and more database query prompts. As it stands, I am unable to test the tool.

I am using Beta 3 and have configured the servers with sqlwatch databases as instructed.

imgur link

Details are:

Feedback Type:
Frown (Error)

Error Message:
Object reference not set to an instance of an object.

Stack Trace:
   at Microsoft.Mashup.Evaluator.ChannelMessenger.CreateChannel()
   at Microsoft.PowerBI.Client.OleDbProvider.PowerBIMashupDataSource.EvaluationSession.Dispose()
   at Microsoft.Mashup.Host.ProviderShared.PackageMashupEvaluator.<>c__DisplayClass7_0.<Evaluate>b__1(Object o)
   at Microsoft.Mashup.Evaluator.Interface.InvokeManyAction`1.CheckDone()
   at Microsoft.Mashup.OleDbProvider.DataHost.MashupEvaluator.<>c__DisplayClass4_1.<Evaluate>b__1()
   at Microsoft.Mashup.OleDbProvider.DataHost.AsyncResultIDataReaderSourceHelper.InvokeThenOnDispose(Action`1 callback, AsyncResult`1 result, Action action)
   at Microsoft.Mashup.OleDbProvider.DataHost.MashupEvaluator.<>c__DisplayClass4_0.<Evaluate>b__0(EvaluationResult2`1 result)
   at Microsoft.Mashup.Evaluator.Interface.EvaluationResultExtensions.InvokeThenOnDispose(Action`1 callback, EvaluationResult2`1 result, Action action)
   at Microsoft.Mashup.Evaluator.Interface.EvaluationResultExtensions.InvokeThenOnDispose[T](Action`1 callback, EvaluationResult2`1 result, Action action)
   at Microsoft.Mashup.Evaluator.LimitedDocumentEvaluatorFactory.Evaluation`1.Complete(EvaluationResult2`1 result)
   at Microsoft.Mashup.Evaluator.LimitedDocumentEvaluatorFactory.Evaluation`1.Cancel()
   at Microsoft.Mashup.Evaluator.Interface.CompositeEvaluation.Cancel()
   at Microsoft.Mashup.Host.ProviderShared.NotifyingMashupEvaluator.MashupEvaluation.Cancel()
   at Microsoft.Mashup.OleDbProvider.MashupRowset.AbortIfNotComplete(Boolean timedOut)
   at Microsoft.Mashup.OleDbProvider.MashupRowset.Microsoft.OleDb.IDBAsynchStatus.Abort(HCHAPTER hChapter, DBASYNCHOP eOperation)
   at Microsoft.Mashup.OleDbProvider.TracingRowset.Microsoft.OleDb.IDBAsynchStatus.Abort(HCHAPTER hChapter, DBASYNCHOP eOperation)

Stack Trace Message:
Object reference not set to an instance of an object.

Invocation Stack Trace:
   at Microsoft.Mashup.Host.Document.ExceptionExtensions.GetCurrentInvocationStackTrace()
   at Microsoft.Mashup.Client.UI.Shared.StackTraceInfo..ctor(String exceptionStackTrace, String invocationStackTrace, String exceptionMessage)
   at Microsoft.PowerBI.Client.Windows.Telemetry.PowerBIUserFeedbackServices.GetStackTraceInfo(Exception e)
   at Microsoft.PowerBI.Client.Windows.Telemetry.PowerBIUserFeedbackServices.ReportException(IWindowHandle activeWindow, IUIHost uiHost, FeedbackPackageInfo feedbackPackageInfo, Exception e, Boolean useGDICapture)
   at Microsoft.Mashup.Client.UI.Shared.UnexpectedExceptionHandler.<>c__DisplayClass14_0.<HandleException>b__0()
   at Microsoft.Mashup.Host.Document.SynchronizationContextExtensions.<>c__DisplayClass0_1.<SendAndMarshalExceptions>b__0(Object null)
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Delegate.DynamicInvokeImpl(Object[] args)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
   at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
   at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
   at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
   at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
   at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
   at System.Windows.Forms.Form.ShowDialog(IWin32Window owner)
   at Microsoft.Mashup.Client.UI.Shared.WebDialogs.WebDialog.<>n__0(IWindowHandle owner)
   at Microsoft.Mashup.Client.UI.Shared.WindowManager.ShowModal[T](T dialog, Func`1 showModalFunction)
   at Microsoft.Mashup.Client.UI.Shared.Ux.FloatingDialog.SimpleDialog.NativeQueryPermissionDialog.Show(IWindowHandle owner, IUIHost uiHost, PackageReference packageReference, QueryPermissionChallenge queryPermissionChallenge)
   at Microsoft.Mashup.Client.UI.Shared.QueriesExtensions.TryResolveQueryPermissionChallenge(Queries queries, QueryPermissionChallenge queryPermissionChallenge, IWindowHandle owner, IUIHost uiHost)
   at Microsoft.Mashup.Host.Document.SynchronizationContextExtensions.<>c__DisplayClass1_0`1.<SendAndMarshalExceptions>b__0()
   at Microsoft.Mashup.Host.Document.SynchronizationContextExtensions.<>c__DisplayClass0_1.<SendAndMarshalExceptions>b__0(Object null)
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Delegate.DynamicInvokeImpl(Object[] args)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
   at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
   at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
   at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
   at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
   at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
   at System.Windows.Forms.Form.ShowDialog(IWin32Window owner)
   at Microsoft.Mashup.Client.UI.Shared.WebDialogs.WebDialog.<>n__0(IWindowHandle owner)
   at Microsoft.Mashup.Client.UI.Shared.WindowManager.ShowModal[T](T dialog, Func`1 showModalFunction)
   at Microsoft.Mashup.Client.UI.Shared.Ux.FloatingDialog.FloatingDialog.ShowDialogWithTimeout(IWindowHandle owner, Nullable`1 showTimeout)
   at Microsoft.Mashup.Client.UI.Shared.Ux.WindowService.ShowDialogWithTimeout(FloatingDialog dialog, Nullable`1 showTimeout)
   at Microsoft.PowerBI.Client.Windows.FloatingDialog.KoLoadToReportDialog.RunEvaluationsAndLoadWithDialog(IPowerBIWindowService windowService, IEnumerable`1 queriesToLoad, IEnumerable`1 queriesRequiringTableDefinitionGeneration, Boolean requireFullDataRefresh)
   at Microsoft.PowerBI.Client.Windows.FloatingDialog.KoLoadToReportDialog.<>c__DisplayClass26_0.<TryShowDialog>b__0(IWindowHandle ownerWindow)
   at Microsoft.PowerBI.Client.Windows.Services.PowerBIWindowServiceExtensions.<>c__DisplayClass0_0.<ExecuteWithLegacyOwnerWindow>b__0(IWindowHandle ownerWindow)
   at Microsoft.PowerBI.Client.Windows.FloatingDialog.KoLoadToReportDialog.TryShowDialog(Report report, IPowerBIWindowService windowService, IQueryServices queryServices, IUIHost uiHost, IEnumerable`1 queriesToLoad, IEnumerable`1 queriesRequiringTableDefinitionGeneration, String loadReason, LocalizedString title, Boolean requireFullDataRefresh, ModelChange modelChangeToExecuteBeforeSchemaSync)
   at Microsoft.PowerBI.Client.Windows.FloatingDialog.KoLoadToReportDialog.TryApplyQueryChanges(Report report, IPowerBIWindowService windowService, IQueryServices queryServices, IUIHost uiHost, String loadReason)
   at Microsoft.PowerBI.Client.Windows.Services.WarningDetectionService.<DetectReportLoadWarning>b__17_2()
   at Microsoft.Mashup.Host.Document.ExceptionHandlerExtensions.HandleExceptions(IExceptionHandler exceptionHandler, Action action)
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Delegate.DynamicInvokeImpl(Object[] args)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
   at System.Windows.Forms.Control.MarshaledInvoke(Control caller, Delegate method, Object[] args, Boolean synchronous)
   at System.Windows.Forms.Control.Invoke(Delegate method, Object[] args)
   at System.Windows.Forms.WindowsFormsSynchronizationContext.Send(SendOrPostCallback d, Object state)
   at Microsoft.PowerBI.Client.Windows.Services.WarningNotificationService.ExecuteOnUiThread(Action action)
   at Microsoft.PowerBI.Client.Windows.Services.WarningAction.<>c__DisplayClass17_0.<ToFunction>b__0()
   at Microsoft.PowerBI.Client.Windows.Services.WarningAction.ExecuteAndCompleteTask(Func`1 func)
   at Microsoft.Mashup.Host.Document.ExceptionHandlerExtensions.HandleExceptions(IExceptionHandler exceptionHandler, Action action)
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Delegate.DynamicInvokeImpl(Object[] args)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
   at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
   at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
   at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
   at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
   at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
   at System.Windows.Forms.Form.ShowDialog(IWin32Window owner)
   at Microsoft.Mashup.Client.UI.Shared.WebDialogs.WebDialog.<>n__0(IWindowHandle owner)
   at Microsoft.Mashup.Client.UI.Shared.WindowManager.ShowModal[T](T dialog, Func`1 showModalFunction)
   at Microsoft.PowerBI.Client.Program.<>c__DisplayClass4_0.<Main>b__1()
   at Microsoft.PowerBI.Client.Windows.IExceptionHandlerExtensions.<>c__DisplayClass3_0.<HandleExceptionsWithNestedTasks>b__0()
   at Microsoft.Mashup.Host.Document.ExceptionHandlerExtensions.HandleExceptions(IExceptionHandler exceptionHandler, Action action)
   at Microsoft.PowerBI.Client.Program.Main(String[] args)


Model Default Mode:
Import

Snapshot Trace Logs:
C:\Users\atay\Microsoft\Power BI Desktop Store App\FrownSnapShot1903185833.zip

Performance Trace Logs:
C:\Users\atay\Microsoft\Power BI Desktop Store App\PerformanceTraces.zip

Disabled Preview Features:
PBI_shapeMapVisualEnabled
MIntellisense
PBI_SpanishLinguisticsEnabled
PBI_PdfImport
PBI_ColumnProfiling
PBI_variationUIChange
PBI_PythonSupportEnabled
PBI_showIncrementalRefreshPolicy
PBI_showManageAggregations
PBI_FuzzyMatching
PBI_EnableWebDiagramView
PBI_improvedFilterExperience
PBI_qnaLiveConnect
PBI_keyDrivers

Disabled DirectQuery Options:
TreatHanaAsRelationalSource

Cloud:
GlobalCloud

Formulas:


section Section1;

[ Description = "The reference list of servers must be defined upfront. This dropdown allows to quickly download data for a selected server only. Handy when investigating issue on a production server, saves having to amend the master list of servers." ]
shared #"Server Name" = "1. All" meta [IsParameterQuery=true, ExpressionIdentifier=#"Quick download server list", Type="Any", IsParameterQueryRequired=true];

[ Description = "Type GETDATE() to get most recent data or a specific date and time. For example '2018-12-31 23:59:59' (note the quotes) will mark the end of the timeline." ]
shared #"Report end time (datetime)" = "'2019-02-08 10:00'" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true];

[ Description = "How many hours to import going back from the Report end time. For example if this parameter = 4 and End Time = GETDATE() the report will show last 4 hours from now. This way you can travel back in time and see any time slice of historical performance data." ]
shared #"Report window (hours)" = "1" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true];

[ Description = "type DEFAULT to let it automatically calculate best inteval based on report windo or you can specify custom interval. For example 5 minute inteval will show data points every 5 minutes and 60 minutes every hour. For large windows i.e. last 30 days you will want to aggregate over longer interval and for shorter windows i.e. 1 hour you will want to investigate at 2 minute intervals." ]
shared #"Report interval minutes" = "DEFAULT" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true];

shared fn_get_time_intervals = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_time_intervals */
snapshot_interval_end 
from [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&")"]),
    #"Inserted Time" = Table.AddColumn(Source, "Time", each DateTime.Time([snapshot_interval_end]), type time),
    #"Renamed Columns" = Table.RenameColumns(#"Inserted Time",{{"Time", "snapshot_interval_end_timeonly"}}),
    #"Inserted Date" = Table.AddColumn(#"Renamed Columns", "Date", each DateTime.Date([snapshot_interval_end]), type date),
    #"Renamed Columns1" = Table.RenameColumns(#"Inserted Date",{{"Date", "snapshot_interval_end_dateonly"}}),
    #"Removed Columns" = Table.RemoveColumns(#"Renamed Columns1",{"snapshot_interval_end_dateonly", "snapshot_interval_end_timeonly"})
in
    #"Removed Columns"
in Source;

shared fn_get_server_info = let 
    Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_server_info */
*, sql_version=@@VERSION from dbo.sql_perf_mon_server"])
in
    Source
in Source;

shared fn_get_wait_statistics = let 
    Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_wait_statistics */
	 [report_time] = s.[snapshot_interval_end]  		
	,[wait_time_ms] = sum(w2.[wait_time_ms] - isnull(w1.[wait_time_ms],0)) 	
        ,w2.wait_type
FROM [dbo].[sql_perf_mon_wait_stats] w2
INNER JOIN [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s	
	on w2.snapshot_time = s.last_snapshot_time
	and w2.snapshot_type_id = s.snapshot_type_id
LEFT JOIN [dbo].[sql_perf_mon_wait_stats] w1
    ON w1.wait_type = w2.wait_type
    and w1.snapshot_time = s.first_snapshot_time
    and w1.snapshot_type_id = w2.snapshot_type_id
WHERE w2.wait_time_ms > 0
GROUP BY w2.wait_type
	,s.[snapshot_interval_end]
	,s.[report_time_interval_minutes]
	,[snapshot_age_hours]
HAVING sum(w2.[wait_time_ms] - isnull(w1.[wait_time_ms],0)) > 0
", CommandTimeout=#duration(0, 0, 5, 0)])
in
    Source
in Source;

shared fn_get_performance_counters = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_performance_counters */ distinct
		 [report_time] = s.snapshot_interval_end
		,[object_name] = rtrim(ltrim(pc.[object_name]))
		,[instance_name] = case when rtrim(ltrim(pc.[object_name])) = 'win32_perfformatteddata_perfos_processor' and rtrim(ltrim(pc.counter_name)) = 'Processor Time %' and rtrim(ltrim(isnull(pc.instance_name,'')))
 = 'system' then 'os' else rtrim(ltrim(pc.instance_name)) end
		,counter_name = rtrim(ltrim(pc.counter_name))
		,[cntr_value] = convert(real,(
			case 
				when sc.object_name = 'Batch Resp Statistics' then case when pc.cntr_value > fsc.cntr_value then cast((pc.cntr_value - fsc.cntr_value) as real) else 0 end -- delta absolute
				when pc.cntr_type = 65792 then isnull(pc.cntr_value,0) -- point-in-time
				when pc.cntr_type = 272696576 then case when (pc.cntr_value > fsc.cntr_value) then (pc.cntr_value - fsc.cntr_value) / cast(datediff(second,s.first_snapshot_time,s.last_snapshot_time) as real) else 0 end -- delta rate
				when pc.cntr_type = 537003264 then isnull(cast(100.0 as real) * pc.cntr_value / nullif(bc.cntr_value, 0),0) -- ratio
				when pc.cntr_type = 1073874176 then isnull(case when pc.cntr_value > fsc.cntr_value then isnull((pc.cntr_value - fsc.cntr_value) / nullif(bc.cntr_value - fsc.base_cntr_value, 0) / cast(datediff(second,s.first_snapshot_time,s.last_snapshot_time) as real), 0) else 0 end,0) -- delta ratio
			end))
		 
from dbo.sql_perf_mon_perf_counters as pc

INNER JOIN [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
	on pc.snapshot_time = s.last_snapshot_time 
        and s.snapshot_type_id = pc.snapshot_type_id
inner join dbo.sql_perf_mon_config_perf_counters as sc
on rtrim(pc.object_name) like '%' + sc.object_name
	and rtrim(pc.counter_name) = sc.counter_name
	and (rtrim(pc.instance_name) = sc.instance_name 
		or (
			sc.instance_name = '<* !_total>' 
			and rtrim(pc.instance_name) <> '_total'
			)
	)
outer apply (
			select top (1) fsc.cntr_value,
							fsc.base_cntr_value
			from (
				select * 
				from [dbo].[sql_perf_mon_perf_counters] 
				where snapshot_time = s.first_snapshot_time
				) as fsc
			where fsc.[object_name] = rtrim(pc.[object_name])
					and fsc.counter_name = rtrim(pc.counter_name)
					and fsc.instance_name = rtrim(pc.instance_name)
			) as fsc
outer apply (
			select top (1) pc2.cntr_value
			from [dbo].[sql_perf_mon_perf_counters] as pc2 
			where snapshot_time = s.last_snapshot_time 
				and pc2.cntr_type = 1073939712
					and pc2.object_name = pc.object_name
					and pc2.instance_name = pc.instance_name
					and rtrim(pc2.counter_name) = sc.base_counter_name
			) as bc

/* this is slow, moved to union
outer apply (
			/* point in time counters must be averaged */
			select object_name, counter_name, instance_name, snapshot_interval_end, cntr_value=avg(cntr_value)
			from (
				select cavg.[object_name], cavg.counter_name, cavg.instance_name, s.snapshot_interval_end, cntr_value
				from [dbo].[sql_perf_mon_perf_counters] cavg
				where cavg.[object_name] = rtrim(pc.[object_name])
					and cavg.counter_name = rtrim(pc.counter_name)
					and cavg.instance_name = rtrim(pc.instance_name)
					and cavg.snapshot_time between s.first_snapshot_time and s.last_snapshot_time
					and cavg.cntr_type = 65792
					) t
			group by object_name, counter_name, instance_name, snapshot_interval_end

			) pnt
*/

where 		pc.cntr_type in (272696576,1073874176)

union all

-- point in time and ratio counters that must be averaged over period of time
select 
    report_time = s.snapshot_interval_end
    ,[object_name] = rtrim(ltrim(pc.[object_name]))
    ,[instance_name] = case when rtrim(ltrim(pc.[object_name])) = 'win32_perfformatteddata_perfos_processor' and rtrim(ltrim(pc.counter_name)) = 'Processor Time %' and rtrim(ltrim(isnull(pc.instance_name, ''))) = 'system' then 'os' 
            else rtrim(ltrim(pc.instance_name)) end
    , [counter_name] = rtrim(ltrim(pc.counter_name))
	, [cntr_value] = avg(convert(real,(case 
		when pc.cntr_type = 65792 then isnull(pc.cntr_value,0) -- point-in-time
		when pc.cntr_type = 537003264 then isnull(cast(100.0 as real) * pc.cntr_value / nullif(bc.cntr_value, 0),0) -- ratio
		end)))
from [dbo].[sql_perf_mon_perf_counters] pc

INNER JOIN [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
	on pc.snapshot_time = s.last_snapshot_time 
	and s.snapshot_type_id = pc.snapshot_type_id

inner join dbo.sql_perf_mon_config_perf_counters as sc
	on rtrim(pc.object_name) like '%' + sc.object_name
	and rtrim(pc.counter_name) = sc.counter_name
	and (rtrim(pc.instance_name) = sc.instance_name 
		or (
			sc.instance_name = '<* !_total>' 
			and rtrim(pc.instance_name) <> '_total'
			)
	)
outer apply (
			select top (1) pc2.cntr_value
			from [dbo].[sql_perf_mon_perf_counters] as pc2 
			where snapshot_time = s.last_snapshot_time 
				and pc2.cntr_type = 1073939712
					and pc2.object_name = pc.object_name
					and pc2.instance_name = pc.instance_name
					and rtrim(pc2.counter_name) = sc.base_counter_name
			) as bc
    where pc.snapshot_time between s.first_snapshot_time and s.last_snapshot_time
    and pc.cntr_type in (65792,537003264)
	group by s.snapshot_interval_end, rtrim(ltrim(pc.[object_name])),
	case when rtrim(ltrim(pc.[object_name])) = 'win32_perfformatteddata_perfos_processor' and rtrim(ltrim(pc.counter_name)) = 'Processor Time %' and rtrim(ltrim(isnull(pc.instance_name, ''))) = 'system' then 'os' 
            else rtrim(ltrim(pc.instance_name)) end,
			rtrim(ltrim(pc.counter_name))
"])
in
    Source
in Source;

shared fn_get_process_memory = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_process_memory */
                [report_time] = s.[snapshot_interval_end] 
		,[Physical memory in use (MB)]=avg([physical_memory_in_use_kb]/1024)
		,[Locked page allocations (MB)]=avg([locked_page_allocations_kb]/1024)
		,[Page faults]=avg([page_fault_count])
		,[Memory utilisation %]=avg([memory_utilization_percentage])
		 
	from [dbo].[sql_perf_mon_os_process_memory]  pm
	
	INNER JOIN [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
		on pm.snapshot_time >= s.first_snapshot_time
		and pm.snapshot_time <= s.last_snapshot_time
	        and pm.snapshot_type_id = s.snapshot_type_id
	
	group by s.[snapshot_interval_end],s.[report_time_interval_minutes]	"]),
    #"Unpivoted Only Selected Columns" = Table.Unpivot(Source, {"Locked page allocations (MB)", "Page faults", "Physical memory in use (MB)"}, "Attribute", "Value")
in
    #"Unpivoted Only Selected Columns"
in Source;

shared fn_get_memory_clerks = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_memory_clerks */
        [report_time] = s.[snapshot_interval_end]
          , [allocated_mb]= max([allocated_kb]) / 1024.0	
          , [clerk_name]=upper([clerk_name])
	
	from [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
    inner join [dbo].[sql_perf_mon_os_memory_clerks]  mc
		on mc.snapshot_time >= s.first_snapshot_time
		and mc.snapshot_time <= s.last_snapshot_time
		and mc.snapshot_type_id = s.snapshot_type_id
	group by [clerk_name], s.[snapshot_interval_end],s.[report_time_interval_minutes]"])
in
    Source
in Source;

shared fn_get_file_statistics = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_file_statistics */
		[report_time] = s.[snapshot_interval_end]
		,fs2.[database_name]
                ,fs2.[logical_file_name]
		,fs2.[type_desc]
		,fs2.[logical_disk]
		,[num_of_mb_transferred_delta] = ((fs2.[num_of_bytes_read] - fs1.[num_of_bytes_read]) + (fs2.[num_of_bytes_written] - fs1.[num_of_bytes_written])) / 1024.0 / 1024.0
		,[io_stall_ms_delta] = ((fs2.[io_stall_read_ms] - fs1.[io_stall_read_ms]) + (fs2.io_stall_write_ms - fs1.io_stall_write_ms))
		,[io_num_of_readswrites] = ((fs2.num_of_reads - fs1.num_of_reads) + (fs2.num_of_writes - fs1.num_of_writes)) 
		,[io_latency_ms] = case when ((fs2.num_of_reads - fs1.num_of_reads) + (fs2.num_of_writes - fs1.num_of_writes)) <= 0 then 0 else ((fs2.[io_stall_read_ms] - fs1.[io_stall_read_ms]) + (fs2.io_stall_write_ms - fs1.io_stall_write_ms)) / ((fs2.num_of_reads - fs1.num_of_reads) + (fs2.num_of_writes - fs1.num_of_writes)) end
                ,[io_num_of_reads_delta]=fs2.num_of_reads - fs1.num_of_reads
                ,[io_num_of_writes_delta]=fs2.num_of_writes - fs1.num_of_writes
                ,[io_stall_read_ms_delta]=fs2.[io_stall_read_ms] - fs1.[io_stall_read_ms]
                ,[io_stall_write_ms_delta]=fs2.io_stall_write_ms - fs1.io_stall_write_ms
                ,[io_latency_ms_read] = case when (fs2.num_of_reads - fs1.num_of_reads) <= 0 then 0 else ((fs2.[io_stall_read_ms] - fs1.[io_stall_read_ms])) / ((fs2.num_of_reads - fs1.num_of_reads)) end
                ,[io_latency_ms_write] = case when (fs2.num_of_writes - fs1.num_of_writes) <= 0 then 0 else ((fs2.[io_stall_write_ms] - fs1.[io_stall_write_ms])) / ((fs2.num_of_writes - fs1.num_of_writes)) end
                ,[num_of_mb_read_delta] = ((fs2.[num_of_bytes_read] - fs1.[num_of_bytes_read]) ) / 1024.0 / 1024.0
                ,[num_of_mb_written_delta] = ((fs2.[num_of_bytes_written] - fs1.[num_of_bytes_written])) / 1024.0 / 1024.0
                ,[size_on_disk_mb]=fs2.[size_on_disk_bytes]/ 1024.0 / 1024.0
                ,[size_on_disk_mb_delta]=(fs2.[size_on_disk_bytes]-fs1.[size_on_disk_bytes]) / 1024.0 / 1024.0
                ,s.[snapshot_type_id]
                ,[is_latest] = case when fs2.[snapshot_time] = (select max(t.snapshot_time) 
		from [dbo].[sql_perf_mon_snapshot_header] t
		where t.snapshot_type_id = s.snapshot_type_id) then 1 else 0 end
    from [dbo].[sql_perf_mon_file_stats]  fs1

        inner join [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
			on fs1.snapshot_time = s.first_snapshot_time
                        and fs1.snapshot_type_id = s.snapshot_type_id
	inner join [dbo].[sql_perf_mon_file_stats]  fs2 
		on fs1.database_name = fs2.database_name 
		and fs1.logical_file_name = fs2.logical_file_name 
		and fs2.snapshot_time = s.last_snapshot_time
                and fs2.snapshot_type_id = fs1.snapshot_type_id
"])
in
    Source
in Source;

shared fn_get_database = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name),
    dbo_sql_perf_mon_database = Source{[Schema="dbo",Item="sql_perf_mon_database"]}[Data],
    #"Filtered Rows" = Table.SelectRows(dbo_sql_perf_mon_database, each ([database_current] = true) and ([database_name] <> "model" and [database_name] <> "mssqlsystemresource"))
in
    #"Filtered Rows"
in Source;

shared fn_get_who_is_active = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_who_is_active */
	  report_time = s.[snapshot_interval_end]
        , session = convert(varchar(max),st.session_id) + ': ' + st.program_name
        , st.program_name
	, session_start_time = min(st.start_time )
        , session_duration_s = datediff(second,min(st.start_time ),max(f.snapshot_time))
	, session_end_time = max(f.snapshot_time)
        , [sql_command] = replace(replace(convert(varchar(max),st.[sql_command]),'<?query --' + char(13) + char(10),''),char(13) + char(10) + '--?>','')
        , [sql_text] = replace(replace(convert(varchar(max),st.[sql_text]),'<?query --' + char(13) + char(10),''),char(13) + char(10) + '--?>','')
        , st.database_name, st.login_name, st.host_name
  FROM [dbo].[sql_perf_mon_who_is_active] st
  INNER JOIN [dbo].[sql_perf_mon_who_is_active] f
   ON st.session_id = f.session_id
   AND st.start_time = f.start_time
    inner join [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
	ON st.snapshot_time >= s.[first_snapshot_time]
        AND f.snapshot_time <= s.[last_snapshot_time]
        and f.snapshot_type_id = s.snapshot_type_id
GROUP BY st.session_id, st.program_name, s.[snapshot_interval_end], replace(replace(convert(varchar(max),st.[sql_command]),'<?query --' + char(13) + char(10),''),char(13) + char(10) + '--?>',''),replace(replace(convert(varchar(max),st.[sql_text]),'<?query --' + char(13) + char(10),''),char(13) + char(10) + '--?>',''),st.database_name, st.login_name, st.host_name"])
in
    Source
in Source;

shared ref_report_title = let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WCg70CXcMcfbQ8/RXio0FAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [report_title = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"report_title", type text}})
in
    #"Changed Type";

shared fn_get_xe_query_waits = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_xe_query_waits */
		 [event_time]
      ,[session_id]
      ,xw.[wait_type]
      ,[duration]
      ,[signal_duration]
      ,[wait_resource]
      ,[query]
      ,[snapshot_time]
        ,[report_time] = s.[snapshot_interval_end]
    from [dbo].[logger_perf_xes_waits] xw
     inner join [dbo].[ufn_time_intervals](6,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
		on xw.snapshot_time >= s.first_snapshot_time
		and xw.snapshot_time <= s.last_snapshot_time
	        and xw.snapshot_type_id = s.snapshot_type_id
"])
in
    Source
in Source;

shared fn_get_xe_query_processing = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_xe_query_processing */
		 [event_time]
      ,[max_workers]
      ,[workers_created]
      ,[idle_workers]
      ,[pending_tasks]
      ,[unresolvable_deadlocks]
      ,[deadlocked_scheduler]
      ,[report_time] = s.[snapshot_interval_end]
    from [dbo].[logger_perf_xes_query_processing] qp     
    inner join [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
		on qp.snapshot_time >= s.first_snapshot_time
		and qp.snapshot_time <= s.last_snapshot_time
	        and qp.snapshot_type_id = s.snapshot_type_id
"])
in
    Source
in Source;

shared fn_get_xe_io_subsystem = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_xe_io_subsystem */
		 [event_time]
      ,[io_latch_timeouts]
      ,[total_long_ios]
      ,[longest_pending_request_file]
      ,[longest_pending_request_duration]
      ,[report_time] = s.[snapshot_interval_end]
    from [dbo].[logger_perf_xes_iosubsystem] io
    inner join [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
		on io.snapshot_time >= s.first_snapshot_time
		and io.snapshot_time <= s.last_snapshot_time
	        and io.snapshot_type_id = s.snapshot_type_id
"])
in
    Source
in Source;

shared ref_servers = let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WCvZ383SMiQkO9DEyMDRR0lECssIdQ5w9gMwQ1+AQEBUU6qoUqxOtFOrrHhwWBFRgYIhHYSwA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [server_name = _t, sqlwatch_database_name = _t, server_group = _t, enabled = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"server_name", type text}, {"enabled", type logical}}),
    #"Filtered Rows" = Table.SelectRows(#"Changed Type", each ([enabled] = true))
in
    #"Filtered Rows";

shared #"Server Group" = let
    Source = ref_servers,
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_group"}),
    #"Removed Duplicates" = Table.Distinct(#"Removed Other Columns")
in
    #"Removed Duplicates";

shared #"Server Info" = let
    Source = ref_servers,
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_group"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_server_info", each fn_get_server_info([server_name], [sqlwatch_database_name])),
    #"Expanded fn_server_info1" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_server_info", {"physical_name", "servername", "service_name", "local_net_address", "local_tcp_port", "sql_version"}, {"physical_name", "servername", "service_name", "local_net_address", "local_tcp_port", "sql_version"}),
    #"Added Index" = Table.AddIndexColumn(#"Expanded fn_server_info1", "server_id", 1, 1),
    #"Added Conditional Column" = Table.AddColumn(#"Added Index", "quick_download", each if ([server_name] = #"Server Name" or "1. All" = #"Server Name") then [server_name] else null),
    #"Filtered Rows" = Table.SelectRows(#"Added Conditional Column", each ([quick_download] <> null))
in
    #"Filtered Rows";

shared #"Wait Statistics" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_wait_statistics", each fn_get_wait_statistics([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_wait_statistics" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_wait_statistics", {"report_time", "wait_time_ms", "wait_type"}, {"report_time", "wait_time_ms", "wait_type"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded fn_get_wait_statistics", each [wait_time_ms] > 0),
    #"Added Custom" = Table.AddColumn(#"Filtered Rows", "wait_time_t", each #duration(0,0,0,[wait_time_ms]/1000)),
    #"Changed Type" = Table.TransformColumnTypes(#"Added Custom",{{"wait_time_ms", Int64.Type}, {"report_time", type datetime}, {"wait_type", type text}, {"wait_time_t", type duration}}),
    #"Removed Columns" = Table.RemoveColumns(#"Changed Type",{"sqlwatch_database_name","server_name"}),
    #"Merged Queries" = Table.NestedJoin(#"Removed Columns",{"server_id", "report_time", "wait_type"},#"XE Query Waits",{"server_id", "report_time", "wait_type"},"XE Query Waits",JoinKind.LeftOuter),
    #"Expanded XE Query Waits" = Table.ExpandTableColumn(#"Merged Queries", "XE Query Waits", {"event_time", "session_id", "duration", "signal_duration", "wait_resource", "query"}, {"XE Query Waits.event_time", "XE Query Waits.session_id", "XE Query Waits.duration", "XE Query Waits.signal_duration", "XE Query Waits.wait_resource", "XE Query Waits.query"})
in
    #"Expanded XE Query Waits";

shared #"Performance Counters" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name","server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_performance_counters", each fn_get_performance_counters([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_performance_counters" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_performance_counters", {"report_time", "object_name", "instance_name", "counter_name", "cntr_value"}, {"report_time", "object_name", "instance_name", "counter_name", "cntr_value"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded fn_get_performance_counters",{"sqlwatch_database_name","server_name"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"report_time", type datetime}, {"cntr_value", type number}}),
    #"Merged Queries" = Table.NestedJoin(#"Changed Type",{"server_id", "instance_name"},Database,{"server_id", "database_name"},"Database",JoinKind.LeftOuter),
    #"Expanded Database" = Table.ExpandTableColumn(#"Merged Queries", "Database", {"database_id"}, {"database_id"}),
    #"Added Custom" = Table.AddColumn(#"Expanded Database", "perf_counter_key", each [object_name] & " " & [counter_name]),
    #"Merged Queries1" = Table.NestedJoin(#"Added Custom",{"counter_name"},ref_perf_counters_poster,{"counter_name"},"ref_perf_counters_poster",JoinKind.LeftOuter),
    #"Expanded ref_perf_counters_poster" = Table.ExpandTableColumn(#"Merged Queries1", "ref_perf_counters_poster", {"desired_value_txt", "description"}, {"desired_value_txt", "description"})
in
    #"Expanded ref_perf_counters_poster";

shared Database = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_id", "server_name", "sqlwatch_database_name"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_database", each fn_get_database([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_database" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_database", {"database_name", "database_create_date", "database_current"}, {"database_name", "database_create_date", "database_current"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded fn_get_database", each ([database_current] = true)),
    #"Removed Other Columns1" = Table.SelectColumns(#"Filtered Rows",{"server_id", "database_name", "database_create_date"}),
    #"Added Index" = Table.AddIndexColumn(#"Removed Other Columns1", "database_id", 1000, 1)
in
    #"Added Index";

shared #"Time Intervals" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name","server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_time_intervals", each fn_get_time_intervals([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_time_intervals" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_time_intervals", {"snapshot_interval_end"}, {"snapshot_interval_end"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded fn_get_time_intervals", each ([snapshot_interval_end] <> null)),
    #"Removed Other Columns1" = Table.SelectColumns(#"Filtered Rows",{"snapshot_interval_end"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Other Columns1",{{"snapshot_interval_end", type datetime}}),
    #"Removed Duplicates" = Table.Distinct(#"Changed Type")
in
    #"Removed Duplicates";

shared #"Process Memory" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name","server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_process_memory", each fn_get_process_memory([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_process_memory" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_process_memory", {"report_time", "Memory utilisation %", "Attribute", "Value"}, {"report_time", "Memory utilisation %", "Attribute", "Value"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded fn_get_process_memory",{{"report_time", type datetime}, {"Memory utilisation %", Int64.Type}, {"Value", Int64.Type}}),
    #"Removed Columns" = Table.RemoveColumns(#"Changed Type",{"sqlwatch_database_name","server_name"}),
    #"Divided Column" = Table.TransformColumns(#"Removed Columns", {{"Memory utilisation %", each _ / 100, type number}}),
    #"Changed Type1" = Table.TransformColumnTypes(#"Divided Column",{{"Memory utilisation %", Percentage.Type}})
in
    #"Changed Type1";

shared #"Memory Clerks" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_memory_clerks", each fn_get_memory_clerks([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_memory_clerks" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_memory_clerks", {"report_time", "allocated_mb", "clerk_name"}, {"report_time", "allocated_mb", "clerk_name"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded fn_get_memory_clerks",{"sqlwatch_database_name","server_name"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"report_time", type datetime}, {"allocated_mb", type number}})
in
    #"Changed Type";

shared #"File Statistics" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name","server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_file_statistics", each fn_get_file_statistics([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_file_statistics" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_file_statistics", {"report_time", "database_name", "logical_file_name", "type_desc", "logical_disk", "num_of_mb_transferred_delta", "io_stall_ms_delta", "io_num_of_readswrites", "io_latency_ms", "io_num_of_reads_delta", "io_num_of_writes_delta", "io_stall_read_ms_delta", "io_stall_write_ms_delta", "io_latency_ms_read", "io_latency_ms_write", "num_of_mb_read_delta", "num_of_mb_written_delta", "size_on_disk_mb", "size_on_disk_mb_delta", "snapshot_type_id", "is_latest"}, {"report_time", "database_name", "logical_file_name", "type_desc", "logical_disk", "num_of_mb_transferred_delta", "io_stall_ms_delta", "io_num_of_readswrites", "io_latency_ms", "io_num_of_reads_delta", "io_num_of_writes_delta", "io_stall_read_ms_delta", "io_stall_write_ms_delta", "io_latency_ms_read", "io_latency_ms_write", "num_of_mb_read_delta", "num_of_mb_written_delta", "size_on_disk_mb", "size_on_disk_mb_delta", "snapshot_type_id", "is_latest"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded fn_get_file_statistics",{{"report_time", type datetime}, {"num_of_mb_transferred_delta", type number}, {"io_stall_ms_delta", Int64.Type}, {"io_num_of_readswrites", Int64.Type}, {"io_latency_ms", Int64.Type}, {"io_num_of_reads_delta", Int64.Type}, {"io_num_of_writes_delta", Int64.Type}, {"io_stall_read_ms_delta", Int64.Type}, {"io_stall_write_ms_delta", Int64.Type}, {"io_latency_ms_read", Int64.Type}, {"io_latency_ms_write", Int64.Type}, {"num_of_mb_read_delta", type number}, {"num_of_mb_written_delta", type number}}),
    #"Merged Queries" = Table.NestedJoin(#"Changed Type",{"server_id", "database_name"},Database,{"server_id", "database_name"},"Database",JoinKind.LeftOuter),
    #"Expanded Database" = Table.ExpandTableColumn(#"Merged Queries", "Database", {"database_id"}, {"database_id"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded Database",{"sqlwatch_database_name","server_name"}),
    #"Added Conditional Column" = Table.AddColumn(#"Removed Columns", "io_latency_t", each if [io_latency_ms] <> null then #duration(0,0,0,[io_latency_ms]/1000) else null),
    #"Changed Type1" = Table.TransformColumnTypes(#"Added Conditional Column",{{"io_latency_t", type duration}, {"size_on_disk_mb", type number}, {"size_on_disk_mb_delta", Int64.Type}, {"snapshot_type_id", Int64.Type}, {"is_latest", Int64.Type}})
in
    #"Changed Type1";

shared #"XE Query Waits" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_xe_query_waits", each fn_get_xe_query_waits([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_xe_query_waits" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_xe_query_waits", {"event_time", "session_id", "wait_type", "duration", "signal_duration", "wait_resource", "query", "snapshot_time", "report_time"}, {"event_time", "session_id", "wait_type", "duration", "signal_duration", "wait_resource", "query", "snapshot_time", "report_time"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded fn_get_xe_query_waits",{"sqlwatch_database_name","server_name"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"event_time", type datetime}, {"session_id", Int64.Type}, {"duration", Int64.Type}, {"signal_duration", Int64.Type}, {"snapshot_time", type datetime}, {"report_time", type datetime}})
in
    #"Changed Type";

shared #"XE Query Processing" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_xe_query_processing", each fn_get_xe_query_processing([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_xe_query_processing" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_xe_query_processing", {"event_time", "max_workers", "workers_created", "idle_workers", "pending_tasks", "unresolvable_deadlocks", "deadlocked_scheduler", "report_time"}, {"event_time", "max_workers", "workers_created", "idle_workers", "pending_tasks", "unresolvable_deadlocks", "deadlocked_scheduler", "report_time"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded fn_get_xe_query_processing",{"sqlwatch_database_name","server_name"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"event_time", type datetime}, {"max_workers", Int64.Type}, {"workers_created", Int64.Type}, {"idle_workers", Int64.Type}, {"pending_tasks", Int64.Type}, {"unresolvable_deadlocks", Int64.Type}, {"deadlocked_scheduler", Int64.Type}, {"report_time", type datetime}})
in
    #"Changed Type";

shared #"XE IO Subsystem" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_xe_io_subsystem", each fn_get_xe_io_subsystem([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_xe_io_subsystem" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_xe_io_subsystem", {"event_time", "io_latch_timeouts", "total_long_ios", "longest_pending_request_file", "longest_pending_request_duration", "report_time"}, {"event_time", "io_latch_timeouts", "total_long_ios", "longest_pending_request_file", "longest_pending_request_duration", "report_time"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded fn_get_xe_io_subsystem",{"sqlwatch_database_name","server_name"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"event_time", type datetime}, {"io_latch_timeouts", Int64.Type}, {"total_long_ios", Int64.Type}, {"longest_pending_request_file", type text}, {"longest_pending_request_duration", Int64.Type}, {"report_time", type datetime}}),
    #"Added Conditional Column" = Table.AddColumn(#"Changed Type", "longest_pending_request_duration_t", each if [longest_pending_request_duration] <> null then #duration(0,0,0,[longest_pending_request_duration]) else null),
    #"Changed Type1" = Table.TransformColumnTypes(#"Added Conditional Column",{{"longest_pending_request_duration_t", type duration}})
in
    #"Changed Type1";

shared WhoIsActive = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_who_is_active", each fn_get_who_is_active([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_who_is_active" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_who_is_active", {"report_time", "session", "program_name", "session_start_time", "session_duration_s", "session_end_time", "sql_command", "sql_text", "database_name", "login_name", "host_name"}, {"report_time", "session", "program_name", "session_start_time", "session_duration_s", "session_end_time", "sql_command", "sql_text", "database_name", "login_name", "host_name"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded fn_get_who_is_active",{{"report_time", type datetime}, {"session", type text}, {"program_name", type text}, {"session_start_time", type datetime}, {"session_duration_s", Int64.Type}, {"session_end_time", type datetime}, {"sql_command", type text}, {"sql_text", type text}, {"database_name", type text}, {"login_name", type text}, {"host_name", type text}}),
    #"Removed Columns" = Table.RemoveColumns(#"Changed Type",{"sqlwatch_database_name"})
in
    #"Removed Columns";

shared ref_percentage_rage = let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlDSUTJUio0FAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [min_percentage = _t, max_percentage = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"min_percentage", Int64.Type}, {"max_percentage", Int64.Type}})
in
    #"Changed Type";

shared #"File Statistics Sankey" = let
    Source = #"File Statistics",
    #"Added Custom" = Table.AddColumn(Source, "logical_file_name_ind", each [logical_file_name] & " (" & [type_desc] & ")"),
    #"Removed Other Columns" = Table.SelectColumns(#"Added Custom",{"server_id", "report_time", "database_name", "num_of_mb_transferred_delta", "io_latency_ms", "size_on_disk_mb", "is_latest", "database_id", "logical_file_name_ind"}),
    #"Renamed Columns" = Table.RenameColumns(#"Removed Other Columns",{{"database_name", "source"}, {"logical_file_name_ind", "destination"}}),
    #"Reordered Columns" = Table.ReorderColumns(#"Renamed Columns",{"server_id", "database_id", "report_time", "source", "destination", "num_of_mb_transferred_delta","io_latency_ms"}),
    #"Appended Query" = Table.Combine({#"Reordered Columns", #"File Statistics Sankey Stage"}),
    #"Reordered Columns1" = Table.ReorderColumns(#"Appended Query",{"server_id", "database_id", "is_latest", "report_time", "source", "destination", "size_on_disk_mb", "num_of_mb_transferred_delta", "io_latency_ms"}),
    #"Filtered Rows" = Table.SelectRows(#"Reordered Columns1", each ([is_latest] = 1))
in
    #"Filtered Rows";

shared #"File Statistics Sankey Stage" = let
    Source = #"File Statistics",
    #"Added Custom" = Table.AddColumn(Source, "logical_file_name_ind", each [logical_file_name] & " (" & [type_desc] & ")"),
    #"Removed Other Columns" = Table.SelectColumns(#"Added Custom",{"server_id", "report_time", "logical_file_name", "logical_disk", "num_of_mb_transferred_delta", "io_latency_ms", "size_on_disk_mb", "is_latest", "database_id", "logical_file_name_ind"}),
    #"Reordered Columns" = Table.ReorderColumns(#"Removed Other Columns",{"server_id", "database_id", "report_time", "logical_file_name", "logical_disk", "num_of_mb_transferred_delta", "io_latency_ms", "logical_file_name_ind", "size_on_disk_mb"}),
    #"Renamed Columns" = Table.RenameColumns(#"Reordered Columns",{{"logical_file_name", "source"}, {"logical_disk", "destination"}}),
    #"Reordered Columns1" = Table.ReorderColumns(#"Renamed Columns",{"server_id", "database_id", "is_latest", "report_time", "source", "logical_file_name_ind", "destination", "size_on_disk_mb", "num_of_mb_transferred_delta", "io_latency_ms"}),
    #"Removed Columns" = Table.RemoveColumns(#"Reordered Columns1",{"source"}),
    #"Renamed Columns1" = Table.RenameColumns(#"Removed Columns",{{"logical_file_name_ind", "source"}}),
    #"Filtered Rows" = Table.SelectRows(#"Renamed Columns1", each ([is_latest] = 1)),
    #"Changed Type" = Table.TransformColumnTypes(#"Filtered Rows",{{"size_on_disk_mb", type number}})
in
    #"Changed Type";

shared ref_wait_stat_category_mapping = let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("nVrfc9u4Ef5XPJ7p280005c+g+CKQgQSNH5I8l1vMG7qy3nOiTOK0/b+++6CFC2KC0juixIL3wLgYvHtt0v98sut+PfD0/PDP5+en17/vGkOLz++fb/96XYtahulDk70KkqhNX63EtrB7a8/FW1WSoPzFkQblWkbi5/StL0Gr0x37STaNFL0PliIO6H8tWad8WqlpKClYg13AQJca+tVC/gp3OZai52xm7hco/rx22+Phxv1V0Nfyypa6DVuCv8VNX7lbbgA3FnlIYtUJs4cyqN60YAyWni5jrW/Cgb7q2Cb/ipYp6+CufVVsJBfdOY1x8D0w+un38f5Cg5ZwliHLGGsQ5Yw1iFLGOuQJWzuEPny5RuG6evTy9fkEWeClRAdtKJfG7xFbfDzp7logYFt71Ok4YV+l6VrkS4G+1O71Y/n5xv/+N/XG/f4cEiP0zoHwsr1BZgzol/cnhxucXnUv54f8W+5Brnpjeo8c2cnzKb33MBmfMid2EDy/AKhxc/3aWkbnQbgMaYhQswub7oandjViYXyMEvE5nxcGYskJ2pt5CZOjmTg4+azEzqwW9y3qjXE5CYWRE8Vq94YHVeYF9i1BlBdOS+s5x01QjCwLoJad8VM7t55aBe8PcNcGIW2Ly/Uqbj2vo9CSuADxN1pbwU6uQqrFfoy66GEysUo5bp0rHhmQXvHTUAYgtAzuXXwtdl1HG4PsVauJ7LA/Syy6BtqSHywhW6OOHLOFGLQhXZJJEeYwuDdR9QLDTKAF97loRwJz8dKdnO6nY/NOXY+NifW+Vhg59wq2OENW6lOJUWxfKCXT3/QHOicFk8+O6RcfqhglV/Lqq/nz3o+nF+SRvOr0mhhYVea2JXm3Zcs92XL/Ibyczo817Y4WrAtHIsrnEveKhSszkY+f376+nlIFwObZIcxm5TGJhbKI5BoiPyj6Hvoah6a2Arxp6Pt45eXw5+UMlsk3/V5cp6GtdmthNIRUaepbwlc6ggW5ow77rnGOSsiJq2cz4K3ClldaMQalDO4HRYZPO6RxFVE6WLkDPN0OLwcBk/UVauspbRbtQxjZqEL4mCRaULHpGkWTdUHcGKCQ//l0uKqa6Js69lE3ePrf14Of4wyW7j7TsYOfCp7lDmdcY6ctujOQmoOazFnxvouD8C14jHb9UJuwOexRkNdnQ6b198fD7TtynJfV6Iezru3RoJzLMYa8rA0XQcyJQALEhTellFP5CzwsQehSXmQ0d5ncAsNRjBYDL2jZWk/J0k/B0n60XkzVzNHUJJ4+OC0Q3wiY2tuKhmsQ8enY+fGayEjJUZ2rJIS/aZD20UUPJ3TQ0kuBa7NG3hRaX4otL23qmmAPUhgNwfIIz1IOtyIdJLdKOxBhrS1XvUw3EAE42F0QnMGRGbphNhB8mfSkVFaseMnSJjQQgEStMaI8TeN8Kjd+OdugrC1EmwUJK06BSGeg5ZI0J5174jFDVlx3lCYYdJzcaOqRf+2vcGnzkY6juNZbJE2RiG6RIxORzXuPLIRizFRhFoV1tl45OWOEkIL/BkRgoJ+S9UtFlsiE1czHCUnHXLuOWE7zHOOFDAPcxK0Fh2YwN7vRIiJZSkYs4h9zw31IvGHjhUyZeiXmXYCDvceg4KQxH+NzRDgXTc6CAufu6AsO99d/9FUcaM0G8vD6JHFRWX4GLqjw4A2NkgWdAFXJn/GQ2MCrF3X6C5UAgpVQ206fncDeLrmQ5XiKFiHy25Q9+QNZw1FEjAXdjXDu1A5aVXP1w4Fw8SF714uIMfR5blkZHBDrfoZT3UIhQtwi8oJk1hlldy4C35Oxa0J7BGPlynKteiaTLKmVt6QJFKpzUdlAqF8R51YRA1NEqqBoXP0tCI4du8OZKytGRoBcQP3PAjn67xChmqCYjNmKTFTCwFjweNhUdDKHJdMrYbjlflATyDzYOI66NIVzbmCYEkR8G3jCYZqmRr9iRbo1Ld8CiTcSWchGz6EQy0tdJLUwht7LtRPkab6iF4h0WLxNtMbBHNfnntUblGsUoF+AY3pC+M3zU7oC2Ai4eQy6BKRTmKfg6+N85cBJVVxxJQ2lQB2RwVPCcGWTwvQ2KHMAtJFvrTUGKI5SsBclXi2M7bNP9gIoo8SaGiuZXd0BBSv4BGUumx4WlbBvMM2Ie02KcfSbMQVGLOnQ7JP5X8qhqlNypqpFpB2t6nr1oNVhqWSrRJvzcbFKD0AlRb5eNlJVhZT6eYrndiOG98f+5eUe/DapitGlwzqfIDPjFYWAP8qok/I4yMqVPYIENaaOmigSXMlCIKMdtmo2MNRHjC9yBnIuVSdJJFcWu2ILEFQN+bV9tRuLe3n7U3kIt32D4eH5+fH56fvX/BruV+Wxfbx2/PTp7e3M72e1f54CaQmwYufKKa9YdoYM8zAOMNbFcvONPyHOki1aYXqiiAMpbbS9yXM1G+uwS8T5Tn6LqBiDG3sUf649bnq94eHr98fPo3eqL0sDg76NI7SoQxNJcEW8qBBqwvMaayQy4LLQKqmsWyxG66HXoS+Y9Z5h70InTehi9B5J7IIDRc24ITkRfUcmhyK/B1PbC6gUaVqodrjTS+jKUqpVimjiHSR8CTTCArfHw83u4en1++304sdlg6PfR3DvFiaGjZ7lInUWeCnqJVFjTT0vMzZzzneMBMzD7LnVE0s4StUNSvvkhgnrh5fIS7eNZ3h13LyxBIzNmpHzUiHDPtrfv8x1b8WaoMJDolobJG+21ibZupUvc8yhfGoct9nOXZzx8Tyf9kmWUP/Z/3aY2JusfLbQsQU04D3wuJn6lDyBmk+epNPZQPmIepNpzdhQmP1WBeMMNSAbkdzDEharzs/x6l0rN3QZWR62acYqUF0ITWi9PDzBNRvQ5qktIPHhiVcUrb/+PHhw9/+np0IZZdTJHIZ06zRUQsym+TPaAiIJAYjvVuhDnv2dMb33YLawHVFNvcXcW19HS70jRV15sCo2kkN/berz+P6OP5KoFai6VD+K+kKLpvEtupQDVFaFuPPBgpGKXj2WPzKTe/xvI3jo2HCTWVIGWZWK606GKbtYBeHl1kFi2Pnb4Ya9H2qDyfCu1eg3xT8r/8D", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [wait_category = _t, wait_type = _t, wait_type_include = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"wait_category", type text}, {"wait_type", type text}, {"wait_type_include", type logical}})
in
    #"Changed Type";

shared #"Performance Counters Non Database" = let
    Source = #"Performance Counters",
    #"Filtered Rows" = Table.SelectRows(Source, each [database_id] = null),
    #"Filtered Rows1" = Table.SelectRows(#"Filtered Rows", each ([counter_name] <> "Processor Time %"))
in
    #"Filtered Rows1";

shared #"Performance Counters Database" = let
    Source = #"Performance Counters",
    #"Filtered Rows" = Table.SelectRows(Source, each ([object_name] = "SQLServer:Databases") and ([instance_name] <> "_Total") and ([database_id] <> null)),
    #"Removed Columns" = Table.RemoveColumns(#"Filtered Rows",{"object_name"})
in
    #"Removed Columns";

shared #"Performance Counters SQL Statistics Pivot" = let
    Source = #"Performance Counters",
    #"Filtered Rows" = Table.SelectRows(Source, each [counter_name] = "SQL Compilations/sec" or [counter_name] = "Batch Requests/sec" or [counter_name] = "Processor Time %"),
    #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows",{"server_id", "report_time", "counter_name", "cntr_value"}),
    #"Pivoted Column" = Table.Pivot(#"Removed Other Columns", List.Distinct(#"Removed Other Columns"[counter_name]), "counter_name", "cntr_value", List.Sum),
    #"Divided Column" = Table.TransformColumns(#"Pivoted Column", {{"Processor Time %", each _ / 100, Percentage.Type}}),
    #"Changed Type" = Table.TransformColumnTypes(#"Divided Column",{{"Processor Time %", Percentage.Type}})
in
    #"Changed Type";

shared ref_metric_description = let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("TYwxCsJAEEWv8lmwkxxCGwUbYxlSrJNJMrDZic4sxNvrioXdh/ff67pwu15w1GWVFF00G1xxiE4zWn4UNje0lYR9OMk0YwcdQf8CaUkDrEzT912DYsjqiPfEtUeRZgZvTKUqWFPM1uDsP1fyIBSdP8PKOAoJZ8fCiz5fTej7Nw==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [Measure = _t, Description = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Measure", type text}, {"Description", type text}})
in
    #"Changed Type";

shared fn_get_os_schedulers = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_os_schedulers */
[report_time]=s.[snapshot_interval_end]
      ,[current_tasks_count]=avg([current_tasks_count]*1.0/[scheduler_count])
      ,[runnable_tasks_count]=avg([runnable_tasks_count]*1.0/[scheduler_count])
	from [dbo].[ufn_time_intervals](1,"&#"Report interval minutes"&","&#"Report window (hours)"&","&#"Report end time (datetime)"&") s
    inner join [dbo].[logger_perf_os_schedulers] sd
		on sd.snapshot_time >= s.first_snapshot_time
		and sd.snapshot_time <= s.last_snapshot_time
		and sd.snapshot_type_id = s.snapshot_type_id
	group by s.[snapshot_interval_end]"])
in
    Source
in Source;

shared Schedulers = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_id", "server_name", "sqlwatch_database_name"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_os_schedulers", each fn_get_os_schedulers([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_os_schedulers" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_os_schedulers", {"report_time", "current_tasks_count", "runnable_tasks_count"}, {"report_time", "current_tasks_count", "runnable_tasks_count"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded fn_get_os_schedulers",{{"current_tasks_count", Int64.Type}, {"runnable_tasks_count", Int64.Type}, {"report_time", type datetime}}),
    #"Removed Columns" = Table.RemoveColumns(#"Changed Type",{"server_name", "sqlwatch_database_name"})
in
    #"Removed Columns";

shared fn_get_missing_indexes = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name , sqlwatch_database_name , [Query="select /* SQLWATCH Power BI fn_get_missing_indexes */
       [servername]
      ,mi.[database_name]
      ,[database_create_date]
      ,[object_name]
      ,[snapshot_time]
      ,[index_handle]
      ,[last_user_seek]
      ,[unique_compiles]
      ,[user_seeks]
      ,[user_scans]
      ,[avg_total_user_cost]
      ,[avg_user_impact]
      ,[missing_index_def]
  FROM [SQLWATCH].[dbo].[logger_missing_indexes] mi
WHERE 1=1
--AND mi.[snapshot_time] >= DATEADD(DAY, -"&#"Report window (hours)"&", "&#"Report end time (datetime)"&")
--AND mi.[snapshot_time] <= "&#"Report end time (datetime)"&"
AND '"&Logical.ToText(#"Show index analysis")&"' = 'true'
"])

in
    Source
in Source;

shared #"Missing Indexes" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_missing_indexes", each fn_get_missing_indexes([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_missing_indexes" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_missing_indexes", {"database_name", "database_create_date", "object_name", "snapshot_time", "index_handle", "last_user_seek", "unique_compiles", "user_seeks", "user_scans", "avg_total_user_cost", "avg_user_impact", "missing_index_def"}, {"database_name", "database_create_date", "object_name", "snapshot_time", "index_handle", "last_user_seek", "unique_compiles", "user_seeks", "user_scans", "avg_total_user_cost", "avg_user_impact", "missing_index_def"}),
    #"Merged Queries" = Table.NestedJoin(#"Expanded fn_get_missing_indexes",{"server_id", "database_name", "database_create_date"},Database,{"server_id", "database_name", "database_create_date"},"Database",JoinKind.Inner),
    #"Expanded Database" = Table.ExpandTableColumn(#"Merged Queries", "Database", {"database_id"}, {"database_id"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded Database",{"server_name", "sqlwatch_database_name", "server_id", "database_name", "database_create_date"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"snapshot_time", type datetime}, {"last_user_seek", type datetime}, {"index_handle", Int64.Type}, {"unique_compiles", Int64.Type}, {"user_seeks", Int64.Type}, {"user_scans", Int64.Type}, {"avg_total_user_cost", type number}, {"avg_user_impact", type number}})
in
    #"Changed Type";

[ Description = "All Index history will be downloaded. You may want to exclude it if you are downloading last few hours of real-time performance problems from a production instance to minimise load. Best download index information out of hours." ]
shared #"Show index analysis" = true meta [IsParameterQuery=true, List={true, false}, DefaultValue=false, Type="Logical", IsParameterQueryRequired=true];

shared fn_get_database_disk_utilisation = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_database_disk_utilisation */
ut.[database_name]
      ,ut.[database_create_date]
      ,[Database Size (MB)]=[database_size_bytes]/1024.0/1024.0
      ,[Unallocated (MB)] = [unallocated_space_bytes]/1024.0/1024.0
      ,[Reserved (MB)]=[reserved_bytes]/1024.0/1024.0
      ,[Data (MB)] = [data_bytes]/1024.0/1024.0
      ,[Index Size (MB)]=[index_size_bytes]/1024.0/1024.0
      ,[Unused (MB)]=[unused_bytes]/1024.0/1024.0
      ,[Log Size (MB)]=[log_size_total_bytes]/1024.0/1024.0
      ,[Log Used (MB)]=[log_size_used_bytes]/1024.0/1024.0
      ,ut.[snapshot_time]
      ,ut.[snapshot_type_id]
  FROM [dbo].[logger_disk_utilisation_database] ut
WHERE 1=1
AND '"&Logical.ToText(#"Show disk utilisation")&"' = 'true'
"]),
    #"Renamed Columns" = Table.RenameColumns(Source,{{"database_name", "Database"}})
in
    #"Renamed Columns"
in Source;

shared #"Disk Utilisation DB" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_database_disk_utilisation", each fn_get_database_disk_utilisation([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_database_disk_utilisation" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_database_disk_utilisation", {"Database", "database_create_date", "Database Size (MB)", "Unallocated (MB)", "Reserved (MB)", "Data (MB)", "Index Size (MB)", "Unused (MB)", "Log Size (MB)", "Log Used (MB)", "snapshot_time"}, {"Database", "database_create_date", "Database Size (MB)", "Unallocated (MB)", "Reserved (MB)", "Data (MB)", "Index Size (MB)", "Unused (MB)", "Log Size (MB)", "Log Used (MB)", "snapshot_time"}),
    #"Merged Queries" = Table.NestedJoin(#"Expanded fn_get_database_disk_utilisation",{"server_id", "Database", "database_create_date"},Database,{"server_id", "database_name", "database_create_date"},"Database.1",JoinKind.Inner),
    #"Expanded Database.1" = Table.ExpandTableColumn(#"Merged Queries", "Database.1", {"database_id"}, {"database_id"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded Database.1",{"server_name", "sqlwatch_database_name", "Database", "database_create_date"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"Database Size (MB)", type number}, {"Unallocated (MB)", type number}, {"Reserved (MB)", type number}, {"Data (MB)", type number}, {"Index Size (MB)", type number}, {"Unused (MB)", type number}, {"Log Size (MB)", type number}, {"Log Used (MB)", type number}, {"snapshot_time", type datetime}})
in
    #"Changed Type";

[ Description = "All disk utilisation will be downloaded. You may want to exclude it if you are downloading last few hours of real-time performance problems from a production instance to minimise load. Best download disk utilisation out of hours." ]
shared #"Show disk utilisation" = true meta [IsParameterQuery=true, List={true, false}, DefaultValue=false, Type="Logical", IsParameterQueryRequired=true];

shared fn_get_os_disk_utilisation = let Source = (server_name as any, sqlwatch_database_name as any) =>

let
    Source = Sql.Database(server_name , sqlwatch_database_name , [Query="select /* SQLWATCH Power BI fn_get_os_disk_utilisation */
[volume_name]
      ,[volume_label]
      ,[volume_fs]
      ,[Block Size (KB)]=[volume_block_size_bytes]/1024.0
      ,[Disk Free (GB)]=[volume_free_space_bytes]/1024.0/1024.00/1024.00
      ,[Disk Total (GB)]=[volume_total_space_bytes]/1024.00/1024.00/1024.00
      ,[snapshot_type_id]
      ,[snapshot_time]
FROM [dbo].[logger_disk_utilisation_volume]
WHERE 1=1
AND '"&Logical.ToText(#"Show disk utilisation")&"' = 'true'
AND [volume_fs] <> 'UDF'
AND [volume_name] not like '\\?%'
"]),
    #"Renamed Columns" = Table.RenameColumns(Source,{{"volume_name", "Disk"}, {"volume_label", "Label"}, {"volume_fs", "File System"}}),
    #"Filtered Rows" = Table.SelectRows(#"Renamed Columns", each not Text.StartsWith([Disk], "\\?\Volume"))
in
    #"Filtered Rows"
in Source;

shared #"Disk Utilisation OS" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_os_disk_utilisation", each fn_get_os_disk_utilisation([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_os_disk_utilisation" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_os_disk_utilisation", {"Disk", "Label", "File System", "Block Size (KB)", "Disk Free (GB)", "Disk Total (GB)", "snapshot_time"}, {"Disk", "Label", "File System", "Block Size (KB)", "Disk Free (GB)", "Disk Total (GB)", "snapshot_time"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded fn_get_os_disk_utilisation",{"server_name", "sqlwatch_database_name"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"Disk", type text}, {"Label", type text}, {"File System", type text}, {"Block Size (KB)", Int64.Type}, {"Disk Free (GB)", type number}, {"Disk Total (GB)", type number}, {"snapshot_time", type datetime}})
in
    #"Changed Type";

shared fn_get_last_snapshot = let Source = (server_name as any, sqlwatch_database_name as any) =>

let
    Source = Sql.Database(server_name , sqlwatch_database_name , [Query="select /* SQLWATCH Power BI fn_get_last_snapshot */
    snapshot_time=MAX([snapshot_time])
    from [dbo].[sql_perf_mon_snapshot_header]
"])
in
    Source
in Source;

shared #"Last Snapshot" = let
    Source = #"Server Info",
    #"Removed Other Columns" = Table.SelectColumns(Source,{"server_name", "sqlwatch_database_name", "server_id"}),
    #"Invoked Custom Function" = Table.AddColumn(#"Removed Other Columns", "fn_get_last_snapshot", each fn_get_last_snapshot([server_name], [sqlwatch_database_name])),
    #"Expanded fn_get_last_snapshot" = Table.ExpandTableColumn(#"Invoked Custom Function", "fn_get_last_snapshot", {"snapshot_time"}, {"snapshot_time"}),
    #"Removed Other Columns1" = Table.SelectColumns(#"Expanded fn_get_last_snapshot",{"server_id", "snapshot_time"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Other Columns1",{{"snapshot_time", type datetime}})
in
    #"Changed Type";

shared ref_perf_counters_poster = let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("zVpdbxw3sv0rhIEFJGA0lh0Em+RNsp2NEMlSLO01LrKLBaebM9Ornuak2a3x+Cn/YZ8W2P1z+SX3nCqyP2ZGkoO8XMCQpW42Waw6VXWqyJ9/fvHh3dnbq3cvJi/efHj39uLuFr91/+6Wzsx9WfpNUS1M7eaudlXmTOZXLph57VemwZDbny7NrasfXG1uXD3H0ze+rRpXB+Pn5oK/udCY2db81PKXWz9vNrZ25uhH91BU5seyqNzEnGPyxlx/tvXEvFnWRWgKW5lzXza+wls/Mx9tnU/MB59jgbJ0+QJffbC+Lc1FWW6tDy/+Pvn5BcRRab47yzIXgrlyzdLnARv6vi1Lc5vZKrwMLsODV6Z7xCdmjT28Oj09hdS5+4Rd2TpbujT6dHp6+gr//1Aslv2HpraNm+Lxla+KxmPXVErVrmaYDAqYc2DgosZXprGzEsrztSm4hAtTc7GoPLTRViWlXXLyNzd/hZaLKityDN4UzVKfh7QePvth/MCs7NbMYBzbBpdT26siBBouLjQx0MnWhJWFPCrGhHLU7heaBQbFH433mKja4mnm6xzLnJnQ5rmrME1WOxscfsEOi2AebNk6WRYrFBmEMNaExjY0XUY1wO5LX+ZmaQNEwxyYAPqEEfGmLRuVzthKZTSVN6WvFtDbzPEddzLdMaqgrVsEaj+3TbY0H+IuXt6Kqbp/7zs7zGRct1ts0BUP0BRtDvv6CmLZKjfYmTULD5QtXOVqW6btQT2iIljXZk3xUDRbU7oHV3L2sSNMzR01lKkfcEqar9ya3K0dtgqgEwv4ZglMizNw6V9aW3JWzJf53Jm6rSrqIY4N3dxQOyGz9hs4HPA/nGmif2E9V6uhIhYJlRleQ+SKU669ryESV8qNXa9LbrLwlRhdjQtBxB3GuhNXwbxAQ4EJbNPYoiKgsJxvgUprmu0as5X8+moWzPuLN7K8rzB+ia1irJ35tjFfHZ5+ekUQ+oZ7SFpUPJ0YICpERZp2jY0ckG6zLEpsFeAKCFZYOxDtFvDamCMYkeo5TorJPdyn8g00UsAyjRfVmHXtGT9oAFH01Pyvbw0Q3QLSc1muarAfoFT13Jm7YoicQSdUp7rvaCsQJbQQGW4RF8FybVOUxWf9hH8GGV9VLlOjXFR04MDXr6G0ifnt1//c1QgrVge8JIp/+/W/ZmM1BCVxVh5BFzEXVpfQgP2tYLoWAPIPBHjZ4XmiiksLBa70tUCztJyKW8WyOy6n607N/4y+WteFBBRze/N60q04UlRezCWnUItchLGkdFZEFLshhLVwlxpRBT5YLQDND/JeMpNvYDQnTinukDsAsQzPRgw+eONX66JUvMfw/rf29PSrDJD9U3LoPoinHZ+YUZg5nXb5IOsn7DJCH3yagimzWdrGJJudUIzBV0gKWdbWNcNjH5PMEeJu2eaagXW4C8caBErCchiMC7X8zDUN8SqCyZsQAdBFavdJsP0AP8yXPuO26i0XoR0ixpFNACW6DIHqMRw+jnw1f/xrRjqEEsewhUBUM5Fs6kIiPSXjwALSJOTnwETMhg/+3o3GtOJ6Yf0P9+nEuKxtHDQ2NR+XkkjSrEWofvv1300n34ROE5A36ziBNesSoWfR4hlXClBOEie3SITMafR5W9uVg96SEwJXEBDQyt0XQQroiKg6MV+KqwEUxx8NkJXM/iS2JqM8tg80P/snAklgsHardaNhYEYciGZhbATjpU2PO6gRBXM6F/8uHUWIyS1uYoCVhuqtwJnMZ1d7xLiCbLFHraQA4SdwVZuXPrsPgjfAqWwFTnGokXfjLAorJ5inGCzclOEWWFr7WvQD+R+LULIWXu75v0kRTDfmHBKM6qEqmP7N1rfY6TZAd4TuKgXv5BPPEc8buwA9QIZtdkDx+jRxTnOQx5xOXycYyBxB5iAICv8E51wPxg5iCYI7lpAwY/JWUM8MMI8MX0kYvw1fEAbuvLEPvsiHi5HYPRRuo/xS6ZRMChpbLMQ4iG8tUFH56iRoCi2EYiHdNRoJiLMVKdK8KMs5AIyHnGlt83/obJgGaeIhsiDiGbx2bTEtN0t5pub99d2770Bkll0MjOxtzMuY9mc2F6ZQNG2MxBska6dEuMJuokpqF3kpiXDDOK0ILxqjQZo0Ig43SKs+8ann4HGw2vgDtcl4xLNoUaWGNHzDAJt7yTi2om1ZvMhmF6WLo7U+MHPX8BtlzrUDOgpumd+S9gzIvWasviaQsAAz5xrWYIrQ1EVGKkA7XV/e3ZiNr+9Lb/Neg5Fff3fekjkYsEToumZpVztEDcRjViBluetm+LVzpN2hz+qnqxlGrkSEMH/0r2U6QERDHIqZmUipyLEP4CZWPOcMsFJEggMDxa/JjoC6QSGtMUjgvXL4bzv9fybO0wb5WwVtXtrPW8M07faCHs1xmuzR72z/i0csonRqsH9JeBwjU3zkFB2Bh9wPGJ0XNcKEunLXu1CVMKDkRbhngIMzg9uTfqzTWwktU3MpbKsIkV4pr2emi1EB/MKWkZ8sJEIw3FjS9clugaAsjJUmVdtT2C/TrupXEgLWyXfU+y01++1pnzfqOO4Ad1gvt0EKtY4JrbtZsdXQjgpkRFVfrzB67J0oZtZMv+abU6TXf0GAERkRZqHlaOSisVVhzdZJBp+XdiE6kLpLwoWUvfw9YOtFVjAfROVg81+sntL7+3adFHQ0ekYBj81Lc7STefV5x9qoSP3Zp2GdotPn3eFwAVTNsRkyUH5UVEPIYasJLHwqMehAm0KIaS80Uw8fv2IBCFYiqlwyFRaV6xSlpI5hldRXsmrX2OkJOzj27wDaIUc+iDQdeABrI4iFDnqlujyrk+SGfwxxKERjycPQhVc9j8lq0BdAGjkru8f/4sJghGUXeOodrigjpS/Ah6l+lyKIZhuX9RpZy+AF79KIqRhQRj2yQ5DeNcMl6S+Ud8b6HDr9aDHLHYKeOVqF484AXwss9WeEoY1fbPgFw6Q0QVZgUoVqB7mansZGnLDsBDUNobEC4NdMDPvTpQadBDasvArj3fWV4Yyzw40jqx4x99hZk+4TKl42FyJx585luyPSzgjOgOzZrNnElgokshlZCHnJ4zqUGYeVQKc+sil17gO5aKgbbUntOTq5Xl/E6G9QMRZvdsKmaADFJQjOfhmzo4ywloQ2KmO6tkPXalGEyz6c0sKRGbo6m4DMUhtLGsCleKnuQjr0td+Erv0ygzBzyYHMkC2Z3NOaJSp9u6tZqvURnTaDD6jT2yXXH+f3kfL38zxmyA3mmJi+OVLQAytEilE/+/31x7OLu1ROprbmKDtLBl9B0KZXnywjHbihIlkrpDJYcX2wBTMZ9mCe1F2HcxL4cdG/o4tNGkmN/YCUKbFcaRDlDDJCq3qb/dIWtLt+KlGQEq0tKe84RO5rou/lNyyy+Ak7ccmbyX+0TyUdHm0X2r5dLHOwzocM2T0w7cpixU7oYApL3rP2Taz9au+bEyWpYG
ygpCtYKnSxudOUQA3onwxU0tlF6xE9+OAp1IkBdTQIyyjg7byJKpAP2f/uDHkgB15aqWmEwdL9hlY6uvNg1aZ/IUIJixgOxlRDCiFB5oBl5YvNYPpxfClVkEQcyt5+B4J1lFoUUhLNGyeYVi6P2Vh1aT9MwkDD7pDkID1W4JGMb+vM9b1pWNHmeU0KzRzyO6gpm7R2ScopuX6X/ksjTFjFB6GvJuw2O/oJDvOIyCGE1Q7Cg9SbDXLrOvbRJUsX0riOOXvYK13ZeyX6ciRiUeF+dtK4AGJ9HU9iGrH4QFoW40fBbie7LP9P/fCeRR/v8LSOiHXUW8l251SPhoxeA29T9+xg0B0NHUXScZNQu0F6EudUd11fbmpupbUhBzbWvLm+urq4m5gP15eX52dvfpSDwzRW3B1qiG5uFVp6KmGOpLcnwJuj4kztXlfXqOqOY11UCNaKFSkd7BcNBGhjJsryriMVac3YruqqKO03peIhtZtkFT5lh2dwwhXf7/ZlrgThg6ZCfPAX7Aeh5hpJK55DjE7JxeCD3ptWnQfqU7E/nIuveSq8TaGaJQK5rfaw1NEALqz6O0W8cUm8J/PwcAWKKR9Nv3QvjDdiT/blHpPbvI8EozuIhIVPhyfFpWVymu9XdY9tVYWLHYn49ujH8+OhMXoyYVck5YPdigkGPQ3W4zxuiWd4jzFVsKOYRphuHI9oicw7sqhmLM0k9g/Wft2eyIlZkI8yq5FZSCJ9XYMXGItPu9dCsGjkqBxFYxHkmHwsbrBF6s+LsyI5hgYOsT3hqbsTYvjXKCrHs6c5eC/ef0KzIf5kLlXhQTq7kO3iOj1Smz2LvUM6+CMW2QgcsUFSAVaKeTr5Bu7mxaKlp1zZT+MVnxXztvEllPu0mHeOncBlYlZqoQ2baqJlzsBiZLdxxCpeTit5FjRPJ9Zxf9I1Gxz1lnL8kS21Mb7rOONjlf4WgpYNUhs7jbJO5eS0kgsxtm/h88C1V+kAehHQtVvZ0YlJX3s9cqx7xoIWnzkRjL137+/F+7XriC3DXqqjqDil07PuqaiNhbpfg7MiyyYdRS6X/hA5Peduls/3fOMDVekSmJcmSteu0Z+XUNXhgVN5K4yIrdKNrZuuQ5fu0AxU+fb8LOz28cLgyoqwKnyHyA6qAeI1bF3C8zJEGrKCxGC6OgXJkjXELrRUWj1h6Ohf7WZgy1ODEgCcpZ4cakAIMGKPklaC4QUbMdoldK5J7PRIXnqnJPaV6/rF0BIIe3RdfPoAAvU9M/cnS+I8MTeX78yR9qaKOQv/tcuQHrPtcTS/3hvipRdgTbfHiwtE0J9PU+kwO2iaKvZshX7Il99+M33NL/H/q6m55vOZxIUMVZ/aamyaOXYwqLULksVFi+JXvmTM+/bUHKWjhmOyhG/ig0GZfH15dnO8G2EinN773HV9xrEKUgJ2XwkG9SciIRA20lgTx09TBJow4sWmy6QPR7F1M4SjLsYo0fFhCUSNtAFUofHMRcO0l7MY6aPFk8zR0a3WepDUHH1N/nscc5Lcv+DziP1Rm6Hv1ybKdcT7G3K87rsbY7l29S5eXmsOZEHhoEJeOnES7LucQDvwpAkQYzgYVImB2ov+szoY+P8Sr2uNDuYv/aLYOYkfHQYNDh+6kfudHmkFlDJg1Nfpj06kyn79RKevv8ojSWNI9AeXCt4JL8b6+suwN7Avcf/BuEBScr1/8NtIuWP7Rqyfz3n3kti/L5j/7O61I7mpt1ZLldtUFWjEZ6wJjEIuriiNj22ClR4ziy8VVX9dai+sv43SRGuByvpNswyPb3sw6BBnTVdt3KgOgfG623vd/rt7iXAmINPBou/YF5VG56EZyAZ52FmWqqjuHuCqDeRuZRO5ojYb9INmOWg20CNruVGhdSE5z2Ck0J90QF/L1TOQHgdGyJsolk2uSsgbInrO6Ixw2dHr+MXCVSdGPIEUg09gkPr5a1FnbeNPbngLxpzpLZGEvxf7tyktB+9dmYm3S8ZOogbqg41cZWxX6TIMy0I5S7dzpwygrfj74SXYxTu4tCgtHl7LmTNrtcwlP4lRsxNQbiIw9FgNlbnSM61p1rzF4NvA9lW6HyNXnDRXkoqDxSBDYBo932ILSvYFocD1JilrSsKVTzV7yiEf3IEcXP0rOKc72r+AFPs+A/HPwbxQiFYnl3Tbo/Pry73ktGfW70W/prfuE1ZVW/wu4zIn9LaV28UQ6e//Bw==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [object_name = _t, counter_name = _t, desired_value_txt = _t, desired_value_num = _t, issue_header = _t, description = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"desired_value_num", type number}}),
    #"Added Index" = Table.AddIndexColumn(#"Changed Type", "counter_id", 1, 1)
in
    #"Added Index";

shared #"Quick download server list" = let
    Source = ref_servers,
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"server_name", type text}, {"enabled", type logical}}),
    #"Appended Query" = Table.Combine({#"Changed Type", ref_quick_download}),
    server_name1 = #"Appended Query"[server_name],
    #"Sorted Items" = List.Sort(server_name1,Order.Ascending)
in
    #"Sorted Items";

shared ref_quick_download = let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMtRTcMzJUYqNBQA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [server_name = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"server_name", type text}})
in
    #"Changed Type";

Load was cancelled by an error in loading a previous table.

Did you check DOCS to make sure there is no workaround?
Yes, the closest thing I found was "Login failed error when running disk logger", but that is not my case.

Describe the bug
I installed SQLWATCH-v1.3.20, ran SQLWATCH Disk Utilisation Dashboard.pbit, and received an error.

To Reproduce
Steps to reproduce the behavior:

  1. Install the latest SQLWATCH
  2. Open SQLWATCH Disk Utilisation Dashboard.pbit

The following tables then fail to load:

Server
Microsoft SQL: A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)

Database
Load was cancelled by an error in loading a previous table.

DiskUtilisation-Database
Load was cancelled by an error in loading a previous table.

DiskUtilisation-Volume
Load was cancelled by an error in loading a previous table.

DiskUtilisation-Volume-FirstDate
Load was cancelled by an error in loading a previous table.

DiskUtilisation-Database-FirstDate
Load was cancelled by an error in loading a previous table.

Expected behavior
It should load without errors.

Screenshots

This is my setup; maybe something is wrong with the date format?

(screenshot: setup)

This is the error:

(screenshot: error)

Windows Server (please complete the following information):

  • Windows 10 Enterprise

SQL Server Management Studio (SSMS -> about -> copy info):
Microsoft SQL Server Management Studio 13.0.16106.4
Microsoft Analysis Services Client Tools 13.0.1700.441
Microsoft Data Access Components (MDAC) 10.0.17134.1
Microsoft MSXML 3.0 4.0 6.0
Microsoft Internet Explorer 9.11.17134.0
Microsoft .NET Framework 4.0.30319.42000
Operating System 6.3.17134

SQLWATCH version (from DACPAC or from sysinstances)

  • SQLWATCH-v1.3.20

Additional context
None

[New Feature] Capturing Page Life Expectancy

Is your feature request related to a problem? Please describe.
Capturing Page Life Expectancy (PLE) will help a user determine whether performance issues are related to memory pressure; it can also be used in general performance tuning.

Describe the solution you'd like
Display PLE for a server on the main performance dashboard of the Power BI template.

Describe alternatives you've considered
Additional memory performance metrics could justify a dedicated Memory tab: the Overview tab would display a few higher-level metrics (PLE, cache hit rates, etc.), while the Memory tab would include those same metrics plus additional memory counters. This is the same workflow you see in other enterprise database monitoring solutions.

Additional context
This would be rather simple to implement, as PLE can be found by querying sys.dm_os_ring_buffers. Suggestions for implementation (create new tables / use existing tables / create a new stored procedure / edit an existing one, etc.) would be great!
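
As a point of reference, PLE is also exposed through sys.dm_os_performance_counters. Below is a minimal sketch in the style of the shared functions earlier in this document; fn_get_page_life_expectancy is hypothetical and not part of SQLWATCH, while the DMV and its columns (object_name, counter_name, cntr_value) are standard.

shared fn_get_page_life_expectancy = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    // Hypothetical sketch: reads PLE straight from the standard performance counter DMV.
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_page_life_expectancy (hypothetical) */
        [counter_name] = rtrim([counter_name])
      , [instance_name] = rtrim([instance_name])
      , [cntr_value] -- PLE in seconds
    from sys.dm_os_performance_counters
    where [counter_name] = 'Page life expectancy'
      and [object_name] like '%Buffer Manager%'"])
in
    Source
in Source;

Joining the result to [dbo].[ufn_time_intervals], as the other collectors in this document do, would make it fit the existing report_time pattern.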

I'm excited to have a chance to contribute to this project. Let me know if you have any questions, comments, concerns. Thanks!

[New Feature] Query store analysis

Is your feature request related to a problem? Please describe.
N/A

Describe the solution you'd like
Query Store provides history on its own, so we first need to understand what is actually needed here. A sketch of what it already exposes follows below.
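
For discussion, here is a minimal sketch of the kind of history Query Store already exposes, written in the style of the shared functions earlier in this document. fn_get_query_store_stats is hypothetical and not part of SQLWATCH; the sys.query_store_* catalog views are standard in SQL Server 2016+ (avg_duration is reported in microseconds).

shared fn_get_query_store_stats = let Source = (server_name as any, sqlwatch_database_name as any) =>
let
    // Hypothetical sketch: aggregates Query Store runtime stats per query text.
    Source = Sql.Database(server_name, sqlwatch_database_name, [Query="select /* SQLWATCH Power BI fn_get_query_store_stats (hypothetical) */
        qt.[query_sql_text]
      , [avg_duration_ms] = avg(rs.[avg_duration]) / 1000.0 -- unweighted average across runtime intervals
      , [total_executions] = sum(rs.[count_executions])
    from sys.query_store_query_text qt
    inner join sys.query_store_query q on q.query_text_id = qt.query_text_id
    inner join sys.query_store_plan p on p.query_id = q.query_id
    inner join sys.query_store_runtime_stats rs on rs.plan_id = p.plan_id
    group by qt.[query_sql_text]"])
in
    Source
in Source;

Whether SQLWATCH should snapshot this into its own tables or simply report on it live is exactly the open question raised above.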

Describe alternatives you've considered
N/A

Additional context
N/A
