Monthly Archives: December 2016

Script to De-Orphan SQL Logins

–By Lori Brown   @SQLSupahStah

I am working with a client who is migrating from SQL 2012 to SQL 2016. It just so happens that while they have just a few databases to move, they also have a ton of SQL logins that needed to be migrated. This is not an issue since I know to use sp_help_revlogin to script out logins and their passwords for replay on the new SQL 2016 instance. However, inevitably, some SQL logins have been deleted from the server but not from the databases, or the databases get refreshed during testing on SQL 2016 and logins are left orphaned. I needed a script that would look at each database and generate the statements needed to create or de-orphan these logins.

I figured that with a little Google-fu I would find someone who had already created a way to do this. And, while I did find a solution, I decided to give it my own twist. My solution is to have a script that I can run when needed that will output the statements that I need to create or de-orphan logins from all databases in an instance. I combined things from these two blogs:

Here is my code and a quick look at the output:

USE master
GO

DECLARE @usercount INT
DECLARE @sqlstr1 NVARCHAR(1000)
DECLARE @sqlstr2 NVARCHAR(500)
DECLARE @dbname VARCHAR(128)
DECLARE @username SYSNAME
DECLARE @OrphUsers CURSOR

-- Enumerate online, writable user databases
DECLARE db CURSOR FOR
SELECT name FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'msdb', 'model')
AND is_read_only = 0 AND state = 0

OPEN db
FETCH NEXT FROM db INTO @dbname
WHILE @@FETCH_STATUS = 0
BEGIN
       -- Dynamic SQL so the inner cursor looks at each database's principals
       SET @sqlstr1 = 'SET @orphanedusers = CURSOR
       FOR SELECT a.name
       FROM ['+@dbname+'].sys.database_principals a
       LEFT OUTER JOIN sys.server_principals b
             ON a.sid = b.sid
       WHERE (b.sid IS NULL)
             AND (a.principal_id > 4)
             AND (a.type_desc = ''SQL_USER'')

       OPEN @orphanedusers'

       EXEC sys.sp_executesql @sqlstr1, N'@orphanedusers cursor output', @OrphUsers OUTPUT

       FETCH NEXT FROM @OrphUsers INTO @username
       WHILE @@FETCH_STATUS = 0
       BEGIN
             SELECT @usercount = COUNT(*) FROM sys.server_principals WHERE name = @username

             IF @usercount = 0  -- login does not exist at the server level
             BEGIN
                    PRINT '--User '+@username+' does not exist and must be created. Please use a strong password.'
                    SET @sqlstr2 = 'USE [master] CREATE LOGIN ' + @username + ' WITH PASSWORD = ''ABC123'''
             END
             ELSE  -- login exists but the database user is orphaned
             BEGIN
                    PRINT '--User '+@username+' is an orphan.'
                    SET @sqlstr2 = 'USE ['+@dbname+'] ALTER USER ' + @username + ' WITH LOGIN = ' + @username
             END

             PRINT (@sqlstr2)

             FETCH NEXT FROM @OrphUsers INTO @username
       END
       CLOSE @OrphUsers
       DEALLOCATE @OrphUsers

       FETCH NEXT FROM db INTO @dbname
END
CLOSE db
DEALLOCATE db










The output labels what needs to be done to make sure the users have access to their databases. If the user does not exist in SQL at all, a CREATE LOGIN statement is generated. If the user is simply orphaned, an ALTER USER statement is generated.

The only thing I did not do yet (mostly because my needs were centered around SQL logins) is to include Windows domain accounts in my logic. As soon as I add that to my script, I will repost. Or, if someone out there adds it, please send an update.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have, please contact us. We will be happy to help! Leave a comment and feel free to track back to us.


SQL 2016 Service Pack 1 Released

— By Lori Brown   @SQLSupahStah

SQL 2016 SP1 has been released. I might be a little late to this announcement but thought I would at least give the download link and some highlights on things that are fixed in it.

Download it from here:

Some of the highlights are:

  • New CREATE OR ALTER syntax that allows functions, triggers, procedures, or views to be created or modified without having to test for existence first.


  • New error log message that indicates if tempdb data files are not all the same size


  • New lightweight query profiling via trace flag 7412

  • Many previously Enterprise-only programmability features are now available in Standard and lower editions.
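As a quick illustration of the first item, CREATE OR ALTER removes the old IF EXISTS/DROP dance. A minimal sketch (the procedure and table names here are made up for the example):

```sql
-- SQL 2016 SP1 and later: creates the procedure if missing, alters it if present
CREATE OR ALTER PROCEDURE dbo.usp_GetOrders   -- hypothetical procedure name
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders;                          -- hypothetical table
END
GO
```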



SQL Server Security – Controlling Access

— by Ginger Keys

Installing SQL following best practices is an important first step in securing your SQL server. The next step in securing your SQL server is to decide who can access the SQL instance, what databases and other objects they need access to, and what kind of permission to give them to those objects. So what is involved in securing access to your SQL server? We will go over the basic components, and discuss best practices to help keep your server secure.


There are three main areas of security within your SQL server instance.

  • Principals – these are the persons or entities needing to access your SQL server.
  • Securables – these are the databases, objects, and other resources in your SQL server instance.
  • Permissions – these determine which resources a principal can access and what actions they are allowed to perform.

There are also some other important areas to consider in controlling access to your SQL server:

  • Schemas and Ownership


  • SQL server service accounts
  • Administrative accounts



Principals are persons, applications, and entities that can be authenticated to access the SQL server resources. In SQL server, principals include logins, users, and roles.

LOGINS – Principals require a login in order to connect to the SQL server. Logins are at the server level only, and provide for the entry point, or the initial connection to the server. Logins are validated against the master database and the connection is to the instance, not the databases or other components.   SQL supports two methods for authenticating logins: Windows and SQL authentication. Mixed mode authentication allows for the use of either Windows or SQL logins.

Windows logins can include individual users, groups, domain accounts, or local accounts. A group login grants access to all Windows accounts that are members of the group. SQL relies on Windows to authenticate these accounts.

SQL logins are specific to the instance, and are stored in SQL, with the username and hash of the password in the master database. SQL uses internal authentication to validate login attempts. This type of login may be necessary for users not associated with a Windows domain.

Best Practices for Logins:

  • Use Windows authentication whenever possible
  • Create Windows groups in Active Directory with appropriate access/permissions to the SQL server, then add individual users to the appropriate groups
  • Do not use SQL authentication for logins if possible. When SQL Server logins are used, login names and passwords are passed across the network, which makes them less secure
  • Audit SQL Server failed login attempts to monitor for possible hacking activity
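One quick way to check the last bullet is to scan the SQL Server error log for failed login messages (assuming the default audit setting of logging failed logins is enabled):

```sql
-- Search the current error log (0) of the SQL Server log type (1)
-- for failed login attempts
EXEC sys.xp_readerrorlog 0, 1, N'Login failed';
```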

USERS – Once logged in to a SQL instance, a user account is necessary in order to connect to a database and its components. Users are created within a database, and mapped back to the server login.

User accounts that are not mapped to a login account are known as orphaned users. An exception is contained database users; they do not need to map to a login.
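A quick way to list orphaned SQL users in the current database is to look for database principals whose SID has no match among the server logins:

```sql
-- Orphaned SQL users: database principals with no matching server login SID
SELECT dp.name AS OrphanedUser
FROM sys.database_principals dp
LEFT JOIN sys.server_principals sp
    ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type_desc = 'SQL_USER'
  AND dp.principal_id > 4;  -- skip dbo, guest, sys, INFORMATION_SCHEMA
```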

Guest user – this account is a built-in account in SQL server, and is disabled in new databases by default. The guest user allows a login to access databases without being mapped to a specific database user, and it inherits the ‘public’ database role with its permissions.

dbo user – this account has implied permissions to perform all activities in the database. Any principals belonging to the sysadmin fixed server role are mapped to the dbo user account automatically. The dbo user is in every database and is a member of the db_owner database role.

Best Practices for Users:

  • Disable the guest user in every user database (not system DBs)
  • If you must use the guest account, grant it minimum permissions
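Disabling the guest user is a one-line statement per database. Note that guest must remain enabled in master, msdb, and tempdb; the database name below is a placeholder:

```sql
USE [YourUserDatabase];   -- hypothetical database name
REVOKE CONNECT FROM guest;  -- guest can no longer access this database
```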

ROLES – Roles exist at both the server and database level. Permissions can be assigned to a role, which makes it more efficient to manage principals’ access to securables. Permissions are given to roles, then logins and users can be added to (or removed from) roles.

Server roles – server level roles can be fixed or user defined. Members of fixed server roles have permissions to server-level securables, and those permissions cannot be changed or revoked. Logins can be assigned to fixed server roles without having a user account in a database.

For complete list and description of server roles click here

Database roles – These roles have a pre-defined set of permissions. Logins must be mapped to database user accounts in order to work with database objects. Database users can then be added to database roles, inheriting any permission sets associated with those roles.

For complete list and description of database roles click here

Public role – The public role is contained in every database including system databases. It cannot be dropped and you can’t add or remove users from it. Permissions granted to the public role are inherited by all users because they belong to the public role by default.

Best Practices for Roles:

  • Be very cautious when adding users to fixed server roles:
    • Do not add principals to the sysadmin role unless they are highly trusted.
    • Membership in the securityadmin role allows principals to control server permissions, and should be treated with the same caution as the sysadmin role.
    • Be very cautious in adding members to the bulkadmin role. This role can insert data from any local file into a table, which could put your data at risk. For more information click here



  • Grant public role only the permissions you want all users to have, and revoke unnecessary privileges.
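When a fixed server role grants more than you need, a user-defined server role (SQL 2012 and later) can carry a narrow permission set instead. A sketch, with hypothetical role and group names:

```sql
-- A monitoring role that can read server state but nothing else
CREATE SERVER ROLE MonitoringRole;
GRANT VIEW SERVER STATE TO MonitoringRole;

-- Add an existing Windows group login to the role (name is an example)
ALTER SERVER ROLE MonitoringRole ADD MEMBER [DOMAIN\MonitoringGroup];
```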


SQL Server securables are the resources that can be accessed by a principal. SQL server resources operate within a hierarchy, with the server at the top of the hierarchy. Below the server instance lies the databases, and below the databases are a collection of objects (schemas, tables, views, etc.). Access to securables is controlled by granting or denying permissions to principals, or by adding or removing principals (logins and users) to roles which have access. All securables have an owner. The owner of a securable has absolute control over the securable and cannot be denied any privilege. Server level securables are owned by server principals (logins), and database level securables are owned by database principals (users).



Permissions determine the type of access granted on a securable to a specific principal and what tasks a principal can perform on securables. The TSQL permission statements are GRANT, DENY, and REVOKE.




Granting permission to a principal removes any DENY or REVOKE on that securable. A permission given at a higher scope in the hierarchy that contains that securable will take precedence over the lower level permission statement. Database level permissions only apply to that specific database.

Owners of securables can grant permissions on the objects they own, and any principal with CONTROL permissions can grant permissions on that securable as well.

Best Practices for Permissions:

  • Always use the principle of least privilege, which limits users by granting the minimum permissions necessary to accomplish a task. For more information click here
  • Document any elevated user permission and request managerial approval.
  • When developing an application use a least-privileged user account (LUA), which may be more difficult – but will eliminate the temptation to grant elevated privileges as a quick fix when an end user cannot perform certain tasks that the administrative developer could. Granting elevated permissions to users in order to acquire lost functionality after the development phase can leave your application vulnerable to attack.
  • Grant permissions to roles rather than to users. It is easier to add and remove users from roles than to manage separate permission sets for individual users.
  • Don’t grant individual users access to SQL Server, instead create groups for specific servers with specific permissions, and add individual users to the appropriate groups.
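The grant-to-roles pattern looks like this at the database level (schema, role, and group names are examples):

```sql
-- Create a database role, grant it the permission set once...
CREATE ROLE ReportReaders;
GRANT SELECT ON SCHEMA::Sales TO ReportReaders;

-- ...then manage access purely through role membership
ALTER ROLE ReportReaders ADD MEMBER [DOMAIN\ReportUsers];
```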

Ownership & Schemas 

Owners of objects have irrevocable permissions to administer those objects. The owner of a securable has absolute control over the securable and cannot be denied any privilege. You cannot remove privileges from an object owner, and you cannot drop users from a database if they own objects in it. Server level securables are owned by server principals (logins), and database level securables are owned by database principals (users).

A schema is a named container for similar database objects, and can be owned by any principal. You can assign security rules for a schema which will then be inherited by all objects in that schema.

Objects can be moved between schemas, and schema ownership can be transferred between principals.

Best Practices for Schemas:

  • In your TSQL, refer to the objects using a fully qualified name. At the very least, use the schema name followed by the object name, separated by a period (.). Example: databasename.schemaname.tablename.
  • Use synonyms to obfuscate the schema ownership of objects.
  • Use schemas to protect the base database object from being altered or removed from the database by users without sysadmin permissions.
  • Use schemas to combine related, logical entities to reduce administration overhead.
  • For more info –
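The schema practices above can be sketched as follows; all object names are hypothetical:

```sql
-- A schema as a named container, owned by a principal
CREATE SCHEMA Sales AUTHORIZATION dbo;
GO

-- A synonym hides the base schema from callers
CREATE SYNONYM dbo.CurrentOrders FOR Sales.Orders;
GO

-- Fully qualified reference: database.schema.object
SELECT OrderID FROM MyDatabase.Sales.Orders;
```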

SQL Server Service Accounts 

Depending on what features and components you decide to install, SQL Server has several services that are used to manage authentication of SQL Server with Windows. These services need user accounts associated with them to start and run. Each service account should have exactly the privileges it needs to do its job and no more. For a list of all available SQL services click here.

Best Practices for Service Accounts:

  • Assign service accounts low-level permissions using the principle of least privilege.
  • Don’t use built-in Windows system accounts (e.g., Network Service, Local System) for SQL Server service accounts – the built-in system accounts inherit certain elevated rights in Active Directory that aren’t required by SQL Server.
  • Use a Managed Service Account (MSA) if resources external to the SQL Server will be used.
  • When an MSA is not possible, use a specific low-privilege non-user domain account.
  • Use separate accounts for different SQL Server services. The service accounts should not only be different from one another, they should not be used by any other service on the same server.
  • The SQL Server Agent account is the only service account that requires membership in the sysadmin server role.
  • Do not grant additional permissions to the SQL Server service account or the service groups – SQL Server installation program will grant them the local rights that they need during the installation.
  • Always use SQL Server Configuration Manager to change service accounts or passwords.
  • Don’t add service accounts to the Local Administrator group
  • Use a service account for applications, and restrict the applications access only to data required – not every object in the database

Administrative Accounts

sa Account – SQL Server creates this server level login during installation by default. The sa account has full administrative rights in your SQL Server instance, and is well known and often targeted by hackers. The sa login maps to the sysadmin fixed server role, which has irrevocable administrative privileges on the whole server.

BUILTIN\Administrators – All members of the Windows BUILTIN\Administrators group were members of the sysadmin role by default. This built-in group is no longer granted a login by default in SQL Server 2008 and later.

Best Practices for Administrative Accounts

  • Disable or rename the sa account – make sure another account exists with admin privileges before doing this.
  • If you must use the sa account, assign a strong password and enforce password policies
  • Do not use the sa account for day-to-day administration or logging on to the server remotely
  • Do not allow applications to use the sa account to connect to SQL
  • Remove the BUILTIN\Administrators group from the SQL Server logins if it’s present in your instance.
  • Restrict users with system admin privileges, using server roles instead.
  • Restrict members of local administrator group. Limit the administrative access only to those who really require it.
  • Assign every administrator a named login, and don’t allow shared logins so that you can identify the users behind each and every database change.
  • Document all users with administrative rights, and any elevated user permissions
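Disabling and renaming sa is two statements; be certain another sysadmin login exists before running them. The new name below is only an example:

```sql
-- Verify another sysadmin login exists BEFORE disabling sa
ALTER LOGIN sa DISABLE;
ALTER LOGIN sa WITH NAME = [NotSA];  -- hypothetical replacement name
```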


Controlling access to your SQL Server should be well thought out ahead of time. Every organization needs to have an information security policy in place, and the role of the DBA is to enforce these policies and protect the SQL Server instances along with the data in them. The suggestions provided above give a basic guideline for controlling who has access to your SQL Server. Utilizing windows groups, database roles, applying the principle of least privilege, and implementing the other suggestions above can be instrumental in keeping your SQL environments safe.


Consistency Check an Analysis Services Database

— By Lori Brown @SQLSupahStah

I am migrating a client to SQL 2016 and had to restore the SSAS databases and do what amounts to a DBCC CHECKDB against it. The command is done while connected to SSAS in Management Studio. You open an XMLA query (not MDX)…


…and run the following:

<DBCC xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Work Metrics</DatabaseID>
    <CubeID>Hourly Insights</CubeID>
  </Object>
</DBCC>
The output looks like this:


As usual more info can be found from MSDN:


SQL Server Memory In Task Manager – Tell Your Manager to Not Worry!

-by Ginger Keys

We are going to re-promote this subject because it is still so relevant. All too often we run into people who are in a panic after looking at Task Manager to try to diagnose SQL stress. They seem to become very worried when looking at how much memory SQL uses, not realizing that this is expected.

Task Manager is just that, a task manager. If you are looking at it to get accurate values for memory (RAM) usage or CPU time, you will get not-so-accurate information. False alarms. Information that doesn’t exactly mean what it implies.

What I usually see when I look at Task Manager is that CPU is low, disk usage is low, and memory used is high – which is what we expect on a SQL machine. SQL will use as much memory as it can get, to improve performance by caching tons of data in memory in case it needs it, to avoid having to go to disk. It will immediately release any memory to the OS if needed. SQL is a memory hog, and will hold onto all the memory we allocate to it, whether it is actually using it or not. And this is what we want it to do!

SQL will take all the memory that is allocated to it when it is needed, store data and execution plans in the buffer, and when it is through using it, it doesn’t release it…it caches that data into memory until either 1) SQL needs it again, or 2) Windows OS requests the memory.

This isn’t an indication of performance stress…it’s just showing that SQL has reserved all the memory that we told it to use (through setting min and max memory on the SQL instance properties [1]).  SQL doesn’t let go of memory unless the server comes under memory pressure (like if other apps need memory and Windows sends out a memory pressure notification).  As long as the OS doesn’t need to use it for anything, SQL will hold on to it for future use.

The default memory setting when you install SQL is for it to use all of the memory. When Windows asks SQL to give it back, SQL gives it back in huge amounts, and then proceeds to build back up again to the maximum amount. This is not optimum for stability or performance, and it is highly recommended to set the limit in the Max Memory setting as a best practice. That way the Windows OS is always guaranteed a certain amount.
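Max server memory is an instance-wide setting changed through sp_configure; the 28 GB value below is only an example and should be sized for your server:

```sql
-- Cap the memory SQL Server will use (value is an example)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;  -- e.g., 28GB on a 32GB box
RECONFIGURE;
```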

Click here for more info on setting Max Memory:

Another good reference for setting Max Memory for SQL is here:


And when looking at Resource Monitor, it is a mistake to use the Available and Free metrics as the key indicators for determining overall system health [2]. The Available MB of memory is composed of both Standby and Free memory: the stuff in Standby is what Windows has put in the cache (but can be booted out if needed), and Free means it is not being used at all. The combination is what Windows calls Available. So there is actually data in the cache that Windows says is available. Some of the categories in Task Manager/Resource Monitor can be very confusing and even misleading, which is why we shouldn’t use these metrics alone to decide if we are in danger of being out of memory.

Instead of only relying on task manager or resource monitor, we really need to look at other metrics to find out if there is a memory related performance issue on the machine.

Some of the metrics to pay attention to are:

  • Page Life Expectancy
  • Available Bytes
  • Buffer Cache Hit Ratio
  • Target & Total Server Memory
  • Memory Grants Pending
  • Pages/sec (Hard Page Faults)
  • Batch Requests/sec & Compilations/sec

Some of these metrics can be queried from TSQL, or can be observed with Perfmon (Windows Performance Monitor – located in Administrative Tools).

 Page Life Expectancy (PLE)

Found in Perfmon at SQL Server: Buffer Manager – Page Life Expectancy, this counter measures the average amount of time unlocked data pages are staying in the buffer pool.   During peak production hours this number may dip into lower numbers, but we generally want this number to remain above 300 seconds (so data is staying in the cache for 5 minutes or longer). The longer data stays in the buffer, the less likely SQL will have to go to disk for I/O operations.

There is an interesting occurrence with page life expectancy… when SQL really does run out of memory, PLE drops very quickly, but it grows back very slowly. You’re probably not still having memory issues during the coming-back stage, even though it may look like it’s struggling. If PLE stays down, however, then you have a problem. So be careful, because PLE can be misleading when it takes a drop… there’s a difference between it going down and staying down (you have a memory problem), and going down once and crawling back up (which is normal). If it stays down below 300 seconds consistently, you may need to add more RAM.

You can monitor this through Perfmon, or run the following query:

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Page life expectancy'

Available MBytes

Found in Perfmon as Memory: Available MBytes, this counter detects how much memory is available for use, and whether Windows is experiencing memory pressure. We generally start investigating if this number consistently goes below 500MB, which would indicate low memory, and Windows may start memory management operations. If Available MBytes is consistently low, you may need to add more RAM.

This counter cannot be queried through sys.dm_os_performance_counters; it has to be observed through Perfmon.
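That said, a close approximation of the same OS-level number is exposed inside SQL Server through the sys.dm_os_sys_memory DMV:

```sql
-- Approximation of Available MBytes from inside SQL Server
SELECT available_physical_memory_kb / 1024 AS AvailableMBytes,
       system_memory_state_desc           -- e.g., 'Available physical memory is high'
FROM sys.dm_os_sys_memory;
```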

Buffer Cache Hit Ratio

Found in Perfmon as SQL Server: Buffer Manager: Buffer Cache Hit Ratio. This counter averages (since the last restart of your SQL instance) how often SQL Server goes to the buffer pool to get data, and actually finds that data in memory, instead of having to go to disk. We want to see this ratio high in OLTP servers – around 90-95%. The higher the ratio, the less often SQL has to go to disk for I/O operations, which translates into better performance for your end users. If this number is consistently below the 90% mark, you may need to add more RAM to your server to boost performance.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Buffer cache hit ratio'

 Target & Total Server Memory

Found in Perfmon at SQL Server: Memory Manager – Total Server Memory, and SQL Server: Memory Manager – Target Server Memory. The Total Server Memory is the current amount of memory that SQL Server is using.  The Total Server memory will be quite a bit lower than the Target memory during the initial buffer pool ramp up… it is trying to populate the cache and get pages loaded into memory.  Performance might be a little slower during this time since more disk I/O is required, but this is normal.  After it ramps up, and normal operations resume, Total Server Memory should be very close to Target Server Memory. (The ratio should be close to 1). If Total Server Memory does not increase much, but stays significantly less than Target, this could indicate a couple of things…

1) You may have allocated much more memory than SQL can use – SQL could cache the entire databases into memory, and if the databases are smaller than the amount of memory on the machine, the data won’t take up all the space allocated to SQL. So in this case Total Memory (actually memory being used by SQL) will never reach Target Memory (amount allocated to SQL). Or,

2) SQL cannot grow the buffer pool because of memory pressure from outside of SQL. If this is the case, you need to either increase the Max Server Memory, or add more RAM to boost performance.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] IN ('Total Server Memory (KB)', 'Target Server Memory (KB)')

 Memory Grants Pending

Found in Perfmon as SQL Server: Memory Manager – Memory Grants Pending, this counter measures the total number of SQL processes waiting for a workspace memory grant. The general recommendation for this measurement should be 1 or less. Anything above 1 indicates there are SQL processes waiting for memory in order to operate.

Memory grants pending could be due to bad queries, missing indexes, sorts, or hashes. To investigate this, you can query the sys.dm_exec_query_memory_grants view, which will show which queries (if any) require a memory grant to execute [5].

If the Memory Grants Pending are not due to the above mentioned conditions, then more memory should be allocated to SQL Server by adjusting Max Server Memory. Adding more RAM should be the last resort in this case.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] = 'Memory Grants Pending'

Pages/sec (Hard Page Faults)

Found in Perfmon as Memory: Pages/sec, this measures the number of pages read from or written to disk. Pages/sec is the sum of Pages Input/sec and Pages Output/sec and reports the number of requested pages that were not immediately available in RAM and had to be read from the disk (resulting in hard page faults). A hard page fault occurs when Windows has to use the swap file on the disk [3]. It happens when an address in memory that’s part of a program is no longer in main memory, but has instead been swapped out to the paging file, making the system go looking for it on the hard disk. Since disk storage is much slower than RAM, performance can be greatly degraded.

We want to see the Pages/sec number below 50, and closer to 0. If you see a high value of Pages/sec on a regular basis, you might have performance degradation, but not necessarily. A high Pages/sec value can happen while doing database backups or restores, importing or exporting data, or by reading a file mapped in memory [4].

Because a hard fault doesn’t necessarily indicate a critical error condition, depending upon what’s normal for your environment, it’s a good idea to measure a baseline value and monitor this number against it. If the values are consistently higher than your baseline value, you should consider adding more RAM.

Batch Request & Compilations

SQL Server: SQL Statistics – Batch Requests/sec. This is the number of incoming queries.

SQL Server: SQL Statistics – SQL Compilations/sec. This is the number of new execution plans SQL had to build.

If Compilations/sec is 25% or higher relative to Batch Requests/sec, SQL Server is putting execution plans in the cache but never actually reusing them. Your valuable memory is being used up to cache query execution plans that will never be used again – instead of caching data. This is bad. We don’t want this to happen. And a high Compilations/sec value (like over 100) indicates there are a lot of ad-hoc (one-hit-wonder) queries being run. You can enable the “optimize for ad hoc workloads” setting if this is the case, and this will put the full execution plan in the cache only after a query has been executed a second time.
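The “optimize for ad hoc workloads” option is an instance-wide sp_configure setting:

```sql
-- Cache only a plan stub on first execution of an ad hoc query;
-- the full plan is cached if the query runs again
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```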

To query these metrics with TSQL:

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'Batch Requests/sec'

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'SQL Compilations/sec'

To get the ratio of Compilations to Batch Requests:


(SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE ‘%SQL Statistics%’

AND [counter_name] = ‘SQL Compilations/sec’)


(SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE ‘%SQL Statistics%’

AND [counter_name] = ‘Batch Requests/sec’)

,2) as [Ratio]


Looking at the counters in Perfmon (Report View) below, we can see that this particular server is in a healthy state, with over 11GB of Available Bytes, no page faults (so I/O is operating from buffer, not disk), Buffer cache with 100% ratio, PLE of 21,515 seconds (that’s almost 6 hours data spent in the buffer), no pending memory grants (so no SQL process is having to wait for memory), ample Total memory, and low compilation ratio (11%) compared to all the queries being run.


These measurements should be observed and understood, instead of using Task Manager to make critical decisions about resources on your SQL Server.

If we make changes based only upon what we see in Task Manager and reduce the amount of memory SQL can use, then whenever SQL needs more memory to execute processes it will have to read/write from disk, which will degrade performance. Reducing the amount of memory SQL can use will slow performance and will most likely increase complaints from end users. The lower you set your Maximum Server Memory, the lower your performance will go.

No matter how much memory you put in a system, SQL Server will use all it can get until it’s caching entire databases in memory and then some. This isn’t an accident. SQL Server caches data in memory. On purpose. That means it is healthy, and enhancing performance for your end users.

Now send a link to this blog post to your favorite person on staff who needs to be educated about Task Manager and SQL Server. You will be glad you did!


[1] “Glenn Berry’s SQL Server Performance”

[2] “Use Resource Monitor for Memory Monitoring” by Scott Lowe.

[3] “Swap File”. TechTarget.