I thought I’d share my crib sheet on SQL Server database management. I will expand on some of the recommendations at a later date.
Data and Log File Management
- Isolate data and log files from other application files.
- Minimizes file fragmentation, which can degrade transaction performance.
- Create data and log files on separate volumes.
- This helps to maximize performance and availability.
- When creating data and log files, size them according to foreseeable data growth.
- If the files run out of space then the database server will need to grow the files by allocating additional space. This holds up other processes, and exacerbates file fragmentation.
- As a catch-all, enable a database’s auto-growth settings to avoid running out of space.
- Ensure that the growth increments are large enough to avoid regular auto-growth events from occurring.
- An auto-growth event should be an abnormal event, and the aim should be to avoid them through regular maintenance and monitoring.
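The sizing and growth recommendations above can be sketched as a `CREATE DATABASE` statement. The database name, file paths, sizes, and growth increments below are all illustrative assumptions, not prescriptions:

```sql
-- Hypothetical "Sales" database: data and log files on separate volumes,
-- pre-sized for foreseeable growth, with fixed (non-percentage) growth
-- increments large enough that auto-growth events stay rare.
CREATE DATABASE Sales
ON PRIMARY
(
    NAME = Sales_Data,
    FILENAME = 'D:\SQLData\Sales_Data.mdf',  -- data volume
    SIZE = 10GB,
    FILEGROWTH = 1GB
)
LOG ON
(
    NAME = Sales_Log,
    FILENAME = 'L:\SQLLogs\Sales_Log.ldf',   -- separate log volume
    SIZE = 2GB,
    FILEGROWTH = 512MB
);
```

For an existing database, the same sizes and increments can be applied with `ALTER DATABASE … MODIFY FILE`.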
- Do not enable auto-shrink
- Leads to excessive file fragmentation
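Auto-shrink is a database-level option and can be checked and disabled directly (the database name `Sales` is a placeholder):

```sql
-- Ensure auto-shrink is disabled for the database.
ALTER DATABASE Sales SET AUTO_SHRINK OFF;

-- Confirm the current setting: 0 = off.
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE name = 'Sales';
```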
- Regularly check for free space on the data, log, and backup volumes.
- Regularly reorganize or rebuild indexes to avoid fragmentation.
- Consider determining which indexes should be rebuilt based on analyzing fragmentation.
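One way to analyze fragmentation is the `sys.dm_db_index_physical_stats` DMV. The 5%/30% thresholds below are a common rule of thumb, not a hard rule, and the index/table names in the comments are hypothetical:

```sql
-- Suggest an action per index based on measured fragmentation:
-- reorganize between ~5% and 30%, rebuild above 30% (illustrative thresholds).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
           ELSE 'NONE'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.index_id > 0;  -- exclude heaps

-- Then act per index, e.g.:
-- ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD;
-- ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;
```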
- Enable the database’s Auto Create Statistics option.
- Up-to-date statistics enable the query optimizer to choose an optimal query plan.
- When an index is created or rebuilt, a statistical histogram is generated for the first column in the index.
- Additionally, if the Auto Create Statistics option is enabled for the database, then statistics will be created for non-indexed columns that are referenced in queries.
- Enable the Auto Update Statistics option.
- This prevents statistics from becoming out-of-date, which can lead the optimizer to choose a sub-optimal query plan.
- If the Auto Update Statistics option is enabled, then when data modifications exceed specific thresholds, the database server will automatically update statistics for the table.
- Regularly rebuild indexes (see above), which in turn rebuilds their statistics.
- When statistics are created or updated automatically, only a sub-set of a column’s data is sampled rather than a full scan being performed. The sample may not be representative of the population.
- However, when an index is created or rebuilt then a full scan is performed in generating the statistics.
- Regularly update statistics, but only for indexes that have not recently been rebuilt, and for column statistics.
- Use the WITH FULLSCAN option where feasible.
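The statistics settings and updates above map to the following statements (the database name `Sales` and table name `dbo.Orders` are placeholders):

```sql
-- Enable automatic statistics creation and updates for the database.
ALTER DATABASE Sales SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE Sales SET AUTO_UPDATE_STATISTICS ON;

-- Full-scan statistics update for a single table, where feasible.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or update all out-of-date statistics in the database (sampled by default).
EXEC sp_updatestats;
```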
- Set the database’s recovery model to Full or Bulk-logged where point-in-time recovery is required.
- Applicable for most OLTP databases, which experience regular transactions throughout the day.
- Set the database’s recovery model to Simple where point-in-time recovery is not required.
- Data warehouses typically only experience regular scheduled updates. Point-in-time recovery is therefore not required.
- The simple recovery model is easier to manage and may improve bulk copy performance.
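Setting the recovery model is a one-line change per database (the database names here are hypothetical):

```sql
-- Point-in-time recovery required, e.g. an OLTP database.
ALTER DATABASE Sales SET RECOVERY FULL;

-- Point-in-time recovery not required, e.g. a scheduled-load data warehouse.
ALTER DATABASE SalesDW SET RECOVERY SIMPLE;

-- Bulk-logged is a variant of Full that minimally logs bulk operations:
-- ALTER DATABASE Sales SET RECOVERY BULK_LOGGED;
```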
- Schedule regular full back-ups.
- A full back-up is resource intensive and may degrade database performance while it executes, so where possible schedule full back-ups for off-peak times.
- Intersperse full backups with regular differential back-ups.
- If the Full or Bulk-logged recovery model is being used, then intersperse the differential backups with log backups.
- This is necessary for point-in-time recovery.
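The full/differential/log sequence above corresponds to three backup statements. The schedule in the comments and the paths are illustrative assumptions:

```sql
-- Example schedule: weekly full, daily differential, frequent log backups.
BACKUP DATABASE Sales TO DISK = 'B:\Backups\Sales_Full.bak';

BACKUP DATABASE Sales TO DISK = 'B:\Backups\Sales_Diff.bak'
WITH DIFFERENTIAL;

-- Only valid under the Full or Bulk-logged recovery model.
BACKUP LOG Sales TO DISK = 'B:\Backups\Sales_Log.trn';
```

To restore to a point in time, you restore the last full backup, the most recent differential, and then the chain of log backups up to the target time.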
- Use backup compression to save space and also to speed up backups.
- To ensure the integrity of the backups, enable the following backup options –
- “Perform checksum before writing to the media” option.
- “Verify backup when finished” option.
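In T-SQL these options correspond to `WITH COMPRESSION, CHECKSUM` on the backup and a `RESTORE VERIFYONLY` afterwards (paths and the database name are placeholders):

```sql
-- Full backup with compression and page checksums.
BACKUP DATABASE Sales
TO DISK = 'B:\Backups\Sales_Full.bak'
WITH COMPRESSION, CHECKSUM;

-- Verify the backup is readable, re-checking checksums where present.
RESTORE VERIFYONLY
FROM DISK = 'B:\Backups\Sales_Full.bak'
WITH CHECKSUM;
```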
- Schedule the removal of old backups.
- But ensure that you have at least 2 available at any one time just in case one of the backups is corrupt.
- In addition to backing up application databases, regularly back up the system databases – master, model, and msdb.
- Determine a disaster recovery plan with the business. Document and test.