Entity Framework Core Package Manager Console Tools – Cheat-sheet

Getting help

get-help entityframework
get-help Add-Migration
get-help Add-Migration -examples
get-help Add-Migration -detailed
get-help Add-Migration -full
get-help Add-Migration -online

Adding a migration

Add-Migration -Name <migration name>

Removing the last migration

Update-Database -Migration <previous migration name>
Remove-Migration

Update the database

Update-Database

Update the database for a specific environment

Update-Database -Environment <environment name>

Update the database for a specific context

Update-Database -Context <context name>

Drop the database

Drop-Database

SQL Server Query Optimization – Sargable Functions

When a function is used as part of a SQL join or predicate, the function is sargable (Search ARGument ABLE) if the query optimizer is able to make efficient use of indexes to speed up the query. Not all Transact-SQL functions are sargable, so care must be taken to avoid non-sargable functions when creating queries.

In this blog I’m going to demonstrate some workarounds for non-sargable pitfalls.

Setup

Let’s create a table, load it with data (10,000 rows), and create some non-clustered indexes to target when optimizing our queries.

CREATE TABLE Product (
	ProductId int IDENTITY(1,1) NOT NULL,
	ProductName nvarchar(100) NOT NULL,
	ProductLine nchar(2) NULL,
	CreatedDT datetime,
	CONSTRAINT PK_Product PRIMARY KEY CLUSTERED
	(
		ProductId ASC
	)
) ON [PRIMARY];
GO

INSERT INTO Product (ProductName, ProductLine, CreatedDT)
VALUES		('Product 1', 'L1', DATEADD(year, -1, GETDATE())), (' Product 1 ', 'L1', DATEADD(year, -2, GETDATE())),
			('Product 2', 'L1', DATEADD(year, -3, GETDATE())), (' Product 2 ', 'L1', DATEADD(year, -4, GETDATE())),
			('Product 3', 'L2', DATEADD(year, -5, GETDATE())), (' Product 3 ', 'L2', DATEADD(year, -6, GETDATE())),
			('Product 4', 'L2', DATEADD(year, -7, GETDATE())), (' Product 4 ', 'L3', DATEADD(year, -8, GETDATE())),
			('Product 5', 'L4', DATEADD(year, -9, GETDATE())), (' Product 5 ', NULL, DATEADD(year, -10, GETDATE()));
GO 1000

CREATE NONCLUSTERED INDEX IX_Product_ProductName ON dbo.Product
(
	ProductName ASC
) INCLUDE (ProductId) ON [PRIMARY];

CREATE NONCLUSTERED INDEX IX_Product_CreatedDT ON dbo.Product
(
	CreatedDT ASC
) INCLUDE (ProductId, ProductName) ON [PRIMARY];

CREATE NONCLUSTERED INDEX IX_Product_ProductLine ON dbo.Product
(
	ProductLine ASC
) INCLUDE (ProductId, ProductName) ON [PRIMARY];

LTRIM and RTRIM

First off, we want to optimize a query that returns Product records with ProductName equal to “Product 1”. But note that a subset of the products is prefixed and suffixed with spaces, and we also want to return these.

Here’s how NOT to do it –

SELECT	ProductId, ProductName
FROM	Product
WHERE	LTRIM(RTRIM(ProductName)) = 'Product 1';

We are targeting the IX_Product_ProductName index, and the query plan shows that the optimizer has chosen to use this index.

[query1 – query plan screenshot]

However, it is scanning all 10,000 rows of the index in order to return the expected 2000. The LTRIM and RTRIM functions are not sargable.

If ProductName was only prefixed with spaces, and not suffixed, then we could optimize the query by using the LIKE operator.

SELECT	ProductId, ProductName
FROM	Product
WHERE	ProductName LIKE 'Product 1%';

But as we need to account for the suffixed spaces, this will not do for us.

To optimize the query, we add a computed column which uses the LTRIM and RTRIM functions to strip the spaces, and then create a non-clustered index on this column –

ALTER TABLE Product ADD ProductNameTrimmed AS LTRIM(RTRIM(ProductName));

CREATE NONCLUSTERED INDEX IX_Product_ProductNameTrimmed ON dbo.Product
(
	ProductNameTrimmed ASC
) INCLUDE (ProductId, ProductName) ON [PRIMARY];

If we now modify the query to use the computed column in the predicate –

SELECT	ProductId, ProductName
FROM	Product
WHERE	ProductNameTrimmed = 'Product 1';

The query optimizer is now able to perform a seek on the non-clustered index –

[query2 – query plan screenshot]

The number of logical reads has reduced from 50 to 16, and the execution time from 193ms to 67ms. And if the query returned a smaller subset of records, then the performance improvement would be even greater.

DateTime Functions

If we now query on a specific CreatedDT datetime –

SELECT	ProductID, ProductName, CreatedDT
FROM	Product
WHERE	CreatedDT = '2014-11-21 14:08:42.593';

The optimizer performs an efficient seek on the IX_Product_CreatedDT index.

However, if we want to return rows for a date, irrespective of time, we may try the following widely used query –

SELECT	ProductID, ProductName, CreatedDT
FROM	Product
WHERE	DATEADD(dd, 0, DATEDIFF(dd, 0, CreatedDT)) = '2014-11-21';

But the DATEADD and DATEDIFF functions are not sargable so the query optimizer is unable to perform a seek on the IX_Product_CreatedDT non-clustered index. Instead it scans the clustered index, reading all 10,000 rows.

Instead let’s try the following –

SELECT	ProductID, ProductName, CreatedDT
FROM	Product
WHERE	CONVERT(date, CreatedDT) = '2014-11-21';

The query optimizer is now able to perform the seek on the index.

[query3 – query plan screenshot]

The logical reads have now been reduced from 54 to 8. The CONVERT function in this case is sargable.

If we look at the details of the index seek, we see that the query optimizer is using the following seek predicate –

Seek Keys[1]: 
Start: [Sandbox].[dbo].[Product].CreatedDT > Scalar Operator([Expr1005]), 
End: [Sandbox].[dbo].[Product].CreatedDT < Scalar Operator([Expr1006])

So it is translating the CONVERT predicate into a greater than and less than query. This gives us a clue as to another method for enabling the optimizer to efficiently utilize the index –

SELECT	ProductID, ProductName, CreatedDT
FROM	Product
WHERE	CreatedDT >= '2014-11-21' AND CreatedDT < '2014-11-22';

Now let’s try and return all rows created in year 2014 –

SELECT	ProductID, ProductName, CreatedDT
FROM	Product
WHERE	YEAR(CreatedDT) = '2014';

Not surprisingly, the query optimizer scans all rows as the YEAR function is not sargable.

To enable the optimizer to use the IX_Product_CreatedDT index we can re-write the query as follows –

SELECT	ProductID, ProductName, CreatedDT
FROM	Product
WHERE	CreatedDT >= '2014-01-01' AND CreatedDT < '2015-01-01';

[query4 – query plan screenshot]

The number of logical reads is reduced from 54 to 8.

ISNULL and COALESCE

The following query returns all Product records with ProductLine equal to “L4”.

SELECT	ProductID, ProductName, ProductLine
FROM	Product
WHERE	ProductLine = 'L4';

The query optimizer performs an efficient seek on the IX_Product_ProductLine index.

[query5 – query plan screenshot]

The ProductLine column is nullable, and if we want to return the NULL rows as well we could use an ISNULL or COALESCE expression as follows –

SELECT	ProductID, ProductName, ProductLine
FROM	Product
WHERE	COALESCE(ProductLine, 'L4') = 'L4';

However, the optimizer now chooses a non-optimal scan on the index.

[query6 – query plan screenshot]

Neither the COALESCE function nor the ISNULL function is sargable. The fix is simply to change the predicate to use an OR .. IS NULL as shown below –

SELECT	ProductID, ProductName, ProductLine
FROM	Product
WHERE	ProductLine = 'L4' OR ProductLine IS NULL;

The optimizer now chooses an optimal seek on the index.

[query7 – query plan screenshot]

Rule of Thumb

Sargable functions in SQL Server are few and far between. As a rule of thumb, I’d recommend that SQL Server developers without a great deal of experience in optimizing queries err on the side of caution by not applying functions, or operators, to the columns used in joins or predicates. If there is an imperative to do so, then consider whether there are alternatives, such as computed columns.

AutoFixture Default Builders

AutoFixture takes the heavy lifting out of creating test fixtures. If you haven’t used it before and want an intro, then check out the wiki.

I’ve experienced its use on a number of projects, and would generally recommend it.

However, some care must be taken, as it generates some of the types in a constrained non-deterministic way. So if it is applied to non-anonymous variables that affect the outcome of the test, then the test may fail intermittently.

AutoFixture comes with a set of default builders encapsulating the rules for creating a variety of .NET CTS types. It also provides points of extensibility that allow you to customize the rules.

The library has been developed over a number of years, and the documentation on the rules is quite fragmented. So I thought I’d dig into the source code and document what I find, as much for my reference and benefit as yours.

Note. The following applies to the current version (v3.0) of AutoFixture.

Numbers

var fixture = new Ploeh.AutoFixture.Fixture();

Console.WriteLine("{0}, {1}, {2}",
                  fixture.Create<int>(),
                  fixture.Create<double>(),
                  fixture.Create<byte>());
79, 236, 44

The default builder for numbers is RandomNumericSequenceGenerator. It generates a sequence of unique numbers for the following types – Byte, Decimal, Double, Int16, Int32, Int64, SByte, Single, UInt16, UInt32, UInt64.

Unique numbers are generated randomly from the set [1, 255]. Once these are used up, they are then generated from the set [256, 65,535], and finally from the set [65,536, 2,147,483,647]. When all numbers within the final set have been used, AutoFixture will start again from the first set.

fixture.Customizations.Add(
    new RandomNumericSequenceGenerator(1, 10)
);

Console.WriteLine("{0}, {1}, {2}",
                  fixture.Create<int>(),
                  fixture.Create<int>(),
                  fixture.Create<int>());
10, 6, 8

RandomNumericSequenceGenerator has a constructor that takes a variable number of Int64 limits used to define the sets. There must be at least two limits, hence one set, and they must be specified in ascending numeric order.

By adding an instance of RandomNumericSequenceGenerator to the Fixture.Customizations collection, you can constrain the generated numbers by specifying custom limits; as shown above where the numbers are constrained between 1 and 10.
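More than two limits can be supplied. As a sketch of my reading of this constructor, the following defines two sets – [1, 10] and [11, 100] – with numbers drawn from the second set once the first is exhausted:

```csharp
var fixture = new Ploeh.AutoFixture.Fixture();

// Hypothetical limits: two sets, [1, 10] and then [11, 100]
fixture.Customizations.Add(
    new RandomNumericSequenceGenerator(1, 10, 100));

Console.WriteLine("{0}", fixture.Create<int>());
```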

fixture.Customizations.Add(
    new NumericSequenceGenerator()
);

Console.WriteLine("{0}, {1}, {2}",
                  fixture.Create<int>(),
                  fixture.Create<int>(),
                  fixture.Create<int>());
1, 2, 3

The RandomNumericSequenceGenerator was introduced in v3.0. Earlier versions used NumericSequenceGenerator, which generates numbers in sequence starting at 1. If you wish AutoFixture to generate numbers in a deterministic manner, then you can specify the usage of NumericSequenceGenerator by adding it to the Customizations collection.

Chars

Console.WriteLine("{0}, {1}, {2}",
                  fixture.Create<char>(),
                  fixture.Create<char>(),
                  fixture.Create<char>());
Y, /, A

The default builder for char is RandomCharSequenceGenerator, which generates random characters from the printable ASCII character set (“!” (33) to “~” (126)).

fixture.Customizations.Add(
    new CharSequenceGenerator()
);

Console.WriteLine("{0}, {1}, {2}",
                  fixture.Create<char>(),
                  fixture.Create<char>(),
                  fixture.Create<char>());
!, ", #

For a more deterministic approach the CharSequenceGenerator builder can be used to cycle through the printable ASCII character set.

Strings

Console.WriteLine("{0}", fixture.Create<string>());
 eee9e1b5-70a1-4b61-952d-c6fa4c14b166

StringGenerator is a builder that has a constructor accepting a delegate (Func<object>), which is invoked in order to generate strings.

StringGenerator is registered as the default builder for strings and is initialized with the following function – () => Guid.NewGuid(). It therefore generates unique Guids for strings.

Console.WriteLine("{0}", fixture.Create<string>().Substring(0, 10));
22ca485a-7

A Guid, including the hyphens, is 36 characters in length. If you require a string with fewer characters, and do not want to go to the lengths of creating and registering a custom string builder, then the easiest thing to do is apply the Substring method to the resultant string.

Console.WriteLine("{0}", string.Join(string.Empty, fixture.CreateMany<string>(5)));
edd22091-e048-4d22-a41c-0c50482f65d96d741962-9b9b-4431-9d17-6993986933cd1a3814e5-3989-4448-8ea2-5bc8de4d55744314904f-8884-4af4-8253-45c7f1dc11306b0e2212-bae9-44de-8754-5dd62ebcf2a0

Generating a longer string is a little more complex, but can be achieved by using the Fixture.CreateMany() method to generate a collection of strings, and then joining these together using String.Join.

using System.ComponentModel.DataAnnotations;
public class Product
{
    public int ProductId { get; set; }
   
    [StringLength(10)]
    public string Name { get; set; }
}

Console.WriteLine("{0}", fixture.Create<Product>().Name);
60ebc6bc-5

ConstrainedStringGenerator is registered as a default builder for strings, in addition to StringGenerator. The difference is that ConstrainedStringGenerator is used to constrain strings when they are annotated with the StringLength attribute. It generates strings by concatenating Guids, and then taking a Substring to ensure an exact length.

fixture.Register<string>(() => "Lorem ipsum dolor sit amet, consectetur adipiscing elit.");

Console.WriteLine("{0}", fixture.Create<string>());
Lorem ipsum dolor sit amet, consectetur adipiscing elit.

AutoFixture has a Register method, which allows the function used to generate instances of a specific type to be overridden. In the above example, the Register method overrides the default method used for generating strings with one that generates Lorem Ipsum text. I’m not sure how useful this is; it’s just an example.

One small caveat with overriding the string function as shown above: if you then try to generate a Uri, you will receive the following error – “Invalid URI: The hostname could not be parsed.” The Uri builder utilizes the string function to generate the hostname, and hostnames can only comprise a limited set of characters, which does not include spaces or commas.

DateTime

Console.WriteLine("{0}, {1}, {2}",
                  fixture.Create<DateTime>(),
                  fixture.Create<DateTime>(),
                  fixture.Create<DateTime>());
04/11/2016 00:59:14, 27/11/2017 10:33:06, 26/12/2015 04:33:45

RandomDateTimeSequenceGenerator is the default builder for DateTime objects. It has a constructor that takes a minimum and maximum DateTime, and it randomly generates a date between these limits. By default the limits are set to DateTime.Now.AddYears(-2) and DateTime.Now.AddYears(2).

fixture.Customizations.Add(
    new RandomDateTimeSequenceGenerator(new DateTime(2017, 1, 1), new DateTime(2017, 1, 31))
);

Console.WriteLine("{0}, {1}, {2}",
        fixture.Create<DateTime>(),
        fixture.Create<DateTime>(),
        fixture.Create<DateTime>());
06/01/2017 07:11:33, 26/01/2017 19:17:35, 01/01/2017 16:18:43

By adding the RandomDateTimeSequenceGenerator to the Customizations collection and specifying minimum and maximum DateTime limits you can customize the range from which DateTime values are generated. In the example above values are generated for the year 2017.

AutoFixture has two additional builders for DateTime: StrictlyMonotonicallyIncreasingDateTimeGenerator, which takes a seed DateTime as a constructor parameter and increments each subsequent generated DateTime by one day; and CurrentDateTimeGenerator, which returns DateTime.Now.
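If deterministic dates are preferred, the monotonic builder can be added in the same way as the other customizations. A minimal sketch, assuming the seed constructor described above (the seed date here is arbitrary):

```csharp
var fixture = new Ploeh.AutoFixture.Fixture();

// Replace the random DateTime builder with the day-incrementing one
fixture.Customizations.Add(
    new StrictlyMonotonicallyIncreasingDateTimeGenerator(new DateTime(2017, 1, 1)));

Console.WriteLine("{0}, {1}",
                  fixture.Create<DateTime>(),
                  fixture.Create<DateTime>());
```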

Uri

Console.WriteLine("{0}", fixture.Create<Uri>());

fixture.Inject(new UriScheme("ftp"));
Console.WriteLine("{0}", fixture.Create<Uri>());
http://0d171f49-5624-417f-a826-2866b166e225/
ftp://cf56a59a-5390-4c78-807a-7a4cd01b6eec/

UriGenerator is used to build instances of Uri, defaulting the scheme to “http”, with the authority generated by StringGenerator – hence a Guid.

The scheme can be customized by injecting an instance of UriScheme with the scheme passed to the constructor, as shown above for “ftp”.

The authority can be customized by registering a new string builder function. But as mentioned in the Strings section, the registered function should generate strings containing only characters compatible with URIs.
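As a sketch of a URI-safe string function – the “N” format specifier strips the hyphens from the Guid, leaving only hex digits, which are always valid in a hostname:

```csharp
var fixture = new Ploeh.AutoFixture.Fixture();

// Every generated string is now 32 hex digits, safe for use as a URI authority
fixture.Register<string>(() => Guid.NewGuid().ToString("N"));

Console.WriteLine("{0}", fixture.Create<Uri>());
```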

Regular Expressions

public class Contact
{
    [RegularExpression(@"^[2-9]\d{2}-\d{3}-\d{4}$")]
    public string Telephone { get; set; }
}

Console.WriteLine("{0}", fixture.Create<Contact>().Telephone);
601-000-1101

As mentioned in the Strings section, the ConstrainedStringGenerator builds strings of specific lengths where AutoFixture is used to generate a custom type that has a string property annotated with the StringLength attribute.

Similarly, the RegularExpressionGenerator builds strings annotated with a RegularExpression attribute.

Booleans

Console.WriteLine("{0}, {1}",
                  fixture.Create<bool>(),
                  fixture.Create<bool>());
True, False

BooleanSwitch is the default builder for Booleans, generating in a deterministic manner, alternating between True and False.

Guids

Console.WriteLine("{0}", fixture.Create<Guid>());
2fac899e-55a2-4161-a90f-ec7e7b87ca92

GuidGenerator returns Guid.NewGuid().

Delegates

public delegate bool Process(int orderId);

var func = fixture.Create<Process>();

Console.WriteLine("{0}, {1}", func.Method, func.Invoke(1));
Boolean lambda_method(System.Runtime.CompilerServices.Closure, Int32), True

DelegateGenerator is the default builder for delegates. It returns a delegate pointing to a dynamically generated method.

If the delegate has a return value, then the method will return a generated instance of that value when invoked.

MailAddress

Console.WriteLine("{0}", fixture.Create<MailAddress>());
"43b37eb4-48ee-49e2-bc8c-1bed70fc1796" <43b37eb4-48ee-49e2-bc8c-1bed70fc1796@example.com>

The MailAddressGenerator builder generates instances of MailAddress.

The user part of the address is generated using the registered string generator, which by default returns Guid.NewGuid(). As mentioned in the Uri section, be careful when registering a custom string generator function, as it must only return compatible characters – so no spaces or commas, for example.

The host part of the address is generated using the DomainNameGenerator builder, which randomly returns one of the following domains – example.com, example.net, example.org.

fixture.Register(() => new DomainName("acmecorp.com"));

Console.WriteLine("{0}", fixture.Create<MailAddress>());

"393afa04-be57-4275-afbf-7349a1ba54d3" <393afa04-be57-4275-afbf-7349a1ba54d3@acmecorp.com>

If you want to use a custom host, then you can register a custom function for generating DomainName instances, as shown above.

Entity Framework Core 1.0 Migrations in a Separate Class Library

At the beginning of last year I started blogging about ASP.NET Core 1.0 (née ASP.NET 5). But it was early days and things were far from stable with the product, so I soon came to the conclusion that being such an early adopter was more trouble than it was worth due to the number of bugs, missing features, and breaking changes. Roll on a year, and ASP.NET Core 1.0 along with Entity Framework Core 1.0 and the requisite Visual Studio 2015 tooling has finally been released. So time to start playing again.

I can’t say that I’ve done a huge amount with it yet, so can’t really comment on its stability. But I would assume that it’s about as stable as you would expect from a 1.0.0 version – good enough to work with, but challenging at times.

And it didn’t take me long to come across a challenge, which I’d like to share with you here.

In any but the smallest of solutions, it’s good practice to separate data concerns from domain and presentational concerns. In the case of ASP.NET MVC and Entity Framework, that typically involves placing the data context and data model classes into a separate class library.

If I do this and then attempt to add a migration, as follows –

PM> Add-Migration -Name "Initial" -Project "SandboxCore.Data"

I get an error telling me that the preview of EF does not support commands on class library projects –

Could not invoke this command on the startup project 'SandboxCore.Data'. 
This preview of Entity Framework tools does not support commands on class library projects in ASP.NET Core 
and .NET Core applications. See http://go.microsoft.com/fwlink/?LinkId=798221 for details and workarounds.

The link in the error message describes 2 workarounds. However, at the time of writing it is out-of-date, and the workarounds aren’t as simple as described.

So here’s what I believe to be the easiest solution to the problem, which is to create the data project, not as a class library, but as a console application.

1. Add a .NET Core console project to your solution to contain the data context and data model classes.

2. Add the Entity Framework Core provider, for your chosen database, as a project dependency to your data project. If targeting SQL Server, add Microsoft.EntityFrameworkCore.SqlServer.

3. Add the Entity Framework tools, Microsoft.EntityFrameworkCore.Tools, as both a project dependency and as a tool to your data project. Adding it as a project dependency ensures that it is resolved and downloaded from NuGet, whilst adding it as a tool ensures that the Entity Framework commands are accessible. Your data project’s project.json should now look like this –

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },

  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0"
    },
    "Microsoft.EntityFrameworkCore.SqlServer": "1.0.0",
    "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  },

  "tools": {
    "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
  }
}

4. Add data model classes to your data project. I like to add mine to a Models folder to keep the root of the project tidy.

using System.ComponentModel.DataAnnotations;

namespace SandboxCore.Data.Models
{
    public class Product
    {
        public int ProductId { get; set; }

        [StringLength(200)]
        public string ProductName { get; set; }
    }
}

5. And finally, add an implementation of DbContext to the data project. The implementation should have a publicly accessible DbSet<TEntity> for each data model class, and the OnConfiguring method should be overridden in order to configure the context to target your database of choice. This last part is necessary for migrations.

using Microsoft.EntityFrameworkCore;
using SandboxCore.Data.Models;

namespace SandboxCore.Data
{
    public class SandboxCoreDbContext : DbContext
    {
        public DbSet<Product> Products { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=SandboxCore;Trusted_Connection=True;");
        }
    }
}

6. You can now enable and add migrations using the following command.

PM> Add-Migration -Name "Initial" -Project "SandboxCore.Data" -StartupProject "SandboxCore.Data"
 

The -Project option specifies SandboxCore.Data as the target for migrations, whilst the -StartupProject option specifies SandboxCore.Data as the entry point for running the EF command.
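Once the migration has been added, the same pair of options applies to the other EF commands, such as applying the migration to the database –

```powershell
PM> Update-Database -Project "SandboxCore.Data" -StartupProject "SandboxCore.Data"
```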

 

AutoFixture

There are more ways than one to skin a cat, or if you are a late 19th century west country gentleman, there are more ways of killing a cat than choking it with cream.

And similarly there are more ways than one to unit test an application. Martin Fowler, who is much more intelligent and experienced than I, discusses this in his blog here. In particular, he talks about two schools of thought with regard to collaborator isolation. Classic-style testers are happy to test their units in collaboration with their dependencies, and will only isolate a unit from a dependency if need be. Whereas mockist-style testers use test doubles, and in particular mocks, to isolate their units from all dependencies. Jay Fields introduced the terminology, which I like very much, of sociable testing versus solitary testing for these schools of thought.

Like flavours of ice-cream, there’s no right or wrong. Martin classes himself as a classic-style tester, so it certainly can’t be wrong. However, if the community blogs and forums are a good indicator, I would say that the majority of testers are mockists. Or at least, the mockist-style is the current flavour of the month.

The aim of unit testing is to verify the behaviour of a unit, or SUT (System Under Test), ensuring that all relevant paths – happy, alternative, and error – are adequately covered. Mockists will typically isolate the SUT from its dependencies by creating test doubles using a mocking framework such as Moq, NSubstitute, or FakeItEasy.

Mocking frameworks greatly simplify unit testing by taking a lot of the leg work out of creating objects that simulate the behaviour of the dependencies. Mocking frameworks save a lot of lines of code, and additionally provide functionality to help verify the communication between the SUT and the dependency. But I’m not here to talk about mocking. I’m here to talk about how we can further simplify unit testing and save even more lines of code by using AutoFixture to help us construct inputs, or test fixtures, for our SUT.

AutoFixture is an open source library created by Mark Seemann. To quote the blurb on the AutoFixture GitHub repository –

“AutoFixture is an open source library for .NET designed to minimize the ‘Arrange’ phase of your unit tests in order to maximise maintainability. Its primary goal is to allow developers to focus on what is being tested rather than how to setup the test scenario, by making it easier to create object graphs containing test data.”

It takes the heavy lifting out of creating test fixtures, reducing the coding effort and lines of code required to create and maintain tests.

My SUT is FilmService, as shown below, and I want to test the happy path of the AddFilm method.

using Sandbox461.Models;
using System;

namespace Sandbox461.Domain
{
    public class FilmService
    {
        private Sandbox461DbContext db;

        public FilmService(Sandbox461DbContext db)
        {
            this.db = db;
        }

        public Film AddFilm(Film film)
        {
            if (film == null)
            {
                throw new ArgumentNullException();
            }

            db.Films.Add(film);
            db.SaveChanges();

            return film;
        }
    }
}

As mockists the first thing we do is create mocks for all classes that our SUT depends on. In this case, an instance of Sandbox461DbContext is injected into FilmService via the constructor. FilmService therefore depends on Sandbox461DbContext.

The happy path of the AddFilm method collaborates with Sandbox461DbContext in 2 ways. Firstly, the Add method of the Films property – which is of type DbSet<Film> – is called. And secondly, the SaveChanges method of Sandbox461DbContext is called. In order to verify that the SUT calls both these methods, we need to create mocks for both the Sandbox461DbContext class and the Films property, and we need to set up the mock for Sandbox461DbContext to return the mock for the Films property.

We then create an instance of FilmService. However, the AddFilm method requires an instance of Film, so in this case we manually create that instance and set the required properties to appropriate values. The AddFilm method is then called.

The test verifies that the Add method of the Films mock is called once – hence the instance of film is added to the Films set. It then verifies that the SaveChanges method of the Sandbox461DbContext mock is called once. And finally the test asserts that the AddFilm method returns an instance of Film.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using System.Data.Entity;
using Sandbox461.Models;
using Sandbox461.Domain;
using System.Collections.Generic;

namespace Sandbox461Tests.Domain
{
    [TestClass]
    public class FilmServerTests
    {
        [TestMethod]
        public void AddFilm_HappyPath()
        {
            // Arrange
            var mockFilmsSet = new Mock<DbSet<Film>>();
            var mockContext = new Mock<Sandbox461DbContext>();
            mockContext.Setup(m => m.Films).Returns(mockFilmsSet.Object);

            var sut = new FilmService(mockContext.Object);

            var film = new Film
            {
                FilmId = 0,
                Title = "Test Title",
                Actors = new List<FilmMaker>()
                {
                    new FilmMaker() { FilmMakerId = 1, FirstName = "Billy Bob", LastName = "Thornton" },
                    new FilmMaker() { FilmMakerId = 1, FirstName = "Martin", LastName = "Freeman" },
                    new FilmMaker() { FilmMakerId = 1, FirstName = "Ross", LastName = "Whitehead" }
                },
                Directors = new List<FilmMaker>()
                {
                    new FilmMaker() { FilmMakerId = 1, FirstName = "Randall", LastName = "Einhorn" },
                    new FilmMaker() { FilmMakerId = 1, FirstName = "Michael", LastName = "Uppendahl" },
                    new FilmMaker() { FilmMakerId = 1, FirstName = "Adam", LastName = "Bernstein" }
                }
            };

            // Act
            var result = sut.AddFilm(film);

            // Assert
            mockFilmsSet.Verify(m => m.Add(It.IsAny<Film>()), Times.Once());
            mockContext.Verify(m => m.SaveChanges(), Times.Once());

            Assert.IsInstanceOfType(result, typeof(Film));
        }
    }
}

The instance of Film takes quite a bit of effort, and quite a few lines of code, to set up. And as we develop the application, Film will most likely be extended to include additional properties, such as StoryLine, Genres, Certificate, and Budget, thus becoming quite complex and requiring many lines of code to manually create the instance.

You may be questioning why I created 3 instances of FilmMaker in each of the Actors and Directors collections, when they are not required for this test. The reason is that when I introduce AutoFixture to the test, this is exactly what AutoFixture will generate. So I wanted a like-for-like comparison.

Another issue with manually creating our test fixtures is that every time we modify the classes, possibly adding required properties, then we have to modify our tests. It would be nice if we could avoid this overhead.

So how can we reduce the effort required to set up and maintain our test fixtures? This is the cue for AutoFixture to enter, stage left…

I’ve refactored the test to use AutoFixture, by replacing the explicit creation of the Film object with the following 2 lines of code –

var fixture = new Fixture();
var film = fixture.Create<Film>();

The first line of code creates an instance of the Ploeh.AutoFixture.Fixture class. And the second line uses the Create method of the Fixture instance to generate a “test fixture” of type Film.

If we take a look at the object graph for film, as shown below, we can observe the following –

  • AutoFixture populates all publicly settable properties.
  • Any properties that have private setters are not populated, unless they are set via the constructor, in which case AutoFixture will pass appropriate values through the constructor.
  • AutoFixture uses Constrained Non-Determinism to populate the values. Basically, it uses algorithms to generate non-deterministic values that are likely to stay away from any of our tests’ boundary conditions. The reason Mark states for using non-deterministic values is that it enforces good discipline by making it impossible to hard-code expected values in our tests. The algorithms can, however, be customized if need be.
  • For strings, the algorithm generates a new Guid converted to a string.
  • Numbers are random, and for integers the algorithm generates random numbers within the range of [1, 255] for the first 255 numbers, and then [256, 65,535] for the remaining.
  • DateTimes are random, constrained to the past 2 years and the next 2 years.
  • Chars are also random.
  • For collections, such as List<T>, AutoFixture will populate the collection with 3 entries, because 3 is “many”. The number of entries can be changed by setting the Fixture.RepeatCount property.

[Screenshot: object graph of the generated film instance]
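AutoFixture itself is a .NET library, but the generation rules above are easy to sketch in a few lines of JavaScript. The helpers below are purely illustrative, not AutoFixture APIs – they just mimic the string, integer, and collection rules described above:

```javascript
// A minimal sketch of constrained non-determinism, mimicking the rules
// described above. These helpers are illustrative, not AutoFixture APIs.

// Strings: a fresh GUID-style value per call, so tests cannot
// hard-code an expected string.
function createString() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, c => {
    const r = Math.floor(Math.random() * 16);
    const v = c === 'x' ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}

// Integers: random within a small range first, widening later,
// keeping values away from typical boundary conditions (0, negatives).
function createInt(callCount) {
  return callCount < 255
    ? 1 + Math.floor(Math.random() * 255)              // [1, 255]
    : 256 + Math.floor(Math.random() * (32767 - 255)); // [256, 32767]
}

// Collections: "3 is many" – repeat a generator three times by default.
function createMany(generator, repeatCount = 3) {
  return Array.from({ length: repeatCount }, generator);
}
```

The key property is that each call produces a fresh, unpredictable value that stays away from common boundary conditions, so tests cannot cheat by hard-coding expectations.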

The revised test is as follows –

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using System.Data.Entity;
using Sandbox461.Models;
using Sandbox461.Domain;
using Ploeh.AutoFixture;
using System.Collections.Generic;
using Ploeh.AutoFixture.AutoMoq;

namespace Sandbox461Tests.Domain
{
    [TestClass]
    public class FilmServiceTests
    {
        [TestMethod]
        public void AddFilm_HappyPath_WithAutoFixture()
        {
            // Arrange
            var mockFilmsSet = new Mock<DbSet<Film>>();
            var mockContext = new Mock<Sandbox461DbContext>();
            mockContext.Setup(m => m.Films).Returns(mockFilmsSet.Object);

            var sut = new FilmService(mockContext.Object);

            var fixture = new Fixture();
            var film = fixture.Create<Film>();

            // Act
            var result = sut.AddFilm(film);

            // Assert
            mockFilmsSet.Verify(m => m.Add(It.IsAny<Film>()), Times.Once());
            mockContext.Verify(m => m.SaveChanges(), Times.Once());

            Assert.IsInstanceOfType(result, typeof(Film));
        }
    }
}

In summary, if you’re a mockist then you want to mock the dependencies, create test fixtures for the inputs, and then verify the behaviour and outputs for all paths in your SUT. A mocking framework will simplify the mocking of dependencies, whilst AutoFixture will simplify the creation of test fixtures. You will write less code, and your code will be durable, as you will most likely not have to modify test code in response to modifications to the test fixture classes.

For more information on AutoFixture check out Mark Seemann’s blog. And consider buying him a cup of coffee. It’s a worthy read.

Also, check out the AutoFixture GitHub repository.

And install AutoFixture from Nuget.

Make sure your _references.js references are in the correct order!

I’m messing about with angular at the moment, and I wasted an hour tonight trying to work out why javascript intellisense was not playing ball in Visual Studio 2015.

Mads Kristensen has written some great articles on javascript intellisense – here and here. He details the whys and wherefores, and in particular how the _references.js file controls which javascript files are included in javascript intellisense. I’m not going to detail how it works here, so check out Mads’ articles if you want to know more.

A typical _references.js file, and the one I was having problems with, looks like this –

/// <autosync enabled="true" />
/// <reference path="../js/app.js" />
/// <reference path="../js/project.js" />
/// <reference path="angular.min.js" />
/// <reference path="angular-animate.min.js" />
/// <reference path="angular-aria.min.js" />
/// <reference path="angular-cookies.min.js" />
/// <reference path="angular-loader.min.js" />
/// <reference path="angular-message-format.min.js" />
/// <reference path="angular-messages.min.js" />
/// <reference path="angular-mocks.js" />
/// <reference path="angular-resource.min.js" />
/// <reference path="angular-route.min.js" />
/// <reference path="angular-sanitize.min.js" />
/// <reference path="angular-scenario.js" />
/// <reference path="angular-touch.min.js" />
/// <reference path="bootstrap.js" />

Topping the file is the autosync directive which tells VS to automatically add, remove, and update references for all javascript files in the project. This is an awesome feature.

Having added the ../js/app.js and ../js/project.js files to my project, autosync added the references towards the top of the _references.js file, as shown above.

My app.js file simply defines an angular module, and as you can see below, intellisense is available for the angular object, which is defined in the angular.min.js file.

[Screenshot: intellisense working in app.js]

However, when editing my project.js file, which extends the app module, there was no intellisense available for any of the angular objects.

[Screenshot: no intellisense available in project.js]

This is where I spent a solid hour removing and adding references, trying out different intellisense configuration options, and basically going around in circles.

I finally tried reordering the references so that the angular files were referenced before my files.

/// <autosync enabled="true" />
/// <reference path="angular.min.js" />
/// <reference path="angular-animate.min.js" />
/// <reference path="angular-aria.min.js" />
/// <reference path="angular-cookies.min.js" />
/// <reference path="angular-loader.min.js" />
/// <reference path="angular-message-format.min.js" />
/// <reference path="angular-messages.min.js" />
/// <reference path="angular-mocks.js" />
/// <reference path="angular-resource.min.js" />
/// <reference path="angular-route.min.js" />
/// <reference path="angular-sanitize.min.js" />
/// <reference path="angular-scenario.js" />
/// <reference path="angular-touch.min.js" />
/// <reference path="bootstrap.js" />
/// <reference path="../js/app.js" />
/// <reference path="../js/project.js" />

And lo and behold, javascript intellisense now works. Doh!

[Screenshot: intellisense now working in project.js]

To be exact, I’m using Visual Studio Community 2015 RTM, version 14.0.23107. I haven’t tried this out with any other version of Visual Studio. So it may simply be one of a number of teething problems with 2015 which will be ironed out. Let’s hope so. But if you experience this in another version, then you know what to do.


CDNs and Fallbacks for Angular Modules

I’ve posted before about CDNs and fallbacks in the context of ASP.NET 5 and MVC 6. I discussed the whys and wherefores, so please read the aforementioned post if you want to understand the details of why CDNs and fallbacks are recommended practice when consuming javascript resources in your web applications.

Bottom line is that consuming javascript libraries from CDNs improves performance both on the web server and on the client browsers. But CDNs don’t come with SLAs and so there is no guarantee that the CDN will be operational when a client requests a resource. We therefore need to include a fallback position which directs the client to request the resource from our own servers when the CDN has failed.

Basic Fallback Test

<!-- Load AngularJS from Google CDN -->
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.15/angular.min.js"></script>
<!-- Fallback to local file -->
<script>window.angular || document.write('<script src="scripts/angular.min.js">\x3C/script>')</script> 

The above script attempts to load the angular.min.js library from the Google CDNs.

It then executes a simple OR statement which checks to see if the window.angular object resolves to true or false.

True indicates that the object exists and the library was successfully loaded from the Google CDN. The OR statement is short-circuited and all is well.

False indicates that the library failed to load. The expression on the second half of the OR statement is evaluated, and a script element is written to the document object, directing the browser to request the library from our servers.

Note. All javascript values can be evaluated in a boolean context, and will evaluate either to true (truthy values) or to false (falsy values). Falsy values include 0, null, undefined, false, “”, and NaN. All other values are truthy.
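This is easy to verify for yourself – the snippet below checks each of the falsy values listed above, plus a few values that look empty but are in fact truthy:

```javascript
// The six falsy values: everything else in JavaScript is truthy.
const falsyValues = [0, null, undefined, false, '', NaN];
console.log(falsyValues.every(v => !v));   // true – all coerce to false

// Values that look "empty" but are truthy, including empty arrays and objects.
const truthyValues = ['0', 'false', [], {}, -1];
console.log(truthyValues.every(v => !!v)); // true – all coerce to true
```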

In short, the fallback position should check for the existence of a globally scoped object specific to the library in question. And if the object does not exist, then direct the browser to request the library from our servers.
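The pattern generalises nicely. The helper below is a hypothetical sketch, not part of any library – it separates the truthiness test from the document.write side effect by returning the fallback script tag as a string, which also makes the logic easy to exercise outside a browser:

```javascript
// Hypothetical helper: given the result of the library's existence test,
// return the fallback <script> tag to write, or an empty string if the
// CDN load succeeded. (A real page would pass the result to document.write.)
function fallbackTag(testPassed, localSrc) {
  return testPassed ? '' : '<script src="' + localSrc + '">\x3C/script>';
}

// Simulate a failed CDN load: the library's global object is missing.
console.log(fallbackTag(undefined, 'scripts/angular.min.js'));
// -> <script src="scripts/angular.min.js"></script>

// Simulate a successful load: the global exists, so nothing is written.
console.log(fallbackTag({ version: '1.3.15' }, 'scripts/angular.min.js'));
// -> (empty string)
```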

But what if the library does not define any globally scoped objects, and instead only extends objects from another library that it depends on? This is the case with many of the angular UI libraries, which extend the window.angular object by injecting modules. In these cases we need to write a javascript statement that checks for the existence of the modules and evaluates to true.

Complex Fallback Test

<!-- Load Angular Bootstrap UI from Google CDN -->
<script src="//cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.3/ui-bootstrap.min.js"></script>
<!-- Fallback to local file -->
<script>
    (function () {
        try {
            window.angular.module('ui.bootstrap');
        }
        catch (e) {
            return false;
        }
        return true;
    })() || document.write('<script src="scripts/ui-bootstrap.min.js">\x3C/script>')
</script>

In the example above, the fallback position uses a self-invoking anonymous function to check whether the ui.bootstrap module has been injected into the window.angular.module collection. The function incorporates a try/catch block, as attempting to reference a module that does not exist results in an error. The catch block returns false if the module does not exist; otherwise the function returns true.
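To see why the try/catch is needed, here is a small simulation. The makeFakeAngular factory below is a stand-in for the real library (angular.module throws when asked for a module that was never registered), and moduleLoaded is the fallback test from above, extracted into a named function:

```javascript
// Minimal stand-in for angular's module registry: module(name) throws
// if the module was never registered, just like the real library.
function makeFakeAngular(registered) {
  return {
    module(name) {
      if (!registered.includes(name)) {
        throw new Error("Module '" + name + "' is not available!");
      }
      return { name };
    }
  };
}

// The fallback test from above, parameterised over the angular object.
function moduleLoaded(angular, name) {
  try {
    angular.module(name);
  } catch (e) {
    return false; // referencing a missing module throws – treat as "not loaded"
  }
  return true;
}

const angular = makeFakeAngular(['ui.bootstrap']);
console.log(moduleLoaded(angular, 'ui.bootstrap')); // true – CDN load succeeded
console.log(moduleLoaded(angular, 'ngTouch'));      // false – fall back to local file
```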

ASP.NET MVC 5 Fallback Test

When registering bundles for minification we can specify a CDN path and fallback position.

The ScriptBundle constructor takes 2 arguments, the second of which we can use to register a CDN path. In addition, we can assign a fallback expression string to the CdnFallbackExpression property – which I have done in the examples below using object initializer syntax.

If we then set the bundles.UseCdn property to true, and ensure that the web.config compilation debug flag is set to false, the CDN paths will be used to serve up the script libraries, and the fallback expressions will provide the fallback position.

using System.Web;
using System.Web.Optimization;

namespace MVC5TestApp
{
    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(
                new ScriptBundle(
                    "~/bundles/angular",
                    "//cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.3/angular.min.js") { CdnFallbackExpression = "window.angular" }
                    .Include("~/Scripts/angular.js"));

            bundles.Add(
                new ScriptBundle(
                    "~/bundles/angular-ui-bootstrap",
                    "//cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.3/ui-bootstrap.min.js")
                        {
                            CdnFallbackExpression = @"   
                                (function () {
                                    try {
                                        window.angular.module('ui.bootstrap');
                                    }
                                    catch (e) {
                                        return false;
                                    }
                                    return true;
                                })()"
                        }
                    .Include("~/Scripts/angular-ui/ui-bootstrap.js"));

            bundles.UseCdn = true;
        }
    }
}

ASP.NET MVC 6 Fallback Test

The Environment and Script TagHelper classes and attributes were introduced with MVC 6. They enable us to specify different sources for scripts, and fallback tests, based on the execution environment. For more details follow the link at the top of this post.

<environment names="Development">
    <script src="~/lib/angular/angular.js"></script>
    <script src="~/lib/angular-bootstrap/ui-bootstrap.js"></script>
</environment>
<environment names="Staging,Production">
    <script src="//cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.3/angular.min.js"
            asp-fallback-src="~/lib/angular/angular.min.js"
            asp-fallback-test="window.angular">
    </script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.3/ui-bootstrap.min.js"
            asp-fallback-src="~/lib/angular-bootstrap/ui-bootstrap.min.js"
            asp-fallback-test="
                (function() {
                    try {
                        window.angular.module('ui.bootstrap');
                    } catch(e) {
                        return false;
                    }
                    return true;
                })()">
    </script>
</environment>