Monday, January 2, 2017

First steps with SpecFlow and Selenium

I've always been a unit-testing guy, but reading Growing Object-Oriented Software, Guided by Tests really brought home the role an outer structure of acceptance tests can play in an iterative development process.
The core of the book focuses on building an "auction sniper" (an automated bidding tool), and the very first step (a Sprint Zero task) is to set up an acceptance test harness that can simulate connecting to an auction and show that the sniper lost. Just displaying "Lost Auction" in a label is enough to get the test to pass, and the rest of the book builds functionality up around that. This made a strong impression on me. My own practice has been to start the development process with controller tests and not have anything that directly interacts with a browser, but this book got me wondering what a browser-test exoskeleton would look like.

Since I don't develop with Java and Swing, the tooling in Freeman & Pryce's book wasn't applicable, but SpecFlow (the C# Given/When/Then test generator) and the browser automation tool Selenium seemed like a logical combination. Since I've been hacking a lot on the Sitecore Instance Manager (SIM) recently, I thought this would be a logical place to try out some Selenium BDD (behavior-driven development, the technical term for the Given/When/Then style). I've created a branch on my SIM fork called "selenium-bdd" where you can follow my progress. Since it's all NuGet driven, you should be able to install SpecFlow and try this out yourself.

To get started, you need to install the Visual Studio extension "SpecFlow for Visual Studio 2015". I also added the NuGet package SpecRun.SpecFlow (or SpecFlow.NUnit; see the update at the bottom). With these tools in place, you will be able to add "SpecFlow Feature" files through the Add New Item right-click option:



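Since it's all NuGet, the Package Manager Console route works too; the installs look something like this (package names as of this writing; the Selenium packages come up again below):

Install-Package SpecRun.SpecFlow
Install-Package Selenium.WebDriver
Install-Package Chromium.ChromeDriver
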
The new feature file allows you to specify a story and a number of Given, When, Then statements to describe it precisely. The Given clauses state preconditions, the When clauses an action, and the Then clauses the expected result. You can have multiple clauses of each type, or use And or But, which are equivalent. Here is the Feature file I ended up with to describe the SIM "install" command:

Feature: SIM Command Line
 In order to work better with Sitecore
 As a developer
 I want a command line to work with Sitecore instances

@SIMCMD
Scenario: Create instance
 Given No Sitecore instance named 'TestExample' exists
 When I create 'TestExample' with the command tool
 Then I can navigate to 'TestExample'
 Then I see the Sitecore Welcome page
 Then Delete 'TestExample'


The header information listing the feature and the story is just documentation. The real action is in the Given/When/Then steps, from which SpecFlow automatically generates a Feature.cs code-behind file that allows the Visual Studio test runner to treat this as a test. Incidentally, putting the word 'TestExample' in single quotes helped the tooling understand that this was a parameterized value.

The final piece of the puzzle is to create C# meanings for all of these rules. You can right-click on the above file and select "Generate Step Definitions", which brings up a little wizard to create these rules and write them to a C# file; these are called "rule bindings". (I will refer you to the SpecFlow Getting Started guide for this: http://specflow.org/getting-started/)
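
Until you implement them, the generated stubs simply mark the scenario as pending; the stub for the delete step, for instance, comes out roughly like this (exact generated formatting may vary by SpecFlow version):

[Then(@"Delete '(.*)'")]
public void ThenDelete(string siteName)
{
    ScenarioContext.Current.Pending();
}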

There are a couple of nice details here. If you modify or add a rule (e.g. change "Delete 'TestExample'" to "Remove 'TestExample'"), it will show up in purple, indicating that the rule doesn't exist in the rule bindings .cs file. In this case, you will probably want to use "Copy Rule to clipboard", so that you don't overwrite the rule bindings you've already written.

The power of this technique is that you consolidate conditions like "Given a user has logged on" into a single place, allowing business users to read, and perhaps even write, the tests. If something changes, like the process of logging on, or the way to verify that an item is in a cart, you only need to make the change in one place: the rule binding for the affected rule. This allows you to have a large number of tests with a finite and maintainable set of bindings.
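
To make that concrete, here is a hypothetical shared binding (the URL and the LogOnAsAdmin helper are invented for illustration); every scenario that begins with this Given clause reuses this one implementation:

[Given(@"a user has logged on")]
public void GivenAUserHasLoggedOn()
{
    // If the login flow changes, this is the only place to update.
    driver.Navigate().GoToUrl("http://testexample/sitecore/login");
    LogOnAsAdmin();  // invented helper for illustration
}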

Since this scenario involved checking the existence of a web page, I decided to use the Selenium WebDriver to implement the bindings. Selenium can automate all the major browsers, but for my purposes Chrome was sufficient; this also required installing the Chromium.ChromeDriver NuGet package, which downloads the ChromeDriver.exe file. Selenium can be thought of as a shim layer that provides a consistent developer API across all the browsers.
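
That shim is the IWebDriver interface: every driver implements it, so swapping browsers is a one-line change. A minimal sketch (the Firefox line assumes the corresponding driver package is installed):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Every Selenium driver implements IWebDriver, so the test code is browser-agnostic.
IWebDriver driver = new ChromeDriver();      // needs ChromeDriver.exe alongside the test binaries
// IWebDriver driver = new FirefoxDriver();  // same calls, different browser
driver.Navigate().GoToUrl("http://testexample/");
driver.Quit();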

This is the bindings file I ended up with:
using System;
using System.Diagnostics;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;
 
namespace SIM.Specs
{
    [Binding]
    public class CommandLineSteps : IDisposable
    {
        // One Chrome session for the scenario; disposed below so ChromeDriver.exe doesn't linger.
        ChromeDriver driver = new ChromeDriver();
        public void Dispose()
        {
            if (driver != null)
            {
                driver.Dispose();
                driver = null;
            }
        }
 
        [Given(@"No Sitecore instance named '(.*)' exists")]
        public void GivenNoSitecoreInstanceNamedExists(string siteName)
        {
            ThenDelete(siteName);
            Assert.IsFalse(SiteFound(siteName));
        }
 
        [When(@"I create '(.*)' with the command tool")]
        public void WhenICreateWithTheCommandTool(string siteName)
        {
            RunSimCommand($"install --name {siteName}");
        }
 
        [Then(@"I can navigate to '(.*)'")]
        public void ThenICanNavigateTo(string siteName)
        {
            Assert.IsTrue(SiteFound(siteName));
        }
 
        [Then(@"I see the Sitecore Welcome page")]
        public void ThenISeeTheSitecoreWelcomePage()
        {
            IWebElement element = driver.FindElement(By.TagName("h1"));
            Assert.AreEqual("Sitecore Experience Platform", element.Text);
        }
 
        [Then(@"Delete '(.*)'")]
        public void ThenDelete(string siteName)
        {
            RunSimCommand($"delete --name {siteName}");
            Assert.IsFalse(SiteFound(siteName));
        }
 
        #region Private Methods
        private bool SiteFound(string siteName)
        {
            driver.Navigate().GoToUrl($"http://{siteName}/");
            bool nameNotResolved = driver.PageSource.Contains("ERR_NAME_NOT_RESOLVED");

            // HACK There is a moment in the test execution where IIS handles the page not found, rather than Chrome.
            bool iisPage = driver.FindElementsByTagName("a").Any(
                e => (e.GetAttribute("href") ?? "").Contains("go.microsoft.com/fwlink"));

            return !nameNotResolved && !iisPage;
        }
 
        private static void RunSimCommand(string arguments)
        {
            // Shell out to SIM.exe so the test exercises the real command-line entry point.
            Process p = new Process();
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardOutput = true;
            p.StartInfo.FileName = $@"{Environment.CurrentDirectory}\..\Sim.Client\bin\SIM.exe";
            p.StartInfo.Arguments = arguments;
            p.Start();
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            p.Close();
            Console.WriteLine("Command output:");
            Console.WriteLine(output);
        }
        #endregion
 
    }
}
As you can see, I create a ChromeDriver object and make sure it is disposed in my Dispose method; omitting this step will leave a lot of ChromeDriver.exe processes locking your files, as I learned the hard way.
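
Incidentally, SpecFlow hooks offer another way to guarantee that cleanup; a minimal sketch, assuming one driver per binding class:

[AfterScenario]
public void CloseBrowser()
{
    // Quit() ends the Selenium session and shuts down the ChromeDriver.exe process.
    driver?.Quit();
    driver = null;
}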

Next you see the auto-generated attributes and method names, which provide the implementation for each rule. These are pretty straightforward. The only points worth noting are:
  • The actual navigation happens in the "SiteFound" method. (I'm losing some Command Query Separation karma here: clearly this is a query, but it also has a side effect. Something to refactor...)
  • To check if a site is found, I look for Chrome's "ERR_NAME_NOT_RESOLVED" message (there is no status code to capture because, without a DNS entry, Chrome can't send a request). There is a brief moment in the test where it hits the IIS home page, which I identify with the truly horrendous hack of looking for the go.microsoft.com link (see the wait sketch after this list for a cleaner option).
  • I chose to run the SIM commands through a command-line shell, rather than directly through the SIM command classes, to better capture the full end-to-end nature of the action. Imagine if a parameter property were not properly bound to the command line; a test that called the command class directly would miss that. Plus, this way the tests document the command syntax.
    • I just had an idea: I could put the command syntax in the WHEN clause: WHEN I pass 'install --name TestInstance' to SIM. That would surface the command syntax directly into the acceptance test. I like that.
  • To check that the site is truly loaded, I use Selenium magic to read an H1 tag's value. This was starting to really feel like the examples in Freeman & Pryce's book.
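
As a possible refactoring of that timing hack, Selenium's explicit waits (from the Selenium.Support NuGet package) can poll until the page I actually want has arrived; a sketch, assuming the welcome page's H1 is the signal:

using System;
using System.Linq;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

// Poll for up to 30 seconds until the welcome page's H1 appears,
// rather than sampling the page at one arbitrary moment.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
wait.Until(d => d.FindElements(By.TagName("h1"))
                 .Any(e => e.Text == "Sitecore Experience Platform"));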
A few additional things to note. First, I ran this through Visual Studio's test runner, which here is a premium feature: SpecFlow itself is free, but the SpecFlow+ runner is 159 GBP. Not cheap, and I haven't purchased it. The evaluation version currently adds a six-second delay and asks you to pay for the product. I saw no indication that the evaluation period is limited, but I still want to explore other ways of running these tests. I'll update this post (done! see below) if I find any reasonable alternatives; of course, suggestions in the comments are welcome. To be clear, the tooling to create the tests is free; only the feature to run them through Visual Studio's test runner is (theoretically) not.

Second, these tests generate a number of outputs:

The Test Explorer view:


The "Output" view (which you get to from a link from the above view, not to be confused with Visual Studio's normal output window:


The Visual Studio Output window, important because it shows links to the HTML and Log reports:


An HTML report (stamped with "This is an evaluation copy" verbiage in red):


A log file:


It seems clear to me that the HTML output, combined with a continuous integration deployment process, could provide a detailed benchmark of which features have been implemented and which have not, giving "burn-down"-like visibility into a team's progress. This seems pretty powerful to me.

Update: To run the tests through a test runner like NCrunch, instead of installing the package SpecRun.SpecFlow, use the package SpecFlow.NUnit or SpecFlow.xUnit. This changes the auto-generated C# code-behind for the .feature file, so that the scenarios are visible to a normal NUnit (or xUnit) test runner. The only differences are that you don't get the above reports, and you can no longer put breakpoints directly in your .feature file. However, the console output of the test shows the Given/When/Then steps and the duration of each, so it works well enough. Here is the output, for example, from ReSharper's test runner:


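Incidentally, these packages work by switching SpecFlow's code-generation target; the NuGet install makes the change for you, but the relevant App.config setting looks something like this (a sketch from memory of the SpecFlow 2.x configuration):

<specFlow>
  <unitTestProvider name="NUnit" />
</specFlow>
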
1 comment:

  1. SpecFlow.NUnit, SpecFlow.xUnit or SpecFlow.MsTest also work with the Visual Studio Test Runner; you just need to add the necessary integration package to be able to see NUnit/xUnit tests in the Test Explorer window. In the case of SpecFlow.MsTest, since MsTest is natively supported by VS, it works out of the box, and all free.
