Setting up server-to-server trust relationships in Ansible

At my new(ish) job, I’ve taken on a DevOps role in addition to development. We’ve been working with the excellent configuration management tool Ansible to configure and maintain our servers.

One of the specific tasks I was working on was to set up backup and replication from our primary database server to a remote recovery server. This required setting up known_hosts entries and authorized keys on each server so they could talk to each other over SSH without a password in either direction.

A lot of the other documentation and tutorials on Ansible didn’t really have a great way to do this. The most common approach I saw was people putting the private keys for servers into ansible-vault, which required pre-generating a key for each server locally, or using a shared key for all servers. I figured there had to be a way to generate the private keys on the servers themselves, and then copy the public keys between them.

I did manage to create a solution I’m pretty happy with, though it required thinking about Ansible a little differently than a lot of the tutorials and documentation encourage. For the most part, Ansible best practices encourage you to think about your configuration goals in terms of roles. Playbooks are almost always shown as simple compositions of roles, and rarely have a significant number of tasks of their own. This is fine when you’re thinking about configuration tasks that impact a single server in isolation, but it doesn’t work so well when you’re working with tasks that have to touch multiple servers, like copying public keys between two boxes.

Fortunately, there are a number of other ways to organize Ansible’s work besides the typical example of one file, one play, multiple roles. For starters, you can actually have multiple plays in a single YAML file. When you run ansible-playbook against a file like that, it will run each play successively. That allowed me to do something like this:

  • Play 1: Gather information about all the servers
  • Play 2: Generate ssh keys on the first group of servers and then fetch the public key for each
  • Play 3: Generate ssh keys on the second group of servers and then fetch the public key for each
  • Play 4: Add the fetched public keys to the first group of servers
  • Play 5: Add the fetched public keys to the second group of servers

This has a couple of key advantages over the other methods I’ve seen.

1. Each server gets a unique private key, for better control of your infrastructure. Single servers can be removed from the trust relationship without affecting any others.

2. Each unique private key never leaves the server it was generated on. There’s no risk of leaking a key in source control, or of storing it in an insecure location.

We’ll take a look at each of those steps in isolation and then look at the playbook as a whole.

Play 1

Play 1 is a simple dummy play that ensures we gather facts about all the servers involved first. That way we have everything we need for known_hosts entries. It targets an inventory group named db, which should contain every database server involved in the trust relationships you are trying to set up. The only task it has will never run.

- hosts: db
  name: gather facts about all dbs
  tasks:
    - fail: msg=""
      when: false

Play 2

Play 2 is where I create the keys for all our replica database servers. We create the keys simply through a standard user task. The magic, though, is that we subsequently use the fetch module to download a copy of the public key and store it in a temporary directory. This will be important later.

The second important piece is that we add the primary as a known host. There are two steps to this. First we use ssh-keygen to check if the primary is already a known host. If it’s not, then we use ssh-keyscan to read the key and add it to the known_hosts file. In my case, we’re dealing with a single server representing the primary, so I can store that server in the variable postgresql_primary. However, if you’re working with multiple machines with dynamic IP addresses, you can also use a loop with the built-in groups dictionary to loop through a group containing all the servers that need to be added as known hosts (see the sketch after this play).

- name: create keys for replicas
  hosts: replicas
  user: "{{ config_user }}"
  sudo: yes
  sudo_user: postgres

  tasks:
    - name: Generating postgres user and ssh key
      user: name=postgres group=postgres generate_ssh_key=yes
      sudo: yes
      sudo_user: root

    - name: Downloading pub key
      fetch: src=~/.ssh/id_rsa.pub dest=/tmp/pub-keys/replicas/{{ansible_hostname}}/id_rsa.tmp flat=yes
      changed_when: False

    - name: check if primary is already a known host
      shell: ssh-keygen -H -F {{ postgresql_primary }}
      register: know_host
      ignore_errors: true
      changed_when: False

    - name: make sure the ssh folder exists
      file: name=~/.ssh state=directory

    - name: make sure known_hosts exists
      file: name=~/.ssh/known_hosts state=touch

    - name: Make primary a known host
      shell: ssh-keyscan -H {{ postgresql_primary }} >> ~/.ssh/known_hosts
      when: know_host|failed
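
If you are in that multi-host situation, the loop version of that last task might look something like this sketch. It is not part of my playbook; it assumes a group named primary-db, leans on the facts gathered in Play 1 so that hostvars has an address for each host, and omits the already-known-host check for brevity.

    - name: Make every primary a known host
      shell: ssh-keyscan -H {{ hostvars[item]['ansible_default_ipv4']['address'] }} >> ~/.ssh/known_hosts
      with_items: groups['primary-db']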

Play 3

Play 3 is almost identical, but now we’re creating keys for our primary database server. The process is the same though: create the keys, fetch them to a temporary folder, and add the other servers as known hosts.

- name: create keys for primary
  hosts: primary-db
  user: "{{config_user}}"
  sudo: yes
  sudo_user: postgres

  tasks:
  - name: Generating postgres user and ssh key
    user: name=postgres group=postgres generate_ssh_key=yes
    sudo: yes
    sudo_user: root

  - name: Downloading pub key
    fetch: src=~/.ssh/id_rsa.pub dest=/tmp/pub-keys/primary/{{ansible_hostname}}/id_rsa.tmp flat=yes
    changed_when: False

  - name: check if backup is already a known host
    shell: ssh-keygen -H -F {{ postgresql_backup }}
    register: know_host
    ignore_errors: true
    changed_when: False

  - name: make sure the ssh folder exists
    file: name=~/.ssh state=directory

  - name: make sure known_hosts exists
    file: name=~/.ssh/known_hosts state=touch

  - name: Make backup a known host
    shell: ssh-keyscan -H {{ postgresql_backup }} >> ~/.ssh/known_hosts
    when: know_host|failed

Play 4

Play 4 is where we start to use those saved public keys. On the primary database we make sure our trusted user exists, and then we loop through the local temporary directory and add all those keys as authorized keys for that user. Then we delete the local copies of the public keys.

- name: add keys to primary
  hosts: primary-db
  user: "{{config_user}}"
  sudo: yes
  sudo_user: root

  tasks:
    - name: Create postgres user
      user: name=postgres group=postgres

    - name: add keys
      authorized_key: user=postgres key="{{ lookup('file', item) }}"
      with_fileglob:
        - /tmp/pub-keys/replicas/*/id_rsa.tmp

    - name: Deleting public key files
      local_action: file path=/tmp/pub-keys/replicas state=absent
      changed_when: False
      sudo: no

Play 5

Play 5 is just the reverse of Play 4. It copies the other set of keys from your local folder up to the servers as authorized keys.

- name: add keys to backup
  hosts: backup-dbs
  user: "{{config_user}}"
  sudo: yes
  sudo_user: root

  tasks:
    - name: ensure replication user
      user: name=replication group=replication

    - name: add keys
      authorized_key: user=replication key="{{ lookup('file', item) }}"
      with_fileglob:
        - /tmp/pub-keys/primary/*/id_rsa.tmp

    - name: Deleting public key files
      local_action: file path=/tmp/pub-keys/primary state=absent
      changed_when: False
      sudo: no

Wrapping up

Here’s the full playbook, written out:

- hosts: db
  name: gather facts about all dbs
  tasks:
    - fail: msg=""
      when: false

- name: create keys for replicas
  hosts: replicas
  user: "{{ config_user }}"
  sudo: yes
  sudo_user: postgres

  tasks:
    - name: Generating postgres user and ssh key
      user: name=postgres group=postgres generate_ssh_key=yes
      sudo: yes
      sudo_user: root

    - name: Downloading pub key
      fetch: src=~/.ssh/id_rsa.pub dest=/tmp/pub-keys/replicas/{{ansible_hostname}}/id_rsa.tmp flat=yes
      changed_when: False

    - name: check if primary is already a known host
      shell: ssh-keygen -H -F {{ postgresql_primary }}
      register: know_host
      ignore_errors: true
      changed_when: False

    - name: make sure the ssh folder exists
      file: name=~/.ssh state=directory

    - name: make sure known_hosts exists
      file: name=~/.ssh/known_hosts state=touch

    - name: Make primary a known host
      shell: ssh-keyscan -H {{ postgresql_primary }} >> ~/.ssh/known_hosts
      when: know_host|failed

- name: create keys for primary
  hosts: primary-db
  user: "{{config_user}}"
  sudo: yes
  sudo_user: postgres

  tasks:
  - name: Generating postgres user and ssh key
    user: name=postgres group=postgres generate_ssh_key=yes
    sudo: yes
    sudo_user: root

  - name: Downloading pub key
    fetch: src=~/.ssh/id_rsa.pub dest=/tmp/pub-keys/primary/{{ansible_hostname}}/id_rsa.tmp flat=yes
    changed_when: False

  - name: check if backup is already a known host
    shell: ssh-keygen -H -F {{ postgresql_backup }}
    register: know_host
    ignore_errors: true
    changed_when: False

  - name: make sure the ssh folder exists
    file: name=~/.ssh state=directory

  - name: make sure known_hosts exists
    file: name=~/.ssh/known_hosts state=touch

  - name: Make backup a known host
    shell: ssh-keyscan -H {{ postgresql_backup }} >> ~/.ssh/known_hosts
    when: know_host|failed

- name: add keys to primary
  hosts: primary-db
  user: "{{config_user}}"
  sudo: yes
  sudo_user: root

  tasks:
    - name: Create postgres user
      user: name=postgres group=postgres

    - name: add keys
      authorized_key: user=postgres key="{{ lookup('file', item) }}"
      with_fileglob:
        - /tmp/pub-keys/replicas/*/id_rsa.tmp

    - name: Deleting public key files
      local_action: file path=/tmp/pub-keys/replicas state=absent
      changed_when: False
      sudo: no

- name: add keys to backup
  hosts: backup-dbs
  user: "{{config_user}}"
  sudo: yes
  sudo_user: root

  tasks:
    - name: ensure replication user
      user: name=replication group=replication

    - name: add keys
      authorized_key: user=replication key="{{ lookup('file', item) }}"
      with_fileglob:
        - /tmp/pub-keys/primary/*/id_rsa.tmp

    - name: Deleting public key files
      local_action: file path=/tmp/pub-keys/primary state=absent
      changed_when: False
      sudo: no

Save this in a .yml file and you can run it with ansible-playbook. You can also include it in other playbooks where you need to ensure both groups of servers can SSH between each other before your configuration really begins.
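
For example, a wrapper playbook might pull it in before the plays that actually rely on the trust relationship. The file name and role below are hypothetical:

- include: setup-db-trust.yml

- name: configure replication
  hosts: replicas
  roles:
    - replication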

Letter to my representative on the NSA

Below is a letter to my representative in the US House of Representatives that I wrote in response to the most recent revelations about the NSA’s abuses and over-reach, as documented here in the New York Times. I sent this letter via snail mail, but I wanted to put it on the web as well to galvanize others. I’ve pasted it in its entirety below.

Representative Wenstrup,

I am writing with deep concern about the recent revelations regarding the NSA’s unconstitutional over-reach and abuses. The stories from the past few months about vast surveillance programs and the over-broad collection of American citizens’ communications have left me very unsettled and distrustful of the American government. However, those feelings pale in comparison to my shock, disbelief and anger at the latest revelations about the NSA’s top secret “Bullrun” and “Sigint” programs.

As detailed in a September 5th article in the New York Times (N.S.A. Foils Much Internet Encryption), these programs consist of some of the following actions:

  • Inserting vulnerabilities into commercial encryption systems
  • Developing techniques to defeat key encryption schemes such as HTTPS, SSL, and VPN
  • Stealing encryption keys from major Internet companies

These actions are significantly more dangerous than the over-broad surveillance we’ve already been debating. The problem is that they significantly weaken the cryptographic infrastructure upon which our entire digital economy is built. By intentionally introducing backdoors in key cryptographic technologies, the NSA exposes our entire communications and networking systems to malicious hacking by criminal and foreign elements.

As a professional computer engineer, I am keenly aware of how important this cryptographic infrastructure is to our daily lives. By working to weaken this infrastructure, the NSA is placing the digital transactions of millions of ordinary Americans at risk. eCommerce, online banking, electronic medical records, and numerous other aspects of our digital lives are completely reliant on strong cryptographic technology. I understand the NSA’s concern about losing out on valuable intelligence because of encryption, but the trade-offs and risks involved in actively working to undermine the very foundations of the Internet are far too high.

These risks are not purely theoretical either. For a clear example of how deliberately created back doors can be exploited by criminal elements, take a look at the 2005 “Athens Affair.” In this epic security fiasco, hackers infiltrated the infrastructure of the Greek arm of the telecom provider Vodafone. For almost half a year they bugged the phones of over 100 key players in the Greek political scene, including the prime minister, the mayor of Athens, and an employee of the US embassy. They were able to do this by hooking into the same back door used by law enforcement for legal wiretaps. To this day the perpetrators haven’t been caught, and the full extent of their surveillance is not known.

The NSA’s programs create the risk that the US will one day be embroiled in an “Athens Affair” of its own unless the agency is curtailed and its abuses reined in. I’ve read your own opinions on the prior revelations about the NSA’s over-reach, and I too recognize that it is important that our intelligence agencies have adequate information to keep Americans safe, while at the same time respecting our right to privacy and liberty. I appreciate that you’ve supported an amendment clarifying that NSA funds should not be used to target or store the communications of US citizens.

However, in light of the most recent revelations, I do not think that this is enough. I want you to know that during the 2014 elections I will not vote for any candidate that does not do the following:

  • Condemn the NSA’s attempts to deliberately weaken the cryptographic infrastructure our digital lives rely on.
  • Call for a thorough, detailed, and above all transparent review of the NSA’s intelligence programs, particularly those centered on interfering with cryptographic technology
  • Call for legislation preventing the NSA from working with manufacturers and software companies to introduce non-targeted vulnerabilities into commercial hardware and software
  • Call for the dismissal of the Director of the NSA, Keith B. Alexander and other key NSA officials involved in the decision to focus so much of the agency’s resources on a quest to undermine basic encryption and place all Americans at risk.

I appreciate your consideration on this important issue and hope that you will make choices that will allow me to vote for you in next year’s elections.

30 Days of Scala: It’s a Wonderful Life Part 2

This is a continuation of my thirty days of Scala series about learning the programming language Scala. For a list of all posts, click here.

In my last post I covered the process of setting up my development environment. Now we get down to discussing the actual code.

I worked the Game of Life kata multiple times, using two different approaches. In the first, I focused on creating an actual Cell class which was responsible for handling its own life and death, and for keeping track of its neighbors. In the second, I created a Game class that managed the states of all the cells and tracked the neighbors through a single Set containing the coordinate pairs of all active cells in the game.

For most of the katas, I was primarily focused on familiarizing myself with the Scala syntax. Scala does a number of things differently than C# or Java. Some of this is cosmetic, like how Scala flips the type and parameter names in a method signature, but some of it is more fundamental. In general, Scala seems to be much less prescriptive about syntax for syntax’s sake. The compiler doesn’t care if you forget a semicolon at the end of the line. Single line methods don’t require brackets. You don’t have to explicitly return values from a method; you can just put the value at the end and Scala will assume you wanted it returned. The list goes on and on. It can be very freeing and make it simple to write code without worrying about syntactical details, but for someone coming from a more prescriptive language it definitely hurts readability. The question is whether I’ll feel the same way after a month of writing and reviewing katas. To a large extent I suspect that as my brain gets used to it, the readability will be fine.
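
A one-line Scala method shows several of those differences at once (a generic sketch, not code from the kata): the parameter name comes before its type, the single expression needs no braces or return keyword, and there is no semicolon.

def isAlive(liveNeighbors: Int): Boolean = liveNeighbors == 2 || liveNeighbors == 3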

However, there are some more fundamental things about Scala that I suspect will not get easier as I get used to them. A good example would be some of the challenges I had getting my automated testing set up. I decided to use the testing framework specs2 after reviewing some documentation on it. In particular I focused on the acceptance testing syntax documented on their website. The syntax is impressively clean, consisting of basically a block of plain text with test code interpolated in using a custom string interpolation method. They also have a great syntax for doing repeated tests with different inputs. In general, getting a basic test up and running for each of these scenarios was not hard, but when I started to try some more complicated setups that weren’t explicitly covered in the documentation, I started running into challenges. Debugging these was extremely hard because specs2 makes extremely heavy use of operator overloading to create its syntax. Looking at the code, I had a very tough time understanding what it was actually doing, even at a very high level. I had to dig into the code for specs2 on GitHub to get even a basic grasp of the control flow that specs2’s very abstract syntax was actually generating.
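
For reference, the acceptance style I’m describing looks roughly like this. It’s a sketch adapted from the specs2 documentation (using the s2 string interpolator from specs2 2.x), not one of my kata tests, so the details may vary with your version.

import org.specs2._

class QuickStartSpec extends Specification { def is = s2"""
  This minimal specification shows the acceptance style

  where addition works as expected $addition
  """

  def addition = (1 + 1) must_== 2
}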

The issue I was having turned out to be fairly prosaic: it was just an incorrect version number for specs2. I downloaded the right version and everything worked great, but the opaqueness of specs2’s operator overloading had me digging into its internals unnecessarily because I feared I was misusing something. That said, I don’t necessarily disagree with how the specs2 team did things. There’s no arguing that the syntax makes the tests very readable and gets rid of a lot of clutter that does nothing to impart meaning. But it does so at the expense of making the testing framework itself harder to understand. In this case that’s probably okay; the tradeoff makes sense given how often you’ll be reading your tests. But in Scala the power is definitely there to shoot yourself in the foot by misusing these features.

On the plus side, coming from C# a lot of the functional aspects of Scala felt very familiar. I’m a huge advocate of LINQ and was drawn to functional programming through it before I had even heard the term. The syntax of Scala’s functional collection operators is almost identical to LINQ, with some minor differences in terminology (filter vs Where, map vs Select). I definitely used them fairly heavily, particularly in my second implementation of the kata, where essentially all of the work was various forms of collection manipulation.
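
Here’s a trivial illustration of the parallel (a generic sketch rather than kata code), with the LINQ equivalent in the comment:

val numbers = List(1, 2, 3, 4, 5)

// LINQ: numbers.Where(n => n % 2 == 0).Select(n => n * 10)
val scaledEvens = numbers.filter(_ % 2 == 0).map(_ * 10)  // List(20, 40)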

In general I found Scala fairly easy to get used to, but I didn’t have any aha moments where I saw why it would be a better fit than C#. Of course this isn’t surprising, given that this was my first foray into it, and that I was doing code katas that by their very nature are designed to be fairly language agnostic. I know one major strength of Scala is how effective it’s supposed to be when you’re trying to handle concurrency and parallel processing, so that may be something I start exploring next.

30 Days Of Scala: It’s a Wonderful Life Part 1

This is a continuation of my thirty days of Scala series about learning the programming language Scala. For a list of all posts, click here.

The first kata I started with for this project is based on Conway’s Game of Life. This is actually the first code kata I was ever exposed to, at my first clean code retreat, so it’s always held a special place in my heart. Plus I loved Conway’s Life as a kid, when I would manually run games on graph paper while I was bored during math class. Basically an uber-geeky form of doodling.

If you aren’t already familiar with Conway’s Game of Life, here’s a good explanation. As a quick summary though, it’s a 2D grid where, each turn, cells turn on and off based on the state of their neighbors.
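
In code, the heart of those rules boils down to something like this (a sketch of the standard rules in Scala, not my kata solution):

// a live cell survives with 2 or 3 live neighbors;
// a dead cell comes to life with exactly 3
def nextState(alive: Boolean, liveNeighbors: Int): Boolean =
  if (alive) liveNeighbors == 2 || liveNeighbors == 3
  else liveNeighbors == 3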

For this coding kata, my real goal was to learn how to set up my development environment, so I actually gave myself much longer than 30 minutes, including research on that topic. I started out looking into an IDE like IntelliJ, since that seemed very similar to the Visual Studio experience I’m familiar with on the .NET side. However, as I did my research and played with IntelliJ, I found it was abstracting me away from key aspects of the Scala tooling, particularly SBT.

SBT (Simple Build Tool) is Scala’s build management tool, and a unique beast. It’s basically a dedicated console that allows you to handle compiling, dependency management, and continuous testing from one place. Plus it’s got an extensible plugin model, so you can add additional functionality on your own. I’ve seen tools with similar goals, like Grunt in the node.js space, but SBT feels like a more sophisticated, comprehensive implementation. I didn’t want some sort of IDE abstracting me away from such a powerful tool, particularly when the documentation I was reading was highlighting the key role SBT plays in Scala.
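
To give a flavor of it, once you launch sbt in a project directory you can run commands like these from its console (standard SBT commands; the annotations are mine):

compile      // compile the project
test         // run the test suite once
~test        // watch the source files and re-run the tests on every change
reload       // pick up changes to the build definition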

So instead I decided to go with one of the more popular minimalist code editors out there, Sublime. If you aren’t familiar with Sublime, it’s a highly extensible editor with a wide range of plugins for every imaginable language. It’s not a turnkey solution like some of the bigger IDEs, but it’s pretty easy to install a key set of plugins to create a first class, IDE-like experience for Scala. Here are the key plugins I found:

Sublime ENSIME – A port of a syntax highlighting/code completion add-on for Emacs. It’s a little complicated to set up, but it adds a lot to the experience, making it easier to catch and fix errors and to discover language features. It’s not 100% as good as Visual Studio’s IntelliSense, but it still manages to fill that niche fairly well.

Sublime SBT – A plugin that allows you to open an SBT console in a pane at the bottom of Sublime. It also offers quick keyboard shortcuts for common SBT commands, like build or starting continuous testing. Easy to set up and definitely a must have.

With this setup, I found myself being highly productive, and happy with the quick feedback loop I was getting during test debug sessions.

In addition to Sublime, I also found a great utility to help jump start each Scala project from a series of code templates: giter8. Giter8 is a console app that can download a project template for any programming language from GitHub and use it to quickly create basic boilerplate for your project. It’s an awesome idea, since GitHub is a natural repository for those sorts of things, and since starting without boilerplate is always a little tricky, especially if you’re just starting with the language.

In the next section I’ll talk more about the actual coding experiences I had while working this kata.

30 Days of Scala

It’s been a while since I’ve posted here, but I’ve just started on a new project that seemed like it would be a crime not to do some blog posts on. For about a week and a half I’ve been teaching myself Scala, in an attempt to branch out from the .NET space and get back to some of my open source roots.

I originally chose Scala because I was looking for something that was statically typed, but not C# or Java. I have nothing against dynamically typed languages, but I’ve already played around with node.js and Ruby and I wanted something different. Scala seemed like a good and interesting fit.

I started out by trying to read some basic Scala tutorials. The stuff put out by Twitter folks, like Scala School or Effective Scala, was good, but I still felt like the language just wasn’t resonating with me. Usually I try to learn a new language in the context of some sort of big project, which gets me coding, but that usually results in a somewhat spotty acquisition focused around whatever pieces are important for the project at hand. So this time I decided to try something different. For the next month or so I’m going to try to do a series of code katas in Scala.

If you aren’t familiar with the term, a code kata is basically a simple, short coding exercise, meant to be done in 30 minutes to an hour. The term is borrowed from martial arts, where it literally means “form” in Japanese. The idea is that a kata is a set of repeated movements meant to systematically impart part of the larger art or discipline. Much like you’ll see practitioners of martial arts repeating the same motions over and over again, the point of code katas is to solve the same problems (or the same sorts of problems) many times to help build programming skill systematically.

Once I decided to take this approach, it occurred to me that blogging would be a natural way to help the process of synthesizing that information. I figured this might help other people picking up the language (particularly if they’re coming from C# like I am) and might attract current Scala users into a dialog that could lead to even more learning for me. So for the next 30 days I plan to consistently code in Scala or write about coding in Scala and see where that takes me. I don’t plan to be super formal about things; sometimes I’ll spend longer than an hour on a problem, sometimes I’ll spend less, but I do plan to do something daily as my schedule allows. I’ll also post all of my code up on GitHub to help those following along.

Live.js and Visual Studio Part 3 – Automated Testing

This post is part of a series. Click for Part 1 or Part 2

In the last two posts I explored how Live.js can help you do client side testing, particularly for responsive layouts. Now we’ll be looking at another way that live.js can help out in your client side development.

But before we can do that, we have to take a brief foray into the world of javascript based unit testing. I’m not going to try to give a full treatise on the subject, but just a brief introduction so that we can see how live.js can help with this part of your development workflow too.

If you aren’t familiar with client side unit testing, don’t sweat it, it’s pretty straightforward. If you want a good overview, check out Smashing Magazine’s intro or this great video on the QUnit framework. At a high level though, it looks something like this.

1. Just like with your backend code, JavaScript testing starts with how you structure your code in the first place. Focus on small methods with minimal dependencies that return values you can validate.

2. There are a lot of JavaScript unit testing frameworks out there, but they all generally work the same way. Tests are functions passed into a method defined by the framework. To run your tests, you build a simple HTML page which has script references to the framework library, your test code, and your application code. To run the tests, you load the page and the framework manipulates the HTML to report your results. (A minimal example follows this list.)
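
For instance, a test in QUnit’s 1.x global style is just a function handed to the framework’s test method. This is a generic sketch; add is whatever function you’re testing, and qunit.js is assumed to be referenced by the page.

test("adds two numbers", function () {
  equal(add(1, 2), 3, "1 + 2 should equal 3");
});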

With this high level understanding, it’s pretty straightforward to see how live.js can help on this front. If you add live.js to the HTML page that runs your tests, then that page can refresh automatically and run your tests every time your test code or application code changes.

Note that your automated testing page doesn’t have to be static HTML either. For example, in MVC we can set up a TestsController and Tests view that look a little like this.

Controller

public class TestsController : Controller
{
    //
    // GET: /Tests/

    public ActionResult Index()
    {
        var testFiles = Directory.EnumerateFiles(Server.MapPath("~/Scripts/spec")).Where(f => f.EndsWith(".js"));
        var sutFiles = testFiles.Select(s => s.Replace("_spec", ""));
        ViewBag.SutFiles = sutFiles;
        ViewBag.TestFiles = testFiles;

        return View();
    }
}

View

<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="width=device-width" />
    <title>Tests</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <script src="/Scripts/spec/lib/<your-testing-framework>.js"></script>
    @foreach (var fullpath in ViewBag.SutFiles)
    {
        var fileName = Path.GetFileName(fullpath);
    <script src="/Scripts/@fileName"></script>
    }
    @foreach (var fullpath in ViewBag.TestFiles)
    {
        var fileName = Path.GetFileName(fullpath);
    <script src="/Scripts/spec/@fileName"></script>
    }
    <script>
        onload = function () {
            var runner = mocha.run();
        };
    </script>
  </head>
  <body>
  </body>
</html>

The basic idea is that we have a controller that builds up a list of files by looking in a specific folder where we put all of our tests. For all of the files it finds, it passes the paths along to the view, which then renders a set of script reference tags. The result is that our page dynamically adds all the assets it needs to test our JavaScript. Then live.js will do its thing and automatically refresh to run the tests any time there is a change.
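
One thing the sample view above leaves out is the live.js reference itself. To get the auto-refresh behavior on the test page, you’d add the same live.js script reference from the first post in this series (assuming live.js lives in your Scripts folder):

<script type="text/javascript" src="@Url.Content("~/Scripts/live.js")"></script>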

Live.js and Visual Studio Part 2 – Test All The Devices

This post is part of a series. Click for Part 1 or Part 3

In my last post I introduced live.js and showed how you could setup a visual studio project to use it to speed up your front end development work. Today I’m going to walk you through how to configure your development environment so external mobile devices can connect to the machine and you can get this same live refresh on multiple screens and form factors simultaneously.

The idea is to create a setup where you can lay out all the devices you’re currently testing and see your changes on them live, without lifting your fingers from the keyboard. Ultimately, this is fairly simple with live.js, because live.js runs entirely on the client side. So long as the devices you’re testing support JavaScript, they will live refresh just like the browser on your development machine.

The complication is that unless your development machine is running a full-blown version of IIS, by default it won’t allow external connections. Fortunately, this is fairly easy to overcome. First, make sure you’re using IIS Express instead of Visual Studio’s Web Development Server (also called Cassini). If you’re on Visual Studio 2012, IIS Express is installed and the default, but if you’re on an older version you may have to download and install it. Note that you’ll also need at least Visual Studio 2010 SP1 to make use of IIS Express.

Once you’ve got IIS Express installed, you can configure your project to use it. Go into the Properties section of your web project and then into the “Web” sub-section. Look for the radio button labeled “Use Local IIS server.” This will ensure that Visual Studio uses the local IIS Express server you just installed.

However, by default IIS Express won’t allow external connections either. To resolve that, we have to edit the IIS Express applicationhost.config file, found at C:\Users\<you>\Documents\IISExpress\config. You can usually just do a file search for IISExpress to get there. In this file, look for the <sites> node. It should look something like this.

<sites>
  <site name="WebSite1" id="1" serverAutoStart="true">
    <application path="/">
      <virtualDirectory path="/" physicalPath="%IIS_SITES_HOME%\WebSite1" />
    </application>
    <bindings>
      <binding protocol="http" bindingInformation=":8080:localhost" />
    </bindings>
  </site>
  <siteDefaults>
    <logFile logFormat="W3C" directory="%IIS_USER_HOME%\Logs" />
    <traceFailedRequestsLogging directory="%IIS_USER_HOME%\TraceLogFiles" enabled="true" maxLogFileSizeKB="1024" />
  </siteDefaults>
  <applicationDefaults applicationPool="Clr4IntegratedAppPool" />
  <virtualDirectoryDefaults allowSubDirConfig="true" />
</sites>

Find the “site” node related to the site you’re working on and go down to its bindings node.


<bindings>
  <binding protocol="http" bindingInformation=":8080:localhost" />
</bindings>

In the bindingInformation attribute, remove the string “localhost” and then save the file. Now IIS Express will try to listen for external connections on that same port.
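
After the edit, the binding for a site on port 8080 should end up looking like this:

<binding protocol="http" bindingInformation=":8080:" />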

Unfortunately, just because IIS Express wants to listen for external connections doesn’t mean the OS will let it. You have to tell Windows that it’s okay for IIS Express to listen at that URL. There are two steps to this.

First, you need to add an HTTP namespace reservation to the OS that tells Windows IIS Express can listen at the specified URL and port. The easiest way to do that is to run the following command at an administrative command prompt.

netsh http add urlacl url=http://*:8080/ user=Everyone

This will allow any user to listen for connections that come in on port 8080. You’ll need to change the port number to whatever port your application is running on. If you want to be more restrictive, you can change the * to your machine name, or change the user to the specific user that IIS Express is running under. The command above, though, will let you browse to your development machine from a device using just the IP address, which is generally more convenient.

Finally, the last thing you’ll need to do is go to the Windows Firewall and add a new inbound rule that permits inbound traffic on the port your application is running on.
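
If you’d rather script that than click through the firewall UI, a rule along these lines should do it (run from an administrative prompt, substituting your own port and rule name):

netsh advfirewall firewall add rule name="IIS Express 8080" dir=in action=allow protocol=TCP localport=8080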

Killer client side development with live.js and Visual Studio

This post is part of a series. Click for Part 2 or Part 3

Tuesday I got the opportunity to do a presentation at CINNUG. There were some requests to turn the contents into a blog post, and I figured that wouldn’t be a bad way to kick off blogging for the new year.

The talk was on how to use live.js to make debugging and developing client side code a breeze. If you aren’t familiar with live.js, it’s a simple JavaScript library that’s designed to automatically apply changes to your CSS/JS/HTML without the need for a manual refresh. Basically, it constantly polls the server for updates to the various assets. With each poll it checks to see if anything has changed and then applies the changes, either through a forced refresh or through dynamic application.

The result is that you can see the effects of your changes live, without ever taking your hands off the keyboard.

So with that basic introduction, let’s take a look at how we can take this for a test drive. For the purposes of this discussion we’ll be working with an MVC application, but everything will work just fine with pretty much any framework, WebForms included.

To start, spin up a new empty MVC project. Then go download live.js from http://livejs.com/ and add it to your Scripts folder. Next, go into the Views -> Shared folder and edit your _Layout.cshtml page by adding a script reference like this:

<script type="text/javascript" src="@Url.Content("~/Scripts/live.js")"></script>

Now we can take live.js for a test drive. Start up your application and go to the homepage. Arrange your windows so you can see the browser while you edit the application. Go into the Site.css file and edit something, like maybe the background color of the body. In just a moment you’ll see the change appear right in front of you. To see how this works, boot up Fiddler. You’ll see a constant stream of requests as live.js polls the server for updates. While this is great for debugging, we don’t want this kind of behavior happening in production. So how do we avoid that? Well, it’s simple enough to use a basic HtmlHelper to only render the live.js reference if we’re running in debug mode. Here’s the helper code:


public static class IsDebugHelper
{
    public static bool IsDebug(this HtmlHelper htmlHelper)
    {
      #if DEBUG
        return true;
      #else
        return false;
      #endif
    }
}

Then we just call this helper in an if statement that conditionally renders our script reference, just like this.

@if (Html.IsDebug())
{
    <script type="text/javascript" src="@Url.Content("~/Scripts/live.js")"></script>
}

This gives us a basic live.js setup for our MVC application. Aside from dynamically reloading our CSS, live.js also does the same for HTML and JavaScript.

Next post I’ll show you how to use live.js to get this same sort of live refresh set up on external devices, like phones or tablets, to make it easier to test responsive designs.

TDD: A Case Study with Silverlight

One of my goals for the new year was to follow TDD on a real project at work. I actually got my chance very early this year with a fairly basic Silverlight project. The project was short and simple: basically a fancy list of links and resources managed in SharePoint and exposed in a Silverlight interface allowing a variety of queries and searches. It was large enough to be more than just a toy project, but small enough that I didn’t worry about doing much damage by trying out TDD for the first time.

I learned a lot, and I think the work I did makes a good case study for someone interested in getting started with TDD. In my next few blog posts, I plan to walk readers through my development environment, the specifics of the techniques I followed, and the lessons I learned.

The Environment

As I said at the start, the project was written in Silverlight. For my testing I used the Silverlight Unit Test Framework, which allows for asynchronous testing, something that is vitally important for any web-service-based integration testing. On top of that I used a fantastic continuous test runner named StatLight. StatLight is a small console application that automatically runs your unit tests every time your test project’s .xap file changes. This means that running your tests is as easy as hitting Ctrl + Shift + B to build the project, and StatLight does the rest. I quickly got in the habit of building after every code change so that I was getting instant feedback on what I was doing.

The Process

Since this was an experiment, I tried to stick as close to the rules of TDD as possible. This meant I never wrote a line of code until I had already written a test covering it, and that my tests were extremely granular. Even simple tasks like parsing a single line of XML returned from a web service had a test devoted to them. I also tried not to overthink some of the details of my design, instead trying to put off design decisions until I had already written the test necessitating them.

The Result

Overall, my experience was hugely positive. I’m convinced that TDD definitely makes me more effective and productive, and I want to leverage it wherever I can in the future. In general I found there were three major benefits to TDD, and I learned three lessons about how to do TDD better next time. Let’s start with the good.

Flow – It was shocking how good it felt to be able to code without stopping. With TDD my brain stayed in code mode for hours at a time. Usually, I slip in and out of this mode throughout the day, especially when I’m manually testing code I’ve just written. With TDD, that never happened, and it made my concentration and focus 20x better. When I’m manually testing, there are all sorts of interruptions and opportunities for distraction. Waiting for the page I’m testing to load? I’ll just go browse Google Reader for a bit. Stepping through a tedious bit of code so I can examine the value of one variable? Let me just skim this email while I do that. With TDD though, my brain never got an opportunity to slip away from the task at hand. Throughout the day I was laser focused on whatever I was doing.

Similarly, if I did have to step away for an interruption (meetings, lunch, helping another dev, etc.) it was easy to get back into the flow and figure out where I was. Just hit Ctrl + Shift + B and see what test failed. Since each test was so small and covered such a small area, I didn’t have a ton of details about what I was doing slip away when I got distracted.

Design – I didn’t totally abandon upfront design, but I did do less design than I usually do. I mostly sketched out the layers at the boundaries of the application: the pieces that interacted with the user and the pieces that interacted with the data source, SharePoint, since both of those were external pieces I couldn’t exercise complete control over. Once I had those layers designed, though, I let TDD evolve the internal architecture of the application, which actually led to a couple of neat design decisions I don’t think I would have come up with otherwise. The coolest of these was how I handled loading up a given model for a given page. In our application the same view could be wired up to a variety of different models. The specific model depended on the URL the user used. I ended up with two separate objects which handled this process: the Model Locator, which parsed the incoming URL, and the Model Map, which tied each model to a path-like string that represented how the data was maintained in the data store. The Model Locator would use the URL to extract the key elements identifying the right model, and then pass those into the Model Map, which would use those elements to find the right model by building the path representation for it. The end result was a nice decoupling between the path structure the user used to browse to a model and the way it was actually handled by the data layer. If I had been designing up front, I am almost positive I would have missed this approach and put too much of the logic into the Model Locator itself, tightly coupling the data structure and the navigation structure. Instead, I put off making any decisions about how the Model Locator interacted with the data until the last minute, and by then it was clear that a new class would improve the design significantly.

Refactoring Ease of Mind – Not everything about this project was perfect. In fact, towards the middle there were some significant pain points because I had to be temporarily put on another, higher priority project. To keep things moving, another developer was assigned to the project. There wasn’t enough time invested in communication, and as a result he ended up taking a different approach in some key areas and duplicating some work I’d already done. By the time I came back, his code was wired up to the UI, and it didn’t make sense to try to reincorporate the pieces of my code that were performing some of the same functions. Unfortunately, there were a number of pieces that handled things like search and model location that were still expecting the classes defined in my code. All of those had to be modified to work with his architecture instead.

This would have been a really scary refactoring to do in the timeline we had, except for the automated tests I already had covering all of my code. With only a few minor tweaks, that test suite was modified to test my search services using his new classes, and we had extremely detailed information about where my code was now broken. After less than a day of work, we’d switched everything over without a hitch. And because of the tests, we had confidence that everything would work fine.

I won’t say much more in summary, because I think the benefits speak for themselves. Next post, I’ll talk about what I’d do differently next time, and how I plan to get better at TDD in the future.

Custom assertions with should.js

Lately I’ve been playing with Node.js and vows, doing some TDD on a side project at home. I love the very readable syntax of should.js, which lets you frame your assertions as almost natural English. However, pretty early on I realized I wanted to add my own custom assertions to the should object, to abstract away some of the messy details of my testing and keep the code readable. In the past I’ve used custom asserts with .NET for testing, and I find it allows you to quickly express domain specific concepts even inside your tests, for better readability and clarity.

One particular example was a test where I wanted to make sure the elements in a <ul> were the same as those in a JavaScript array. Rather than trying to parse the list into another array and do a comparison in the test body, I wanted to have an assertion that read something like $("#list").children().should.consistOfTheElementsIn(array), where consistOfTheElementsIn would handle the parsing and comparison.

After a little bit of playing around, I worked out a pretty simple way to do this. Basically I create a new node module called customShould.js. customShould.js requires should, and then exports the should object. Additionally, customShould adds a new method to the “Assertion” object created by should.js. Here’s the code:


var should = require('should');

exports = module.exports = should;

should.Assertion.prototype.aHtmlListThatConsistOf = function (list) {
  var compareArrays = function (first, second) {
    if (first.length != second.length) { return false; }
    var a = first.sort(),
        b = second.sort();
    for (var i = 0; i < a.length; i++) {
      if (a[i] !== b[i]) {
        return false;
      }
    }
    return true;
  };

  // the value under test is exposed by should.js as this.obj
  var matches = this.obj.match(/<li>.*?<\/li>/gi) || [];
  for (var matchIndex in matches) {
    matches[matchIndex] = matches[matchIndex].replace("<li>", "").replace("</li>", "");
  }
  this.assert(compareArrays(matches, list), "lists do not match");
};

It’s all pretty straightforward. Then, to use your custom asserts, you just require your customShould.js module instead of the normal should module.
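
For instance, a test could then use the new assertion roughly like this (a hypothetical sketch, with an inline HTML string standing in for whatever markup you’re checking):

var should = require('./customShould.js');

"<ul><li>a</li><li>b</li></ul>".should.be.aHtmlListThatConsistOf(["a", "b"]);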