Greg's Blog

helping me remember what I figure out

Three Things I Learned About PhantomJs This Week


This week we had a bit of a fun moment with one of our feature tests. As a little background, we are using a combination of PhantomJs, Cucumber.js and WebDriver.io for our end-to-end/user journey tests.

One test failed repeatedly when we used PhantomJs, but if we switched to Chrome it would pass. The test itself sent an asynchronous POST request to an API and then, based on the response, the browser would redirect to the created resource (well, it did more, but that was the basic premise).

After some head scratching and painful debugging we eventually narrowed it down to our Location header disappearing from the API response. We initially thought this might be a bug in PhantomJs, because we could clearly see the server sending the header and it showed up in the debug tools as well; however, it turns out that this is PhantomJs' security model. By default PhantomJs only handles 'standard' headers, and apparently Location isn't one of them. The solution in the end is quite simple: start PhantomJs with the --web-security flag set to false: ./node_modules/.bin/phantomjs --web-security=false

Note that this also turns off SSL certificate checking, but since we are using this for test purposes we are fine. Your mileage may vary though depending on what you are using PhantomJs for.

The other thing I learned during this episode was that there is no need to use or install the standalone Selenium server, as you can launch PhantomJs with WebDriver support (in the shape of GhostDriver, I believe): ./node_modules/.bin/phantomjs --webdriver=4444

To debug the test we used the following flag, which allows you to then remote debug using a browser and the web inspector interface: ./node_modules/.bin/phantomjs --remote-debugger-port=9000

Open the browser and go to http://localhost:9000 and you will be presented with a list of PhantomJs sessions. By selecting one you can start your debugging.

So there you are: three things I learned about PhantomJs.

Refactoring Using Hash#fetch


Last night I decided to add a simple memoization pattern to my coder_wall gem to stop unnecessary network calls to retrieve data from the CoderWall API. Memoization is a form of caching, though there's more to it than that. So let's refer to Wikipedia for a more detailed explanation and further reading if you are interested:

In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

My intent was to reduce calls to the CoderWall API for the same username. Here is what the code for fetching data used to look like:

# Fetch data from CoderWall
def fetch(username)
  uri = uri_for_user(username)
  json = send_request(uri)

  begin
    JSON.parse(json.read)
  rescue JSON::ParserError
    raise InvalidJson, 'Received invalid json in response'
  end
end

I created a @response instance variable that would store the parsed results of the call. If the username key existed in the hash, just return the result; otherwise go over the wire to get the data:

# Fetch data from CoderWall
def fetch(username)
  return @response[username] unless @response[username].nil?

  uri = uri_for_user(username)
  json = send_request(uri)

  begin
    @response[username] = JSON.parse(json.read)
  rescue JSON::ParserError
    raise InvalidJson, 'Received invalid json in response'
  end
end

So far so good, until I ran Metric_fu and got two warnings: nil checking and duplicate calls. That immediately reminded me of a passage in Avdi Grimm's excellent book Confident Ruby. Armed with that knowledge I was able to make one change that removed the code smells:

# Fetch data from CoderWall
def fetch(username)
  @response.fetch(username) do
    uri = uri_for_user(username)
    json = send_request(uri)

    begin
      @response[username] = JSON.parse(json.read)
    rescue JSON::ParserError
      raise InvalidJson, 'Received invalid json in response'
    end
  end
end

Because @response is a Hash, I was able to leverage the fetch method in conjunction with passing a block to it, thus avoiding any nil checking. There's a little more to it, but really you should read all about it in Avdi's book; it is a veritable treasure trove of patterns.
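To see the pattern in isolation, here is a minimal, self-contained sketch (the `expensive_lookup` method is a hypothetical stand-in for the real network call, not code from the gem):

```ruby
# Hash#fetch with a block: the block only runs when the key is missing,
# so repeated lookups for the same username skip the expensive call.
def expensive_lookup(username)
  # stands in for the real API request
  "data for #{username}"
end

def fetch_cached(cache, username)
  cache.fetch(username) do
    cache[username] = expensive_lookup(username)
  end
end

cache = {}
fetch_cached(cache, 'greg')  # cache miss: runs expensive_lookup
fetch_cached(cache, 'greg')  # cache hit: value comes straight from the hash
```

Note that Hash#fetch without a block raises a KeyError for a missing key, which is exactly why the block form lets you drop the nil check entirely.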

Raml and Osprey - a Better Way to Build Mock Apis


It is generally considered a good idea to develop and test your application against mock services rather than the real thing. As we have been embarking on a new project, the team discussed the need for mock services and how we would manage them. On prior projects the effort involved in maintaining these was somewhat painful and I was a little wary. One of the tools to help deal with the maintenance was RAML, the RESTful API Modeling Language, which

… is a simple and succinct way of describing practically-RESTful APIs. It encourages reuse, enables discovery and pattern-sharing, and aims for merit-based emergence of best practices. The goal is to help our current API ecosystem by solving immediate problems and then encourage ever-better API patterns. RAML is built on broadly-used standards such as YAML and JSON and is a non-proprietary, vendor-neutral open spec.

Our intention is to use this to define the APIs we want to build and interface with, to use the specification we create to validate both development and integration, and to use that specification as the foundation for our mocks.

After further research I discovered osprey, which is a

… JavaScript framework, based on Node and Express, for rapidly building applications that expose APIs described via RAML, the RESTful API Modeling Language.

While it's still in development and doesn't yet support documentation and validation, you are able to very quickly stand up a mock service with an API described with RAML. There's also a CLI that allows you to scaffold an API based on a specification defined using RAML.

I decided to spike things a little using the CLI tool. Once the CLI has been installed globally using NPM, you can simply run the following:

osprey new --name hello-world

By default you should have an api.raml file created and stored under src/assets/raml. I then used that file to define my spike API using RAML:

#%RAML 0.8
---
title: Hello World API
baseUri: http://localhost:3000/api/{version}
version: v1
/users:
  get:
    description: Return all users
    responses:
      200:
        body:
          application/json:
            example: |
              {
                "data": [
                  { "name": "foo" },
                  { "name": "fee" }
                ],
                "success": true,
                "status": 200
              }
  /{username}:
    get:
      description: Say hello to the given username
      responses:
        200:
          body:
            application/json:
              example: |
                {
                  "data": {
                    "message": "Hello foo"
                  },
                  "success": true,
                  "status": 200
                }

If you are familiar with YAML and JSON you will find this very readable; even if you aren't, I think you can still agree that it is quite accessible.

To validate the specification I had created, without having to start the server, I could run:

$ osprey list src/assets/raml/api.raml
GET             /users
GET             /users/{username}

Before being able to run the service I had to make a couple of changes to the server JavaScript file, app.js.

var express = require('express');
var path = require('path');
var osprey = require('osprey');

var app = module.exports = express();

app.use(express.bodyParser());
app.use(express.methodOverride());
app.use(express.compress());
app.use(express.logger('dev'));

app.set('port', process.env.PORT || 3000);

var api = osprey.create('/api/v1', app, {
    ramlFile: path.join(__dirname, '/assets/raml/api.raml'),
    logLevel: 'debug'  //  logLevel: off->No logs | info->Show Osprey modules initializations | debug->Show all
});

if (!module.parent) {
    var port = app.get('port');
    app.listen(port);
    console.log('listening on port ' + port);
}

To start the mock api service simply run:

node /app/server.js

And you can now start exploring the API using your browser or any other tool you like to use to interact with an API. Being able to provide example data quickly allows you to scaffold your mock service with data for your app.

It's only been a couple of hours of playing with these two tools, but it already feels so much easier to work with and maintain than having to write a bunch of code to handle requests and responses.

Working With Function Arguments


Last week I was working on build tasks to daemonise some of the services we intend to use for our project. I decided to use forever and ended up with a call that looks something like this:

let task = execForeverCommand('start', 'path/to/service');

or

let task = execForeverCommand('start', 'path/to/service', 'some', 'other', 'option');

The execForeverCommand function builds up a command to execute by concatenating a variable-length list of function arguments into one single string. What follows are three different approaches I took to build up that string from those arguments.

My initial intention was to just use arguments.join(" "); however, function arguments are not an array, they are an array-like object, therefore I opted for a for-in loop:

function execForeverCommand() {
    let commands = '';
    for (var argument in arguments) {
        if(arguments.hasOwnProperty(argument)) {
            commands += ' ' + arguments[argument];
        }
    }

    return shell.task('./node_modules/forever/bin/forever ' + commands);
}

That worked, but it is very verbose. Having a working solution, I spent some time reading through the MDN article I referenced above in more detail. I straight away realised that I could change the code to use Array.prototype.slice.call and combine that with my initial plan:

function execForeverCommand() {
    let commands = Array.prototype.slice.call(arguments).join(" ");

    return shell.task('./node_modules/forever/bin/forever ' + commands);
}

Now the astute reader might have spotted the use of let in these functions. On this project we are using ES6 features (with the assistance of Babel). This gave me a third option: rest parameters. Thanks to rest parameters I rewrote the function one last time, effectively going full circle and implementing my originally intended solution, i.e. using Array.prototype.join():

function execForeverCommand(...commands) {
    return shell.task('./node_modules/forever/bin/forever ' + commands.join(" "));
}

Build Your Own Technology Radar


This week's Ruby Rogues episode had Neal Ford on to talk about the ThoughtWorks Technology Radar. One of the things that Neal discussed was creating your own Technology Radar.

I am always on the lookout for new ways of working, particularly to make my learning better. Neal recommends putting together such a radar to help focus how we learn by being more strategic rather than tactical. It also serves as a reminder of the things we might want to look into :) (well, for me at least).

So here’s my first go at putting such a list together:

Hold

  • RequireJs (Tools)

Assess

  • Gulp (Tools)
  • Scala (Languages and Frameworks)
  • Swift (Languages and Frameworks)
  • ReactJs (Languages and Frameworks)
  • Living CSS Style Guides (Techniques)
  • Gradle (Tools)
  • Play Framework (Languages and Frameworks)
  • SnapCI (Tools)
  • ES6 Transpilers (Tools)

Trial

  • Browserify (Languages and Frameworks)
  • Functional programming (Techniques)
  • ES6 (Languages and Frameworks)
  • Dashing (Languages and Frameworks)
  • Phantomas (Tools)
  • Build your own Technology Radar (Techniques)
  • Docker (Tools)
  • Programming by Intention (Techniques)
  • AWS (Platforms)
  • CircleCi (Tools)

Adopt

  • Grunt (Tools)
  • Vagrant (Tools)
  • Heroku (Platforms)
  • Linode (Platforms)
  • CodeShip (Tools)
  • Git Pull Request Workflow (Techniques)

The next step is to experiment with this visual tool for displaying your own Technology Radar. Let’s revisit this post in 6 months to see how effective this technique was and what has changed in my technology bubble.

Have you put your own Technology Radar together yet?

Building Coder_wally Using Metrics - Part Deux


Earlier last week I posted a short piece on building Coder_Wally and some of the tools I used to improve the code. This post continues on from where I left off, talking about Reek and RuboCop.

RuboCop was an interesting tool, as it helped me write code in a more idiomatic way, i.e. more like a Ruby developer would. Some of the methods I had written used prefixes like get_ or has_. The Ruby way is to not bother with get_something; instead it just becomes something, and has_something simply becomes something?.
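A tiny before/after sketch of those naming conventions (a hypothetical class, not code from the gem):

```ruby
class Profile
  def initialize(badges)
    @badges = badges
  end

  # Instead of a Java-style get_badges, drop the prefix entirely:
  def badges
    @badges
  end

  # Instead of has_badges, use a ? predicate method:
  def badges?
    !@badges.empty?
  end
end

profile = Profile.new(['Charity'])
profile.badges   # => ["Charity"]
profile.badges?  # => true
```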

One thing that bit me though was changing " to ' when no string interpolation was happening. Changing this:

spec.files = `git ls-files -z`.split("\x0") 

to

spec.files  = `git ls-files -z`.split('\x0')

The gem would no longer build or install, throwing a string contains null byte error (somewhat after the fact). The problem: the single quotes around split('\x0'). In a double-quoted string \x0 is an escape sequence for the null byte, but in a single-quoted string the backslash is literal, so the split no longer matched the null delimiter that git ls-files -z produces. I hope I am not being unfair to RuboCop, but its primary focus is to help your code follow the Ruby style guide, not to point out code hot spots.
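The difference between the two quoting styles is easy to see in isolation:

```ruby
# "\x0" is one character (the null byte); '\x0' is three literal
# characters: a backslash, an x and a zero.
"\x0".length   # => 1
'\x0'.length   # => 3

# `git ls-files -z` emits null-delimited paths, so only the
# double-quoted form actually splits the output:
"one\x00two".split("\x0")  # => ["one", "two"]
"one\x00two".split('\x0')  # => the whole string, unsplit
```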

Reek on the other hand was awesome for finding hotspots. It flagged a bunch of potential code smells (duplicate calls and nested iterators) and pointed out where methods on certain classes were exhibiting feature envy and utility function behaviour. I am linking to the docs as they are really insightful.

The outcome of refactoring the code was a few more classes. I extracted the exception handling into its own class and created an error code finder. Methods that were manipulating objects to get their work done were updated to have less knowledge and only work on what they needed. Take this method as an example:

# parse account information from data
def parse_accounts(response)
  Account.new(response['accounts']) if response['accounts']
end

This method actually has two problems: for one, it makes duplicate calls (response['accounts']). This could have been fixed by extracting the call into a variable; however, fixing the underlying problem (the utility function behaviour) would also fix that issue. The method knew too much about the response object and what it contains in order to get its work done. The change is quite simple: extract that knowledge to the calling method:

# parse account information from data
def parse_accounts(accounts)
  Account.new(accounts) if accounts
end

...

# build CoderWall object from API response
def build(response)
  badges = parse_badges(response['badges'])
  accounts = parse_accounts(response['accounts'])
  user = parse_user(response)

  CoderWall.new badges, user, accounts
end

Interestingly enough, after extracting my exception handler object I was warned about a Control Parameter smell. The solution there was to revert the extraction of the helper methods for request errors that raised an exception based on the status code, by inlining them again.

I found this iterative approach of running RuboCop and Reek really helpful, and it certainly led to cleaner looking code. At least I think so :). Again, do not blindly follow metrics, but use your judgement. In the absence of having someone to review your work, these tools certainly help. Overall an interesting and productive exercise.
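For illustration, here is a hypothetical reduction of what Reek means by a control parameter (the names are made up, not the gem's code): an argument whose only job is to pick a branch inside the method, which disappears once the checks are inlined.

```ruby
class UserNotFoundError < StandardError; end
class ServerError < StandardError; end

# Fake stand-in for the HTTP error object
FakeHttpError = Struct.new(:status)

# Smell: `kind` exists only to steer the if/else inside the method
def handle_error(error, kind)
  if kind == :not_found
    raise UserNotFoundError, 'User not found' if error.status == '404'
  else
    raise ServerError, 'Server error' if error.status == '500'
  end
end

# Inlined version: no control parameter, the status checks live together
def handle_error_inlined(error)
  raise UserNotFoundError, 'User not found' if error.status == '404'
  raise ServerError, 'Server error' if error.status == '500'
end
```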

Presence and a Future Worth Wanting


James Whittaker is pissed off, really pissed off. We are held hostage by our browser and we should not stand for it.

That’s how he opens his talk on his vision for the future. I stumbled across this again the other week, when I came across this post, where he also goes over some tips for stage presence.

Really quickly, let's go over these as they are super handy. There are five (well, really four and a half) bullet points about stage presence worth keeping in mind:

  1. Come out swinging
  2. Attention span interlude
  3. Know your shit
  4. Make it epic
  5. Be brief, be right, be gone

He gives regular sessions on campus about stage presence and I was able to watch one of the recordings; I thoroughly enjoyed it and recommend watching it if you get the chance.

This post is not about stage presence though, rather his view of the future. It does provide a nice lead in though :). Reading that post reminded me that I saw James Whittaker give a talk in person at our office on this topic, and he certainly came out swinging and kept on swinging for the whole duration. Below is a link to a similar talk he did:

A Future Worth Wanting - James Whittaker, Microsoft

For the most part I enjoyed his talk about his vision of the future. The gist was that we shouldn't have to go to the web to find (to hunt) the information we are after. We shouldn't need to context switch. We don't need apps to do that either (to gather). Our tools should be context aware and fetch the information for us (to farm). His example centered around going to a concert with his daughter after having received an email from her, asking him to go with her to see Of Monsters and Men (loved that album and thanks for putting me on to them ;)).

His tools, in this case Outlook, should be context aware and be smart enough to fetch maps, travel directions and restaurant suggestions, and book the tickets. He bemoans the context switch out of whatever tool you are in to open a browser or an app to complete those tasks. He firmly(?) believes that Microsoft is one of the few companies in a position to deliver on this proposition, based on the tools and services they offer. These tools are 'Super apps'! Things like Outlook + 'Bing knows'.

While I agree with the premise, at the time I came away from the talk feeling this was another walled garden in the making. Rather than building open APIs and services, it felt heavily slanted towards being embedded in the Microsoft ecosystem. Despite him mentioning Twitter and Facebook as well (and talking tongue in cheek about 'this is branding' when referring to the Xbox and Surface), I couldn't shake that feeling. For what it's worth I have similar sentiments towards Apple… but they do make lovely products…

I believe that in order for companies (particularly Microsoft) to remain relevant, they need to be more open and allow tools from all sources to build on these systems simply and efficiently. Innovation seems to happen more frequently and rapidly outside of large bureaucratic companies and while they have the resources to deliver, they are slow to do so. Just in case you didn’t realise, I work at Skype, at least for now.

The web was built on openness and I find it quite tragic that more and more we are seeing tiered service provisioning, vendor lock in and data lock in. Yes, yes, companies need to make money, I am not that naive, but there are surely better ways.

I don't think we should be locked into a world where the only way to achieve this vision is with my Windows Phone (or iPhone for that matter), i.e. one ecosystem. Maybe I am the odd one out: at work I have a Windows machine, my phone is an Android device, my home setup is a Mac+iPad. Building so-called 'Super apps' for all platforms is a big ask. And therein lies the crux of the matter… all of these devices already have a 'Super App' in common: the browser! We have had it for decades! Yes, it was a pain to build for all of the different makes and versions; and while there are still problems, the last three or so years have seen an incredible convergence in supported features and functionality.

We might have afforded the browser an 'incredible' amount of power; I use it for almost everything. I don't mind using apps; however, my context switch happens when I need to leave the browser to use Outlook, for example. I would argue that we need to invest more into making our web apps better (services, UIs, browsers) and turn them into 'Super apps' that leverage the Ueber App that is the venerable browser, so that I don't need to have Outlook or something else open to get what I need, rather than invest in a walled garden of comfort that is proprietary and closed. It should run on any device, anywhere I am connected, and that is the browser! All hail the Ueber app, home of the super apps.

Building My Coder_wally Gem Using Metrics


A few weeks back I decided to add CoderWall badges to the feed on my site. I could have just grabbed an existing gem but I decided to build my own. If you are truly keen you can also find it over at Rubygems.org and add to the other 1,245 downloads :).

To get the ball rolling I followed the steps described over at How I Start. The first stab ended up looking like this:

require "coder_wally/version"
require "open-uri"
require "json"

# All code in the gem is namespaced under this module.
module CoderWally
    # The URL of API we'll use.
    Url = "https://coderwall.com/%s.json"

    class Badge 
        attr_reader :name, :badge, :description, :created

        def initialize(hashed_badge)
            @name = hashed_badge.fetch("name")
            @badge = hashed_badge.fetch("badge")
            @description = hashed_badge.fetch("description") 
            @created = hashed_badge.fetch("created")
        end
    end

    def CoderWally.get_badges_for username
        raise(ArgumentError, "Please provide a username") if username.empty?
        url = URI.parse(Url % username)
        response = JSON.load(open(url))      

        response["badges"].map { |badge| Badge.new(badge) }
    end
end

It simply fetched JSON from the API for a given username and returned a collection of badges. Over the next couple of iterations I reworked a few things and added support for user details and accounts. For testing purposes (and to speed things up) I used Webmock to fake responses from the service. The most interesting thing to solve was how to dynamically assign attr_accessors to the account object. I eventually found that you could do so by using a combination of singleton_class.class_eval and instance_variable_set. With the features done, I looked around at other gems and what their READMEs and tool chains looked like.
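In isolation the technique looks something like this (a hypothetical Account class, not the gem's exact implementation):

```ruby
# Define a reader per hash key on the object's singleton class, so the
# accessors exist only on this instance, then set the backing ivar.
class Account
  def initialize(attributes)
    attributes.each do |key, value|
      singleton_class.class_eval { attr_reader key }
      instance_variable_set("@#{key}", value)
    end
  end
end

account = Account.new('github' => 'foo', 'twitter' => 'bar')
account.github   # => "foo"
account.twitter  # => "bar"
```

Because the readers are defined on the singleton class, an Account built from a hash without a given key simply doesn't respond to that accessor.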

The first tool I decided to investigate was Flog, which Sandi Metz used in a talk I saw recently. The initial run yielded the following output:

 find lib -name \*.rb | xargs flog                                                                                                                                    
 84.3: flog total
 6.0: flog/method average

19.1: CoderWally::Client#build_coder_wall lib/coder_wally/client.rb:47
16.2: CoderWally::Client#send_request  lib/coder_wally/client.rb:20
11.1: CoderWally::Account#initialize   lib/coder_wally/account.rb:6
 8.8: main#none

The Client object could use a little love. To start things off, I decided to pull all of the API-related calls into their own class, so anything relating to send_request:

 89.0: flog total
 5.2: flog/method average

16.4: CoderWally::Client#build_coder_wall lib/coder_wally/client.rb:27
16.2: CoderWally::API#send_request     lib/coder_wally/api.rb:13
11.1: CoderWally::Account#initialize   lib/coder_wally/account.rb:6
 9.9: main#none

That just moved the complexity around, but at least the Client object was now a little simpler. To fix the complexity I extracted methods from send_request, which started off as follows:

# Dispatch the request
def send_request(url)
  begin
    open(url)
  rescue OpenURI::HTTPError => error
    raise UserNotFoundError, 'User not found' if error.io.status[0] == '404'
    raise ServerError, 'Server error' if error.io.status[0] == '500'
  end
end

Not overly complicated, but let’s follow the advice and see if we can’t improve on this, by extracting methods. I ended up with this:

def send_request(url)
  begin
    open(url)
  rescue OpenURI::HTTPError => error
    handle_user_not_found_error(error)
    handle_server_error(error)
  end
end

# Parse status code from error
def get_status_code(error)
  error.io.status[0]
end

# Raise server error
def handle_server_error(error)
  raise ServerError, 'Server error' if get_status_code(error) == '500'
end

# Raise user not found error
def handle_user_not_found_error(error)
  raise UserNotFoundError, 'User not found' if get_status_code(error) == '404'
end

Running flog showed that the code in the send_request method was now no longer being flagged as complicated.

86.1: flog total
 4.3: flog/method average

15.1: CoderWally::Client#build_coder_wall lib/coder_wally/client.rb:27
11.1: CoderWally::Account#initialize   lib/coder_wally/account.rb:6
 9.9: main#none
 6.3: CoderWally::API#uri_for_user     lib/coder_wally/api.rb:36
 5.7: CoderWally::Badge#initialize     lib/coder_wally/badge.rb:9
 5.0: CoderWally::User#initialize      lib/coder_wally/user.rb:9

Next I tackled the CoderWally::Client#build_coder_wall method. This led to creating a CoderWall builder object with simpler, more single-purpose methods:

module CoderWally
  # Builds the CoderWall object from the response
  class Builder
    # Instantiate class
    def initialize
    end

    # parse badges from data
    def parse_badges(data)
      data['badges'].map { |badge| Badge.new(badge) } if data['badges']
    end

    # parse account information from data
    def parse_accounts(data)
      Account.new(data['accounts']) if data['accounts']
    end

    # parse user information from data
    def parse_user(data)
      User.new(data['name'], data['username'],
               data['location'], data['team'], data['endorsements'])
    end

    # build CoderWall object from API response
    def build(response)
      badges = parse_badges(response)
      accounts = parse_accounts(response)
      user = parse_user(response)

      CoderWall.new badges, user, accounts
    end
  end
end

Still not happy with all of the names, but it did feel and look better than this:

# Builds a CoderWall object
def build_coder_wall(username)
  json_response = JSON.load(send_request(uri_for_user(username)))
  badges = json_response['badges'].map { |badge| Badge.new(badge) }
  accounts = Account.new(json_response['accounts'])
  user = User.new(json_response['name'], json_response['username'],
                  json_response['location'], json_response['team'],
                  json_response['endorsements'])

  CoderWall.new badges, user, accounts
end

Flog agreed as well, and while the flog total was on the up, the method average kept going down (we started with a 6.0 average and ended up with 3.9):

93.4: flog total
 3.9: flog/method average

11.1: CoderWally::Account#initialize   lib/coder_wally/account.rb:6
11.0: main#none
 7.0: CoderWally::Builder#parse_user   lib/coder_wally/builder.rb:15
 6.3: CoderWally::API#uri_for_user     lib/coder_wally/api.rb:38
 5.7: CoderWally::Badge#initialize     lib/coder_wally/badge.rb:9
 5.0: CoderWally::Builder#build        lib/coder_wally/builder.rb:20
 5.0: CoderWally::User#initialize      lib/coder_wally/user.rb:9
 4.0: CoderWally::API#send_request     lib/coder_wally/api.rb:13
 3.9: CoderWally::API#get_status_code  lib/coder_wally/api.rb:23

I am going to stop here. In a follow-up post I will talk specifically about RuboCop and Metric_fu and how they further impacted the design and readability of the code. Before I go though, I wanted to finish up with some thoughts on using Flog and how it changed my code.

I started with one object that did everything and through a series of refactorings I ended up with several smaller more cohesive objects that also followed the Single Responsibility principle more closely (I wasn’t there yet and probably still am not).

I felt that my initial implementation was simple and readable enough. But that's just the thing, isn't it? We feel that our code is good enough, but statistics can back these 'feelings' up or indeed refute them. I am not saying that one should blindly follow these kinds of metrics and drive our code based off of them, but they are a good source of information and, as this little experiment has shown, they can help improve the code. In the absence of being able to pair with someone or have someone else review your code, Flog proved very useful. Overall I am happier having a class for API calls whose methods are more intention revealing. Likewise with my builder object and its methods; in the next post I will show how I continued on the improvement path for that particular class using Metric_fu (Reek in particular) and RuboCop.

Publishing Blog Posts the Git and Ci Way


I recently switched over the source control of my blog from Bitbucket to Github, because I wanted to try out a new workflow with regards to editing and publishing posts.

As I tend to create a new git branch for each post I am working on, I wanted to use the pull request approach to publishing posts. I first came across this idea thanks to those wonderful folks over at ThoughtBot. Now granted, I am not collaborating with others on posts; however, I still find this review process handy. Reading the post in a different context has been beneficial. Furthermore, I now tend to give myself a few days between writing and posting as a result of this process. Using Github and their editor I can review, re-read and edit posts at my convenience. So far it's worked well for me.

This got me thinking though: are there any other improvements I could make to my workflow? Well, yes there are. As I mentioned, I tend to create a branch for each post, followed by running the new post rake task. I started by modifying the tasks for posts and pages to create a new branch for me using the title. Then I realised I could take it even further: create an initial commit and a new tracked remote branch. Here's what the output looks like:

blog git:(change-workflow) rake 'new_post[test branch]'
mkdir -p source/_posts
Creating new post: source/_posts/2015-01-24-test-branch.markdown
Switched to a new branch 'test-branch'
[test-branch c5c786a] created new post entry: test-branch
2 files changed, 8 insertions(+), 4 deletions(-)
create mode 100644 source/_posts/2015-01-24-test-branch.markdown
Counting objects: 8, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (8/8), done.
Writing objects: 100% (8/8), 801 bytes | 0 bytes/s, done.
Total 8 (delta 6), reused 0 (delta 0)
To git@github.com:gregstewart/blog.git
* [new branch]      test-branch -> test-branch
Branch test-branch set up to track remote branch test-branch from origin.

The code for this is pretty straight forward and not foolproof, but a good starting point:

# Create a git branch for the post, commit the generated file and
# push a new tracked remote branch. Note that `exec` replaces the
# current process, so this has to be the last step in the task.
def create_branch(branch_name)
    exec "git checkout -b #{branch_name}; git add .; git commit -m 'created new post entry: #{branch_name}'; git push -u origin #{branch_name}"
end

The next thing I wanted to improve upon was the publishing step. Commit/Push/Generate and Deploy were the steps I used in the past: a bit long-winded and repetitive. Also, if I was not at home, I had to wait to publish an update. Given that I now use pull requests and Github to sign off on and merge them, why not use CI to build and publish the blog on merge to master? So I created a new project over at CodeShip, left the test settings empty, but under deployment added:

bundle exec rake generate
bundle exec rake deploy

Now whenever I merge a pull request, CI takes over and publishes my post to my server! Note that if, like me, you use rsync, you will need to add CodeShip’s public key to your authorized_keys file in order for Octopress’ rsync publishing to work. This post is the first to feature this new workflow!

Update: it turns out you can complete this workflow using Bitbucket as well!

Refactoring Your Grunt File

| Comments

Things left unchecked over time just grow to be unwieldy. Take this Gruntfile for example:

module.exports = function (grunt) {
    'use strict';
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        less: {
            production: {
                options: {
                    paths: ['app/css/'],
                    cleancss: true
                },
                files: {
                    'app/css/main.css': 'src/less/main.less'
                }
            }
        },
        copy: {
            fonts: {
                expand: true,
                src: ['bower_components/bootstrap/fonts/*'],
                dest: 'app/fonts/',
                filter: 'isFile',
                flatten: true
            }
        },
        bower: {
            install: {
                options: {
                    cleanTargetDir:false,
                    targetDir: './bower_components'
                }
            }
        },
        browserify: {
            code: {
                dest: 'app/js/main.min.js',
                src: 'node_modules/weatherly/js/**/*.js',
                options: {
                    transform: ['uglifyify']
                }
            },
            test: {
                dest: 'app/js/test.js',
                src: 'tests/unit/**/*.js'
            }
        },
        karma: {
            dev: {
                configFile: 'karma.conf.js'
            },
            ci: {
                configFile: 'karma.conf.js',
                singleRun: true,
                autoWatch: false,
                reporters: ['progress']
            }
        }
    });
    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-browserify');
    grunt.loadNpmTasks('grunt-bower-task');
    grunt.loadNpmTasks('grunt-karma');
    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify:code']);
    grunt.registerTask('build', ['bower:install', 'generate']);
    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);
    grunt.registerTask('test', ['build', 'karma:ci', 'e2e']);
    grunt.registerTask('heroku:production', 'build');
};

It’s the Gruntfile for the project in my book. I’ll be honest: the principal reason I want to refactor this file is that it makes editing the book quite painful, and it makes it difficult for the reader to follow along and make changes. The same can be said for anyone working with the file day to day: it’s getting hard to see what is happening in it, so let’s make this better for everyone.

Let’s start with the Karma tasks. These can be extracted into a file called test.js (I am keeping the name generic, just in case I decide to switch testing frameworks at a later stage), saved under a folder called build:

(function (module) {
    'use strict';
    var config = {
        dev: {
            configFile: 'karma.conf.js'
        },
        ci: {
            configFile: 'karma.conf.js',
            singleRun: true,
            autoWatch: false,
            reporters: ['progress']
        }
    };
    module.exports = function (grunt) {
        grunt.loadNpmTasks('grunt-karma');

        grunt.config('karma', config);
    };
})(module);

I have extracted the task configuration and the loading of the task from the Gruntfile.js, leaving us with:

module.exports = function (grunt) {
    'use strict';
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        less: {
            production: {
                options: {
                    paths: ['app/css/'],
                    cleancss: true
                },
                files: {
                    'app/css/main.css': 'src/less/main.less'
                }
            }
        },
        copy: {
            fonts: {
                expand: true,
                src: ['bower_components/bootstrap/fonts/*'],
                dest: 'app/fonts/',
                filter: 'isFile',
                flatten: true
            }
        },
        bower: {
            install: {
                options: {
                    cleanTargetDir:false,
                    targetDir: './bower_components'
                }
            }
        },
        browserify: {
            code: {
                dest: 'app/js/main.min.js',
                src: 'node_modules/weatherly/js/**/*.js',
                options: {
                    transform: ['uglifyify']
                }
            },
            test: {
                dest: 'app/js/test.js',
                src: 'tests/unit/**/*.js'
            }
        }
    });

    grunt.loadTasks('build');

    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-browserify');
    grunt.loadNpmTasks('grunt-bower-task');

    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify:code']);
    grunt.registerTask('build', ['bower:install', 'generate']);

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);

    grunt.registerTask('test', ['build', 'karma:ci', 'e2e']);

    grunt.registerTask('heroku:production', 'build');
};

Apart from removing the code for Karma, I also added the grunt.loadTasks directive, pointing it at our newly created build folder. To validate that everything is still OK, just run grunt karma:dev. Let’s do the same for our browserify task: once again create a new file (called browserify.js) and save it under our build folder:

(function(module) {
    'use strict';
    var config = {
        code: {
            dest: 'app/js/main.min.js',
            src: 'node_modules/weatherly/js/**/*.js',
            options: {
                transform: ['uglifyify']
            }
        },
        test: {
            dest: 'app/js/test.js',
            src: 'tests/unit/**/*.js'
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-browserify');
        grunt.config('browserify', config);
    };
})(module);

And remove the code from the Gruntfile.js:

module.exports = function (grunt) {
    'use strict';
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        less: {
            production: {
                options: {
                    paths: ['app/css/'],
                    cleancss: true
                },
                files: {
                    'app/css/main.css': 'src/less/main.less'
                }
            }
        },
        copy: {
            fonts: {
                expand: true,
                src: ['bower_components/bootstrap/fonts/*'],
                dest: 'app/fonts/',
                filter: 'isFile',
                flatten: true
            }
        },
        bower: {
            install: {
                options: {
                    cleanTargetDir:false,
                    targetDir: './bower_components'
                }
            }
        }
    });

    grunt.loadTasks('build');

    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-bower-task');


    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify:code']);
    grunt.registerTask('build', ['bower:install', 'generate']);

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);

    grunt.registerTask('test', ['build', 'karma:ci', 'e2e']);
    grunt.registerTask('heroku:production', 'build');
};

Let’s test our task by running grunt browserify:code or grunt browserify:test. To speed things up a little, in what follows I am just going to show the extracted code for each remaining task.

bower.js

(function(module) {
    'use strict';
    var config = {
        install: {
            options: {
                cleanTargetDir: false,
                targetDir: './bower_components'
            }
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-bower-task');
        grunt.config('bower', config);
    };
})(module);

copy.js

(function(module) {
    'use strict';
    var config = {
        fonts: {
            expand: true,
            src: ['bower_components/bootstrap/fonts/*'],
            dest: 'app/fonts/',
            filter: 'isFile',
            flatten: true
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-contrib-copy');

        grunt.config('copy', config);
    };
})(module);

less.js

(function(module) {
    'use strict';
    var config = {
        production: {
            options: {
                paths: ['app/css/'],
                cleancss: true
            },
            files: {
                'app/css/main.css': 'src/less/main.less'
            }
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-contrib-less');

        grunt.config('less', config);
    };
})(module);

cucumber.js

(function(module) {
    'use strict';
    var config = {
        src: 'tests/e2e/features/',
        options: {
            steps: 'tests/e2e/steps/'
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-selenium-webdriver');
        grunt.loadNpmTasks('grunt-cucumber');

        grunt.config('cucumberjs', config);
    };
})(module);

express.js

(function(module) {
    'use strict';
    var config = {
        test: {
            options: {
                script: './server.js'
            }
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-express-server');

        grunt.config('express', config);
    };
})(module);

We are now left with a Gruntfile that is much more lightweight and concerns itself only with loading and registering tasks:

module.exports = function (grunt) {
    'use strict';
    grunt.loadTasks('build');

    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify:code']);
    grunt.registerTask('build', ['bower:install', 'generate']);

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);

    grunt.registerTask('test', ['build', 'karma:ci', 'e2e']);
    grunt.registerTask('heroku:production', 'build');
};