Greg's Blog

helping me remember what I figure out

Why Use Node.js

| Comments

Recently we were asked why we recommend the use of Node.js on our project. While Stuart and I are putting together a presentation and a blog post on the topic, the question got me wondering what it is about Node.js that I like so much and why we should use it.

To be honest, I do have a love/intense-dislike relationship with JavaScript; however, having adopted a functional programming paradigm and bought into immutable data, my JavaScript development has been given a fresh impetus. Combine that with React and our Arch framework, and I am having fun building a front-end application. I even enjoy working with Hapi.js at our API layer, though I think maybe we should opt for another language there. Don’t get me wrong: this combination has allowed us to get out of the blocks quickly.

When looking for arguments for using Node.js in the enterprise, the following benefits get attributed to it (in brackets are the companies that have attested to these benefits):

  • Massive performance gains (LinkedIn, Groupon, PayPal, Walmart and Ebay)
  • Great for Mobile development (Walmart and Yahoo)
  • Vibrant community
  • Built from day one around an asynchronous, event-driven model (see the sketch after this list)
  • Easier to find people who can work on Node than, say, Erlang
  • Contributors are maturing
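
To make the async, event-driven point concrete, here is a minimal sketch – the file name and port are illustrative – of the non-blocking style Node.js is built around: the process keeps accepting connections while I/O is in flight, instead of parking a thread per request.

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  // readFile is non-blocking: Node carries on accepting connections
  // while the file system call completes, then runs this callback.
  fs.readFile('./data.json', function (err, data) {
    if (err) {
      res.writeHead(500);
      return res.end('something went wrong');
    }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(data);
  });
}).listen(3000);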

When it comes to the performance claims, we need to put together a pretty consistent story that backs up the statements that hold and disavows the ones that don’t. When looking into this for our presentation, the information is spread across tweets and blog posts. To convince enterprise decision makers, I think we would need something more cohesive.

While the following quote relates to Go, it is still relevant here because it concerns async programming:

An asynchronous model had many other benefits. We were also able to instrument everything the API was doing with counters and metrics, because these were no longer blocking operations that interfered with communicating to other services. We could downsize our provisioned API server pool by about 90%. And we were also able to remove silos of isolated Rails API servers from our stack, drastically simplifying our architecture.

This is one of those quotes that back the productivity increase and total-cost-of-ownership reduction you get by choosing the right tool for the job. As said, this one is about Go; however, there are many quotes to be found that back these claims in the JavaScript and Node.js space, given its event-based, non-blocking architecture. Of course, performance gains from this architecture are not a guaranteed outcome: bad coding practice can easily undo these advantages.

The vibrant community claim is both a benefit and a detriment. I find the rate of change and churn dizzying at times. I think Neal Ford put it well when discussing the ThoughtWorks Technology Radar:

Well, we find that places, technology locations that have a lot of churn end up getting a lot of Assesses that never make it into Trial. So, we went through this recently with JavaScript frameworks because they’re like mushrooms after a rainstorm. They keep popping up and going away. So, one of the things we do for every Radar is call out themes that we’ve seen across all the blips. And one of the themes that we called out then was churn in the JavaScript space. Because at the time I think there were two common build and dependency management tools. And one was in the process of replacing the other one and you needed all three of them to get something to work. And so, there was just a lot of craziness in that space.

It is proving difficult to ignore the new shiny, and this is compounded by other people’s enthusiasm for experimenting with new tools and frameworks. This can have an impact on productivity, as you can be forever adopting and re-writing things; it requires discipline to evaluate the tools and decide when to apply them to a project. On the plus side, it shows the community is driving change and improvements.

On the flip side, there are still a lot of common misconceptions:

  • “Just a JS dev” – clearly not true. JS devs are just as software-engineering focused as Java developers. This is evident in the maturing of contributors to open source projects
  • It’s a server – again, not true; it’s more akin to a JVM or runtime
  • It’s just JavaScript – look to the advances of ES6 and the future of the language (see the sketch after this list). It supports TDD and DI, static code analysis, error handling/logging – all the stuff the enterprise loves
  • It is slow – I think those days are behind us; measure it. V8 is fast, as are many other engines (Chakra, SpiderMonkey, etc.), and Nashorn, a JVM-based JS engine, is also available
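
As a small illustration of those ES6 advances (in 2015 typically run through a transpiler such as Babel, with native engine support growing; the class and names are made up), a few of the headline features look like this:

const square = x => x * x;              // const and arrow functions

class FeedWrapper {                     // classes
  constructor(name) {
    this.name = name;
  }
  describe() {
    return `wrapper for ${this.name}`;  // template strings
  }
}

console.log(square(4));                              // 16
console.log(new FeedWrapper('twitter').describe());  // wrapper for twitter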

Let’s consider some other advantages:

  • Cross-skilling across the whole team: we blur the boundaries between front-end and back-end specialists, and this to me is a good thing. It also helps with plugging knowledge gaps and stops knowledge being concentrated in one team member or one area of the team
  • It has a pretty decent package management system with NPM
  • It’s governed by a foundation (backed by Joyent, IBM, PayPal, Microsoft, Fidelity and the Linux Foundation)

To expand a little on the NPM point: if you consider modularisation and NPM together, you find yourself in a win-win situation.

Modularization via Node Modules was a big win as well, as we were able to share components across teams and easily manage them through NPM

Smaller, modular code is easier to maintain and debug. More modular code is more composable and indeed more reusable.
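
To make that modularity point concrete, here is a minimal sketch – the module and function names are hypothetical – of the kind of tiny, single-responsibility module that NPM makes easy to share and compose:

// date-format.js – does one thing only, so it is easy to test, debug and reuse
module.exports = function formatDate(date) {
  return date.toISOString().slice(0, 10); // e.g. "2015-05-04"
};

// elsewhere, after an npm install:
// var formatDate = require('date-format');
// console.log(formatDate(new Date()));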

I touched on this briefly at the top, but when you consider the ability to write code that runs on both the server and the client (as you do with isomorphic apps), you add great value for your clients. Rendering the initial JavaScript on the server improves time to first render, which is good for the user experience. People all too often focus on the value this approach offers to SEO (it does add value, by the way); however, I think if you consider Single Page Applications that can seamlessly fall back to a request/response model, you have a real winner on your hands. While turning off JavaScript on the client is an argument as well, the reality is very few people do this. BUT a lot of devices have poor JavaScript support, to the point where they might as well be categorised as having JavaScript turned off (I am looking at you, BlackBerry in the enterprise). Having an isomorphic solution up your sleeve in these situations is worth its weight in gold; a sketch of the idea follows.
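
A hedged sketch of that idea, assuming a React 0.13-style React.renderToString and a hapi 8-era API (the App component and bundle path are hypothetical): the server renders the same component tree the client uses, so the first response is full HTML even before – or without – client-side JavaScript.

var Hapi = require('hapi');
var React = require('react');
var App = require('./app'); // hypothetical shared component, also bundled for the client

var server = new Hapi.Server();
server.connection({ port: 3000 });

server.route({
  method: 'GET',
  path: '/{path*}',
  handler: function (request, reply) {
    // Same component on server and client: fast first render,
    // and a working page if client JS never runs.
    var markup = React.renderToString(React.createElement(App));
    reply('<!DOCTYPE html><html><body>' +
          '<div id="app">' + markup + '</div>' +
          '<script src="/bundle.js"></script>' +
          '</body></html>');
  }
});

server.start();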

There are many things that speak to Node.js being a great choice for developing and delivering applications across the spectrum of businesses. I hope I have also made a few points that back up why this is a great platform to work with and have fun delivering solutions on.

Are You Using Docker?

| Comments

Are you using Docker for development? For continuous integration? For deployment? No? Why not? This is not an inflammatory question; I am genuinely interested in hearing why you would not embrace Docker or, more broadly speaking, containers.

Development

We have been using Docker on our most recent project and it’s been an awesome experience. I am sure you have experienced this at least once on every project:

Works on my machine

Using containers has almost completely eliminated the old adage of “but it works on my machine”. Eliminating variance of any kind in developer machine setup is very important, and by adopting containers we are very close to zero variance. I say close to zero because the hardware is still likely to be different; however, in terms of dependencies and system configuration, thanks to Docker we can eliminate the variance. All configuration for the container resides in our repository, and any issues encountered so far have usually been resolved by installing dependencies after a pull or updating the container by executing the build command.

Clueless

Getting new team members onboarded is incredibly efficient as well: check out the repo, run npm i (install dependencies) and our docker run script (build the container) and you are up and running. I think this alone should convince you to use Docker now. The days of spending hours tinkering with the setup, debugging and poring over outdated documentation are numbered!
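
For illustration, a hypothetical sketch of what such a docker run script might look like (the image name and ports are made up, and this assumes a Node version with child_process.execSync): one command rebuilds the image from the Dockerfile in the repo and starts the container, so every machine gets the identical environment.

// run.js – rebuild the image from the repo's Dockerfile, then start
// the container, so every developer machine runs the same environment.
var execSync = require('child_process').execSync;

var image = 'our-app'; // illustrative image name

execSync('docker build -t ' + image + ' .', { stdio: 'inherit' });
execSync('docker run --rm -p 3000:3000 ' + image, { stdio: 'inherit' });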

CI

So your development machine setup and environment differences are basically eliminated. What about getting ready for deployment? Using this configuration, you can now confidently and easily build your code/app on your CI server as well. No need for extra configuration between CI and dev environments; it’s the same container. Always want a clean baseline for each build? You got it: your whole stack is rebuilt cleanly every time. Sure, it adds a little time to your build; however, I think the extra couple of minutes it takes to re-build the container and push it to a registry after a successful build are definitely worth it.

At the time of writing, it takes us on average 8 minutes to build, test and deploy to AWS after merging a Pull Request. Granted, your mileage may vary, but to give you some idea: we run some 200 unit tests, 30 integration tests and 10 feature tests (and yes, we need to improve our coverage…) and it’s all written in Node.js.

Continuous delivery

Another thing to consider is: does your CI environment not support your language of choice to build your product(s)? Containers can help here as well.

Application deployment

Your dev environment is consistent, your build is consistent, and now we come to the top of the chain: deploying your application.

Dev ops problem now

So far I have avoided using the term Immutable Infrastructure to describe containerisation, but it is another key aspect here. A quick search for Immutable Infrastructure throws up tons of results – maybe just the sign of a fad, but I believe there is much more to it. The focus is on dev ops in a lot of these posts, and rightfully so; however, I think the chaps over at Codeship sum up the points best.

So being able to develop against what will be in production, then confidently, reliably and repeatedly build and deploy your application and environment is no longer a pipe dream.

All configuration is held in one place, so spinning up new instances to support increased demand is now so much easier than with any other provisioning mechanism I have seen. Just check out this video:

Docker On Diego

In a similar vein, managing and moving toward blue/green deployments has never been easier either, especially when you add the tooling behind AWS.

Is it perfect?

Well, truth be told: no. Not yet, at least… As I mentioned, variance still exists; after all, the container has to run on some machine in some data center. Docker on Windows and OS X has a few kinks and runs inside a VM. We have come across a few issues dealing with the file system (watching for file changes in development, and read/writes across volumes shared between the host and containers). DNS going walkabout on the host VM has also plagued each of us at least once on this project.

Given that the code runs inside a container, debugging has had a few challenges. Having said that, it pushed a focus on logging to the start of the cycle rather than leaving it to later stages.

So there are a few issues, but this should not deter you from seriously looking into using Docker.

Two Cool Use Cases for Vagrant

| Comments

I have been using Vagrant on and off for a couple of years now to set up dev environments. Admittedly, Docker has recently been my preferred way of setting up such environments. Last week I came across two other use cases for Vagrant that I wanted to share.

We were tasked with setting up Jenkins on a server, and while we were waiting for the environment to be made available, Stuart went ahead and built a box using the same target OS to work through and document the steps needed to install Jenkins. Once done, we just ran vagrant destroy and vagrant up to quickly repeat and validate that the steps we had jotted down were correct and that we had everything we needed. Such a quick and easy way to prepare and validate an install. As a result, installing Jenkins on the target environment only took me about 20 minutes.

The other use case I came across was when working with a Bluemix buildpack. I was setting up an Nginx-based reverse proxy for our app, but I wanted to upgrade the Nginx version. Reading through the documentation for the buildpack, I saw probably the coolest use yet for Vagrant. Simply run vagrant up and it spins up two instances of Ubuntu (Lucid and Trusty), patches itself, builds the Nginx binaries and moves them to a distribution folder once done. Upgrading Nginx was a doddle as a result: simply update the target version (and the PCRE version), run vagrant up, and a few minutes later you have two new sets of binaries that can be pushed to Bluemix with the community buildpack. Be sure to also check out the tests!

So there you have it: Vagrant is not only great for solo devs and dev teams as a sandboxed dev environment, but you can also try out installations and build binaries with a few simple commands.

Clojure Data Structures

| Comments

Always easier to remember things when you write them down :).

Syntax

Operations follow this pattern:

(operator operand1 operand2 operandn)

No commas between operands, just whitespace.

Clojure uses prefix notation, as opposed to the infix notation that is more familiar from other languages:

(+ 1 1)
=> 2

Equality

(= 1 1)
=> true
(= "test" "test")
=> true
(= [1 2 3] [1 2 3])
=> true

Strings

Use double quotes to delineate strings, e.g. "This is a string"

For concatenation use the str function:

(def name "string")
(str "This is a " name)
=> "This is a string"

Maps

Map values can be of any type and can be nested.

{:first 1
 :second {:name "Greg" :surname "Stewart"}
 :third "My name"}

Use get to look up values and get-in to look up values in nested maps. Instead of get, you can also treat the map as a function with the key as a parameter.

(def my_map {:first 1
             :second {:name "Greg" :surname "Stewart"}
             :third "My name"})

(get my_map :first)
=> 1
(get-in my_map [:second :name])
=> "Greg"
(my_map :first)
=> 1

Keywords

In these examples :first is a keyword. Keywords can be used as functions:

(:first my_map)
=> 1

Vectors

Think array in other languages. Elements of a vector can be of any type, and you can retrieve values using get as well.

(def my_vector [1 "a" {:name "Greg"}])
(get my_vector 0)
=> 1

Vectors can also be created using the vector function:

(vector "hello" "world" "!")
=> ["hello" "world" "!"]

Using conj you can add elements to a vector; they get added to the end.

Lists

Like vectors, however you can’t use get to retrieve values; use nth instead:

(def my_list '("foo" "bar" "baz"))
(nth my_list 1)
=> "bar"

Lists can be created using the list function. Use conj to add items to a list; unlike with vectors, they get added to the beginning of the list.

Sets

A collection of unique values, created using either #{} or the set function.

(set [3 3 3 4 4])
=> #{4 3}

Use get to retrieve values. You can create sets using hash-set or sorted-set as well:

(hash-set 3 1 3 3 2 4 4)
=> #{1 4 3 2}
(sorted-set 3 1 3 3 2 4 4)
=> #{1 2 3 4}

Symbols

Another assignment method; however, apparently we can manipulate them as if they were data. Not sure what that means yet.

Quoting

' is referred to as quoting. I used it above to define a list; it is also used in macros.

Exploring the Open Closed Principle

| Comments

At the start of the year I watched Sandi Metz’s talk All the Little Things. It’s an absolutely brilliant and, once again, inspiring talk.

RailsConf 2014 - All the Little Things by Sandi Metz

She touches on many interesting and thought-provoking topics. The one I would like to focus on in this post is the open/closed principle:

In object-oriented programming, the open/closed principle states “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”; that is, such an entity can allow its behaviour to be extended without modifying its source code.

In essence, you should be able to add a feature to a certain part of your application without having to modify the existing code. When I first came across this idea, it seemed unachievable. How can you add a feature without touching existing code? The talk got me thinking about some of my code, and I was keen to explore applying the principle to it.

So toward the back end of February I embarked on a refactoring exercise of the core feature of my site, Teacupinastorm. For some time I had been meaning to add a few new feeds to the page, but adding them was a bit of a slog, as I needed to touch way too many files in order to add one feed. It sounded like a prime candidate for exploring the open/closed principle in a practical manner.

As I mentioned, in order to add a feed I needed to edit at least two files and then create a new object to fetch the feed data and format it into a standard structure that my view could use. What really helped me with this exercise was that the functionality had decent test coverage.

At the heart we have the Page object, which basically co-ordinates the calls to the various APIs, and quite a bit more. That is another smell: it goes against the single responsibility principle. This is what it used to look like:

class Page
  attr_reader :items

  def initialize
    @items = []
    @parser_factory = ParserFactory.new
  end

  def fetch
    parser_configurations = {wordpress: {count: 10}, delicious: {count: 5}, instagram: {count: 6}, github: {count: 5},
                  twitter: {count: 4}, vimeo: {count: 1}, foursquare: {count: 10}}

    parser_configurations.each do |parser_configuration|
      parser_type = parser_configuration[0]
      feed_item_count = parser_configuration[1][:count]

      parser = @parser_factory.build parser_type
      feed_items = parser.get_last_user_events feed_item_count

      feed_items.each do |item|
        parser_configuration = set_page_item(parser_type, item[:date], item[:content], item[:url], item[:thumbnail], item[:location])
        @items.push(parser_configuration)
      end

    end

  end

  def sort_by_date
    @items.sort! { |x, y| y[:date] <=> x[:date] }
  end

  def set_page_item(type, date, content, url, thumbnail, location)
    page_item = {}
    page_item[:type] = type
    page_item[:date] = fix_date(date, type)
    page_item[:content] = content
    page_item[:url] = url
    page_item[:thumbnail] = thumbnail
    page_item[:location] = location
    page_item
  end

  def fix_date(date, type)
    return DateTime.new if date.nil?

    (type == :instagram || type == :foursquare) ? DateTime.parse(Time.at(date.to_i).to_s) : DateTime.parse(date.to_s)
  end

  def get_by_type(type)
    @items.select { |v| v[:type] =~ Regexp.new(type) }
  end

end

It did a lot, it had some inefficiencies, and it had a high churn rate – all smells asking to be improved upon.

One of the first things I did was move the parser_configuration out of this object. It’s a perfect candidate for a configuration object, so I moved it into its own YAML file and let Rails load that into application scope. Now when I add a new feed, I no longer need to touch this file; I just add it to the YAML file.

Next I looked at the ParserFactory. Basically, it took a type and returned an object that would fetch the data. Another candidate to refactor so that I would not need to edit this file when I added a new feed.

class ParserFactory

  def build (type)

    case type
      when :foursquare
        parser = FoursquareParser.new
      when :instagram
        parser = InstagramParser.new
      when :delicious
        parser = DeliciousParser.new
      when :github
        parser = GithubParser.new
      when :twitter
        parser = TwitterParser.new
      when :vimeo
        parser = VimeoParser.new
      when :wordpress
        parser = WordpressParser.new
      else
        raise 'Unknown parser requested'
    end

    parser
  end

end

The individual parsers were actually fetching the data and formatting the response into a standard format for the view. If you watched Sandi’s video you will recognise the pattern here. Every time a new feed was added I had to add a new case. I re-worked the code to this:

class WrapperFactory

  def build (type)
    begin
      Object::const_get(type + "Wrapper").new
    rescue
      raise 'Unknown parser requested: ' + type
    end
  end

end

The objects themselves were more like wrappers, so I re-named the factory object and the individual objects. I can’t quite get rid of the “Wrapper” part, as some of the gem names would clash with the class names. I need to work on that some more.

So the wrappers massaged the content of the response into the right format by looping over the result set and returned the collection to the Page object. Then I would loop again in the Page object to set the page item. Redundant looping – let’s address this.

I looked at the set_page_item and fix_date methods. For starters, they seemed related and did not belong in this object, so I extracted them into a PageItem object. Furthermore, fix_date checked the feed type to format the date. I decided to move the responsibility for creating this object into the individual wrappers, which then just append the result to the items collection.

Now the Page object looks like this:

class Page
  attr_reader :items

  def initialize
    @items = []
    @wrapper_factory = WrapperFactory.new
  end

  def fetch_page_items

    FEED_CONFIGS.each do |feed_configuration|
      parser_type = feed_configuration[0]
      feed_item_count = feed_configuration[1]['count']

      wrapper = @wrapper_factory.build parser_type.to_s.capitalize

      @items.concat(wrapper.get_last_user_events(feed_item_count))
    end
  end

  def fetch_sorted_page_items
    fetch_page_items
    sort_by_date
  end

  def sort_by_date
    @items.sort! { |x, y| y.date <=> x.date }
  end

  def get_by_type(type)
    @items.select { |v| v.type == type }
  end

end

A little simpler, but more importantly, when it comes to adding a new feed I no longer need to edit this file, or indeed the factory object. It’s safe to say that both WrapperFactory and Page are now open for extension and closed for modification. The next time I add a feed, I do not need to touch these two objects; I simply update my configuration file and create a feed-type wrapper.

However, now PageItem is not open/closed. What if I add a new feed and need to fix the date? I would need to adjust the fix_date method in that object. So I decided to extract that method from PageItem into its own module. I adjusted the code to be more generic and put the responsibility for parsing the date back on the individual feed wrappers. Ultimately they have more knowledge about the data they are handling, and it’s certainly not the PageItem’s responsibility to do that job.

The code overall is easier to reason about, and each object has a more concrete responsibility now. More importantly, when I add a new feed I no longer have to touch Page, PageItem or WrapperFactory.

A Quarter of the Way In

| Comments

So here we are a quarter of the way into 2015 already, the sun is out and it is getting warmer in London.

In January, I spelt out some goals I wanted to achieve for 2015, and four months into the year seems like a good moment to take stock of things.

The first thing I wanted to achieve was to write one blog post a week. So far I have posted 15 times (including this one) and we are 16 weeks into the year – only 1 post behind! That’s actually not too bad. I haven’t chalked up as many analytical posts as I wanted to, but I am pleased to have gotten into the habit of writing one post a week.

On the book front – I had set myself a goal of reading one book a month. To date I have finished three, so once again a little behind the set expectation:

I am in the process of reading:

My own book writing, though, has languished… I am not even sure I want to go into the whys and whats, but I probably should, to get the ball rolling again. Let’s add that one to the TODO list.

On the side project front, I surprised myself a little and released a gem – which has been downloaded 2557 times to date. I suspect the majority are mine ;) That was an interesting experiment and I blogged about it quite a bit.

I also left Skype after 2 years and after releasing the Skype for Web experience (which actually had its first, and to that point only, release back in November). I now work for Red Badger as a Technical Architect and I am having a lot of fun again. An upcoming blog post will describe how we are building and continuously shipping an isomorphic app, using React, Arch and Docker, to AWS using Circle CI. I cannot emphasise enough how much changing your tooling and being able to use best-of-breed tools can mean for your personal (/developer) happiness, productivity and enthusiasm.

So in summary: a little behind on the content output and intake I wanted to achieve. I did release one side project. I have migrated things around on my site a little to make the refresh easier. Changed jobs. On a personal note, my son James got his British citizenship and Jodie got her indefinite leave to remain. So far 2015 has been kind to us; let’s hope it continues – knock on wood!

Picking a New Language to Learn

| Comments

I started writing this post with the idea of just laying out what was important to me in choosing which language to pick up this year, and then going through the options. I didn’t really expect to make a choice by the end of it.

In writing this post – thinking about what I wanted out of a language and the community around it, and doing the research – there was only one real winner in the end. It does help to put things into writing… The TL;DR: this year I will be looking at Clojure. Why? It ticks all of the boxes I set out in this post.


Most years I try to learn a new language, and typically the choice has been straightforward. For some reason this year I have struggled with it. Maybe it’s because there’s such a proliferation of interesting languages out there. Maybe it’s because I am torn between an OO and a functional paradigm.

At the top of the list are Scala, Go, Clojure and Elixir. All but one are in the functional realm of programming languages, Go being the odd one out. However, it does seem to have huge traction right now. On the other hand, there is something about Elixir that really appeals to me; maybe it’s because it’s described as being close to Ruby and focused on developer happiness… and it’s the shiny new hotness.

Oddly enough, only Scala featured in my Technology Radar. Swift was one that I listed, but it does not figure at all in my shortlist. This tells me I need to leverage my radar a bit more and also think more deeply about what goes into it.

What matters to me

Things that are important to me when making the choice are: the testing story, build tools, CI support, dependency management and the backing of a web framework.

Scala

Scala ships with SBT as the build tool. Circle CI, Codeship and Snap CI all support Scala.

You have a few choices on the web framework side of things, with the Play framework, Scalatra and Lift.

What about testing? The first two things I came across were ScalaTest and Specs2. Being built on the JVM, Scala can also leverage Maven/Gradle for build automation and dependency management.

Elixir

The CI story for Elixir is a little murky; there are custom scripts out there to run builds on Circle CI. On the web framework side there is the Phoenix Framework. The testing story doesn’t look fabulous yet, but it’s good enough. Elixir comes with Mix for dealing with dependencies. It’s still early days, but being on the front line could be a good thing, and, well, there’s the whole developer happiness thing that just can’t be discounted.

Clojure

As for Clojure, well there are quite a few options for the web framework side of things with Caribou, Luminusweb and Pedestal.

ThoughtWorks’ CI service, Snap CI, has Clojure covered. Codeship also provide support.

In terms of build automation tools you have Leiningen, and Clojars looks like a good source of libraries.

The testing story is also a good one: it comes with its own test framework, but also has many other options, such as speclj and Midje. All in all, it looks like Clojure ticks all of the boxes, thanks to its wide adoption and maturity. The only downside, which is also one of its advantages, is that it runs on the JVM and hence allows you to leverage the rich Java ecosystem. Oh my, there are a lot of braces to digest as well :).

Go

Codeship provides automated builds. Go ships with a test framework as well as benchmarking tools, so that covers the automated testing angle. There are other solutions as well, such as GoConvey or Ginkgo.

For web frameworks both Revel and Martini look good. With regards to build tools and dependency management, these are also built into the language with go build and go get respectively.

Final thoughts

All of the languages address the things that are important to me, with varying degrees of maturity. However, there’s the question: does the language gel with me? To help with that I have found an awesome resource that allows me to explore the languages: Exercism, the brainchild of Katrina Owen. She refers to the exercises as a set of toy problems to solve; you can go very deep into the solutions, but it also provides you with a good experimentation platform.

The other thing I remembered was this book: Seven Languages in Seven Weeks. I have recently been thumbing through it again, and it provides a great introduction to some of the languages I am considering, as well as suggesting a few exercises for further exploration.

Writing all this down seems like a lot of consideration for something that I used to do on a whim. However, now that I have been through this exercise, I know which language I would like to get to know this year: Clojure.

How to Test Your Gem Against Multiple Ruby Versions Using Circle Ci

| Comments

My work on my little gem continues to make steady progress. This week I carried out some major re-working of the API. I wanted to follow the Parallel Change pattern for these changes, as I didn’t want to completely break the API. However, there was at least one breaking change, given that I moved from:

client = CoderWally::Client.new
coder_wally = client.get_everything_for ARGV[0]

To:

client = CoderWally::Client.new ARGV[0]

For the record, you can still call client.get_everything_for ARGV[0], but you will see a deprecation warning. The preferred approach now is to call client.user.everything.

The other thing I wanted to experiment with was running a build against multiple versions of Ruby. In Circle CI this is actually really straightforward: all you need to do is override the dependency and test steps in your circle.yml file. I wanted to run a build against Ruby 2.0.0-p598, 2.1.5 and 2.2.0, so here’s what my config file now looks like:

dependencies:
  override:
    - 'rvm-exec 2.0.0-p598 bundle install'
    - 'rvm-exec 2.1.5 bundle install'
    - 'rvm-exec 2.2.0 bundle install'

test:
  override:
    - 'rvm-exec 2.0.0-p598 bundle exec rake'
    - 'rvm-exec 2.1.5 bundle exec rake'
    - 'rvm-exec 2.2.0 bundle exec rake'

While this was easy to set up, there were a couple of learnings:

  • Do not specify a bundler version in your gem’s dev dependencies. It’s just more flexible to trust the system and the Ruby version that is running the bundle install command. If you do, then you need to install the corresponding version on the build server. Also, if you want to go back to older versions of Ruby that aren’t supported by the bundler version you have specified, then there’s more faffing about.
  • The other thing I learned had to do with Minitest and Ruby 2.2.0. The call to require it failed. To get the build to pass on Circle CI, I had to add a dev dependency to my gemspec.

I wanted to test running against older versions of Ruby and the latest JRuby, but when I had a quick go, Webmock was telling me that I should stub my requests – which I am doing – but for some reason they aren’t being recognised in this configuration.

A Couple of Bundler Tricks

| Comments

Quite literally two things, no more no less.

To install a specific version of bundler do:

gem install bundler -v x.x.x

Where x.x.x is the version to install. Probably well known, but I had to look it up. Then, to run that version instead of the latest one you have installed:

bundle _x.x.x_ install

Those _ surrounding the version number are not a typo; it does look odd, but it works…

Three Things I Learned About PhantomJs This Week

| Comments

This week we had a bit of a fun moment with one of our feature tests. As a little background, we are using a combination of PhantomJS, Cucumber.js and WebDriver.io for our end-to-end/user journey tests.

One test failed repeatedly when we used PhantomJS, but if we switched to Chrome it would pass. The test itself was sending an async POST request to an API, and then, based on the response, the browser would redirect to the created resource (well, it did more, but that was the basic premise).
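
To make the scenario concrete, here is a hypothetical sketch of the kind of client-side code the test exercised (the URL, status code and payload are illustrative):

var xhr = new XMLHttpRequest();
xhr.open('POST', '/api/things', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onload = function () {
  if (xhr.status === 201) {
    // Follow the API's Location header to the created resource –
    // this is the header that went missing for us under PhantomJS.
    window.location.href = xhr.getResponseHeader('Location');
  }
};
xhr.send(JSON.stringify({ name: 'example' }));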

After some head scratching and painful debugging, we eventually narrowed it down to our Location header disappearing from the API response. We initially thought this might be a bug in PhantomJS, because we could clearly see the server sending the header, and it would show up in the debug tools as well; however, it turns out that this is PhantomJS’s security model. By default PhantomJS only handles ‘standard’ headers, and apparently Location isn’t one of them. The solution in the end is quite simple: start PhantomJS with the --web-security flag set to false: ./node_modules/.bin/phantomjs --web-security=false

Note that this also turns off SSL certificate checking, but since we are using this for test purposes we are fine. Your mileage may vary, though, depending on what you are using PhantomJS for.

The other thing I learned during this episode was that there is no need to use or install the Selenium standalone server, as you can launch PhantomJS with WebDriver support (in the shape of GhostDriver, I believe): ./node_modules/.bin/phantomjs --webdriver=4444
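
With PhantomJS listening on that port, a test runner can talk to it directly. A minimal sketch, assuming a promise-based WebDriver.io client (the app URL is illustrative):

var webdriverio = require('webdriverio');

var client = webdriverio.remote({
  host: 'localhost',
  port: 4444, // the --webdriver port PhantomJS was started with
  desiredCapabilities: { browserName: 'phantomjs' }
});

client
  .init()                              // open a session against GhostDriver
  .url('http://localhost:3000/')       // drive the app under test
  .getTitle().then(function (title) {
    console.log('Title:', title);
  })
  .end();                              // close the session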

To debug a test, we used the following flag, which allows you to remote-debug using a browser and the web inspector interface: ./node_modules/.bin/phantomjs --remote-debugger-port=9000

Open a browser and go to http://localhost:9000 and you will be presented with a list of PhantomJS sessions. By selecting one you can start your debugging.

So there you are: three things I learned about PhantomJS.