Greg's Blog

helping me remember what I figure out

Clojure Data Structures


Always easier to remember things when you write them down :).

Syntax

Operations follow this pattern:

(operator operand1 operand2 ... operandN)

No commas between operands, just whitespace.

Clojure uses prefix notation, as opposed to the infix notation that is more familiar from other languages:

(+ 1 1)
=> 2

Equality

(= 1 1)
=> true
(= "test" "test")
=> true
(= [1 2 3] [1 2 3])
=> true

Strings

Use double quotes to delimit strings, e.g. "This is a string".

For concatenation, use the str function:

(def name "string")
(str "This is a " name)
=> "This is a string"

Maps

Map values can be of any type and can be nested.

{:first 1
 :second {:name "Greg" :surname "Stewart"}
 :third "My name"}

Use get to look up values and get-in to look up values in nested maps. Instead of using get, you can also treat the map itself as a function and pass the key as an argument.

(def my_map {:first 1
             :second {:name "Greg" :surname "Stewart"}
             :third "My name"})

(get my_map :first)
=> 1
(get-in my_map [:second :name])
=> "Greg"
(my_map :first)
=> 1

Keywords

In these examples :first is a keyword. Keywords can be used as functions:

(:first my_map)
=> 1

Vectors

Think of arrays in other languages. Elements of a vector can be of any type, and you can retrieve values using get as well.

(def my_vector [1 "a" {:name "Greg"}])
(get my_vector 0)
=> 1

Vectors can also be created using the vector function:

(vector "hello" "world" "!")
=> ["hello" "world" "!"]

Use conj to add elements to a vector; elements get added to the end of the vector.
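
For example, adding 4 to a three-element vector:

(conj [1 2 3] 4)
=> [1 2 3 4]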

Lists

Like vectors, except you can’t use get to retrieve values; use nth instead:

(def my_list '("foo" "bar" "baz"))
(nth my_list 1)
=> "bar"

Lists can be created using the list function. Use conj to add items to a list; unlike with vectors, they get added to the beginning of the list.
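
For example, the same conj call on a list prepends instead:

(conj '(1 2 3) 4)
=> (4 1 2 3)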

Sets

A collection of unique values, created either with the #{} literal or the set function.

(set [3 3 3 4 4])
=> #{4 3}

Use get to retrieve values. You can create sets using hash-set or sorted-set as well:

(hash-set 3 1 3 3 2 4 4)
=> #{1 4 3 2}
(sorted-set 3 1 3 3 2 4 4)
=> #{1 2 3 4}
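
get on a set returns the element itself when it is a member, or nil when it isn’t:

(get #{1 2 3} 2)
=> 2
(get #{1 2 3} 5)
=> nil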

Symbols

Another assignment mechanism, though apparently we can also manipulate them as if they were data. Not sure what that means yet.

Quoting

' is referred to as quoting. I used it above to define a list; it’s also used in macros.
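
A quick illustration: quoting stops a form from being evaluated, which is why '("foo" "bar" "baz") above produced a literal list rather than a function call:

'(+ 1 2)
=> (+ 1 2)
(+ 1 2)
=> 3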

Exploring the Open Closed Principle


At the start of the year I watched Sandi Metz’s talk: All the Little Things. It’s an absolutely brilliant and, once again, inspiring talk.

RailsConf 2014 - All the Little Things by Sandi Metz

She touches on many interesting and thought-provoking topics. The one I would like to focus on in this post is the open/closed principle:

In object-oriented programming, the open/closed principle states “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”; that is, such an entity can allow its behaviour to be extended without modifying its source code.

In essence you should be able to add a feature to a certain part of your application without having to modify the existing code. When I first came across this idea, it seemed unachievable. How can you add a feature without touching existing code? The talk got me thinking about some of my code and I was keen to explore applying the principle to it.

So towards the back end of February I embarked on a refactoring exercise of the core feature of my site, Teacupinastorm. For some time I had been meaning to add a few new feeds to the page, but adding them was a bit of a slog, as I needed to touch way too many files in order to add one feed. It sounded like a prime candidate for exploring the Open/Closed principle in a practical manner.

As I mentioned, in order to add a feed I needed to edit at least two files and then create a new object to fetch the feed data and format it into a standard structure that my view could use. What really helped me with this exercise was that the functionality had decent test coverage.

At the heart we have the Page object, which basically co-ordinates the calls to the various APIs and does quite a bit more. This is another smell: it goes against the Single Responsibility Principle. This is what it used to look like:

class Page
  attr_reader :items

  def initialize
    @items = []
    @parser_factory = ParserFactory.new
  end

  def fetch
    parser_configurations = {wordpress: {count: 10}, delicious: {count: 5}, instagram: {count: 6}, github: {count: 5},
                  twitter: {count: 4}, vimeo: {count: 1}, foursquare: {count: 10}}

    parser_configurations.each do |parser_configuration|
      parser_type = parser_configuration[0]
      feed_item_count = parser_configuration[1][:count]

      parser = @parser_factory.build parser_type
      feed_items = parser.get_last_user_events feed_item_count

      feed_items.each do |item|
        parser_configuration = set_page_item(parser_type, item[:date], item[:content], item[:url], item[:thumbnail], item[:location])
        @items.push(parser_configuration)
      end

    end

  end

  def sort_by_date
    @items.sort! { |x, y| y[:date] <=> x[:date] }
  end

  def set_page_item(type, date, content, url, thumbnail, location)
    page_item = {}
    page_item[:type] = type
    page_item[:date] = fix_date(date, type)
    page_item[:content] = content
    page_item[:url] = url
    page_item[:thumbnail] = thumbnail
    page_item[:location] = location
    page_item
  end

  def fix_date(date, type)
    return DateTime.new if date.nil?

    (type == :instagram || type == :foursquare) ? DateTime.parse(Time.at(date.to_i).to_s) : DateTime.parse(date.to_s)
  end

  def get_by_type(type)
    @items.select { |v| v[:type] =~ Regexp.new(type) }
  end

end

It does a lot, it had some inefficiencies, and it also had a high churn rate. All smells asking to be improved upon.

One of the first things I did was move the parser_configuration out of this object. It’s a perfect candidate for a configuration object, so I moved it into its own YAML file and let Rails load that into application scope. Now when I add a new feed, I no longer need to touch this file; I just add it to the YAML file.
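
To give an idea of the shape, here is a sketch of what such a file might look like (the file name and exact layout are my reconstruction, not the real file; the counts mirror the old parser_configuration hash, loaded into something like FEED_CONFIGS by a Rails initializer):

# e.g. config/feeds.yml (hypothetical name and location)
wordpress:
  count: 10
delicious:
  count: 5
instagram:
  count: 6
github:
  count: 5
twitter:
  count: 4
vimeo:
  count: 1
foursquare:
  count: 10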

Next I looked at the ParserFactory. Basically it took a type and returned an object that would fetch the data. Another candidate to refactor so that I would not need to edit this file when I added a new feed.

class ParserFactory

  def build (type)

    case type
      when :foursquare
        parser = FoursquareParser.new
      when :instagram
        parser = InstagramParser.new
      when :delicious
        parser = DeliciousParser.new
      when :github
        parser = GithubParser.new
      when :twitter
        parser = TwitterParser.new
      when :vimeo
        parser = VimeoParser.new
      when :wordpress
        parser = WordpressParser.new
      else
        raise 'Unknown parser requested'
    end

    parser
  end

end

The individual parsers were actually fetching the data and formatting the response into a standard format for the view. If you watched Sandi’s video you will recognise the pattern here. Every time a new feed was added I had to add a new case. I re-worked the code to this:

class WrapperFactory

  def build (type)
    begin
      Object::const_get(type + "Wrapper").new
    rescue
      raise 'Unknown parser requested: ' + type
    end
  end

end

The objects themselves were more like wrappers, so I renamed the factory and the individual objects accordingly. I can’t quite get rid of the “Wrapper” suffix, as some of the gem names would clash with the class names. I need to work on that some more.

So the wrappers massaged the content of the response into the right format by looping over the result set and returning the collection to the Page object. Then I would loop over it again in the Page object to set the page items. Redundant looping; let’s address this.

I looked at the set_page_item and fix_date methods. For starters they seemed related and did not belong in this object, so I extracted them into a PageItem object (a rough sketch of what that might look like follows). Furthermore, fix_date checked the feed type in order to format the date. I decided to move the responsibility for creating this object into the individual wrappers, which then just append the result to the items collection.
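
A rough sketch of what the extracted PageItem might look like (this is my reconstruction rather than the actual class; the attributes mirror the keys from set_page_item, and the date is now parsed by the wrappers before it gets here):

# Hypothetical reconstruction of the extracted PageItem
class PageItem
  attr_reader :type, :date, :content, :url, :thumbnail, :location

  def initialize(type, date, content, url, thumbnail, location)
    @type = type
    @date = date
    @content = content
    @url = url
    @thumbnail = thumbnail
    @location = location
  end
end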

Now the Page object looks like this:

class Page
  attr_reader :items

  def initialize
    @items = []
    @wrapper_factory = WrapperFactory.new
  end

  def fetch_page_items

    FEED_CONFIGS.each do |feed_configuration|
      parser_type = feed_configuration[0]
      feed_item_count = feed_configuration[1]['count']

      wrapper = @wrapper_factory.build parser_type.to_s.capitalize

      @items.concat(wrapper.get_last_user_events(feed_item_count))
    end
  end

  def fetch_sorted_page_items
    fetch_page_items
    sort_by_date
  end

  def sort_by_date
    @items.sort! { |x, y| y.date <=> x.date }
  end

  def get_by_type(type)
    @items.select { |v| v.type == type }
  end

end

A little simpler, but more importantly when it comes to adding a new feed I no longer need to edit this file or indeed the Factory object. It’s safe to say that both WrapperFactory and Page are now open for extension and closed for modification. The next time I add a feed, I do not need to touch these two objects. I simply update my configuration file and create a feed type wrapper.

However, now PageItem is not open/closed. What if I add a new feed and I need to fix the date? I would need to adjust the fix_date method in that object. So I decided to extract that method from PageItem into its own module (a sketch follows), adjusted the code to be more generic, and put the responsibility for parsing the date back on the individual feed wrappers. Ultimately they have more knowledge about the data they are handling, and it’s certainly not PageItem’s responsibility to do that job.
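
A sketch of how such a module might end up looking once the type check is gone (the module name and exact parsing rules here are assumptions on my part; wrappers that receive epoch timestamps would convert them before calling this):

# Hypothetical extracted date helper
module DateFixer
  def self.fix(date)
    return DateTime.new if date.nil?

    DateTime.parse(date.to_s)
  end
end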

Overall the code is easier to reason about, each object has a more concrete responsibility, and, more importantly, when I add a new feed I no longer have to touch Page, PageItem or WrapperFactory.

A Quarter of the Way In


So here we are a quarter of the way into 2015 already, the sun is out and it is getting warmer in London.

In January, I spelt out some goals I wanted to achieve for 2015 and four months into the year seems like a good moment to take stock of things.

The first thing I wanted to achieve was to write one blog post a week. So far I have posted 15 times (including this one) and we are up to 16 weeks into the year – only 1 post behind! That’s actually not too bad. I haven’t chalked up as many analytical posts as I wanted to, but I am pleased to have gotten into the habit of writing one post a week.

On the book front – I had set myself a goal of reading one book a month. To date I have finished three, so once again a little behind the set expectation:

I am in the process of reading:

My own book writing, though, has languished… I am not even sure I want to go into the whys and whats, though I probably should, to get the ball rolling again. Let’s add that one to the TODO list.

On the side project front, I surprised myself a little and released a gem – which has been downloaded 2557 times to date. I suspect the majority are mine ;) That was an interesting experiment and I blogged about it quite a bit.

I also left Skype after 2 years, and after releasing the Skype for Web experience (which actually had its first, and to that point only, release back in November). I now work for Red Badger as a Technical Architect and I am having a lot of fun again. An upcoming blog post will describe how we are building and continuously shipping an isomorphic app using React, Arch and Docker to AWS using Circle CI. I cannot emphasise enough how much changing your tooling and being able to use best-in-breed tools can mean for your personal (/developer) happiness, productivity and enthusiasm.

So in summary, a little behind the content output and intake I wanted to achieve. I did release one side project. I have migrated things around on my site a little to make the refresh a little easier. Changed jobs. On a personal note, my son James got his British citizenship and Jodie got her indefinite leave to remain. So far 2015 has been kind to us, let’s hope it continues – knock on wood!

Picking a New Language to Learn


I started writing this post with the idea to just layout what was important to me in choosing what language to pick up this year and go through the options. I didn’t really expect to make a choice by the end of it.

In doing this post, thinking about what I wanted out of a language and the community around it, and doing the research, there was only one real winner in the end. It does help to put things into writing… The TL;DR: this year I will be looking at Clojure. Why? It ticks all of the boxes I set out in this post.


Most years I try to learn a new language, and typically the choice has been straightforward. For some reason this year I have struggled with it. Maybe it’s because there’s such a proliferation of interesting languages out there. Maybe it’s because I am torn between an OO and a functional paradigm.

At the top of the list are Scala, Go, Clojure and Elixir. All but one are in the functional realm of programming languages, Go being the odd one out; however, it does seem to have huge traction right now. On the other hand there is something about Elixir that really appeals to me, maybe because it’s described as being close to Ruby and focused on developer happiness… and it’s the shiny new hotness.

Oddly enough only Scala featured in my Technology Radar. Swift was one that I listed, but does not figure at all in my shortlist. This tells me I need to leverage my radar a bit more and also think about what goes into it a little more deeply.

What matters to me

Things that are important to me when making the choice are: the testing story, build tools, CI support, dependency management and the backing of a web framework.

Scala

Scala ships with SBT as the build tool. Circle CI, Codeship and SnapCI all support Scala.

You have a few choices on the web framework side of this with the Play framework, Scalatra and Lift.

What about testing? The first two things I came across were ScalaTest and Specs2. Since Scala is built on the JVM, you can also leverage Maven/Gradle for build automation and dependency management.

Elixir

The CI story for Elixir is a little murky; there are custom scripts out there to run builds on Circle CI. As a web framework there is the Phoenix Framework. The testing story doesn’t look fabulous yet, but it’s good enough. Elixir comes with Mix for dealing with dependencies. It’s still early days, but being on the front line could be a good thing, and, well, there’s the whole developer happiness thing that just can’t be discounted.

Clojure

As for Clojure, well there are quite a few options for the web framework side of things with Caribou, Luminusweb and Pedestal.

ThoughtWorks’ CI service, Snap CI, has Clojure covered. Codeship also provides support.

In terms of build automation tools you have Leiningen, and Clojars looks like a good source of libraries.

The testing story is also a good one: it comes with its own test framework, but there are many other options as well, such as speclj and Midje. All in all it looks like Clojure ticks all of the boxes, thanks to its wide adoption and maturity. The only downside, which is also one of its advantages, is that it runs on the JVM and hence allows you to leverage the rich Java ecosystem. Oh my, there are a lot of braces to digest as well :).

Go

Codeship provides automated builds. Go ships with a test framework as well as benchmarking tools, so that covers the automated testing angle. There are other solutions as well, such as GoConvey or Ginkgo.

For web frameworks both Revel and Martini look good. With regards to build tools and dependency management, these are also built into the language with go build and go get respectively.

Final thoughts

All of the languages address the things that are important to me, with varying degrees of maturity. However there’s still the question: does the language gel with me? To help me with that I have found an awesome resource that lets me explore the languages: Exercism, the brainchild of Katrina Owen. She refers to the exercises as a set of toy problems to solve; you can go very deep into the solutions, but it also provides you with a good experimentation platform.

The other thing I remembered was this book: Seven Languages in Seven Weeks. I have recently been thumbing through it again and it provides a great introduction to some of the languages I am considering, as well as suggesting a few exercises for further exploration.

Writing all this down seems like a lot of consideration for something that I used to do on a whim. However, now that I have gone through this exercise, I know which language I would like to get to know this year: Clojure.

How to Test Your Gem Against Multiple Ruby Versions Using Circle Ci


My work on my little gem continues to make steady progress. This week I carried out some major re-working of the API. I wanted to follow the Parallel Change pattern for these changes, as I didn’t want to completely break the API. However there was at least one breaking change, given that I moved from:

client = CoderWally::Client.new
coder_wally = client.get_everything_for ARGV[0]

To:

client = CoderWally::Client.new ARGV[0]

For the record you can still call client.get_everything_for ARGV[0], but you will see a deprecation warning. The preferred approach now is to call client.user.everything.
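
A minimal sketch of how such a deprecating delegation might look (this is an illustration of the Parallel Change idea, not the gem’s actual implementation):

# Hypothetical: old entry point warns and forwards to the new API
def get_everything_for(username)
  warn '[DEPRECATION] get_everything_for is deprecated, use client.user.everything instead'
  @username ||= username # assume the new API reads the username from an ivar
  user.everything
end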

The other thing that I wanted to experiment with was running a build against multiple versions of Ruby. In Circle CI this is actually really straightforward: all you need to do is override the dependency and test steps in your circle.yml file. I wanted to run a build against Ruby 2.0.0-p598, 2.1.5 and 2.2.0, so here’s what my config file now looks like:

dependencies:
  override:
    - 'rvm-exec 2.0.0-p598 bundle install'
    - 'rvm-exec 2.1.5 bundle install'
    - 'rvm-exec 2.2.0 bundle install'

test:
  override:
    - 'rvm-exec 2.0.0-p598 bundle exec rake'
    - 'rvm-exec 2.1.5 bundle exec rake'
    - 'rvm-exec 2.2.0 bundle exec rake'

While this was easy to set up, there were a couple of learnings:

  • Do not specify a bundler version in your gem’s dev dependencies. It’s more flexible to trust the system and the Ruby version that is running the bundle install command. If you do, then you need to install the corresponding version on the build server; and if you want to go back to older versions of Ruby that aren’t supported by the bundler version you have specified, there’s more faffing about.
  • The other thing I learned had to do with Minitest and Ruby 2.2.0. The call to require it failed. To get the build to pass on Circle CI, I had to add a dev dependency to my gemspec (see the snippet below).
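
For reference, the gemspec addition amounts to something like this (a sketch; the exact block variable name in your gemspec may differ):

# gemspec excerpt (hypothetical)
spec.add_development_dependency 'minitest'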

I wanted to test running against older versions of Ruby and the latest JRuby, but when I had a quick go, Webmock told me that I should stub my requests; I am doing that, but for some reason they aren’t being recognised in this configuration.

A Couple of Bundler Tricks


Quite literally two things, no more no less.

To install a specific version of bundler do:

gem install bundler -v x.x.x

Where x.x.x is the version to install. Probably well known, but I had to look it up. Then, to run using that version instead of the latest one you have installed:

bundle _x.x.x_ install

The underscores surrounding the version number are not a typo; it does look odd, but it works…

Three Things I Learned About PhantomJs This Week


This week we had a bit of a fun moment with one of our feature tests. As a little background, we are using a combination of PhantomJs, Cucumber.js and WebDriver.io for our end-to-end/user-journey tests.

One test failed repeatedly when we used PhantomJs, but if we switched to Chrome it would pass. The test itself was sending an async POST request to an API and then, based on the response, the browser would redirect to the created resource (well, it did more, but that was the basic premise).

After some head scratching and painful debugging we eventually narrowed it down to our Location header disappearing from the API response. We initially thought this might be a bug in PhantomJs, because we could clearly see the server sending the header and it would show up in the debug tools as well; however it turns out that this is PhantomJs’ security model. By default PhantomJs only handles ‘standard’ headers, and apparently Location isn’t one of them. The solution in the end is quite simple: start PhantomJs with the --web-security flag set to false: ./node_modules/.bin/phantomjs --web-security=false

Note that this also turns off SSL certificate checking, but since we are using this for test purposes we are fine. Your mileage may vary though depending on what you are using PhantomJs for.

The other thing I learned during this episode was that there is no need to use or install the Selenium standalone server, as you can launch PhantomJs with WebDriver support (in the shape of GhostDriver, I believe): ./node_modules/.bin/phantomjs --webdriver=4444

To debug the test we used the following flag, which allows you to then remote debug using a browser and the web inspector interface: ./node_modules/.bin/phantomjs --remote-debugger-port=9000

Open the browser and go to http://localhost:9000 and you will be presented with a list of PhantomJs sessions. By selecting one you can start your debugging.

So there you are three things I learned about PhantomJs.

Refactoring Using Hash#fetch


Last night I decided to add a simple memoization pattern to my coder_wall gem to stop unnecessary network calls to retrieve data from the CoderWall API. Memoization is a form of caching (well, there’s more to it than that), so let’s refer to Wikipedia for a more detailed explanation and further reading if you are interested:

In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

My intent was to reduce calls to the CoderWall API for the same username. Here is what the code for fetching data used to look like:

# Fetch data from CoderWall
def fetch(username)
  uri = uri_for_user(username)
  json = send_request(uri)

  begin
    JSON.parse(json.read)
  rescue JSON::ParserError
    raise InvalidJson, 'Received invalid json in response'
  end
end

I created a @response instance variable that would store the parsed results of the call. If the username key exists in the hash, just return the stored result; otherwise go over the wire to get the data:

# Fetch data from CoderWall
def fetch(username)
  return @response[username] unless @response[username].nil?

  uri = uri_for_user(username)
  json = send_request(uri)

  begin
    @response[username] = JSON.parse(json.read)
  rescue JSON::ParserError
    raise InvalidJson, 'Received invalid json in response'
  end
end

So far so good, until I ran metric_fu and got two warnings: nil checking and duplicate calls. That immediately reminded me of a passage in Avdi Grimm’s excellent book Confident Ruby. Armed with that knowledge I was able to make one change that removed the code smells:

# Fetch data from CoderWall
def fetch(username)
  @response.fetch(username) do
    uri = uri_for_user(username)
    json = send_request(uri)

    begin
      @response[username] = JSON.parse(json.read)
    rescue JSON::ParserError
      raise InvalidJson, 'Received invalid json in response'
    end
  end
end

Because @response is a Hash, I was able to leverage the fetch method in conjunction with a block, thus avoiding any nil checking. There’s a little more to it, but really you should read all about it in Avdi’s book; it is a veritable treasure trove of patterns.
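
As a standalone illustration of the idiom (not code from the gem): Hash#fetch only evaluates its block when the key is missing, which is what makes it such a neat memoization primitive:

cache = {}
cache.fetch(:answer) { cache[:answer] = 21 * 2 }
# => 42, the block runs and stores the value
cache.fetch(:answer) { raise 'never evaluated' }
# => 42, served from the hash without running the block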

Raml and Osprey - a Better Way to Build Mock Apis


It is generally considered a good idea to develop and test your application against mock services rather than the real thing. As we have been embarking on a new project, the team discussed the need for mock services and how we would manage them. On prior projects the effort in maintaining these was somewhat painful and I was a little wary. One of the tools to help deal with the maintenance is RAML. RAML stands for RESTful API Modeling Language; it

… is a simple and succinct way of describing practically-RESTful APIs. It encourages reuse, enables discovery and pattern-sharing, and aims for merit-based emergence of best practices. The goal is to help our current API ecosystem by solving immediate problems and then encourage ever-better API patterns. RAML is built on broadly-used standards such as YAML and JSON and is a non-proprietary, vendor-neutral open spec.

Our intention is to use this to define the APIs we want to build and interface with, to use the specification we create to validate both development and integration, and to use that specification as the foundation for our mocks.

After further research I discovered osprey, which is a

… JavaScript framework, based on Node and Express, for rapidly building applications that expose APIs described via RAML, the RESTful API Modeling Language.

While it’s still in development and doesn’t yet support documentation and validation, you are able to very quickly stand up a mock service with an API described via RAML. There’s also a CLI that allows you to scaffold an API based on a specification defined using RAML.

I decided to spike things a little using the CLI tool. Once the CLI has been installed globally using npm, you can simply run the following:

osprey new --name hello-world

By default you should have an api.raml file created and stored under src/assets/raml. I then used that file to define my spike API using RAML:

#%RAML 0.8
---
title: Hello World API
baseUri: http://localhost:3000/api/{version}
version: v1
/users:
    get:
        description: Return all users
        responses:
            200:
                body:
                    application/json:
                        example: |
                            {
                               "data": [
                                   { "name": "foo" },
                                   { "name": "fee"}
                               ],
                               "success": true,
                               "status": 200
                             }
    /{username}:
        get:
            description: Say hello to the given username
            responses:
                200:
                   body:
                     application/json:
                      example: |
                         {
                           "data": {
                             "message": "Hello foo",
                           },
                           "success": true,
                           "status": 200
                         }

If you are familiar with YAML and JSON you will find the output very readable; even if you aren’t, I think you can still agree that this is quite accessible.

To validate the specification I had created, without having to start the server, I could run:

$ osprey list src/assets/raml/api.raml
GET             /users                                                                                          
GET             /users/{username}

Before being able to run the service I had to make a couple of changes to the server JavaScript file, app.js.

var express = require('express');
var path = require('path');
var osprey = require('osprey');

var app = module.exports = express();

app.use(express.bodyParser());
app.use(express.methodOverride());
app.use(express.compress());
app.use(express.logger('dev'));

app.set('port', process.env.PORT || 3000);

api = osprey.create('/api/v1', app, {
    ramlFile: path.join(__dirname, '/assets/raml/api.raml'),
    logLevel: 'debug'  //  logLevel: off->No logs | info->Show Osprey modules initializations | debug->Show all
});

if (!module.parent) {
    var port = app.get('port');
    app.listen(port);
    console.log('listening on port ' + port);
}

To start the mock API service, simply run:

node /app/server.js

And you can now start exploring the API using your browser or any other tool you like to use to interact with an API. Being able to provide example data quickly allows you to scaffold your mock service with data for your app.
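
For example, with the server running, the two resources defined above should be reachable like this (the paths assume the baseUri and version from the RAML file):

curl http://localhost:3000/api/v1/users
curl http://localhost:3000/api/v1/users/foo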

It’s only been a couple of hours of playing with these two tools, but it already feels much easier to work with and maintain than having to write a bunch of code to handle requests and responses.

Working With Function Arguments


Last week I was working on build tasks to daemonise some of the services we intend to use for our project. I decided to use forever and ended up with a call that looks something like this:

let task = execForeverCommand('start', 'path/to/service');

or

let task = execForeverCommand('start', 'path/to/service', 'some', 'other', 'option');

The execForeverCommand would build up a command to execute by concatenating a variable-length list of function arguments into a single string. What follows are three different approaches I took to build up that string from those arguments.

My initial intention was to just use arguments.join(" "); however, function arguments are not an array, they are an array-like object, so I opted for a for-in loop:

function execForeverCommand() {
    let commands = '';
    for (var argument in arguments) {
        if(arguments.hasOwnProperty(argument)) {
            commands += ' ' + arguments[argument];
        }
    }

    return shell.task('./node_modules/forever/bin/forever ' + commands);
}

That worked, but is very verbose. Having a working solution, I spent some time reading through the MDN article I referenced above in more detail and straight away realised that I could change the code to use Array.prototype.slice.call, combining that with my initial plan:

function execForeverCommand() {
    let commands = Array.prototype.slice.call(arguments).join(" ");

    return shell.task('./node_modules/forever/bin/forever ' + commands);
}

Now, astute readers might have spotted the use of let in these functions. On this project we are using ES6 features (with the assistance of Babel). This gave me a third option: rest parameters. Thanks to rest parameters I rewrote the function one last time, effectively going full circle and implementing my originally intended solution, i.e. using Array.prototype.join():

function execForeverCommand(...commands) {
    return shell.task('./node_modules/forever/bin/forever ' + commands.join(" "));
}