Learn Go with Tests
Chris James
Art by Denise
Translations:
中文
Português
日本語
Background
I have some experience introducing Go to development teams and have tried
different approaches as to how to grow a team from some people curious about
Go into highly effective writers of Go systems.
An approach we tried was to take the blue book and every week discuss the next
chapter along with the exercises.
I love this book but it requires a high level of commitment. The book is very
detailed in explaining concepts, which is obviously great but it means that the
progress is slow and steady - this is not for everyone.
I found that whilst a small number of people would read chapter X and do the
exercises, many people didn't.
Katas are fun but they are usually limited in their scope for learning a language;
you're unlikely to use goroutines to solve a kata.
Another problem is when you have varying levels of enthusiasm. Some people
just learn way more of the language than others and when demonstrating what
they have done end up confusing people with features the others are not familiar
with.
This ends up making the learning feel quite unstructured and ad hoc.
What did work
By far the most effective way was by slowly introducing the fundamentals of the
language by reading through go by example, exploring them with examples and
discussing them as a group. This was a more interactive approach than "read
chapter x for homework".
Over time the team gained a solid foundation of the grammar of the language so
we could then start to build systems.
It doesn't matter how artistic you think you are, you are unlikely to write good
music without understanding the fundamentals and practicing the mechanics.
What I like to do is explore concepts and then solidify the ideas with tests. Tests
verify that the code I write is correct and document the features I have learned.
Feedback
Add issues/submit PRs here or [tweet me @quii](https://twitter.com/quii)
MIT license
Why unit tests and how to make them
work for you
Here's a link to a video of me chatting about this topic
Software
The promise of software is that it can change. This is why it is called soft ware, it
is malleable compared to hardware. A great engineering team should be an
amazing asset to a company, writing systems that can evolve with a business to
keep delivering value.
So why are we so bad at it? How many projects do you hear about that outright
fail? Or become "legacy" and have to be entirely re-written (and the re-writes
often fail too)?
How does a software system "fail" anyway? Can't it just be changed until it's
correct? That's what we're promised!
A lot of people are choosing Go to build systems because it has made a number
of choices which one hopes will make it more legacy-proof.
Even with all these great properties we can still make terrible systems, so we
should look to the past and understand lessons in software engineering that apply
no matter how shiny (or not) your language is.
In 1974 a clever software engineer called Manny Lehman wrote Lehman's laws
of software evolution. The first of these, The Law of Continuous Change, says that a
system in use must be continually adapted or it becomes progressively less useful.
These forces seem like important things to understand if we have any hope of
not being in an endless cycle of shipping systems that turn into legacy and then
get re-written over and over again.
It feels obvious that a system has to change or it becomes less useful but how
often is this ignored?
Many teams are incentivised to deliver a project on a particular date and then
moved on to the next project. If the software is "lucky" there is at least some
kind of hand-off to another set of individuals to maintain it, but they didn't write
it of course.
People often concern themselves with trying to pick a framework which will
help them "deliver quickly" but not focusing on the longevity of the system in
terms of how it needs to evolve.
Even if you're an incredible software engineer, you will still fall victim to not
knowing the future needs of your system. As the business changes some of the
brilliant code you wrote is now no longer relevant.
Lehman was on a roll in the 70s because he gave us another law to chew on.
The Law of Increasing Complexity
As a system evolves, its complexity increases unless work is done to reduce
it
What he's saying here is we can't have software teams as blind feature factories,
piling more and more features on to software in the hope it will survive in the
long run.
Refactoring
There are many facets of software engineering that keep software malleable,
such as:
Developer empowerment
Generally "good" code. Sensible separation of concerns, etc etc
Communication skills
Architecture
Observability
Deployability
Automated tests
Feedback loops
I am going to focus on refactoring. It's a phrase that gets thrown around a lot:
"we need to refactor this", said to a developer on their first day of programming
without a second thought.
Where does the phrase come from? How is refactoring different from just
writing code?
I know that I and many others have thought we were doing refactoring, but we
were mistaken.
So what is it?
Factorisation
When learning maths at school you probably learned about factorisation. Here's
a very simple example
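The example itself is an image in the original book; based on the discussion that follows, it is along these lines:

1/2 + 1/4
= 2/4 + 1/4   (rewrite 1/2 over the common denominator 4)
= 3/4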
We can take some important lessons from this. When we factorise the
expression we have not changed the meaning of the expression. Both of them
equal 3/4 but we have made it easier for us to work with; by changing 1/2 to 2/4
it fits into our "domain" easier.
When you refactor your code, you are trying to find ways of making your code
easier to understand and "fit" into your current understanding of what the system
needs to do. Crucially you should not be changing behaviour.
An example in Go
func Hello(name string, language string) string {
	if language == "es" {
		return "Hola, " + name
	}

	if language == "fr" {
		return "Bonjour, " + name
	}

	// imagine dozens more languages

	return "Hello, " + name
}
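The refactored version is not shown in this extraction; a sketch, consistent with the if checks above and the switch-based code shown later in the book, would be:

const (
	spanish = "es"
	french  = "fr"

	englishHelloPrefix = "Hello, "
	spanishHelloPrefix = "Hola, "
	frenchHelloPrefix  = "Bonjour, "
)

// Hello greets name in the requested language, defaulting to English.
func Hello(name string, language string) string {
	prefix := englishHelloPrefix

	switch language {
	case spanish:
		prefix = spanishHelloPrefix
	case french:
		prefix = frenchHelloPrefix
	}

	return prefix + name
}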
The nature of this refactor isn't actually important; what's important is that I haven't
changed behaviour.
When refactoring you can do whatever you like: add interfaces, new types,
functions, methods, etc. The only rule is that you don't change behaviour.
We don't want to have to be thinking about lots of things at once because that's
when we make mistakes. I've witnessed so many refactoring endeavours fail
because the developers are biting off more than they can chew.
When I was doing factorisations in maths classes with pen and paper I would
have to manually check that I hadn't changed the meaning of the expressions in
my head. How do we know we aren't changing behaviour when refactoring
code, especially on a system that is non-trivial?
Those who choose not to write tests will typically be reliant on manual testing.
For anything other than a small project this will be a tremendous time-sink and
does not scale in the long run.
In order to safely refactor you need unit tests, because they provide fast feedback
as to whether your changes have altered behaviour.
An example in Go
A unit test for our Hello function could look like this
func TestHello(t *testing.T) {
	got := Hello("Chris", "es")
	want := "Hola, Chris"

	if got != want {
		t.Errorf("got %q want %q", got, want)
	}
}
At the command line I can run go test and get immediate feedback as to
whether my refactoring efforts have altered behaviour. In practice it's best to
learn the magic button to run your tests within your editor/IDE.
All within a very tight feedback loop so you don't go down rabbit holes and
make mistakes.
Having a project where all your key behaviours are unit tested and give you
feedback well under a second is a very empowering safety net to do bold
refactoring when you need to. This helps us manage the incoming force of
complexity that Lehman describes.
On the other hand, you have people describing experiences of unit tests actually
hindering refactoring.
Ask yourself, how often do you have to change your tests when refactoring?
Over the years I have been on many projects with very good test coverage and
yet the engineers are reluctant to refactor because of the perceived effort of
changing tests.
Imagine we're writing a system that renders squares, and we decide the simplest
way to draw a square is to compose it from two triangles. We write our unit tests
around our square to make sure the sides are equal and then we write some tests
around our triangles. We want to make sure our triangles render correctly so we
assert that the angles sum up to 180 degrees, perhaps check we make 2 of them,
etc etc. Test coverage is really important and writing these tests is pretty easy so
why not?
A few weeks later The Law of Continuous Change strikes our system and a new
developer makes some changes. She now believes it would be better if squares
were formed with 2 rectangles instead of 2 triangles.
She tries to do this refactor and gets mixed signals from a number of failing
tests. Has she actually broken important behaviours here? She now has to dig
through these triangle tests and try and understand what's going on.
It's not actually important that the square was formed out of triangles but our
tests have falsely elevated the importance of our implementation details.
If I am saying just test behaviour, should we not just write system/black-box
tests? These kinds of tests do have lots of value in terms of verifying key user
journeys, but they are typically expensive to write and slow to run. For that
reason they're not too helpful for refactoring because the feedback loop is slow.
In addition black box tests don't tend to help you very much with root causes
compared to unit tests.
I like to imagine these units as simple Lego bricks which have coherent APIs
that I can combine with other bricks to make bigger systems. Underneath these
APIs there could be dozens of things (types, functions et al) collaborating to
make them work how they need to.
For instance if you were writing a bank in Go, you might have an "account"
package. It will present an API that does not leak implementation detail and is
easy to integrate with.
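As a sketch of what such a unit might look like, here is a hypothetical account package (the names are illustrative, not from the book) whose public API exposes behaviour while keeping the representation hidden:

// Package account models a customer account without leaking its internals.
package account

import "errors"

// ErrInsufficientFunds is returned when a withdrawal would overdraw the account.
var ErrInsufficientFunds = errors.New("insufficient funds")

// Account keeps its balance unexported so the implementation can change freely.
type Account struct {
	balance int
}

// Deposit adds funds to the account.
func (a *Account) Deposit(amount int) {
	a.balance += amount
}

// Withdraw removes funds, refusing to overdraw.
func (a *Account) Withdraw(amount int) error {
	if amount > a.balance {
		return ErrInsufficientFunds
	}
	a.balance -= amount
	return nil
}

// Balance reports the current funds.
func (a *Account) Balance() int {
	return a.balance
}

Unit tests written against Deposit, Withdraw and Balance exercise behaviour, so the internal representation can be reshaped without touching them.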
If you have these units that follow these properties you can write unit tests
against their public APIs. By definition these tests can only be testing useful
behaviour. Underneath these units I am free to refactor the implementation as
much as I need to and the tests for the most part should not get in the way.
Refactoring
Unit tests
Unit design
What we can start to see is that these facets of software design reinforce each
other.
Refactoring
Gives us signals about our unit tests. If we have to do manual checks, we
need more tests. If tests are wrongly failing then our tests are at the wrong
abstraction level (or have no value and should be deleted).
Helps us handle the complexities within and between our units.
Unit tests
Give a safety net to refactor.
Verify and document the behaviour of our units.
We don't want to go back to the bad old days of software, where an analyst team
would spend 6 months writing a requirements document, an architect team would
spend another 6 months coming up with a design, and a few years later the whole
project fails.
Agile teaches us that we need to work iteratively, starting small and evolving the
software so that we get fast feedback on the design of our software and how it
works with real users; TDD enforces this approach.
TDD addresses the laws that Lehman talks about and other lessons hard learned
through history by encouraging a methodology of constantly refactoring and
delivering iteratively.
Small steps
Write a small test for a small amount of desired behaviour
Check the test fails with a clear error (red)
Write the minimal amount of code to make the test pass (green)
Refactor
Repeat
As you become proficient, this way of working will become natural and fast.
You'll come to expect this feedback loop to not take very long and feel uneasy if
you're in a state where the system isn't "green" because it indicates you may be
down a rabbit hole.
You'll always be driving small & useful functionality comfortably backed by the
feedback from your tests.
Wrapping up
The strength of software is that we can change it. Most software will require
change over time in unpredictable ways; but don't try and over-engineer
because it's too hard to predict the future.
Instead we need to make it so we can keep our software malleable. In order
to change software we have to refactor it as it evolves, or it will turn into a
mess.
A good test suite can help you refactor quicker and in a less stressful
manner
Writing good unit tests is a design problem so think about structuring your
code so you have meaningful units that you can integrate together like Lego
bricks.
TDD can help and force you to design well factored software iteratively,
backed by tests to help future work as it arrives.
Hello, World
You can find all the code for this chapter here
So if you're on a unix based OS and you are happy to stick with Go's
conventions about $GOPATH (which is the easiest way of setting up) you could
run mkdir -p $GOPATH/src/github.com/$USER/hello.
For subsequent chapters, you can make a new folder with whatever name you
like to put the code in e.g $GOPATH/src/github.com/{your-user-id}/integers
for the next chapter might be sensible. Some readers of this book like to make an
enclosing folder for all the work such as "learn-go-with-tests/hello". In short, it's
up to you how you structure your folders.
Create a file in this directory called hello.go and write this code. To run it type
go run hello.go.
package main

import "fmt"

func main() {
	fmt.Println("Hello, world")
}
How it works
When you write a program in Go you will have a main package defined with a
main func inside it. Packages are ways of grouping up related Go code together.
The func keyword is how you define a function with a name and a body.
With import "fmt" we are importing a package which contains the Println
function that we use to print.
How to test
How do you test this? It is good to separate your "domain" code from the outside
world (side-effects). The fmt.Println is a side effect (printing to stdout) and the
string we send in is our domain.
package main

import "fmt"

func Hello() string {
	return "Hello, world"
}

func main() {
	fmt.Println(Hello())
}
We have created a new function again with func but this time we've added
another keyword string in the definition. This means this function returns a
string.
Now create a new file called hello_test.go where we are going to write a test
for our Hello function
package main

import "testing"

func TestHello(t *testing.T) {
	got := Hello()
	want := "Hello, world"
	if got != want {
		t.Errorf("got %q want %q", got, want)
	}
}
Before explaining, let's just run the code. Run go test in your terminal. It
should've passed! Just to check, try deliberately breaking the test by changing
the want string.
Notice how you have not had to pick between multiple testing frameworks and
then figure out how to install. Everything you need is built in to the language and
the syntax is the same as the rest of the code you will write.
Writing tests
Writing a test is just like writing a function, with a few rules:
It needs to be in a file with a name like xxx_test.go
The test function must start with the word Test
The test function takes one argument only, t *testing.T
For now it's enough to know that your t of type *testing.T is your "hook" into
the testing framework so you can do things like t.Fail() when you want to fail.
if
If statements in Go are very much like other programming languages.
Declaring variables
We're declaring some variables with the syntax varName := value, which lets
us re-use some values in our test for readability.
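As a small illustration (not from the book), these two declarations are equivalent; := declares and initialises a variable in one step with its type inferred:

got := Hello()                    // short variable declaration, type inferred
var want string = "Hello, world"  // longer, explicit form of the same thing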
t.Errorf
We are calling the Errorf method on our t which will print out a message and
fail the test. The f stands for format which allows us to build a string with values
inserted into the placeholder values %q. When you made the test fail it should be
clear how it works.
You can read more about the placeholder strings in the fmt go doc. For tests %q
is very useful as it wraps your values in double quotes.
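As a quick illustration of why %q is handy (the strings here are just examples), it wraps values in quotes so stray whitespace or missing characters stand out:

fmt.Printf("got %q want %q\n", "Hello, world", "Hello, Chris")
// prints: got "Hello, world" want "Hello, Chris"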
Go doc
Another quality of life feature of Go is the documentation. You can launch the
docs locally by running godoc -http :8000. If you go to localhost:8000/pkg
you will see all the packages installed on your system.
The vast majority of the standard library has excellent documentation with
examples. Navigating to http://localhost:8000/pkg/testing/ would be worthwhile
to see what's available to you.
If you don't have the godoc command, you may be using a newer version of Go
(1.14 or later) which no longer includes godoc. You can manually install it with
go get golang.org/x/tools/cmd/godoc.
Hello, YOU
Now that we have a test we can iterate on our software safely.
In the last example we wrote the test after the code had been written just so you
could get an example of how to write a test and declare a function. From this
point on we will be writing tests first.
Our next requirement is to let us specify the recipient of the greeting. Let's start
by capturing this requirement in a test. This is basic test driven development and
allows us to make sure our test is actually testing what we want. When you
retrospectively write tests there is the risk that your test may continue to pass
even if the code doesn't work as intended.
package main

import "testing"

func TestHello(t *testing.T) {
	got := Hello("Chris")
	want := "Hello, Chris"
	if got != want {
		t.Errorf("got %q want %q", got, want)
	}
}
In this case the compiler is telling you what you need to do to continue. We have
to change our function Hello to accept an argument.
If you try and run your tests again your main.go will fail to compile because
you're not passing an argument. Send in "world" to make it pass.
func main() {
	fmt.Println(Hello("world"))
}
Now when you run your tests you should see something like
hello_test.go:10: got "Hello, world" want "Hello, Chris"
Let's make the test pass by using the name argument and concatenate it with
Hello,
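That change looks like this:

func Hello(name string) string {
	return "Hello, " + name
}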
When you run the tests they should now pass. Normally as part of the TDD cycle
we should now refactor.
There's not a lot to refactor here, but we can introduce another language feature,
constants.
Constants
Constants are defined like so
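For our greeting prefix (the englishHelloPrefix constant reappears later in this chapter) that looks like:

const englishHelloPrefix = "Hello, "

func Hello(name string) string {
	return englishHelloPrefix + name
}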
After refactoring, re-run your tests to make sure you haven't broken anything.
To be clear, the performance boost is incredibly negligible for this example! But
it's worth thinking about creating constants to capture the meaning of values and
sometimes to aid performance.
The next requirement is for Hello to say "Hello, World" when it is called with an
empty string. Here is the test, grouped with subtests:

func TestHello(t *testing.T) {
	t.Run("saying hello to people", func(t *testing.T) {
		got := Hello("Chris")
		want := "Hello, Chris"
		if got != want {
			t.Errorf("got %q want %q", got, want)
		}
	})
	t.Run("say 'Hello, World' when an empty string is supplied", func(t *testing.T) {
		got := Hello("")
		want := "Hello, World"
		if got != want {
			t.Errorf("got %q want %q", got, want)
		}
	})
}
Here we are introducing another tool in our testing arsenal, subtests. Sometimes
it is useful to group tests around a "thing" and then have subtests describing
different scenarios.
A benefit of this approach is you can set up shared code that can be used in the
other tests.
It is important that your tests are clear specifications of what the code needs to
do.
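The extracted helper itself was lost in this extraction; based on the description below, it looks roughly like this (assuming it is called assertCorrectMessage):

func assertCorrectMessage(t *testing.T, got, want string) {
	t.Helper()
	if got != want {
		t.Errorf("got %q want %q", got, want)
	}
}

Each subtest then calls assertCorrectMessage(t, got, want) instead of repeating the if/Errorf block.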
What have we done here?
We've refactored our assertion into a function. This reduces duplication and
improves readability of our tests. In Go you can declare functions inside other
functions and assign them to variables. You can then call them, just like normal
functions. We need to pass in t *testing.T so that we can tell the test code to
fail when we need to.
t.Helper() is needed to tell the test suite that this method is a helper. By doing
this when it fails the line number reported will be in our function call rather than
inside our test helper. This will help other developers track down problems
easier. If you still don't understand, comment it out, make a test fail and observe
the test output.
Now that we have a well-written failing test, let's fix the code, using an if.
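The fix mirrors the full version shown at the end of this chapter:

func Hello(name string) string {
	if name == "" {
		name = "World"
	}

	return englishHelloPrefix + name
}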
If we run our tests we should see it satisfies the new requirement and we haven't
accidentally broken the other functionality.
Discipline
Let's go over the cycle again
Write a test
Make the compiler pass
Run the test, see that it fails and check the error message is meaningful
Write enough code to make the test pass
Refactor
On the face of it this may seem tedious but sticking to the feedback loop is
important.
Not only does it ensure that you have relevant tests, it helps ensure you design
good software by refactoring with the safety of tests.
Seeing the test fail is an important check because it also lets you see what the
error message looks like. As a developer it can be very hard to work with a
codebase when failing tests do not give a clear idea as to what the problem is.
By ensuring your tests are fast and setting up your tools so that running tests is
simple you can get in to a state of flow when writing your code.
By not writing tests you are committing to manually checking your code by
running your software which breaks your state of flow and you won't be saving
yourself any time, especially in the long run.
We should be confident that we can use TDD to flesh out this functionality
easily!
Write a test for a user passing in Spanish. Add it to the existing suite.
When you try and run the test again it will complain about not passing through
enough arguments to Hello in your other tests and in hello.go
./hello.go:15:19: not enough arguments in call to Hello
have (string)
want (string, string)
Fix them by passing through empty strings. Now all your tests should compile
and pass, apart from our new scenario
hello_test.go:29: got 'Hello, Elodie' want 'Hola, Elodie'
We can use if here to check the language is equal to "Spanish" and if so change
the message
if language == "Spanish" {
return "Hola, " + name
}
Now it is time to refactor. You should see some problems in the code, "magic"
strings, some of which are repeated. Try and refactor it yourself, with every
change make sure you re-run the tests to make sure your refactoring isn't
breaking anything.
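Extracting the magic strings into constants might look like this (equivalent constants get added for French in the next step):

const spanish = "Spanish"

const spanishHelloPrefix = "Hola, "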
if language == spanish {
return spanishHelloPrefix + name
}
French
Write a test asserting that if you pass in "French" you get "Bonjour, "
See it fail, check the error message is easy to read
Do the smallest reasonable change in the code
You may have written something that looks roughly like this
if language == spanish {
return spanishHelloPrefix + name
}
if language == french {
return frenchHelloPrefix + name
}
switch
When you have lots of if statements checking a particular value it is common to
use a switch statement instead.

func Hello(name string, language string) string {
	if name == "" {
		name = "World"
	}
	prefix := englishHelloPrefix
	switch language {
	case french:
		prefix = frenchHelloPrefix
	case spanish:
		prefix = spanishHelloPrefix
	}
	return prefix + name
}
Write a test to now include a greeting in the language of your choice and you
should see how simple it is to extend our amazing function.
one...last...refactor?
You could argue that maybe our function is getting a little big. The simplest
refactor for this would be to extract out some functionality into another function.
func Hello(name string, language string) string {
	if name == "" {
		name = "World"
	}

	return greetingPrefix(language) + name
}
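The extracted helper is not shown in this extraction; a sketch, using a named return value (which the next chapter refers back to):

// greetingPrefix returns the greeting prefix for the supplied language,
// defaulting to English.
func greetingPrefix(language string) (prefix string) {
	switch language {
	case french:
		prefix = frenchHelloPrefix
	case spanish:
		prefix = spanishHelloPrefix
	default:
		prefix = englishHelloPrefix
	}
	return
}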
Wrapping up
Who knew you could get so much out of Hello, world?
This is of course trivial compared to "real world" software but the principles still
stand. TDD is a skill that needs practice to develop but by being able to break
problems down into smaller components that you can test you will have a much
easier time writing software.
Integers
You can find all the code for this chapter here
Integers work as you would expect. Let's write an Add function to try things out.
Create a test file called adder_test.go and write this code.
Note: Go source files can only have one package per directory, make sure that
your files are organised separately. Here is a good explanation on this.
package integers

import "testing"

func TestAdder(t *testing.T) {
	sum := Add(2, 2)
	expected := 4
	if sum != expected {
		t.Errorf("expected '%d' but got '%d'", expected, sum)
	}
}
You will notice that we're using %d as our format strings rather than %q. That's
because we want it to print an integer rather than a string.
Also note that we are no longer using the main package, instead we've defined a
package named integers, as the name suggests this will group functions for
working with integers such as Add.
package integers
When you have more than one argument of the same type (in our case two
integers) rather than having (x int, y int) you can shorten it to (x, y int).
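A minimal implementation at this point, consistent with the failure output below, deliberately returns 0 so we can see the test fail properly first:

func Add(x, y int) int {
	return 0
}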
Now run the tests and we should be happy that the test is correctly reporting
what is wrong.
adder_test.go:10: expected '4' but got '0'
You may have noticed that we learnt about named return values in the last
section but we aren't using them here. Named return values should generally be
used when the meaning of the result isn't clear from context; in our case it's
pretty clear that the Add function will add the parameters. You can refer to this
wiki for more details.
We could write another test, with some different numbers to force that test to fail
but that feels like a game of cat and mouse.
Once we're more familiar with Go's syntax I will introduce a technique called
"Property Based Testing", which would stop annoying developers and help you
find bugs.
Refactor
There's not a lot in the actual code we can really improve on here.
This is great because it aids the usability of code you are writing. It is preferable
that a user can understand the usage of your code by just looking at the type
signature and documentation.
You can add documentation to functions with comments, and these will appear
in Go Doc just like when you look at the standard library's documentation.
Code examples found outside the codebase, such as in a readme file, often
become out of date and incorrect compared to the actual code because they
don't get checked.
Go examples are executed just like tests so you can be confident examples
reflect what the code actually does.
As with typical tests, examples are functions that reside in a package's _test.go
files. Add the following ExampleAdd function to the adder_test.go file.
func ExampleAdd() {
	sum := Add(1, 5)
	fmt.Println(sum)
	// Output: 6
}
(If your editor doesn't automatically import packages for you, the compilation
step will fail because you will be missing import "fmt" in adder_test.go. It is
strongly recommended you research how to have these kind of errors fixed for
you automatically in whatever editor you are using.)
If your code changes so that the example is no longer valid, your build will fail.
Running the package's test suite, we can see the example function is executed
with no further arrangement from us:
$ go test -v
=== RUN TestAdder
--- PASS: TestAdder (0.00s)
=== RUN ExampleAdd
--- PASS: ExampleAdd (0.00s)
Please note that the example function will not be executed if you remove the
comment //Output: 6. Although the function will be compiled, it won't be
executed.
By adding this code the example will appear in the documentation inside godoc,
making your code even more accessible.
Inside here you'll see a list of all the packages in your $GOPATH, so assuming you
wrote this code in somewhere like $GOPATH/src/github.com/{your_id} you'll
be able to find your example documentation.
If you publish your code with examples to a public URL, you can share the
documentation of your code at pkg.go.dev. For example, here is the finalised
API for this chapter. This web interface allows you to search for documentation
of standard library packages and third-party packages.
Wrapping up
What we have covered:
More practice of the TDD workflow
Integers and addition
Writing better documentation so users of our code can understand it quickly
Examples of how to use our code, which are checked as part of our tests
Arrays and slices
Arrays allow you to store multiple elements of the same type in a variable in a
particular order.
When you have an array, it is very common to have to iterate over them. So let's
use our new-found knowledge of for to make a Sum function. Sum will take an
array of numbers and return the total.
package main

import "testing"

func TestSum(t *testing.T) {
	numbers := [5]int{1, 2, 3, 4, 5}

	got := Sum(numbers)
	want := 15

	if got != want {
		t.Errorf("got %d want %d given, %v", got, want, numbers)
	}
}
Arrays have a fixed capacity which you define when you declare the variable.
We can initialize an array in two ways:
[N]type{value1, value2, ..., valueN} e.g. numbers := [5]int{1, 2, 3, 4, 5}
[...]type{value1, value2, ..., valueN} e.g. numbers := [...]int{1, 2, 3, 4, 5}
It is sometimes useful to also print the inputs to the function in the error message
and we are using the %v placeholder which is the "default" format, which works
well for arrays.
package main
To get the value out of an array at a particular index, just use array[index]
syntax. In this case, we are using for to iterate 5 times to work through the array
and add each item onto sum.
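Putting that together, a first Sum over the fixed-size array looks like this:

func Sum(numbers [5]int) int {
	sum := 0
	for i := 0; i < 5; i++ {
		sum += numbers[i]
	}
	return sum
}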
Refactor
Let's introduce range to help clean up our code
range lets you iterate over an array. Every time it is called it returns two values,
the index and the value. We are choosing to ignore the index value by using _
blank identifier.
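With range the same function becomes:

func Sum(numbers [5]int) int {
	sum := 0
	for _, number := range numbers {
		sum += number
	}
	return sum
}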
You may be thinking it's quite cumbersome that arrays have a fixed length, and
most of the time you probably won't be using them!
Go has slices which do not encode the size of the collection and instead can have
any size.
The next requirement will be to sum collections of varying sizes.
func TestSum(t *testing.T) {
	t.Run("collection of 5 numbers", func(t *testing.T) {
		numbers := [5]int{1, 2, 3, 4, 5}

		got := Sum(numbers)
		want := 15

		if got != want {
			t.Errorf("got %d want %d given, %v", got, want, numbers)
		}
	})

	t.Run("collection of any size", func(t *testing.T) {
		numbers := []int{1, 2, 3}

		got := Sum(numbers)
		want := 6

		if got != want {
			t.Errorf("got %d want %d given, %v", got, want, numbers)
		}
	})
}
Break the existing API by changing the argument to Sum to be a slice rather
than an array. When we do this we will know we have potentially ruined
someone's day because our other test will not compile!
Create a new function
In our case, no-one else is using our function so rather than having two functions
to maintain let's just have one.
If you try to run the tests they will still not compile, you will have to change the
first test to pass in a slice rather than an array.
Refactor
We had already refactored Sum and all we've done is changing from arrays to
slices, so there's not a lot to do here. Remember that we must not neglect our test
code in the refactoring stage and we have some to do here.
got := Sum(numbers)
want := 15
if got != want {
t.Errorf("got %d want %d given, %v", got, want, numbers)
}
})
got := Sum(numbers)
want := 6
if got != want {
t.Errorf("got %d want %d given, %v", got, want, numbers)
}
})
It is important to question the value of your tests. It should not be a goal to have
as many tests as possible, but rather to have as much confidence as possible in
your code base. Having too many tests can turn into a real problem and it just
adds more overhead in maintenance. Every test has a cost.
In our case, you can see that having two tests for this function is redundant. If it
works for a slice of one size it's very likely it'll work for a slice of any size
(within reason).
Go's built-in testing toolkit features a coverage tool, which can help identify
areas of your code you have not covered. I do want to stress that having 100%
coverage should not be your goal, it's just a tool to give you an idea of your
coverage. If you have been strict with TDD, it's quite likely you'll have close to
100% coverage anyway.
Try running
go test -cover
PASS
coverage: 100.0% of statements
Now delete one of the tests and check the coverage again.
Now that we are happy we have a well-tested function you should commit your
great work before taking on the next challenge.
We need a new function called SumAll which will take a varying number of
slices, returning a new slice containing the totals for each slice passed in.
For example, SumAll([]int{1, 2}, []int{0, 9}) would return []int{3, 9}.
func TestSumAll(t *testing.T) {
	got := SumAll([]int{1, 2}, []int{0, 9})
	want := []int{3, 9}
	if got != want {
		t.Errorf("got %v want %v", got, want)
	}
}
Try and run the test
./sum_test.go:23:9: undefined: SumAll
Go can let you write variadic functions that can take a variable number of
arguments.
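A minimal variadic stub, just enough to get the test compiling and move on to the next problem, might look like:

func SumAll(numbersToSum ...[]int) []int {
	return nil
}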
Go does not let you use equality operators with slices. You could write a
function to iterate over each got and want slice and check their values but for
convenience sake, we can use reflect.DeepEqual which is useful for seeing if
any two variables are the same.
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v want %v", got, want)
}
}
(make sure you import reflect in the top of your file to have access to
DeepEqual)
It's important to note that reflect.DeepEqual is not "type safe", the code will
compile even if you did something a bit silly. To see this in action, temporarily
change the test to:
got := SumAll([]int{1, 2}, []int{0, 9})
want := "bob"

if !reflect.DeepEqual(got, want) {
	t.Errorf("got %v want %v", got, want)
}
What we have done here is try to compare a slice with a string. Which makes
no sense, but the test compiles! So while using reflect.DeepEqual is a
convenient way of comparing slices (and other things) you must be careful when
using it.
Change the test back again and run it, you should have test output looking like
this
sum_test.go:30: got [] want [3 9]
func SumAll(numbersToSum ...[]int) []int {
	lengthOfNumbers := len(numbersToSum)
	sums := make([]int, lengthOfNumbers)

	for i, numbers := range numbersToSum {
		sums[i] = Sum(numbers)
	}

	return sums
}
There's a new way to create a slice. make allows you to create a slice with a
starting capacity of the len of the numbersToSum we need to work through.
You can index slices like arrays with mySlice[N] to get the value out or assign it
a new value with =
Refactor
As mentioned, slices have a capacity. If you have a slice with a capacity of 2 and
try to do mySlice[10] = 1 you will get a runtime error.
However, you can use the append function which takes a slice and a new value,
returning a new slice with all the items in it.
func SumAll(numbersToSum ...[]int) []int {
	var sums []int
	for _, numbers := range numbersToSum {
		sums = append(sums, Sum(numbers))
	}

	return sums
}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v want %v", got, want)
}
}
return sums
}
Slices can be sliced! The syntax is slice[low:high] If you omit the value on
one of the sides of the : it captures everything to the side of it. In our case, we
are saying "take from 1 to the end" with numbers[1:]. You might want to invest
some time in writing other tests around slices and experimenting with the slice
operator so you can be familiar with it.
Refactor
Not a lot to refactor this time.
What do you think would happen if you passed in an empty slice into our
function? What is the "tail" of an empty slice? What happens when you tell Go
to capture all elements from myEmptySlice[1:]?
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v want %v", got, want)
}
})
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v want %v", got, want)
}
})
Oh no! It's important to note the test has compiled, it is a runtime error. Compile
time errors are our friend because they help us write software that works,
runtime errors are our enemies because they affect our users.
return sums
}
Refactor
Our tests have some repeated code around assertion again, let's extract that into a
function
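Based on the usage described below, the helper looks roughly like this:

func checkSums(t *testing.T, got, want []int) {
	t.Helper()
	if !reflect.DeepEqual(got, want) {
		t.Errorf("got %v want %v", got, want)
	}
}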
A handy side-effect of this is that it adds a little type-safety to our code. If a silly
developer adds a new test with checkSums(t, got, "dave") the compiler will
stop them in their tracks.
$ go test
./sum_test.go:52:21: cannot use "dave" (type string) as type []int in argument
Wrapping up
We have covered
Arrays
Slices
The various ways to make them
How they have a fixed capacity but you can create new slices from old ones
using append
How to slice, slices!
len to get the length of an array or slice
Test coverage tool
reflect.DeepEqual and why it's useful but can reduce the type-safety of
your code
We've used slices and arrays with integers but they work with any other type too,
including arrays/slices themselves. So you can declare a variable of [][]string
if you need to.
Check out the Go blog post on slices for an in-depth look into slices. Try writing
more tests to demonstrate what you learn from reading it.
Another handy way to experiment with Go other than writing tests is the Go
playground. You can try most things out and you can easily share your code if
you need to ask questions. I have made a go playground with a slice in it for you
to experiment with.
Here is an example of slicing an array and how changing the slice affects the
original array; but a "copy" of the slice will not affect the original array. Another
example of why it's a good idea to make a copy of a slice after slicing a very
large slice.
Structs, methods & interfaces
You can find all the code for this chapter here
if got != want {
t.Errorf("got %.2f want %.2f", got, want)
}
}
Notice the new format string? The f is for our float64 and the .2 means print 2
decimal places.
if got != want {
t.Errorf("got %.2f want %.2f", got, want)
}
}
if got != want {
t.Errorf("got %.2f want %.2f", got, want)
}
}
Refactor
Our code does the job, but it doesn't contain anything explicit about rectangles.
An unwary developer might try to supply the width and height of a triangle to
these functions without realising they will return the wrong answer.
We could just give the functions more specific names like RectangleArea. A
neater solution is to define our own type called Rectangle which encapsulates
this concept for us.
We can create a simple type using a struct. A struct is just a named collection of
fields where you can store data.
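For our rectangle that is (field names match the test output shown later in this chapter):

type Rectangle struct {
	Width  float64
	Height float64
}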
Now let's refactor the tests to use Rectangle instead of plain float64s.
if got != want {
t.Errorf("got %.2f want %.2f", got, want)
}
}
if got != want {
t.Errorf("got %.2f want %.2f", got, want)
}
}
Remember to run your tests before attempting to fix, you should get a helpful
error like
./shapes_test.go:7:18: not enough arguments in call to Perimeter
have (Rectangle)
want (float64, float64)
You can access the fields of a struct with the syntax of myStruct.field.
I hope you'll agree that passing a Rectangle to a function conveys our intent
more clearly but there are more benefits of using structs that we will get on to.
if got != want {
t.Errorf("got %g want %g", got, want)
}
})
if got != want {
t.Errorf("got %g want %g", got, want)
}
})
As you can see, the f has been replaced by g; with f it could be difficult to
know the exact decimal number, whereas with g we get a complete decimal number in
the error message (fmt options).
You can have functions with the same name declared in different packages.
So we could create our Area(Circle) in a new package, but that feels
overkill here.
We can define methods on our newly defined types instead.
Methods are very similar to functions but they are called by invoking them on an
instance of a particular type. Where you can just call functions wherever you
like, such as Area(rectangle) you can only call methods on "things".
An example will help so let's change our tests first to call methods instead and
then fix the code.
func TestArea(t *testing.T) {
if got != want {
t.Errorf("got %g want %g", got, want)
}
})
if got != want {
t.Errorf("got %g want %g", got, want)
}
})
I would like to reiterate how great the compiler is here. It is so important to take
the time to slowly read the error messages you get, it will help you in the long
run.
The syntax for declaring methods is almost the same as functions and that's
because they're so similar. The only difference is the syntax of the method
receiver func (receiverName ReceiverType) MethodName(args).
When your method is called on a variable of that type, you get your reference to
its data via the receiverName variable. In many other programming languages
this is done implicitly and you access the receiver via this.
In our case the receiver is declared as r Rectangle; by convention in Go the
receiver variable is the first letter of the type's name.
If you try to re-run the tests they should now compile and give you some failing
output.
If you re-run the tests the rectangle tests should be passing but circle should still
be failing.
To make circle's Area function pass we will borrow the Pi constant from the
math package (remember to import it).
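Putting the pieces together, the two methods end up looking like this (the Circle definition is inferred from the test cases later in this chapter):

type Circle struct {
	Radius float64
}

func (r Rectangle) Area() float64 {
	return r.Width * r.Height
}

func (c Circle) Area() float64 {
	// math.Pi comes from the standard library math package
	return math.Pi * c.Radius * c.Radius
}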
Refactor
There is some duplication in our tests.
All we want to do is take a collection of shapes, call the Area() method on them
and then check the result.
We want to be able to write some kind of checkArea function that we can pass
both Rectangles and Circles to, but fail to compile if we try to pass in
something that isn't a shape.
We are creating a helper function like we have in other exercises but this time
we are asking for a Shape to be passed in. If we try to call this with something
that isn't a shape, then it will not compile.
How does something become a shape? We just tell Go what a Shape is using an
interface declaration
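In our case the declaration is just an interface with the one method our helper needs:

type Shape interface {
	Area() float64
}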
We're creating a new type just like we did with Rectangle and Circle but this
time it is an interface rather than a struct.
Once you add this to the code, the tests will pass.
Wait, what?
This is quite different to interfaces in most other programming languages.
Normally you have to write code to say My type Foo implements interface
Bar.
In Go interface resolution is implicit. If the type you pass in matches what the
interface is asking for, it will compile.
Decoupling
Notice how our helper does not need to concern itself with whether the shape is
a Rectangle or a Circle or a Triangle. By declaring an interface the helper is
decoupled from the concrete types and just has the method it needs to do its job.
This kind of approach of using interfaces to declare only what you need is very
important in software design and will be covered in more detail in later sections.
Further refactoring
Now that you have some understanding of structs we can introduce "table driven
tests".
Table driven tests are useful when you want to build a list of test cases that can
be tested in the same manner.
areaTests := []struct {
shape Shape
want float64
}{
{Rectangle{12, 6}, 72.0},
{Circle{10}, 314.1592653589793},
}
The only new syntax here is creating an "anonymous struct", areaTests. We are
declaring a slice of structs by using []struct with two fields, the shape and the
want. Then we fill the slice with cases.
We then iterate over them just like we do any other slice, using the struct fields
to run our tests.
You can see how it would be very easy for a developer to introduce a new shape,
implement Area and then add it to the test cases. In addition, if a bug is found
with Area it is very easy to add a new test case to exercise it before fixing it.
Table based tests can be a great item in your toolbox but be sure that you have a
need for the extra noise in the tests. If you wish to test various implementations
of an interface, or if the data being passed in to a function has lots of different
requirements that need testing then they are a great fit.
Let's demonstrate all this by adding another shape and testing it; a triangle.
areaTests := []struct {
shape Shape
want float64
}{
{Rectangle{12, 6}, 72.0},
{Circle{10}, 314.1592653589793},
{Triangle{12, 6}, 36.0},
}
for _, tt := range areaTests {
got := tt.shape.Area()
if got != tt.want {
t.Errorf("got %g want %g", got, tt.want)
}
}
Try again
./shapes_test.go:25:8: cannot use Triangle literal (type Triangle) as type Shap
Triangle does not implement Shape (missing Area method)
It's telling us we cannot use a Triangle as a shape because it does not have an
Area() method, so add an empty implementation to get the test working
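A sketch of the Triangle type and, once the stub has been filled in, its Area method (fields taken from the test cases below):

type Triangle struct {
	Base   float64
	Height float64
}

func (t Triangle) Area() float64 {
	return (t.Base * t.Height) * 0.5
}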
Refactor
Again, the implementation is fine but our tests could do with some improvement.
It's not immediately clear what all the numbers represent and you should be
aiming for your tests to be easily understood.
So far you've only been shown syntax for creating instances of structs
MyStruct{val1, val2} but you can optionally name the fields.
Now our tests (at least the list of cases) make assertions of truth about shapes
and their areas.
We knew this was in relation to Triangle because we were just working with it,
but what if a bug slipped in to the system in one of 20 cases in the table? How
would a developer know which case failed? This is not a great experience for the
developer, they will have to manually look through the cases to find out which
case actually failed.
We can change our error message into %#v got %.2f want %.2f. The %#v
format string will print out our struct with the values in its field, so the developer
can see at a glance the properties that are being tested.
To increase the readability of our test cases further we can rename the want field
into something more descriptive like hasArea.
One final tip with table driven tests is to use t.Run and to name the test cases.
By wrapping each case in a t.Run you will have clearer test output on failures as
it will print the name of the case
--- FAIL: TestArea (0.00s)
--- FAIL: TestArea/Rectangle (0.00s)
shapes_test.go:33: main.Rectangle{Width:12, Height:6} got 72.00 want 72
And you can run specific tests within your table with go test -run
TestArea/Rectangle.
areaTests := []struct {
	name    string
	shape   Shape
	hasArea float64
}{
	{name: "Rectangle", shape: Rectangle{Width: 12, Height: 6}, hasArea: 72.0},
	{name: "Circle", shape: Circle{Radius: 10}, hasArea: 314.1592653589793},
	{name: "Triangle", shape: Triangle{Base: 12, Height: 6}, hasArea: 36.0},
}
Wrapping up
This was more TDD practice, iterating over our solutions to basic mathematical
problems and learning new language features motivated by our tests.
Declaring structs to create your own data types which lets you bundle
related data together and make the intent of your code clearer
Declaring interfaces so you can define functions that can be used by
different types (parametric polymorphism)
Adding methods so you can add functionality to your data types and so you
can implement interfaces
Table based tests to make your assertions clearer and your suites easier to
extend & maintain
This was an important chapter because we are now starting to define our own
types. In statically typed languages like Go, being able to design your own types
is essential for building software that is easy to understand, to piece together and
to test.
Interfaces are a great tool for hiding complexity away from other parts of the
system. In our case our test helper code did not need to know the exact shape it
was asserting on, only how to "ask" for its area.
As you become more familiar with Go you start to see the real strength of
interfaces and the standard library. You'll learn about interfaces defined in the
standard library that are used everywhere and by implementing them against
your own types you can very quickly re-use a lot of great functionality.
Pointers & errors
You can find all the code for this chapter here
We learned about structs in the last section, which let us capture a number of
values related to a concept.
At some point you may wish to use structs to manage state, exposing methods to
let users change the state in a way that you can control.
Fintech loves Go and uhhh bitcoins? So let's show what an amazing banking
system we can make.
func TestWallet(t *testing.T) {
	wallet := Wallet{}
	wallet.Deposit(10)

	got := wallet.Balance()
	want := 10

	if got != want {
		t.Errorf("got %d want %d", got, want)
	}
}
In the previous example we accessed fields directly with the field name,
however in our very secure wallet we don't want to expose our inner state to the
rest of the world. We want to control access via methods.
Now we've made our wallet, try and run the test again
Remember to only do enough to make the tests run. We need to make sure our
test fails correctly with a clear error message.
In our case we want our methods to be able to manipulate this value but no one
else.
Remember we can access the internal balance field in the struct using the
"receiver" variable.
With our career in fintech secured, run our tests and bask in the passing test
wallet_test.go:15: got 0 want 10
????
Well this is confusing, our code looks like it should work, we add the new
amount onto our balance and then the balance method should return the current
state of it.
In Go, when you call a function or a method the arguments are copied.
wallet := Wallet{}
wallet.Deposit(10)
got := wallet.Balance()
want := 10
if got != want {
t.Errorf("got %d want %d", got, want)
}
}
The \n escape character prints a new line after outputting the memory address.
We get a pointer to something with the address-of symbol, &.
You can see that the addresses of the two balances are different. So when we
change the value of the balance inside the code, we are working on a copy of
what came from the test. Therefore the balance in the test is unchanged.
We can fix this with pointers. Pointers let us point to some values and then let us
change them. So rather than taking a copy of the Wallet, we take a pointer to the
wallet so we can change it.
The difference is the receiver type is *Wallet rather than Wallet which you can
read as "a pointer to a wallet".
Now you might wonder, why did they pass? We didn't dereference the pointer in
the function, like so:
and seemingly addressed the object directly. In fact, the code above using (*w)
is absolutely valid. However, the makers of Go deemed this notation
cumbersome, so the language permits us to write w.balance, without explicit
dereference. These pointers to structs even have their own name: struct pointers
and they are automatically dereferenced.
Refactor
We said we were making a Bitcoin wallet but we have not mentioned them so
far. We've been using int because they're a good type for counting things!
It seems a bit overkill to create a struct for this. int is fine in terms of the way
it works but it's not descriptive.
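Making the new type is a one-liner; the Wallet's field and method signatures then change from int to Bitcoin:

type Bitcoin int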
wallet := Wallet{}
wallet.Deposit(Bitcoin(10))
got := wallet.Balance()
want := Bitcoin(10)
if got != want {
t.Errorf("got %d want %d", got, want)
}
}
By doing this we're making a new type and we can declare methods on them.
This can be very useful when you want to add some domain specific
functionality on top of existing types.
This interface is defined in the fmt package and lets you define how your type is
printed when used with the %s format string in prints.
As you can see, the syntax for creating a method on a type alias is the same as it
is on a struct.
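The interface in question is fmt.Stringer, and our implementation could look like this (the exact wording of the printed string is a guess here):

type Stringer interface {
	String() string
}

func (b Bitcoin) String() string {
	return fmt.Sprintf("%d BTC", b)
}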
Next we need to update our test format strings so they will use String() instead.
if got != want {
t.Errorf("got %s want %s", got, want)
}
wallet.Deposit(Bitcoin(10))
got := wallet.Balance()
want := Bitcoin(10)
if got != want {
t.Errorf("got %s want %s", got, want)
}
})
wallet.Withdraw(Bitcoin(10))
got := wallet.Balance()
want := Bitcoin(10)
if got != want {
t.Errorf("got %s want %s", got, want)
}
})
}
Refactor
There's some duplication in our tests, let's refactor that out.

func assertBalance(t *testing.T, wallet Wallet, want Bitcoin) {
	t.Helper()
	got := wallet.Balance()

	if got != want {
		t.Errorf("got %s want %s", got, want)
	}
}
What should happen if you try to Withdraw more than is left in the account? For
now, our requirement is to assume there is not an overdraft facility.
In Go, if you want to indicate an error it is idiomatic for your function to return
an err for the caller to check and act on.
if err == nil {
t.Error("wanted an error but didn't get one")
}
})
We want Withdraw to return an error if you try to take out more than you have
and the balance should stay the same.
nil is synonymous with null from other programming languages. Errors can be
nil because the return type of Withdraw will be error, which is an interface. If
you see a function that takes arguments or returns values that are interfaces, they
can be nillable.
Like null if you try to access a value that is nil it will throw a runtime panic.
This is bad! You should make sure that you check for nils.
The wording is perhaps a little unclear, but our previous intent with Withdraw
was just to call it, it will never return a value. To make this compile we will need
to change it so it has a return type.
Again, it is very important to just write enough code to satisfy the compiler. We
correct our Withdraw method to return error and for now we have to return
something so let's just return nil.
func (w *Wallet) Withdraw(amount Bitcoin) error {
	w.balance -= amount
	return nil
}
Refactor
Let's make a quick test helper for our error check just to help our test read clearer
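Based on the assertions above, the helper looks roughly like:

func assertError(t *testing.T, err error) {
	t.Helper()
	if err == nil {
		t.Error("wanted an error but didn't get one")
	}
}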
Hopefully when returning an error of "oh no" you were thinking that we might
iterate on that because it doesn't seem that useful to return.
Assuming that the error ultimately gets returned to the user, let's update our test
to assert on some kind of error message rather than just the existence of an error.
if got.Error() != want {
t.Errorf("got %q, want %q", got, want)
}
}
We've introduced t.Fatal which will stop the test if it is called. This is because
we don't want to make any more assertions on the error returned if there isn't one
around. Without this the test would carry on to the next step and panic because
of a nil pointer.
w.balance -= amount
return nil
}
Refactor
We have duplication of the error message in both the test code and the Withdraw
code.
It would be really annoying for the test to fail if someone wanted to re-word the
error and it's just too much detail for our test. We don't really care what the exact
wording is, just that some kind of meaningful error around withdrawing is
returned given a certain condition.
In Go, errors are values, so we can refactor it out into a variable and have a
single source of truth for it.
func (w *Wallet) Withdraw(amount Bitcoin) error {
	if amount > w.balance {
		return ErrInsufficientFunds
	}

	w.balance -= amount
	return nil
}
This is a positive change in itself because now our Withdraw function looks very
clear.
Next we can refactor our test code to use this value instead of specific strings.
if got != want {
t.Errorf("got %q want %q", got, want)
}
}
if got != want {
t.Errorf("got %q, want %q", got, want)
}
}
I have moved the helpers out of the main test function just so when someone
opens up a file they can start reading our assertions first, rather than some
helpers.
Another useful property of tests is that they help us understand the real usage of
our code so we can make sympathetic code. We can see here that a developer
can simply call our code and do an equals check to ErrInsufficientFunds and
act accordingly.
Unchecked errors
Whilst the Go compiler helps you a lot, sometimes there are things you can still
miss and error handling can sometimes be tricky.
There is one scenario we have not tested. To find it, run the following in a
terminal to install errcheck, one of many linters available for Go.
go get -u github.com/kisielk/errcheck
What this is telling us is that we have not checked the error being returned on
that line of code. That line of code on my computer corresponds to our normal
withdraw scenario because we have not checked that if the Withdraw is
successful that an error is not returned.
if got != want {
t.Errorf("got %s want %s", got, want)
}
}
if got != want {
t.Errorf("got %s, want %s", got, want)
}
}
Wrapping up
Pointers
Go copies values when you pass them to functions/methods so if you're
writing a function that needs to mutate state you'll need it to take a pointer
to the thing you want to change.
The fact that Go takes a copy of values is useful a lot of the time but
sometimes you won't want your system to make a copy of something, in
which case you need to pass a reference. Examples could be very large data
or perhaps things you intend only to have one instance of (like database
connection pools).
nil
Pointers can be nil
When a function returns a pointer to something, you need to make sure you
check if it's nil or you might raise a runtime exception, the compiler won't
help you here.
Useful for when you want to describe a value that could be missing
Errors
Errors are the way to signify failure when calling a function/method.
By listening to our tests we concluded that checking for a string in an error
would result in a flaky test. So we refactored to use a meaningful value
instead and this resulted in easier to test code and concluded this would be
easier for users of our API too.
This is not the end of the story with error handling, you can do more
sophisticated things but this is just an intro. Later sections will cover more
strategies.
Don’t just check errors, handle them gracefully
Pointers and errors are a big part of writing Go that you need to get comfortable
with. Thankfully the compiler will usually help you out if you do something
wrong, just take your time and read the error.
Maps
You can find all the code for this chapter here
In arrays & slices, you saw how to store values in order. Now, we will look at a
way to store items by a key and look them up quickly.
Maps allow you to store items in a manner similar to a dictionary. You can think
of the key as the word and the value as the definition. And what better way is
there to learn about Maps than to build our own dictionary?
First, assuming we already have some words with their definitions in the
dictionary, if we search for a word, it should return the definition of it.
package main

import "testing"

func TestSearch(t *testing.T) {
	dictionary := map[string]string{"test": "this is just a test"}

	got := Search(dictionary, "test")
	want := "this is just a test"
	if got != want {
		t.Errorf("got %q want %q given, %q", got, want, "test")
	}
}
Declaring a Map is somewhat similar to an array. Except, it starts with the map
keyword and requires two types. The first is the key type, which is written inside
the []. The second is the value type, which goes right after the [].
The key type is special. It can only be a comparable type because without the
ability to tell if 2 keys are equal, we have no way to ensure that we are getting
the correct value. Comparable types are explained in depth in the language spec.
The value type, on the other hand, can be any type you want. It can even be
another map.
package main

func Search(dictionary map[string]string, word string) string {
	return dictionary[word]
}
Refactor
func assertStrings(t *testing.T, got, want string) {
	t.Helper()

	if got != want {
		t.Errorf("got %q want %q", got, want)
	}
}
In dictionary_test.go:
got := dictionary.Search("test")
want := "this is just a test"
assertStrings(t, got, want)
}
We started using the Dictionary type, which we have not defined yet. Then
called Search on the Dictionary instance.
In dictionary.go:
Here we created a Dictionary type which acts as a thin wrapper around map.
With the custom type defined, we can create the Search method.
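The type and method described here are:

type Dictionary map[string]string

func (d Dictionary) Search(word string) string {
	return d[word]
}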
What happens if we search for a word that isn't in our dictionary? With the
current implementation we actually get nothing back. This is good because the
program can continue to run, but there is a better approach. The function can
report that the word is not in the dictionary. This way, the user isn't left
wondering if the word doesn't exist or if there is just no definition (this might
not seem very useful for a dictionary. However, it's a scenario that could be key
in other use cases).
if err == nil {
t.Fatal("expected to get an error.")
}
Your test should now fail with a much clearer error message.
dictionary_test.go:22: expected to get an error.
Write enough code to make it pass
In order to make this pass, we are using an interesting property of the map
lookup. It can return 2 values. The second value is a boolean which indicates if
the key was found successfully.
This property allows us to differentiate between a word that doesn't exist and a
word that just doesn't have a definition.
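A sketch of how Search might use that two-value lookup, assuming the Dictionary type from earlier and the errors package; this is the pre-refactor version with the error message still inlined:

func (d Dictionary) Search(word string) (string, error) {
    definition, ok := d[word]
    if !ok {
        // the boolean tells us the key was missing entirely
        return "", errors.New("could not find the word you were looking for")
    }
    return definition, nil
}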
Refactor
var ErrNotFound = errors.New("could not find the word you were looking for")
We can get rid of the magic error in our Search function by extracting it into a
variable. This will also allow us to have a better test.
if got != want {
t.Errorf("got error %q want %q", got, want)
}
}
By creating a new helper we were able to simplify our test, and start using our
ErrNotFound variable so our test doesn't fail if we change the error text in the
future.
if got != want {
t.Errorf("got %q want %q", got, want)
}
}
In this test, we are utilizing our Search function to make the validation of the
dictionary a little easier.
Adding to a map is also similar to an array. You just need to specify a key and
set it equal to a value.
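A first version of Add can therefore be very small (a sketch; handling of words that already exist comes later in the chapter):

func (d Dictionary) Add(word, definition string) {
    d[word] = definition
}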
Pointers, copies, et al
An interesting property of maps is that you can modify them without passing an
address to them (e.g. &myMap).
This may make them feel like a "reference type", but as Dave Cheney describes
they are not.
So when you pass a map to a function/method, you are indeed copying it, but
just the pointer part, not the underlying data structure that contains the data.
A gotcha with maps is that they can be a nil value. A nil map behaves like an
empty map when reading, but attempts to write to a nil map will cause a
runtime panic. You can read more about maps here.
var m map[string]string
Instead, you can initialize an empty map like we were doing above, or use the
make keyword to create a map for you:
var dictionary = map[string]string{}
// OR
var dictionary = make(map[string]string)
Both approaches create an empty hash map and point dictionary at it, which
ensures that you will never get a runtime panic.
Refactor
There isn't much to refactor in our implementation but the test could use a little
simplification.
dictionary.Add(word, definition)
if definition != got {
t.Errorf("got %q want %q", got, definition)
}
}
We made variables for word and definition, and moved the definition assertion
into its own helper function.
Our Add is looking good. Except, we didn't consider what happens when the
value we are trying to add already exists!
Maps will not throw an error if the value already exists. Instead, they will go
ahead and overwrite the value with the newly provided value. This can be
convenient in practice, but it makes our function name less than accurate. Add
should not modify existing values. It should only add new words to our
dictionary.
For this test, we modified Add to return an error, which we are validating against
a new error variable, ErrWordExists. We also modified the previous test to
check for a nil error, as well as the assertError function.
var (
ErrNotFound   = errors.New("could not find the word you were looking for")
ErrWordExists = errors.New("cannot add word because it already exists")
)
func (d Dictionary) Add(word, definition string) error {
d[word] = definition
return nil
}
Now we get two more errors. We are still modifying the value, and returning a
nil error.
func (d Dictionary) Add(word, definition string) error {
    _, err := d.Search(word)

    switch err {
    case ErrNotFound:
        d[word] = definition
    case nil:
        return ErrWordExists
    default:
        return err
    }

    return nil
}
Here we are using a switch statement to match on the error. Having a switch
like this provides an extra safety net, in case Search returns an error other than
ErrNotFound.
Refactor
We don't have too much to refactor, but as our error usage grows we can make a
few modifications.
const (
ErrNotFound   = DictionaryErr("could not find the word you were looking for")
ErrWordExists = DictionaryErr("cannot add word because it already exists")
)
We made the errors constant; this required us to create our own DictionaryErr
type which implements the error interface. You can read more about the details
in this excellent article by Dave Cheney. Simply put, it makes the errors more
reusable and immutable.
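A sketch of such a type: a string-based type with an Error method, which is what lets the errors above be declared as constants while still satisfying the error interface.

type DictionaryErr string

func (e DictionaryErr) Error() string {
    return string(e)
}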
dictionary.Update(word, newDefinition)
Update is very closely related to Add and will be our next implementation.
With that in place, we are able to see that we need to change the definition of the
word.
dictionary_test.go:55: got 'this is just a test' want 'new definition'
We added yet another error type for when the word does not exist. We also
modified Update to return an error value.
We get 3 errors this time, but we know how to deal with these.
const (
ErrNotFound         = DictionaryErr("could not find the word you were looking for")
ErrWordExists       = DictionaryErr("cannot add word because it already exists")
ErrWordDoesNotExist = DictionaryErr("cannot update word because it does not exist")
)
func (d Dictionary) Update(word, definition string) error {
    _, err := d.Search(word)

    switch err {
    case ErrNotFound:
        return ErrWordDoesNotExist
    case nil:
        d[word] = definition
    default:
        return err
    }

    return nil
}
This function looks almost identical to Add except we switched when we update
the dictionary and when we return an error.
Having specific errors gives you more information about what went wrong. Here
is an example in a web app:
You can redirect the user when ErrNotFound is encountered, but display an
error message when ErrWordDoesNotExist is encountered.
dictionary.Delete(word)
_, err := dictionary.Search(word)
if err != ErrNotFound {
t.Errorf("Expected %q to be deleted", word)
}
}
Our test creates a Dictionary with a word and then checks if the word has been
removed.
After we add this, the test tells us we are not deleting the word.
dictionary_test.go:78: Expected 'test' to be deleted
Go has a built-in function delete that works on maps. It takes two arguments.
The first is the map and the second is the key to be removed.
The delete function returns nothing, and we based our Delete method on the
same notion. Since deleting a value that's not there has no effect, unlike our
Update and Add methods, we don't need to complicate the API with errors.
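A sketch of what Delete might look like using that built-in:

func (d Dictionary) Delete(word string) {
    delete(d, word)
}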
Wrapping up
In this section, we covered a lot. We made a full CRUD (Create, Read, Update
and Delete) API for our dictionary. Throughout the process we learned how to:
Create maps
Search for items in maps
Add new items to maps
Update items in maps
Delete items from a map
Learned more about errors
How to create errors that are constants
Writing error wrappers
Dependency Injection
You can find all the code for this chapter here
It is assumed that you have read the structs section before as some understanding
of interfaces will be needed for this.
We want to write a function that greets someone, just like we did in the hello-
world chapter but this time we are going to be testing the actual printing.
But how can we test this? Calling fmt.Printf prints to stdout, which is pretty
hard for us to capture using the testing framework.
What we need to do is to be able to inject (which is just a fancy word for pass
in) the dependency of printing.
Our function doesn't need to care where or how the printing happens, so we
should accept an interface rather than a concrete type.
// It returns the number of bytes written and any write error encountered.
func Printf(format string, a ...interface{}) (n int, err error) {
return Fprintf(os.Stdout, format, a...)
}
Interesting! Under the hood Printf just calls Fprintf passing in os.Stdout.
What exactly is an os.Stdout? What does Fprintf expect to get passed to it for
the 1st argument?
An io.Writer
As you write more Go code you will find this interface popping up a lot because
it's a great general purpose interface for "put this data somewhere".
So we know under the covers we're ultimately using Writer to send our greeting
somewhere. Let's use this existing abstraction to make our code testable and
more reusable.
got := buffer.String()
want := "Hello, Chris"
if got != want {
t.Errorf("got %q want %q", got, want)
}
}
The buffer type from the bytes package implements the Writer interface.
So we'll use it in our test to send in as our Writer and then we can check what
was written to it after we invoke Greet
The test fails. Notice that the name is getting printed out, but it's going to stdout.
Write enough code to make it pass
Use the writer to send the greeting to the buffer in our test. Remember
fmt.Fprintf is like fmt.Printf but instead takes a Writer to send the string to,
whereas fmt.Printf defaults to stdout.
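At this stage Greet might look something like this (a sketch, assuming the fmt and bytes imports):

func Greet(writer *bytes.Buffer, name string) {
    fmt.Fprintf(writer, "Hello, %s", name)
}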
Refactor
Earlier the compiler told us to pass in a pointer to a bytes.Buffer. This is
technically correct but not very useful.
func main() {
Greet(os.Stdout, "Elodie")
}
If we change our code to use the more general purpose interface we can now use
it in both tests and in our application.
package main

import (
    "fmt"
    "io"
    "os"
)

func Greet(writer io.Writer, name string) {
    fmt.Fprintf(writer, "Hello, %s", name)
}

func main() {
    Greet(os.Stdout, "Elodie")
}
More on io.Writer
What other places can we write data to using io.Writer? Just how general
purpose is our Greet function?
The internet
Run the following
package main
import (
    "fmt"
    "io"
    "net/http"
)

func Greet(writer io.Writer, name string) {
    fmt.Fprintf(writer, "Hello, %s", name)
}

func MyGreeterHandler(w http.ResponseWriter, r *http.Request) {
    Greet(w, "world")
}

func main() {
    http.ListenAndServe(":5000", http.HandlerFunc(MyGreeterHandler))
}
Run the program and go to http://localhost:5000. You'll see your greeting
function being used.
HTTP servers will be covered in a later chapter so don't worry too much about
the details.
When you write an HTTP handler, you are given an http.ResponseWriter and
the http.Request that was used to make the request. When you implement your
server you write your response using the writer.
Wrapping up
Our first round of code was not easy to test because it wrote data to somewhere
we couldn't control.
Motivated by our tests we refactored the code so we could control where the data
was written by injecting a dependency which allowed us to:
Test our code If you can't test a function easily, it's usually because of
dependencies hard-wired into a function or global state. If you have a
global database connection pool, for instance, that is used by some kind of
service layer, it is likely going to be difficult to test and the tests will be
slow to run. DI will motivate you to inject a database dependency (via an
interface) which you can then mock out with something you can control in
your tests.
Separate our concerns, decoupling where the data goes from how to
generate it. If you ever feel like a method/function has too many
responsibilities (generating data and writing to a db? handling HTTP
requests and doing domain level logic?) DI is probably going to be the tool
you need.
Allow our code to be re-used in different contexts The first "new"
context our code can be used in is inside tests. But further on if someone
wants to try something new with your function they can inject their own
dependencies.
What about mocking? I hear you need that for DI and
also it's evil
Mocking will be covered in detail later (and it's not evil). You use mocking to
replace real things you inject with a pretend version that you can control and
inspect in your tests. In our case though, the standard library had something
ready for us to use.
The more familiar you are with the standard library the more you'll see these
general purpose interfaces which you can then re-use in your own code to make
your software reusable in a number of contexts.
You have been asked to write a program which counts down from 3, printing
each number on a new line (with a 1 second pause) and when it reaches zero it
will print "Go!" and exit.
3
2
1
Go!
We'll tackle this by writing a function called Countdown which we will then put
inside a main program so it looks something like this:
package main
func main() {
Countdown()
}
While this is a pretty trivial program, to test it fully we will need as always to
take an iterative, test-driven approach.
What do I mean by iterative? We make sure we take the smallest steps we can to
have useful software.
We don't want to spend a long time with code that will theoretically work after
some hacking because that's often how developers fall down rabbit holes. It's an
important skill to be able to slice up requirements as small as you can so
you can have working software.
Print 3
Print 3, 2, 1 and Go!
Wait a second between each line
Write the test first
Our software needs to print to stdout and we saw how we could use DI to
facilitate testing this in the DI section.
func TestCountdown(t *testing.T) {
    buffer := &bytes.Buffer{}

    Countdown(buffer)

    got := buffer.String()
    want := "3"

    if got != want {
        t.Errorf("got %q want %q", got, want)
    }
}
In main we will send to os.Stdout so our users see the countdown printed
to the terminal.
In test we will send to bytes.Buffer so our tests can capture what data is
being generated.
Try again
The compiler is telling you what your function signature could be, so update it.
Perfect!
Refactor
We know that while *bytes.Buffer works, it would be better to use a general
purpose interface instead.
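A sketch of the refactored signature, which at this point still only prints 3:

func Countdown(out io.Writer) {
    fmt.Fprint(out, "3")
}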
To complete matters, let's now wire up our function into a main so we have some
working software to reassure ourselves we're making progress.
package main
import (
"fmt"
"io"
"os"
)
func main() {
Countdown(os.Stdout)
}
Yes this seems trivial but this approach is what I would recommend for any
project. Take a thin slice of functionality and make it work end-to-end,
backed by tests.
Countdown(buffer)
got := buffer.String()
want := `3
2
1
Go!`
if got != want {
t.Errorf("got %q want %q", got, want)
}
}
The backtick syntax is another way of creating a string but lets you put things
like newlines which is perfect for our test.
Use a for loop counting backwards with i-- and use fmt.Fprintln to print to
out with our number followed by a newline character. Finally, use fmt.Fprint to
send "Go!" afterward.
Refactor
There's not much to refactor other than refactoring some magic values into
named constants.
If you run the program now, you should get the desired output but we don't have
it as a dramatic countdown with the 1 second pauses.
Go lets you achieve this with time.Sleep. Try adding it in to our code.
time.Sleep(1 * time.Second)
fmt.Fprint(out, finalWord)
}
Mocking
The tests still pass and the software works as intended but we have some problems:
Our tests take 4 seconds to run.
Every forward thinking post about software development emphasises the importance of quick feedback loops.
Slow tests ruin developer productivity.
Imagine if the requirements get more sophisticated warranting more tests. Are we happy with 4s added to the test run for every new test of Countdown?
We have not tested an important property of our function.
We have a dependency on Sleeping which we need to extract so we can then
control it in our tests.
I made a design decision that our Countdown function would not be responsible
for how long the sleep is. This simplifies our code a little for now at least and
means a user of our function can configure that sleepiness however they like.
Spies are a kind of mock which can record how a dependency is used. They can
record the arguments sent in, how many times it has been called, etc. In our case,
we're keeping track of how many times Sleep() is called so we can check it in
our test.
Update the tests to inject a dependency on our Spy and assert that the sleep has
been called 4 times.
Countdown(buffer, spySleeper)
got := buffer.String()
want := `3
2
1
Go!`
if got != want {
t.Errorf("got %q want %q", got, want)
}
if spySleeper.Calls != 4 {
t.Errorf("not enough calls to sleeper, want 4 got %d", spySleeper.Calls
}
}
time.Sleep(1 * time.Second)
fmt.Fprint(out, finalWord)
}
If you try again, your main will no longer compile for the same reason
./main.go:26:11: not enough arguments in call to Countdown
have (*os.File)
want (io.Writer, Sleeper)
func main() {
sleeper := &DefaultSleeper{}
Countdown(os.Stdout, sleeper)
}
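DefaultSleeper, referenced above, might look something like this, simply delegating to time.Sleep:

type DefaultSleeper struct{}

func (d *DefaultSleeper) Sleep() {
    time.Sleep(1 * time.Second)
}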
Sleep
Print N
Sleep
Print N-1
Sleep
Print Go!
etc
Our latest change only asserts that it has slept 4 times, but those sleeps could
occur out of sequence.
When writing tests if you're not confident that your tests are giving you
sufficient confidence, just break it! (make sure you have committed your
changes to source control first though). Change the code to the following
sleeper.Sleep()
fmt.Fprint(out, finalWord)
}
If you run your tests they should still be passing even though the implementation
is wrong.
Let's use spying again with a new test to check the order of operations is correct.
We can now add a sub-test into our test suite which verifies our sleeps and prints
operate in the order we hope
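One way to write such a spy is a single type that implements both Sleeper and io.Writer and records the order in which they are called (the type name here is an assumption; the sleep and write values match the test below):

type SpyCountdownOperations struct {
    Calls []string
}

func (s *SpyCountdownOperations) Sleep() {
    s.Calls = append(s.Calls, sleep)
}

func (s *SpyCountdownOperations) Write(p []byte) (n int, err error) {
    s.Calls = append(s.Calls, write)
    return len(p), nil
}

const write = "write"
const sleep = "sleep"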
want := []string{
sleep,
write,
sleep,
write,
sleep,
write,
sleep,
write,
}
if !reflect.DeepEqual(want, spySleepPrinter.Calls) {
t.Errorf("wanted calls %v got %v", want, spySleepPrinter.Calls)
}
})
This test should now fail. Revert Countdown back to how it was to fix the test.
We now have two tests spying on the Sleeper so we can now refactor our test so
one is testing what is being printed and the other one is ensuring we're sleeping
in between the prints. Finally we can delete our first spy as it's not used
anymore.
got := buffer.String()
want := `3
2
1
Go!`
if got != want {
t.Errorf("got %q want %q", got, want)
}
})
want := []string{
sleep,
write,
sleep,
write,
sleep,
write,
sleep,
write,
}
if !reflect.DeepEqual(want, spySleepPrinter.Calls) {
t.Errorf("wanted calls %v got %v", want, spySleepPrinter.Calls)
}
})
}
We now have our function and its 2 important properties properly tested.
We are using duration to configure the time slept and sleep as a way to pass in
a sleep function. The signature of sleep is the same as for time.Sleep allowing
us to use time.Sleep in our real implementation and the following spy in our
tests:
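A sketch of the two types being described:

type ConfigurableSleeper struct {
    duration time.Duration
    sleep    func(time.Duration)
}

type SpyTime struct {
    durationSlept time.Duration
}

func (s *SpyTime) Sleep(duration time.Duration) {
    s.durationSlept = duration
}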
With our spy in place, we can create a new test for the configurable sleeper.
spyTime := &SpyTime{}
sleeper := ConfigurableSleeper{sleepTime, spyTime.Sleep}
sleeper.Sleep()
if spyTime.durationSlept != sleepTime {
t.Errorf("should have slept for %v but slept for %v", sleepTime, spyTim
}
}
There should be nothing new in this test; it is set up very similarly to the
previous mock tests.
You should see a very clear error message indicating that we do not have a
Sleep method created on our ConfigurableSleeper.
Write the minimal amount of code for the test to run and
check failing test output
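The Sleep method itself only needs to call the injected function with the configured duration (a sketch):

func (c *ConfigurableSleeper) Sleep() {
    c.sleep(c.duration)
}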
With this change all of the tests should be passing again and you might wonder
why all the hassle as the main program didn't change at all. Hopefully it becomes
clear after the following section.
func main() {
sleeper := &ConfigurableSleeper{1 * time.Second, time.Sleep}
Countdown(os.Stdout, sleeper)
}
If we run the tests and the program manually, we can see that all the behavior
remains the same.
If your mocking code is becoming complicated or you are having to mock out
lots of things to test something, you should listen to that bad feeling and think
about your code. Usually it is a sign of
The thing you are testing is having to do too many things (because it has
too many dependencies to mock)
Break the module apart so it does less
Its dependencies are too fine-grained
Think about how you can consolidate some of these dependencies into one
meaningful module
Your test is too concerned with implementation details
Favour testing expected behaviour rather than the implementation
This is usually a sign of you testing too much implementation detail. Try to
make it so your tests are testing useful behaviour unless the implementation is
really important to how the system runs.
It is sometimes hard to know what level to test exactly but here are some thought
processes and rules I try to follow:
The definition of refactoring is that the code changes but the behaviour
stays the same. If you have decided to do some refactoring, in theory you
should be able to make the commit without any test changes. So when
writing a test, ask yourself:
Am I testing the behaviour I want, or the implementation details?
If I were to refactor this code, would I have to make lots of changes to the
tests?
Although Go lets you test private functions, I would avoid it as private
functions are implementation detail to support public behaviour. Test the
public behaviour. Sandi Metz describes private functions as being "less
stable" and you don't want to couple your tests to them.
I feel like if a test is working with more than 3 mocks then it is a red flag
- time for a rethink on the design
Use spies with caution. Spies let you see the insides of the algorithm you
are writing which can be very useful but that means a tighter coupling
between your test code and the implementation. Be sure you actually care
about these details if you're going to spy on them
You should only use a mock generator that generates test doubles against an
interface. Any tool that overly dictates how tests are written, or that uses lots of
'magic', can get in the sea.
Wrapping up
More on TDD approach
When faced with less trivial examples, break the problem down into "thin
vertical slices". Try to get to a point where you have working software
backed by tests as soon as you can, to avoid getting in rabbit holes and
taking a "big bang" approach.
Once you have some working software it should be easier to iterate with
small steps until you arrive at the software you need.
> When to use iterative development? You should use iterative development only on projects that you want to succeed.
Martin Fowler
Mocking
Without mocking, important areas of your code will be untested. In our
case we would not be able to test that our code paused between each print,
but there are countless other examples. Calling a service that can fail?
Wanting to test your system in a particular state? It is very hard to test these
scenarios without mocking.
Without mocks you may have to set up databases and other third-party
things just to test simple business rules. You're likely to have slow tests,
resulting in slow feedback loops.
By having to spin up a database or a webservice to test something you're
likely to have fragile tests due to the unreliability of such services.
Once a developer learns about mocking it becomes very easy to over-test every
single facet of a system in terms of the way it works rather than what it does.
Always be mindful about the value of your tests and what impact they would
have in future refactoring.
In this post about mocking we have only covered Spies, which are a kind of
mock. The "proper" term for mocks, though, is "test doubles".
> The generic term he uses is a Test Double (think stunt double). Test Double is
a generic term for any case where you replace a production object for testing
purposes.
Under test doubles, there are various types like stubs, spies and indeed mocks!
Check out Martin Fowler's post for more detail.
Concurrency
You can find all the code for this chapter here
Here's the setup: a colleague has written a function, CheckWebsites, that checks
the status of a list of URLs.
package concurrency
return results
}
It returns a map of each URL checked to a boolean value - true for a good
response, false for a bad response.
You also have to pass in a WebsiteChecker which takes a single URL and
returns a boolean. This is used by the function to check all the websites.
Using dependency injection has allowed them to test the function without
making real HTTP calls, making it reliable and fast.
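Based on that description, the starting point probably looks something like this serial version (a sketch):

type WebsiteChecker func(string) bool

func CheckWebsites(wc WebsiteChecker, urls []string) map[string]bool {
    results := make(map[string]bool)

    for _, url := range urls {
        results[url] = wc(url)
    }

    return results
}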
package concurrency
import (
"reflect"
"testing"
)
func mockWebsiteChecker(url string) bool {
if url == "waat://furhurterwe.geds" {
return false
}
return true
}
want := map[string]bool{
"http://google.com": true,
"http://blog.gypsydave5.com": true,
"waat://furhurterwe.geds": false,
}
if !reflect.DeepEqual(want, got) {
t.Fatalf("Wanted %v, got %v", want, got)
}
}
The function is in production and being used to check hundreds of websites. But
your colleague has started to get complaints that it's slow, so they've asked you
to help speed it up.
Write a test
Let's use a benchmark to test the speed of CheckWebsites so that we can see the
effect of our changes.
package concurrency
import (
"testing"
"time"
)
The benchmark tests CheckWebsites using a slice of one hundred urls and uses a
new fake implementation of WebsiteChecker. slowStubWebsiteChecker is
deliberately slow. It uses time.Sleep to wait exactly twenty milliseconds and
then it returns true.
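A sketch of that benchmark; the placeholder URL string and the call to b.ResetTimer are assumptions here:

func slowStubWebsiteChecker(_ string) bool {
    time.Sleep(20 * time.Millisecond)
    return true
}

func BenchmarkCheckWebsites(b *testing.B) {
    urls := make([]string, 100)
    for i := 0; i < len(urls); i++ {
        urls[i] = "a url" // placeholder; the stub never inspects it
    }
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        CheckWebsites(slowStubWebsiteChecker, urls)
    }
}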
When we run the benchmark using go test -bench=. (or if you're in Windows
Powershell go test -bench="."):
pkg: github.com/gypsydave5/learn-go-with-tests/concurrency/v0
BenchmarkCheckWebsites-4 1 2249228637 ns/op
PASS
ok github.com/gypsydave5/learn-go-with-tests/concurrency/v0 2.268s
For instance, this morning I made a cup of tea. I put the kettle on and then, while
I was waiting for it to boil, I got the milk out of the fridge, got the tea out of the
cupboard, found my favourite mug, put the teabag into the cup and then, when
the kettle had boiled, I put the water in the cup.
What I didn't do was put the kettle on and then stand there blankly staring at the
kettle until it boiled, then do everything else once the kettle had boiled.
If you can understand why it's faster to make tea the first way, then you can
understand how we will make CheckWebsites faster. Instead of waiting for a
website to respond before sending a request to the next website, we will tell our
computer to make the next request while it is waiting.
package concurrency
return results
}
Because the only way to start a goroutine is to put go in front of a function call,
we often use anonymous functions when we want to start a goroutine. An
anonymous function literal looks just the same as a normal function declaration,
but without a name (unsurprisingly). You can see one above in the body of the
for loop.
Anonymous functions have a number of features which make them useful, two
of which we're using above. Firstly, they can be executed at the same time that
they're declared - this is what the () at the end of the anonymous function is
doing. Secondly they maintain access to the lexical scope they are defined in -
all the variables that are available at the point when you declare the anonymous
function are also available in the body of the function.
The body of the anonymous function above is just the same as the loop body was
before. The only difference is that each iteration of the loop will start a new
goroutine, concurrent with the current process (the CheckWebsites function),
each of which will add its result to the results map.
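Putting that together, this first concurrent attempt looks roughly like this (a sketch; as the next paragraphs explain, it is not yet correct):

func CheckWebsites(wc WebsiteChecker, urls []string) map[string]bool {
    results := make(map[string]bool)

    for _, url := range urls {
        go func() {
            results[url] = wc(url)
        }()
    }

    return results
}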
None of the goroutines that our for loop started had enough time to add their
result to the results map; the CheckWebsites function is too fast for them, and
it returns the still empty map.
To fix this we can just wait while all the goroutines do their work, and then
return. Two seconds ought to do it, right?
package concurrency
import "time"
time.Sleep(2 * time.Second)
return results
}
Now when we run the tests you get (or don't get - see above):
To fix this:
package concurrency
import (
"time"
)
time.Sleep(2 * time.Second)
return results
}
By giving each anonymous function a parameter for the url - u - and then calling
the anonymous function with the url as the argument, we make sure that the
value of u is fixed as the value of url for the iteration of the loop that we're
launching the goroutine in. u is a copy of the value of url, and so can't be
changed.
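The loop body then looks something like this:

    for _, url := range urls {
        go func(u string) {
            results[u] = wc(u)
        }(url)
    }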
But if you're unlucky (this is more likely if you run them with the benchmark as
you'll get more tries)
goroutine 8 [running]:
runtime.throw(0x12c5895, 0x15)
/usr/local/Cellar/go/1.9.3/libexec/src/runtime/panic.go:605 +0x95 fp=0x
runtime.mapassign_faststr(0x1271d80, 0xc42007acf0, 0x12c6634, 0x17, 0x0)
/usr/local/Cellar/go/1.9.3/libexec/src/runtime/hashmap_fast.go:
github.com/gypsydave5/learn-go-with-tests/concurrency/v3.WebsiteChecker.func1
/Users/gypsydave5/go/src/github.com/gypsydave5/learn-go-with-tests/conc
runtime.goexit()
/usr/local/Cellar/go/1.9.3/libexec/src/runtime/asm_amd64.s:2337
created by github.com/gypsydave5/learn-go-with-tests/concurrency/v3.WebsiteChec
/Users/gypsydave5/go/src/github.com/gypsydave5/learn-go-with-tests/conc
This is long and scary, but all we need to do is take a breath and read the
stacktrace: fatal error: concurrent map writes. Sometimes, when we run
our tests, two of the goroutines write to the results map at exactly the same time.
Maps in Go don't like it when more than one thing tries to write to them at once,
and so fatal error.
This is a race condition, a bug that occurs when the output of our software is
dependent on the timing and sequence of events that we have no control over.
Because we cannot control exactly when each goroutine writes to the results
map, we are vulnerable to two goroutines writing to it at the same time.
Go can help us to spot race conditions with its built in race detector. To enable
this feature, run the tests with the race flag: go test -race.
The details are, again, hard to read - but WARNING: DATA RACE is pretty
unambiguous. Reading into the body of the error we can see two different
goroutines performing writes on a map:
Write at 0x00c420084d20 by goroutine 8:
On top of that we can see the line of code where the write is happening:
/Users/gypsydave5/go/src/github.com/gypsydave5/learn-go-with-
tests/concurrency/v3/websiteChecker.go:12
Everything you need to know is printed to your terminal - all you have to do is
be patient enough to read it.
Channels
We can solve this data race by coordinating our goroutines using channels.
Channels are a Go data structure that can both receive and send values. These
operations, along with their details, allow communication between different
processes.
In this case we want to think about the communication between the parent
process and each of the goroutines that it makes to do the work of running the
WebsiteChecker function with the url.
package concurrency
return results
}
Now when we iterate over the urls, instead of writing to the map directly we're
sending a result struct for each call to wc to the resultChannel with a send
statement. This uses the <- operator, taking a channel on the left and a value on
the right:
// Send statement
resultChannel <- result{u, wc(u)}
The next for loop iterates once for each of the urls. Inside we're using a receive
expression, which assigns a value received from a channel to a variable. This
also uses the <- operator, but with the two operands now reversed: the channel is
now on the right and the variable that we're assigning to is on the left:
// Receive expression
result := <-resultChannel
By sending the results into a channel, we can control the timing of each write
into the results map, ensuring that it happens one at a time. Although each of the
calls of wc, and each send to the result channel, is happening in parallel inside its
own process, each of the results is being dealt with one at a time as we take
values out of the result channel with the receive expression.
We have parallelized the part of the code that we wanted to make faster, while
making sure that the part that cannot happen in parallel still happens linearly.
And we have communicated across the multiple processes involved by using
channels.
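Putting the pieces together, the channel-based version might look like this (a sketch; the result type embeds a string and a bool, as suggested by the send statement above):

type result struct {
    string
    bool
}

func CheckWebsites(wc WebsiteChecker, urls []string) map[string]bool {
    results := make(map[string]bool)
    resultChannel := make(chan result)

    for _, url := range urls {
        go func(u string) {
            resultChannel <- result{u, wc(u)}
        }(url)
    }

    for i := 0; i < len(urls); i++ {
        r := <-resultChannel
        results[r.string] = r.bool
    }

    return results
}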
pkg: github.com/gypsydave5/learn-go-with-tests/concurrency/v2
BenchmarkCheckWebsites-8 100 23406615 ns/op
PASS
ok github.com/gypsydave5/learn-go-with-tests/concurrency/v2 2.377s
Wrapping up
This exercise has been a little lighter on the TDD than usual. In a way we've
been taking part in one long refactoring of the CheckWebsites function; the
inputs and outputs never changed, it just got faster. But the tests we had in place,
as well as the benchmark we wrote, allowed us to refactor CheckWebsites in a
way that maintained confidence that the software was still working, while
demonstrating that it had actually become faster.
goroutines, the basic unit of concurrency in Go, which let us check more
than one website at the same time.
anonymous functions, which we used to start each of the concurrent
processes that check websites.
channels, to help organize and control the communication between the
different processes, allowing us to avoid a race condition bug.
the race detector which helped us debug problems with concurrent code
Make it fast
One formulation of an agile way of building software, often misattributed to
Kent Beck, is:
> Make it work, make it right, make it fast.
Where 'work' is making the tests pass, 'right' is refactoring the code, and 'fast' is
optimizing the code to make it, for example, run quickly. We can only 'make it
fast' once we've made it work and made it right. We were lucky that the code we
were given was already demonstrated to be working, and didn't need to be
refactored. We should never try to 'make it fast' before the other two steps have
been performed because
> Premature optimization is the root of all evil -- Donald Knuth
You have been asked to make a function called WebsiteRacer which takes two
URLs and "races" them by hitting them with an HTTP GET and returning the
URL which returned first. If none of them return within 10 seconds then it
should return an error.
want := fastURL
got := Racer(slowURL, fastURL)
if got != want {
t.Errorf("got %q, want %q", got, want)
}
}
We know this isn't perfect and has problems but it will get us going. It's
important not to get too hung-up on getting things perfect first time.
startB := time.Now()
http.Get(b)
bDuration := time.Since(startB)
return b
}
1. We use time.Now() to record just before we try and get the URL.
2. Then we use http.Get to try and get the contents of the URL. This function
returns an http.Response and an error but so far we are not interested in
these values.
3. time.Since takes the start time and returns a time.Duration of the
difference.
Once we have done this we simply compare the durations to see which is the
quickest.
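A sketch of that first implementation (assuming the net/http and time imports):

func Racer(a, b string) (winner string) {
    startA := time.Now()
    http.Get(a)
    aDuration := time.Since(startA)

    startB := time.Now()
    http.Get(b)
    bDuration := time.Since(startB)

    if aDuration < bDuration {
        return a
    }

    return b
}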
Problems
This may or may not make the test pass for you. The problem is we're reaching
out to real websites to test our own logic.
Testing code that uses HTTP is so common that Go has tools in the standard
library to help you test it.
Slow
Flaky
Can't test edge cases
Let's change our tests to use mocks so we have reliable servers to test against
that we can control.
slowURL := slowServer.URL
fastURL := fastServer.URL
want := fastURL
got := Racer(slowURL, fastURL)
if got != want {
t.Errorf("got %q, want %q", got, want)
}
slowServer.Close()
fastServer.Close()
}
The syntax may look a bit busy but just take your time.
All it's really saying is it needs a function that takes a ResponseWriter and a
Request, which is not too surprising for an HTTP server.
It turns out there's really no extra magic here, this is also how you would write
a real HTTP server in Go. The only difference is we are wrapping it in an
httptest.NewServer which makes it easier to use with testing, as it finds an
open port to listen on and then you can close it when you're done with your test.
Inside our two servers, we make the slow one have a short time.Sleep when we
get a request to make it slower than the other one. Both servers then write an OK
response with w.WriteHeader(http.StatusOK) back to the caller.
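Inside the test, the two servers might be created like this (a fragment; the 20 millisecond delay is an arbitrary choice and the net/http/httptest import is assumed):

slowServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    time.Sleep(20 * time.Millisecond)
    w.WriteHeader(http.StatusOK)
}))

fastServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(http.StatusOK)
}))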
If you re-run the test it will definitely pass now and should be faster. Play with
these sleeps to deliberately break the test.
Refactor
We have some duplication in both our production code and test code.
return b
}
defer slowServer.Close()
defer fastServer.Close()
slowURL := slowServer.URL
fastURL := fastServer.URL
want := fastURL
got := Racer(slowURL, fastURL)
if got != want {
t.Errorf("got %q, want %q", got, want)
}
}
defer
By prefixing a function call with defer it will now call that function at the end
of the containing function.
Sometimes you will need to cleanup resources, such as closing a file or in our
case closing a server so that it does not continue to listen to a port.
You want this to execute at the end of the function, but keep the instruction near
where you created the server for the benefit of future readers of the code.
Synchronising processes
Why are we testing the speeds of the websites one after another when Go is
great at concurrency? We should be able to check both at the same time.
We don't really care about the exact response times of the requests, we just
want to know which one comes back first.
To do this, we're going to introduce a new construct called select which helps
us synchronise processes really easily and clearly.
ping
We have defined a function ping which creates a chan struct{} and returns it.
In our case, we don't care what type is sent to the channel, we just want to signal
we are done and closing the channel works perfectly!
Why struct{} and not another type like a bool? Well, a chan struct{} is the
smallest data type available from a memory perspective so we get no allocation
versus a bool. Since we are closing and not sending anything on the chan, why
allocate anything?
Inside the same function, we start a goroutine which will send a signal into that
channel once we have completed http.Get(url).
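A sketch of ping as described:

func ping(url string) chan struct{} {
    ch := make(chan struct{})
    go func() {
        http.Get(url)
        close(ch) // closing the channel is our "done" signal
    }()
    return ch
}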
Notice how we have to use make when creating a channel; rather than say var ch
chan struct{}. When you use var the variable will be initialised with the
"zero" value of the type. So for string it is "", int it is 0, etc.
For channels the zero value is nil and if you try and send to it with <- it will
block forever because you cannot send to nil channels
If you recall from the concurrency chapter, you can wait for values to be sent to
a channel with myVar := <-ch. This is a blocking call, as you're waiting for a
value.
What select lets you do is wait on multiple channels. The first one to send a
value "wins" and the code underneath the case is executed.
We use ping in our select to set up two channels for each of our URLs.
Whichever one writes to its channel first will have its code executed in the
select, which results in its URL being returned (and being the winner).
After these changes, the intent behind our code is very clear and the
implementation is actually simpler.
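A sketch of the select-based Racer:

func Racer(a, b string) (winner string) {
    select {
    case <-ping(a):
        return a
    case <-ping(b):
        return b
    }
}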
Timeouts
Our final requirement was to return an error if Racer takes longer than 10
seconds.
defer serverA.Close()
defer serverB.Close()
if err == nil {
t.Error("expected an error but didn't get one")
}
})
We've made our test servers take longer than 10s to return to exercise this
scenario and we are expecting Racer to return two values now, the winning URL
(which we ignore in this test with _) and an error.
Change the signature of Racer to return the winner and an error. Return nil for
our happy cases.
The compiler will complain about your first test only looking for one value so
change that line to got, _ := Racer(slowURL, fastURL), knowing that we
should check we don't get an error in our happy scenario.
Slow tests
The problem we have is that this test takes 10 seconds to run. For such a simple
bit of logic, this doesn't feel great.
What we can do is make the timeout configurable. So in our test, we can have a
very short timeout and then when the code is used in the real world it can be set
to 10 seconds.
Our tests now won't compile because we're not supplying a timeout.
Before rushing in to add this default value to both our tests let's listen to them.
Our users and our first test can use Racer (which uses ConfigurableRacer under
the hood) and our sad path test can use ConfigurableRacer.
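A sketch of that split; the use of time.After for the timeout case and the exact error message are choices made here, not requirements:

var tenSecondTimeout = 10 * time.Second

func Racer(a, b string) (winner string, err error) {
    return ConfigurableRacer(a, b, tenSecondTimeout)
}

func ConfigurableRacer(a, b string, timeout time.Duration) (winner string, err error) {
    select {
    case <-ping(a):
        return a, nil
    case <-ping(b):
        return b, nil
    case <-time.After(timeout):
        return "", fmt.Errorf("timed out waiting for %s and %s", a, b)
    }
}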
defer slowServer.Close()
defer fastServer.Close()
slowURL := slowServer.URL
fastURL := fastServer.URL
want := fastURL
got, err := Racer(slowURL, fastURL)
if err != nil {
t.Fatalf("did not expect an error but got one %v", err)
}
if got != want {
t.Errorf("got %q, want %q", got, want)
}
})
t.Run("returns an error if a server doesn't respond within 10s",
server := makeDelayedServer(25 * time.Millisecond)
defer server.Close()
if err == nil {
t.Error("expected an error but didn't get one")
}
})
}
I added one final check on the first test to verify we don't get an error.
Wrapping up
select
Helps you wait on multiple channels.
httptest
A convenient way of creating test servers so you can have reliable and
controllable tests.
Using the same interfaces as the "real" net/http servers which is consistent
and less for you to learn.
Reflection
You can find all the code for this chapter here
From Twitter:
> golang challenge: write a function walk(x interface{}, fn func(string)) which takes a struct x and calls fn for all strings fields found inside. difficulty level: recursively.
What is interface?
We have enjoyed the type-safety that Go has offered us in terms of functions that
work with known types, such as string, int and our own types like
BankAccount.
This means that we get some documentation for free and the compiler will
complain if you try and pass the wrong type to a function.
You may come across scenarios though where you want to write a function
where you don't know the type at compile time.
Go lets us get around this with the type interface{} which you can think of as
just any type.
Our function will need to be able to work with lots of different things. As always
we'll take an iterative approach, writing tests for each new thing we want to
support and refactoring along the way until we're done.
expected := "Chris"
var got []string
x := struct {
Name string
}{expected}
if len(got) != 1 {
t.Errorf("wrong number of function calls, got %d want %d", len
}
}
We want to store a slice of strings (got) which stores which strings were
passed into fn by walk. Often in previous chapters, we have made dedicated
types for this to spy on function/method invocations but in this case, we can
just pass in an anonymous function for fn that closes over got.
We use an anonymous struct with a Name field of type string to go for the
simplest "happy" path.
Finally, call walk with x and the spy and for now just check the length of
got, we'll be more specific with our assertions once we've got something
very basic working.
The test should now be passing. The next thing we'll need to do is make a more
specific assertion on what our fn is being called with.
if got[0] != expected {
t.Errorf("got %q, want %q", got[0], expected)
}
This code is very unsafe and very naive, but remember: our goal when we are in
"red" (the tests failing) is to write the smallest amount of code possible. We then
write more tests to address our concerns.
We need to use reflection to have a look at x and try and look at its properties.
The reflect package has a function ValueOf which returns us a Value of a given
variable. This has ways for us to inspect a value, including its fields which we
use on the next line.
We then make some very optimistic assumptions about the value passed in
We look at the first and only field, there may be no fields at all which
would cause a panic
We then call String() which returns the underlying value as a string but
we know it would be wrong if the field was something other than a string.
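A sketch of this very first, deliberately naive version of walk:

func walk(x interface{}, fn func(input string)) {
    val := reflect.ValueOf(x)
    field := val.Field(0) // panics if there are no fields at all
    fn(field.String())
}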
Refactor
Our code is passing for the simple case but we know our code has a lot of
shortcomings.
We should refactor our test into a table based test to make this easier to continue
testing new scenarios.
cases := []struct{
Name string
Input interface{}
ExpectedCalls []string
} {
{
"Struct with one string field",
struct {
Name string
}{ "Chris"},
[]string{"Chris"},
},
}
if !reflect.DeepEqual(got, test.ExpectedCalls) {
t.Errorf("got %v, want %v", got, test.ExpectedCalls)
}
})
}
}
Now we can easily add a scenario to see what happens if we have more than one
string field.
{
"Struct with two string fields",
struct {
Name string
City string
}{"Chris", "London"},
[]string{"Chris", "London"},
}
val has a method NumField which returns the number of fields in the value. This
lets us iterate over the fields and call fn which passes our test.
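A sketch of the version using NumField:

func walk(x interface{}, fn func(input string)) {
    val := reflect.ValueOf(x)
    for i := 0; i < val.NumField(); i++ {
        field := val.Field(i)
        fn(field.String())
    }
}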
Refactor
It doesn't look like there's any obvious refactors here that would improve the
code so let's press on.
The next shortcoming in walk is that it assumes every field is a string. Let's
write a test for this scenario.
{
"Struct with non string field",
struct {
Name string
Age int
}{"Chris", 33},
[]string{"Chris"},
},
if field.Kind() == reflect.String {
fn(field.String())
}
}
}
Refactor
Again it looks like the code is reasonable enough for now.
The next scenario is what if it isn't a "flat" struct? In other words, what happens
if we have a struct with some nested fields?
{
"Nested fields",
struct {
Name string
Profile struct {
Age int
City string
}
}{"Chris", struct {
Age int
City string
}{33, "London"}},
[]string{"Chris", "London"},
},
But we can see that when you get inner anonymous structs the syntax gets a little
messy. There is a proposal to make it so the syntax would be nicer.
Let's just refactor this by making a known type for this scenario and reference it
in the test. There is a little indirection in that some of the code for our test is
outside the test but readers should be able to infer the structure of the struct by
looking at the initialisation.
Now we can add this to our cases which reads a lot clearer than before
{
"Nested fields",
Person{
"Chris",
Profile{33, "London"},
},
[]string{"Chris", "London"},
},
The problem is we're only iterating on the fields on the first level of the type's
hierarchy.
if field.Kind() == reflect.String {
fn(field.String())
}
if field.Kind() == reflect.Struct {
walk(field.Interface(), fn)
}
}
}
The solution is quite simple, we again inspect its Kind and if it happens to be a
struct we just call walk again on that inner struct.
Refactor
switch field.Kind() {
case reflect.String:
fn(field.String())
case reflect.Struct:
walk(field.Interface(), fn)
}
}
}
When you're doing a comparison on the same value more than once generally
refactoring into a switch will improve readability and make your code easier to
extend.
{
"Pointers to things",
&Person{
"Chris",
Profile{33, "London"},
},
[]string{"Chris", "London"},
},
if val.Kind() == reflect.Ptr {
val = val.Elem()
}
switch field.Kind() {
case reflect.String:
fn(field.String())
case reflect.Struct:
walk(field.Interface(), fn)
}
}
}
You can't use NumField on a pointer Value, we need to extract the underlying
value before we can do that by using Elem().
Refactor
Let's encapsulate the responsibility of extracting the reflect.Value from a
given interface{} into a function.
if val.Kind() == reflect.Ptr {
val = val.Elem()
}
return val
}
This actually adds more code but I feel the abstraction level is right.
{
"Slices",
[]Profile {
{33, "London"},
{34, "Reykjavík"},
},
[]string{"London", "Reykjavík"},
},
if val.Kind() == reflect.Slice {
for i:=0; i< val.Len(); i++ {
walk(val.Index(i).Interface(), fn)
}
return
}
switch field.Kind() {
case reflect.String:
fn(field.String())
case reflect.Struct:
walk(field.Interface(), fn)
}
}
}
Refactor
This works but it's yucky. No worries, we have working code backed by tests so
we are free to tinker all we like.
Our code at the moment does this but doesn't reflect it very well. We just have a
check at the start to see if it's a slice (with a return to stop the rest of the code
executing) and if it's not we just assume it's a struct.
Let's rework the code so instead we check the type first and then do our work.
switch val.Kind() {
case reflect.Struct:
for i:=0; i<val.NumField(); i++ {
walk(val.Field(i).Interface(), fn)
}
case reflect.Slice:
for i:=0; i<val.Len(); i++ {
walk(val.Index(i).Interface(), fn)
}
case reflect.String:
fn(val.String())
}
}
Looking much better! If it's a struct or a slice we iterate over its values calling
walk on each one. Otherwise, if it's a reflect.String we can call fn.
numberOfValues := 0
var getField func(int) reflect.Value
switch val.Kind() {
case reflect.String:
fn(val.String())
case reflect.Struct:
numberOfValues = val.NumField()
getField = val.Field
case reflect.Slice:
numberOfValues = val.Len()
getField = val.Index
}
Otherwise, our switch will extract out two things depending on the type
{
"Arrays",
[2]Profile {
{33, "London"},
{34, "Reykjavík"},
},
[]string{"London", "Reykjavík"},
},
numberOfValues := 0
var getField func(int) reflect.Value
switch val.Kind() {
case reflect.String:
fn(val.String())
case reflect.Struct:
numberOfValues = val.NumField()
getField = val.Field
case reflect.Slice, reflect.Array:
numberOfValues = val.Len()
getField = val.Index
}
numberOfValues := 0
var getField func(int) reflect.Value
switch val.Kind() {
case reflect.String:
fn(val.String())
case reflect.Struct:
numberOfValues = val.NumField()
getField = val.Field
case reflect.Slice, reflect.Array:
numberOfValues = val.Len()
getField = val.Index
case reflect.Map:
for _, key := range val.MapKeys() {
walk(val.MapIndex(key).Interface(), fn)
}
}
for i:=0; i< numberOfValues; i++ {
walk(getField(i).Interface(), fn)
}
}
However, by design you cannot get values out of a map by index. It's only done
by key, so that breaks our abstraction, darn.
Refactor
How do you feel right now? It felt like maybe a nice abstraction at the time but
now the code feels a little wonky.
switch val.Kind() {
case reflect.String:
fn(val.String())
case reflect.Struct:
for i := 0; i< val.NumField(); i++ {
walkValue(val.Field(i))
}
case reflect.Slice, reflect.Array:
for i:= 0; i<val.Len(); i++ {
walkValue(val.Index(i))
}
case reflect.Map:
for _, key := range val.MapKeys() {
walkValue(val.MapIndex(key))
}
}
}
We've introduced walkValue which DRYs up the calls to walk inside our switch
so that they only have to extract out the reflect.Values from val.
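walkValue is just a small closure declared inside walk, something like this fragment:

walkValue := func(value reflect.Value) {
    walk(value.Interface(), fn)
}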
To fix this, we'll need to move our assertion with the maps to a new test where
we do not care about the order.
go func() {
aChannel <- Profile{33, "Berlin"}
aChannel <- Profile{34, "Katowice"}
close(aChannel)
}()
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
switch val.Kind() {
case reflect.String:
fn(val.String())
case reflect.Struct:
for i := 0; i < val.NumField(); i++ {
walkValue(val.Field(i))
}
case reflect.Slice, reflect.Array:
for i := 0; i < val.Len(); i++ {
walkValue(val.Index(i))
}
case reflect.Map:
for _, key := range val.MapKeys() {
walkValue(val.MapIndex(key))
}
case reflect.Chan:
for v, ok := val.Recv(); ok; v, ok = val.Recv() {
walk(v.Interface(), fn)
}
}
}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
switch val.Kind() {
case reflect.String:
fn(val.String())
case reflect.Struct:
for i := 0; i < val.NumField(); i++ {
walkValue(val.Field(i))
}
case reflect.Slice, reflect.Array:
for i := 0; i < val.Len(); i++ {
walkValue(val.Index(i))
}
case reflect.Map:
for _, key := range val.MapKeys() {
walkValue(val.MapIndex(key))
}
case reflect.Chan:
for v, ok := val.Recv(); ok; v, ok = val.Recv() {
walk(v.Interface(), fn)
}
case reflect.Func:
valFnResult := val.Call(nil)
for _, res := range valFnResult {
walk(res.Interface(), fn)
}
}
}
Wrapping up
Introduced some of the concepts from the reflect package.
Used recursion to traverse arbitrary data structures.
Did a refactor that, in retrospect, was a bad one, but didn't get too upset about
it. By working iteratively with tests it's not such a big deal.
This only covered a small aspect of reflection. The Go blog has an excellent
post covering more details.
Now that you know about reflection, do your best to avoid using it.
Sync
You can find all the code for this chapter here
We'll start with an unsafe counter and verify its behaviour works in a single-
threaded environment.
Then we'll exercise its unsafety with multiple goroutines trying to use it via a
test, and fix it.
if counter.Value() != 3 {
t.Errorf("got %d, want %d", counter.Value(), 3)
}
})
}
Refactor
There's not a lot to refactor but given we're going to write more tests around
Counter we'll write a small assertion function assertCounter so the test reads a bit
clearer.
assertCounter(t, counter, 3)
})
Next steps
That was easy enough but now we have a requirement that it must be safe to use
in a concurrent environment. We will need to write a failing test to exercise this.
var wg sync.WaitGroup
wg.Add(wantedCount)
This will loop through our wantedCount and fire a goroutine to call
counter.Inc().
By waiting for wg.Wait() to finish before making our assertions we can be sure
all of our goroutines have attempted to Inc the Counter.
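Such a test might look like this (a sketch; the sub-test name and the count of 1000 goroutines are illustrative assumptions):

t.Run("it runs safely concurrently", func(t *testing.T) {
    wantedCount := 1000
    counter := Counter{}

    var wg sync.WaitGroup
    wg.Add(wantedCount)

    for i := 0; i < wantedCount; i++ {
        go func() {
            counter.Inc()
            wg.Done()
        }()
    }
    wg.Wait()

    assertCounter(t, counter, wantedCount)
})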
The test will probably fail with a different number, but nonetheless it
demonstrates it does not work when multiple goroutines are trying to mutate the
value of the counter at the same time.
What this means is any goroutine calling Inc will acquire the lock on Counter if
they are first. All the other goroutines will have to wait for it to be Unlocked
before getting access.
If you now re-run the test it should now pass because each goroutine has to wait
its turn before making a change.
It can be argued that embedding the sync.Mutex into the struct can make the code a bit more elegant.
This looks nice but while programming is a hugely subjective discipline, this is
bad and wrong.
Sometimes people forget that embedding types means the methods of that type
becomes part of the public interface; and you often will not want that.
Remember that we should be very careful with our public APIs; the moment we
make something public is the moment other code can couple itself to it.
We always want to avoid unnecessary coupling.
Exposing Lock and Unlock is at best confusing but at worst potentially very
harmful to your software if callers of your type start calling these methods.
Showing how a user of this API can wrongly change the state of the lock
Copying mutexes
Our test passes but our code is still a bit dangerous
If you run go vet on your code you should get an error like the following
sync/v2/sync_test.go:16: call of assertCounter copies lock value: v1.Counter co
sync/v2/sync_test.go:39: assertCounter passes lock by value: v1.Counter contain
When we pass our Counter (by value) to assertCounter it will try and create a
copy of the mutex.
To solve this we should pass in a pointer to our Counter instead, so change the
signature of assertCounter
Our tests will no longer compile because we are trying to pass in a Counter
rather than a *Counter. To solve this I prefer to create a constructor which
shows readers of your API that it would be better to not initialise the type
yourself.
Wrapping up
We've covered a few things from the sync package:
Mutex, which allows us to add locks to our data
WaitGroup, a means of waiting for goroutines to finish their jobs
Paraphrasing the common guidance on when to use which:
Use channels when passing ownership of data; use mutexes for managing state.
go vet
Remember to use go vet in your build scripts as it can alert you to some subtle
bugs in your code before they hit your poor users.
Context
If you don't manage your long-running processes properly, your snappy Go application that you're so proud of
could start having difficult to debug performance problems.
In this chapter we'll use the package context to help us manage long-running
processes.
We're going to start with a classic example of a web server that when hit kicks
off a potentially long-running process to fetch some data for it to return in the
response.
We will exercise a scenario where a user cancels the request before the data can
be retrieved and we'll make sure the process is told to give up.
I've set up some code on the happy path to get us started. Here is our server
code.
svr.ServeHTTP(response, request)
if response.Body.String() != data {
t.Errorf(`got "%s", want "%s"`, response.Body.String(), data)
}
}
Now that we have a happy path, we want to make a more realistic scenario
where the Store can't finish a Fetch before the user cancels the request.
Let's add a new test where we cancel the request before 100 milliseconds and
check the store to see if it gets cancelled.
response := httptest.NewRecorder()
svr.ServeHTTP(response, request)
if !store.cancelled {
t.Errorf("store was not told to cancel")
}
})
From the Go Blog: Context
The context package provides functions to derive new Context values from
existing ones. These values form a tree: when a Context is canceled, all
Contexts derived from it are also canceled.
It's important that you derive your contexts so that cancellations are propagated
throughout the call stack for a given request.
This makes this test pass but it doesn't feel good does it! We surely shouldn't be
cancelling Store before we fetch on every request.
We'll need to update our happy path test to assert that it does not get cancelled.
svr.ServeHTTP(response, request)
if response.Body.String() != data {
t.Errorf(`got "%s", want "%s"`, response.Body.String(), data)
}
if store.cancelled {
t.Error("it should not have cancelled the store")
}
})
Run both tests and the happy path test should now be failing and now we're
forced to do a more sensible implementation.
func Server(store Store) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        ctx := r.Context()

        data := make(chan string, 1)

        go func() {
            data <- store.Fetch()
        }()

        select {
        case d := <-data:
            fmt.Fprint(w, d)
        case <-ctx.Done():
            store.Cancel()
        }
    }
}
context has a method Done() which returns a channel which gets sent a signal
when the context is "done" or "cancelled". We want to listen to that signal and
call store.Cancel if we get it but we want to ignore it if our Store manages to
Fetch before it.
To manage this we run Fetch in a goroutine and it will write the result into a
new channel data. We then use select to effectively race to the two
asynchronous processes and then we either write a response or Cancel.
Refactor
We can refactor our test code a bit by making assertion methods on our spy
svr.ServeHTTP(response, request)
if response.Body.String() != data {
t.Errorf(`got "%s", want "%s"`, response.Body.String(), data)
}
store.assertWasNotCancelled()
})
response := httptest.NewRecorder()
svr.ServeHTTP(response, request)
store.assertWasCancelled()
})
}
Does it make sense for our web server to be concerned with manually cancelling
Store? What if Store also happens to depend on other slow-running processes?
We'll have to make sure that Store.Cancel correctly propagates the cancellation
to all of its dependants.
(Pause for a moment and think of the ramifications of every function having to
send in a context, and the ergonomics of that.)
Feeling a bit uneasy? Good. Let's try and follow that approach though and
instead pass through the context to our Store and let it be responsible. That way
it can also pass the context through to its dependants and they too can be
responsible for stopping themselves.
func (s *SpyStore) Fetch(ctx context.Context) (string, error) {
	data := make(chan string, 1)

	go func() {
		var result string
		for _, c := range s.response {
			select {
			case <-ctx.Done():
				s.t.Log("spy store got cancelled")
				return
			default:
				time.Sleep(10 * time.Millisecond)
				result += string(c)
			}
		}
		data <- result
	}()

	select {
	case <-ctx.Done():
		return "", ctx.Err()
	case res := <-data:
		return res, nil
	}
}
We have to make our spy act like a real method that works with context.
We are simulating a slow process where we build the result slowly by appending
the string, character by character in a goroutine. When the goroutine finishes its
work it writes the string to the data channel. The goroutine listens for
ctx.Done() and will stop the work if a signal is sent on that channel.
Finally the code uses another select to wait for that goroutine to finish its work
or for the cancellation to occur.
It's similar to our approach from before: we use Go's concurrency primitives to
make two asynchronous processes race each other to determine what we return.
You'll take a similar approach when writing your own functions and methods
that accept a context so make sure you understand what's going on.
Finally we can update our tests. Comment out our cancellation test so we can fix
the happy path test first.
svr.ServeHTTP(response, request)
if response.Body.String() != data {
t.Errorf(`got "%s", want "%s"`, response.Body.String(), data)
}
})
Our happy path should be... happy. Now we can fix the other test.
response := &SpyResponseWriter{}
svr.ServeHTTP(response, request)
if response.written {
t.Error("a response should not have been written")
}
})
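That test relies on a spy implementation of http.ResponseWriter which simply records whether anything was written. A minimal sketch of one (the written field name is an assumption for illustration):

```go
type SpyResponseWriter struct {
	written bool
}

func (s *SpyResponseWriter) Header() http.Header {
	s.written = true
	return nil
}

func (s *SpyResponseWriter) Write([]byte) (int, error) {
	s.written = true
	return 0, errors.New("not implemented")
}

func (s *SpyResponseWriter) WriteHeader(statusCode int) {
	s.written = true
}
```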
func Server(store Store) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		data, err := store.Fetch(r.Context())
		if err != nil {
			return // todo: log error however you like
		}
		fmt.Fprint(w, data)
	}
}
We can see after this that the server code has become simplified as it's no longer
explicitly responsible for cancellation; it simply passes the context through and
relies on the downstream functions to respect any cancellations that may occur.
Wrapping up
What we've covered
How to test an HTTP handler that has had the request cancelled by the client.
How to use context to manage cancellation.
How to write a function that accepts context and uses it to cancel itself by
using goroutines, select and channels.
Follow Google's guidelines as to how to manage cancellation by
propagating request scoped context through your call-stack.
How to roll your own spy for http.ResponseWriter if you need it.
The problem with context.Value is that it's just an untyped map, so you have
no type-safety and you have to handle it not actually containing your value. You
have to create a coupling of map keys from one module to another and if
someone changes something, things start breaking.
But...
On the other hand, it can be helpful to include information that is orthogonal to a
request in a context, such as a trace id. Potentially this information would not be
needed by every function in your call-stack and would make your function
signatures very messy.
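As a sketch of what that might look like, using a private key type to avoid collisions (the key name, value and the surrounding handler are assumptions for illustration):

```go
type contextKey string

const traceIDKey contextKey = "traceID"

// near the edge of the system, attach the trace id to the request's context
ctx := context.WithValue(r.Context(), traceIDKey, "abc-123")

// ...and read it back wherever it happens to be needed
if traceID, ok := ctx.Value(traceIDKey).(string); ok {
	log.Println("trace id:", traceID)
}
```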
Additional material
I really enjoyed reading Context should go away for Go 2 by Michal Štrba.
His argument is that having to pass context everywhere is a smell, that it's
pointing to a deficiency in the language in respect to cancellation. He says it
would be better if this was somehow solved at the language level, rather than
at a library level. Until that happens, you will need context if you want to
manage long running processes.
The Go blog further describes the motivation for working with context and
has some examples
Roman Numerals
You can find all the code for this chapter here
Some companies will ask you to do the Roman Numeral Kata as part of the
interview process. This chapter will show how you can tackle it with TDD.
If you haven't heard of Roman Numerals they are how the Romans wrote down
numbers.
You build them by sticking symbols together and those symbols represent
numbers.
Seems easy, but there are a few interesting rules. V means five, but IV is 4 (not
IIII).
MCMLXXXIV is 1984. That looks complicated and it's hard to imagine how we can
write code to figure this out right from the start.
As this book stresses, a key skill for software developers is to try and identify
"thin vertical slices" of useful functionality and then iterate. The TDD
workflow helps facilitate iterative development.
If you've got this far in the book this is hopefully feeling very boring and routine
to you. That's a good thing.
Refactor
Not much to refactor yet.
I know it feels weird just to hard-code the result but with TDD we want to stay
out of "red" for as long as possible. It may feel like we haven't accomplished
much but we've defined our API and got a test capturing one of our rules; even if
the "real" code is pretty dumb.
Now use that uneasy feeling to write a new test to force us to write slightly less
dumb code.
if got != want {
t.Errorf("got %q, want %q", got, want)
}
})
if got != want {
t.Errorf("got %q, want %q", got, want)
}
})
}
Try to run the test
=== RUN TestRomanNumerals/2_gets_converted_to_II
--- FAIL: TestRomanNumerals/2_gets_converted_to_II (0.00s)
numeral_test.go:20: got 'I', want 'II'
Yup, it still feels like we're not actually tackling the problem. So we need to
write more tests to drive us forward.
Refactor
We have some repetition in our tests. When you're testing something which feels
like it's a matter of "given input X, we expect Y" you should probably use table
based tests.
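A table-based version of the test might look something like this sketch (case descriptions assumed):

```go
func TestRomanNumerals(t *testing.T) {
	cases := []struct {
		Description string
		Arabic      int
		Want        string
	}{
		{"1 gets converted to I", 1, "I"},
		{"2 gets converted to II", 2, "II"},
		{"3 gets converted to III", 3, "III"},
	}

	for _, test := range cases {
		t.Run(test.Description, func(t *testing.T) {
			got := ConvertToRoman(test.Arabic)
			if got != test.Want {
				t.Errorf("got %q, want %q", got, test.Want)
			}
		})
	}
}
```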
We can now easily add more cases without having to write any more test
boilerplate.
Refactor
OK so I'm starting to not enjoy these if statements and if you look at the code
hard enough you can see that we're building a string of I based on the size of
arabic.
We "know" that for more complicated numbers we will be doing some kind of
arithmetic and string concatenation.
Let's try a refactor with these thoughts in mind, it might not be suitable for the
end solution but that's OK. We can always throw our code away and start afresh
with the tests we have to guide us.
return result.String()
}
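In full, that refactor might look something like this sketch, using a strings.Builder to build up the Is:

```go
func ConvertToRoman(arabic int) string {
	var result strings.Builder

	for i := 0; i < arabic; i++ {
		result.WriteString("I")
	}

	return result.String()
}
```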
The code looks better to me and describes the domain as we know it right now.
Instead you take the next highest symbol and then "subtract" by putting a symbol
to the left of it. Not all symbols can be used as subtractors; only I (1), X (10) and
C (100).
if arabic == 4 {
return "IV"
}
return result.String()
}
Refactor
I don't "like" that we have broken our string building pattern and I want to carry
on with it.
return result.String()
}
In order for 4 to "fit" with my current thinking I now count down from the
Arabic number, adding symbols to our string as we progress. Not sure if this will
work in the long run but let's see!
return result.String()
}
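Counting down with a couple of early exits might look something like this sketch:

```go
func ConvertToRoman(arabic int) string {
	var result strings.Builder

	for i := arabic; i > 0; i-- {
		if i == 5 {
			result.WriteString("V")
			break
		}
		if i == 4 {
			result.WriteString("IV")
			break
		}
		result.WriteString("I")
	}

	return result.String()
}
```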
Refactor
Repetition in loops like this is usually a sign of an abstraction waiting to be
called out. Short-circuiting loops can be an effective tool for readability but it
could also be telling you something else.
We are looping over our Arabic number and if we hit certain symbols we are
calling break but what we are really doing is subtracting over i in a ham-fisted
manner.
return result.String()
}
Given the signals I'm reading from our code, driven from our tests of some
very basic scenarios I can see that to build a Roman Numeral I need to
subtract from arabic as I apply symbols
The for loop no longer relies on an i and instead we will keep building our
string until we have subtracted enough symbols away from arabic.
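Subtracting from arabic as we apply symbols might look something like this sketch:

```go
func ConvertToRoman(arabic int) string {
	var result strings.Builder

	for arabic > 0 {
		switch {
		case arabic > 4:
			result.WriteString("V")
			arabic -= 5
		case arabic > 3:
			result.WriteString("IV")
			arabic -= 4
		default:
			result.WriteString("I")
			arabic--
		}
	}

	return result.String()
}
```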
I'm pretty sure this approach will be valid for 6 (VI), 7 (VII) and 8 (VIII) too.
Nonetheless, add the cases into our test suite and check (I won't include the code
for brevity; check the GitHub repository for samples if you're unsure).
9 follows the same rule as 4 in that we should subtract I from the representation
of the following number. 10 is represented in Roman Numerals with X; so
therefore 9 should be IX.
Refactor
It feels like the code is still telling us there's a refactor somewhere but it's not
totally obvious to me, so let's keep going.
I'll skip the code for this too, but add to your test cases a test for 10 which should
be X and make it pass before reading on.
Here are a few tests I added as I'm confident up to 39 our code should work
If you've ever done OO programming, you'll know that you should view switch
statements with a bit of suspicion. Usually you are capturing a concept or data
inside some imperative code when in fact it could be captured in a class structure
instead.
Go isn't strictly OO but that doesn't mean we ignore the lessons OO offers
entirely (as much as some would like to tell you).
Our switch statement is describing some truths about Roman Numerals along
with behaviour.
return result.String()
}
This feels much better. We've declared some rules around the numerals as data
rather than hidden in an algorithm and we can see how we just work through the
Arabic number, trying to add symbols to our result if they fit.
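Declared as data, the rules at this stage of the kata only need the symbols up to X; a sketch:

```go
type RomanNumeral struct {
	Value  int
	Symbol string
}

var allRomanNumerals = []RomanNumeral{
	{10, "X"},
	{9, "IX"},
	{5, "V"},
	{4, "IV"},
	{1, "I"},
}

func ConvertToRoman(arabic int) string {
	var result strings.Builder

	for _, numeral := range allRomanNumerals {
		for arabic >= numeral.Value {
			result.WriteString(numeral.Symbol)
			arabic -= numeral.Value
		}
	}

	return result.String()
}
```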
Does this abstraction work for bigger numbers? Extend the test suite so it works
for the Roman numeral for 50, which is L.
Here are some test cases, try and make them pass.
Need help? You can see what symbols to add in this gist.
| Arabic | Roman |
| ------ | ----- |
| 100    | C     |
| 500    | D     |
| 1000   | M     |
Take the same approach for the remaining symbols, it should just be a matter of
adding data to both the tests and our array of symbols.
I didn't change the algorithm, all I had to do was update the allRomanNumerals
array.
Move the cases variable outside of the test as a package variable in a var block.
Notice I am using the slice functionality to just run one of the tests for now
(cases[:1]) as trying to make all of those tests pass all at once is too big a leap.
Next, change the slice index in our test to move to the next test case (e.g.
cases[:2]). Make it pass yourself with the dumbest code you can think of,
continue writing dumb code (best book ever right?) for the third case too. Here's
my dumb code.
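The "dumb" implementation might look something like this sketch, hard-coding just enough to satisfy the first three cases:

```go
func ConvertToArabic(roman string) int {
	if roman == "III" {
		return 3
	}
	if roman == "II" {
		return 2
	}
	return 1
}
```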
Through the dumbness of real code that works we can start to see a pattern like
before. We need to iterate through the input and build something, in this case a
total.
// earlier..
type RomanNumerals []RomanNumeral

func (r RomanNumerals) ValueOf(symbol string) int {
	for _, s := range r {
		if s.Symbol == symbol {
			return s.Value
		}
	}
	return 0
}

// later..
func ConvertToArabic(roman string) int {
	total := 0

	for i := 0; i < len(roman); i++ {
		symbol := roman[i]

		// look ahead to the next symbol if we can and the current symbol is a valid subtractor
		if i+1 < len(roman) && symbol == 'I' {
			nextSymbol := roman[i+1]

			// build the two character string and look up its value
			potentialNumber := string([]byte{symbol, nextSymbol})
			value := allRomanNumerals.ValueOf(potentialNumber)

			if value != 0 {
				total += value
				i++ // move past this character too for the next loop
			} else {
				total++
			}
		} else {
			total++
		}
	}

	return total
}
This is horrible but it does work. It's so bad I felt the need to add comments.
Refactor
I'm not entirely convinced this will be the long-term approach and there's
potentially some interesting refactors we could do, but I'll resist that in case our
approach is totally wrong. I'd rather make a few more tests pass first and see. For
the meantime I made the first if statement slightly less horrible.
func ConvertToArabic(roman string) int {
	total := 0
	for i := 0; i < len(roman); i++ {
		symbol := roman[i]
		// look ahead to the next symbol if we can and the current symbol is a valid subtractor
		if couldBeSubtractive(i, symbol, roman) {
			nextSymbol := roman[i+1]
			potentialNumber := string([]byte{symbol, nextSymbol})
			value := allRomanNumerals.ValueOf(potentialNumber)
			if value != 0 {
				total += value
				i++ // move past this character too for the next loop
			} else {
				total++
			}
		} else {
			total++
		}
	}
	return total
}

func couldBeSubtractive(index int, currentSymbol uint8, roman string) bool {
	isSubtractiveSymbol := currentSymbol == 'I' || currentSymbol == 'X' || currentSymbol == 'C'
	return index+1 < len(roman) && isSubtractiveSymbol
}
Refactor
When you index strings in Go, you get a byte. This is why when we build up the
string again we have to do stuff like string([]byte{symbol}). It's repeated a
couple of times, let's just move that functionality so that ValueOf takes some
bytes instead.
return 0
}
If you start moving our cases[:xx] number through you'll see that quite a few
are passing now. Remove the slice operator entirely and see which ones fail;
here are some examples from my suite:
=== RUN TestConvertingToArabic/'XL'_gets_converted_to_40
--- FAIL: TestConvertingToArabic/'XL'_gets_converted_to_40 (0.00s)
numeral_test.go:62: got 60, want 40
=== RUN TestConvertingToArabic/'XLVII'_gets_converted_to_47
--- FAIL: TestConvertingToArabic/'XLVII'_gets_converted_to_47 (0.00s)
numeral_test.go:62: got 67, want 47
=== RUN TestConvertingToArabic/'XLIX'_gets_converted_to_49
--- FAIL: TestConvertingToArabic/'XLIX'_gets_converted_to_49 (0.00s)
numeral_test.go:62: got 69, want 49
total += allRomanNumerals.ValueOf(symbol)
And all the tests pass! Now that we have fully working software we can indulge
ourselves in some refactoring, with confidence.
Refactor
Here is all the code I finished up with. I had a few failed attempts but as I keep
emphasising, that's fine and the tests help me play around with the code freely.
import "strings"
return result.String()
}
return 0
}
My main problem with the previous code is similar to our refactor from earlier.
We had too many concerns coupled together. We wrote an algorithm which was
trying to extract Roman Numerals from a string and then find their values.
So I created a new type windowedRoman which took care of extracting the
numerals, offering a Symbols method to retrieve them as a slice. This meant our
ConvertToArabic function could simply iterate over the symbols and total them.
I broke the code down a bit by extracting some functions, especially around the
wonky if statement to figure out if the symbol we are currently dealing with is a
two character subtractive symbol.
There's probably a more elegant way but I'm not going to sweat it. The code is
there and it works and it is tested. If I (or anyone else) finds a better way they
can safely change it - the hard work is done.
The tests we have written so far can be described as "example" based tests, where
we provide the tooling with examples of our code's behaviour to verify.
What if we could take these rules that we know about our domain and somehow
exercise them against our code?
Property based tests help you do this by throwing random data at your code and
verifying the rules you describe always hold true. A lot of people think property
based tests are mainly about random data but they would be mistaken. The real
challenge about property based tests is having a good understanding of your
domain so you can write these properties.
Rationale of property
Our first test will check that if we transform a number into Roman, when we use
our other function to convert it back to a number that we get what we originally
had.
This feels like a good test to build our confidence because it should break if
there's a bug in either function. The only way it could pass is if they both have the
same kind of bug; which isn't impossible but feels unlikely.
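The property test itself might look something like this sketch using testing/quick:

```go
func TestPropertiesOfConversion(t *testing.T) {
	assertion := func(arabic int) bool {
		roman := ConvertToRoman(arabic)
		fromRoman := ConvertToArabic(roman)
		return fromRoman == arabic
	}

	if err := quick.Check(assertion, nil); err != nil {
		t.Error("failed checks", err)
	}
}
```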
Technical explanation
We're using the testing/quick package from the standard library
Reading from the bottom, we provide quick.Check a function that it will run
against a number of random inputs, if the function returns false it will be seen
as failing the check.
Our assertion function above takes a random number and runs our functions to
test the property.
Try running it; your computer may hang for a while, so kill it when you're bored
:)
What's going on? Try adding the following to the assertion code.
assertion := func(arabic int) bool {
	if arabic < 0 || arabic > 3999 {
		log.Println(arabic)
		return true
	}
	roman := ConvertToRoman(arabic)
	fromRoman := ConvertToArabic(roman)
	return fromRoman == arabic
}
Just running this very simple property has exposed a flaw in our implementation.
We used int as our input but:
- You can't do negative numbers with Roman Numerals
- Given our rule of a max of 3 consecutive symbols we can't represent a value greater than 3999 (well, kinda), and int has a much higher maximum value than 3999.
This is great! We've been forced to think more deeply about our domain which is
a real strength of property based tests.
Clearly int is not a great type. What if we tried something a little more
appropriate?
uint16
Go has types for unsigned integers, which means they cannot be negative; so
that rules out one class of bug in our code immediately. By adding 16, it means it
is a 16 bit integer which can store a max of 65535, which is still too big but gets
us closer to what we need.
Try updating the code to use uint16 rather than int. I updated assertion in the
test to give a bit more visibility.
assertion := func(arabic uint16) bool {
	if arabic > 3999 {
		return true
	}
	t.Log("testing", arabic)
	roman := ConvertToRoman(arabic)
	fromRoman := ConvertToArabic(roman)
	return fromRoman == arabic
}
If you run the test, it now actually runs and you can see what is being tested.
You can run it multiple times to see that our code stands up well to the various values!
This gives me a lot of confidence that our code is working how we want.
The default number of runs quick.Check performs is 100 but you can change
that with a config.
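For example, assuming we want more than the default 100 runs, something like:

```go
if err := quick.Check(assertion, &quick.Config{
	MaxCount: 1000,
}); err != nil {
	t.Error("failed checks", err)
}
```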
Further work
Can you write property tests that check the other properties we described?
Can you think of a way of making it so it's impossible for someone to call
our code with a number greater than 3999?
You could return an error
Or create a new type that cannot represent > 3999
What do you think is best?
Wrapping up
More TDD practice with iterative development
Did the thought of writing code that converts 1984 into MCMLXXXIV feel
intimidating to you at first? It did to me and I've been writing software for quite
a long time.
The trick, as always, is to get started with something simple and take small
steps.
At no point in this process did we make any large leaps, do any huge
refactorings, or get in a mess.
I can hear someone cynically saying "this is just a kata". I can't argue with that,
but I still take this same approach for every project I work on. I never ship a big
distributed system in my first step, I find the simplest thing the team could ship
(usually a "Hello world" website) and then iterate on small bits of functionality
in manageable chunks, just like how we did here.
The skill is knowing how to split work up, and that comes with practice and with
some lovely TDD to help you on your way.
Postscript
This book is reliant on valuable feedback from the community. Dave is an
enormous help in practically every chapter. But he had a real rant about my use
of 'Arabic numerals' in this chapter so, in the interests of full disclosure, here's
what he said.
Just going to write up why a value of type int isn't really an 'arabic
numeral'. This might be me being way too precise so I'll completely
understand if you tell me to f off.
The 1 has a value of one thousand because it's the first digit in a four digit
numeral.
Roman numerals are built using a reduced number of digits (I, V etc...) mainly as
values to produce the numeral. There's a bit of positional stuff but it's
mostly I always representing 'one'.
So, given this, is int an 'Arabic number'? The idea of a number is not at all
tied to its representation - we can see this if we ask ourselves what the
correct representation of this number is:
255
11111111
two-hundred and fifty-five
FF
377
Yes, this is a trick question. They're all correct. They're the representation
of the same number in the decimal, binary, English, hexadecimal and octal
number systems respectively.
ConvertToRoman(255)
ConvertToRoman(0xFF)
Really, we're not 'converting' from an Arabic numeral at all, we're 'printing'
- representing - an int as a Roman numeral - and ints are not numerals,
Arabic or otherwise; they're just numbers. The ConvertToRoman function is
more like strconv.Itoa in that it's turning an int into a string.
But every other version of the kata doesn't care about this distinction so
:shrug:
Mathematics
You can find all the code for this chapter here
For all the power of modern computers to perform huge sums at lightning speed,
the average developer rarely uses any mathematics to do their job. But not today!
Today we'll use mathematics to solve a real problem. And not boring
mathematics - we're going to use trigonometry and vectors and all sorts of stuff
that you always said you'd never have to use after high school.
The Problem
You want to make an SVG of a clock. Not a digital clock - no, that would be
easy - an analogue clock, with hands. You're not looking for anything fancy, just
a nice function that takes a Time from the time package and spits out an SVG of
a clock with all the hands - hour, minute and second - pointing in the right
direction. How hard can that be?
First we're going to need an SVG of a clock for us to play with. SVGs are a
fantastic image format to manipulate programmatically because they're written
as a series of shapes, described in XML. So this clock:
an svg of a clock
It's a circle with three lines, each of the lines starting in the middle of the circle
(x=150, y=150), and ending some distance away.
So what we're going to do is reconstruct the above somehow, but change the
lines so they point in the appropriate directions for a given time.
An Acceptance Test
Before we get too stuck in, lets think about an acceptance test. We've got an
example clock, so let's think about what the important parameters are going to
be.
<line x1="150" y1="150" x2="114.150000" y2="132.260000"
style="fill:none;stroke:#000;stroke-width:7px;"/>
The centre of the clock (the attributes x1 and y1 for this line) is the same for each
hand of the clock. The numbers that need to change for each hand of the clock -
the parameters to whatever builds the SVG - are the x2 and y2 attributes. We'll
need an X and a Y for each of the hands of the clock.
I could think about more parameters - the radius of the clockface circle, the size
of the SVG, the colours of the hands, their shape, etc... but it's better to start off
by solving a simple, concrete problem with a simple, concrete solution, and then
to start adding parameters to make it generalised.
So we'll say that - every clock has a centre of (150, 150) - the hour hand is 50
long - the minute hand is 80 long - the second hand is 90 long.
A thing to note about SVGs: the origin - point (0,0) - is at the top left hand
corner, not the bottom left as we might expect. It'll be important to remember this
when we're working out what numbers to plug in to our lines.
Finally, I'm not deciding how to construct the SVG - we could use a template
from the text/template package, or we could just send bytes into a
bytes.Buffer or a writer. But we know we'll need those numbers, so let's focus
on testing something that creates them.
package clockface_test
import (
"testing"
"time"
"github.com/gypsydave5/learn-go-with-tests/math/v1/clockface"
)
if got != want {
t.Errorf("Got %v, wanted %v", got, want)
}
}
Remember how SVGs plot their coordinates from the top left hand corner? To
place the second hand at midnight we expect that it hasn't moved from the centre
of the clockface on the X axis - still 150 - and the Y axis is the length of the hand
'up' from the centre; 150 minus 90.
So a Point where the tip of the second hand should go, and a function to get it.
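That first test might look something like this sketch (the particular date is arbitrary; only midnight matters):

```go
func TestSecondHandAtMidnight(t *testing.T) {
	tm := time.Date(1337, time.January, 1, 0, 0, 0, 0, time.UTC)

	want := clockface.Point{X: 150, Y: 150 - 90}
	got := clockface.SecondHand(tm)

	if got != want {
		t.Errorf("Got %v, wanted %v", got, want)
	}
}
```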
Write the minimal amount of code for the test to run and
check the failing test output
Let's implement those types to get the code to compile
package clockface

import "time"

type Point struct {
	X float64
	Y float64
}

// SecondHand is the unit vector of the second hand of an analogue clock at time `t`,
// represented as a Point.
func SecondHand(t time.Time) Point {
	return Point{}
}
// SecondHand is the unit vector of the second hand of an analogue clock at time `t`,
// represented as a Point.
func SecondHand(t time.Time) Point {
	return Point{150, 60}
}
Refactor
No need to refactor yet - there's barely enough code!
if got != want {
t.Errorf("Got %v, wanted %v", got, want)
}
}
Same idea, but now the second hand is pointing downwards so we add the length
to the Y axis.
Thinking time
How are we going to solve this problem?
Every minute the second hand goes through the same 60 states, pointing in 60
different directions. When it's 0 seconds it points to the top of the clockface,
when it's 30 seconds it points to the bottom of the clockface. Easy enough.
So if I wanted to think about in what direction the second hand was pointing at,
say, 37 seconds, I'd want the angle between 12 o'clock and 37/60ths around the
circle. In degrees this is (360 / 60 ) * 37 = 222, but it's easier just to
remember that it's 37/60 of a complete rotation.
But the angle is only half the story; we need to know the X and Y coordinate that
the tip of the second hand is pointing at. How can we work that out?
Math
Imagine a circle with a radius of 1 drawn around the origin - the coordinate 0,
0.
picture of the unit circle
This is called the 'unit circle' because... well, the radius is 1 unit!
The circumference of the circle is made of points on the grid - more coordinates.
The x and y components of each of these coordinates form a triangle, the
hypotenuse of which is always 1 - the radius of the circle
picture of the unit circle with a point defined on the circumference
Now, trigonometry will let us work out the lengths of X and Y for each triangle
if we know the angle they make with the origin. The X coordinate will be cos(a),
and the Y coordinate will be sin(a), where a is the angle made between the line
and the (positive) x axis.
picture of the unit circle with the x and y elements of a ray defined as cos(a) and
sin(a) respectively, where a is the angle made by the ray with the x axis
One final twist - because we want to measure the angle from 12 o'clock rather
than from the X axis (3 o'clock), we need to swap the axis around; now x =
sin(a) and y = cos(a).
unit circle ray defined from by angle from y axis
So now we know how to get the angle of the second hand (1/60th of a circle for
each second) and the X and Y coordinates. We'll need functions for both sin and
cos.
math
Happily the Go math package has both, with one small snag we'll need to get our
heads around; if we look at the description of math.Cos:
Now that we've done some reading, some learning and some thinking, we can
write our next test.
Write the test first
All this maths is hard and confusing. I'm not confident I understand what's going
on - so let's write a test! We don't need to solve the whole problem in one go -
let's start off with working out the correct angle, in radians, for the second hand
at a particular time.
I'm going to write these tests within the clockface package; they may never get
exported, and they may get deleted (or moved) once I have a better grip on
what's going on.
I'm also going to comment out the acceptance test that I was working on while
I'm working on these tests - I don't want to get distracted by that test while I'm
getting this one to pass.
package clockface
import (
"math"
"testing"
"time"
)
if want != got {
t.Fatalf("Wanted %v radians, but got %v", want, got)
}
}
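In full, that test might look something like this sketch (the date itself doesn't matter, only the 30 seconds):

```go
func TestSecondsInRadians(t *testing.T) {
	thirtySeconds := time.Date(312, time.October, 28, 0, 0, 30, 0, time.UTC)
	want := math.Pi
	got := secondsInRadians(thirtySeconds)

	if want != got {
		t.Fatalf("Wanted %v radians, but got %v", want, got)
	}
}
```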
Here we're testing that 30 seconds past the minute should put the second hand at
halfway around the clock. And it's our first use of the math package! If a full turn
of a circle is 2π radians, we know that halfway round should just be π radians.
math.Pi provides us with a value for π.
Write the minimal amount of code for the test to run and
check the failing test output
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v2/clockface 0.011s
Refactor
Nothing needs refactoring yet
I added a couple of helper functions to make writing this table based test a little
less tedious. testName converts a time into a digital watch format (HH:MM:SS),
and simpleTime constructs a time.Time using only the parts we actually care
about (again, hours, minutes and seconds).2
These two functions should help make these tests (and future tests) a little easier
to write and maintain.
Time to implement all of that maths stuff we were talking about above:
One second is (2π / 60) radians... cancel out the 2 and we get π/30 radians.
Multiply that by the number of seconds (as a float64) and we should now have
all the tests passing...
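That translates into something like this sketch:

```go
func secondsInRadians(t time.Time) float64 {
	return float64(t.Second()) * (math.Pi / 30)
}
```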
--- FAIL: TestSecondsInRadians (0.00s)
--- FAIL: TestSecondsInRadians/00:00:30 (0.00s)
clockface_test.go:24: Wanted 3.141592653589793 radians, but got 3.14159
FAIL
exit status 1
FAIL github.com/gypsydave5/learn-go-with-tests/math/v3/clockface 0.006s
Wait, what?
Tolerating a tiny inaccuracy may not seem all that appealing, but it's often the only way to make
floating point equality work. Being inaccurate by some infinitesimal fraction is
frankly not going to matter for the purposes of drawing a clockface, so we could
write a function that defines a 'close enough' equality for our angles. But there's a
simple way we can get the accuracy back: we rearrange the equation so that
we're no longer dividing down and then multiplying up. We can do it all by just
dividing.
So instead of
numberOfSeconds * π / 30
we can write
π / (30 / numberOfSeconds)
which is equivalent.
In Go:
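A sketch of that rearrangement:

```go
func secondsInRadians(t time.Time) float64 {
	return (math.Pi / (30 / (float64(t.Second()))))
}
```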
In Go if you try to explicitly divide by zero you will get a compilation error.
package main
import (
"fmt"
)
func main() {
fmt.Println(10.0 / 0.0) // fails to compile
}
Obviously the compiler can't always predict that you'll divide by zero, such as
our t.Second()
Try this, where the zero comes from a function the compiler can't see through:
func main() {
	fmt.Println(10.0 / zero())
}

func zero() float64 { return 0.0 }
It will print +Inf (infinity). Dividing by +Inf seems to result in zero and we can
see this with the following:
package main
import (
"fmt"
"math"
)
func main() {
fmt.Println(secondsinradians())
}
Again, let's keep this as simple as possible and only work with the unit circle;
the circle with a radius of 1. This means that our hands will all have a length of
one but, on the bright side, it means that the maths will be easy for us to
swallow.
Write the minimal amount of code for the test to run and
check the failing test output
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v4/clockface 0.007s
picture of the unit circle with the x and y elements of a ray defined as cos(a) and
sin(a) respectively, where a is the angle made by the ray with the x axis
We now want the equation that produces X and Y. Let's write it into seconds:
return Point{x, y}
}
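Spelled out, that function might look something like this sketch, reusing secondsInRadians:

```go
func secondHandPoint(t time.Time) Point {
	angle := secondsInRadians(t)
	x := math.Sin(angle)
	y := math.Cos(angle)

	return Point{x, y}
}
```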
Now we get
--- FAIL: TestSecondHandPoint (0.00s)
--- FAIL: TestSecondHandPoint/00:00:30 (0.00s)
clockface_test.go:43: Wanted {0 -1} Point, but got {1.2246467991473515e
--- FAIL: TestSecondHandPoint/00:00:45 (0.00s)
clockface_test.go:43: Wanted {-1 0} Point, but got {-1 -1.8369701987210
FAIL
exit status 1
FAIL github.com/gypsydave5/learn-go-with-tests/math/v4/clockface 0.007s
Wait, what (again)? Looks like we've been cursed by the floats once more - both
of those unexpected numbers are infinitesimal - way down at the 16th decimal
place. So again we can either choose to try to increase precision, or to just say
that they're roughly equal and get on with our lives.
One option to increase the accuracy of these angles would be to use the rational
type Rat from the math/big package. But given the objective is to draw an SVG
and not the moon landings I think we can live with a bit of fuzziness.
We've defined two functions to define approximate equality between two Points
- they'll work if the X and Y elements are within 0.0000001 of each other. That's
still pretty accurate.
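Those helpers might look something like this sketch (the threshold is the 0.0000001 mentioned above):

```go
func roughlyEqualFloat64(a, b float64) bool {
	const equalityThreshold = 1e-7
	return math.Abs(a-b) < equalityThreshold
}

func roughlyEqualPoint(a, b Point) bool {
	return roughlyEqualFloat64(a.X, b.X) &&
		roughlyEqualFloat64(a.Y, b.Y)
}
```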
Refactor
I'm still pretty happy with this.
Fun times!
// SecondHand is the unit vector of the second hand of an analogue clock at time `t`,
// represented as a Point.
func SecondHand(t time.Time) Point {
	p := secondHandPoint(t)
	p = Point{p.X * 90, p.Y * 90}   // scale
	p = Point{p.X, -p.Y}            // flip
	p = Point{p.X + 150, p.Y + 150} // translate
	return p
}
Refactor
There's a few magic numbers here that should get pulled out as constants, so let's
do that
const secondHandLength = 90
const clockCentreX = 150
const clockCentreY = 150
// SecondHand is the unit vector of the second hand of an analogue clock at time `t`,
// represented as a Point.
func SecondHand(t time.Time) Point {
	p := secondHandPoint(t)
	p = Point{p.X * secondHandLength, p.Y * secondHandLength} // scale
	p = Point{p.X, -p.Y}                                      // flip
	p = Point{p.X + clockCentreX, p.Y + clockCentreY}         // translate
	return p
}
Let's do this thing - because there's nothing worse than not delivering some value
when it's just sitting there waiting to get out into the world to dazzle people.
Let's draw a second hand!
We're going to stick a new directory under our main clockface package
directory, called (confusingly), clockface. In there we'll put the main package
that will create the binary that will build an SVG:
├── clockface
│ └── main.go
├── clockface.go
├── clockface_acceptance_test.go
└── clockface_test.go
and inside main.go
package main
import (
"fmt"
"io"
"os"
"time"
"github.com/gypsydave5/learn-go-with-tests/math/v6/clockface"
)
func main() {
t := time.Now()
sh := clockface.SecondHand(t)
io.WriteString(os.Stdout, svgStart)
io.WriteString(os.Stdout, bezel)
io.WriteString(os.Stdout, secondHandTag(sh))
io.WriteString(os.Stdout, svgEnd)
}
Oh boy am I not trying to win any prizes for beautiful code with this mess - but
it does the job. It's writing an SVG out to os.Stdout - one string at a time.
If we build this
go build
Refactor
Refactor
This stinks. Well, it doesn't quite stink stink, but I'm not happy about it.
Yeah, I guess I screwed up. This feels wrong. Let's try and recover with a more
SVG-centric test.
What are our options? Well, we could try testing that the characters spewing out
of the SVGWriter contain things that look like the sort of SVG tag we're
expecting for a particular time. For instance:
var b strings.Builder
clockface.SVGWriter(&b, tm)
got := b.String()
if !strings.Contains(got, want) {
t.Errorf("Expected to find the second hand %v, in the SVG output %v"
}
}
Not only will it still pass if I don't produce a valid SVG (as it's only testing that a
string appears in the output), but it will also fail if I make the smallest,
unimportant change to that string - if I add an extra space between the attributes,
for instance.
The biggest smell is really that I'm testing a data structure - XML - by looking at
its representation as a series of characters - as a string. This is never, ever a good
idea as it produces problems just like the ones I outline above: a test that's both
too fragile and not sensitive enough. A test that's testing the wrong thing!
So the only solution is to test the output as XML. And to do that we'll need to
parse it.
Parsing XML
encoding/xml is the Go package that can handle all things to do with simple
XML parsing.
So we'll need a struct to unmarshal our XML into. We could spend some time
working out what the correct names for all of the nodes and attributes are, and how
to write the correct structure but, happily, someone has written zek, a program
that will automate all of that hard work for us. Even better, there's an online
version at https://www.onlinetool.io/xmltogo/. Just paste the SVG from the top
of the file into one box and - bam - out pops:
b := bytes.Buffer{}
clockface.SVGWriter(&b, tm)
svg := Svg{}
xml.Unmarshal(b.Bytes(), &svg)
x2 := "150"
y2 := "60"
# github.com/gypsydave5/learn-go-with-tests/math/v7b/clockface_test [github.com
./clockface_acceptance_test.go:41:2: undefined: clockface.SVGWriter
FAIL github.com/gypsydave5/learn-go-with-tests/math/v7b/clockface [build fai
package clockface
import (
"fmt"
"io"
"time"
)
const (
secondHandLength = 90
clockCentreX = 150
clockCentreY = 150
)
The most beautiful SVG writer? No. But hopefully it'll do the job...
--- FAIL: TestSVGWriterAtMidnight (0.00s)
clockface_acceptance_test.go:56: Expected to find the second hand with x2 o
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graph
<svg xmlns="http://www.w3.org/2000/svg"
width="100%"
height="100%"
viewBox="0 0 300 300"
version="2.0"><circle cx="150" cy="150" r="100" style="fill:#fff;s
FAIL
exit status 1
FAIL github.com/gypsydave5/learn-go-with-tests/math/v7b/clockface 0.008s
Oooops! The %f format directive is printing our coordinates to the default level
of precision - six decimal places. We should be explicit as to what level of
precision we're expecting for the coordinates. Let's say three decimal places.
x2 := "150.000"
y2 := "60.000"
We get:
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v7b/clockface 0.006s
package main
import (
"os"
"time"
"github.com/gypsydave5/learn-go-with-tests/math/v7b/clockface"
)
func main() {
t := time.Now()
clockface.SVGWriter(os.Stdout, t)
}
And we can write a test for another time following the same pattern, but not
before...
Refactor
Three things stick out:
1. We're not really testing for all of the information we need to ensure is
present - what about the x1 values, for instance?
2. Also, those attributes for x1 etc. aren't really strings are they? They're
numbers!
3. Do I really care about the style of the hand? Or, for that matter, the empty
Text node that's been generated by zek?
We can do better. Let's make a few adjustments to the Svg struct, and the tests, to
sharpen everything up.
Made the important parts of the struct named types -- the Line and the
Circle
Turned the numeric attributes into float64s instead of strings.
Deleted unused attributes like Style and Text
Renamed Svg to SVG because it's the right thing to do.
This will let us assert more precisely on the line we're looking for:
clockface.SVGWriter(&b, tm)
svg := SVG{}
xml.Unmarshal(b.Bytes(), &svg)
t.Errorf("Expected to find the second hand line %+v, in the SVG lines %+v"
}
Finally we can take a leaf out of the unit tests' tables, and we can write a helper
function containsLine(line Line, lines []Line) bool to really make these
tests shine:
svg := SVG{}
xml.Unmarshal(b.Bytes(), &svg)
if !containsLine(c.line, svg.Line) {
t.Errorf("Expected to find the second hand line %+v, in the SVG
}
})
}
}
svg := SVG{}
xml.Unmarshal(b.Bytes(), &svg)
if !containsLine(c.line, svg.Line) {
t.Errorf("Expected to find the minute hand line %+v, in the SVG
}
})
}
}
We'd better start building some other clock hands. Much in the same way as we
produced the tests for the second hand, we can iterate to produce the following
set of tests. Again we'll comment out our acceptance test while we get this
working:
Write the minimal amount of code for the test to run and
check the failing test output
Rather than working out how far to push the minute hand around the clockface
for every second from scratch, here we can just leverage the secondsInRadians
function. For every second the minute hand will move 1/60th of the angle the
second hand moves.
secondsInRadians(t) / 60
Then we just add on the movement for the minutes - similar to the movement of
the second hand.
And...
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v8/clockface
And, frankly, I'm bored of testing that function. I'm confident I know how it
works. So it's on to the next one.
Write the minimal amount of code for the test to run and
check the failing test output
return Point{x, y}
}
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v9/clockface 0.009s
Refactor
We've definitely got a bit of repetition in the minuteHandPoint and
secondHandPoint - I know because we just copied and pasted one to make the
other. Let's DRY it out with a function.
return Point{x, y}
}
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v9/clockface 0.007s
Now we can uncomment the acceptance test and get to work drawing the minute
hand
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v9/clockface 0.006s
But the proof of the pudding is in the eating - if we now compile and run our
clockface program, we should see something like
a clock with second and minute hands
Refactor
Let's remove the duplication from the secondHand and minuteHand functions,
putting all of that scale, flip and translate logic all in one place.
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v9/clockface 0.007s
svg := SVG{}
xml.Unmarshal(b.Bytes(), &svg)
if !containsLine(c.line, svg.Line) {
t.Errorf("Expected to find the hour hand line %+v, in the SVG l
}
})
}
}
Again, let's comment this one out until we've got some coverage with the
lower level tests:
Write the minimal amount of code for the test to run and
check the failing test output
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v10/clockface 0.007s
Remember, this is not a 24 hour clock; we have to use the remainder operator to
get the remainder of the current hour divided by 12.
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v10/clockface 0.008s
So the only question is by what factor to reduce the size of that angle. One full
turn is one hour for the minute hand, but for the hour hand it's twelve hours. So
we just divide the angle returned by minutesInRadians by twelve:
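That might look something like this sketch (the hour hand covers π/6 radians per hour, plus the minute hand's contribution scaled down by twelve):

```go
func hoursInRadians(t time.Time) float64 {
	return (minutesInRadians(t) / 12) +
		(math.Pi / (6 / float64(t.Hour()%12)))
}
```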
and behold:
--- FAIL: TestHoursInRadians (0.00s)
--- FAIL: TestHoursInRadians/00:01:30 (0.00s)
clockface_test.go:104: Wanted 0.013089969389957472 radians, but got 0.0
FAIL
exit status 1
FAIL github.com/gypsydave5/learn-go-with-tests/math/v10/clockface 0.007s
Let's update our test to use roughlyEqualFloat64 for the comparison of the
angles.
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v10/clockface 0.007s
Refactor
If we're going to use roughlyEqualFloat64 in one of our radians tests, we
should probably use it for all of them. That's a nice and simple refactor.
Wait, am I just going to throw two test cases out there at once? Isn't this bad
TDD?
On TDD Zealotry
Test driven development is not a religion. Some people might act like it is -
usually people who don't do TDD but who are happy to moan on Twitter or
Dev.to that it's only done by zealots and that they're 'being pragmatic' when they
don't write tests. But it's not a religion. It's a tool.
I know what the two tests are going to be - I've tested two other clock hands in
exactly the same way - and I already know what my implementation is going to
be - I wrote a function for the general case of changing an angle into a point in
the minute hand iteration.
I'm not going to plough through TDD ceremony for the sake of it. Tests are a
tool to help me write better code. TDD is a technique to help me write better
code. Neither tests nor TDD are an end in themselves.
My confidence has increased, so I feel I can make larger strides forward. I'm
going to 'skip' a few steps, because I know where I am, I know where I'm going
and I've been down this road before.
But also note: I'm not skipping writing the tests entirely.
As I said, I know where I am and I know where I'm going. Why pretend
otherwise? The tests will soon tell me if I'm wrong.
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v11/clockface 0.009s
svg := SVG{}
xml.Unmarshal(b.Bytes(), &svg)
if !containsLine(c.line, svg.Line) {
t.Errorf("Expected to find the hour hand line %+v, in the SVG l
}
})
}
}
const (
secondHandLength = 90
minuteHandLength = 80
hourHandLength = 50
clockCentreX = 150
clockCentreY = 150
)
// ...
and so...
PASS
ok github.com/gypsydave5/learn-go-with-tests/math/v12/clockface 0.007s
Refactor
Looking at clockface.go, there are a few 'magic numbers' floating about. They
are all based around how many hours/minutes/seconds there are in a half-turn
around a clockface. Let's refactor so that we make explicit their meaning.
const (
secondsInHalfClock = 30
secondsInClock = 2 * secondsInHalfClock
minutesInHalfClock = 30
minutesInClock = 2 * minutesInHalfClock
hoursInHalfClock = 6
hoursInClock = 2 * hoursInHalfClock
)
Why do this? Well, it makes explicit what each number means in the equation. If
- when - we come back to this code, these names will help us to understand
what's going on.
Moreover, should we ever want to make some really, really WEIRD clocks -
ones with 4 hours for the hour hand, and 20 seconds for the second hand say -
these constants could easily become parameters. We're helping to leave that door
open (even if we never go through it).
Wrapping up
Do we need to do anything else?
First, let's pat ourselves on the back - we've written a program that makes an
SVG clockface. It works and it's great. It will only ever make one sort of
clockface - but that's fine! Maybe you only want one sort of clockface. There's
nothing wrong with a program that solves a specific problem and nothing else.
We can work on this project and turn it into something more general - a library
for calculating clockface angles and/or vectors.
In fact, providing the library along with the program is a really good idea. It
costs us nothing, while increasing the utility of our program and helping to
document how it works.
APIs should come with programs, and vice versa. An API that you must
write C code to use, which cannot be invoked easily from the command
line, is harder to learn and use. And contrariwise, it's a royal pain to have
interfaces whose only open, documented form is a program, so you cannot
invoke them easily from a C program. -- Henry Spencer, in The Art of Unix
Programming
In my final take on this program, I've made the unexported functions within
clockface into a public API for the library, with functions to calculate the angle
and unit vector for each of the clock hands. I've also split the SVG generation
part into its own package, svg, which is then used by the clockface program
directly. Naturally I've documented each of the functions and packages.
We could refactor our code to do any of these things, and we can do so because
it doesn't matter how we produce our SVG; what's important is that it's
an SVG that we produce. As such, the part of our system that needs to know the
most about SVGs - that needs to be the strictest about what constitutes an SVG -
is the test for the SVG output; it needs to have enough context and knowledge
about SVGs for us to be confident that we're outputting an SVG.
We may have felt odd that we were pouring a lot of time and effort into those
SVG tests - importing an XML library, parsing XML, refactoring the structs -
but that test code is a valuable part of our codebase - possibly more valuable
than the current production code. It will help guarantee that the output is always
a valid SVG, no matter what we choose to use to produce it.
Tests are not second class citizens - they are not 'throwaway' code. Good tests
will last a lot longer than the particular version of the code they are testing. You
should never feel like you're spending 'too much time' writing your tests. It's
usually a wise investment.
2. This is a lot easier than writing a name out by hand as a string and then
having to keep it in sync with the actual time. Believe me you don't want to
do that...↩
3. Misattributed because, like all great authors, Kent Beck is more quoted
than read. Beck himself attributes it to Phlip.↩
Build an application
Now that you have hopefully digested the Go Fundamentals section you have a
solid grounding of a majority of Go's language features and how to do TDD.
Each chapter will iterate on the previous one, expanding the application's
functionality as our product owner dictates.
New concepts will be introduced to help facilitate writing great code but most of
the new material will be learning what can be accomplished from Go's standard
library.
By the end of this you should have a strong grasp as to how to iteratively write
an application in Go, backed by tests.
You have been asked to create a web server where users can track how many
games players have won.
Make the test work quickly, committing whatever sins necessary in the process.
You can commit these sins because you will refactor afterwards backed by the
safety of the tests.
What if you don't do this?
The more changes you make while in red, the more likely you are to add more
problems, not covered by tests.
The idea is to be iteratively writing useful code with small steps, driven by tests
so that you don't fall into a rabbit hole for hours.
GET will need a PlayerStore thing to get scores for a player. This should be
an interface so when we test we can create a simple stub to test our code
without needing to have implemented any actual storage code.
For POST we can spy on its calls to PlayerStore to make sure it stores
players correctly. Our implementation of saving won't be coupled to
retrieval.
For having some working software quickly we can make a very simple in-
memory implementation and then later we can create an implementation
backed by whatever storage mechanism we prefer.
By doing this very small step, we can make the important start of getting an
overall project structure working correctly without having to worry too much
about our application logic.
This will start a web server listening on a port, creating a goroutine for every
request and running it against a Handler.
Let's write a test for a function PlayerServer that takes in those two arguments.
The request sent in will be to get a player's score, which we expect to be "20".
PlayerServer(response, request)
got := response.Body.String()
want := "20"
if got != want {
t.Errorf("got %q, want %q", got, want)
}
})
}
In order to test our server, we will need a Request to send in and we'll want to
spy on what our handler writes to the ResponseWriter.
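Using the standard library's httptest package, that setup might look something like this sketch (the player name is just an example):

```go
request, _ := http.NewRequest(http.MethodGet, "/players/Pepper", nil)
response := httptest.NewRecorder()

PlayerServer(response, request)
```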
Define PlayerServer
func PlayerServer() {}
Try again
./server_test.go:13:14: too many arguments in call to PlayerServer
have (*httptest.ResponseRecorder, *http.Request)
want ()
import "net/http"
We'll have actual working software, we don't want to write tests for the
sake of it, it's good to see the code in action.
As we refactor our code, it's likely we will change the structure of the
program. We want to make sure this is reflected in our application too as
part of the incremental approach.
Create a new file for our application and put this code in.
package main
import (
"log"
"net/http"
)
func main() {
handler := http.HandlerFunc(PlayerServer)
if err := http.ListenAndServe(":5000", handler); err != nil {
log.Fatalf("could not listen on port 5000 %v", err)
}
}
So far all of our application code has been in one file; however, this isn't best
practice for larger projects, where you'll want to separate things into different
files.
To run this, do go build which will take all the .go files in the directory and
build you a program. You can then execute it with ./myprogram.
http.HandlerFunc
http.ListenAndServe(":5000"...)
What we're going to do now is write another test to force us into making a
positive change to try and move away from the hard-coded value.
PlayerServer(response, request)
got := response.Body.String()
want := "10"
if got != want {
t.Errorf("got %q, want %q", got, want)
}
})
if player == "Floyd" {
fmt.Fprint(w, "10")
return
}
}
This test has forced us to actually look at the request's URL and make a decision.
So whilst in our heads we may have been worrying about player stores and
interfaces, the next logical step actually seems to be about routing.
If we had started with the store code the amount of changes we'd have to do
would be very large compared to this. This is a smaller step towards our final
goal and was driven by tests.
We're resisting the temptation to use any routing libraries right now, just the
smallest step to get our test passing.
r.URL.Path returns the path of the request which we can then use
strings.TrimPrefix to trim away /players/ to get the requested player. It's
not very robust but will do the trick for now.
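Put together, the handler at this stage might look something like this sketch:

```go
func PlayerServer(w http.ResponseWriter, r *http.Request) {
	player := strings.TrimPrefix(r.URL.Path, "/players/")

	if player == "Pepper" {
		fmt.Fprint(w, "20")
		return
	}

	if player == "Floyd" {
		fmt.Fprint(w, "10")
		return
	}
}
```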
Refactor
We can simplify the PlayerServer by separating out the score retrieval into a
function
fmt.Fprint(w, GetPlayerScore(player))
}
if name == "Floyd" {
return "10"
}
return ""
}
And we can DRY up some of the code in the tests by making some helpers
PlayerServer(response, request)
PlayerServer(response, request)
We moved the score calculation out of the main body of our handler into a
function GetPlayerScore. This feels like the right place to separate the concerns
using interfaces.
Finally, we will now implement the Handler interface by adding a method to our
new struct and putting in our existing handler code.
The only other change is we now call our store.GetPlayerScore to get the
score, rather than the local function we defined (which we can now delete).
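The shape we're heading for might look something like this sketch:

```go
type PlayerStore interface {
	GetPlayerScore(name string) int
}

type PlayerServer struct {
	store PlayerStore
}

func (p *PlayerServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	player := strings.TrimPrefix(r.URL.Path, "/players/")
	fmt.Fprint(w, p.store.GetPlayerScore(player))
}
```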
server.ServeHTTP(response, request)
server.ServeHTTP(response, request)
Notice we're still not worrying about making stores just yet, we just want the
compiler passing as soon as we can.
You should be in the habit of prioritising having code that compiles and then
code that passes the tests.
By adding more functionality (like stub stores) whilst the code isn't compiling,
we are opening ourselves up to potentially more compilation problems.
func main() {
server := &PlayerServer{}
This is because we have not passed in a PlayerStore in our tests. We'll need to
make a stub one up.
A map is a quick and easy way of making a stub key/value store for our tests.
Now let's create one of these stores for our tests and send it into our
PlayerServer.
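A stub backed by a map might look something like this sketch:

```go
type StubPlayerStore struct {
	scores map[string]int
}

func (s *StubPlayerStore) GetPlayerScore(name string) int {
	return s.scores[name]
}
```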
server.ServeHTTP(response, request)
server.ServeHTTP(response, request)
Our tests now pass and are looking better. The intent behind our code is clearer
now due to the introduction of the store. We're telling the reader that because we
have this data in a PlayerStore that when you use it with a PlayerServer you
should get the following responses.
We'll need to make an implementation of one, but that's difficult right now as
we're not storing any meaningful data so it'll have to be hard-coded for the time
being.
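A hard-coded placeholder might look something like this sketch (any value will do; 123 matches the output mentioned below):

```go
type InMemoryPlayerStore struct{}

func (i *InMemoryPlayerStore) GetPlayerScore(name string) int {
	return 123
}
```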
func main() {
server := &PlayerServer{&InMemoryPlayerStore{}}
If you run go build again and hit the same URL you should get "123". Not
great, but until we store data that's the best we can do.
Whilst the POST scenario gets us closer to the "happy path", I feel it'll be easier to
tackle the missing player scenario first as we're in that context already. We'll get
to the rest later.
server.ServeHTTP(response, request)
got := response.Code
want := http.StatusNotFound
if got != want {
t.Errorf("got status %d want %d", got, want)
}
})
w.WriteHeader(http.StatusNotFound)
fmt.Fprint(w, p.store.GetPlayerScore(player))
}
Sometimes I heavily roll my eyes when TDD advocates say "make sure you just
write the minimal amount of code to make it pass" as it can feel very pedantic.
But this scenario illustrates the point well. I have done the bare minimum
(knowing it is not correct), which is to write a StatusNotFound on all responses,
but all our tests are passing!
By doing the bare minimum to make the tests pass it can highlight gaps in
your tests. In our case, we are not asserting that we should be getting a
StatusOK when players do exist in the store.
Update the other two tests to assert on the status and fix the code.
server.ServeHTTP(response, request)
server.ServeHTTP(response, request)
server.ServeHTTP(response, request)
We're checking the status in all our tests now so I made a helper assertStatus
to facilitate that.
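That helper might look something like this sketch:

```go
func assertStatus(t testing.TB, got, want int) {
	t.Helper()
	if got != want {
		t.Errorf("did not get correct status, got %d, want %d", got, want)
	}
}
```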
Now our first two tests fail because of the 404 instead of 200, so we can fix
PlayerServer to only return not found if the score is 0.
score := p.store.GetPlayerScore(player)
if score == 0 {
w.WriteHeader(http.StatusNotFound)
}
fmt.Fprint(w, score)
}
Storing scores
Now that we can retrieve scores from a store it now makes sense to be able to
store new scores.
server.ServeHTTP(response, request)
For a start let's just check we get the correct status code if we hit the particular
route with POST. This lets us drive out the functionality of accepting a different
kind of request and handling it differently to GET /players/{name}. Once this
works we can then start asserting on our handler's interaction with the store.
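Such a test might look something like this sketch (the player name is just an example):

```go
t.Run("it returns accepted on POST", func(t *testing.T) {
	request, _ := http.NewRequest(http.MethodPost, "/players/Pepper", nil)
	response := httptest.NewRecorder()

	server.ServeHTTP(response, request)

	assertStatus(t, response.Code, http.StatusAccepted)
})
```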
if r.Method == http.MethodPost {
w.WriteHeader(http.StatusAccepted)
return
}
score := p.store.GetPlayerScore(player)
if score == 0 {
w.WriteHeader(http.StatusNotFound)
}
fmt.Fprint(w, score)
}
Refactor
The handler is looking a bit muddled now. Let's break the code up to make it
easier to follow and isolate the different functionality into new functions.
switch r.Method {
case http.MethodPost:
p.processWin(w)
case http.MethodGet:
p.showScore(w, r)
}
score := p.store.GetPlayerScore(player)
if score == 0 {
w.WriteHeader(http.StatusNotFound)
}
fmt.Fprint(w, score)
}
This makes the routing aspect of ServeHTTP a bit clearer and means our next
iterations on storing can just be inside processWin.
Next, we want to check that when we do our POST /players/{name} that our
PlayerStore is told to record the win.
Now extend our test to check the number of invocations for a start
server.ServeHTTP(response, request)
if len(store.winCalls) != 1 {
t.Errorf("got %d calls to RecordWin want %d", len(store.winCalls),
}
})
}
store := StubPlayerStore{
map[string]int{},
nil,
}
--- FAIL: TestStoreWins (0.00s)
--- FAIL: TestStoreWins/it_records_wins_when_POST (0.00s)
server_test.go:80: got 0 calls to RecordWin want 1
Try and run the tests and we should be back to compiling code - but the test is
still failing.
Now that PlayerStore has RecordWin we can call it within our PlayerServer
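For now, recording a win against a hard-coded name (assumed here) is enough to satisfy the call-count test; something like this sketch:

```go
func (p *PlayerServer) processWin(w http.ResponseWriter) {
	p.store.RecordWin("Bob") // hard-coded for now, just enough for the call-count test
	w.WriteHeader(http.StatusAccepted)
}
```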
request := newPostWinRequest(player)
response := httptest.NewRecorder()
server.ServeHTTP(response, request)
if len(store.winCalls) != 1 {
t.Fatalf("got %d calls to RecordWin want %d", len(store.winCalls),
}
if store.winCalls[0] != player {
t.Errorf("did not store correct winner got %q want %q", store.winCalls[
}
})
Now that we know there is one element in our winCalls slice we can safely
reference the first one and check it is equal to player.
Refactor
We can DRY up this code a bit as we're extracting the player name the same way
in two places
func (p *PlayerServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	player := strings.TrimPrefix(r.URL.Path, "/players/")

	switch r.Method {
	case http.MethodPost:
		p.processWin(w, player)
	case http.MethodGet:
		p.showScore(w, player)
	}
}
if score == 0 {
w.WriteHeader(http.StatusNotFound)
}
fmt.Fprint(w, score)
}
We could start writing some tests around our InMemoryPlayerStore but it's only
here temporarily until we implement a more robust way of persisting player
scores (i.e. a database).
What we'll do for now is write an integration test between our PlayerServer
and InMemoryPlayerStore to finish off the functionality. This will let us get to
our goal of being confident our application is working, without having to
directly test InMemoryPlayerStore. Not only that, but when we get around to
implementing PlayerStore with a database, we can test that implementation
with the same integration test.
Integration tests
Integration tests can be useful for testing that larger areas of your system work,
but you must bear in mind that they are harder to write, harder to debug when
they fail (because more components are involved) and can be slower to run.
For that reason, it is recommended that you research The Test Pyramid.
server.ServeHTTP(httptest.NewRecorder(), newPostWinRequest(player))
server.ServeHTTP(httptest.NewRecorder(), newPostWinRequest(player))
server.ServeHTTP(httptest.NewRecorder(), newPostWinRequest(player))
response := httptest.NewRecorder()
server.ServeHTTP(response, newGetScoreRequest(player))
assertStatus(t, response.Code, http.StatusOK)
This is allowed! We still have a test checking things should be working correctly
but it is not around the specific unit we're working with (InMemoryPlayerStore).
If I were to get stuck in this scenario, I would revert my changes back to the
failing test and then write more specific unit tests around InMemoryPlayerStore
to help me drive out a solution.
func NewInMemoryPlayerStore() *InMemoryPlayerStore {
return &InMemoryPlayerStore{map[string]int{}}
}
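The store itself only needs a map from player name to score; a minimal sketch consistent with that constructor:

type InMemoryPlayerStore struct {
    store map[string]int
}

func (i *InMemoryPlayerStore) RecordWin(name string) {
    i.store[name]++
}

func (i *InMemoryPlayerStore) GetPlayerScore(name string) int {
    return i.store[name]
}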
The integration test passes, now we just need to change main to use
NewInMemoryPlayerStore()
package main
import (
"log"
"net/http"
)
func main() {
server := &PlayerServer{NewInMemoryPlayerStore()}
Great! You've made a REST-ish service. To take this forward you'd want to pick
a data store to persist the scores longer than the length of time the program runs.
Refactor
We are almost there! Let's take some effort to prevent concurrency errors like
this one:
fatal error: concurrent map read and map write
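One way to guard the map is with a mutex; here is a sketch using sync.RWMutex (the field names are assumptions):

// import "sync"

type InMemoryPlayerStore struct {
    store map[string]int
    mu    sync.RWMutex
}

func (i *InMemoryPlayerStore) RecordWin(name string) {
    i.mu.Lock()
    defer i.mu.Unlock()
    i.store[name]++
}

func (i *InMemoryPlayerStore) GetPlayerScore(name string) int {
    i.mu.RLock()
    defer i.mu.RUnlock()
    return i.store[name]
}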
Wrapping up
http.Handler
In the previous chapter we created a web server to store how many games
players have won.
Our product owner has a new requirement; to have a new endpoint called
/league which returns a list of all players stored. She would like this to be
returned as JSON.
// server.go
package main
import (
"fmt"
"net/http"
)
switch r.Method {
case http.MethodPost:
p.processWin(w, player)
case http.MethodGet:
p.showScore(w, player)
}
}
if score == 0 {
w.WriteHeader(http.StatusNotFound)
}
fmt.Fprint(w, score)
}
// InMemoryPlayerStore.go
package main
// main.go
package main
import (
"log"
"net/http"
)
func main() {
server := &PlayerServer{NewInMemoryPlayerStore()}
You can find the corresponding tests in the link at the top of the chapter.
server.ServeHTTP(response, request)
Before worrying about actual scores and JSON we will try and keep the changes
small with the plan to iterate toward our goal. The simplest start is to check we
can hit /league and get an OK back.
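A first test for this might look roughly like the following, reusing the server and assertStatus helper from the existing tests (a sketch):

t.Run("it returns 200 on /league", func(t *testing.T) {
    request, _ := http.NewRequest(http.MethodGet, "/league", nil)
    response := httptest.NewRecorder()

    server.ServeHTTP(response, request)

    assertStatus(t, response.Code, http.StatusOK)
})

Running it against the current PlayerServer produces a panic: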
goroutine 6 [running]:
testing.tRunner.func1(0xc42010c3c0)
/usr/local/Cellar/go/1.10/libexec/src/testing/testing.go:742 +0x29d
panic(0x1274d60, 0x1438240)
/usr/local/Cellar/go/1.10/libexec/src/runtime/panic.go:505 +0x229
github.com/quii/learn-go-with-tests/json-and-io/v2.(*PlayerServer).ServeHTTP(0x
/Users/quii/go/src/github.com/quii/learn-go-with-tests/json-and-io/v2/serve
Your PlayerServer should be panicking like this. Go to the line of code in the
stack trace which is pointing to server.go.
player := r.URL.Path[len("/players/"):]
In the previous chapter, we mentioned this was a fairly naive way of doing our
routing. What is happening is that it's trying to slice the path string starting at
an index beyond the length of /league, so we get a slice bounds out of range panic.
Let's commit some sins and get the tests passing in the quickest way we can,
knowing we can refactor it with safety once we know the tests are passing.
router := http.NewServeMux()
switch r.Method {
case http.MethodPost:
p.processWin(w, player)
case http.MethodGet:
p.showScore(w, player)
}
}))
router.ServeHTTP(w, r)
}
When the request starts we create a router and then we tell it for x path use
y handler.
So for our new endpoint, we use http.HandlerFunc and an anonymous
function to w.WriteHeader(http.StatusOK) when /league is requested to
make our new test pass.
For the /players/ route we just cut and paste our code into another
http.HandlerFunc.
Finally, we handle the request that came in by calling our new router's
ServeHTTP (notice how ServeMux is also an http.Handler?)
Refactor
ServeHTTP is looking quite big, we can separate things out a bit by refactoring
our handlers into separate methods.
router := http.NewServeMux()
router.Handle("/league", http.HandlerFunc(p.leagueHandler))
router.Handle("/players/", http.HandlerFunc(p.playersHandler))
router.ServeHTTP(w, r)
}
switch r.Method {
case http.MethodPost:
p.processWin(w, player)
case http.MethodGet:
p.showScore(w, player)
}
}
It's quite odd (and inefficient) to be setting up a router as a request comes in and
then calling it. What we ideally want to do is have some kind of
NewPlayerServer function which will take our dependencies and do the one-
time setup of creating the router. Each request can then just use that one instance
of the router.
p.router.Handle("/league", http.HandlerFunc(p.leagueHandler))
p.router.Handle("/players/", http.HandlerFunc(p.playersHandler))
return p
}
func NewPlayerServer(store PlayerStore) *PlayerServer {
    p := new(PlayerServer)

    p.store = store

    router := http.NewServeMux()
    router.Handle("/league", http.HandlerFunc(p.leagueHandler))
    router.Handle("/players/", http.HandlerFunc(p.playersHandler))

    p.Handler = router

    return p
}
Embedding
We changed the second property of PlayerServer, removing the named
property router http.ServeMux and replacing it with http.Handler; this is
called embedding.
Effective Go - Embedding
What this means is that our PlayerServer now has all the methods that
http.Handler has, which is just ServeHTTP.
This lets us remove our own ServeHTTP method, as we are already exposing one
via the embedded type.
Embedding is a very interesting language feature. You can use it with interfaces
to compose new interfaces.
And you can use it with concrete types too, not just interfaces. As you'd expect if
you embed a concrete type you'll have access to all its public methods and fields.
Any downsides?
You must be careful with embedding types because you will expose all public
methods and fields of the type you embed. In our case, it is ok because we
embedded just the interface that we wanted to expose (http.Handler).
If we had been lazy and embedded http.ServeMux instead (the concrete type) it
would still work but users of PlayerServer would be able to add new routes to
our server because Handle(path, handler) would be public.
When embedding types, really think about what impact that has on your
public API.
Now we've restructured our application we can easily add new routes and have
the start of the /league endpoint. We now need to make it return some useful
information.
[
{
"Name":"Bill",
"Wins":10
},
{
"Name":"Alice",
"Wins":15
}
]
server.ServeHTTP(response, request)
err := json.NewDecoder(response.Body).Decode(&got)
if err != nil {
t.Fatalf("Unable to parse response from server %q into slice of Pla
}
In my experience, tests that assert against raw JSON strings are problematic: they
are brittle (any change to the data model breaks them) and their failures are hard to read.
Instead, we should look to parse the JSON into data structures that are relevant
for us to test with.
Data modelling
Given the JSON data model, it looks like we need an array of Player with some
fields so we have created a new type to capture this.
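A sketch of that type, with fields matching the JSON above:

type Player struct {
    Name string
    Wins int
}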
JSON decoding
To parse JSON into our data model we create a Decoder from encoding/json
package and then call its Decode method. To create a Decoder it needs an
io.Reader to read from which in our case is our response spy's Body.
Decode takes the address of the thing we are trying to decode into which is why
we declare an empty slice of Player the line before.
Parsing JSON can fail so Decode can return an error. There's no point
continuing the test if that fails so we check for the error and stop the test with
t.Fatalf if it happens. Notice that we print the response body along with the
error as it's important for someone running the test to see what string cannot be
parsed.
Our endpoint currently does not return a body so it cannot be parsed into JSON.
json.NewEncoder(w).Encode(leagueTable)
w.WriteHeader(http.StatusOK)
}
Refactor
It would be nice to introduce a separation of concerns between our handler and
getting the leagueTable, as we know we won't be hard-coding it for much
longer.
Next, we'll want to extend our test so that we can control exactly what data we
want back.
Next, update our current test by putting some players in the league property of
our stub and assert they get returned from our server.
server.ServeHTTP(response, request)
err := json.NewDecoder(response.Body).Decode(&got)
if err != nil {
t.Fatalf("Unable to parse response from server %q into slice of Pla
}
if !reflect.DeepEqual(got, wantedLeague) {
t.Errorf("got %v want %v", got, wantedLeague)
}
})
}
Now we can update our handler code to call that rather than returning a hard-
coded list. Delete our method getLeagueTable() and then update
leagueHandler to call GetLeague().
For StubPlayerStore it's pretty easy, just return the league field we added
earlier.
So let's just get the compiler happy for now and live with the uncomfortable
feeling of an incomplete implementation in our InMemoryStore.
What this is really telling us is that later we're going to want to test this but let's
park that for now.
Try and run the tests, the compiler should pass and the tests should be passing!
Refactor
The test code does not convey our intent very well and has a lot of boilerplate we
can refactor away.
request := newLeagueRequest()
response := httptest.NewRecorder()
server.ServeHTTP(response, request)
if err != nil {
t.Fatalf("Unable to parse response from server %q into slice of Player,
}
return
}
One final thing we need to do for our server to work is make sure we return a
content-type header in the response so machines can recognise we are
returning JSON.
if response.Result().Header.Get("content-type") != "application/json" {
t.Errorf("response did not have content-type of application/json, got %v", response.Result().Header)
}
Refactor
Add a helper for assertContentType.
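A sketch of such a helper, following the assertion we just wrote:

func assertContentType(t testing.TB, response *httptest.ResponseRecorder, want string) {
    t.Helper()
    if response.Result().Header.Get("content-type") != want {
        t.Errorf("response did not have content-type of %s, got %v", want, response.Result().Header)
    }
}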
Now that we have sorted out PlayerServer for now we can turn our attention to
InMemoryPlayerStore because right now if we tried to demo this to the product
owner /league will not work.
The quickest way for us to get some confidence is to add to our integration test,
we can hit the new endpoint and check we get back the correct response from
/league.
server.ServeHTTP(httptest.NewRecorder(), newPostWinRequest(player))
server.ServeHTTP(httptest.NewRecorder(), newPostWinRequest(player))
server.ServeHTTP(httptest.NewRecorder(), newPostWinRequest(player))
All we need to do is iterate over the map and convert each key/value to a Player.
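A sketch of what that looks like for our map-backed store:

func (i *InMemoryPlayerStore) GetLeague() []Player {
    var league []Player
    for name, wins := range i.store {
        league = append(league, Player{name, wins})
    }
    return league
}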
Wrapping up
We've continued to safely iterate on our program using TDD, making it support
new endpoints in a maintainable way with a router and it can now return JSON
for our consumers. In the next chapter, we will cover persisting the data and
sorting our league.
Routing. The standard library offers you an easy to use type to do routing.
It fully embraces the http.Handler interface in that you assign routes to
Handlers and the router itself is also a Handler. It does not have some
features you might expect though such as path variables (e.g /users/{id}).
You can easily parse this information yourself but you might want to
consider looking at other routing libraries if it becomes a burden. Most of
the popular ones stick to the standard library's philosophy of also
implementing http.Handler.
Type embedding. We touched a little on this technique but you can learn
more about it from Effective Go. If there is one thing you should take away
from this, it is that embedding can be extremely useful, but always think about your
public API and only expose what's appropriate.
JSON deserializing and serializing. The standard library makes it very
trivial to serialise and deserialise your data. It is also open to configuration
and you can customise how these data transformations work if necessary.
IO and sorting
You can find all the code for this chapter here
Our product owner is somewhat perturbed by the software losing the scores
when the server was restarted. This is because our implementation of the store is
in-memory. She is also not pleased that we didn't interpret that the /league
endpoint should return the players ordered by their number of wins!
// server.go
package main
import (
"encoding/json"
"fmt"
"net/http"
)
p.store = store
router := http.NewServeMux()
router.Handle("/league", http.HandlerFunc(p.leagueHandler))
router.Handle("/players/", http.HandlerFunc(p.playersHandler))
p.Handler = router
return p
}
switch r.Method {
case http.MethodPost:
p.processWin(w, player)
case http.MethodGet:
p.showScore(w, player)
}
}
if score == 0 {
w.WriteHeader(http.StatusNotFound)
}
fmt.Fprint(w, score)
}
func (p *PlayerServer) processWin(w http.ResponseWriter, player string) {
p.store.RecordWin(player)
w.WriteHeader(http.StatusAccepted)
}
// InMemoryPlayerStore.go
package main
// main.go
package main
import (
"log"
"net/http"
)
func main() {
server := NewPlayerServer(NewInMemoryPlayerStore())
You can find the corresponding tests in the link at the top of the chapter.
Storing the data as JSON in a file keeps it very portable and is relatively simple to implement.
It won't scale especially well but given this is a prototype it'll be fine for now. If
our circumstances change and it's no longer appropriate it'll be simple to swap it
out for something different because of the PlayerStore abstraction we have
used.
We will keep the InMemoryPlayerStore for now so that the integration tests
keep passing as we develop our new store. Once we are confident our new
implementation is sufficient to make the integration test pass we will swap it in
and then delete InMemoryPlayerStore.
For this work to be complete we'll need to implement PlayerStore so we'll write
tests for our store calling the methods we need to implement. We'll start with
GetLeague.
store := FileSystemPlayerStore{database}
got := store.GetLeague()
want := []Player{
{"Cleo", 10},
{"Chris", 33},
}
Try again
# github.com/quii/learn-go-with-tests/json-and-io/v7
./FileSystemStore_test.go:15:28: too many values in struct initializer
./FileSystemStore_test.go:17:15: store.GetLeague undefined (type FileSystemPlayerStore has no field or method GetLeague)
It's complaining because we're passing in a Reader but not expecting one and it
doesn't have GetLeague defined yet.
Refactor
We have done this before! Our test code for the server had to decode the JSON
from the response.
We haven't got a strategy yet for dealing with parsing errors but let's press on.
Seeking problems
There is a flaw in our implementation. First of all, let's remind ourselves how
io.Reader is defined.
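For reference, io.Reader in the standard library is:

type Reader interface {
    Read(p []byte) (n int, err error)
}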
With our file, you can imagine it reading through byte by byte until the end.
What happens if you try to Read a second time?
// read again
got = store.GetLeague()
assertLeague(t, got, want)
We want this to pass, but if you run the test it doesn't.
The problem is our Reader has reached the end so there is nothing more to read.
We need a way to tell it to go back to the start.
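This is what io.ReadSeeker gives us. A sketch of the fix, assuming the store's database field is now an io.ReadSeeker and a NewLeague helper decodes the JSON (that name matches its use later in the chapter):

func (f *FileSystemPlayerStore) GetLeague() []Player {
    // go back to the beginning before decoding
    f.database.Seek(0, 0)
    league, _ := NewLeague(f.database)
    return league
}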
Try running the test; it now passes! Happily for us, the strings.Reader we
used in our test also implements ReadSeeker so we didn't have to make any
other changes.
store := FileSystemPlayerStore{database}
got := store.GetPlayerScore("Chris")
want := 33
if got != want {
t.Errorf("got %d want %d", got, want)
}
})
return wins
}
Refactor
You will have seen dozens of test helper refactorings so I'll leave this to you to
make it work
store := FileSystemPlayerStore{database}
got := store.GetPlayerScore("Chris")
want := 33
assertScoreEquals(t, got, want)
})
How do we write? We'd normally use a Writer but we already have our
ReadSeeker. Potentially we could have two dependencies but the standard
library already has an interface for us ReadWriteSeeker which lets us do all the
things we'll need to do with a file.
See if it compiles
I don't think there's an especially wrong answer here, but by choosing to use a
third party library I would have to explain dependency management! So we will
use files instead.
Before adding our test we need to make our other tests compile by replacing the
strings.Reader with an os.File.
Let's create a helper function which will create a temporary file with some data
inside it
func createTempFile(t testing.TB, initialData string) (io.ReadWriteSeeker, func()) {
    t.Helper()
    tmpfile, err := ioutil.TempFile("", "db")
    if err != nil {
        t.Fatalf("could not create temp file %v", err)
    }
    tmpfile.Write([]byte(initialData))
    removeFile := func() {
        tmpfile.Close()
        os.Remove(tmpfile.Name())
    }
    return tmpfile, removeFile
}
TempFile creates a temporary file for us to use. The "db" value we've passed in
is a prefix put on a random file name it will create. This is to ensure it won't
clash with other files by accident.
You'll notice we're not only returning our ReadWriteSeeker (the file) but also a
function. We need to make sure that the file is removed once the test is finished.
We don't want to leak details of the files into the test as it's prone to error and
uninteresting for the reader. By returning a removeFile function, we can take
care of the details in our helper and all the caller has to do is run defer
cleanDatabase().
store := FileSystemPlayerStore{database}
got := store.GetLeague()
want := []Player{
{"Cleo", 10},
{"Chris", 33},
}
// read again
got = store.GetLeague()
assertLeague(t, got, want)
})
store := FileSystemPlayerStore{database}
got := store.GetPlayerScore("Chris")
want := 33
assertScoreEquals(t, got, want)
})
}
Run the tests and they should be passing! There were a fair amount of changes
but now it feels like we have our interface definition complete and it should be
very easy to add new tests from now.
Let's get the first iteration of recording a win for an existing player
store := FileSystemPlayerStore{database}
store.RecordWin("Chris")
got := store.GetPlayerScore("Chris")
want := 34
assertScoreEquals(t, got, want)
})
f.database.Seek(0,0)
json.NewEncoder(f.database).Encode(league)
}
When you range over a slice you are returned the current index of the loop (in
our case i) and a copy of the element at that index. Changing the Wins value of a
copy won't have any effect on the league slice that we iterate on. For that
reason, we need to get the reference to the actual value by doing league[i] and
then changing that value instead.
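With that in mind, RecordWin at this stage might look roughly like this (a sketch: read the league, bump the winner's count via the index, then write it all back):

func (f *FileSystemPlayerStore) RecordWin(name string) {
    league := f.GetLeague()

    for i, player := range league {
        if player.Name == name {
            league[i].Wins++
        }
    }

    f.database.Seek(0, 0)
    json.NewEncoder(f.database).Encode(league)
}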
Refactor
In GetPlayerScore and RecordWin, we are iterating over []Player to find a
player by name.
Now if anyone has a League they can easily find a given player.
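A sketch of that type and its Find method:

type League []Player

func (l League) Find(name string) *Player {
    for i := range l {
        if l[i].Name == name {
            return &l[i]
        }
    }
    return nil
}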
Change our PlayerStore interface to return League rather than []Player. Try to
re-run the tests, you'll get a compilation problem because we've changed the
interface but it's very easy to fix; just change the return type from []Player to
League.
player := f.GetLeague().Find(name)
if player != nil {
return player.Wins
}
return 0
}
if player != nil {
player.Wins++
}
f.database.Seek(0, 0)
json.NewEncoder(f.database).Encode(league)
}
This is looking much better, and we can see how other useful functionality
around League might be found and refactored into it.
We now need to handle the scenario of recording wins of new players.
store := FileSystemPlayerStore{database}
store.RecordWin("Pepper")
got := store.GetPlayerScore("Pepper")
want := 1
assertScoreEquals(t, got, want)
})
if player != nil {
player.Wins++
} else {
league = append(league, Player{name, 1})
}
f.database.Seek(0, 0)
json.NewEncoder(f.database).Encode(league)
}
The happy path is looking ok so we can now try using our new Store in the
integration test. This will give us more confidence that the software works and
then we can delete the redundant InMemoryPlayerStore.
If you run the test it should pass and now we can delete InMemoryPlayerStore.
main.go will now have compilation problems, which will motivate us to use
our new store in the "real" code.
package main
import (
"log"
"net/http"
"os"
)
func main() {
db, err := os.OpenFile(dbFileName, os.O_RDWR|os.O_CREATE, 0666)
if err != nil {
log.Fatalf("problem opening %s %v", dbFileName, err)
}
store := &FileSystemPlayerStore{db}
server := NewPlayerServer(store)
Running the program now persists the data in a file in between restarts, hooray!
We can create a constructor which can do some of this initialisation for us and
store the league as a value in our FileSystemStore to be used on the reads
instead.
player := f.league.Find(name)
if player != nil {
return player.Wins
}
return 0
}
if player != nil {
player.Wins++
} else {
f.league = append(f.league, Player{name, 1})
}
f.database.Seek(0, 0)
json.NewEncoder(f.database).Encode(f.league)
}
If you try to run the tests it will now complain about initialising
FileSystemPlayerStore so just fix them by calling our new constructor.
Another problem
There is some more naivety in the way we are dealing with files which could
create a very nasty bug down the line.
When we RecordWin, we Seek back to the start of the file and then write the new
data—but what if the new data was smaller than what was there before?
In our current case, this is impossible. We never edit or delete scores so the data
can only get bigger. However, it would be irresponsible for us to leave the code
like this; it's not unthinkable that a delete scenario could come up.
How will we test for this though? What we need to do is first refactor our code
so we separate out the concern of the kind of data we write, from the writing. We
can then test that separately to check it works how we hope.
We'll create a new type to encapsulate our "when we write we go from the
beginning" functionality. I'm going to call it Tape. Create a new file with the
following:
package main
import "io"
Notice that we're only implementing Write now, as it encapsulates the Seek part.
This means our FileSystemStore can just have a reference to a Writer instead.
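A sketch of the tape type at this point, wrapping a ReadWriteSeeker and seeking to the start before every write:

type tape struct {
    file io.ReadWriteSeeker
}

func (t *tape) Write(p []byte) (n int, err error) {
    t.file.Seek(0, 0)
    return t.file.Write(p)
}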
return &FileSystemPlayerStore{
database: &tape{database},
league: league,
}
}
Finally, we can get the amazing payoff we wanted by removing the Seek call
from RecordWin. Yes, it doesn't feel much, but at least it means if we do any
other kind of writes we can rely on our Write to behave how we need it to. Plus
it will now let us test the potentially problematic code separately and fix it.
Let's write the test where we want to update the entire contents of a file with
something that is smaller than the original contents.
tape := &tape{file}
tape.Write([]byte("abc"))
file.Seek(0, 0)
newFileContents, _ := ioutil.ReadAll(file)
got := string(newFileContents)
want := "abc"
if got != want {
t.Errorf("got %q want %q", got, want)
}
}
As we thought! It writes the data we want, but leaves the rest of the original data
remaining.
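One way to fix this is to truncate the file before writing, which means tape needs an *os.File (which has Truncate) rather than just a ReadWriteSeeker. A sketch:

type tape struct {
    file *os.File
}

func (t *tape) Write(p []byte) (n int, err error) {
    // wipe the old contents before writing from the start
    t.file.Truncate(0)
    t.file.Seek(0, 0)
    return t.file.Write(p)
}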
We don't need to create a new encoder every time we write, we can initialise one
in our constructor and use that instead.
return &FileSystemPlayerStore{
database: json.NewEncoder(&tape{file}),
league: league,
}
}
Use it in RecordWin.
We were not confident that our implementation would work if we added any
kind of edit or delete functionality. We did not want to leave the code like that,
especially if this was being worked on by more than one person who may not be
aware of the shortcomings of our initial approach.
Finally, it's just one test! If we decide to change the way it works it won't be a
disaster to just delete the test but we have at the very least captured the
requirement for future maintainers.
Interfaces
We started off the code by using io.Reader as that was the easiest path for us to
unit test our new PlayerStore. As we developed the code we moved on to
io.ReadWriter and then io.ReadWriteSeeker. We then found out there was
nothing in the standard library that actually implemented that apart from
*os.File. We could've taken the decision to write our own or use an open
source one but it felt pragmatic just to make temporary files for the tests.
But what is this really giving us? Bear in mind we are not mocking and it is
unrealistic for a file system store to take any type other than an *os.File so we
don't need the polymorphism that interfaces give us.
Don't be afraid to chop and change types and experiment like we have here. The
great thing about using a statically typed language is the compiler will help you
with every change.
Error handling
Before we start working on sorting we should make sure we're happy with our
current code and remove any technical debt we may have. It's an important
principle to get to working software as quickly as possible (stay out of the red
state) but that doesn't mean we should ignore error cases!
It was pragmatic to ignore that at the time as we already had failing tests. If we
had tried to tackle it at the same time, we would have been juggling two things at
once.
if err != nil {
return nil, fmt.Errorf("problem loading player store from file %s, %v"
}
return &FileSystemPlayerStore{
database: json.NewEncoder(&tape{file}),
league: league,
}, nil
}
Remember it is very important to give helpful error messages (just like your
tests). People on the internet jokingly say that most Go code is:
if err != nil {
return err
}
That is 100% not idiomatic. Adding contextual information (i.e what you were
doing to cause the error) to your error messages makes operating your software
far easier.
if err != nil {
log.Fatalf("problem creating file system player store, %v ", err)
}
In the tests we should assert there is no error. We can make a helper to help with
this.
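A sketch of that helper, matching the failure message shown below:

func assertNoError(t testing.TB, err error) {
    t.Helper()
    if err != nil {
        t.Fatalf("didn't expect an error but got one, %v", err)
    }
}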
Work through the other compilation problems using this helper. Finally, you
should have a failing test:
=== RUN TestRecordingWinsAndRetrievingThem
--- FAIL: TestRecordingWinsAndRetrievingThem (0.00s)
server_integration_test.go:14: didn't expect an error but got one, problem
We cannot parse the league because the file is empty. We weren't getting errors
before because we always just ignored them.
Let's fix our big integration test by putting some valid JSON in it:
Now that all the tests are passing, we need to handle the scenario where the file
is empty.
_, err := NewFileSystemPlayerStore(database)
assertNoError(t, err)
})
file.Seek(0, 0)
if err != nil {
return nil, fmt.Errorf("problem getting file info from file %s, %v"
}
if info.Size() == 0 {
file.Write([]byte("[]"))
file.Seek(0, 0)
}
league, err := NewLeague(file)
if err != nil {
return nil, fmt.Errorf("problem loading player store from file %s, %v"
}
return &FileSystemPlayerStore{
database: json.NewEncoder(&tape{file}),
league: league,
}, nil
}
file.Stat returns stats on our file, which lets us check the size of the file. If it's
empty, we Write an empty JSON array and Seek back to the start, ready for the
rest of the code.
Refactor
Our constructor is a bit messy now, so let's extract the initialise code into a
function:
if err != nil {
return fmt.Errorf("problem getting file info from file %s, %v"
}
if info.Size()==0 {
file.Write([]byte("[]"))
file.Seek(0, 0)
}
return nil
}
if err != nil {
return nil, fmt.Errorf("problem initialising player db file, %v"
}
if err != nil {
return nil, fmt.Errorf("problem loading player store from file %s, %v"
}
return &FileSystemPlayerStore{
database: json.NewEncoder(&tape{file}),
league: league,
}, nil
}
Sorting
Our product owner wants /league to return the players sorted by their scores,
from highest to lowest.
The main decision to make here is where in the software should this happen. If
we were using a "real" database we would use things like ORDER BY so the
sorting is super fast. For that reason, it feels like implementations of
PlayerStore should be responsible.
got := store.GetLeague()
want := []Player{
{"Chris", 33},
{"Cleo", 10},
}
// read again
got = store.GetLeague()
assertLeague(t, got, want)
})
The JSON in the store is in the wrong order, and our want checks
that it is returned to the caller in the correct order.
sort.Slice
Slice sorts the provided slice given the provided less function.
Easy!
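A sketch of GetLeague using sort.Slice to order by wins, highest first (this needs the sort package imported):

func (f *FileSystemPlayerStore) GetLeague() League {
    sort.Slice(f.league, func(i, j int) bool {
        return f.league[i].Wins > f.league[j].Wins
    })
    return f.league
}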
Wrapping up
What we've covered
The Seeker interface and its relation to Reader and Writer.
Working with files.
Creating an easy to use helper for testing with files that hides all the messy
stuff.
sort.Slice for sorting slices.
Using the compiler to help us safely make structural changes to the
application.
Breaking rules
Most rules in software engineering aren't really rules, just best practices that
work 80% of the time.
We discovered a scenario where one of our previous "rules" of not testing
internal functions was not helpful for us so we broke the rule.
It's important when breaking rules to understand the trade-off you are
making. In our case, we were ok with it because it was just one test and
would've been very difficult to exercise the scenario otherwise.
In order to be able to break the rules you must understand them first. An
analogy is with learning guitar. It doesn't matter how creative you think you
are, you must understand and practice the fundamentals.
For now, our new command line application will just need to record a player's win when the user types
Ruth wins. The intention is for it to eventually become a tool to help users play poker.
The product owner wants the database to be shared amongst the two applications
so that the league updates according to wins recorded in the new application.
Before we get stuck into our new work we should structure our project to
accommodate this.
So far all the code has lived in one folder, in a path looking like this
$GOPATH/src/github.com/your-name/my-app
In order for you to make an application in Go, you need a main function inside a
package main. So far all of our "domain" code has lived inside package main
and our func main can reference everything.
This was fine so far and it is good practice not to go over-the-top with package
structure. If you take the time to look through the standard library you will see
very little in the way of lots of folders and structure.
Thankfully it's pretty straightforward to add structure when you need it.
Inside the existing project create a cmd directory with a webserver directory
inside that (e.g mkdir -p cmd/webserver).
If you have tree installed you should run it and your structure should look like
this
.
├── FileSystemStore.go
├── FileSystemStore_test.go
├── cmd
│ └── webserver
│ └── main.go
├── league.go
├── server.go
├── server_integration_test.go
├── server_test.go
├── tape.go
└── tape_test.go
We now effectively have a separation between our application and the library
code but we now need to change some package names. Remember when you
build a Go application its package must be main.
The paths will be different on your computer, but it should be similar to this:
package main
import (
"github.com/quii/learn-go-with-tests/command-line/v1"
"log"
"net/http"
"os"
)
func main() {
db, err := os.OpenFile(dbFileName, os.O_RDWR|os.O_CREATE, 0666)
if err != nil {
log.Fatalf("problem opening %s %v", dbFileName, err)
}
if err != nil {
log.Fatalf("problem creating file system player store, %v ", err)
}
server := poker.NewPlayerServer(store)
The full path may seem a bit jarring, but this is how you can import any publicly
available library into your code.
Final checks
Inside the root run go test and check they're still passing
Go inside our cmd/webserver and do go run main.go
Visit http://localhost:5000/league and you should see it's still working
Walking skeleton
Before we get stuck into writing tests, let's add a new application that our project
will build. Create another directory inside cmd called cli (command line
interface) and add a main.go with the following
package main
import "fmt"
func main() {
fmt.Println("Let's play poker")
}
The first requirement we'll tackle is recording a win when the user types
{PlayerName} wins.
Before we jump too far ahead though, let's just write a test to check it integrates
with the PlayerStore how we'd like.
Inside CLI_test.go (in the root of the project, not inside cmd)
func TestCLI(t *testing.T) {
playerStore := &StubPlayerStore{}
cli := &CLI{playerStore}
cli.PlayPoker()
if len(playerStore.winCalls) != 1 {
t.Fatal("expected a win call but didn't get any")
}
}
Remember we're just trying to get the test running so we can check the test fails
how we'd hope
--- FAIL: TestCLI (0.00s)
cli_test.go:30: expected a win call but didn't get any
FAIL
Next, we need to simulate reading from Stdin (the input from the user) so that
we can record wins for specific players.
if len(playerStore.winCalls) < 1 {
t.Fatal("expected a win call but didn't get any")
}
got := playerStore.winCalls[0]
want := "Chris"
if got != want {
t.Errorf("didn't record correct winner, got %q, want %q", got, want)
}
}
os.Stdin is what we'll use in main to capture the user's input. It is a *File under
the hood which means it implements io.Reader which as we know by now is a
handy way of capturing text.
The test passes. We'll add another test to force us to write some real code next,
but first, let's refactor.
Refactor
In server_test we earlier did checks to see if wins are recorded as we have
here. Let's DRY that assertion up into a helper
if len(store.winCalls) != 1 {
t.Fatalf("got %d calls to RecordWin want %d", len(store.winCalls),
}
if store.winCalls[0] != winner {
t.Errorf("did not store correct winner got %q want %q", store.winCalls[
}
}
Now let's write another test with different user input to force us into actually
reading it.
Now that we have some passing tests, we should wire this up into main.
Remember we should always strive to have fully-integrated working software as
quickly as we can.
In main.go add the following and run it. (you may have to adjust the path of the
second dependency to match what's on your computer)
package main
import (
"fmt"
"github.com/quii/learn-go-with-tests/command-line/v3"
"log"
"os"
)
if err != nil {
log.Fatalf("problem opening %s %v", dbFileName, err)
}
if err != nil {
log.Fatalf("problem creating file system player store, %v ", err)
}
This highlights the importance of integrating your work. We rightfully made the
dependencies of our CLI private (because we don't want them exposed to users
of CLIs) but haven't made a way for users to construct it.
package mypackage_test
In all other examples so far, when we make a test file we declare it as being in
the same package that we are testing.
This is fine and it means on the odd occasion where we want to test something
internal to the package we have access to the unexported types.
But given we have advocated for not testing internal things generally, can Go
help enforce that? What if we could test our code where we only have access to
the exported types (like our main does)?
An adage with TDD is that if you cannot test your code then it is probably hard
for users of your code to integrate with it. Using package foo_test will help
with this by forcing you to test your code as if you are importing it like users of
your package will.
Before fixing main let's change the package of our test inside CLI_test.go to
poker_test.
If you have a well-configured IDE you will suddenly see a lot of red! If you run
the compiler you'll get the following errors
./CLI_test.go:12:19: undefined: StubPlayerStore
./CLI_test.go:17:3: undefined: assertPlayerWin
./CLI_test.go:22:19: undefined: StubPlayerStore
./CLI_test.go:27:3: undefined: assertPlayerWin
We have now stumbled into more questions on package design. In order to test
our software we made unexported stubs and helper functions which are no
longer available for us to use in our CLI_test because the helpers are defined in
the _test.go files in the poker package.
This is a subjective discussion. One could argue that you do not want to pollute
your package's API with code to facilitate tests.
In the presentation "Advanced Testing with Go" by Mitchell Hashimoto, it is
described how at HashiCorp they advocate doing this so that users of the
package can write tests without having to re-invent the wheel writing stubs. In
our case, this would mean anyone using our poker package won't have to create
their own stub PlayerStore if they wish to work with our code.
Anecdotally I have used this technique in other shared packages and it has
proved extremely useful in terms of users saving time when integrating with our
packages.
So let's create a file called testing.go and add our stub and our helpers.
package poker
import "testing"
if len(store.winCalls) != 1 {
t.Fatalf("got %d calls to RecordWin want %d", len(store.winCalls),
}
if store.winCalls[0] != winner {
t.Errorf("did not store correct winner got %q want %q", store.winCalls[
}
}
You'll need to make the helpers public (remember exporting is done with a
capital letter at the start) if you want them to be exposed to importers of our
package.
In our CLI test you'll need to call the code as if you were using it within a
different package.
By doing this, we can then simplify and refactor our reading code
Change the test to use the constructor instead and we should be back to the tests
passing.
Finally, we can go back to our new main.go and use the constructor we just
made
Refactor
We have some repetition in our respective applications where we are opening a
file and creating a FileSystemStore from its contents. This feels like a slight
weakness in our package's design so we should make a function in it to
encapsulate opening a file from a path and returning you the PlayerStore.
if err != nil {
return nil, nil, fmt.Errorf("problem opening %s %v", path, err)
}
closeFunc := func() {
db.Close()
}
if err != nil {
return nil, nil, fmt.Errorf("problem creating file system player store,
}
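Pieced together, the function might look roughly like this (a sketch built from the fragments above; the exact signature is an assumption):

func FileSystemPlayerStoreFromFile(path string) (*FileSystemPlayerStore, func(), error) {
    db, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0666)
    if err != nil {
        return nil, nil, fmt.Errorf("problem opening %s %v", path, err)
    }

    // let the caller decide when to close the underlying file
    closeFunc := func() {
        db.Close()
    }

    store, err := NewFileSystemPlayerStore(db)
    if err != nil {
        return nil, nil, fmt.Errorf("problem creating file system player store, %v ", err)
    }

    return store, closeFunc, nil
}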
Now refactor both of our applications to use this function to create the store.
package main
import (
"fmt"
"github.com/quii/learn-go-with-tests/command-line/v3"
"log"
"os"
)
func main() {
store, close, err := poker.FileSystemPlayerStoreFromFile(dbFileName)
if err != nil {
log.Fatal(err)
}
defer close()
package main
import (
"github.com/quii/learn-go-with-tests/command-line/v3"
"log"
"net/http"
)
func main() {
store, close, err := poker.FileSystemPlayerStoreFromFile(dbFileName)
if err != nil {
log.Fatal(err)
}
defer close()
server := poker.NewPlayerServer(store)
Wrapping up
Package structure
This chapter meant we wanted to create two applications, re-using the domain
code we've written so far. In order to do this, we needed to update our package
structure so that we had separate folders for our respective mains.
By doing this we ran into integration problems due to unexported values so this
further demonstrates the value of working in small "slices" and integrating often.
The product owner wants us to expand the functionality of our command line
application by helping a group of people play Texas-Holdem Poker.
Our application will help keep track of when the blind should go up, and how
much it should be.
When it starts it asks how many players are playing. This determines the
amount of time there is before the "blind" bet goes up.
There is a base amount of time of 5 minutes.
For every player, 1 minute is added.
e.g 6 players equals 11 minutes for the blind.
After the blind time expires the game should alert the players the new
amount the blind bet is.
The blind starts at 100 chips, then 200, 400, 600, 1000, 2000 and continues
to double until the game ends (our previous functionality of "Ruth wins"
should still finish the game)
time.AfterFunc
We want to be able to schedule our program to print the blind bet values at
certain durations dependent on the number of players.
To limit the scope of what we need to do, we'll forget about the number of
players part for now and just assume there are 5 players so we'll test that every
10 minutes the new value of the blind bet is printed.
As usual the standard library has us covered with func AfterFunc(d Duration,
f func()) *Timer
AfterFunc waits for the duration to elapse and then calls f in its own
goroutine. It returns a Timer that can be used to cancel the call using its
Stop method.
time.Duration
A Duration represents the elapsed time between two instants as an int64
nanosecond count.
The time library has a number of constants to let you multiply those
nanoseconds so they're a bit more readable for the kind of scenarios we'll be
doing
5 * time.Second
Testing this may be a little tricky though. We'll want to verify that each time
period is scheduled with the correct blind amount but if you look at the signature
of time.AfterFunc its second argument is the function it will run. You cannot
compare functions in Go so we'd be unable to test what function has been sent
in. So we'll need to write some kind of wrapper around time.AfterFunc which
will take the time to run and the amount to print so we can spy on that.
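A sketch of that wrapper interface and a spy for it, with field names matching the assertions used below:

type BlindAlerter interface {
    ScheduleAlertAt(duration time.Duration, amount int)
}

type SpyBlindAlerter struct {
    alerts []struct {
        scheduledAt time.Duration
        amount      int
    }
}

func (s *SpyBlindAlerter) ScheduleAlertAt(duration time.Duration, amount int) {
    // record the call so tests can assert on it later
    s.alerts = append(s.alerts, struct {
        scheduledAt time.Duration
        amount      int
    }{duration, amount})
}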
if len(blindAlerter.alerts) != 1 {
t.Fatal("expected a blind alert to be scheduled")
}
})
You'll notice we've made a SpyBlindAlerter which we are trying to inject into
our CLI and then checking that after we call PlayPoker an alert is
scheduled.
(Remember we are just going for the simplest scenario first and then we'll
iterate.)
Your other tests will now fail as they don't have a BlindAlerter passed in to
NewCLI.
Spying on BlindAlerter is not relevant for the other tests so in the test file add
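A minimal dummy might be (the variable name is an assumption):

var dummySpyAlerter = &SpyBlindAlerter{}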
Then use that in the other tests to fix the compilation problems. By labelling it as
a "dummy" it is clear to the reader of the test that it is not important.
> Dummy objects are passed around but never actually used. Usually they are
just used to fill parameter lists.
The tests should now compile and our new test fails.
=== RUN TestCLI
=== RUN TestCLI/it_schedules_printing_of_blind_values
--- FAIL: TestCLI (0.00s)
--- FAIL: TestCLI/it_schedules_printing_of_blind_values (0.00s)
CLI_test.go:38: expected a blind alert to be scheduled
To make the test pass, we can call our BlindAlerter with anything we like
Next we'll want to check it schedules all the alerts we'd hope for, for 5 players
cases := []struct{
expectedScheduleTime time.Duration
expectedAmount int
} {
{0 * time.Second, 100},
{10 * time.Minute, 200},
{20 * time.Minute, 300},
{30 * time.Minute, 400},
{40 * time.Minute, 500},
{50 * time.Minute, 600},
{60 * time.Minute, 800},
{70 * time.Minute, 1000},
{80 * time.Minute, 2000},
{90 * time.Minute, 4000},
{100 * time.Minute, 8000},
}
if len(blindAlerter.alerts) <= i {
t.Fatalf("alert %d was not scheduled %v", i, blindAlerter.a
}
alert := blindAlerter.alerts[i]
amountGot := alert.amount
if amountGot != c.expectedAmount {
t.Errorf("got amount %d, want %d", amountGot, c.expectedAmo
}
gotScheduledTime := alert.scheduledAt
if gotScheduledTime != c.expectedScheduleTime {
t.Errorf("got scheduled time of %v, want %v", gotScheduledT
}
})
}
})
Table-based tests work nicely here and clearly illustrate what our requirements
are. We run through the table and check the SpyBlindAlerter to see if the alert
has been scheduled with the correct values.
blinds := []int{100, 200, 300, 400, 500, 600, 800, 1000, 2000, 4000, 8000}
blindTime := 0 * time.Second
for _, blind := range blinds {
cli.alerter.ScheduleAlertAt(blindTime, blind)
blindTime = blindTime + 10 * time.Minute
}
userInput := cli.readLine()
cli.playerStore.RecordWin(extractWinner(userInput))
}
It's not a lot more complicated than what we already had. We're just now
iterating over an array of blinds and calling the scheduler on an increasing
blindTime
Refactor
We can encapsulate our scheduled alerts into a method just to make PlayPoker
read a little clearer.
Finally our tests are looking a little clunky. We have two anonymous structs
representing the same thing, a ScheduledAlert. Let's refactor that into a new
type and then make some helpers to compare them.
We've added a String() method to our type so it prints nicely if the test fails
cases := []scheduledAlert {
{0 * time.Second, 100},
{10 * time.Minute, 200},
{20 * time.Minute, 300},
{30 * time.Minute, 400},
{40 * time.Minute, 500},
{50 * time.Minute, 600},
{60 * time.Minute, 800},
{70 * time.Minute, 1000},
{80 * time.Minute, 2000},
{90 * time.Minute, 4000},
{100 * time.Minute, 8000},
}
if len(blindAlerter.alerts) <= i {
t.Fatalf("alert %d was not scheduled %v", i, blindAlerter.alert
}
got := blindAlerter.alerts[i]
assertScheduledAlert(t, got, want)
})
}
})
We've spent a fair amount of time here writing tests and have been somewhat
naughty not integrating with our application. Let's address that before we pile on
any more requirements.
Try running the app and it won't compile, complaining about not enough args to
NewCLI.
Create BlindAlerter.go and move our BlindAlerter interface and add the new
things below
package poker
import (
"time"
"fmt"
"os"
)
Remember that any type can implement an interface, not just structs. If you are
making a library that exposes an interface with one function defined it is a
common idiom to also expose a MyInterfaceFunc type.
This type will be a func which will also implement your interface. That way
users of your interface have the option to implement your interface with just a
function; rather than having to create an empty struct type.
We then create the function StdOutAlerter which has the same signature as the
function and just use time.AfterFunc to schedule it to print to os.Stdout.
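A sketch of those two pieces together (using only time.AfterFunc and fmt.Fprintf; the printed wording is an assumption):

type BlindAlerterFunc func(duration time.Duration, amount int)

// ScheduleAlertAt lets a plain function satisfy the BlindAlerter interface.
func (a BlindAlerterFunc) ScheduleAlertAt(duration time.Duration, amount int) {
    a(duration, amount)
}

func StdOutAlerter(duration time.Duration, amount int) {
    time.AfterFunc(duration, func() {
        fmt.Fprintf(os.Stdout, "Blind is now %d\n", amount)
    })
}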
Before running you might want to change the blindTime increment in CLI to be
10 seconds rather than 10 minutes just so you can see it in action.
You should see it print the blind values as we'd expect every 10 seconds. Notice
how you can still type Shaun wins into the CLI and it will stop the program how
we'd expect.
The game won't always be played with 5 people so we need to prompt the user to
enter a number of players before the game starts.
We don't care about our other collaborators in this test just yet so we've made
some dummies in our test file.
We should be a little wary that we now have 4 dependencies for CLI; that feels
like it is starting to have too many responsibilities. Let's live with it for
now and see if a refactoring emerges as we add this new functionality.
t.Run("it prompts the user to enter the number of players", func(t *testing.T)
stdout := &bytes.Buffer{}
cli := poker.NewCLI(dummyPlayerStore, dummyStdIn, stdout, dummyBlindAlerter)
cli.PlayPoker()
got := stdout.String()
want := "Please enter the number of players: "
if got != want {
t.Errorf("got %q, want %q", got, want)
}
})
We pass in what will be os.Stdout in main and see what is written.
Now the other tests will fail to compile because they don't have an io.Writer
being passed into NewCLI.
Then finally we can write our prompt at the start of the game
Refactor
We have a duplicate string for the prompt which we should extract into a
constant
Now we need to send in a number and extract it out. The only way we'll know if
it has had the desired effect is by seeing what blind alerts were scheduled.
t.Run("it prompts the user to enter the number of players", func(t *testing.T)
stdout := &bytes.Buffer{}
in := strings.NewReader("7\n")
blindAlerter := &SpyBlindAlerter{}
got := stdout.String()
want := poker.PlayerPrompt
if got != want {
t.Errorf("got %q, want %q", got, want)
}
cases := []scheduledAlert{
{0 * time.Second, 100},
{12 * time.Minute, 200},
{24 * time.Minute, 300},
{36 * time.Minute, 400},
}
if len(blindAlerter.alerts) <= i {
t.Fatalf("alert %d was not scheduled %v", i, blindAlerter.alert
}
got := blindAlerter.alerts[i]
assertScheduledAlert(t, got, want)
})
}
})
We remove our dummy for StdIn and instead send in a mocked version
representing our user entering 7
We also remove our dummy on the blind alerter so we can see that the
number of players has had an effect on the scheduling
We test what alerts are scheduled
numberOfPlayers, _ := strconv.Atoi(cli.readLine())
cli.scheduleBlindAlerts(numberOfPlayers)
userInput := cli.readLine()
cli.playerStore.RecordWin(extractWinner(userInput))
}
blinds := []int{100, 200, 300, 400, 500, 600, 800, 1000, 2000, 4000, 8000}
blindTime := 0 * time.Second
for _, blind := range blinds {
cli.alerter.ScheduleAlertAt(blindTime, blind)
blindTime = blindTime + blindIncrement
}
}
While our new test has been fixed, a lot of others have failed because now our
system only works if the game starts with a user entering a number. You'll need
to fix the tests by changing the user inputs so that a number followed by a
newline is added (this is highlighting yet more flaws in our approach right now).
Refactor
This all feels a bit horrible right? Let's listen to our tests.
We can refactor toward our Game first and our test should continue to pass. Once
we've made the structural changes we want we can think about how we can
refactor the tests to reflect our new separation of concerns
For now don't change the external interface of NewCLI as we don't want to
change the test code and the client code at the same time as that is too much to
juggle and we could end up breaking things.
// game.go
type Game struct {
alerter BlindAlerter
store PlayerStore
}
blinds := []int{100, 200, 300, 400, 500, 600, 800, 1000, 2000, 4000, 8000}
blindTime := 0 * time.Second
for _, blind := range blinds {
p.alerter.ScheduleAlertAt(blindTime, blind)
blindTime = blindTime + blindIncrement
}
}
// cli.go
type CLI struct {
in *bufio.Scanner
out io.Writer
game *Game
}
numberOfPlayersInput := cli.readLine()
numberOfPlayers, _ := strconv.Atoi(strings.Trim(numberOfPlayersInput, "\n"))
cli.game.Start(numberOfPlayers)
winnerInput := cli.readLine()
winner := extractWinner(winnerInput)
cli.game.Finish(winner)
}
Constructing Game with its existing dependencies (which we'll refactor next)
Interpreting user input as method invocations for Game
We want to try to avoid doing "big" refactors which leave us in a state of failing
tests for extended periods as that increases the chances of mistakes. (If you are
working in a large/distributed team this is extra important)
The first thing we'll do is refactor Game so that we inject it into CLI. We'll do the
smallest changes in our tests to facilitate that and then we'll see how we can
break up the tests into the themes of parsing user input and game management.
This feels like an improvement already. We have fewer dependencies and our
dependency list reflects our overall design goal of CLI being concerned
with input/output and delegating game-specific actions to a Game.
If you try and compile there are problems. You should be able to fix these
problems yourself. Don't worry about making any mocks for Game right now, just
initialise real Games just to get everything compiling and tests green.
Here's an example of one of the setups for the tests being fixed
stdout := &bytes.Buffer{}
in := strings.NewReader("7\n")
blindAlerter := &SpyBlindAlerter{}
game := poker.NewGame(blindAlerter, dummyPlayerStore)
cli := poker.NewCLI(in, stdout, game)
cli.PlayPoker()
It shouldn't take much effort to fix the tests and be back to green again (that's the
point!) but make sure you fix main.go too before the next stage.
// main.go
game := poker.NewGame(poker.BlindAlerterFunc(poker.StdOutAlerter), store)
cli := poker.NewCLI(os.Stdin, os.Stdout, game)
cli.PlayPoker()
Now that we have extracted out Game we should move our game specific
assertions into tests separate from CLI.
This is just an exercise in copying our CLI tests but with fewer dependencies
game.Start(5)
cases := []poker.ScheduledAlert{
{At: 0 * time.Second, Amount: 100},
{At: 10 * time.Minute, Amount: 200},
{At: 20 * time.Minute, Amount: 300},
{At: 30 * time.Minute, Amount: 400},
{At: 40 * time.Minute, Amount: 500},
{At: 50 * time.Minute, Amount: 600},
{At: 60 * time.Minute, Amount: 800},
{At: 70 * time.Minute, Amount: 1000},
{At: 80 * time.Minute, Amount: 2000},
{At: 90 * time.Minute, Amount: 4000},
{At: 100 * time.Minute, Amount: 8000},
}
checkSchedulingCases(cases, t, blindAlerter)
})
t.Run("schedules alerts on game start for 7 players", func(t *testing.T) {
blindAlerter := &poker.SpyBlindAlerter{}
game := poker.NewGame(blindAlerter, dummyPlayerStore)
game.Start(7)
cases := []poker.ScheduledAlert{
{At: 0 * time.Second, Amount: 100},
{At: 12 * time.Minute, Amount: 200},
{At: 24 * time.Minute, Amount: 300},
{At: 36 * time.Minute, Amount: 400},
}
checkSchedulingCases(cases, t, blindAlerter)
})
game.Finish(winner)
poker.AssertPlayerWin(t, store, winner)
}
The intent behind what happens when a game of poker starts is now much
clearer.
Make sure to also move over the test for when the game ends.
Once we are happy we have moved the tests over for game logic we can simplify
our CLI tests so they reflect our intended responsibilities clearer
To do this we'll have to make it so CLI no longer relies on a concrete Game type
but instead accepts an interface with Start(numberOfPlayers) and
Finish(winner). We can then create a spy of that type and verify the correct
calls are made.
It's here we realise that naming is awkward sometimes. Rename Game to
TexasHoldem (as that's the kind of game we're playing) and the new interface
will be called Game. This keeps faithful to the notion that our CLI is oblivious to
the actual game we're playing and what happens when you Start and Finish.
Replace all references to *Game inside CLI and replace them with Game (our new
interface). As always keep re-running tests to check everything is green while
we are refactoring.
Now that we have decoupled CLI from TexasHoldem we can use spies to check
that Start and Finish are called when we expect them to, with the correct
arguments.
Replace any CLI test which is testing any game specific logic with checks on
how our GameSpy is called. This will then reflect the responsibilities of CLI in
our tests clearly.
Here is an example of one of the tests being fixed; try and do the rest yourself
and check the source code if you get stuck.
t.Run("it prompts the user to enter the number of players and starts the ga
stdout := &bytes.Buffer{}
in := strings.NewReader("7\n")
game := &GameSpy{}
gotPrompt := stdout.String()
wantPrompt := poker.PlayerPrompt
if gotPrompt != wantPrompt {
t.Errorf("got %q, want %q", gotPrompt, wantPrompt)
}
if game.StartedWith != 7 {
t.Errorf("wanted Start called with 7 but got %d", game.StartedWith)
}
})
Now that we have a clean separation of concerns, checking edge cases around IO
in our CLI should be easier.
We need to address the scenario where a user enters a non-numeric value when
prompted for the number of players:
Our code should not start the game and it should print a handy error to the user
and then exit.
t.Run("it prints an error when a non numeric value is entered and does not star
stdout := &bytes.Buffer{}
in := strings.NewReader("Pies\n")
game := &GameSpy{}
You'll need to add to our GameSpy a field StartCalled which only gets set if
Start is called
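A sketch of what GameSpy might look like at this point (field names follow the tests above; FinishedWith is an assumption):

type GameSpy struct {
    StartCalled  bool
    StartedWith  int
    FinishedWith string
}

func (g *GameSpy) Start(numberOfPlayers int) {
    g.StartCalled = true
    g.StartedWith = numberOfPlayers
}

func (g *GameSpy) Finish(winner string) {
    g.FinishedWith = winner
}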
if err != nil {
return
}
Next we need to inform the user of what they did wrong so we'll assert on what
is printed to stdout.
gotPrompt := stdout.String()
wantPrompt := poker.PlayerPrompt + "you're so silly"
if gotPrompt != wantPrompt {
t.Errorf("got %q, want %q", gotPrompt, wantPrompt)
}
We are storing everything that gets written to stdout so we still expect the
poker.PlayerPrompt. We then just check an additional thing gets printed. We're
not too bothered about the exact wording for now, we'll address it when we
refactor.
if err != nil {
fmt.Fprint(cli.out, "you're so silly")
return
}
Refactor
Now refactor the message into a constant like PlayerPrompt
Finally our testing around what has been sent to stdout is quite verbose, let's
write an assert function to clean it up.
Using the vararg syntax (...string) is handy here because we need to assert on
varying amounts of messages.
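A sketch of such a helper, joining the expected messages and comparing them against everything written to stdout:

func assertMessagesSentToUser(t testing.TB, stdout *bytes.Buffer, messages ...string) {
    t.Helper()
    want := strings.Join(messages, "")
    got := stdout.String()
    if got != want {
        t.Errorf("got %q sent to stdout but expected %+v", got, messages)
    }
}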
Use this helper in both of the tests where we assert on messages sent to the user.
There are a number of tests that could be helped with some assertX functions so
practice your refactoring by cleaning up our tests so they read nicely.
Take some time and think about the value of some of the tests we've driven out.
Remember we don't want more tests than necessary; can you refactor/remove
some of them and still be confident it all works?
t.Run("start game with 3 players and finish game with 'Chris' as winner"
game := &GameSpy{}
stdout := &bytes.Buffer{}
cli.PlayPoker()
assertMessagesSentToUser(t, stdout, poker.PlayerPrompt)
assertGameStartedWith(t, game, 3)
assertFinishCalledWith(t, game, "Chris")
})
cli.PlayPoker()
assertGameStartedWith(t, game, 8)
assertFinishCalledWith(t, game, "Cleo")
})
t.Run("it prints an error when a non numeric value is entered and does not
game := &GameSpy{}
stdout := &bytes.Buffer{}
in := userSends("pies")
assertGameNotStarted(t, game)
assertMessagesSentToUser(t, stdout, poker.PlayerPrompt, poker.BadPlayerInputErrMsg)
})
}
The tests now reflect the main capabilities of CLI: it can read user input to
determine how many people are playing and who won, and it handles a bad value
being entered for the number of players. By doing this it is clear to the reader what
CLI does, and also what it doesn't do.
What happens if instead of putting Ruth wins the user puts in Lloyd is a
killer?
Finish this chapter by writing a test for this scenario and making it pass.
Wrapping up
A quick project recap
For the past 5 chapters we have slowly TDD'd a fair amount of code.
time.AfterFunc
A very handy way of scheduling a function call after a specific duration. It is well worth investing time looking at the documentation for time as it has a lot of time-saving functions and methods for you to work with.
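For illustration, a tiny self-contained sketch of time.AfterFunc (the message and durations are made up):

package main

import (
	"fmt"
	"time"
)

func main() {
	// schedule a function to run once, 100ms from now
	timer := time.AfterFunc(100*time.Millisecond, func() {
		fmt.Println("Blind is now 200")
	})
	defer timer.Stop()

	// keep the program alive long enough for the scheduled call to fire
	time.Sleep(200 * time.Millisecond)
}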
Our tests got messy. We had too many assertions (check this input, schedules
these alerts, etc) and too many dependencies. We could visually see it was
cluttered; it is so important to listen to your tests.
If your tests look messy try and refactor them.
If you've done this and they're still a mess it is very likely pointing to a flaw
in your design
This is one of the real strengths of tests.
Even though the tests and the production code were a bit cluttered we could freely
refactor backed by our tests.
Remember when you get into these situations to always take small steps and re-
run the tests after every change.
It would've been dangerous to refactor both the test code and the production
code at the same time, so we first refactored the production code (in the current
state we couldn't improve the tests much) without changing its interface so we
could rely on our tests as much as we could while changing things. Then we
refactored the tests after the design improved.
After refactoring the dependency list reflected our design goal. This is another
benefit of DI in that it often documents intent. When you rely on global variables
responsibilities become very unclear.
WebSockets
In this chapter we'll learn how to use WebSockets to improve our application.
Project recap
We have two applications in our poker codebase
Command line app. Prompts the user to enter the number of players in a
game. From then on informs the players of what the "blind bet" value is,
which increases over time. At any point a user can enter "{Playername}
wins" to finish the game and record the victor in a store.
Web app. Allows users to record winners of games and displays a league
table. Shares the same store as the command line app.
Next steps
The product owner is thrilled with the command line application but would
prefer it if we could bring that functionality to the browser. She imagines a web
page with a text box that allows the user to enter the number of players and when
they submit the form the page displays the blind value and automatically updates
it when appropriate. Like the command line application the user can declare the
winner and it'll get saved in the database.
On the face of it, it sounds quite simple but as always we must emphasise taking
an iterative approach to writing software.
First of all we will need to serve HTML. So far all of our HTTP endpoints have
returned either plaintext or JSON. We could use the same techniques we know
(as they're all ultimately strings) but we can also use the html/template package
for a cleaner solution.
We also need to be able to asynchronously send messages to the user saying "The blind is now y" without having to refresh the browser. We can use WebSockets to facilitate this.
For that reason the first thing we'll do is create a web page with a form for the
user to record a winner. Rather than using a plain form, we will use WebSockets
to send that data to our server for it to record.
After that we'll work on the blind alerts by which point we will have a bit of
infrastructure code set up.
Testing the client-side JavaScript we'll need is of course possible, but for the sake of brevity I won't be including any explanations for it.
Sorry folks. Lobby O'Reilly to pay me to make a "Learn JavaScript with tests".
p.store = store
router := http.NewServeMux()
router.Handle("/league", http.HandlerFunc(p.leagueHandler))
router.Handle("/players/", http.HandlerFunc(p.playersHandler))
p.Handler = router
return p
}
The easiest thing we can do for now is check when we GET /game that we get a
200.
server.ServeHTTP(response, request)
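Pieced together, the test might look something like this sketch; the sub-test name and the dummy store are placeholders for whatever you already use in your server tests:

t.Run("GET /game returns 200", func(t *testing.T) {
	server := NewPlayerServer(dummyPlayerStore)

	request, _ := http.NewRequest(http.MethodGet, "/game", nil)
	response := httptest.NewRecorder()

	server.ServeHTTP(response, request)

	if response.Code != http.StatusOK {
		t.Errorf("got status %d but wanted %d", response.Code, http.StatusOK)
	}
})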
router.Handle("/game", http.HandlerFunc(p.game))
Refactor
The server code is already in good shape: slotting the new route into the existing, well-factored code was very easy.
We can tidy up the test a little by adding a test helper function newGameRequest
to make the request to /game. Try writing this yourself.
request := newGameRequest()
response := httptest.NewRecorder()
server.ServeHTTP(response, request)
if (window['WebSocket']) {
const conn = new WebSocket('ws://' + document.location.host + '/ws')
WebSocket is built into most modern browsers so we don't need to worry about
bringing in any libraries. The web page won't work for older browsers, but we're
ok with that for this scenario.
1. Write a browser based test, using something like Selenium. These tests are
the most "realistic" of all approaches because they start an actual web
browser of some kind and simulate a user interacting with it. These tests
can give you a lot of confidence your system works but are more difficult to
write than unit tests and much slower to run. For the purposes of our
product this is overkill.
2. Do an exact string match. This can be ok but these kinds of tests end up
being very brittle. The moment someone changes the markup you will have
a test failing when in practice nothing has actually broken.
3. Check we call the correct template. We will be using a templating library
from the standard lib to serve the HTML (discussed shortly) and we could
inject in the thing to generate the HTML and spy on its call to check we're
doing it right. This would have an impact on our code's design but doesn't
actually test a great deal; other than we're calling it with the correct
template file. Given we will only have the one template in our project the
chance of failure here seems low.
So, for the first time in "Learn Go with Tests", we're not going to write a test.
if err != nil {
http.Error(w, fmt.Sprintf("problem loading template %s", err.Error()), http.StatusInternalServerError)
return
}
tmpl.Execute(w, nil)
}
html/template is a Go package for creating HTML. In our case we call
template.ParseFiles, giving the path of our html file. Assuming there is no
error you can then Execute the template, which writes it to an io.Writer. In our
case we want it to Write to the internet, so we give it our http.ResponseWriter.
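Pieced together with the fragment above, the handler might look something like this sketch (the template path is an assumption; see the note about paths below):

func (p *PlayerServer) game(w http.ResponseWriter, r *http.Request) {
	tmpl, err := template.ParseFiles("game.html")

	if err != nil {
		http.Error(w, fmt.Sprintf("problem loading template %s", err.Error()), http.StatusInternalServerError)
		return
	}

	tmpl.Execute(w, nil)
}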
As we have not written a test, it would be prudent to manually test our web
server just to make sure things are working as we'd hope. Go to cmd/webserver
and run the main.go file. Visit http://localhost:5000/game.
You should have got an error about not being able to find the template. You can
either change the path to be relative to your folder, or you can have a copy of the
game.html in the cmd/webserver directory. I chose to create a symlink (ln -s
../../game.html game.html) to the file inside the root of the project so if I
make changes they are reflected when running the server.
If you make this change and run again you should see our UI.
Now we need to test that when we get a string over a WebSocket connection to our server, we declare it as the winner of a game.
Fetching the excellent Gorilla WebSocket library (for example with go get github.com/gorilla/websocket) gives us the code we need. Now we can update our tests for our new requirement.
Make sure that you have an import for the websocket library. My IDE
automatically did it for me, so should yours.
To test what happens from the browser we have to open up our own WebSocket
connection and write to it.
Our previous tests around our server just called methods on our server but now
we need to have a persistent connection to our server. To do that we use
httptest.NewServer which takes a http.Handler and will spin it up and listen
for connections.
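A sketch of that setup inside the test; playerServer and winner stand in for whatever you have constructed earlier in the test, and the /ws path matches the route we register below:

server := httptest.NewServer(playerServer)
defer server.Close()

// the test server gives us an http:// URL; a WebSocket dial needs ws://
wsURL := "ws" + strings.TrimPrefix(server.URL, "http") + "/ws"

ws, _, err := websocket.DefaultDialer.Dial(wsURL, nil)
if err != nil {
	t.Fatalf("could not open a ws connection on %s %v", wsURL, err)
}
defer ws.Close()

// send the winner's name over the connection, just like the browser will
if err := ws.WriteMessage(websocket.TextMessage, []byte(winner)); err != nil {
	t.Fatalf("could not send message over ws connection %v", err)
}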
Finally we assert on the player store to check the winner was recorded.
router.Handle("/ws", http.HandlerFunc(p.webSocket))
Now that we have a connection opened, we'll want to listen for a message and
then record it as the winner.
The issue is timing. There is a delay between our WebSocket connection reading
the message and recording the win and our test finishes before it happens. You
can test this by putting a short time.Sleep before the final assertion.
Let's go with that for now but acknowledge that putting in arbitrary sleeps into
tests is very bad practice.
time.Sleep(10 * time.Millisecond)
AssertPlayerWin(t, store, winner)
Refactor
We committed many sins to make this test work both in the server code and the
test code but remember this is the easiest way for us to work.
We have nasty, horrible, working software backed by a test, so now we are free
to make it nice and know we won't break anything accidentally.
We can move the upgrader to a private value inside our package because we don't need to redeclare it on every WebSocket connection request.
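A sketch of that private value (the buffer sizes are an illustrative choice):

var wsUpgrader = websocket.Upgrader{
	ReadBufferSize:  1024,
	WriteBufferSize: 1024,
}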
if err != nil {
return nil, fmt.Errorf("problem opening %s %v", htmlTemplatePath, err)
}
p.template = tmpl
p.store = store
router := http.NewServeMux()
router.Handle("/league", http.HandlerFunc(p.leagueHandler))
router.Handle("/players/", http.HandlerFunc(p.playersHandler))
router.Handle("/game", http.HandlerFunc(p.game))
router.Handle("/ws", http.HandlerFunc(p.webSocket))
p.Handler = router
return p, nil
}
Similarly I created another helper mustDialWS so that I could hide nasty error
noise when creating the WebSocket connection.
func mustDialWS(t *testing.T, url string) *websocket.Conn {
	ws, _, err := websocket.DefaultDialer.Dial(url, nil)

	if err != nil {
		t.Fatalf("could not open a ws connection on %s %v", url, err)
	}

	return ws
}
Finally, in our test code we can create a helper to tidy up sending messages.
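A sketch of that helper, matching the writeWSMessage calls used later in the chapter:

func writeWSMessage(t testing.TB, conn *websocket.Conn, message string) {
	t.Helper()
	if err := conn.WriteMessage(websocket.TextMessage, []byte(message)); err != nil {
		t.Fatalf("could not send message over ws connection %v", err)
	}
}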
We've made a trivial web form that lets users record the winner of a game. Let's
iterate on it to make it so the user can start a game by providing a number of
players and the server will push messages to the client informing them of what
the blind value is as time passes.
First of all, update game.html to bring our client-side code in line with the new requirements:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Lets play poker</title>
</head>
<body>
<section id="game">
<div id="game-start">
<label for="player-count">Number of players</label>
<input type="number" id="player-count"/>
<button id="start-game">Start</button>
</div>
<div id="declare-winner">
<label for="winner">Winner</label>
<input type="text" id="winner"/>
<button id="winner-button">Declare winner</button>
</div>
<div id="blind-value"/>
</section>
<section id="game-end">
<h1>Another great game of poker everyone!</h1>
<p><a href="/league">Go check the league table</a></p>
</section>
</body>
<script type="application/javascript">
const startGame = document.getElementById('game-start')
const declareWinner = document.getElementById('declare-winner')
const submitWinnerButton = document.getElementById('winner-button')
const winnerInput = document.getElementById('winner')
declareWinner.hidden = true
gameEndContainer.hidden = true
document.getElementById('start-game').addEventListener('click', event => {
startGame.hidden = true
declareWinner.hidden = false
if (window['WebSocket']) {
const conn = new WebSocket('ws://' + document.location.host + '/ws')
conn.onopen = function () {
conn.send(numberOfPlayers)
}
}
})
</script>
</html>
The main changes are bringing in a section to enter the number of players and a
section to display the blind value. We have a little logic to show/hide the user
interface depending on the stage of the game.
When the user was prompted in the CLI for number of players it would Start
the game which would kick off the blind alerts and when the user declared the
winner they would Finish. This is the same requirements we have now, just a
different way of getting the inputs; so we should look to re-use this concept if we
can.
This works in the CLI because we always want to send the alerts to os.Stdout, but it won't work for our web server. For every request we get a new http.ResponseWriter which we then upgrade to *websocket.Conn, so we can't know when constructing our dependencies where our alerts need to go.
The idea of a StdoutAlerter doesn't fit our new model, so just rename it to Alerter.
It doesn't make any sense for TexasHoldem to know where to send blind alerts.
Let's now update Game so that when you start a game you declare where the
alerts should go.
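As a sketch, the interface might end up looking something like this (the parameter name is illustrative):

type Game interface {
	Start(numberOfPlayers int, alertsDestination io.Writer)
	Finish(winner string)
}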
Let the compiler tell you what you need to fix. The change isn't so bad.
If you've got everything right, everything should be green! Now we can try and
use Game within Server.
t.Run("start game with 3 players and finish game with 'Chris' as winner"
game := &GameSpy{}
out := &bytes.Buffer{}
in := userSends("3", "Chris wins")
It looks like we should be able to test drive out a similar outcome using GameSpy.
t.Run("start a game with 3 players and declare Ruth the winner", func
game := &poker.GameSpy{}
winner := "Ruth"
server := httptest.NewServer(mustMakePlayerServer(t, dummyPlayerStore, game))
ws := mustDialWS(t, "ws"+strings.TrimPrefix(server.URL, "http")+"/ws")
defer server.Close()
defer ws.Close()
time.Sleep(10 * time.Millisecond)
assertGameStartedWith(t, game, 3)
assertFinishCalledWith(t, game, winner)
})
The final error is where we are trying to pass in Game to NewPlayerServer but it
doesn't support it yet
./server_test.go:21:38: too many arguments in call to "github.com/quii/learn-go
have ("github.com/quii/learn-go-with-tests/WebSockets/v2".PlayerStore, "git
want ("github.com/quii/learn-go-with-tests/WebSockets/v2".PlayerStore)
Finally!
=== RUN TestGame/start_a_game_with_3_players_and_declare_Ruth_the_winner
--- FAIL: TestGame (0.01s)
--- FAIL: TestGame/start_a_game_with_3_players_and_declare_Ruth_the_winner
server_test.go:146: wanted Start called with 3 but got 0
server_test.go:147: expected finish called with 'Ruth' but got ''
FAIL
if err != nil {
return nil, fmt.Errorf("problem opening %s %v", htmlTemplatePath, err)
}
p.game = game
// etc
_, numberOfPlayersMsg, _ := conn.ReadMessage()
numberOfPlayers, _ := strconv.Atoi(string(numberOfPlayersMsg))
p.game.Start(numberOfPlayers, ioutil.Discard) //todo: Don't discard the blind messages!
_, winner, _ := conn.ReadMessage()
p.game.Finish(string(winner))
}
We are not going to send the blind messages anywhere just yet as we need to
have a think about that. When we call game.Start we send in ioutil.Discard
which will just discard any messages written to it.
For now start the web server up. You'll need to update the main.go to pass a
Game to the PlayerServer
func main() {
db, err := os.OpenFile(dbFileName, os.O_RDWR|os.O_CREATE, 0666)
if err != nil {
log.Fatalf("problem opening %s %v", dbFileName, err)
}
if err != nil {
log.Fatalf("problem creating file system player store, %v ", err)
}
if err != nil {
log.Fatalf("problem creating player server %v", err)
}
Discounting the fact we're not getting blind alerts yet, the app does work! We've
managed to re-use Game with PlayerServer and it has taken care of all the
details. Once we figure out how to send our blind alerts through to the WebSocket rather than discarding them, it should all work.
Refactor
The way we're using WebSockets is fairly basic and the error handling is fairly
naive, so I wanted to encapsulate that in a type just to remove that messiness
from the server code. We may wish to revisit it later but for now this'll tidy
things up a bit
if err != nil {
log.Printf("problem upgrading connection to WebSockets %v\n"
}
return &playerServerWS{conn}
}
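The wrapper and its message-reading method might look something like this sketch (the logging choices are illustrative):

type playerServerWS struct {
	*websocket.Conn
}

func (w *playerServerWS) WaitForMsg() string {
	_, msg, err := w.ReadMessage()
	if err != nil {
		log.Printf("error reading from websocket %v\n", err)
	}
	return string(msg)
}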
numberOfPlayersMsg := ws.WaitForMsg()
numberOfPlayers, _ := strconv.Atoi(numberOfPlayersMsg)
p.game.Start(numberOfPlayers, ioutil.Discard) //todo: Don't discard the blind messages!
winner := ws.WaitForMsg()
p.game.Finish(winner)
}
Once we figure out how to not discard the blind messages we're done.
Let's not write a test!
Sometimes when we're not sure how to do something, it's best just to play
around and try things out! Make sure your work is committed first because once
we've figured out a way we should drive it through a test.
We need to pass in an io.Writer for the game to write the blind alerts to.
Wouldn't it be nice if we could pass in our playerServerWS from before? It's our
wrapper around our WebSocket so it feels like we should be able to send that to
our Game to send messages to.
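For that to work playerServerWS needs to satisfy io.Writer; a sketch of that method:

func (w *playerServerWS) Write(p []byte) (n int, err error) {
	err = w.WriteMessage(websocket.TextMessage, p)

	if err != nil {
		return 0, err
	}

	return len(p), nil
}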
Give it a go:
numberOfPlayersMsg := ws.WaitForMsg()
numberOfPlayers, _ := strconv.Atoi(numberOfPlayersMsg)
p.game.Start(numberOfPlayers, ws)
//etc...
This seems too easy! Try and run the application and see if it works.
Beforehand edit TexasHoldem so that the blind increment time is shorter so you
can see it in action
You should see it working! The blind amount increments in the browser as if by
magic.
Now let's revert the code and think about how to test it. In order to implement it, all we did was pass playerServerWS through to Start rather than ioutil.Discard, so that might make you think we should spy on the call to verify it works.
Spying is great and helps us check implementation details but we should always
try and favour testing the real behaviour if we can because when you decide to
refactor it's often spy tests that start failing because they are usually checking
implementation details that you're trying to change.
Our test currently opens a websocket connection to our running server and sends
messages to make it do things. Equally we should be able to test the messages
our server sends back over the websocket connection.
Currently our GameSpy does not send any data to out when you call Start. We
should change it so we can configure it to send a canned message and then we
can check that message gets sent to the websocket. This should give us
confidence that we have configured things correctly whilst still exercising the
real behaviour we want.
FinishedCalled bool
FinishCalledWith string
}
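A sketch of how the spy's Start could send that canned message; the BlindAlert field holding the canned bytes is an assumption:

func (g *GameSpy) Start(numberOfPlayers int, out io.Writer) {
	g.StartCalled = true
	g.StartedWith = numberOfPlayers

	// push the canned alert so tests can check it reaches the WebSocket
	out.Write(g.BlindAlert)
}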
This now means that when we exercise PlayerServer and it tries to Start the game, it should end up sending messages through the WebSocket if things are working right.
t.Run("start a game with 3 players, send some blind alerts down WS and declare
wantedBlindAlert := "Blind is 100"
winner := "Ruth"
defer server.Close()
defer ws.Close()
writeWSMessage(t, ws, "3")
writeWSMessage(t, ws, winner)
time.Sleep(10 * time.Millisecond)
assertGameStartedWith(t, game, 3)
assertFinishCalledWith(t, game, winner)
_, gotBlindAlert, _ := ws.ReadMessage()
if string(gotBlindAlert) != wantedBlindAlert {
t.Errorf("got blind alert %q, want %q", string(gotBlindAlert), wantedBl
}
})
func within(t testing.TB, d time.Duration, assert func()) {
	done := make(chan struct{}, 1)

	go func() {
		assert()
		done <- struct{}{}
	}()

	select {
	case <-time.After(d):
		t.Error("timed out")
	case <-done:
	}
}
What within does is take a function, assert, as an argument and then run it in a goroutine. If/when the function finishes it will signal it is done via the done channel.
While that happens we use a select statement which lets us wait for a channel to send a message. From here it is a race between the assert function and time.After, which will send a signal once the duration has elapsed.
Finally I made a helper function for our assertion just to make things a bit neater.
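A sketch of it:

func assertWebsocketGotMsg(t *testing.T, ws *websocket.Conn, want string) {
	_, msg, _ := ws.ReadMessage()
	if string(msg) != want {
		t.Errorf(`got "%s", want "%s"`, string(msg), want)
	}
}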
t.Run("start a game with 3 players, send some blind alerts down WS and declare
wantedBlindAlert := "Blind is 100"
winner := "Ruth"
defer server.Close()
defer ws.Close()
writeWSMessage(t, ws, "3")
writeWSMessage(t, ws, winner)
time.Sleep(tenMS)
assertGameStartedWith(t, game, 3)
assertFinishCalledWith(t, game, winner)
within(t, tenMS, func() { assertWebsocketGotMsg(t, ws, wantedBlindAlert) })
})
numberOfPlayersMsg := ws.WaitForMsg()
numberOfPlayers, _ := strconv.Atoi(numberOfPlayersMsg)
p.game.Start(numberOfPlayers, ws)
winner := ws.WaitForMsg()
p.game.Finish(winner)
}
Refactor
The server code change was very small so there's not a lot to alter here, but the test code still has a time.Sleep call because we have to wait for our server to do its work asynchronously.
We can refactor our helpers assertGameStartedWith and assertFinishCalledWith so that they can retry their assertions for a short period before failing.
Here's how you can do it for assertFinishCalledWith and you can use the
same approach for the other helper.
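A sketch of that approach, assuming a small retryUntil helper and an illustrative 500ms deadline; the if !passed block below completes the helper:

func retryUntil(d time.Duration, f func() bool) bool {
	deadline := time.Now().Add(d)
	for time.Now().Before(deadline) {
		if f() {
			return true
		}
	}
	return false
}

func assertFinishCalledWith(t testing.TB, game *GameSpy, winner string) {
	t.Helper()

	passed := retryUntil(500*time.Millisecond, func() bool {
		return game.FinishCalledWith == winner
	})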
if !passed {
t.Errorf("expected finish called with %q but got %q", winner, game.Fini
}
}
Wrapping up
Our application is now complete. A game of poker can be started via a web
browser and the users are informed of the blind bet value as time goes by via
WebSockets. When the game finishes they can record the winner which is
persisted using code we wrote a few chapters ago. The players can find out who
is the best (or luckiest) poker player using the website's /league endpoint.
Through the journey we have made mistakes but with the TDD flow we have
never been very far away from working software. We were free to keep iterating
and experimenting.
The final chapter will retrospect on the approach, the design we've arrived at and
tie up some loose ends.
WebSockets
Convenient way of sending messages between clients and servers that does
not require the client to keep polling the server. Both the client and server
code we have is very simple.
Trivial to test, but you have to be wary of the asynchronous nature of the
tests
OS Exec
In my _test.go I have a TestGetData which calls GetData() but that will use os/exec; instead I would like for it to use my testdata.
What is a good way to achieve this? When calling GetData should I have a
"test" flag mode so it will read a file ie GetData(mode string)?
A few things
I have taken the liberty of guessing what the code might look like:
out, _ := cmd.StdoutPipe()
var payload Payload
decoder := xml.NewDecoder(out)

cmd.Start()
decoder.Decode(&payload)
cmd.Wait()

return strings.ToUpper(payload.Message)
}
<payload>
<message>Happy New Year!</message>
</payload>
if got != want {
t.Errorf("got %q, want %q", got, want)
}
}
Testable code
Testable code is decoupled and single purpose. To me it feels like there are two main concerns for this code:
1. Retrieving the raw XML data
2. Decoding the XML data and applying our business logic (in this case strings.ToUpper on the <message>)
The first part is just copying the example from the standard lib.
The second part is where we have our business logic and by looking at the code
we can see where the "seam" in our logic starts; it's where we get our
io.ReadCloser. We can use this existing abstraction to separate concerns and
make our code testable.
The problem with GetData is the business logic is coupled with the means of
getting the XML. To make our design better we need to decouple them
Our TestGetData can act as our integration test between our two concerns so
we'll keep hold of that to make sure it keeps working.
cmd.Start()
data, _ := ioutil.ReadAll(out)
cmd.Wait()
return bytes.NewReader(data)
}
if got != want {
t.Errorf("got %q, want %q", got, want)
}
}
Now that GetData takes its input from just an io.Reader we have made it
testable and it is no longer concerned how the data is retrieved; people can re-use
the function with anything that returns an io.Reader (which is extremely
common). For example we could start fetching the XML from a URL instead of
the command line.
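A sketch of GetData once it only depends on io.Reader:

func GetData(data io.Reader) string {
	var payload Payload
	xml.NewDecoder(data).Decode(&payload)
	return strings.ToUpper(payload.Message)
}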
got := GetData(input)
want := "CATS ARE THE BEST ANIMAL"
if got != want {
t.Errorf("got %q, want %q", got, want)
}
}
By separating the concerns and using existing abstractions within Go, testing our important business logic is a breeze.
Error types
You can find all the code here
Creating your own types for errors can be an elegant way of tidying up your
code, making your code easier to use and test.
if err != nil {
return "", fmt.Errorf("problem fetching from %s, %v", url, err)
}
if res.StatusCode != http.StatusOK {
return "", fmt.Errorf("did not get 200 from %s, got %d", url, res.Statu
}
defer res.Body.Close()
body, _ := ioutil.ReadAll(res.Body) // ignoring err for brevity
It's not uncommon to write a function that might fail for different reasons and we
want to make sure we handle each scenario correctly.
As Pedro says, we could write a test for the status error like so.
t.Run("when you don't get a 200 you get a status error", func(t *testing.T) {
_, err := DumbGetter(svr.URL)
if err == nil {
t.Fatal("expected an error")
}
want := fmt.Sprintf("did not get 200 from %s, got %d", svr.URL, http.StatusTeapot)
got := err.Error()
if got != want {
t.Errorf(`got "%v", want "%v"`, got, want)
}
})
This test creates a server which always returns StatusTeapot and then we use its
URL as the argument to DumbGetter so we can see it handles non 200 responses
correctly.
What does this tell us? The ergonomics of our test would be reflected in any other bit of code trying to use ours.
How does a user of our code react to the specific kind of errors we return? The
best they can do is look at the error string which is extremely error prone and
horrible to write.
What we should do
With TDD we have the benefit of getting into the mindset of: "How do I want to use this code?"
What we could do for DumbGetter is provide a way for users to use the type
system to understand what kind of error has happened.
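A sketch of such a type, producing the same message as before:

type BadStatusError struct {
	URL    string
	Status int
}

func (b BadStatusError) Error() string {
	return fmt.Sprintf("did not get 200 from %s, got %d", b.URL, b.Status)
}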
t.Run("when you don't get a 200 you get a status error", func(t *testing.T) {
_, err := DumbGetter(svr.URL)
if err == nil {
t.Fatal("expected an error")
}
_, isStatusErr := err.(BadStatusError)
if !isStatusErr {
t.Fatalf("was not a BadStatusError, got %T", err)
}
When we run the test, it tells us we didn't return the right kind of error
--- FAIL: TestDumbGetter (0.00s)
--- FAIL: TestDumbGetter/when_you_dont_get_a_200_you_get_a_status_error (0.
error-types_test.go:56: was not a BadStatusError, got *errors.errorString
Let's fix DumbGetter by updating our error handling code to use our type
if res.StatusCode != http.StatusOK {
return "", BadStatusError{URL: url, Status: res.StatusCode}
}
Our DumbGetter function has become simpler, it's no longer concerned with
the intricacies of an error string, it just creates a BadStatusError.
Our tests now reflect (and document) what a user of our code could do if
they decided they wanted to do some more sophisticated error handling
than just logging. Just do a type assertion and then you get easy access to
the properties of the error.
It is still "just" an error, so if they choose to they can pass it up the call
stack or log it like any other error.
Wrapping up
If you find yourself testing for multiple error conditions don't fall in to the trap
of comparing the error messages.
This leads to flaky and difficult to read/write tests and it reflects the difficulties
the users of your code will have if they also need to start doing things differently
depending on the kind of errors that have occurred.
Always make sure your tests reflect how you'd like to use your code, so in this
respect consider creating error types to encapsulate your kinds of errors. This
makes handling different kinds of errors easier for users of your code and also
makes writing your error handling code simpler and easier to read.
Addendum
As of Go 1.13 there are new ways to work with errors in the standard library, which are covered in the Go Blog.
t.Run("when you don't get a 200 you get a status error", func(t *testing.T) {
_, err := DumbGetter(svr.URL)
if err == nil {
t.Fatal("expected an error")
}
if got != want {
t.Errorf("got %v, want %v", got, want)
}
})
In this case we are using errors.As to try and extract our error into our custom
type. It returns a bool to denote success and extracts it into got for us.
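The key lines inside that test might look something like this sketch, reusing the BadStatusError type from before:

var got BadStatusError
isBadStatusError := errors.As(err, &got)
want := BadStatusError{URL: svr.URL, Status: http.StatusTeapot}

if !isBadStatusError {
	t.Fatalf("was not a BadStatusError, got %T", err)
}

if got != want {
	t.Errorf("got %v, want %v", got, want)
}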