Growing Object-Oriented Software, Guided by Tests

Category: Programming
Author: Steve Freeman, Nat Pryce
4.4
All Stack Overflow 38
This Year Stack Overflow 3
This Month Reddit 3

Comments

by [deleted]   2019-08-24

So, DHH's objection really isn't to unit testing, but to extreme decoupling, and in a way, his article misses the point entirely.

His complaint is that in the pursuit of testability, Jim Weirich introduced the Hexagonal Architecture. But testability is not the purpose of the Hexagonal Architecture. It is one way to remove Rails as a dependency of the application logic, following the principles of Clean Architecture. That doesn't mean you don't write tests against the Rails controllers; those are absolutely mandatory. It just means that, theoretically, you could ditch Rails. Whether that is a reasonable goal to code for, you be the judge.

You don't have to design the code in that way simply in the pursuit of testability. The style of testing promoted by Weirich is the so-called London Style of TDD, promoted by British developers Steve Freeman and Nat Pryce in their famous book Growing Object-Oriented Software, Guided by Tests. Whereas the Chicago Style, popularized by Martin Fowler (who, ironically, is an Englishman), promotes testing with real objects, reaching for test doubles only as necessary for behavior verification, which correlates more closely with Kent Beck's original conception of unit testing.
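To make the distinction concrete, here is a minimal Python sketch (the `OrderProcessor` and `Notifier` names are invented for illustration): the London-style test mocks the collaborator and verifies the interaction, while the Chicago-style test uses the real object and asserts on the resulting state.

```python
from unittest.mock import Mock

class Notifier:
    """A real collaborator: records the messages it has sent."""
    def __init__(self):
        self.sent = []
    def send(self, message):
        self.sent.append(message)

class OrderProcessor:
    def __init__(self, notifier):
        self.notifier = notifier
    def place(self, order_id):
        self.notifier.send(f"order {order_id} placed")

# London style: mock the collaborator, verify the *interaction*.
mock_notifier = Mock()
OrderProcessor(mock_notifier).place(42)
mock_notifier.send.assert_called_once_with("order 42 placed")

# Chicago style: use the real collaborator, assert on resulting *state*.
real_notifier = Notifier()
OrderProcessor(real_notifier).place(42)
assert real_notifier.sent == ["order 42 placed"]
```

Both tests pass against the same production code; they differ only in what they pin down, which is why the two schools lead to such different designs.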

by fatboyxpc   2019-08-24

There's a lot of varied answers in here, but PM me if you want to video chat / screenshare sometime and I can walk you through some outside in style testing that should really help clarify the whole ordeal. Alternatively, check out Growing Object Oriented Software Guided by Tests, or if you're into Laravel, Test Driven Laravel which is a great testing resource that is specifically tailored to Laravel (but you could learn a ton while still using a different framework). It's basically a video form of the book I mentioned using Laravel for the example instead of Java and C#.

by [deleted]   2019-08-24

> it is the process of making your change/fixture in a running state in the scope of your program.

That sounds like the definition of working software.

> It isn't possible to have a feature "deployable" in every sprint

Depends on how you scope the feature.

> only have it inert

It isn't inert. It has to work for demo.

> It's literally crazy to suggest that you can turn around any and every request or story so that it is usable by the customer at the end of every sprint

Again, you are slicing the features so that the scope of the feature is small. Once you learn how to do the vertical slicing, it really isn't that difficult. The best book I've ever read that describes how to do vertical slicing is Growing Object Oriented Software Guided by Tests.

> Sounds like a bug farm, considering it'd need testing in that time also

Since each feature has a small scope in order to be implemented in a sprint, I don't see how that is the case.

by anonymous   2019-07-21

I think your example has a lot in common with the example used here:

http://www.m3p.co.uk/blog/2009/03/08/mock-roles-not-objects-live-and-in-person/

Using your original example and replacing the Entity with Hero, fall() with jumpFrom(Balcony), and draw() with moveTo(Room), it becomes remarkably similar. If you use the mock object approach Steve Freeman suggests, your first implementation was not so bad after all. I believe @Colin Hebert has given the best answer when he pointed in this direction. There's no need to expose anything here. You use the mock objects to verify whether the behavior of the hero has taken place.
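In rough Python terms, that mock-object approach looks like this (`unittest.mock` stands in for jMock, and the collaborator names `body` and `mover` are invented, so treat it as a sketch of the idea rather than the article's code):

```python
from unittest.mock import Mock

class Hero:
    """The hero delegates the effects of its actions to collaborators."""
    def __init__(self, body, mover):
        self.body = body
        self.mover = mover
    def jump_from(self, place):
        # Jumping from somewhere high hurts the hero's body.
        self.body.take_damage(10)
    def move_to(self, room):
        self.mover.move(self, room)

# Verify *behavior* via the mocks -- no internal state is exposed.
body, mover = Mock(), Mock()
hero = Hero(body, mover)
hero.jump_from("balcony")
body.take_damage.assert_called_once_with(10)
hero.move_to("throne room")
mover.move.assert_called_once_with(hero, "throne room")
```

The test never asks the hero for its health or position; it only checks that the right messages were sent to the right roles.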

Note that the author of the article has co-written a great book that may help you:

http://www.amazon.com/Growing-Object-Oriented-Software-Guided-Tests/dp/0321503627

There are some good papers freely available as PDF by the authors about using mock objects to guide your design in TDD.

by anonymous   2019-07-21

Get Growing Object-Oriented Software, Guided by Tests. It has some great tips about how to test database access.

Personally, I usually split the DAO tests in two: a unit test with a mocked database to test functionality on the DAO, and an integration test to test the queries against the DB. If your DAO only has database access code, you won't need a unit test.

One of the suggestions from the book that I took is that the (integration) test has to commit the changes to the DB. I learned to do this after using Hibernate and figuring out that the test was marked for rollback, so the DB never got the insert statement. If you use triggers or any kind of validation (even FKs), I think this is a must.
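As a sketch of that commit point (using sqlite3 and a throwaway table rather than any particular ORM; `save_user` is an invented stand-in for a DAO method): the integration test writes through the DAO, commits, and then verifies with a separate connection, so anything a trigger or constraint would reject actually reaches the engine.

```python
import sqlite3, tempfile, os

db_path = os.path.join(tempfile.mkdtemp(), "test.db")

# Schema setup, with a NOT NULL constraint the engine will enforce.
with sqlite3.connect(db_path) as conn:
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

def save_user(conn, name):
    """Tiny stand-in for a DAO method."""
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

# Integration test: write, COMMIT, then verify through a fresh connection.
conn = sqlite3.connect(db_path)
save_user(conn, "alice")
conn.commit()          # without this, a second connection sees nothing
conn.close()

check = sqlite3.connect(db_path)
rows = check.execute("SELECT name FROM users").fetchall()
assert rows == [("alice",)]
check.close()
```

If the test framework had silently rolled back instead of committing, the second connection's SELECT would come back empty, which is exactly the failure mode described above.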

Another thing: stay away from DbUnit. It's a great framework to start working with, but it becomes hellish once a project grows beyond tiny. My preference here is to have a set of Test Data Builder classes to create the data and insert it in the setup of the test, or in the test itself.
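A Test Data Builder in this sense is just a small class with sensible defaults and chainable overrides, so each test states only the detail it cares about. A minimal Python sketch (the `UserBuilder` fields are invented for illustration):

```python
class UserBuilder:
    """Builds test users with safe defaults; tests override only what matters."""
    def __init__(self):
        self._name = "default-user"
        self._age = 30
        self._active = True

    def named(self, name):
        self._name = name
        return self

    def inactive(self):
        self._active = False
        return self

    def build(self):
        return {"name": self._name, "age": self._age, "active": self._active}

# The test states only the detail it cares about:
user = UserBuilder().named("alice").inactive().build()
assert user["name"] == "alice"
assert user["active"] is False
assert user["age"] == 30  # everything else falls back to defaults
```

Unlike a DbUnit dataset file, the builder lives in code next to the tests, so adding a column means changing one default instead of dozens of XML fixtures.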

And check out dbmigrate. It's not for testing, but it will help you manage scripts to upgrade and downgrade your DB schema.

In the scenario where the DB server is shared, I've created one schema/user per environment. Since each developer has his own "local" environment, he also owns one schema.

by anonymous   2019-07-21

On top of other books mentioned, there is a new good book with loads of examples: Growing Object-Oriented Software, Guided by Tests

by anonymous   2019-07-21

Have a look at my related answer at

Designing a Test class for a custom Barrier

It's biased towards Java but has a reasonable summary of the options.

In summary, though, (IMO) it's not the use of some fancy framework that will ensure correctness, but how you go about designing your multithreaded code. Splitting the concerns (concurrency and functionality) goes a huge way towards raising confidence. Growing Object-Oriented Software, Guided by Tests explains some options better than I can.
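One way to read "splitting the concerns": keep the interesting logic in a plain, synchronous object you can test directly, then wrap it in a thin concurrency shell where the only thing left to test is the locking. A minimal Python sketch (the `Counter` names are invented for illustration):

```python
import threading

class Counter:
    """Pure functionality: trivially testable with no threads at all."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class ThreadSafeCounter:
    """Thin concurrency shell: the only concern here is locking."""
    def __init__(self, counter):
        self._counter = counter
        self._lock = threading.Lock()
    def increment(self):
        with self._lock:
            self._counter.increment()

# Functionality tested synchronously...
c = Counter()
c.increment(); c.increment()
assert c.value == 2

# ...and the shell exercised under real contention.
safe = ThreadSafeCounter(Counter())
threads = [threading.Thread(target=lambda: [safe.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert safe._counter.value == 4000
```

Most of the test suite runs against `Counter` with no threads involved; only a handful of tests need to hammer the shell, which keeps the slow, nondeterministic part of the suite small.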

Static analysis and formal methods (see, Concurrency: State Models and Java Programs) is an option but I've found them to be of limited use in commercial development.

Don't forget that load/soak-style tests are not guaranteed to highlight problems.

Good luck!

by pmarreck   2018-07-27
> None is willing to take on the risks associated with technical debt repayment.

This risk is vastly mitigated with good test coverage.

https://smile.amazon.com/Growing-Object-Oriented-Software-Gu...

https://codeclimate.com/blog/refactoring-without-good-tests/

by anonymous   2017-08-20

In addition to what Paul T Davies and Magnus Backeus have said, I think that at the end of the day it comes down to a people and cultural issue. If people are open-minded and willing to learn, it will be relatively easy to convince them. If they consider you a 'junior' (which is a bad sign, because the only thing that matters is what you say, not how old/experienced you are), you can appeal to a 'higher authority':

Stored procedures are dead and you are not the only one who thinks so:

> It is startling to us that we continue to find new systems in 2011 that implement significant business logic in stored procedures. Programming languages commonly used to implement stored procedures lack expressiveness, are difficult to test, and discourage clean modular design. You should only consider stored procedures executing within the database engine in exceptional circumstances, where there is a proven performance issue.

There is no point in convincing people that are not willing to improve and learn. Even if you manage to win one argument and squeeze in NHibernate, for example, they may end up writing the same tightly coupled, untestable, data-or-linq-oriented code as they did before. DDD is hard and it will require changing a lot of assumptions, hurt egos etc. Depending on the size of the company it may be a constant battle that is not worth starting.

Driving Technical Change is the book that might help you to deal with these issues. It includes several behavior stereotypes that you may encounter:

  • The Uninformed
  • The Herd
  • The Cynic
  • The Burned
  • The Time Crunched
  • The Boss
  • The Irrational

Good luck!

by Ruben Bartelink   2017-08-20

Other hanselminutes episodes on testing:

  • #112 The Past, Present and Future of .NET Unit Testing Frameworks - listened a while back, remember being slightly underwhelmed, but still worth a listen
  • #103 Quetzal Bradley on Testing after Unit Tests - extremely interesting, giving deep insight into how to think about the purpose of coverage metrics etc.
  • #146 Test Driven Development is Design - The Last Word on TDD (with Scott Bellware) - lives up to its name in that it slams home a lot of core concepts that you "always knew anyway". IIRC the podcast gives a favourable mention to the Newkirk/Vorontsov book - which I wouldn't particularly second (it's a long time since I read it -- I might just not have been ready to absorb its messages)

Other podcasts:

  • Herding Code Episode 42: Scott Bellware on BDD and Lean Development - recorded after Hanselminutes #146. Again, very good discussion that helps to cement ideas around "classic tests" vs BDD vs Context Specification and various such other attempts at classifications...
  • J.B. Rainsberger: "Integration Tests Are A Scam" is a recording of a presentation covering integration vs unit tests.

Other questions like this:

  • Good QA / Testing Podcast, which among others lists the meta podcast http://testingpodcast.com/

Blog posts:

  • It’s Not TDD, It’s Design By Example - Brad Wilson, similar in vein to HC #42, attempting to get across why you're writing the tests.

I know you didn't ask for books, but... can I also mention that Beck's TDD book is a must-read, even though it may seem like a dated beginner book on first flick-through (and Working Effectively with Legacy Code by Michael C. Feathers of course is the bible). Also, I'd append Martin (& Martin)'s Agile Principles, Patterns, and Practices as really helping in this regard. Also in this space (concise/distilled info on testing) is the excellent Foundations of Programming ebook.

Good books on testing I've read are The Art of Unit Testing and xUnit Test Patterns. The latter is an important antidote to the first, as it is much more measured; Roy's book is very opinionated and offers a lot of unqualified 'facts' without properly going through the various options. Definitely recommend reading both books though. AOUT is very readable and gets you thinking, though it chooses specific [debatable] technologies; xUTP is in-depth and neutral and really helps solidify your understanding. I read Pragmatic Unit Testing in C# with NUnit afterwards. It's good and balanced though slightly dated (it mentions RhinoMocks as a sidebar and doesn't mention Moq), even if nothing is actually incorrect. An updated version of it would be a hands-down recommendation.

More recently I've re-read the Feathers book, which is timeless to a degree and covers important ground. However, it's more of a 'how, for 50 different wheres' in nature. It's definitely a must-read though.

Most recently, I'm reading the excellent Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. I can't recommend it highly enough - it really ties everything together from big to small in terms of where TDD fits, and various levels of testing within a software architecture. While I'm throwing the kitchen sink in, Evans's DDD book is important too in terms of seeing the value of building things incrementally with maniacal refactoring in order to end up in a better place.

by anonymous   2017-08-20

An abstract class may do the trick, but as the Growing Object-Oriented Software, Guided by Tests book advises, it would impact the unit testing level:

Do not mock concrete classes.

Using an abstract class might not show very explicitly the various potential relationships with its collaborators.

Here's a question about this subject that I asked a while ago, to learn more about it.

You might tell me: "But an abstract class is not a concrete class!"
I would call a concrete class any class that gathers some behaviors in order to form an entity.
An abstract class may often implement several methods belonging to various responsibilities, and therefore reduce the explicitness of the object's collaborators.

Thus, I would rephrase "programming to an interface" as "programming to roles".

by anonymous   2017-08-20

There are no strict rules in this area, only guidelines. You usually have one test case per class, but there are different strategies:

  • TestCase Per Class
  • TestCase Per Feature
  • TestCase Per Method
  • TestCase Per User Story

For example, you can use the 'Per Class' approach for classes that are relatively small and follow the SRP. But if you have legacy code with a giant *Manager class, it is fine to use 'Per Method' and have a dedicated test case for only one of its methods. I think that choosing a naming strategy for tests is at least as important as test code organization.

Using a code coverage tool can help you find spots of untested code. It is less useful as a metric: having high code coverage does not necessarily mean that you have good tests. At the end of the day, what matters is that you have meaningful and readable tests.

by anonymous   2017-08-20

AutoFixture (AF) will do the right thing (User.Create() with an anonymous name arg) with no customizations whatsoever.

The only missing bit is setting the Id, which is a question you'll have to answer for yourself -- how should your consuming code do this in the first place? When you've decided, you could do fixture.Customize&lt;User&gt;(c => c.FromFactory(User.Create).Do(x => ???)

Perhaps you could consider exposing a ctor that takes an id too. Then you can do a Customize<User> ... GreedyConstructorQuery.

If your ORM is doing some wacky reflection and you like that and/or can't route around it, you get to choose whether for your tests you should:

a) do that too - if that's relevant to a given test or set of tests

b) consider that to be something that just works

Regarding mixing mocking and feeding in of values to an Entity or Value Object: don't do that (Mark Seemann's Commands vs Queries article might help you here). The fact that you seem to need/want to do so makes it seem you're trying to be too ambitious in individual tests -- are you finding the simplest thing to test, and trying to have a single Assert testing one thing with minimal setup?
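The Commands-vs-Queries point, in test terms: stub queries (they return values, so just feed data in) and mock commands (they have side effects, so verify the interaction). A rough Python sketch with invented names (`PriceCalculator`, `rate_for`, `record`):

```python
from unittest.mock import Mock

class PriceCalculator:
    def __init__(self, rates, audit):
        self.rates = rates    # queried for values
        self.audit = audit    # commanded, for its side effect
    def total(self, amount, currency):
        result = amount * self.rates.rate_for(currency)
        self.audit.record(result)
        return result

rates = Mock()
rates.rate_for.return_value = 2.0   # stub the query: just return a value
audit = Mock()

calc = PriceCalculator(rates, audit)
assert calc.total(10, "EUR") == 20.0
audit.record.assert_called_once_with(20.0)  # mock the command: verify it
```

Keeping the two roles distinct is what lets each test stay small: the stub needs no verification and the mock needs no canned data.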

Go buy the GOOS book - it'll help you figure out ways of balancing these kinds of trade-offs.

by myoffe   2017-08-19
Nope, there's no way around it. Tests are not easy. But! Writing tests, learning from that, and then improving your tests (which in turn helps you design better software) will make writing tests easier. And it will help you write better code in general.

Also, 99% of the time, the first test you write in a job will be for an existing piece of software, so it will definitely not be easy. But you will learn so much more about the software you are testing than you would by writing code in the edit-and-pray methodology.

I highly recommend reading http://www.amazon.com/Growing-Object-Oriented-Software-Guide...

by antrix   2017-08-19
As a start, I recommend reading "Growing Object-Oriented Software, Guided by Tests". I am still not a 100% convert but I did get many of the same questions answered.

http://www.amazon.com/Growing-Object-Oriented-Software-Guide...

by jskulski   2017-08-19
I try to set up a feedback loop as soon as possible. I spend time setting up so I can perform some simple action (hit a button, run a command, or automatically watch some files) to see if I'm on target.

I found this saves immense time and keeps me on task pretty well.

This is from:

  • GOOS book: https://www.amazon.com/Growing-Object-Oriented-Software-Guid...
  • https://www.youtube.com/watch?v=nIonZ6-4nuU