I don't use TDD at work. So, the stumbling steps I take on the test-driven path are ones I choose voluntarily while doing (solo) school projects and the like. The Coding Dojos of Agile Finland, as well as a few skilled friends, have helped a great deal, but I feel my advancement is still rather haphazard. I must also confess that I have not read even the most basic books on extreme programming, so many of the things I trouble myself with might be rather evident if I had. I would therefore be more than happy to hear any recommendations, so I could stop asking stupid questions. (Although I know I won't have time to read anything "extra" for months...)
One of the big things I have stumbled with is writing good tests. This is of course one of the cornerstones of TDD and thus an important thing to learn well.
For me, the biggest question in writing tests has been: "Where do good tests come from?" As far as I have understood, the production code should be written to pass the tests, so writing code after a test is defined should be quite straightforward. And this has definitely been the case with simple examples, e.g. the topics we have encountered in the dojos. While writing entire working programs, however, I have been a bit unsure what to do and when to do it. Good production code comes from good tests, but where do good tests come from?
I have lately been writing two programs in which I have tried to use TDD. The one I'm just finishing is a naive parser for reading Bayesian networks in Hugin Lite format and constructing the actual network based on the information read. (Actually, that's only the first part of the whole program, but this is a group effort and we're working independently on different parts of the program - and this part was my responsibility.) Writing tests was quite straightforward; I had the grammar of the input files and could write tests to check the validity of small parts of the input file one at a time. I guess combining this part of the program with the other parts would have been more challenging test-wise, but I am the only one in our group who writes unit tests, so I avoided the challenge here (sadly, I might add).
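To illustrate the "one small part of the grammar at a time" approach, here is a minimal sketch in Python. The function name and the simplified node-block grammar are my own assumptions for the example, not code from the actual parser:

```python
import re
import unittest

def parse_node_block(text):
    """Parse a simplified Hugin-style node block like 'node Rain { }'.

    This is a hypothetical stand-in for one small grammar rule;
    the real format is of course richer than this.
    """
    m = re.match(r"\s*node\s+(\w+)\s*\{\s*\}\s*$", text)
    if not m:
        raise ValueError("not a valid node block")
    return {"name": m.group(1)}

class NodeBlockTest(unittest.TestCase):
    # One test per small property of the grammar rule.
    def test_parses_node_name(self):
        self.assertEqual(parse_node_block("node Rain { }")["name"], "Rain")

    def test_rejects_invalid_input(self):
        with self.assertRaises(ValueError):
            parse_node_block("nod Rain { }")
```

Each test pins down one small piece of the grammar, so the next bit of production code always has a concrete target to aim for.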
The second program is a sudoku puzzle solver (yes, I know there are a lot of them already, but this was a good chance to try some AI-related stuff on my own), and here I ran into problems. I started with the smallest thing I could think of: a cell on the sudoku game board. The first tests were really easy to think of; building a constraint-based solving algorithm/strategy would definitely require the cells to know which values were still available to them, so I started by writing tests that removed some values from a cell's list of possible values and checked the remaining values to be valid etc. Soon it was clear that if a cell had all but one of its possible values removed, the remaining possible value was the only valid value that the cell could have, so it should automatically be chosen. Testing and implementing this was easy.
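The cell behaviour described above is simple enough to sketch in a few lines. This is my own minimal reconstruction (the class and method names are assumptions, not the actual program's code): a cell keeps a set of remaining candidate values, and when elimination leaves exactly one candidate, that value is chosen automatically.

```python
class Cell:
    """A sudoku cell that tracks which values are still possible."""

    def __init__(self):
        self.candidates = set(range(1, 10))  # values 1..9 still possible
        self.value = None                    # not yet decided

    def eliminate(self, v):
        """Remove v from the candidates; settle on the last survivor."""
        self.candidates.discard(v)
        if self.value is None and len(self.candidates) == 1:
            self.value = next(iter(self.candidates))

# Eliminating 1..8 leaves only 9, which the cell then picks itself.
cell = Cell()
for v in range(1, 9):
    cell.eliminate(v)
assert cell.value == 9
```

The nice thing test-wise is that both behaviours - elimination and the automatic choice of the last remaining value - can each be specified by a tiny, independent test before writing the method.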
After a while things got hairy. When adding the bigger components of the program, I started writing tests for particular kinds of data constructs I thought I might need. I chose to write tests that would suit the faint idea I had about the inner implementation of the program. Pretty soon I realized I had no idea what I was going to do next. I had a bunch of tests and methods that passed those tests - and I didn't know if I was ever going to need the functionality I had just implemented. Trying to avoid Big Design Up Front, I had hardly designed at all. I had code that would probably remain dead even after the program was finished.
I think the problem I have has two sides. First, I lack the knowledge and experience to decide what kinds of functionality should be tested. For example, should I test some basic file handling routines? (I guess as long as I use the basic components offered by the programming language itself, this would be overkill. However, custom error handling and other special functionality probably should be tested.) Second (and, I think, more importantly), I don't think enough about the big picture: what is unit testing (and agility in general) trying to achieve, and why does TDD help to make things agile? Timo Rantalaiho wrote a good reminder in his latest post: "[W]hen doing TDD, we're not supposed to write tests but to specify requirements."
One of my teachers once said that if written academic text isn't clear, the thought behind it was not clear either. I think the same thing applies here: bad code is a sign of bad thinking, and writing code (or a test) when you're not sure what it should do means you're pretty much writing worthless crap. Avoiding BDUF should not mean avoiding design entirely.