Moving from cowboy coding to agile development

Wednesday, March 22, 2006

Doing What I Love

While doing some study-related research, I stumbled upon a nice article by Paul Graham on doing what one loves.

Although I might disagree with one or two particular things, I think Graham has some excellent points. Especially his musings on people's motivations sound really familiar and accurate. They reminded me of Terry Pratchett's words: "Too many people want to have written." Luckily the hacker culture probably isn't as tempting as writing novels, and "Too many people want to have written killer apps" isn't excruciatingly true. At least, not yet.

P.S. If someone has read Graham's book Hackers & Painters, let me know what you think about it.

Monday, March 20, 2006

My Agile Week

Last week was probably more agile for me than any other week has ever been. On Wednesday I participated in my third randori coding dojo, and on Friday I eagerly listened to each of the three presentations in the agile seminar at Kiasma.

The seminar was absolutely great. The first presentation was Vasco Duarte's Team Building and Agile Software Development, followed by Sebastian Nykopp's Test Automation in Agile Development Today and - last but definitely not least - Joseph Pelrine's How Agile Works: Complexity and Agility. Vasco Duarte and Joseph Pelrine talked more about the social and "soft" side of software development. Some of the stuff sounded pretty familiar to me from my leadership and management studies, but there were also several new things, and the familiar ones were now tied to my own trade better than ever before.

Sebastian Nykopp's presentation was more on the technical side, and thus perhaps a bit more challenging to listen to. However, having learned TDD so far only at the grass-roots level, it was really nice to see the whole bundle of test automation, from FitNesse to the (familiar) JUnit unit tests. As I wrote in my last post, I really should get a grip on TDD on a larger scale, so this presentation really hit the spot.

I think the presentations gave me some valuable ideas on the ways I could try to get TDD (and perhaps some other XP methods, later) started at my workplace. We'll see.

The dojo on Wednesday was - again - loads of fun. This time the dojo was held at the FifthElement premises, making it the first dojo I have participated in outside the Reaktor office. Although I consider myself a novice in TDD, I actually started feeling the agony of being on the red bar too long... We stayed "on the red" for over five minutes during some big change, and although (or perhaps because) I wasn't at the keyboard at the time, I almost started feeling physically uneasy, waiting for the green bar to appear. :)

Joseph Pelrine said on Friday "[After RUP,] Agile was like a homecoming for me". I can't say I feel the same, but there's definitely something in the agile ways that I find more and more alluring. I'm already feeling TDD puritanism creeping in, and I really like the feeling.

Thursday, March 09, 2006

All design is not BDUF

I don't use TDD at work. So, the stumbling steps I take on the test-driven path are ones I choose voluntarily while doing (solo) school projects and the like. The Coding Dojos of Agile Finland, as well as a few skilled friends, have helped a great deal, but I feel my advancement is still rather haphazard. I also must confess that I have not read even the most basic books on extreme programming, so many of the things I trouble myself with might be rather evident if I had. Therefore I would be more than happy to hear any recommendations, so I could stop asking stupid questions. (Although I know I won't have time to read anything "extra" for months...)

One of the big things I have stumbled with is writing good tests. This is of course one of the cornerstones of TDD and thus an important thing to learn well.

For me, I think the biggest question in writing tests has been: "Where do good tests come from?" As far as I have understood, the production code should be written to pass the tests, so writing code after a test is defined should be quite straightforward. And this has definitely been the case with simple examples, e.g. the topics we have encountered in the dojos. When writing complete, working programs, however, I have been a bit unsure what to do and when to do it. Good production code comes from good tests, but where do good tests come from?
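To make the simple case concrete: in a dojo-style exercise the test is decided first, and the production code is just whatever makes it pass. A toy round of this might look like the sketch below (class and method names are my own invention; I've skipped JUnit here so the sketch stands alone, but in a dojo the checks would live in a JUnit test case):

```java
// A toy test-first round. The checks in FibonacciTest were decided on
// first; Fibonacci.of() is then the simplest code that satisfies them.
class Fibonacci {
    // Written only after the checks below existed and were failing.
    static int of(int n) {
        return n < 2 ? n : of(n - 1) + of(n - 2);
    }
}

class FibonacciTest {
    public static void main(String[] args) {
        // The "specification", written before the production code:
        if (Fibonacci.of(0) != 0) throw new AssertionError();
        if (Fibonacci.of(1) != 1) throw new AssertionError();
        if (Fibonacci.of(6) != 8) throw new AssertionError();
        System.out.println("green bar");
    }
}
```

With examples this small, the test really does dictate the code; the trouble described above only starts when the program is bigger than one obvious function.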

I have lately been writing two programs in which I have tried to use TDD. The one I'm just finishing is a naive parser for reading Bayesian networks in Hugin Lite form and constructing an actual network based on the information read. (Actually, that's only the first part of the whole program, but this is a group effort and we're working independently on different parts of the program - and this part was my responsibility.) Writing tests was quite straightforward: I had the grammar of the input files and could write tests to check the validity of small parts of the input file one at a time. I guess combining this part of the program with the other parts would have been more challenging test-wise, but I am the only one in our group who writes unit tests, so I avoided the challenge here (sadly, I might add).
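As a sketch of what "one small part of the grammar at a time" means in practice: a single piece of the format, such as a node's state list, gets its own parsing routine, which can be tested against a hand-written fragment. The method name and the regex below are my own simplification for illustration, not the actual parser from the project:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: parse just the state names out of a fragment
// like `states = ("yes" "no");` so that this one grammar rule can be
// unit-tested in isolation, independently of the rest of the file.
class NodeStatesParser {
    static List<String> parseStates(String fragment) {
        List<String> states = new ArrayList<String>();
        // Collect every double-quoted token in the fragment.
        Matcher m = Pattern.compile("\"([^\"]*)\"").matcher(fragment);
        while (m.find()) {
            states.add(m.group(1));
        }
        return states;
    }
}
```

A test for this rule then needs nothing but a one-line string, e.g. checking that `parseStates("states = (\"yes\" \"no\");")` yields the two states - which is what made test-writing for the parser feel so straightforward.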

The second program is a sudoku puzzle solver (yes, I know there are a lot of them already, but this was a good chance to try some AI-related stuff on my own), and here I ran into problems. I started with the smallest thing I could think of: a cell on the sudoku game board. The first tests were really easy to think of; building a constraint-based solving algorithm/strategy would definitely require the cells to know which values were still available to them, so I started by writing tests that removed some values from a cell's list of possible values and checked that the remaining values were valid, and so on. Soon it was clear that if a cell had all but one of its possible values removed, the remaining possible value was the only valid one the cell could have, so it should be chosen automatically. Testing and implementing this was easy.
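The cell behaviour described above is small enough to sketch in a few lines. This is only my reconstruction of the idea, with names of my own invention, not the actual project code:

```java
import java.util.ArrayList;
import java.util.List;

// A sudoku cell that tracks its remaining candidate values. When the
// candidates are narrowed down to a single value, that value is
// chosen automatically - the easy-to-test rule mentioned above.
class Cell {
    private final List<Integer> possibleValues = new ArrayList<Integer>();
    private int value = 0; // 0 = not yet decided

    Cell() {
        for (int v = 1; v <= 9; v++) {
            possibleValues.add(v);
        }
    }

    // Remove one candidate; if exactly one candidate remains,
    // it becomes the cell's value.
    void removePossibleValue(int v) {
        possibleValues.remove(Integer.valueOf(v));
        if (possibleValues.size() == 1) {
            value = possibleValues.get(0).intValue();
        }
    }

    List<Integer> getPossibleValues() { return possibleValues; }
    int getValue() { return value; }
}
```

A test for the "last value standing" rule simply removes eight of the nine candidates and asserts that the ninth has been chosen. It's this kind of test - a single, obvious constraint on a single small object - that was easy; the trouble came one level up.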

After a while things got hairy. When adding the bigger components of the program, I started writing tests for particular kinds of data constructs I thought I might need. I chose to write tests that would suit the faint idea I had about the inner implementation of the program. Pretty soon I realized I had no idea what I was going to do next. I had a bunch of tests and methods that passed those tests - and I didn't know if I would ever need the functionality I had just implemented. In trying to avoid Big Design Up Front, I had done almost no design at all. I had code that would probably remain dead even once the program was finished.

I think the problem I have has two sides. First, I lack the knowledge and experience to decide what kind of functionality should be tested. For example, should I test some basic file handling routines? (I guess as long as I use the basic components offered by the programming language itself this would be overkill. However, custom error handling and other special functionality probably should be tested.) Second (and, I think, more importantly), I don't think enough about the big picture: what is unit testing (and agility in general) trying to achieve, and why does TDD help make things agile? Timo Rantalaiho wrote a good reminder in his latest post: "[W]hen doing TDD, we're not supposed to write tests but to specify requirements."

One of my teachers once said that if a piece of academic writing isn't clear, the thought behind it wasn't clear either. I think the same applies here: bad code is a sign of bad thinking, and writing code (or a test) when you're not sure what it should do means you're pretty much writing worthless crap. Avoiding BDUF should not mean avoiding design entirely.