
After some discussion with a couple of my colleagues about testing, I wondered: what do others do?

I'm not talking about unit tests etc., but the tests we developers do while writing code to make sure that it's functional in a way the user might expect, e.g. when I click this button, these controls are hidden and these other controls appear in their place, as per the design.

One of my colleagues and I test frequently, almost to the extent of: write a minor feature, check it compiles, test that the UI does what we expect and is usable, verify that the edge cases also work, fix anything broken, and then, once all is good, move on to the next (minor) feature.
Whereas another of my colleagues will write a whole module and not even hit compile until the end, and only then test. By that time he's probably forgotten some of the features/specs, so some things get omitted from his testing.

I can't understand his way of working, but he can't understand mine. So what's the most common way of doing this?


I use both approaches.


Some bits of code let you take a fairly piecemeal approach, like what you describe.


Others require quite a bit of code to be written before you can do much of anything. 


In the latter situations, I'll often compile just to look for typos and missing declarations, though 10.4.2 is quite good at highlighting those kinds of things with its background parsing.


Getting "clean code" that compiles is one thing. Making sure it does what it's supposed to is something else.


Sometimes I'll build just enough UI scaffolding to let me play with bits of code as it evolves, but I'm not a fan of "Design for Test" so I don't make independent tests.


UI-driven stuff either does what you want or it doesn't. I don't see the need for separate test code.


The vast majority of what I do is UI-driven, so I don't even bother with unit tests. They are best for libraries.


(I've often wished there were a way to add meta-methods to classes, such as test meta-methods that you could attach just for testing. But often they need to be run at a higher level. Sometimes it makes sense to have integrity tests that check that certain combinations of data members are provided and that nothing is out of range, but they'd still be called by higher-level functions. TForms used for data entry let you do this a little with Validate methods.)
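A lightweight version of such an integrity test can be a plain public method on the class, which higher-level code calls whenever it matters. A rough sketch (the class and field names here are invented purely for illustration):

```pascal
type
  TOrderHeader = class
  private
    FCustomerID: Integer;
    FQuantity: Integer;
    FUnitPrice: Currency;
  public
    // Integrity check: returns False with a reason instead of raising,
    // so the higher-level caller decides what to do with a bad combination.
    function CheckIntegrity(out Reason: string): Boolean;
  end;

function TOrderHeader.CheckIntegrity(out Reason: string): Boolean;
begin
  Result := False;
  if FCustomerID <= 0 then
    Reason := 'Missing customer'
  else if FQuantity <= 0 then
    Reason := 'Quantity out of range'
  else if FUnitPrice < 0 then
    Reason := 'Negative unit price'
  else
  begin
    Reason := '';
    Result := True;
  end;
end;
```

It's not a substitute for real tests, but it keeps the range/combination checks next to the data they guard, much like a Validate method on a data-entry form.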


That said, I'll frequently add treeviews or listviews or memos to let me visualize results as I'm adding things. And sometimes things don't look as nice as I originally thought, so I'll change them.


When writing a new class, I try to design the structure first, write stub methods, and then implement them one by one. Alas, implementation details frequently dictate structure changes, so I have to redesign. I'm not a fan of TDD either when creating something from the ground up; I prefer to have something more or less ready to use first. Of course, as soon as the state is compilable (I try to keep it that way), I compile periodically. Then I test: run it, create tests, etc.
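The "structure first, stubs, then fill in" approach might look roughly like this (TImporter and its method names are made up just to show the shape; the stubs keep the unit compilable the whole time):

```pascal
uses
  System.SysUtils;

type
  // The public shape of the class is decided up front;
  // each method starts life as a stub that raises if called.
  TImporter = class
  public
    procedure LoadFile(const FileName: string);
    procedure Parse;
    procedure Commit;
  end;

procedure TImporter.LoadFile(const FileName: string);
begin
  raise ENotImplemented.Create('TImporter.LoadFile');
end;

procedure TImporter.Parse;
begin
  raise ENotImplemented.Create('TImporter.Parse');
end;

procedure TImporter.Commit;
begin
  raise ENotImplemented.Create('TImporter.Commit');
end;
```

Replacing one `raise` at a time means the project builds after every step, and any path that reaches an unfinished method fails loudly instead of silently.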

When modifying code, I write and run tests in parallel; sometimes it helps me understand what I want from a method.


I am definitely like you. But first, pen and paper! I start something, a unit, a class, whatever, get a bit fed up with the compile-run-test cycle (more so with GUI projects), and create a test project that includes the (now new) unit if I feel it will save time; some scaffolding will always be needed. Then, when the structure and stuff start to look like something, I go back to the real-life project and finish up. This can take a couple of rounds, depending on the complexity of the specific thingies. My debug output is often a unit-"global" procedure taking a string or two, which I can call the same easy way regardless of other scaffolding. These test projects are nothing I save or reuse months later, because the actual unit will have changed. If I did that, it would be something TDD-like.
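Such a "global" debug procedure could be as small as this (a sketch only; it assumes Windows and routes through OutputDebugString, which shows up in the IDE's Event Log with no UI needed — the poster didn't show their actual version):

```pascal
unit DebugOut;

interface

procedure Dbg(const Msg: string); overload;
procedure Dbg(const Tag, Msg: string); overload;

implementation

uses
  Winapi.Windows, System.SysUtils;

procedure Dbg(const Msg: string);
begin
  // Timestamped line in the debugger's event log; no form required.
  OutputDebugString(PChar(FormatDateTime('hh:nn:ss.zzz', Now) + ' ' + Msg));
end;

procedure Dbg(const Tag, Msg: string);
begin
  // Two-string variant: a tag plus the message.
  Dbg('[' + Tag + '] ' + Msg);
end;

end.
```

Because it lives in one unit, both the scratch test project and the real project can use the same calls, and ripping it out later is a single uses-clause change.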


Your colleague's approach makes me nauseous. What with edge cases of arrays, range and overflow exceptions et al., I would feel my code had [untested] "holes" in it. Sometimes I even break out very simple stuff like SubStr(, , Pos() - 1) just to be sure it is foolproof.
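The Pos() - 1 example is worth spelling out, because Pos returns 0 when the separator is missing, so Copy(S, 1, Pos(Sep, S) - 1) silently yields an empty string instead of failing. A sketch of the broken-out, defensive version (TextBefore is an invented helper name):

```pascal
uses
  System.SysUtils;

// Returns the text before the first occurrence of Sep, or the whole
// string when Sep is absent. The Pos() = 0 edge case is handled
// explicitly instead of being hidden inside a one-liner.
function TextBefore(const S, Sep: string): string;
var
  P: Integer;
begin
  P := Pos(Sep, S);
  if P = 0 then
    Result := S           // separator missing: choose the behaviour deliberately
  else
    Result := Copy(S, 1, P - 1);
end;
```

Breaking the expression out makes the "hole" visible: you have to decide what a missing separator means, rather than letting Copy quietly return ''.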


I normally derive new code from existing projects; basically, I have a production line of very similar projects. When instead I start from a whole new project, I initially build a mini graphical interface with minimal functionality, then the core of the project with its various modules. I normally focus on a functional group of methods that I test as I build them, both in terms of performance (execution times with variations, possible bottlenecks) and of erroneous results. I always use High, Low, HighBound, Count, ..... and never numeric expressions to access indexed data, so "out of bound" access errors are normally never present.
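The High/Low/Count habit looks like this in practice (a minimal sketch; the array and list here are just stand-ins):

```pascal
uses
  System.Generics.Collections;

procedure FillAndSum;
var
  Data: array[0..9] of Integer;
  List: TList<Integer>;
  I, Sum: Integer;
begin
  // Low/High come from the array's declaration, so resizing it
  // can never introduce an out-of-range index in this loop.
  for I := Low(Data) to High(Data) do
    Data[I] := I * I;

  List := TList<Integer>.Create;
  try
    for I := Low(Data) to High(Data) do
      List.Add(Data[I]);

    Sum := 0;
    // Count, not a hard-coded upper bound, for the dynamic container.
    for I := 0 to List.Count - 1 do
      Inc(Sum, List[I]);
  finally
    List.Free;
  end;
end;
```

The bounds always come from the structure itself, so the code and the data can't drift apart when a size changes.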

Compilations and builds are done very frequently.

After I have a stable core, I move on to the UI part... which is normally the longest and most laborious part.

Of course, at the start of the project, the use of some PAPER and some brainstorming is fundamental.

Tests are done frequently; it's true that the development time is perhaps longer, but in the end the results are almost always OK, without the need for major modifications.
