Stefan Glienke

TestInsight 1.1.9.0 released


Thanks @Stefan Glienke. I'm not sure whether you'd prefer I open an issue, but I consider this more of a discussion, so I will ask it here first.

I'm trying to use TestInsight with my tests as-is, but I'm having some issues that relate to my understanding of how it works. For example:

 

1. Hierarchy

My tests have a hierarchical structure that is displayed in the DUnit GUI test runner. I manually use several test decorators to organize them, and sometimes I register tests with dots in their names for the same purpose.
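Just to illustrate the "dots in names" part (a minimal sketch with made-up names, not my actual code): DUnit's RegisterTest overload that takes a suite path splits it on the dots to build the tree, and the decorators from TestExtensions sit on top of that.

uses
  TestFramework;

type
  TBasicCriteriaTests = class(TTestCase)
  published
    procedure TestSaveBlob;
  end;

procedure TBasicCriteriaTests.TestSaveBlob;
begin
  Check(True); // placeholder for the real assertions
end;

initialization
  // each dotted suite path becomes a nested node in the DUnit GUI tree
  RegisterTest('Databases.SQLServer', TBasicCriteriaTests.Suite);
  RegisterTest('Databases.Firebird', TBasicCriteriaTests.Suite);
end.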

When using TestInsight, the results are not displayed in the same structure; all test results are shown in a "flat" layout. Is this known and intentional, or is there something I could do to improve it?

 

2. Loop

Do the tests keep running in a loop? At the very least, I can't close the test application; it seems to keep running in the background. Is this the normal behavior?

 

3. Test selection

I understand that I can select the tests to be executed, but only after an initial run of all tests has been performed. My tests take a long time to execute, so that's not a viable option. Is there a way to select the tests to be executed before they have actually been run for the first time?

 

Thanks!


1. Known and as designed. TI is designed with TDD in mind, where you usually don't care about hierarchies but only about passed/not passed; that is why it originally had just the "group by type" view. Only later did I introduce the "group by fixture" view, which has no hierarchical display but simply groups the test cases by their fixture names. The issue has been brought up several times, but to be honest I have no intention of replicating the full hierarchical view that the DUnit GUI has.

 

2. That is a misunderstanding. TI only runs the tests when you hit one of the play buttons, or when one of the two options to run them automatically is enabled; even then it only runs once the trigger (hitting save, or being idle for the set time) occurs. If the test application has an issue and keeps running in the background, you can hit the "stop" button, which then tries to terminate that process.

 

3. Obviously my UI is not good enough, as I get this question quite often, but reading the tooltips reveals this option: the second button from the left (the "fast forward" one) combines two functions. If no test is selected it does "discover tests", i.e. it runs the project but tells the framework not to execute any test case and thus reports all of them as skipped; if any test is selected it changes to "run selected tests". Long story short: simply hit that button and it collects all tests without actually executing them, which is an almost instant action.


1. This is (almost) a showstopper for me, unless I'm misunderstanding something or building my tests wrongly. The thing is, TMS Aurelius for example has thousands and thousands of tests. The reason is that tests are replicated across several scenarios: for MySQL, for SQL Server, for Oracle, using RemoteDB, not using RemoteDB, etc. Thus hierarchy information is a must to understand which tests failed in which situation (the test was "SaveBlob", but was it with SQL Server or Firebird? Using RemoteDB?). I'm not sure whether there is an alternative way to do this, or whether TI was simply not built for such a scenario.

 

2. OK, I will check it again more carefully.

 

3. OK, I will also check that, thanks a lot.

Edited by Wagner Landgraf


The decision is of course up to you, and I am not forcing anyone to use my plugin; after all, it excels at fast-running tests, not at test suites that might take minutes or longer to run.

 

Anyhow, if not being able to identify failed tests by their names is a problem for you, there might be something wrong with your test naming.

 

See this example from Spring4D

 

[Screenshot: Spring4D test results in TestInsight]

 

And here is the result from Grep, where you can see that TestOrdered and TestOrdered_Issue179 are implemented only once:

 

[Screenshot: Grep results]

 

So I am using inheritance to have a base test class here, and then I only inherit from that for the different dictionary implementations to be tested.

In my case the identical test is performed four times, and I guess the same will be the case for Aurelius, with potentially some tests that differ but could still be implemented as virtual methods in some base class and then inherited, thus providing the exact name of where they failed.
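Roughly along these lines (a minimal sketch with made-up names, not the actual Spring4D or Aurelius code):

type
  { base class implements the test logic once }
  TBlobTests = class(TTestCase)
  protected
    function DatabaseName: string; virtual;
  published
    procedure TestSaveBlob;
  end;

  { one descendant per scenario - the class name alone identifies the setup }
  TSQLServerBlobTests = class(TBlobTests)
  protected
    function DatabaseName: string; override;
  end;

function TBlobTests.DatabaseName: string;
begin
  Result := 'SQLite';
end;

procedure TBlobTests.TestSaveBlob;
begin
  // exercise blob saving against the database returned by DatabaseName
  Check(DatabaseName <> '');
end;

function TSQLServerBlobTests.DatabaseName: string;
begin
  Result := 'SQLServer';
end;

initialization
  RegisterTest(TBlobTests.Suite);
  RegisterTest(TSQLServerBlobTests.Suite);

A failure then shows up as TSQLServerBlobTests.TestSaveBlob instead of just another "SaveBlob".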

Furthermore, simply double-clicking on the failed test should lead you to the code.

 

Edit: another question: if you run your test suite as part of CI, how would you know there which SaveBlob failed if you just name a dozen tests "SaveBlob"? As you can see, this is not a problem particular to TI, but likely results from less than optimal naming of your tests and from relying on the hierarchical view of the DUnit UI to find out which one it is.

Edited by Stefan Glienke


Of course no one is forcing me! 🙂 It's just that it's a nice tool with several benefits that I'd love to use. I will see what I do.

 

My case is different. I also use test inheritance when applicable, but as I said, I have hundreds of tests spread over different TTestCase classes because it makes sense: logical grouping, different setup and teardown mechanisms, etc. Then all of those tests are grouped by decorators to work with different "setups" (different databases, for example). So they are actually the same test, and even double-clicking won't help, because they will all point to the same source code.

 

 


I see. If you can pass me some simple demo of how you structure your tests and where it leads to problems identifying which test result belongs to which test, it would help me understand and possibly find a solution. In the client sources that are shipped you can see what information TI gathers from each test and whether it is possibly missing some information apart from the position in the hierarchy.


The tests are organized more or less this way (never mind some unfriendly names):

 

[Screenshot: test tree showing the setups (NativeSQLite, FireDacMSSQL, etc.)]

 

As you can see, there are several setups (NativeSQLite, FireDacMSSQL, etc.). For each of them there is a test suite, named "TTestBasicCriteria" for example, with several test methods.

 

This is all set up dynamically by code like this, which gathers all registered "Aurelius tests" and then creates the proper DUnit tests based on them:

 

  // Wrap all registered tests into a single test setup for initial set up and final tear down
  for Setup in AureliusDBSetups do
  begin
    testconfig := TTestConfig.Create(Setup);
    DBTest := CreateSuiteFromConfig(testConfig, Setup.Name);
    RegisterTest(DBTest);
    if Setup.RemoteDB then
    begin
      testConfig := TRemoteDBTestConfig.Create(Setup);
      DBTest := CreateSuiteFromConfig(testConfig, Setup.Name + ' - Remote');
      RegisterTest(DBTest);
    end;
    if Setup.GenericDB then
    begin
      testConfig := TGenericDBTestConfig.Create(Setup);
      DBTest := CreateSuiteFromConfig(testConfig, Setup.Name + ' - Generic');
      RegisterTest(DBTest);
    end;
  end;

 

You can also see that some of the test cases are not replicated across all databases, because it wouldn't make sense; TTestMemoryDataset, for example.

1 hour ago, Stefan Glienke said:

3. Obviously my UI is not good enough, as I get this question quite often, but reading the tooltips reveals this option: the second button from the left (the "fast forward" one) combines two functions. If no test is selected it does "discover tests", i.e. it runs the project but tells the framework not to execute any test case and thus reports all of them as skipped; if any test is selected it changes to "run selected tests". Long story short: simply hit that button and it collects all tests without actually executing them, which is an almost instant action.

It's hidden functionality, which is not intuitive. I had to explain it to every colleague, so you probably only saw the tip of the iceberg.

You already change the hint once tests are selected. Why don't you also change the icon accordingly to indicate that the functionality has changed? I can try to design an icon that fits in with the others.

12 minutes ago, luebbe said:

Why don't you also change the icon accordingly to indicate that the functionality has changed? I can try to design an icon that fits in with the others.

Changing icons is not good style. I could have two buttons, but then again people would not read the hints either. Thanks for your offer, but I have the Arxialis icon set and take one from their icons if I need another.

The thing is, internally it is exactly the same function, "run selected tests", which basically means "run the test app and skip any test that is not selected". If there is no test selected, that means "run the test app and skip everything", which results in all tests being discovered.
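In Delphi terms, the decision boils down to something like this (a rough sketch of the idea, not the actual TI source):

uses
  Classes;

// a test runs only if it is part of a non-empty selection;
// with an empty selection every test is skipped - and thereby discovered
function ShouldRunTest(const testName: string; selectedTests: TStrings): Boolean;
begin
  Result := (selectedTests.Count > 0) and (selectedTests.IndexOf(testName) >= 0);
end;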

If something can be explained by "click this button to discover tests", I would rather spend my time on something more important, possibly writing a quick guide that explains everything on one or two pages and leaves no room for confusion; then I could at least reply with RTFM when someone asks me 😛

Edited by Stefan Glienke


"Run selected tests". If no tests are selected, no tests are being run. It makes sense.

 

But to be absolutely honest, I always wondered the same thing: how can I "find" the existing tests without running them all? It never entered my mind that I could just use that button.


50 minutes ago, Stefan Glienke said:

Please show how these tests show up in TI grouped by type.

I tried it again; I must be doing something wrong. This is what I get when I run it for the first time. Some warnings:

 

[Screenshot: warnings in the TestInsight results]

 

And then the remaining tests passed. The tests that are not replicated, grouped by database:

 

[Screenshot: non-replicated tests, grouped by database]

 

And the tests that are supposed to be replicated, grouped by database:

 

[Screenshot: replicated tests, grouped by database]

 

If I click "group by type" it doesn't help much:

 

[Screenshot: results with "group by type"]

 

Finally, I must say that all tests were run in the first place. Note that what I do is run the application with Ctrl+Shift+F9.

 


- The warnings appear because TI enables the FailsIfNoChecksExecuted option all the time (see TestInsight.DUnit.RunRegisteredTest) and then turns failures with the message 'No checks executed in TestCase' into warnings. This happens when you don't execute any CheckXXX in your test. Yes, such tests sometimes exist: put an FChecksCalled := True into the code, or create a class helper for TTestCase with a Pass method (which I did), or inherit from your own improved TTestCase class (which is also what I did in the past).
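For example, the base class variant could look like this (a minimal sketch; the exact internal field name differs between DUnit versions, so the sketch simply calls Check(True), which also marks the test as having executed a check):

uses
  TestFramework;

type
  TTestCaseBase = class(TTestCase)
  protected
    procedure Pass;
  end;

procedure TTestCaseBase.Pass;
begin
  // counts as an executed check, so FailsIfNoChecksExecuted
  // no longer turns this test into a warning
  Check(True);
end;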

 

- I have to say, all this back and forth with screenshots is very tedious and time-consuming for both of us in finding a solution, especially since without the exact code I cannot see which results belong to which setup code; hence my request for a tiny example with two or three test cases that demonstrates the issue, which I can look into. My guess so far (and I hate guessing with little information) is that, with the decorator approach, some information that would be needed for a unique name is lost. IIRC, in DUnit a decorator implements ITest and is thus able to change what is returned from the GetName method; I would guess that is what is needed here to produce a unique name.


OK. Reading the code, I think TI is just not prepared for it. In many places you consider the test name to be ClassName + TestName, as in:

 

function GetFullQualifiedName(const test: ITest): string;
begin
  Result := (test as TObject).ClassName + '.' + test.Name;
end;

Also, TTestInsightListener.StartSuite and EndSuite are not implemented, and I believe that's what would allow a tree-like structure in the IDE plugin. 

I also don't know how the plugin handles the information sent to it, but given what is available in TTestInsightResult and the lack of StartSuite/EndSuite information, I believe it's not trivial to show the results reflecting the original tree structure.

 

Thanks for your help!


StartSuite/EndSuite might be used by the GUI listener to build its hierarchy, I don't know, but it's not required at all; some sort of names containing their entire path information would also be needed, but as I said before, that is not planned.

Regardless, we are discussing the wrong issue here: you most likely have multiple instances of the same test case classes, and thus multiple tests return the same name; I suspect that you have more than the 350 tests you showed in the screenshot before.

It's as simple as this: if I refer to a file just by its file name, I will most likely find similarly named units in several folders; when I add some information that is unique for each of them, like the directory, I can identify them. As I said before, if the decorator did not just pass GetName on to its decoratee, the name would be unique.

 

TI is built to be test framework agnostic, and I try to keep it that way and not implement any quirk or unique feature into it; there was already enough trouble with the places where DUnit and DUnitX behaved differently in how they named and identified their tests.

 

That said, I am very sure that even without implementing a hierarchical display you will be able to use TI for your tests and find out which of the similarly named tests, running under different setups, failed.

 

Edit: I quickly looked into it, and I have to say the design of the decorators and the way they are implemented in TestExtensions.pas is a bit crappy, but anyhow:

We need the StartTest/EndTest methods here, as those are the ones being run for the decorator instances; we use IsDecorator to check whether it is one and then add to or remove from a current path variable. I remember you asking me about running tests in parallel, so make that a threadvar (I'm not sure whether running tests in parallel guarantees that only tests from different suites run in parallel, because otherwise DUnit itself might be in trouble). Anyway, include that path in GetFullQualifiedName as well. You might also want to change TTestDecorator.GetName to use its own name instead of just generating a generic name that does not tell you much.

 

Example:

 

type
  TMyDecorator = class(TTestDecorator)
    function GetName: string; override;
  end;

{ TMyDecorator }

function TMyDecorator.GetName: string;
begin
  Result := FTestName;
end;

initialization
  RegisterTest(TMyDecorator.Create(TMyDecorator.Create(TMyTestCase.Suite, 'foo'), 'qux'));
  RegisterTest(TMyDecorator.Create(TMyTestCase.Suite, 'bar'));
end.

And these changes in TestInsight.DUnit:

 

+threadvar
+  currentPath: string;
  
function GetFullQualifiedName(const test: ITest): string;
begin
*  Result := currentPath + (test as TObject).ClassName + '.' + test.Name;
end;

procedure TTestInsightListener.EndTest(test: ITest);
begin
+  if IsDecorator(test) then
+    currentPath := Copy(currentPath, 1, Length(currentPath) - Length(test.Name) - 1);
end;

procedure TTestInsightListener.StartTest(test: ITest);
var
  testResult: TTestInsightResult;
begin
+  if IsDecorator(test) then
+    currentPath := currentPath + test.Name + '.';

 

And I get this result:

 

[Screenshot: resulting test names in TestInsight]

Edited by Stefan Glienke

