
knro wrote:

But I'm not going to battle with you on this. If you don't want automated tests then fine, I'll pass on this and not waste my time.

I don't think we should pass up the opportunity for automated testing. However, we should approach this carefully and progressively. How about a middle-ground approach? A lot of issues, like the issue that brought up this very subject, were due to a bug in the core library, which automated tests would certainly detect. How can we implement this without a complete rewrite of several of the core components?

Progressively, for sure. It's not going to happen overnight across the entire codebase; it'll have to come bit by bit, and I don't expect all INDI developers to jump on it right from the start. I will agree that getting UTs in place can feel like something of a bind when you're starting out. It has to be a gradual process, most likely led and championed by someone. But that someone (or team) has to promote it and educate where required.

In my conversations with Gerry I offered to look at his driver; then he swapped from that to another. So let's not dance on this. What I suggest is this: introduce the base framework, then write one UT specifically targeting the bug that raised this issue in the first place, and put that in place. Once we have that, we can come back around the table and decide exactly how you want to proceed. Maybe UTs for the core but not drivers? Or drivers get UTs if their maintainer wants them? There are many routes to take, but that can be decided as we progress.

So this evening I will look for that bug you fixed (from the original issue) and write a UT for it. Then we take it from there.

However, I would like to respond to Gerry's last post, where he raised some legitimate concerns:
"So the question I've been trying to get answered, but obviously not clearly: how are we supposed to build something around the vendor binary to test an all-up driver?"

The vendor binary is a shipped driver with an exported API. What happens when you call that API is of no concern whatsoever to you. We are talking about testing your code, not their binary. However, that on its own doesn't answer your question, because your expectation is that something in the real world happens when your code does something. This is where you are missing the point.
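
To make that concrete: in practice, "testing your code, not their binary" means putting a seam between the two. Your driver logic talks to an interface you own; in production a thin wrapper forwards to the vendor's exported calls, and in tests a fake stands in for it. A minimal C++ sketch (every name here is invented for illustration; none of it is INDI or vendor API):

    #include <vector>

    // A seam you own between your driver logic and the vendor binary.
    struct IVendorApi
    {
        virtual ~IVendorApi() = default;
        virtual bool sendCommand(int cmd) = 0;
    };

    // Production implementation: a thin pass-through to the shipped binary.
    struct RealVendorApi : IVendorApi
    {
        bool sendCommand(int cmd) override
        {
            // ... forward to the vendor's exported function here ...
            (void)cmd;
            return true;
        }
    };

    // Test double: records what the driver asked for; no hardware needed.
    struct FakeVendorApi : IVendorApi
    {
        std::vector<int> sent;
        bool sendCommand(int cmd) override { sent.push_back(cmd); return true; }
    };

Your driver takes an IVendorApi reference, so a unit test can hand it the fake and assert on exactly what the driver tried to send, with no vendor binary or hardware in sight.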

Let me explain this by example, which I hope is clear.

First, an ideal situation that doesn't happen very often but that we all aspire to. Let's imagine for a second a vendor ships a binary driver and a PDF document describing the function calls it exports for you to consume. Let's assume this documentation is perfect and the driver is bug-free. Let's also assume it's simple: it opens and closes a dome roof. One function, "roof(arg)", with one arg, "open" or "closed".
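
To pin that down, here's roughly what the imagined vendor header might look like (entirely hypothetical; "vendor_dome.h", roof, ROOF_OPEN and ROOF_CLOSED are all invented for this example):

    // vendor_dome.h -- hypothetical vendor API, invented for illustration.
    #ifndef VENDOR_DOME_H
    #define VENDOR_DOME_H

    typedef enum { ROOF_OPEN, ROOF_CLOSED } RoofState;

    // Drives the roof to the requested state.
    // Returns 0 on success, non-zero on failure.
    int roof(RoofState state);

    #endif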

The natural instinct here is to write a program that opens and closes the roof. It's simple, right? But will that work at my house? I don't know; I don't own a dome to try it on. This type of testing only works for you and others who own one. But who cares, why would I want to know? I don't own one. The point here is that testing is limited to those who own the hardware. There's no possibility of automated tests here except in a severely limited environment. But like I said, who cares?
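
That naive program looks something like this (hypothetical again), and notice it can only ever prove anything on a machine with a real dome attached:

    // Manual smoke test: meaningful only with real hardware plugged in.
    #include "vendor_dome.h"
    #include <cstdio>

    int main()
    {
        if (roof(ROOF_OPEN) != 0)
        {
            std::printf("open failed\n");
            return 1;
        }
        if (roof(ROOF_CLOSED) != 0)
        {
            std::printf("close failed\n");
            return 1;
        }
        std::printf("roof cycled OK -- on MY dome, at least\n");
        return 0;
    }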

So now let's move on to a real-world case, one we all experience every day. Same as above, but for whatever reason the vendor now believes the building moves and the roof stays still: they think "open" is "closed" and vice versa. Your expectation of the specification is straightforward, but when you try to implement it, it fails. Is this a failure in your expectation? Or is it a bug from the vendor? Who knows; all we know is that the outcome is unexpected behaviour. Something needs to be fixed. So, after some investigation, you change the way you call their API and you get the expected outcome. You may even leave a comment in the code lambasting the vendor. But who cares, it's fixed.
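
That hard-won fix is exactly what a unit test should nail down. A sketch of what it might look like, assuming a Google Test-style framework and the hypothetical vendor header from above (the mapping function vendorArgFor is invented for illustration):

    #include <gtest/gtest.h>
    #include "vendor_dome.h"

    // Hypothetical driver logic: the vendor's sense of open/closed is
    // inverted, so we flip the argument before calling roof().
    RoofState vendorArgFor(bool wantOpen)
    {
        // The vendor believes the building moves and the roof stays
        // still, so their "closed" opens our roof and vice versa.
        return wantOpen ? ROOF_CLOSED : ROOF_OPEN;
    }

    // These tests ARE the corrected specification. Swap the mapping
    // back and the build breaks right here.
    TEST(DomeDriver, OpenRequestMapsToVendorClosed)
    {
        EXPECT_EQ(ROOF_CLOSED, vendorArgFor(true));
    }

    TEST(DomeDriver, CloseRequestMapsToVendorOpen)
    {
        EXPECT_EQ(ROOF_OPEN, vendorArgFor(false));
    }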

Six months later Johnny buys a dome, installs it and fires up your code. And it goes the wrong way! Little does Johnny know he's wired it back to front. But hey, look, the software is backwards; he "fixes" it and commits. His dome works. Two weeks later you upgrade and... Now, clearly it would have been better if Johnny hadn't done that. That's where unit tests come in. Swapping the functionality would have broken the build, and his commits would be pulled pending further investigation. Yes, Johnny could have fudged the unit tests to pass as well. But the important point here is that the unit tests are the specification. Change them at your peril! That's TWO mistakes if you do that. When the build breaks, stop! "What have I done?!" is what should scream at you, because a failed unit test is a specification/API breakage.

The above is very simplistic. It's more likely Johnny will find an "almost suitable" function and bend it slightly to meet his needs, breaking yours in the process. Unit tests are your contract that, in future, the underlying code is sound and meets the specification.

Also, it's worth noting how you build with unit tests. There are those out there who preach TDD (writing tests BEFORE you write code). I actually suck at this myself. I'm like, "I don't know what I want to test yet, I need to write some code first!" That is not true TDD. But my take is that if you commit UTs together with the code, you did TDD. It's tested.

I tend to achieve this by being in one of two states during development.

State 1, "the clean designer". Here, the spec is clear and the vendor API is fault free. Implement something, test it, glue it with a unit test, repeat (until done).

State 2. "the oil stained engineer". Here, State 1 was going great but then suddenly the vendor's API behaved in an unexpected manner. STOP! As an oily engineer I swap hats and start an investigation (which often involves writing spike code which you throw away). The outcome of which clarifies the specification (whether that be talk with the vendor or more often than not, reverse engineer). Once you have a clear understanding of the spec again swap hats back to State 1, implement, test, glue it in place with a unit test.

That tends to be my workflow. The unit tests that "glue it in place" are a true reflection of your final specification (after fixes if needed). Your working code is the implementation of that specification.

Let's come back to one more point. Imagine Johnny changed your code and committed it. Johnny doesn't like TDD: not only did he not fix the UT to match his change, he didn't even bother to run the test suite at all. Now, if that change lands back with you in a future upgrade and you don't notice, that swap could actually damage your dome (and maybe a telescope?). You are going to have to pay for that! And the only way anyone will catch this bug is when your roof hits your telescope. Ouch. It doesn't get more real-world than that. Unit tests would prevent that.

It all works OK if it's only you, your code, your dome and your telescope. But the moment you make it public, it's going to end up running on someone else's system one day. Community code needs a level of quality that people can trust.

You are already critical of the quality of SBIG's and QHY's code. If you want to be publicly critical of others, it's best your own house is in order first. Lead by example.

Maybe the INDI website should actually start publicly listing vendors who depart from their own specifications or are plain deaf to support requests. With a project like INDI we have an opportunity not only to provide some level of QA for INDI itself, but to lead by example and shame some vendors into a more positive light. Lifting our own coding standards lifts those around us as well :)

Everyone wins (except, of course, that unit-tested code takes longer to write, simply because it's more work; but the long-run benefits outweigh the short-term "feel good, it works now" factor).
