STARLIMS Backend regression tests automation

Finally. It was about time someone did something about it and wrote it up: automated testing of STARLIMS (actually, of any app with a REST API!). Ask anyone working with (mostly) any LIMS: regression tests are rarely there at all, and even more rarely automated.

I understand it is difficult to automate the front end, which is what tends to break… Nonetheless, I had this idea – please read through! – and I think there’s an easy way to automate some regression tests on a STARLIMS instance. Here are my arguments for why it brings value:

  1. Once a framework is in place, the effort is whatever you put in. It can be a lot of effort, or minimal. I would say aim for minimal effort at the beginning. Read on, you’ll understand why.
  2. Focus on bugs. For any bug fixed in the backend, prepare a regression test. Chances are you’re doing it anyway (writing a test script to check that your script runs?)
  3. For features, just test the default parameters at first. You can go ahead and do more, but this will at least tell you that the script still compiles and handles default values properly.
  4. You CAN and SHOULD have regression tests on your datasources! At the very least, do a RunDS(yourDatasource) to check that it compiles. If you’re motivated, convert the XML to a .NET Datasource and check that the columns you’ll need are there.
  5. Pinpoint regression tests. You fixed a condition? Test that condition. Not all conditions. Otherwise it becomes a unit test, not a regression test.

The idea is that every day / week / whatever schedule you want, ALL OF YOUR REGRESSION TESTS WILL RUN AUTOMATICALLY. Therefore, if one day you fix one condition, and the next day you fix something else in the same area, you want both your new fix AND the previous condition’s fix to keep working. As such, the value of your regression tests grows over time. It is a matter of habit.

What you’ll need

POSTMAN – This will allow you to create a regression collection, add a monitor, and know when a regression fails. This is the actual tool that runs the tests.

REST API – We will be running a series of scripts from POSTMAN using the STARLIMS REST API. You’ll see, this is the easy part. If you already know how to implement new endpoints, this will be a breeze.

STARLIMS Setup

In STARLIMS, add a new server script category for your API. In my case, I call it API_Regression_v1. This will be part of the API’s route.

Add a new script API_Regression_v1.Run. This is our request class for the POST endpoint. The code behind it is simple: we receive a script category in the parameters, and we run all child scripts. Here’s a very simple implementation:

:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE POST;
:PARAMETERS payload;

:DECLARE ret, finalOutput;

finalOutput := CreateUdObject();
:IF !payload:IsProperty("category"); 
    finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
    finalOutput:response := CreateUdObject();
    finalOutput:response:StatusCode := Me:HTTP_NOT_FOUND;
    :RETURN finalOutput;
:ENDIF;

/* TODO: implement script validation (can we run this script through regression test? / does it meet regression requirements?) ;
ret := Me:processCollection( payload:category );
finalOutput:StatusCode := Me:HTTP_SUCCESS;
finalOutput:response := ret;
finalOutput:response:StatusCode := Me:HTTP_SUCCESS;
:RETURN finalOutput;

:ENDPROC;

:PROCEDURE processCollection;
:PARAMETERS category;
:DECLARE scripts, i, sCatName, output, script;
output := CreateUdObject();
output:success := .T.;
output:scripts := {};
sCatName := Upper(category);
scripts := SQLExecute("select   coalesce(c.DISPLAYTEXT, c.CATNAME) + '.' + 
                                coalesce(s.DISPLAYTEXT, s.SCRIPTNAME) as s
                        from LIMSSERVERSCRIPTS s
                        join LIMSSERVERSCRIPTCATEGORIES c on s.CATEGORYID = c.CATEGORYID 
                        where c.CATNAME like ?sCatName? 
                        order by s", "DICTIONARY" );

:FOR i := 1 :TO Len(scripts);
    script := CreateUdObject();
    script:scriptName := scripts[i][1];
    script:success := .F.;
    script:response := "";
    :TRY;
        ExecFunction(script:scriptName);
        script:success := .T.;
    :CATCH;
        script:response := FormatErrorMessage(getLastSSLError());
        output:success := .F.;
    :ENDTRY;
    aAdd(output:scripts, script);
:NEXT;

:RETURN output;
:ENDPROC;

As you can guess, you will NOT want to expose this endpoint in a production environment. You’ll want to run this on your development / test instance; whichever makes the most sense to you (maybe both). You might also want to add some more restrictions, like only allowing categories that start with “Regression” or something along those lines… I added a generic setting called “/API/Regression/Enabled” with a default value of false to check the route (see next point), and a list “/API/Regression/Categories” of which categories can be run (a whitelist).

Next, you should add this to your API route. I will not explain here how to do this; it should be something you are familiar with. Long story short, API_Helper_Custom.RestApiRouter should be able to route callers to this class.

POSTMAN setup

This part is very easy. Create yourself a new collection – something like STARLIMS Regression Tests v1. Prepare this collection with an environment so you can connect to your STARLIMS instance.

One neat trick: prepare your collection with a pre-request script that will make it way easier to use. I have this script I tend to re-use every time:

// get data required for API signature
const dateNow = new Date().toISOString();
const privateKey = pm.environment.get('SL-API-secret'); // secret key
const accessKey = pm.environment.get('SL-API-Auth'); // public/access key

const url = pm.request.url.toString();
const method = pm.request.method;
const apiMethod = "";

var body = "";
if (pm.request.body && pm.request.body.raw){
    body = pm.request.body.raw;
}
// create base security signature
var signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;
// get encoding hash of signature that starlims will attempt to compare to
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));

// create headers
pm.request.headers.add({"key":"SL-API-Timestamp", "value":dateNow});
pm.request.headers.add({"key":"SL-API-Signature", "value":encodedHash});
pm.request.headers.add({"key":"SL-API-Auth", "value":accessKey});
pm.request.headers.add({"key":"Content-Type", "value":"application/json"});

Note: In the above code, there are a few things you need to initialize in your environment; pay attention to the pm.environment.get() variables.

Once that is done, add one POST request to your collection. Something that looks like this:

POST request example
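For reference, the body of that request is just a small JSON document naming the category to run (the category name here is the example one used later in this article):

```json
{
    "category": "My_Regression_POC"
}
```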

See the JSON body? We’re just telling STARLIMS “this is the category I want you to run”. With the above script, all scripts in that category will run when this request is sent. And since every script runs in a try/catch, you’ll get a nice response reporting each script’s success (or failure).
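You can also make the monitor fail automatically when any script fails. The response shape below is my assumption based on the processCollection code above (a success flag plus a scripts array); here is a small, self-contained helper that extracts the names of the failing scripts, which you could call from a POSTMAN Tests tab:

```javascript
// Given the payload produced by processCollection above
// (assumed shape: { success, scripts: [{ scriptName, success, response }] }),
// return the names of the scripts that failed.
function failingScripts(response) {
    return (response.scripts || [])
        .filter(script => !script.success)
        .map(script => script.scriptName);
}

// Example payload mimicking one passing and one failing regression script
const example = {
    success: false,
    scripts: [
        { scriptName: "My_Regression_POC.compile_regression_framework", success: true, response: "" },
        { scriptName: "My_Regression_POC.bug_1234_fix", success: false, response: "Some SSL error" }
    ]
};
console.log(failingScripts(example)); // → [ 'My_Regression_POC.bug_1234_fix' ]
```

In POSTMAN, you would feed the parsed response into this function inside a pm.test() callback and assert the resulting list is empty, so a red test (and the monitor email) points straight at the broken scripts.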

Let’s create at least one regression test. In the category (in my case, My_Regression_POC), I will add one script named “compile_regression_framework”. The code is very simple: I just want to make sure my class is valid (no typos and such).

:DECLARE o;
o := CreateUdObject("API_Regression_v1.run");
:RETURN .T.;

Setup a POSTMAN monitor

Now, on to the REALLY cool stuff. In POSTMAN, go to Monitors:

POSTMAN – Monitor option

Then just click the small “+” to add a new monitor. It is very straightforward: all you need to pick is a collection (which we created earlier) and an environment (which you should have by now). Then set up the schedule and the email address to notify in case it fails.

Setting up a Monitor

And that’s it, you’re set! You have your framework in place! The regression tests will run every day at 3pm (according to the above settings), and if something fails, you will receive an email. This is what the dashboard looks like after a few days:

Monitor Dashboard

Next Steps

From here on, the trick is organizing regression scripts. In my case, here is what I do:

  1. I create a new category at the beginning of a sprint
  2. I duplicate the request in the collection, with the sprint name in the request’s title
  3. I change the JSON of the new request to mention the new category
  4. Then, for every bug fixed during that sprint, I create a regression script in that category. That script’s purpose is solely to test what was fixed.

What happens then is that every day, all previous sprints’ regression tests run, plus the new ones! I end up having a lot of tests.

Closing notes

Obviously, this does not replace a good testing team. It only supports them by re-testing things they might not think about. It also doesn’t test everything; there are always scenarios that can’t be tested with a server call alone. And it doesn’t test the front end.

But still, the value is there. What is tested is tested. And if something fails, you’ll know!

One question a developer asked me about this is: “sometimes, you change code, and it will break a previous test, and this test will never pass again because it is now a false test. What then?”

Answer: either delete the script, or just change the first line to :RETURN .T.; with a comment saying the test is obsolete. Simple as that.

At some point, you can start creating more collections and adding more monitors, to split tests across different schedules (some tests run weekly, others daily, etc.).

And finally, like I said, the complexity and how much you decide to test is really up to you. I recommend starting small; the value is there without much effort. Then, if a critical scenario arises, you can write a more complex test. That should be the exception.

You have ideas on how to make this better? Share!

2 thoughts on “STARLIMS Backend regression tests automation”

  • April 27, 2023 at 20:36

    Hello, sorry, I don’t check the comments on my site often enough!

    I haven’t tried it with Jenkins, but if you have specific questions, don’t hesitate to ask; I’ll do my best!

  • April 25, 2023 at 11:47

    Hello Michel,
    My name is Van Nguyen, and I have been a STARLIMS admin since 2018. We have a lot of regression problems and bugs after installing new packages, so your article is very interesting and exactly what I was looking for. However, I have questions about implementing automated tests with Jenkins, on top of which I could add Cucumber tests (functional tests). I also have questions about creating unit tests in the LIMS.
    Thank you for taking the time to answer me.
    Best regards,
    Van
