Disclaimer: I worked on this project independently during personal time. Nothing here represents the views or endorsement of SGS. Any opinions, findings, and conclusions expressed in this blog post are solely mine. The project utilizes Python, OpenAI’s language models and STARLIMS mock code I created, which may have separate terms of use and licensing agreements.
AI is not a magician; it’s a tool. It is a tool for amplifying innovation
Fei-Fei Li
With this in mind, imagine: what if we could automatically get STARLIMS Code Review feedback? You know, an extra quality layer powered by AI?
“STARLIMS is a proprietary language” you will say.
“It has to be trained” you will say.
True; yet, what if?…
I ran another experiment. I was challenged to try Python and the OpenAI API, without much background to go on. Given my recent fun with CI/CD and the fact that I'm back working on the STARLIMS product, I thought: "Can we automatically analyze STARLIMS code? Like an automated code reviewer?" Well, yes, we can!
Following advice I was given a long time ago, let me start with the end result. I have a REST API running locally (Python Flask) with the following two endpoints:
POST /analyze/<language>
POST /analyze/<language>/<session_id>
The first kicks off a new analysis session, and the second lets the user continue an existing session (like queuing scripts and relating them together!).
I usually create nice diagrams, but for this one, really, the idea is simply:
STARLIMS <-> Python <-> Open AI
So no diagram for you today! How does it work?
I can pass SSL code to a REST API, and receive this:
{
  "analysis": {
    "feedback": [
      {
        "explanation": "Defaulting a parameter with a numeric value may lead to potential issues if the parameter is expected to be a string. It's safer to default to 'NIL' or an empty string when dealing with non-numeric parameters.",
        "snippet": ":DEFAULT nItemId, 1234;",
        "start_line": 4,
        "suggestion": "Consider defaulting 'nItemId' to 'NIL' or an empty string depending on the expected data type.",
        "type": "Optimization"
      }
    ]
  },
  "session_id": "aa4b3bd3-75bd-42e3-8f31-e53502e68256"
}
It works with STARLIMS Scripting Language (SSL), STARLIMS Data sources (DS) and … JScript! Here’s an example of a JScript output:
{
  "analysis": {
    "items": [
      {
        "detailed explanation": "Checking for an empty string using the comparison operator '==' is correct, but using 'Trim()' method before checking can eliminate leading and trailing white spaces.",
        "feedback type": "Optimization",
        "snippet of code": "if (strMaterialType == '')",
        "start line number": 47,
        "suggestion": "Update the condition to check for an empty string after trimming: if (strMaterialType.trim() === '')"
      },
      {
        "detailed explanation": "Using the logical NOT operator '!' to check if 'addmattypelok' is false is correct. However, for better readability and to avoid potential issues, it is recommended to explicitly compare with 'false'.",
        "feedback type": "Optimization",
        "snippet of code": "if (!addmattypelok)",
        "start line number": 51,
        "suggestion": "Update the condition to compare with 'false': if (addmattypelok === false)"
      },
      {
        "detailed explanation": "Checking the focused element is a good practice. However, using 'Focused' property directly can lead to potential issues if the property is not correctly handled in certain scenarios.",
        "feedback type": "Optimization",
        "snippet of code": "if ( btnCancel.Focused )",
        "start line number": 58,
        "suggestion": "Add a check to ensure 'btnCancel' is not null before checking its 'Focused' property."
      }
    ]
  },
  "session_id": "7e111d84-d6f4-4ab0-8dd6-f96022c76cff"
}
How cool is that? To achieve this, I used Python and the OpenAI API. I had to purchase some credits, but really, it is cheap enough and worth it when used at a small scale (like a development team). I put $10 in there, I have been running many tests (maybe a few hundred), and I am down by $0.06, so… I would say worth it.
The beauty of this is that my project supports the following:
Add new languages in about 5 minutes (just add the class, update the prompt, add the reference code, restart the app, and go!)
Enhance accuracy by providing good code, teaching the assistant what valid code looks like
To give you an idea, the project is very small.
Looking ahead with this small project, I’m thinking beyond just checking code for errors. Imagine if we could hook it up to our DevOps setup, like Azure DevOps or SonarQube. It would be like having a digital assistant that not only spots issues but also files bugs and suggests improvements automatically! This means smoother teamwork, better software quality, and fewer headaches for all of us.
Now that I have this working, I am thinking about a bunch of exciting ideas, like:
Integrate this as a Quality Gate on commits.
If it fails, it goes back to the developer
If it succeeds, record the results and run the pull request (or push to the next human reviewer)
Implement a mechanism for automatic Unit Tests generation (we potentially can do something there!)
Implement a mechanism for code coverage report (also possible!)
Integrate all of this into STARLIMS directly so we can benefit from it as part of a CI/CD pipeline
Dreaming is free, is it not? Well, not quite in this case, but I'm good for another $9.94…
I have the repo set as private on GitHub. This is a POC, but I think it can be a very cool thing for STARLIMS, and it would also work for any other proprietary language if I get some good sample code.
Hell, it can even work for already supported languages like JavaScript, C#, or anything else, without training! So we could use this pattern for virtually any code review.
There is something about STARLIMS that has been bugging me for a long time. Don't get me wrong – I think it is a great platform. I just question the relevance of XFD in 2024, and the choice of Sencha for the HTML part of it.
But an even more critical point: I question the principle of using the same server for the “backend” and the “frontend”. Really, the current architecture of STARLIMS (in a simplified way) is something like this:
Sure, you can add load balancers, multiple servers, batch processors… But ultimately, the server's role is both backend and web rendering, without really following the Server-Side Rendering (SSR) pattern. It hosts and provides the code to render from the backend and lets the client do the rendering. So, in fact, it is Client-Side Rendering (CSR) with most of the SSR drawbacks.
This got me thinking. What if we really decoupled the front end from the backend? And what if we made this using real micro services? You know, something like this:
React needs no introduction. The infamous open-source platform behind Facebook. Very fast and easy, huge community… Even the AI chatbots will generate good React components if you ask nicely! For security, it's like any other platform: it's as secure as you make it. And if you pair it with Node.js, then it's very easy, which brings me to the next component…
Another one that needs no introduction. JavaScript on the backend? Nice! On one end, you handle the session & security (with React) and communicate with STARLIMS through the out-of-the-box REST API. Node can be just a proxy to STARLIMS (it is, currently), but it should also be leveraged to extend the REST APIs. It makes it a lot easier to implement new APIs, connect to STARLIMS (or anything else for that matter!), and speed up the process. Plus, you easily get cool stuff like WebSockets if you want, and you can cache some reference data in Redis to go even faster!…
Fast / lightweight / free cache (well, it was when I started). I currently use it only for sessions; since the REST API is stateless in STARLIMS, I manage the sessions in Node.js and store them in Redis, which allows me to spin up multiple Node.js instances (load balancing?) and share sessions across them. If you don't need to spin up multiple proxies, you don't need this. But hey, it's cooler with it, no?
I was thinking (I haven’t done anything about this yet) to have a cron job running in Node.js to pull reference data from STARLIMS (like test plans, tests, analytes, specifications, methods, etc) periodically and update Redis cache. Some of that data could be used in the UI (React.js) instead of hitting STARLIMS. But now, with the updated Redis license, I don’t know. I think it is fine in these circumstances, but I would need to verify.
… BUT WHY?
Because I can! – Michel R.
Well, just because. I was learning these technologies, had this idea, and I just decided to test the theory. So, I tried. And it looks like it works! There are multiple theoretical advantages to this approach:
Performance: Very fast (and potentially responsive) UI.
Technology: New technology availability (websockets, data in movement, streaming, etc.).
Integration: API first paradigm, Node.js can make it really easy to integrate with any technology!
Source control: 100% Git for UI code, opening all git concepts (push, pull requests, merge, releases, packages, etc.).
Optimization: Reduce resource consumption from STARLIMS web servers.
Scalability: High scalability through containerization and micro-services.
Pattern: Separation of concerns. Each component does what it's best at.
Hiring: React.js and Node.js developers are far easier to find than STARLIMS developers!
Here are some screenshots of what it can look like:
As you can see, at this stage, it is very limited. But it does work, and I like a couple of ideas / features I thought of, like the F1 for Help, the keyboard shortcuts support, and more importantly, the speed… It is snappy. In fact, the speed is limited to what the STARLIMS REST API can provide when getting data, but otherwise, everything else is way, way faster than what I’m used to.
How does it work, really?
This is magic! – Michel R.
Magic! … No, really, I somewhat "cheated". I implemented a generic API in the STARLIMS REST API. This endpoint supports both ExecFunction and RunDS, as well as impersonation. Considering that the STARLIMS REST API is quite secure (it uses anti-tampering patterns; you can ask them to explain that to you if you want) and reliable, I created a generic endpoint. It receives a payload containing the script (or datasource) to run, along with the parameters, and it returns the data in JSON format.
Therefore, in React, you would write code very similar to lims.CallServer(scriptName, parameters) in XFD/Sencha.
Me being paranoid, I added a “whitelisting” feature to my generic API, so you can whitelist which scripts to allow running through the API. Being lazy, I added another script that does exactly the same, without the whitelisting, just so I wouldn’t have to whitelist everything; but hey, if you want that level of control… Why not?
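To give you an idea, the whitelisting gate in front of the generic execution can be as simple as this (a minimal sketch: the whitelist content and property names are made up, and the rest of the endpoint is omitted here):
:DECLARE aWhitelist, bAllowed, i;
/* only these scripts may be executed through the generic endpoint;
aWhitelist := { "MyCategory.GetFolders", "MyCategory.GetSamples" };
bAllowed := .F.;
:FOR i := 1 :TO Len(aWhitelist);
:IF Upper(payload:script) == Upper(aWhitelist[i]);
bAllowed := .T.;
:ENDIF;
:NEXT;
:IF !bAllowed;
/* not whitelisted: reject (a 403 would work just as well if your base class exposes one);
output:StatusCode := Me:HTTP_NOT_FOUND;
:RETURN output;
:ENDIF;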
Conclusion
My non-scientific observations are that this works quite well. The interface is snappy (a lot faster than even Sencha), and developing new views is somewhat easier than in either technology as well.
Tip: you can just ask an AI to generate a view in React using, let's say, Bootstrap 5 classNames, with placeholders to call your API endpoints, et voilà! You have something 90% ready.
Or you learn React and Vite, build your own components, and create your own STARLIMS runtime (kind of).
This whole experiment was quite fun, and I learned a ton. I think there might actually be something to do with it. I invite you to take a look at the repositories, of which I decided to create a public version for anyone to use and contribute to under an MIT-with-commercial-restrictions license:
You need both projects to get this working. I recommend you start by reading both READMEs.
Right now, I am parking this project, but if you would like to learn more, want to evaluate this but need guidance, or are interested in actually using this in production, feel free to drop me an email at [email protected]! Who knows what happens next?
Well, all was good under the sun, until a reader pointed out that I had omitted a very important piece. I was expecting STARLIMS developers to figure it out on their own, but it is not so. Re-reading, I realized that, indeed, one might need directions.
I’m talking about the API_Helper_Custom.RestApiCustomBase class.
This class is not strictly needed; you can instead inherit from the RestApi.RestApiBase class.
But having our own custom base is good! It allows us to implement common functionality that all your services may need. In this example, I'll provide an impersonation method, very useful if you wish to have a single integration user but still know which user should actually be impersonated.
:CLASS RestApiCustomBase;
:INHERIT RestApi.RestApiBase;
:DECLARE APIEmail;
:DECLARE LangId;
:DECLARE UserName;
/* do stuff here that applies to all custom API's;
:PROCEDURE Constructor;
:DECLARE sUser;
Me:LangId := "ENG";
Me:UserName := GetUserData();
Me:APIEmail := Request:Headers:Get("SL-API-Email");
sUser := LSearch("select USRNAM from USERS where EMAIL = ? and STATUS = ?", "", "DATABASE", { Me:APIEmail, 'Active' });
:IF ( !Empty(sUser) ) .and. ( sUser <> GetUserData() );
Me:Impersonate(sUser);
:ENDIF;
Me:LangId := LSearch("select LANGID from USERS where USRNAM = ?", "ENG", "DATABASE", { MYUSERNAME });
:ENDPROC;
/* Allow system to impersonate a user so transactions are corrected against the correct user;
:PROCEDURE Impersonate;
:PARAMETERS sUser;
:IF !IsDefined("MYUSERNAME");
:PUBLIC MYUSERNAME;
:ENDIF;
MYUSERNAME := sUser;
SetUserData(MYUSERNAME);
:ENDPROC;
As you can see, this is pretty simple. Once you have this REST API ready, inherit this class, and you should have a working API. In the above example, the code expects a header SL-API-Email containing the email of the user to impersonate. If it is not provided, then the user to whom the key belongs remains the current user.
Hope this helps those who didn’t yet figure it out!
Finally! It was about time someone did something about this and talked about it: automated testing of STARLIMS (actually, of any app with a REST API!). Ask anyone working with (mostly) any LIMS: regression tests are rarely there at all, and even more rarely automated.
I understand it is difficult to automate the front end, which is what tends to break… Nonetheless, I had this idea – please read through! – and I think there's an easy way to automate some regression tests against a STARLIMS instance. Here are my arguments for why it brings value:
Once a framework is in place, the effort is what you put in. It can be a lot, or minimal. I would say aim for minimal effort at the beginning. Read on, you'll understand why.
Focus on bugs. For any bug fixed in the backend, prepare a regression test. Chances are you're doing it anyway (writing a test script to check that your fix runs?).
For features, just test the default parameters at first. You can go ahead and do more, but this will at least tell you that the script still compiles and handles default values properly.
You CAN and SHOULD have regression tests on your datasources! At the very least, do a RunDS(yourDatasource) to check that it compiles. If you're motivated, convert the XML to a .NET datasource and check that the columns you need are there.
Pinpoint your regression tests. You fixed a condition? Test that condition. Not all conditions. Otherwise it becomes a unit test, not a regression test.
The idea is that every day / week / whatever schedule you want, ALL OF YOUR REGRESSION TESTS WILL RUN AUTOMATICALLY. Therefore, if one day you fix one condition, and the next day you fix something else in the same area, you want both your fix AND the previous fix to keep working. As such, the value of your regression tests grows over time. It is a matter of habit.
What you’ll need
POSTMAN – This will allow you to create a regression collection, add a monitor, and know when a regression fails. This is the actual tool.
REST API – We will be running a series of scripts from POSTMAN using the STARLIMS REST API. You'll see, this is the easy part. If you followed how to implement new endpoints, this will be a breeze.
STARLIMS Setup
In STARLIMS, add a new server script category for your API. In my case, I call it API_Regression_v1. This will be part of the API’s route.
Add a new script, API_Regression_v1.Run. This is our request class for the POST endpoint. The code behind it is simple: we receive a script category in the parameters, and we run all child scripts. Here's a very simple implementation:
:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;
:PROCEDURE POST;
:PARAMETERS payload;
:DECLARE ret, finalOutput;
finalOutput := CreateUdObject();
:IF !payload:IsProperty("category");
finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
finalOutput:response := CreateUdObject();
finalOutput:response:StatusCode := Me:HTTP_NOT_FOUND;
:RETURN finalOutput;
:ENDIF;
/* TODO: implement script validation (can we run this script through regression test? / does it meet regression requirements?) ;
ret := Me:processCollection( payload:category );
finalOutput:StatusCode := Me:HTTP_SUCCESS;
finalOutput:response := ret;
finalOutput:response:StatusCode := Me:HTTP_SUCCESS;
:RETURN finalOutput;
:ENDPROC;
:PROCEDURE processCollection;
:PARAMETERS category;
:DECLARE scripts, i, sCatName, output, script;
output := CreateUdObject();
output:success := .T.;
output:scripts := {};
sCatName := Upper(category);
scripts := SQLExecute("select coalesce(c.DISPLAYTEXT, c.CATNAME) + '.' +
coalesce(s.DISPLAYTEXT, s.SCRIPTNAME) as s
from LIMSSERVERSCRIPTS s
join LIMSSERVERSCRIPTCATEGORIES c on s.CATEGORYID = c.CATEGORYID
where c.CATNAME like ?sCatName?
order by s", "DICTIONARY" );
:FOR i := 1 :TO Len(scripts);
script := CreateUdObject();
script:scriptName := scripts[i][1];
script:success := .F.;
script:response := "";
:TRY;
ExecFunction(script:scriptName);
script:success := .T.;
:CATCH;
script:response := FormatErrorMessage(getLastSSLError());
output:success := .F.;
:ENDTRY;
aAdd(output:scripts, script);
:NEXT;
:RETURN output;
:ENDPROC;
As you can guess, you will NOT want to expose this endpoint in a production environment. You'll want to run this on your development / test instance, whichever makes most sense to you (maybe both). You might also want to add some more restrictions, like only allowing categories that start with "Regression" or something along those lines… I added a generic setting called "/API/Regression/Enabled" with a default value of false to check the route (see next point), and a list "/API/Regression/Categories" to define which categories can be run (whitelisted).
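The validation behind the TODO in the POST procedure can then be as simple as something like this (a sketch: Me:getSetting is a placeholder for however you read generic settings in your instance):
:PROCEDURE isCategoryAllowed;
:PARAMETERS category;
:DECLARE sEnabled, aCategories, i;
/* Me:getSetting is a placeholder - read "/API/Regression/Enabled" and "/API/Regression/Categories" however your instance stores generic settings;
sEnabled := Me:getSetting("/API/Regression/Enabled", "false");
:IF Upper(sEnabled) <> "TRUE";
:RETURN .F.;
:ENDIF;
aCategories := Me:getSetting("/API/Regression/Categories", {});
:FOR i := 1 :TO Len(aCategories);
:IF Upper(category) == Upper(aCategories[i]);
:RETURN .T.;
:ENDIF;
:NEXT;
:RETURN .F.;
:ENDPROC;
Call it at the top of POST and return an error status when it comes back .F..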
Next, you should add this to your API route. I will not explain how to do that here; it should be something you are familiar with. Long story short, API_Helper_Custom.RestApiRouter should be able to route callers to this class.
POSTMAN setup
This part is very easy. Create yourself a new collection – something like STARLIMS Regression Tests v1. Prepare this collection with an environment so you can connect to your STARLIMS instance.
One neat trick: prepare your collection with a pre-request script that will make it way easier to use. I have this script I tend to re-use every time:
// required for the hash part; crypto-js is bundled with POSTMAN, nothing to install
var CryptoJS = require("crypto-js");
// get data required for API signature
const dateNow = new Date().toISOString();
const privateKey = pm.environment.get('SL-API-secret'); // secret key
const accessKey = pm.environment.get('SL-API-Auth'); // public/access key
// full URL of the endpoint being called; rebuilt here from a {{url}} environment variable
const url = pm.environment.get('url') + request.url.substring(8);
const method = request.method;
const apiMethod = ""; // not using API methods here
var body = "";
if (pm.request.body && pm.request.body.raw){
body = pm.request.body.raw;
}
// create base security signature
var signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;
// get encoding hash of signature that starlims will attempt to compare to
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));
// create headers
pm.request.headers.add({"key":"SL-API-Timestamp", "value":dateNow});
pm.request.headers.add({"key":"SL-API-Signature", "value":encodedHash});
pm.request.headers.add({"key":"SL-API-Auth", "value":accessKey});
pm.request.headers.add({"key":"Content-Type", "value":"application/json"});
Note: In the above code, there are a few things you need to initialize in your environment; pay attention to the pm.environment.get() variables.
Once that is done, you add one request in your collection of type POST. Something that looks like this:
POST request example
See the JSON body? We're just telling STARLIMS "this is the category I want you to run" – something like { "category": "My_Regression_POC" }. With the above script, all scripts in this category will run when this request is sent. And since every script runs in a try/catch, you'll get a nice response listing each script's success (or failure).
Let's create at least one regression test. In the category (in my case, My_Regression_POC), I will add one script named "compile_regression_framework". The code will be very simple: I just want to make sure my class is valid (no typos and such).
:DECLARE o; o := CreateUdObject("API_Regression_v1.run"); :RETURN .T.;
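For a regression test tied to an actual bug fix, a script in the same category could look like this (a sketch: the script and datasource names are made up, the pattern is the point):
:DECLARE result;
/* re-run the fixed script with the exact parameters that used to break it;
result := ExecFunction("Lab_Calculations.ComputeAverage", { 10, 20 });
:IF result <> 15;
/* force a runtime error (or call your favourite error-raising helper) so the framework's :TRY / :CATCH flags this script as failed;
result := 1 / 0;
:ENDIF;
/* also make sure a related datasource still compiles;
RunDS("Lab_Datasources.GetResults");
:RETURN .T.;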
Setup a POSTMAN monitor
Now, on to the REALLY cool stuff. In POSTMAN, go to monitor:
POSTMAN – Monitor option
Then just click the small "+" to add a new monitor. It is very straightforward: all you need to pick is a collection (which we created earlier) and an environment (which you should have by now). Then set up the schedule and the email to notify in case it fails.
Setting up a Monitor
And that's it, you're set! You have your framework in place! The regression tests will run every day at 3 pm (according to the above settings), and if something fails, you will receive an email. This is what the dashboard looks like after a few days:
Monitor Dashboard
Next Steps
From here on, the trick is organizing regression scripts. In my case, what I do is:
I create a new category at the beginning of a sprint
I duplicate the request in the collection, with the sprint name in the request’s title
I change the JSON of the new request to mention the new category
Then, for every bug fixed during that sprint, I create a regression script in that category. That script’s purpose is solely to test what was fixed.
What happens then is that every day, all previous sprints' regression tests run, plus the new ones! I end up having a lot of tests.
Closing notes
Obviously, this does not replace a good testing team. It only supports them by re-testing things they might not think about. It also doesn't test everything; there are always scenarios that can't be tested with a server call alone. It doesn't test the front end.
But still, the value is there. What is tested is tested. And if something fails, you’ll know!
One question a developer asked me about this is: "Sometimes you change code, and it will break a previous test, and this test will never pass again because it is now a false test. What then?"
Answer: either delete the script, or just change the 1st line to :RETURN .T.; with a comment that the test is obsolete. Simple as that.
At some point, you can start creating more collections and adding more monitors to split schedules (some tests run weekly, others daily, etc.).
And finally, like I said, the complexity and how much you decide to test is really up to you. I recommend starting small; the value is there without much effort. Then, if a critical scenario arises, you can write a more complex test. That should be the exception.
Alright folks, I was recently involved in other LIMS integrations, and one pattern that keeps coming back is a "click this functionality to enable the equivalent API" approach. Basically, by module, you decide what can be exposed or not. And then, by role or by user (or lab, or all of that), you grant consuming rights.
It got me thinking “heh, STARLIMS used to do that with the generic.asmx web service”. RunAction and RunActionDirect anyone?
So, that's just what I did, for fun, but also thinking that if I went around re-writing routing and scripts for every single functionality, it would be a total waste of time. Now, don't get me wrong! Like everything, I think it depends.
You can (and should) expose only the bits and pieces you need to expose, unless your plan is to use STARLIMS mostly as a backend and integrate most (if not all) of the features into external systems (those of you who want a React or Angular front end, that's you!).
So, if you're in the latter group, take security into consideration. You will want to set the RestApi_DevMode setting to false in STARLIMS' web.config file. This ensures that all communication is signed and cannot be tampered with. Then, of course, you'll check HTTPS and all these things. This is out of scope, but still worthy of note.
Once that’s done, you need 2 pieces.
You need to define a route. Personally, I used the route /v1/generic/action. If you don't know how to do that, I wrote an article on the topic.
You need a script to do all of this! Here’s the simplified code:
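Something along these lines (a minimal sketch: the payload property names are mine, error handling is left out, and the ExecFunction / RunDS signatures may need adjusting to your version):
:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE POST;
:PARAMETERS payload;
:DECLARE output, params;
output := CreateUdObject();
:IF !payload:IsProperty("script");
output:StatusCode := Me:HTTP_NOT_FOUND;
:RETURN output;
:ENDIF;
params := {};
:IF payload:IsProperty("parameters");
params := payload:parameters;
:ENDIF;
/* run a datasource when asked to, RunActionDirect-style;
:IF payload:IsProperty("datasource") .and. payload:datasource;
output:response := RunDS(payload:script, params);
output:StatusCode := Me:HTTP_SUCCESS;
:RETURN output;
:ENDIF;
/* otherwise run a regular server script with the provided parameters;
output:response := ExecFunction(payload:script, params);
output:StatusCode := Me:HTTP_SUCCESS;
:RETURN output;
:ENDPROC;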
In my case, I went a little fancier by adding an impersonation mechanism so the code would "run as" a given user. You could also add some authorization on which scripts can be run, by whom, when, etc. Just do it at the beginning, and return a 403 Forbidden response if execution is denied.
Yeah, I know, this is not rocket science, and this is not necessarily the most secure approach. In fact, this really opens up your STARLIMS instance to ANY integration from any third-party software… But, as I mentioned in the beginning, maybe that's what you want?
NB: I used DALL·E 2 (openai.com) to generate the image on the front page using “STARLIMS logo ripped open”. I had to try!
Alright folks! If you've been playing with the new STARLIMS REST API and tried production mode, perhaps you've run into all kinds of problems providing the correct SL-API-Signature header. You may wonder "but how do I generate this?" – even following STARLIMS's C# example may yield unexpected 401 results.
At least, it did for me.
I was able to figure it out by looking at the code that reconstructs the signature on the STARLIMS side; here's a snippet of code that works in POSTMAN as a pre-request script:
// required for the hash part. You don't need to install anything, it is included in POSTMAN
var CryptoJS = require("crypto-js");
// get data required for API signature
const dateNow = new Date().toISOString();
// this is the API secret found in STARLIMS key management
const privateKey = pm.environment.get('SL-API-secret');
// this is the API access key found in STARLIMS key management
const accessKey = pm.environment.get('SL-API-Auth');
// in my case, I have a {{url}} variable, but this should be the full URL to your API endpoint
const url = pm.environment.get('url') + request.url.substring(8);
const method = request.method;
// I am not using api methods, but if you are, this should be set
const apiMethod = "";
var body = "";
if (pm.request.body && pm.request.body.raw){
body = pm.request.body.raw;
}
// this is the reconstruction part - the text used for signature
const signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;
// encrypt the signature
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));
// set global variables used in header
pm.globals.set("SL-API-Timestamp", dateNow);
pm.globals.set("SL-API-Signature", encodedHash);
One point of interest – if it still is not working, and if you can’t figure out why, an undocumented STARLIMS feature is to add this application setting in the web.config to view more info:
<add key="RestApi_LogLevel" value="Debug" />
I hope this helps you use the new REST API provided by STARLIMS!
With the version 12 technology platform, STARLIMS offers a new REST API engine. It is really great – until you want to enhance it and add your own endpoints. That’s where it gets … complicated. Well – not so much – if you know where to start. Nothing here is hidden information, it is all written in the technology release documentation; just not easily applied.
If you read the doc, you’ve read something like this:
Routing maps incoming HTTP API requests to their implementation. If you are a Core Product team, you must implement routing in pre-defined Server Script API_Helper.RestApiRouter; if you are a Professional Services or Customer team, you must implement routing in pre-defined Server Script API_Helper_Custom.RestApiRouter (which you need to create, if it doesn’t exist).
STARLIMS Technology Platform Documentation 09-016-00-02 REV AB
That section is accessible using the /building_rest_api.html Url of the platform documentation.
It is really good, and it works, and everything listed is appropriate. I would only add 2 points for your sanity.
1- Handle your routes in a different way than what STARLIMS suggests. Their example is very simple, but you'll want something scalable and reusable. I went with a single function and nested hashtables. By default, the custom routing needs a Route method. To "store" the routes, I'll also add a private getRoutes method. In the future, we'll only add entries in getRoutes, which will simplify our lives.
:PROCEDURE getRoutes;
/*
structure is:
hashTable of version
hashTable of service
hashTable of entity
;
:DECLARE hApiVersions;
/* all route definition should be in lowercase;
/* store API Versions at 1st htable level;
hApiVersions := LimsNetConnect("", "System.Collections.Hashtable");
hApiVersions["v1"] := LimsNetConnect("", "System.Collections.Hashtable");
hApiVersions["v2"] := LimsNetConnect("", "System.Collections.Hashtable");
/* store each service within the proper version;
hApiVersions["v1"]["examples"] := LimsNetConnect("", "System.Collections.Hashtable");
/* then store each endpoint per entity;
hApiVersions["v1"]["examples"]["simple"] := "API_Examples_v1.Simple";
/* store each service within the proper version;
hApiVersions["v1"]["system"] := LimsNetConnect("", "System.Collections.Hashtable");
hApiVersions["v1"]["system"]["status"] := "API_CustomSystem_v1.status";
/* process-locks endpoints;
hApiVersions["v1"]["process-locks"] := LimsNetConnect("", "System.Collections.Hashtable");
hApiVersions["v1"]["process-locks"]["process"] := "API_ProcessLocks_v1.Process";
/* user-management endpoints;
hApiVersions["v1"]["user-management"] := LimsNetConnect("", "System.Collections.Hashtable");
hApiVersions["v1"]["user-management"]["user-session"] := "API_UserManagement_v1.UserSession";
hApiVersions["v1"]["sqs"] := LimsNetConnect("", "System.Collections.Hashtable");
hApiVersions["v1"]["sqs"]["message-queue"] := "API_SQS_v1.message";
hApiVersions["v1"]["load"] := LimsNetConnect("", "System.Collections.Hashtable");
hApiVersions["v1"]["load"]["encrypt"] := "API_Load_v1.encrypt";
hApiVersions["v1"]["load"]["origrec"] := "API_Load_v1.origrec";
:RETURN hApiVersions;
:ENDPROC;
:PROCEDURE Route;
:PARAMETERS routingInfo;
/* routingInfo
.Version : string - e.g. "v1"
.Service : string - e.g. "folderlogin"
.Entity : string - e.g. "sample";
:DECLARE hRoutesDef, sVersion, sService, sEntity;
hRoutesDef := Me:getRoutes();
/* remove case route;
sVersion := Lower(routingInfo:Version);
sService := Lower(routingInfo:Service);
sEntity := Lower(routingInfo:Entity);
:IF !Empty(hRoutesDef[sVersion]);
:IF !Empty(hRoutesDef[sVersion][sService]);
:RETURN hRoutesDef[sVersion][sService][sEntity];
:ENDIF;
:ENDIF;
:RETURN "";
:ENDPROC;
When you need to add new routes, all you do is add new lines to the getRoutes method; the logic in the Route method is static and shouldn't change. Then, you create the corresponding categories and scripts to actually run your logic, and you're set.
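For reference, a route target like API_Examples_v1.Simple is just another Request class. A minimal sketch (assuming GET is dispatched the same way as POST by your base class, and with a made-up response body):
:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE GET;
:DECLARE output;
output := CreateUdObject();
output:StatusCode := Me:HTTP_SUCCESS;
output:response := CreateUdObject();
/* whatever your endpoint should return goes here;
output:response:message := "Hello from /v1/examples/simple";
:RETURN output;
:ENDPROC;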
Of course, you can build your own mechanism – it is by no mean the best one; but I do find it to be easier to manage than STARLIMS’ suggestion.
Now, I know: you might be tempted to write a generic, data-driven routing. I was tempted to do it. In the end, it is a balance between convenience and security. If you let it be data-driven, you lose control over what can be routed. Someone could modify the route that, let's say, gets results so that it instead returns all user information, and you wouldn't know. If it's in the code, then you'll know. So, although it is not as convenient, don't let your routes be handled by the database. It would also add extra load on the database. So, no good reason other than convenience, really.
2- properly document your APIs. Heck, document your APIs before you implement them! I recommend https://swagger.io/ to generate some .yaml files. Trust me: whoever will be consuming your API will thank you!
All in all, I think the STARLIMS REST API really brings the system to an all new level. Theoretically, one could build a full UI stack using React or Angular and just consume the API to run the system on a new front end.
Or one could expose data endpoints for pipelines to maintain a data mart.
Or anything. At this point, your creativity is the limiting factor. Do you have great ideas for use cases?
JMeter is a load / stress tool built in Java which allows you to simulate multiple user connections to your system and monitor how the application & hardware respond to heavy load.
In STARLIMS, I find it is a very good tool for performance optimization. One can detect redundant calls and chatty pieces of code, and identify bottlenecks, even when running with a single user.
As a bonus, Microsoft has a preview version of load tests based on JMeter, which can be integrated to your CI/CD process!
So, in this article, my goal is to help you get started – once setup, it’s very easy to do.
I will proceed with the following assumptions:
You know your way around STARLIMS
You have some scripting knowledge
Your STARLIMS version is 12.1 or later (I leverage the REST API introduced with 12.1; it is possible to do it differently, but that is out of scope)
XFD is the most difficult technology for this, so that's what I will tackle. If you are running on HTML, it will just be easier – good for you!
Environment Setup
On your local PC
Install Java Runtime – you might have to reboot. Don’t worry, I’m not going anywhere!
Make sure you have access to setting up a Manual Proxy. This can be tricky and may require your administrators to enable this for you. What you’ll want is to be able to toggle it like this (don’t enable it just yet! Just verify you can):
Proxy Setup
On your STARLIMS Server
Make it available through HTTP. Yes, you read that right: HTTP, not HTTPS. I think it can work over HTTPS, but I ran into too many problems and found that HTTP is easiest. This is to simplify traffic recording when capturing a scenario for re-processing.
Create your load users. If you expect to run 100 simultaneous users, then let's create 100! What I did is create users named LOADUSER001 to LOADUSER250 (so I would have 250 users) and set their passwords to something silly like #LoadUser001 to #LoadUser250. Like I said – don't do this if there's any sensitive data in your system.
To help you, here’s a script to generate the users:
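Something along these lines (a sketch only: CreateLoadUser is a placeholder for whatever user-provisioning routine or SQL your instance uses, and the Str / PadL zero-padding helpers may need adjusting in your version):
:DECLARE i, sSuffix, sUser, sPassword;
:FOR i := 1 :TO 250;
/* zero-pad the counter to get 001 .. 250;
sSuffix := PadL(AllTrim(Str(i)), 3, "0");
sUser := "LOADUSER" + sSuffix;
sPassword := "#LoadUser" + sSuffix;
/* CreateLoadUser is a placeholder - call your own user-creation routine (or SQL) here;
ExecFunction("Load_Setup.CreateLoadUser", { sUser, sPassword });
:NEXT;
:RETURN .T.;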
You will need to test the above; on my system it worked fine (haha!), but setting passwords and security does not always work as expected in STARLIMS, so do not despair – just be patient.
Edit the web.config file. I will presume you know which one and how to achieve that. You need to change / add the following appSetting to false: <appSetting name="TamperProofCommunication" value="false" />
Add an endpoint to the Encrypt function. That's really the tricky part. In both XFD and HTML, STARLIMS "masks" the username and password when putting them in the authentication payload, to avoid sending them in clear text. But this encryption is significant; it is part of .NET and not easily reproduced in JMeter… unless it becomes a REST API endpoint!
So, in a nutshell, the trick is to create a new API endpoint that receives a string and a key, calls the EncryptData(text, key) function, and returns the encrypted string. I cannot stress it enough: do – not – enable – this – on – a – system – with – sensitive – data. And make sure you will only use load testing users. If you do so, you're fine.
This is the code of the REST API method to expose from STARLIMS:
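In spirit, it is just a thin wrapper around EncryptData; a sketch (the payload property names are mine):
:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE POST;
:PARAMETERS payload;
:DECLARE output;
output := CreateUdObject();
:IF !payload:IsProperty("text") .or. !payload:IsProperty("key");
output:StatusCode := Me:HTTP_NOT_FOUND;
:RETURN output;
:ENDIF;
output:StatusCode := Me:HTTP_SUCCESS;
output:response := CreateUdObject();
/* the same one-way encoding STARLIMS applies to credentials before authentication;
output:response:encrypted := EncryptData(payload:text, payload:key);
:RETURN output;
:ENDPROC;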
Since it gets exposed as a REST API, the concept is that at the beginning of the load test, for every user, we call this with the username and the password to get the encrypted version of each, which allows us to tap into STARLIMS cookie / session mechanism. Magic!
Now, we are kind of ready – assuming you've followed along, got everything set up properly, and were able to test your API with POSTMAN or something like that. Before moving on, let's take a look at a typical load test plan in JMeter:
Typical setup for a single scenario
The idea is that we want each user (thread) to run in its own "session", and we want each session to belong to a different user. My scenarios always involve a user logging into STARLIMS once (to create a session) and then looping on the scenario (for example, one scenario could be about creating folders, another about entering results, etc.). I will leave the details of the test plans to you, but the idea is that you first need to log into the system, then do something.
At the level of the test plan, let’s add user-defined variables – in my case, this is only so I can switch STARLIMS instances later on (I strongly recommend you do that!):
User-defined Variables
Always at the level of the test plan, add a counter:
User Counter
This will be the magic for multiple users. Note the number format – this has to match your user naming convention, otherwise, good luck!
Now, let’s have our user login STARLIMS.
Add a Transaction Controller to the Thread Group. I renamed this one “System Login” – call it what you want.
On your new transaction controller, add a Sampler > HTTP Request, which will be our call to the REST API
HTTP Request – REST API for Encrypt method
As you can see, I did a few more things than just call the API. If we break it down, I have a pre-processor "Initialize User Variables", an HTTP Header Manager, and a JSON Extractor. Let's look at each of these.
Pre-processor – Initialize User Variables (Beanshell preprocessor)
This will run before this call is made – every time this call is made! This is where we initialize more variables we can use in the thread.
This will initialize the currentUser and currentPW variables we can reuse later on. Since this is a pre-processor, it means the request can reference them:
Now, let’s look at the HTTP Header Manager:
HTTP Header Manager – System Login
Pretty simple – if you have STARLIMS 12.1 or later, you just need to get yourself an API key in the RestApi application. Otherwise, this whole part might have to be adjusted according to your preferred way of calling STARLIMS. But, long story short, SL-API-Auth is the header you want, and the value should be your STARLIMS API access key.
Finally, this API will return something (the encoded string). So we need to store it in yet another variable! Simple enough, we use a post-processor JSON extractor:
JSON Extractor
What did we just do? Here’s a breakdown:
Initialized a user name and password in variables
Constructed a HTTP request with these 2 variables
Called the REST API with our secret STARLIMS key using this request
Parsed the JSON response into another variable
If you have set the thread group to simulate 10 users, then you’ll have LOADUSER001 to LOADUSER010 initialized. This is the pattern to learn. This is what we’ll be doing all along.
Wait. How did you know what to call afterward?
Great question! That's where the proxy comes into play. Now, we don't want to go around and guess all the calls, and although I like Fiddler, I think it would be very complicated to use here.
In a nutshell, this is what we’ll do:
We’ll add a Recording Controller to our Thread Group
Right-click on your Thread Group > Add > Logic Controller > Recording Controller
We’ll add a Test Script Recorder to our Test Plan
Right-click on your Test Plan > Add > Non-Test Elements > HTTP(S) Test Script Recorder
Change the Target Controller to your recording Controller above, so you know where the calls will go
We’ll activate the proxy (bye bye internet!)
Open Windows Settings
Look for Proxy
Change Manual Proxy > Use a proxy server to on.
Local Address = http://localhost
Port = 8888
Click Save! I didn’t realize at first there was a save button for this…
We’ll start the Test Script Recorder
Test Script Recorder
We'll perform our actions in STARLIMS
WARNING: A good practice is to change the value of Transaction name in the Recorder Transactions Control as you progress. What I typically do is put SYSTEM_LOGIN while I launch STARLIMS, then SYSTEM_LOGIN/VALIDATE when I enter credentials, then SYSTEM_LOGIN/OK when I click OK, etc.
If all works well, you should see items being added to your Transaction Recorder.
We’ll stop the Test Script Recorder – just click on the big red Stop
We’ll deactivate the proxy (yay!) – just toggle it off.
You should have something like this in your recorder:
Recorded HTTP Requests
If, like me, you left Outlook open, you will have all kinds of unrelated HTTP calls. Just select these and delete them. You should be left with something like this:
After 1st cleanup
Now, let's understand what happened here. We recorded all the calls to STARLIMS. If you wish, you can remove the GetImageById lines – typically, these should not have any performance impact as they should be cached. But hey, that's your call.
Let’s look at the 1st request:
1st HTTP Request
Interestingly enough, we can see the Protocol is http, and the Server Name is our STARLIMS server. If you created user defined variables, then you can just clean these 2 fields up (make them empty). We can default them at the test plan level (later on). But if you do that, you must do it for all requests! So, let’s not do this (just yet). Let’s leave it as is.
Now, what we want is to re-run this so we can have actual data to work with and make our script dynamic. But we need to record all the requests sent and received.
Right-click on your Thread Group > Add > Listener > View Results Tree
I find this listener to be the best for this activity.
Now, let's run this "as is" by clicking the play button
play
The beauty here is you can get the data sent to STARLIMS as well as the responses, allowing us to understand how everything is connected. Let’s take a look at the Authentication.GetUserInfo – that’s our first challenge:
View Results Tree
If you look at the Request Body, you'll see your user name (which you used to log in), as well as a second, very strange parameter that looks like the highlighted string above (in pinkish). When we log into STARLIMS, we must send that string, which is essentially the password hash based on the user name (one-way encoding). So the question is: how do we get this? This is where the REST API we prepared earlier comes into play!
Hook user variables to payload
With this, you can do everything now! Well, as far as load testing is concerned, it can at least get you started!
Earlier, I mentioned you shouldn't leave your server name / path / protocol in there. Indeed, in my screenshot above, you can see it's all empty. This is because I added an HTTP Request Defaults element to my test plan:
HTTP Request Defaults
You'll also want an HTTP Cookie Manager. This one doesn't need configuration as far as I know, but it must exist so cookies are carried over.
CONCLUSION
What?? Conclusion already? But we were just getting started! Well, don’t worry. I have done a little bit more than just that, and I am including it with this post.
You will need to figure out a few things, like some of the APIs I use and some forms/scripts that you won’t have. But this should give you a very good overview of how it works and how it is all tied in together.
Here I was, trying with the infra team to access my Azure container through SAS. STARLIMS has built-in Azure container support, but it relies on a connection string with account information and all. Like most Azure customers, though, that is not our reality. We use shared containers, so we need a SAS token… which cannot be configured as a STARLIMS connection string.
This means the next step is to consume it like any other web service instead of using direct container access. Is it complex? Less than I expected!
Step 1: let’s get a SAS token!
Now, finding said token is not always obvious, but mine looked something like this:
Hopefully, yours does too! In the Azure Container tool, look for the "Shared Access Signature"; it's the same thing.
Step 2 – integrate the Azure API!
Now, how do we put files there? The connection string and the STARLIMS tutorials will not help… but the web services will! All we need to do is write an UploadToAzureBlob procedure and a DownloadFromAzureBlob procedure (both in SSL), and that will do the trick:
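Here is a sketch of what those two procedures can look like, using a plain .NET WebClient through LimsNetConnect (the assembly name, SAS concatenation and error handling are simplified here; the x-ms-blob-type header is what Azure requires for block blob uploads):
:PROCEDURE UploadToAzureBlob;
:PARAMETERS sBlobUrl, sSasToken, aBytes;
:DECLARE oClient;
/* WebClient lives in the System assembly - adjust the LimsNetConnect call if your version expects it differently;
oClient := LimsNetConnect("System", "System.Net.WebClient");
/* Azure requires this header when PUTting a block blob;
oClient:Headers:Add("x-ms-blob-type", "BlockBlob");
/* PUT the raw bytes to https://<account>.blob.core.windows.net/<container>/<blob>?<sas token>;
oClient:UploadData(sBlobUrl + "?" + sSasToken, "PUT", aBytes);
:RETURN .T.;
:ENDPROC;

:PROCEDURE DownloadFromAzureBlob;
:PARAMETERS sBlobUrl, sSasToken;
:DECLARE oClient;
oClient := LimsNetConnect("System", "System.Net.WebClient");
/* returns the blob content as a byte array;
:RETURN oClient:DownloadData(sBlobUrl + "?" + sSasToken);
:ENDPROC;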
As simple as that, you've got yourself an upload and a download to Azure containers.
Conclusion
As you can see, as usual, this was quite easy! One just needs the correct information. The next step will be to see what more containers can bring to your STARLIMS installation.
Search for STARLIMS; if I got it right, it should come out in the partner section.
Follow the instructions, and you should be able to get it running!
A few things to note:
the predefined queries work, except for COC, which I kind of ditched for now.
the QBEs work! Note that I don't (yet) apply the default QBE filters, so don't just go and pull up all your data. That would be hard on all servers.
You can create many connections; so you technically could create one for Folders, one for Samples, one for Results, one for your favourite QBE, one for your products, etc… And blend all of them! Magical!
Finally, and most importantly: I – do – not – cache – the – data. Not yet. I will eventually look at doing that, but not now. Therefore, each time you run this, you actually query the database server. Be careful.
I do this on my own time, for fun. It's just fun. Great if it helps you, but don't hold me responsible if you misuse this!
That's it for now! Remember, this is a project for fun! Contact me if you want to know more, or if you wish me to consider adding features to this.