Re-thinking STARLIMS architecture

There is something about STARLIMS that has been bugging me for a long time. Don’t get me wrong – I think it is a great platform. I just question how well XFD holds up in 2024, and the choice of Sencha for the HTML side of it.

But an even more critical point: I question the principle of using the same server for the “backend” and the “frontend”. Really, the current architecture of STARLIMS (in a simplified way) is something like this:

Sure, you can add load balancers, multiple servers, batch processors… But ultimately, the server’s role is both backend and web rendering, without really following the Server-Side Rendering (SSR) pattern: the backend hosts and serves the code to render, and the client does the actual rendering. So, in fact, it is Client-Side Rendering (CSR) with most of the SSR drawbacks.

This got me thinking. What if we really decoupled the front end from the backend? And what if we made this using real micro services? You know, something like this:

Let me explain the layers.

React.js

React needs no introduction. The famous open-source library behind Facebook. Very fast and easy, huge community… Even the AI chatbots will generate good React components if you ask nicely! For security, it’s like any other platform: it’s as secure as you make it. And if you pair it with Node.js, then it’s very easy, which brings me to the next component…

Node.js

Another one needing no introduction. JavaScript on the backend? Nice! On one end, you handle sessions and security (with React) and communicate with STARLIMS through the out-of-the-box REST API. Node can be just a proxy to STARLIMS (which is currently the case), but it should also be leveraged to extend the REST APIs: it is a lot easier to implement new APIs there and connect to STARLIMS (or anything else for that matter!), which speeds up the process. Plus, you easily get cool stuff like WebSockets if you want, and you can cache some reference data in Redis to go even faster!
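To make this concrete, here is a minimal sketch of such a proxy endpoint, assuming Express and Node 18+ (for the built-in fetch). The route, environment variables and the signing details are illustrative, not the actual project code:

// minimal sketch of a Node.js proxy endpoint in front of the STARLIMS REST API
const express = require('express');
const app = express();
app.use(express.json());

// hypothetical helper: forwards a call to STARLIMS (the SL-API-* signature
// headers required in production mode are left out for brevity)
async function callStarlims(endpoint, payload) {
  const res = await fetch(`${process.env.STARLIMS_URL}${endpoint}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  return res.json();
}

// the endpoint the React front end actually talks to
app.post('/api/runscript', async (req, res) => {
  try {
    res.json(await callStarlims('/v1/generic/action', req.body));
  } catch (err) {
    res.status(502).json({ error: err.message });
  }
});

app.listen(5050);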

Redis

Fast, lightweight, free cache (well, it was when I started). I currently use it only for sessions: since the STARLIMS REST API is stateless, I manage sessions in Node.js and store them in Redis, which lets me spin up multiple Node.js instances (load balancing?) and share sessions across them. If you don’t need multiple proxies, you don’t need this. But hey, it’s cooler with it, no?
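For illustration, here is a minimal sketch of that session pattern with the node-redis v4 client. The key naming and TTL are my own assumptions, not the project’s actual code:

// minimal sketch: sessions shared across Node.js instances via Redis (node-redis v4)
const { createClient } = require('redis');
const { randomUUID } = require('crypto');

const redis = createClient({ url: process.env.REDIS_URL });

async function createSession(user) {
  const sessionId = randomUUID();
  // expires after 30 minutes; every instance pointing at the same Redis sees it
  await redis.set(`session:${sessionId}`, JSON.stringify({ user }), { EX: 1800 });
  return sessionId;
}

async function getSession(sessionId) {
  const raw = await redis.get(`session:${sessionId}`);
  return raw ? JSON.parse(raw) : null;
}

// usage, inside an async bootstrap: await redis.connect();
// const id = await createSession('jdoe');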

I was also thinking (I haven’t done anything about this yet) of having a cron job run in Node.js to periodically pull reference data from STARLIMS (test plans, tests, analytes, specifications, methods, etc.) and update the Redis cache. Some of that data could then be served to the UI (React.js) instead of hitting STARLIMS. But now, with the updated Redis license, I don’t know; I think it is fine in these circumstances, but I would need to verify.
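Since I have not built it, the following is only a sketch of what that job could look like, assuming node-cron and reusing the illustrative Redis client from above; the script name and cache key are made up:

// sketch only: nightly refresh of STARLIMS reference data into the Redis cache
const cron = require('node-cron');
const { createClient } = require('redis');

const redis = createClient({ url: process.env.REDIS_URL });

// every night at 02:00
cron.schedule('0 2 * * *', async () => {
  // pull reference data through the generic API (hypothetical script name,
  // signature headers omitted)
  const res = await fetch(`${process.env.STARLIMS_URL}/v1/generic/action`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: 'Reference_v1.GetTestPlans' }),
  });
  const testPlans = await res.json();

  // cache for 24 hours; the UI can read this instead of hitting STARLIMS
  await redis.set('ref:testplans', JSON.stringify(testPlans), { EX: 86400 });
});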

… BUT WHY?

Because I can! – Michel R.

Well, just because. I was learning these technologies, had this idea, and I just decided to test the theory. So, I tried. And it looks like it works! There are multiple theoretical advantages to this approach:

  1. Performance: Very fast (and potentially responsive) UI.
  2. Technology: New technology availability (WebSockets, data in motion, streaming, etc.).
  3. Integration: API first paradigm, Node.js can make it really easy to integrate with any technology!
  4. Source control: 100% Git for UI code, opening all git concepts (push, pull requests, merge, releases, packages, etc.).
  5. Optimization: Reduce resource consumption from STARLIMS web servers.
  6. Scalability: High scalability through containerization and micro-services.
  7. Pattern: Separation of concerns. Each component does what it's best at.
  8. Hiring: React.js and Node.js developers are much easier to find than STARLIMS developers!

Here are some screenshots of what it can look like:

As you can see, at this stage, it is very limited. But it does work, and I like a couple of the ideas / features I thought of, like F1 for Help, the keyboard shortcuts support, and more importantly, the speed… It is snappy. In fact, the speed is limited by what the STARLIMS REST API can provide when getting data, but otherwise, everything is way, way faster than what I’m used to.

How does it work, really?

This is magic! – Michel R.

Magic! … No, really, I somewhat “cheated”: I implemented a Generic API in the STARLIMS REST API. This endpoint supports both ExecFunction and RunDS, as well as impersonation. Considering that the STARLIMS REST API is quite secure (it uses anti-tampering patterns; you can ask them to explain that to you if you want) and reliable, I created a generic endpoint. It receives a payload containing the script (or datasource) to run, along with the parameters, and it returns the resulting data in JSON format.

Therefore, in React, you would write code very similar to lims.CallServer(scriptName, parameters) in XFD/Sencha.
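As a sketch, such a helper could look like this; the /api/runscript route is the illustrative proxy route from the Node.js sketch above, and the payload shape is an assumption:

// rough React-side equivalent of lims.CallServer(scriptName, parameters)
async function callServer(scriptName, parameters = []) {
  const res = await fetch('/api/runscript', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: scriptName, parameters }),
  });
  if (!res.ok) throw new Error(`STARLIMS call failed: ${res.status}`);
  return res.json();
}

// e.g. const folders = await callServer('MyCategory.GetFolders', ['2024']);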

Me being paranoid, I added a “whitelisting” feature to my generic API, so you can whitelist which scripts are allowed to run through it. Me being lazy, I added another script that does exactly the same thing without the whitelisting, just so I wouldn’t have to whitelist everything; but hey, if you want that level of control… why not?

Conclusion

My non-scientific observation is that this works quite well. The interface is snappy (a lot faster than even Sencha), and developing new views is somewhat easier than in either technology as well.

Tip: you can just ask an AI to generate a view in React using, let’s say, Bootstrap 5 classNames, and perhaps placeholders to call your API endpoints, et voilà! You have something 90% ready.
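For example, a generated view might look something like this sketch (Bootstrap 5 classNames, plus the hypothetical callServer helper from earlier; the data shape is made up):

// sketch of an AI-generated React view using Bootstrap 5 classNames
import { useEffect, useState } from 'react';

export default function FoldersView() {
  const [folders, setFolders] = useState([]);

  useEffect(() => {
    // callServer is the hypothetical helper sketched earlier in this article
    callServer('MyCategory.GetFolders', []).then(setFolders).catch(console.error);
  }, []);

  return (
    <table className="table table-striped table-hover">
      <thead>
        <tr><th>Folder</th><th>Status</th></tr>
      </thead>
      <tbody>
        {folders.map((f) => (
          <tr key={f.folderNo}>
            <td>{f.folderNo}</td>
            <td><span className="badge bg-primary">{f.status}</span></td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}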

Or you learn React and Vite, build your own components, and create your own STARLIMS runtime (kind of).

This whole experiment was quite fun, and I learned a ton. I think there might actually be something to do with it. I invite you to take a look at the repositories, of which I created a public version for anyone to use and contribute to, under an MIT license with commercial restrictions:

You need both projects to get this working. I recommend you check both READMEs to begin with.

Right now, I am parking this project, but if you would like to learn more, want to evaluate this but need guidance, or are interested in actually using this in production, feel free to drop me an email at [email protected]! Who knows what happens next?

An introduction to Github and Webhooks

In the world of software development, my quest to understand continuous deployment led me down an intriguing path. It all began with a burning desire to unravel the complexities of continuous deployment while steering clear of expensive cloud hosting services. And that’s when my DIY GitHub Webhook Server project came to life.

The Genesis

Imagine being in my shoes—eager to dive deeper into the continuous deployment process. But I didn’t want to rely on pricey cloud solutions. So, I set out to craft a DIY GitHub Webhook Server capable of handling GitHub webhooks and automating tasks upon code updates—right from my local machine. Or any machine for that matter.

The Vision

Let’s visualize a scenario with a repository, let’s call it “MyAwesomeProject”, sitting in Github, and you are somewhere in a remote cabin with dicey internet access. All you have is your laptop (let’s make it a Chromebook!!). You want to code snugly, and you want to update your server that sits at home. But… you don’t WANT to remote into your server. You want it to be automatic. Like magic.

You would have to be prepared. You would clone my repo, configure your server (including port forwarding), and maybe use something like no-ip.com so you have a “fixed” URL to point your webhook at. Then:

  1. Configuring Your Repository: Start by defining the essential details of “MyAwesomeProject” within the repositories.json file—things like secretEnvName, path, webhookPath, and composeFile.
{
  "repositories": [
    {
      "name": "MyAwesomeProject",
      "secretEnvName": "GITHUB_WEBHOOK_SECRET",
      "path": "/path/to/MyAwesomeProject",
      "webhookPath": "/webhook/my-awesome-project",
      "composeFile": "docker-compose.yml"
    }
  ]
}
  2. Setting Up Your GitHub Webhook: Head over to your “MyAwesomeProject” repository on GitHub and configure a webhook. Simply point the payload URL to your server’s endpoint (e.g., http://your-ddns-domain.net/webhook/my-awesome-project).
  3. Filtering Events: The server is smartly configured to respond only to push events occurring on the ‘main’ branch (refs/heads/main). This ensures that actions are triggered exclusively upon successful pushes to this branch.
  4. Actions in Motion: Upon receiving a valid push event, the server swings into action, automatically executing a git pull on the ‘main’ branch of “MyAwesomeProject”. Subsequently, Docker containers are rebuilt using the specified docker-compose.yml file (see the sketch below).
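This is not the actual code from my repo, but here is a minimal sketch of that core flow, assuming Express: verify the HMAC signature GitHub sends, filter for pushes to main, then pull and rebuild. The paths and port are illustrative.

// minimal sketch of the webhook flow (illustrative, not the repo's actual code)
const crypto = require('crypto');
const { execSync } = require('child_process');
const express = require('express');

const app = express();
// keep the raw body around; GitHub signs the exact payload bytes
app.use(express.json({ verify: (req, _res, buf) => { req.rawBody = buf; } }));

app.post('/webhook/my-awesome-project', (req, res) => {
  // GitHub sends X-Hub-Signature-256: sha256=<hmac of the raw payload>
  const expected = 'sha256=' + crypto
    .createHmac('sha256', process.env.GITHUB_WEBHOOK_SECRET)
    .update(req.rawBody)
    .digest('hex');
  const received = req.get('X-Hub-Signature-256') || '';
  if (received.length !== expected.length ||
      !crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(received))) {
    return res.sendStatus(401);
  }

  // only act on pushes to the main branch
  if (req.body.ref !== 'refs/heads/main') return res.sendStatus(204);

  // pull the latest code, then rebuild the containers
  const cwd = '/path/to/MyAwesomeProject';
  execSync('git pull origin main', { cwd });
  execSync('docker compose -f docker-compose.yml up -d --build', { cwd });
  res.sendStatus(200);
});

app.listen(3000);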

So, there you have it—a simplified solution for automating your project workflows using GitHub webhooks and a self-hosted server. But before we sign off, let’s talk security.

For an added layer of protection, consider setting up an Nginx server with a Let’s Encrypt SSL certificate. This secures the communication channel between GitHub and your server, ensuring data integrity and confidentiality.

While this article delves into the core aspects of webhook configuration and automation, diving into SSL setup with Nginx warrants its own discussion. Stay tuned for a follow-up article that covers this crucial security setup, fortifying your webhook infrastructure against potential vulnerabilities.

Through this journey of crafting my DIY GitHub Webhook Server, I’ve unlocked a deeper understanding of continuous deployment. Setting up repositories, configuring webhooks, and automating tasks upon code updates—all from my local setup—has been an enlightening experience. And it’s shown me that grasping the nuances of continuous deployment doesn’t always require expensive cloud solutions.

References:

Repository: https://github.com/michelroberge/webhook-mgr/

Demo app (where I use this): https://curiouscoder.ddns.net


ChatGPT Experiment follow-up

Have you tried to look at my experiment lately? Did it time out or give you a bad gateway?


Bad Gateway

Well, read on if you want to know why!

Picture this: I’ve got myself a fancy development instance of a demo application built with ChatGPT. Oh, but hold on, it’s not hosted on some magical cloud server. Nope, it’s right there, in my basement, in my own home! I’ve been using some dynamic DNS from no-ip.com. Living on the edge, right?

Now, here’s where it gets interesting. I had the whole thing running on plain old HTTP!!! I mean, sure, I had a big red disclaimer saying it wasn’t secure, but that just didn’t sit right with me. So, off I went on an adventure to explore the depths of NGINX. I mean, I kinda-sorta knew what it was, but not really. Time to level up!

So, being the curious soul that I am, I started experimenting. It’s not perfect yet, but guess what? I learned about Let’s Encrypt in the process and now I have my very own HTTPS and a valid certificate – still in the basement! Who’s insecure now? (BTW, huge shoutout to leangaurav on medium.com, the best tutorial on this topic out there!)

As if that was not enough, I decided – AT THE SAME TIME – to also scale up the landscape.

See, I’ve been running the whole stack on Docker containers! It’s like some virtual world inside my mini PCs. And speaking of PCs, my trusty Ryzen 5 5500U wasn’t cutting it anymore, so I upgraded to a Ryzen 7 5800H with a whopping 32GB of RAM. Time to unleash some serious power and handle that load like a boss!

Now, you might think that moving everything around would be a piece of cake with Docker, but oh boy, was I in for a ride! I dove headfirst into the rabbit hole of tutorials and documentation to figure it all out. Let me tell you, it was a wild journey, but I emerged smarter and wiser than ever before.

Now, I have a full stack that more or less works, even after a reboot (haha!).

Let me break down the whole landscape at a very high level (in the coming days and weeks, if I still feel like it, I will detail each step). The server is a Trigkey S5. I have the 32GB variant running on a 5800H, which went on sale for $400 CAD on Black Friday – quite a deal! From my research, the best bang for the buck. It’s a mobile CPU, so very good energy-wise, but of course, don’t expect to play AAA games on this!

Concerning my environment, I use Visual Studio Code on Windows 11 with WSL enabled. I installed the Ubuntu WSL, just because that’s the one I am comfortable with. I have a Docker compose file that consists of:

  • NodeJs backend for APIs, database connectivity, RBAC, etc.
  • ViteJs front end for, well… everything else you see as a user 🙂
  • NGINX set up as a reverse proxy – this is what makes it work over HTTPS

This is what my compose file looks like:

version: '3'

services:

  demo-backend:
    container_name: demo-backend
    build:
      context: my-backend-app
    image: demo-backend:latest
    ports:
      - "5051:5050"
    environment:
      - MONGODB_URI=mongodb://mongodb
    networks:
      - app
    restart: always

  demo-frontend:
    container_name: demo-frontend
    build:
      context: vite-frontend
    image: demo-frontend:latest
    ports:
      - "3301:3301"
    networks:
      - app
    restart: always

  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    ports:
      - 27017:27017
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network

This method is very convenient for running on my computer during development. I have a modified compose file without NGINX and with different ports, which makes it easier for me to make changes and test them. When I’m satisfied, I switch to the other compose file, whose ports are the ones redirected in my router. I use “docker compose down” followed by “docker compose up” to update my app.

This is the NGINX compose file. The interesting part is that I use the same network name here as in the previous compose file. When I do this, NGINX can communicate with the containers of the previous compose file. Why did I do this? Well, I have this demo project running, and I’m working on another project with different containers. With some configuration, I will be able to leverage my SSL certificates for both solutions (or as many as I want), while keeping one compose file per project. This will be very handy!

version: "3.3"

services:
  nginx:
    container_name: 'nginx-service'
    image: nginx-nginx:stable
    build:
      context: .
      dockerfile: docker/nginx.Dockerfile
    ports:
      - 3000:3000
      - 3030:3030
      - 5050:5050
    volumes:
      - ./config:/config
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - /tmp/acme_challenge:/tmp/acme_challenge
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    # same network name as the app stack, so NGINX can reach its containers
    name: shared_network

Of course, this is not like a cloud provider; my small PC can die, and there is no redundancy. But for development and demo purposes? Quite efficient (and cheap)!

As of last night, my stuff runs on HTTPS, and should be a bit more reliable moving forward. The great part about this whole experiment is how much I learned in the process! Yes, it started from ChatGPT. But you know what? I never learned so much so fast. It was very well worth it.

I will not claim I am an expert in all of this, but I feel I now know a lot more about:

  • ChatGPT. I learned some tricks along the way on how to ask and what to ask, as well as how to get out of infinite loops.
  • NodeJS and Express. I knew about them, but now I understand the middleware concepts and connectivity much better. I have built some cool APIs.
  • ViteJs. Quite the boilerplate for getting a web app up and running.
  • Expo and React-Native. This is a parallel project, but I built some nice stuff I will eventually share here. If you want to build Android and iOS apps using React-Native, this framework works great. Learn more at Expo.dev.
  • GitLab. I tried this for the CI/CD capabilities and workflows… Oh my! With Expo, this came in handy!! Push a commit, merge, build and deploy through EAS! (On the flip side, I reached the limits of the free tier quite fast… I need to decide what I’ll be doing moving forward.) On top of that, I was able to store my containers in their registry, making it even more practical for teamwork!
  • Nginx. The only thing I knew before was that it exists and has to do with web servers. Now I know how to use it as a reverse proxy, and I am starting to feel that I will use it even more in the future.
  • Docker & Containerization. Another one of those “I kind of know what it is” topics. Now I have played with containers and docker compose, and I am only starting to grasp the power of it.
  • Let’s Encrypt. I thought I understood HTTPS. I am still no expert, but now I understand a lot more about how this works, and why it works.
  • Certbot. This is the little magic mouse behind the whole HTTPS process. Check it out!
  • MongoDb. I played with some NoSQL in the past. But now… oh, now. I love it. I think I prefer it to traditional SQL databases. Just because.

A final note on ChatGPT (since this is where it all started):

The free version of this powerful AI is outdated (I don’t want to pay for this – not yet). This resulted in many frustrations: directives that wouldn’t work. I had to revert to Googling what I was looking for. It turns out that although ChatGPT will often cut the time down by quite a margin, the last stretch is yours. It is not yet at the point where it can replace someone.

But it can help efficiency.

A lot.

Test-drive of a new Co-pilot: ChatGPT

Ok, we all played with it a bit. We’ve seen what it can do – or at least some of it. I was first impressed, then disappointed, and sometimes somewhat in between. “How can it really help me?”, I thought.

I always want to learn new things. On my list I had Node.js, React, and MongoDB… So I onboarded my copilot, ChatGPT, for a test drive!

See the results there!

BTW: that application is not secured; I didn’t want to spend too much money on this (SSL, hosting and whatnot), so don’t go entering sensitive information there 🙂

Write down in the comments if you want to explore the result further; I can create an account for you 😁 or maybe even spin up a dedicated instance for you (I want to try that at some point!!)

If this is down, it can be due to many things:

  1. no-ip.com and DDNS not synchronized
  2. my home server is down
  3. my provider blocked one or many of the ports
  4. no electricity
  5. I am rebuilding
  6. Something else, like, I don’t know… Aliens?

That was a lot of fun! Do you like it?

Here’s the missing API helper

Well, all was good under the sun, until a reader pointed out that I had omitted a very important piece. I was expecting STARLIMS developers to know how to manage on their own; but it is not so. Re-reading, I realized that indeed, one might need directions.

I’m talking about the API_Helper_Custom.RestApiCustomBase class.

This class is not strictly needed; you can instead inherit from the RestApi.RestApiBase class.

But having our own custom base is good! It allows us to implement common functionality that all your services may need. In this example, I’ll provide an impersonation method, very useful if you wish to have a single integration user while still knowing who that user should be impersonating.

:CLASS RestApiCustomBase;
:INHERIT RestApi.RestApiBase;

:DECLARE APIEmail;
:DECLARE LangId;
:DECLARE UserName;

/* do stuff here that applies to all custom API's;

:PROCEDURE Constructor;

    :DECLARE sUser;
    Me:LangId := "ENG";
    Me:UserName := GetUserData();

    Me:APIEmail := Request:Headers:Get("SL-API-Email");

    sUser := LSearch("select USRNAM from USERS where EMAIL = ? and STATUS = ?", "", "DATABASE", { Me:APIEmail, 'Active' });             

    :IF ( !Empty(sUser) ) .and. ( sUser <> GetUserData() );
        Me:Impersonate(sUser);
    :ENDIF;

    Me:LangId := LSearch("select LANGID from USERS where USRNAM = ?", "ENG", "DATABASE", { MYUSERNAME });

:ENDPROC;

/* Allow the system to impersonate a user so transactions are recorded against the correct user;
:PROCEDURE Impersonate;
    :PARAMETERS sUser;
    :IF !IsDefined("MYUSERNAME");
        :PUBLIC MYUSERNAME;
    :ENDIF;
    MYUSERNAME := sUser;
    SetUserData(MYUSERNAME);        
:ENDPROC;

As you can see, this is pretty simple. Once you have this REST API base in place, inherit from this class, and you should have a working API. In the above example, the code expects a header SL-API-Email containing the email of the user to impersonate. If it is not provided, then the user to whom the API key belongs stays the current user.
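For illustration, here is what a client call to an endpoint inheriting this class could look like. The route, payload, and email are made up, and the SL-API-* signature headers (covered in my POSTMAN posts) are omitted for brevity:

// hypothetical call to an endpoint whose Request class inherits RestApiCustomBase
// (run inside an async function; signature headers omitted)
const res = await fetch('https://your-starlims/api/v1/some_endpoint', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // the base class reads this header and impersonates the matching USERS record
    'SL-API-Email': 'jane.doe@your-lab.com',
  },
  body: JSON.stringify({ folderNo: 'F-2024-001' }),
});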

Hope this helps those who didn’t yet figure it out!

STARLIMS Backend regression tests automation

Finally. It was about time someone did something about it, and advertised it. Automated testing of STARLIMS (actually, of any REST-enabled app!). Ask anyone working with (almost) any LIMS: regression tests are rarely there at all, let alone automated.

I understand it is difficult to automate the front end, which is what tends to break… Nonetheless, I had this idea – please read through! – and I think there’s an easy way to automate some regression tests against a STARLIMS instance. Here are my arguments for why it brings value:

  1. Once a framework is in place, the effort is what you put in. It can be a lot of effort, or minimal. I would say aim for minimal effort at the beginning. Read on, you’ll understand why.
  2. Focus on bugs. For any bug fixed in the backend, prepare a regression test. Chances are you’re doing it anyway (writing a test script to check that your script runs?)
  3. For features, just test the default parameters at first. You can go ahead and do more, but this will at least tell you that the script still compiles and handles default values properly.
  4. You CAN and SHOULD have regression tests on your datasources! At the very least, do a RunDS(yourDatasource) to check that it compiles. If you’re motivated, convert the XML to a .NET dataset and check that the columns you’ll need are there.
  5. Pinpoint regression tests. You fixed a condition? Test that condition. Not all conditions. Otherwise it becomes a unit test, not a regression test.

The idea is that on whatever schedule you want (every day, every week…), ALL OF YOUR REGRESSION TESTS WILL RUN AUTOMATICALLY. Therefore, if one day you fix one condition, and the next day you fix something else in the same area, you want both your fix AND the previous condition’s fix to continue to work. As such, the value of your regression tests grows over time. It is a matter of habits.

What you’ll need

POSTMAN – This will allow you to create a regression collection, add a monitor, and know when a regression fails. This is the actual tool.

REST API – We will be running a series of scripts from POSTMAN using the STARLIMS REST API. You’ll see, this is the easy part. If you followed how to implement new endpoints, this will be a breeze.

STARLIMS Setup

In STARLIMS, add a new server script category for your API. In my case, I call it API_Regression_v1. This will be part of the API’s route.

Add a new script, API_Regression_v1.Run. This is our request class for the POST endpoint. The code behind it is simple: we receive a script category in the parameters and run all of that category’s child scripts. Here’s a very simple implementation:

:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE POST;
:PARAMETERS payload;

:DECLARE ret, finalOutput;

finalOutput := CreateUdObject();
:IF !payload:IsProperty("category"); 
    finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
    finalOutput:response := CreateUdObject();
    finalOutput:response:StatusCode := Me:HTTP_NOT_FOUND;
    :RETURN finalOutput;
:ENDIF;

/* TODO: implement script validation (can we run this script through regression test? / does it meet regression requirements?) ;
ret := Me:processCollection( payload:category );
finalOutput:StatusCode := Me:HTTP_SUCCESS;
finalOutput:response := ret;
finalOutput:response:StatusCode := Me:HTTP_SUCCESS;
:RETURN finalOutput;

:ENDPROC;

:PROCEDURE processCollection;
:PARAMETERS category;
:DECLARE scripts, i, sCatName, output, script;
output := CreateUdObject();
output:success := .T.;
output:scripts := {};
sCatName := Upper(category);
scripts := SQLExecute("select   coalesce(c.DISPLAYTEXT, c.CATNAME) + '.' + 
                                coalesce(s.DISPLAYTEXT, s.SCRIPTNAME) as s
                        from LIMSSERVERSCRIPTS s
                        join LIMSSERVERSCRIPTCATEGORIES c on s.CATEGORYID = c.CATEGORYID 
                        where c.CATNAME like ?sCatName? 
                        order by s", "DICTIONARY" );

:FOR i := 1 :TO Len(scripts);
    script := CreateUdObject();
    script:scriptName := scripts[i][1];
    script:success := .F.;
    script:response := "";
    :TRY;
        ExecFunction(script:scriptName);
        script:success := .T.;
    :CATCH;
        script:response := FormatErrorMessage(getLastSSLError());
        output:success := .F.;
    :ENDTRY;
    aAdd(output:scripts, script);
:NEXT;

:RETURN output;
:ENDPROC;

As you can guess, you will NOT want to expose this endpoint in a production environment. You’ll want to run this on your development / test instance, whichever makes the most sense to you (maybe both). You might also want to add some more restrictions, like only allowing categories that start with “Regression”, or something along those lines… I added a generic setting called “/API/Regression/Enabled” with a default value of false to check in the route (see the next point), and a list “/API/Regression/Categories” of which categories can be run (whitelisted).

Next, you should add this to your API route. I will not explain here how to do this; it should be something you are familiar with. Long story short, API_Helper_Custom.RestApiRouter should be able to route callers to this class.

POSTMAN setup

This part is very easy. Create yourself a new collection – something like STARLIMS Regression Tests v1. Prepare this collection with an environment so you can connect to your STARLIMS instance.

One neat trick: prepare your collection with a pre-request script that will make it way easier to use. I have this script I tend to re-use every time:

// required for the hash part; crypto-js is bundled with POSTMAN
var CryptoJS = require("crypto-js");

// get data required for API signature
const dateNow = new Date().toISOString();
const privateKey = pm.environment.get('SL-API-secret'); // secret key
const accessKey = pm.environment.get('SL-API-Auth'); // public/access key

// full URL of the endpoint: {{url}} environment variable + the request path
const url = pm.environment.get('url') + request.url.substring(8);
const method = request.method;
const apiMethod = "";

var body = "";
if (pm.request.body && pm.request.body.raw){
    body = pm.request.body.raw;
}
// create base security signature
var signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;
// get encoding hash of signature that starlims will attempt to compare to
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));

// create headers
pm.request.headers.add({"key":"SL-API-Timestamp", "value":dateNow});
pm.request.headers.add({"key":"SL-API-Signature", "value":encodedHash});
pm.request.headers.add({"key":"SL-API-Auth", "value":accessKey});
pm.request.headers.add({"key":"Content-Type", "value":"application/json"});

Note: in the above code, there are a few things you need to initialize in your environment; pay attention to the pm.environment.get() variables.

Once that is done, add one request of type POST to your collection. Something that looks like this:

POST request example

See the JSON body? We’re just telling STARLIMS “this is the category I want you to run”. With the above script, all scripts in this category will run when the request is sent. And since every script runs in a try/catch, you’ll get a nice response with the success (or failure) of every script.
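For reference, in case the screenshot is hard to read, the raw JSON body is as simple as this (using the category we create below):

{
  "category": "My_Regression_POC"
}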

Let’s create at least one regression test. In the category (in my case, My_Regression_POC), I will add one script named “compile_regression_framework”. The code is very simple: I just want to make sure my class is valid (no typos and such).

:DECLARE o;
o := CreateUdObject("API_Regression_v1.run");
:RETURN .T.;

Setup a POSTMAN monitor

Now, on to the REALLY cool stuff. In POSTMAN, go to monitor:

POSTMAN – Monitor option

Then just click the small “+” to add a new monitor. It is very straightforward: all you need to pick is a collection (which we created earlier) and an environment (which you should have by now). Then set up the schedule and the email in case it fails.

Setting up a Monitor

And that’s it, you’re set! You have your framework in place! The regression tests will run every day at 3pm (according to the above settings), and if something fails, you will receive an email. This is what the dashboard looks like after a few days:

Monitor Dashboard

Next Steps

From here on, the trick is organizing regression scripts. In my case, what I do is:

  1. I create a new category at the beginning of a sprint
  2. I duplicate the request in the collection, with the sprint name in the request’s title
  3. I change the JSON of the new request to mention the new category
  4. Then, for every bug fixed during that sprint, I create a regression script in that category. That script’s purpose is solely to test what was fixed.

What happens then is that every day, all of the previous sprints’ regression tests run, plus the new ones! I end up having a lot of tests.

Closing notes

Obviously, this does not replace a good testing team. It only supports them by re-testing things they might not think about. It also doesn’t test everything; there are always scenarios that can’t be tested with only a server call. And it doesn’t test the front end.

But still, the value is there. What is tested is tested. And if something fails, you’ll know!

One question a developer asked me about this: “Sometimes you change code and it breaks a previous test, and that test will never pass again because it is now a false test. What then?”

Answer: either delete the script, or just change the first line to :RETURN .T.; with a comment that the test is obsolete. Simple as that.

At some point, you can start creating more collections and adding more monitors to split schedules (some tests run weekly, others daily, etc.).

And finally, like I said, the complexity and how much you decide to test is really up to you. I recommend starting small; the value is there without much effort. Then, if a critical scenario arises, you can write a more complex test. That should be the exception.

You have ideas on how to make this better? Share!

Open up STARLIMS with its REST API!

Alright folks, I was recently involved in other LIMS integrations, and one pattern that keeps coming back is a “click this functionality to enable the equivalent API” approach. Basically, module by module, you decide what can be exposed or not. And then, by role or by user (or lab, or all of the above), you grant consuming rights.

It got me thinking: “heh, STARLIMS used to do that with the generic.asmx web service”. RunAction and RunActionDirect, anyone?

So, that’s just what I did, for fun, but also thinking that if I had to go around re-writing routing and scripts for every single functionality, it would be a total waste of time. Now, don’t get me wrong! Like everything, it depends.

You can (and should) expose only the bits and pieces you need to expose, unless your plan is to use STARLIMS mostly as a backend and integrate most (if not all) of its features into external systems (those of you who want a React or Angular front end, that’s you!).

So, if you’re in the latter group, take security into consideration. You will want to set the RestApi_DevMode setting to false in STARLIMS’ web.config file. This ensures that all communication is signed (HMAC) and cannot be tampered with. Then, of course, you’ll enable HTTPS and all those things. This is out of scope, but still worthy of note.

Once that’s done, you need 2 pieces.

  1. You need to define a route. Personally, I used the route /v1/generic/action. If you don’t know how to do that, I wrote an article on the topic.
  2. You need a script to do all of this! Here’s the simplified code:
:PROCEDURE POST;
	:PARAMETERS payload;
	:IF payload:IsProperty("action") .and. !Empty(payload:action);
		:DECLARE response;
		response := CreateUdObject();
		response:StatusCode := 200;
		response:Response := CreateUdObject();
		response:Response:data := "";
		:IF payload:IsProperty("parameters") .and. !Empty(payload:parameters);
			response:Response:data := ExecFunction(payload:action, payload:parameters);
		:ELSE;
			response:Response:data := ExecFunction(payload:action);
		:ENDIF;
		:RETURN response;
	:ELSE;
		:DECLARE response;
		response := CreateUdObject();
		response:StatusCode := 400;
		response:Response := CreateUdObject();
		response:Response:message := "invalid action/parameters";
		:RETURN response;
	:ENDIF;
:ENDPROC;

In my case, I went a little fancier by adding an impersonation mechanism so the code would “run as” a given user. You could also add some authorization logic on which scripts can be run, by whom, when, etc. Just do it at the beginning, and return a 403 Forbidden response if execution is denied.
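For illustration, a call to that route could look like this sketch; the script name and parameter are hypothetical, and the signature headers are omitted:

// illustrative call to the /v1/generic/action route defined above
// (run inside an async function; SL-API-* signature headers omitted)
const res = await fetch('https://your-starlims/api/v1/generic/action', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    action: 'MyCategory.GetSampleStatus', // executed server-side through ExecFunction
    parameters: ['S-2024-0001'],
  }),
});
const result = await res.json(); // shape depends on how the framework serializes the response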

Yeah, I know, this is not rocket science, and it is not necessarily the most secure approach. In fact, this really opens up your STARLIMS instance to ANY integration from third-party software… But, as I mentioned at the beginning, maybe that’s what you want?

NB: I used DALL·E 2 (openai.com) to generate the image on the front page using “STARLIMS logo ripped open”. I had to try!

Building my own RPI-based Bartop Arcade Cabinet

One of my pet projects this summer was to build a bartop arcade cabinet. I had some rpi400s laying around, which are rpi4s embedded in a keyboard. The idea of always having a keyboard handy for the arcade cabinet sounded like a great feature, and to access it, I had to find a way to easily open the cabinet.

That’s why there are hinges in front of the controls!

All in all, building this was fun, and I decided to use Batocera.linux as the OS. It turned out to be the easiest, most complete, and fastest one, based on my tests.

The main goal was to load MAME arcade games (Tetris, Pac-Man, Super Street Fighter II). But I ended up adding Mario Kart 64 as well, and it actually runs pretty well if the resolution is set to 640×480 for that game.

There’s still one bug going on with Batocera: after a while, we must reboot the arcade since there seems to be a memory leak somewhere (the developers are aware).

In the box, there’s:

  • rpi400
  • an old 19-inch 4:3 monitor
  • 2 sets of generic Dragon arcade USB controllers
  • an HDMI-to-VGA active adapter (powered)
  • a power bar outlet (re-wired to an on/off switch in the back)
  • Altec Lansing speakers

Arcade Bartop Cabinet (no stickers)

I thought it might be interesting to show you the various stages of the build, in case you are looking for some inspiration:

Initial frame
Hinges for the bartop
Stained, ready to assemble!

During the whole configuration, I had a problem: RetroPie was not able to output sound properly, and Batocera was not able to connect to WiFi. It turned out this was caused by insufficient power on the rpi.

Lesson 1: avoid a USB sound card if you can. It draws a lot of power, which can interfere with the WiFi & Bluetooth module (which is what happened to me). If you do use one, try to get one that can draw its power from somewhere else. I prefer to rely on the HDMI sound output.

Lesson 2: if you use an old monitor, get an active HDMI-to-VGA adapter. These adapters usually include an audio output (which solves the above problem). If you use a passive adapter, the chip relies on the power provided by HDMI, which may result in black screen flickers in some games. Using an active adapter fixed the problem for me.

This is a very different topic from what I usually post, but this felt like a good place to share it!

Did you ever build an Arcade cabinet?

Site Transition

My wife and I decided to stop hosting websites, as it was more a hobby than anything. After our last handover, we downgraded our package with our hosting provider, and now, unfortunately, my prototype framework https://dev.michel-roberge.com is down.

I will try to find a way to get it back online; but that might take a while. And we’ll need to re-train the tic-tac-toe AI…

STARLIMS REST API & POSTMAN – Production Mode

Alright folks! If you’ve been playing with the new STARLIMS REST API and tried production mode, perhaps you’ve run into all kinds of problems providing the correct SL-API-Signature header. You may wonder, “but how do I generate this?” – even following STARLIMS’s C# example may yield unexpected 401 results.

At least, it did for me.

I was able to figure it out by looking at the code that reconstructs the signature on the STARLIMS side, and here’s a snippet that works in POSTMAN as a pre-request script:

// required for the hash part. You don't need to install anything, it is included in POSTMAN
var CryptoJS = require("crypto-js");

// get data required for API signature
const dateNow = new Date().toISOString();
// this is the API secret found in STARLIMS key management
const privateKey = pm.environment.get('SL-API-secret');
// this is the API access key found in STARLIMS key management
const accessKey = pm.environment.get('SL-API-Auth');
// in my case, I have a {{url}} variable, but this should be the full URL to your API endpoint
const url = pm.environment.get('url') + request.url.substring(8);
const method = request.method;
// I am not using api methods, but if you are, this should be set
const apiMethod = "";

var body = "";
if (pm.request.body.raw){
    body = pm.request.body.raw;
}

// this is the reconstruction part - the text used for the signature
const signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;

// encrypt the signature (HMAC-SHA256)
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));

// set global variables used in header
pm.globals.set("SL-API-Timestamp", dateNow);
pm.globals.set("SL-API-Signature", encodedHash);

One point of interest: if it still is not working and you can’t figure out why, an undocumented STARLIMS feature is to add this application setting in the web.config to see more info:

<add key="RestApi_LogLevel" value="Debug" />

I hope this helps you use the new REST API provided by STARLIMS!