Re-thinking STARLIMS architecture

There is something about STARLIMS that has been bugging me for a long time. Don’t get me wrong: I think it is a great platform. I just question the viability of XFD in 2024, and the choice of Sencha for the HTML side of it.

But an even more critical point: I question the principle of using the same server for the “backend” and the “frontend”. Really, the current architecture of STARLIMS (in a simplified way) is something like this:

Sure, you can add load balancers, multiple servers, batch processors… But ultimately, the server’s role is both backend and web rendering, without really following the Server-Side Rendering (SSR) pattern: the server hosts and provides the rendering code, and lets the client do the rendering. So, in fact, it is Client-Side Rendering (CSR) with most of the drawbacks of SSR.

This got me thinking. What if we really decoupled the frontend from the backend? And what if we did it with real microservices? You know, something like this:

Let me explain the layers.

React.js

React needs no introduction: the famous open-source library behind Facebook. Very fast and easy to work with, huge community… Even the AI chatbots will generate good React components if you ask nicely! For security, it’s like any other platform; it’s as secure as you make it. And if you pair it with Node.js, it’s very easy, which brings me to the next component…

Node.js

Another one that needs no introduction. JavaScript on the backend? Nice! This is where, on one end, you handle sessions and security (for the React client) and, on the other, communicate with STARLIMS through its out-of-the-box REST API. Node can be just a proxy to STARLIMS (that is all it does currently), but it should also be leveraged to extend the REST APIs. It makes it a lot easier to implement new APIs and connect to STARLIMS (or anything else, for that matter!), which speeds up the process. Plus, you easily get cool stuff like WebSockets if you want, and you can cache some reference data in Redis to go even faster!
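To give an idea, here is a minimal sketch of what the proxy part could look like, assuming Express and the built-in fetch of Node 18+. The routes, environment variable, and error handling are my own placeholders, not the actual project’s code:

```js
// Minimal Node.js proxy in front of the STARLIMS REST API (sketch only).
const express = require('express');

const app = express();
app.use(express.json());

// Placeholder: base URL of the STARLIMS REST API, e.g. https://lims.example.com/api
const STARLIMS_BASE = process.env.STARLIMS_BASE_URL;

// Forward any /api/* call to STARLIMS and relay the JSON response.
app.all('/api/*', async (req, res) => {
  try {
    const upstream = await fetch(STARLIMS_BASE + req.path.replace(/^\/api/, ''), {
      method: req.method,
      headers: { 'Content-Type': 'application/json' },
      body: ['GET', 'HEAD'].includes(req.method) ? undefined : JSON.stringify(req.body),
    });
    res.status(upstream.status).json(await upstream.json());
  } catch (err) {
    res.status(502).json({ error: 'STARLIMS unreachable', detail: err.message });
  }
});

app.listen(3000, () => console.log('Proxy listening on :3000'));
```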

Redis

Fast, lightweight, free cache (well, it was when I started). I currently use it only for sessions: since the STARLIMS REST API is stateless, I manage sessions in Node.js and store them in Redis, which allows me to spin up multiple Node.js instances (load balancing?) and share sessions across them. If you don’t need to spin up multiple proxies, you don’t need this. But hey, it’s cooler with it, no?
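In practice, the session sharing can be as simple as this sketch (node-redis v4; the key names and TTL are illustrative, not the actual project’s):

```js
// Sharing sessions across Node.js instances via Redis (sketch only).
const { createClient } = require('redis');
const crypto = require('crypto');

const redis = createClient({ url: process.env.REDIS_URL });

async function createSession(starlimsToken) {
  const sessionId = crypto.randomUUID();
  // Any Node instance behind the load balancer can now resolve this session.
  await redis.set(`session:${sessionId}`, JSON.stringify({ starlimsToken }), { EX: 3600 });
  return sessionId;
}

async function getSession(sessionId) {
  const raw = await redis.get(`session:${sessionId}`);
  return raw ? JSON.parse(raw) : null;
}

(async () => {
  await redis.connect();
  const id = await createSession('demo-token');
  console.log(await getSession(id)); // { starlimsToken: 'demo-token' }
})();
```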

I was also thinking (I haven’t done anything about this yet) of having a cron job running in Node.js to pull reference data from STARLIMS (test plans, tests, analytes, specifications, methods, etc.) periodically and update the Redis cache. Some of that data could then be served to the UI (React.js) instead of hitting STARLIMS. But now, with the updated Redis license, I don’t know. I think it is fine in these circumstances, but I would need to verify.
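If I ever do it, it could be as simple as this sketch, using a plain setInterval instead of a real cron library; the STARLIMS endpoint and Redis key are hypothetical:

```js
// Periodic refresh of reference data into the Redis cache (sketch only).
const { createClient } = require('redis');

const redis = createClient({ url: process.env.REDIS_URL });
const REFRESH_MS = 15 * 60 * 1000; // every 15 minutes

async function refreshReferenceData() {
  // Hypothetical REST call returning reference data such as tests or analytes.
  const res = await fetch(`${process.env.STARLIMS_BASE_URL}/reference/tests`);
  if (!res.ok) return; // keep serving the stale cache rather than failing
  await redis.set('ref:tests', JSON.stringify(await res.json()), { EX: 3600 });
}

(async () => {
  await redis.connect();
  await refreshReferenceData(); // warm the cache at startup
  setInterval(refreshReferenceData, REFRESH_MS);
})();
```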

… BUT WHY?

Because I can! – Michel R.

Well, just because. I was learning these technologies, had this idea, and I just decided to test the theory. So, I tried. And it looks like it works! There are multiple theoretical advantages to this approach:

  1. Performance: Very fast (and potentially responsive) UI.
  2. Technology: New technology availability (websockets, data in movement, streaming, etc.).
  3. Integration: API first paradigm, Node.js can make it really easy to integrate with any technology!
  4. Source control: 100% Git for UI code, opening all git concepts (push, pull requests, merge, releases, packages, etc.).
  5. Optimization: Reduce resource consumption from STARLIMS web servers.
  6. Scalability: High scalability through containerization and micro-services.
  7. Pattern: Separation of concerns. Each component does what it’s best at.
  8. Hiring: There is a higher availability of React.js and Node.js developers than STARLIMS developers!

Here are some screenshots of what it can look like:

As you can see, at this stage, it is very limited. But it does work, and I like a couple of the ideas / features I came up with, like F1 for Help, the keyboard shortcuts support, and more importantly, the speed… It is snappy. In fact, data retrieval is limited by what the STARLIMS REST API can provide, but otherwise, everything else is way, way faster than what I’m used to.

How does it work, really?

This is magic! – Michel R.

Magic! … No, really, I somewhat “cheated”: I implemented a Generic API in the STARLIMS REST API. This endpoint supports both ExecFunction and RunDS, as well as impersonation. Considering that the STARLIMS REST API is quite secure (it uses anti-tampering patterns; you can ask them to explain that to you if you want) and reliable, I created a generic endpoint. It receives a payload containing the script (or data source) to run, along with its parameters, and returns the resulting data in JSON format.
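To make that concrete, the payload could look roughly like this. The endpoint path and field names are my own illustration of the idea, not the exact contract of my implementation:

```js
// Hypothetical payload for the generic endpoint (illustrative only).
// POST /api/generic/execfunction
const payload = {
  script: 'MyCategory.MyServerScript', // server script (or data source, for RunDS)
  parameters: ['SAMPLE-001', 42],      // positional parameters passed to it
  runAs: 'jdoe',                       // optional impersonation, as described above
};
// The endpoint runs the script server-side and returns its result as JSON.
```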

Therefore, in React, you would write code very similar to lims.CallServer(scriptName, parameters) in XFD/Sencha.
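A minimal React-side helper along those lines might look like this, reusing the assumed endpoint and payload shape from the sketch above:

```js
// Thin wrapper mimicking lims.CallServer on the React side (sketch only).
export async function callServer(scriptName, parameters = []) {
  const res = await fetch('/api/generic/execfunction', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ script: scriptName, parameters }),
  });
  if (!res.ok) throw new Error(`Server call failed: ${res.status}`);
  return res.json();
}

// Usage, e.g.: const folders = await callServer('Folders.GetOpenFolders', [userName]);
```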

Me being paranoid, I added a “whitelisting” feature to my generic API, so you can control exactly which scripts are allowed to run through the API. Being lazy, I also added another script that does exactly the same thing without the whitelisting, just so I wouldn’t have to whitelist everything; but hey, if you want that level of control… why not?
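The actual check lives in the STARLIMS-side generic API script, but the concept is simple enough to sketch in Node.js terms (the script names are examples):

```js
// Conceptual whitelist check (the real one is in the STARLIMS generic API).
const WHITELIST = new Set([
  'Folders.GetOpenFolders',
  'Samples.GetSampleDetails',
]);

function assertWhitelisted(scriptName) {
  // Reject anything not explicitly allowed through the generic endpoint.
  if (!WHITELIST.has(scriptName)) {
    throw new Error(`Script not whitelisted: ${scriptName}`);
  }
}
```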

Conclusion

My non-scientific observation is that this works quite well. The interface is snappy (a lot faster than even Sencha), and developing new views is somewhat easier than in either of those technologies as well.

Tip: you can just ask an AI to generate a view in React using, let’s say, Bootstrap 5 classNames, with placeholders to call your API endpoints, et voilà! You have something 90% ready.

Or you learn React and Vite, build something yourself with your own components, and create your own STARLIMS runtime (kind of).

This whole experiment was quite fun, and I learned a ton. I think there might actually be something to do with it. I invite you to take a look at the repositories, which I decided to publish for anyone to use and contribute to under an MIT license with commercial restrictions:

You need both projects to get this working. I recommend you start with both READMEs.

Right now, I am parking this project, but if you would like to learn more, want to evaluate this but need guidance, or are interested in actually using this in production, feel free to drop me an email at [email protected]! Who knows what happens next?

An introduction to GitHub and Webhooks

In the world of software development, my quest to understand continuous deployment led me down an intriguing path. It began with a burning desire to unravel its complexities while steering clear of expensive cloud hosting services. And that’s when my DIY GitHub Webhook Server project came to life.

The Genesis

Imagine being in my shoes—eager to dive deeper into the continuous deployment process. But I didn’t want to rely on pricey cloud solutions. So, I set out to craft a DIY GitHub Webhook Server capable of handling GitHub webhooks and automating tasks upon code updates—right from my local machine. Or any machine for that matter.

The Vision

Let’s visualize a scenario with a repository, let’s call it “MyAwesomeProject”, sitting in GitHub, while you are somewhere in a remote cabin with dicey internet access. All you have is your laptop (let’s make it a Chromebook!). You want to code snugly, and you want to update the server that sits at home. But… you don’t WANT to remote into your server. You want it to be automatic. Like magic.

You would have to be prepared. You would clone my repo, configure your server (including port forwarding), and maybe use something like no-ip.com so you have a “fixed” URL to use your webhook with. Then:

  1. Configuring Your Repository: Start by defining the essential details of “MyAwesomeProject” within the repositories.json file—things like secretEnvName, path, webhookPath, and composeFile.
{
  "repositories": [
    {
      "name": "MyAwesomeProject",
      "secretEnvName": "GITHUB_WEBHOOK_SECRET",
      "path": "/path/to/MyAwesomeProject",
      "webhookPath": "/webhook/my-awesome-project",
      "composeFile": "docker-compose.yml"
    }
  ]
}
  2. Setting Up Your GitHub Webhook: Head over to your “MyAwesomeProject” repository on GitHub and configure a webhook. Simply point the payload URL to your server’s endpoint (e.g., http://your-ddns-domain.net/webhook/my-awesome-project).
  3. Filtering Events: The server is smartly configured to respond only to push events occurring on the ‘main’ branch (refs/heads/main). This ensures that actions are triggered exclusively upon successful pushes to this branch.
  4. Actions in Motion: Upon receiving a valid push event, the server swings into action, automatically executing a git pull on the ‘main’ branch of “MyAwesomeProject.” Subsequently, Docker containers are rebuilt using the specified docker-compose.yml file. A minimal sketch of this flow follows below.
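Here is roughly what that flow looks like in Express. This is a simplified sketch, not the exact code from the repository: it verifies GitHub’s HMAC signature, filters for pushes to main, then pulls and rebuilds. The route, path, and environment variable mirror the repositories.json example above:

```js
// Simplified webhook server: verify signature, filter events, redeploy.
const crypto = require('crypto');
const { execSync } = require('child_process');
const express = require('express');

const app = express();
// Keep the raw body: the HMAC must be computed over the exact bytes GitHub sent.
app.use(express.json({ verify: (req, _res, buf) => { req.rawBody = buf; } }));

function verifySignature(req, secret) {
  const signature = req.get('X-Hub-Signature-256') || '';
  const expected = 'sha256=' +
    crypto.createHmac('sha256', secret).update(req.rawBody).digest('hex');
  return signature.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected));
}

app.post('/webhook/my-awesome-project', (req, res) => {
  if (!verifySignature(req, process.env.GITHUB_WEBHOOK_SECRET)) {
    return res.status(401).send('Bad signature');
  }
  // Only act on push events targeting the main branch.
  if (req.get('X-GitHub-Event') !== 'push' || req.body.ref !== 'refs/heads/main') {
    return res.status(200).send('Ignored');
  }
  const cwd = '/path/to/MyAwesomeProject';
  execSync('git pull', { cwd });
  execSync('docker compose -f docker-compose.yml up -d --build', { cwd });
  res.status(200).send('Deployed');
});

app.listen(8080);
```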

So, there you have it—a simplified solution for automating your project workflows using GitHub webhooks and a self-hosted server. But before we sign off, let’s talk security.

For an added layer of protection, consider setting up an Nginx server with a Let’s Encrypt SSL certificate. This secures the communication channel between GitHub and your server, ensuring data integrity and confidentiality.

While this article delves into the core aspects of webhook configuration and automation, diving into SSL setup with Nginx warrants its own discussion. Stay tuned for a follow-up article that covers this crucial security setup, fortifying your webhook infrastructure against potential vulnerabilities.

Through this journey of crafting my DIY GitHub Webhook Server, I’ve unlocked a deeper understanding of continuous deployment. Setting up repositories, configuring webhooks, and automating tasks upon code updates—all from my local setup—has been an enlightening experience. And it’s shown me that grasping the nuances of continuous deployment doesn’t always require expensive cloud solutions.

References:

Repository: https://github.com/michelroberge/webhook-mgr/

Demo app (where I use this): https://curiouscoder.ddns.net