CI / CD Next step is awesome!

Alright, so if you followed along with the previous post, you know I have set up Jenkins to kind of run continuous integration. Well, I have now pushed it a bit further.

I installed a Docker image of SonarQube (the Community Edition) and wow, I have only one regret: I should have started with all of this setup on day one.

My flow is now this:

So, in a nutshell, what is VERY COOL is that when I push code to my develop branch, this happens automatically:

  • unit tests are triggered
  • code analysis is triggered

And in the SonarQube code analysis, I found a bunch of interesting suggestions, enhancements and bug fixes. They were not necessarily product-breaking, but I found many things I was not even aware of. My future code will just be better.

CD pipeline?

I also added a CD pipeline for my test environment. I am not ready yet to add quality gates to automate production deployment, but I am on the right track! Here is my current CD pipeline:

It is quite simple, but it works just perfectly!

Now, I was wondering if this would be too much for my server. You know, running all of these:

  • Verdaccio Docker image (npm private repository)
  • Jenkins Docker image (CI/CD pipelines)
  • SonarQube Docker image (code analysis)
  • 3 test Docker images (React frontend, Node backend, service manager)
  • 3 production Docker images (same as just before)
  • Nginx Docker image (reverse proxy)
  • Prometheus & Grafana (installed directly, not as Docker images) for system monitoring

Here’s what happens:

More or less: NOTHING.

Well, not enough to be concerned about yet. Of course, there are not a lot of users, but I expect that even with a few dozen users, it wouldn’t be so bad. And if this became really serious, the production environments would be hosted on the cloud somewhere for 100% uptime (at least as a target).

To be honest, the tough part was to get the correct Jenkinsfile structure – just because I am not used to it. For safekeeping, I am writing my two pipelines here, and who knows, maybe it can help you too!

CI pipeline – Jenkinsfile

pipeline {
    agent any
    tools {nodejs "node"} 
    stages {
        stage('Install dependencies') { 
            steps {
                sh 'npm install' 
            }
        }
        stage('Unit Tests') { 
            steps {
                sh 'npm run test' 
            }
        }
        stage('SonarQube Analysis') {
            steps{
                script {
                    def scannerHome = tool 'sonar-scanner';
                    withSonarQubeEnv('local-sonarqube') {
                        withCredentials([string(credentialsId: 'sonar-secret', variable: 'SONAR_TOKEN')]) {
                            sh "${scannerHome}/bin/sonar-scanner -Dsonar.login=\$SONAR_TOKEN"
                        }
                    }
                }
            }
        }
    }
}
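
A quick note on the SonarQube stage: the scanner also reads project-level settings from a sonar-project.properties file at the root of the repository. A minimal sketch of what that file can look like (the project key, name, and source folder below are placeholders, not my real values):

# sonar-project.properties (placeholder values)
sonar.projectKey=my-frontend
sonar.projectName=My Frontend
sonar.sources=src
# coverage and test reports can be wired in later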

CD pipeline – Jenkinsfile

pipeline {
    agent any

    stages {
        stage('Verify Docker is available') {
            steps {
                script {
                    sh 'docker version'
                }
            }
        }
        stage('Copy .env and config file') {
            steps {
                script {
                    configFileProvider([configFile(fileId: 'frontend-dev.env', variable: 'DEV_ENV')]) {
                        sh 'cp $DEV_ENV .env'
                    }
                }
                script {
                    configFileProvider([configFile(fileId: 'frontend-custom-config.js', variable: 'DEV_CONFIG')]) {
                        sh 'cp $DEV_CONFIG ./src/config.js'
                    }
                }
            }
        }
        stage('Build Dev') {
            steps {
                sh 'docker build -t frontend:develop .'
            }
        }
        stage('Stop and Remove previous Container') {
            steps {
                sh 'docker stop frontend-develop || true'
                sh 'docker rm frontend-develop || true'
            }
        }
        stage('Start Dev') {
            steps {
                sh 'docker run --name frontend-develop -d -p 3033:3033 -e PORT=3033 frontend:develop'
            }
        }
    }
}
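
As mentioned above, I am not adding quality gates yet. When I do, the SonarQube Scanner plugin for Jenkins provides a waitForQualityGate step for exactly that; a sketch of what such a stage could look like inside the stages block (it also needs a webhook from SonarQube back to Jenkins, which I have not set up yet):

        stage('Quality Gate') {
            steps {
                // fail the pipeline if the SonarQube quality gate is red
                timeout(time: 10, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }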

Next step: fix all the issues identified by SonarQube. When I am done with that, I will begin the CD for prod.

CI / CD at home – Was taking the red pill a good idea?…

This whole article started as an attempt at sharing the steps to get a free “no-cloud” platform for continuous integration and continuous deployment. What triggered it? The time I spent doing npm install, npm build, npm publish, left and right, forgetting one, npm test, oops I forgot one npm test and did a docker build … Damn, were those ever time-consuming and tedious activities.

I want to code. I don’t want to do this. Let’s separate the concerns: I code, the server does the rest. Sounds good? We call this separation of concerns, at a whole new level!

What is separation of concerns?

Short answer: Do what you are supposed to do and do it right.

Not so short answer: It is the principle of breaking down a complex system into distinct, independent parts or modules, each addressing a specific responsibility or concern, to promote modularity, maintainability, and scalability. It is usually applied at many (if not all) levels, like architecture, component, class, method, data, presentation, testing, infrastructure, deployment, etc.

… and why should you care? (Why it matters)

My article in itself? It doesn’t really matter, and you shouldn’t care. Unless, that is, you find yourself in a situation where you hear about continuous integration and deployment, but you don’t really know where to start. Or if you have your own software you’re working on and want to take it to the next level. Or just because you like reading my stuff, who knows!

I recently started to turn this blog into a melting pot of everything I face on this personal project. Eventually, I may even release something! And then, we will have a nice history of how it got there. For posterity!

Anyway, I am diverging from the original intent… I want to share the journey I went through to get a working environment with Jenkins and Verdaccio. I think it is a great platform for startups who can’t or won’t afford cloud hosting just yet (or for privacy reasons) but still want to achieve some level of automation.

As a reference, I’m sharing the challenges I am facing with a personal project consisting of a Node backend, a React frontend, and a background server, and how I tackle these challenges using modern tools and techniques.

I want to try something backward. Conclusion first! I completed the setup, and it works. It was painful, long, and not fun at some points.

But look at the above screenshot! Every time I push to one of my GitHub repos, it triggers a build and tests. In one case, it even publishes to my private package management registry with the :develop tag! Isn’t it magical?

If you are interested in how I got there, tag along. Otherwise, have a great day! (still, you should meet my new friend, Jenkins)

Before we begin, here are some definitions (if you know what these are, just ignore).

You never know who will read your articles (if anyone). Play dumb.

Definitions

Continuous Integration (CI) and Continuous Deployment (CD): CI/CD are practices that automate the process of building, testing, and deploying software applications. CI ensures that code changes from multiple developers are regularly merged and tested, while CD automates the deployment of these tested changes to production or staging environments.

Node.js: Node.js is a runtime environment that allows developers to run JavaScript code outside of a web browser. It’s commonly used for building server-side applications, APIs, and real-time applications.

Docker: Docker is a platform that simplifies the process of creating, deploying, and running applications using containers. Containers are lightweight, standalone executable packages that include everything needed to run an application, including the code, runtime, system tools, and libraries.

Containers: Containers are isolated environments that package an application and its dependencies together. They provide a consistent and reproducible runtime environment, ensuring that the application runs the same way regardless of the underlying infrastructure. Containers are lightweight and portable, making them easier to deploy and manage than traditional virtual machines.

Let’s Begin!

Project Structure

  • Project 1: React Frontend
  • Project 2: Common backend (most models, some repos and some services)
  • Project 3: Node Backend (API)
  • Project 4: Node Worker (processing and collection monitoring)

Environments

  • I run development on whatever machine I am using at the moment, with nodemon and Vite’s development mode
  • I build & run Docker containers on my Linux server with Docker Compose (3 Dockerfiles and 1 compose file)
  • I have an Nginx reverse proxy on the same server for SSL and dynamic IP (No-IP)

Objective

  • Achieve full CI/CD so I can onboard remote team members (and do what I like: code!)

IF I am successful, this will be a robust and scalable development workflow that would streamline the entire software development life cycle, from coding to deployment, for my project. I think in the enterprise, with similar tools, this proactive approach would lay a good foundation for efficient collaboration, rapid iteration, and reliable software delivery, ultimately reducing time-to-market and increasing overall productivity.

Couple of additional challenges

  • Challenge #1: Private Package Management. The npm registry is public. I want to keep my projects private; how can I have a package for the common backend components?
  • Challenge #2: Continuous Integration (CI). How can I implement a continuous integration pipeline with this? Code is on Github, registry will be private… How do I do that?
  • Challenge #3: Continuous Deployment (CD). How can I implement a continuous deployment process? I will need to automate some testing and quality gates in the process, so how will that work?
  • Challenge #4: Future Migration to Continuous Delivery (CD+). Can I migrate to continuous delivery in the future (you know, if I ever have customers)?
  • Challenge #5: Cloud Migration / readiness. When / if this becomes serious, can my solution be migrated to a cloud provider to reduce hardware failure risk?

With this in mind, I have decided to embark on a journey to attempt to set up a stable environment to achieve this and face each challenge. Will you take the blue pill and go back to your routine, or the red pill and follow me down the rabbit hole?…

Starting point: bare metal (target: Ubuntu Server, Jenkins, Verdaccio, Docker)

I have this Trigkey S5 mini PC, which I think is pretty awesome for the price. It comes with Windows 11, but from what I read everywhere, to host what I want, I should go with a Linux distro. So there I go: I install some Linux distro from a USB key and cross my fingers that it boots…

I went with Ubuntu Server (24.04 LTS). BTW, the mini PC is a Ryzen 5800H with 32GB of RAM, which should be sufficient for a while. On there, I installed these – pretty straightforward, and you can find many tutorials online, so I won’t go into details:

  • Docker (engine & compose)
  • Git
  • Cockpit (this makes it easier for me to manage my server)

I also have an NGINX reverse proxy container. You can google something like nginx reverse proxy ssl letsencrypt docker and you’ll find great tutorials on setting this up as well. I may write another article later when I reach that point for some items (if required in this journey). But really, that’s gravy at this stage.

Install and Configure Jenkins

Jenkins is an open-source automation server that provides CI/CD pipeline solutions. From what I could gather, we can use Jenkins Docker image for easy management and portability, and it strikes a good balance between flexibility and simplicity. Disclaimer: it is my first experiment with Jenkins, so I have no clue how this will come out…

1. Let’s first prepare some place to store Jenkins’ data:

sudo mkdir /opt/jenkins

2. Download the Docker image:

sudo docker pull jenkins/jenkins:lts

3. Create a docker-compose.yml file to run Jenkins:

services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - ./jenkins:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock

4. And launch it: sudo docker compose up -d

Et voilà! Jenkins seems to be running:

5. Since I mounted ./jenkins as my Jenkins home, I can just run cat jenkins/secrets/initialAdminPassword to get the initial admin password, and continue. (For some reason, I had to paste and click continue twice, then it worked.)

I went with the recommended plugins to begin with. According to the documentation, we can easily add more later.

Install and Configure Verdaccio

Verdaccio will be my private package management registry. To install it, I just created a docker compose file, set up some directories, and boom.

version: '3.8'
services:
  verdaccio:
    image: verdaccio/verdaccio
    container_name: verdaccio
    ports:
      - "4873:4873"
    volumes:
      - ./config:/verdaccio/conf
      - ./storage:/verdaccio/storage
      - ./plugins:/verdaccio/plugins
      - ./packages:/verdaccio/packages

Run it with sudo docker compose up -d and that’s it.
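
To actually use it from a project, it is just a matter of pointing npm at it. A quick sketch, assuming the registry is reachable at http://myserver:4873 (replace with your own host):

# log in once against the private registry (stores an auth token in ~/.npmrc)
npm adduser --registry http://myserver:4873

# publish to Verdaccio instead of the public npm registry
npm publish --registry http://myserver:4873

# this is also how the :develop tag mentioned earlier gets applied
npm publish --tag develop --registry http://myserver:4873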

Let’s put all of this together and create our first pipeline! – inspired by Build a Node.js and React app with npm (jenkins.io)

Problem 1 – Github & Jenkins ssh authentication

Well, I was not prepared for this. I spent a lot of time on it: since Jenkins runs in a container, it does not share everything with the host, and somehow, adding the private keys did not add the known hosts. So I had to run these commands:

me@server:~/jenkins$ sudo docker exec -it jenkins bash
jenkins@257100f0320f:/$ git ls-remote -h git@github.com:michelroberge/myproject.git HEAD
The authenticity of host 'github.com (140.82.113.3)' can't be established.
ED25519 key fingerprint is SHA256:+...
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com' (ED25519) to the list of known hosts.

After that, it worked. It is only later that I found out I need to pass --legacy-auth to npm login when in headless mode. Moving forward, that won’t be a problem anymore.
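
In hindsight, a non-interactive way to get the same known_hosts entry is to pre-populate it from the host. A sketch, assuming the official jenkins/jenkins image (whose home is /var/jenkins_home) and that ssh-keyscan is available in it:

# append GitHub's host key to the jenkins user's known_hosts without the interactive prompt
sudo docker exec jenkins bash -c "mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts"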

Problem 2 – npm not found

Who would have thought? Node is not installed by default; you need to add the NodeJS plugin. Once added, a typical workflow will need to include it! Something like:

pipeline {
    agent any
    tools {nodejs "node"} 
    stages {
        stage('Install') { 
            steps {
                sh 'npm install' 
            }
        }
        stage('Build Library') { 
            steps {
                sh 'npm run build:lib' 
            }
        }
        stage('Build Application') { 
            steps {
                sh 'npm run build:app' 
            }
        }
    }
}

The tools section is what matters. I have named my NodeJS installation node, hence the “node” in the declaration. Now that I have played with it, I understand: this allows me to have different Node versions and use the one I want in the workflow I want. Neat!
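
For example, if I later register a second NodeJS installation in Jenkins (say one named node-18 – a hypothetical name), a pipeline can pin it explicitly:

pipeline {
    agent any
    // "node-18" must match a NodeJS installation name configured under Manage Jenkins > Tools
    tools { nodejs "node-18" }
    stages {
        stage('Check versions') {
            steps {
                sh 'node --version && npm --version'
            }
        }
    }
}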

And finally, I got something happening:

First step achieved! Now I can add npm run test to have my unit tests running automatically.

This is nice, but it is not triggered automatically. Since I use GitHub, I can leverage webhooks through the GitHub trigger:

Then all I need is to add a webhook in GitHub that points to https://<my-public-address-for-jenkins>:<port>/github-hook/ and that’s it!
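
If I read the GitHub plugin documentation right, the same trigger can also be declared directly in the Jenkinsfile instead of (or in addition to) the job configuration checkbox; a sketch:

pipeline {
    agent any
    // equivalent of "GitHub hook trigger for GITScm polling" in the job configuration
    triggers {
        githubPush()
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install && npm run build'
            }
        }
    }
}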

The result

With this, I can now build a fully automated CI pipeline. Now, what is fully automated? That’s where the heavy-lifting begins. I will be exploring and reading about it more in the next weeks, but ultimately, I want to automate this:

  1. Develop branch CI – when I push anything to the develop branch, run the following tasks:
    • pull from develop
    • install (install dependencies)
    • build (build the repo)
    • test (run the unit tests, API tests, e2e tests, etc. depending on the repo)
  2. Staging branch CD – when I do a pull request from develop into staging branch, run the following tasks:
    • pull from staging
    • install
    • build
    • test (yes, same again)
    • host in testing environment (docker)
    • load tests (new suite of test to verify response under heavy load)
  3. I will then do “health checks”, analyze, and decide if I can/should do a pull from staging into main.
  4. Main branch CD – when I do a pull request from staging into main, run the following tasks:
    • pull from main
    • install
    • build
    • test (of course!)
    • host in staging environment (docker)
    • do some check, and then swap spot with current production docker
    • take down the swapped docker

The reason I keep some manual tasks (step 3) is that I want to handle build candidates in kind of “the old way”. When I introduce some additional testing automation suites, I will probably enhance the whole thing.
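
For the swap in step 4, I am picturing something along these lines (a rough sketch with made-up container names, tags, and ports; the real version will need actual health checks before the switch):

# build the main image and start a candidate container on a temporary port
docker build -t frontend:main .
docker run --name frontend-candidate -d -p 3044:3033 -e PORT=3033 frontend:main

# ... run health checks against port 3044 here ...

# swap: take down the current production container and promote the candidate to the production port
docker stop frontend-production && docker rm frontend-production
docker stop frontend-candidate && docker rm frontend-candidate
docker run --name frontend-production -d -p 3000:3033 -e PORT=3033 frontend:main

This naive version has a short downtime window during the swap; pointing the Nginx reverse proxy at the new container instead would avoid it.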

By implementing these automated CI/CD workflows, I hope to achieve the following benefits:

  • Faster Feedback Cycles: Automated testing and deployment processes provide rapid feedback on code changes, allowing developers to quickly identify and resolve issues. I hope I won’t be the only developer forever on this project!
  • Early Detection of Issues: Continuous integration and testing catch defects early in the development cycle, preventing them from propagating to later stages and reducing the cost of fixing them.
  • Efficient and Reliable Deployments: Automated deployment processes ensure consistent and repeatable deployments, reducing the risk of human errors and minimizing downtime.
  • Improved Collaboration: Automated workflows facilitate collaboration among team members by providing a standardized and streamlined development process.

This is also something that will help me in my professional life – I kind of knew about it, but always relied on others to do it. So now, I will at least understand better what’s happening and the impact behind it. I love learning!

And guess what: this approach aligns with industry best practices for modern software development and delivery, including:

  • Separation of Concerns: Separating the frontend, backend, and worker components into different projects promotes maintainability and scalability.
  • Continuous Integration: Regular integration of code changes into a shared repository, along with automated builds and tests, ensures early detection of issues and facilitates collaboration.
  • Continuous Deployment: Automated deployment processes enable frequent and reliable releases, reducing the risk of manual errors and accelerating time-to-market.
  • Test Automation: Comprehensive testing strategies, including unit tests, API tests, end-to-end tests, and load tests, ensure high-quality software and catch issues early in the development cycle.
  • Containerization: Using Docker containers for deployment ensures consistent and reproducible environments across development, testing, and production stages.

To me, this experiment demonstrates the importance of proactively addressing challenges related to project organization, package management, and automation in software development – earlier than later. With tools like Jenkins, Verdaccio, and Docker, I have laid the groundwork for a robust and scalable CI/CD pipeline that facilitates efficient collaboration, rapid iteration, and reliable software delivery.

As my project evolves, I plan to further enhance the automation processes, ensuring a smooth transition to continuous delivery and potential migration to cloud providers.

Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained—on the contrary!—by tackling these various aspects simultaneously. It is what I sometimes have called “the separation of concerns”, which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts, that I know of. This is what I mean by “focusing one’s attention upon some aspect”: it does not mean ignoring the other aspects, it is just doing justice to the fact that from this aspect’s point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously.

Edsger W. Dijkstra – 1974 paper “On the role of scientific thought”

Moving forward, I will be adding new pipelines as they become required.

Hope you learned something today!

Re-thinking STARLIMS architecture

There is something about STARLIMS that has been bugging me for a long time. Don’t get me wrong – I think it is a great platform. I just question the relevance of XFD in 2024, and the selection of Sencha for the HTML part of it.

But an even more critical point: I question the principle of using the same server for the “backend” and the “frontend”. Really, the current architecture of STARLIMS (in a simplified way) is something like this:

Sure, you can add load balancers, multiple servers, batch processors… But ultimately, the server’s role is both backend and web rendering, without really following the Server-Side Rendering (SSR) pattern. It hosts and provides the code to render from the backend and lets the client do the rendering. So, in fact, it is Client-Side Rendering (CSR) with most of the SSR drawbacks.

This got me thinking. What if we really decoupled the frontend from the backend? And what if we made this using real microservices? You know, something like this:

Let me explain the layers.

React.js

React needs no introduction. The infamous open-source platform behind Facebook. Very fast and easy, huge community… Even all the AI chatbots will generate good React components if you ask nicely! For security, it’s like any other platform; it’s as secure as you make it. And if you pair it with Node.js, then it’s very easy, which brings me to the next component…

Node.js

Another one that needs no introduction. JavaScript on the backend? Nice! And there, on one end, you handle the session & security (with React) and communicate with STARLIMS through the out-of-the-box REST API. Node can be just a proxy to STARLIMS (which is the case currently) but should also be leveraged to extend the REST APIs. It makes it a lot easier to implement new APIs and connect to STARLIMS (or anything else for that matter!) and speeds up the process. Plus, you easily get cool stuff like WebSockets if you want, and you can cache some reference data in Redis to go even faster!…

Redis

Fast / lightweight / free cache (well, it was when I started). I currently use it only for sessions; since the REST API is stateless in STARLIMS, I manage the sessions in Node.js and store them in Redis, which allows me to spin up multiple Node.js instances (load balancing?) and share sessions across them. If you don’t need to spin up multiple proxies, you don’t need this. But hey, it’s cooler with it, no?

I was thinking (I haven’t done anything about this yet) of having a cron job running in Node.js to pull reference data from STARLIMS (like test plans, tests, analytes, specifications, methods, etc.) periodically and update the Redis cache. Some of that data could be used in the UI (React.js) instead of hitting STARLIMS. But now, with the updated Redis license, I don’t know. I think it is fine in these circumstances, but I would need to verify.
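
If I ever get to it, a minimal sketch could look like the following (the endpoint path, the Redis key and the schedule are all assumptions; it relies on the node-cron, redis and axios packages):

// sketch: refresh reference data from STARLIMS into Redis every hour
const cron = require('node-cron');
const axios = require('axios');
const { createClient } = require('redis');

const redis = createClient({ url: 'redis://localhost:6379' });

async function refreshTestPlans() {
  // hypothetical proxy endpoint that calls the STARLIMS REST API behind the scenes
  const { data } = await axios.get('http://localhost:5050/api/reference/testplans');
  // cache for an hour so the React UI can read it without hitting STARLIMS
  await redis.set('reference:testplans', JSON.stringify(data), { EX: 3600 });
}

(async () => {
  await redis.connect();
  cron.schedule('0 * * * *', () => {
    refreshTestPlans().catch((err) => console.error('reference data refresh failed', err));
  });
})();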

… BUT WHY?

Because I can! – Michel R.

Well, just because. I was learning these technologies, had this idea, and I just decided to test the theory. So, I tried. And it looks like it works! There are multiple theoretical advantages to this approach:

  1. Performance: Very fast (and potentially responsive) UI.
  2. Technology: New technology availability (websockets, data in movement, streaming, etc.).
  3. Integration: API first paradigm, Node.js can make it really easy to integrate with any technology!
  4. Source control: 100% Git for UI code, opening all git concepts (push, pull requests, merge, releases, packages, etc.).
  5. Optimization: Reduce resource consumption from STARLIMS web servers.
  6. Scalability: High scalability through containerization and micro-services.
  7. Pattern: Separation of concerns. Each component does what it’s best at.
  8. Hiring – there is a higher availability of React.js and Node.js developers than STARLIMS developers!

Here’s some screenshots of what it can look like:

As you can see, at this stage, it is very limited. But it does work, and I like a couple of ideas / features I thought of, like the F1 for Help, the keyboard shortcuts support, and more importantly, the speed… It is snappy. In fact, the speed is limited to what the STARLIMS REST API can provide when getting data, but otherwise, everything else is way, way faster than what I’m used to.

How does it work, really?

This is magic! – Michel R.

Magic! … No, really, I somewhat “cheated”. I implemented a Generic API in the STARLIMS REST API. This endpoint supports both ExecFunction and RunDS, as well as impersonation. Considering that the REST API of STARLIMS is quite secure (it uses anti-tampering patterns, you can ask them to explain that to you if you want) and reliable, I created a generic endpoint. It receives a payload containing the script (or datasource) to run, with the parameters, and it returns the original data in JSON format.

Therefore, in React, you would write code very similar to lims.CallServer(scriptName, parameters) in XFD/Sencha.
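
On the frontend side, the wrapper around that generic endpoint is conceptually just this (a sketch; the /api/generic route and the payload shape are my own naming for illustration, not something STARLIMS provides out of the box):

// rough equivalent of lims.CallServer(scriptName, parameters), going through the Node proxy
async function callServer(scriptName, parameters = []) {
  const response = await fetch('/api/generic', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // the payload carries the STARLIMS script (or datasource) to run and its parameters
    body: JSON.stringify({ script: scriptName, parameters }),
  });
  if (!response.ok) {
    throw new Error(`Generic API call failed: ${response.status}`);
  }
  return response.json(); // the original STARLIMS data, already converted to JSON
}

// usage: callServer('MyCategory.MyScript', ['param1', 42]).then(console.log);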

Me being paranoid, I added a “whitelisting” feature to my generic API, so you can whitelist which scripts to allow running through the API. Being lazy, I added another script that does exactly the same, without the whitelisting, just so I wouldn’t have to whitelist everything; but hey, if you want that level of control… Why not?

Conclusion

My non-scientific observations are that this works quite well. The interface is snappy (a lot faster than even Sencha), and developing new views is somewhat easier than with both technologies as well.

Tip: you can just ask an AI to generate a view in React using, let’s say, Bootstrap 5 classNames, and perhaps placeholders to call your API endpoints, et voilà! You have something 90% ready.

Or you learn React and Vite, and you build something yourself, with your own components, and create your own STARLIMS runtime (kind of).

This whole experiment was quite fun, and I learned a ton. I think there might actually be something to do with it. I invite you to take a look at the repositories, which I decided to publish for anyone to use and contribute to, under an MIT license with commercial restrictions:

You need both projects to get this working. I recommend you check both READMEs to begin with.

Right now, I am parking this project, but if you would like to learn more, want to evaluate this but need guidance, or are interested in actually using this in production, feel free to drop me an email at [email protected]! Who knows what happens next?

An introduction to GitHub and Webhooks

In the world of software development, my quest to understand continuous deployment led me down an intriguing path. It all began with a burning desire to unravel the complexities of continuous deployment while steering clear of expensive cloud hosting services. And that’s when my DIY GitHub Webhook Server project came to life.

The Genesis

Imagine being in my shoes—eager to dive deeper into the continuous deployment process. But I didn’t want to rely on pricey cloud solutions. So, I set out to craft a DIY GitHub Webhook Server capable of handling GitHub webhooks and automating tasks upon code updates—right from my local machine. Or any machine for that matter.

The Vision

Let’s visualize a scenario with a repository—let’s call it “MyAwesomeProject”—sitting on GitHub, and you are somewhere in a remote cabin with okay / dicey internet access. All you have is your laptop (let’s make it a Chromebook!!). You want to code snugly, and you want to update your server that sits at home. But… You don’t WANT to remote into your server. You want it to be automatic. Like magic.

You would have to be prepared. You would clone my repo, configure your server (including port forwarding), and maybe use something like no-ip.com so you have a “fixed” URL to use your webhook with. Then:

  1. Configuring Your Repository: Start by defining the essential details of “MyAwesomeProject” within the repositories.json file—things like secretEnvName, path, webhookPath, and composeFile.
{
  "repositories": [
    {
      "name": "MyAwesomeProject",
      "secretEnvName": "GITHUB_WEBHOOK_SECRET",
      "path": "/path/to/MyAwesomeProject",
      "webhookPath": "/webhook/my-awesome-project",
      "composeFile": "docker-compose.yml"
    }
  ]
}
  2. Setting Up Your GitHub Webhook: Head over to your “MyAwesomeProject” repository on GitHub and configure a webhook. Simply point the payload URL to your server’s endpoint (e.g., http://your-ddns-domain.net/webhook/my-awesome-project).
  3. Filtering Events: The server is smartly configured to respond only to push events occurring on the ‘main’ branch (refs/heads/main). This ensures that actions are triggered exclusively upon successful pushes to this branch.
  4. Actions in Motion: Upon receiving a valid push event, the server swings into action—automatically executing a git pull on the ‘main’ branch of “MyAwesomeProject.” Subsequently, Docker containers are rebuilt using the specified docker-compose.yml file.

So, there you have it—a simplified solution for automating your project workflows using GitHub webhooks and a self-hosted server. But before we sign off, let’s talk security.
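
First, the webhook secret itself: GitHub signs each payload with HMAC SHA-256 and sends the result in the X-Hub-Signature-256 header, and the server recomputes it with the shared secret before doing anything. A minimal Node sketch of that check (not the exact code from my repo):

const crypto = require('crypto');

// returns true when the payload was signed with our shared webhook secret
function verifyGithubSignature(secret, rawBody, signatureHeader) {
  const expected = 'sha256=' +
    crypto.createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader || '');
  // timingSafeEqual avoids leaking information through comparison timing
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

// usage inside the webhook handler (sketch):
// if (!verifyGithubSignature(process.env.GITHUB_WEBHOOK_SECRET, rawBody, req.headers['x-hub-signature-256'])) {
//   return res.status(401).send('invalid signature');
// }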

For an added layer of protection, consider setting up an Nginx server with a Let’s Encrypt SSL certificate. This secures the communication channel between GitHub and your server, ensuring data integrity and confidentiality.

While this article delves into the core aspects of webhook configuration and automation, diving into SSL setup with Nginx warrants its own discussion. Stay tuned for a follow-up article that covers this crucial security setup, fortifying your webhook infrastructure against potential vulnerabilities.

Through this journey of crafting my DIY GitHub Webhook Server, I’ve unlocked a deeper understanding of continuous deployment. Setting up repositories, configuring webhooks, and automating tasks upon code updates—all from my local setup—has been an enlightening experience. And it’s shown me that grasping the nuances of continuous deployment doesn’t always require expensive cloud solutions.

References:

Repository: https://github.com/michelroberge/webhook-mgr/

Demo app (where I use this): https://curiouscoder.ddns.net

ChatGPT Experiment follow-up

Did you try to look at my experiment lately? Did it time out or give you a bad gateway?


Bad Gateway

Well, read on if you want to know why!

Picture this: I’ve got myself a fancy development instance of a demo application built with ChatGPT. Oh, but hold on, it’s not hosted on some magical cloud server. Nope, it’s right there, in my basement, in my own home! I’ve been using some dynamic DNS from no-ip.com. Living on the edge, right?

Now, here’s where it gets interesting. I had the whole thing running on plain old HTTP!!! I mean, sure, I had a big red disclaimer saying it wasn’t secure, but that just didn’t sit right with me. So, off I went on an adventure to explore the depths of NGINX. I mean, I kinda-sorta knew what it was, but not really. Time to level up!

So, being the curious soul that I am, I started experimenting. It’s not perfect yet, but guess what? I learned about Let’s Encrypt in the process and now I have my very own HTTPS and a valid certificate – still in the basement! Who’s insecure now? (BTW, huge shoutout to leangaurav on medium.com, the best tutorial on this topic out there!)

As if that was not enough, I decided – AT THE SAME TIME – to also scale up the landscape.

See, I’ve been running the whole stack on Docker containers! It’s like some virtual world inside my mini PCs. And speaking of PCs, my trusty Ryzen 5 5500U wasn’t cutting it anymore, so I upgraded to a Ryzen 7 5800H with a whopping 32GB of RAM. Time to unleash some serious power and handle that load like a boss!

Now, you might think that moving everything around would be a piece of cake with Docker, but oh boy, was I in for a ride! I dove headfirst into the rabbit hole of tutorials and documentation to figure it all out. Let me tell you, it was a wild journey, but I emerged smarter and wiser than ever before.

Now, I have a full stack that seems to somewhat work, even after reboot (haha!).

Let me break down the whole landscape – at a very high level (in the coming days & weeks, if I still feel like it, I will detail each step). The server is a Trigkey S5. I have the 32GB variant running on a 5800H, which went on sale for $400 CAD on Black Friday – quite a deal! From my research, it is the best bang for the buck. It’s a mobile CPU, so energy-wise very good, but of course, don’t go expecting to play AAA games on this!

Concerning my environment, I use Visual Studio Code on Windows 11 with WSL enabled. I installed the Ubuntu WSL, just because that’s the one I am comfortable with. I have a Docker compose file that consists of:

  • NodeJs Backend for APIs, database connectivity, RBAC, etc.
  • ViteJs FrontEnd for well.. everything else you see as a user 🙂
  • NGINX set up as a reverse proxy – this is what makes it work over HTTPS

This is what my compose file looks like:

version: '3'

services:

  demo-backend:
    container_name: demo-backend
    build:
      context: my-backend-app
    image: demo-backend:latest
    ports:
      - "5051:5050"
    environment:
      - MONGODB_URI=mongodb://mongodb
    networks:
      - app
    restart: always

  demo-frontend:
    container_name: demo-frontend
    build:
      context: vite-frontend
    image: demo-frontend:latest
    ports:
      - "3301:3301"
    networks:
      - app
    restart: always

  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    ports:
    - 27017:27017
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network

This method is very convenient for running on my computer during the development process. I have a modified compose file without NGINX and with different ports. This makes it easier for me to make changes and test them. When I’m satisfied, I switch to another compose file with the ports redirected in my router. I use “docker compose down” followed by “docker compose up” to update my app.

This is the NGINX compose file. The interesting part is that I use the same network name here as in the previous compose file. When I do this, NGINX can communicate with the containers of the previous compose file. Why did I do this? Well, I have this demo project running, and I’m working on another project with different containers. With some configuration, I will be able to leverage my SSL certificates for both solutions (or as many as I want), as well as keep one compose file per project. This will be very handy!

version: "3.3"

services:
  nginx:
    container_name: 'nginx-service'
    image: nginx-nginx:stable
    build:
      context: .
      dockerfile: docker/nginx.Dockerfile
    ports:
      - 3000:3000
      - 3030:3030
      - 5050:5050
    volumes:
      - ./config:/config
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - /tmp/acme_challenge:/tmp/acme_challenge
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network
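
Inside that NGINX container, the configuration then just proxies by container name over the shared network. A trimmed-down sketch of one server block (the domain and certificate paths are placeholders):

# one HTTPS server block proxying to the frontend container over shared_network
server {
    listen 3000 ssl;
    server_name example.ddns.net;  # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/example.ddns.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.ddns.net/privkey.pem;

    location / {
        # "demo-frontend" resolves through Docker's DNS on the shared network
        proxy_pass http://demo-frontend:3301;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}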

Of course, this is not like a cloud provider; my small PC can die, there is no redundancy; but for development and demo purposes? Quite efficient (and cheap)!

As of last night, my stuff runs on HTTPS and should be a bit more reliable moving forward. The great part about this whole experiment is how much I learned in the process! Yes, it started from ChatGPT. But you know what? I have never learned so much so fast. It was very well worth it.

I will not claim I am an expert in all of this, but I feel I now know a lot more about:

  • ChatGPT. I learned some tricks along the way on how to ask and what to ask, as well as how to get out of infinite loops.
  • NodeJS and Express. I knew about them, but now I understand the middleware concepts and connectivity better. I have built some cool APIs.
  • ViteJs. This is quite the boilerplate to get a web app up and running.
  • Expo and React-Native. This is a parallel project, but I built some nice stuff I will eventually share here. If you want to build Android and iOS apps using React-Native, this framework works great. Learn more on Expo.dev.
  • GitLab. I tried this for the CI/CD capabilities and workflows… Oh my! With Expo, this came in handy!! Push a commit, merge, build and deploy through EAS! (On the flip side, I reached the limits of the free tier quite fast… I need to decide what I’ll be doing moving forward.) On top of it, I was able to store my containers in their registries, making it even more practical for teamwork!
  • Nginx. The only thing I knew before was: it exists and has to do with web servers. Now I know how to use it as a reverse proxy, and I am starting to feel that I will use it even more in the future.
  • Docker & Containerization. Another one of those “I kind of know what it is”… Now I have played with containers and docker compose, and I am only starting to grasp the power of it.
  • Let’s Encrypt. I thought I understood HTTPS. I am still no expert, but now I understand a lot more about how this works, and why it works.
  • Certbot. This is the little magic mouse behind the whole HTTPS process. Check it out!
  • MongoDb. I played with some NoSQL in the past. But now… Oh now. I love it. I am thinking I prefer this to traditional SQL databases. Just because.

A final note on ChatGPT (since this is where it all started):

The free version of this powerful AI is outdated (I don’t want to pay for this – not yet). This resulted in many frustrations – directives that wouldn’t work. I had to revert to Googling what I was looking for. It turns out that although ChatGPT will often cut the time down by quite a margin, the last stretch is yours. It is not yet at the point where it can replace someone.

But it can help efficiency.

A lot.

Test-drive of a new Co-pilot: ChatGPT

Ok, we all played with it a bit. We’ve seen what it can do – or at least some of it. I was first impressed, then disappointed, and sometimes somewhat in between. How can it really help me? I thought.

I always want to learn new things. On my list I had Node.js, React and MongoDB… So I onboarded my copilot, ChatGPT, for a test drive!

See the results there!

BTW: that application is not secured; I didn’t want to spend too much money on this (SSL, hosting and whatnot), so don’t go around using sensitive information 🙂

Write down in the comments if you want to explore the result further; I can create an account for you 😁 or maybe even spin up a dedicated instance for you (I want to try that at some point!!)

If this is down, it can be due to many things:

  1. no-ip.com and DDNS not synchronized
  2. my home server is down
  3. my provider blocked one or many of the ports
  4. no electricity
  5. I am rebuilding
  6. Something else, like, I don’t know… Aliens?

That was a lot of fun! Do you like it?

STARLIMS Backend regression tests automation

Finally. It was about time someone did something about it and advertised it. Automatic testing of STARLIMS (actually, of any REST-supported app!). Ask anyone working with (mostly) any LIMS: regression tests are often not there at all, let alone automated.

I understand it is difficult to automate the front end, which is what tends to break… Nonetheless, I had this idea – please read through! – and I think there is an easy way to automate some regression tests on a STARLIMS instance. Here are my arguments for why it brings value:

  1. Once a framework is in place, the effort is what you put in. It can be a lot of effort, or minimal. I would say aim for minimal effort at the beginning. Read on, you’ll understand why.
  2. Focus on bugs. For any bug fixed in the backend, prepare a regression test. Chances are you’re doing it anyway (writing a test script to check that your script runs?)
  3. For features, just test the default parameters at first. You can go ahead and do more, but this will at least tell you that the script still compiles and handles default values properly.
  4. You CAN and SHOULD have regression tests on your datasources! At least do a RunDS(yourDatasource) to check that it compiles. If you’re motivated, convert the XML to a .NET datasource and check that the columns you’ll need are there.
  5. Pinpoint regression tests. You fixed a condition? Test that condition. Not all conditions. Otherwise it becomes a unit test, not a regression test.

The idea is that on whatever schedule you want – every day, every week – ALL OF YOUR REGRESSION TESTS WILL RUN AUTOMATICALLY. Therefore, if one day you fix one condition, and the next day you fix something else in the same area, well, you want both your fix AND the previous condition’s fix to continue to work. As such, the value of your regression tests grows over time. It is a matter of habit.

What you’ll need

POSTMAN – This will allow you to create a regression collection, add a monitor, and know when a regression fails. This is the actual tool.

REST API – We will be running a series of scripts from POSTMAN using the STARLIMS REST API. You’ll see, this is the easy part. If you followed how to implement new endpoints, this will be a breeze.

STARLIMS Setup

In STARLIMS, add a new server script category for your API. In my case, I call it API_Regression_v1. This will be part of the API’s route.

Add a new script, API_Regression_v1.Run. This is our request class for the POST endpoint. The code behind it will be simple: we receive a script category in the parameters, and we run all child scripts. Here’s a very simple implementation:

:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE POST;
:PARAMETERS payload;

:DECLARE ret, finalOutput;

finalOutput := CreateUdObject();
:IF !payload:IsProperty("category"); 
    finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
    finalOutput:response := CreateUdObject();
    finalOutput:response:StatusCode := Me:HTTP_NOT_FOUND;
    :RETURN finalOutput;
:ENDIF;

/* TODO: implement script validation (can we run this script through regression test? / does it meet regression requirements?) ;
ret := Me:processCollection( payload:category );
finalOutput:StatusCode := Me:HTTP_SUCCESS;
finalOutput:response := ret;
finalOutput:response:StatusCode := Me:HTTP_SUCCESS;
:RETURN finalOutput;

:ENDPROC;

:PROCEDURE processCollection;
:PARAMETERS category;
:DECLARE scripts, i, sCatName, output, script;
output := CreateUdObject();
output:success := .T.;
output:scripts := {};
sCatName := Upper(category);
scripts := SQLExecute("select   coalesce(c.DISPLAYTEXT, c.CATNAME) + '.' + 
                                coalesce(s.DISPLAYTEXT, s.SCRIPTNAME) as s
                        from LIMSSERVERSCRIPTS s
                        join LIMSSERVERSCRIPTCATEGORIES c on s.CATEGORYID = c.CATEGORYID 
                        where c.CATNAME like ?sCatName? 
                        order by s", "DICTIONARY" );

:FOR i := 1 :TO Len(scripts);
    script := CreateUdObject();
    script:scriptName := scripts[i][1];
    script:success := .F.;
    script:response := "";
    :TRY;
        ExecFunction(script:scriptName);
        script:success := .T.;
    :CATCH;
        script:response := FormatErrorMessage(getLastSSLError());
        output:success := .F.;
    :ENDTRY;
    aAdd(output:scripts, script);
:NEXT;

:RETURN output;
:ENDPROC;

As you can guess, you will NOT want to expose this endpoint in a production environment. You’ll want to run this on your development / test instance, whichever makes the most sense to you (maybe both). You might also want to add some more restrictions, like only allowing categories that start with “Regression” or something along those lines… I added a generic setting called “/API/Regression/Enabled” with a default value of false to check the route (see next point), and a list “/API/Regression/Categories” to define which categories can be run (whitelisted).

Next, you should add this to your API route. I will not explain here how to do this; it should be something you are familiar with. Long story short, API_Helper_Customer.RestApiRouter should be able to route callers to this class.

POSTMAN setup

This part is very easy. Create yourself a new collection – something like STARLIMS Regression Tests v1. Prepare this collection with an environment so you can connect to your STARLIMS instance.

One neat trick: prepare your collection with a pre-request script that will make it way easier to use. I have this script I tend to re-use every time:

// CryptoJS ships with POSTMAN, no install required
var CryptoJS = require("crypto-js");

// get data required for API signature
const dateNow = new Date().toISOString();
const privateKey = pm.environment.get('SL-API-secret'); // secret key
const accessKey = pm.environment.get('SL-API-Auth'); // public/access key

// full URL of the endpoint being signed (this assumes a {{url}} environment variable)
const url = pm.environment.get('url') + request.url.substring(8);
const method = request.method;
const apiMethod = "";
var body = "";
if (pm.request.body && pm.request.body.raw){
    body = pm.request.body.raw;
}
// create base security signature
var signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;
// get encoding hash of signature that starlims will attempt to compare to
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));

// create headers
pm.request.headers.add({"key":"SL-API-Timestamp", "value":dateNow});
pm.request.headers.add({"key":"SL-API-Signature", "value":encodedHash});
pm.request.headers.add({"key":"SL-API-Auth", "value":accessKey});
pm.request.headers.add({"key":"Content-Type", "value":"application/json"});

Note: in the above code, there are a few things you need to initialize in your environment; pay attention to the pm.environment.get() variables.

Once that is done, you add one request of type POST to your collection. Something that looks like this:

POST request example

See the JSON body? We’re just telling STARLIMS “this is the category I want you to run”. With the above script, all scripts in this category will run when this request is sent. And since every script is wrapped in a try/catch, you’ll get a nice response with each script’s success (or failure).
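
In case the screenshot is hard to read, the body is essentially just the category name, something like:

{
    "category": "My_Regression_POC"
}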

Let’s create at least one regression test. In the category (in my case, My_Regression_POC), I will add one script named “compile_regression_framework”. The code will be very simple: I just want to make sure my class is valid (no typos and such).

:DECLARE o;
o := CreateUdObject("API_Regression_v1.run");
:RETURN .T.;

Setup a POSTMAN monitor

Now, on to the REALLY cool stuff. In POSTMAN, go to monitor:

POSTMAN – Monitor option

Then just click the small “+” to add a new monitor. It is very straightforward: all you need to pick is a collection (which we created earlier) and an environment (which you should have by now). Then set up the schedule and the email in case it fails.

Setting up a Monitor

And that’s it, you’re set! You have your framework in place! The regression tests will run every day at 3pm (according to the above settings) and if something fails, I will receive an email. This is what the dashboard looks like after a few days:

Monitor Dashboard

Next Steps

From here on, the trick is organizing regression scripts. In my case, what I do is

  1. I create a new category at the beginning of a sprint
  2. I duplicate the request in the collection, with the sprint name in the request’s title
  3. I change the JSON of the new request to mention the new category
  4. Then, for every bug fixed during that sprint, I create a regression script in that category. That script’s purpose is solely to test what was fixed.

What happens then is that every day, all previous sprints’ regression tests run, plus the new ones! I end up having a lot of tests.

Closing notes

Obviously, this does not replace a good testing team. It only supports them by re-testing stuff they might not think about. It also doesn’t test everything; there are always scenarios that can’t be tested with only a server call. It doesn’t test the front end.

But still, the value is there. What is tested is tested. And if something fails, you’ll know!

One question a developer asked me about this is: “sometimes, you change code, and it will break a previous test, and this test will never pass again because it is now a false test. What then?”

Answer: either delete the script, or just change the 1st line to :RETURN .T.; with a comment that the test is obsolete. Simple as that.

At some point, you can start creating more collections and adding more monitors, to split schedules (some tests run weekly, others daily, etc.).

And finally, like I said, the complexity and how much you decide to test is really up to you. I recommend starting small; the value is there without much effort. Then, if a critical scenario arises, you can write a more complex test. That should be the exception.

You have ideas on how to make this better? Share!

Building my own RPI-based Bartop Arcade Cabinet

One of my pet projects this summer was to build a bartop arcade cabinet. I had some RPi 400s lying around, which are RPi 4s embedded in a keyboard. The idea of always having a keyboard handy for the arcade cabinet sounded like a great feature, and to access it, I had to find a way to easily open the cabinet.

That’s why there are hinges in front of the controls!

All in all, building this was fun, and I decided to use Batocera.linux as the OS. It turned out to be the easiest, most complete, and fastest one, based on my tests.

The main goal was to load MAME arcade games (Tetris, Pac-Man, Super Street Fighter II). But I ended up adding Mario Kart 64, and it actually runs pretty well if the resolution is set to 640×480 for that game.

There’s still one bug going on with Batocera – after a while, we must reboot the arcade since there seems to be a memory leak somewhere (the developers are aware).

In the box, there’s

  • RPi 400
  • Old 19-inch 4:3 monitor
  • 2 sets of generic Dragon arcade USB controllers
  • HDMI-to-VGA active adapter (powered)
  • Power bar outlet (re-wired to an on/off switch in the back)
  • Altec Lansing speakers
Arcade Bartop Cabinet (no stickers)

I thought it might be interesting to show you various stages of the build, in case you are looking for some inspiration:

Initial frame
Hinges for the bartop
Stained, ready to assemble!

During the whole configuration, I had a problem. RetroPie was not able to output sound properly, and Batocera was not able to connect to WiFi. It turned out this was caused by insufficient power in the RPi.

Lesson 1: avoid a USB sound card if you can. It draws a lot of power, which can interfere with the WiFi & Bluetooth module (which is what happened to me). If you do use one, try to get one that can draw its power from somewhere else. I prefer to rely on the HDMI sound output.

Lesson 2: if you use an old monitor, get an active HDMI-to-VGA adapter. These adapters usually include an audio output (which solves the above problem). If you use a passive adapter, the chip relies on the power provided by HDMI, which may result in black screen flickers in some games. Using an active adapter fixed the problem for me.

This is a very different topic from what I usually post, but this felt like a good place to share it!

Did you ever build an Arcade cabinet?

STARLIMS REST API & POSTMAN – Production Mode

Alright folks! If you’ve been playing with the new STARLIMS REST API and tried production mode, perhaps you’ve run into all kinds of problems providing the correct SL-API-Signature header. You may wonder, “but how do I generate this?” – even following STARLIMS’s C# example may yield unexpected 401 results.

At least, it did for me.

I was able to figure it out by looking at the code that reconstructs the signature on the STARLIMS side, and here’s a snippet of code that works in POSTMAN as a pre-request script:

// required for the hash part. You don't need to install anything, it is included in POSTMAN
var CryptoJS = require("crypto-js");

// get data required for API signature
const dateNow = new Date().toISOString();
// this is the API secret found in STARLIMS key management
const privateKey = pm.environment.get('SL-API-secret');
// this is the API access key found in STARLIMS key management
const accessKey = pm.environment.get('SL-API-Auth');
// in my case, I have a {{url}} variable, but this should be the full URL to your API endpoint
const url = pm.environment.get('url') + request.url.substring(8);
const method = request.method;
// I am not using api methods, but if you are, this should be set
const apiMethod = "";

var body = "";
if (pm.request.body.raw){
    body = pm.request.body.raw;
}

// this is  the reconstruction part - the text used for signature
const signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;

// encrypt the signature
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));

// set global variables used in header
pm.globals.set("SL-API-Timestamp", dateNow);
pm.globals.set("SL-API-Signature", encodedHash);

One point of interest – if it still is not working and you can’t figure out why, an undocumented STARLIMS feature is to add this application setting in the web.config to view more info:

<add key="RestApi_LogLevel" value="Debug" />

I hope this helps you use the new REST API provided by STARLIMS!

JMeter + STARLIMS for load testing

JMeter is a load / stress tool built in Java which allows you to simulate multiple user connections to your system and monitor how the application & hardware respond to heavy load.

In STARLIMS, I find it is a very good tool for performance optimization. One can detect redundant calls and chatty pieces of code, and identify bottlenecks, even when running with a single user.

As a bonus, Microsoft has a preview version of load testing based on JMeter, which can be integrated into your CI/CD process!

So, in this article, my goal is to help you get started – once set up, it’s very easy to do.

I will proceed with the following assumptions:

  • You know your way around STARLIMS
  • You have some scripting knowledge
  • Your STARLIMS version is 12.1 or later (I leverage the REST API introduced with 12.1. It is possible to do it differently, but that is out of scope)
  • XFD is the most difficult technology for this, therefore that’s what I will tackle. If you are running on HTML, it will be even easier – good for you!

Environment Setup

On your local PC

  • Install Java Runtime – you might have to reboot. Don’t worry, I’m not going anywhere!
  • Download JMeter and extract it somewhere (remember where!)
  • Make sure you have access to setting up a Manual Proxy. This can be tricky and may require your administrators to enable this for you. What you’ll want is to be able to toggle it like this (don’t enable it just yet! Just verify you can):
Proxy Setup

On your STARLIMS Server

  • Make it available through HTTP. Yes, you read that right: HTTP. Not HTTPS. I think it can work over HTTPS, but I ran into too many problems and found that HTTP is the easiest. This is to simplify traffic recording when recording a scenario for re-processing.
  • Create your load users. If you expect to run 100 simultaneous users, then let’s create 100! What I did is create users named LOADUSER001 to LOADUSER250 (so I would have 250 users) and set their passwords to something silly like #LoadUser001 to #LoadUser250. Like I said – don’t do this if there is any sensitive data in your system.
  • To help you, here’s a script to generate the users:
:RETURN SubmitToBatch("LoadTestPrep.UserCreator.ASync", { 100 });

:PROCEDURE Async;
:PARAMETERS nNumberOfUsers;
:DEFAULT nNumberOfUsers, 1;

:DECLARE sUserName, sOldPassword, sNewPassword, i, nOrigrec, oNewUser, aUserDetails, pwEncOld, pwEncNew, resp;
resp := "nothing yet";
:FOR i := 1 :TO nNumberOfUsers;	
	oNewUser := CreateUdObject();
	oNewUser:USRNAM := "LOADUSER" + StrZero(i, 4,0);
	oNewUser:FULLNAME := "Load User " + StrZero(i, 4,0);
	oNewUser:JOBDESCRIPTION := "Load Test";
	oNewUser:EMAIL := "user" + StrZero(i,4,0) + "@dummy.com";
	oNewUser:LANGID := "ENG";
	oNewUser:POWERUSER := "Y";
	oNewUser:TREEAUTH := { "L" };
	oNewUser:RASCLIENTID := "Internal";
	oNewUser:DEPTLIST := "Changzhou";
	oNewUser:QUESTION_ID := 1;
    oNewUser:ANSWER := "1234";
    oNewUser:CONFIRMANSWER := "1234";
    oNewUser:PIN := "1234";	
	oNewUser:Id := "UserManagement.newUserModel-" + LimsString(i);
	
	UsrMes("Processing " + oNewUser:USRNAM);
	resp := ExecFunction("UserManagement.createNewUser", { oNewUser });
	
	resp := "User " + oNewUser:USRNAM + " does not exist";
	nOrigrec := LSearch("select ORIGREC from USERS where USRNAM = ?", 0, "DATABASE", { oNewUser:USRNAM });
	:IF nOrigrec > 0;
		aUserDetails := {
			{
				"TREEAUTH",
				{
					"L"
				},
				"S",
				{"L"}
			},
			{
				"SHOWERRORDETAILS",
				"Y",
				"S",
				"N"
			},
			{
				"STATUS",
				"Active",
				"S",
				"Pending"
			}
		};
		
		pwEncOld := "#LoadUsr" + StrZero(i, 4, 0);
		pwEncNew := "#LoadUser" + StrZero(i, 4, 0);
		ExecFunction("UserManagement.saveUserDetails", { NIL, "USERS", aUserDetails, nOrigrec });
		ExecFunction("Security_Module.ChangePassword", { oNewUser:USRNAM, "NEW", pwEncOld });
		ExecFunction("Security_Module.ChangePassword", { oNewUser:USRNAM, "", pwEncNew });
		resp := ExecFunction("UserManagement.updateHTMLUserSecurityInformation", {NIL,"USERS",{{"PWEXPD",Now():AddYears(100),"D",Now()}},nOrigrec,{}});
	:ENDIF;
:NEXT;

UsrMes( "Done" );
:ENDPROC;

You will need to test the above. On my system it worked fine (haha!), but setting passwords and security does not always work as expected in STARLIMS, so do not despair – just be patient.
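
Once the batch has run, a quick sanity check doesn’t hurt. This is only a minimal sketch, following the same LSearch pattern used in the script above, to count how many load users actually made it in:

:DECLARE nUsers;
nUsers := LSearch("select count(*) from USERS where USRNAM like ?", 0, "DATABASE", { "LOADUSER%" });
UsrMes("Load users created: " + LimsString(nUsers));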

  • Edit the web.config file. I will presume you know which one and how to do that. You need to change / add the following appSetting and set it to false: <add key="TamperProofCommunication" value="false" />
  • Add an endpoint for the Encrypt function. That’s really the tricky part. In both XFD and HTML, STARLIMS “masks” the username and password when putting them in the authentication payload, to avoid sending them in clear text. But here is the catch: this encryption is part of .NET and not easily reproduced in JMeter… unless it becomes a REST API endpoint!
  • So, in a nutshell, the trick is to create a new API endpoint that receives a string and a key, calls the EncryptData(text, key) function, and returns the encrypted string. I cannot stress it enough: do – not – enable – this – on – a – system – with – sensitive – data. And make sure you only use it with load testing users. If you do so, you’re fine.

This is the code of the REST API method to expose from STARLIMS:

:PROCEDURE GET;
:PARAMETERS payload;
:DECLARE response;

response := CreateUdObject();
response:StatusCode := Me:HTTP_SUCCESS;
response:Response := CreateUdObject();
:IF payload:IsProperty("text") .and. payload:IsProperty("pw");
    :DECLARE t, p, secret;
    t := limsString(payload:text);
    p := limsString(payload:pw);
    secret := EncryptData(t, p);
    response:Response:message := secret;
:ELSE;
    response:Response:message := "Missing data";
    response:StatusCode := 500;
:ENDIF;

:RETURN response;
:ENDPROC;

Since it gets exposed as a REST API, the concept is that at the beginning of the load test, for every user, we call this endpoint with the username and the password to get the encrypted version of each, which lets us tap into the STARLIMS cookie / session mechanism. Magic!

Now, we are kind of ready – assuming you’ve followed along, got everything set up properly and were able to test your API with Postman or something like that. Before moving on, let’s take a look at a typical load test plan in JMeter:

Typical setup for a single scenario

The idea is we want each user (thread) to run in its own “session”, and we want each session to be for a different user. My scenarios always involve a user logging into STARLIMS once (to create a session) and then looping on the scenario (for example, one scenario could be about creating folders, another about entering results, etc.). I will leave the details of the test plans to you, but the idea is you first need to log into the system, then do something.

At the level of the test plan, let’s add user-defined variables – in my case, this is only so I can switch STARLIMS instances later on (I strongly recommend you do that!):

User-defined Variables

Still at the level of the test plan, add a counter:

User Counter

This will be the magic for multiple users. Note the number format – this has to match your user naming convention, otherwise, good luck!
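
For the record, my counter is configured more or less like this – un is the variable name the pre-processor further below reads as ${un}, and the 4-digit format matches what the user creation script above generates:

Starting value:  1
Increment:       1
Number format:   0000
Variable name:   un
Track counter independently for each user: checked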

Now, let’s have our user login STARLIMS.

  1. Add a Transaction Controller to the Thread Group. I renamed this one “System Login” – call it what you want.
  2. On your new transaction controller, add a Sampler > HTTP Request, which will be our call to the REST API
HTTP Request – REST API for Encrypt method

As you can see, I did a few more things than just call the API. If we break it down, I have a pre-processor “Initialize User Variables”, an HTTP Header Manager, and a JSON Extractor. Let’s look at each of these.

Pre-processor – Initialize User Variables (Beanshell preprocessor)

This will run before this call is made – every time this call is made! This is where we initialize more variables we can use in the thread.

currentUser = "LOADUSER" + "${un}";
s = "000" + "${un}";
v = s.split("");
s = s.substring(v.length - 4);
currentPw = "#LoadUser" + s;
vars.put("currentUser", currentUser);
vars.put("currentPW", currentPw);
vars.put("startFolderNo", "LT22-000" + "${un}");
log.info("Current User: " + currentUser);

This will initialize the currentUser and currentPW variables we can reuse later on. Since this is a pre-processor, it runs before the call is made, which means the request itself can reference them.
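
As an illustration only (the path is a placeholder – use whatever route you exposed your Encrypt endpoint under), the HTTP Request for the Encrypt call can then pass these variables straight through as the text and pw parameters expected by the REST API method shown earlier:

Method:      GET
Path:        /Encrypt                  (placeholder – your own endpoint path)
Parameters:  text = ${currentUser}
             pw   = ${currentPW}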

Now, let’s look at the HTTP Header Manager:

HTTP Header Manager – System Login

Pretty simple – if you have STARLIMS 12.1 or +, you just need to get yourself an API key in the RestApi application. Otherwise, this whole part might have to be adjusted according to your preferred way of calling STARLIMS. But, long story short, SL-API-Auth is the header you want, and the value should be your STARLIMS secret API key.
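
Concretely, the header manager for this call only needs one row – the value shown here is a placeholder for your own key from the RestApi application:

SL-API-Auth: <your secret API key>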

Finally, this API will return something (the encoded string), so we need to store it in yet another variable! Simple enough: we use a JSON Extractor post-processor:

JSON Extractor
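
As a sketch – the variable name is my own choice, and the exact JSONPath depends on how your STARLIMS version wraps the response (check the real body in a View Results Tree listener) – the extractor settings would look something like this:

Names of created variables:  encryptedPw
JSON Path expressions:       $..message
Match No.:                   1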

What did we just do? Here’s a breakdown:

  1. Initialized a user name and password in variables
  2. Constructed an HTTP request with these 2 variables
  3. Called the REST API with our secret STARLIMS key using this request
  4. Parsed the JSON response into another variable

If you have set the thread group to simulate 10 users, then you’ll have LOADUSER001 to LOADUSER010 initialized. This is the pattern to learn. This is what we’ll be doing all along.

Wait. How did you know what to call afterward?

Great question! That’s where the proxy comes into play. Now, we don’t want to go around and guess all the calls, and, although I like Fiddler, I think it would be very complicated to use here.

In a nutshell, this is what we’ll do:

  1. We’ll add a Recording Controller to our Thread Group
    1. Right-click on your Thread Group > Add > Logic Controller > Recording Controller
  2. We’ll add a Test Script Recorder to our Test Plan
    1. Right-click on your Test Plan > Add > Non-Test Elements > HTTP(S) Test Script Recorder
    2. Change the Target Controller to your recording Controller above, so you know where the calls will go
  3. We’ll activate the proxy (bye bye internet!)
    1. Open Windows Settings
    2. Look for Proxy
    3. Change Manual Proxy > Use a proxy server to on.
    4. Local Address = http://localhost
    5. Port = 8888
    6. Click Save! I didn’t realize at first there was a save button for this…
  4. We’ll start the Test Script Recorder
Test Script Recorder
  5. We’ll perform our action in STARLIMS
    1. WARNING: A good practice is to change the value of Transaction name in the Recorder Transactions Control as you progress. What I typically do is put SYSTEM_LOGIN while I launch STARLIMS, then SYSTEM_LOGIN/VALIDATE when I enter credentials, then SYSTEM_LOGIN/OK when I click OK, etc.
    2. If all works well, you should see items being added under your Recording Controller.
  6. We’ll stop the Test Script Recorder – just click the big red Stop
  7. We’ll deactivate the proxy (yay!) – just toggle it off.

You should have something like this in your recorder:

Recorded HTTP Requests

If, like me, you left Outlook open, you will have all kinds of unrelated HTTP calls. Just select them and delete them. You should be left with something like this:

After 1st cleanup

Now, let’s understand what happened here. We recorded all the calls to STARLIMS. If you wish, you can remove the GetImageById lines – typically, these should have no performance impact since they should be cached. But hey, that’s your call.

Let’s look at the 1st request:

1st HTTP Request

Interestingly enough, we can see the Protocol is http and the Server Name is our STARLIMS server. If you created user-defined variables, you can just clean these 2 fields up (make them empty) and set defaults at the test plan level (later on). But if you do that, you must do it for all requests! So, let’s not do this (just yet) and leave it as is.

Now, what we want is to re-run this so we have actual data to work with and can make our script dynamic. To do that, we need to capture every request sent and every response received.

Right-click on your Thread Group > Add > Listener > View Results Tree

I find this listener to be the best for this activity.

Now, let’s run this “as is” by clicking the play button:

play

The beauty here is that you can see the data sent to STARLIMS as well as the responses, which lets us understand how everything is connected. Let’s take a look at Authentication.GetUserInfo – that’s our first challenge:

View Results Tree

If you look at the Request Body, you’ll see your user name (the one you used to log in), as well as a second, very strange parameter that looks like the string highlighted in pink above. When we log into STARLIMS, we must send that string, which is essentially the password hash based on the user name (one-way encoding). So the question is: how do we get it? This is where the REST API we prepared earlier comes into play!

Hook user variables to payload
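
In plain terms (illustrative only – the recorded body on your own system will look different), you edit the recorded Authentication.GetUserInfo request and swap the literal values for the variables we built: the login name becomes ${currentUser}, and the long pink string becomes the value extracted from the Encrypt call – the one I named encryptedPw in the JSON Extractor sketch above:

LOADUSER0001 (or whatever you logged in with)   ->   ${currentUser}
<long pink encoded password string>             ->   ${encryptedPw}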

With this, you can do everything now! Well, as far as load testing is concerned, it can at least get you started!

Earlier, I mentioned you shouldn’t leave your server name / path / protocol in there. Indeed, in my screenshot above, you can see it’s all empty. This is because I added an HTTP Request Defaults element to my test plan:

HTTP Request Default
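
For reference, mine looks more or less like this – server being one of the user-defined variables from the top of the test plan (the name is my own choice), which is what makes switching STARLIMS instances painless:

Protocol:           http
Server Name or IP:  ${server}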

You’ll also want an HTTP Cookie Manager. It doesn’t need any configuration as far as I know, but it must exist so cookies are carried over between requests.

CONCLUSION

What?? Conclusion already? But we were just getting started! Well, don’t worry. I have done a little bit more than just that, and I am including it with this post.

You can get a semi-working test plan here.

You will need to figure out a few things, like some of the APIs I use and some forms/scripts that you won’t have. But this should give you a very good overview of how it works and how it all ties together.

As a side note, the reason I got involved in this was Microsoft adding JMeter to the tools in their load test preview!

Hope you find good use for this!