Nested Database Transactions

C# – Asynchronous Database Nested Transactions

Hello everyone!

I know, I know, it has been a long time since you read anything interesting here. Well, as you should know by now, I changed jobs and stepped out of the LIMS industry a bit. You will not be seeing much STARLIMS code moving forward; in fact, probably none at all. That nice framework is in decline, and unfortunately, I don't think it's up for revival.

But that’s not why you got here, nor why you are reading this! I caught your attention with the title, didn’t I?

So, in my new job, I am working on a product, and one of the challenges we're facing is a problem that comes back often: database transactions, nesting database transactions, and nowadays: asynchronous nested transactions!

THE PROBLEM

Imagine you are building a super-duper application that runs with some RDBMS (e.g., SQL Server). Your application is web-based (!) and running on ASP.NET 8 (or ASP.NET Core, for those of you a bit older).

Let’s now say you have an order processing system where you have the following scenarios:

Independent Transactions: Orders are processed concurrently to improve performance. Each order’s creation and inventory update should be isolated, with its own transaction context.

Nested Transactions: When adding items to an order, if any item fails, the entire order should fail. This nested operation should commit only at the outermost level, ensuring atomicity.

To better represent the problem, imagine the above 2 scenarios running from the same process, in 2 different tasks:

How do we implement this in such a way that each outcome stays independent? Let's say we want the outcome of Task 1 to have no impact on Task 2, and vice versa?

And how do we make it so that the sub-transaction in Order Creation is “nested” within the Create Order transaction?

Let's assume the following code (it is not easy to come up with an actual scenario, but since we don't know how the system will scale, let's presume the following):

async Task CreateOrder(OrderRequest req)
{
	using (var mainTransaction = Database.BeginTransaction())
	{
		var newOrder = /* create the order from req */;

		// nested transaction
		await GenerateInvoice(newOrder);
		await mainTransaction.Commit();
	}
}

async Task GenerateInvoice(Order order)
{
	using (var nestedTransaction = Database.BeginTransaction())
	{
		/* invoice generation logic */
		await nestedTransaction.Commit();
	}
}

async Task UpdateInventory(Order order)
{
	using (var tr = Database.BeginTransaction())
	{
		/* update inventory logic */
		await tr.Commit();
	}
}

And then, for testing purposes, let’s say we were running CreateOrder in one Task, and UpdateInventory in the other Task.

How can we ensure that they will be independent? How can we make them “awaiter aware”?

THE SOLUTION

AsyncLocal. We can make the transaction stack and the IDbTransaction flow with the async context by declaring them as AsyncLocal<T>, like this:

AsyncLocal<List<Transaction>> transactions = new();
AsyncLocal<IDbTransaction> scopedTransaction = new();

And this, my friend, will make the variable unique per async call.
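
To see what that buys us, here is a small, self-contained sketch (not from the product code) of how AsyncLocal<T> behaves: each Task.Run gets its own logical async context, so the two tasks never see each other's value.

using System;
using System.Threading;
using System.Threading.Tasks;

class AsyncLocalDemo
{
    // Each logical async flow sees its own copy of this value.
    private static readonly AsyncLocal<string> _current = new();

    static async Task Main()
    {
        var t1 = Task.Run(async () =>
        {
            _current.Value = "task 1";
            await Task.Delay(100);              // yield so the two tasks interleave
            Console.WriteLine(_current.Value);  // prints "task 1"
        });

        var t2 = Task.Run(async () =>
        {
            _current.Value = "task 2";
            await Task.Delay(100);
            Console.WriteLine(_current.Value);  // prints "task 2"
        });

        await Task.WhenAll(t1, t2);
    }
}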

In the Database class (or wherever you will handle your transactions), you will wrap the BeginTransaction like this:

public ITransaction BeginTransaction()
{
    var transactions = GetTransactions();
    if (transactions.Count == 0)
    {
        var connection = GetConnection();
        scopedTransaction.Value = connection.BeginTransaction();
    }
    var tr = new Transaction(this, scopedTransaction.Value!);
    transactions.Add(tr);
    return tr;
}

public List<Transaction> GetTransactions()
{
    transactions.Value ??= [];
    return transactions.Value!;
}

What happens: whenever the code calls BeginTransaction, it looks in the current async scope for the list of transactions; if one exists it is reused, otherwise a new one is created (see GetTransactions).

Then, the actual database transaction is abstracted behind a custom Transaction class and created only for the first transaction of your stack. So, if you call BeginTransaction 3 times in the same scope, you'll have 3 "Transactions" (our custom class), but only the one in position 0 will hold an actual IDbTransaction. That one is responsible for the Commit(), or for the Rollback in its Dispose. The others become "virtual" transactions with no effect other than tracking: they are wrappers that preserve the transactional context without issuing actual commits or rollbacks, except for the first in the list.
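
To make that more concrete, here is a rough sketch of what such a wrapper could look like. This is illustrative only: the member names and the exact cleanup are my assumptions, not the actual implementation, and I am assuming ITransaction declares a Task Commit(). The point is simply that only the first wrapper in the async-local stack ever touches the real IDbTransaction.

using System;
using System.Data;
using System.Threading.Tasks;

// Illustrative sketch only; not the actual implementation.
public sealed class Transaction : ITransaction, IDisposable
{
    private readonly Database _db;            // the Database class shown above
    private readonly IDbTransaction _inner;   // the single physical transaction for this async scope
    private bool _committed;

    public Transaction(Database db, IDbTransaction inner)
    {
        _db = db;
        _inner = inner;
    }

    // Only the first wrapper registered in the async-local stack is the "real" one.
    private bool IsRoot =>
        _db.GetTransactions().Count > 0 && ReferenceEquals(_db.GetTransactions()[0], this);

    public Task Commit()
    {
        _committed = true;
        if (IsRoot)
        {
            _inner.Commit();                  // only the outermost commit reaches the database
        }
        return Task.CompletedTask;
    }

    public void Dispose()
    {
        bool wasRoot = IsRoot;
        _db.GetTransactions().Remove(this);   // pop this wrapper from the async-local stack
        if (wasRoot && !_committed)
        {
            _inner.Rollback();                // root disposed without a commit: roll everything back
        }
        // A fuller version would also dispose _inner and reset the async-local
        // scopedTransaction once the root goes away.
    }
}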

This allows nested transactions, and it allows multi-task transactions. Assume the following code:

var t1 = Task.Run(async () =>
{
    using (var tr = Database.BeginTransaction())
    {
        await repo1.DoSomething();
        await repo2.DoSomethingElse(); // Let's assume this repo function also has a Database.BeginTransaction() because it can be used outside of this code!!!

        await tr.Commit();
    }
});

var t2 = Task.Run(async () =>
{
    using (var tr = Database.BeginTransaction())
    {
        await repo1.DoSomethingAgain();
        await repo1.LetsGo();
        
        await tr.Commit();
    }
});

await Task.WhenAll(t1, t2);

All outcomes are acceptable:

  • t1 and t2 both succeed
  • t1 succeeds and t2 fails
  • t1 fails and t2 succeeds
  • t1 and t2 both fail.

t1 would succeed only if everything in the using succeeds, including the nested DoSomethingElse() that contains its own BeginTransaction + Commit().

Of course, this example code would never be implemented other than in an integration test; but heh, I think it’s the best way to explain the concept.

CONCLUSION

We've been banging our heads against the wall, running in circles, trying plenty of things including TransactionScope, but we couldn't figure it out. Until someone said "We somehow need to figure out the context of the async call".

That was it. That was the key. Now, I'm not saying this is a complete solution, but it is definitely part of it. I have multiple scenarios in my integration tests similar to the one above, some of which force failures on purpose, and everything works as expected. I just need to be careful with concurrency; my SQL Server developer instance is limited to 100 connections, so forcing collisions, test retries and such is quite a challenge; topic for another post, maybe?..

I hope this can come in handy for you too!

AI Code Review Assistant

Revolutionizing Code Analysis for STARLIMS and Beyond

Disclaimer: I worked on this project independently during personal time. Nothing here represents the views or endorsement of SGS. Any opinions, findings, and conclusions expressed in this blog post are solely mine. The project utilizes Python, OpenAI’s language models and STARLIMS mock code I created, which may have separate terms of use and licensing agreements.

AI is not a magician; it’s a tool. It is a tool for amplifying innovation

Fei-Fei Li

With this in mind, imagine: what if we could automatically get STARLIMS Code Review feedback? You know, an extra quality layer powered by AI?

"STARLIMS is a proprietary language," you will say.

"It has to be trained," you will say.

True; yet, what if?…

I have done another experiment. I was given a challenge to try Python and the OpenAI API, but I wasn't really given any background. Given my recent fun with CI/CD and the fact that I'm back working on the STARLIMS product, I thought: "Can we automatically analyze STARLIMS code? Like an automated code reviewer?" Well, yes, we can!

As I was advised a long time ago, let me show you the end result first. I have a REST API running locally (Python Flask) with the following 2 endpoints:

POST /analyze/<language>

POST /analyze/<language>/<session_id>

The first kicks off a new analysis session, and the second lets the user continue an existing analysis session (like queuing scripts and relating them together!).
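
To make the flow concrete, here is a rough sketch of how a client could call these two endpoints. The base URL, the language segment value, and the payload shape (a JSON object with a code field) are assumptions on my part for illustration; the actual contract may differ.

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;
using System.Threading.Tasks;

class AnalyzeClientSketch
{
    // Hypothetical base address; adjust to wherever the Flask app is running.
    private static readonly HttpClient Http = new() { BaseAddress = new Uri("http://localhost:5000") };

    static async Task Main()
    {
        // 1) Start a new analysis session for an SSL script (payload shape assumed).
        var first = await Http.PostAsJsonAsync("/analyze/ssl", new { code = ":DEFAULT nItemId, 1234;" });
        using var doc = JsonDocument.Parse(await first.Content.ReadAsStringAsync());
        var sessionId = doc.RootElement.GetProperty("session_id").GetString();

        // 2) Queue another script into the same session so the scripts are related together.
        await Http.PostAsJsonAsync($"/analyze/ssl/{sessionId}", new { code = "/* next script */" });
    }
}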

I usually create nice diagrams, but for this, really, the idea is

STARLIMS <-> Python <-> OpenAI

So no diagram for you today! How does it work?

I can pass SSL code to the REST API and receive this:

{
    "analysis": {
        "feedback": [
            {
                "explanation": "Defaulting a parameter with a numeric value may lead to potential issues if the parameter is expected to be a string. It's safer to default to 'NIL' or an empty string when dealing with non-numeric parameters.",
                "snippet": ":DEFAULT nItemId, 1234;",
                "start_line": 4,
                "suggestion": "Consider defaulting 'nItemId' to 'NIL' or an empty string depending on the expected data type.",
                "type": "Optimization"
            }
        ]
    },
    "session_id": "aa4b3bd3-75bd-42e3-8f31-e53502e68256"
}

It works with STARLIMS Scripting Language (SSL), STARLIMS Data sources (DS) and … JScript! Here’s an example of a JScript output:

{
    "analysis": {
        "items": [
            {
                "detailed explanation": "Checking for an empty string using the comparison operator '==' is correct, but using 'Trim()' method before checking can eliminate leading and trailing white spaces.",
                "feedback type": "Optimization",
                "snippet of code": "if (strMaterialType == '')",
                "start line number": 47,
                "suggestion": "Update the condition to check for an empty string after trimming: if (strMaterialType.trim() === '')"
            },
            {
                "detailed explanation": "Using the logical NOT operator '!' to check if 'addmattypelok' is false is correct. However, for better readability and to avoid potential issues, it is recommended to explicitly compare with 'false'.",
                "feedback type": "Optimization",
                "snippet of code": "if (!addmattypelok)",
                "start line number": 51,
                "suggestion": "Update the condition to compare with 'false': if (addmattypelok === false)"
            },
            {
                "detailed explanation": "Checking the focused element is a good practice. However, using 'Focused' property directly can lead to potential issues if the property is not correctly handled in certain scenarios.",
                "feedback type": "Optimization",
                "snippet of code": "if ( btnCancel.Focused )",
                "start line number": 58,
                "suggestion": "Add a check to ensure 'btnCancel' is not null before checking its 'Focused' property."
            }
        ]
    },
    "session_id": "7e111d84-d6f4-4ab0-8dd6-f96022c76cff"
}

How cool is that? To achieve this, I used Python and the OpenAI API. I had to purchase some credits, but really, it is cheap enough and worth it when used at a small scale (like a development team). I put $10 in there, I have been running many tests (maybe a few hundred), and I am down by $0.06, so… I would say worth it.

The beauty of this is that my project supports the following:

  • Add a new language in 5 minutes (just add the class, update the prompt, add the reference code, restart the app, go!)
  • Enhance accuracy by providing good code, training the assistant on what valid code looks like

To give you an idea, the project is very small.

Looking ahead with this small project, I’m thinking beyond just checking code for errors. Imagine if we could hook it up to our DevOps setup, like Azure DevOps or SonarQube. It would be like having a digital assistant that not only spots issues but also files bugs and suggests improvements automatically! This means smoother teamwork, better software quality, and fewer headaches for all of us.

Now that I got this working, I am thinking about a bunch of exciting ideas, like:

  • Integrate this as a Quality Gate on commits.
    • If it fails, the commit goes back to the developer
    • If it succeeds, record the results and run the pull request (or push it to the next human reviewer)
  • Implement a mechanism for automatic unit test generation (we can potentially do something there!)
  • Implement a mechanism for a code coverage report (also possible!)
  • Integrate these into STARLIMS directly so we can benefit from them in a CI/CD pipeline somehow

Dreaming is free, is it not? Well, not quite in this case, but I'm good for another $9.94…

I have the repo set as private on GitHub. This is a POC, but I think it can be a very cool thing for STARLIMS, and it would also work for any other proprietary language if I get some good sample code.

Hell, it can even work for already supported languages like JavaScript, C#, or anything else, without training! So we could use this pattern for virtually any code review.

Interested? Leave a comment or shoot me an email!

CI / CD Next step is awesome!


Alright, so if you followed along with the previous post, you know I have set up Jenkins to run some kind of continuous integration. Well, I have now pushed it a bit further.

I installed a Docker image of SonarQube (the Community Edition) and wow, I have only one regret: I should have started with all of this setup on day one.

My flow is now this:

So, in a nutshell, what is VERY COOL is that when I push code to my develop branch, this happens automatically:

  • unit tests are triggered
  • code analysis is triggered

And in the SonarQube code analysis, I found a bunch of interesting suggestions, enhancements, and bug fixes. They were not necessarily product-breaking, but I found many things I was not even aware of. My future code will just be better.

CD pipeline?

I also added a CD pipeline for my test environment. I am not yet ready to put quality gates in place to automate production deployment, but I am on the right track! Here is my current CD pipeline:

It is quite simple, but it works just perfectly!

Now, I was wondering if this would be too much for my server. You know, running all of these:

  • Verdaccio docker image (npm private repository)
  • Jenkins docker image (CI/CD pipelines)
  • SonarQube docker image (code analysis)
  • 3 Tests docker images (React frontend, Node backend, Service manager)
  • 3 Production docker images (same as just before)
  • Nginx docker image (reverse proxy)
  • Prometheus & Grafana (directly, not docker images) for system monitoring

Here’s what happens:

More or less: NOTHING.

Well, not enough to be concerned about yet. Of course, there are not a lot of users, but I expect that even with a few dozen users, it wouldn't be so bad. And if this became really serious, the production environments would be hosted in the cloud somewhere for 100% uptime (at least as a target).

To be honest, the tough part was getting the correct Jenkinsfile structure – just because I am not used to it. For safekeeping, I am writing my two pipelines down here, and who knows, maybe they can help you too!

CI pipeline – Jenkinsfile

pipeline {
    agent any
    tools {nodejs "node"} 
    stages {
        stage('Install dependencies') { 
            steps {
                sh 'npm install' 
            }
        }
        stage('Unit Tests') { 
            steps {
                sh 'npm run test' 
            }
        }
        stage('SonarQube Analysis') {
            steps{
                script {
                    def scannerHome = tool 'sonar-scanner';
                    withSonarQubeEnv('local-sonarqube') {
                        withCredentials([string(credentialsId: 'sonar-secret', variable: 'SONAR_TOKEN')]) {
                            sh "${scannerHome}/bin/sonar-scanner -Dsonar.login=\$SONAR_TOKEN"
                        }
                    }
                }
            }
        }
    }
}

CD pipeline – Jenkinsfile

pipeline {
    agent any

    stages {
        stage('Verify Docker is available') {
            steps {
                script {
                    sh 'docker version'
                }
            }
        }
        stage('Copy .env and config file') {
            steps {
                script {
                    configFileProvider([configFile(fileId: 'frontend-dev.env', variable: 'DEV_ENV')]) {
                        sh 'cp $DEV_ENV .env'
                    }
                }
                script {
                    configFileProvider([configFile(fileId: 'frontend-custom-config.js', variable: 'DEV_CONFIG')]) {
                        sh 'cp $DEV_CONFIG ./src/config.js'
                    }
                }
            }
        }
        stage('Build Dev') {
            steps {
                sh 'docker build -t frontend:develop .'
            }
        }
        stage('Stop and Remove previous Container') {
            steps {
                sh 'docker stop frontend-develop || true'
                sh 'docker rm frontend-develop || true'
            }
        }
        stage('Start Dev') {
            steps {
                sh 'docker run --name frontend-develop -d -p 3033:3033 -e PORT=3033 frontend:develop'
            }
        }
    }
}

Next step: fix all the issues identified by SonarQube. When I am done with that, I will begin the CD pipeline for prod.

CI / CD at home – Was taking the red pill a good idea?…


This whole article started as an attempt at sharing the steps to get a free "no-cloud" platform for continuous integration and continuous deployment. What triggered it? The time I spent doing npm install, npm build, npm publish, left and right, forgetting one, npm test, oops I forgot one npm test and did a docker build … Damn, were those ever time-consuming and lousy activities!

I want to code. I don't want to do this. Let's separate the concerns: I code, the server does the rest. Sounds good? We call this separation of concerns, at a whole new level!

What is separation of concerns?

Short answer: Do what you are supposed to do and do it right.

Not so short answer: It is the principle of breaking down a complex system into distinct, independent parts or modules, each addressing a specific responsibility or concern, to promote modularity, maintainability, and scalability. It is usually applied at many (if not all) levels, like architecture, component, class, method, data, presentation, testing, infrastructure, deployment, etc.

… and why should you care? (Why it matters)

My article in itself? It doesn't really matter, and you shouldn't care. Unless, that is, you find yourself in this situation where you hear about continuous integration and deployment, but you don't really know where to start. Or if you have your own software you're working on and want to take it to the next level. Or just because you like reading my stuff, who knows!

I recently started to flip this blog into a melting pot of everything I face on this personal project. Eventually, I may even release something! And then, we can have a nice history of how it got there! For posterity!

Anyway, I am diverging from the original intent… I want to share the journey I went through to get a working environment with Jenkins and Verdaccio. I think it is a great platform for startups who can’t or won’t afford cloud hosting just yet (or for privacy reasons) but still want to achieve some level of automation.

As a reference, I’m sharing the challenges I am facing with a personal project consisting of a Node backend, a React frontend, and a background server, and how I tackle these challenges using modern tools and techniques.

I want to try something backward. Conclusion first! I completed the setup, and it works. It was painful, long, and not fun at some points.

But look at the above screenshot! Every time I push to one of my GitHub repos, it triggers a build and tests. In one case, it even publishes to my private package management registry with the :develop tag! Isn't it magical?

If you are interested in how I got there, tag along. Otherwise, have a great day! (still, you should meet my new friend, Jenkins)

Before we begin, here are some definitions (if you know what these are, just ignore).

You never know who will read your articles (if anyone). Play dumb.

Definitions

Continuous Integration (CI) and Continuous Deployment (CD): CI/CD are practices that automate the process of building, testing, and deploying software applications. CI ensures that code changes from multiple developers are regularly merged and tested, while CD automates the deployment of these tested changes to production or staging environments.

Node.js: Node.js is a runtime environment that allows developers to run JavaScript code outside of a web browser. It’s commonly used for building server-side applications, APIs, and real-time applications.

Docker: Docker is a platform that simplifies the process of creating, deploying, and running applications using containers. Containers are lightweight, standalone executable packages that include everything needed to run an application, including the code, runtime, system tools, and libraries.

Containers: Containers are isolated environments that package an application and its dependencies together. They provide a consistent and reproducible runtime environment, ensuring that the application runs the same way regardless of the underlying infrastructure. Containers are lightweight and portable, making them easier to deploy and manage than traditional virtual machines.

Let’s Begin!

Project Structure

  • Project 1: React Frontend
  • Project 2: Common backend (most models, some repos and some services)
  • Project 3: Node Backend (API)
  • Project 4: Node Worker (processing and collection monitoring)

Environments

  • I run development on whatever machine I am using at the moment, with nodemon and Vite in development mode
  • I build & run Docker containers on my Linux server with docker compose (3 Dockerfiles and 1 docker compose file)
  • I have an nginx reverse proxy on the same server for SSL and dynamic IP (no-ip)

Objective

  • Achieve full CI/CD so I can onboard remote team members (and do what I like: code!)

If I am successful, the result will be a robust and scalable development workflow that streamlines the entire software development life cycle, from coding to deployment, for my project. I think that in the enterprise, with similar tools, this proactive approach would lay a good foundation for efficient collaboration, rapid iteration, and reliable software delivery, ultimately reducing time-to-market and increasing overall productivity.

A couple of additional challenges

  • Challenge #1: Private Package Management. NPM Registry is public. I want to keep my projects private; how can I have a package for the common backend components?
  • Challenge #2: Continuous Integration (CI). How can I implement a continuous integration pipeline with this? Code is on Github, registry will be private… How do I do that?
  • Challenge #3: Continuous Deployment (CD). How can I implement a continuous deployment process? I will need to automate some testing and quality gates in the process, so how will that work?
  • Challenge #4: Future Migration to Continuous Delivery (CD+). Can I migrate to continuous delivery in the future (you know, if I ever have customers?)
  • Challenge #5: Cloud Migration / readiness. When / if this becomes serious, can my solution be migrated to a cloud provider to reduce hardware failure risk?

With this in mind, I have decided to embark on a journey to set up a stable environment that achieves this and faces each challenge. Will you take the blue pill and go back to your routine, or the red pill and follow me down the rabbit hole?..

Starting point: bare metal (target: Ubuntu Server, Jenkins, Verdaccio, Docker)

I have this Trigkey S5 miniPC which I think is pretty awesome for the price. It comes with Windows 11, but from what I read everywhere, to host what I want, I should go with a Linux distro. So there I go: I install some Linux distro from a USB key and cross my fingers that it boots…

I went with Ubuntu Server (24.04 LTS). BTW, the miniPC is a Ryzen 5800H with 32GB RAM, which should be sufficient for a while. On there, I installed these – pretty straightforward, and you can find many tutorials online, so I won't go into details:

  • Docker (engine & compose)
  • Git
  • Cockpit (this makes it easier for me to manage my server)

I also have an NGINX reverse proxy container. You can google something like nginx reverse proxy ssl letsencrypt docker and you'll find great tutorials on setting this up as well. I may write another article later when I reach that point for some items (if required in this journey). But really, that's gravy at this stage.

Install and Configure Jenkins

Jenkins is an open-source automation server that provides CI/CD pipeline solutions. From what I could gather, we can use Jenkins Docker image for easy management and portability, and it strikes a good balance between flexibility and simplicity. Disclaimer: it is my first experiment with Jenkins, so I have no clue how this will come out…

1. Let’s first prepare some place to store Jenkins’ data:

sudo mkdir /opt/jenkins

2. Download the Docker image:

sudo docker pull jenkins/jenkins:lts

3. Create a docker-compose.yml file to run Jenkins:

services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - ./jenkins:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock

4. And launch it: sudo docker compose up -d

Et voilà! Jenkins seems to be running:

5. Since I mounted ./jenkins as my Jenkins home, I can just run cat jenkins/secrets/initialAdminPassword to get the initial admin password and continue. (For some reason, I had to paste and click Continue twice, then it worked.)

I went with the recommended plugins to begin with. According to the documentation, we can easily add more later.

Install and Configure Verdaccio

Verdaccio will be my private package management registry. To install it, I just created a docker compose file, set up some directories, and boom.

version: '3.8'
services:
  verdaccio:
    image: verdaccio/verdaccio
    container_name: verdaccio
    ports:
      - "4873:4873"
    volumes:
      - ./config:/verdaccio/conf
      - ./storage:/verdaccio/storage
      - ./plugins:/verdaccio/plugins
      - ./packages:/verdaccio/packages

Run it with sudo docker compose up -d and that’s it.
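
To actually use it, you point npm at Verdaccio and publish there. Something like this, where @myscope is a hypothetical private scope name:

# Point a (hypothetical) private scope at Verdaccio
npm config set @myscope:registry http://localhost:4873/

# Create a user / log in against Verdaccio
npm adduser --registry http://localhost:4873/

# Publish a package to the private registry
npm publish --registry http://localhost:4873/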

Let's put all of this together and create our first pipeline! – inspired by Build a Node.js and React app with npm (jenkins.io)

Problem 1 – GitHub & Jenkins SSH authentication

Well, I was not prepared for this. I spent a lot of time on it: since Jenkins runs in a container, it does not share everything with the host, and somehow, adding the private keys was not adding the known hosts. So I had to run these commands:

me@server:~/jenkins$ sudo docker exec -it jenkins bash
jenkins@257100f0320f:/$ git ls-remote -h git@github.com:michelroberge/myproject.git HEAD
The authenticity of host 'github.com (140.82.113.3)' can't be established.
ED25519 key fingerprint is SHA256:+...
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com' (ED25519) to the list of known hosts.

After that, this worked. It is only later that I found out I need to pass --legacy-auth to npm login when in headless mode. Moving forward, that won't be a problem anymore.

Problem 2 – npm not found

Who would have thought: Node is not installed by default; you need to add the NodeJS plugin. Once added, a typical workflow will need to include it! Something like:

pipeline {
    agent any
    tools {nodejs "node"} 
    stages {
        stage('Install') { 
            steps {
                sh 'npm install' 
            }
        }
        stage('Build Library') { 
            steps {
                sh 'npm run build:lib' 
            }
        }
        stage('Build Application') { 
            steps {
                sh 'npm run build:app' 
            }
        }
    }
}

The tools section is what matters. I have named my NodeJS installation node, hence the "node" in the tools block. Now that I have played with it, I understand: this allows me to have different Node versions and use the one I want in whichever workflow I want. Neat!

And finally, I got something happening:

First step achieved! Now I can add npm run test to have my unit tests running automatically.

This is nice, but it is not triggered automatically. Since I use GitHub, I can leverage webhooks through the GitHub trigger:

Then all I need is to add a webhook in GitHub that points to https://<my-public-address-for-jenkins>:<port>/github-hook/ and that's it!

The result

With this, I can now build a fully automated CI pipeline. Now, what is fully automated? That’s where the heavy-lifting begins. I will be exploring and reading about it more in the next weeks, but ultimately, I want to automate this:

  1. Develop branch CI – when I push anything to the develop branch, run the following tasks:
    • pull from develop
    • install (install dependencies)
    • build (build the repo)
    • test (run the unit tests, API tests, e2e tests, etc. depending on the repo)
  2. Staging branch CD – when I do a pull request from develop into staging branch, run the following tasks:
    • pull from staging
    • install
    • build
    • test (yes, same again)
    • host in testing environment (docker)
    • load tests (new suite of test to verify response under heavy load)
  3. I will then do “health checks”, analyze, and decide if I can/should do a pull from staging into main.
  4. Main branch CD – when I do a pull request from staging into main, run the following tasks:
    • pull from main
    • install
    • build
    • test (of course!)
    • host in staging environment (docker)
    • do some check, and then swap spot with current production docker
    • take down the swapped docker

The reason I keep some manual tasks (step 3) is that I want to handle build candidates "the old way", kind of. When I introduce some additional testing automation suites, I will probably enhance the whole thing.

By implementing these automated CI/CD workflows, I hope to achieve the following benefits:

  • Faster Feedback Cycles: Automated testing and deployment processes provide rapid feedback on code changes, allowing developers to quickly identify and resolve issues. I hope I won’t be the only developer forever on this project!
  • Early Detection of Issues: Continuous integration and testing catch defects early in the development cycle, preventing them from propagating to later stages and reducing the cost of fixing them.
  • Efficient and Reliable Deployments: Automated deployment processes ensure consistent and repeatable deployments, reducing the risk of human errors and minimizing downtime.
  • Improved Collaboration: Automated workflows facilitate collaboration among team members by providing a standardized and streamlined development process.

This is also something that will help me in my professional life – I kind of knew about it, but I always relied on others to do it. So now, I will at least better understand what's happening and the impact behind it. I love learning!

And guess what: this approach aligns with industry best practices for modern software development and delivery, including:

  • Separation of Concerns: Separating the frontend, backend, and worker components into different projects promotes maintainability and scalability.
  • Continuous Integration: Regular integration of code changes into a shared repository, along with automated builds and tests, ensures early detection of issues and facilitates collaboration.
  • Continuous Deployment: Automated deployment processes enable frequent and reliable releases, reducing the risk of manual errors and accelerating time-to-market.
  • Test Automation: Comprehensive testing strategies, including unit tests, API tests, end-to-end tests, and load tests, ensure high-quality software and catch issues early in the development cycle.
  • Containerization: Using Docker containers for deployment ensures consistent and reproducible environments across development, testing, and production stages.

To me, this experiment demonstrates the importance of proactively addressing challenges related to project organization, package management, and automation in software development – sooner rather than later. With tools like Jenkins, Verdaccio, and Docker, I have laid the groundwork for a robust and scalable CI/CD pipeline that facilitates efficient collaboration, rapid iteration, and reliable software delivery.

As my project evolves, I plan to further enhance the automation processes, ensuring a smooth transition to continuous delivery and potential migration to cloud providers.

Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained—on the contrary!—by tackling these various aspects simultaneously. It is what I sometimes have called “the separation of concerns”, which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts, that I know of. This is what I mean by “focusing one’s attention upon some aspect”: it does not mean ignoring the other aspects, it is just doing justice to the fact that from this aspect’s point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously.

Edsger W. Dijkstra – 1974 paper “On the role of scientific thought”

Moving forward, I will be adding new pipelines as they become required.

Hope you learned something today!

Re-thinking STARLIMS architecture


There is something about STARLIMS that has been bugging me for a long time. Don't get me wrong – I think it is a great platform. I just question how well XFD holds up in 2024, and the selection of Sencha for the HTML part of it.

But an even more critical point: I question the principle of using the same server for the “backend” and the “frontend”. Really, the current architecture of STARLIMS (in a simplified way) is something like this:

Sure, you can add load balancers, multiple servers, batch processors… But ultimately, the server's role is both backend and web rendering, without really following the Server-Side Rendering (SSR) pattern: the server hosts and provides the code to render from the backend, and lets the client do the rendering. So, in fact, it is Client-Side Rendering (CSR) with most of the SSR drawbacks.

This got me thinking. What if we really decoupled the frontend from the backend? And what if we did it using real microservices? You know, something like this:

Let me explain the layers.

React.js

React needs no introduction. The famous open-source platform behind Facebook. Very fast and easy, huge community… Even the AI chatbots will generate good React components if you ask nicely! For security, it's like any other platform; it's as secure as you make it. And if you pair it with Node.js, then it's very easy, which brings me to the next component…

Node.js

Another one that needs no introduction. JavaScript on the backend? Nice! And there, on one end, you handle the session & security (with React) and communicate with STARLIMS through the out-of-the-box REST API. Node can be just a proxy to STARLIMS (that is the case currently), but it should also be leveraged to extend the REST APIs. It makes it a lot easier to implement new APIs, connect to STARLIMS (or anything else for that matter!), and speed up the process. Plus, you easily get cool stuff like WebSockets if you want, and you can cache some reference data in Redis to go even faster!…

Redis

Fast / lightweight / free cache (well, it was when I started). I currently use it only for sessions; since the REST API is stateless in STARLIMS, I manage the sessions in Node.js and store them in Redis, which allows me to spin up multiple Node.js instances (load balancing?) and share sessions across them. If you don't need to spin up multiple proxies, you don't need this. But heh, it's cooler with it, no?

I was thinking (I haven't done anything about this yet) of having a cron job running in Node.js to pull reference data from STARLIMS (like test plans, tests, analytes, specifications, methods, etc.) periodically and update the Redis cache. Some of that data could be used in the UI (React.js) instead of hitting STARLIMS. But now, with the updated Redis license, I don't know. I think it is fine in these circumstances, but I would need to verify.

… BUT WHY?

Because I can! – Michel R.

Well, just because. I was learning these technologies, had this idea, and I just decided to test the theory. So, I tried. And it looks like it works! There are multiple theoretical advantages to this approach:

  1. Performance: Very fast (and potentially responsive) UI.
  2. Technology: New technology availability (websockets, data in movement, streaming, etc.).
  3. Integration: API first paradigm, Node.js can make it really easy to integrate with any technology!
  4. Source control: 100% Git for UI code, opening all git concepts (push, pull requests, merge, releases, packages, etc.).
  5. Optimization: Reduce resource consumption from STARLIMS web servers.
  6. Scalability: High scalability through containerization and micro-services.
  7. Pattern: Separation of concerns. Each component does what it's best at.
  8. Hiring: there is a much higher availability of React.js and Node.js developers than of STARLIMS developers!

Here are some screenshots of what it can look like:

As you can see, at this stage, it is very limited. But it does work, and I like a couple of ideas / features I thought of, like the F1 for Help, the keyboard shortcuts support, and more importantly, the speed… It is snappy. In fact, the speed is limited to what the STARLIMS REST API can provide when getting data, but otherwise, everything else is way, way faster than what I’m used to.

How does it work, really?

This is magic! – Michel R.

Magic! … No, really, I somewhat “cheated”. I implemented a Generic API in the STARLIMS REST API. This endpoint supports both ExecFunction and RunDS, as well as impersonation. Considering that the REST API of STARLIMS is quite secure (it uses anti-tampering patterns, you can ask them to explain that to you if you want) and reliable, I created a generic endpoint. It receives a payload containing the script (or datasource) to run, with the parameters, and it returns the original data in JSON format.

Therefore, in React, you would write code very similar to lims.CallServer(scriptName, parameters) in XFD/Sencha.
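
For illustration only, the payload sent to such a generic endpoint could look something like this (the field names are hypothetical, not the actual contract):

{
    "script": "MyCategory.MyServerScript",
    "parameters": [ "PARAM1", 123 ]
}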

Me being paranoid, I added a “whitelisting” feature to my generic API, so you can whitelist which scripts to allow running through the API. Being lazy, I added another script that does exactly the same, without the whitelisting, just so I wouldn’t have to whitelist everything; but hey, if you want that level of control… Why not?

Conclusion

My non-scientific observation is that this works quite well. The interface is snappy (a lot faster than even Sencha), and developing new views is somewhat easier than with either technology as well.

Tip: you can just ask an AI to generate a view in React using, let's say, Bootstrap 5 classNames, and perhaps placeholders to call your API endpoints, et voilà! You have something 90% ready.

Or you learn React and Vite, build something yourself with your own components, and create your own STARLIMS runtime (kind of).

This whole experiment was quite fun, and I learned a ton. I think there might actually be something to do with it. I invite you to take a look at the repositories, of which I decided to create a public version for anyone to use and contribute to, under an MIT license with commercial restrictions:

You need both projects to get this working. I recommend you check both READMEs to begin with.

Right now, I am parking this project, but if you would like to learn more, want to evaluate this but need guidance, or are interested in actually using this in production, feel free to drop me an email at [email protected]! Who knows what happens next?

An introduction to Github and Webhooks


In the world of software development, my quest to understand continuous deployment led me down an intriguing path. It all began with a burning desire to unravel the complexities of continuous deployment while steering clear of expensive cloud hosting services. And that’s when my DIY GitHub Webhook Server project came to life.

The Genesis

Imagine being in my shoes—eager to dive deeper into the continuous deployment process. But I didn’t want to rely on pricey cloud solutions. So, I set out to craft a DIY GitHub Webhook Server capable of handling GitHub webhooks and automating tasks upon code updates—right from my local machine. Or any machine for that matter.

The Vision

Let's visualize a scenario with a repository – let's call it "MyAwesomeProject" – sitting in GitHub, and you are somewhere in a remote cabin with okay / dicey internet access. All you have is your laptop (let's make it a Chromebook!!). You want to code snugly, and you want to update your server that sits at home. But… You don't WANT to remote into your server. You want it to be automatic. Like magic.

You would have to be prepared. You would clone my repo, configure your server (including port forwarding), and maybe use something like no-ip.com so you have a “fixed” URL to use your webhook with. Then:

  1. Configuring Your Repository: Start by defining the essential details of “MyAwesomeProject” within the repositories.json file—things like secretEnvName, path, webhookPath, and composeFile.
{
  "repositories": [
    {
      "name": "MyAwesomeProject",
      "secretEnvName": "GITHUB_WEBHOOK_SECRET",
      "path": "/path/to/MyAwesomeProject",
      "webhookPath": "/webhook/my-awesome-project",
      "composeFile": "docker-compose.yml"
    }
  ]
}
  2. Setting Up Your GitHub Webhook: Head over to your "MyAwesomeProject" repository on GitHub and configure a webhook. Simply point the payload URL to your server's endpoint (e.g., http://your-ddns-domain.net/webhook/my-awesome-project).
  3. Filtering Events: The server is configured to respond only to push events occurring on the 'main' branch (refs/heads/main). This ensures that actions are triggered exclusively upon successful pushes to this branch.
  4. Actions in Motion: Upon receiving a valid push event, the server swings into action – automatically executing a git pull on the 'main' branch of "MyAwesomeProject." Subsequently, Docker containers are rebuilt using the specified docker-compose.yml file (see the sketch below).
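
Conceptually, that last step boils down to the server running something like this for the matching repository entry (a sketch only; the path and compose file come from repositories.json, and your setup may use docker-compose instead of docker compose):

cd /path/to/MyAwesomeProject                           # "path" from repositories.json
git pull origin main                                   # refresh the main branch
docker compose -f docker-compose.yml up -d --build     # rebuild and restart the containers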

So, there you have it—a simplified solution for automating your project workflows using GitHub webhooks and a self-hosted server. But before we sign off, let’s talk security.

For an added layer of protection, consider setting up an Nginx server with a Let’s Encrypt SSL certificate. This secures the communication channel between GitHub and your server, ensuring data integrity and confidentiality.

While this article delves into the core aspects of webhook configuration and automation, diving into SSL setup with Nginx warrants its own discussion. Stay tuned for a follow-up article that covers this crucial security setup, fortifying your webhook infrastructure against potential vulnerabilities.

Through this journey of crafting my DIY GitHub Webhook Server, I’ve unlocked a deeper understanding of continuous deployment. Setting up repositories, configuring webhooks, and automating tasks upon code updates—all from my local setup—has been an enlightening experience. And it’s shown me that grasping the nuances of continuous deployment doesn’t always require expensive cloud solutions.

References:

Repository: https://github.com/michelroberge/webhook-mgr/

Demo app (where I use this): https://curiouscoder.ddns.net


ChatGPT Experiment follow-up

Did you try to look at my experiment lately? Did it time out or give you a bad gateway?


Bad Gateway

Well, read on if you want to know why!

Picture this: I’ve got myself a fancy development instance of a demo application built with ChatGPT. Oh, but hold on, it’s not hosted on some magical cloud server. Nope, it’s right there, in my basement, in my own home! I’ve been using some dynamic DNS from no-ip.com. Living on the edge, right?

Now, here's where it gets interesting. I had the whole thing running on plain old HTTP!!! I mean, sure, I had a big red disclaimer saying it wasn't secure, but that just didn't sit right with me. So, off I went on an adventure to explore the depths of NGINX. I mean, I kinda-sorta knew what it was, but not really. Time to level up!

So, being the curious soul that I am, I started experimenting. It’s not perfect yet, but guess what? I learned about Let’s Encrypt in the process and now I have my very own HTTPS and a valid certificate – still in the basement! Who’s insecure now? (BTW, huge shoutout to leangaurav on medium.com, the best tutorial on this topic out there!)

As if that was not enough, I decided – AT THE SAME TIME – to also scale up the landscape.

See, I’ve been running the whole stack on Docker containers! It’s like some virtual world inside my mini PCs. And speaking of PCs, my trusty Ryzen 5 5500U wasn’t cutting it anymore, so I upgraded to a Ryzen 7 5800H with a whopping 32GB of RAM. Time to unleash some serious power and handle that load like a boss!

Now, you might think that moving everything around would be a piece of cake with Docker, but oh boy, was I in for a ride! I dove headfirst into the rabbit hole of tutorials and documentation to figure it all out. Let me tell you, it was a wild journey, but I emerged smarter and wiser than ever before.

Now, I have a full stack that seems to somewhat work, even after reboot (haha!).

Let me break down the whole landscape – at a very high level (in the coming days & weeks, if I still feel like it, I will detail each step). The server is a Trigkey S5. I have the 32GB variant running on a 5800H, which went on sale for $400 CAD on Black Friday – quite a deal! From my research, it is the best bang for the buck. It's a mobile CPU, so energy-wise very good, but of course, don't go expecting to play AAA games on this!

Concerning my environment, I use Visual Studio Code on Windows 11 with WSL enabled. I installed the Ubuntu WSL, just because that's the one I am comfortable with. I have a Docker compose file that consists of:

  • NodeJs Backend for APIs, database connectivity, RBAC, etc.
  • ViteJs FrontEnd for well.. everything else you see as a user 🙂
  • NGINX setup as a reverse proxy – this is what makes it work over HTTPS

This is what my compose file looks like:

version: '3'

services:

  demo-backend:
    container_name: demo-backend
    build:
      context: my-backend-app
    image: demo-backend:latest
    ports:
      - "5051:5050"
    environment:
      - MONGODB_URI=mongodb://mongodb
    networks:
      - app
    restart: always

  demo-frontend:
    container_name: demo-frontend
    build:
      context: vite-frontend
    image: demo-frontend:latest
    ports:
      - "3301:3301"
    networks:
      - app
    restart: always

  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    ports:
    - 27017:27017
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network

This method is very convenient for running on my computer during the development process. I have a modified compose file without NGINX and different ports. This makes it easier for me to make changes and test them. When I’m satisfied, I switch to another compose file with the redirected ports in my router. I use “docker compose down” followed by “docker compose up” to update my app.

This is the NGINX compose file. The interesting part is that I use the same network name here as in the previous compose file. When I do this, NGINX can communicate with the containers from the previous docker compose file. Why did I do this? Well, I have this demo project running, and I'm working on another project with different containers. With some configuration, I will be able to leverage my SSL certificates for both solutions (or as many as I want), as well as keep one compose file per project. This will be very handy!

version: "3.3"

services:
  nginx:
    container_name: 'nginx-service'
    image: nginx-nginx:stable
    build:
      context: .
      dockerfile: docker/nginx.Dockerfile
    ports:
      - 3000:3000
      - 3030:3030
      - 5050:5050
    volumes:
      - ./config:/config
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - /tmp/acme_challenge:/tmp/acme_challenge
    networks:
      - app
    restart: always

networks:
  app:
    driver: bridge
    name: shared_network

Of course, this is not like a cloud provider; my small PC can die, there is no redundancy; but for development and demo purposes? Quite efficient (and cheap)!

As of last night, my stuff runs on HTTPS, and should be a bit more reliable moving forward. The great part about this whole experiment is how much I learned in the process! Yes, it started from ChatGPT. But you know what? I never learned so much so fast. It was very well worth it.

I will not claim to be an expert in all of this, but I feel I now know a lot more about:

  • ChatGPT: I learned some tricks along the way on how to ask and what to ask, as well as how to get out of infinite loops.
  • NodeJS and Express: I knew about them, but now I better understand the middleware concepts and connectivity. I have built some cool APIs.
  • ViteJs: This is quite the boilerplate to get a web app up and running.
  • Expo and React-Native: This is a parallel project, but I built some nice stuff I will eventually share here. If you want to build Android and iOS apps using React-Native, this framework works great. Learn more on Expo.dev.
  • GitLab: I tried this for the CI/CD capabilities and workflows… Oh my! With Expo, this came in handy!! Push a commit, merge, build and deploy through EAS! (On the flip side, I reached the limits of the free tier quite fast… I need to decide what I'll be doing moving forward.) On top of it, I was able to store my containers on their registries, making it even more practical for teamwork!
  • Nginx: The only thing I knew before was that it exists and has to do with web servers. Now I know how to use it as a reverse proxy, and I am starting to feel that I will use it even more in the future.
  • Docker & Containerization: Also another one of these "I kind of know what it is"… Now I have played with containers and docker compose, and I am only starting to grasp the power of it.
  • Let's Encrypt: I thought I understood HTTPS. I am still no expert, but now I understand a lot more about how this works, and why it works.
  • Certbot: This is the little magic mouse behind the whole HTTPS process. Check it out!
  • MongoDb: I played with some NoSQL in the past. But now… Oh, now. I love it. I am thinking I prefer this to traditional SQL databases. Just because.

A final note on ChatGPT (since this is where it all started):

The free version of this powerful AI is outdated (I don't want to pay for this – not yet). This resulted in many frustrations – directives that wouldn't work. I had to revert to Googling what I was looking for. It turns out that although ChatGPT will often cut the time down by quite a margin, the last stretch is yours. It is not yet at the point where it can replace someone.

But it can help efficiency.

A lot.

Test-drive of a new Co-pilot: ChatGPT

Ok, we all played with it a bit. We’ve seen what it can do – or at least some of it. I was first impressed, then disappointed, and sometimes somewhat in between. How can it really help me? I thought.

I always want to learn new things. On my list I had Node.js, React and MongoDB… So I onboarded my copilot, ChatGPT, for a test drive!

See the results there!

BTW: that application is not secured; I didn't want to spend too much money on this (SSL, hosting and whatnot), so don't go around entering sensitive information 🙂

Write in the comments if you want to explore the result further; I can create an account for you 😁 or maybe even spin up a dedicated instance for you (I want to try that at some point!!)

If this is down, it can be due to many things:

  1. no-ip.com and DDNS not synchronized
  2. my home server is down
  3. my provider blocked one or many of the ports
  4. no electricity
  5. I am rebuilding
  6. Something else, like, I don’t know… Aliens?

That was a lot of fun! Do you like it?

STARLIMS Backend regression tests automation


Finally. It was about time someone did something about it and advertised it. Automatic testing of STARLIMS (actually, of any REST-supported app!). Ask anyone working with (almost) any LIMS: regression tests are often not there at all, let alone automated.

I understand it is difficult to automate the front end, which is what tends to break… Nonetheless, I had this idea – please read through! – and I think there's an easy way to automate some regression tests against a STARLIMS instance. Here are my arguments for why it brings value:

  1. Once a framework is in place, the effort is what you put in. It can be a lot of effort, or minimal. I would say aim for minimal effort at the beginning. Read on, you'll understand why.
  2. Focus on bugs. For any bug fixed in the backend, prepare a regression test. Chances are you're doing it anyway (writing a test script to check that your script runs?).
  3. For features, just test the default parameters at first. You can go ahead and do more, but this will at least tell you that the script still compiles and handles default values properly.
  4. You CAN and SHOULD have regression tests on your datasources! At least do a RunDS(yourDatasource) to check that it compiles. If you're motivated, convert the XML to a .NET datasource and check that the columns you'll need are there.
  5. Pinpoint regression tests. You fixed a condition? Test that condition. Not all conditions. Otherwise it becomes a unit test, not a regression test.

The idea is that every day / week / whatever schedule you want, ALL OF YOUR REGRESSION TESTS WILL RUN AUTOMATICALLY. Therefore, if one day you fix one condition, and the next day you fix something else in the same area, you want both your fix AND the previous fix to continue to work. As such, the value of your regression tests grows over time. It is a matter of habit.

What you’ll need

POSTMAN – This will allow you to create a regression collection, add a monitor, and know when a regression fails. This is the actual tool.

REST API – We will be running a series of scripts from POSTMAN using the STARLIMS REST API. You'll see, this is the easy part. If you followed how to implement new endpoints, this will be a breeze.

STARLIMS Setup

In STARLIMS, add a new server script category for your API. In my case, I call it API_Regression_v1. This will be part of the API’s route.

Add a new script, API_Regression_v1.Run. This is our request class for the POST endpoint. The code behind it is simple: we receive a script category in the parameters, and we run all child scripts. Here's a very simple implementation:

:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE POST;
:PARAMETERS payload;

:DECLARE ret, finalOutput;

finalOutput := CreateUdObject();
:IF !payload:IsProperty("category"); 
    finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
    finalOutput:response := CreateUdObject();
    finalOutput:response:StatusCode := Me:HTTP_NOT_FOUND;
    :RETURN finalOutput;
:ENDIF;

/* TODO: implement script validation (can we run this script through regression test? / does it meet regression requirements?) ;
ret := Me:processCollection( payload:category );
finalOutput:StatusCode := Me:HTTP_SUCCESS;
finalOutput:response := ret;
finalOutput:response:StatusCode := Me:HTTP_SUCCESS;
:RETURN finalOutput;

:ENDPROC;

:PROCEDURE processCollection;
:PARAMETERS category;
:DECLARE scripts, i, sCatName, output, script;
output := CreateUdObject();
output:success := .T.;
output:scripts := {};
sCatName := Upper(category);
scripts := SQLExecute("select   coalesce(c.DISPLAYTEXT, c.CATNAME) + '.' + 
                                coalesce(s.DISPLAYTEXT, s.SCRIPTNAME) as s
                        from LIMSSERVERSCRIPTS s
                        join LIMSSERVERSCRIPTCATEGORIES c on s.CATEGORYID = c.CATEGORYID 
                        where c.CATNAME like ?sCatName? 
                        order by s", "DICTIONARY" );

:FOR i := 1 :TO Len(scripts);
    script := CreateUdObject();
    script:scriptName := scripts[i][1];
    script:success := .F.;
    script:response := "";
    :TRY;
        ExecFunction(script:scriptName);
        script:success := .T.;
    :CATCH;
        script:response := FormatErrorMessage(getLastSSLError());
        output:success := .F.;
    :ENDTRY;
    aAdd(output:scripts, script);
:NEXT;

:RETURN output;
:ENDPROC;

As you can guess, you will NOT want to expose this endpoint in a production environment. You’ll want to run this on your development and/or test instance, whichever makes the most sense to you (maybe both). You might also want to add some more restrictions, like only running categories whose name starts with “Regression”, or something along those lines… I added a generic setting called “/API/Regression/Enabled” with a default value of false to check the route (see next point), and a list “/API/Regression/Categories” of which categories can be run (a whitelist).
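
As an illustration only, the whitelist check could look like the sketch below. GetRegressionWhitelist() is a hypothetical helper (not part of the code above) that reads the “/API/Regression/Categories” setting and returns an array of category names; you would call Me:isWhitelisted(payload:category) in POST before processCollection.

:PROCEDURE isWhitelisted;
:PARAMETERS category;
:DECLARE whitelist, i;
/* GetRegressionWhitelist() is a hypothetical helper returning the whitelisted category names ;
whitelist := Me:GetRegressionWhitelist();
:FOR i := 1 :TO Len(whitelist);
    :IF Upper(whitelist[i]) == Upper(category);
        :RETURN .T.;
    :ENDIF;
:NEXT;
:RETURN .F.;
:ENDPROC;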

Next, you should add this to your API route. I will not explain here how to do this; it should be something you are familiar with. Long story short, API_Helper_Customer.RestApiRouter should be able to route callers to this class.

POSTMAN setup

This part is very easy. Create yourself a new collection – something like STARLIMS Regression Tests v1. Prepare this collection with an environment so you can connect to your STARLIMS instance.

One neat trick: prepare your collection with a pre-request script that will make it way easier to use. I have this script I tend to re-use every time:

// get data required for API signature
const dateNow = new Date().toISOString();
const privateKey = pm.environment.get('SL-API-secret'); // secret key
const accessKey = pm.environment.get('SL-API-Auth'); // public/access key

const url = request.url;        // full request URL, used in the signature base below
const method = request.method;
const apiMethod = "";

var body = "";
if (pm.request.body && pm.request.body.raw){
    body = pm.request.body.raw;
}
// create base security signature
var signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;
// get encoding hash of signature that starlims will attempt to compare to
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));

// create headers
pm.request.headers.add({"key":"SL-API-Timestamp", "value":dateNow});
pm.request.headers.add({"key":"SL-API-Signature", "value":encodedHash});
pm.request.headers.add({"key":"SL-API-Auth", "value":accessKey});
pm.request.headers.add({"key":"Content-Type", "value":"application/json"});

Note: In the above code, there are a few things you need to initialize in your environment; pay attention to the pm.environment.get() variables. In this script, that means at least SL-API-secret (the secret key) and SL-API-Auth (the public/access key).

Once that is done, add one request of type POST to your collection. Something that looks like this:

POST request example

See the JSON body? We’re just telling STARLIMS “this is the category I want you to run”. With the above script, all scripts in that category will run when this request is sent. And since every script runs inside a try/catch, you’ll get a nice response listing each script’s success (or failure).
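
For reference, the body is just a small JSON object naming the category to run (here using the category from the next step):

{
    "category": "My_Regression_POC"
}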

Let’s create at least one regression test. In the category (in my case, My_Regression_POC), I will add one script named “compile_regression_framework”. The code will be very simple: I just want to make sure my class is valid (no typos and such).

/* instantiating the request class is enough to catch compilation errors ;
:DECLARE o;
o := CreateUdObject("API_Regression_v1.run");
:RETURN .T.;

Setup a POSTMAN monitor

Now, on to the REALLY cool stuff. In POSTMAN, go to monitor:

POSTMAN – Monitor option

Then just click the small “+” to add a new monitor. It is very straightforward: all you need to pick is a collection (which we created earlier) and an environment (which you should have by now). Then set up the schedule and the email notification in case it fails.

Setting up a Monitor

And that’s it, you’re all set! You have your framework in place! The regression tests will run every day at 3pm (according to the above settings), and if something fails, you’ll receive an email. This is what the dashboard looks like after a few days:

Monitor Dashboard

Next Steps

From here on, the trick is organizing regression scripts. In my case, what I do is:

  1. I create a new category at the beginning of a sprint
  2. I duplicate the request in the collection, with the sprint name in the request’s title
  3. I change the JSON of the new request to mention the new category
  4. Then, for every bug fixed during that sprint, I create a regression script in that category. That script’s purpose is solely to test what was fixed.

What happens then is that every day, all previous sprints’ regression tests run, plus the new ones! I end up having a lot of tests.

Closing notes

Obviously, this does not replace a good testing team. It only supports them by re-testing things they might not think about. It also doesn’t test everything; there are always scenarios that can’t be tested with a server call alone. And it doesn’t test the front end.

But still, the value is there. What is tested is tested. And if something fails, you’ll know!

One question a developer asked me about this is: “Sometimes you change code and it breaks a previous test, and that test will never pass again because it is now a false test. What then?”

Answer: either delete the script, or just change the first line to :RETURN .T.; with a comment saying the test is obsolete. Simple as that.
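
In other words, an obsoleted script can be reduced to something like this:

/* obsolete: the behavior this script tested was intentionally changed; kept so the collection keeps passing ;
:RETURN .T.;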

At some point, you can start creating more collections and adding more monitors to split schedules (some tests run weekly, others daily, etc.).

And finally, like I said, how much you decide to test, and how complex those tests are, is really up to you. I recommend starting small; the value is there without much effort. Then, if a critical scenario arises, you can write a more complex test. That should be the exception.

You have ideas on how to make this better? Share!

Building my own RPI-based Bartop Arcade Cabinet

One of my pet projects this summer was to build a bartop arcade cabinet. I had some rpi400s lying around, which are rpi4s embedded in a keyboard. The idea of always having a keyboard handy for the arcade cabinet sounded like a great feature, and to access it, I had to find a way to easily open the cabinet.

That’s why there are hinges in front of the controls!

All in all, building this was fun, and I decided to use Batocera.linux as the OS. It turned out to be the easiest, most complete, and fastest option, based on my tests.

The main goal was to load MAME arcade games (Tetris, Pac-Man, Super Street Fighter 2). But I ended up adding Mario Kart 64, and it actually runs pretty well if the resolution is set to 640×480 for that game.

There’s still one bug going on with Batocera: after a while, we must reboot the arcade since there seems to be a memory leak somewhere (the developers are aware).

In the box, there’s:

  • rpi400
  • Old 19-inch 4:3 monitor
  • 2 sets of generic dragon arcade USB controllers
  • HDMI to VGA active adapter (powered)
  • Power bar outlet (re-wired to an on/off switch in the back)
  • Altec Lansing speakers

Arcade Bartop Cabinet (no stickers)

I thought it might be interesting to show you various stages of the build, in case you are looking for some inspiration:

Initial frame
Hinges for the bartop
Stained, ready to assemble!

During the whole configuration, I had a problem: RetroPie was not able to output sound properly, and Batocera was not able to connect to WiFi. It turned out this was caused by insufficient power to the rpi.

Lesson 1: avoid a USB sound card if you can. It draws a lot of power, which can interfere with the WiFi & Bluetooth module (which is what happened to me). If you do need one, try to get one that can draw its power from somewhere else. I prefer to rely on the HDMI sound output.

Lesson 2: if you use an old monitor, get an active HDMI to VGA adapter. These adapters usually include an audio output (which solves the above problem). If you use a passive adapter, the chip relies on the power provided by HDMI, which may result in black-screen flickers in some games. Using an active adapter fixed the problem for me.

This is a very different topic than what I usually post, but this felt like a good place to share it!

Did you ever build an Arcade cabinet?